Search Results

Search found 16971 results on 679 pages for 'blogs'.


  • Ovum report: Oracle Database 12c offers new take on multitenancy

    - by Javier Puerta
    Ovum has published a positive research note on Oracle Database 12c. Ovum concludes that Oracle Multitenant will provide significant productivity and resource savings for Oracle customers considering database consolidation, on- or off-premise. The multitenant features of Oracle Database 12c support not only cloud deployment, but also database consolidation. Oracle has purchased electronic distribution rights to this research note and posted it to Oracle.com. The full research note can be downloaded here.  
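    For context, the consolidation model the note describes boils down to plugging many databases into one container database. The following is only a generic 12c illustration - the database and user names are hypothetical and do not come from the Ovum note:

      -- Create a pluggable database inside an existing container database
      CREATE PLUGGABLE DATABASE sales_pdb
        ADMIN USER pdb_admin IDENTIFIED BY password
        FILE_NAME_CONVERT = ('/oradata/cdb1/pdbseed/', '/oradata/cdb1/sales_pdb/');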

    Read the article

  • Workshop in Holland - and open questions

    - by Mike Dietrich
    Thanks to everybody who visited the Upgrade Workshop in Maarsen yesterday. I had lots of fun - and I hope you enjoyed it, too :-) The slides, as always, can be downloaded from: http://apex.oracle.com/folien Use the Schluesselwort/Keyword: upgrade112 And thanks to all of you who sent feedback regarding "traget/destination" (I will change it in the slides) and other topics such as Enterprise Manager Grid Control 11g. Enterprise Manager 11g will be launched on 22-APR-2010 - and you can join the event live if you happen to be in New York: http://www.oracle.com/enterprisemanager11g/index.html Thanks for this hint!!! Regarding the open questions: Will there be PSUs available for Intel Solaris? PSUs will be made available on nearly all platforms, including Intel Solaris. Please see Note:882604.1 for platform information and Note:854428.1 for direct links to the PSU download location. Is COMMIT_WRITE=NOWAIT the default in patch set 10.2.0.4? I tried to verify this and could find neither a bug entry nor any documentation saying that 10.2.0.4 has a different default setting (the default behaviour is WAIT). I checked my 10.2.0.4 instances as well, and there it is set to WAIT. If this parameter is not explicitly specified, then database commit behavior defaults to writing commit records to disk before control is returned to the client. If only IMMEDIATE or BATCH is specified, but not WAIT or NOWAIT, then WAIT mode is assumed. If only WAIT or NOWAIT is specified, but not IMMEDIATE or BATCH, then IMMEDIATE mode is assumed. Please send me feedback if you have different experiences. Service Request escalation by telephone? Thanks for this update - I didn't realize that ;-) Now I know why it didn't help last month when I updated an SR ... here's the official information on that: Note:199389.1 - the note was updated on 24-FEB-2010. See the telephone number to call Oracle Support to request an escalation here: http://www.oracle.com/support/contact.html
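    If you want to double-check the COMMIT_WRITE default on your own instance, a generic query along these lines works; this is standard v$parameter usage, not taken from the workshop material:

      -- Show the current COMMIT_WRITE setting and whether it is the default
      SELECT name, value, isdefault
      FROM   v$parameter
      WHERE  name = 'commit_write';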

    Read the article

  • OSB and Coherence Integration

    - by mark.ms.smith
    Anyone who has tried to manage Coherence nodes or tried to cache results in OSB will appreciate the new functionality now available. As of WebLogic Server 10.3.4, you can use the WebLogic Administration Server, via the Administration Console or WLST, and the Java-based Node Manager to manage and monitor the life cycle of stand-alone Coherence cache servers. This is a great step forward, as the previous options mainly involved writing your own scripts to do this. You can find an excellent description of how this works at James Bayer’s blog. You can also find the WebLogic documentation here.

    As of Oracle Service Bus 11gR1 (11.1.1.3.0), OSB supports service result caching for Business Services with Coherence. If you use Business Services that return somewhat static results that do not change often, you can configure those Business Services to cache results. For Business Services that use result caching, you can control the time to live for the cached result. After the cached result expires, the next Business Service call invokes the back-end service to get the result, which is then stored in the cache for future requests to access. I’m thinking that this caching functionality would be perfect for some sort of cross-reference data that is refreshed nightly by batch. You can find the OSB Business Service documentation here.

    Result Caching in a dedicated JVM: This example demonstrates these new features by configuring an OSB Business Service to cache results in a separate Coherence JVM managed by WebLogic. The reason you may want a separate, dedicated JVM is that the result cache data could potentially be quite large, and you may want to protect your OSB Java heap. In this example, the client calls an OSB Proxy Service to get Employee data based on an Employee Id. Using a Business Service, OSB calls an external system. The results are automatically cached, and on subsequent calls the respective results are retrieved from the cache rather than the external system.

    Step 1 - Set up your Coherence Server: Via the OSB Administration Server Console, create the Coherence Server to be used as the results cache. Here are the configured Coherence Server arguments from the Server Start tab. Note that I’m using the default Cache Config and Override files in the domain:

      -Xms256m -Xmx512m -XX:PermSize=128m -XX:MaxPermSize=256m -Dtangosol.coherence.override=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-override.xml -Dtangosol.coherence.cluster=OSB-cluster -Dtangosol.coherence.cacheconfig=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-cache-config.xml -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dcom.sun.management.jmxremote

    Just in case you need it, here is my Coherence Server classpath:

      /app/middleware/jdev_11.1.1.4/oracle_common/modules/oracle.coherence_3.6/coherence.jar: /app/middleware/jdev_11.1.1.4/modules/features/weblogic.server.modules.coherence.server_10.3.4.0.jar: /app/middleware/jdev_11.1.1.4/oracle_osb/lib/osb-coherence-client.jar

    By default, OSB will try to create a local result cache instance. You need to disable this by adding the following JVM parameters to each of the OSB Managed Servers:

      -Dtangosol.coherence.distributed.localstorage=false -DOSB.coherence.cluster=OSB-cluster

    If you need more information on configuring a remote result cache, have a look at the configuration documentation under the heading "Using an Out-of-Process Coherence Cache Server".

    Step 2 - Configure your Business Service: Under the respective Business Service Message Handling Configuration (Advanced Properties), you need to enable "Result Caching". Additionally, you need to determine what the cache data will be keyed on. In the example below, I’m keying it on the unique Employee Id.

    The Results: As this test was on my laptop, the actual timings are just an indication that there is a benefit to caching results. Using my test harness, I sent 10,000 requests to OSB, all with the same Employee Id. In this case, I had result caching disabled. You can see that this caused the back-end Business Service (BS_GetEmployeeData) to be called for each request. Then, after enabling result caching, I sent the same number of identical requests. You can now see the Business Service was only invoked once, on the first request. All subsequent requests used the Result Cache.

    Read the article

  • A Reusable Builder Class for Ruby Testing

    - by Liam McLennan
    My last post was about a class for building test data objects in C#. This post describes the same tool, but implemented in Ruby. The C# version was written first, but I originally came up with the solution in my head using Ruby and then translated it to C#. The Ruby version was easier to write and is easier to use, thanks to Ruby’s dynamic nature making generics unnecessary. Here are my example domain classes:

      class Person
        attr_accessor :name, :age
        def initialize(name, age)
          @name = name
          @age = age
        end
      end

      class Property
        attr_accessor :street, :manager
        def initialize(street, manager)
          @street = street
          @manager = manager
        end
      end

    and the test class showing what the builder does:

      class Test_Builder < Test::Unit::TestCase
        def setup
          @build = Builder.new
          @build.configure({
            Property => lambda { Property.new '127 Creek St', @build.a(Person) },
            Person => lambda { Person.new 'Liam', 26 }
          })
        end

        def test_create
          assert_not_nil @build
        end

        def test_can_get_a_person
          @person = @build.a(Person)
          assert_not_nil @person
          assert_equal 'Liam', @person.name
          assert_equal 26, @person.age
        end

        def test_can_get_a_modified_person
          @person = @build.a Person do |person|
            person.age = 999
          end
          assert_not_nil @person
          assert_equal 'Liam', @person.name
          assert_equal 999, @person.age
        end

        def test_can_get_a_different_type_that_depends_on_a_type_that_has_not_been_configured_yet
          @my_place = @build.a(Property)
          assert_not_nil @my_place
          assert_equal '127 Creek St', @my_place.street
          assert_equal @build.a(Person).name, @my_place.manager.name
        end
      end

    Finally, the implementation of Builder:

      class Builder
        # defaults is a hash of Class => creation lambda
        def configure defaults
          @defaults = defaults
        end

        def a(klass)
          temp = @defaults[klass].call()
          yield temp if block_given?
          temp
        end
      end

    Read the article

  • Incremental Statistics Maintenance – what statistics will be gathered after DML occurs on the table?

    - by Maria Colgan
    Incremental statistics maintenance was introduced in Oracle Database 11g to improve the performance of gathering statistics on large partitioned tables. When incremental statistics maintenance is enabled for a partitioned table, Oracle accurately generates global-level statistics by aggregating partition-level statistics. As more people begin to adopt this functionality, we have gotten more questions about how incremental statistics are expected to behave in a given scenario. For example, last week we got a question about which partitions should have statistics gathered on them after DML has occurred on the table. The person who asked the question assumed that statistics would only be gathered on partitions that had stale statistics (10% of the rows in the partition had changed). However, what they actually saw when they ran DBMS_STATS.GATHER_TABLE_STATS was that all of the partitions that had been affected by the DML had statistics re-gathered on them. This is the expected behavior: incremental statistics maintenance is supposed to yield the same statistics as gathering table statistics from scratch, just faster. This means incremental statistics maintenance needs to gather statistics on any partition that will change the global or table-level statistics. For instance, the min or max value for a column could change after just one row is inserted or updated in the table. It might be easier to demonstrate this using an example. Let’s take the ORDERS2 table, which is partitioned by month on order_date. We will begin by enabling incremental statistics for the table and gathering statistics on the table. After the statistics gather, the last_analyzed date for the table and all of the partitions shows 13-Mar-12, and we now have the following column statistics for the ORDERS2 table. We can also confirm that we really did use incremental statistics by querying the dictionary table sys.HIST_HEAD$, which should have an entry for each column in the ORDERS2 table. So, now that we have established a good baseline, let’s move on to the DML. Information is loaded into the latest partition of the ORDERS2 table once a month. Existing orders may also be updated to reflect changes in their status. Let’s assume the following transactions take place on the ORDERS2 table this month. After these transactions have occurred, we need to re-gather statistics, since the partition ORDERS_MAR_2012 now has rows in it and the number of distinct values and the maximum value for the STATUS column have also changed. Now if we look at the last_analyzed date for the table and the partitions, we will see that the global statistics and the statistics on the partitions where rows have changed due to the update (ORDERS_FEB_2012) and the data load (ORDERS_MAR_2012) have been updated. The column statistics also reflect the changes, with the number of distinct values in the STATUS column increasing to reflect the update. So, incremental statistics maintenance will gather statistics on any partition whose data has changed, when that change will impact the global-level statistics.
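    If you want to reproduce this behavior, the basic setup is a pair of DBMS_STATS calls. This is only a sketch - ORDERS2 is the table from the example above, but the preference call and the verification query here are generic DBMS_STATS usage rather than the original demo script:

      -- Enable incremental statistics maintenance on the partitioned table
      EXEC DBMS_STATS.SET_TABLE_PREFS(USER, 'ORDERS2', 'INCREMENTAL', 'TRUE');

      -- Gather statistics; global stats are aggregated from partition-level synopses
      EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'ORDERS2');

      -- After DML and a re-gather, check which partitions were re-analyzed
      SELECT partition_name, last_analyzed
      FROM   user_tab_partitions
      WHERE  table_name = 'ORDERS2'
      ORDER  BY partition_position;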

    Read the article

  • Indian government departments have more insecure websites than others.

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/10/26/indian-government-department-have-more-unsecure-website-then-others.aspx One of my friends shared his college experience with me. He has no background in computer science. One day he told me that Ankit Fadia came to their college and, in front of many students, showed how to hack the BSNL website with a few tricks, breaking down how the BSNL site works. I told him that BSNL's is one of the most insecure websites in India. If you log in to the website, it may load in a few seconds, but sometimes it takes 58 minutes. That is not a typo: 58 minutes, just under an hour. Open the link in a tab and it can take hours to load. If you are using IE8, Chrome, or Firefox, you are forced to use IE7 or downgrade; I simply use IE7 mode in IE to make it work. This happens because they use something called DynaTrace. The site is also deeply insecure. Here is how: suppose my username is xyz and my password is abc. On the password reset page, the site asks for a broadband username, and that username is not checked against the account I am logged in with. Remember that the portal username and password are different from the broadband username and password. So if I wanted to reset your password, I would only need to know your broadband username: I log in with my own account, open the reset page, and fill in your broadband username, and it works. I have not tried this, but broadband usernames are easy to guess, since they follow a predictable pattern. Is this safe? Nope. There are many things on the site that make it feel like a website from another century; they still live in popup land. These sites are nothing but crap: they don't work most of the time, and when they do, they run too slowly.

    Read the article

  • #AJIReport 16 | Jason Bock on Windows Runtime and Metaprogramming

    - by Jeff Julian
    In this episode we sit down with Jason Bock to talk about the Windows Runtime and his upcoming book on metaprogramming. Jason has been a consultant at Magenic for the past 11 years. In this show, Jason walks us through how to get started with Windows RT and talks about what the experience is like deploying to the Windows Store. We get into the new frontier of device development and the restrictions that are in place to protect users and other applications. Towards the end of the show we start talking about Jason's book on metaprogramming, which he is co-authoring with Kevin Hazzard. Listen to the Show Site: http://www.jasonbock.net/ Book: Metaprogramming in .NET Twitter: @JasonBock

    Read the article

  • How-to call server side Java from JavaScript

    - by frank.nimphius
    The af:serverListener tag in Oracle ADF Faces allows JavaScript to call into server-side Java. The example shown below uses an af:clientListener tag to invoke client-side JavaScript in response to a key stroke in an Input Text field. The script then calls a defined af:serverListener by the name defined in its type attribute. The server listener can be defined anywhere on the page, though from a code readability perspective it is a good idea to put it close to where it is invoked.

      <af:inputText id="it1" label="...">
        <af:clientListener method="handleKeyUp" type="keyUp"/>
        <af:serverListener type="MyCustomServerEvent"
                           method="#{mybean.handleServerEvent}"/>
      </af:inputText>

    The JavaScript function below reads the event source from the event object that gets passed into the called JavaScript function. The call to the server-side Java method, which is defined on a managed bean, is issued by a JavaScript call to AdfCustomEvent. The arguments passed to the custom event are the event source, the name of the server listener, a message payload formatted as a set of key:value pairs, and true/false indicating whether or not to make the call immediate in the request lifecycle.

      <af:resource type="javascript">
        function handleKeyUp(evt) {
          var inputTextComponent = evt.getSource();
          AdfCustomEvent.queue(inputTextComponent,
                               "MyCustomServerEvent",
                               {fvalue: inputTextComponent.getSubmittedValue()},
                               false);
          evt.cancel();
        }
      </af:resource>

    The server-side managed bean method uses a single-argument signature with the argument type being ClientEvent. The client event provides information about the event source object - as provided in the call to AdfCustomEvent - as well as the payload keys and values. The payload is accessible from a call to getParameters, which returns a HashMap from which values can be fetched by their key identifiers.

      public void handleServerEvent(ClientEvent ce){
        String message = (String) ce.getParameters().get("fvalue");
        ...
      }

    Find the tag library at: http://download.oracle.com/docs/cd/E15523_01/apirefs.1111/e12419/tagdoc/af_serverListener.html

    Read the article

  • Controlling Crystal Reports Authentication

    - by Jason Ulloa
    For those of us who have worked with Crystal Reports, it is no secret that when we connect a report directly to the database, we run into the authentication problem: as the report starts loading, it asks us to authenticate against the server, and if we don't, we simply don't see the report. Besides being tedious for users, this becomes a fairly big security problem, which is why in most cases the recommendation is to use a dataset. However, for those who know this and still do not want to use datasets, preferring to connect their Crystal report directly, let's see how to implement a small class that helps with that task. Generally, when we work with a web application, our connection string lives in web.config and often contains the user and password needed to access the database. We will use this connection string and these values to implement authentication in the report. The connection string usually looks like this:

      <connectionStrings>
        <remove name="LocalSqlServer"/>
        <add name="xxx" connectionString="Data Source=.\SqlExpress;Integrated Security=False;Initial Catalog=xxx;user id=myuser;password=mypass" providerName="System.Data.SqlClient"/>
      </connectionStrings>

    For this example we will name our class CrystalRules (just something I thought of at the time).

    Step 1: Create a SqlConnectionStringBuilder variable and assign it the connection string defined in web.config; we will later use it to obtain the user and password for the Crystal report.

      SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(ConfigurationManager.ConnectionStrings["xxx"].ConnectionString);

    Step 2: Implement the properties. To keep things tidy, we create several private properties that hold the database name, password, user, and server:

      private string _dbName;
      private string _serverName;
      private string _userID;
      private string _passWord;

      private string dataBase { get { return _dbName; } set { _dbName = value; } }
      private string serverName { get { return _serverName; } set { _serverName = value; } }
      private string userName { get { return _userID; } set { _userID = value; } }
      private string dataBasePassword { get { return _passWord; } set { _passWord = value; } }

    Step 3: Create the method that applies the connection data. Once we have the properties, we assign the values collected by the SqlConnectionStringBuilder to the variables and create a ConnectionInfo variable to apply the connection data:

      internal void ApplyInfo(ReportDocument _oRpt)
      {
          dataBase = builder.InitialCatalog;
          serverName = builder.DataSource;
          userName = builder.UserID;
          dataBasePassword = builder.Password;

          Database oCRDb = _oRpt.Database;
          Tables oCRTables = oCRDb.Tables;
          TableLogOnInfo oCRTableLogonInfo = default(TableLogOnInfo);
          ConnectionInfo oCRConnectionInfo = new ConnectionInfo();

          oCRConnectionInfo.DatabaseName = _dbName;
          oCRConnectionInfo.ServerName = _serverName;
          oCRConnectionInfo.UserID = _userID;
          oCRConnectionInfo.Password = _passWord;

          foreach (Table oCRTable in oCRTables)
          {
              oCRTableLogonInfo = oCRTable.LogOnInfo;
              oCRTableLogonInfo.ConnectionInfo = oCRConnectionInfo;
              oCRTable.ApplyLogOnInfo(oCRTableLogonInfo);
          }
      }

    Step 4: Create the ReportDocument and apply the credentials. Once the data has been collected and assigned, we create a ReportDocument, assign it to the CrystalReportViewer, and apply the access data obtained earlier:

      public void loadReport(string repName, CrystalReportViewer viewer)
      {
          // attach our report to the viewer and set the database login
          ReportDocument report = new ReportDocument();
          report.Load(HttpContext.Current.Server.MapPath("~/Reports/" + repName));
          ApplyInfo(report);
          viewer.ReportSource = report;
      }

    In the end, our complete class looks like this:

      public class CrystalRules
      {
          SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(ConfigurationManager.ConnectionStrings["Fatchoy.Data.Properties.Settings.FatchoyConnectionString"].ConnectionString);

          private string _dbName;
          private string _serverName;
          private string _userID;
          private string _passWord;

          private string dataBase { get { return _dbName; } set { _dbName = value; } }
          private string serverName { get { return _serverName; } set { _serverName = value; } }
          private string userName { get { return _userID; } set { _userID = value; } }
          private string dataBasePassword { get { return _passWord; } set { _passWord = value; } }

          internal void ApplyInfo(ReportDocument _oRpt)
          {
              dataBase = builder.InitialCatalog;
              serverName = builder.DataSource;
              userName = builder.UserID;
              dataBasePassword = builder.Password;

              Database oCRDb = _oRpt.Database;
              Tables oCRTables = oCRDb.Tables;
              TableLogOnInfo oCRTableLogonInfo = default(TableLogOnInfo);
              ConnectionInfo oCRConnectionInfo = new ConnectionInfo();

              oCRConnectionInfo.DatabaseName = _dbName;
              oCRConnectionInfo.ServerName = _serverName;
              oCRConnectionInfo.UserID = _userID;
              oCRConnectionInfo.Password = _passWord;

              foreach (Table oCRTable in oCRTables)
              {
                  oCRTableLogonInfo = oCRTable.LogOnInfo;
                  oCRTableLogonInfo.ConnectionInfo = oCRConnectionInfo;
                  oCRTable.ApplyLogOnInfo(oCRTableLogonInfo);
              }
          }

          public void loadReport(string repName, CrystalReportViewer viewer)
          {
              // attach our report to the viewer and set the database login
              ReportDocument report = new ReportDocument();
              report.Load(HttpContext.Current.Server.MapPath("~/Reports/" + repName));
              ApplyInfo(report);
              viewer.ReportSource = report;
          }

          #region instance

          private static CrystalRules m_instance;

          // Properties
          public static CrystalRules Instance
          {
              get
              {
                  if (m_instance == null)
                  {
                      m_instance = new CrystalRules();
                  }
                  return m_instance;
              }
          }

          public DataDataContext m_DataContext
          {
              get { return DataDataContext.Instance; }
          }

          #endregion instance
      }

    While this solution is not robust and not the most secure, in scenarios like an intranet, and when you are pressed for time, it can be a great help.

    Read the article

  • The Social Business Thought Leaders - John Hagel

    - by kellsey.ruppel
    While many European economies are on the brink of a recession amid rising taxation, mounting job losses, and growing bankruptcy filing rates, there's an understandable risk of losing sight of the deeper forces at play. Yet instead of surrendering to uncertainty and trying to survive in the short term, many organizations feel the urge to be better prepared to thrive in these complex times by developing a more articulated long-term understanding of both the opportunities and the challenges ahead. For example: What long-term economic, technological, and societal changes are rolling out? Which foundational dynamics will affect our companies' performance, productivity, competition, and innovative potential in the upcoming decades? How will digital infrastructure change our business landscape? What kind of capabilities will be key to competing in a market shaped by growing turbulence, unpredictability, and volatility? Breaking out from strictly cyclical thinking, studies such as the Shift Index by John Hagel, Co-Chairman of the Center for the Edge at Deloitte & Touche (see Measuring the forces of long-term change - The 2009 Shift Index), depict a worrying performance challenge that has affected every industry in the entire US economy over the last 45 years. Amidst a more than doubled competitive intensity in the market, and even with improved labor productivity, the actual performance of US firms has consistently fallen, to 25% of what it was in 1965. Most of this reported value is shifting from institutions and organizations to individuals, whether they are customers or young creative talent. To thrive in the digital economy and reverse declining performance trends, companies will have to fundamentally rethink their management approach by moving from knowledge stocks to knowledge flows, from scalable efficiency to scalable learning, and from push organizations to pull organizations. Based on the outcomes of the Shift Index and on the book The Power of Pull, the first episode of the Social Business Thought-Leaders features John Hagel providing strategic insights on how companies will succeed in the 21st century.

    Read the article

  • QotD: Matt Stephens on OpenJDK in 2012 at the Register

    While Java SE churns and gets pushed back, the new initiatives do at least show OpenJDK is reinvigorating the Java space. The project has picked up speed just a little too late for the fifth anniversary of the open-sourcing of Java, but if these promised developments really do come together then that means next year should see a series of “one last things” missing from 2011.Matt Stephens in an article in the Register.

    Read the article

  • Device being used by VxVM

    - by Onur Bingul
    If you are using VxVM, you may have issues when you try to unconfigure a disk:

      root@techsupport2 # cfgadm -c unconfigure c1::dsk/c1t3d0
      cfgadm: Component system is busy, try again: failed to offline:
          Resource           Information
          -----------------  -------------------------
          /dev/dsk/c1t3d0    Device being used by VxVM

    The "cfgadm unconfigure" command fails here. The way to resolve this is to disable the disk's path from DMP control. Since there is only one path to this disk, the "-f" (force) option needs to be used:

      root@techsupport2 # vxdmpadm -f disable path=c1t3d0s2
      root@techsupport2 # vxdmpadm getsubpaths
      NAME       STATE[A]      PATH-TYPE[M]  DMPNODENAME  ENCLR-NAME  CTLR  ATTRS
      ================================================================================
      c1t6d0     ENABLED(A)    -             disk_0       disk        c1    -
      c1t3d0     DISABLED(M)   -             disk_1       disk        c1    -
      c1t0d0s2   ENABLED(A)    -             disk_2       disk        c1    -
      c1t1d0     ENABLED(A)    -             disk_3       disk        c1    -
      c3t47d0    ENABLED(A)    -             sun35100_0   sun35100    c3    -
      c3t47d1    ENABLED(A)    -             sun35100_1   sun35100    c3    -
      c3t47d2s2  ENABLED(A)    -             sun35100_2   sun35100    c3    -
      c3t47d3s2  ENABLED(A)    -             sun35100_3   sun35100    c3    -

    You can see the path is now disabled from DMP.

      root@techsupport2 # cfgadm -c unconfigure c1::dsk/c1t3d0

    Now you can unconfigure the disk.

    Read the article

  • Installing a VADTools design component into your 3CX Voice Application Designer toolbox

    - by ParadigmShift
    The 3CX Voice Application Designer is an innovative tool for creating IVR (Interactive Voice Response) applications, or voice applications. It offers a familiar drag-and-drop experience that Visual Studio developers will get the hang of pretty quickly. Additionally, there are new third-party components released by BlueVoice that are distributed through www.UtahVoIPStore.com. I thought I'd post a quick introduction by showing how to install a component into your designer toolbox. In this example I am using the CommandLine component, which lets you call the command line from your voice application. First, copy the ZIP file that came with your component to the root folder of your VAD project. Now extract the ZIP file into the root directory. The component will be in the root directory, and the Libraries directory will have a new DLL file. Open your VAD project and right-click on the project in the project explorer to add the new component to your project. Navigate to the root folder of your project and select the new component. The component is now ready for you to use in your toolbox.

    Read the article

  • Oracle AIM, Oracle ABF, and Siebel Results Roadmap Officially Retired as of January 31, 2011

    - by tom.spitz
    It seems somehow appropriate that the first entry of the Oracle® Unified Method (OUM) blog is about the retirement of several of our legacy methods, most notably AIM Foundation. If you're reading this, you're probably aware that Oracle has been developing OUM to support the entire Enterprise IT lifecycle, including support for the successful implementation of every Oracle product. As Oracle has continued to acquire new companies and technologies, it has become essential that we also create a single, unified language and approach for implementation - across the Oracle ecosystem. With the release of OUM 5.1 in 2009, OUM provided full support for all enterprise application implementation projects, including Oracle E-Business Suite R12, Siebel CRM, PeopleSoft Enterprise, and JD Edwards EnterpriseOne projects. In 2010, we released OUM training that supports the use of OUM on these types of projects. That support represented a major milestone in the evolution of OUM and enabled implementers to transition to OUM. Consequently, we announced a staggered retirement schedule for Oracle's legacy methods. On January 31, 2011 we announced the retirement of: Oracle Application Implementation Method (AIM), Oracle AIM for Business Flows (ABF), and Siebel Results Roadmap. Later this year, we will announce the retirement of Compass - the legacy PeopleSoft method - and Data Warehouse Method Fast Track. OUM is available free of charge to Oracle Gold, Platinum, and Diamond partners through the Oracle Partner Network (OPN) [OUM on OPN]. The OUM Customer Program allows customers to obtain copies of the method for their internal use by contracting with Oracle for an engagement of two weeks or longer meeting some additional minimum criteria. There will be more retirement announcements in the coming months. For now it's "Adios AIM." Thanks for the memories...

    Read the article

  • FREE Online Azure Workshop includes a **FREE Azure Account**

    - by Jim Duffy
    My friend and all around good guy, Microsoft Developer Evangelist for the Carolinas, Brian Hitney, along with fellow Microsofties Jim O’Neil and John McClelland will be presenting a FREE Windows Azure online workshop tomorrow, Tuesday, May 4th from 7pm-9pm. What? You can’t make it Tuesday evening? Not to worry. This webcast will be repeated again a number of times over the next month or so. Taken from Brian’s blog post about it: “Elevate your skills with Windows Azure in this hands-on workshop! In this event we’ll guide you through the process of building and deploying a large scale Azure application. Forget about “hello world”! In less than two hours we’ll build and deploy a real cloud app that leverages the Azure data center and helps make a difference in the world. Yes, in addition to building an application that will leave you with a rock-solid understanding of the Azure platform, the solution you deploy will contribute back to Stanford’s Folding@home distributed computing project. There’s no cost to you to participate in this session; each attendee will receive a temporary, self-expiring, full-access account to work with Azure for a period of 2-weeks.” Did you catch that last sentence??  “each attendee will receive a temporary, self-expiring, full-access account to work with Azure for a period of 2-weeks.” A FREE, full-access, Windows Azure account to experiment and learn with? Now we’re talking. For more information check out Brian’s blog post or head here. Have a day. :-|

    Read the article

  • Java Spotlight Episode 84: Anil Gaur on JavaEE 7

    - by Roger Brinkley
    Tweet Interview with Anil Gaur, VP of Java Platform for Enterprise Edition and GlassFish Server, on Java EE 7. Joining us this week on the Java All Star Developer Panel are Dalibor Topic, Java Free and Open Source Software Ambassador, and Arun Gupta, Java EE Guy. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes. Show Notes News Tori Wieldt - Judges Selected for Duke's Choice Awards Donald Smith - #OpenJDK interview in Java Magazine Henrik Ståhl - Java 7 adoption at 23% JavaOne Kicks Off with Sunday Keynotes at Masonic Auditorium Jersey 2.0 M4 JSF 2.2 Latest Snapshot NetBeans IDE 7.2 - Deploy to Cloud Events May 30, OTN Java Developer Day, Redwood Shores June 11-14, Cloud Computing Expo, New York City June 12, Boulder JUG June 13, Denver JUG June 13, Eclipse Juno DemoCamp, Redwood Shores June 13, JUG Münster June 14, Java Klassentreffen, Vienna, Austria June 18-20, QCon, New York City June 26-28, Jazoon, Zurich, Switzerland July 5, Java Forum, Stuttgart, Germany July 30-August 1, JVM Language Summit, Santa Clara Feature Interview Anil Gaur is the Vice President of Java Platform, Enterprise Edition, and GlassFish Server at Oracle in the Fusion Middleware Group. He is responsible for the creation of the Java EE specifications, reference implementation, and compatibility test suites, and is leading the evolution of Java EE into cloud and PaaS environments through the Java EE 7 standard. Prior to that, he managed the delivery of the Java EE 6 platform and SDK, which quickly gained momentum in enterprise application development and deployments. In this episode we talk about the GlassFish 3.1 release. Mail Bag What's Cool RFR (L): Adding core file parsing on Mac OS X to SA Sergio Del Valle @swdelvalle is the 1,000th @JavaSpotlight twitter follower

    Read the article

  • Partitioned Repository for WebCenter Content using Oracle Database 11g

    - by Adao Junior
    One of the biggest challenges for content management solutions is storage management, due to the high volume of unstoppably growing information. Even if you have storage appliances and a lot of terabytes, things like backup, compression, deduplication, storage relocation, encryption, and availability can be a nightmare. One standard option you have with Oracle WebCenter Content is to store data in the database, and Oracle Database lets you leverage features like compression, deduplication, encryption, and seamless backup. But with a huge volume, the challenge is passed to the DBA to keep the WebCenter Content database up and running. One solution is the use of database partitions for your content repository, but what are the implications of this? Can it fit your business requirements? Well, yes - it's up to you how you manage it; you just need a good plan. During your "storage brainstorm plan", keep in mind what you need. Do you need to store petabytes of documents? Do you need everything online? Is there a way to logically separate the "good content" from the "legacy content"? The first thing that comes to my mind is to use the creation date of the document, but you need to remember that a document can receive a lot of revisions, so maybe you should consider the revision creation date. Your plan can also have complex rules, such as per document type, per a custom metadata field like department, or a hybrid of date, document type, and a specific virtual folder. Extrapolating the use, you can have your repository distributed across different servers, different disks, and different disk types (such as SSD, SAS, SATA, tape, ...), separated according to your business requirements, separating the "hot" content from the legacy content and easily matching your compliance requirements. If you plan to partition by revision, the simple way is to consider the dID, which is the sequential unique id for every content item created using WebCenter Content, or dLastModified, which is the date field of the FileStorage table that contains the date of inclusion of the content into the database table using SecureFiles. Using the scenario of a partitioned repository with a hierarchical separation by date, we will transform the FileStorage table into a partitioned table using "partition by range" on the dLastModified column (you can use dID, or a join with other tables for other metadata such as dDocType, security, etc.). The test scenario below covers: previously existing data in the JDBC Storage to be migrated to the new partitioned JDBC Storage; partitioning by date; automatic generation of new partitions based on a pre-defined interval (available only with Oracle Database 11g+); deduplication and compression for legacy data; Oracle WebCenter Content 11g PS5 (which may include some customizations that do not affect the test scenario). For the test case you need some data stored using JDBC Storage to be the "legacy" data. If you have not done this before, just create a storage rule pointed to the JDBC Storage: enable the StorageRule metadata field in the UI and upload some documents using this rule. For this test case you can run as the schema owner or as a DBA user; we will use the schema owner TESTS_OCS. I can't forget to mention that this is just a test, and you should do a proper backup of your environment. When you use the schema owner, you need some privileges; using the DBA user, grant the privileges needed: REM Grant privileges required for online redefinition.
    GRANT EXECUTE ON DBMS_REDEFINITION TO TESTS_OCS; GRANT ALTER ANY TABLE TO TESTS_OCS; GRANT DROP ANY TABLE TO TESTS_OCS; GRANT LOCK ANY TABLE TO TESTS_OCS; GRANT CREATE ANY TABLE TO TESTS_OCS; GRANT SELECT ANY TABLE TO TESTS_OCS; REM Privileges required to perform cloning of dependent objects. GRANT CREATE ANY TRIGGER TO TESTS_OCS; GRANT CREATE ANY INDEX TO TESTS_OCS;

    In our test scenario we will separate the content as Legacy, Day1, Day2, Day3, and Future. The last one will be partitioned automatically, using three tablespaces in round-robin mode. In a real scenario the partition rule could be per month, per year, or any rule you choose. Tablespaces for the test scenario:

      CREATE TABLESPACE TESTS_OCS_PART_LEGACY DATAFILE 'tests_ocs_part_legacy.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
      CREATE TABLESPACE TESTS_OCS_PART_DAY1 DATAFILE 'tests_ocs_part_day1.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
      CREATE TABLESPACE TESTS_OCS_PART_DAY2 DATAFILE 'tests_ocs_part_day2.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
      CREATE TABLESPACE TESTS_OCS_PART_DAY3 DATAFILE 'tests_ocs_part_day3.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
      CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_A DATAFILE 'tests_ocs_part_round_robin_a.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
      CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_B DATAFILE 'tests_ocs_part_round_robin_b.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
      CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_C DATAFILE 'tests_ocs_part_round_robin_c.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;

    Before starting, gather optimizer statistics on the actual FileStorage table:

      EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage', cascade => TRUE);

    Now check whether it is possible to execute the redefinition process:

      EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('TESTS_OCS', 'FileStorage', DBMS_REDEFINITION.CONS_USE_PK);

    If there are no error messages, you are good to go. Create a partitioned interim FileStorage table.
    You need to create a new table with the partition information to act as an interim table:

      CREATE TABLE FILESTORAGE_Part (
        DID NUMBER(*,0) NOT NULL ENABLE,
        DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
        DLASTMODIFIED TIMESTAMP (6),
        DFILESIZE NUMBER(*,0),
        DISDELETED VARCHAR2(1 CHAR),
        BFILEDATA BLOB
      )
      LOB (BFILEDATA) STORE AS SECUREFILE (
        ENABLE STORAGE IN ROW NOCACHE LOGGING KEEP_DUPLICATES NOCOMPRESS
      )
      PARTITION BY RANGE (DLASTMODIFIED)
      INTERVAL (NUMTODSINTERVAL(1,'DAY'))
      STORE IN (TESTS_OCS_PART_ROUND_ROBIN_A, TESTS_OCS_PART_ROUND_ROBIN_B, TESTS_OCS_PART_ROUND_ROBIN_C)
      (
        PARTITION FILESTORAGE_PART_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM'))
          TABLESPACE TESTS_OCS_PART_LEGACY
          LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_LEGACY RETENTION NONE DEDUPLICATE COMPRESS HIGH ),
        PARTITION FILESTORAGE_PART_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
          TABLESPACE TESTS_OCS_PART_DAY1
          LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY1 RETENTION AUTO KEEP_DUPLICATES COMPRESS ),
        PARTITION FILESTORAGE_PART_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
          TABLESPACE TESTS_OCS_PART_DAY2
          LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY2 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS ),
        PARTITION FILESTORAGE_PART_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
          TABLESPACE TESTS_OCS_PART_DAY3
          LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY3 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS )
      );

    After the creation you should see your partitions defined. Note that only the fixed range partitions have been created; none of the interval partitions have been created yet. Start the redefinition process:

      BEGIN
        DBMS_REDEFINITION.START_REDEF_TABLE(
          uname        => 'TESTS_OCS'
         ,orig_table   => 'FileStorage'
         ,int_table    => 'FileStorage_PART'
         ,col_mapping  => NULL
         ,options_flag => DBMS_REDEFINITION.CONS_USE_PK
        );
      END;

    This operation can take some time to complete, depending on how much content you have and on the size of the table. Using the DBA user you can check the progress with this command:

      SELECT * FROM v$sesstat WHERE sid = 1;

    Copy dependent objects:

      DECLARE
        redefinition_errors PLS_INTEGER := 0;
      BEGIN
        DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
          uname            => 'TESTS_OCS'
         ,orig_table       => 'FileStorage'
         ,int_table        => 'FileStorage_PART'
         ,copy_indexes     => DBMS_REDEFINITION.CONS_ORIG_PARAMS
         ,copy_triggers    => TRUE
         ,copy_constraints => TRUE
         ,copy_privileges  => TRUE
         ,ignore_errors    => TRUE
         ,num_errors       => redefinition_errors
         ,copy_statistics  => FALSE
         ,copy_mvlog       => FALSE
        );
        IF (redefinition_errors > 0) THEN
          DBMS_OUTPUT.PUT_LINE('>>> FileStorage to FileStorage_PART temp copy Errors: ' || TO_CHAR(redefinition_errors));
        END IF;
      END;

    With the DBA user, verify that there are no errors:

      SELECT object_name, base_table_name, ddl_txt FROM DBA_REDEFINITION_ERRORS;

    Note that this will show two lines related to the constraints; this is expected.
    Synchronize the interim table FileStorage_PART:

      BEGIN
        DBMS_REDEFINITION.SYNC_INTERIM_TABLE(
          uname      => 'TESTS_OCS',
          orig_table => 'FileStorage',
          int_table  => 'FileStorage_PART');
      END;

    Gather statistics on the new table:

      EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage_PART', cascade => TRUE);

    Complete the redefinition:

      BEGIN
        DBMS_REDEFINITION.FINISH_REDEF_TABLE(
          uname      => 'TESTS_OCS',
          orig_table => 'FileStorage',
          int_table  => 'FileStorage_PART');
      END;

    During the execution the FileStorage table is locked in exclusive mode until the operation finishes. After the last command the FileStorage table is partitioned. If you have content outside the fixed partition ranges, you should see the new partitions created automatically, without an error, even if you "forgot" to create all the future ranges. You can now drop the FileStorage_PART table:

      DROP TABLE FileStorage_PART PURGE;

    To check that the FileStorage table is valid and partitioned, use the command:

      SELECT num_rows, partitioned FROM user_tables WHERE table_name = 'FILESTORAGE';

    You can list the contents of the FileStorage table in a specific partition, for example:

      SELECT * FROM FileStorage PARTITION (FILESTORAGE_PART_LEGACY);

    Some useful commands that you can use to check the partitions (note that you need to run them as a DBA user):

      SELECT * FROM DBA_TAB_PARTITIONS WHERE table_name = 'FILESTORAGE';
      SELECT * FROM DBA_TABLESPACES WHERE tablespace_name LIKE 'TESTS_OCS%';

    After the redefinition process completes, you have a new FileStorage table storing all content whose storage rule points to the JDBC Storage, partitioned using the rules set during the creation of the temporary interim FileStorage_PART table. At this point you can test WebCenter Content by downloading the documents (original and renditions). Note that the content could already be in the cache area; take a look in the weblayout directory to see if a file with the same id is there, then click on the web rendition of your test file - if the file was created and you can open it, everything is working. The redefinition process can be repeated many times, which lets you test which layout works best, over and over again. Now some interesting maintenance actions related to the partitions: Make a tablespace read-only. There is no issue with viewing, since WebCenter Content does not alter the revisions. However, when you try to delete a content item that is part of a read-only tablespace, an error occurs and the document is not deleted. The only way to prevent errors today is to create a custom component that checks the partitions and, for a document in a "read only" repository, deletes the metadata and marks the document for deletion at the next database maintenance, like a new redefinition. Take a tablespace offline for archiving purposes or any other reason.
    When you try to open a document that is included in this tablespace, you will receive an error saying the content could not be retrieved, but the other online tablespaces are not affected. The same behavior applies when deleting documents. Again, a custom component is the solution: if you have a document "out of range", the component can show a message that the repository for that document is offline. This can be extended with an option for the user to request that it be brought online again. Move some legacy content to an offline repository (table) using the Exchange option, which moves the content from one partition to an empty non-partitioned table such as FileStorage_LEGACY. Note that this option removes the rows from FileStorage, so you will no longer be able to open the stored content. You always need to keep the indexes and constraints in mind. A redefinition separating the original content (vault) from the renditions, partitioned by date at the same time. This could be an option for DAM environments that want a special place for the renditions and put the original files in storage with lower performance. The process is the same; you just need to change the interim table script to use composite partitioning. It would be something like:

      CREATE TABLE FILESTORAGE_RenditionPart (
        DID NUMBER(*,0) NOT NULL ENABLE,
        DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
        DLASTMODIFIED TIMESTAMP (6),
        DFILESIZE NUMBER(*,0),
        DISDELETED VARCHAR2(1 CHAR),
        BFILEDATA BLOB
      )
      LOB (BFILEDATA) STORE AS SECUREFILE (
        ENABLE STORAGE IN ROW NOCACHE LOGGING KEEP_DUPLICATES NOCOMPRESS
      )
      PARTITION BY LIST (DRENDITIONID)
      SUBPARTITION BY RANGE (DLASTMODIFIED)
      (
        PARTITION Vault VALUES ('primaryFile')
        (
          SUBPARTITION FILESTORAGE_VAULT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
          SUBPARTITION FILESTORAGE_VAULT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
          SUBPARTITION FILESTORAGE_VAULT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
          SUBPARTITION FILESTORAGE_VAULT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
          SUBPARTITION FILESTORAGE_VAULT_FUTURE VALUES LESS THAN (MAXVALUE)
        ),
        PARTITION WebLayout VALUES ('webViewableFile')
        (
          SUBPARTITION FILESTORAGE_WEBLAYOUT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
          SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
          SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
          SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
          SUBPARTITION FILESTORAGE_WEBLAYOUT_FUTURE VALUES LESS THAN (MAXVALUE)
        ),
        PARTITION Special VALUES ('Special')
        (
          SUBPARTITION FILESTORAGE_SPECIAL_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
          SUBPARTITION FILESTORAGE_SPECIAL_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
          SUBPARTITION FILESTORAGE_SPECIAL_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
          SUBPARTITION FILESTORAGE_SPECIAL_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
          SUBPARTITION FILESTORAGE_SPECIAL_FUTURE VALUES LESS THAN (MAXVALUE)
        )
      ) ENABLE ROW MOVEMENT;

    The next post on the partitioned repository will include a sample component to handle the possible exceptions when you need to take a tablespace or partition offline or move it to another place. We can also include some integration with Retention Management and Records Management. Another subject related to partitioning is the ability to create a FileStore Provider pointed at a different database, raising the level of distributed storage vs. performance. Let us know if this is important to you, or if you have a use case not listed - leave a comment. Cross-posted on the blog.ContentrA.com

    Read the article

  • Sharp HealthCare Reduces Storage Requirements by 50% with Oracle Advanced Compression

    - by [email protected]
    Sharp HealthCare is an award-winning integrated regional health care delivery system based in San Diego, California, with 2,600 physicians and more than 14,000 employees. Sharp HealthCare's data warehouse forms a vital part of the information system's infrastructure and is used to separate business intelligence reporting from time-critical health care transactional systems. Faced with tremendous data growth, Sharp HealthCare decided to replace their existing Microsoft products with a solution based on Oracle Database 11g and to implement Oracle Advanced Compression. Join us to hear directly from the primary DBA for the Data Warehouse Application Team, Kim Nguyen, how the new environment significantly reduced Sharp HealthCare's storage requirements and improved query performance.
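    For readers who want a feel for the feature before the webcast, OLTP table compression in Oracle Database 11g is enabled per table with a single statement. This is purely an illustrative sketch - the table name is hypothetical and is not Sharp HealthCare's actual configuration:

      -- Rebuild an existing table compressed; new rows are compressed as blocks fill (11gR2 syntax)
      ALTER TABLE fact_claims MOVE COMPRESS FOR OLTP;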

    Read the article

  • What's the value of a Facebook fan?

    - by David Dorf
    In his blog posting titled "Why Each Facebook Fan Is Worth $2,000 to J. Crew," Joe Skorupa lays out a simplistic calculation for assigning a value to social media efforts within Facebook. While I don't believe the metric, at least it's a metric that can be applied consistently. Trying to explain the ROI to management to start a program, then benchmarking to show progress, isn't straightforward at all. Social media isn't really mature enough to have hard-and-fast rules around valuation (yet). When I'm asked by retailers how to measure social media efforts, I usually fess up and say I can't show an ROI, but the investment is so low you might as well take a risk. Intuitively, it just seems like a good way to interact with consumers, and since your competition is doing it, you'd better do it as well. Vitrue, a social media management company, has calculated a fan as being worth $3.60 per year based on impressions generated in Facebook's news feed. That means a fan base of 1 million translates into at least $3.6 million in equivalent media over a year. Don't believe that number either? Fine - Vitrue now has a tool that lets you adjust the earned media value of a fan. Jump over to http://evaluator.vitrue.com/ and enter your brand's Facebook URL to get an assessment of the current value and potential value. For fun, I compared Abercrombie & Fitch (1,077,480 fans), Gap (567,772 fans), and Wet Seal (294,479 fans). The image below shows the results, assuming the default $5 earned media value for a fan. The calculation is more complicated than just counting fans; it also accounts for postings and comments. It's possible for a brand with fewer fans to have a higher value based on the frequency and relevancy of posts. The tool gathers data via the Social Graph API for the past 30 days of activity. I'm not sure this tool assigns the correct value either, but hey, it's a great start.

    Read the article

  • 10 steps to enable 'Anonymous Access' for your SharePoint 2010 site

    - by KunaalKapoor
What's Anonymous Access? Anonymous access to your SharePoint site enables all visitors to view your SharePoint site anonymously, without having to log in. In this blog I'd like to go through an easy, step-by-step procedure to enable and set up anonymous access. Before you actually enable anonymous access on the site, you'll have to change some settings at the web application level, so let's start with that.
Prerequisites:
1. A hosted SharePoint 2010 farm/server.
2. An existing SharePoint site.
I mention these prerequisites because the steps below wouldn't be valid for a different type of site.
Step 1: In Central Administration, under Application Management, click Manage web applications.
Step 2: Select the site on which you want to enable anonymous access and click the Authentication Providers icon.
Step 3: In the modal window, click the Default zone.
Step 4: Under the Edit Authentication section, check Enable anonymous access and click Save. This makes the anonymous access authentication mechanism available at the web application level in IIS; the web application will now allow anonymous access to be set.
Step 5: Going back to Web Application Management, click the Anonymous Policy icon.
Step 6: Before we proceed any further, under Anonymous Access Restrictions select your zone, set the Permissions to None – No policy, and click Save.
Step 7: Now let's navigate to the top-level site collection for the web application. Click Site Actions > Site Settings.
Step 8: Under Users and Permissions, click Site permissions.
Step 9: On the Edit tab, click Anonymous Access.
Step 10: Choose whether you want anonymous users to have access to the entire Web site or to lists and libraries only, and then click OK. You should now see the anonymous access entry under your site permissions.
Also keep in mind: if you are trying to access the site from a browser within the domain, you'll need to change some browser settings to see the effect. Normally this is because the browser (Internet Explorer) is set to log in automatically in the intranet zone only; this may differ if you have explicitly changed the zones and added the site to trusted sites. If you are on a box within your domain, try to access the site after temporarily changing the Internet Explorer setting to Anonymous Logon for the zone the site is in (for example, Intranet). You will find these settings under Tools > Internet Options > Security tab.

    Read the article

  • Silverlight Cream for April 17, 2010 -- #839

    - by Dave Campbell
In this Issue: ITLackey, SilverLaw, Max Paulousky, Alex Yakhnin, Paul Sheriff, Douglas, Jeremy Likness, Tomasz Janczuk, Anoop Madhusudanan, Adam Kinney, and Ashish Shetty.
Shoutout: If you haven't already seen it, CrocusGirl did a great job of summarizing Day 2 of DevConnections with her Silverlight 4 Launch Notes.
From SilverlightCream.com:
RIA Services - IIS6 Virtual Directory Deployment -- ITLackey has a post up building on his previous post on Windows Authentication with RIA Services and discusses deploying to an IIS Virtual Directory.
How To: Determine ChildWindow Position At Runtime - Silverlight 3 -- SilverLaw has a post up about determining the position of a ChildWindow at run-time, for example after the user moves it.
Modularity in Silverlight Applications - An Issue With ModuleInitializeException – Part 2 -- Max Paulousky has part 2 of his series up on Modularity in Silverlight... he discusses using XAML as a catalog and registering modules at runtime, and compares to WPF.
Creating LINQ Data Provider for WP7 (Part 1) -- Alex Yakhnin has a first cut at a LINQ Data Provider for WP7... I was expecting this to hit pretty soon, because we're all going to want it... check out the code and d/l the project.
Synchronize Data between a Silverlight ListBox and a User Control -- Paul Sheriff demonstrates databinding in XAML between local data in a ListBox and a UserControl.
The beginnings of Silverlight development with Expression Blend -- Douglas has a good post up on beginning your Silverlight development with Expression Blend. He covers a lot of ground in this post.
Converting Silverlight 3 to Silverlight 4 -- Jeremy Likness has a video up demonstrating converting Silverlight 3 to Silverlight 4, with download links, and also using commanding on buttons.
Debugging WCF RIA Services with WCF traces -- Tomasz Janczuk has a post up discussing the use of WCF RIA Services traces to help diagnose and debug problems in a deployed service.
Bing Maps + oData + Windows Phone 7 - Nerd Dinner Client For Windows Phone 7 -- Check out what Anoop Madhusudanan has provided... Nerd Dinner for WP7, including OData and Bing Maps... just very cool!
A few cool new features added in Expression Blend 4 RC -- Adam Kinney announced the availability of the new Expression Blend and highlighted some of the new features... like MakeLayoutPath... FTW!
Of Crashing and Sometimes Burning -- Ashish Shetty has a discourse posted about where the causes of errors might come from, what to expect from the platform, where to find crash dumps, and links to more reading.
Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream | Join me @ SilverlightCream | Phoenix Silverlight User Group
Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

    Read the article

  • Microsoft Townhall, An Example for Azure and MVC

    - by Shaun
Microsoft just released an example named Microsoft Townhall, which was built and deployed on Azure. It uses ASP.NET MVC as its website framework, and SQL Azure plus LINQ to SQL as its database and ORM framework. You can download the source code at the MSDN Code Gallery. Besides the Azure aspects, it might be even more useful for us to learn how they utilized ASP.NET MVC. From just a very quick review, I found that it utilizes the Enterprise Library Unity as the main IoC container for controllers, services, and repositories, and customizes a lot of ModelBinders, Filters, etc.   Hope this helps, Shaun   All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Screen shots and documentation on the cheap

    - by Kyle Burns
Occasionally I am surprised to open up my toolbox and find a great tool that I've had for years and never noticed.  The other day I had just such an experience with Windows Server 2008.  A co-worker of mine was squinting to read screenshots that he had taken using the "Print Screen, paste" method in WordPad, and he asked me if there was a better tool available at a reasonable cost.  My first instinct was to take a look at CamStudio for him, but I also knew that he had an immediate need to take some more screenshots, so I decided to check whether the Snipping Tool found in Windows 7 is also available in Windows Server 2008.  I clicked the Start button and typed "snip" into the search bar, and while the Snipping Tool did not come up, a Control Panel item labeled "Record steps to reproduce a problem" did. The application behind the Control Panel entry was the Problem Steps Recorder (PSR.exe); I have confirmed that it is available in Windows 7 and Windows Server 2008 R2 but have not checked other platforms.  It presents a minimal and intuitive interface consisting of a "Start Record", a "Stop Record", and an "Add Comment" button.  The "Start Record" button shockingly starts recording and, sure enough, the "Stop Record" button stops recording.  The "Add Comment" button prompts you for a comment and asks you to highlight the area of the screen to which your comment relates.  Once you're done recording, the tool outputs an MHT file packaged in a ZIP archive.  This file contains a series of screenshots depicting the user's interactions, with timestamps and descriptive text (such as "User left click on "Test" in "My Page – Windows Internet Explorer""), as well as the comments made along the way and some diagnostics about the applications captured. The Problem Steps Recorder looks like a simple solution to the most common of my documentation needs: the kind that can turn "I can't understand how to make it do what you're reporting" into "Oh, I see what you're talking about and will fix it right away".  If you're like me and haven't yet discovered this tool, give it a whirl and see for yourself.

    Read the article

  • Devoxx 2011 Trip Report + Pictures

    - by arungupta
3,350 attendees from 40 countries lived in "paradise" for 5 days last week. This paradise had 170+ rock star speakers delivering 200+ hours of technical content in about 150 sessions. And it truly was a paradise, with a clear differentiation from other Java conferences. There were several Oracle speakers at the paradise covering the entire gamut of the Java platform. I delivered a Java EE 6 hands-on lab (new content), showcased early Java EE 7 and GlassFish 4.0 work in the keynote, and participated in a panel on Contexts and Dependency Injection. The demo in the keynote showed how to deploy a Java EE application in a managed environment. It featured a Conference Planner application that conference organizers can use to display session, track, and speaker information. This same application can be deployed to display data from either JavaOne 2011 or Devoxx 2011, depending on the SQL file chosen for database initialization (see the sketch below): choose javaone-sf-2011.sql and the application shows JavaOne data; choose devoxx-2011.sql and it shows Devoxx data. And of course, clicking on Tracks, Speakers, or Sessions shows you information from the respective conference. The complete source code for the application and detailed instructions are available at glassfish.org/javaone2011. In short:
1. Download the sample app and unzip it.
2. Download GlassFish build b05.
3. Download the platform-specific Load Balancer template.
4. Run "bin/install.sh" to configure GlassFish.
5. Pick javaone-sf-2011.sql or devoxx-2011.sql for database initialization.
You can also watch the application in action in the video embedded in the original post. A piece of breaking news shared at the conference: Devoxx France is coming on April 18-20, and 75% of the talks will be in French. Stay tuned for more details; I'm sure Antonio and the gang will put up a great show out there! A tip for first-timers at Devoxx: a bus leaves from Brussels airport to the Antwerp city center between 4am and 11pm at the top of every hour; it takes about 45 minutes and costs 10 euros (cash only). Take tram #6 (going towards Luchtbal) from Astrid station (next to the city center) and get off at the last station for Metropolis; it takes about 15 minutes. Purchase a day pass at the station kiosks (much cheaper), or you can buy one on the bus as well (about double the price). Either way, cash only. A few pictures captured from the event and the complete album are linked in the original post. Thank you, Stephan, for giving me an opportunity to speak at my first Devoxx. I hope to be back next year, just in time for Java EE 7 going final!
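The sample's actual seed scripts ship with the download, so the snippet below is only a hypothetical sketch of the idea: the same schema, seeded with a different conference, drives what the application displays. None of these table or column names are taken from the real javaone-sf-2011.sql or devoxx-2011.sql files.

-- Hypothetical shape of a conference seed script; the real files ship with the sample.
INSERT INTO CONFERENCE   (ID, NAME)                 VALUES (1, 'Devoxx 2011');
INSERT INTO TRACK        (ID, CONFERENCE_ID, TITLE) VALUES (10, 1, 'Java EE');
INSERT INTO SPEAKER      (ID, NAME)                 VALUES (20, 'Arun Gupta');
INSERT INTO CONF_SESSION (ID, TRACK_ID, SPEAKER_ID, TITLE)
  VALUES (100, 10, 20, 'Java EE 7 and GlassFish 4.0');
COMMIT;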

    Read the article

  • Principles of Service-Oriented Architecture by Douwe P. van den Bos

    - by JuergenKress
Today I hosted a session on the Principles of Service-Oriented Architecture for my colleagues at Capgemini. It was a very interesting session, because everyone had a very clear view of the Oracle SOA Suite and a technical background. What we wanted to do was create a common view of what a Service-Oriented Architecture is, what benefits can be achieved, and what is needed to create one. During this very interactive session we moved from a clearly technology-centric view of the matter (the Oracle SOA Suite) to an architectural view slicing from business to technology. And this is where SOA really kicks in, because it is a philosophy. Here is the presentation on SlideShare: Principles of Service-Oriented Architecture. Read also The Maturity of a Service-Oriented Architecture & SOA Maturity Models. Twitter & LinkedIn: SOA & BPM Partner Community. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Governance, SOA Community, Oracle SOA, Oracle BPM, BPM Community, OPN, Jürgen Kress, Douwe P. van den Bos

    Read the article
