Search Results

Search found 15456 results on 619 pages for 'global temporary tables'.


  • Oracle Products Reflect Key Trends Shaping Enterprise 2.0

    - by kellsey.ruppel(at)oracle.com
    Following up on his predictions for 2011, we asked Enterprise 2.0 veteran Andy MacMillan to map out the ways Oracle solutions are at the forefront of industry trends--and how Oracle customers can benefit in the coming year.

    1. Increase organizational awareness | Oracle WebCenter Suite
    Oracle WebCenter Suite provides a unique set of capabilities to drive organizational awareness. In particular, its expansive activity graph connects users directly to key enterprise applications, activities, and interests. In this way, applicable and critical business information is automatically and immediately visible--in the context of key tasks--via real-time dashboards and comprehensive reporting. Oracle WebCenter Suite also integrates key E2.0 services, such as blogs, wikis, and RSS feeds, into critical business processes, including back-office systems of record such as ERP and CRM systems.

    2. Drive online customer engagement | Oracle Real-Time Decisions
    With more and more business being conducted on the Web, driving increased online customer engagement becomes critical to success. This effort is usually spearheaded by an increasingly important executive role, the Head of Online, who usually reports directly to the CMO. To help manage the Web experience, Oracle solutions are driving a new kind of intelligent social commerce by combining Oracle Universal Content Management, Oracle WebCenter Services, and Oracle Real-Time Decisions with leading e-commerce and product recommendation solutions. Oracle Real-Time Decisions provides multichannel recommendations for content, products, and services--including seamless integration across Web, mobile, and social channels. The result: happier customers, increased customer acquisition and retention, and improvement in critical success metrics such as shopping cart abandonment.

    3. Easily build composite applications | Oracle Application Development Framework
    Thanks to the shared user experience strategy across Oracle Fusion Middleware, Oracle Fusion Applications, and many other Oracle Applications, customers can easily create real, customer-specific composite applications using Oracle WebCenter Suite and Oracle Application Development Framework. Oracle Application Development Framework provides modular user interface components that can be assembled into rich, social composite applications. In addition, a broad set of components spanning BPM, SOA, ECM, and beyond can be quickly and easily incorporated into composite applications.

    4. Integrate records management into a global content platform | Oracle Enterprise Content Management 11g
    Oracle Enterprise Content Management 11g provides leading records management capabilities as part of a unified ECM platform for managing records, documents, Web content, digital assets, enterprise imaging, and application imaging. This unique strategy provides comprehensive records management in a consistent, cost-effective way, and enables organizations to consolidate ECM repositories and connect ECM to critical business applications.

    5. Achieve ECM at extreme scale | Oracle WebLogic Server and Oracle Exadata
    To support the high-performance demands of a unified and rationalized content platform, Oracle has pioneered highly scalable, high-performing ECM infrastructures. Two innovations in particular helped make this happen. First, the core ECM platform itself moved to an Enterprise Java architecture, so organizations can now use Oracle WebLogic Server for enhanced scalability and manageability. Second, Oracle Enterprise Content Management 11g can leverage Oracle Exadata for extreme performance and scale. Likewise, Oracle Exalogic--Oracle's foundation for cloud computing--enables extreme performance for processor-intensive capabilities such as content conversion or dynamic Web page delivery. Learn more about Oracle's Enterprise 2.0 solutions.

    Read the article

  • The enterprise vendor con - connecting SSDs using SATA 2 (3Gbit/s), thus limiting their performance

    - by tonyrogerson
    When comparing SSD against hard drive performance, it really makes me cross when folk think comparing an array of SSDs running at 3Gbit/s against hard drives running at 6Gbit/s is somehow valid. In a paper from DELL (http://www.dell.com/downloads/global/products/pvaul/en/PowerEdge-PowerVaultH800-CacheCade-final.pdf) on increasing database performance using the DELL PERC H800 with solid state drives, they compare four SSDs connected at 3Gbit/s against ten 10Krpm drives connected at 6Gbit/s [Tony slaps forehead while shouting DOH!].

    It is true that in the case of hard drives it probably doesn't make much difference whether the link is 3Gbit or 6Gbit, because SAS and SATA are both end-to-end protocols rather than a shared bus architecture like SCSI, so the hard drive doesn't share bandwidth and probably can't get near the 600MBytes/second throughput that 6Gbit gives unless you are doing contiguous reads. In my own tests on a single 15Krpm SAS disk using IOMeter (8 worker threads, queue depth of 16, a stripe size of 64KiB, an 8KiB transfer size on a drive formatted with an allocation size of 8KiB, 100% sequential read) I only get 347MiBytes per second sustained throughput at an average latency of 2.87ms per IO, equating to 44.5K IOps. OK, if that link were 3Gbit it would be less - around 280MiBytes per second. Oh, but wait a minute [...fingers tap desk]: you'll struggle to find an SSD in the commodity space that doesn't have the SATA 3 (6Gbit) interface. SSDs are fast - not only low latency and high IOps, they also offer a very large sustained transfer rate. Consider the OCZ Agility 3: it so happens that in my masters dissertation I ran the same test, but on a different box, and got 374MiBytes per second at an average latency of 2.67ms per IO, equating to 47.9K IOps. The cost of a 240GB Agility 3 is £174.24 (http://www.scan.co.uk/products/240gb-ocz-agility-3-ssd-25-sata-6gb-s-sandforce-2281-read-525mb-s-write-500mb-s-85k-iops), but that same drive in a box connected with SATA 2 (3Gbit) would only yield around 280MiBytes per second, thus losing almost 100MiBytes per second of throughput and a ton of IOps too.

    So why the hell are "enterprise" vendors still only connecting SSDs at 3Gbit? Well, my conspiracy theory states that they have no interest in you moving to SSD because they'll lose so much money. The argument for sticking with SATA 2 doesn't wash: SATA 3 has been out for some time and all the commodity kit you buy uses it. Consider the cost, not in terms of price per GB but price per IOps: SSDs absolutely thrash hard drives on that measure. It used to be true that hard drives thrashed SSDs on price per GB, but is that still the case? I'm not so sure - a 300GByte 2.5" 15Krpm SAS drive costs £329.76 ex VAT (http://www.scan.co.uk/products/300gb-seagate-st9300653ss-savvio-15k3-25-hdd-sas-6gb-s-15000rpm-64mb-cache-27ms), which equates to £1.09 per GB, compared to a 480GB OCZ Agility 3 costing £422.10 ex VAT (http://www.scan.co.uk/products/480gb-ocz-agility-3-ssd-25-sata-6gb-s-sandforce-2281-read-525mb-s-write-410mb-s-30k-iops), which equates to £0.88 per GB. OK, I compared an "enterprise" hard drive with a "commodity" SSD, so things get a little more complicated here: most "enterprise" SSDs are SLC and most commodity ones are MLC; SLC gives more performance and wear. I'll talk about that another day. For now though, don't get sucked in by vendor marketing: SATA 2 (3Gbit) just doesn't cut it. SSDs need 6Gbit to breathe, and even that they are pushing. Alas, SSDs are connected using SATA, and all the controllers I've seen thus far from HP and DELL only do SATA 2 - deliberate? Well, I'll let you decide on that one.
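    [Editor's note: a quick back-of-envelope check of the figures above, written as a SQL one-liner for convenience. The 80% figure assumes SATA's 8b/10b encoding leaves 80% of the raw line rate usable - that assumption is mine, not from the post.]

    -- Interface ceilings under the 8b/10b assumption, plus the IOps arithmetic:
    SELECT 3e9 * 0.8 / 8 / 1048576 AS sata2_mib_per_sec,   -- ~286 MiB/s, matching "around 280"
           6e9 * 0.8 / 8 / 1048576 AS sata3_mib_per_sec,   -- ~572 MiB/s
           347.0 / (8.0 / 1024.0)  AS iops_at_347mib_8kib; -- ~44.4K IOps at an 8KiB transfer size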

    Read the article

  • SQL SERVER – CSVExpress and Quick Data Load

    - by pinaldave
    One of the newest ETL tools is CSVexpress.com.  This is a program that can quickly load any CSV file into an ODBC-compliant database using data integration.  For those of you familiar with databases and how they operate, the question that comes to mind might be what use this program will have in your life. I have written an earlier article on this subject: SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress. You might know that RDBMSs have built-in support for loading CSV files into tables – but it is not quite as easy as one click of a button.  First of all, most databases have a command line interface and you need the file and a configuration script in order to load up (a taste of such a script appears at the end of this post).  You also need to know enough to write the script – which for novices can be extremely daunting.  On top of all this, if you work with more than one type of RDBMS, you need to know the ins and outs of uploading and writing scripts for more than one program. So you might begin to see how useful CSVexpress.com might be!  There are many other tools that enable uploading files to a database.  They can be very fancy – some can generate configuration files automatically, others load the data directly.  Again, novices will be able to tell you why these aren’t the most useful programs in the world.  You see, these programs were created with SQL in mind, not for uploading data.  If you don’t have large amounts of data to upload, getting the configurations right can be a long process and you will have to check the generated code yourself.  Not exactly “easy to use” for novices. That makes CSVexpress.com one of the best new tools available for everyone – but especially people who don’t want to learn a lot of new material all at once.  CSVexpress has an easy-to-navigate graphical user interface and no scripting or coding is required.  There are built-in constraints and data validations, and you can configure transforms and reject records right there on the screen.  But the best thing of all – it’s free! That’s right, you can download CSVexpress for free from www.csvexpress.com and start easily uploading and configuring files almost immediately.  If you’re currently happy with your method of data configuration, keep up the good work.  For the rest of us, there’s CSVexpress.com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology
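    [Editor's note: for contrast, here is the kind of hand-written load script the post refers to - a minimal T-SQL sketch with hypothetical table and file names; the syntax differs on other RDBMS vendors.]

    -- Load a comma-separated file into an existing table, skipping the header row.
    BULK INSERT dbo.SalesImport
    FROM 'C:\data\sales.csv'
    WITH (
        FIELDTERMINATOR = ',',   -- column delimiter
        ROWTERMINATOR   = '\n',  -- row delimiter
        FIRSTROW        = 2      -- skip the header row
    );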

    Read the article

  • My Take on Hadoop World 2011

    - by Jean-Pierre Dijcks
    I’m sure some of you have read pieces about Hadoop World, and I did see some headlines which were somewhat, shall we say, interesting? I thought the keynote by Larry Feinsmith of JP Morgan Chase & Co was one of the highlights of the conference for me, for a very simple reason: he addressed some real use cases outside of internet and ad platforms. The following are my notes; since the keynote was recorded, I presume you can go and look at Hadoopworld.com at some point. The use cases that were mentioned:

    - ETL – how can I do complex data transformation at scale
    - Doing Basel III liquidity analysis
    - Private banking – transaction filtering to feed [relational] data marts
    - Common Data Platform – a place to keep data that is (or will be) valuable some day, to someone, somewhere
    - 360-degree view of customers – become pro-active and look at events across lines of business. For example, make sure the mortgage folks know about direct deposits being stopped into an account, and ensure the bank is pro-active in servicing the customer
    - Treasury and Security – Global Payment Hub [I think this is really consolidation of data to cross-reference activity across businesses and geographies]
    - Data Mining – bypass data engineering [I interpret this as running analysis over the full large data set rather than over samples]
    - Fraud prevention – work on event triggers, say a number of failed log-ins to the website. When they occur, grab web logs, firewall logs and rules and start to figure out who is trying to log in. Is this me, who forgot his password, or is it someone in some other country trying to guess passwords? (A simple version of such a trigger is sketched after these notes.)
    - Trade quality analysis – do a batch analysis of all trades done and run them through an analysis or comparison pipeline

    One of the key requests – if you can say it like that – was for vendors and entrepreneurs to make sure that new tools work with existing tools. JPMC has a large footprint of BI tools and Big Data reporting, and new tools should work with those tools rather than be separate. Security and entitlement – how to protect data within a large cluster from unwanted snooping – was another topic that came up. I thought his elephant-ears graph was interesting (I couldn’t actually read the points on it, but the concept certainly made some sense), and it was interesting – when asked to show hands – how the audience did not (!) think that RDBMS and Hadoop technology would overlap completely within a few years.

    Another interesting session was the one from Disney, discussing how Disney is building a DaaS (Data as a Service) platform and how Hadoop processing capabilities are mixed with database technologies. I thought this was one of the best sessions I have seen in a long time. It discussed a real use case: where problems existed, how they were solved, and how Disney planned some of it. The planning focused on three things/phases:

    - Determine the strategy – design a platform and evangelize this within the organization
    - Focus on the people – hire key people, grow and train the staff (and do not overload what you have with new things on top of their day-to-day job), leverage a partner with experience
    - Work on execution of the strategy – implement the Hadoop platform next to the other technologies and work toward the DaaS platform

    This kind of fitted with some of the LinkedIn comments, best summarized in “Think Platform – Think Hadoop”. In other words [my interpretation], step back and engineer a platform (like DaaS in the Disney example), then layer the rest of the solutions on top of this platform.

    One general observation: I got the impression that we have knowledge gaps left and right. On the one hand, people are looking for more information and details on the Hadoop tools and languages. On the other, I got the impression that the capabilities of today’s relational databases are underestimated, mostly in terms of data volumes and parallel processing capabilities, or things like commodity hardware scale-out models. All in all I liked this conference. It was great to chat with a wide range of people on Oracle big data, on big data, on use cases and all sorts of other stuff. Just hope they get a set of bigger rooms next time… and yes, I hope I’m going to be back next year!
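    [Editor's note: as promised in the fraud-prevention item above, a minimal sketch of the failed-login trigger idea, written as plain SQL over a hypothetical web_log table. Table and column names are illustrative only; a real implementation would run over collected web and firewall logs.]

    -- Flag source addresses with 5+ failed log-ins in the last 15 minutes.
    SELECT client_ip, COUNT(*) AS failed_logins
    FROM   web_log
    WHERE  event_type = 'LOGIN_FAILED'
      AND  event_time > CURRENT_TIMESTAMP - INTERVAL '15' MINUTE
    GROUP BY client_ip
    HAVING COUNT(*) >= 5;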

    Read the article

  • Extending Oracle CEP with Predictive Analytics

    - by vikram.shukla(at)oracle.com
    Introduction: OCEP is often used as a business rules engine to execute a set of business logic rules via CQL statements and take decisions based on the outcome of those rules. There are times when configuring rules manually is sufficient, because an application needs to deal with only a small and well-defined set of static rules. However, in many situations customers don't want to pre-define such rules, for two reasons. First, they are dealing with events with lots of columns, and manually crafting such rules for each column or set of columns and combinations thereof is almost impossible. Second, they are content with probabilistic outcomes and do not care about 100% precision. The former is the case when a user is dealing with data of high dimensionality, the latter when an application can live with "false" positives, as they can be discarded after further inspection, say by a Human Task component in a Business Process Management product. The primary goal of this blog post is to show how this can be achieved by combining OCEP with Oracle Data Mining and leveraging the latter's rich set of algorithms and functionality to do predictive analytics in real time on streaming events. The secondary goal of this post is to show how OCEP can be extended to invoke arbitrary external computation in an RDBMS from within CEP. This extensibility facility is known as the JDBC cartridge. The rest of the post describes the steps required to achieve this. We use the dataset available at http://blogs.oracle.com/datamining/2010/01/fraud_and_anomaly_detection_made_simple.html to showcase the capabilities, using it to show how transaction anomalies or fraud can be detected.

    Building the model: Follow the self-explanatory steps described at the above URL to build the model. It is very simple - it uses built-in Oracle Data Mining PL/SQL packages to cleanse, normalize and build the model out of the dataset. You can also use the graphical Oracle Data Miner to build the models. To summarize, it involves specifying which algorithms to use - in this case Support Vector Machines, as we're trying to find anomalies in a highly dimensional dataset - and then building the model on the data in the table for the algorithms specified. For this example, the table was populated in the scott/tiger schema with appropriate privileges.
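    [Editor's note: for reference, model creation with the DBMS_DATA_MINING package looks roughly like the sketch below. This is an illustrative reconstruction, not the exact script from the linked post: the settings-table and source-table names and the case-id column are hypothetical; only the model name CLAIMSMODEL is taken from the CQL further down. A NULL target column turns the SVM classifier into a one-class anomaly detector.]

    -- Settings table: pick the SVM algorithm (hypothetical names).
    CREATE TABLE claims_svm_settings (
        setting_name  VARCHAR2(30),
        setting_value VARCHAR2(30)
    );

    BEGIN
      INSERT INTO claims_svm_settings VALUES
        (dbms_data_mining.algo_name, dbms_data_mining.algo_support_vector_machines);

      DBMS_DATA_MINING.CREATE_MODEL(
        model_name          => 'CLAIMSMODEL',
        mining_function     => dbms_data_mining.classification,
        data_table_name     => 'CLAIMS',          -- training table in the scott schema (assumed name)
        case_id_column_name => 'POLICYNUMBER',    -- assumed case identifier
        target_column_name  => NULL,              -- NULL target => one-class anomaly model
        settings_table_name => 'claims_svm_settings');
    END;
    /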
    Configuring the Data Source: This is the first step in building a CEP application using such an integration. Our datasource looks as follows in the server config file. It is advisable that you use the Visualizer to add it to the running server dynamically, rather than manually edit the file.

    <data-source>
        <name>DataMining</name>
        <data-source-params>
            <jndi-names>
                <element>DataMining</element>
            </jndi-names>
            <global-transactions-protocol>OnePhaseCommit</global-transactions-protocol>
        </data-source-params>
        <connection-pool-params>
            <credential-mapping-enabled></credential-mapping-enabled>
            <test-table-name>SQL SELECT 1 from DUAL</test-table-name>
            <initial-capacity>1</initial-capacity>
            <max-capacity>15</max-capacity>
            <capacity-increment>1</capacity-increment>
        </connection-pool-params>
        <driver-params>
            <use-xa-data-source-interface>true</use-xa-data-source-interface>
            <driver-name>oracle.jdbc.OracleDriver</driver-name>
            <url>jdbc:oracle:thin:@localhost:1522:orcl</url>
            <properties>
                <element>
                    <value>scott</value>
                    <name>user</name>
                </element>
                <element>
                    <value>{Salted-3DES}AzFE5dDbO2g=</value>
                    <name>password</name>
                </element>
                <element>
                    <name>com.bea.core.datasource.serviceName</name>
                    <value>oracle11.2g</value>
                </element>
                <element>
                    <name>com.bea.core.datasource.serviceVersion</name>
                    <value>11.2.0</value>
                </element>
                <element>
                    <name>com.bea.core.datasource.serviceObjectClass</name>
                    <value>java.sql.Driver</value>
                </element>
            </properties>
        </driver-params>
    </data-source>
<?xml version="1.0" encoding="UTF-8"?> <jdbcctxconfig:config     xmlns:jdbcctxconfig="http://www.bea.com/ns/wlevs/config/application"     xmlns:jc="http://www.oracle.com/ns/ocep/config/jdbc">        <jc:jdbc-ctx>         <name>Oracle11gR2</name>         <data-source>DataMining</data-source>               <function name="prediction2">                                 <param name="CQLMONTH" type="char"/>                      <param name="WEEKOFMONTH" type="int"/>                      <param name="DAYOFWEEK" type="char" />                      <param name="MAKE" type="char" />                      <param name="ACCIDENTAREA"   type="char" />                      <param name="DAYOFWEEKCLAIMED"  type="char" />                      <param name="MONTHCLAIMED" type="char" />                      <param name="WEEKOFMONTHCLAIMED" type="int" />                      <param name="SEX" type="char" />                      <param name="MARITALSTATUS"   type="char" />                      <param name="AGE" type="int" />                      <param name="FAULT" type="char" />                      <param name="POLICYTYPE"   type="char" />                      <param name="VEHICLECATEGORY"  type="char" />                      <param name="VEHICLEPRICE" type="char" />                      <param name="FRAUDFOUND" type="int" />                      <param name="POLICYNUMBER" type="int" />                      <param name="REPNUMBER" type="int" />                      <param name="DEDUCTIBLE"   type="int" />                      <param name="DRIVERRATING"  type="int" />                      <param name="DAYSPOLICYACCIDENT"   type="char" />                      <param name="DAYSPOLICYCLAIM" type="char" />                      <param name="PASTNUMOFCLAIMS" type="char" />                      <param name="AGEOFVEHICLES" type="char" />                      <param name="AGEOFPOLICYHOLDER" type="char" />                      <param name="POLICEREPORTFILED" type="char" />                      <param name="WITNESSPRESNT" type="char" />                      <param name="AGENTTYPE" type="char" />                      <param name="NUMOFSUPP" type="char" />                      <param name="ADDRCHGCLAIM"   type="char" />                      <param name="NUMOFCARS" type="char" />                      <param name="CQLYEAR" type="int" />                      <param name="BASEPOLICY" type="char" />                                     <return-component-type>char</return-component-type>                                                      <sql><![CDATA[             SELECT to_char(PREDICTION_PROBABILITY(CLAIMSMODEL, '0' USING *))               AS probability             FROM (SELECT  :CQLMONTH AS MONTH,                                            :WEEKOFMONTH AS WEEKOFMONTH,                          :DAYOFWEEK AS DAYOFWEEK,                           :MAKE AS MAKE,                           :ACCIDENTAREA AS ACCIDENTAREA,                           :DAYOFWEEKCLAIMED AS DAYOFWEEKCLAIMED,                           :MONTHCLAIMED AS MONTHCLAIMED,                           :WEEKOFMONTHCLAIMED,                             :SEX AS SEX,                           :MARITALSTATUS AS MARITALSTATUS,                            :AGE AS AGE,                           :FAULT AS FAULT,                           :POLICYTYPE AS POLICYTYPE,                            :VEHICLECATEGORY AS VEHICLECATEGORY,                           :VEHICLEPRICE AS VEHICLEPRICE,                           :FRAUDFOUND AS FRAUDFOUND,                           :POLICYNUMBER AS 
POLICYNUMBER,                           :REPNUMBER AS REPNUMBER,                           :DEDUCTIBLE AS DEDUCTIBLE,                            :DRIVERRATING AS DRIVERRATING,                           :DAYSPOLICYACCIDENT AS DAYSPOLICYACCIDENT,                            :DAYSPOLICYCLAIM AS DAYSPOLICYCLAIM,                           :PASTNUMOFCLAIMS AS PASTNUMOFCLAIMS,                           :AGEOFVEHICLES AS AGEOFVEHICLES,                           :AGEOFPOLICYHOLDER AS AGEOFPOLICYHOLDER,                           :POLICEREPORTFILED AS POLICEREPORTFILED,                           :WITNESSPRESNT AS WITNESSPRESENT,                           :AGENTTYPE AS AGENTTYPE,                           :NUMOFSUPP AS NUMOFSUPP,                           :ADDRCHGCLAIM AS ADDRCHGCLAIM,                            :NUMOFCARS AS NUMOFCARS,                           :CQLYEAR AS YEAR,                           :BASEPOLICY AS BASEPOLICY                 FROM dual)                 ]]>         </sql>        </function>     </jc:jdbc-ctx> </jdbcctxconfig:config> 2.      Invoking the function for each event. Once this function is defined, you can invoke it from CQL as follows: <?xml version="1.0" encoding="UTF-8"?> <wlevs:config xmlns:wlevs="http://www.bea.com/ns/wlevs/config/application">   <processor>     <name>DataMiningProc</name>     <rules>        <query id="q1"><![CDATA[                     ISTREAM(SELECT S.CQLMONTH,                                   S.WEEKOFMONTH,                                   S.DAYOFWEEK, S.MAKE,                                   :                                         S.BASEPOLICY,                                    C.F AS probability                                                 FROM                                 StreamDataChannel [NOW] AS S,                                 TABLE(prediction2@Oracle11gR2(S.CQLMONTH,                                      S.WEEKOFMONTH,                                      S.DAYOFWEEK,                                       S.MAKE, ...,                                      S.BASEPOLICY) AS F of char) AS C)                       ]]></query>                 </rules>               </processor>           </wlevs:config>   Finally, the last stage in the EPN prints out the probability of the event being an anomaly. One can also define a threshold in CQL to filter out events that are normal, i.e., below a certain mark as defined by the analyst or designer. Sample Runs: Now let's see how this behaves when events are streamed through CEP. We use only two events for brevity, one normal and other one not. This is one of the "normal" looking events and the probability of it being anomalous is less than 60%. 
    Event is: eventType=DataMiningOutEvent object=q1 time=2904821976256 S.CQLMONTH=Dec, S.WEEKOFMONTH=5, S.DAYOFWEEK=Wednesday, S.MAKE=Honda, S.ACCIDENTAREA=Urban, S.DAYOFWEEKCLAIMED=Tuesday, S.MONTHCLAIMED=Jan, S.WEEKOFMONTHCLAIMED=1, S.SEX=Female, S.MARITALSTATUS=Single, S.AGE=21, S.FAULT=Policy Holder, S.POLICYTYPE=Sport - Liability, S.VEHICLECATEGORY=Sport, S.VEHICLEPRICE=more than 69000, S.FRAUDFOUND=0, S.POLICYNUMBER=1, S.REPNUMBER=12, S.DEDUCTIBLE=300, S.DRIVERRATING=1, S.DAYSPOLICYACCIDENT=more than 30, S.DAYSPOLICYCLAIM=more than 30, S.PASTNUMOFCLAIMS=none, S.AGEOFVEHICLES=3 years, S.AGEOFPOLICYHOLDER=26 to 30, S.POLICEREPORTFILED=No, S.WITNESSPRESENT=No, S.AGENTTYPE=External, S.NUMOFSUPP=none, S.ADDRCHGCLAIM=1 year, S.NUMOFCARS=3 to 4, S.CQLYEAR=1994, S.BASEPOLICY=Liability, probability=.58931702982118561 isTotalOrderGuarantee=true
    Anamoly probability: .58931702982118561

    However, the following event is scored as an anomaly with a very high probability of 89%, so there is likely to be something wrong with it. A close look reveals that the value of the "deductible" field (10000) is not "normal". What exactly constitutes normal here? If you run a query on the database to find ALL distinct values for the "deductible" field, it returns the following set: {300, 400, 500, 700}.

    Event is: eventType=DataMiningOutEvent object=q1 time=2598483773496 S.CQLMONTH=Dec, S.WEEKOFMONTH=5, S.DAYOFWEEK=Wednesday, S.MAKE=Honda, S.ACCIDENTAREA=Urban, S.DAYOFWEEKCLAIMED=Tuesday, S.MONTHCLAIMED=Jan, S.WEEKOFMONTHCLAIMED=1, S.SEX=Female, S.MARITALSTATUS=Single, S.AGE=21, S.FAULT=Policy Holder, S.POLICYTYPE=Sport - Liability, S.VEHICLECATEGORY=Sport, S.VEHICLEPRICE=more than 69000, S.FRAUDFOUND=0, S.POLICYNUMBER=1, S.REPNUMBER=12, S.DEDUCTIBLE=10000, S.DRIVERRATING=1, S.DAYSPOLICYACCIDENT=more than 30, S.DAYSPOLICYCLAIM=more than 30, S.PASTNUMOFCLAIMS=none, S.AGEOFVEHICLES=3 years, S.AGEOFPOLICYHOLDER=26 to 30, S.POLICEREPORTFILED=No, S.WITNESSPRESENT=No, S.AGENTTYPE=External, S.NUMOFSUPP=none, S.ADDRCHGCLAIM=1 year, S.NUMOFCARS=3 to 4, S.CQLYEAR=1994, S.BASEPOLICY=Liability, probability=.89171554529576691 isTotalOrderGuarantee=true
    Anamoly probability: .89171554529576691

    Conclusion: By way of this example, we showed two things: real-time scoring of events as they flow through CEP, leveraging Oracle Data Mining; and how CEP applications can invoke complex arbitrary external computations (function shipping) in an RDBMS.

    Read the article

  • Merge Join component sorted outputs [SSIS]

    - by jamiet
    One question that I have been asked a few times of late in regard to performance tuning SSIS data flows is this: why isn’t the Merge Join output sorted (i.e. IsSorted=True)? This is a fair question. After all, both of the Merge Join inputs are sorted, so why wouldn’t the output be sorted as well? Well, here’s a little secret: the Merge Join output IS sorted! There’s a caveat though – it is only under certain circumstances, and SSIS itself doesn’t do a good job of informing you of it. Let’s take a look at an example. Here we have a dataflow that consumes data from the [AdventureWorks2008].[Sales].[SalesOrderHeader] & [AdventureWorks2008].[Sales].[SalesOrderDetail] tables, then joins them using a Merge Join component. Let’s take a look inside the editor of the Merge Join: we are joining on the [SalesOrderId] field (which is what the two inputs just happen to be sorted upon), and we are also putting [SalesOrderHeader].[SalesOrderId] into the output. Believe it or not, the output from this Merge Join component is sorted (i.e. has IsSorted=True), but unfortunately the Merge Join component does not have an Advanced Editor, hence this is hidden away from us. There are a couple of ways to prove that this is the case; I could open up the package XML inside the .dtsx file and show you the metadata, but there is an easier way than that – I can attach a Sort component to the output. Notice that the Sort component is attempting to sort on the [SalesOrderId] column. This gives us the following warning: Validation warning. DFT Get raw data: {992B7C9A-35AD-47B9-A0B0-637F7DDF93EB}: The data is already sorted as specified so the transform can be removed. The warning proves that the output from the Merge Join is sorted! It must be noted that the Merge Join output will only have IsSorted=True if at least one of the join columns is included in the output. So there you go, the Merge Join component can indeed produce a sorted output, and that’s very useful in order to avoid unnecessary expensive Sort operations downstream. Hope this is useful to someone out there! @Jamiet  P.S. Thank you to Bob Bojanic on the SSIS product team who pointed this out to me!
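    [Editor's note: for context, the two inputs in a flow like this are typically sources whose queries sort on the join key, with IsSorted and SortKeyPosition then set on the source outputs in the Advanced Editor. A hedged T-SQL sketch of what those source queries might look like; the column lists are illustrative:]

    -- Header input, sorted on the join column
    SELECT SalesOrderID, OrderDate, CustomerID
    FROM Sales.SalesOrderHeader
    ORDER BY SalesOrderID;

    -- Detail input, sorted on the same join column
    SELECT SalesOrderID, ProductID, OrderQty
    FROM Sales.SalesOrderDetail
    ORDER BY SalesOrderID;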

    Read the article

  • Oracle Fusion Middleware (OFM) 11g (11.1.1.7) Starter Kit available & Customizable Demos

    - by JuergenKress
    The OFM PS6 starter kit is now available from Global Sales Engineering (GSE, formerly DSS). The OFM Starter Kit provides a basic foundation to design and develop middleware demos. It is based on a plug-and-play architecture, is designed to use optimal hardware resources, and is easily extendable to incorporate more Oracle Fusion Middleware components.

    New features:
    - Built on the "Build your own demos (POC)" concept
    - Comes with core OFM components: Oracle Unified Directory (OUD), SOA, WebCenter Content and WebCenter Spaces
    - Available over the Internet and tuned for optimal performance
    - A portable/downloadable version of the Starter Kit will be available soon; please check Demos Corner

    For any questions/feedback please contact Chandan Das or Anand Prasad. Call to action: review the Release Notes and visit the GSE Website to book the “OFM 11.1.1.7.0 Base Platform” customizable instance. Further information about this platform is available on this page. This announcement will appear in the archive as number 412.

    Customizable Demos: We are happy to announce the availability of the SOA 11.1.1.7.0 Platform. The SOA 11g (11.1.1.7) Platform is fully featured, built on a plug-and-play architecture, and designed to develop best-of-breed SOA demos.

    New features:
    - Built on the "Build your own demos" concept
    - Fusion Middleware products SOA, BAM, OSB, OEP, OER, OSR, WebCenter Content and WebCenter Spaces are installed, configured, and tuned for better performance
    - Demo instances are available over the Internet
    - A portable version of the platform will be available soon; please check Demos Corner

    For questions/feedback please contact Anvesh Baluguri or Anand Prasad. Call to action: review the Release Notes and visit the GSE Website to book the “SOA 11.1.1.7.0 Platform” customizable demo. Further information about this platform is available on this page. This announcement will appear in the archive as number 413. To get access to the demo environment please contact OPN!

    Support: If you need assistance or encounter any issues please submit a GSE Repository ticket or call the GSE Support Hotline, available 24 hours a day, Monday through Friday: US/CAN +1.650.506.8763, EMEA +44 118 9240808, APAC +65.6436.2150, LAD +1.650.506.8763, Japan +81-3-6834-6097.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: OFM,demos,sales,marketing,dss,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Tablet design guide, Endeca patterns now available

    - by JuergenKress
    UX Direct, an Oracle program that offers consultants, partners, and customers the same scientifically proven and reusable user experience best practices that Oracle uses to build Oracle Applications, recently added links to a new design guide for creating tablet-based solutions for enterprise applications, and to the recently published Endeca User Interface Design Pattern Library.

    The tablet design guide is available from the UX Direct Home page. Tap the button under “Latest patterns & tools” for “Oracle Applications UX Tablet Guide.” It provides basic help for designers, developers, and project managers trying to approach tablet design and testing from an enterprise point of view. To hear what developers are saying about it, follow the links from this post on the User Experience Assistance blog. The newly released Endeca User Interface Design Pattern Library is also available from the UX Direct Home page and from a post on the User Experience Assistance blog. It describes principled ways to solve common user interface (UI) design problems related to search, faceted navigation, and discovery.

    The link between Simplified UI and Oracle UX strategy, plus content you can share on the cloud, ADF, tailoring, and more:

    Simplified User Interface in Oracle Fusion Applications Fronts Oracle Cloud Offerings - This new article on Simplified UI has just been posted on Usable Apps. Learn about the three themes - simplicity, mobility, and extensibility - that Simplified UI embodies. These same principles are guiding the development of the next generation of the Oracle user experience.

    Oracle's Applications User Experience Strategy: One Cloud User Experience, with Optimized UIs Where and How You Want - This podcast from Misha Vaughan, Director, User Experience, is now available on the Oracle University Knowledge Center. It is available for partners and Oracle employees at this iLearning link.

    Oracle Partner Builds User Experience That Hits Right Note for New Employees - This new article on the Usable Apps website explores the experience of consultants at IntraSee as they implement a PeopleSoft onboarding process for Invesco, a global asset management company.

    The Feng Shui of Fusion - This article in Oracle Scene is from Grant Ronald, Director of Product Management, on the tools of Fusion: Oracle JDeveloper and Oracle ADF.

    Hands-On Workshop with Fusion Applications and ADF UX Desktop Design Patterns - This post on the Voice of User Experience, or VoX, blog from Misha Vaughan describes a new kind of workshop for partners and a handful of internal Oracle sales folks on extending Oracle Fusion Applications and building custom applications with Application Development Framework (ADF) while maintaining the Oracle user experience. To learn more about the content delivered during this three-day workshop, visit the Usable Apps blog. Recent posts from a new blog series look at several of the topics discussed during the workshop:
    - Applications User Experience Fundamentals
    - Visual Design for any Enterprise User Interface / Art School in a Box
    - Wireframing / Blueprinting
    - Usable Applications Concepts

    Tailoring videos - This blog post from Richard Bingham, Applications Architect, on the Fusion Applications Developer Relations blog provides links to several videos that show many customization and development tasks using the Oracle Fusion Applications platform.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: UX,Architecture,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Should I be looking for an alternative to Zen Cart as my business grows?

    - by MarkS
    I created a business website for a family business which is growing. It's my family, and I'm a software developer, but I don't want to reinvent the wheel or become a shopping cart programmer. For this business, I need the web store to "just work", but... it gets complicated...

    There are two parts of this business website. One of them is driven by Wordpress and I use the awesome Thesis theme. This is modern, flexible, and saves me a lot of time on custom coding and styling. I couldn't be more pleased with this arrangement. The other part of the site is a Zen Cart store. Its administration and its flexibility are frustrating and archaic Web 1.0. For the past few years, I keep hearing that the developers are working on a 2.0 version of Zen Cart, but they haven't communicated anything significant other than to say, "When it's ready, we'll let you know." To get what I'm looking for in a cart, I would need to install 6-10 additional mods and do a lot of custom coding. I'm now willing to pay for a top-notch e-commerce solution for a small business that we can grow into a larger business over time.

    Requirements:
    - Extremely flexible shipping that lets us set up rules per product/category, tables of rates, calculated rates, max package weights, etc. (flexibility like that available with the CEON Advance Shipping Module for Zen Cart)
    - Coupons and gift certificates
    - Manual order entry for phone orders
    - Multi-channel support (we also sell on Amazon and eBay and use Google Base, and we want to maintain one set of inventory and keep it current)
    - Decent SEO features
    - Reviews and star ratings on products
    - Easy social networking features for sharing, following, liking, etc.
    - Easy integration with AdWords and analytics tracking
    - Modern and very usable product and store administration (like I said, I'm spoiled by Wordpress and Thesis)

    At the end of the day, I don't care if it's a hosted solution or if I have to host it myself. I just want something that is going to stay up to date, be regularly maintained and improved, and, if I have to update it, something like the one-click update present in Wordpress is a must. Professional webmasters, if you had to run a store/website but had to spend your time focusing on your sales and marketing efforts rather than diffing PHP files and copying and tweaking them to change even the slightest details of your site, what would you choose?

    Read the article

  • Customer Spotlight - CSX: Charles Pack

    - by cwarticki
    A couple of weeks ago, I had the distinct privilege of facilitating a training session with CSX. CSX is a wonderful customer. They've been a dedicated Oracle customer for many years. They have quite an extensive Oracle footprint including Server Technologies, Fusion Middleware and E-Business Suite products. They also utilize Oracle's Solution Support Center offering from Advanced Customer Services for their database products. I'm always on the lookout for Oracle gems and I discovered one at CSX. Before my session began, I met with Charles Pack. In my view, he's an Oracle guru. Don't take my word for it; just read any of the books he's authored or co-authored, or the one soon to be released. Just looking at his bookshelf, I saw titles going back to Oracle 7 & 8, as well as a Solaris 2.x book. Remember those? Anyway, Charles is a technologist and a manager (and wears numerous other hats too). I had a wonderful time talking with Charles and getting to know him.

    What do you consider keys to your personal success? Inability to quit. When I decide that I will accomplish something, I will, regardless of the nature of the challenge. Never quitting means a perpetual drive for change and progress and setting examples for others to follow. The reason I write OCP books is that I can provide a path for people to improve their knowledge of the product, gain a certification, and reach their professional goals.

    What do you consider the most important part of your job? Negotiations. We all have competing goals, incentives and finite resources, but we should all have the same common goal – progress. So finding the way for all parties to progress is the most important thing we do.

    What is the most important part of your relationship with Oracle? Oracle provides solutions – not just products – that are critical to our business success. So continuous communication regarding education, services, product roadmaps and shared goals is the most important part of our ongoing relationship.

    Charles is an Oracle loyalist. His career has been based on using our products and he's passionate about the products he works with. You can tell just by talking with him. I appreciate Charles and other customers like him. He's an expert in his field and an Oracle evangelist. He is an asset to CSX and to their success, an advocate for Oracle, and an asset to our customers. You can also friend and follow Charles on Twitter @charlesapack. It was a pleasure meeting you, Charles! -Chris Warticki, Global Customer Management

    Read the article

  • The Value of SOA Specialization - Fujitsu

    - by Jürgen Kress
    Thanks for the nice ink!

    The Value of Specialization: In my last post I talked about Fujitsu's achievement in obtaining SOA and other specializations, but I have heard murmurings from other partners about just what the value is. I think Oracle has to do more to advertise the benefits to customers; we need to see customers asking for specialization for it to really work, but Oracle has made great promises about only recommending those partners who are specialized. For us there was another benefit: Oracle was sponsoring the 3rd Annual SOA Symposium in Berlin and invited us, as their first specialized partner, to take part. There is a great blog about the symposium on the SOA Community blog site. This is real commitment from Oracle, and we have other marketing opportunities being worked on with Jürgen. This does generate leads, so my message to other Oracle partners is: you need to do this, it is worthwhile.

    Fujitsu - First SOA Specialized Partner Globally: Just before Oracle OpenWorld I found out that Fujitsu had achieved the first SOA Specialization globally. I think most partners know what the requirements are for Specialization, and that in itself is challenging, but the bureaucracy around the actual submission is an exercise in tenacity. I won't go into that now; I have had my dig at Oracle this month, but enough to say the process could be improved. As a platinum partner we needed 5 specializations and we decided to go for SOA first. The reasoning behind this is that our Oracle practice is known for being applications-centric. We have always had an excellent technical capability, but no one ever talked about that; it was just part and parcel of an implementation. However, today we have just as many bids that are technology-led as there are applications-led, so it seemed a good plan to work on the areas we were not known for. We appointed a capability lead to be responsible for putting the team through the training and testing, and Rosemary (Kell) was excellent; she ensured that everyone was on track and that it wasn't just put into the 'to do' list. In Fujitsu everyone in the Oracle practice has an objective to achieve the competency tests in their area, so achieving the 2 pre-sales, 2 sales and 1 support was no problem at all. We actually had 22 with the support capability proficiency. The implementation specialist exams are much harder, more like OCP in the database area. We had help from the Oracle SOA Community; Jürgen Kress, who runs this in EMEA, is really motivational. At the time we started, SOA was a beta exam, which means you do not get the results immediately, but again we put forward more than we needed. Manjit Chopra, Sukhraj Sahota, Emely Patra, Ian Scorrer and Sunny Sidhu all took the exam and eventually got the results they wanted: they had passed. Congratulations! Here is Jürgen explaining why specialization is important. After the tests came the submissions, where you need to include deals and experience; this was my bit, persuading Oracle we really deserved the specialization. Finally we got the news we had been awarded the specialization, and a few days later that we were first globally. I am very proud. However there is no rest for the wicked, and we plodded on to make the 5 specializations needed for Platinum; now we are working on the new Diamond status, and I think SOA will be one of our 5 'super specializations'. This is a global Fujitsu initiative and I work closely with my colleague in Germany, Jessika Weiss. It was nice to be able to have a press release about this and a comment from Judson Althoff, head of Oracle Alliances.

    For more information on SOA Specialization and the SOA Partner Community please feel free to register at www.oracle.com/goto/emea/soa (OPN account required). Blog Twitter LinkedIn Mix Forum Wiki Website Technorati Tags: SOA,SOA Community,OPN,Oracle,Fujitsu,Debra Lilley,Jürgen Kress,Specialization,SOA Specialization

    Read the article

  • Oracle Cloud Hiring Event at Oracle in Redwood on November 9th

    - by user769227
    Wow, 24 hours to go until Cloud Hire 2012 at Oracle! Friday is going to be a great day for many looking to make a life- and career-changing move. In case you haven't heard, Oracle is hosting Cloud Hire 2012 this Friday, November 9, at the Oracle Conference Center on our World Wide Headquarters campus in Redwood Shores. This is a one-of-a-kind event, to be sure, and we are still registering online!

    We are aggressively expanding our Cloud Development and Product Management organizations to meet the ever-growing demand for Oracle Cloud. And, from this event alone, we are hoping to hire 25+ Developers, Inbound and Outbound Product Managers, Technical Leaders and QA Engineers across several Oracle Cloud groups, including:
    - Data and Insight Services: Big Data as a Service/Business Directory
    - Cloud Infrastructure
    - Application Marketplace
    - Cloud Portal
    - Product Management and Marketing: Outbound/Inbound
    - Testing/Quality Assurance
    - Cloud Social Platform: Analytics, Media, Big Data, Text Analytics, High Performance Search
    - Cloud Social Platform - Social Relationship Management: Mobile Development/Social Network Integrations

    Why attend this event? Just Google Larry Ellison's 2012 OpenWorld keynote address and you will learn why! Oracle Cloud is growing every day and we are scaling, adding new products and revolutionizing and improving all areas of the Oracle Cloud. No company comes close to the comprehensive product lineup, services, capabilities and global reach and delivery of Oracle's Cloud. This is why it is a great time to work for Oracle: consistent, stable financial growth and high-impact technological advances are occurring every day. If you are serious about managing an upward, expansive path in your career, while staying on the leading edge and making big career impacts, you should join Oracle. Whether you want to design and develop or manage Social, Infrastructure or Applications in the Cloud, you can do it all at Oracle. Whether you're a Technical Leader, Developer, Architect or Product Manager/Strategist, we are hiring now! Come check us out in person on Friday, November 9 and see why Oracle Cloud is the place to take your career! RSVP here, and learn more about the hiring teams in attendance here.

    Here are just some of the big things happening on Friday, November 9:
    - 8:30am-3pm: Registration/Refreshments, Oracle Conference Center, 350 Oracle Parkway, Redwood Shores, CA (free parking)
    - 9am-3pm: Ongoing hiring team discussions and product demos, including: Social Marketing, Social Engagement, Social Monitoring, Insight/View, KPI Bundles, Business Directory, Virtualization, Messaging, Provisioning, Cloud Portal
    - 10:30am - Speaker: Gopalan Arun, Vice President, Oracle Cloud Development. Bio: Arun has been with Oracle for 18+ years. He is a testament to the stability and career growth that you can achieve working for Oracle. Arun began as a Developer and ascended through several product organizations into key leadership roles. Over his 18 years at Oracle, he has built and shipped many Database and Middleware products. Arun is one of the founding members of the Oracle Cloud and currently leads the development of many of the core infrastructure and developer-facing services of the Oracle Cloud. Topic: Oracle Cloud for the Developer
    - 1pm - Speaker: Naresh Revanuru, Lead Architect, Oracle Cloud. Bio: Naresh is currently leading Java, Storage and Compute services for Oracle Cloud. Naresh also helps drive decisions for broad-based Cloud topics that affect multiple services. http://www.linkedin.com/in/nareshrevanuru. Topic: Oracle Cloud Architectural Overview and Challenges to Solve
    - 1pm-3pm: Ongoing hiring team discussions and product demos

    Read the article

  • SQL SERVER – Shard No More – An Innovative Look at Distributed Peer-to-peer SQL Database

    - by pinaldave
    There is no doubt that SQL databases play an important role in modern applications. In an ideal world, a single database can handle hundreds of incoming connections from multiple clients and scale to accommodate the related transactions. However, the world is not ideal, and databases are often the cause of major headaches when applications need to scale to accommodate more connections, transactions, or both.

    In order to overcome scaling issues, application developers often resort to administrative acrobatics, also known as database sharding. Sharding helps to improve application performance and throughput by splitting the database into two or more shards. Unfortunately, this practice also requires application developers to code transactional consistency into their applications, and getting transactional consistency across multiple SQL database shards can prove to be very difficult. Sharding requires developers to think about things like rollbacks, constraints, and referential integrity across tables within their applications, when these types of concerns are best handled by the database. It also makes other common operations such as joins, searches, and memory management very difficult. In short, the very solution implemented to overcome throughput issues becomes a bottleneck in and of itself.

    What if database sharding were no longer required to scale your application? Let me explain. For the past several months I have been following and writing about NuoDB, a hot new SQL database technology out of Cambridge, MA. NuoDB is officially out of beta, and they have recently released their first release candidate, so I decided to dig into the database in a little more detail. Their architecture is very interesting and exciting because it completely eliminates the need to shard a database to achieve higher throughput.

    Each NuoDB database consists of at least three processes that enable a single database to run across multiple hosts: a Broker, a Transaction Engine and a Storage Manager. Brokers are responsible for connecting client applications to Transaction Engines, and maintain a global view of the network to keep track of the multiple Transaction Engines available at any time. Transaction Engines are in-memory processes that client applications connect to for processing SQL transactions. Storage Managers are responsible for persisting data to disk and serving up records to the Transaction Engines if they don't exist in memory.

    The secret to NuoDB's approach to solving the sharding problem is that it is a truly distributed, peer-to-peer SQL database. Each of its processes can be deployed across multiple hosts. When client applications need to connect to a Transaction Engine, the Broker will automatically route the request to the most available process. Since multiple Transaction Engines and Storage Managers running across multiple host machines represent a single logical database, you never have to resort to sharding to get the throughput your application requires.
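    [Editor's note: to make that concrete, a query that would require cross-shard fan-out and client-side aggregation in a sharded setup stays an ordinary join on one logical database. A hedged illustration with hypothetical table names, not a NuoDB-specific feature:]

    -- On a single logical database there is no shard routing to code around:
    SELECT c.customer_id, SUM(o.order_total) AS lifetime_value
    FROM customers AS c
    JOIN orders AS o ON o.customer_id = c.customer_id
    GROUP BY c.customer_id;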
    NuoDB is a new pioneer in the SQL database world. They are making database scalability simple by eliminating the need for acrobatics such as sharding, and they are also making general administration of the database simpler as well. Their distributed database appears to you, as a user, like a single logical SQL database. With their RC1 release they have also provided a web-based administrative console that they call NuoConsole. This tool makes it extremely easy to deploy and manage NuoDB processes across one or multiple hosts with the click of a mouse button. See for yourself by downloading NuoDB here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: CodeProject, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: NuoDB

    Read the article

  • top Tweets SOA Partner Community – June 2012

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity

    - Simone Geib: Contact me directly for ideas how to improve http://bit.ly/advancedsoasuite and additional posts, presentations, white papers, #soasuite
    - SOA Community: SOA Community Newsletter May 2012 https://soacommunity.wordpress.com/2012/05/28/soa-community-newsletter-may-2012/ #soacommunity
    - Simone Geib: #soasuite advanced OTN page has become too cluttered. Broke it into separate pages to start with. http://bit.ly/advancedsoasuite
    - SOA Community: SOA Management with Enterprise Manager Cloud Control 12c and Business Transaction Management 12c Demo https://soacommunity.wordpress.com/2012/05/21/soa-management-with-enterprise-manager-cloud-control-12c-and-business-transaction-management-12c-demo/ #soacommunity
    - OracleBlogs: June Webcast: SOA Gateway Implementation and Troubleshooting (2 sessions) http://ow.ly/1kbRFA
    - OTNArchBeat: Every cloud needs an SOA lining: analyst | @JoeMcKendrick http://zd.net/KTgMHk
    - ServiceTechSymposium: New session just posted to calendar: "NoSQL for Data Services, Data Virtualization & Big Data" by Guido Schmutz, Trivadis AG http://ow.ly/bjjOe
    - Debra Lilley: looks good - real proof people are using the apps! RT @fteter: Very cool Fusion Applications Help site: http://bit.ly/L3nvOR #FusionApps
    - OTNArchBeat: How to Set JVM Parameters in Oracle SOA 11G | Francis Ip http://bit.ly/JBDYPj
    - demed: "rapid proliferation of cloud computing will drive convergence of SOA and cloud paradigms" http://ovum.com/2012/05/18/soa-paves-the-way-for-cloud/
    - SOA Community: Sending out invitations to our advanced Fusion Middleware Summer Camps! Want to learn more? Register for the community at http://www.oracle.com/goto/emea/soa
    - SOA Community: Middleware Oracle Excellence Awards 2012 - HAPPY NEW YEAR! https://soacommunity.wordpress.com/2012/05/31/middleware-oracle-excellence-awards-2012-happy-new-year/ #soacommunity #opn #opnaward #specialization #oracle
    - Simone Geib: #oraclesoa performance tuning resources. All in one: docs, blogs, WPs, ppts: http://bit.ly/soa_resources
    - OracleBlogs: Middleware Oracle Excellence Awards 2012 - HAPPY NEW YEAR! http://ow.ly/1k9ri0
    - ServiceTechSymposium: New session just posted to Symposium calendar: "Service Modeling & BPM Business Value Patterns" by Jürgen Kress, Oracle http://www.servicetechsymposium.com/agenda2012.php #service_modeling_and_bpm_business_value_patterns
    - SOA Community: Happy New Year #soacommunity thanks for the business! Time for a drink ;-) http://pic.twitter.com/zkK08KWB
    - Jan van Zoggel: Using execute-sql() function for Name-Value pair lookups in Oracle Service Bus http://wp.me/p1H430-jZ
    - SOA Community: Middleware Oracle Excellence Awards 2012 - HAPPY NEW YEAR! http://wp.me/p10C8u-q4
    - orclateamsoa: A-Team Blog #ateam: BPM 11g Deployment & Instance Migration - I have seen a number of requests lately asking how to http://ow.ly/1jZ0h8
    - OTNArchBeat: Who should 'own' the Enterprise Architecture? | Michael Glas http://bit.ly/K0ge0Q
    - Oracle UPK & Tutor: TOMORROW! (June 23rd) - UPK Professional Webinar at Noon ET: Discover why user adoption is a key factor for the http://bit.ly/LjZjdx
    - Sabine Leitner: Finance event at the design hotel with barbecue: June 21 in FRA with customers SV Informatik, Schufa, LBBW http://bit.ly/JtwE3v #Oracle @itevent [translated from German]
    - OracleEnterpriseMgr: SOA Management with Enterprise Manager Cloud Control 12c and Business Transaction Management 12c Demo http://ow.ly/b3WP1 #em12c
    - ServiceTechSymposium: New session just posted to Symposium calendar: "Elastic SOA in the Cloud" by Steve Millidge, C2B2 Consulting http://www.servicetechsymposium.com/agenda2012.php #elastic_soa_in_the_cloud
    - OTNArchBeat: Securing Heterogeneous Systems Using Oracle Web Services Manager by @rluttikhuizen & Jens Peters http://bit.ly/KjShFi
    - orclateamsoa: A-Team Blog #ateam: How to Set JVM Parameters in Oracle SOA 11G http://ow.ly/1k2cnl
    - SOA Community: Oracle Service Registry in an automated (Maven) SOA/BPM build http://redstack.wordpress.com/2012/05/22/using-oracle-service-registry-in-an-automated-maven-soabpm-build/ #soacommunity #redstack #soa #osr #opn
    - SOA Community: High demand for advanced Fusion Middleware Summer Camps! Want to learn more? Register for the #soacommunity at http://www.oracle.com/goto/emea/soa
    - OracleBlogs: How to Set JVM Parameters in Oracle SOA 11G http://ow.ly/1k1UTv
    - SOA Community: top Tweets SOA Partner Community - May 2012 http://wp.me/p10C8u-pP
    - ServiceTechSymposium: New session just posted to Symposium calendar: "SOA Governance at EDP: A Global Energy Company" by Manuel Rosa, Link http://www.servicetechsymposium.com/agenda2012.php #soa_governance_at_edp

    For regular information on Oracle SOA Suite become a member in the SOA Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Technorati Tags: soacommunity,twitter,Oracle,SOA Community,Jürgen Kress,OPN,SOA,BPM

    Read the article

  • Creating shapes on the fly

    - by Bertrand Le Roy
    Most Orchard shapes get created from part drivers, but they are a lot more versatile than that. They can actually be created from pretty much anywhere, including from templates. One example can be found in the Layout.cshtml file of the ThemeMachine theme:

        WorkContext.Layout.Footer.Add(New.BadgeOfHonor(), "5");

    What this is really doing is creating a new shape called BadgeOfHonor and injecting it into the Footer global zone (which has not yet been defined, which in itself is quite awesome) with an ordering rank of "5". We can come up with something simpler if we want to render the shape inline instead of sending it into a zone:

        @Display(New.BadgeOfHonor())

    Now let's try something a little more elaborate and create a new shape for displaying a date and time:

        @Display(New.DateTime(date: DateTime.Now, format: "d/M/yyyy"))

    For the moment, this throws a "Shape type DateTime not found" exception because the system has no clue how to render a shape called "DateTime" yet. The BadgeOfHonor shape above was rendering something because there is a template for it in the theme: Themes/TheThemeMachine/Views/BadgeOfHonor.cshtml. We need to provide a template for our new shape to get it rendered. Let's add a DateTime.cshtml file into our theme's Views folder in order to make the exception go away:

        Hi, I'm a date time shape.

    Now we're just missing one thing. Instead of displaying some static text, which is not very interesting, we can display the actual time that got passed into the shape's dynamic constructor. Those parameters get added to the template's Model, so they are easy to retrieve:

        @(((DateTime)Model.date).ToString(Model.format))

    Now that may remind you a little of WebForms user controls. That's a fair comparison, except that these shapes are much more flexible (you can add properties on the fly as necessary), and the actual rendering is decoupled from the "control". For example, any theme can override the template for a shape, and you can use alternates, wrappers, etc. Most importantly, there is no lifecycle and protocol abstraction like there was in WebForms. I think this is a real improvement over previous attempts at similar things.
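    For completeness, here is a small sketch of my own (not from the original post) combining the two ideas above: injecting the DateTime shape we just built into the Footer zone from Layout.cshtml instead of displaying it inline. The shape and property names are exactly the ones defined earlier; only the rank "5" is an arbitrary choice:

        @{
            // Dispatch our hypothetical DateTime shape to the Footer zone with rank "5"
            WorkContext.Layout.Footer.Add(New.DateTime(date: DateTime.Now, format: "d/M/yyyy"), "5");
        }

    Because template resolution is by shape name, the same DateTime.cshtml created above renders the shape whether it is displayed inline or dispatched to a zone.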

    Read the article

  • Rip and Convert DVDs to an ISO Image

    - by Mysticgeek
    If you own a lot of DVDs, you might want to convert them to ISO images for backup and for easily playing them on your media center. Today we take a look at ripping your discs using DVDFab, then using ImgBurn to create an ISO image of the ripped DVD files.

    Rip the DVD with DVDFab 6

    DVDFab will remove copy protection and rip the DVD files for free. Other components in the suite require you to purchase a license after the 30-day trial, but you'll still be able to rip DVDs after the trial. Install DVDFab by accepting the defaults (link below); a system restart is required to complete the install process. The first time you run it, a welcome screen is displayed. If you don't want to see it again, check the box Do not show again, then Start DVDFab. Pop the DVD in your drive and click Next. Now select your region and check Do not show again, then OK. It will then open the DVD and begin to scan it. Under DVD to DVD you can select either Full Disc or Main Movie depending on what you want to rip. If you want to burn the DVD to a disc after it's created, select the Full Disc option. Now click the Start button to begin the ripping process. After the ripping process has completed, you'll get a message telling you it's waiting for you to put in a blank DVD. Since we aren't burning the disc, just cancel the message. Click Finish and close out of DVDFab, or just minimize it if you're going to keep using it to rip another DVD. By default the temporary directory is My Documents \ DVDFab \ Temp, but you can change it in settings. If you go to the Temp directory you'll see the DVD files listed there.

    Convert Files to ISO with ImgBurn

    Now that we have the files ripped from the DVD, we need to convert them to an ISO image using ImgBurn (link below). Open it up and from the main menu click on Create image file from files/folders. Click on the folder icon to browse to the location of the ripped DVD files. Browse to the DVDFab temp directory and the VIDEO_TS folder for the source and click Ok. Then choose a destination directory, give the ISO a name, and click Save. In this case we ripped the Unbreakable DVD, so we named it that. So now in ImgBurn you have the source being the ripped DVD files and the destination for the ISO; click the Build button. If you don't create a volume label, ImgBurn is kind enough to create one for you. If everything looks correct, click Ok. Now wait while ImgBurn goes through the process of converting the ripped DVD files to an ISO image. Once the process has successfully completed, the ISO image of the DVD will be in the output directory you selected earlier. Now you can burn the ISO image to a blank DVD or store it on an external hard drive for safekeeping. When you're done, you'll probably want to go into the temp DVDFab folder and delete the VOB and other files in the VIDEO_TS folder, as they will take up a lot of space on your hard drive.

    Conclusion

    Although this method requires two programs to make an ISO out of a DVD, it's extremely quick. When burning DVDs of various lengths, it took less than 30 minutes to get the final ISO. Now you'll have your DVD movies backed up in case something happens to the discs and they are no longer playable. If you use Windows Media Center to watch your movies, check out our article on how to automatically mount and view ISO files in Windows 7 Media Center. With DVDFab, you get a 30-day fully functional trial of all of its features, and you'll still be able to rip DVDs even after the trial has ended. The more we've been using DVDFab, the more impressed we are with its capabilities, so after the 30-day trial you should consider purchasing a license. We will have a full review of it to share with you soon.

    Download DVDFab
    Download ImgBurn

    Read the article

  • SOA+OSB in same JVM

    - by Manoj Neelapu
    Oracle Service Bus 11gR1 (11.1.1.3) supports running in the same JVM as SOA. This tutorial covers how to create a domain with SOA and OSB combined so that both run in a single JVM. For this tutorial we will use a flavor of the WebLogic installer bundled with both the OEPE and Coherence components (e.g. oepe111150_wls1033_win32.exe); the bundled components can be seen in the screenshot. Oracle Service Bus 11gR1 (11.1.1.3) has built-in caching support for Business Services using Coherence, which is why we have to install Coherence before installing OSB. To get SOA and OSB running in the same domain, we have to install SOA and OSB into the above ORACLE_HOME. After installation we should see both the SOA and OSB homes, as highlighted in red. We can also see the Coherence components, which are mandatory for OSB, and OEPE, which is optional, also installed. Now we will execute RCU (ofm_rcu_win_11.1.1.3.0_disk1_1of1) to install the schemas for SOA and OSB. The new RCU contains OSB tables (WLI_QS_REPORT_DATA, WLI_QS_REPORT_ATTRIBUTE) that get loaded as part of the SOAINFRA schema. After this step we have to create the SOA+OSB domain using the configuration wizard, located under $WEBLOGIC_HOME\common\bin\config.* (.cmd or .sh as per your platform). While creating the domain we will select the options for SOA Suite and Oracle Service Bus Extension - All Domain Topologies. There is another option for OSB, Oracle Service Bus Extension - Single Server Domain Topology. That topology is for users who want to run OSB in a single server configuration. Currently SOA doesn't support the single server topology, so it cannot be used with a SOA domain, only for standalone OSB installations. We can continue with the domain configuration until we reach the screen below. The following steps are mandatory if we want SOA and OSB to run in the same JVM: we should select Managed Servers, Clusters and Machines as shown below. After this selection you should see a screen with two servers, one managed server for OSB and one for SOA. Since we would like to have both in one managed server (one JVM), we have to do one important step here: delete either of the servers and rename the other server with the deleted server's name, e.g. delete osb_server1 and rename soa_server1 to osb_server1, or delete soa_server1 and rename osb_server1 to soa_server1. After this step, proceed as usual. If we inspect the created domain, we see only one managed server, which contains components for both SOA and OSB ($DOMAIN_HOME/startManagedWebLogic_readme.txt).

    Read the article

  • SQL SERVER – Introduction to Big Data – Guest Post

    - by pinaldave
    BIG Data – such a big word – everybody talks about it these days. It is the word in the database world. In one of our conversations I asked my friend Jasjeet Singh exactly that question – what is Big Data? He instantly came up with a very effective write-up. Jasjeet is working as a Technical Manager with Koenig Solutions. He leads the SQL domain and holds rich IT industry experience. Talking about Koenig, it is a 19-year-old IT training company that offers several certification choices. Some of its courses include SharePoint Training, Project Management certifications, Microsoft Trainings, Business Intelligence programs, Web Design and Development courses, etc.

    Big Data, as the name suggests, is about data that is BIG in nature. The data is BIG in terms of size, and it is difficult to manage such enormous data with the relational database management systems that are quite popular these days. Big Data is not just about being large in size; it is also about the variety of the data, which differs in form or type. Some examples of Big Data are given below:
    - Scientific data related to weather and atmosphere, genetics, etc.
    - Data collected by various medical procedures, such as radiology, CT scans, MRI, etc.
    - Data related to the Global Positioning System
    - Pictures and videos
    - Radio frequency data
    - Data that may vary very rapidly, like stock exchange information

    Apart from the difficulties in managing and storing such data, it is difficult to query, analyze and visualize it. The characteristics of Big Data can be defined by four Vs:

    Volume: This simply means a large volume of data that may span petabytes, exabytes and so on. However, what volume of data counts as Big Data also varies from organization to organization.

    Variety: As discussed above, Big Data is not limited to relational information or structured data. It can also include unstructured data like pictures, videos, text, audio, etc.

    Velocity: Velocity means the speed at which data changes. The higher the velocity, the more efficient the system must be to capture and analyze the data. Missing any important point may lead to wrong analysis or may even result in loss.

    Veracity: This has been recently added as the fourth V, and generally means truthfulness or adherence to the truth. In terms of Big Data, it is more of a challenge than a characteristic. It is difficult to ascertain the truth out of an enormous amount of data, especially data with high velocity, and there are always chances of having imprecise and uncertain data. It is a challenging task to clean such data before it is analyzed.

    Big Data can be considered the next big thing in the IT sector in terms of innovation and development. If appropriate technologies are developed to analyze and use the information, it can be the driving force for almost all industrial segments, including Retail, Manufacturing, Service, Finance, Healthcare, etc. This will help them to automate business decisions, increase productivity, and innovate and develop new products.

    Thanks, Jasjeet Singh, for an excellent write-up. Jasjeet Singh is working as a Technical Manager with Koenig Solutions.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Database, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Big Data

    Read the article

  • 5 Ways to Determine Mobile Location

    - by David Dorf
    In my previous post, I mentioned the importance of determining the location of a consumer using their mobile phone. Retailers can track anonymous mobile phones to determine traffic patterns both inside and outside their stores. And with consumers' permission, retailers can send location-aware offers to mobile phones; for example, a coupon for cereal as you walk down that aisle. When paying with Square, your location is matched with the transaction. So there are lots of reasons for retailers to want to know the location of their customers. But how is it done? I thought I'd dive a little deeper on that topic and consider the approaches to determining location.

    1. Tower Triangulation
    By comparing the relative signal strength from multiple antenna towers, a general location of a phone can be roughly determined to an accuracy of 200-1000 meters. The more towers involved, the more accurate the location. (See the toy sketch at the end of this post for the basic idea.)

    2. GPS
    Using Global Positioning Satellites is more accurate than using cell towers, but it takes longer to find the satellites, it uses more battery, and it won't work well indoors. For geo-fencing applications, like those provided by Placecast and Digby, cell towers are often used to determine if the consumer is nearing a "fence"; the app then switches to GPS to determine the actual crossing of the fence.

    3. WiFi Triangulation
    WiFi triangulation is usually more accurate than using towers just because there are so many more WiFi access points (i.e. radios in routers) around. The position of each WiFi AP needs to be recorded in a database and used in the calculations, which is what Skyhook has been doing since 2008. Another advantage of this method is that it works well indoors, although it usually requires additional WiFi beacons to get the accuracy down to 5-10 meters. Companies like ZuluTime, Aisle411, and PointInside have been perfecting this approach for retailers like Meijer, Walgreens, and HomeDepot. Keep in mind that a mobile phone doesn't have to connect to the WiFi network in order for it to be located; the WiFi radio in the phone only needs to be on. Even when not connected, WiFi radios talk to each other to prepare for a possible connection.

    4. Hybrid Approaches
    Naturally the most accurate approach is to combine the approaches described above. The more available data points, the greater the accuracy. Companies like ShopKick like to add in acoustic triangulation using the phone's microphone, and NearBuy can use video analytics to increase accuracy.

    5. Magnetic Fields
    The latest approach, and this one is really new, takes a page from the animal kingdom. As you've probably learned from guys like Marlin Perkins, some animals use the Earth's magnetic fields to navigate. By recording magnetic variations within a store, then matching those readings with ones from a consumer's phone, location can be accurately determined. At least that's the approach IndoorAtlas is taking, and the science seems to bear it out. It works well indoors, and it doesn't require retailers to purchase any additional hardware. Keep an eye on this one.
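    Here is the toy sketch promised above: a weighted-centroid estimate in Python. To be clear, this is my own illustration of the general triangulation idea, not the actual algorithm used by carriers, Skyhook, or any of the companies mentioned; real systems model signal propagation far more carefully.

        # Toy weighted-centroid location estimate from signal strengths.
        # Positions are (x, y) in meters; RSSI values are in dBm
        # (less negative = stronger signal).

        def estimate_position(readings):
            """readings: list of ((x, y), rssi_dbm) for towers/APs at known positions."""
            # Convert dBm to linear power so stronger signals carry more weight.
            weighted = [((x, y), 10 ** (rssi / 10.0)) for (x, y), rssi in readings]
            total = sum(w for _, w in weighted)
            est_x = sum(x * w for (x, _), w in weighted) / total
            est_y = sum(y * w for (_, y), w in weighted) / total
            return est_x, est_y

        # Example: three towers; the phone is closest to the tower at the origin.
        print(estimate_position([((0, 0), -50), ((1000, 0), -80), ((0, 1000), -85)]))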

    Read the article

  • MySQL and Hadoop Integration - Unlocking New Insight

    - by Mat Keep
    “Big Data” offers the potential for organizations to revolutionize their operations. With the volume of business data doubling every 1.2 years, analysts and business users are discovering very real benefits when integrating and analyzing data from multiple sources, enabling deeper insight into their customers, partners, and business processes. As the world’s most popular open source database, and the most deployed database in the web and cloud, MySQL is a key component of many big data platforms, with Hadoop vendors estimating 80% of deployments are integrated with MySQL. The new Guide to MySQL and Hadoop presents the tools enabling integration between the two data platforms, supporting the data lifecycle from acquisition and organisation to analysis and visualisation / decision. The Guide details each of these stages and the technologies supporting them:

    Acquire: Through new NoSQL APIs, MySQL is able to ingest high volume, high velocity data without sacrificing ACID guarantees, thereby ensuring data quality. Real-time analytics can also be run against newly acquired data, enabling immediate business insight before data is loaded into Hadoop. In addition, sensitive data can be pre-processed; for example, healthcare or financial services records can be anonymized before transfer to Hadoop.

    Organize: Data is transferred from MySQL tables to Hadoop using Apache Sqoop. With the MySQL Binlog (Binary Log) API, users can also invoke real-time change data capture processes to stream updates to HDFS.

    Analyze: Multi-structured data ingested from multiple sources is consolidated and processed within the Hadoop platform.

    Decide: The results of the analysis are loaded back to MySQL via Apache Sqoop, where they inform real-time operational processes or provide source data for BI analytics tools. (A sketch of this Sqoop round trip appears below.)

    So how are companies taking advantage of this today? As an example, on-line retailers can use big data from their web properties to better understand site visitors’ activities, such as paths through the site, pages viewed, and comments posted. This knowledge can be combined with user profiles and purchasing history to gain a better understanding of customers, and to deliver highly targeted offers. Of course, it is not just on the web that big data can make a difference. Every business activity can benefit, with other common use cases including:
    - Sentiment analysis
    - Marketing campaign analysis
    - Customer churn modeling
    - Fraud detection
    - Research and development
    - Risk modeling
    - And more

    As the guide discusses, Big Data promises a significant transformation of the way organizations leverage data to run their businesses. MySQL can be seamlessly integrated within a Big Data lifecycle, enabling the unification of multi-structured data into common data platforms, taking advantage of all new data sources and yielding more insight than was ever previously imaginable. Download the guide to MySQL and Hadoop integration to learn more. I'd also be interested in hearing how you are integrating MySQL with Hadoop today, and your requirements for the future, so please use the comments on this blog to share your insights.
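    As a concrete sketch of the Organize and Decide steps, a Sqoop round trip might look like the following. The host, database, user, table, and directory names are invented for the example; only the sqoop commands and flags are standard Sqoop usage:

        # Organize: import a MySQL table into HDFS for processing in Hadoop
        sqoop import \
          --connect jdbc:mysql://dbhost/retail \
          --username etl_user -P \
          --table web_sessions \
          --target-dir /data/web_sessions

        # Decide: export aggregated results from HDFS back into a MySQL table
        sqoop export \
          --connect jdbc:mysql://dbhost/retail \
          --username etl_user -P \
          --table session_summary \
          --export-dir /data/session_summary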

    Read the article

  • Where are my sub templates?

    - by Tim Dexter
    This one is for standalone/BIEE users of Publisher. All the ERP/CRM/HCM folks are already catered for and can tuck into a nut cutlet and arugula salad. Sorry, I have just watched Food Inc and even if only half of it is true, I'm still on a crusade in my house against mass produced food. Wake up World! If you have ventured into the world of sub templating, you'll be reaping some development benefit. In terms of shared report components and calculations they are very useful. Just exporting all of your report headers and footers to a single sub template can potentially save you hours and hours of work and make you look like a star. If someone in management gets it into their head that they would like Comic Sans rather than Arial in their report headers, it's a 10 minute job rather than 100 hours! What about the rest of the report content? I hear you cry. It's coming in 11g: full master template support. Your management wants bright blue borders with yellow backgrounds for all the tables in your reports? 5 minute job! Getting back to sub templates and my comment about all the ERP/CRM/HCM folks being catered for: in the standalone release there is no out of the box directory for you to drop your sub templates into. Dropping them into the main report directory would make sense, but they are not accessible there via a URL. An oversight on our part and something that will be addressed in 11g. Sub templates are now a first class citizen in the world of BIP; you can upload them and BIP will know what to do with them. But what do you do right now? The easiest place to put them where BIP can 'see' them is to create a directory under the xmlpserver install directory in the J2EE container, e.g. $J2EE_HOME/xmlpserver/xmlpserver/subtemplates. You can call it whatever you want, but when the server is started up, that directory is accessible via a URL, i.e. http://tdexter:9704/xmlpserver/subtemplates/mysub.rtf. You can therefore put the following at the top of your main templates to import the sub template:

        <?import: http://tdexter:9704/xmlpserver/subtemplates/mysub.rtf?>

    Of course, you can drop them anywhere you want; they just need to be in a web server mountable directory. Enjoy the arugula!
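    To round that out, here is a minimal sketch of the pieces at both ends. The template name commonHeader is made up for the example, and while this is the standard BI Publisher RTF sub-template syntax as I recall it, check the docs for your release if in doubt. In mysub.rtf, wrap the shared content in a named template:

        <?template:commonHeader?>
        ...shared header layout...
        <?end template?>

    Then, in the main template, import the file (as above) and invoke the named template wherever the shared content should appear:

        <?import: http://tdexter:9704/xmlpserver/subtemplates/mysub.rtf?>
        <?call:commonHeader?>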

    Read the article

  • Exclusive Webcast Series Explains How Project Success Drives Business Success

    - by Melissa Centurio Lopes
    In the wake of the global financial crisis, organizations throughout the world are redoubling their efforts to enhance financial discipline, achieve operational excellence, and mitigate risk. How can they address all these areas with one comprehensive strategy? With enterprise project portfolio management solutions that provide greater transparency and visibility across all projects and portfolios, says Guy Barlow, Oracle director of industry strategy. In the following interview and in an exclusive, three-part webcast series, Barlow examines today’s new management realities and explains how organizations can succeed in this environment.

    Q: Financial discipline has always been important. What’s different today?
    A: A number of organizations are showing that by fiscally aligning projects with the business goals of their organizations, they can shave off hundreds of thousands if not millions of dollars in inefficiency and waste. For example, one Oracle customer, the Columbus Regional Airport Authority, reduced its unbudgeted costs from US$24.4 million to US$3.5 million, for an 88 percent improvement.

    Q: How do organizations achieve results like this?
    A: First, they need to have the vision to see project management as part of a broad and critical element in their overall enterprise strategy. That means using a single solution, such as Oracle‘s Primavera, to manage multiple projects across multiple functions within a company. So someone in corporate mergers and acquisitions as well as a capital projects team can standardize on the same technology. By doing so they all gain greater efficiency in planning and execution, because the technology can be configured for their specific roles and needs, and the IT organization really benefits from lower maintenance. Second, enterprises must give executive leaders (CFOs, COOs, and CEOs) visibility across the entire business to easily see what projects are on track and which ones are falling behind. In fact, once executives see the power of enterprise project portfolio management, uptake is very quick across the organization.

    Read the full interview here.

    Read the article

  • Bad Spot to Be In: Playing Catch-up with Mobile Advertising

    - by Mike Stiles
    You probably noticed, there’s a mass migration going on from online desktop/laptop usage to smartphone/tablet usage. It’s an indicator of how we live our lives in the modern world: always on the go, with no intention of being disconnected while out there. Consequently, “paid” as it relates to mobile advertising is taking the social spotlight. eMarketer estimated that in 2013, US adults would spend about 2 hours, 21 minutes a day on mobile, not counting talking time. More people in the world own smartphones than own toothbrushes (bad news, I suppose, if you’re marketing toothpaste). They’re using those mobile devices to access social networks, consuming at least 17% of their mobile time on them. Frankly, you don’t need a deep dive into mobile usage stats to know what’s going on. Just look around you in any store, venue or coffee shop. It’s really obvious: our mobile devices are now where we “are,” so that’s where marketers can increasingly reach us.

    And it’s a smart place for them to do just that. Mobile devices can be viewed more and more as shopping facilitators. Usually when someone is on mobile, they are not in passive research mode. They are likely standing near a store or in front of a product, using their mobile to seek reassurance that buying that product is the right move. They are the hottest of hot prospects. Consider that 4 out of 5 consumers use smartphones to shop, 52% of Americans use mobile devices for in-store research, 70% of mobile searches lead to online action inside of an hour, and people who find you on mobile convert at almost 3x the rate of those who find you on desktop or laptop.

    But what are marketers doing? Enter statistics from Mary Meeker’s latest State of the Internet report. Common sense says you buy advertising where people are spending their eyeball time, right? But while mobile is 20% of media use and rising, the ad spend there is 4%. Conversely, while print usage is at 5% and falling, ad spend there is 19%. We all love nostalgia, but come on.

    There are reasons marketing dollar migration to mobile has not matched user migration, including the availability of mobile ad products and the ability to measure user response to mobile ads. But interesting things are happening now. First came Facebook’s mobile ad, which let app developers pay to get potential downloads. Then their mobile ad network was announced at F8, allowing marketers to target users across non-Facebook apps while leveraging the wealth of diverse data Facebook has on those users, a big deal since Nielsen has pointed out that mobile apps make up 89% of the media time spent on mobile. Twitter has a similar play in motion with their MoPub acquisition. And now mobile deeplinks have arrived, which can take users straight to sub-pages of mobile apps for a faster, more direct shopper/researcher user experience. The sooner the gratification, the smoother and faster the conversion.

    To be clear, growth in mobile ad spending is well underway. After posting $13.1 billion in 2013, Gartner expects global mobile ad spending to reach $18 billion this year, then go to $41.9 billion by 2017. Cheap smartphones and data plans are spreading worldwide, further fueling the shift to mobile. Mobile usage in India alone should grow 400% by 2018. And, of course, there’s the famous statistic that mobile should overtake desktop Internet usage this year.

    How can we as marketers mess up this opportunity? Two ways. We could position ourselves in perpetual “catch-up” mode and keep spending ad dollars where the public used to be. And we could annoy mobile users with horrid old-school marketing practices; two-thirds of users told Forrester they think interruptive in-app ads are more annoying than TV ads. Make sure your brand’s social marketing technology platform is delivering a crystal clear picture of your social connections so the mobile touch point is highly relevant, mobile optimized, and delivering real value and satisfying experiences. Otherwise, all we’ve done is find a new way to be unwanted.

    @mikestiles @oraclesocial
    Photo: Kate Mallatratt, freeimages.com

    Read the article

  • ORA-4031 Troubleshooting

    - by [email protected]
    QUICKLINK: Note 396940.1 Troubleshooting and Diagnosing ORA-4031 Error
    Note 1087773.1: ORA-4031 Diagnostics Tools [Video]

    Have you observed an ORA-04031 error reported in your alert log? An ORA-4031 error is raised when memory is unavailable for use or reuse in the System Global Area (SGA). The error message will indicate the memory pool getting errors and high level information about what kind of allocation failed and how much memory was unavailable. The challenge with ORA-4031 analysis is that the error and associated trace are for a "victim" of the problem. The failing code ran into the memory limitation, but in almost all cases it was not part of the root problem.

    Looking for the best way to diagnose? When an ORA-4031 error occurs, a trace file is raised and noted in the alert log if the process experiencing the error is a background process. User processes may experience errors without reports in the alert log or traces generated. The V$SHARED_POOL_RESERVED view will show reports of misses for memory over the life of the database. Diagnostic scripts are available in Note 430473.1 to help in analysis of the problem, and there is also a training video on using and interpreting the script data in Note 1087773.1.

    11g Diagnosability
    Starting with Oracle Database 11g Release 1, the Diagnosability infrastructure was introduced, which places traces and core files into a location controlled by the DIAGNOSTIC_DEST initialization parameter when an incident, such as an ORA-4031, occurs. For earlier versions, the trace file will be written to either USER_DUMP_DEST (if the error was caught in a user process) or BACKGROUND_DUMP_DEST (if the error was caught in a background process like PMON or SMON). The trace file contains vital information about what led to the error condition. See Note 443529.1: 11g Quick Steps to Package and Send Critical Error Diagnostic Information to Support [Video].

    Oracle Configuration Manager (OCM)
    Oracle Configuration Manager (OCM) works with My Oracle Support to enable a proactive support capability that helps you organize, collect and manage your Oracle configurations.
    Oracle Configuration Manager Quick Start Guide
    Note 548815.1: My Oracle Support Configuration Management FAQ
    Note 250434.1: BULLETIN: Learn More About My Oracle Support Configuration Manager

    Common Causes/Solutions
    The ORA-4031 can occur for many different reasons. Some possible causes are:
    - SGA components too small for the workload
    - Auto-tuning issues
    - Fragmentation due to application design
    - Bugs/leaks in memory allocations
    For more on the ORA-4031 and how it affects the SGA, see Note 396940.1 Troubleshooting and Diagnosing ORA-4031 Error.

    Because of the multiple potential causes, it is important to gather enough diagnostics so that an appropriate solution can be identified. Most commonly, however, the cause is associated with configuration tuning, and ensuring that MEMORY_TARGET or SGA_TARGET is large enough to accommodate the workload can get around many scenarios. The default trace associated with the error provides very high level information about the memory problem and the "victim" that ran into the issue; the data in the default trace is not going to point to the root cause of the problem. When migrating from 9i to 10g and higher, it is necessary to increase the size of the Shared Pool due to changes in the basic design of the shared memory area. See Note 270935.1 Shared pool sizing in 10g.

    NOTE: Diagnostics on the errors should be investigated as close to the time of the error(s) as possible. If you must restart a database, it is not feasible to diagnose the problem until the database has matured and/or started seeing the problems again. See also Note 801787.1 Common Cause for ORA-4031 in 10gR2, Excess "KGH: NO ACCESS" Memory Allocation.

    ***For reference to the content in this blog, refer to Note 1088239.1 Master Note for Diagnosing ORA-4031
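    As a starting point for the kind of checks described above, the following queries sketch how to look at reserved-pool pressure and the largest shared pool consumers. These are standard v$ views, though the columns available can vary by version; treat this as an illustration, not a replacement for the diagnostic scripts in Note 430473.1.

        -- Misses and failures against the shared pool reserved area
        -- (cumulative since instance startup)
        SELECT requests, request_misses, request_failures, free_space
          FROM v$shared_pool_reserved;

        -- Largest allocations within the shared pool
        SELECT pool, name, bytes
          FROM v$sgastat
         WHERE pool = 'shared pool'
         ORDER BY bytes DESC;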

    Read the article

  • SQL Contests – Solution – Identify the Database Celebrity

    - by Pinal Dave
    Last week we were running the contest Identify the Database Celebrity, and we received a fantastic response. Thank you to the kind folks at NuoDB, who offered two USD 100 Amazon Gift Cards to the winners of the contest. We also ran an additional contest where users had to download and install NuoDB and identify the sample database. You can read about the contest over here. Here are the answers to the questions we asked earlier in the contest.

    Part 1: Identify the Database Celebrity

    Personality 1 – Edgar Frank "Ted" Codd (August 19, 1923 – April 18, 2003) was an English computer scientist who, while working for IBM, invented the relational model for database management, the theoretical basis for relational databases. He made other valuable contributions to computer science, but the relational model, a very influential general theory of data management, remains his most mentioned achievement. (Wiki)

    Personality 2 – James Nicholas "Jim" Gray (born January 12, 1944; lost at sea January 28, 2007; declared deceased May 16, 2012) was an American computer scientist who received the Turing Award in 1998 "for seminal contributions to database and transaction processing research and technical leadership in system implementation." (Wiki)

    Personality 3 – Jim Starkey (born January 6, 1949 in Illinois) is a database architect responsible for developing InterBase, the first relational database to support multi-versioning, the blob column type, event alerts, arrays and triggers. Starkey is the founder of several companies, including the web application development and database tool company Netfrastructure, and NuoDB. (Wiki)

    Part 2: Identify the NuoDB Sample Database Tables

    In this part of the contest, one had to download NuoDB and install the sample database Hockey. Hockey is a sample database that contains a few tables. Users had to install the sample database and report the names of its tables. Here is the valid answer:

    HOCKEY
    PLAYERS
    SCORING
    TEAM

    Once again, it was indeed fun to run this contest. I have received great feedback about it, and lots of people want me to run a similar contest in the future. I promise to run similarly interesting contests in the near future.

    Winners

    Within the next two days, we will contact the winners by email. Winners will have to confirm their email address, and the NuoDB team will send them the Amazon Gift Cards directly.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
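    If you installed the sample and want to poke around yourself, a query along these lines shows a few rows from one of the tables above. This is a sketch assuming the Hockey sample is loaded and that your NuoDB version accepts LIMIT; adjust the table name or syntax to your setup:

        -- Peek at the first few rows of the PLAYERS table in the Hockey sample
        SELECT * FROM PLAYERS LIMIT 5;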

    Read the article

< Previous Page | 476 477 478 479 480 481 482 483 484 485 486 487  | Next Page >