Search Results

Search found 1961 results on 79 pages for 'ideal'.

Page 48/79

  • The Spotlight is on You

    - by Claudia McDonald
    On the field or off the field, in ballet slippers or singing your heart out on stage, offering a stellar performance every time is key to holding the attention of your audience and having them come back hungry for more. Similarly, showing up to a new business meeting wearing pink tights and a tutu might be one way to hold the attention of your customer, but offering them an unmatched and ground-breaking software solution certainly will get their attention! Simply put, the Oracle Exastack program enables both ISVs and OEMs to rapidly build and deliver faster, more reliable applications. It comes as no surprise that the success of the Oracle Exastack program is centered on establishing Oracle Exadata Database Machine and Oracle Exalogic Elastic Cloud as the highest-performance, lowest-cost platforms available in the industry today.  But here is where the real standing-ovation-worthy facts come in. The Oracle Exadata Database Machine is the only database machine that provides extreme performance for both data warehousing and online transaction processing (OLTP) workloads, making it the ideal platform for consolidating onto private clouds. The Oracle Exalogic Elastic Cloud, meanwhile, is an engineered hardware and software system tested and tuned by Oracle to provide the best foundation for cloud computing, while allowing Java applications, Oracle Applications and other enterprise applications to run with extreme performance. – And the crowd goes wild, ladies and gentlemen! In just four months, our partners have already achieved over 150 Oracle Exastack Ready milestones for Oracle Solaris, Oracle Linux, Oracle Database and Oracle WebLogic Server.  As Judson has said, “With the Oracle Exastack program, Oracle is helping partners test, tune and optimize their applications to deliver optimal performance and reliability, accelerating innovation and delivering superior value to customers.” And get this: not only are their applications running faster and more efficiently, they are actually being delivered at a lower cost to customers than ever before – extreme performance well deserving of 3 consecutive arabesques! If you haven’t already, check out what some of our partners are saying about the Oracle Exastack program in this video, and find out all that is available to you today. By participating in the Oracle Exastack program, partners now have the ability to achieve Oracle Exadata Optimized, Oracle Exalogic Optimized, Oracle Exadata Ready and Oracle Exalogic Ready status for their solutions. New Oracle Exastack labs provide OPN members with access to Oracle technical resources, on-premise facilities and remote lab environments. With Oracle Exastack Optimized, partners deliver faster and more reliable applications that run on the Oracle Exadata Database Machine, as well as the long-awaited Oracle Exalogic Elastic Cloud. Savvy OPN members are leveraging the Oracle Exastack Optimized program toward their advancement to Platinum or Diamond level in OPN. Partners are achieving Oracle Exadata Ready and Oracle Exalogic Ready status, giving them a competitive advantage and signaling to customers that their applications readily support Oracle Exadata Database Machine or Oracle Exalogic Elastic Cloud to deliver extreme performance. Get your dancing shoes on, The OPN Communications Team

    Read the article

  • Elastic PaaS with WebLogic and OpenStack, part I

    - by Jernej Kaše
    In my previous blog I described the steps to get OpenStack on Solaris up and running. Now we'll explore how WebLogic and OpenStack can work together to deliver a truly elastic Middleware Platform as a Service. Middleware / Platform as a Service goals First, let's define what PaaS should be: PaaS offerings facilitate the deployment of applications without the complexity of managing the underlying hardware and software and provisioning hosting capabilities. To break it down: - PaaS provides a complete platform for hosting solutions (Java EE, SOA, BPM, ...) - Infrastructure provisioning (virtual machine, OS, platform) and management are hidden from the PaaS user [administrator or developer] - Additionally, PaaS could / should define target SLAs, and the platform should ensure the SLAs are met automatically. PaaS use case To make it more tangible, we have an IT Administrator who has the requirement to deploy a Java EE enterprise application. The application is used by external users who need to submit reports by the end of each month. As a result, the number of concurrent users will fluctuate, with expected huge spikes around the end of each month. The SLA agreed by the management is that no more than 100 requests should be waiting to be processed at any given time. In addition, the IT admin has no more than 3 days to have the platform and the application operational. The Challenges Some of the challenges the IT Administrator is facing are: - how are we going to ensure the processing power? - how are we going to provision the (virtual) machines, the Java EE platform and deploy the application? - how are we going to monitor the SLA? - how are we going to react to SLA breaches and increase capacity?  The Ideal Solution Ideally, the whole process should be automated, "set it and forget it", and require no human interaction: - The vendor packages the solution as deployable image(s) - The images are deployed to the IaaS - From there, automated processes take care of the SLA  Solution Architecture with WebLogic 12c, Dynamic Clusters, OpenStack & Solaris Oracle Solaris provides the OS and virtualisation through Solaris Zones OpenStack is a part of Solaris 11.2 and provides Cloud Management (console and API) WebLogic 12c with Dynamic Clusters provides the Platform Traffic Manager provides load balancing On top of that, we are going to implement a small control script - Cloud Manager - which is going to monitor the SLA through the WebLogic Diagnostic Framework. In case there are more than 100 pending requests, the script will: - provision a new virtual machine based on an image which is configured for the WebLogic domain - add the machine to the WebLogic domain - increase the number of servers in the dynamic cluster - start the newly provisioned server  Stay tuned for part II The whole solution, with a working demo, will be presented in one of our Partner WebCasts in June, exact date TBA. Jernej Kaše is a Fusion Middleware Specialist working closely with Oracle Partners in the ECEMEA region to grow their business by leveraging Oracle technology.
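
    The Cloud Manager control script is described above only at a high level. Purely as an illustration of the control flow (a sketch, not the actual script: every helper method below is a hypothetical placeholder, and a real implementation would more likely be a WLST or shell script driving the OpenStack and WebLogic APIs), the loop might look like this:

        using System;
        using System.Threading;

        // Sketch of the "Cloud Manager" control loop described above.
        // All helper methods are hypothetical placeholders, not real WebLogic or OpenStack APIs.
        class CloudManager
        {
            const int MaxPendingRequests = 100;   // the SLA agreed with management

            static void Main()
            {
                while (true)
                {
                    // 1. Read the pending-request count (in reality via the WebLogic Diagnostic Framework).
                    int pending = QueryPendingRequests();

                    if (pending > MaxPendingRequests)
                    {
                        // 2. Provision a new VM from the image pre-configured for the WebLogic domain,
                        // 3. add the machine to the WebLogic domain,
                        // 4. grow the dynamic cluster, and
                        // 5. start the newly provisioned server.
                        string host = ProvisionVirtualMachine("weblogic-node-image");
                        AddMachineToDomain(host);
                        IncreaseDynamicClusterSize(1);
                        StartManagedServer(host);
                    }

                    Thread.Sleep(TimeSpan.FromSeconds(30));   // polling interval is an arbitrary choice
                }
            }

            // --- hypothetical helpers, deliberately left unimplemented ---
            static int QueryPendingRequests() => throw new NotImplementedException();
            static string ProvisionVirtualMachine(string image) => throw new NotImplementedException();
            static void AddMachineToDomain(string host) => throw new NotImplementedException();
            static void IncreaseDynamicClusterSize(int by) => throw new NotImplementedException();
            static void StartManagedServer(string host) => throw new NotImplementedException();
        }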

    Read the article

  • MySQL Enterprise Backup 3.8.2 - Overview

    - by Priya Jayakumar
    MySQL Enterprise Backup (MEB) is the ideal solution for backing up MySQL databases. MEB 3.8.2 was released in June 2013. The main goal of the MySQL Enterprise Backup 3.8.2 release is to improve usability. With this release, users can see the progress of a backup both in terms of size and as a percentage of the total. This release also offers options to manage the behavior of MEB in case the space on the secondary storage is completely exhausted during backup. The progress indicator is a (short) string that indicates how far the execution of a time-consuming MEB command has progressed. It consists of one or more "meters" that measure the progress of the command. Two options are introduced to control the progress-reporting function of the mysqlbackup command: (1) --show-progress and (2) --progress-interval. The user can control the progress indicator by using the "--show-progress" option in any of the MEB operations. This option instructs MEB to periodically output short reports on the progress of time-consuming commands. The argument of this option specifies where the output should be sent; for example, it could be stderr, stdout, a file, a fifo or a table. With the "--show-progress" option, both the total size of the backup to be copied and the size that's already copied will be shown. Along with this, the state of the operation (for example, data or metadata being copied, or tables being locked) will also be reported. This gives the DBA clearer information on the progress of the backup. The interval between progress reports, in seconds, is controlled by the "--progress-interval" option. For more information on this please refer to progress-report-options. MEB can also be accessed through a GUI in the next version of MySQL Workbench. This can be used as the front-end interface for MEB users to perform backup operations at the click of a button. This feature was highly requested by DBAs and will be very useful. Refer to http://insidemysql.com/mysql-workbench-6-0-a-sneak-preview/ for upcoming Workbench release info. Along with the progress report feature, some important issues are also addressed in MEB 3.8.2. A new command-line option "--on-disk-full" is introduced to abort or warn the user when a backup process encounters a full-disk condition. When no value is given, the default is to abort. A few issues related to "incremental-backup" are also addressed in this release. Please refer to the 3.8.2 documentation for more details. It would be good for MEB users to move to 3.8.2 to take incremental backups. Overall, the added usability and the important defects fixed in this release make MySQL Enterprise Backup 3.8.2 a promising release.

    Read the article

  • Selectively Exposing Functionality in .Net

    - by David V. Corbin
    Any developer should be aware of the principles of encapsulation, cross-tier isolation, and cross-functional separation of concerns. However, it seems that few take the time to consider the adage of "minimal yet complete"1 when developing software. Consider the exposure of "business objects" to the user interface. Some common situations occur: Accessing a given element requires a compound set of calls that do not "make sense" to the User Interface. More information than absolutely required is exposed to the user interface. It would be much cleaner if a custom interface was provided that exposed exactly (and only) the information that is required by the consumer. Achieving this using conventional techniques would require the creation (and maintenance!) of custom classes to filter and transpose the information into the ideal format. The ROI of this approach can be very difficult to ascertain, and as a result it is often ignored completely. There is another approach, which is largely made practical by virtue of the Action and Func delegates. From a caller's point of view, the following two samples can be used interchangeably:     interface ISomeInterface     {         void SampleMethod1(string param);         string SampleMethod2(string param);     }       class ISomeInterface     {         public Action<string> SampleMethod1 {get; }         public Func<string,string> SampleMethod2 {get; }     }   The capabilities this simple change enables are significant (and remember, it does not change the syntax at the call site): The delegates can be initialized to directly call the proper method of any target class. The delegates can be dynamically updated based on the current state. The "interface" can NOT be cast to the concrete class (which often exposes more functionality). By limiting the interface to the exact functionality required, the reduced surface area will typically result in lower development, testing and maintenance costs. We are currently in the process of posting a project on CodePlex which illustrates this technique (and many others) that have proven helpful in creating robust yet flexible solutions that are highly efficient2 and maintainable. This post will be updated as soon as the project is published. 1) Credit: Scott Meyers, Effective C++, Addison-Wesley 1992 2) For those who read my previous post on performance it should be noted that the use of delegates is on the same order of magnitude (actually a tiny amount faster) as conventional interfaces.
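
    As a rough illustration of how the second, delegate-based shape can be wired up (a minimal sketch; the OrderService class and its method names below are invented for this example, not taken from the original project), the delegate properties are simply pointed at the methods of whatever concrete object should service the calls:

        using System;

        // Hypothetical concrete class holding the real logic (invented for illustration).
        class OrderService
        {
            public void LogMessage(string message) => Console.WriteLine("LOG: " + message);
            public string Describe(string orderId) => "Order " + orderId + " is pending";
        }

        // The delegate-based "interface": only the two members the consumer needs.
        class OrderFacade
        {
            public Action<string> SampleMethod1 { get; }
            public Func<string, string> SampleMethod2 { get; }

            public OrderFacade(OrderService target)
            {
                // Initialized to call the proper methods of the target class;
                // they could just as easily be swapped at runtime based on current state.
                SampleMethod1 = target.LogMessage;
                SampleMethod2 = target.Describe;
            }
        }

        static class Demo
        {
            static void Main()
            {
                var facade = new OrderFacade(new OrderService());
                facade.SampleMethod1("starting");             // same call-site syntax as the interface version
                Console.WriteLine(facade.SampleMethod2("42"));
                // Note: facade cannot be cast back to OrderService, so no extra surface area leaks out.
            }
        }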

    Read the article

  • Missing Indexes DMV Report, 3 billion Impact!

    - by Tara Kizer
    We’ve been having some major performance issues with one of the applications that I support.  The database is on SQL Server 2005 and is about 150GB in size.  We’ve identified a couple of issues already on the database side.  The first issue is that some query (or maybe several queries) is getting a bad execution plan at some point in time during the day.  When it occurs, database performance comes to a grinding halt.  We know it’s a bad execution plan as running DBCC FREEPROCCACHE immediately resolves the problem system-wide.  As we have not yet identified the problematic query, we’ve put a temporary solution in place that frees the procedure cache on an hourly basis via a SQL Agent job.  This is not ideal, but it is getting us through the day without a major problem.  We are actively working on identifying the problematic query and hope to disable the SQL Agent job soon. Earlier this week, we had a major slowdown for one of the processes of this application.  I was unable to find any database performance issues, but I continued to investigate it.  One of the things that I typically do when investigating database performance issues is run the “Missing Indexes DMV Report” (that’s what I call it at least).  When analyzing the output of that report, I immediately dismiss anything under 1 million “Impact” as I want to target the “low-hanging fruit” initially.  When I ran the report earlier this week, I was shocked to find a suggested index with an impact of over 3 billion! Do I win a prize for the highest impact?  Has anyone seen a value higher than mine?  My exact value was 3154284120.67765. The performance issue from earlier this week ended up being an application problem, but it also brought to light a much needed index.  I had previously seen this index come up in that report but always with a much lower impact.  I had never considered it as the index’s selectivity is very low.  It’s a composite index with three columns.  The first column is not selective, the first two columns are not selective, and the three columns together are not selective.  In fact, no matter how I order it, the index will not be selective at all.  I briefly discussed this with Kimberly Tripp, and she said that this was okay for covering indexes.  Selectivity is irrelevant for a covering index.  She indicated that she’s even created indexes with gender as the first column in the index.  I’ve got lots to learn still!

    Read the article

  • Build Dependencies and Silverlight 4

    - by Kyle Burns
    At my current position, I’ve been doing quite a bit of Silverlight development and have also been working with TFS2010 build services to enable continuous integration.  One of the critical pieces of a successful continuous build setup (and also one of the benefits of having one) is that the build system should be able to “get latest” against the source repository and immediately build with no errors.  This can break down both in an automated build scenario and a “new guy” scenario when the solution has external dependencies that may not be present in the build environment. The method that I use to address the dependency issue is to store all of the binaries upon which my solution depends in a folder under the solution root called “Reference Items”.  I keep this folder as part of the solution and check all of the binaries into source control, so when I get the latest version of the solution from source control all of the binaries are downloaded to my machine as well.  This gets me closer to the ideal where a new developer installs the development IDE, gets latest and can immediately build and run unit tests before jumping into coding the feature of the day. This all sounds pretty good (and it is), but a little while back I ran into one of those little hiccups that requires a little manual intervention.  The issue that I ran into is that with Silverlight (at least version 4), the behavior of the “Add Reference” command when adding a reference to a DLL that is present in the GAC is to omit the HintPath element that it includes with regular .Net projects, so even if the DLL is sitting in the Reference Items folder and downloaded to the build machine it cannot be found at compile time and the build will fail. To work around this behavior, you need to be comfortable editing the XML project files generated by Visual Studio (in my case this is typically a .csproj file).  Simply open the project file in your favorite text editor, find the Reference element that refers to the component, and modify the XML to include the HintPath.  Here’s a before and after example of the component that ultimately led me to the investigation behind this post: Before: <Reference Include="Telerik.Windows.Controls, Version=2011.2.920.1040, Culture=neutral, PublicKeyToken=5803cfa389c90ce7, processorArchitecture=MSIL" /> After: <Reference Include="Telerik.Windows.Controls, Version=2011.2.920.1040, Culture=neutral, PublicKeyToken=5803cfa389c90ce7, processorArchitecture=MSIL">       <HintPath>..\Reference Items\Telerik.Windows.Controls.dll</HintPath>     </Reference>

    Read the article

  • When to use SOAP over REST

    So, how do REST-based services differ from SOAP-based services, and when should you use SOAP? Representational State Transfer (REST) implements the standard HTTP/HTTPS as an interface, allowing clients to obtain access to resources based on requested URIs. An example of a URI may look like this: http://mydomain.com/service/method?parameter=var1&parameter=var2. It is important to note that REST-based services are stateless because HTTP/HTTPS is natively stateless. One of the many benefits of implementing HTTP/HTTPS as an interface can be found in caching. Caching can be done on a web service much like caching is done on requested web pages. Caching allows for reduced web server processing and faster response times because content is already processed and stored for immediate access. Typical actions performed by REST-based services include generic CRUD (Create, Read, Update, and Delete) operations and operations that do not require state. Simple Object Access Protocol (SOAP), on the other hand, uses a generic interface in order to transport messages. Unlike REST, SOAP can use HTTP/HTTPS, SMTP, JMS, or any other standard transport protocols. Furthermore, SOAP utilizes XML in the following ways: defining a message, defining how a message is to be processed, defining the encoding of a message, and laying out procedure calls and responses. As REST aligns more with a Resource View, SOAP aligns more with a Method View in that business logic is typically exposed as methods through a SOAP web service because it can retain state. In addition, SOAP requests are not cached; therefore every request will be processed by the server. As stated before, SOAP does retain state, and this gives it a special advantage over REST for services that need to perform transactions where multiple calls to a service are needed in order to complete a task. Additionally, SOAP is better suited for enterprise-level services that implement standard exchange formats in the form of contracts, because REST does not currently support this. A real-world example of where SOAP is preferred over REST can be seen in the banking industry, where money is transferred from one account to another. SOAP would allow a bank to perform a transaction on an account and, if the transaction failed, SOAP would automatically retry the transaction, ensuring that the request was completed. Unfortunately, with REST, failed service calls must be handled manually by the requesting application. References: Francia, S. (2010). SOAP vs. REST. Retrieved 11 20, 2011, from spf13: http://spf13.com/post/soap-vs-rest Rozlog, M. (2010). REST and SOAP: When Should I Use Each (or Both)? Retrieved 11 20, 2011, from Infoq.com: http://www.infoq.com/articles/rest-soap-when-to-use-each
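
    To make the REST side of the comparison concrete, here is a minimal C# sketch of a stateless call against the kind of URI shown above (the domain and parameters are the article's placeholders, not a real service; this is illustrative only, not code from the article):

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        class RestClientExample
        {
            static async Task Main()
            {
                using var client = new HttpClient();

                // A stateless GET against the example URI: everything the server needs
                // travels with the request itself, which is what makes caching possible.
                string uri = "http://mydomain.com/service/method?parameter=var1&parameter=var2";
                string body = await client.GetStringAsync(uri);
                Console.WriteLine(body);

                // The remaining CRUD operations map onto the other HTTP verbs:
                // POST creates a resource, PUT updates it, DELETE removes it.
            }
        }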

    Read the article

  • Referencing a picture in another DLL in Silverlight and Windows Phone 7

    - by Laurent Bugnion
    This one has burned me a few times, so here is how it works for future reference: Usually, when I add an Image control into a Silverlight application, and the picture it shows is local (as opposed to loaded from the web), I set the picture’s Build Action to Content, and the Copy to Output Directory to Copy if Newer. What the compiler does then is to copy the picture to the bin\Debug folder, and then to pack it into the XAP file. In XAML, the syntax to refer to this local picture is: <Image Source="/Images/mypicture.jpg" Width="100" Height="100" /> And in C#: return new BitmapImage(new Uri( "/Images/mypicture.jpg", UriKind.Relative)); One of the features of Silverlight is to allow referencing content (pictures, resource dictionaries, sound files, movies etc…) located in a DLL directly. This is very handy because just by using the right syntax in the URI, you can do this in XAML directly, for example with: <Image Source="/MyApplication;component/Images/mypicture.jpg" Width="100" Height="100" /> In C#, this becomes: return new BitmapImage(new Uri( "/MyApplication;component/Images/mypicture.jpg", UriKind.Relative)); Side note: This kind of URI is called a pack URI and they have been around since the early days of WPF. There is a good tutorial about pack URIs on MSDN. Even though it refers to WPF, it also applies to Silverlight. Side note 2: With the Build Action set to Content, you can rename the XAP file to ZIP, extract all the files, change the picture (but keep the same name), rezip the whole thing and rename again to XAP. This is not possible if the picture is embedded in an assembly! So what’s the catch? Well, the catch is that this does not work if you set the Build Action to Content. It’s actually pretty simple to explain: the pack URI above tells the Silverlight runtime to look within an assembly named MyApplication for a file named mypicture.jpg in the Images folder. If the file is included as Content, however, it is not in the assembly. Silverlight does not find it, and silently returns nothing. The image is not displayed. And the fix? The fix, for class libraries, is to set the Build Action to Resource. With this, the picture gets packed into the DLL itself. Of course, this will increase the size of the DLL, and any change to the picture will require recompiling the class library, which is not ideal. But in the cases where you want to distribute pictures (icons etc) together with a plug-in assembly, well, this is a good way to have everything in the same place. Happy coding, Laurent   Laurent Bugnion (GalaSoft)

    Read the article

  • OWB 11gR2 &ndash; OMB and File Editing

    - by David Allan
    Here we will see how we can use the IDE for editing OMB scripts. The 11gR2 release is based on the common Oracle platform IDE used also by JDeveloper. It comes with a bunch of standard behavior for editing and rendering code. One of the lesser known things is that if you drop a text file into OWB you can edit it. So you can drop your tcl scripts right into OWB and edit them in-place, and you don’t need another IDE like Eclipse just for this task. Cool, so you have the file here. There may be no line numbers; you can toggle line numbers on by right-clicking in the gutter. If we edit the file within the OWB IDE, the save is a little different from normal. OWB doesn’t normally manipulate files, so things like Ctrl-S save the OWB objects, but if you edit a file, closing it will ask if you want to save it – check it out. Now we enter the realm of ‘he who dares’…. Note the IDE doesn’t know about tcl files out of the box, so you see above there is no syntax highlighting. The code is identified by the extension… .java is Java, .html is HTML etc. With OWB, the OMB scripts are tcl, and we usually have a .tcl extension on these files. One of the things we can do to trick up the syntax highlighting is to simply rename the file to have a .java suffix; then all of a sudden we get syntax highlighting. See the illustration here where side by side we see the file with a .java extension and a .tcl extension. It is not ideal pretending to be .java, but it gets us something more useful than Notepad. We can then change the syntax highlighting such that we get Eclipse-like highlighting within the IDE from the Tools Preferences option; you then get the Eclipse-like rendering, albeit using a little tweak on the file names… Might be useful if you are doing any kind of heavy-duty OMB script development and just want a single IDE. The OMBPlus panel is then at hand for executing and testing it out.

    Read the article

  • Regular Expression Transformation

    The regular expression transformation exposes the power of regular expression matching within the pipeline. One or more columns can be selected, and for each column an individual expression can be applied. The way multiple columns are handled can be set on the options page. The AND option means all columns must match, whilst the OR option means only one column has to match. If rows pass their tests then they are passed down the successful match output. Rows that fail are directed down the alternate output. This transformation is ideal for validating data through the use of regular expressions. You can enter any expression you like, or select a pre-configured expression within the editor. You can expand the list of pre-configured expressions yourself. These are stored in an Xml file, %ProgramFiles%\Microsoft SQL Server\nnn\DTS\PipelineComponents\RegExTransform.xml, where nnn represents the folder version, 90 for 2005, 100 for 2008 and 110 for 2012. If you want to use regular expressions to manipulate data, rather than just validating it, try the RegexClean Transformation. The component is provided as an MSI file; however, for 2005/2008 you will have to add the transformation to the Visual Studio toolbox by hand. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? – just select Regular Expression Transformation in the Choose Toolbox Items window. Downloads The Regular Expression Transformation is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed. Regular Expression Transformation for SQL Server 2005 Regular Expression Transformation for SQL Server 2008 Regular Expression Transformation for SQL Server 2012 Version History SQL Server 2012 Version 2.0.0.87 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012) SQL Server 2008 Version 2.0.0.87 - Release for SQL Server 2008 Integration Services. (10 Oct 2008) SQL Server 2005 Version 1.1.0.93 - Added option for you to choose AND or OR logic when multiple columns have been selected. Previously behaviour was OR only. (31 Jul 2008) Version 1.0.0.76 - Installer update and improved exception handling. (28 Jan 2008) Version 1.0.0.41 - Update for user interface stability fixes. (2 Aug 2006) Version 1.0.0.24 - SQL Server 2005 RTM Refresh. SP1 Compatibility Testing. (12 Jun 2006) Version 1.0.0.9 - Public Release for SQL Server 2005 IDW 15 June CTP (29 Aug 2005)
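
    Conceptually, the AND/OR option behaves like combining per-column regular expression matches. The C# fragment below is only an illustration of that idea (it is not the component's implementation, and the column values and patterns are made up):

        using System;
        using System.Text.RegularExpressions;

        class RegexMatchLogicExample
        {
            static void Main()
            {
                // Two "columns", each with its own expression (values and patterns are invented).
                string emailColumn = "jane.doe@example.com";
                string zipColumn = "90210";

                bool emailOk = Regex.IsMatch(emailColumn, @"^[^@\s]+@[^@\s]+\.[^@\s]+$");
                bool zipOk = Regex.IsMatch(zipColumn, @"^\d{5}$");

                // AND option: every selected column must match for the row to take the match output.
                bool passesAnd = emailOk && zipOk;

                // OR option: a single matching column is enough.
                bool passesOr = emailOk || zipOk;

                Console.WriteLine($"AND: {passesAnd}, OR: {passesOr}");
            }
        }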

    Read the article

  • EPM Planning (Hyperion) V11.1.2 Implementation Hands-On Boot-camp

    - by Mike.Hallett(at)Oracle-BI&EPM
    5-Day Training for Partners: 29th October - 2nd November 2012, London (UK): REGISTER Here This FREE for Partners 5-day workshop is designed to provide implementation instruction on Oracle Hyperion EPM Planning.  This boot-camp is intended for prospective implementers of the Planning and Budgeting functionality of Oracle EPM or implementers that are currently familiar with the basics of EPM Planning and looking to strengthen their base of knowledge in the product. The class begins with an overview of Essbase, the foundation of Hyperion Planning. It provides a general overview of Planning and Planning terms, the architecture of all the Planning components, and how they are commonly used. The course goes over all the steps to create an application from scratch. This involves some preparation work outside of Planning and leads to developing the application in both the Planning Windows and Web clients. Participants will modify existing dimensions and build out the hierarchies using the Web client. Topics Covered The boot-camp shows developers how to build out dimensions using Classic Planning and by using EPMA. It covers the mechanics and cover strategies for automating the build process such as interface tables. It reviews data loads using Load Rules to load the Planning database. The course focuses on tasks that end-users must perform during the planning cycle. It walks students through creating and modifying forms, working with forms to enter data, adding annotations, and the rest of the form features such as running business rules and managing task lists. It covers how to use the forms in the Smart View client and finishes up the end-user perspective by going through Workflow Management and the process of submitting a plan for review. The final section of the course covers Security and other administration topics such as automation and deployment. Prerequisites Ideal participants are Oracle partners (SIs and resellers) with a background in business information systems and a clientele of customers with ongoing or prospective EPM initiatives. Alternatively, partners with the background described above and an interest in evolving their practice to a similar profile are suitable participants. Further online OPN guided learning path information and webinars are available at: Oracle Hyperion Planning 11 Essentials. Please note that attendees are required to bring a laptop. View here laptop requirements and detailed agenda. ·       REGISTER Here : acceptance is subject to availability and your place will be confirmed within two weeks  ( and for help see the Partner Registration Guide ). Training Location: Oracle Corporation UK Ltd Columbus Room Customer Visit Center 1 South Place London EC2M 2RB Training Dates: 29th October - 2nd November  9:30 am – 5:00 pm BST For more information please contact [email protected].

    Read the article

  • Move Data into the grid for scalable, predictable response times

    - by JuergenKress
    CloudTran is pleased to introduce the availability of the CloudTran Transaction and Persistence Manager for creating scalable, reliable data services on the Oracle Coherence In-Memory Data Grid (IMDG). Use of IMDG architectures has been key to handling today’s web-scale loads because it eliminates database latency by storing important and frequently accessed data in memory instead of on disk. The CloudTran product lets developers easily use an IMDG for full ACID-compliant transactions without having to be concerned about the location or spread of data. The system has its own implementation of fast, scalable distributed transactions that does NOT depend on XA protocols but still guarantees all ACID properties. Plus, CloudTran asynchronously replicates data going into the IMDG to back-end datastores and back-up data centers, again ensuring ACID properties. CloudTran can be accessed through the Java Persistence API (JPA via TopLink Grid) and now through a new Low-Level API, or LLAPI. This is ideal for use in SOA applications that need data reliability, high availability, performance, and scalability. Still in its limited beta release, the LLAPI gives developers the ability to use the standard put/remove logic available in Coherence and then wrap that logic with simple Spring annotations or XML+AspectJ to start transactions. An important feature of LLAPI is the ability to join transactions. This is a common outcome for SOA applications that need to reduce network traffic by aggregating data into single cache entries and then doing SOA service processing in the node holding the data. This results in the need to orchestrate transaction processing across multiple service calls. CloudTran has the capability to handle these “multi-client” transactions at speed with no loss in ACID properties. Developing software around an IMDG like Oracle Coherence is an important choice for today’s web-scale applications and services. But this introduces new architectural considerations to maintain scalability in light of increased network loads and data movement. Without using CloudTran, developers are faced with an incredibly difficult task to ensure data reliability, availability, performance, and scalability when working with an IMDG. Working with highly distributed data that is entirely volatile while stored in memory presents numerous edge cases where failures can result in data loss. The CloudTran product takes care of all of this, leaving developers with the confidence and peace of mind that all data is processed correctly. For those interested in evaluating the CloudTran product and IMDGs, take a look at this link for more information: http://www.CloudTran.com/downloadAPI.ph, or send your questions to [email protected]. SOA & BPM Partner Community For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Cheating on Technical Debt

    - by Tony Davis
    One bad practice guaranteed to cause dismay amongst your colleagues is passing on technical debt without full disclosure. There could only be two reasons for this. Either the developer or DBA didn’t know the difference between good and bad practices, or concealed the debt. Neither reflects well on their professional competence. Technical debt, or code debt, is a convenient term to cover all the compromises between the ideal solution and the actual solution, reflecting the reality of the pressures of commercial coding. The one time you’re guaranteed to hear one developer, or DBA, pass judgment on another is when he or she inherits their project, and is surprised by the amount of technical debt left lying around in the form of inelegant architecture, incomplete tests, confusing interface design, no documentation, and so on. It is often expedient for a Project Manager to ignore the build-up of technical debt, the cut corners, not-quite-finished features and rushed designs that mean progress is satisfyingly rapid in the short term. It’s far less satisfying for the poor person who inherits the code. Nothing sends a colder chill down the spine than the dawning realization that you’ve inherited a system crippled with performance and functional issues that will take months of pain to fix before you can even begin to make progress on any of the planned new features. It’s often hard to justify this ‘debt paying’ time to the project owners and managers. It just looks as if you are making no progress, in marked contrast to your predecessor. There can be many good reasons for allowing technical debt to build up, at least in the short term. Often, rapid prototyping is essential, there is a temporary shortfall in test resources, or the domain knowledge is incomplete. It may be necessary to hit a specific deadline with a prototype, or proof-of-concept, to explore a possible market opportunity, with planned iterations and refactoring to follow later. However, it is a crime for a developer to build up technical debt without making this clear to the project participants. He or she needs to record it explicitly. A design compromise made in order to hit a deadline, be it an outright hack, or a decision made without time for rigorous investigation and testing, needs to be documented with the same rigor that one tracks a bug. What’s the best way to do this? Ideally, we’d have some kind of objective assessment of the level of technical debt in a software project, although that smacks of Science Fiction even as I write it. I’d be interested to hear of any methods you’ve used, but I’m sure most teams have to rely simply on the integrity of their colleagues and the clear perceptions of the project manager… Cheers, Tony.

    Read the article

  • Five Fake Sounds Engineered to Make You Feel Better [Science]

    - by Jason Fitzpatrick
    As objects in our environment (like cars, ATMs, and phones) have grown lighter and quieter, scientists have been carefully engineering their sounds so that they continue to sound like we expect them to. Read on to see how. At the design blog Humans Invent they share five interesting ways that the world around us is being engineered so it sounds the way we expect it to. They start with the example of the car door. Years ago cars were almost entirely steel, the doors were weighty, and when you slammed them it sounded like one big hunk of steel locking into another big hunk of steel (which, in fact, it was). Newer cars are lighter but people still crave that substantial clunk. Humans Invent highlights the effect of consumer desire: A car door is essentially a hollow shell with parts placed inside it. Without careful design the door frame amplifies the rattling of mechanisms inside. Car companies know that if buyers don’t get a satisfying thud when they close the door, it dents their confidence in the entire vehicle. To produce the ideal clunk, car doors are designed to minimise the amount of high frequencies produced (we associate them with fragility and weakness) and emphasise low, bass-heavy frequencies that suggest solidity. The effect is achieved in a range of different ways – car companies have piled up hundreds of patents on the subject – but usually involves some form of dampener fitted in the door cavity. Locking mechanisms are also tailored to produce the right sort of click and the way seals make contact is precisely controlled. On average it takes 1.8 seconds to close a car door but in that time you’re witnessing a strange kind of symphony composed by engineers and designers whose goal is to reassure you that it’s rock solid. They mention lock mechanisms, something you may never have thought about. A friend of mine had a Ford Focus some years ago and that particular model had electric locks that, instead of giving a satisfying thunk or solid click, made this horrible gates-of-the-prison-buzzing sound that was completely unnerving. Hit up the link below to see how sounds are engineered for car doors, electric motors, ATM machines, and more. 5 Fake Sounds Designed to Help Humans [Humans Invent via Boing Boing]

    Read the article

  • Database unit testing is now available for SSDT

    - by jamiet
    Good news was announced yesterday for those that are using SSDT and want to write unit tests, unit testing functionality is now available. The announcement was made on the SSDT team blog in post Available Today: SSDT—December 2012. Here are a few thoughts about this news. Firstly, there seems to be a general impression that database unit testing was not previously available for SSDT – that’s not entirely true. Database unit testing was most recently delivered in Visual Studio 2010 and any database unit tests written therein work perfectly well against SQL Server databases created using SSDT (why wouldn’t they – its just a database after all). In other words, if you’re running SSDT inside Visual Studio 2010 then you could carry on freely writing database unit tests; some of the tight integration between the two (e.g. right-click on an object in SQL Server Object Explorer and choose to create a unit test) was not there – but I’ve never found that to be a problem. I am currently working on a project that uses SSDT for database development and have been happily running VS2010 database unit tests for a few months now. All that being said, delivery of database unit testing for SSDT is now with us and that is good news, not least because we now have the ability to create unit tests in VS2012. We also get tight integration with SSDT itself, the like of which I mentioned above. Having now had a look at the new features I was delighted to find that one of my big complaints about database unit testing has been solved. As I reported here on Connect a refactor operation would cause unit test code to get completely mangled. See here the before and after from such an operation: SELECT    * FROM    bi.ProcessMessageLog pml INNER JOIN bi.[LogMessageType] lmt     ON    pml.[LogMessageTypeId] = lmt.[LogMessageTypeId] WHERE    pml.[LogMessage] = 'Ski[LogMessageTypeName]of message: IApplicationCanceled' AND        lmt.[LogMessageType] = 'Warning'; which is obviously not ideal. Thankfully that seems to have been solved with this latest release. One disappointment about this new release is that the process for running tests as part of a CI build has not changed from the horrendously complicated process required previously. Check out my blog post Setting up database unit testing as part of a Continuous Integration build process [VS2010 DB Tools - Datadude] for instructions on how to do it. In that blog post I describe it as “fiddly” – I was being kind when I said that! @Jamiet

    Read the article

  • Becoming the well-integrated content company (and combating AIUTLVFS)

    - by Lance Shaw
    Every single day, each of us create more and more content. Sometimes it is brand new material and many times it is iterations of existing content, but no one would argue that information and content growth is growing at an almost exponential rate. With all this content being created and stored, a number of problems naturally arise. One of the most common issues that users run into is "Am I Using The Latest Version of this File Syndrome", or AIUTLVFS. This insidious syndrome is all too common and results in ineffective, poor or downright wrong business decisions being made.  When content or files are unavailable or incorrect within the scope of key business processes, the chance for erroneous and costly business decisions is magnified even further. For many companies, the ideal scenario is to be able to connect multiple business systems, both old and new, into one common content repository.  Not only does this reduce content duplication, it also helps guarantee that everyone in various departments is working off the proverbial "same page".  Sounds simple - but for many organizations, the proliferation of file shares, SharePoint sites, and other storage silos of content keep the dream of a more efficient business a distant one. We've created some online assets to help you in your evaluation and eventual improvement of your current content management and delivery systems. Take a few minutes to check out our Online Assessment Tool.  It's quick, easy and just might provide you with insights into how you can improve your current content ecosystem. While you are there, check out our new Infographic that outlines common issues faced by companies today. Feel free to save our informative Infographic PDF and share it with business colleagues and your management to help them understand the business costs and impact of inaction. Together we can stop AIUTLVFS in its tracks and run our businesses more effectively than ever. Additionally, we hope you will take a few minutes to visit our new and informative webpages dedicated to the value of a well connected, fully integrated content management system. It's a great place to learn more about how integrating WebCenter Content into your infrastructure can lower your operational costs while boosting process and worker efficiency.

    Read the article

  • Issue 56 - Super Stylesheets Skinning in DotNetNuke 5

    May 2010 Welcome to Issue 56 of DNN Creative Magazine In this issue we show you how to use the powerful new Super Stylesheets skinning feature in DotNetNuke 5. Super Stylesheets are ideal for both beginner and experienced skin designers; they provide skin layouts using CSS. The advantage of Super Stylesheets is that you can easily create a skin layout which works in all browsers without the need to learn complex CSS techniques. They are also very quick to build and you can change a skin layout in a matter of minutes rather than hours. We show you how to build a skin from the very beginning using Super Stylesheets, how to create various skin layouts, as well as multi-layouts. We also show you how to style the skin, how to add tokens such as the logo, menu, login links etc., and walk you through how to create a fully working skin from scratch. Following this, we continue the Open Web Studio tutorials; this month we demonstrate how to create an installable DotNetNuke PA module using OWS. This is an essential technique which allows you to package up the OWS applications that you have created and build them into an installable zip package. The zip file is then installable as a standard DotNetNuke module, which means you can easily install your OWS applications on other DotNetNuke installations. To finish, we have part six of the "How to Build a News Application with DotNetMushroom Rapid Application Developer (RAD)" article, where we demonstrate how to create a News Carousel using RAD, JQuery and the JCarousel plugin. This issue comes complete with 15 videos. Skinning: Super Stylesheets Skinning in DotNetNuke 5 - DNN Layouts (12 videos - 98mins) Module Development Series: How to Create an Installable DotNetNuke PA Module Using OWS (3 videos - 23mins) How to Implement a News Carousel Using DotNetMushroom RAD and JQuery View issue 56 to download all of the videos in one zip file DNN Creative Magazine for DotNetNuke Web Designers Covering DotNetNuke module video reviews, video tutorials, mp3 interviews, resources and web design tips for working with DotNetNuke. In 56 issues we have created 578 videos!

    Read the article

  • Manic Monday - More OpenWorld Solaris Sessions: Developers, Cloud, Customer Insights, Hardware Optimization

    - by Larry Wake
    We're overflowing with Monday sessions; literally more than one person can take in. Learn more about what's new in Oracle Solaris Studio, hear about the latest x86 and SPARC hardware optimizations, get some insights on cloud deployment strategies, and find out from your peers what they're doing with Oracle Solaris. If you're an OpenWorld attendee, go to Schedule Builder to guarantee your space in any session or lab. See yesterday's blog post and the "Focus on Oracle Solaris" guide for even more sessions. Monday, October 1st: 10:45 AM - Maximizing Your SPARC T4 Oracle Solaris Application Performance (CON6382, Marriott Marquis - Golden Gate C3) Hear how customers and commercial software partners have reached peak performance on SPARC T4 servers and engineered systems with Oracle Solaris Studio and its latest tools for analyzing, reporting, and improving runtime performance: Autoparallelizing, high-performance compilers Performance Analyzer (used to find performance hotspots) Thread Analyzer (to expose data races and deadlocks) Code Analyzer (used to discover latent memory corruption issues) 10:45 AM - Cloud Formation: Implementing IaaS in Practice with Oracle Solaris (CON8787, Moscone South 302) Decisions, decisions--at the same time, we've got a session that covers why Oracle Solaris is the ideal OS for public or private clouds, IaaS or PaaS, with built-in features for elastic infrastructure, unrivaled security, superfast installation and deployment, nonstop availability, and crystal-clear observability. This session will include a customer study on how Oracle Solaris is used in the cloud today to implement the Oracle stack. 12:15 PM - Customer Insight: Oracle Solaris on Oracle Exadata, Oracle Exalogic, and SPARC SuperCluster (CON8760, Moscone South 270) Hear from customers what benefits they have realized from using the Oracle stack on Oracle Exadata and Oracle’s SPARC SuperCluster and from using Oracle Solaris on those engineered systems, taking advantage of built-in lightweight OS virtualization (Zones), enterprise reliability and scale, and other key features. 1:45 PM - Case Study: Mobile Tornado Uses Oracle Technology for Better RAS and TCO? (CON4281, Moscone West 2005) Mobile Tornado develops and markets instant communication platforms, replacing traditional radio networks with cellular networks. Its critical concern is uptime. Find out how they've used Oracle Solaris, Netra SPARC T4, and Oracle Solaris Cluster, including Oracle Solaris ZFS and Zones, for their Oracle Database deployments to improve reliability and drive down cost. 3:15 PM - Technical Panel: Developing High Performance Applications on Oracle Solaris (CON7196, Marriott Marquis - Golden Gate C2) Engineers from the Oracle Solaris, Oracle Database, and Oracle Tuxedo development teams, and Oracle ISV Engineering discuss how they develop high-performance enterprise applications that take advantage of Oracle's SPARC and x86 servers, with Oracle Solaris Studio and new Oracle Solaris 11 features. Topics will include developer tools, parallel frameworks, best practices, and methodologies, as well as insights and case studies on parallelizing and optimizing application performance on Oracle Solaris. Bring your best questions! 3:15 PM - x86 Power Management with Oracle Solaris: Current State, Opportunities, and Future (CON6271, Moscone West 2012) Another option for this time slot: learn about how Intel Xeon and Oracle Solaris work together to reduce server power consumption.
This presentation addresses some of the recent power management improvements in Oracle Solaris, opportunities to further improve energy efficiency, and some future directions for Oracle Solaris power management.

    Read the article

  • Using HBase or Cassandra for a token server

    - by crippy
    I've been trying to figure out how to use HBase/Cassandra for a token system we're re-implementing. I can probably squeeze quite a lot more from MySQL, but it just seems we are clinging to the wrong tool for the task just because we know it well. Eventually we will hit a wall (as happened to us in other areas). Naturally I started looking into possible NoSQL solutions. The prominent ones (at least in terms of buzz) are HBase and Cassandra. The story is more or less like this: A user can send a gift to other users. Each gift has a list of recipients, or is public, in which case it is limited by number or expiration date. For each gift sent we generate some token that uniquely identifies that gift. For each gift we track the list of potential recipients and their current status relating to that gift (accepted, declined etc). A user can request to see all his currently pending gifts. A user can request a list of users he has sent a gift to today (used to limit the number of gifts sent). We also require the ability to "dump" or "ignore" expired gifts (x-day-old gifts are considered expired). There are some other requirements but I believe the above covers the essentials. How would I go about modeling that using HBase or Cassandra? Well, the wall was performance. A few tens of millions of records per day over 2 tables kept for 2 weeks (wish I could have kept it for more but there was no way). The response times kept getting slower and slower until eventually we had to start cutting down the number of days we kept data. Caching helps here but it's not an ideal solution since a big part of the ops are updates. Also, as I hinted in my original post, we use MySQL extensively. We know exactly what it can and can't do, both in naive implementations, followed by native partitioning, and finally by horizontally sharding our dataset on the application level to reside on multiple DB nodes. It can be done, but that's not really what I'm trying to get from this. I asked a very specific question about designing a solution using a NoSQL solution since it's very hard to find examples for designs out there. Brainlag, I am not trying to come off as rude. I actually appreciate it a lot that you are the only one who even bothered to respond, but I see it over and over again. People ask questions and others assume they have no idea what they're talking about and give an irrelevant answer. Ignore RDBMS please. The question is about NoSQL.

    Read the article

  • Hadoop, NOSQL, and the Relational Model

    - by Phil Factor
    (Guest Editorial for the IT Pro/SysAdmin Newsletter) Whereas Relational Databases fit the world of commerce like a glove, it is useless to pretend that they are a perfect fit for all human endeavours. Although, with SQL Server, we’ve made great strides with indexing text, in processing spatial data and processing markup, there is still a problem in dealing efficiently with large volumes of ephemeral semi-structured data. Key-value stores such as Cassandra, Project Voldemort, and Riak are of great value for ephemeral data, and seem of equal value as a data-feed that provides aggregations to an RDBMS. However, the Document databases such as MongoDB and CouchDB are ideal for semi-structured data for which no fixed schema exists; analytics and logging are obvious examples. NoSQL products, such as MongoDB, tackle the semi-structured data problem with panache. MongoDB is designed with a simple document-oriented data model that scales horizontally across multiple servers. It doesn’t impose a schema, and relies on the application to enforce the data structure. This is another take on the old ‘EAV’ problem (where you don’t know in advance all the attributes of a particular entity). It uses a clever replica set design that allows automatic failover, and uses journaling for data durability. It allows indexing and ad-hoc querying. However, for SQL Server users, the obvious choice for handling semi-structured data is Apache Hadoop. There will soon be an ODBC Driver for Apache Hive and an Add-in for Excel. Additionally, there are now two Hadoop-based connectors for SQL Server: the Apache Hadoop connector for SQL Server 2008 R2, and the SQL Server Parallel Data Warehouse (PDW) connector. We can connect to Hadoop, process the semi-structured data, and then store it in SQL Server. For one steeped in the culture of Relational SQL Databases, I might be expected to throw up my hands in the air in a gesture of contempt for a technology that was, judging by the overblown journalism on the subject, about to make my own profession as archaic as the saggar maker's bottom knocker (a potter’s assistant who helped the saggar maker to make the bottom of the saggar by placing clay in a metal hoop and bashing it). However, on the contrary, I find that I'm delighted with the advances made by the NoSQL databases in the past few years. Having the flow of ideas from the NoSQL providers will knock any trace of complacency out of the providers of Relational Databases and inspire them to back-fit some features, such as horizontal scaling with sharding and automatic failover, into SQL-based RDBMSs. It will do the breed a power of good to benefit from all this lateral thinking.

    Read the article

  • Is there an API for determining congressional districts?

    - by ardavis
    I'm looking to determine the congressional district based on an address my user is providing. This will avoid having the user look it up themselves. Does an API of this sort exist? Note: Through my attempts to find one, I've only come across these: http://www.govtrack.us/developers/api (not sure how to submit an address or zip code, however) The following resources are available in the API ...Bills and resolutions in the U.S. Congress since 1973 (the 93rd Congress). ...A (bill, person) pair indicating cosponsorship, with join and withdrawn dates. ...Members of Congress and U.S. Presidents since the founding of the nation. ...Terms held in office by Members of Congress and U.S. Presidents. Each term corresponds with an election, meaning each term in the House covers two years (one 'Congress'), as President four years, and in the Senate six years (three 'Congresses'). ...Roll call votes in the U.S. Congress since 1789. How people voted is accessed through the Vote_voter API. ...How people voted on roll call votes in the U.S. Congress since 1789. See the Vote API. Filter on the vote field to get the results of a particular vote... http://www.opencongress.org/api (seems to be a way to find congress information, but not districts) This API provides programmers with structured access to all the data on OpenCongress, everything from official bill info to news and blog coverage to user-generated votes on bills and much more... This API defaults to returning XML. All queries can also return JSON... https://groups.google.com/forum/?fromgroups=#!topic/opendems-discuss/CeKyi_aANaE (similar question, no resolution) I've been looking over Open Dems, and seeing what's exposed at this point and what isn't. I work with Democrats Abroad, and am interested in using stuff from the lab for their sites. I quickly looked over the Precinct API, which does both more and less than what I'd need. An ideal resource would be any way of translating addresses into CD at the very least (getting state district data would be good as well), since that would make it easier for DA's membership to make a difference in races like last month's NY26 race... Update: I'm looking at the source for the govtrack.us website and the 'doGeoCode' function may be useful. view-source:http://www.govtrack.us/congress/members If no one has any suggestions, I will try to go off of what they are doing.

    Read the article

  • .Net Application & Database Modularity/Reuse

    - by Martaver
    I'm looking for some guidance on how to architect an app with regards to modularity, separation of concerns and re-usability. I'm working on an application (ASP.Net, C#) that has distinctly generic chunks of functionality that I'd love to be able to lift out, all layers, into re-usable components. This means the module handles the database schema, data access, API, everything, so that the next time I want to use it I can just register the module and hook into it. Developing modules of re-usable functionality is a no-brainer, but what is really confusing me is what to do when it comes to handling a core re-usable database schema that serves the module's functionality. In an ideal world, I would register a module and it would ensure that the associated database schema exists in the DB. I would code on the assumption that the tables exist, calling the module's functionality through the DLL, agnostic of the database layer. Kind of like Enterprise Library's Caching/Logging Application Block, which can create a DB schema in the target DB to use as a data store. My question is: what do you think is the best way to achieve this, firstly in terms of design architecture, and secondly in terms of solution structure? What patterns/frameworks do you know that exist & support this kind of thing? My thoughts so far: I mostly use Entity Framework and SQL Server DB Projects. I thought about a 'black box' approach to modules of functionality. I could use a code-first approach in EF4, and use the ObjectContext to create a database when the module is initialized. However this means that all of the entities that my module encapsulates would be disconnected from the rest of the application because they belong to an abstracted ObjectContext. Further, creating appropriate indexes and references between domain entities and the module's entities would be practically impossible. I've thought of adopting Enterprise Library and creating my own Application Blocks. I'm not sure how this would play nice with Entity Framework (if at all), though. I like the idea of building on proven patterns & practices to encapsulate established, reusable functionality. I thought of abandoning Entity Framework for the module, and just creating a separate DB schema for the module with its own set of stored procedures & ADO.Net, then deploying the script at run-time if interrogation shows that it doesn't exist. But once again, for development outside of the module I would want to use Entity Framework, and I would have to use the module separately, disconnected from the domain ObjectContext. Has anyone had experience developing these sorts of full-stack modules? What advice can you offer? Am I biting off more than I can chew?
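
    One way to prototype the 'module owns its own schema' idea is Entity Framework code-first, sketched below in C#. The sketch uses the later DbContext API rather than EF4's ObjectContext, and every name in it (ModuleContext, CacheEntry, the ModuleDb connection string) is made up for illustration: the host registers the module, the module creates its tables if they are missing, and callers go through the module's methods without touching the schema.

        using System;
        using System.Data.Entity;   // EntityFramework NuGet package

        // Entity owned entirely by the module (illustrative only).
        public class CacheEntry
        {
            public int Id { get; set; }
            public string Key { get; set; }
            public string Value { get; set; }
            public DateTime ExpiresUtc { get; set; }
        }

        // The module's private context; the host application never sees these tables directly.
        public class ModuleContext : DbContext
        {
            public ModuleContext() : base("name=ModuleDb") { }   // connection string name is an assumption
            public DbSet<CacheEntry> CacheEntries { get; set; }
        }

        public static class CachingModule
        {
            // Called once when the host application registers the module.
            public static void Initialize()
            {
                using (var ctx = new ModuleContext())
                {
                    ctx.Database.CreateIfNotExists();   // create the module's schema on first use
                }
            }

            public static void Put(string key, string value, TimeSpan ttl)
            {
                using (var ctx = new ModuleContext())
                {
                    ctx.CacheEntries.Add(new CacheEntry
                    {
                        Key = key,
                        Value = value,
                        ExpiresUtc = DateTime.UtcNow + ttl
                    });
                    ctx.SaveChanges();
                }
            }
        }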

    Read the article

  • Microsoft and Application Architectures

    Microsoft has dealt with several kinds of application architectures, including but not limited to desktop applications, web applications, operating systems, relational database systems, Windows services, and web services. Because of the size and market share of Microsoft, virtually every modern language works with or around a Microsoft product. Some of the languages and technologies include: Visual Basic, VB.Net, C#, C++, C, ASP.net, ASP, HTML, CSS, JavaScript, Java and XML. From my experience, Microsoft strives to maintain an n-tier application standard where an application is composed of multiple layers that perform specific functions; for example, the presentation layer, business layer and data access layer are three general layers that just about every formally structured application contains. The presentation layer contains anything to do with displaying information to the screen and how it appears on the screen. The business layer is the middle man between the presentation layer and the data access layer, and transforms data from the data access layer into usable information to be stored later or sent to an output device through the presentation layer. The data access layer does as its name implies: it allows the business layer to access data from a data source like MS SQL Server, XML, or another data source. One of my favorite technologies that Microsoft has come out with recently is the .Net Framework. This framework allows developers to code an application in multiple languages and compiles them into one intermediate language, the Common Intermediate Language (CIL, formerly MSIL), which runs on the Common Language Runtime (CLR). This allows VB and C# developers to work seamlessly together as if they were working in the same project. The only real disadvantage to using the .Net Framework is that it only natively runs on Microsoft operating systems. However, Microsoft does control a majority of the operating systems currently installed on modern computers and servers, especially personal home computers. Given that the Microsoft .Net Framework is so flexible, it is ideal for businesses to develop applications around it, as long as they are willing to commit to using Microsoft technologies and operating systems in the future. I have been a professional developer for 9+ years now and have seen the .Net Framework work flawlessly in just about every instance I have used it. In addition, I have used it to develop web applications, mobile phone applications, desktop applications, web service applications, and Windows service applications, to name a few.
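
    As a deliberately tiny illustration of that layering, here is a C# sketch in which each class plays one of the three roles described above; the names are invented and the data access layer is stubbed with an in-memory list rather than a real database.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Data access layer: the only code that knows where the data lives.
        class CustomerRepository
        {
            private static readonly List<string> Store = new List<string> { "Ada", "Grace", "Linus" };
            public IEnumerable<string> GetAllNames() => Store;
        }

        // Business layer: turns raw data into something useful; no UI, no SQL.
        class CustomerService
        {
            private readonly CustomerRepository _repo = new CustomerRepository();
            public IEnumerable<string> GetNamesAlphabetically() => _repo.GetAllNames().OrderBy(n => n);
        }

        // Presentation layer: only concerned with showing the result.
        class Program
        {
            static void Main()
            {
                var service = new CustomerService();
                foreach (var name in service.GetNamesAlphabetically())
                    Console.WriteLine(name);
            }
        }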

    Read the article

  • Alert: It is No Longer 1982, So Why is CRM Still There?

    - by Mike Stiles
    Hot on the heels of Oracle’s recent LinkedIn integration announcement and Oracle Marketing Cloud Interact 2014, the Oracle Social Cloud is preparing for another big event, the CRM Evolution conference and exhibition in NYC. The role of social channels in customer engagement continues to grow, and social customer engagement will be a significant theme at the conference. According to Paul Greenberg, CRM Evolution Conference Chair, author, and Managing Principal at The 56 Group, social channels have become so pervasive that there is no longer a clear reason to make a distinction between “social CRM” and traditional CRM systems. Why not? Because social is a communication hub every bit as vital and used as the phone or email. What makes social different is that if you think of it as a phone, it’s a party line. That means customer interactions are far from secret, and social connections are listening in by the hundreds, hearing whether their friend is having a positive or negative experience with your brand. According to a Mention.com study, 76% of brand mentions are neutral, neither positive nor negative. These mentions fail to get much notice. So think what that means about the remaining 24% of mentions. They stand out, because a verdict about you is being rendered in them, usually with emotion. Suddenly, where the R of CRM has been lip service and somewhat expendable in the past, “relationship” takes on new meaning, seriousness, and urgency. Remarkably, legions of brands still approach CRM as if it were 1982. Today, brands must provide customer experiences the customer actually likes (how dare they expect such things). They must intimately know not only their customers, but each customer, because technology now makes personalized experiences possible. That’s why the Oracle Social Cloud has been so mission-oriented about seamlessly integrating social with sales, marketing and customer service interactions so the enterprise can have an actionable 360-degree view of the customer. It’s the key to that customer-centricity we hear so much about these days. If you’re attending CRM Evolution, Chris Moody, Director of Product Marketing for the Oracle Marketing Cloud, will show you how unified customer experiences and enhanced customer centricity will help you attract and keep ideal customers and brand advocates (“The Pursuit of Customer-Centricity”, Aug 19 at 2:45p ET). And Meg Bear, Group Vice President for the Oracle Social Cloud, will sit on a panel talking about “terms of engagement” and the ways tech can now enhance your interactions with customers (Aug 20 at 10a ET). If you can’t be there, we’ll be doing our live-tweeting thing from the @oraclesocial handle, so make sure you’re a faithful follower. You’ll notice NOBODY is writing about the wisdom of “company-centricity.” Now is the time to bring your customer relationship management into the socially connected age. @mikestiles Photo: Sue Pizarro, freeimages.com

    Read the article

  • Move Data into the Grid for Scalable, Predictable Response Times

    - by JuergenKress
    CloudTran is pleased to introduce the availability of the CloudTran Transaction and Persistence Manager for creating scalable, reliable data services on the Oracle Coherence In-Memory Data Grid (IMDG). Use of IMDG architectures has been key to handling today’s web-scale loads because it eliminates database latency by storing important and frequently accessed data in memory instead of on disk. The CloudTran product lets developers easily use an IMDG for full ACID-compliant transactions without having to be concerned about the location or spread of data. The system has its own implementation of fast, scalable distributed transactions that does NOT depend on XA protocols but still guarantees all ACID properties. Plus, CloudTran asynchronously replicates data going into the IMDG to back-end datastores and back-up data centers, again ensuring ACID properties. CloudTran can be accessed through the Java Persistence API (JPA, via TopLink Grid) and now through a new Low-Level API, or LLAPI. This is ideal for use in SOA applications that need data reliability, high availability, performance, and scalability. Still in limited beta release, the LLAPI gives developers the ability to use the standard put/remove logic available in Coherence and then wrap that logic with simple Spring annotations or XML+AspectJ to start transactions. An important feature of the LLAPI is the ability to join transactions. This is a common requirement for SOA applications that need to reduce network traffic by aggregating data into single cache entries and then doing SOA service processing in the node holding the data, which results in the need to orchestrate transaction processing across multiple service calls. CloudTran has the capability to handle these “multi-client” transactions at speed with no loss of ACID properties. Developing software around an IMDG like Oracle Coherence is an important choice for today’s web-scale applications and services, but it introduces new architectural considerations for maintaining scalability in light of increased network loads and data movement. Without CloudTran, developers face an incredibly difficult task in ensuring data reliability, availability, performance, and scalability when working with an IMDG. Working with highly distributed data that is entirely volatile while stored in memory presents numerous edge cases where failures can result in data loss. The CloudTran product takes care of all of this, leaving developers with the confidence and peace of mind that all data is processed correctly. For those interested in evaluating the CloudTran product and IMDGs, take a look at this link for more information: http://www.CloudTran.com/downloadAPI.php, or send your questions to [email protected]. WebLogic Partner Community For regular information become a member of the WebLogic Partner Community; please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: Coherence,cloudtran,cache,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

< Previous Page | 44 45 46 47 48 49 50 51 52 53 54 55  | Next Page >