
  • Missed the Call for Papers Deadline? Don't Despair!

    - by [email protected]
    Now, You've Got a Second Chance You were skiing in the Alps. Your dog ate your paper. You were locked in a time capsule that opened March 22 (one day after the Call for Papers deadline). No matter what your reason was for missing the deadline, you can still have a say in what's covered at Oracle OpenWorld 2010 and Oracle Develop 2010. We've just brought back the Suggest a Session program. And that means you've got a second chance to suggest presentations for Oracle OpenWorld and Oracle Develop 2010 and to share your ideas, experiences, and accomplishments with Oracle customers, developers, and partners. So hang up your skis and show us what you've got. The deadline for submission is June 20. Get all the information on the Suggest a Session process, timeline, and guidelines.


  • Wireframing: A Day In the Life of UX Workshop at Oracle

    - by ultan o'broin
    The Oracle Applications User Experience team's Day in the Life (DITL) of User Experience (UX) event was run in Oracle's Redwood Shores HQ for Oracle Usability Advisory Board (OUAB) members. I was charged with putting together a wireframing session, along with the Director of Financial Applications User Experience, Scott Robinson (@scottrobinson). Example of the stunning new wireframing visuals we used at the DITL events. We put on a lively show, explaining the basics of wireframing, the concepts, what it is and isn't, considerations on wireframing tool choice, and then imparting some tips and best practices. But the real energy came when the OUAB customers and partners in the room were challenged to do some wireframing of their own. Wireframing is about bringing your business and product use cases to life in real UX visual terms, by creating a low-fidelity drawing to iterate and agree on in advance of prototyping and coding what is to be finally built and rolled out for users. All the best people wireframe. Leonardo da Vinci used "cartoons" on some great works, tracing outlines first and using red ochre or charcoal dropped through holes in the tracing parchment onto the canvas to outline the subject. (Image distributed under Wikimedia Commons license.) Wireframing an application's user experience design enables you to: Obtain stakeholder buy-in. Enable faster iteration of different designs. Determine the task flow navigation paths (in Oracle Fusion Applications, navigation is linked with user roles). Develop a content strategy (readability, search engine optimization (SEO) of content, and so on). Lay out the pages, widgets, groups of features, and so on. Apply usability heuristics early (no replacement for usability testing, but a great way to do some heavy lifting up front). Decide upstream which functional user experience design patterns to apply (out-of-the-box solutions that expedite productivity). Assess which Oracle Application Development Framework (ADF) or equivalent technology components can be used (again, developer productivity is enhanced downstream). We ran a lively hands-on exercise where teams wireframed a choice of application scenarios using the time-honored tools of pen and paper. Scott worked the floor like a pro, pointing out great use of features, best practices, innovations, and making sure that the whole concept of wireframing, the gestalt, transferred. "We need more buttons!" The cry of the energized. Not quite. The winning wireframe session (online shopping scenario) from the Applications UX DITL event is shown above. Great fun, great energy, and great teamwork were evident in the room. Naturally, there were prizes for the best wireframe. Well, actually, prizes were handed out to the other attendees too! An exciting, slightly different approach to delivery made this wireframing session one of the highlights of the day, and definitely something we will repeat when we get the chance. Thanks to everyone who attended, contributed, and helped organize.


  • How to leverage the internal HTTP endpoint available on Azure web roles?

    - by Alfredo Delsors
    Imagine you have a Web application using an in-memory collection that changes occasionally but is used very often. The collection gets loaded from storage in the Application_Start global.asax event and is updated whenever its content changes. If you want to deploy this application on Azure you need to keep in mind that more than one instance of the application can be running at any time, and therefore you need to provide some mechanism to keep all instances informed of the latest changes. Because communication through internal endpoints between Azure role instances is at no cost, a good solution can be to maintain the information in Azure Storage Tables, read its contents in the Application_Start event, and propagate changes to all other instances using the internal HTTP port available on Azure Web Roles. Follow these steps to leverage the internal HTTP endpoint available on Azure web roles to keep all instances up to date.
    1. Define an internal HTTP endpoint in the Web Role properties, for example InternalHttpEndpoint.
    2. Add a new WCF service to the Web Role, for example NotificationService.svc.
    3. Disable multiple site bindings in web.config: <serviceHostingEnvironment multipleSiteBindingsEnabled="false">
    4. Add a method on the new service to receive notifications from other role instances.
    namespace Service
    {
        [ServiceContract]
        public interface INotificationService
        {
            [OperationContract(IsOneWay = true)]
            void Notify(Information info);
        }
    }
    5. Declare a class that inherits from System.ServiceModel.Activation.ServiceHostFactory and override the method CreateServiceHost to host the internal endpoint.
    public class InternalServiceFactory : ServiceHostFactory
    {
        protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
        {
            var internalEndpointAddress = string.Format(
                "http://{0}/NotificationService.svc",
                RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["InternalHttpEndpoint"].IPEndpoint);
            ServiceHost host = new ServiceHost(
                typeof(NotificationService), new Uri(internalEndpointAddress));
            BasicHttpBinding binding = new BasicHttpBinding(BasicHttpSecurityMode.None);
            host.AddServiceEndpoint(
                typeof(INotificationService), binding, internalEndpointAddress);
            return host;
        }
    }
    Note that you can use BasicHttpSecurityMode.None because the internal endpoint is private to the instances of the service.
    6. Edit the markup of the service (right-click the .svc file and select "View markup") to add the new factory as the one used to create the service:
    <%@ ServiceHost Language="C#" Debug="true" Factory="Service.InternalServiceFactory" Service="Service.NotificationService" CodeBehind="NotificationService.svc.cs" %>
    7. Now you can notify changes to the other instances using this code:
    var current = RoleEnvironment.CurrentRoleInstance;
    var endPoints = current.Role.Instances
        .Where(instance => instance != current)
        .Select(instance => instance.InstanceEndpoints["InternalHttpEndpoint"]);
    foreach (var ep in endPoints)
    {
        EndpointAddress address = new EndpointAddress(
            String.Format("http://{0}/NotificationService.svc", ep.IPEndpoint));
        BasicHttpBinding binding = new BasicHttpBinding(BasicHttpSecurityMode.None);
        var factory = new ChannelFactory<INotificationService>(binding);
        INotificationService instance = factory.CreateChannel(address);
        instance.Notify(changedinfo);
    }
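    Not shown in the steps above is the service implementation behind the contract. As a rough sketch (the Information type is from the contract above; the InMemoryCache class and its Apply method are hypothetical placeholders for whatever in-memory collection your application maintains), the receiving side can simply apply the pushed change to its local copy:
    using System.ServiceModel;

    namespace Service
    {
        // Sketch only: applies a change pushed by another role instance
        // to this instance's in-memory collection.
        public class NotificationService : INotificationService
        {
            public void Notify(Information info)
            {
                // InMemoryCache stands in for the collection loaded in Application_Start.
                InMemoryCache.Current.Apply(info);
            }
        }
    }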


  • Deleting ASP.NET application subdirectories causes application recycle!

    - by geekrutherford
    This may not be news to most people, but it was definitely a shock to me! In the .NET 2.x framework a "feature" was implemented whereby an ASP.NET application is automatically recycled if any subdirectory is deleted. This was apparently implemented to prevent stale content from appearing on a site. The unfortunate side effect of this "feature" is that when using the "InProc" model for session management, all session data is lost if a subdirectory is deleted. For those who may be programmatically adding and deleting directories within their application as inherent functionality, this causes a rather large problem. The solution? Create the folder(s) which may be programmatically deleted outside of the application's root folder. Alternatively, utilize a file-based structure instead of folders, since deleting files does not cause the same issue.
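    A rough sketch of the first workaround (the appSettings key and paths here are hypothetical): keep any folder you intend to delete outside the application root, so the runtime's directory monitor never sees the delete and the app domain - along with InProc session state - survives.
    using System;
    using System.Configuration;
    using System.IO;

    public static class WorkFolders
    {
        public static void RunJob()
        {
            // Configured to point somewhere outside the site root, e.g. "D:\AppWork" (hypothetical).
            string workRoot = ConfigurationManager.AppSettings["WorkFolderRoot"];
            string jobFolder = Path.Combine(workRoot, Guid.NewGuid().ToString());

            Directory.CreateDirectory(jobFolder);
            // ... write and process temporary files for this job ...
            Directory.Delete(jobFolder, true); // nothing under the app root is touched, so no recycle
        }
    }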


  • Upgrading to Code Based Migrations EF 4.3.1 with Connector/Net 6.6

    - by GABMARTINEZ
    Entity Framework 4.3.1 includes a new feature called code first migrations. We are adding support for this feature in our upcoming 6.6 release of Connector/Net. In this walk-through we'll see the workflow of code-based migrations when you have an existing application and would like to upgrade to EF 4.3.1 and use this approach, so you can keep track of the changes you make to your database. The first thing we need to do is add the new Entity Framework 4.3.1 package to our application. This should be done via the NuGet package manager. You can read more about why EF is not part of the .NET framework here.
    Adding EF 4.3.1 to our existing application: Inside VS 2010 go to Tools -> Library Package Manager -> Package Manager Console; this will open the PowerShell host window where we can work with all the EF commands. In order to install this library into your existing application you should type Install-Package EntityFramework. This will make some changes to your application, so let's check them. In your .config file you'll see a <configSections> entry which contains the EntityFramework version you have, and an <entityFramework> section was also added. This section is by default configured to use SQL Express, which won't be necessary for this case, so you can comment it out or leave it empty. Also please make sure you're using the Connector/Net 6.6.x version, which is the one that has this support. At this point we face one issue: in order to be able to work with migrations we need the __MigrationHistory table, which we don't have yet since our database was created with an older version. This table is used to keep track of the changes in our model, so we need to get it into our existing database.
    Getting a migration-history table into an existing database: The first thing we need to do to enable migrations in our existing application is to create our configuration class, which will set up the MySqlClient provider as our SQL generator. So we have to add it with the following code:
    using System.Data.Entity.Migrations;  // add this at the top of your cs file
    public class Configuration : DbMigrationsConfiguration<NameOfYourDbContext>  // make sure to use the name of your existing DbContext
    {
        public Configuration()
        {
            // Set automatic migrations to false since we'll be applying the migrations manually in this case.
            this.AutomaticMigrationsEnabled = false;
            SetSqlGenerator("MySql.Data.MySqlClient", new MySql.Data.Entity.MySqlMigrationSqlGenerator());
        }
    }
    This code sets up the configuration that we'll be using when executing all the migrations for our application. Once we have done this we can build our application to check that everything is fine.
    Creating our initial migration: Now let's add our initial migration. In Package Manager Console, execute "add-migration InitialCreate"; you can use any other name, but I like to set this as our initial create for future reference. After we run this command, some changes are made to our application: a new Migrations folder is created, along with a new migration class called InitialCreate, which in most cases should have empty Up and Down methods as long as your database is up to date with your model. Since all your entities already exist, delete any duplicated code that would create an entity which already exists in your database. I find this easier when you don't have any pending updates to make to your database.
    Now we have our empty migration that will make no changes to our database and represents how things stand at the beginning of our migrations. Finally, let's create our __MigrationHistory table. Optionally, you can add SQL code to delete the EdmMetadata table, which is not needed anymore:
    public override void Up()
    {
        // Just make sure that you used version 4.1 or later
        Sql("DROP TABLE EdmMetadata");
    }
    From our Package Manager Console let's type: Update-Database. If you'd like to see the operations made on each Update-Database command you can add the -Verbose flag. This will make two important changes. First, it will execute the Up method in the initial migration, which makes no changes to the database. Second, and very important, it will create the __MigrationHistory table necessary to keep track of your changes; the next time you make a change to your database it will compare the current model to the one stored in the Model column of this table.
    Conclusion: The important point of this walkthrough is that we must create our initial migration before we start making any changes to our model. This way we'll be adding the necessary __MigrationHistory table to our existing database, so we can keep our database up to date with all the changes we make to our context model using migrations. Hope you have found this information useful. Please let us know if you have any questions or comments, and also please check our forums here, where we keep answering questions for the community. Happy MySQL/Net coding!
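    From this point on, every model change gets its own migration that is applied through the SQL generator configured above. As a purely hypothetical example (the Customer entity and Phone column are invented for illustration), after adding a new property you would run "add-migration AddCustomerPhone" and get a scaffolded class similar to the sketch below, which "update-database" then applies:
    namespace YourApp.Migrations
    {
        using System.Data.Entity.Migrations;

        // Hypothetical scaffolded migration for a new Customer.Phone property.
        public partial class AddCustomerPhone : DbMigration
        {
            public override void Up()
            {
                AddColumn("Customers", "Phone", c => c.String(maxLength: 20));
            }

            public override void Down()
            {
                DropColumn("Customers", "Phone");
            }
        }
    }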


  • I'm back

    - by Rob Addis
    I haven’t posted much for a while but after 3 years as a Solution Architect I’m back to my old role designing and developing SOA Frameworks and Management Platforms. So you may be hearing a bit more from me in the future.


  • Guaranteed Restore Points as Fallback Method

    - by Mike Dietrich
    Thanks to the great audience yesterday at the Upgrade & Migration Workshop in Utrecht. That was really fun, and I was amazed by our new facilities (and the "wellness" lights surrounding the plenum room's walls). And another reason why I like to do these workshops is that I often learn new things from you. So credits here to Rick van Ek, who highlighted the following topic to me. Yesterday (and in some previous workshops) I mentioned during the discussion about fallback strategies that you'll have to switch on Flashback Database beforehand to create a guaranteed restore point in case you encounter an issue during the database upgrade. I knew that since Oracle Database 11.2 we've made it possible to switch Flashback Database on without taking the database into MOUNT status (you could always switch it off while the database is open, in all releases). But before Oracle Database 11.2 that did require MOUNT status.
    SQL> create restore point rp1 guarantee flashback database ;
    create restore point rp1 guarantee flashback database
    *
    ERROR at line 1:
    ORA-38784: Cannot create restore point 'RP1'.
    ORA-38787: Creating the first guaranteed restore point requires mount mode when flashback database is off.
    But Rick mentioned that I won't need to switch Flashback Database on to create a guaranteed restore point. And he's right - in older releases I would have had to go into MOUNT state to define the restore point, which meant restarting the database, but in 11.2 that's not necessary anymore. And the same applies when you upgrade your pre-11.2 database (e.g. an Oracle Database 10.2.0.4) to Oracle Database 11.2. As soon as you start your "old" not-yet-upgraded database in your 11.2 environment with STARTUP UPGRADE you can define a guaranteed restore point. If you tail the alert.log you'll see that the database starts the RVWR (Recovery Writer) background process - you'll just have to make sure that you have defined values for db_recovery_file_dest_size and db_recovery_file_dest.
    SQL> startup upgrade
    ORACLE instance started.
    Total System Global Area  417546240 bytes
    Fixed Size                  2228944 bytes
    Variable Size             134221104 bytes
    Database Buffers          272629760 bytes
    Redo Buffers                8466432 bytes
    Database mounted.
    Database opened.
    SQL> create restore point grpt guarantee flashback database;
    Restore point created.
    SQL> drop restore point grpt;
    And don't forget to drop that restore point sooner or later, as it is guaranteed - and will fill up your Fast Recovery Area pretty quickly. Just on the side: in any case, archivelog mode is required if you'd like to work with restore points. - Mike


  • New Marketing Kits for Hardware

    - by A&C Redaktion
    To support your sales efforts, Oracle Marketing Kits are now also available in German for the following hardware solutions: Server & Storage: Improve Database Capacity Management with Oracle Storage and Hybrid Columnar Compression Server & Storage: Accelerating Database Test & Development with Sun ZFS Storage Appliance Server & Storage: Upgrade SAN Storage to Oracle Pillar Axiom Server & Storage: SPARC Refresh with Oracle Solaris Operating System Server & Storage: SPARC Server Refresh: The Next Level of Datacenter Performance with Oracle’s New SPARC Servers Server & Storage: Oracle Server Virtualization Server & Storage: Oracle Desktop Virtualization


  • Why is Double.Parse so slow?

    - by alexhildyard
    I was recently investigating a bottleneck in one of my applications, which read a CSV file from disk using a TextReader a line at a time, split the tokens, called Double.Parse on each one, then shunted the results into an object list. I was surprised to find it was actually the Double.Parse which seemed to be taking up most of the time. Googling turned up this, which is a little unfocused in places but throws out some excellent ideas: it makes more sense to work with binary format directly, rather than coerce strings into doubles; there is a significant performance improvement in composing doubles directly from the byte stream via long intermediaries; and String.Split is inefficient on fixed-length records. In fact it turned out that my problem was more insidious and also more mundane - a simple case of bad data in, bad data out. Since I had been serialising my Doubles as strings, when I inadvertently divided by zero and produced a "NaN", this of course was serialised as well without error. And because I was reading in using Double.Parse, these "NaN" fields were also (correctly) populating real Double objects without error. The issue is that Double.Parse("NaN") is incredibly slow. In fact, it is of the order of 2000x slower than parsing a valid double. For example, the code below gave me results of 357ms to parse 1,000 NaNs, versus 15ms to parse 100,000 valid doubles.
    const int invalid_iterations = 1000;
    const int valid_iterations = invalid_iterations * 100;
    const string invalid_string = "NaN";
    const string valid_string = "3.14159265";

    DateTime start = DateTime.Now;
    for (int i = 0; i < invalid_iterations; i++)
    {
        double invalid_double = Double.Parse(invalid_string);
    }
    Console.WriteLine(String.Format("{0} iterations of invalid double, time taken (ms): {1}",
        invalid_iterations,
        ((TimeSpan)DateTime.Now.Subtract(start)).Milliseconds
    ));

    start = DateTime.Now;
    for (int i = 0; i < valid_iterations; i++)
    {
        double valid_double = Double.Parse(valid_string);
    }
    Console.WriteLine(String.Format("{0} iterations of valid double, time taken (ms): {1}",
        valid_iterations,
        ((TimeSpan)DateTime.Now.Subtract(start)).Milliseconds
    ));
    I think the moral is to look at the context - specifically the data - as well as the code itself. Once I had corrected my data, the performance of Double.Parse was perfectly acceptable, and while clearly it could have been improved, it was now sufficient to my needs.
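    The cheapest fix of all, of course, is to keep non-finite values out of the data in the first place. A minimal sketch of that guard (the method and class names are mine, not from the original application) might look like this:
    using System.Globalization;
    using System.IO;

    static class CsvNumbers
    {
        // Refuse to serialise NaN/Infinity so the reader never has to parse "NaN".
        public static string FormatField(double value)
        {
            if (double.IsNaN(value) || double.IsInfinity(value))
                throw new InvalidDataException("Refusing to serialise a non-finite value.");
            return value.ToString("R", CultureInfo.InvariantCulture);
        }

        // On the way back in, fail fast on bad data instead of letting NaN propagate.
        public static double ParseField(string token)
        {
            double d = double.Parse(token, CultureInfo.InvariantCulture);
            if (double.IsNaN(d) || double.IsInfinity(d))
                throw new InvalidDataException("Non-finite value in input: " + token);
            return d;
        }
    }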


  • Oracle Unveils AutoVue Release 20.1

    - by prasenjit.niyogi(at)oracle.com
    We are extremely pleased to announce the availability of Oracle's AutoVue Release 20.1. AutoVue 20.1 is the latest major release of the family of Enterprise Visualization solutions from Oracle. Highlights of the release include: Unparalleled new format support and enhancements for 3D CAD, 2D CAD, ECAD, and PDF documents. New capabilities that support end-to-end design-to-manufacture processes in the Electronics & High Tech space, allowing manufacturing engineers to perform accurate manufacturability reviews through better support for variants, overlays, and polarity. Significant printing enhancements, such as printing of markup notes, support for Excel file print settings, and print in grayscale, which serve to optimize paper-based business processes. Powerful integration enablement capabilities to extend visualization into existing enterprise architectures and systems, including AutoVue Hotspots that enable visual navigation and action by linking visual data to structured enterprise data, and new AutoVue Document Print Services (DPS) to enrich enterprise applications with format- and platform-agnostic printing of any document type. Improvements for cost-effective AutoVue deployment and administration, including support for virtualization. Release 20.1 Webcast - Attend the webcast on April 13th at 12:00 pm EST to discover what is new and exciting in the latest release. Encourage your customers, prospects, and partners to attend. Title: Oracle Unveils AutoVue Release 20.1. Channel: Oracle AutoVue Channel. Register here: http://www.brighttalk.com/webcast/26282 To discover more about the latest release, and to find out what customers and partners are saying about the value of this offering, check out the What's New in AutoVue 20.1 Datasheet. You can also learn all about the latest format support in the AutoVue 20.1 Format Support Sheet. We look forward to seeing you at the webcast. If you have any questions feel free to ask, and we will answer them in this forum. Enjoy AutoVue 20.1!


  • Oracle ties social, CRM, analytics products to customer experience

    - by Richard Lefebvre
    Oracle will embark on a new product strategy that centers on customer experience management, an approach driven by the company’s many recent acquisitions.  The new approach, announced by the company Monday night, will be seen in an expansive suite that features familiar Oracle products -- such as its Fusion CRM platform -- and offerings the company recently gained through acquisitions, including FatWire, RightNow and Vitrue. Billed as Oracle Customer Experience (CX), the suite enables businesses to respond to a market centered on the customer experience, said Anthony Lye, the company’s senior vice president of CRM. Companies “are very aware their products are commoditizing,” Lye said in an interview last week, referring to how the Web and social media channels have empowered customers. Customer experiences start and mature outside of CRM, and applications today need to reflect that shift, Lye said. Businesses thus need to step away from a pure CRM model, he said. Oracle claims CX will improve customer experience management by connecting businesses with customers across Web sites and social channels. Companies can create a single, real-time view of the customer and use predictive analytics of interactions to strengthen the customer experience, Oracle said. “Companies have to connect with their customers wherever, whenever and however they want,” Lye said. “They have to know and understand their customer.” Lye promoted Oracle CX as a suite that will work across channels to complement the company’s applications. A new strategy has been “cooking” for years now, but the acquisitions Oracle has made over the past two years made the time right for a “unique collaboration,” Lye said. CX includes basic Oracle CRM solutions such as Siebel and the new Fusion Apps. It also includes the company’s MDM products, Enterprise Data Quality, Customer Hub and Product Hub. And the suite is rounded out by the services that Oracle recently bought, transactions that created or enhanced the company’s presence in social, marketing, e-commerce and customer service. For instance, FatWire provides tools for marketing. ATG focuses on e-commerce. And RightNow specializes in customer service. Two recent acquisitions -- Collective Intellect and Vitrue -- gave Oracle a seat at the social table. Collective Intellect is a social intelligence program, and Vitrue is a social marketing and engagement platform. Those acquisitions have yet to be finalized. Oracle hopes to eventually integrate the two social offerings, as well as most of the other services, into the CX suite. CX can integrate on Oracle’s standard middleware, and can give users a lower TCO by leveraging it as a single stack on premise or as a cloud solution. Lye deferred questions about the pricing of CX, and instead pitched Oracle’s ability to offer multiple customer experience solutions in one suite. Businesses have struggled with the complexity of infrastructure and modern services that communicate with customers, Lye said. “They’ve struggled to pull all these things together. We’ve done that,” he said. Stephen Powers, a research director at Forrester Research Inc. in Cambridge, Mass., said it’s not surprising for Oracle to offer the CX suite and a related customer experience strategy.  “They’ve got CRM, ATG, FatWire. Clearly, it’s been the strategy for them,” he said. But the challenge for Oracle, and for any other vendor that has gone on an “acquisition spree,” is to connect its many products, Powers said. “The portfolio has to be more than the parts. 
They’ve got to realize the efficiencies and value of having these pieces to tie them together,” he said. “The proof is in the pudding. Adobe has done a nice job in its space with the products they’ve got. Now, Oracle has got to show it has something.” Albert McKeon (SearchCRM) Published: 25 Jun 2012 : http://searchcrm.techtarget.com/news/2240158644/Oracle-ties-social-CRM-analytics-products-to-customer-experience


  • WebCenter Customer Spotlight: Hyundai Motor Company

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter. Solution Summary: Hyundai Motor Company is one of the world’s fastest-growing car manufacturers, ranked as the fifth-largest in 2011. The company also operates the world’s largest integrated automobile manufacturing facility in Ulsan, Republic of Korea, which can produce 1.6 million units per year. They undertook a project to improve business efficiency and reinforce data security by centralizing the company’s sales, financial, and car manufacturing documents into a single repository. Hyundai Motor Company chose Oracle Exalogic, Oracle Exadata, Oracle WebLogic Server, and Oracle WebCenter Content 11g, as they provided better performance, stability, storage, and scalability than their competitors. Hyundai Motor Company cut the overall time spent each day on document-related work by around 85%, saved more than US$1 million in paper and printing costs, laid the foundation for a smart work environment, and supported their future growth in the competitive car industry. Company Overview: Hyundai Motor Company is one of the world’s fastest-growing car manufacturers, ranked as the fifth-largest in 2011. The company also operates the world’s largest integrated automobile manufacturing facility in Ulsan, Republic of Korea, which can produce 1.6 million units per year. The company strives to enhance its brand image and market recognition by continuously improving the quality and design of its cars. Business Challenges: To maximize the company’s growth potential, Hyundai Motor Company undertook a project to improve business efficiency and reinforce data security by centralizing the company’s sales, financial, and car manufacturing documents into a single repository. Specifically, they wanted to: introduce a smart work environment to improve staff productivity and efficiency, and take advantage of rapid company growth due to new, enhanced car designs; replace a legacy document system managed by individual staff to improve collaboration, the visibility of corporate documents, and sharing of work-related files between employees; improve the security and storage of documents containing corporate intellectual property, and prevent intellectual property loss when staff leave the company; eliminate delays when downloading files from the central server to a PC; build a large, single document repository to more efficiently manage and share data between 30,000 staff at the company’s headquarters; and establish a scalable system that can be extended to Hyundai offices around the world. Solution Deployed: After conducting a large-scale benchmark test, Hyundai Motor Company chose Oracle Exalogic, Oracle Exadata, Oracle WebLogic Server, and Oracle WebCenter Content 11g, as they provided better performance, stability, storage, and scalability than their competitors.
Business Results Lowered the overall time spent each day on all document-related work by approximately 85%—from 4.5 hours to around 42 minutes on an average day Saved more than US$1 million per year in printer, paper, and toner costs, and laid the foundation for a completely paperless environment Reduced staff’s time spent requesting and receiving documents about car sales or designs from supervisors by 50%, by storing and managing all documents across the corporation in a single repository Cut the time required to draft new-car manufacturing, sales, and design documents by 20%, by allowing employees to reference high-quality data, such as marketing strategy and product planning documents already in the system Enhanced staff productivity at company headquarters by 9% by reducing the document-related tasks of 30,000 administrative and research and development staff Ensured the system could scale to hold 3 petabytes of car sales, manufacturing, and design data by 2013 and be deployed at branches worldwide We chose Oracle Exalogic, Oracle Exadata, and Oracle WebCenter Content to support our new document-centralization system over their competitors as Oracle offers stable storage for petabytes of data and high processing speeds. We have cut the overall time spent each day on document-related work by around 85%, saved more than US$1 million in paper and printing costs, laid the foundation for a smart work environment, and supported our future growth in the competitive car industry. Kang Tae-jin, Manager, General Affairs Team, Hyundai Motor Company Additional Information Hyundai Motor Company Customer Snapshot Oracle WebCenter Content


  • Nike Achieves Scalability and Performance with Oracle Coherence & Exadata

    - by Michelle Kimihira
    Today, we are featuring a customer interview with Nicole Otto, Senior Director, Consumer Digital Tech at Nike, who talks about how they achieved scalability and performance with Oracle Coherence and Oracle Exadata. Additional Information: Product Information on Oracle.com: Oracle Fusion Middleware. Follow us on Twitter and Facebook. Subscribe to our regular Fusion Middleware Newsletter.


  • Why Does Adding a UDF or Code Truncate the # of Resources in a List?

    - by Jeffrey McDaniel
    Go to the Primavera - Resource Assignment History subject area. Go under Resources, General and add the fields Resource Id, Resource Name and Current Flag. Because this is a historical subject area with Type II slowly changing dimensions for Resources, you may get multiple rows for each resource if there have been any changes on the resource. You may see a few records with current flag = 0, and you will see a row with current flag = 1 for all resources. Current flag = 1 represents the most up-to-date row for this resource. In this query the OBI server is only querying the W_RESOURCE_HD dimension (query from the nqquery log):
    select distinct 0 as c1,
         D1.c1 as c2,
         D1.c2 as c3,
         D1.c3 as c4
    from
         (select distinct T10745.CURRENT_FLAG as c1,
                  T10745.RESOURCE_ID as c2,
                  T10745.RESOURCE_NAME as c3
             from
                  W_RESOURCE_HD T10745 /* Dim_W_RESOURCE_HD_Resource */
             where ( T10745.LAST_RUN_PER_DAY_FLAG = 1 )
         ) D1
    If you add a resource code to the query, you are now forcing the OBI server to include data from W_RESOURCE_HD and W_CODES_RESOURCE_HD, as well as W_ASSIGNMENT_SPREAD_HF. Because the Resource and Resource Codes are in different dimensions, they must be joined through a common fact table. So any time you are pulling data from different dimensions it will ALWAYS pass through the fact table in that subject area. One rule is that if there is no fact value related to that dimensional data, nothing will show. In this case, if you have a list of 100 resources when you query just Resource Id, Resource Name and Current Flag, but the list drops to 60 when you add a Resource Code, it could be because those resources exist at a dictionary level but are not assigned to any activities and therefore have no facts. As discussed in a previous blog, it's all about the facts. Here is a look at the query returned from the OBI server when trying to query Resource Id, Resource Name, Current Flag and a Resource Code. You'll see in the query that there is an actual fact included (AT_COMPLETION_UNITS) even though it is never returned when viewing the data through the Analysis.
    select distinct 0 as c1,
         D1.c2 as c2,
         D1.c3 as c3,
         D1.c4 as c4,
         D1.c5 as c5,
         D1.c1 as c6
    from
         (select sum(T10754.AT_COMPLETION_UNITS) as c1,
                  T10706.CODE_VALUE_02 as c2,
                  T10745.CURRENT_FLAG as c3,
                  T10745.RESOURCE_ID as c4,
                  T10745.RESOURCE_NAME as c5
             from
                  W_RESOURCE_HD T10745 /* Dim_W_RESOURCE_HD_Resource */ ,
                  W_CODES_RESOURCE_HD T10706 /* Dim_W_CODES_RESOURCE_HD_Resource_Codes_HD */ ,
                  W_ASSIGNMENT_SPREAD_HF T10754 /* Fact_W_ASSIGNMENT_SPREAD_HF_Assignment_Spread */
             where ( T10706.RESOURCE_OBJECT_ID = T10754.RESOURCE_OBJECT_ID and T10706.LAST_RUN_PER_DAY_FLAG = 1 and T10745.ROW_WID = T10754.RESOURCE_WID and T10745.LAST_RUN_PER_DAY_FLAG = 1 and T10754.LAST_RUN_PER_DAY_FLAG = 1 )
             group by T10706.CODE_VALUE_02, T10745.RESOURCE_ID, T10745.RESOURCE_NAME, T10745.CURRENT_FLAG
         ) D1
    order by c4, c5, c3, c2
    When querying any subject area where you cross different dimensions, especially Type II slowly changing dimensions, if the result set appears to be short, the first place to look is whether that object has associated facts.


  • Early Adopters of Oracle Enterprise Manager 12c Report Agility and Productivity Benefits

    - by Anand Akela
    Earlier this month at Oracle OpenWorld 2012, we celebrated the first anniversary of Oracle Enterprise Manager 12c. Early adopters of Oracle Enterprise Manager 12c have benefited from its federated self-service access to complete application stacks, automated provisioning, elastic scalability, metering, and charge-back capabilities. Crimson Consulting Group recently interviewed multiple early adopters of Oracle Enterprise Manager 12c and captured their findings in the white paper "Real-World Benefits of Private Cloud: Early Adopters of Oracle Enterprise Manager 12c Report Agility and Productivity Gains". On October 25th at 10 AM Pacific time, Kirk Bangstad from the Crimson Consulting Group will join us in a live webcast and share what he learned from the early adopters of Oracle Enterprise Manager 12c. Don't miss this chance to hear how private clouds could impact your business and to ask questions of our experts. Webcast: Real-World Benefits of Private Cloud - Early Adopters of Oracle Enterprise Manager 12c Report Agility and Productivity Benefits. Date: Thursday, October 25, 2012. Time: 10:00 AM PDT | 1:00 PM EDT. Register Today. All attendees will receive the white paper: Real-World Benefits of Private Cloud: Early Adopters of Oracle Enterprise Manager 12c Report Agility and Productivity Gains. Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter


  • C++11 Tidbits: Decltype (Part 2, trailing return type)

    - by Paolo Carlini
    Following on from the last tidbit, which showed how the decltype operator essentially queries the type of an expression, the second part of this overview discusses how decltype can be syntactically combined with auto (itself the subject of the March 2010 tidbit). This combination can be used to specify trailing return types, also known informally as "late specified return types". Leaving aside the technical jargon, a simple example from section 8.3.5 of the C++11 standard usefully introduces this month's topic. Let's consider a template function like: template <class T, class U> ??? foo(T t, U u) { return t + u; } The question is: what should replace the question marks? The problem is that we are dealing with a template, thus we don't know at the outset the types of T and U. Even if they were restricted to be arithmetic builtin types, non-trivial rules in C++ relate the type of the sum to the types of T and U. In the past - in the GNU C++ runtime library too - programmers used to address these situations by way of rather ugly tricks involving __typeof__ which now, with decltype, could be rewritten as: template <class T, class U> decltype((*(T*)0) + (*(U*)0)) foo(T t, U u) { return t + u; } Of course the latter is guaranteed to work only for builtin arithmetic types, e.g., '0' must make sense. In short: it's a hack. On the other hand, in C++11 you can use auto: template <class T, class U> auto foo(T t, U u) -> decltype(t + u) { return t + u; } This is much better. It's generic and a construct fully supported by the language. Finally, let's see a real-life example directly taken from the C++11 runtime library as implemented in GCC: template<typename _IteratorL, typename _IteratorR> inline auto operator-(const reverse_iterator<_IteratorL>& __x, const reverse_iterator<_IteratorR>& __y) -> decltype(__y.base() - __x.base()) { return __y.base() - __x.base(); } By now it should appear completely straightforward. The availability of trailing return types in C++11 allowed fixing a real bug in the C++98 implementation of this operator (and many similar ones). In GCC, C++98 mode, this operator is: template<typename _IteratorL, typename _IteratorR> inline typename reverse_iterator<_IteratorL>::difference_type operator-(const reverse_iterator<_IteratorL>& __x, const reverse_iterator<_IteratorR>& __y) { return __y.base() - __x.base(); } This was guaranteed to work well with heterogeneous reverse_iterator types only if difference_type was the same for both types.


  • The Use-Case Driven Approach to Change Management

    - by Lauren Clark
    In the third entry of the series on OUM and PMI’s Pulse of the Profession, we took a look at the continued importance of change management and risk management. The topic of change management and OUM’s use-case driven approach has come up in a few recent conversations, so I thought I would jot down a few thoughts on how the use-case driven approach aids a project team in managing the project’s scope. The use-case model is one of several tools in OUM that are used to establish and manage the project's scope. Because a use-case model can be understood by both business and IT project team members, it can serve as a bridge for ongoing collaboration as well as a visual diagram that encapsulates all agreed-upon functionality. This makes it a vital artifact in identifying changes to the project’s scope. Here are some of the primary benefits of using the use-case model as part of the effort for establishing and managing project scope: The use-case model quickly communicates scope in a straightforward manner. All project stakeholders can have a common foundation for the decisions regarding architecture and design and how they relate to the project's objectives. Once agreed upon, the model can be put under change control and any updates to the model can then be quickly identified as potentially affecting the project’s scope. Changes requested or discovered later in the project can be analyzed objectively for their impact on the project's budget, resources, and schedule. A modular foundation for the design of the software solution can be established in Elaboration. This permits work to be divided up effectively and executed so that the most important and riskiest use-cases can be tackled early in the project. The use-case model helps the team make informed decisions about implementation priorities, which allows effective allocation of limited project resources. This is very helpful not only in managing scope, but also in doing iterative and incremental planning, which relies heavily on the ability to identify project priorities. The bottom line is that the use-case model gives the project team a solid understanding of scope early in the project. Combine this understanding with effective project management and communication and you have an effective tool for reducing the risk of overruns in budget and/or time due to out-of-control scope changes. Now that you’ve had a chance to read these thoughts on the use-case model and project scope, please let me know your feedback based on your experience.


  • Video Did Not Kill the Podcast Star

    - by Justin Kestelyn
    Who says video killed the podcast star? We're seeing more favorites out there than ever before. For example, the OTN team is proud to be supporters of the Java Spotlight Podcasts, straight from the official Java Evangelist Team at Oracle (lots of great insider info); the OurSQL: The MySQL Database Podcasts, produced by MySQL maven (and Oracle ACE Director) Sheeri Cabral; and The GlassFish Podcast, always a reliable source. And we'd add The Java Posse and The Basement Coders to our personal playlist. And although we're on a video kick ourselves at the moment, you can still get the audio of our TechCast Live shows, if you think we have "faces for radio."


  • Brand New Oracle WebLogic 12c Online Launch Event, December 1st, 18:00 GMT

    - by swalker
    The brand new WebLogic 12c will be released on December 1st 2011. Please join Hasan Rizvi on December 1, as he unveils the next generation of the industry’s #1 application server and cornerstone of Oracle’s cloud application foundation—Oracle WebLogic Server 12c. Hear, with your fellow IT managers, architects, and developers, how the new release of Oracle WebLogic Server is: Designed to help you seamlessly move into the public or private cloud with an open, standards-based platform Built to drive higher value for your current infrastructure and significantly reduce development time and cost Optimized to run your solutions for Java Platform, Enterprise Edition (Java EE); Oracle Fusion Middleware; and Oracle Fusion Applications Enhanced with transformational platforms and technologies such as Java EE 6, Oracle’s Active GridLink for RAC, Oracle Traffic Director, and Oracle Virtual Assembly Builder Don’t miss this online launch event on December 1st, 18:00 GMT. Register Now For regular information become a member of the WebLogic Partner Community please register at http://www.oracle.com/partners/goto/wls-emea


  • Oracle NoSQL Database Exceeds 1 Million Mixed YCSB Ops/Sec

    - by Charles Lamb
    We ran a set of YCSB performance tests on Oracle NoSQL Database using SSD cards and Intel Xeon E5-2690 CPUs with the goal of achieving 1M mixed ops/sec on a 95% read / 5% update workload. We used the standard YCSB parameters: 13 byte keys and 1KB data size (1,102 bytes after serialization). The maximum database size was 2 billion records, or approximately 2 TB of data. We sized the shards to ensure that this was not an "in-memory" test (i.e. the data portion of the B-Trees did not fit into memory). All updates were durable and used the "simple majority" replica ack policy, effectively 'committing to the network'. All read operations used the Consistency.NONE_REQUIRED parameter, allowing reads to be performed on any replica. In the past we have achieved 100K ops/sec using SSD cards on a single-shard cluster (replication factor 3), so for this test we used 10 shards on 15 Storage Nodes, with each SN carrying 2 Rep Nodes and each RN assigned to its own SSD card. After correcting a scaling problem in YCSB, we blew past the 1M ops/sec mark with 8 shards and proceeded to hit 1.2M ops/sec with 10 shards. Hardware Configuration: We used 15 servers, each configured with two 335 GB SSD cards. We did not have homogeneous CPUs across all 15 servers available to us, so 12 of the 15 were Xeon E5-2690, 2.9 GHz, 2 sockets, 32 threads, 193 GB RAM, and the other 3 were Xeon E5-2680, 2.7 GHz, 2 sockets, 32 threads, 193 GB RAM. There might have been some upside in having all 15 machines configured with the faster CPU, but since CPU was not the limiting factor we don't believe the improvement would be significant. The client machines were Xeon X5670, 2.93 GHz, 2 sockets, 24 threads, 96 GB RAM. Although the clients had 96 GB of RAM, neither the NoSQL Database nor the YCSB clients require anywhere near that amount of memory, and the test could just as easily have been run with much less. Networking was all 10GigE. YCSB Scaling Problem: We made three modifications to the YCSB benchmark. The first was to allow the test to accommodate more than 2 billion records (effectively ints vs. longs). To keep the key size constant, we changed the code to use base 32 for the user ids. The second change involved the way we run the YCSB client, in order to make the test itself horizontally scalable. The basic problem has to do with the way the YCSB test creates its Zipfian distribution of keys, which is intended to model "real" loads by generating clusters of key collisions. Unfortunately, the percentage of collisions on the most contentious keys remains the same even as the number of keys in the database increases. As we scale up the load, the number of collisions on those keys increases as well, eventually exceeding the capacity of the single server used for a given key. This is not a workload that is realistic or amenable to horizontal scaling. YCSB does provide alternate key distribution algorithms, so this is not a shortcoming of YCSB in general. We decided that a better model would be for the key collisions to be limited to a given YCSB client process. That way, as additional YCSB client processes (i.e. additional load) are added, they each maintain the same number of collisions they encounter themselves, but do not increase the number of collisions on a single key in the entire store. We added client processes proportionally to the number of records in the database (and therefore the number of shards).
    This change to the use of YCSB better models a use case where new groups of users are likely to access either just their own entries, or entries within their own subgroups, rather than all users showing the same interest in a single global collection of keys. If an application finds every user having the same likelihood of wanting to modify a single global key, that application has no real hope of getting horizontal scaling. Finally, we used read/modify/write (also known as "Compare And Set") style updates during the mixed phase. This uses versioned operations to make sure that no updates are lost. This mode of operation provides better application behavior than the way we have typically run YCSB in the past, and is only practical at scale because we eliminated the shared key collision hotspots. It is also a more realistic testing scenario. To reiterate, all updates used a simple majority replica ack policy making them durable.
    Scalability Results (mixed workload): In the table below, the "KVS Size" column is the number of records with the number of shards and the replication factor. Hence, the first row indicates 400m total records in the NoSQL Database (KV Store), 2 shards, and a replication factor of 3. The "Clients" column indicates the number of YCSB client processes. "Threads" is the number of threads per process with the total number of threads. Hence, 90 threads per YCSB process for a total of 360 threads. The client processes were distributed across 10 client machines.
    Shards | KVS Size (records) | Clients | Threads   | Overall Throughput (ops/sec) | Read Latency av/95%/99% (ms) | Write Latency av/95%/99% (ms)
    2      | 400m (2x3)         | 4       | 90 (360)  | 302,152                      | 0.76/1/3                     | 3.08/8/35
    4      | 800m (4x3)         | 8       | 90 (720)  | 558,569                      | 0.79/1/4                     | 3.82/16/45
    8      | 1600m (8x3)        | 16      | 90 (1440) | 1,028,868                    | 0.85/2/5                     | 4.29/21/51
    10     | 2000m (10x3)       | 20      | 90 (1800) | 1,244,550                    | 0.88/2/6                     | 4.47/23/53


  • Walkthrough: Scheduling jobs using Quartz.net &ndash; Part 1: What is Quartz.Net?

    - by Tarun Arora
    Quartz.NET is a full-featured, open source enterprise job scheduling system written for the .NET platform that can be used in anything from the smallest apps to large-scale enterprise systems. What is the problem that we are trying to address? I want to schedule the execution of a task, but only when something happens. Let’s call that something a trigger, so... if the trigger is met => execute the task. Sounds simple, so why not use Windows Task Scheduler for this? Well, Windows Task Scheduler is great for tasks where the trigger can be easily defined. But with Windows Task Scheduler, would you be able to schedule a task to run on every working day according to the UK calendar (excluding all weekends and bank holidays) without either writing the day-check logic into the task or a wrapper script calling into the task? The task should just contain the execution logic and should not have anything to do with the schedule for execution; Quartz.net allows you to achieve this and lots more. A quartz.net trigger gives you the flexibility for task invocation based on the following triggers: 1. at a certain time of day (to the millisecond) 2. on certain days of the week 3. on certain days of the month 4. on certain days of the year 5. not on certain days listed within a registered Calendar (such as business holidays) 6. repeated a specific number of times 7. repeated until a specific time/date 8. repeated indefinitely 9. repeated with a delay interval. Did 8 – repeated indefinitely – just ring a bell? I’ll be covering that in a future post. Using Quartz.net as a windows service: You can have Quartz.net run as a standalone instance within its own .NET virtual machine instance via .NET Remoting. Let’s take a look at a typical application architecture. In the figure below, I have the application tier set up on Machine 1, the database set up on Machine 2, and Quartz.net set up on Machine 3, which is normally the architecture for most (if not all) enterprise applications. Figure 1 - Typical application architecture while using Quartz.net as a windows service. What other options do I have if I don’t want to use Quartz.net? Quartz.net is just one of many job scheduling services. Have a look at this comprehensive list of free and paid enterprise job scheduling software along with their feature comparison: http://en.wikipedia.org/wiki/List_of_job_scheduler_software This was the first in a series of posts on enterprise scheduling using Quartz.net; in the next post I’ll be covering how to install Quartz.net as a windows service. Thank you for taking the time out to read this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!
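    To make the trigger idea concrete, here is a minimal sketch using the Quartz.Net 2.x fluent API (the job class, group names, and cron expression are just illustrative examples; a calendar of UK bank holidays would still need to be registered separately, as discussed above):
    using Quartz;
    using Quartz.Impl;

    // A job contains only the execution logic - no scheduling concerns.
    public class ReportJob : IJob
    {
        public void Execute(IJobExecutionContext context)
        {
            // ... produce the daily report ...
        }
    }

    public static class SchedulerSetup
    {
        public static void Start()
        {
            IScheduler scheduler = StdSchedulerFactory.GetDefaultScheduler();
            scheduler.Start();

            IJobDetail job = JobBuilder.Create<ReportJob>()
                .WithIdentity("dailyReport", "reports")
                .Build();

            // Fire at 07:00 on weekdays; bank-holiday exclusion would be layered on via a Quartz calendar.
            ITrigger trigger = TriggerBuilder.Create()
                .WithIdentity("weekdayMorning", "reports")
                .WithCronSchedule("0 0 7 ? * MON-FRI")
                .Build();

            scheduler.ScheduleJob(job, trigger);
        }
    }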


  • Strange 401 (Unauthorized) when calling a WCF Service

    - by mipsen
    A WCF service we call from BizTalk using WCF BasicHttp usually works fine, but all of a sudden it started returning 401 errors for some calls while others continued working as expected, so it could not have been a "real" 401. The difference was the size of the message. One parameter of the service is a rather complex object, and in the cases where we got a 401 it was quite big (containing a lot of customer data), say 5 MB. So we turned on tracking. The messages we traced out were about 20 MB. Not too big for WCF, one should suppose... A bit of research led us to increasing maxItemsInObjectGraph in the behaviors, but that did not help. The service we call is in the same network as we are and is a WCF service, so we tried changing from BasicHttp to net.tcp and - bingo! OK, we had to use CustomBinding in BizTalk to set all the quotas, etc., but it worked in the end.
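    For reference, a rough sketch of what the equivalent switch looks like in plain client code (the IBigMessageService contract and the address are hypothetical; in BizTalk the same quota values end up in the WCF-Custom/CustomBinding configuration rather than in code):
    using System;
    using System.ServiceModel;
    using System.ServiceModel.Description;

    // Hypothetical contract standing in for the real service interface.
    [ServiceContract]
    public interface IBigMessageService
    {
        [OperationContract]
        void Submit(string payload);
    }

    public static class BigMessageClient
    {
        public static void Call()
        {
            // Raise the size-related quotas well above the expected ~20 MB message.
            var binding = new NetTcpBinding(SecurityMode.Transport)
            {
                MaxReceivedMessageSize = 50 * 1024 * 1024
            };
            binding.ReaderQuotas.MaxArrayLength = 50 * 1024 * 1024;
            binding.ReaderQuotas.MaxStringContentLength = 50 * 1024 * 1024;

            var factory = new ChannelFactory<IBigMessageService>(
                binding, new EndpointAddress("net.tcp://server:9000/BigMessageService"));

            // maxItemsInObjectGraph lives on the serializer behavior, not on the binding.
            foreach (OperationDescription op in factory.Endpoint.Contract.Operations)
            {
                var dcs = op.Behaviors.Find<DataContractSerializerOperationBehavior>();
                if (dcs != null) dcs.MaxItemsInObjectGraph = int.MaxValue;
            }

            IBigMessageService channel = factory.CreateChannel();
            channel.Submit("...large payload...");
        }
    }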


  • Oracle's Director of NoSQL Database Product Management talks with ODBMS.ORG

    - by thegreeneman
    I was pinged today by one of my favorite database technology sites, ODBMS.ORG, informing me that Dave Segleau, the Director of Oracle NoSQL Database product management, spent some time talking with their editor Roberto Zicari about the product. It's a great interview and I highly recommend the read. I think it's important to understand the connectivity that Oracle NoSQL Database (ONDB) has with BerkeleyDB, as it says a lot about the maturity of ONDB as it relates to data integrity and reliability. BerkeleyDB has been living the NoSQL life since the beginning of this transition, embracing the right-tool-for-the-job approach to data management. Several of the biggest names in NoSQL (e.g. LinkedIn's Voldemort) built their NoSQL scale-out solutions leveraging the robust BerkeleyDB storage engine under their distribution architectures. Oracle commercializing the same via ONDB makes perfect sense given the demonstrated need for this category of technology.


  • EBS – ATG Webcast 9/11 - 9/12

    - by cwarticki
    EBS – ATG Webcast in September 2012: EBS – Multiple Language Support (MLS). Agenda: EBS is MLS Ready; NLS / MLS Basic Architecture; NLS / MLS Installation; NLS / MLS Configuration Settings; Troubleshooting; Questions and Answers. EMEA Session: September 11, 2012 at 09:00 UK / 10:00 CET / 13:30 India / 17:00 Japan / 18:00 Sydney (Australia). Details & Registration: Note 1480084.1; direct link to register in WebEx. US Session: September 12, 2012 at 18:00 UK / 19:00 CET / 10:00 AM Pacific / 11:00 AM Mountain / 01:00 PM Eastern. Details & Registration: Note 1480085.1; direct link to register in WebEx. Schedules, recordings, and presentations of the Advisor Webcasts delivered under the EBS Applications Technology area can be found in Note 1186338.1. Current schedules of Advisor Webcasts for all Oracle products can be found in Note 740966.1. Post-presentation recordings of the Advisor Webcasts for all Oracle products can be found in Note 740964.1. If you have any question about the schedules or a suggestion for an Advisor Webcast to be planned in the future, please send an e-mail to Ruediger Ziegler.


  • Nokia Lumia 920 Windows Phone 8 Announcement

    - by Tim Murphy
    Today Nokia and Microsoft had an event to officially introduce the Lumia 920. Below is a rundown of some of the things I found interesting. As a person who likes photography, I found a lot to drool over. The main feature that caught my attention was PureView with its optical stabilization. This alone should improve the majority of your pictures. Add to that the SmartShoot Object remover, which uses multiple images to remove unwanted people or objects that move through your picture, and you never have to accept reality again. For the most part the lenses concept introduced in Windows Phone 8 just makes using the camera better. Of course that is Microsoft’s selling point. One lens that caught my attention was the Bing lens. I have to say it is about time that we can take pictures and use them to search for answers using Bing. There were a couple of features shown that involved augmented reality. One was similar to the yapf application that is already in the market, which overlays restaurants and other destinations over live camera views. The other was using the navigation directions with a live view. Then you get down to some of the physical features of the Lumia 920. The one that got the most stage time is that it has a great 2000 mAh battery which can be charged wirelessly. They also pointed out the improved glare reduction of the 4.5 in. curved glass screen. This hardware improvement is enhanced further with software that detects glare conditions and adjusts the display attributes to enhance viewing ease. Adding to the wireless cool factor of the Lumia 920 are the general NFC capabilities. This was demonstrated with NFC docking stations as well as JBL speakers and headphones. There was one more hardware feature that I applauded. The super sensitive touch screen did away with one of my pet peeves with capacitive touch screens. You will never have to remove your gloves to operate your phone again. The mittens that they did the demo with looked more like boxing gloves. I was disappointed when Joe Belfiore said that they were only going to show a couple of new features of Windows Phone 8 and that we would hear more at future events. One of the things he did show is the ability to customize which buttons you prefer as defaults in IE10. For example, you could have the folders button where the refresh button normally is. He also showed that at long last you can natively take screenshots on your phone. Hopefully he will be back quickly to give us the rest of the features. The most disappointing part of the event was that we never found out when they would be released or how much they would cost. Let’s hope this comes soon. Even with these couple of items still left on my wish list, I can’t wait to get my hands on a Lumia 920.  del.icio.us Tags: Windows Phone,Windows Phone 8,Nokia,Lumia,Lumia 920,Microsoft

