Search Results

Search found 12089 results on 484 pages for 'rule of three'.

Page 258/484 | < Previous Page | 254 255 256 257 258 259 260 261 262 263 264 265  | Next Page >

  • Insurance Outlook: Just Right of Center

    - by Chuck Johnston Admin
    On Tuesday June 21st, PwC led a session at the International Insurance Society meeting in Toronto focused on the opportunity in insurance. The scenarios focusing on globalization, regulation and new areas of insurance opportunity were well defined and thought-provoking, but the most interesting part of the session was the audience participation. PwC used a favorite strategic planning tool of mine, scenario planning, to highlight the important financial, political, social and technological dimensions that impact the insurance industry. Using wireless polling keypads, the audience was able to participate in scoring a range of possibilities across each dimension using a 1 to 5 ranking; 1 being generally negative or highly pessimistic scenarios and 5 being very positive or more confident scenarios. The results were then displayed on a screen with a line or "center" in the middle. "Left of center" was defined as being highly cautious and conservative, while "right of center" was defined as a more optimistic outlook for the industry's future. This session was attended by insurance carriers' senior leadership, leading insurance academics, senior regulators, and the occasional insurance technology executive. In general, the average answer fell just right of center, i.e. a little more positive or optimistic than center. Three years ago, after the 2008 financial crisis, I suspect the answers would have skewed more sharply to the left of center. This sense that things are generally getting better for insurers and that there is the potential for positive change pervaded the conference. There is still caution and concern around economic factors, regulation (especially the potential pitfalls of regulatory convergence with banking) and talent management, but in general, the industry outlook is more positive than it's been in several years. Chuck Johnston is vice president of industry strategy, Oracle Insurance.

    Read the article

  • Developing a mobile application, how to show help if it contains too much data?

    - by MobileDev123
    I am developing a mobile application with a lot of functionality, and I am pretty sure the design will confuse users about how to use some of it, so we decided to include help text of the kind you regularly see in desktop applications. Later we found that the help text is too long; we don't think one screen is enough to describe what a user can do. Moreover, the project itself is expected to evolve based on the beta stage and user reports. After a lot of thinking and meetings we have come up with three options to show users what they can do. (1) Create a website or blog, so we can let users know what they can do with the application; the advantage is that it can also be a good source of marketing, but users would have to access the site, while most of the application can be used offline in earlier versions. (2) Create a section in the application called demos to show the same thing locally, but we are afraid it will increase the application size, which we think can be avoided (and we plan to avoid if there is any option). (3) Show popups, but we discarded this, thinking that popups annoy users no matter what the platform is. I want to know from the community which option you would choose; we are also open to other ideas if you have them.

    Read the article

  • Best approach for utility class library using Visual Studio

    - by gregsdennis
    I have a collection of classes that I commonly (but not always) use when developing WPF applications. The trouble I have is that if I want to use only a subset of the classes, I have three options: (1) Distribute the entire DLL. While this approach makes code maintenance easier, it does require distributing a large DLL for minimal code functionality. (2) Copy the classes I need to the current application. This approach avoids distributing unused code, but completely eliminates centralized code maintenance. (3) Maintain each class/feature in a separate project. This solves both problems from above, but then I have dramatically increased the number of files that need to be distributed, and it bloats my VS solution with tiny projects. Ideally, I'd like a combination of 1 & 3: a single project that contains all of my utility classes but builds to a DLL containing only the classes that are used in the current application. Are there any other common approaches that I haven't considered? Is there any way to do what I want? Thank you.

    Read the article

  • Identify which CCSprite is touched in Cocos2d

    - by PeterK
    I am trying to learn Cocos2d and am experimenting with Ray Wenderlich's whack-a-mole tutorial: www.raywenderlich.com/2560/how-to-create-a-mole-whacking-game-with-cocos2d-part-1 In this tutorial three CCSprites pop up and you should click on them... However, I am trying to identify which mole (a rat in my case) is popping up and place a CCSprite above it. Initially this looked like an easy task, but I am failing. I am trying to NSLog "HIT LEFT". I would guess the problem is in the if-statement and the last "227" height parameter. The left rat boundingBox = {{99.5, 146.5}, {165, 227}} (from NSLog). The key code is in the ccTouchBegan function:

        -(BOOL) ccTouchBegan:(UITouch *)touch withEvent:(UIEvent *)event {
            CGPoint touchLocation = [self convertTouchToNodeSpace:touch];
            for (CCSprite *rat in rats) {
                if (rat.userData == FALSE) continue;
                if (CGRectContainsPoint(rat.boundingBox, touchLocation)) {
                    //left:  rat boundingBox = {{99.5, 146.5}, {165, 227}}
                    //mid:   rat boundingBox = {{349.5, 146.5}, {165, 227}}
                    //right: rat boundingBox = {{599.5, 146.5}, {165, 227}}
                    //>>>>Here is where I try to get a hit<<<<
                    if (CGRectContainsPoint(CGRectMake(99.5, 146.55, 165, 227), touchLocation)) {
                        NSLog(@">>>>HIT LEFT<<<<<");
                    }

    I would really appreciate a few ideas on how to get this to work.
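
    Below is a minimal sketch of the same hit-test idea in plain Python rather than Objective-C (the Rect and Sprite helpers are hypothetical stand-ins, not Cocos2d API): instead of hard-coding a rectangle per rat, it reuses each sprite's own bounding box from the loop and remembers which sprite was actually hit.

        from collections import namedtuple

        # Hypothetical stand-ins for CCSprite and its boundingBox, for illustration only.
        Rect = namedtuple("Rect", "x y w h")
        Sprite = namedtuple("Sprite", "name bounding_box active")

        def rect_contains(rect, x, y):
            return rect.x <= x <= rect.x + rect.w and rect.y <= y <= rect.y + rect.h

        rats = [
            Sprite("left",  Rect(99.5,  146.5, 165, 227), True),
            Sprite("mid",   Rect(349.5, 146.5, 165, 227), True),
            Sprite("right", Rect(599.5, 146.5, 165, 227), False),
        ]

        def touched_rat(touch_x, touch_y):
            # Reuse each rat's own bounding box instead of a hard-coded rectangle,
            # and return which rat (by index) was hit.
            for index, rat in enumerate(rats):
                if not rat.active:
                    continue
                if rect_contains(rat.bounding_box, touch_x, touch_y):
                    print(">>>> HIT %s <<<<" % rat.name)
                    return index
            return None

        print(touched_rat(150, 200))  # prints ">>>> HIT left <<<<" and returns 0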

    Read the article

  • Show Notes: Debra Lilley on Fusion Applications

    - by Bob Rhubart
    The latest ArchBeat program features a three-part interview with Oracle ACE Director Debra Lilley (ACE Profile). Debra is Oracle Alliance Director at Fujitsu, Executive Member at the International Oracle Users Group Community (IOUG), Director and Deputy Chair at the UK Oracle Users Group (UKOUG), and a partner at Oracle UK. So yeah, she’s connected. In this interview Debra talks about her connection to Oracle Fusion Applications. Listen to Part 1 Debra talks about her role as the Director and Deputy Chairperson of the UKOUG and about the UKOUG development group’s involvement in Oracle Fusion Applications. Listen to Part 2 (March 9) Debra shares her insight into what Fusion Applications will bring to Enterprise Architecture, and the importance of user experience in enterprise architecture. Listen to Part 3 (March 16) Debra discusses the need to close the gap between IT and business, and how business users should be able to use applications without having to think about the underlying technology. Debra is very active in social networks, so if you have questions or comments you can connect with her via the following: Blog: http://www.debrasoracle.blogspot.com/ Twitter: @debralilley LinkedIn: http://uk.linkedin.com/pub/debra-lilley/1/438/bba And if you’d like to learn more about Oracle Fusion Applications: http://www.oracle.com/us/products/applications/fusion/index.html Coming Soon Dr. Frank Munz, author of Middleware and Cloud Computing: Oracle Fusion Middleware on Amazon Web Services and Rackspace Cloud. Andy MacMillan (VP, Enterprise 2.0, Oracle) on the socialization of the enterprise. A panel discussion on “Who gets to be a software architect?” Stay tuned: RSS Technorati Tags: oracle,fusion applications,enterprise architecture,IOUG,UKOUG

    Read the article

  • Less Than Four Weeks Away: Oracle OpenWorld Latin America

    - by Oracle OpenWorld Blog Team
    It's only four weeks and counting to Oracle OpenWorld Latin America 2012 in São Paulo. There are dozens of sessions in seven technology tracks that you won't want to miss. And dozens of interesting, innovative, and exciting sponsors and exhibitors you'll want to be sure to talk to in the Exhibition Hall, not to mention the Oracle demos there that you'll want to experience first-hand. There are three ways to experience Oracle OpenWorld Latin America:  The Oracle OpenWorld conference pass gets you access to all keynotes, sessions, demos, labs, networking events, and more The Oracle OpenWorld and JavaOne conference pass gets you access to all of the above, for BOTH Oracle OpenWorld and JavaOne The Discover pass gets you access to the Exhibition Hall, where you'll be able to see and talk with sponsors and exhibitors, and check out all of the Oracle demos The sooner you sign up the more you save. Savings are greatest between now and 16 November. From 17 November you'll still save significantly over the onsite price if you register before 3 December.  And by the way, the Discover pass comes at no charge if you register by 3 December. So don't wait: Register Now!

    Read the article

  • Sales and Procurement Contracts 12.1.3++ Release Information

    - by LuciaC
    New functionality has been released for Sales and Procurement Contracts in a new patch: Contracts 12.1.3++: Patch 13877401: 12.1.3 Rollup for Oracle Contracts Core. The new functionality includes: APIs for Import of Contract Templates, Contract Expert rules, Questions and Constants: The three APIs are as follows: API for Templates, API for Rules, and API for Questions and Constants. These can be used to both create entities and update existing templates and rules. The APIs will display error and warning messages which can be processed and analyzed by the customer. Ability to Apply Multiple Templates to a Sourcing, Procurement or Sales Document: The buyer can select and add multiple templates to a quote, sales agreement, sourcing or purchasing document. All the clauses and deliverables from the new templates are synchronized with the document. The Contract Expert rules are from the original template. The buyer can also view the list of templates that are added to any sales or procurement document. Ability to Define Multi-Row Variables: You can create user-defined manual variables that are tables containing one row per line or multiple rows. Contract Preview will print the variable values according to the layout defined for the variable. These variables are not available for Contract Expert Rules and Supplier. Enhancement to Suggested Sections for Clauses by Contract Expert: You can associate multiple default sections with a clause. A clause is associated with multiple values of any system variable and for each such value a section name is associated in Contracts Terms Library. When Contract Expert is run in the contract authoring flow, the clause is automatically placed in the associated section name. Plus many more new features. Read the following notes for details on all the new and changed functionality: Oracle Procurement Contracts Release Notes, Release 12.1.3++ (Doc ID 1467140.1) Oracle Sales Contracts Release Notes, Release 12.1.3++ (Doc ID 1467149.1) Oracle E-Business Suite Releases 12.1 and 12.2 Release Content Documents (Doc ID 1302189.1)

    Read the article

  • IT Optimization Plan Pays Off For UK Retailer

    - by [email protected]
    I caught this article in ComputerworldUK yesterday. The headline talks about how UK-based supermarket chain Morrisons is increasing its IT spend... OK, sounds good. Even nicer that Oracle is a big part of that. But what caught my eye were three things: 1) Morrisons truly has a long-term strategy for IT. In this case, modernizing and optimizing how they use IT for business advantage. 2) Even in a tough economic climate, Morrisons views IT investments as contributing to and improving the bottom line. Specifically, "The investment in IT contributed to a 21 percent increase in Morrison's underlying profit.." 3) The phased, 3-year "Optimization Plan" took a holistic approach to their business--from CRM and Supply Chain systems to the underlying application infrastructure. On the infrastructure front, adopting a more flexible Service-Oriented Architecture enabled them to be more agile and adapt their business, and Identity Management helped with sometimes mundane (but costly) issues like lost passwords and being able to document who has access to what. Things don't always turn out so rosy. And I know it was a long and difficult process...but it's nice to see a happy ending every once in a while.

    Read the article

  • Oracle Delivers Special Recognition for Specialized Partners

    - by michaela.seika(at)oracle.com
    Since announcing Oracle PartnerNetwork Specialized (OPN Specialized) in October 2009, Oracle has been focused on building a program that first enables solution providers to become highly skilled Oracle partners who deliver value to customers and that then recognizes and rewards their achievements in a meaningful way. Today the company unveiled new benefits reserved for partners who have achieved one or more of the over 50 specializations currently available. The benefits demonstrate Oracle's commitment to showcase these valued partners to three key audiences: customers, other partners, and Oracle employees. With today's launch of www.oracle.com/specialized, Oracle has taken what IDC believes is a first-of-its-kind approach to putting top partners front and center with customers and prospects. While most vendors offer a business partner finder tool on their website, none has gone as far as Oracle with the creation of this new site dedicated to the promotion of Specialized Partners. The tag lines - "Recognized by Oracle, Preferred by Customers" and "Specialized. Recognized. Preferred." - get right to the point: these are the solution providers with which customers should choose to engage. The contents of the page offer multiple proof points to justify the marketing phrases. One of the benefits Oracle offers its Specialized Partners is video creation and placement. While Oracle works with partners to create informal or "guerilla" videos which often are placed on YouTube to generate awareness and buzz, the company also produces professional videos for its partners. The greatest value the partner receives from this benefit isn't the non-trivial production costs that Oracle covers but instead the prominent exposure Oracle gives the finished product. Partner videos are featured on www.oracle.com/specialized, used as part of the monthly OPN Specialized Partners webcasts, and placed on a customer-facing website, the Oracle Media Network, which includes several partner sites such as PartnerCast. A solution provider gains a great deal of credibility when they can send a prospect to an Oracle website where they are featured. Read the full article here.

    Read the article

  • Wammu - USB Device Name?

    - by Paul
    I'm trying to get to my phone's filesystem through USB in Wammu, but I'm stuck in the configuration wizard when it asks for a USB device name. After about an hour of Internet searching, here are the failed solutions I've already tried, starting with the relevant information returned by lsusb in the terminal:

        lsusb
        Bus 001 Device 003: ID 12d1:101e Huawei Technologies Co., Ltd.

    So I tried opening Wammu through sudo wammu in the terminal and inputting "/dev/bus/usb/001/003" as the device name, which returns:

        Error opening device
        Device /dev/bus/usb/001/003 does not exist!

    and then "/dev/bus/usb/001/", which returns:

        Failed to connect to phone
        Description: Error opening device. Unknown, busy, or no permissions.
        Function: Init
        Error code: 2

    Another proposed solution was to try "tail -f /var/log/messages" in the terminal, but that only returned a "No such file or directory" message. Seemingly relevant dmesg info:

        [ 4739.716214] usb 1-1: new high-speed USB device number 8 using ehci_hcd
        [ 4739.854137] scsi9 : usb-storage 1-1:1.0
        [ 4740.854416] scsi 9:0:0:0: CD-ROM HUAWEI T Mass Storage 2.31 PQ: 0 ANSI: 2
        [ 4740.867051] sr0: scsi-1 drive
        [ 4740.867806] sr 9:0:0:0: Attached scsi CD-ROM sr0
        [ 4740.870464] sr 9:0:0:0: Attached scsi generic sg1 type 5

    I don't know why it is coming up as a CD-ROM, but there it is. If you haven't noticed already, I'm an absolute beginner when it comes to Linux and the terminal, so speaking to me like I'm a three-year-old is welcome if you can propose a solution. I'm running Ubuntu 12.04 LTS, and the phone is a Huawei U1250. My computer is an Acer Aspire One D250/KAV60. Any help is much appreciated.

    Read the article

  • How old is "too old"?

    - by Dori
    I've been told that to be taken seriously as a job applicant, I should drop years of relevant experience off my résumé, remove the year I got my degree, or both. Or not even bother applying, because no one wants to hire programmers older than them.[1] Or that I should found a company, not because I want to, or because I have a product I care about, but because that way I can get a job if/when my company is acquired. Or that I should focus more on management jobs (which I've successfully done in the past) because… well, they couldn't really explain this one, except the implication was that over a certain age you're a loser if you're still writing code. But I like writing code. Have you seen this? Is this only a local (Northern California) issue? If you've ever hired programmers:[2] Of the résumés you've received, how old was the eldest applicant? What was the age of the oldest person you've interviewed? How old (when hired) was the oldest person you hired? How old is "too old" to be employed as a programmer? [1] I'm assuming all applicants have equivalent applicable experience. This isn't about someone with three decades of COBOL applying for a Java guru job. [2] Yes, I know that (at least in the US) you aren't supposed to ask how old an applicant is. In my experience, though, you can get a general idea from a résumé.

    Read the article

  • Is there any evidence that drugs can actually help programmers produce "better" code? [closed]

    - by sytycs
    I just read this quote from Steve Jobs: "Doing LSD was one of the two or three most important things I have done in my life." Also a quote from that article: He was hardly alone among computer scientists in his appreciation of hallucinogenics and their capacity to liberate human thought from the prison of the mind. Now I'm wondering if there's any evidence to support the theory that drugs can help make a "better" programmer. Has there ever been a study where programmers have been given drugs to see if they could produce "better" code? Is there a well-known programming concept or piece of code which originated from people who were on drugs? EDIT So I did a little more research and it turns out Dennis R. Wier actually documented how he took LSD to wrap his head around a coding project: "At one point in the project I could not get an overall viewpoint for the operation of the entire system. It really was too much for my brain to keep all the subtle aspects and processing nuances clear so I could get a processing and design overview. After struggling with this problem for a few weeks, I decided to use a little acid to see if it would enable a breakthrough, because otherwise, I would not be able to complete the project and be certain of a consistent overall design"[1] There is also an interesting article on Wired about Kevin Herbert, who used LSD to solve tough technical problems, and chemist Kary Mullis even said "...that LSD had helped him develop the polymerase chain reaction that helps amplify specific DNA sequences." [2]

    Read the article

  • Big Data Appliance X4-2 Release Announcement

    - by Jean-Pierre Dijcks
    Today we are announcing the release of the 3rd generation Big Data Appliance. Read the Press Release here.
    Software Focus
    The focus for this 3rd generation of Big Data Appliance is:
    - Comprehensive and Open - Big Data Appliance now includes all Cloudera Software, including Back-up and Disaster Recovery (BDR), Search, Impala, Navigator as well as the previously included components (like CDH, HBase and Cloudera Manager) and Oracle NoSQL Database (CE or EE).
    - Lower TCO than DIY Hadoop Systems
    - Simplified Operations while providing an open platform for the organization
    - Comprehensive security including the new Audit Vault and Database Firewall software, Apache Sentry and Kerberos configured out-of-the-box
    Hardware Update
    A good place to start is to quickly review the hardware differences (no price changes!). On a per-node basis the following is a comparison between the old (X3-2) and new (X4-2) hardware:
    - CPU: X3-2: 2 x 8-Core Intel® Xeon® E5-2660 (2.2 GHz); X4-2: 2 x 8-Core Intel® Xeon® E5-2650 V2 (2.6 GHz)
    - Memory: X3-2: 64GB; X4-2: 64GB
    - Disk: X3-2: 12 x 3TB High Capacity SAS; X4-2: 12 x 4TB High Capacity SAS
    - InfiniBand: X3-2: 40Gb/sec; X4-2: 40Gb/sec
    - Ethernet: X3-2: 10Gb/sec; X4-2: 10Gb/sec
    For all the details on the environmentals and other useful information, review the data sheet for Big Data Appliance X4-2. The larger disks give BDA X4-2 33% more capacity over the previous generation while adding faster CPUs. Memory for BDA is expandable to 512 GB per node and can be done on a per-node basis, for example for NameNodes or for HBase region servers, or for NoSQL Database nodes.
    Software Details
    More details in terms of software and the current versions (note BDA follows a three-monthly update cycle for Cloudera and other software):
    - Linux: BDA 2.2 software stack: Oracle Linux 5.8 with UEK 1; BDA 2.3 software stack: Oracle Linux 6.4 with UEK 2
    - JDK: BDA 2.2: JDK 6; BDA 2.3: JDK 7
    - Cloudera CDH: BDA 2.2: CDH 4.3; BDA 2.3: CDH 4.4
    - Cloudera Manager: BDA 2.2: CM 4.6; BDA 2.3: CM 4.7
    And like we said at the beginning, it is important to understand that all other Cloudera components are now included in the price of Oracle Big Data Appliance. They are fully supported by Oracle and available for all BDA customers. For more information:
    - Big Data Appliance Data Sheet
    - Big Data Connectors Data Sheet
    - Oracle NoSQL Database Data Sheet (CE | EE)
    - Oracle Advanced Analytics Data Sheet

    Read the article

  • Combined Likelihood Models

    - by Lukas Vermeer
    In a series of posts on this blog we have already described a flexible approach to recording events, a technique to create analytical models for reporting, a method that uses the same principles to generate extremely powerful facet based predictions and a waterfall strategy that can be used to blend multiple (possibly facet based) models for increased accuracy. This latest, and also last, addition to this sequence of increasing modeling complexity will illustrate an advanced approach to amalgamate models, taking us to a whole new level of predictive modeling and analytical insights; combination models predicting likelihoods using multiple child models. The method described here is far from trivial. We therefore would not recommend you apply these techniques in an initial implementation of Oracle Real-Time Decisions. In most cases, basic RTD models or the approaches described before will provide more than enough predictive accuracy and analytical insight. The following is intended as an example of how more advanced models could be constructed if implementation results warrant the increased implementation and design effort. Keep implemented statistics simple! Combining likelihoods Because facet based predictions are based on metadata attributes of the choices selected, it is possible to generate such predictions for more than one attribute of a choice. We can predict the likelihood of acceptance for a particular product based on the product category (e.g. ‘toys’), as well as based on the color of the product (e.g. ‘pink’). Of course, these two predictions may be completely different (the customer may well prefer toys, but dislike pink products) and we will have to somehow combine these two separate predictions to determine an overall likelihood of acceptance for the choice. Perhaps the simplest way to combine multiple predicted likelihoods into one is to calculate the average (or perhaps maximum or minimum) likelihood. However, this would completely forgo the fact that some facets may have a far more pronounced effect on the overall likelihood than others (e.g. customers may consider the product category more important than its color). We could opt for calculating some sort of weighted average, but this would require us to specify up front the relative importance of the different facets involved. This approach would also be unresponsive to changing consumer behavior in these preferences (e.g. product price bracket may become more important to consumers as a result of economic shifts). Preferably, we would want Oracle Real-Time Decisions to learn, act upon and tell us about, the correlations between the different facet models and the overall likelihood of acceptance. This additional level of predictive modeling, where a single supermodel (no pun intended) combines the output of several (facet based) models into a single prediction, is what we call a combined likelihood model. Facet Based Scores As an example, we have implemented three different facet based models (as described earlier) in a simple RTD inline service. These models will allow us to generate predictions for likelihood of acceptance for each product based on three different metadata fields: Category, Price Bracket and Product Color. We will use an Analytical Scores entity to store these different scores so we can easily pass them between different functions. 
    A simple function, creatively named Compute Analytical Scores, will compute for each choice the different facet scores and return an Analytical Scores entity that is stored on the choice itself. For each score, a choice attribute referring to this entity is also added to be returned to the client to facilitate testing.
    One Offer To Predict Them All
    In order to combine the different facet based predictions into one single likelihood for each product, we will need a supermodel which can predict the likelihood of acceptance, based on the outcomes of the facet models. This model will not need to consider any of the attributes of the session, because they are already represented in the outcomes of the underlying facet models. For the same reason, the supermodel will not need to learn separately for each product, because the specific combination of facets for this product is also already represented in the output of the underlying models. In other words, instead of learning how session attributes influence acceptance of a particular product, we will learn how the outcomes of facet based models for a particular product influence acceptance at a higher level. We will therefore be using a single All Offers choice to represent all offers in our combined likelihood predictions. This choice has no attribute values configured, no scores and not a single eligibility rule; nor is it ever intended to be returned to a client. The All Offers choice is to be used exclusively by the Combined Likelihood Acceptance model to predict the likelihood of acceptance for all choices; based solely on the output of the facet based models defined earlier.
    The Switcheroo
    In Oracle Real-Time Decisions, models can only learn based on attributes stored on the session. Therefore, just before generating a combined prediction for a given choice, we will temporarily copy the facet based scores—stored on the choice earlier as an Analytical Scores entity—to the session. The code for the Predict Combined Likelihood Event function is outlined below.

        // set session attribute to contain facet based scores.
        // (this is the only input for the combined model)
        session().setAnalyticalScores(choice.getAnalyticalScores());

        // predict likelihood of acceptance for All Offers choice.
        CombinedLikelihoodChoice c = CombinedLikelihood.getChoice("AllOffers");
        Double la = CombinedLikelihoodAcceptance.getChoiceEventLikelihoods(c, "Accepted");

        // clear session attribute of facet based scores.
        session().setAnalyticalScores(null);

        // return likelihood.
        return la;

    This sleight of hand will allow the Combined Likelihood Acceptance model to predict the likelihood of acceptance for the All Offers choice using these choice-specific scores. After the prediction is made, we will clear the Analytical Scores session attribute to ensure it does not pollute any of the other (facet) models. To guarantee our combined likelihood model will learn based on the facet based scores—and is not distracted by the other session attributes—we will configure the model to exclude any other inputs, save for the instance of the Analytical Scores session attribute, on the model attributes tab.
    Recording Events
    In order for the combined likelihood model to learn correctly, we must ensure that the Analytical Scores session attribute is set correctly at the moment RTD records any events related to a particular choice. We apply essentially the same switching technique as before in a Record Combined Likelihood Event function.
        // set session attribute to contain facet based scores
        // (this is the only input for the combined model).
        session().setAnalyticalScores(choice.getAnalyticalScores());

        // record input event against All Offers choice.
        CombinedLikelihood.getChoice("AllOffers").recordEvent(event);

        // force learn at this moment using the Internal Dock entry point.
        Application.getPredictor().learn(InternalLearn.modelArray, session(), session(), Application.currentTimeMillis());

        // clear session attribute of facet based scores.
        session().setAnalyticalScores(null);

    In this example, Internal Learn is a special informant configured as the learn location for the combined likelihood model. The informant itself has no particular configuration and does nothing in itself; it is used only to force the model to learn at the exact instant we have set the Analytical Scores session attribute to the correct values.
    Reporting Results
    After running a few thousand (artificially skewed) simulated sessions on our ILS, the Decision Center reporting shows some interesting results. In this case, these results reflect perfectly the bias we ourselves had introduced in our tests. In practice, we would obviously use a wider range of customer attributes and expect to see some more unexpected outcomes. The facetted model for categories has clearly picked up on the fact that our simulated youngsters have little interest in purchasing the one red-hot vehicle our ILS had on offer. Also, it would seem that customer age is an excellent predictor for the acceptance of pink products. Looking at the key drivers for the All Offers choice we can see the relative importance of the different facets to the prediction of overall likelihood. The comparative importance of the category facet for overall prediction might, in part, be explained by the clear preference of younger customers for toys over other product types; as evident from the report on the predictiveness of customer age for offer category acceptance.
    Conclusion
    Oracle Real-Time Decisions' flexible decisioning framework allows for the construction of exceptionally elaborate prediction models that facilitate powerful targeting, but nonetheless provide insightful reporting. Although few customers will have a direct need for such a sophisticated solution architecture, it is encouraging to see that this lies within the realm of the possible with RTD; and this with limited configuration and customization required. There are obviously numerous other ways in which the predictive and reporting capabilities of Oracle Real-Time Decisions can be expanded upon to tailor to individual customers' needs. We will not be able to elaborate on them all on this blog; and finding the right approach for any given problem is often more difficult than implementing the solution. Nevertheless, we hope that these last few posts have given you enough of an understanding of the power of the RTD framework and its models; so that you can take some of these ideas and improve upon your own strategy. As always, if you have any questions about the above—or any Oracle Real-Time Decisions design challenges you might face—please do not hesitate to contact us; via the comments below, social media or directly at Oracle. We are completely multi-channel and would be more than glad to help. :-)
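
    As a rough, language-agnostic sketch of the switching pattern described above (written in Python purely for illustration; the Session class and the toy combiner are hypothetical stand-ins, not RTD API): the per-choice facet scores are stashed on the shared session, the combiner model, which reads only session attributes, is asked for a likelihood, and the attribute is cleared again so the other models are not polluted.

        # Toy illustration of the "switcheroo": stash per-choice scores on the
        # session, ask a combiner that only reads session attributes, then clear.
        class Session:
            def __init__(self):
                self.analytical_scores = None

        def predict_combined_likelihood(session, choice_scores, combiner):
            session.analytical_scores = choice_scores   # copy facet scores to the session
            try:
                likelihood = combiner(session)          # combiner reads only the session
            finally:
                session.analytical_scores = None        # always clear afterwards
            return likelihood

        # Stand-in for the learned supermodel: a fixed weighted sum of facet scores.
        def toy_combiner(session):
            s = session.analytical_scores
            return 0.5 * s["category"] + 0.3 * s["price_bracket"] + 0.2 * s["color"]

        session = Session()
        scores = {"category": 0.8, "price_bracket": 0.4, "color": 0.1}
        print(predict_combined_likelihood(session, scores, toy_combiner))  # 0.54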

    Read the article

  • GetContactList stops reporting collisions on welded bodies

    - by Henrique Jung
    I have some strange problem with my game, which uses Box2D as its physics engine, and I'm out of ideas on what I can do to solve it. My game is a class assignment where I need to build a simple game where the main character moves in a 2D environment while square blocks come from below him. Each time a collision occurs, that block is attached to the character using a weld joint; when three blocks of the same color are together, they annihilate themselves (an effect similar to Bejeweled). I'm using a recursive function to iterate through all the attached blocks of a given block to see if there are enough blocks for them to be deleted. I'm using the GetContactList function to iterate through the list of contacts to see which blocks are adjacent to each other. The results are quite disappointing: the blocks only get annihilated in a few cases. After a lot of debugging, I found the issue, but I still don't know how to solve it. My issue is: after some time, GetContactList STOPS returning contacts (returns NULL) for blocks that were already attached for some time. I spent some time reading the Box2D manual as well as some tutorials and still didn't find any clue of what is happening. Below there's a simplified version of the code that I wrote.

        for(int a = 0; a < blocksList.size(); a++)
        {
            blocksList[a].BuildConnections();
        }

    And in BuildConnections:

        b2ContactEdge* edge = body->GetContactList();
        while(edge != NULL)
        {
            if (long_check_to_see_if_there's_a_block_nearby)
            {
                // add itself to the list to be anihilated
                globalList.push_back(this);
                //if there's, call BuildConnections again on the adjacent block
                adjacentBody->GetUserData()->BuildConnections;
            }
            edge = edge->next;
        }

    I know that there's another issue related to circular inclusions, but I'm fairly sure that it isn't causing the problem with the collisions. You can download my entire code from this page if you'd like http://code.google.com/p/fellz/source/list
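
    As an aside, here is a small sketch (in Python, for illustration only) of the "find three or more attached blocks of the same color" search the question describes, written over an explicit adjacency map (the assumption being that adjacency is recorded when each weld joint is created) rather than relying on the engine's contact list:

        # Sketch of the connected-blocks search using an explicit adjacency map,
        # independent of the physics engine. color_of maps block -> color,
        # neighbors maps block -> list of blocks welded to it.
        def connected_same_color(start, color_of, neighbors):
            target = color_of[start]
            seen = {start}
            stack = [start]
            while stack:
                block = stack.pop()
                for other in neighbors.get(block, []):
                    if other not in seen and color_of[other] == target:
                        seen.add(other)
                        stack.append(other)
            return seen

        # Toy example: blocks 1-2-3 are welded in a chain and share a color.
        color_of = {1: "red", 2: "red", 3: "red", 4: "blue"}
        neighbors = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}

        group = connected_same_color(1, color_of, neighbors)
        if len(group) >= 3:
            print("annihilate:", sorted(group))   # annihilate: [1, 2, 3]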

    Read the article

  • Register now for the UK Windows Azure Self-paced Interactive Learning Course starting May 10th

    - by Eric Nelson
    [Suggested twitter tag #selfpacedazure] We (myself and David Gristwood) have been working in the UK to create a fantastic opportunity to get yourself up to speed on the Windows Azure Platform over a 6 week period starting May 10th – without ever needing to leave the comfort of your home/office. The course is derived from the internal training Microsoft gives on Azure, which is fun and challenging in equal parts – and we felt it was just too good to keep to ourselves! We will be releasing more details nearer the date but hopefully the following is enough to convince you to register and … recommend it to a colleague or three :-) What we have produced is the “Microsoft Azure Self-paced Learning Course”. This is a free, interactive, self-paced, technical training course covering the Windows Azure platform – Windows Azure, SQL Azure and the Azure AppFabric. The course takes place over a six week period finishing on June 18th. During the course you will work from your own home or workplace, get involved via interactive Live Meeting sessions, watch on-line videos, work through hands-on labs, and research and complete weekly coursework assignments. The mentors and other attendees on the course will help you in your research and learning, and there are weekly Live Meetings where you can raise questions and interact with them. This is a technical course, aimed at programmers, system designers, and architects who want a solid understanding of the Microsoft Windows Azure platform, hence a prerequisite for this course is at least six months' programming in the .NET framework and Visual Studio. Check out the full details of the event or go straight to registration. The course outline is:
    Week 1 - Windows Azure Platform
    Week 2 - Windows Azure Storage
    Week 3 - Windows Azure Deep Dive and Codename "Dallas"
    Week 4 - SQL Azure
    Week 5 - Windows Azure Platform AppFabric Access Control
    Week 6 - Windows Azure Platform AppFabric Service Bus
    If you have any questions about the course and its suitability, please email [email protected].

    Read the article

  • Ubuntu 12.04.2 Dual boot UEFI Windows 8 Preinstalled CX21903W Ultrabook

    - by user180782
    Hi, I have a problem trying to install Ubuntu. The machine is a CX Ultrabook model CX.21903W, Intel i5 with a 500GB hard disk, 8 GB RAM and a 32 GB SSD. Following "Installing Ubuntu on a Pre-Installed Windows 8 (64-bit) System (UEFI Supported)", and according to the steps guide:
    1 - We create a partition (70 GB) from Win8, using Win8's own partitioning tool.
    2 - Confirm-SecureBootUEFI=True.
    3 - From Win8, Shift + Restart, and from the special menu we selected the UEFI Firmware Settings.
    4 - From the BIOS options:
    ------Option 1) Disable Secure Boot.
    ------Option 2) Disable UEFI (Not Available)
    From Option 1, three ways are available.
    With Secure Boot enabled - we can't even boot Ubuntu. A red window appears saying the software is improperly signed.
    With Secure Boot disabled - and this config in the boot device order:
    ----1: UEFI: USB
    ----2: Windows Boot Manager
    ----3: Others
    and CSM (Compatibility Support Module): enabled - GRUB appears and, selecting Try Ubuntu, a black window appears and nothing happens. The same happens if Install Ubuntu is selected.
    With Secure Boot disabled - and this config in the boot device order:
    ----1: USB (No UEFI)
    ----2: Windows Boot Manager
    ----3: Others
    and CSM (Compatibility Support Module): enabled - GRUB appears and, selecting Try Ubuntu, Ubuntu boots and we can even install it.
    5 - Rebooting and just changing the boot order to
    ----1: Ubuntu []
    ----2: Windows Boot Manager
    ----3: Others
    then nothing happens.
    6 - Booting from the LiveUSB again and, as instructed, running Boot-Repair (a warning window: Ubuntu is working in legacy mode).
    7 - Saving changes and rebooting, GRUB works but, selecting Ubuntu, a black window appears and nothing happens. Selecting Win8, Win8 boots and works.
    Until now we can't complete the Ubuntu installation. Any suggestion will be welcomed. Kind regards and thanks in advance.

    Read the article

  • SQLRally Nordic gets underway

    - by Rob Farley
    PASS is becoming more international, which is great. The SQL Community has always been international – it’s not as if data is only generated in North America. And while it’s easy for organisations to have a North American focus, PASS is taking steps to become international. Regular readers will be aware that I’m one of three advisors to the PASS Board of Directors, with a focus on developing PASS as a more global organisation. With this in mind, it’s great that today is Day 1 of SQLRally Nordic, being hosted in Sweden – not only a non-American country, but one that doesn’t have English as its major language. The event has been hosted by the amazing Johan Åhlén and Raoul Illyés, two guys who I met earlier this year, but the thing that amazes me is the incredible support that this event has from the SQL Community. It’s been sold out for a long time, and when you see the list of speakers, it’s not surprising. Some of the industry’s biggest names from Microsoft have turned up, including Mark Souza (who is also a PASS Director), Thomas Kejser and Tobias Thernström. Business Intelligence experts such as Jen Stirrup, Chris Webb, Peter Myers, Marco Russo and Alberto Ferrari are there, as are some of the most awarded SQL MVPs such as Itzik Ben-Gan, Aaron Bertrand and Kevin Kline. The sponsor list is also brilliant, with names such as HP, FusionIO, SQL Sentry, Quest and SolidQ complemented by Swedish companies like Cornerstone, Informator, B3IT and Addskills. As someone who is interested in PASS becoming global, I’m really excited to see this event happening, and I hope it’s a launch-pad into many other international events hosted by the SQL community. If you have the opportunity, thank Johan and Raoul for putting this event on, and the speakers and sponsors for helping support it. The noise from Twitter is that everything is going fantastically well, and everyone involved should be thoroughly congratulated! @rob_farley

    Read the article

  • Dynamic endpoint binding in Oracle SOA Suite by Cattle Crew

    - by JuergenKress
    Why is dynamic endpoint binding needed? Sometimes a BPEL process instance has to determine at run-time which implementation of a web service interface is to be called. We’ll show you how to achieve that using dynamic endpoint binding. Let’s imagine the following scenario: we’re running a car rental agency called RYLC (Rent Your Legacy Car) which operates different locations. The process of renting a car is basically identical for all locations except for the determination of which cars are currently available. This is depicted in the following diagram: There are three different implementations of the GetAvailableCars service. But how can we achieve calling them dynamically at run-time using Oracle SOA Suite?
    How to dynamically set the service endpoint
    There are just a couple of implementation steps we need to perform to enable dynamic endpoint binding (a simple sketch of the idea follows this list):
    - create a new SOA project in JDeveloper
    - add a CarRental BPEL process
    - add an external reference to the GetAvailableCars service within the composite
    - create a DVM file containing the URIs by which the services for the different locations can be accessed
    - set the endpointURI property on the Invoke component calling the GetAvailableCars service (value is taken from the DVM file)
    Read the complete article here. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: Cattle crew,SOA binding,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress
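
    As a rough illustration of the idea (not SOA Suite code), the sketch below shows the same pattern in Python: a lookup table plays the role of the DVM file, mapping each rental location to a service URI, and the endpoint is resolved at run-time just before the call is made. The table contents and function names are hypothetical.

        # Hypothetical location-to-endpoint table, standing in for the DVM file.
        ENDPOINTS = {
            "BERLIN":  "http://rylc-berlin.example.com/GetAvailableCars",
            "LONDON":  "http://rylc-london.example.com/GetAvailableCars",
            "UTRECHT": "http://rylc-utrecht.example.com/GetAvailableCars",
        }

        def resolve_endpoint(location):
            # The run-time equivalent of reading the DVM and setting endpointURI
            # on the Invoke: pick the concrete service URI for this request.
            try:
                return ENDPOINTS[location]
            except KeyError:
                raise ValueError("no GetAvailableCars endpoint configured for %r" % location)

        def get_available_cars(location):
            endpoint = resolve_endpoint(location)
            # A real implementation would now perform the web service call against
            # `endpoint`; here we just show which implementation would be invoked.
            print("calling", endpoint)

        get_available_cars("LONDON")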

    Read the article

  • How can I rank teams based off of head to head wins/losses

    - by TMP
    I'm trying to write an algorithm (specifically in Ruby) that will rank teams based on their record against each other. If team A and team B have won the same number of games against each other, then it goes down to point differentials. Here's an example:
    A beats B two times
    B beats C one time
    A beats D three times
    C beats D two times
    D beats C one time
    B beats A one time
    Which sort of reduces to
    A[B] = 2  B[C] = 1  A[D] = 3  C[D] = 2  D[C] = 1  B[A] = 1
    Which sort of reduces to
    A[B] = 1  B[C] = 1  A[D] = 3  C[D] = 1  D[C] = -1  B[A] = -1
    Which is about how far I've got. I think the results of this specific algorithm would be: A, B, C, D. But I'm stuck on how to transition from my nested hash-like structure to the results. My pseudo-code is as follows (I can post my Ruby code too if someone wants):
    For each game(g):
        hash[g.winner][g.loser] += 1
    That leaves hash as the first reduction above
    hash2 = clone of hash
    For each key(winner), value(losers hash) in hash:
        For each key(loser), value(losses against winner):
            hash2[loser][winner] -= losses
    Which leaves hash2 as the second reduction
    Feel free to ask me questions or edit this to be more clear; I'm not sure how to put it in a very eloquent way. Thanks!
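
    Below is a minimal sketch of the two reductions plus a final ranking step, written here in Python rather than Ruby purely for illustration (the tie-breaking at the end is an assumption; the point-differential rule from the question would slot in there):

        from collections import defaultdict

        # Games as (winner, loser) pairs, taken from the example above.
        games = [
            ("A", "B"), ("A", "B"),              # A beats B two times
            ("B", "C"),                          # B beats C one time
            ("A", "D"), ("A", "D"), ("A", "D"),  # A beats D three times
            ("C", "D"), ("C", "D"),              # C beats D two times
            ("D", "C"),                          # D beats C one time
            ("B", "A"),                          # B beats A one time
        ]

        # First reduction: count head-to-head wins.
        wins = defaultdict(lambda: defaultdict(int))
        for winner, loser in games:
            wins[winner][loser] += 1

        # Second reduction: net wins per pairing (wins minus losses).
        net = defaultdict(lambda: defaultdict(int))
        for winner, results in wins.items():
            for loser, count in results.items():
                net[winner][loser] += count
                net[loser][winner] -= count

        # Rank by total net head-to-head record; ties (here B and C) would fall
        # back to the head-to-head result or point differential described above.
        teams = sorted(net, key=lambda t: sum(net[t].values()), reverse=True)
        print(teams)  # e.g. ['A', 'B', 'C', 'D'], with B/C order depending on the tie-break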

    Read the article

  • Dependency errors on installing Banshee

    - by Ben Cracknell
    I just installed Ubuntu 12.10 (Verified the ISO hash as well). The VERY first thing I did was open the software centre and try to install banshee. I am met with the following error:
    The following packages have unmet dependencies:
    banshee:
      Depends: libc6 (>= 2.7) but 2.15-0ubuntu20 is to be installed
      Depends: libglib2.0-0 (>= 2.34.1) but 2.34.0-1ubuntu1 is to be installed
      Depends: libgtk2.0-0 (>= 2.24.0) but 2.24.13-0ubuntu2 is to be installed
      Depends: libsoup-gnome2.4-1 (>= 2.27.4) but 2.40.0-0ubuntu1 is to be installed
      Depends: libsoup2.4-1 (>= 2.26.1) but 2.40.0-0ubuntu1 is to be installed
      Depends: libx11-6 (>= 2:1.4.99.1) but 2:1.5.0-1 is to be installed
      Depends: mono-runtime (>= 2.10.1) but 2.10.8.1-5ubuntu1 is to be installed
      Depends: libc0.1 (>= 2.15) but it is not going to be installed
      Depends: libgconf2.0-cil (>= 2.24.0) but 2.24.2-2 is to be installed
      Depends: libgdk-pixbuf2.0-0 (>= 2.26.4) but 2.26.4-0ubuntu1 is to be installed
      Depends: libglib2.0-cil (>= 2.12.10-1ubuntu1) but 2.12.10-4 is to be installed
      Depends: libgtk2.0-cil (>= 2.12.10-1ubuntu1) but 2.12.10-4 is to be installed
      Depends: libmono-cairo4.0-cil (>= 2.10.1) but 2.10.8.1-5ubuntu1 is to be installed
      Depends: libmono-corlib4.0-cil (>= 2.10.1) but 2.10.8.1-5ubuntu1 is to be installed
      Depends: libmono-posix4.0-cil (>= 2.10.1) but 2.10.8.1-5ubuntu1 is to be installed
      Depends: libmono-system-core4.0-cil (>= 2.10.3) but 2.10.8.1-5ubuntu1 is to be installed
      Depends: libmono-system4.0-cil (>= 2.10.7) but 2.10.8.1-5ubuntu1 is to be installed
      Depends: gnome-icon-theme (>= 2.16) but 3.6.0-0ubuntu2 is to be installed
    I should note that the banshee application appears three times when searching for it: http://i.imgur.com/fJOsb.png Other applications install fine though. I installed the latest updates and still received the same error. I even tried reinstalling Ubuntu, but the same thing happened.

    Read the article

  • Edd strikes again &ndash; IronRuby for Rubyists on InfoQ

    - by Eric Nelson
    Colleague, friend and generally top guy on IronRuby Edd Morgan has just been published over on InfoQ. To whet the appetite… a snippet or three. IronRuby for Rubyists IronRuby is Microsoft's implementation of the Ruby language we all know and love with the added bonus of interoperability with the .NET framework — the Iron in the name is actually an acronym for 'Implementation running on .NET'. It's supported by the .NET Common Language Runtime as well as, albeit unofficially, the Mono project. You'd be forgiven for harbouring some question in your mind about running a dynamic language such as Ruby atop the CLR - that's where the DLR (Dynamic Language Runtime) comes in. The DLR is Microsoft's way of providing dynamic language capability on top of the CLR. Both IronRuby and the DLR are, as part of Microsoft's commitment to open source software, available as part of the Microsoft Public License on GitHub and CodePlex respectively… And Metaprogramming with IronRuby The art and science of metaprogramming — especially in Ruby, where it's an absolute joy — is something that could very easily span an entire article. As you would hope, IronRuby code is fully able to manipulate itself, allowing you to bend your classes to your whim just as you would expect with a good dynamic language… And Riding the irails? So let's get to the point. I think it's a solid bet to make that a large proportion of Ruby programmers are familiar with the Rails framework - perhaps it's even safe to assume that most were first led to the Ruby language by the siren song of the Rails framework itself. Long story short, IronRuby is compatible enough to run your Rails app… Now… get yourself over to the full article and also check out some of Edd's other work below. Related Links: 5 Steps to getting started with IronRuby Mini Book Review of IronRuby Unleashed by Shay Friedman Guest Post: Using IronRuby and .NET to produce the ‘Hello World of WPF’ – also by Edd Getting PHP and Ruby working on Windows Azure and SQL Azure Guest Post: What's IronRuby, and how do I put it on Rails? – also by Edd

    Read the article

  • Language parsing to find important words

    - by Matt Huggins
    I'm looking for some input and theory on how to approach a lexical topic. Let's say I have a collection of strings, which may just be one sentence or potentially multiple sentences. I'd like to parse these strings and rip out the most important words, perhaps with a score that denotes how likely the word is to be important. Let's look at a few examples of what I mean. Example #1: "I really want a Keurig, but I can't afford one!" This is a very basic example, just one sentence. As a human, I can easily see that "Keurig" is the most important word here. Also, "afford" is relatively important, though it's clearly not the primary point of the sentence. The word "I" appears twice, but it is not important at all since it doesn't really tell us any information. I might expect to see a hash of word/scores something like this:
    "Keurig" => 0.9
    "afford" => 0.4
    "want" => 0.2
    "really" => 0.1
    etc...
    Example #2: "Just had one of the best swimming practices of my life. Hopefully I can maintain my times come the competition. If only I had remembered to take off my non-waterproof watch." This example has multiple sentences, so there will be more important words throughout. Without repeating the point exercise from example #1, I would probably expect to see two or three really important words come out of this: "swimming" (or "swimming practice"), "competition", & "watch" (or "waterproof watch" or "non-waterproof watch" depending on how the hyphen is handled). Given a couple of examples like this, how would you go about doing something similar? Are there any existing (open source) libraries or algorithms in programming that already do this?
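
    As a rough sketch of one way to think about it (Python, for illustration; the stopword list and background-frequency table are made-up assumptions standing in for a real corpus or a tf-idf weighting), the idea is that a word scores highly when it appears in the text but is rare in general English, which is exactly what pushes "Keurig" to the top:

        import re
        from collections import Counter

        # Hypothetical background frequencies (per million words); a real system
        # would load these from a corpus or compute tf-idf across many documents.
        BACKGROUND = {"want": 900.0, "afford": 40.0, "really": 1200.0, "keurig": 0.5}
        STOPWORDS = {"i", "a", "an", "the", "but", "one", "can", "t", "of", "my", "to"}

        def keyword_scores(text):
            words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]
            counts = Counter(words)
            if not counts:
                return {}
            scores = {}
            for word, count in counts.items():
                # Words rare in general English get boosted; common ones are damped.
                rarity = 1.0 / (1.0 + BACKGROUND.get(word, 1.0))
                scores[word] = count * rarity
            top = max(scores.values())
            # Normalise so the best word scores 1.0, roughly matching the 0..1 scale above.
            return {w: round(s / top, 2) for w, s in sorted(scores.items(), key=lambda kv: -kv[1])}

        print(keyword_scores("I really want a Keurig, but I can't afford one!"))
        # e.g. {'keurig': 1.0, 'afford': 0.04, 'want': 0.0, 'really': 0.0}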

    Read the article

  • What causes critical glib errors (when coding using messaging menu)?

    - by fluteflute
    If I run the python code below (almost entirely from this useful blog post) then I get three identical nasty looking error messages in the terminal. What might be causing them? I note the number (5857 in the example below) changes slightly on each run. What does this number signify? Is it a memory location or something similar?

        (messaging-menu.py:5857): GLib-GIO-CRITICAL **: g_dbus_method_invocation_return_dbus_error: assertion `error_name != NULL && g_dbus_is_name (error_name)' failed
        (messaging-menu.py:5857): GLib-GIO-CRITICAL **: g_dbus_method_invocation_return_dbus_error: assertion `error_name != NULL && g_dbus_is_name (error_name)' failed
        (messaging-menu.py:5857): GLib-GIO-CRITICAL **: g_dbus_method_invocation_return_dbus_error: assertion `error_name != NULL && g_dbus_is_name (error_name)' failed

    I'm running this on Natty, I should probably find out if I get the same errors in 10.10 though...

        import gtk

        def show_window_function(x, y):
            print x
            print y

        # get the indicate module, which does all the work
        import indicate

        # Create a server item
        mm = indicate.indicate_server_ref_default()

        # If someone clicks your server item in the MM, fire the server-display signal
        mm.connect("server-display", show_window_function)

        # Set the type of messages that your item uses. It's not at all clear which types
        # you're allowed to use, here.
        mm.set_type("message.im")

        # You must specify a .desktop file: this is where the MM gets the name of your
        # app from.
        mm.set_desktop_file("/usr/share/applications/nautilus.desktop")

        # Show the item in the MM.
        mm.show()

        # Create a source item
        mm_source = indicate.Indicator()

        # Again, it's not clear which subtypes you are allowed to use here.
        mm_source.set_property("subtype", "im")

        # "Sender" is the text that appears in the source item in the MM
        mm_source.set_property("sender", "Unread")

        # If someone clicks this source item in the MM, fire the user-display signal
        mm_source.connect("user-display", show_window_function)

        # Light up the messaging menu so that people know something has changed
        mm_source.set_property("draw-attention", "true")

        # Set the count of messages in this source.
        mm_source.set_property("count", "15")

        # If you prefer, you can set the time of the last message from this source,
        # rather than the count. (You can't set both.) This means that instead of a
        # message count, the MM will show "2m" or similar for the time since this
        # message arrived.
        # mm_source.set_property_time("time", time.time())

        mm_source.show()

        gtk.main()

    Read the article

  • How to Fix this specific Google "Fetch as Googlebot" error appearing on my Webmaster Tools?

    - by UXdesigner
    Good day, I'm currently trying to find out why I have lost all of my website's rank in Google. I don't even appear in Google results for the domain, but other sites that link to me do appear in the Google results. I think it's all down to leaving my site alone for two months and finding I had 20k comment-spam entries, which I completely deleted and fixed with filters and by adding a new Disqus comment service. Thing is, I added my site to Google Webmaster Tools and I'm finding out several awful things. For example, when I click Fetch as Googlebot, I receive the error message below in response to my request, and I don't even know what the real problem is or how to fix it. I simply don't get it. This is what appears:

        Date: Wednesday, July 20, 2011 9:43:35 AM PDT
        Googlebot Type: Web
        Download Time (in milliseconds): 55
        HTTP/1.1 403 Forbidden
        Date: Wed, 20 Jul 2011 16:43:36 GMT
        Server: Apache
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 248
        Keep-Alive: timeout=2, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=iso-8859-1

        403 Forbidden
        Forbidden
        You don't have permission to access / on this server.
        Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request.

    Do you guys know anything about this problem? I need to have Google crawl my site again. I used to have really nice Google results over the past three years. Now there's nothing. Thanks,

    Read the article
