Search Results

Search found 20998 results on 840 pages for 'e business suite release'.

Page 236 of 840

  • Consumer Oriented Search In Oracle Endeca Information Discovery – Part 1

    - by Bob Zurek
    Information Discovery, a core capability of Oracle Endeca Information Discovery, enables business users to rapidly search, discover and navigate through a wide variety of big data, including structured, unstructured and semi-structured data. One of the key capabilities that differentiates our solution from others in the Information Discovery market is our deep support for search across this growing amount of varied big data. Our method and approach are very different from the classic simple keyword search found in many information discovery solutions. In this first part of a series on the topic of search, I will walk you through many of the key capabilities that go beyond the simple search box that you might experience in products where search was clearly an afterthought or an attempt to catch up to our core capabilities in this area. Let's explore.

    The core data management solution of Oracle Endeca Information Discovery is the Endeca Server, a hybrid search-analytical database that is highly scalable and column-oriented in nature. We will talk in more technical detail about the capabilities of the Endeca Server in future blog posts, as this post is intended to give you a feel for the deep search capabilities that are an integral part of the Endeca Server. The Endeca Server provides best-of-breed search features as well as a new class of features that are the first to be designed around the requirement to bridge structured, semi-structured and unstructured big data. Some of the key search features include type-aheads, automatic alphanumeric spell correction, positional search, Boolean operators, wildcarding, natural language search, category search, and query classification dialogs. This is just a subset of the advanced search capabilities found in Oracle Endeca Information Discovery. Search is an important feature that makes it possible for business users to explore the diverse data sets the Endeca Server can hold at any one time. The search capabilities in the Endeca Server differ from other Information Discovery products with simple "search boxes" in the following ways:

    The Endeca Server Supports Exploratory Search. Enterprise data frequently requires the user to explore content through an ad hoc dialog, with guidance that helps them succeed. This has implications for how to design search features. Traditional search doesn't assume a dialog, and so it uses relevance ranking to get its best guess to the top of the results list. It calculates many relevance factors for each query, like word frequency, distance, and meaning, and then reduces those many factors to a single score based on a proprietary "black box" formula. But how can a business user act on the information that a document is, say, only 38.1% relevant? In contrast, exploratory search gives users the opportunity to clarify what is relevant to them through refinements and summaries. This approach has received consumer endorsement through popular ecommerce sites, where guided navigation across a broad range of products has helped consumers better discover choices that meet their sometimes undetermined requirements. This same model exists in Oracle Endeca Information Discovery. In fact, the Endeca Server powers many of the most popular e-commerce sites in the world.

    The Endeca Server Supports Cascading Relevance. Traditional approaches to search reduce many relevance weights to a single score. This means that if a result with a good title match gets a similar score to one with an exact phrase match, they'll appear next to each other in a list. But a user can't deduce from their scores why each got its ranking, even though that information could be valuable. Oracle Endeca Information Discovery takes a different approach. The Endeca Server stratifies results by a primary relevance strategy, and then breaks ties within a stratum by ordering them with a secondary strategy, and so on. Application managers get explicit means to compose these strategies based on their knowledge of their own domain. This approach gives both business users and managers a deterministic way to set and understand relevance.

    Now that you have an understanding of two of the core search capabilities in Oracle Endeca Information Discovery, our next blog post on this topic will discuss more advanced features, including set search and second-order relevance, as well as an understanding of faceted search mechanisms that include queries and filters.
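    To make cascading relevance concrete, here is a minimal sketch in plain Python of the stratify-then-break-ties idea. This is purely illustrative and not Endeca's actual API; the scoring fields and strategies are hypothetical stand-ins.

        # Illustrative only: rank results by a primary strategy, breaking ties
        # with a secondary strategy. Tuple keys compare element-wise, so a tie
        # on the primary score falls through to the secondary score.
        results = [
            {"id": 1, "title_match": 1.0, "phrase_match": 0.2},
            {"id": 2, "title_match": 1.0, "phrase_match": 0.9},
            {"id": 3, "title_match": 0.4, "phrase_match": 1.0},
        ]

        def rank(docs, primary, secondary):
            # Negate scores so higher relevance sorts first.
            return sorted(docs, key=lambda d: (-primary(d), -secondary(d)))

        ranked = rank(results,
                      primary=lambda d: d["title_match"],
                      secondary=lambda d: d["phrase_match"])
        print([d["id"] for d in ranked])  # -> [2, 1, 3]

    Unlike a single opaque score, each position in this ordering can be explained: result 2 beats result 1 only on the secondary strategy, and both beat result 3 on the primary one.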

    Read the article

  • The Rise of Project Intelligence and Why It Matters

    - by Melissa Centurio Lopes
    By Amy DeWolf

    Are you doing any of these in your organization? How are you leveraging historical data to forecast projects?

    There's a lot going on in government today. The economic pressures agencies feel from the uncertainty of budget cuts and sequestration affect every part of an organization, including the Project Management Office (PMO). The PMO is responsible for monitoring and administering government IT projects. As time goes on, priorities shift, technology advances, and new regulations are imposed, all of which make planning and executing projects more difficult. For example, think about your own projects. How many boxes do you need to check, and how many hoops do you need to jump through, to ensure you comply with new regulations? While new regulations and technology advancements can be a good thing, they add an additional layer of complexity to already complex projects.

    To overcome some of these pressures, particularly new regulations, many in the PMO world are adopting a new approach: Project Intelligence (PI). According to a new Oracle Primavera white paper, The Rise of Project Intelligence: When Project Management is Just Not Enough, "PI uses Business Intelligence methods to leverage historical project data to make more informed decisions and greatly enhance project execution." Currently, project managers plan and forecast the possible phases in an execution cycle. However, most project managers don't have the proper tools to do this as effectively as they would like. As the white paper notes, "The underlying deficiencies in most forecasting approaches are that 1) the PM fails in most instances to leverage historical data and 2) the PM doesn't employ current Business Intelligence tools." PI seeks to overturn this by combining the modeling tools used in Business Intelligence for projects with the understanding of Emotional Intelligence for managing people.

    Simply put, Project Intelligence is built on four main pillars:
    1. Actively use historical data to forecast project cycles
    2. Understand the intricacies of complex projects
    3. Enhance social and emotional intelligence in projects
    4. Actively use Business Intelligence tools

    Read our complimentary white paper and discover the importance of emotional intelligence and best practices for improving projects, specifically in terms of communication.

    Read the article

  • The battle between Java vs. C#

    The battle between Java and C# has been a big debate in the development community over the last few years. Both languages have specific pros and cons based on the needs of a particular project. In general, both languages utilize a similar coding syntax based on C++ and offer developers similar functionality. That being said, the communities supporting each of these languages are very different. The divide between the communities is much like the political divide in America, where the Java community would represent the Democrats and the .NET community would represent the Republicans.

    The Democratic Party is a proponent of the working class and the general population. Currently, Java is deeply entrenched in the open source community and is distributed freely to anyone who has an interest in using it. Open source communities rely on developers to keep a project alive by constantly contributing code to make applications better; essentially, the code is developed by the community. This is in stark contrast to the C# community, which is typically a pay-to-play community, meaning that you must pay for code that you want to use because it is developed as a product to be marketed and sold for a profit. This ties back to my reference to the Republicans, because they typically represent the needs of business and personal responsibility. It is emphasized by the belief that code is a commodity that can be sold for a profit, which is in direct conflict with the laissez-faire beliefs of the open source community.

    Beyond the general differences between Java and C#, they also target two different environments. Java is designed to be environment-independent and only requires that users have a Java virtual machine installed in order for Java code to execute. C#, on the other hand, typically targets systems running a Windows operating system with the appropriate version of the .NET Framework installed. Recently, however, there has been a push by a segment of the open source community, based around the Mono project, that lets C# code run on other, non-Windows operating systems. In addition, C# compiles into an intermediate language, the Common Intermediate Language (CIL), and this is what the Common Language Runtime (CLR) executes when the program runs. Because C# is reduced to CIL, it can be combined with other languages that also compile to CIL, like Visual Basic .NET and F#. This interaction between multiple languages in the .NET Framework enables projects to utilize existing code bases regardless of the actual syntax, because they can all be compiled to CIL and executed as one code base.

    As a software engineer, I personally feel that it is really important to learn as many languages as you can, or at least be open to learning as many languages as you can, because no one language will work in every situation. In some cases Java may be the better choice for a project, and in others C#. It really depends on the requirements of a project and the time constraints. In addition, I feel it is really important to concentrate on understanding the logic of programming and to be able to translate business requirements into technical requirements. If you can understand both programming logic and business requirements, then deciding which language to use is basically choosing what syntax to write for a given business problem or need. With regard to code refactoring and dynamic languages, it really does not matter. Eventually all projects will be refactored or decommissioned to allow for progress. This is the way of life in the software development industry. The language of a project should not be chosen based on the fact that the project will eventually be refactored, because they all will be.

    Read the article

  • Certify September Updates

    - by Sadia2
    We have added some release and platform certifications to MOS Certify.

    Applications: Oracle Demantra 12.2.2, 7.3.1.5, 7.3.1.4, 7.3.0.2.0, 7.3.0.0.0
    Collaboration Technologies: Oracle Beehive 2.0.1.8.0
    Database: Oracle Database Client 12.1.0.1.0, Oracle Clusterware 11.2.0.4.0, Oracle Database 11.2.0.4.0, Oracle Real Application Clusters 11.2.0.4.0
    E-Business Suite: Oracle E-Business Suite 12.2.2, 12.1.3, 12.1.2, 12.1.1, 12.0.6, 11.5.10.2
    Edge Applications: Oracle AutoVue 20.2.2, 20.2.1, 20.2.0
    Enterprise Manager: Enterprise Manager Base Platform - OMS 12.1.0.3.0, Oracle Real User Experience Insight 12.1.0.4.0, 12.1.0.3.0, 12.1.0.1, 11.1
    FSGBU Insurance Group: Oracle Health Insurance Claims 2.13.3.0.0
    Fusion Middleware: Oracle Business Intelligence Applications 11.1.1.7.1, 7.9.6.4.0, Oracle Discoverer 11.1.1.6.0, Discoverer Administrator 11.1.1.6.0, Discoverer Desktop 11.1.1.6.0, Oracle JDK 1.7.0_40, 1.7.0_25, Oracle JRE 1.7.0_40, 1.7.0_25, Oracle JRockit 6u45 R28.2.7+, Oracle WebCenter Sites 11.1.1.8.0, Oracle WebCenter Sites: Community-Gadgets 11.1.1.8.0, Oracle WebCenter Sites: CIP for File Systems and MS SharePoint 11.1.1.8.0, Oracle WebCenter Sites: CIP for EMC Documentum 11.1.1.8.0
    JD Edwards EnterpriseOne: JD Edwards EnterpriseOne Business Services Server 9.1.3.0, 9.1.2.0, 9.1.0.0, JD Edwards EnterpriseOne Mobile Applications 9.1.2.0
    Oracle Fusion Applications: Oracle Fusion Applications 11.1.7.0.0
    Primavera GBU: Primavera Unifier 9.13.0.0
    Siebel Enterprise: Siebel Application Server 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Database Server 8.2.2.3.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Remote Client 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Tools Client 8.2.2.4.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.9.0, Siebel SSO Integration 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0

    Read the article

  • Get Ready for Anytime, Anywhere Engagement

    - by Christie Flanagan
    Are you ready for 2015? According to IDC, 2015 is the year when more users are projected to access the internet using mobile devices than with PCs or other wired devices. There is no doubt that mobile devices are a critical means of communication today, and they are on track to become increasingly important in the coming years. However, device formats are so varied that delivering a mobile web experience that will engage site visitors and enhance your brand can be a daunting task. Solutions that empower organizations to easily extend their web presence to the mobile channel, while saving significant time and effort in managing mobile sites, are now essential in our ever-connected mobile world. So what are some of the things organizations should look for in such a solution?

    Mobile device form factors, networks, protocols, and browsers vary widely, and reformatting web content for thousands of different device and software combinations is a prohibitive task. An effective mobile solution can make this process seamless by automatically formatting designated web content for mobile delivery. By automatically detecting a site visitor's device configuration, the selected web content can be sized and formatted for optimal display on that particular device. This can save tremendous time otherwise spent building, formatting, and maintaining individual websites or mobile applications for different mobile devices.

    It's not enough to simply support the thousands of different mobile device types that are out there. It's also critical to make it easy for marketers and other business users to manage mobile sites and mobile content. Those responsible for maintaining an organization's web and mobile experiences need the ability to edit content using rich text editor tools and then preview that content directly in the context of the mobile website and the traditional website, ideally from the same business user interface. Capabilities such as these make managing the web experience for mobile devices easy, even with frequently changing content, across a multitude of different devices. When content or business needs change, the business user needs only to change site content once, and it is seamlessly deployed to the web and all mobile channels.

    Geo-location is another critical input to making the online experience engaging and relevant for web visitors who are increasingly mobile. A mobile solution should enable use of device GPS data to deliver location-based content and services to mobile website visitors. Organizations can provide mobile site visitors with location-sensitive search results, location-based offers and recommendations, integration of maps and directions into site content, and much more – all critical for meeting the needs of those on the go.

    To hear more about how mobile is changing the game, check out our recent webcast with Ted Schadler, Vice President, Principal Analyst, Forrester, where he discussed why mobile is the new face of engagement, or learn more about how to extend your web presence to the mobile channel with Oracle WebCenter Sites and Oracle WebCenter Sites Mobility Server.

    Read the article

  • Getting Help with 'SEPA' Questions

    - by MargaretW
    What is 'SEPA'?

    The Single Euro Payments Area (SEPA) is a self-regulatory initiative for the European banking industry championed by the European Commission (EC) and the European Central Bank (ECB). The aim of the SEPA initiative is to improve the efficiency of cross-border payments and achieve economies of scale by developing common standards, procedures, and infrastructure. The SEPA territory currently consists of 33 European countries -- the 28 EU states, together with Iceland, Liechtenstein, Monaco, Norway and Switzerland. Part of that infrastructure includes two new SEPA instruments that were introduced in 2008:

    SEPA Credit Transfer (a Payables transaction in Oracle EBS)
    SEPA Core Direct Debit (a Receivables transaction in Oracle EBS)

    A SEPA Credit Transfer (SCT) is an outgoing payment instrument for the execution of credit transfers in euro between customer payment accounts located in SEPA. SEPA Credit Transfers are executed on behalf of an Originator holding a payment account with an Originator Bank in favor of a Beneficiary holding a payment account at a Beneficiary Bank. In R12 of Oracle applications, the current SEPA Credit Transfer implementation is based on Version 5 of the "SEPA Credit Transfer Scheme Customer-To-Bank Implementation Guidelines" and the "SEPA Credit Transfer Scheme Rulebook" issued by the European Payments Council (EPC). These guidelines define the rules to be applied to the UNIFI (ISO 20022) XML message standards for the implementation of SEPA Credit Transfers in the customer-to-bank space. This format is compliant with SEPA Credit Transfer version 6.

    A SEPA Core Direct Debit (SDD) is an incoming payment instrument used for making domestic and cross-border payments within the 33 countries of SEPA, wherein the debtor (payer) authorizes the creditor (payee) to collect the payment from the debtor's bank account. The payment can be a fixed amount, like a mortgage payment, or variable amounts such as those of invoices. The SEPA Core Direct Debit scheme replaces the various country-specific direct debit schemes currently prevailing within the SEPA zone. SDD is based on the ISO 20022 XML messaging standards, version 5.0 of the "SEPA Core Direct Debit Scheme Rulebook", and the "SEPA Direct Debit Core Scheme Customer-to-Bank Implementation Guidelines". This format is also compliant with SEPA Core Direct Debit version 6.

    EU Regulation No. 260/2012 established the technical and business requirements for both instruments in euro. The regulation is referred to as the "SEPA end-date regulation" and also defines the deadlines for the migration to the new SEPA instruments:

    Euro Member States: February 1, 2014
    Non-Euro Member States: October 31, 2016
    Oracle and SEPA

    Within the Oracle E-Business Suite of applications, Oracle Payables (AP), Oracle Receivables (AR), and Oracle Payments (IBY) provide SEPA transaction capabilities for the following releases, as noted:

    Release 11.5.10.x - AP & AR
    Release 12.0.x - AP & AR & IBY
    Release 12.1.x - AP & AR & IBY
    Release 12.2.x - AP & AR & IBY

    Resources

    To assist our customers in migrating, using, and troubleshooting SEPA functionality, a number of resource documents related to SEPA are available on My Oracle Support (MOS), including:

    R11i: AP: White Paper - SEPA Credit Transfer V5 support in Oracle Payables, Doc ID 1404743.1
    R11i: AR: White Paper - SEPA Core Direct Debit v5.0 support in Oracle Receivables, Doc ID 1410159.1
    R12: IBY: White Paper - SEPA Credit Transfer v5 support in Oracle Payments, Doc ID 1404007.1
    R12: IBY: White Paper - SEPA Core Direct Debit v5 support in Oracle Payments, Doc ID 1420049.1
    R11i/R12: AP/AR/IBY: Get Help Setting Up, Using, and Troubleshooting SEPA Payments in Oracle, Doc ID 1594441.2
    R11i/R12: Single European Payments Area (SEPA) - UPDATES, Doc ID 1541718.1
    R11i/R12: FAQs for Single European Payments Area (SEPA), Doc ID 791226.1

    Read the article

  • WF4 – Guess the number game!

    - by MarkPearl
    I posted yesterday about how good WF4 was looking. Today I thought I would show some real basics that I was able to figure out. This will be a simple example: I am going to make a flowchart workflow which will prompt the user to guess a number until they guess the right one. Let's begin…

    Make a new project and make it a Workflow Console Application. Then select the workflow file and drag a Flowchart activity onto the designer. This will now show a green start circle in the designer form. We are going to work with primitives to start with. We now drag a few objects onto the workflow: the WriteLine, Assign and Decision items. Once they are dragged onto the designer we will want to link them up. The order in which they are linked is critical, since it determines the order of execution. In this case, we want the system to first ask "Guess a number", then to wait for the user to input a number, and then to display "You got it" if they got it right, and "Try again" if they got it wrong. So we now link the arrows to the objects. This is done by moving the mouse pointer over the source object, clicking on one of its toggles, dragging to the next object and releasing the button over one of that object's toggles. This places an arrow from the source object to the target object.

    Okay… pretty simple stuff – now we just need these primitive objects to do stuff. Let's start with the WriteLine primitive. We place the text in inverted commas in the Text field. Because this field accepts any valid VB expression, we could have put variables etc. in there if we wanted to.

    The next thing we want to do is allow the user to input a number. This brings up an interesting problem: if a user were to type in a number, there would need to be some way to declare a variable to hold that value for the life of the workflow. We can achieve this by declaring a variable. To declare a variable, move your cursor over the Variables tab at the bottom of the workflow, then type the name of the new variable in the "Create Variable" field and set its type. Now that we have a variable, we want to call the Console.ReadLine method and assign the inputted value from the console to that variable. The code that cannot be seen here is actually this – Convert.ToInt32(Console.ReadLine())

    We now have a workflow that first prompts the user for a number, then allows the user to type in a number. We are almost done; we just need to make the system react to the value inputted. There are a few ways we could do this; I am going to use the Decision item. So select the Decision object on the designer, view its properties (F4 for me), and place a condition in the Condition field. For simplicity's sake I have decided that if the user guesses 10, they will have guessed the number.

    This is now the completed workflow. It's really easy to understand and shows some really powerful principles for business applications. You can run the application and see what it does. Imagine writing business solutions that do not worry about the exact flow of objects, but simply allow a business analyst or someone to configure the solution to work exactly as the business rules would dictate. And if the rules changed six months later, all they would need to do is re-drag some of the flows. Now I do not know if WF4 will allow for this, but it feels like a step in the right direction.
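    For readers who want to see the control flow the diagram encodes, here is the same guess-until-correct loop as plain code. This is just an illustrative sketch in Python, not the workflow itself – the article's point is precisely that WF4 lets you express this flow visually instead.

        # Plain-code equivalent of the flowchart: prompt (WriteLine), read and
        # assign (Assign), then branch (Decision) and loop until the guess is 10.
        def guess_the_number(target=10):
            while True:
                guess = int(input("Guess a number: "))
                if guess == target:
                    print("You got it")
                    break
                print("Try again")

        if __name__ == "__main__":
            guess_the_number()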

    Read the article

  • Should I manage authentication on my own if the alternative is very low in usability and I am already managing roles?

    - by rumtscho
    As a small in-house dev department, we only have experience with developing applications for our intranet. We use the existing Active Directory for user account management. It contains the accounts of all company employees and many (but not all) of the business partners we have a cooperation with.

    Now, the top management wants a technology exchange application, and I am the lead dev on the new project. Basically, it is a database containing our know-how, with a web frontend. Our employees, our cooperating business partners, and people who wish to become our cooperating business partners should have access to it and see what technologies we have, so they can trade for them with the department which owns them. The technologies are not patented, but very valuable to competitors, so the department bosses are paranoid about somebody unauthorized gaining access to their technology descriptions. This constraint necessitates a nightmarishly complicated, multi-dimensional RBAC-hybrid model. As the Active Directory doesn't even contain all the information needed to infer the roles I use, I will have to manage roles plus per-technology, per-user granted access exceptions within my system.

    The current plan is to use Active Directory for authentication. This will result in a multi-hour registration process for our business partners, where the database owner has to manually create logins in our Active Directory and send them credentials. If I manage the logins in my own system, we could improve the usability a lot, for example by letting people have an active (but unprivileged) account as soon as they register. It seems to me that, once I am keeping a users table in the DB anyway (and managing ugly details like storing historical user IDs so that recycled user IDs within the Active Directory don't unexpectedly get rights to view someone's technologies), the additional complexity of implementing authentication functionality will be minimal. Therefore, I am starting to lean towards doing my own user login management and forgetting the AD altogether.

    On the other hand, I see some reasons to stay with Active Directory. First, the conventional wisdom I have heard from experienced programmers is to not do your own user management if you can avoid it. Second, we have code I can reuse for connecting to the Active Directory, while I would have to code the authentication if it were done in-system (and my boss has clearly stated that getting the project delivered on time has much higher priority than delivering a system with high usability). Third, I am not a very experienced developer (this is my first lead position) and have never done user management before, so I am afraid that I am overlooking some important reasons to use the AD, or that I am underestimating the amount of work left to do my own authentication.

    I would like to know if there are more reasons to go with the AD authentication mechanism. Specifically, if I want to do my own authentication, what would I have to implement besides a secure connection for the login screen (which I would need anyway, even if I am only transporting the password to the AD), lookup of a password hash, and a mechanism for password recovery (which will probably include manual identity verification, so no need for complex mTAN-like solutions)? And, if you have experience with such security-critical systems, which one would you use and why?
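    For reference, the "lookup of a password hash" part is small if you lean on a standard key-derivation function. A minimal sketch (Python standard library only; the iteration count and field handling are illustrative assumptions, not a vetted policy):

        import hashlib, hmac, os

        ITERATIONS = 200_000  # illustrative; tune to your hardware

        def hash_password(password):
            # Per-user random salt plus PBKDF2-HMAC-SHA256; store both values.
            salt = os.urandom(16)
            digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
            return salt, digest

        def verify_password(password, salt, stored_digest):
            candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
            # Constant-time comparison avoids leaking timing information.
            return hmac.compare_digest(candidate, stored_digest)

    The real effort in rolling your own authentication tends to sit around this core: account lockout, credential reset flows, and audit logging.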

    Read the article

  • Faster Trip to Innovation with Simplified Data Integration: Sabre Holdings Case Study

    - by Tanu Sood
    Author: Irem Radzik, Director of Product Marketing, Data Integration, Oracle

    In today's fast-paced, competitive environment, IT teams are under pressure to deliver technology solutions for many critical business initiatives as fast as possible. When the focus is on speed, it can be easy to continue using old-style, point-to-point custom scripts that grow organically to the point where they are unmanageable and too costly to maintain. As data volumes, data sources, and end users grow, uncoordinated data integration efforts create significant inefficiencies for both IT and business users. In addition to losing IT productivity to maintaining spaghetti architecture, data integrity becomes a concern as well. Errors caused by inconsistent data and manual data entry can prove very costly for companies and disrupt business activities. Many industry leaders now recognize that data should be moved in an automated and reliable manner across all platforms so that there is one version of the truth.

    By simplifying their data integration architecture and standardizing on a centralized approach, IT teams can accelerate time to market. In particular, a centralized, shared-service approach brings agility, increases IT productivity, and frees up resources for innovation. One such industry leader that simplified its data integration architecture is Sabre Holdings. Sabre Holdings provides distribution and technology solutions for the travel industry, and is a winner of the 2011 Oracle Excellence Award for Fusion Middleware in the data integration category. I had the pleasure of hosting Sabre Holdings on a public webcast to discuss their data integration best practices for data warehousing. In this webcast, Sabre's Amjad Saeed presented how the company reduced complexity by consolidating systems and standardizing development on Oracle Data Integrator and Oracle GoldenGate for its global data warehouse development team. With Oracle's complete real-time data integration solution, Sabre also streamlined support and maintenance operations, achieved a real-time view of the execution of its integration processes, and can manage data warehouse and business intelligence solution performance on demand. By reducing complexity and leveraging timely market insights, the company was able to decrease time to market by 40%.

    You can now listen to the webcast on demand: Sabre Holdings Case Study: Accelerating Innovation using Oracle Data Integration. I invite you to hear directly from Sabre how to use advanced data integration capabilities to enable accelerated innovation. To learn more about Oracle's data integration offering, you can download our free resources.

    Read the article

  • The Minimalist Approach to Content Governance - Request Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick.

    For each project, regardless of size, it is critical to understand the required ownership, business purpose, prerequisite education / resources needed to execute, and success criteria around it. Without doing this, there is no way to get a handle on the content life-cycle, resulting in a mass of orphaned material. This lowers the quality of end user experiences.

    The good news is that by using a simple process in this request phase, we will not have to revisit it unless something drastic changes in the project. For each of the elements mentioned above, the why, how (technically focused) and impact are outlined with the intent of providing the most value to a small team.

    1. Ownership
    Why - Without ownership information it will not be possible to track and manage any of the content and take advantage of many features of enterprise content management technology. To hedge against this, we need to ensure that both an individual and their group or department within the organization are associated with the content.
    How - Apply metadata that indicates the owner and the department or group that has responsibility for the content.
    Impact - It is possible to keep the content system optimized by running native reports against the metadata and acting on them based on what has been outlined for success criteria. This will maximize the end user experience, as content will be faster to locate and more relevant to the user by virtue of working through a smaller collection.

    2. Business Purpose
    Why - This simple step will weed out requests that have tepid justification, as users will most likely not spend the effort to request resources if they do not have a real need.
    How - Use a simple online form to collect the request and workflow it to management, native to the content system.
    Impact - Minimizes the amount of user-generated content that is of low value to the organization.

    3. Prerequisite Education / Resources Needed
    Why - If a project cannot be properly staffed, the probability of its success is going to be low. Outlining the resources needed - in both skill set and duration - will cause the requesting party to think critically about the commitment needed to complete their project and what gap must be closed with regard to education of those resources.
    How - In the simple request form outlined above, resources and a commitment to fulfilling any needed education should be included, with a brief acceptance clause that outlines the requesting party's commitment.
    Impact - This stage acts as a formal commitment to ensuring that resources are able to execute on the vision for the project.

    4. Success Criteria
    Why - Similar to the business purpose, this is a key element in helping to determine whether the project and its respective content should continue to exist if it does not meet its intended goal.
    How - Set a review point for the project content that will check progress against the originally outlined success criteria and then determine the fate of the content. This can even include logic that will tell the content system to remove items that have not been opened by any users in X amount of time (a minimal sketch of this check follows the list below).
    Impact - This ensures that projects and their contents do not live past their useful lifespans. Just as with orphaned content, non-relevant information will slow users' access to the materials that are relevant to their jobs.
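    As a concrete illustration of the retention logic in item 4 (the field names are hypothetical; a real system would drive this from its own audit metadata), the "not opened in X amount of time" rule reduces to a simple filter:

        from datetime import datetime, timedelta

        # Hypothetical content records with a last-opened audit timestamp.
        items = [
            {"id": "spec-001", "last_opened": datetime(2013, 1, 5)},
            {"id": "spec-002", "last_opened": datetime(2013, 9, 20)},
        ]

        def stale_items(items, max_idle_days, now=None):
            # Flag anything not opened by any user within the idle window.
            now = now or datetime.now()
            cutoff = now - timedelta(days=max_idle_days)
            return [item for item in items if item["last_opened"] < cutoff]

        for item in stale_items(items, max_idle_days=180, now=datetime(2013, 10, 1)):
            print("candidate for removal:", item["id"])  # -> spec-001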
    Request Phase Summary

    With a simple form that outlines the ownership of a project and its content, business purpose, education and resources, along with success criteria, we can ensure that an enterprise content management system will stay clean and relevant to end users - allowing it to deliver the most value possible. The key here is to make the request straightforward and let the content management technology manage as much as possible through metadata, retention policies and workflow. Doing these basic steps will allow project content to get off to a great start in the enterprise!

    Stay tuned for the next installment - the "Create Phase" - covering the security access and workflow involved in content creation, enabling a practical layer of governance over our enterprise content repository.

    Read the article

  • Cloud – the forecast is improving

    - by Rob Farley
    There is a lot of discussion about "the cloud", and how that affects people's data stories. Today the discussion enters the realm of T-SQL Tuesday, hosted this month by Jorge Segarra.

    Over the years, companies have invested a lot in making sure that their data is good, and I mean every aspect of it – the quality of it, the security of it, the performance of it, and more. Experts such as those of us at LobsterPot Solutions have helped these companies with this, and continue to work with clients to make sure that data is a strong part of their business, not an oversight. Whether business intelligence systems are being utilised or not, every business needs to be able to rely on its data, and have confidence in it. Data should be a foundation upon which a business is built.

    In the past, data was stored in paper-based systems. Filing cabinets stored vital information. Today, people have server rooms with storage of various kinds, recognising that filing cabinets don't necessarily scale particularly well. It's easy to 'lose' data in a filing cabinet, when you have people who need to make sure that the sheets of paper are in the right spot, and that you know how things are stored. Databases help solve that problem, but still the idea of a large filing cabinet continues; it just doesn't involve paper. If something happens to the physical 'filing cabinet', then the problems are larger still, and the data itself is under threat. Many clients have generators in case the power goes out, redundant cables in case the connectivity dies, and spare servers in other buildings just in case they're required. But still they're maintaining filing cabinets.

    You see, people like filing cabinets. There's something to be said for having your data 'close'. Even if the data is not in readable form, living as bits on a disk somewhere, the idea that its home is 'in the building' is comforting to many people. They simply don't want to move their data anywhere else. The cloud offers an alternative to this, and the human element is an obstacle. By leveraging the cloud, companies can have someone else look after their filing cabinet. A lot of people really don't like the idea of this, partly because the administrators of the data – those people who could potentially log in with escalated rights and see more than they should be allowed to, and who need to be trusted to respond if there's a problem – are now a faceless entity in the cloud. But this doesn't mean that the cloud is bad; it is simply a concern that some people may have.

    In new functionality that's on its way, we see hybrid mechanisms that mean people can leverage parts of the cloud with less fear. Companies can use cloud storage to hold their backup data, for example – backups that have been encrypted and are therefore not able to be read by anyone (including administrators) who doesn't have the right password. Companies can have a database instance that runs locally, but which has its data files in the cloud, complete with Transparent Data Encryption if needed. There can be a higher level of control, making the change easier to accept. Hybrid options allow people who have had fears (potentially very justifiable) to take a new look at the cloud, and to start embracing some of its benefits (such as letting someone else take care of storage, high availability, and more) without losing the feeling of the data being close.

    @rob_farley

    Read the article

  • Multichannel Digital Engagement: Find Out How Your Organization Measures Up

    - by Michael Snow
    This article was originally published in the September 2013 edition of the Oracle Information InDepth Newsletter, Oracle WebCenter Edition.

    Thanks to mobile and social technologies, interactive online experiences are now commonplace. Not only that, they give consumers more choices, influence, and control than ever before. So how can you make your organization stand out? The key building blocks for delivering exceptional cross-channel digital experiences are outlined below. Also, a new assessment tool is available to help you measure your organization's ability to deliver such experiences.

    A clearly defined digital strategy. The customer journey is growing increasingly complex, encompassing multiple touchpoints and channels. It used to be easy to map marketing efforts to specific offline channels; for example, a direct mail piece with an offer to visit a store for a discounted purchase. Now it is more difficult to cultivate and track such clear cause-and-effect relationships. To deliver an integrated digital experience in this more complex world, organizations need a clearly defined and comprehensive digital marketing strategy that is backed up by an integrated set of software, middleware, and hardware solutions.

    Strong support for business agility and speed-to-market. As both IT and marketing executives know, speed-to-market and business agility are key to competitive advantage. That means marketers need solutions to support the rapid implementation of online marketing initiatives – plus the flexibility to adapt quickly to a changing marketplace. And IT needs tools with the performance, scalability, and ease of integration to support marketing efforts. Both teams benefit when business users are empowered to implement marketing initiatives on their own, with minimal IT intervention.

    The ability to deliver relevant, personalized content. Delivering a one-size-fits-all online customer experience is no longer acceptable. Customers expect you to know who they are, including their preferences and past relationship with your brand. That means delivering the most relevant content from the moment a visitor enters your site. To make that happen, you need a powerful rules engine so that marketers and business users can easily define site visitor segments and deliver content accordingly. That includes both implicit targeting that is based on the user's behavior, and explicit targeting that takes a user's profile information into account. Ideally, the rules engine can also intelligently weight recommendations when multiple segments apply to a specific customer.

    Support for social interactivity. With the advent of Facebook and LinkedIn, visitors expect to participate in and contribute to your web presence – and share their experience on their own social networks. That requires easy incorporation of user-generated content such as comments, ratings, reviews, polls, and blogs; seamless integration with third-party social networking sites; and support for social login, which helps to remove barriers to social participation.

    The ability to deliver connected, multichannel experiences that include powerful, flexible mobile capabilities. By 2015, mobile usage is projected to surpass that of PCs and other wired devices. In other words, mobile is an essential element in delivering exceptional online customer experiences. This requires the creation and management of mobile experiences that are optimized for delivery to the thousands of different devices in use today. Just as important, organizations must be able to easily extend their traditional web presence to the mobile channel and deliver highly personalized and relevant multichannel marketing initiatives, while also minimizing the time and effort required to manage mobile sites.

    Are you curious to know how your organization measures up when it comes to delivering an engaging, multichannel digital experience? If so, take this brief, 15-question online assessment and see how your organization scores in the areas of digital strategy, digital agility, relevance and personalization, social interactivity, and multichannel experience.

    Read the article

  • Building Enterprise Smartphone Apps – Part 1: Why Build Smartphone Apps

    - by Tim Murphy
    This is part 1 in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group. Feel free to leave feedback.

    Intro

    Most of us already carry smartphones. We play games on them. We keep up with what is going on with our friends and our favorite teams. We take pictures of our kids at their events. But the question is whether that is all they are good for. Many companies have aspects of their business that lend themselves to being performed on mobile devices. Some of them lean toward larger devices such as tablets, but many can be executed on smartphones. This and the following articles will discuss some of the possible applications of smartphone technology for businesses, the platforms that are available, and the considerations you need to make when building them. I'll take a look at some specific scenarios and wrap up with a couple of capabilities that are just emerging and can be used in the future.

    Why Build Enterprise Smartphone Applications

    So what are some of the ways that you can leverage smartphone technology to gain efficiency in your business or a client's business? There are a few major areas where I have seen mobile platforms provide an advantage.

    Your mobile sales force is a key candidate for leveraging smartphone apps. They can visit clients at their retail locations and place orders on site. It is a more personal approach which can gain you customer loyalty. A salesperson may also gather information about the way a client does business or who their target market is. This allows you to focus marketing information or build customized support for your customer.

    You may also need to track physical inventory in a store. This is something that has historically been done with laser scanners, but with the camera capabilities in today's phones and tablets it is possible to use more general multi-purpose devices. This can save costs on both hardware and telecommunication contracts.

    Delivery verification is another area that historically has been the domain of specialized devices but can now be accomplished with smartphones. This also reduces costs because the same device is used for communicating with the driver and other operations. Add to that the navigation capability of smartphones and you can see how the return on investment increases.

    Executives are always on the go. They spend most of their time in meetings, and yet they need access to decision-making information at their fingertips. With a smartphone app they can get alerts when major sales are closed or critical accounting processes are completed that may need their attention. They can also answer questions by instantly pulling up BI reports.

    I have often heard operations support people say that they need things like VPN and RDP from their phones. If they can also have notifications of outages or critical support requests, they can react to situations without needing to be tied to their desks.

    These are all valid reasons to build smartphone applications. In the next installment I will discuss platforms and features.

    Read the article

  • Python, unit test - Pass command line arguments to setUp of unittest.TestCase

    - by sberry2A
    I have a script that acts as a wrapper for some unit tests written using the Python unittest module. In addition to cleaning up some files, creating an output stream and generating some code, it loads test cases into a suite using unittest.TestLoader().loadTestsFromTestCase().

    I am already using optparse to pull out several command-line arguments used for determining the output location, whether to regenerate code and whether to do some clean up. I also want to pass a configuration variable, namely an endpoint URI, for use within the test cases. I realize I can add an OptionParser to the setUp method of the TestCase, but I want to instead pass the option to setUp. Is this possible using loadTestsFromTestCase()? I can iterate over the returned TestSuite's TestCases, but can I manually call setUp on the TestCases?

    ** EDIT **

    I wanted to point out that I am able to pass the arguments to setUp if I iterate over the tests and call setUp manually, like:

        (options, args) = op.parse_args()
        suite = unittest.TestLoader().loadTestsFromTestCase(MyTests.TestSOAPFunctions)
        for test in suite:
            test.setUp(options.soap_uri)

    However, I am using xmlrunner for this, and its run method takes a TestSuite as an argument. I assume it will run the setUp method itself, so I would need the parameters available within the XMLTestRunner. I hope this makes sense.
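    One common workaround (a sketch of the usual pattern, not the only option) is to set the parsed option as a class attribute before the loader builds the suite. Then setUp keeps its normal zero-argument signature, so any runner can call it as usual, and every test instance can read the value. The names below mirror the question's MyTests module and are otherwise hypothetical:

        import unittest

        class TestSOAPFunctions(unittest.TestCase):
            soap_uri = None  # injected by the wrapper script before loading

            def setUp(self):
                # Standard signature, so any runner (including XMLTestRunner)
                # can invoke it; configuration comes from the class attribute.
                self.uri = self.soap_uri

        # In the wrapper script, after op.parse_args():
        #     TestSOAPFunctions.soap_uri = options.soap_uri
        #     suite = unittest.TestLoader().loadTestsFromTestCase(TestSOAPFunctions)
        # xmlrunner's run(suite) then works unchanged.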

    Read the article

  • MSBuild task fails because "Any CPU" solution is built out of order

    - by Art Vandalay
    I have two solutions to build in Teambuild: one is the application itself, the other is the WiX installer. I want to build the application using the "Any CPU" build configuration and the installer using "x86". I've listed the "Any CPU" solution first in my project file, but Teambuild always builds the "x86" solution first. I'm setting BuildSolutionsInParallel = false, but it still builds the solutions in the reverse of the listed order. If I change the first solution to "Mixed Platform", it works fine. How can I get the solutions to build in the order listed in the project file?

        <Project ...>
          <PropertyGroup>
            <!-- We want to build the install solution after the build solution -->
            <BuildSolutionsInParallel>false</BuildSolutionsInParallel>
          </PropertyGroup>
          <ItemGroup>
            <SolutionToBuild Include="$(BuildProjectFolderPath)/Pricer/Pricer.sln">
              <Targets></Targets>
              <Properties></Properties>
            </SolutionToBuild>
            <SolutionToBuild Include="$(BuildProjectFolderPath)/Pricer/Pricer.Install/Pricer.Install.sln">
              <Targets></Targets>
              <Properties></Properties>
            </SolutionToBuild>
          </ItemGroup>
          <ItemGroup>
            <ConfigurationToBuild Include="Release|Any CPU">
              <FlavorToBuild>Release</FlavorToBuild>
              <PlatformToBuild>Any CPU</PlatformToBuild>
            </ConfigurationToBuild>
            <ConfigurationToBuild Include="Release|x86">
              <FlavorToBuild>Release</FlavorToBuild>
              <PlatformToBuild>x86</PlatformToBuild>
            </ConfigurationToBuild>
          </ItemGroup>
        </Project>

    Read the article

  • Can somebody explain the differences, status and future of the various ASP.NET AJAX libraries and to

    - by tjrobinson
    I'm confused about the differences and relationships between the various Microsoft ASP.NET AJAX components/libraries/toolkits, and particularly the naming of them.

    It starts off relatively simple with ASP.NET AJAX itself:
    ASP.NET AJAX 1.0 (available for ASP.NET 2.0 in a separate package called ASP.NET AJAX 1.0 Extensions)
    ASP.NET AJAX 3.5 (included with ASP.NET 3.5)
    ASP.NET AJAX 4.0 (included with ASP.NET 4.0)

    Then come the various projects on CodePlex and elsewhere:

    ASP.NET AJAX Control Toolkit (aka the original Ajax Control Toolkit) – Samples, CodePlex.
    It seems that the September 2009 release is the final release of the original Ajax Control Toolkit and that it has been superseded by...

    Ajax Control Toolkit in ASP.NET Ajax Library – CodePlex.
    It looks like the old ASP.NET AJAX Control Toolkit has now become part of a larger ASP.NET Ajax Library but is still maintained separately on CodePlex. This release is in beta at the time of writing, so presumably if I want to use the "Control Toolkit" I should stick with the September 2009 release of the original ASP.NET AJAX Control Toolkit.

    Microsoft Ajax Library Preview – CodePlex.
    Is this the same as the ASP.NET Ajax Library mentioned above, just with a confusing name variation? Is the "Control Toolkit" included in Preview 6, and is its code newer or older than the Ajax Control Toolkit in ASP.NET Ajax Library?

    Microsoft ASP.NET Ajax Wiki – note the inconsistent insertion of ASP.NET into the name.

    Links to useful articles and roadmaps would be useful.

    Read the article

  • How to Compile Mod_Python 3.3.1 for Python 2.6 and Apache 2.2 on Windows?

    - by John
    I have no experience compiling code other than using Visual Studio's Build command. I am hoping we can create a step-by-step guide for compiling mod_python on Windows. Please be as descriptive as possible. This is what I've done so far:

    1. Download and install Python 2.6.2
    2. Download and install Apache 2.2.11
    3. Download the most recent source code for mod_python from svn

    From here I'm lost as to what the next step is. I've downloaded Microsoft Visual C++ 2008 Express Edition. As mentioned by Hao, I've already tried the tutorial mentioned in that link. Here are the error messages I'm receiving with that tutorial:

        C:\mod_python\dist>build_installer.bat
        Could Not Find C:\mod_python\src\*.obj
        running bdist_wininst
        running build
        running build_py
        creating build
        creating build\lib.win32-2.6
        creating build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\apache.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\cache.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\cgihandler.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\Cookie.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\importer.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\psp.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\publisher.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\python22.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\Session.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\testhandler.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\util.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\__init__.py -> build\lib.win32-2.6\mod_python
        running build_ext
        building 'mod_python_so' extension
        creating build\temp.win32-2.6
        creating build\temp.win32-2.6\Release
        creating build\temp.win32-2.6\Release\mod_python
        creating build\temp.win32-2.6\Release\mod_python\src
        C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -DWIN32 -DNDEBUG -D_WINDOWS -IC:\mod_python\src\include -Ic:\apache\include -IC:\Python26\include -IC:\Python26\PC /TcC:\mod_python\src\mod_python.c /Fobuild\temp.win32-2.6\Release\mod_python\src\mod_python.obj
        mod_python.c
        c:\apache\include\ap_config.h(25) : fatal error C1083: Cannot open include file: 'apr.h': No such file or directory
        error: command '"C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\cl.exe"' failed with exit status 2

    Read the article

  • Repository pattern with lazy loading using POCO

    - by Simon G
    Hi, I'm in the process of starting a new project and creating the business objects and data access, etc. I'm just using plain old CLR objects rather than any ORM. I've created two class libraries:

    1) Business Objects - holds all my business objects; all these objects are lightweight, with only properties and business rules.
    2) Repository - this is for all my data access.

    The majority of my objects will have child lists in them, and my question is: what is the best way to lazy load these values, as I don't want to bring back unnecessary information if I don't need it? I've thought about checking, in the "get" of the child property, whether it is "null", and if it is, calling my repository to get the child information. This has two problems as far as I can see:

    1) The object "knows" how to load itself; I would rather no data access logic be held in the object.
    2) This requires both classes to reference each other, which in Visual Studio throws a circular dependency error.

    Does anyone have any suggestions on how to overcome this issue, or any recommendations on my project layout and where it can be improved? Thanks
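    One way to reconcile lazy loading with keeping data access out of the business objects (a sketch only - the entity and repository names are made up for illustration) is to have the repository inject a loader callback when it materializes the object. The object then depends only on "a function that returns children", never on the repository type, which also breaks the circular project reference. Shown here in Python to keep the sketch compact; the same shape works with delegates or interfaces in .NET:

        class Order:
            def __init__(self, order_id, line_loader):
                self.order_id = order_id
                self._load_lines = line_loader  # injected; the object never sees the repository type
                self._lines = None

            @property
            def lines(self):
                # Lazy load on first access; no data access logic lives here,
                # only the decision of when to invoke the injected loader.
                if self._lines is None:
                    self._lines = self._load_lines(self.order_id)
                return self._lines

        class OrderRepository:
            def get(self, order_id):
                return Order(order_id, self._load_lines_for)

            def _load_lines_for(self, order_id):
                # Stand-in for the real data access call.
                return [f"line item for order {order_id}"]

        order = OrderRepository().get(42)
        print(order.lines)  # children are fetched only at this point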

    Read the article

  • MSBuild Validating Properties

    - by Brian Gillespie
    I'm working on a reusable MSBuild Target that will be consumed by several other tasks. This target requires that several properties be defined. What's the best way to validate that properties are defined, throwing an Error if they are not? Two attempts that I almost like: <?xml version="1.0" encoding="utf-8" ?> <Project ToolsVersion="3.5" DefaultTargets="Release" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <Target Name="Release"> <Error Text="Property PropA required" Condition="'$(PropA)' == ''"/> <Error Text="Property PropB required" Condition="'$(PropB)' == ''"/> <!-- The body of the task --> </Target> </Project> Here's an attempt at batching. It's ugly because of the extra "Name" metadata. Is it possible to use the Include attribute instead? <?xml version="1.0" encoding="utf-8" ?> <Project ToolsVersion="3.5" DefaultTargets="Release" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <Target Name="Release"> <!-- MSBuild BuildInParallel="true" Projects="@(ProjectsToBuild)"/ --> <ItemGroup> <RequiredProperty Include="PropA"><Name>PropA</Name></RequiredProperty> <RequiredProperty Include="PropB"><Name>PropB</Name></RequiredProperty> <RequiredProperty Include="PropC"><Name>PropC</Name></RequiredProperty> </ItemGroup> <Error Text="Property %(RequiredProperty.Name) required" Condition="'$(%(RequiredProperty.Name))' == ''" /> </Target> </Project>
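    The duplicated <Name> element can at least be dropped: every MSBuild item exposes built-in Identity metadata equal to its Include value. Whether the nested $(%(RequiredProperty.Identity)) in the Condition expands the way the batching attempt hopes depends on the engine's expansion order, so treat this sketch as unverified on that point and test it against the MSBuild version in use:

        <ItemGroup>
          <RequiredProperty Include="PropA" />
          <RequiredProperty Include="PropB" />
          <RequiredProperty Include="PropC" />
        </ItemGroup>
        <!-- Identity is built-in metadata carrying the Include value, so no
             custom <Name> is needed for the message text. The nested $(%())
             expansion in the Condition is the unverified assumption here. -->
        <Error Text="Property %(RequiredProperty.Identity) required"
               Condition="'$(%(RequiredProperty.Identity))' == ''" />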

    Read the article

  • Statically Compiling QtWebKit 4.6.2

    - by geeko
    I tried to compile Qt+WebKit statically with MS VS 2008 and this worked: C:\Qt\4.6.2>configure -release -static -opensource -no-fast -no-exceptions -no-accessibility -no-rtti -no-stl -no-opengl -no-openvg -no-incredibuild-xge -no-style-plastique -no-style-cleanlooks -no-style-motif -no-style-cde -no-style-windowsce -no-style-windowsmobile -no-style-s60 -no-gif -no-libpng -no-libtiff -no-libjpeg -no-libmng -no-qt3support -no-mmx -no-3dnow -no-sse -no-sse2 -no-iwmmxt -no-openssl -no-dbus -platform win32-msvc2008 -arch windows -no-phonon -no-phonon-backend -no-multimedia -no-audio-backend -no-script -no-scripttools -webkit -no-declarative However, I get these errors whenever building a project that links statically to QtWebKit: 1>Creating library C:\Users\Geeko\Desktop\Qt\TestQ\Release\TestQ.lib and object C:\Users\Geeko\Desktop\Qt\TestQ\Release\TestQ.exp 1>QtWebKit.lib(PluginPackageWin.obj) : error LNK2019: unresolved external symbol _VerQueryValueW@16 referenced in function "class WebCore::String __cdecl WebCore::getVersionInfo(void * const,class WebCore::String const &)" (?getVersionInfo@WebCore@@YA?AVString@1@QAXABV21@@Z) 1>QtWebKit.lib(PluginPackageWin.obj) : error LNK2019: unresolved external symbol _GetFileVersionInfoW@16 referenced in function "private: bool __thiscall WebCore::PluginPackage::fetchInfo(void)" (?fetchInfo@PluginPackage@WebCore@@AAE_NXZ) 1>QtWebKit.lib(PluginPackageWin.obj) : error LNK2019: unresolved external symbol _GetFileVersionInfoSizeW@8 referenced in function "private: bool __thiscall WebCore::PluginPackage::fetchInfo(void)" (?fetchInfo@PluginPackage@WebCore@@AAE_NXZ) 1>QtWebKit.lib(PluginDatabaseWin.obj) : error LNK2019: unresolved external symbol __imp__PathRemoveFileSpecW@4 referenced in function "class WebCore::String __cdecl WebCore::safariPluginsDirectory(void)" (?safariPluginsDirectory@WebCore@@YA?AVString@1@XZ) 1>QtWebKit.lib(PluginDatabaseWin.obj) : error LNK2019: unresolved external symbol __imp__SHGetValueW@24 referenced in function "void __cdecl WebCore::addWindowsMediaPlayerPluginDirectory(class WTF::Vector<class WebCore::String,0> &)" (?addWindowsMediaPlayerPluginDirectory@WebCore@@YAXAAV?$Vector@VString@WebCore@@$0A@@WTF@@@Z) 1>QtWebKit.lib(PluginDatabaseWin.obj) : error LNK2019: unresolved external symbol __imp__PathCombineW@12 referenced in function "void __cdecl WebCore::addMacromediaPluginDirectories(class WTF::Vector<class WebCore::String,0> &)" (?addMacromediaPluginDirectories@WebCore@@YAXAAV?$Vector@VString@WebCore@@$0A@@WTF@@@Z) 1>C:\Users\Geeko\Desktop\Qt\TestQ\Release\TestQ.exe : fatal error LNK1120: 6 unresolved externals Do I need to check something in the Qt project options? I have QtCore, QtGui, Network and WebKit checked.
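    For what it's worth, those six symbols all live in two stock Win32 import libraries that the static QtWebKit link evidently no longer drags in: the VerQueryValueW/GetFileVersionInfo* family is in version.lib, and PathRemoveFileSpecW, PathCombineW and SHGetValueW are in shlwapi.lib. A minimal MSVC-specific sketch of forcing them from code follows; adding LIBS += version.lib shlwapi.lib to the .pro file should work equally well:

        // MSVC-only pragmas: pull in the import libraries for version.dll
        // and shlwapi.dll, which WebCore's Windows plugin code calls but a
        // static link no longer picks up automatically. Place in any one
        // translation unit of the project.
        #pragma comment(lib, "version.lib")
        #pragma comment(lib, "shlwapi.lib")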

    Read the article

  • jQuery "Autcomplete" plugin is messing up the order of my data

    - by Max Williams
    I'm using Jorn Zaefferer's Autocomplete plugin on a couple of different pages. In both instances, the order of displayed strings is a little bit messed up. Example 1: array of strings: basically they are in alphabetical order except for General Knowledge, which has been pushed to the top: General Knowledge,Art and Design,Business Studies,Citizenship,Design and Technology,English,Geography,History,ICT,Mathematics,MFL French,MFL German,MFL Spanish,Music,Physical Education,PSHE,Religious Education,Science,Something Else Displayed strings: General Knowledge,Geography,Art and Design,Business Studies,Citizenship,Design and Technology,English,History,ICT,Mathematics,MFL French,MFL German,MFL Spanish,Music,Physical Education,PSHE,Religious Education,Science,Something Else Note that Geography has been pushed to be the second item, after General Knowledge. The rest are all fine. Example 2: the array of strings is as above but with Cross-curricular instead of General Knowledge. Cross-curricular,Art and Design,Business Studies,Citizenship,Design and Technology,English,Geography,History,ICT,Mathematics,MFL French,MFL German,MFL Spanish,Music,Physical Education,PSHE,Religious Education,Science,Something Else Displayed strings: Cross-curricular,Citizenship,Art and Design,Business Studies,Design and Technology,English,Geography,History,ICT,Mathematics,MFL French,MFL German,MFL Spanish,Music,Physical Education,PSHE,Religious Education,Science,Something Else Here, Citizenship has been pushed to the number 2 position. I've experimented a little, and it seems like there's a bug saying "put things that start with the same letter as the first item after the first item and leave the rest alone". Kind of mystifying. I've tried a bit of debugging by triggering alerts inside the autocomplete plugin code, but everywhere I can see, it's using the correct order. It seems to be just when it's rendered out that it goes wrong. Any ideas anyone? max
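    One way to narrow it down, sketched against Jorn Zaefferer's plugin with a placeholder selector and data variable: formatItem is called once per row in final display order, so logging there shows whether the scrambling happens in the plugin's result handling or only at render time.

        // Diagnostic sketch; #subject and subjects stand in for the real page.
        $("#subject").autocomplete(subjects, {
            formatItem: function(row, i, max) {
                // Rows arrive here in the order the plugin will display them.
                console.log(i + "/" + max + ": " + row);
                return String(row);
            }
        });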

    Read the article

  • JSON and Microformats

    - by Tauren
    I'm looking for opinions on whether microformats should be used to name JSON elements. For instance, there is a microformat for physical addresses, that looks like this: <div class="adr"> <div class="street-address">665 3rd St.</div> <div class="extended-address">Suite 207</div> <span class="locality">San Francisco</span>, <span class="region">CA</span> <span class="postal-code">94107</span> <div class="country-name">U.S.A.</div> </div> There is a document available on using JSON and Microformats. The information above could be represented as JSON data like this: "adr": { "street-address":"665 3rd St.", "extended-address":"Suite 207", "locality":"San Francisco", "region":"CA", "postal-code":"94107", "country-name":"U.S.A." }, The issue I have with this is that I'd like my JSON data to be as lightweight as possible, but still human readable. While still supporting international addresses, I would prefer something like this: "address": { "street":"665 3rd St.", "extended":"Suite 207", "locality":"San Francisco", "region":"CA", "code":"94107", "country":"U.S.A." }, If I'm designing a new JSON API right now, does it make sense to use microformats from the start? Or should I not really worry about it? Is there some other standard that is more specific to JSON that I should look at?
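    If the lighter names win out, nothing stops the microformat names from being a view over them. A small sketch; the field names are taken from the question, the function name is made up:

        // Maps the compact address shape onto hCard "adr" class names on
        // demand, so the wire format stays small but microformat-aware
        // consumers can still be fed.
        function toAdr(address) {
            return {
                "street-address":   address.street,
                "extended-address": address.extended,
                "locality":         address.locality,
                "region":           address.region,
                "postal-code":      address.code,
                "country-name":     address.country
            };
        }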

    Read the article

  • iPhone SDK Core Data: Fetch all entities with a nil relationship?

    - by Harkonian
    I have a Core Data project that has Books and Authors. In the data model, Authors has a to-many relationship to Books and Books has a to-one relationship with Authors. I'm trying to pull all Books that do not have an Author. No matter how I try it, no results are returned. In my predicate I've also tried = NIL, == nil, == NIL. Any suggestions would be appreciated. // fetch all books without authors - (NSMutableArray *)fetchOrphanedBooks { NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"Book" inManagedObjectContext:self.managedObjectContext]; [fetchRequest setEntity:entity]; [fetchRequest setFetchBatchSize:20]; NSPredicate *predicate = [NSPredicate predicateWithFormat:@"author = nil"]; [fetchRequest setPredicate:predicate]; NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:NO]; NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil]; [fetchRequest setSortDescriptors:sortDescriptors]; NSString *sectionKey = @"name";//nil; NSFetchedResultsController *aFetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest managedObjectContext:managedObjectContext sectionNameKeyPath:sectionKey cacheName:nil]; BOOL success = [aFetchedResultsController performFetch:nil]; NSMutableArray *orphans = nil; // this is always 0 NSLog(@"Orphans found: %i", aFetchedResultsController.fetchedObjects.count); if (aFetchedResultsController.fetchedObjects.count > 0) { orphans = [[NSMutableArray alloc] init]; for (Book *book in aFetchedResultsController.fetchedObjects) { if (book.author == nil) { [orphans addObject:book]; } } } [aFetchedResultsController release]; [fetchRequest release]; [sortDescriptor release]; [sortDescriptors release]; return [orphans autorelease]; }
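    Before suspecting the predicate itself, it may be worth running it through a bare executeFetchRequest: so the NSFetchedResultsController and its sectionNameKeyPath are out of the equation. A minimal sketch using the question's entity and attribute names:

        // Plain fetch, no results controller: if this returns the expected
        // books, the predicate is fine and the problem sits in the
        // NSFetchedResultsController setup instead.
        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        [request setEntity:[NSEntityDescription entityForName:@"Book"
                                       inManagedObjectContext:self.managedObjectContext]];
        [request setPredicate:[NSPredicate predicateWithFormat:@"author == nil"]];
        NSError *error = nil;
        NSArray *orphans = [self.managedObjectContext executeFetchRequest:request
                                                                    error:&error];
        NSLog(@"Orphans found: %lu", (unsigned long)[orphans count]);
        [request release];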

    Read the article

  • MFMailComposeViewController hangs my app

    - by Neal L
    Hi, I am trying to add email functionality to my app. I can get the MFMailComposeViewController to display correctly and pre-populate its subject and body, but for some reason when the user clicks on the "Cancel" or "Send" buttons in the nav bar, the app just hangs. I inserted an NSLog() statement into the first line of mailComposeController:didFinishWithResult:error: and it doesn't even print that line out to the console. Does anybody have an idea what would cause the MFMailComposeViewController to hang like that? Here is my code from the header: #import "ManagedObjectEditor.h" #import <MessageUI/MessageUI.h> @interface MyManagedObjectEditor : ManagedObjectEditor <MFMailComposeViewControllerDelegate, UIImagePickerControllerDelegate, UINavigationControllerDelegate> { } - (IBAction)emailObject; @end from the implementation file: if ([MFMailComposeViewController canSendMail]) { MFMailComposeViewController *mailComposer = [[MFMailComposeViewController alloc] init]; mailComposer.delegate = self; [mailComposer setSubject:NSLocalizedString(@"An email from me", @"An email from me")]; [mailComposer setMessageBody:emailString isHTML:YES]; [self presentModalViewController:mailComposer animated:YES]; [mailComposer release]; } [error release]; [emailString release]; and here is the code from the callback: #pragma mark - #pragma mark Mail Compose Delegate Methods - (void)mailComposeController:(MFMailComposeViewController *)controller didFinishWithResult:(MFMailComposeResult)result error:(NSError *)error { NSLog(@"in didFinishWithResult:"); switch (result) { case MFMailComposeResultCancelled: NSLog(@"cancelled"); break; case MFMailComposeResultSaved: NSLog(@"saved"); break; case MFMailComposeResultSent: NSLog(@"sent"); break; case MFMailComposeResultFailed: { UIAlertView *alert = [[UIAlertView alloc] initWithTitle:NSLocalizedString(@"Error sending email!",@"Error sending email!") message:[error localizedDescription] delegate:nil cancelButtonTitle:NSLocalizedString(@"Bummer",@"Bummer") otherButtonTitles:nil]; [alert show]; [alert release]; break; } default: break; } [self dismissModalViewControllerAnimated:YES]; } Thanks!
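    A likely culprit, hedged but consistent with the symptoms: MFMailComposeViewController is a UINavigationController subclass, so its plain delegate property is the navigation-controller delegate. The Cancel/Send callback only fires through the dedicated mail-compose delegate, and until it fires nothing dismisses the modal sheet, which looks exactly like a hang:

        // In the presenting code, instead of mailComposer.delegate = self:
        mailComposer.mailComposeDelegate = self; // routes mailComposeController:didFinishWithResult:error: here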

    Read the article

  • Want to learn Objective-C but syntax is very confusing

    - by Sahat
    Coming from a Java background, I am guessing this is expected. I would really love to learn Objective-C and start developing Mac apps, but the syntax is just killing me. For example: -(void) setNumerator: (int) n { numerator = n; } What is that dash for, and why is it followed by void in parentheses? I've never seen void in parentheses in C/C++, Java or C#. Why don't we have a semicolon after (int) n? But we do have it here: -(void) setNumerator: (int) n; And what's with this alloc, init, release process? myFraction = [Fraction alloc]; myFraction = [myFraction init]; [myFraction release]; And why is it [myFraction release]; and not myFraction = [myFraction release]; ? And lastly, what's with the @ signs, and what's this implementation's equivalent in Java? @implementation Fraction @end I am currently reading Programming in Objective-C 2.0 and it's just so frustrating learning this new syntax for someone from a Java background.
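    A compressed, annotated sketch of the pieces being asked about. Roughly, the @interface/@implementation pair splits a class the way a C header and source file would; Java has no direct analogue because the class body does both jobs at once:

        // "-" introduces an instance method ("+" would mark a class method);
        // the parenthesised (void) and (int) are the return type and the
        // parameter type, written as casts.
        - (void)setNumerator:(int)n;    // in @interface: a declaration, so ";"

        - (void)setNumerator:(int)n {   // in @implementation: a definition, so a body
            numerator = n;
        }

        // alloc and init are conventionally chained into one expression.
        // release returns void and simply gives up ownership, so there is
        // nothing to assign back, unlike a Java-style fluent API.
        Fraction *myFraction = [[Fraction alloc] init];
        [myFraction release];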

    Read the article

< Previous Page | 232 233 234 235 236 237 238 239 240 241 242 243  | Next Page >