Search Results

Search found 21666 results on 867 pages for 'business objects'.


  • Certify September Updates

    - by Sadia2
    We have added some release and platform certifications to MOS Certify.
    Applications: Oracle Demantra 12.2.2, 7.3.1.5, 7.3.1.4, 7.3.0.2.0, 7.3.0.0.0
    Collaboration Technologies: Oracle Beehive 2.0.1.8.0
    Database: Oracle Database Client 12.1.0.1.0, Oracle Clusterware 11.2.0.4.0, Oracle Database 11.2.0.4.0, Oracle Real Application Clusters 11.2.0.4.0
    E-Business Suite: Oracle E-Business Suite 12.2.2, 12.1.3, 12.1.2, 12.1.1, 12.0.6, 11.5.10.2
    Edge Applications: Oracle AutoVue 20.2.2, 20.2.1, 20.2.0
    Enterprise Manager: Enterprise Manager Base Platform - OMS 12.1.0.3.0, Oracle Real User Experience Insight 12.1.0.4.0, 12.1.0.3.0, 12.1.0.1, 11.1
    FSGBU Insurance Group: Oracle Health Insurance Claims 2.13.3.0.0
    Fusion Middleware: Oracle Business Intelligence Applications 11.1.1.7.1, 7.9.6.4.0, Oracle Discoverer 11.1.1.6.0, Discoverer Administrator 11.1.1.6.0, Discoverer Desktop 11.1.1.6.0, Oracle JDK 1.7.0_40, 1.7.0_25, Oracle JRE 1.7.0_40, 1.7.0_25, Oracle JRockit 6u45 R28.2.7+, Oracle WebCenter Sites 11.1.1.8.0, Oracle WebCenter Sites: Community-Gadgets 11.1.1.8.0, Oracle WebCenter Sites: CIP for File Systems and MS SharePoint 11.1.1.8.0, Oracle WebCenter Sites: CIP for EMC Documentum 11.1.1.8.0
    JD Edwards EnterpriseOne: JD Edwards EnterpriseOne Business Services Server 9.1.3.0, 9.1.2.0, 9.1.0.0, JD Edwards EnterpriseOne Mobile Applications 9.1.2.0
    Oracle Fusion Applications: Oracle Fusion Applications 11.1.7.0.0
    Primavera GBU: Primavera Unifier 9.13.0.0
    Siebel Enterprise: Siebel Application Server 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Database Server 8.2.2.3.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Remote Client 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Tools Client 8.2.2.4.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.9.0, Siebel SSO Integration 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0

    Read the article

  • Should I manage authentication on my own if the alternative is very low in usability and I am already managing roles?

    - by rumtscho
    As a small in-house dev department, we only have experience developing applications for our intranet. We use the existing Active Directory for user account management. It contains the accounts of all company employees and many (but not all) of the business partners we cooperate with.

    Now, top management wants a technology exchange application, and I am the lead dev on the new project. Basically, it is a database containing our know-how, with a web frontend. Our employees, our cooperating business partners, and people who wish to become cooperating business partners should have access to it and see what technologies we have, so they can trade for them with the department which owns them. The technologies are not patented, but they are very valuable to competitors, so the department bosses are paranoid about anyone unauthorized gaining access to their technology descriptions. This constraint necessitates a nightmarishly complicated multi-dimensional RBAC-hybrid model. As the Active Directory doesn't even contain all the information needed to infer the roles I use, I will have to manage roles, plus per-technology, per-user granted access exceptions, within my system.

    The current plan is to use Active Directory for authentication. This will result in a multi-hour registration process for our business partners, where the database owner has to manually create logins in our Active Directory and send them credentials. If I manage the logins in my own system, we could improve the usability a lot, for example by letting people have an active (but unprivileged) account as soon as they register. It seems to me that, once I have a users table in the DB anyway (and am managing ugly details like storing historical user IDs, so that recycled user IDs within the Active Directory don't unexpectedly gain rights to view someone's technologies), the additional complexity of implementing authentication functionality will be minimal. Therefore, I am starting to lean towards doing my own user login management and forgetting AD altogether.

    On the other hand, I see some reasons to stay with Active Directory. First, the conventional wisdom I have heard from experienced programmers is not to do your own user management if you can avoid it. Second, we have code I can reuse for connecting to the Active Directory, while I would have to write the authentication code myself if it is done in-system (and my boss has clearly stated that getting the project delivered on time has a much higher priority than delivering a system with high usability). Third, I am not a very experienced developer (this is my first lead position) and have never done user management before, so I am afraid that I am overlooking important reasons to use AD, or underestimating the amount of work left to do my own authentication.

    I would like to know if there are more reasons to go with the AD authentication mechanism. Specifically, if I want to do my own authentication, what would I have to implement besides a secure connection for the login screen (which I would need anyway, even if I am only transporting the password to AD), lookup of a password hash, and a mechanism for password recovery (which will probably include manual identity verification, so no need for complex mTAN-like solutions)? And, if you have experience with such security-critical systems, which one would you use and why?
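    For reference, the "lookup of a password hash" piece is usually a salted, deliberately slow hash rather than a plain digest. Below is a minimal Java sketch using the JDK's built-in PBKDF2 support; the class name and parameter choices are illustrative assumptions, not from the original question:

        import java.security.SecureRandom;
        import java.security.spec.KeySpec;
        import java.util.Arrays;
        import java.util.Base64;
        import javax.crypto.SecretKeyFactory;
        import javax.crypto.spec.PBEKeySpec;

        // Minimal sketch: hash passwords with a per-user random salt and PBKDF2,
        // store (salt, hash), and verify by recomputing. Iteration count and key
        // length are illustrative; tune them for your hardware.
        public final class PasswordHasher {
            private static final int ITERATIONS = 100_000;
            private static final int KEY_BITS = 256;
            private static final SecureRandom RANDOM = new SecureRandom();

            public static String[] hash(char[] password) throws Exception {
                byte[] salt = new byte[16];
                RANDOM.nextBytes(salt);
                byte[] hash = pbkdf2(password, salt);
                return new String[] {
                    Base64.getEncoder().encodeToString(salt),
                    Base64.getEncoder().encodeToString(hash)
                };
            }

            public static boolean verify(char[] password, String salt, String expected) throws Exception {
                byte[] actual = pbkdf2(password, Base64.getDecoder().decode(salt));
                // Note: use a constant-time comparison in production code.
                return Arrays.equals(Base64.getDecoder().decode(expected), actual);
            }

            private static byte[] pbkdf2(char[] password, byte[] salt) throws Exception {
                KeySpec spec = new PBEKeySpec(password, salt, ITERATIONS, KEY_BITS);
                return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                                       .generateSecret(spec).getEncoded();
            }
        }

    Password recovery then reduces to issuing a one-time reset token and calling hash() again on the new password.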

    Read the article

  • Faster Trip to Innovation with Simplified Data Integration: Sabre Holdings Case Study

    - by Tanu Sood
    Author: Irem Radzik, Director of Product Marketing, Data Integration, Oracle

    In today's fast-paced, competitive environment, IT teams are under pressure to deliver technology solutions for many critical business initiatives as fast as possible. When the focus is on speed, it can be easy to keep using old-style, point-to-point custom scripts that grow organically to the point where they are unmanageable and too costly to maintain. As data volumes, data sources, and end users grow, uncoordinated data integration efforts create significant inefficiencies for both IT and business users. In addition to the IT productivity lost to maintaining a spaghetti architecture, data integrity becomes a concern as well. Errors caused by inconsistent data and manual data entry can prove very costly for companies and disrupt business activities. Many industry leaders now recognize that data should be moved in an automated and reliable manner across all platforms to have one version of the truth. By simplifying their data integration architecture and standardizing on a centralized approach, IT teams can accelerate time to market. In particular, a centralized, shared-service approach brings agility, increases IT productivity, and frees up resources for innovation.

    One such industry leader that simplified its data integration architecture is Sabre Holdings. Sabre Holdings provides distribution and technology solutions for the travel industry, and is a winner of the 2011 Oracle Excellence Award for Fusion Middleware in the data integration category. I had the pleasure of hosting Sabre Holdings on a public webcast to discuss their data integration best practices for data warehousing. In this webcast, Sabre's Amjad Saeed presented how the company reduced complexity by consolidating systems and standardizing development on Oracle Data Integrator and Oracle GoldenGate for its global data warehouse development team. With Oracle's complete real-time data integration solution, Sabre also streamlined support and maintenance operations, achieved a real-time view into the execution of its integration processes, and can manage data warehouse and business intelligence solution performance on demand. By reducing complexity and leveraging timely market insights, the company was able to decrease time to market by 40%.

    You can now listen to the webcast on demand: Sabre Holdings Case Study: Accelerating Innovation using Oracle Data Integration. I invite you to hear directly from Sabre how to use advanced data integration capabilities to enable accelerated innovation. To learn more about Oracle's data integration offering, you can download our free resources.

    Read the article

  • The Minimalist Approach to Content Governance - Request Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick.

    For each project, regardless of size, it is critical to understand the required ownership, business purpose, prerequisite education / resources needed to execute, and success criteria around it. Without doing this, there is no way to get a handle on the content life-cycle, resulting in a mass of orphaned material that lowers the quality of end user experiences.

    The good news is that by using a simple process in this request phase, we will not have to revisit this phase unless something drastic changes in the project. For each of the elements mentioned above, the why, how (technically focused) and impact are outlined with the intent of providing the most value to a small team.

    1. Ownership
    Why - Without ownership information it will not be possible to track and manage any of the content or take advantage of many features of enterprise content management technology. To hedge against this, we need to ensure that both an individual and their group or department within the organization are associated with the content.
    How - Apply metadata that indicates the owner and the department or group that has responsibility for the content.
    Impact - It is possible to keep the content system optimized by running native reports against the metadata and acting on them based on what has been outlined in the success criteria. This maximizes the end user experience, as content will be faster to locate and more relevant to the user by virtue of working through a smaller collection.

    2. Business Purpose
    Why - This simple step will weed out requests that have tepid justification, as users will most likely not spend the effort to request resources if they do not have a real need.
    How - Use a simple online form to collect the request and workflow it to management, native to the content system.
    Impact - Minimizes the amount of user-generated content that is of low value to the organization.

    3. Prerequisite Education / Resources Needed
    Why - If a project cannot be properly staffed, the probability of its success is going to be low. Outlining the resources needed - in both skill set and duration - forces the requesting party to think critically about the commitment needed to complete their project and what gap must be closed with regard to educating those resources.
    How - In the simple request form outlined above, resources and a commitment to fulfilling any needed education should be included, with a brief acceptance clause that records the requesting party's commitment.
    Impact - This stage acts as a formal commitment to ensuring that resources are able to execute on the vision for the project.

    4. Success Criteria
    Why - Similar to the business purpose, this is a key element in determining whether the project and its respective content should continue to exist if it does not meet its intended goal.
    How - Set a review point for the project content that checks progress against the originally outlined success criteria and then determines the fate of the content. This can even include logic that tells the content system to remove items that have not been opened by any users in X amount of time.
    Impact - This ensures that projects and their contents do not live past their useful lifespans. Just as with orphaned content, non-relevant information slows users' access to the relevant materials for their jobs.

    Request Phase Summary
    With a simple form that outlines the ownership of a project and its content, business purpose, education and resources, along with success criteria, we can ensure that an enterprise content management system stays clean and relevant to end users - allowing it to deliver the most value possible. The key here is to make the request straightforward and let the content management technology manage as much as possible through metadata, retention policies and workflow. Doing these basic steps will allow project content to get off to a great start in the enterprise!

    Stay tuned for the next installment - the "Create Phase" - covering security access and workflow involved in content creation, enabling a practical layer of governance over our enterprise content repository.

    Read the article

  • Cloud – the forecast is improving

    - by Rob Farley
    There is a lot of discussion about "the cloud" and how it affects people's data stories. Today the discussion enters the realm of T-SQL Tuesday, hosted this month by Jorge Segarra.

    Over the years, companies have invested a lot in making sure that their data is good, and I mean every aspect of it - the quality of it, the security of it, the performance of it, and more. Experts such as those of us at LobsterPot Solutions have helped these companies with this, and continue to work with clients to make sure that data is a strong part of their business, not an oversight. Whether business intelligence systems are being utilised or not, every business needs to be able to rely on its data and have confidence in it. Data should be a foundation upon which a business is built.

    In the past, data was stored in paper-based systems. Filing cabinets stored vital information. Today, people have server rooms with storage of various kinds, recognising that filing cabinets don't necessarily scale particularly well. It's easy to 'lose' data in a filing cabinet, when you have people who need to make sure that the sheets of paper are in the right spot and that you know how things are stored. Databases help solve that problem, but still the idea of a large filing cabinet continues; it just doesn't involve paper. If something happens to the physical 'filing cabinet', the problems are larger still, because then the data itself is under threat. Many clients have generators in case the power goes out, redundant cables in case the connectivity dies, and spare servers in other buildings just in case they're required. But still they're maintaining filing cabinets.

    You see, people like filing cabinets. There's something to be said for having your data 'close'. Even if the data is not in readable form, living as bits on a disk somewhere, the idea that its home is 'in the building' is comforting to many people. They simply don't want to move their data anywhere else. The cloud offers an alternative to this, and the human element is an obstacle.

    By leveraging the cloud, companies can have someone else look after their filing cabinet. A lot of people really don't like the idea of this, partly because the administrators of the data - those people who could potentially log in with escalated rights and see more than they should be allowed to, and who need to be trusted to respond if there's a problem - are now a faceless entity in the cloud. But this doesn't mean that the cloud is bad; it is simply a concern that some people may have.

    In new functionality that's on its way, we see hybrid mechanisms that let people leverage parts of the cloud with less fear. Companies can use cloud storage to hold their backup data, for example - backups that have been encrypted and are therefore not able to be read by anyone (including administrators) who doesn't have the right password. Companies can have a database instance that runs locally, but which has its data files in the cloud, complete with Transparent Data Encryption if needed. There can be a higher level of control, making the change easier to accept. Hybrid options allow people who have had fears (potentially very justifiable) to take a new look at the cloud, and to start embracing some of its benefits (such as letting someone else take care of storage, high availability, and more) without losing the feeling of the data being close.

    @rob_farley

    Read the article

  • Oracle Applications Cloud Release 8 Customization: Your User Interface, Your Text

    - by ultan o'broin
    Introducing the User Interface Text Editor

    In Oracle Applications Cloud Release 8, there's an addition to the customization tool set called the User Interface Text Editor (UITE). When signed in with an application administrator role, users launch this new editing feature from the Navigator's Tools > Customization > User Interface Text menu option. See how the editor is in there with the other customization tools?

    [Image: User Interface Text Editor is launched from the Navigator Customization menu]

    Applications customers need a way to make changes to the text that appears in the UI without having to initiate an IT project. Business users can now easily change labels on fields, for example. Using a composer and an activated sandbox, these users can take advantage of the Oracle Metadata Services (MDS), add a key to a text resource bundle, and then type in their preferred label and its description (as a best practice for further work, I'd recommend always completing that description).

    [Image: Changing a simplified UI field label using Oracle Composer]

    In Release 8, the UITE enables business users to easily change UI text on a much wider basis. As with composers, the UITE requires an activated sandbox where users can make their changes safely before committing them for others to see. The UITE is used for editing UI text that comes from Oracle ADF resource bundles or from the Message Dictionary (or FND_MESSAGE_% tables, if you're old enough to remember such things). Functionally, the Message Dictionary is used for the text that appears in business rule-type error, warning or information messages, or as a text source when ADF resource bundles cannot be used. In the UITE, these Message Dictionary texts are referred to as Multi-part Validation Messages. If the text comes from ADF resource bundles, then it's categorized as User Interface Text in the UITE. This category refers to the text that appears in embedded help in the UI or in simple error, warning, confirmation, or information messages. The embedded help types used in the application are explained in an Oracle Fusion Applications User Experience (UX) design pattern set. The message types have a UX design pattern set too.

    Using UITE

    The UITE enables users to search and replace text in UI strings using case-sensitive options, as well as by type. Users select singular and plural options for text changes, should they apply.

    [Image: Searching and replacing text in the UITE]

    The UITE also provides users with a way to preview and manage changes on an exclusion basis before committing to the final result. There might, for example, be situations where a phrase or word needs to remain different from how it's generally used in the application, depending on the context.

    [Image: Previewing replacement text changes. Changes can be excluded where required.]

    Multi-Part Messages

    The Message Dictionary table architecture has been inherited from Oracle E-Business Suite days. However, there are important differences in the Oracle Applications Cloud version, notably the additional message text components, as explained in the UX design patterns. Message Dictionary text has a broad range of uses as indicated, and it can also be reserved for internal application use, for use by PL/SQL and C programs, and so on. Message Dictionary text may even be concatenated together at run time, where required. The UITE handles the flexibility of such text architecture by enabling users to drill down on each message and see how it's constructed in total. That way, users can ensure that any text changes being made are consistent throughout the different message parts.

    [Image: Multi-part (Message Dictionary) message components in the UITE]

    Message Dictionary messages may also use supportability-related numbers, the ones that appear appended to the message text in the application's UI. However, should you have a requirement to remove these numbers from users' view, the UITE is not the tool for the job. Instead, see my blog about using the Manage Messages UI.

    Read the article

  • Multichannel Digital Engagement: Find Out How Your Organization Measures Up

    - by Michael Snow
    This article was originally published in the September 2013 edition of the Oracle Information InDepth Newsletter, Oracle WebCenter Edition.

    Thanks to mobile and social technologies, interactive online experiences are now commonplace. Not only that, they give consumers more choices, influence, and control than ever before. So how can you make your organization stand out? The key building blocks for delivering exceptional cross-channel digital experiences are outlined below. Also, a new assessment tool is available to help you measure your organization's ability to deliver such experiences.

    A clearly defined digital strategy. The customer journey is growing increasingly complex, encompassing multiple touchpoints and channels. It used to be easy to map marketing efforts to specific offline channels; for example, a direct mail piece with an offer to visit a store for a discounted purchase. Now it is more difficult to cultivate and track such clear cause-and-effect relationships. To deliver an integrated digital experience in this more complex world, organizations need a clearly defined and comprehensive digital marketing strategy that is backed up by an integrated set of software, middleware, and hardware solutions.

    Strong support for business agility and speed-to-market. As both IT and marketing executives know, speed-to-market and business agility are key to competitive advantage. That means marketers need solutions to support the rapid implementation of online marketing initiatives, plus the flexibility to adapt quickly to a changing marketplace. And IT needs tools with the performance, scalability, and ease of integration to support marketing efforts. Both teams benefit when business users are empowered to implement marketing initiatives on their own, with minimal IT intervention.

    The ability to deliver relevant, personalized content. Delivering a one-size-fits-all online customer experience is no longer acceptable. Customers expect you to know who they are, including their preferences and past relationship with your brand. That means delivering the most relevant content from the moment a visitor enters your site. To make that happen, you need a powerful rules engine so that marketers and business users can easily define site visitor segments and deliver content accordingly. That includes both implicit targeting that is based on the user's behavior, and explicit targeting that takes a user's profile information into account. Ideally, the rules engine can also intelligently weight recommendations when multiple segments apply to a specific customer.

    Support for social interactivity. With the advent of Facebook and LinkedIn, visitors expect to participate in and contribute to your web presence, and to share their experience on their own social networks. That requires easy incorporation of user-generated content such as comments, ratings, reviews, polls, and blogs; seamless integration with third-party social networking sites; and support for social login, which helps to remove barriers to social participation.

    The ability to deliver connected, multichannel experiences that include powerful, flexible mobile capabilities. By 2015, mobile usage is projected to surpass that of PCs and other wired devices. In other words, mobile is an essential element in delivering exceptional online customer experiences. This requires the creation and management of mobile experiences that are optimized for delivery to the thousands of different devices in use today. Just as important, organizations must be able to easily extend their traditional web presence to the mobile channel and deliver highly personalized and relevant multichannel marketing initiatives, while minimizing the time and effort required to manage mobile sites.

    Are you curious to know how your organization measures up when it comes to delivering an engaging, multichannel digital experience? If so, take this brief, 15-question online assessment and see how your organization scores in the areas of digital strategy, digital agility, relevance and personalization, social interactivity, and multichannel experience.

    Read the article

  • Building Enterprise Smartphone Apps – Part 1: Why Build Smartphone Apps

    - by Tim Murphy
    This is part 1 in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group. Feel free to leave feedback.

    Intro

    Most of us already carry smartphones. We play games on them. We keep up with what is going on with our friends and our favorite teams. We take pictures of our kids at their events. But the question is whether that is all they are good for. Many companies have aspects of their business that lend themselves to being performed by mobile devices. Some of them lean toward larger devices such as tablets, but many can be executed on smartphones. This and the following articles will discuss some of the possible applications of smartphone technology for businesses, the platforms that are available, and the considerations you need to make when building them. I'll take a look at some specific scenarios and wrap up with a couple of capabilities that are just emerging and can be used in the future.

    Why Build Enterprise Smartphone Applications

    So what are some of the ways you can leverage smartphone technology to gain efficiency in your business or a client's business? There are a few major areas where I have seen mobile platforms provide an advantage.

    Your mobile sales force is a key candidate for leveraging smartphone apps. They can visit clients at their retail locations and place orders on site. It is a more personal approach which can gain you customer loyalty. A salesperson may also gather information about the way a client does business or who their target market is. This allows you to focus marketing information or build customized support for your customer.

    You may also have a need to track physical inventory in a store. This is something that has historically been done with laser scanners, but with the camera capabilities in today's phones and tablets it is possible to use more general multi-purpose devices. This can save costs on both hardware and telecommunication contracts.

    Delivery verification is another area that has historically been the domain of specialized devices but can now be accomplished with smartphones. This also reduces costs because the same device is used for communicating with the driver and for other operations. Add to that the navigation capability of smartphones and you can see how the return on investment increases.

    Executives are always on the go. They spend most of their time in meetings and yet they need access to decision-making information at their fingertips. With a smartphone app they can get alerts when major sales are closed or critical accounting processes are completed that may need their attention. They can also answer questions by instantly pulling up BI reports.

    I have often heard operations support people say that they need things like VPN and RDP from their phones. If they can also have notifications of outages or critical support requests, they can react to situations without needing to be tied to their desks.

    These are all valid reasons to build smartphone applications. In the next installment I will discuss platforms and features.

    del.icio.us Tags: Smartphones, Enterprise Smartphone Apps, Architecture

    Read the article

  • Exporting makefile from Eclipse CDT

    - by Alex Farber
    I have a C++ project in Eclipse CDT on Ubuntu. My final goal is to build the project binaries for FreeBSD.

    As a first test, I create a simple C++ CDT project with a main.cpp file (cout << "OK" << endl;) and build it. Then I open a terminal window in the Release directory:

        alex@alex-linux:~/workspace/HelloWorld/Release$ ls
        HelloWorld  main.d  main.o  makefile  objects.mk  sources.mk  subdir.mk
        alex@alex-linux:~/workspace/HelloWorld/Release$ rm HelloWorld main.d main.o
        alex@alex-linux:~/workspace/HelloWorld/Release$ ls
        makefile  objects.mk  sources.mk  subdir.mk
        alex@alex-linux:~/workspace/HelloWorld/Release$ make
        Building file: ../main.cpp
        Invoking: GCC C++ Compiler
        g++ -O3 -Wall -c -fmessage-length=0 -MMD -MP -MF"main.d" -MT"main.d" -o"main.o" "../main.cpp"
        Finished building: ../main.cpp
        Building target: HelloWorld
        Invoking: GCC C++ Linker
        g++ -o"HelloWorld" ./main.o
        Finished building target: HelloWorld
        alex@alex-linux:~/workspace/HelloWorld/Release$ ./HelloWorld
        OK

    So far, so good. Now I copy the whole project tree to FreeBSD and try to build it:

        $ cd /home/alex/project
        $ ls
        main.cpp  release
        $ cd release
        $ ls
        makefile  objects.mk  sources.mk  subdir.mk
        $ make
        "makefile", line 5: Need an operator
        "makefile", line 10: Need an operator
        "makefile", line 11: Need an operator
        "makefile", line 12: Need an operator

    The CDT-generated makefile doesn't work. This is the beginning of the makefile:

        # Automatically-generated file. Do not edit!
        -include ../makefile.init

        RM := rm -rf

        # All of the sources participating in the build are defined here
        -include sources.mk
        -include subdir.mk
        -include objects.mk
        ...

    Line 5 is -include ../makefile.init. There really is no such file, yet somehow it works on the Ubuntu machine. What is the trick? How can I build this? By the way, a manually written makefile works:

        all:
        	g++ -O0 -g3 -Wall -c -fmessage-length=0 -MMD -MP -MF"main.d" -MT"main.d" -o"main.o" "../main.cpp"
        	g++ -o"HelloWorld" ./main.o

    Read the article

  • Autonumber with Entity Framework

    - by dcompiled
    I want to loop through a collection of objects and add them all to a table. The destination table has an auto-increment field. If I add a single object there is no problem. If I add two objects, both with a primary key of zero, the Entity Framework fails. I can manually specify primary keys, but the whole point of trying the EF was to make life easier, not more complicated. Here is the code, with the exception received following it:

        foreach (Contact contact in contacts)
        {
            Instructor instructor = InstructorFromContact(contact);
            context.AddToInstructors(instructor);
        }

        try
        {
            context.SaveChanges();
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.ToString());
        }

        System.InvalidOperationException: The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: AcceptChanges cannot continue because the object's key values conflict with another object in the ObjectStateManager. Make sure that the key values are unique before calling AcceptChanges.
           at System.Data.Objects.ObjectContext.SaveChanges(SaveOptions options)
           at System.Data.Objects.ObjectContext.SaveChanges()
           at DataMigration.Program.CopyInstructors() in C:\Projects\DataMigration\Program.cs:line 52

    Read the article

  • Django queryset not returning the same values as the generated sql

    - by HRCerqueira
    Hello guys, I have the following queryset:

        subscribers = User.objects.values('email', 'username').filter(
            Q(subscription_settings__new_question='i') |
            Q(subscription_settings__new_question_watched_tags='i',
              marked_tags__id__in=question.tags.values('id'),
              tag_selections__reason='good')
        ).exclude(id=question.author.id)

    The problem is that when I evaluate the query I get only the values that are filtered by the first Q object (even if I reverse the order of the objects). So let's say I was expecting users A, B, C and D, where A and B are filtered by the first Q object and C and D by the second. The queryset only returns A and B.

    I used the django-debug-toolbar to see the query that was actually being executed (and then used a direct print statement like "print subscribers.query.as_sql()" just to be sure), evaluated the query directly using psql (I'm using Postgres, by the way), and I get the results I expect. Here's the query:

        SELECT "auth_user"."email", "auth_user"."username"
        FROM "auth_user"
        LEFT OUTER JOIN "forum_markedtag" ON ("auth_user"."id" = "forum_markedtag"."user_id")
        INNER JOIN "forum_defaultsubscriptionsetting" ON ("auth_user"."id" = "forum_defaultsubscriptionsetting"."user_id")
        WHERE ((("forum_markedtag"."reason" = E'good'
                 AND "forum_defaultsubscriptionsetting"."new_question_watched_tags" = E'i'
                 AND "forum_markedtag"."tag_id" IN (SELECT U0."id"
                                                    FROM "tag" U0
                                                    INNER JOIN "question_tags" U1 ON (U0."id" = U1."tag_id")
                                                    WHERE U1."question_id" = 64))
                OR "forum_defaultsubscriptionsetting"."new_question" = E'i')
               AND NOT ("auth_user"."id" = 10))

    Thanks in advance.

    EDIT: I tried to break the queryset into two, one that uses the first Q object as the filter and another one with the second Q object. Although the second queryset produces the correct SQL, which returns the correct values when evaluated directly, it still returns nothing when evaluated as a Django queryset. Here's the alternative code:

        subscribers = User.objects.values('email', 'username').filter(
            subscription_settings__new_question='i').exclude(id=question.author.id)

        subscribers2 = User.objects.values('email', 'username').filter(
            subscription_settings__new_question_watched_tags='i',
            marked_tags__id__in=question.tags.values('id'),
            tag_selections__reason='good').exclude(id=question.author.id)

    Read the article

  • Database structure and source control - best practice

    - by Paddy
    Background: I came from several years working in a company where all the database objects were stored in source control, one file per object. We had a list of all the objects, which was maintained as new items were added (allowing us to run the scripts in order and handle dependencies), and a VB script that ran to create one big script for running against the database. All the tables were 'create if not exists' and all the SPs etc. were drop-and-recreate.

    Fast-forward to the present: I am now working in a place where the database is the master and there is no source control for DB objects, but we do use Redgate's tools for updating our production database (SQL Compare), which is very handy and requires little work.

    Question: How do you handle your DB objects? I like to have them under source control (and, as we're using Git, I'd like to be able to handle merge conflicts in the scripts rather than in the DB), but I'm going to be hard-pressed to get past the ease of using SQL Compare to update the database. I don't really want us updating scripts in Git and then using SQL Compare to update the production database from our dev DB, as I'd rather have 'one version of the truth', but I don't really want to get into re-writing a custom bit of software to bundle the whole lot of scripts together. I think Visual Studio Database Edition may do something similar to this, but I'm not sure if we will have the budget for it.

    I'm sure this has been asked to death, but I can't find anything that seems to quite have the answer I'm looking for. Similar to this, but not quite the same: http://stackoverflow.com/questions/340614/what-are-the-best-practices-for-database-scripts-under-code-control
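    For what it's worth, the "custom bit of software" for bundling does not have to be large. Here is a rough Java sketch of the build step described in the background; the directory layout, the numeric-prefix naming convention, and the batch separator are all assumptions for illustration, not from the original post:

        import java.io.IOException;
        import java.io.UncheckedIOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.util.Comparator;
        import java.util.stream.Collectors;
        import java.util.stream.Stream;

        // Collect per-object .sql files (named with a numeric prefix to control
        // ordering, e.g. 010_tables.sql, 020_views.sql) and concatenate them
        // into one deployment script.
        public class ScriptBundler {
            public static void main(String[] args) throws IOException {
                Path scriptsDir = Paths.get("db/objects");   // hypothetical layout
                Path output = Paths.get("db/deploy.sql");

                try (Stream<Path> files = Files.list(scriptsDir)) {
                    String bundle = files
                        .filter(p -> p.toString().endsWith(".sql"))
                        .sorted(Comparator.comparing(p -> p.getFileName().toString()))
                        .map(ScriptBundler::read)
                        .collect(Collectors.joining("\nGO\n")); // batch separator; adjust per RDBMS
                    Files.writeString(output, bundle);
                }
            }

            private static String read(Path p) {
                try {
                    return Files.readString(p);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            }
        }

    Run as part of the build, this keeps the scripts in Git as the 'one version of the truth' while still producing a single artifact to deploy.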

    Read the article

  • Why is ExecuteFunction method only available through base.ExecuteFunction in a child class of Object

    - by Matt
    I'm trying to call ObjectContext.ExecuteFunction from my ObjectContext object in the repository of my site. The repository is generic, so all I have is an ObjectContext object, rather than one that actually represents my specific context from the Entity Framework. Here's an example of generated code that uses the ExecuteFunction method:

        [global::System.CodeDom.Compiler.GeneratedCode("System.Data.Entity.Design.EntityClassGenerator", "4.0.0.0")]
        public global::System.Data.Objects.ObjectResult<ArtistSearchVariation> FindSearchVariation(string source)
        {
            global::System.Data.Objects.ObjectParameter sourceParameter;
            if ((source != null))
            {
                sourceParameter = new global::System.Data.Objects.ObjectParameter("Source", source);
            }
            else
            {
                sourceParameter = new global::System.Data.Objects.ObjectParameter("Source", typeof(string));
            }
            return base.ExecuteFunction<ArtistSearchVariation>("FindSearchVariation", sourceParameter);
        }

    But what I would like to do is something like this:

        public class Repository<E, C> : IRepository<E, C>, IDisposable
            where E : EntityObject
            where C : ObjectContext
        {
            private readonly C _ctx;
            // ...
            public ObjectResult<E> ExecuteFunction(string functionName, params ObjectParameter[] parameters)
            {
                // Create object parameters as needed, then:
                return _ctx.ExecuteFunction<E>(functionName, parameters);
            }
        }

    Anyone know why I have to call ExecuteFunction from base instead of _ctx? Also, is there any way to do something like I've written out? I would really like to keep my repository generic, but having to execute stored procedures is making that look more and more difficult...

    Thanks,
    Matt

    Read the article

  • Update table columns bound to NSArrayController

    - by Loz
    Hi, I'm fairly new to the world of bindings in Cocoa, and I'm having some trouble (perhaps/probably due to a misunderstanding).

    I have a singleton that contains an NSMutableArray called plugins, containing objects of class Plugin. It has a method called loadPlugins, which adds objects to the plugins array and may be called at any point. The singleton has been added as an instance in Interface Builder. Also in IB is an NSObjectController, whose content outlet is connected to the singleton, and an NSArrayController, whose contentArray is bound to the NSObjectController (controller key 'selection', model key path 'plugins', object class name 'Plugin'). Finally, I have a table view with 2 columns, the values of which are bound to the NSArrayController's arrangedObjects, using keys of properties in the Plugin class. So far so standard (as far as I can tell from tutorials, at least).

    My trouble is that when the loadPlugins method is called in the singleton and objects are added to the plugins array, the table doesn't update to show the objects (unless loadPlugins is called before the nib is loaded). Calling -reloadData on the tableView doesn't do anything. Is there a way to tell the NSArrayController that the referenced array has been updated? I understand there is the -add: method on NSArrayController, which could be used in loadPlugins, but this isn't desirable because I want to keep the singleton totally separate from the display aspect.

    This seems related to: http://stackoverflow.com/questions/1623396/refresh-cocoa-binding-nsarraycontroller-combobox. The line "editing the array behind the controller's back" seems to pinpoint the problem, but I would hope it is possible for the singleton to not know about the controller.

    Thanks in advance.

    Read the article

  • Best practices for class-mapping with SoapClient

    - by Foofy
    We're using SoapClient's class-mapping feature, and it's pretty sweet. Unfortunately, the SOAP service we're using has a bunch of read-only properties on some of the objects and will throw faults if the properties are passed back as anything but null. I need to filter out the properties before they're used in the SOAP call and am looking for advice on the best way to do it. So far the options are:

    1. Stick to a convention where I use getter and setter functions to manipulate the properties, and use property overloading to filter access, since only SoapClient would touch the properties directly. Developers would access properties like this:

        $obj->getAccountNumber()

    while SoapClient would access properties like this:

        $obj->accountNumber

    I don't like this because the properties are still exposed and things could go wrong if developers don't stick to the convention.

    2. Have a wrapper for SoapClient that sets a public property the mapped objects can check to see if the property is being accessed by SoapClient. I already have a wrapper that assigns a reference to itself to all the mapped objects:

        class SoapClientWrapper
        {
            public function __soapCall($method, $args)
            {
                $this->setSoapMode(true);
                $this->_soapClient->__soapCall($method, $args);
                $this->setSoapMode(false);
            }
        }

        class Invoice
        {
            function __get($val)
            {
                if ($this->_soapClient->getSoapMode()) {
                    return null;
                } else {
                    return $this->$val;
                }
            }
        }

    This works, but it doesn't feel right and seems a bit clunky.

    3. Do the mapping manually, and don't use SoapClient's mapping features. I'd just have a function on all the mapped objects that returns the safe-to-send properties. Also, nobody would have access to properties they shouldn't, since I could enforce getters and setters. A lot more work, though.

    Read the article

  • jQuery "Autocomplete" plugin is messing up the order of my data

    - by Max Williams
    I'm using Jorn Zaefferer's Autocomplete plugin on a couple of different pages. In both instances, the order of the displayed strings is a little bit messed up.

    Example 1: array of strings - basically they are in alphabetical order, except that General Knowledge has been pushed to the top:

    General Knowledge, Art and Design, Business Studies, Citizenship, Design and Technology, English, Geography, History, ICT, Mathematics, MFL French, MFL German, MFL Spanish, Music, Physical Education, PSHE, Religious Education, Science, Something Else

    Displayed strings:

    General Knowledge, Geography, Art and Design, Business Studies, Citizenship, Design and Technology, English, History, ICT, Mathematics, MFL French, MFL German, MFL Spanish, Music, Physical Education, PSHE, Religious Education, Science, Something Else

    Note that Geography has been pushed up to be the second item, after General Knowledge. The rest are all fine.

    Example 2: as above, but with Cross-curricular instead of General Knowledge:

    Cross-curricular, Art and Design, Business Studies, Citizenship, Design and Technology, English, Geography, History, ICT, Mathematics, MFL French, MFL German, MFL Spanish, Music, Physical Education, PSHE, Religious Education, Science, Something Else

    Displayed strings:

    Cross-curricular, Citizenship, Art and Design, Business Studies, Design and Technology, English, Geography, History, ICT, Mathematics, MFL French, MFL German, MFL Spanish, Music, Physical Education, PSHE, Religious Education, Science, Something Else

    Here, Citizenship has been pushed up to the number 2 position.

    I've experimented a little, and it seems like there's a bug saying "put things that start with the same letter as the first item straight after the first item and leave the rest alone". Kind of mystifying. I've tried a bit of debugging by triggering alerts inside the autocomplete plugin code, but everywhere I can see, it's using the correct order; it seems to go wrong only when the list is rendered out. Any ideas, anyone?

    max

    Read the article

  • How is jQuery so fast?

    - by ClarkeyBoy
    Hey, I have a rather large application which, on the admin frontend, takes a few seconds to load a page because of all the pageviews that it has to load into objects before displaying anything. It's a bit complex to explain how the system works, but a few of my other questions explain it in great detail. The main difference between what they describe and the current system is that the customer frontend no longer loads all the pageviews into objects when a customer first views the page - it simply adds the pageview to the database and creates an object in an unsynchronised list. To put it simply: when a customer views a page, it no longer loads all the pageviews into objects; but the admin frontend still does.

    I have been working on some admin tools on the customer frontend recently, so if an administrator clicks the description of an item in the catalogue, the right-hand column displays statistics and available actions for the selected item. To do this, the page which gets loaded (through $('action-container').load(bla bla bla);) into the right-hand column has to loop through ALL the pageviews - which ultimately means that ALL the pageviews are loaded into objects if they haven't been already. For some reason this loads really REALLY fast. The difference in speed is only about a second on my dev site, but the live site has thousands of pageviews, so the difference is quite big.

    So my question is: why does the admin frontend load so slowly while using $(bla).load(bla); is so fast? I mean, whatever method jQuery uses, can't browsers use it too and load pages super-fast? Obviously not, or someone would've done that by now - but I am interested to know why the difference is so big. Is it just my system, or is there a major difference in speed between the browser getting a page and jQuery getting a page? Do other people experience the same kind of difference?

    Thanks in advance,
    Regards,
    Richard

    Read the article

  • Oracle sqlldr: column not allowed here

    - by Wade Williams
    Can anyone spot the error in this attempted data load? The '\\N' is there because this is an import of an OUTFILE dump from MySQL, which writes \N for NULL fields. The decode is to catch cases where the field might be an empty string, or might be \N. Using Oracle 10g on Linux.

        load data
        infile objects.txt
        discardfile objects.dsc
        truncate
        into table objects
        fields terminated by x'1F' optionally enclosed by '"'
        (ID INTEGER EXTERNAL NULLIF (ID='\\N'),
         TITLE CHAR(128) NULLIF (TITLE='\\N'),
         PRIORITY CHAR(16) "decode(:PRIORITY, BLANKS, NULL, '\\N', NULL)",
         STATUS CHAR(64) "decode(:STATUS, BLANKS, NULL, '\\N', NULL)",
         ORIG_DATE DATE "YYYY-MM-DD HH:MM:SS" NULLIF (ORIG_DATE='\\N'),
         LASTMOD DATE "YYYY-MM-DD HH:MM:SS" NULLIF (LASTMOD='\\N'),
         SUBMITTER CHAR(128) NULLIF (SUBMITTER='\\N'),
         DEVELOPER CHAR(128) NULLIF (DEVELOPER='\\N'),
         ARCHIVE CHAR(4000) NULLIF (ARCHIVE='\\N'),
         SEVERITY CHAR(64) "decode(:SEVERITY, BLANKS, NULL, '\\N', NULL)",
         VALUED CHAR(4000) NULLIF (VALUED='\\N'),
         SRD DATE "YYYY-MM-DD" NULLIF (SRD='\\N'),
         TAG CHAR(64) NULLIF (TAG='\\N')
        )

    Sample data (record 1; the ^_ represents the unprintable 0x1F delimiter):

        1987^_Component 1987^_\N^_Done^_2002-10-16 01:51:44^_2002-10-16 01:51:44^_import^_badger^_N^_^_N^_0000-00-00^_none

    Error:

        Record 1: Rejected - Error on table objects, column SEVERITY.
        ORA-00984: column not allowed here

    Read the article

  • Persisting Joda DateTime instead of Java Date in hibernate

    - by Tauren
    My entities currently contain java.util.Date properties. I'm starting to use Joda-Time for date manipulation and calculations quite frequently, which means I'm constantly converting my Dates into Joda DateTime objects and back again. So I was wondering: is there any reason I shouldn't just change my entities to store Joda DateTime objects instead of Java Date objects? Please note that these entities are persisted via Hibernate.

    I found the jodatime-hibernate project, but I also read on the Joda mailing list that it isn't compatible with newer versions of Hibernate, and it doesn't seem to be very well maintained. So I'm wondering if it would be best to just continue converting between Date and DateTime, or if it would be wise to start persisting DateTime objects. My concern is being reliant on a poorly maintained library.

    Edit: Note that one of my objectives is to better store timezone information. Storing just a Date appears to save the date in the local timezone. As my application can be used globally, I need to know the timezone as well. Joda Time Hibernate seems to address this in its user guide.
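    One low-dependency pattern worth considering (the entity and field names below are hypothetical, not from the original question) is to keep java.util.Date plus an explicit zone ID in the persisted fields and expose Joda DateTime only at the accessor level, so no Hibernate bridge library is needed:

        import java.util.Date;
        import org.joda.time.DateTime;
        import org.joda.time.DateTimeZone;

        // Sketch: persist a plain java.util.Date (plus the zone ID as a string),
        // but expose Joda DateTime to the rest of the application.
        public class Meeting {
            private Date startsAt;      // mapped by Hibernate as a normal TIMESTAMP
            private String timeZoneId;  // e.g. "America/Chicago", stored alongside

            public DateTime getStartsAt() {
                return new DateTime(startsAt, DateTimeZone.forID(timeZoneId));
            }

            public void setStartsAt(DateTime dateTime) {
                this.startsAt = dateTime.toDate();            // the instant is zone-independent
                this.timeZoneId = dateTime.getZone().getID(); // keep the zone explicitly
            }
        }

    The zone travels with the entity explicitly, which addresses the "local timezone" concern in the edit above.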

    Read the article

  • Java Collections and Garbage Collector

    - by Anth0
    A little question regarding performance in a Java web app.

    Let's assume I have a List<Rubrique> listRubriques with ten Rubrique objects. A Rubrique contains one list of products (List<Product> listProducts) and one list of clients (List<Client> listClients). What exactly happens in memory if I do this?

        listRubriques.clear();
        listRubriques = null;

    My take would be that, since listRubriques is empty, all the objects previously referenced by this list (including the listProducts and listClients of each Rubrique) will be garbage collected pretty soon. But since collections in Java are a little bit tricky, and since I have real performance issues with my app, I'm asking the question. :)

    Edit: let's assume now that my Client object itself contains a List<Client>, so I have a kind of circular reference between my objects. What would happen then if listRubriques is set to null? This time, my take would be that my Client objects become "unreachable" and might create a memory leak?
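    For what it's worth, reachability, not reference counting, is what drives collection in the JVM, so a cycle on its own cannot leak. A tiny runnable sketch (the class here is hypothetical, just for illustration):

        import java.util.ArrayList;
        import java.util.List;

        // A cycle between objects does not keep them alive: Java's tracing GC
        // frees any group of objects unreachable from the GC roots, cycles included.
        class Client {
            List<Client> friends = new ArrayList<>(); // clients referencing clients
        }

        public class GcDemo {
            public static void main(String[] args) {
                Client a = new Client();
                Client b = new Client();
                a.friends.add(b);
                b.friends.add(a); // a <-> b cycle

                a = null;
                b = null;         // the whole cycle is now unreachable and is
                                  // eligible for collection; no clear() required
                System.gc();      // only a hint; the JVM decides when to collect
            }
        }

    The leak risk in practice comes from collections that stay reachable (static caches, listener lists, session scopes), not from the cycle itself.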

    Read the article

  • Unit Testing in the real world

    - by Malfist
    I manage a rather large application (50k+ lines of code) by myself, and it handles some rather critical business actions. To describe the program simply: it is a fancy UI with the ability to display and change data from the database, and it manages around 1,000 rental units, about 3k tenants, and all the finances.

    When I make changes, because the code base is so large, I sometimes break something somewhere else. I typically test by going through the stuff I changed at the functional level (i.e. I run the program and work through the UI), but I can't test for every situation. That is why I want to get started with unit testing.

    However, this isn't a true three-tier program with a database tier, a business tier, and a UI tier. A lot of the business logic is performed in the UI classes, and many things are done on events. To complicate things, everything is database-driven, and I've not seen (so far) good suggestions on how to unit test database interactions.

    What would be a good way to get started with unit testing for this application? Keep in mind that I've never done unit testing or TDD before. Should I rewrite it to remove the business logic from the UI classes (a lot of work)? Or is there a better way?
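    As an illustration of what extracting logic buys you, here is a minimal JUnit-style sketch in Java; the rental-domain rule and the class names are invented for the example, not taken from the application above. Once a calculation lives outside the UI and the database, it can be tested in milliseconds:

        import static org.junit.Assert.assertEquals;
        import java.math.BigDecimal;
        import org.junit.Test;

        // Hypothetical rule pulled out of a UI event handler into a plain class.
        class LateFeeCalculator {
            static BigDecimal lateFee(BigDecimal monthlyRent, int daysLate) {
                if (daysLate <= 5) {
                    return BigDecimal.ZERO; // grace period (invented rule)
                }
                return monthlyRent.multiply(new BigDecimal("0.05")); // flat 5% once late
            }
        }

        public class LateFeeCalculatorTest {
            @Test
            public void noFeeWithinGracePeriod() {
                assertEquals(0, BigDecimal.ZERO.compareTo(
                        LateFeeCalculator.lateFee(new BigDecimal("900.00"), 5)));
            }

            @Test
            public void fivePercentFeeOnceLate() {
                assertEquals(0, new BigDecimal("45.00").compareTo(
                        LateFeeCalculator.lateFee(new BigDecimal("900.00"), 12)));
            }
        }

    Database interactions can then be tested separately and more sparingly, for example against a throwaway copy of the schema, while the bulk of the logic runs without any database at all.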

    Read the article

  • jQuery toggle and fade in one function?

    - by tony noriega
    I was wondering if toggle() and fadeIn() could be used in one function... I got this to work, but it only fades in after the second click, not on the first click of the toggle.

        $(document).ready(function() {
            $('#business-blue').hide();
            $('a#biz-blue').click(function() {
                $('#business-blue').toggle().fadeIn('slow');
                return false;
            });
            // hides the slickbox on clicking the noted link
            $('a#biz-blue-hide').click(function() {
                $('#business-blue').hide('fast');
                return false;
            });
        });

        <a href="#" id="biz-blue">Learn More</a>
        <div id="business-blue" style="border:1px solid #00ff00; background:#c6c1b8; height:600px; width:600px; margin:0 auto; position:relative;">
            <p>stuff here</p>
        </div>

    Read the article

  • Subversion svn:externals - What's wrong here?

    - by Brandon Montgomery
    I first want to say I've read the Subversion manual. I've read this question. I've also read this question. Here's my dilemma.

    Let's say I have 3 repositories laid out like this:

        DataAccessObject/
            branches/
            tags/
            trunk/
                DataAccessObject/
                DataAccessObjectTests/

        PlanObject/
            branches/
            tags/
            trunk/
                PlanObject/
                PlanObjectTests/

        WinFormsPlanViewer/
            branches/
            tags/
            trunk/
                WinFormsPlanViewer/

    The PlanObject and DataAccessObject repositories contain shared projects. They are used by the WinFormsPlanViewer, but also by several other projects in several other repositories. Bear with me here. I put an svn:externals definition on the WinFormsPlanViewer/trunk folder like this:

        https://server/svn/PlanObject/trunk Objects
        https://server/svn/DataAccessObject/trunk Objects

    And here's what I see after I do an svn update:

        WinFormsPlanViewer/
            branches/
            tags/
            trunk/
                WinFormsPlanViewer/
                Objects/
                    DataAccessObject/
                    DataAccessObjectTests/

    The PlanObject stuff doesn't even come down in the update! I don't know if this has anything to do with it, but there's an externals definition on the PlanObject/trunk folder also:

        https://server/svn/DataAccessObject/trunk Objects

    What's going on here? What am I doing wrong? Are there bad consequences of referencing the PlanObject and the DataAccessObject from the WinFormsPlanViewer using svn:externals, when the PlanObject references the DataAccessObject using svn:externals also?

    Read the article

  • How will Arel affect Rails' includes() capabilities?

    - by Tim Snowhite
    I've looked over the Arel sources, and some of the ActiveRecord sources for Rails 3.0, but I can't seem to glean a good answer for myself as to whether Arel will change our ability to use includes() when constructing queries, for the better.

    There are instances when one might want to modify the conditions on an ActiveRecord :include query in 2.3.5 and before, for the associated records which would be returned. But as far as I know, this is not programmatically tenable for all :include queries. (I know some AR find-with-includes make t#{n}.c#{m} renames for all the attributes, and one could conceivably add conditions to these queries to limit the joined sets' results; but others do n_joins + 1 queries over the id sets iteratively, and I'm not sure how one might hack AR to edit those iterated queries.)

    Will Arel allow us to construct ActiveRecord queries which specify the resulting associated model objects when using includes()? Example:

        # User has_many :posts; Post has_many :comments
        User.all(:include => :posts)
        # Say I wanted the post objects to have their comment counts loaded
        # without adding a comment_count column to `posts`.

        # At the post level, one could do so by:
        posts_with_counts = Post.all(
          :select => 'posts.*, count(comments.id) as comment_count',
          :joins => 'left outer join comments on comments.post_id = posts.id',
          :group_by => 'posts.id') # i believe

        # But it seems impossible to do so while linking these post objects to
        # each user as well, without running User.all() and then zippering the
        # objects into some other collection (ugly), or running
        # posts.group_by(&:user) (even uglier, with the n user queries)

    Read the article

  • Hibernate: Opinions on Composite PK vs Surrogate PK

    - by Albert Kam
    As I understand it, whenever I use @Id and @GeneratedValue on a Long field inside a JPA/Hibernate entity, I'm actually using a surrogate key, and I think this is a very nice way to define a primary key, considering my not-so-good experiences with composite primary keys, where:

    - there is more than one business-value column combination that forms a unique PK
    - the composite PK values get duplicated across the detail tables
    - you cannot change the business values inside that composite PK

    I know Hibernate can support both types of PK, but I'm left wondering because of previous chats with experienced colleagues, who said that a composite PK is easier to deal with when doing complex SQL queries and stored procedure processes. They went on to say that using surrogate keys will complicate things when doing joins, and that under several conditions it's impossible to do some things when using surrogate keys. I'm sorry I can't explain the details here, since I wasn't clear enough when they explained it; maybe I'll put in more details next time.

    I'm currently starting a project and want to try out surrogate keys, since they don't get duplicated across tables and the business-column values can change. And when some business-value combination must be unique, I can use something like:

        @Table(name="MY_TABLE", uniqueConstraints={
            @UniqueConstraint(columnNames={"FIRST_NAME", "LAST_NAME"}) // name + lastName combination must be unique
        })

    But I'm still in doubt because of the previous discussion about composite keys. Could you share your experiences in this matter? Thank you!
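    To make the trade-off concrete, here is a minimal sketch of the surrogate-key shape being described; the entity and column names are invented for illustration. The Long id is the primary key, while business uniqueness is enforced as a separate constraint, so the business values stay editable:

        import javax.persistence.Column;
        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.Table;
        import javax.persistence.UniqueConstraint;

        @Entity
        @Table(name = "PERSON",
               uniqueConstraints = @UniqueConstraint(columnNames = {"FIRST_NAME", "LAST_NAME"}))
        public class Person {
            @Id
            @GeneratedValue
            private Long id;  // surrogate PK; never carries business meaning

            @Column(name = "FIRST_NAME", nullable = false)
            private String firstName;

            @Column(name = "LAST_NAME", nullable = false)
            private String lastName;

            // getters and setters omitted for brevity
        }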

    Read the article
