Search Results

Search found 1402 results on 57 pages for 'underlying'.

Page 19/57

  • Oracle Access Manager 11.1.2 Certified with E-Business Suite 12

    - by Elke Phelps (Oracle Development)
    I am happy to announce that Oracle Access Manager 11gR2 (11.1.2) is now certified with E-Business Suite Releases 12.0.6 and 12.1. If you are implementing single sign-on for the first time, or are an existing Oracle Access Manager user, you may integrate with Oracle Access Manager 11gR2 using Oracle Access Manager WebGate and Oracle E-Business Suite AccessGate.

    Supported Architecture and Release Versions
    • Oracle Access Manager 11.1.2
    • Oracle E-Business Suite Release 12.0.6, 12.1.1+
    • Oracle Identity Management 11.1.1.5, 11.1.1.6
    • Oracle Internet Directory 11.1.1.6
    • Oracle WebLogic Server 10.3.0.5+

    What's New In This Oracle Access Manager 11gR2 Integration?
    • Simplified integration: We've simplified the instructions and cut the number of pages, while adding clarity to the steps.
    • Automation of configuration steps: We've automated some of the required configuration steps. This is the first phase of the automation and diagnostics that are part of our roadmap for this integration.
    • Use of the default OAM Login page: We are reducing the required troubleshooting by delivering the default OAM Login page for the integration. A custom login page can still be created by using Oracle Access Manager.
    • Use of the Detached Credential Collector in a Demilitarized Zone: We have certified the Detached Credential Collector as part of a DMZ configuration. This will enhance the security of the underlying Oracle Access Manager and E-Business Suite components, which will now be required only within a company's intranet.

    Choosing the Right Architecture
    Our previously published blog article and support note with recommended and certified single sign-on integration paths have been updated to include Oracle Access Manager 11gR2: Overview of Single Sign-On Integration Options for Oracle E-Business Suite (Note 1388152.1)

    Other References
    • Integrate with Oracle Access Manager 11gR2 (11.1.2) using Oracle E-Business Suite AccessGate (Note 1484024.1)
    • Overview of Single Sign-On Integration Options for Oracle E-Business Suite (Note 1388152.1)

    Related Articles
    • Understanding Options for Integrating Oracle Access Manager with E-Business Suite
    • Why Does E-Business Suite Integration with OAM Require Oracle Internet Directory?
    • In-Depth: Using Third-Party Identity Managers with E-Business Suite Release 12

    Read the article

  • How to promote code reuse and documentation?

    - by Graviton
    As a team lead of 10+ developers, I want to promote code reuse. We have written a lot of code over the past few years, and much of it is repetitive. The problem now is that a lot of this code is simply a duplicate of some other code, or a slight variation of it. I have started the movement (a discussion) on how to turn code into components so that they can be reused in future projects, but the problem is that I am afraid new developers, or other developers who are unaware of the components, will just go ahead and write their own thing. Is there any way to remind developers to reuse the components, improve the documentation, and contribute to the underlying component, instead of duplicating the existing code and tweaking it, or just writing their own? How do we make the components easily discoverable and easily usable, so that everyone will use them? Edit: I think every developer knows the benefits of reusable components and wants to use them; it's just that we don't know how to make them discoverable. Also, developers know when they are writing code that they should write reusable code, but lack the motivation to do so.

    Read the article

  • Separation of development responsibilities in a new project

    - by dreza
    We have very recently started a new project (MVC 3.0), and some of our early discussion has been around how the work and development will be split amongst the team members, to ensure we get the least amount of overlap of work and so make it a bit easier for each developer to get on and do their work. The project is expected to take about 6 months to 1 year (although not all developers are likely to stay on; some might roll off towards the end). Our team is going to be small, so this will help out a bit, I believe. The team will essentially consist of:

    • 3 x developers (all different levels, i.e. more senior, intermediate and junior)
    • 1 x project manager / product owner / tester
    • An external company responsible for doing our design work

    General project/development decisions so far have included:

    • Develop in an Agile way using SCRUM techniques (we are still very much learning this approach as a company)
    • Use MVVM architecture
    • Use Ninject and DI where possible
    • Attempt to use TDD as much as possible to drive development
    • Keep our controllers as skinny as possible
    • Keep our views as simple as possible

    During our discussions, two approaches have been broached as to how to separate the workload given our objectives outlined above.

    OPTION 1: A framework separation where each person is responsible for conceptual areas, with overlap and discussion primarily in the integration areas. The integration areas would be the responsibility of both developers as required.

    View prototypes (Graphic designer)
      |
      - Mockups
      |
    Views (Razor and view helpers etc.) & Javascript (Developer 1)
      |
      - View models (integration point)
      |
    Controllers and application logic (Developer 2)
      |
      - Models (integration point)
      |
    Domain model and persistence (Developer 3)

    OPTION 2: A more task-orientated approach where each person is responsible for the completion of an entire task (story), from view to controller to model.

    QUESTION: For those who have worked in small teams developing MVC projects, how have you managed workload distribution in this situation? I can't imagine the junior being responsible for building parts of the underlying architecture, so would giving them responsibility for the view make sense, considering we are trying to keep it simple?

    Read the article

  • Oracle is Proud Sponsor of Gartner Security and Risk Management Summit 2011

    - by Troy Kitch
    Oracle will have a very strong presence at this year's Gartner Security and Risk Management Summit 2011 in Washington D.C., June 20-23. If you plan on being there, please be sure to stop by Oracle booth D and say "hi" to the Security Solution Experts. Please join us for the Oracle Solution Provider Session, the Oracle Solution Showcase receptions, and Oracle face-to-face meetings. We have some powerful database security demonstrations that we're showing off. If you haven't had an opportunity to check out the new Oracle Database Firewall, now's your chance to learn why it's the first line of defense in a database security defense-in-depth strategy. Additionally, Mark Morrison, director of intelligence community information assurance, and Pat Sack, VP of the Oracle national security group, will discuss U.S. government cross-domain secure information sharing. This case study session will explain how Oracle helped the U.S. government consolidate its mission-critical intelligence database infrastructure securely, and the underlying Oracle Database security solutions that can benefit any organization looking to increase business agility and drive down IT costs through database consolidation. Potomac Ballroom B. Find out more about the event here. Tweet #GartnerSecurity to join the conversation.

    Read the article

  • Resolution Independence in libGDX

    - by ashes999
    How do I make my libGDX game resolution/density independent? Is there a way to specify image sizes as "absolute" regardless of the underlying density? I'm making a very simple kids game; just a bunch of sprites displayed on-screen, and some text for menus (options menu primarily). What I want to know is: how do I make my sprites/fonts resolution independent? (I have wrapped them in my own classes to make things easier.) Since it's a simple kids game, I don't need to worry about the "playable area" of the game; I want to use as much of the screen space as possible. What I'm doing right now, which seems super incorrect, is to simply create images suitable for large resolutions, and then scale down (or rarely, up) to fit the screen size. This seems to work okay (in the desktop version), even with linear mapping on my textures, but the smaller resolutions look ugly. Also, this seems to fly in the face of Android's "device independent pixels" (DPs). Or maybe I'm missing something and libGDX already takes care of this somehow? What's the best way to tackle this? I found this link; is this a good way of solving the problem?: http://www.dandeliongamestudio.com/2011/09/12/android-fragmentation-density-independent-pixel-dip/ It mentions how to control the images, but it doesn't mention how to specify font/image sizes regardless of density.
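    One widely used pattern (a minimal sketch, not libGDX's only mechanism) is to lay the scene out in a fixed virtual coordinate system and let an orthographic camera scale it to whatever physical resolution the device reports. The virtual size constants and asset name below are assumptions for illustration. Note that this naive version stretches to fit, so preserving aspect ratio would need extra letterboxing logic (later libGDX releases wrap this pattern in viewport helpers); and since a BitmapFont drawn through the same SpriteBatch picks up the same projection, the approach covers fonts as well as sprites.

        import com.badlogic.gdx.ApplicationAdapter;
        import com.badlogic.gdx.Gdx;
        import com.badlogic.gdx.graphics.GL20;
        import com.badlogic.gdx.graphics.OrthographicCamera;
        import com.badlogic.gdx.graphics.Texture;
        import com.badlogic.gdx.graphics.g2d.SpriteBatch;

        public class KidsGame extends ApplicationAdapter {
            // All layout happens in these virtual units, never in real pixels.
            static final float VIRTUAL_WIDTH = 800f;
            static final float VIRTUAL_HEIGHT = 480f;

            private OrthographicCamera camera;
            private SpriteBatch batch;
            private Texture sprite;

            @Override
            public void create() {
                camera = new OrthographicCamera();
                // Map the virtual coordinate system onto the actual screen.
                camera.setToOrtho(false, VIRTUAL_WIDTH, VIRTUAL_HEIGHT);
                batch = new SpriteBatch();
                sprite = new Texture("sprite.png"); // assumed asset name
            }

            @Override
            public void render() {
                Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
                camera.update();
                // The projection matrix does the density scaling for us.
                batch.setProjectionMatrix(camera.combined);
                batch.begin();
                batch.draw(sprite, 100, 100, 200, 200); // virtual units, not pixels
                batch.end();
            }

            @Override
            public void dispose() {
                batch.dispose();
                sprite.dispose();
            }
        }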

    Read the article

  • Roadmap for Thinktecture IdentityServer

    - by Your DisplayName here!
    I got asked today if I could publish a roadmap for thinktecture IdentityServer (idsrv for short). Well – I got a lot of feedback after B1, and one of the biggest points was the data access layer. So I made two changes:

    • I moved the configuration database access code to EF 4.1 code first. That makes it much easier to change the underlying database: it is now just a matter of changing the connection string to use real SQL Server instead of SQL Compact. Important when you plan to scale out.
    • I included the ASP.NET Universal Providers in the download. This adds official support for SQL Azure, SQL Server and SQL Compact for the membership, roles and profile features. Unfortunately the Universal Providers use a different schema than the original ASP.NET providers (that sucks, btw!), so I made them optional. If you want to use them, go to web.config and uncomment the new providers.

    Then there are some other small changes:

    • The relying party registration entries now have extra fields for data that you want to couple with the RP. One use case could be to give the UI a hint about what the login experience should look like per RP. This allows a different look and feel for different relying parties. I also included a small helper API that you can use to retrieve the RP record based on the incoming WS-Federation query string.
    • WS-Federation single sign-out now conforms to the spec.
    • I made certificate-based endpoint identities for SSL endpoints optional. This caused some problems with configuration and versioning of existing clients.

    I hope I can release the RC in the next few days. If there are no major issues, there will be an RTM very soon!

    Read the article

  • How to interpret number of URL errors in Google webmaster tools

    - by user359650
    Recently Google made some changes to Webmaster Tools, which are explained here: http://googlewebmastercentral.blogspot.com/2012/03/crawl-errors-next-generation.html One thing I could not find out is how to interpret the number of errors over time. At the end of February we migrated our website and didn't implement redirect rules for some pages (quite a few, actually). Here is what we're getting from the Crawl errors: What I don't know is whether the number of errors is cumulative over time or not (i.e. if Google bots crawl your website on 2 different days and find 1 separate issue on each day, whether they will report 1 error for each day, or 1 for the 1st and 2 for the 2nd). Based on the Crawl stats we can see that the number of requests made by Google bots doesn't increase: Therefore I believe the number of errors reported is cumulative, and that an error detected on 1 day is taken into account and reported on subsequent days until the underlying problem is fixed and the page is crawled again (or until you manually mark the error as fixed), because if you don't make more requests to a website, there is no way you can check new pages and old pages at the same time. Q: Am I interpreting the number of errors correctly?

    Read the article

  • Setup for mounting kerberized nfs home directory - gssd not finding valid kerberos ticket

    - by janm
    Our home directories are exported via kerberized NFS, so a user needs a valid Kerberos ticket to be able to mount their home. This setup works fine with our existing clients & server. Now we want to add an 11.10 client, and have thus set up LDAP & Kerberos together with pam_mount. The LDAP authentication works and users can log in via ssh; however, their homes cannot be mounted. When pam_mount is configured to mount as root, gssd does not find a valid Kerberos ticket and the mount fails:

    Nov 22 17:34:26 zelda rpc.gssd[929]: handle_gssd_upcall: 'mech=krb5 uid=0 enctypes=18,17,16,23,3,1,2 '
    Nov 22 17:34:26 zelda rpc.gssd[929]: handling krb5 upcall (/var/lib/nfs/rpc_pipefs/nfs/clnt2)
    Nov 22 17:34:26 zelda rpc.gssd[929]: process_krb5_upcall: service is '<null>'
    Nov 22 17:34:26 zelda rpc.gssd[929]: getting credentials for client with uid 0 for server purple.physcip.uni-stuttgart.de
    Nov 22 17:34:26 zelda rpc.gssd[929]: CC file '/tmp/krb5cc_65678_Ku2226' being considered, with preferred realm 'PURPLE.PHYSCIP.UNI-STUTTGART.DE'
    Nov 22 17:34:26 zelda rpc.gssd[929]: CC file '/tmp/krb5cc_65678_Ku2226' owned by 65678, not 0
    Nov 22 17:34:26 zelda rpc.gssd[929]: WARNING: Failed to create krb5 context for user with uid 0 for server purple.physcip.uni-stuttgart.de
    Nov 22 17:34:26 zelda rpc.gssd[929]: doing error downfall

    When pam_mount is instead configured with the noroot=1 option, it cannot mount the volume at all:

    Nov 22 17:33:58 zelda sshd[2226]: pam_krb5(sshd:auth): user phy65678 authenticated as phy65678@PURPLE.PHYSCIP.UNI-STUTTGART.DE
    Nov 22 17:33:58 zelda sshd[2226]: Accepted password for phy65678 from 129.69.74.20 port 51875 ssh2
    Nov 22 17:33:58 zelda sshd[2226]: pam_unix(sshd:session): session opened for user phy65678 by (uid=0)
    Nov 22 17:33:58 zelda sshd[2226]: pam_mount(mount.c:69): Messages from underlying mount program:
    Nov 22 17:33:58 zelda sshd[2226]: pam_mount(mount.c:73): mount: only root can do that
    Nov 22 17:33:58 zelda sshd[2226]: pam_mount(pam_mount.c:521): mount of /Volumes/home/phy65678 failed

    So how can we allow users of a specific group to perform NFS mounts? If this does not work, can we make pam_mount use root but pass the correct uid?
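    One possible avenue (a hedged sketch, not a verified fix): pam_mount can restrict a volume rule to members of a group via the sgrp attribute in pam_mount.conf.xml, so the NFS home is only mounted for the intended users. The group name "nfshome" and the paths below are assumptions for illustration:

        <!-- /etc/security/pam_mount.conf.xml (excerpt; group and paths assumed) -->
        <volume sgrp="nfshome"
                fstype="nfs4"
                server="purple.physcip.uni-stuttgart.de"
                path="home/%(USER)"
                mountpoint="~"
                options="sec=krb5" />

    Note that sgrp only controls which logins trigger a mount attempt; it does not lift the kernel's requirement that the mount itself be privileged, which is what the "mount: only root can do that" message is about. The root-mount variant, on the other hand, looks like a credential problem: gssd runs as root and, as the log shows, rejects the user's ticket cache because it is owned by uid 65678 rather than 0.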

    Read the article

  • SQLAuthority News – BI Quiz Question – How to Optimize Cube? – Hints

    - by pinaldave
    I wrote earlier about the SQL BI Quiz over here. The details of the quiz are as follows: Working with huge data is very common in Data Warehousing. It is necessary to create cubes on the data to make it meaningful and consumable. There are cases when retrieving the data from a cube takes a lot of time. Let us assume that your cube has been returning data very quickly. Suddenly, one day, it starts returning data very slowly. What are the three things you would do to diagnose this? After diagnosing, what would you do to resolve the performance issue? Participate in my question over here. Here are a couple of hints about what I am looking for in an answer:

    • How do you get to the root of the slow performance?
    • Is hardware causing the problem, or something else?
    • Is the slowness due to how the cube is built and its granularity?
    • Do the underlying tables require maintenance?
    • Is there a chance to refactor the process?
    • Are there any tools which can help diagnose the slowness of the cube?

    It is not necessary to answer all the questions – but they are something to start with. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Show Notes: Debra Lilley on Fusion Applications

    - by Bob Rhubart
    The latest ArchBeat program features a three-part interview with Oracle ACE Director Debra Lilley (ACE Profile). Debra is Oracle Alliance Director at Fujitsu, Executive Member at the International Oracle Users Group Community (IOUG), Director and Deputy Chair at the UK Oracle Users Group (UKOUG), and a partner at Oracle UK. So yeah, she's connected. In this interview Debra talks about her connection to Oracle Fusion Applications.

    • Listen to Part 1: Debra talks about her role as the Director and Deputy Chairperson of the UKOUG, and about the UKOUG development group's involvement in Oracle Fusion Applications.
    • Listen to Part 2 (March 9): Debra shares her insight into what Fusion Applications will bring to enterprise architecture, and the importance of user experience in enterprise architecture.
    • Listen to Part 3 (March 16): Debra discusses the need to close the gap between IT and business, and how business users should be able to use applications without having to think about the underlying technology.

    Debra is very active in social networks, so if you have questions or comments you can connect with her via the following:
    • Blog: http://www.debrasoracle.blogspot.com/
    • Twitter: @debralilley
    • LinkedIn: http://uk.linkedin.com/pub/debra-lilley/1/438/bba

    And if you'd like to learn more about Oracle Fusion Applications: http://www.oracle.com/us/products/applications/fusion/index.html

    Coming Soon
    • Dr. Frank Munz, author of Middleware and Cloud Computing: Oracle Fusion Middleware on Amazon Web Services and Rackspace Cloud.
    • Andy MacMillan (VP, Enterprise 2.0, Oracle) on the socialization of the enterprise.
    • A panel discussion on "Who gets to be a software architect?"

    Read the article

  • Combined Likelihood Models

    - by Lukas Vermeer
    In a series of posts on this blog we have already described a flexible approach to recording events, a technique to create analytical models for reporting, a method that uses the same principles to generate extremely powerful facet-based predictions, and a waterfall strategy that can be used to blend multiple (possibly facet-based) models for increased accuracy. This latest, and also last, addition to this sequence of increasing modeling complexity will illustrate an advanced approach to amalgamating models, taking us to a whole new level of predictive modeling and analytical insight: combination models predicting likelihoods using multiple child models.

    The method described here is far from trivial. We therefore would not recommend you apply these techniques in an initial implementation of Oracle Real-Time Decisions. In most cases, basic RTD models or the approaches described before will provide more than enough predictive accuracy and analytical insight. The following is intended as an example of how more advanced models could be constructed if implementation results warrant the increased implementation and design effort. Keep implemented statistics simple!

    Combining likelihoods
    Because facet-based predictions are based on metadata attributes of the choices selected, it is possible to generate such predictions for more than one attribute of a choice. We can predict the likelihood of acceptance for a particular product based on the product category (e.g. 'toys'), as well as based on the color of the product (e.g. 'pink'). Of course, these two predictions may be completely different (the customer may well prefer toys, but dislike pink products) and we will have to somehow combine these two separate predictions to determine an overall likelihood of acceptance for the choice.

    Perhaps the simplest way to combine multiple predicted likelihoods into one is to calculate the average (or perhaps the maximum or minimum) likelihood. However, this would completely ignore the fact that some facets may have a far more pronounced effect on the overall likelihood than others (e.g. customers may consider the product category more important than its color). We could opt for calculating some sort of weighted average, but this would require us to specify up front the relative importance of the different facets involved. This approach would also be unresponsive to changing consumer behavior in these preferences (e.g. product price bracket may become more important to consumers as a result of economic shifts). Preferably, we would want Oracle Real-Time Decisions to learn, act upon, and tell us about the correlations between the different facet models and the overall likelihood of acceptance. This additional level of predictive modeling, where a single supermodel (no pun intended) combines the output of several (facet-based) models into a single prediction, is what we call a combined likelihood model.

    Facet Based Scores
    As an example, we have implemented three different facet-based models (as described earlier) in a simple RTD inline service. These models will allow us to generate predictions for the likelihood of acceptance for each product based on three different metadata fields: Category, Price Bracket and Product Color. We will use an Analytical Scores entity to store these different scores so we can easily pass them between different functions.
    A simple function, creatively named Compute Analytical Scores, computes the different facet scores for each choice and returns an Analytical Scores entity that is stored on the choice itself. For each score, a choice attribute referring to this entity is also added, to be returned to the client to facilitate testing.

    One Offer To Predict Them All
    In order to combine the different facet-based predictions into one single likelihood for each product, we need a supermodel which can predict the likelihood of acceptance based on the outcomes of the facet models. This model does not need to consider any of the attributes of the session, because they are already represented in the outcomes of the underlying facet models. For the same reason, the supermodel does not need to learn separately for each product, because the specific combination of facets for each product is also already represented in the output of the underlying models. In other words, instead of learning how session attributes influence acceptance of a particular product, we learn how the outcomes of facet-based models for a particular product influence acceptance at a higher level.

    We therefore use a single All Offers choice to represent all offers in our combined likelihood predictions. This choice has no attribute values configured, no scores and not a single eligibility rule; nor is it ever intended to be returned to a client. The All Offers choice is used exclusively by the Combined Likelihood Acceptance model to predict the likelihood of acceptance for all choices, based solely on the output of the facet-based models defined earlier.

    The Switcheroo
    In Oracle Real-Time Decisions, models can only learn based on attributes stored on the session. Therefore, just before generating a combined prediction for a given choice, we temporarily copy the facet-based scores—stored on the choice earlier as an Analytical Scores entity—to the session. The code for the Predict Combined Likelihood Event function is outlined below.

    // set session attribute to contain facet based scores.
    // (this is the only input for the combined model)
    session().setAnalyticalScores(choice.getAnalyticalScores());

    // predict likelihood of acceptance for All Offers choice.
    CombinedLikelihoodChoice c = CombinedLikelihood.getChoice("AllOffers");
    Double la = CombinedLikelihoodAcceptance.getChoiceEventLikelihoods(c, "Accepted");

    // clear session attribute of facet based scores.
    session().setAnalyticalScores(null);

    // return likelihood.
    return la;
    This sleight of hand allows the Combined Likelihood Acceptance model to predict the likelihood of acceptance for the All Offers choice using these choice-specific scores. After the prediction is made, we clear the Analytical Scores session attribute to ensure it does not pollute any of the other (facet) models. To guarantee that our combined likelihood model learns based on the facet-based scores—and is not distracted by the other session attributes—we configure the model, on the model attributes tab, to exclude any other inputs, save for the instance of the Analytical Scores session attribute.

    Recording Events
    In order for the combined likelihood model to learn correctly, we must ensure that the Analytical Scores session attribute is set correctly at the moment RTD records any events related to a particular choice. We apply essentially the same switching technique as before in a Record Combined Likelihood Event function.

    // set session attribute to contain facet based scores
    // (this is the only input for the combined model).
    session().setAnalyticalScores(choice.getAnalyticalScores());

    // record input event against All Offers choice.
    CombinedLikelihood.getChoice("AllOffers").recordEvent(event);

    // force learn at this moment using the Internal Learn entry point.
    Application.getPredictor().learn(InternalLearn.modelArray, session(), session(), Application.currentTimeMillis());

    // clear session attribute of facet based scores.
    session().setAnalyticalScores(null);

    In this example, Internal Learn is a special informant configured as the learn location for the combined likelihood model. The informant itself has no particular configuration and does nothing by itself; it is used only to force the model to learn at the exact instant we have set the Analytical Scores session attribute to the correct values.

    Reporting Results
    After running a few thousand (artificially skewed) simulated sessions on our ILS, the Decision Center reporting shows some interesting results. In this case, these results perfectly reflect the bias we ourselves had introduced in our tests. In practice, we would obviously use a wider range of customer attributes and expect to see some more unexpected outcomes. The facet model for categories has clearly picked up on the fact that our simulated youngsters have little interest in purchasing the one red-hot vehicle our ILS had on offer. Also, it would seem that customer age is an excellent predictor for the acceptance of pink products. Looking at the key drivers for the All Offers choice, we can see the relative importance of the different facets to the prediction of overall likelihood. The comparative importance of the category facet for the overall prediction might, in part, be explained by the clear preference of younger customers for toys over other product types, as evident from the report on the predictiveness of customer age for offer category acceptance.

    Conclusion
    Oracle Real-Time Decisions' flexible decisioning framework allows for the construction of exceptionally elaborate prediction models that facilitate powerful targeting, but nonetheless provide insightful reporting. Although few customers will have a direct need for such a sophisticated solution architecture, it is encouraging to see that this lies within the realm of the possible with RTD, and with limited configuration and customization required. There are obviously numerous other ways in which the predictive and reporting capabilities of Oracle Real-Time Decisions can be expanded upon to tailor to individual customers' needs. We will not be able to elaborate on them all on this blog, and finding the right approach for any given problem is often more difficult than implementing the solution. Nevertheless, we hope that these last few posts have given you enough of an understanding of the power of the RTD framework and its models that you can take some of these ideas and improve upon your own strategy. As always, if you have any questions about the above—or any Oracle Real-Time Decisions design challenges you might face—please do not hesitate to contact us via the comments below, social media or directly at Oracle. We are completely multi-channel and would be more than glad to help. :-)

    Read the article

  • Did 12.04 just add multi-touch gesture support mid-release?

    - by adempewolff
    I was reviewing the updates I was about to download today and noticed that a lot of them had to do with gesture support, and that many of them were new installs rather than upgrades. Has 12.04 just added multi-touch gesture support mid-release? If so, what capabilities does this add? Which applications already support these capabilities, and can I expect others to add support in the near future?

    Here are the packages that were installed:

    Install: libframe6:amd64 (2.2.4-0ubuntu0.12.04.1), libgeis1:amd64 (2.2.9.2-0ubuntu1), libgrail5:amd64 (3.0.6-0ubuntu0.12.04.01, automatic)

    And here are those that were upgraded (also including many with touch support):

    Upgrade: libgrip0:amd64 (0.3.4-0ubuntu2~ubuntu12.04.1, 0.3.5-0ubuntu1~12.04.1), eog:amd64 (3.4.2-0ubuntu1, 3.4.2-0ubuntu1.1), ginn:amd64 (0.2.4-0ubuntu1, 0.2.4.1-0ubuntu1)

    The descriptions for the new installs are:

    • libgeis1 (Gesture engine interface support): A common API for clients of a systemwide gesture recognition and propagation engine.
    • libframe6 (Touch Frame Library): This library handles the buildup and synchronization of a set of simultaneous touches. The library is input agnostic, with bindings for mtdev, frame and XI2.1.
    • libgrail5 (Gesture Recognition And Instantiation Library): This library consists of an interface and tools for handling gesture recognition and gesture instantiation. Applications can use the grail callbacks to receive gesture primitives and raw input events from the underlying kernel device.

    And the descriptions for the upgraded packages are:

    • libgrip0 (provides multitouch gestures to GTK+ apps): Libgrip hooks gesture recognition into GTK+ applications.
    • ginn (Gesture Injector: No-GEIS, No-Toolkits): A daemon with jinn-like wish-granting capabilities: it gives applications the ability to support a subset of multi-touch gestures without having to integrate GEIS or multi-touch GTK/Qt libs.

    Adding a ton of new libraries and upgrading the existing components makes me wonder if 12.04 is meant to start natively supporting gestures other than two-finger scroll in the near future. I expected these capabilities to be introduced soon, but I thought they would only be rolled out in a new release, not as upgrades to an existing release. Does anyone have any info about this?

    Read the article

  • How to create single integer index value based on two integers where first is unlimited?

    - by Jan Doggen
    I have table data containing an integer value X ranging from 1 to unknown, and an integer value Y ranging from 1..9. The data need to be presented in the order 'X, then Y'. For one visual component I can set multiple index names: X;Y. But for another component I need a one-dimensional integer value as an index (sort order). If X were limited to an upper bound of, say, 100, the one-dimensional value could simply be X*100 + Y. If the one-dimensional value could have been a real, it could be X + Y/10. But if I want to keep X unlimited, is there a way to calculate a single integer 'indexing' value from X and Y?

    [Added] Background information: I have a Gantt/TreeList component where the tasks are ordered by a TaskIndex integer. This does not need to be a real database field; I can make it a calculated field in the underlying client dataset. My table data is e.g. as follows:

    ID   Baseline   ParentID
     1      0          0      (task)
     5      2          1      (baseline)
     8      1          1      (baseline)
     9      0          0      (task)
    12      0          0      (task)
    16      1         12      (baseline)

    Task 1 has two baselines numbered 1 and 2 (IDs 8 and 5). Task 9 has no baselines. Task 12 has one baseline numbered 1 (ID 16). Baselines are numbered 1-9 (the Y variable from my question); 0 or null identifies a task. IDs are unlimited (the X variable).

    The user plays with the visibility of baselines, e.g. he wants to see all tasks with all baselines labeled 1. This is done by updating a filter on the table. Right now I constantly have to recalculate TaskIndex after changing the filter (looping through records). It would be nice if TaskIndex could be calculated on the fly for each record, knowing only the data in the current record (I work in Delphi, where a client dataset has an OnCalcFields event handler that is triggered for each record when necessary). I have no control over the inner workings of the visual component.
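    For what it's worth, the arithmetic the question is circling can be made concrete: because Y always fits in a single decimal digit, X*10 + Y is collision-free for any X, bounded only by the width of the integer type (a signed 64-bit value accommodates X up to roughly 9.2*10^17). A sketch, in Java purely for illustration since the asker works in Delphi; the helper name is made up:

        import java.util.Arrays;

        public class TaskIndexSketch {
            // Y is a single decimal digit (0 for tasks, 1..9 for baselines), so
            // shifting X one decimal place always leaves room for Y.
            static long taskIndex(long x, int y) {
                if (y < 0 || y > 9) {
                    throw new IllegalArgumentException("Y must be one digit");
                }
                return x * 10L + y;
            }

            public static void main(String[] args) {
                // A few (X, Y) pairs in arbitrary order.
                long[][] pairs = { {12, 0}, {1, 2}, {1, 1}, {3, 0}, {12, 1} };
                long[] keys = new long[pairs.length];
                for (int i = 0; i < pairs.length; i++) {
                    keys[i] = taskIndex(pairs[i][0], (int) pairs[i][1]);
                }
                Arrays.sort(keys);
                // Prints [11, 12, 30, 120, 121]: X ascending, then Y within X.
                System.out.println(Arrays.toString(keys));
            }
        }

    The same single multiplication and addition would drop straight into an OnCalcFields handler using a 64-bit integer field.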

    Read the article

  • unit/integration testing web service proxy client

    - by cori
    I'm rewriting a PHP client/proxy library that provides an interface to a SOAP-based .NET web service, and in the process I want to add some unit and integration tests so future modifications are less risky. The library marshals calls to the web service and does a little reorganizing of the responses to present a slightly more object-oriented interface to the underlying service. Since this library is little else than a thin layer on top of web service calls, my basic assumption is that I'll really be writing integration tests more than unit tests. For example, I don't see any reason to mock away the web service: the work performed by the code I'm working on is very light; it's almost passing the response from the service right back to its consumer. Most of the calls are basic CRUD operations: CreateRole(), CreateUser(), DeleteUser(), FindUser(), etc. I'll be starting from a known database state (the system I'm using for these tests is isolated for testing purposes, so the results will be more or less predictable).

    My question is this: is it natural to use web service calls to confirm the results of operations within the tests, and to reset the state of the application within the scope of each test? Here's an example: one test might be createUserReturnsValidUserId() and might go like this:

    public function createUserReturnsValidUserId()
    {
        // we're assuming a global connection to the service
        $newUserId = $client->CreateUser("user1");
        $this->assertNotNull($newUserId);
        $this->assertNotNull($client->FindUser($newUserId));
        $client->DeleteUser($newUserId);
    }

    So I'm creating a user, making sure I get an ID back that represents a user in the system, and then cleaning up after myself (so that later tests don't rely on the success or failure of this test, w/r/t the number of users in the system, for example). However this still seems pretty fragile: lots of dependencies and opportunities for tests to fail and affect the results of later tests, which I definitely want to avoid. Am I missing some options or ways to decouple these tests from the system under test, or is this really the best I can do? I think this is a fairly general unit/integration testing question, but if it matters I'm using PHPUnit for the testing framework.

    Read the article

  • What's the ethos of the programming profession?

    - by mac
    I am one of those people who became a professional programmer by chance rather than by choice: I moved to a country whose main language I couldn't speak, I knew how to code... and here I am a few years later. Because of this I never really gave much thought to the ethos of being a programmer, and working as a freelancer I have not had many occasions to discuss it with colleagues. Among others, Dictionary.com defines the word ethos as follows: "The fundamental character or spirit of a culture; the underlying sentiment that informs the beliefs, customs, or practices of a group or society; dominant assumptions of a people or period." So my question is: how would you describe the ethos of being a programmer, and why would you say so? Please note that my question is different from these other ones (although you might have chosen to become a programmer because of the programmer ethos, or you might think that part of the programmer ethos is about "programming being a meaningful profession"), and that besides the "how/what" part of the question, there is a "why" part too! :) I would appreciate it if the answers could be based not only on the idealised vision of the hero-programmer, but also on real working and life experience. Thank you in advance for your time and contributions!

    Read the article

  • Gemalto Mobile Payment Platform on Oracle T4

    - by user938730
    Gemalto is the world leader in digital security, at the heart of our rapidly evolving digital society. Billions of people worldwide increasingly want the freedom to communicate, travel, shop, bank, entertain and work – anytime, everywhere – in ways that are convenient, enjoyable and secure. Gemalto delivers on their expanding needs for personal mobile services, payment security, identity protection, authenticated online services, cloud computing access, eHealthcare and eGovernment services, modern transportation solutions, and M2M communication. Gemalto's solutions for Mobile Financial Services are deployed at over 70 customers worldwide, transforming the way people shop, pay and manage personal finance. In developing markets, Gemalto Mobile Money solutions are helping to remove the barriers to financial access for the unbanked and under-served, by turning any mobile device into a payment and banking instrument.

    In recent benchmarks by our Oracle ISVe Labs, the Gemalto Mobile Payment Platform demonstrated outstanding performance and scalability on the new T4-based Oracle Sun machines running Solaris 11. Using a clustered environment on a mid-range 2x2.85GHz T4-2 server (16 cores total, 128GB memory) for the application tier, and an additional dedicated Intel-based (2x3.2GHz Intel-Xeon X4200) Oracle database server, the platform processed more than 1,000 transactions per second, limited only by database capacity; higher performance was easily achievable with a stronger database server. Near-linear scalability was observed when increasing the number of application software components in the cluster. These results show an increase of nearly 300% in processing power and capacity on the new T4-based servers relative to the previous generation of Oracle Sun CMT servers, and for a comparable price.

    In the fast-evolving mobile payment market, it is crucial that the underlying technology seamlessly supports service providers as the customer base ramps up, use cases evolve and new services are launched. These benchmark results demonstrate that the Gemalto Mobile Payment Platform is designed to meet the needs of any deployment scale, whether targeting 5 or 100 million subscribers. Oracle Solaris 11 DTrace technology helped to pinpoint performance issues and tune the system accordingly, to achieve optimal utilization of computation resources.

    Read the article

  • MAAS/JuJu Clarifications

    - by ectoskeleton
    I really love the concept of MAAS underlying an OpenStack implementation, but there are a few questions about MAAS that I am not entirely clear on:

    • Should all hosts be set to network boot at all times, or should they boot from disk after they have been registered and allocated as a service?
    • After juju bootstrap is executed, I turn on the machine that has been allocated (note: WoL isn't working; I suspect it's being blocked on the network). The machine boots up, and then juju status executes correctly: agent running and all that good stuff. If I 'reboot' the machine (testing a power failure or similar problem), juju status comes back, but the agent-state is no longer in the running state, and so far I have had to destroy the environment and restart.
    • In all cases, I have never been able to deploy any services to any of the other nodes. I deploy the service with juju, note which node it was assigned to, and then start the system. The system just boots up into a basic node. If I SSH to it, I have to enter a password, so it's not setting up the ssh key or anything.

    This is on Ubuntu 12.04.1 LTS systems and HP GL360G7 hosts. The MAAS management server is running as a VM, but all on the same network. At this point I am not sure if I am doing something wrong or if there is a problem somewhere else. Is the idea that any time a host is rebooted it should be rebuilt from the ground up, or is something going on behind the scenes to tell it to boot the local image? If the latter, why doesn't the agent start on a system that has been successfully set up before (a juju bootstrapped system)?

    Read the article

  • Best arguments for/against introducing ORM technology into a companies dev process

    - by james
    I started using ORM technology a few years ago. My first exposure was NHibernate; I then moved on to Linq to SQL and Entity Framework. The issue I have, however, is that there are some organisations where I have found strong opposition to introducing ORM tools. They usually have a number of reasons:

    • They have a lot of built-up SQL skills in the team, and are worried about the underlying SQL that ORMs generate.
    • They have DBAs who like to be able to see the SQL an app uses, so that they can review it for best practice.
    • They are worried about performance (some people have "heard" that ORMs aren't as performant, but have no real proof themselves; there may well be some truth in this!).

    So, I'm looking for the best or most convincing arguments that you have put forward FOR the use of ORM tools. Equally, I would be interested in the arguments against, too. Note: this is NOT a discussion of which ORM I should use.

    Read the article

  • Cloning existing software for commercial purposes - legal implications

    - by user2036256
    I have been asked to clone some existing software for a company. Basically, it's an old 16-bit DOS console app, which was supplied free of charge in, I believe, the late '80s. Having replaced the machine that needs to run it with a box running Win7 x64, they can't get it to work: it crashes every couple of minutes under DOSBox. The company that supplied it appears to no longer exist; if it did, the company asking me to do this would almost certainly know about it. It is undetermined whether they have gone entirely or are just trading under a different name. If the latter, they seem to have withdrawn from the market related to this product (because, again, it is a niche area, and we should know about everyone in it).

    What is the status of this with regard to copyright, etc.? The main concern for the company involved is that they want an identical interface to what they already have, so I would have to clone it entirely. Having no source code or indication of the underlying mechanisms, I would write it from scratch.

    • Is an interface covered by copyright, and does that still hold 30 years later?
    • What is the assumed license when none at all is provided?
    • Under UK law, would I be under any serious risk were I to take on the project?
    • How would this pan out if I then decided to sell the software on to other companies?

    Thanks

    Read the article

  • Dependency injection: what belongs in the constructor?

    - by Adam Backstrom
    I'm evaluating my current PHP practices in an effort to write more testable code. Generally speaking, I'm fishing for opinions on what types of actions belong in the constructor. Should I limit things to dependency injection? If I do have some data to populate, should that happen via a factory rather than as constructor arguments? (Here, I'm thinking about my User class that takes a user ID and populates user data from the database during construction, which obviously needs to change in some way.) I've heard it said that "initialization" methods are bad, but I'm sure that depends on what exactly is being done during initialization.

    At the risk of getting too specific, I'll also piggyback a more detailed example onto my question. For a previous project, I built a FormField class (which handled field value setting, validation, and output as HTML) and a Model class to contain these fields and do a bit of magic to ease working with fields. FormField had some prebuilt subclasses, e.g. FormText (<input type="text">) and FormSelect (<select>). Model would be subclassed so that a specific implementation (say, a Widget) had its own fields, such as a name and date of manufacture:

    class Widget extends Model
    {
        public function __construct( $data = null )
        {
            $this->name = new FormField('length=20&label=Name:');
            $this->manufactured = new FormDate;
            parent::__construct( $data ); // set above fields using incoming array
        }
    }

    Now, this does violate some rules that I have read, such as "avoid new in the constructor," but to my eyes this does not seem untestable. These are properties of the object, not some black-box data generator reading from an external source. Unit tests would progressively build up to any test of Widget-specific functionality, so I could be confident that the underlying FormFields were working correctly during the Widget test. In theory I could provide the Model with a FieldFactory which could supply custom field objects (see the sketch below), but I don't believe I would gain anything from this approach. Is this a poor assumption?
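    For comparison, here is a minimal sketch of what that FieldFactory idea could look like (in Java rather than PHP, purely for illustration; the interface and method names are assumptions, not anything from the original design):

        import java.util.HashMap;
        import java.util.Map;

        // Stand-in for the original PHP FormField class.
        class FormField {
            final String spec;
            FormField(String spec) { this.spec = spec; }
        }

        interface FieldFactory {
            FormField text(String spec); // e.g. "length=20&label=Name:"
            FormField date();
        }

        class DefaultFieldFactory implements FieldFactory {
            public FormField text(String spec) { return new FormField(spec); }
            public FormField date() { return new FormField("type=date"); }
        }

        class Widget {
            final Map<String, FormField> fields = new HashMap<>();

            // The factory is the injected dependency; a test can pass a fake
            // factory that records calls or returns stub fields.
            Widget(FieldFactory f) {
                fields.put("name", f.text("length=20&label=Name:"));
                fields.put("manufactured", f.date());
            }
        }

    The trade-off is exactly the one the question names: the factory makes the field types swappable in tests, at the cost of indirection for fields that are arguably fixed properties of the object rather than true collaborators.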

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-27

    - by Bob Rhubart
    Understanding Oracle BI 11g Security vs Legacy Oracle BI 10g | Christian Screen
    "After conducting a large amount of Oracle BI 10g to Oracle BI 11g upgrades and after writing the Oracle BI 11g book," says Oracle ACE Christian Screen, "I still continually get asked one of the most basic questions regarding security in Oracle BI 11g; How does it compare to Oracle BI 10g? The trail of questions typically goes on to what are the differences? And, how do we leverage our current Oracle BI 10g security table schema in Oracle BI 11g?"

    Process Oracle OER Events using a simple Web Service | Bob Webster
    Bob Webster's post "provides an example of a simple web service that processes Oracle Enterprise Repository (OER) Events. The service receives events from OER and utilizes the OER REX API to implement simple OER automations for selected event types."

    Oracle Fusion Middleware Security: Attaching OWSM policies to JRF-based web services clients | Andre Correa
    "OWSM (Oracle Web Services Manager) is Oracle's recommended method for securing SOAP web services," says Oracle Fusion Middleware A-Team member Andre Correa. "It provides agents that encapsulate the necessary logic to interact with the underlying software stack on both service and client sides. Such agents have their behavior driven by policies. OWSM ships with a bunch of policies that are adequate to most common real world scenarios." His detailed post shows how to make it happen.

    WebCenter Content (WCC) Trace Sections | ECM Architect
    ECM Architect Kevin Smith shares a detailed technical post covering WebCenter Content (WCC) Trace sections.

    Thought for the Day
    "A complex system that works is invariably found to have evolved from a simple system that worked." — John Gall
    Source: SoftwareQuotes.com

    Read the article

  • Why not have a High Level Language based OS? Are Low Level Languages more efficient?

    - by rtindru
    Without being presumptuous, I would like you to consider the possibility of this. Most OSes today are based on pretty low-level languages (mainly C/C++). Even new ones, such as Android, use JNI, and the underlying implementation is in C. In fact (this is a personal observation), many programs written in C run a lot faster than their high-level counterparts (e.g. Transmission, a BitTorrent client on Ubuntu, is a whole lot faster than Vuze (Java) or Deluge (Python)). Even Python's standard interpreter is written in C, although PyPy is an exception. So is there a particular reason for this? Why is it that all our so-called "high-level languages", with their great "OOP" concepts, can't be used to make a solid OS? So I have 2 questions, basically:

    1. Why are applications written in low-level languages more efficient than their HLL counterparts? Do low-level languages perform better for the simple reason that they are low level and are translated to machine code more easily?
    2. Why do we not have a full-fledged OS based entirely on a high-level language?

    Read the article

  • Live chat solutions

    - by Lèse majesté
    What good live chat/live help solutions are available (preferably free, and usable on a site hosted on a LAMP stack)? I'm looking for a way to let our sales and customer service reps talk directly with visitors to our site. I've looked at phpopenchat, but it looks very unpolished. The only other free live chat app I've come across looked egregious; the aesthetics and UI design alone made me shudder to think what the underlying code might look like. This isn't a critical feature, and it wouldn't be hard to code up myself, so I'm not really looking for commercial software or paid services (unless there's a really compelling reason to use them). I'm just wondering if any other webmasters have come across a satisfactory free/open source solution for providing live customer support on their website. As a side note, live voice chat would also be an option, but it has to be designed (or customizable) for customer support rather than as a public chatroom. Edit: Looking at the responses, it seems there probably aren't going to be many free solutions for this type of business-oriented chat, so feel free to post answers even if they are commercial solutions, as long as they're a good value. Also feel free to post any alternative live support solutions (such as the Skype recommendation) that could in some way be integrated with a website. This will give me a good lay of the land for what people are actually using for live support, and I think it will be more helpful to others reading this question.

    Read the article

  • What is meant by, "A user shouldn't decide whether it is an Admin or not. The Privileges or Security system should."

    - by GlenPeterson
    The example used in the question "pass bare minimum data to a function" touches on the best way to determine whether a user is an administrator or not. One common answer was:

    user.isAdmin()

    This prompted a comment which was repeated several times and up-voted many times: "A user shouldn't decide whether it is an Admin or not. The Privileges or Security system should. Something being tightly coupled to a class doesn't mean it is a good idea to make it part of that class." I replied: "The user isn't deciding anything. The User object/table stores data about each user. Actual users don't get to change everything about themselves." But this was not productive. Clearly there is an underlying difference of perspective which is making communication difficult.

    Can someone explain to me why user.isAdmin() is bad, and paint a brief sketch of what it looks like done "right"? Really, I fail to see the advantage of separating security from the system that it protects. Any security text will say that security needs to be designed into a system from the beginning and considered at every stage of development, deployment, maintenance, and even end-of-life. It is not something that can be bolted on the side. But 17 up-votes so far on this comment say that I'm missing something important.
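    For what the commenters seem to have in mind, here is a minimal sketch of the "separated" design (the AuthorizationService name and the permission strings are assumptions for illustration, not anything from the original discussion):

        import java.util.HashMap;
        import java.util.HashSet;
        import java.util.Map;
        import java.util.Set;

        // The User object stays a plain data holder; it carries no privilege logic.
        final class User {
            final String id;
            User(String id) { this.id = id; }
        }

        // All privilege decisions live in one service, so the policy (roles,
        // groups, time-limited grants...) can change without touching User.
        interface AuthorizationService {
            boolean hasPermission(User user, String permission);
        }

        final class InMemoryAuthorizationService implements AuthorizationService {
            private final Map<String, Set<String>> grants = new HashMap<>();

            void grant(String userId, String permission) {
                grants.computeIfAbsent(userId, k -> new HashSet<>()).add(permission);
            }

            @Override
            public boolean hasPermission(User user, String permission) {
                return grants.getOrDefault(user.id, Set.of()).contains(permission);
            }
        }

        class Demo {
            public static void main(String[] args) {
                InMemoryAuthorizationService auth = new InMemoryAuthorizationService();
                User alice = new User("alice");
                auth.grant("alice", "ADMIN");
                // Callers ask the security system, not the user object:
                System.out.println(auth.hasPermission(alice, "ADMIN")); // true
            }
        }

    Whether that indirection actually buys anything is, of course, exactly what the question disputes; a common middle ground is to keep user.isAdmin() as a convenience that delegates to such a service internally.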

    Read the article

  • Expanding existing DVCS Wiki

    - by A Lion
    A portion of my job is to maintain technical documentation for a rapidly expanding manufacturing company. Because it is only a portion of my job and the company's product line is expanding so quickly, I can't stay on top of the documentation. As a result, I've been yearning for an information management system with a handful of specific features. I've found many products that have a subset of them, but none that have all the features I'm looking for. I'm at the point of picking an existing product and expanding it to cover my desired feature set; however, this will be a pet project and I will be learning the underlying language as I go. So, the main question is: which existing product will be the easiest to expand to cover the full feature set, and has a relatively easy-to-learn language? Alternatively, have I missed another existing program that covers the feature set, or that should be in my list of "close, but not quite there"?

    Feature Set
    • web interface based on a distributed version control system (e.g., git)
    • easy to edit by logged-in novices (e.g. wiki, multimarkdown)
    • outputs in more traditional formats (e.g., doc, odt, pdf)
    • edits held in a queue until an editor/engineer/manager approves them (e.g., MS Word editing) [this is the really big elephant in the list - suggestions on where to start appreciated]
    • edits held in a queue specifically for engineer approval [extra limb of the elephant in the list]
    • well-supported in the open source community

    Closest, but not quite there
    • ikiwiki - http://ikiwiki.info (php)
      - lots of awesome functionality and extensions, including easy editing and a DVCS back end
      - lacks a review/forward-for-review queue
      - appears to be well-supported within the OSS community
    • gitit - http://gitit.net/ (haskell)
      - easy to edit and based on a DVCS
      - lots of outputs in traditional formats
      - a great web-based GUI diff interface
      - lacks a review/forward-for-review queue
      - appears to be primarily maintained by one individual

    Read the article
