Search Results

Search found 68400 results on 2736 pages for 'industry data model'.


  • Coherence Data Guarantees for Data Reads - Basic Terminology

    - by jpurdy
    When integrating Coherence into applications, each application has its own set of requirements with respect to data integrity guarantees. Developers often describe these requirements using expressions like "avoiding dirty reads" or "making sure that updates are transactional", but we often find that even in a small group of people, there may be a wide range of opinions as to what these terms mean. This may simply be due to a lack of familiarity, but given that Coherence sits at an intersection of several (mostly) unrelated fields, it may be a matter of conflicting vocabularies (e.g. "consistency" is similar but different in transaction processing versus multi-threaded programming). Since almost all data read consistency issues are related to the concept of concurrency, it is helpful to start with a definition of that, or rather what it means for two operations to be concurrent. Rather than implying that they occur "at the same time", concurrency is a slightly weaker statement -- it simply means that it can't be proven that one event precedes (or follows) the other. As an example, in a Coherence application, if two client members mutate two different cache entries sitting on two different cache servers at roughly the same time, it is likely that one update will precede the other by a significant amount of time (say 0.1ms). However, since there is no guarantee that all four members have their clocks perfectly synchronized, and there is no way to precisely measure the time it takes to send a given message between any two members (that have differing clocks), we consider these to be concurrent operations since we cannot (easily) prove otherwise. So this leads to a question that we hear quite frequently: "Are the contents of the near cache always synchronized with the underlying distributed cache?". It's easy to see that if an update on a cache server results in a message being sent to each near cache, and then that near cache being updated, there is a window where the contents are different. However, this is irrelevant, since even if the application reads directly from the distributed cache, another thread may update the cache before the read is returned to the application. Even if no other member modifies a cache entry prior to the local near cache entry being updated (and subsequently read), the purpose of reading a cache entry is to do something with the result, usually either displaying it for consumption by a human, or updating the entry based on its current state. In the former case, it's clear that if the data is updated faster than a human can perceive, then there is no problem (and in many cases this can be relaxed even further). For the latter case, the application must assume that the value might potentially be updated before it has a chance to update it. This is almost always the case with read-only caches, and the solution is the traditional optimistic transaction pattern, which requires the application to explicitly state what assumptions it made about the old value of the cache entry. If the application doesn't want to bother stating those assumptions, it is free to lock the cache entry prior to reading it, ensuring that no other threads will mutate the entry, a pessimistic approach. The optimistic approach relies on what is sometimes called a "fuzzy read". In other words, the application assumes that the read should be correct, but it also acknowledges that it might not be. 
(I use the qualifier "sometimes" because in some writings, "fuzzy read" indicates the situation where the application actually sees an original value and then later sees an updated value within the same transaction -- however, both definitions are roughly equivalent from an application design perspective). If the read is not correct, it is called a "stale read". Going back to the definition of concurrency, it may seem difficult to precisely define a stale read, but the practical way of detecting a stale read is that it will cause the encompassing transaction to roll back if it tries to update that value. The pessimistic approach relies on a "coherent read", a guarantee that the value returned is not only the same as the primary copy of that value, but also that it will remain that way. In most cases this can be used interchangeably with "repeatable read" (though that term has additional implications when used in the context of a database system). In none of the cases above is it possible for the application to perform a "dirty read". A dirty read occurs when the application reads a piece of data that was never committed. In practice the only way this can occur is with multi-phase updates such as transactions, where a value may be temporarily updated but then withdrawn when a transaction is rolled back. If another thread sees that value prior to the rollback, it is a dirty read. If an application uses optimistic transactions, dirty reads will merely result in a lack of forward progress (this is actually one of the main risks of dirty reads -- they can be chained and potentially cause cascading rollbacks). The concepts of dirty reads, fuzzy reads, stale reads and coherent reads are able to describe the vast majority of requirements that we see in the field. However, the important thing is to define the terms used to define requirements. A quick web search for each of the terms in this article will show multiple meanings, so I've selected what are generally the most common variations, but it never hurts to state each definition explicitly if they are critical to the success of a project (many applications have sufficiently loose requirements that precise terminology can be avoided).
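
    For readers who want the optimistic pattern described above in concrete form, here is a minimal C# sketch. It assumes a generic key-value cache with an atomic compare-and-replace primitive; the ICache interface and method names are illustrative, not Coherence's actual API.

    using System;

    // Hypothetical cache abstraction with an atomic conditional update.
    public interface ICache<TKey, TValue>
    {
        TValue Get(TKey key);
        // Replaces the entry only if it still equals 'expected'; returns false otherwise.
        bool Replace(TKey key, TValue expected, TValue updated);
    }

    public static class OptimisticUpdate
    {
        // Read, compute a new value, and state the assumption ("the entry is still what I read").
        // A stale read simply shows up as a failed Replace, and we retry.
        public static void Increment(ICache<string, int> cache, string key)
        {
            while (true)
            {
                int observed = cache.Get(key);          // possibly a fuzzy/stale read
                int proposed = observed + 1;
                if (cache.Replace(key, observed, proposed))
                    return;                             // assumption held; update committed
                // assumption violated (stale read) -- loop and try again
            }
        }
    }

    A pessimistic variant would instead lock the key before the Get and release it after the update, trading throughput for a coherent read.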

    Read the article

  • Can data classes contain methods for validation?

    - by Arturas M
    OK, say I have a data class for a user: public class User { private String firstName; private String lastName; private long personCode; private Date birthDate; private Gender gender; private String email; private String password; } Now let's say I want to validate the email, whether the names are not empty, whether the birth date is in a normal range, etc. Can I put that validation method in this class together with the data? Or should it be in UserManager, which in my case handles the lists of these users?
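
    One common answer is to split the two kinds of rules: checks that need only the object's own fields can live on the data class, while rules that need knowledge of the whole collection belong to the manager. A hedged C# sketch of that split (the question is Java, but the idea maps directly; the names and rules here are illustrative):

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text.RegularExpressions;

    public class User
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public DateTime BirthDate { get; set; }
        public string Email { get; set; }

        // Invariants that depend only on this object's own state.
        public bool IsValid() =>
            !string.IsNullOrWhiteSpace(FirstName) &&
            !string.IsNullOrWhiteSpace(LastName) &&
            BirthDate < DateTime.Today &&
            BirthDate > DateTime.Today.AddYears(-120) &&
            Regex.IsMatch(Email ?? "", @"^[^@\s]+@[^@\s]+\.[^@\s]+$");
    }

    public class UserManager
    {
        private readonly List<User> _users = new List<User>();

        // Rules that need the whole collection (e.g. email uniqueness) stay here.
        public bool Add(User user)
        {
            if (!user.IsValid()) return false;
            if (_users.Any(u => string.Equals(u.Email, user.Email, StringComparison.OrdinalIgnoreCase)))
                return false;
            _users.Add(user);
            return true;
        }
    }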

    Read the article

  • How much data validation is too much? [closed]

    - by adbertram
    Possible Duplicate: Data input validation - Where? How much? I'm a new PHP developer and am into Powershell quite a bit, but this question is language-agnostic. I've been questioning my code quite a bit lately, thinking about how many nets I should set up to catch exceptions, verify results, etc. I realize that I could go crazy trying to verify each and every line of code, but at the same time I want the code to be as resilient as possible. I'm not talking about user input, but about verifying output from methods. Is there some standard or rule of thumb to go by when deciding when and where to do data validation?

    Read the article

  • Data structure for file search

    - by poly
    I've asked this question before and I got a few answers/ideas, but I'm not sure how to implement them. I'm building a telecom messaging solution. Currently, I'm using a database to save my transactions/messages for the network stack I've built, and as you know it's slower than using a data structure (hash, linked list, etc.). My problem is that the data can be really huge, and it won't fit in memory. I was thinking of saving the records in a file and a key and line number in a hash; then if I want to access some record, I can get the line number from the hash and read it from the file. I don't know how efficient this is; I think the database is doing a way better job than this on my behalf. Please share whatever you have in mind.
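
    A minimal C# sketch of the idea described above: an in-memory dictionary mapping each record key to its position in an append-only file (byte offsets are used instead of line numbers so a record can be read with a single seek). Purely illustrative; a real messaging store would also need durability, crash recovery and compaction, which is much of what the database already provides.

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Text;

    public class FileBackedStore : IDisposable
    {
        private readonly FileStream _file;
        private readonly Dictionary<string, long> _index = new Dictionary<string, long>();

        public FileBackedStore(string path) =>
            _file = new FileStream(path, FileMode.OpenOrCreate, FileAccess.ReadWrite);

        public void Put(string key, string record)
        {
            _file.Seek(0, SeekOrigin.End);
            _index[key] = _file.Position;                  // remember where this record starts
            byte[] payload = Encoding.UTF8.GetBytes(record);
            byte[] header = BitConverter.GetBytes(payload.Length);
            _file.Write(header, 0, header.Length);         // length prefix, then the record
            _file.Write(payload, 0, payload.Length);
        }

        public string Get(string key)
        {
            if (!_index.TryGetValue(key, out long offset)) return null;
            _file.Seek(offset, SeekOrigin.Begin);
            byte[] header = new byte[4];
            _file.Read(header, 0, 4);                      // (a robust version would loop until all bytes are read)
            byte[] payload = new byte[BitConverter.ToInt32(header, 0)];
            _file.Read(payload, 0, payload.Length);
            return Encoding.UTF8.GetString(payload);
        }

        public void Dispose() => _file.Dispose();
    }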

    Read the article

  • Model value not being set on return from View to Controller

    - by sagesky36
    I have a boolean model variable whose value is supposed to be set to TRUE in order to perform a process on return back into the Controller. It works absolutely fine on my local machine, but not on the remote web server. Can somebody PLEASE inform me what I am missing? Below is the "proof of the pudding": The boolean value in question is "ShouldGeneratePdf"; MODEL: namespace PDFConverterModel.ViewModels { public partial class ViewModelTemplate_Guarantors { public ViewModelTemplate_Guarantors() { Templates = new List<PDFTemplate>(); Guarantors = new List<tGuarantor>(); } public int SelectedTemplateId { get; set; } public List<PDFTemplate> Templates { get; set; } public int SelectedGuarantorId { get; set; } public List<tGuarantor> Guarantors { get; set; } public string LoanId { get; set; } public string DepartmentId { get; set; } public bool isRepeat { get; set; } public string ddlDept { get; set; } public string SelectedDeptText { get; set; } public string LoanTypeId { get; set; } public string LoanType { get; set; } public string Error { get; set; } public string ErrorT { get; set; } public string ErrorG { get; set; } public bool ShowGeneratePDFBtn { get; set; } public bool ShouldGeneratePdf { get; set; } } } MasterPage: <!DOCTYPE html> <html> <head> <title>@ViewBag.Title</title> <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/kendo/2012.2.913/kendo.common.min.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/kendo/2012.2.913/kendo.dataviz.min.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/kendo/2012.2.913/kendo.blueopal.min.css")" rel="stylesheet" type="text/css" /> <script src="@Url.Content("~/Scripts/jquery-1.7.1.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/modernizr-2.5.3.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/kendo/2012.2.913/kendo.all.min.js")"></script> <script src="@Url.Content("~/Scripts/kendo/2012.2.913/kendo.aspnetmvc.min.js")"></script> </head> <body> <div class="page"> <header> <div id="title"> <h1>BHG :: PDF Service Generator</h1> </div> </header> <section id="main"> @RenderBody() </section> <footer> </footer> </div> </body> </html> View: @model PDFConverterModel.ViewModels.ViewModelTemplate_Guarantors @using (Html.BeginForm("ProcessForm", "Home", new AjaxOptions { HttpMethod = "POST" })) { <table style="width: 1000px"> @Html.HiddenFor(x => x.ShouldGeneratePdf) <tr> <td> <img alt="BHG Logo" src="~/Images/logo.gif" /> </td> </tr> <tr> <td> @(Html.Kendo().IntegerTextBox() .Placeholder("Enter Loan Id") .Name("LoanId") .Format("{0:#######}") .Value(Convert.ToInt32(Model.LoanId)) ) </td> </tr> <tr> <td>@Html.Label("Loan Type: ") @Html.DisplayFor(model => Model.LoanType) </td> <td> <label for="ddlDept">Department:</label> @(Html.Kendo().DropDownListFor(model => Model.ddlDept) .Name("ddlDept") .DataTextField("DepartmentName") .DataValueField("DepartmentID") .Events(e => e.Change("Refresh")) .DataSource(source => { source.Read(read => { read.Action("GetDepartments", "Home"); }); }) .Value(Model.ddlDept.ToString()) ) </td> </tr> @if (Model.ShowGeneratePDFBtn == true) { if 
(Model.ErrorT == string.Empty) { <tr> <td> <u><b>@Html.Label("Templates:")</b></u> </td> </tr> <tr> @for (int i = 0; i < Model.Templates.Count; i++) { <td> @Html.CheckBoxFor(model => Model.Templates[i].IsChecked) @Html.DisplayFor(model => Model.Templates[i].TemplateId) </td> } </tr> } else { <tr> <td> <b>@Html.DisplayFor(model => Model.ErrorT)</b> </td> </tr> } if (Model.ErrorG == string.Empty) { <tr> <td> <u><b>@Html.Label("Guarantors:")</b></u> </td> </tr> <tr> @for (int i = 0; i < Model.Guarantors.Count; i++) { <td> @Html.CheckBoxFor(model => Model.Guarantors[i].isChecked) @Html.DisplayFor(model => Model.Guarantors[i].GuarantorFirstName)&nbsp;@Html.DisplayFor(model => Model.Guarantors[i].GuarantorLastName) </td> } </tr> } else { <tr> <td> <b>@Html.DisplayFor(model => Model.ErrorG)</b> </td> </tr> } } <tr> <td> <input type="submit" name="submitbutton" id="btnRefresh" value='Refresh' /> </td> @if (Model.ShowGeneratePDFBtn == true) { <td> <input type="submit" name="submitbutton" id="btnGeneratePDF" value='Generate PDF' /> </td> } </tr> <tr> <td style="color: red; font: bold"> @Model.Error </td> </tr> </table> } <script type="text/javascript"> $('#btnRefresh').click(function () { Refresh(); }); function Refresh() { var LoanID = $("#LoanID").val(); if (parseInt(LoanID) != 0) { $('#ShouldGeneratePdf').val(false) document.forms[0].submit(); } else { alert("Please enter a LoanId"); } } //$(function () { // //DOM loaded // $('#btnGeneratePDF').click(function () { // DisableGeneratePDF(); // $('#ShouldGeneratePdf').val(true) // }); //}); //function DisableGeneratePDF() { // $('#btnGeneratePDF').attr("disabled", true); // $('#btnRefresh').attr("disabled", true); //} $('#btnGeneratePDF').click(function () { alert("inside click function"); DisableGeneratePDF(); $('#ShouldGeneratePdf').val(true) tof = $('#ShouldGeneratePdf').val(); alert("ShouldGeneratePdf set to " + tof); }); function DisableGeneratePDF() { alert("begin DisableGeneratePDF function"); $('#btnGeneratePDF').attr("disabled", true); $('#btnRefresh').attr("disabled", true); alert("end DisableGeneratePDF function"); } </script> Controller: [HttpPost] public ActionResult ProcessForm(string submitbutton, ViewModelTemplate_Guarantors model, FormCollection collection) { if ((submitbutton == "Refresh") || (submitbutton == null) && (model.ShouldGeneratePdf == false)) { } else if ((submitbutton == "Generate PDF") || (model.ShouldGeneratePdf == true)) { } } The "Alerts" in the script above come out to exactly what they should be on the remote server. The last alert shows that the value of the bool variable is "true". However, when I do page source views of the hidden variable, below is the result. The values of the hidden variable when the page loads and when the last alert button finishes are as follows: My local machine: The remote machine: As you can see, the value on my machine is set to true when the process executes. However, on the remote machine, it is set to false, where it then doesn't execute. Why isn't the value in the model being returned as TRUE on the remote machine?

    Read the article

  • How to deal with data on the model specific to the technology being used?

    - by user1620696
    There are some cases where some of the data on a class of the domain model of an application seems to be dependent on the technology being used. One example of this is the following: suppose we are building an application in .NET that needs an Employee class. Suppose further that we are going to use a relational database; then the Employee has a primary key, right? So the class would be something like public class Employee { public int EmployeeID { get; set; } public string Name { get; set; } ... } Now, that EmployeeID is dependent on the technology, right? That's something that has to do with the way we've chosen to persist our data. Should we write a class independent of such things? If we do it that way, how should we work? I think I would need to map all the time between the domain model and persistence-specific types, but I'm not sure.
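
    One hedged way to keep the domain class free of persistence concerns is to let the domain Employee carry only business data and confine the database identity to a separate persistence type plus a mapper. A small C# sketch of that split (type and member names are illustrative assumptions, not taken from the question):

    // Domain model: no knowledge of how (or whether) it is stored.
    public class Employee
    {
        public string Name { get; set; }
        public string NationalIdNumber { get; set; }   // a business/natural key, if one exists
    }

    // Persistence model: owns the surrogate key required by the relational store.
    public class EmployeeRecord
    {
        public int EmployeeID { get; set; }
        public string Name { get; set; }
        public string NationalIdNumber { get; set; }
    }

    // The mapping the question anticipates lives in one place instead of leaking everywhere.
    public static class EmployeeMapper
    {
        public static Employee ToDomain(EmployeeRecord r) =>
            new Employee { Name = r.Name, NationalIdNumber = r.NationalIdNumber };

        public static EmployeeRecord ToRecord(Employee e, int id) =>
            new EmployeeRecord { EmployeeID = id, Name = e.Name, NationalIdNumber = e.NationalIdNumber };
    }

    Whether the extra mapping is worth the effort is exactly the trade-off the question raises; many teams accept the ID on the domain class as a pragmatic compromise.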

    Read the article

  • Developing a Cost Model for Cloud Applications

    - by BuckWoody
    Note - please pay attention to the date of this post. As much as I attempt to make the information below accurate, the nature of distributed computing means that components, units and pricing will change over time. The definitive costs for Microsoft Windows Azure and SQL Azure are located here, and are more accurate than anything you will see in this post: http://www.microsoft.com/windowsazure/offers/  When writing software that is run on a Platform-as-a-Service (PaaS) offering like Windows Azure / SQL Azure, one of the questions you must answer is how much the system will cost. I will not discuss the comparisons between on-premise costs (which are nigh impossible to calculate accurately) versus cloud costs, but instead focus on creating a general model for estimating costs for a given application. You should be aware that there are (at this writing) two billing mechanisms for Windows and SQL Azure: “Pay-as-you-go” or consumption, and “Subscription” or commitment. Conceptually, you can consider the former as a pay-as-you-go cell phone plan, where you pay by the unit used (at a slightly higher rate), and the latter as a standard cell phone plan where you commit to a contract and thus pay lower rates. In this post I’ll stick with the pay-as-you-go mechanism for simplicity, which should be the maximum cost you would pay. From there you may be able to get a lower cost if you use the other mechanism. In any case, the model you create should hold. Developing a good cost model is essential. As a developer or architect, you’ll most certainly be asked how much something will cost, and you need to have a reliable way to estimate that. Businesses and organizations have been used to paying for servers, software licenses, and other infrastructure as an up-front cost, and power, people to run the systems and so on as an ongoing (and sometimes not factored) cost. When presented with a new paradigm like distributed computing, they may not understand the true cost/value proposition, and that’s where the architect and developer can guide the conversation to make a choice based on features of the application versus the true costs. The two big buckets of use-types for these applications are customer-based and steady-state. In the customer-based use type, each successful use of the program results in a sale or income for your organization. Perhaps you’ve written an application that provides the spot-price of foo, and your customer pays for the use of that application. In that case, once you’ve estimated your cost for a successful traversal of the application, you can build that into the price you charge the user. It’s a standard restaurant model, where the price of the meal is determined by the cost of making it, plus any profit you can make. In the second use-type, the application will be used by a more-or-less constant number of processes or users and no direct revenue is attached to the system. A typical example is a customer-tracking system used by the employees within your company. In this case, the cost model is often created “in reverse” - meaning that you pilot the application, monitor the use (and costs) and that cost is held steady. This is where the comparison with an on-premise system becomes necessary, even though it is more difficult to estimate those on-premise true costs. For instance, do you know exactly how much the air conditioning costs, or the cost of the team of system administrators? 
This may sound trivial, but that, along with the insurance for the building, the wiring, and every other part of the system, is in fact a cost to the business. There are three primary methods that I’ve been successful with in estimating the cost. None are perfect; all are demand-driven. The general process is to lay out a matrix of components, units, and cost per unit, and then multiply that by the usage of the system, based on which components you use in the program. That sounds a bit simplistic, but using those metrics in a calculation becomes more detailed. In all of the methods that follow, you need to know your application. The components for a PaaS include computing instances, storage, transactions, bandwidth and, in the case of SQL Azure, database size. In most cases, architects start with the first model and progress through the other methods to gain accuracy. Simple Estimation The simplest way to calculate costs is to architect the application (even in UML or on paper, no coding involved) and then estimate which of the components you’ll use, and how much of each will be used. Microsoft provides two tools to do this - one is a simple slider-application located here: http://www.microsoft.com/windowsazure/pricing-calculator/  The other is a tool you download to create a “Return on Investment” (ROI) spreadsheet, which has the advantage of leading you through various questions to estimate what you plan to use, located here: https://roianalyst.alinean.com/msft/AutoLogin.do?d=176318219048082115  You can also just create a spreadsheet yourself with a structure like this: Program Element | Azure Component | Unit of Measure | Cost Per Unit | Estimated Use of Component | Total Cost Per Component | Cumulative Cost. Of course, the consideration with this model is that it is difficult to predict a system that is not running or hasn’t even been developed. Which brings us to the next model type. Measure and Project A more accurate model is to actually write the code for the application, using the Software Development Kit (SDK), which can run entirely disconnected from Azure. The code should be instrumented to estimate the use of the application components, logging to a local file on the development system. A series of unit and integration tests should be run, which will create load on the test system. You can use standard development concepts to track this usage, and even use Windows Performance Monitor counters. The best place to start with this method is to use the Windows Azure Diagnostics subsystem in your code, which you can read more about here: http://blogs.msdn.com/b/sumitm/archive/2009/11/18/introducing-windows-azure-diagnostics.aspx This set of APIs greatly simplifies tracking the application, and in fact you can use this information for more than just a cost model. After you have the tracking logs, you can plug the numbers into any of the tools above, which should give a representative cost or in some cases a unit cost. The consideration with this model is that the SDK fabric is not a one-to-one comparison with performance on the actual Windows Azure fabric. Those differences are usually small, but they do need to be considered. Also, you may not be able to accurately predict the load on the system, which might lead to an architectural change, which changes the model. This leads us to the next, most accurate method for a cost model. 
Sample and Estimate This method uses standard statistical and other predictive math. Once the application is deployed, you will get a bill each month from Microsoft for your Azure usage. The bill is quite detailed, and you can export the data from it to do analysis and, using methods like regression, project out into the future what the costs will be. I normally advise that the architect extrapolate a unit cost from those metrics as well. This is the information that should be reported back to the executives that pay the bills: the past cost, future projected costs, and unit cost “per click” or “per transaction”, as your case warrants. The challenge here is in the model itself - statistical methods are not foolproof, and a large sample (in this case I recommend the entire population, not a smaller sample) is key. References and Tools Articles: http://blogs.msdn.com/b/patrick_butler_monterde/archive/2010/02/10/windows-azure-billing-overview.aspx http://technet.microsoft.com/en-us/magazine/gg213848.aspx http://blog.codingoutloud.com/2011/06/05/azure-faq-how-much-will-it-cost-me-to-run-my-application-on-windows-azure/ http://blogs.msdn.com/b/johnalioto/archive/2010/08/25/10054193.aspx http://geekswithblogs.net/iupdateable/archive/2010/02/08/qampa-how-can-i-calculate-the-tco-and-roi-when.aspx   Other Tools: http://cloud-assessment.com/ http://communities.quest.com/community/cloud_tools
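
    To make the components x units x cost-per-unit matrix concrete, here is a small C# sketch of the spreadsheet math. The component names and rates below are placeholders, not current Azure prices; always take real numbers from the official calculator linked above.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // One row of the estimation matrix.
    record CostLine(string ProgramElement, string AzureComponent, string Unit,
                    decimal CostPerUnit, decimal EstimatedUnits)
    {
        public decimal Total => CostPerUnit * EstimatedUnits;
    }

    class CostModel
    {
        static void Main()
        {
            var lines = new List<CostLine>
            {
                // Placeholder rates -- NOT real pricing.
                new("Web front end", "Compute instance", "instance-hours", 0.12m, 2 * 720),
                new("Order data",    "Storage",          "GB-months",      0.15m, 50),
                new("Order data",    "Transactions",     "10k txns",       0.01m, 400),
                new("All traffic",   "Bandwidth (out)",  "GB",             0.15m, 120),
            };

            foreach (var l in lines)
                Console.WriteLine($"{l.ProgramElement,-15} {l.AzureComponent,-18} {l.Total,10:C}");

            Console.WriteLine($"Estimated monthly total: {lines.Sum(l => l.Total):C}");
        }
    }

    Dividing that monthly total by the expected number of successful "traversals" gives the per-transaction unit cost discussed in the customer-based use type.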

    Read the article

  • Highlights From Interact '12 - Healthcare Industry User Group

    - by John Webb
    Last week the Oracle team traveled to Orlando for the 18th annual Healthcare Industry User Group (HIUG) conference, Interact '12. HIUG has over 3,000 members representing 180 organizations. While we now know the result of the SCOTUS ruling yesterday, the consensus at the conference last week was summed up well in the welcome note from HIUG President, Chris Ryzewski: "Regardless of the legal ruling on this administration's Patient Protection and Affordable Care Act we will undoubtedly be called upon to further reduce costs and be more efficient in every aspect of our business processes." Well put! Attendance exceeded previous years with several hundred attendees, over 100 sessions, and a trade show that numbered 40 booths. Most of the HIUG members use PeopleSoft applications, and they tend to be full-suite customers who use PeopleSoft broadly from HCM to Financials and Supply Chain. For many customers who have licensed PeopleSoft in the last year, it was their first experience with a very strong and collaborative user group. I had dinner with a provider who is rolling out PeopleSoft HCM and ERP to a nationwide system of forty hospitals. A key driver for this organization and others is how to leverage PeopleSoft applications to meet the cost reduction goals mentioned above. In the area of procurement, the topic of Supplier Contract Management attracted a lot of attention. Contract pricing and adherence to contracts throughout the procure-to-pay life cycle are key to meeting cost containment objectives. Customers were excited to see the new faceted search capabilities and usability of the upcoming PeopleSoft eProcurement release. The new Work Center concept was discussed in several areas, including the Cost Reconciliation Work Center and the Supply Demand Work Center, which enables healthcare-specific functions around PAR counts and related replenishment activities. The latest Feature Pack of HCM 9.1 was demonstrated with the Talent Summary and Manager Dashboard. Customers were excited to see the major advances in self service available today. The Grants Special Interest Group focused quite a bit on the usage of PeopleSoft's Project Costing "Funds Distribution" feature, which can be used to manage capital projects funded by multiple agencies and sources. Along with the latest release of the Mobile Inventory solution that several hospitals have now implemented, a preview of new mobile applications for expenses and approvals drew a lot of attention. The PeopleSoft focus on assisting these companies in their goals to contain costs and create new efficiencies continues forward. We look forward to Interact '13!

    Read the article

  • Ruby on rails model and controllers inside of different namespaces

    - by Nelson LaQuet
    OK. This is insane. I'm new to RoR and I really want to get into it, as everything about it that I have seen so far makes it more appealing to the type of work that I do. However, I can't seem to accomplish a very simple thing with RoR. I want these controllers: /admin/blog/entries (index/show/edit/delete) /admin/blog/categories (index/show/edit/delete) /admin/blog/comments (index/show/edit/delete) ... and so on And these models: Blog::Entry (table: blog_entries) Blog::Category (table: blog_categories) Blog::Comments (table: blog_comments) ... and so on Now, I have already gone through quite a bit of misery to make this work. My first attempt was with generating scaffolding (I'm using 2.2.2). I generated my scaffolding, but had to move my model, then fix the references to the model in my controller (see http://stackoverflow.com/questions/903258/ruby-on-rails-model-inside-namespace-cant-be-found-in-controller). That is already a bit of a pain, but hey, I got it to work. Now, though, form_for won't work and I cannot figure out how to use the url helpers (I have no idea what these are called... they are the automatically generated methods that return URLs to controllers associated with a model). I cannot figure out what their name is. My model is Blog::Entries. I have tried to mess with routes.rb's map's resource method, but no luck. When I attempt to use form_for with my model, I get this error: undefined method `blog_entries_path' for #<ActionView::Base:0xb6848080> Now. This is really quite frustrating. I am not going to completely destroy my code's organization in order to use this framework, and if I cannot figure out how to accomplish this simple task (I have been researching this for at least 5 hours) then I simply cannot continue. Are there any ideas on how to accomplish this? Thanks EDIT Here are my routes: admin_blog_entries GET /admin_blog_entries {:controller=>"admin_blog_entries", :action=>"index"} formatted_admin_blog_entries GET /admin_blog_entries.:format {:controller=>"admin_blog_entries", :action=>"index"} POST /admin_blog_entries {:controller=>"admin_blog_entries", :action=>"create"} POST /admin_blog_entries.:format {:controller=>"admin_blog_entries", :action=>"create"} new_admin_blog_entry GET /admin_blog_entries/new {:controller=>"admin_blog_entries", :action=>"new"} formatted_new_admin_blog_entry GET /admin_blog_entries/new.:format {:controller=>"admin_blog_entries", :action=>"new"} edit_admin_blog_entry GET /admin_blog_entries/:id/edit {:controller=>"admin_blog_entries", :action=>"edit"} formatted_edit_admin_blog_entry GET /admin_blog_entries/:id/edit.:format {:controller=>"admin_blog_entries", :action=>"edit"} admin_blog_entry GET /admin_blog_entries/:id {:controller=>"admin_blog_entries", :action=>"show"} formatted_admin_blog_entry GET /admin_blog_entries/:id.:format {:controller=>"admin_blog_entries", :action=>"show"} PUT /admin_blog_entries/:id {:controller=>"admin_blog_entries", :action=>"update"} PUT /admin_blog_entries/:id.:format {:controller=>"admin_blog_entries", :action=>"update"} DELETE /admin_blog_entries/:id {:controller=>"admin_blog_entries", :action=>"destroy"} DELE

    Read the article

  • Scrambling Sensitive Data in E-Business Suite Release 12 Cloned Environments

    - by Elke Phelps (Oracle Development)
    Securing the Oracle E-Business Suite includes protecting the underlying E-Business data in production and non-production databases.  While steps can be taken to provide a secure configuration to limit EBS access, a better approach to protecting non-production data is simply to scramble (mask) the data in the non-production copy.  You can use the Oracle Data Masking Pack with Oracle Enterprise Manager today to scramble sensitive data in cloned environments. Due to data dependencies, scrambling E-Business Suite data is not a trivial task.  The data needs to be scrubbed in such a way that allows the application to continue to function.  Using the Data Masking Pack in E-Business Suite environments is now easier with the release of new set of templates for E-Business Suite databases: Oracle E-Business Suite Release 12.1.3 Template for Data Masking Pack (Patch13898999) This template works with the Oracle Data Masking Pack and Oracle Enterprise Manager to obscure sensitive E-Business Suite information that is copied from production to non-production environments.  Is there a charge for this? Yes. You must purchase licenses for Oracle Enterprise Manager and the Oracle Data Masking Pack plug-in. The Oracle E-Business Suite 12.1.3 Template for the Data Masking Pack is included with the Oracle Data Masking Pack license.  You can contact your Oracle account manager for more details about licensing. What does data masking do in E-Business Suite environments? Application data masking does the following: De-identify the data:  Scramble identifiers of individuals, also known as personally identifiable information or PII.  Examples include information such as name, account, address, location, and driver's license number. Mask sensitive data:  Mask data that, if associated with personally identifiable information (PII), would cause privacy concerns.  Examples include compensation, health and employment information.   Maintain data validity:  Provide a fully functional application. How can EBS customers use data masking? The Oracle E-Business Suite Template for Data Masking Pack can be used in situations where confidential or regulated data needs to be shared with other non-production users who need access to some of the original data, but not necessarily every table.  Examples of non-production users include internal application developers or external business partners such as offshore testing companies, suppliers or customers.  The Oracle E-Business Suite Template for Data Masking Pack is applied to a non-production environment with the Enterprise Manager Grid Control Data Masking Pack.  When applied, the Oracle E-Business Suite Template for Data Masking Pack will create an irreversibly scrambled version of your production database for development and testing.   References For additional information on the Oracle E-Business Suite Template for Data Masking Pack please refer to the following: Masking Sensitive Data for Non-production Use in the Oracle Enterprise Manager Concepts 11g Using the Oracle E-Business Suite, Release 12.1.3 Template for the Data Masking Pack, Note 1437485.1 Related Articles Webcast Replay Available: E-Business Suite Data Protection Oracle E-Business Suite Plug-in 4.0 Released for OEM 11g (11.1.0.1)

    Read the article

  • E-Business Suite 12.1.3 Data Masking Certified with Enterprise Manager 12c

    - by Elke Phelps (Oracle Development)
    Following up on our prior announcement for EM 11g, we're pleased to announce the certification of the E-Business Suite 12.1.3 Data Masking Template for the Data Masking Pack with Enterprise Manager Cloud Control 12c. You can use the Oracle Data Masking Pack with Oracle Enterprise Manager Grid Control 12c to scramble sensitive data in cloned E-Business Suite environments.  Due to data dependencies, scrambling E-Business Suite data is not a trivial task.  The data needs to be scrubbed in such a way that allows the application to continue to function.  You may scramble data in E-Business Suite cloned environments with EM12c using the following template: E-Business Suite 12.1.3 Data Masking Template for Data Masking Pack with EM12c (Patch 14407414) What does data masking do in E-Business Suite environments? Application data masking does the following: De-identify the data:  Scramble identifiers of individuals, also known as personally identifiable information or PII.  Examples include information such as name, account, address, location, and driver's license number. Mask sensitive data:  Mask data that, if associated with personally identifiable information (PII), would cause privacy concerns.  Examples include compensation, health and employment information.   Maintain data validity:  Provide a fully functional application. How can EBS customers use data masking? The Oracle E-Business Suite Template for Data Masking Pack can be used in situations where confidential or regulated data needs to be shared with other non-production users who need access to some of the original data, but not necessarily every table.  Examples of non-production users include internal application developers or external business partners such as offshore testing companies, suppliers or customers.  The template works with the Oracle Data Masking Pack and Oracle Enterprise Manager to obscure sensitive E-Business Suite information that is copied from production to non-production environments. The Oracle E-Business Suite Template for Data Masking Pack is applied to a non-production environment with the Enterprise Manager Grid Control Data Masking Pack.  When applied, the Oracle E-Business Suite Template for Data Masking Pack will create an irreversibly scrambled version of your production database for development and testing.  What's new with EM 12c? Some of the execution steps may also be performed with EM Command Line Interface (EM CLI).  Support of EM CLI is a new feature with the E-Business Suite Release 12.1.3 template for EM 12c.  Is there a charge for this? Yes. You must purchase licenses for the Oracle Data Masking Pack plug-in. The Oracle E-Business Suite 12.1.3 Template for the Data Masking Pack is included with the Oracle Data Masking Pack license.  You can contact your Oracle account manager for more details about licensing. References Additional details and requirements are provided in the following My Oracle Support Note: Using Oracle E-Business Suite Release 12.1.3 Template for the Data Masking Pack with Oracle Enterprise Manager 12.1.0.2 Data Masking Tool (Note 1481916.1) Masking Sensitive Data in the Oracle Database Real Application Testing User's Guide 11g Release 2 (11.2) Related Articles Scrambling Sensitive Data in E-Business Suite

    Read the article

  • Patterns for a tree of persistent data with multiple storage options?

    - by Robin Winslow
    I have a real-world problem which I'll try to abstract into an illustrative example. So imagine I have data objects in a tree, where parent objects can access children, and children can access parents: // Interfaces interface IParent<TChild> { List<TChild> Children; } interface IChild<TParent> { TParent Parent; } // Classes class Top : IParent<Middle> {} class Middle : IParent<Bottom>, IChild<Top> {} class Bottom : IChild<Middle> {} // Usage var top = new Top(); var middles = top.Children; // List<Middle> foreach (var middle in middles) { var bottoms = middle.Children; // List<Bottom> foreach (var bottom in bottoms) { var parent = bottom.Parent; // Access the parent var grandparent = parent.Parent; // Access the grandparent } } All three data objects have properties that are persisted in two data stores (e.g. a database and a web service), and they need to reflect and synchronise with the stores. Some objects only request from the web service, some only write to it. Data Mapper My favourite pattern for data access is Data Mapper, because it completely separates the data objects themselves from the communication with the data store: class TopMapper { public Top FetchById(int id) { var top = new Top(DataStore.TopDataById(id)); top.Children = MiddleMapper.FetchForTop(top); return top; } } class MiddleMapper { public Middle FetchById(int id) { var middle = new Middle(DataStore.MiddleDataById(id)); middle.Parent = TopMapper.FetchForMiddle(middle); middle.Children = BottomMapper.FetchForMiddle(middle); return middle; } } This way I can have one mapper per data store, and build the object from the mapper I want, and then save it back using the mapper I want. There is a circular reference here, but I guess that's not a problem because most languages can just store memory references to the objects, so there won't actually be infinite data. The problem with this is that every time I want to construct a new Top, Middle or Bottom, it needs to build the entire object tree within that object's Parent or Children property, with all the data store requests and memory usage that that entails. And in real life my tree is much bigger than the one represented here, so that's a problem. Requests in the object In this approach the objects request their Parents and Children themselves: class Middle { private List<Bottom> _children = null; // cache public List<Bottom> Children { get { _children = _children ?? BottomMapper.FetchForMiddle(this); return _children; } set { BottomMapper.UpdateForMiddle(this, value); _children = value; } } } I think this is an example of the repository pattern. Is that correct? This solution seems neat - the data only gets requested from the data store when you need it, and thereafter it's stored in the object if you want to request it again, avoiding a further request. However, I have two different data sources. There's a database, but there's also a web service, and I need to be able to create an object from the web service and save it back to the database and then request it again from the database and update the web service. This also makes me uneasy because the data objects themselves are no longer ignorant of the data source. We've introduced a new dependency, not to mention a circular dependency, making it harder to test. And the objects now mask their communication with the database. Other solutions Are there any other solutions which could take care of the multiple stores problem but also mean that I don't need to build / request all the data every time?
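
    To make the "one mapper per data store" point above concrete, here is a minimal C# sketch of putting both stores behind one interface so the calling code chooses a source while the data objects stay ignorant of it. The interface and class names are illustrative; Middle refers to the class from the example above.

    public interface IMiddleMapper
    {
        Middle FetchById(int id);
        void Save(Middle middle);
    }

    public class DatabaseMiddleMapper : IMiddleMapper
    {
        public Middle FetchById(int id) { /* SQL query goes here */ return new Middle(); }
        public void Save(Middle middle) { /* INSERT/UPDATE goes here */ }
    }

    public class WebServiceMiddleMapper : IMiddleMapper
    {
        public Middle FetchById(int id) { /* HTTP GET goes here */ return new Middle(); }
        public void Save(Middle middle) { /* HTTP POST goes here */ }
    }

    // Example: pull from the service, push to the database, without Middle knowing either exists.
    public class MiddleSync
    {
        private readonly IMiddleMapper _source;
        private readonly IMiddleMapper _destination;

        public MiddleSync(IMiddleMapper source, IMiddleMapper destination)
        {
            _source = source;
            _destination = destination;
        }

        public void Copy(int id) => _destination.Save(_source.FetchById(id));
    }

    Combining this with lazy loading (injecting an IMiddleMapper into the property getter rather than calling a static mapper) would keep the deferred requests while removing the hard-wired dependency the question is uneasy about; it is a sketch of one option, not the only answer.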

    Read the article

  • Custom model in ASP.NET MVC controller: Custom display message for Date DataType

    - by Rita
    Hi, I have an ASP.NET MVC page in which I have to display the fields with customized text. For that I have built a custom model, RequestViewModel, with the following fields: Description, Event, UsageDate. Corresponding to these, my custom model has the code below, so that the DisplayName is displayed on the ASP.NET MVC view page. Since Description and Event are of string data type, both of these fields display the custom display message. But I have a problem with the Date data type. Instead of "Date of Use of Slides", it is still displaying UsageDate from the actual model. Has anybody faced this issue with the Date data type? I appreciate your responses. Custom Model: [Required(ErrorMessage="Please provide a description")] [DisplayName("Detail Description")] [StringLength(250, ErrorMessage = "Description cannot exceed 250 chars")] // also need min length 30 public string Description { get; set; } [Required(ErrorMessage="Please specify the name or location")] [DisplayName("Name/Location of the Event")] [StringLength(250, ErrorMessage = "Name/Location cannot exceed 250 chars")] public string Event { get; set; } [Required(ErrorMessage="Please specify a date", ErrorMessageResourceType = typeof(DateTime))] [DisplayName("Date of Use of Slides")] [DataType(DataType.Date)] public string UsageDate { get; set; } ViewCode: <p> <%= Html.LabelFor(model => model.Description) %> <%= Html.TextBoxFor(model => model.Description) %> <%= Html.ValidationMessageFor(model => model.Description) %> </p> <p> <%= Html.LabelFor(model => model.Event) %> <%= Html.TextBoxFor(model => model.Event) %> <%= Html.ValidationMessageFor(model => model.Event) %> </p> <p> <%= Html.LabelFor(model => model.UsageDate) %> <%= Html.TextBoxFor(model => model.UsageDate) %> <%= Html.ValidationMessageFor(model => model.UsageDate) %> </p>

    Read the article

  • C# - Data Clustering approach

    - by Brett
    Hi all, I am writing a program in C# in which I have a set of 200 points displayed on an image. However, the points tend to cluster in various regions, and I am looking for a way to find those clusters; in other words, maybe draw a circle/ellipse around the clustered points. Has anyone seen any way to do this? I have heard about K-means clustering, but I am not sure how to implement it in C#. Any favorite implementations out there? Cheers, Brett
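
    Since the question asks specifically about K-means in C#, here is a compact, illustrative implementation for 2D points (fixed k and iteration count, random initialization, no convergence check; a sketch rather than production code):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public struct Point2 { public double X, Y; public Point2(double x, double y) { X = x; Y = y; } }

    public static class KMeans
    {
        // Returns, for each input point, the index of the cluster it was assigned to.
        public static int[] Cluster(IList<Point2> points, int k, int iterations = 100, int seed = 1)
        {
            var rnd = new Random(seed);
            // Start with k randomly chosen points as centroids.
            var centroids = points.OrderBy(_ => rnd.Next()).Take(k).ToArray();
            var assignment = new int[points.Count];

            for (int iter = 0; iter < iterations; iter++)
            {
                // Assignment step: nearest centroid by squared Euclidean distance.
                for (int i = 0; i < points.Count; i++)
                {
                    int best = 0; double bestDist = double.MaxValue;
                    for (int c = 0; c < k; c++)
                    {
                        double dx = points[i].X - centroids[c].X, dy = points[i].Y - centroids[c].Y;
                        double d = dx * dx + dy * dy;
                        if (d < bestDist) { bestDist = d; best = c; }
                    }
                    assignment[i] = best;
                }

                // Update step: move each centroid to the mean of its members.
                for (int c = 0; c < k; c++)
                {
                    var members = points.Where((_, i) => assignment[i] == c).ToList();
                    if (members.Count > 0)
                        centroids[c] = new Point2(members.Average(p => p.X), members.Average(p => p.Y));
                }
            }
            return assignment;
        }
    }

    Once each point has a cluster index, an enclosing circle can be drawn from each cluster's centroid and the largest member distance from it. Note that k must be chosen up front, which may or may not suit the scenario.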

    Read the article

  • What's faster Model.get(keys) or Model.get_by_id(ids, parent=None)

    - by WooYek
    I'm wondering whether there is a difference in terms of computing cost between the Model.get(keys) and Model.get_by_id(ids, parent=None) methods. Is there a server-side computing advantage to using numeric ids over encoded string keys, or the other way around? How big is the difference? PS. Sorry if it's a dupe. I'm sure I read an article about it, but I cannot find it now.

    Read the article

  • Why is the W3C box model considered better?

    - by Mel
    Why do most developers consider the W3C box model to be better than the box model used by Internet Explorer? I know it's very frustrating developing pages that look the way you want them on Internet Explorer, but I find the W3C box model to be counter-intuitive. For example, if margins, padding, and border were factored into the width, I could assign fixed width values for all my columns without having to worry about how many columns I have and what changes I make to my padding and margins. Under the W3C box model I have to worry about how many columns I have, and develop something akin to a mathematical formula to calculate my values. Changing them would be nightmarish, especially for complex layouts. My novice opinion is that it's more restrictive. Consider this small framework I wrote: #content { margin:0 auto 30px auto; padding:0 30px 30px 30px; width:900px; } #content .column { float:left; margin:0 20px 20px 20px; } #content .first { margin-left:0; } #content .last { margin-right:0; } .width_1-4 { width:195px; } .width_1-3 { width:273px; } .width_1-2 { width:430px; } .width_3-4 { width:645px; } .width_1-1 { width:900px; } The values I have assigned here will falter unless there are three columns, and thus the margins add up as 0 (first) + 20 + 20 + 20 + 20 + 0 (last). It would be a disaster if I wanted to add padding to my columns too, as my entire setup would have to be recalibrated. Now imagine if the column width incorporated all the other elements; all I would need to do is change one associated value, and I would have my layout. I'm less criticizing it and more hoping to understand why it's better, or why I'm finding it more difficult. Am I doing this whole thing wrong? I don't know very much about this topic, but it seems counter-intuitive to use the W3C box model. Some advice would be really appreciated. Thanks
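
    The arithmetic behind the two models can be written down directly; a small C# sketch of the difference, with illustrative numbers only, assuming a column declared at width:273px with some padding and border added later:

    using System;

    class BoxModelMath
    {
        static void Main()
        {
            int width = 273, padding = 10, border = 1, marginLeft = 20, marginRight = 20;

            // W3C (content-box): padding and border are added outside the declared width.
            int w3cFootprint = marginLeft + border + padding + width + padding + border + marginRight;

            // Old IE quirks model: padding and border are carved out of the declared width.
            int ieFootprint = marginLeft + width + marginRight;

            Console.WriteLine($"W3C column footprint: {w3cFootprint}px"); // grows whenever padding/border change
            Console.WriteLine($"IE  column footprint: {ieFootprint}px");  // stays fixed
        }
    }

    This is exactly why adding padding under the W3C model forces the column widths in the framework above to be recalculated, while a width that already includes padding and border would not.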

    Read the article

  • Instantiate a model when the kind of model needed is represented as a string argument

    - by indiehacker
    My input data is a string representing the kind of datastore model I want to make. In Python, I am using the eval() function to instantiate the model (code below), but this seems overly complex, so I was wondering if there is a simpler way people normally do this. >>>model_kind="TextPixels" >>>key_name_eval="key_name" >>>key_name="key_name" >>>kwargs {'lat': [0, 1, 2, 3], 'stringText': 'boris,ted', 'lon': [0, 1, 2, 8], 'zooms': [0, 10]} >>>obj=eval( model_kind + '(key_name='+key_name_eval+ ',**kwargs )' ) >>>obj <datamodel.TextPixels object at 0xed8808c>

    Read the article

  • Running a simple integration scenario using the Oracle Big Data Connectors on Hadoop/HDFS cluster

    - by hamsun
    Between the elephant (the traditional image of the Hadoop framework) and the Oracle Iron Man (Big Data...), an English setter could be seen as the link to the right data. Data, data, data: we are living in a world where data technologies based on popular applications, search engines, web servers, rich SMS messages, email clients, weather forecasts and so on have a predominant role in our lives. More and more technologies are used to analyze/track our behavior, try to detect patterns, and propose to us "the best/right user experience", from Google Ad services to telco companies and large consumer sites (like Amazon :) ). The more we use all these technologies, the more we generate data, and thus there is a need for huge data marts and specific hardware/software servers (such as the Exadata servers) in order to treat/analyze/understand the trends and offer new services to the users. Some of these "data feeds" are raw, unstructured data, and cannot be processed effectively by normal SQL queries. Large-scale distributed processing was an emerging infrastructure need, and the solution seemed to be the "collocation of compute nodes with the data", which in turn led to MapReduce parallel patterns and the development of the Hadoop framework, which is based on MapReduce and a distributed file system (HDFS) that runs on large clusters of rather inexpensive servers. Several Oracle products use the distributed/aggregation pattern for data calculation (Coherence, NoSQL, TimesTen), so once you are familiar with one of these technologies, let's say with Coherence aggregators, you will find the whole Hadoop/MapReduce concept very similar. Oracle Big Data Appliance is based on the Cloudera Distribution (CDH), and the Oracle Big Data Connectors can be plugged into a Hadoop cluster running the CDH distribution, or into equivalent Hadoop clusters. In this paper, a "lab-like" implementation of this concept is done on a single Linux x64 server, running an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0, and a single-node Apache hadoop-1.2.1 HDFS cluster, using the SQL Connector for HDFS. The whole setup is fairly simple: Install on a Linux x64 server (or VirtualBox appliance) an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 server. Get the Apache Hadoop distribution from: http://mir2.ovh.net/ftp.apache.org/dist/hadoop/common/hadoop-1.2.1. Get the Oracle Big Data Connectors from: http://www.oracle.com/technetwork/bdc/big-data-connectors/downloads/index.html?ssSourceSiteId=ocomen. Check the Java version of your Linux server with the command: java -version java version "1.7.0_40" Java(TM) SE Runtime Environment (build 1.7.0_40-b43) Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode) Decompress the hadoop-1.2.1.tar.gz file to /u01/hadoop-1.2.1. Modify your .bash_profile: export HADOOP_HOME=/u01/hadoop-1.2.1 export PATH=$PATH:$HADOOP_HOME/bin export HIVE_HOME=/u01/hive-0.11.0 export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin (also see my sample .bash_profile). Set up ssh trust for the Hadoop process; this is a mandatory step, and in our case we have to establish a "local trust" as we are using a single-node configuration: copy the new public keys to the list of authorized keys, then connect and test the ssh setup to your localhost. We will run a "pseudo-Hadoop cluster" in what is called "local standalone mode": all the Hadoop Java components run in one Java process, which is enough for our demo purposes. 
We need to "fine tune" some Hadoop configuration files, we have to go at our $HADOOP_HOME/conf, and modify the files: core-site.xml hdfs-site.xml mapred-site.xml check that the hadoop binaries are referenced correctly from the command line by executing: hadoop -version As Hadoop is managing our "clustered HDFS" file system we have to create "the mount point" and format it , the mount point will be declared to core-site.xml as: The layout under the /u01/hadoop-1.2.1/data will be created and used by other hadoop components (MapReduce = /mapred/...) HDFS is using the /dfs/... layout structure format the HDFS hadoop file system: Start the java components for the HDFS system As an additional check, you can use the GUI Hadoop browsers to check the content of your HDFS configurations: Once our HDFS Hadoop setup is done you can use the HDFS file system to store data ( big data : )), and plug them back and forth to Oracle Databases by the means of the Big Data Connectors ( which is the next configuration step). You can create / use a Hive db, but in our case we will make a simple integration of "raw data" , through the creation of an External Table to a local Oracle instance ( on the same Linux box, we run the Hadoop HDFS one node cluster and one Oracle DB). Download some public "big data", I use the site: http://france.meteofrance.com/france/observations, from where I can get *.csv files for my big data simulations :). Here is the data layout of my example file: Download the Big Data Connector from the OTN (oraosch-2.2.0.zip), unzip it to your local file system (see picture below) Modify your environment in order to access the connector libraries , and make the following test: [oracle@dg1 bin]$./hdfs_stream Usage: hdfs_stream locationFile [oracle@dg1 bin]$ Load the data to the Hadoop hdfs file system: hadoop fs -mkdir bgtest_data hadoop fs -put obsFrance.txt bgtest_data/obsFrance.txt hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt [oracle@dg1 bg-data-raw]$ hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt Found 1 items -rw-r--r-- 1 oracle supergroup 54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt [oracle@dg1 bg-data-raw]$hadoop fs -ls hdfs:///user/oracle/bgtest_data/obsFrance.txt Found 1 items -rw-r--r-- 1 oracle supergroup 54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt Check the content of the HDFS with the browser UI: Start the Oracle database, and run the following script in order to create the Oracle database user, the Oracle directories for the Oracle Big Data Connector (dg1 it’s my own db id replace accordingly yours): #!/bin/bash export ORAENV_ASK=NO export ORACLE_SID=dg1 . 
oraenv sqlplus /nolog <<EOF CONNECT / AS sysdba; CREATE OR REPLACE DIRECTORY osch_bin_path AS '/u01/orahdfs-2.2.0/bin'; CREATE USER BGUSER IDENTIFIED BY oracle; GRANT CREATE SESSION, CREATE TABLE TO BGUSER; GRANT EXECUTE ON sys.utl_file TO BGUSER; GRANT READ, EXECUTE ON DIRECTORY osch_bin_path TO BGUSER; CREATE OR REPLACE DIRECTORY BGT_LOG_DIR as '/u01/BG_TEST/logs'; GRANT READ, WRITE ON DIRECTORY BGT_LOG_DIR to BGUSER; CREATE OR REPLACE DIRECTORY BGT_DATA_DIR as '/u01/BG_TEST/data'; GRANT READ, WRITE ON DIRECTORY BGT_DATA_DIR to BGUSER; EOF Put the following in a file named t3.sh and make it executable, hadoop jar $OSCH_HOME/jlib/orahdfs.jar \ oracle.hadoop.exttab.ExternalTable \ -D oracle.hadoop.exttab.tableName=BGTEST_DP_XTAB \ -D oracle.hadoop.exttab.defaultDirectory=BGT_DATA_DIR \ -D oracle.hadoop.exttab.dataPaths="hdfs:///user/oracle/bgtest_data/obsFrance.txt" \ -D oracle.hadoop.exttab.columnCount=7 \ -D oracle.hadoop.connection.url=jdbc:oracle:thin:@//localhost:1521/dg1 \ -D oracle.hadoop.connection.user=BGUSER \ -D oracle.hadoop.exttab.printStackTrace=true \ -createTable --noexecute then test the creation fo the external table with it: [oracle@dg1 samples]$ ./t3.sh ./t3.sh: line 2: /u01/orahdfs-2.2.0: Is a directory Oracle SQL Connector for HDFS Release 2.2.0 - Production Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved. Enter Database Password:] The create table command was not executed. The following table would be created. CREATE TABLE "BGUSER"."BGTEST_DP_XTAB" ( "C1" VARCHAR2(4000), "C2" VARCHAR2(4000), "C3" VARCHAR2(4000), "C4" VARCHAR2(4000), "C5" VARCHAR2(4000), "C6" VARCHAR2(4000), "C7" VARCHAR2(4000) ) ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER DEFAULT DIRECTORY "BGT_DATA_DIR" ACCESS PARAMETERS ( RECORDS DELIMITED BY 0X'0A' CHARACTERSET AL32UTF8 STRING SIZES ARE IN CHARACTERS PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream' FIELDS TERMINATED BY 0X'2C' MISSING FIELD VALUES ARE NULL ( "C1" CHAR(4000), "C2" CHAR(4000), "C3" CHAR(4000), "C4" CHAR(4000), "C5" CHAR(4000), "C6" CHAR(4000), "C7" CHAR(4000) ) ) LOCATION ( 'osch-20131022081035-74-1' ) ) PARALLEL REJECT LIMIT UNLIMITED; The following location files would be created. osch-20131022081035-74-1 contains 1 URI, 54103 bytes 54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt Then remove the --noexecute flag and create the external Oracle table for the Hadoop data. Check the results: The create table command succeeded. CREATE TABLE "BGUSER"."BGTEST_DP_XTAB" ( "C1" VARCHAR2(4000), "C2" VARCHAR2(4000), "C3" VARCHAR2(4000), "C4" VARCHAR2(4000), "C5" VARCHAR2(4000), "C6" VARCHAR2(4000), "C7" VARCHAR2(4000) ) ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER DEFAULT DIRECTORY "BGT_DATA_DIR" ACCESS PARAMETERS ( RECORDS DELIMITED BY 0X'0A' CHARACTERSET AL32UTF8 STRING SIZES ARE IN CHARACTERS PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream' FIELDS TERMINATED BY 0X'2C' MISSING FIELD VALUES ARE NULL ( "C1" CHAR(4000), "C2" CHAR(4000), "C3" CHAR(4000), "C4" CHAR(4000), "C5" CHAR(4000), "C6" CHAR(4000), "C7" CHAR(4000) ) ) LOCATION ( 'osch-20131022081719-3239-1' ) ) PARALLEL REJECT LIMIT UNLIMITED; The following location files were created. 
osch-20131022081719-3239-1 contains 1 URI, 54103 bytes 54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt This is the view from the SQL Developer: and finally the number of lines in the oracle table, imported from our Hadoop HDFS cluster SQL select count(*) from "BGUSER"."BGTEST_DP_XTAB"; COUNT(*) ---------- 1151 In a next post we will integrate data from a Hive database, and try some ODI integrations with the ODI Big Data connector. Our simplistic approach is just a step to show you how these unstructured data world can be integrated to Oracle infrastructure. Hadoop, BigData, NoSql are great technologies, they are widely used and Oracle is offering a large integration infrastructure based on these services. Oracle University presents a complete curriculum on all the Oracle related technologies: NoSQL: Introduction to Oracle NoSQL Database Using Oracle NoSQL Database Big Data: Introduction to Big Data Oracle Big Data Essentials Oracle Big Data Overview Oracle Data Integrator: Oracle Data Integrator 12c: New Features Oracle Data Integrator 11g: Integration and Administration Oracle Data Integrator: Administration and Development Oracle Data Integrator 11g: Advanced Integration and Development Oracle Coherence 12c: Oracle Coherence 12c: New Features Oracle Coherence 12c: Share and Manage Data in Clusters Oracle Coherence 12c: Oracle GoldenGate 11g: Fundamentals for Oracle Oracle GoldenGate 11g: Fundamentals for SQL Server Oracle GoldenGate 11g Fundamentals for Oracle Oracle GoldenGate 11g Fundamentals for DB2 Oracle GoldenGate 11g Fundamentals for Teradata Oracle GoldenGate 11g Fundamentals for HP NonStop Oracle GoldenGate 11g Management Pack: Overview Oracle GoldenGate 11g Troubleshooting and Tuning Oracle GoldenGate 11g: Advanced Configuration for Oracle Other Resources: Apache Hadoop : http://hadoop.apache.org/ is the homepage for these technologies. "Hadoop Definitive Guide 3rdEdition" by Tom White is a classical lecture for people who want to know more about Hadoop , and some active "googling " will also give you some more references. About the author: Eugene Simos is based in France and joined Oracle through the BEA-Weblogic Acquisition, where he worked for the Professional Service, Support, end Education for major accounts across the EMEA Region. He worked in the banking sector, ATT, Telco companies giving him extensive experience on production environments. Eugen currently specializes in Oracle Fusion Middleware teaching an array of courses on Weblogic/Webcenter, Content,BPM /SOA/Identity-Security/GoldenGate/Virtualisation/Unified Comm Suite) throughout the EMEA region.

    Read the article

  • How to show or direct a business analyst to a data modelling subject?

    - by AaronLS
    Our business analysts pushed hard to collect data through a spreadsheet, and I am the programmer responsible for importing that data. Usually when they push hard for something like this, I never know how well it will work out until a few weeks later, when I have time assigned to work on the task of programming the import of the data. I have tried to do as much as possible along the way: named ranges, data validations, etc. But I usually don't have time to take a detailed look at all the data and compare it to the destination in the database to determine how well it matches up. A lot of the time there will be some little table of items that I somehow have to relate to something else in the database, but there are no natural or business keys present that would allow me to do so. I make the best of this by trying to write something that can compare strings and make a best guess (along the lines of the sketch below), and then go through the effort of creating interfaces for a user to match the imported data to the destination. I feel that if the business analysts were actually creating a data model, they would be forced to think about these relationships and have an appreciation for the need for natural or business keys to be part of the spreadsheet so the data can be imported smoothly. The closest they come to business analysis is a big flat list of fields, and that would be fine if it were like any other data dictionary and included data types and relationships, but it isn't. They are just a bunch of names, with no indication of what type of data they might hold, and it is up to me to guess. When I have pushed for more detail, they say that it is just busy work. How can I explain the importance of data modelling? How can I tell them what it is and how to do it? It feels impossible, because they don't have an appreciation for its importance. They do, however, usually have an interest in helping out in whatever way they can; it's just that this in particular has never gotten a motivated response.
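    For illustration, this is roughly the kind of best-guess matching I end up writing. It is a minimal sketch using Python's standard difflib; the item names and IDs are made-up placeholders, not real data.

    # Minimal sketch of "best guess" matching of imported spreadsheet values
    # against existing database rows, using only the Python standard library.
    import difflib

    db_items = {101: "Widget, Large", 102: "Widget, Small", 103: "Gadget Deluxe"}
    imported = ["Lrg Widget", "Gadget - Deluxe", "Unknown Thing"]

    def best_guess(value, candidates, cutoff=0.6):
        """Return (db_id, score) for the closest candidate, or None if nothing is close."""
        best_id, best_score = None, 0.0
        for db_id, name in candidates.items():
            score = difflib.SequenceMatcher(None, value.lower(), name.lower()).ratio()
            if score > best_score:
                best_id, best_score = db_id, score
        return (best_id, best_score) if best_score >= cutoff else None

    for value in imported:
        match = best_guess(value, db_items)
        # Anything below the cutoff still has to go to a manual matching screen.
        print(value, "->", match)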

    Read the article

  • Ray Tracing concerns: Efficient Data Structure and Photon Mapping

    - by Grieverheart
    I'm trying to build a simple ray tracer for specific target scenes; an example of such a scene can be seen below. I'm wondering which acceleration data structure would be most efficient in this case, since all the objects are touching but, on the other hand, the scene is uniform. The objects in my ray tracer are stored as collections of triangles, so I also have access to the individual triangles. Also, when trying to find the bounding box of the scene, how should infinite planes be handled? Should one instead use the viewing frustum to calculate the bounding box? A few other questions I have are about photon mapping. I've read the original paper by Jensen and much more material. In the compact data structure for the photon they introduce, they store photon power as 4 chars, which from my understanding is 3 chars for color and 1 for flux. But I don't understand how 1 char is enough to store a flux on the order of 1/n, where n is the number of photons (I'm also a bit confused about flux vs power). The other question about photon mapping is whether it would be more efficient in my case to store photons per object (or even per triangle of an object) instead of using a balanced kd-tree. Also, the same question about the bounding box of the scene, but for photon mapping: how should one find a bounding box from the point of view of the light when infinite planes are involved?
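    For reference, my current reading of that compact 4-char power field is a shared-exponent (RGBE-style) packing, i.e. three mantissas plus one exponent shared by all channels rather than a separate flux byte. This is my assumption about the format, not something quoted from the paper; a minimal sketch:

    # Minimal sketch of a shared-exponent (RGBE-style) packing for photon power.
    # Assumption: the 4 chars are three mantissas plus one shared exponent.
    import math

    def rgbe_encode(r, g, b):
        m = max(r, g, b)
        if m < 1e-32:
            return (0, 0, 0, 0)
        mantissa, exponent = math.frexp(m)      # m == mantissa * 2**exponent
        scale = mantissa * 256.0 / m
        return (int(r * scale), int(g * scale), int(b * scale), exponent + 128)

    def rgbe_decode(rc, gc, bc, e):
        if e == 0:
            return (0.0, 0.0, 0.0)
        f = math.ldexp(1.0, e - (128 + 8))      # 2**(e - 128) / 256
        return (rc * f, gc * f, bc * f)

    # Even a tiny per-photon power of order 1/n survives, because the
    # exponent byte carries the magnitude:
    packed = rgbe_encode(2e-6, 1e-6, 0.5e-6)
    print(packed, rgbe_decode(*packed))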

    Read the article

  • Graph data structures and journal format for mini-IDE

    - by matec
    Background: I am writing a small/partial IDE. Code is internally converted/parsed into a graph data structure (for fast navigation, syntax checking, etc.). Undo/redo (also between sessions) and recovery from a crash are implemented by writing to and reading from a journal. The journal records modifications to the graph (not to the source). Question: I am hoping for advice on a decision about the data structures and the journal format. For the graph I see two possible versions: g-a: graph edges are implemented such that one node stores references to other nodes by memory address. g-b: every node has an ID, and there is an ID-to-memory-address map; the graph uses IDs (instead of addresses) to connect nodes, so moving along an edge from one node to another requires a lookup in the ID-to-address map each time. And likewise for the journal: j-a: there is a current node (like the current working directory in a shell-plus-filesystem setting), and the journal contains entries like "create new node and connect to current" or "connect first child of current node" (relative IDs). j-b: the journal uses absolute IDs, e.g. "delete edge 7 - 5", "delete node 5". I could, for example, combine g-a with j-a or combine g-b with j-b (see the sketch below); in principle g-b with j-a should also be possible. [My first/original attempt was g-a and a version of j-b that uses addresses, but that turned out to cause severe restrictions: nodes cannot change their addresses (or the journal would have to keep track of it), and using the journal between two sessions is a mess, or even impossible.] I wonder whether variant a or variant b or a combination would be a good idea, what advantages and disadvantages they would have, and especially whether some variant might cause trouble later.
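    To make the g-b + j-b combination concrete, here is a minimal sketch of what I mean; the class, field and journal-entry names are placeholders, not a spec.

    # Minimal sketch of variant g-b (ID-based graph) with a j-b style journal
    # (absolute IDs). One journal line is appended per graph modification.
    class Graph:
        def __init__(self, journal_path):
            self.nodes = {}                 # ID -> node payload
            self.edges = set()              # set of (from_id, to_id)
            self.next_id = 1
            self.journal = open(journal_path, "a")

        def _log(self, *fields):
            self.journal.write(" ".join(str(f) for f in fields) + "\n")
            self.journal.flush()

        def add_node(self, payload):
            node_id = self.next_id
            self.next_id += 1
            self.nodes[node_id] = payload
            self._log("add_node", node_id, payload)
            return node_id

        def add_edge(self, a, b):
            self.edges.add((a, b))
            self._log("add_edge", a, b)

        def delete_edge(self, a, b):
            self.edges.discard((a, b))
            self._log("delete_edge", a, b)

    # Recovery after a crash (or redo across sessions) is then just replaying
    # the journal: read each line and dispatch on its first field.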

    Read the article

  • What would be a better way to do data retrieval in an application that needs to handle a limited amount of data?

    - by Milanix
    Just moved this question from Stack Overflow, since adding my code snippets would make it really long. Instead, I am interested in better ways of doing data retrieval in an application that handles a limited amount of data which isn't updated regularly. Let's take this example: I am writing an application which gets a schedule as XML from a server. I have written logic to parse the XML version and update the database only if that version is newer than the local version (roughly the check sketched below). Although the update is checked automatically or manually on a daily basis, depending on user preference, the actual version change only happens once every few months or so, since it is made by some other authority which doesn't provide an API but rather announces its changes publicly. The actual XML contains "(n number of groups) x (days in a week) x (n number of schedules)". The number of groups is usually 6 and the number of schedules is usually 2, so basically there would usually be only around 100 strings. I have used SQLite so far, and I want to know how to handle the database update. Should I show a progress dialog saying the application is updating and exit the app when it's done? Since my updates are infrequent I don't think this will really harm the user experience, but is there a better way to do it? I don't want the update to run while the user is searching, which is done using the database; that will cause a "database already open" exception (at least I have faced this problem before). Is it better to parse the XML every time the user wants to view something, or to use SQLite? Since I make a lot of use of adapters in my app to create lists, will that degrade performance?
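    For context, the version check I describe is essentially the following. This is a minimal, self-contained sketch using Python's standard library rather than my actual app code; the XML layout, version attribute and table names are made-up placeholders.

    # Minimal sketch of "update only if the remote version is newer".
    import sqlite3
    import xml.etree.ElementTree as ET

    REMOTE_XML = """<schedule version="7">
      <entry group="A" day="Mon" text="09:00 Math"/>
      <entry group="A" day="Tue" text="10:00 Physics"/>
    </schedule>"""

    def local_version(conn):
        conn.execute("CREATE TABLE IF NOT EXISTS meta (key TEXT PRIMARY KEY, value INTEGER)")
        row = conn.execute("SELECT value FROM meta WHERE key = 'version'").fetchone()
        return row[0] if row else 0

    def import_if_newer(conn, xml_text):
        root = ET.fromstring(xml_text)
        remote = int(root.get("version"))
        if remote <= local_version(conn):
            return False                      # nothing to do, keep the local data
        conn.execute("CREATE TABLE IF NOT EXISTS schedule (grp TEXT, day TEXT, text TEXT)")
        conn.execute("DELETE FROM schedule")  # only ~100 rows, so a full reload is cheap
        for e in root.findall("entry"):
            conn.execute("INSERT INTO schedule VALUES (?, ?, ?)",
                         (e.get("group"), e.get("day"), e.get("text")))
        conn.execute("INSERT OR REPLACE INTO meta VALUES ('version', ?)", (remote,))
        conn.commit()
        return True

    conn = sqlite3.connect(":memory:")
    print("updated:", import_if_newer(conn, REMOTE_XML))
    print("updated again:", import_if_newer(conn, REMOTE_XML))   # False: same version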

    Read the article

  • Unmet Dependencies with kdelibs5-data

    - by Jitesh
    I was trying to install Amarok 1.4 on Ubuntu 12.04 (GNOME Classic) by following these instructions. The problem started after giving these two commands:

    dpkg -i kdelibs5-data_4.6.2-0ubuntu4_all.deb
    dpkg -i kdelibs-data_3.5.10.dfsg.1-5ubuntu2_all.deb

    Immediately after these commands, the Ubuntu updater popped up and gave me an error that the package catalog is broken and needs to be repaired, and that nothing can be installed or removed until then. It also suggested running apt-get install -f. I tried that, but got the same error. I also tried apt-get clean followed by apt-get install -f, and again got the following output:

    jitesh@jitesh-desktop:~$ sudo apt-get clean
    jitesh@jitesh-desktop:~$ sudo apt-get install -f
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Correcting dependencies... Done
    The following extra packages will be installed:
      kdelibs5-data
    The following packages will be upgraded:
      kdelibs5-data
    1 upgraded, 0 newly installed, 0 to remove and 18 not upgraded.
    2 not fully installed or removed.
    Need to get 2,832 kB of archives.
    After this operation, 2,998 kB disk space will be freed.
    Do you want to continue [Y/n]? y
    Get:1 http://in.archive.ubuntu.com/ubuntu/precise-updates/main kdelibs5-data all 4:4.8.4a-0ubuntu0.2 [2,832 kB]
    Fetched 2,832 kB in 32s (86.6 kB/s)
    dpkg: dependency problems prevent configuration of kdelibs5-data:
     libplasma3 (4:4.8.4a-0ubuntu0.2) breaks kdelibs5-data (<< 4:4.6.80~) and is installed.
      Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4.
     kate-data (4:4.8.4-0ubuntu0.1) breaks kdelibs5-data (<< 4:4.6.90) and is installed.
      Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4.
     katepart (4:4.8.4-0ubuntu0.1) breaks kdelibs5-data (<< 4:4.6.90) and is installed.
      Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4.
    dpkg: error processing kdelibs5-data (--configure):
     dependency problems - leaving unconfigured
    dpkg: dependency problems prevent configuration of kdelibs-data:
     kdelibs-data depends on kdelibs5-data; however:
      Package kdelibs5-data is not configured yet.
    No apport report written because MaxReports is reached already
    dpkg: error processing kdelibs-data (--configure):
     dependency problems - leaving unconfigured
    No apport report written because MaxReports is reached already
    Errors were encountered while processing:
     kdelibs5-data
     kdelibs-data
    W: Duplicate sources.list entry http://archive.canonical.com/ubuntu/ precise/partner i386 Packages (/var/lib/apt/lists/archive.canonical.com_ubuntu_dists_precise_partner_binary-i386_Packages)
    W: You may want to run apt-get update to correct these problems
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    As I thought the error was related to configuring kdelibs, I tried to configure it using dpkg, but got the following errors:

    jitesh@jitesh-desktop:~$ sudo dpkg --configure -a
    dpkg: dependency problems prevent configuration of kdelibs5-data:
     libplasma3 (4:4.8.4a-0ubuntu0.2) breaks kdelibs5-data (<< 4:4.6.80~) and is installed.
      Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4.
     kate-data (4:4.8.4-0ubuntu0.1) breaks kdelibs5-data (<< 4:4.6.90) and is installed.
      Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4.
     katepart (4:4.8.4-0ubuntu0.1) breaks kdelibs5-data (<< 4:4.6.90) and is installed.
      Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4.
    dpkg: error processing kdelibs5-data (--configure):
     dependency problems - leaving unconfigured
    dpkg: dependency problems prevent configuration of kdelibs-data:
     kdelibs-data depends on kdelibs5-data; however:
      Package kdelibs5-data is not configured yet.
    dpkg: error processing kdelibs-data (--configure):
     dependency problems - leaving unconfigured
    Errors were encountered while processing:
     kdelibs5-data
     kdelibs-data
    jitesh@jitesh-desktop:~$

    Now I don't have any idea how to proceed. I am unable to install anything from the Software Centre or using the terminal. Some basic info: Core2Duo, dual-booting Ubuntu 12.04 with Windows 7, fresh install of 12.04 (not an upgrade). Incidentally, I had first upgraded from 10.04 and had successfully installed Amarok 1.4 following this same method, but due to other issues I had to format and do a clean install of 12.04. Now when I try to install Amarok 1.4, I'm getting these errors. I also have digiKam and k3b installed, if that is of any help. I use digiKam a lot, so removing KDE is not feasible for me. Any help on this issue will be highly appreciated. Thanks in advance.

    Read the article

< Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >