Search Results


  • Grandparent – Parent – Child Reports in SQL Developer

    - by thatjeffsmith
    You'll never see one of these family stickers on my car, but I promise not to judge…much. Parent – Child reports are pretty straightforward in Oracle SQL Developer. You have a 'parent' report, and then one or more 'child' reports which are based off of a value in a selected row or value from the parent. If you need a quick tutorial to get up to speed on the subject, go ahead and take 5 minutes. Shortly before I left for vacation 2 weeks ago, I got an interesting question from one of my Twitter followers: "@thatjeffsmith any luck with the #Oracle awr reports in #SQLDeveloper? This is easy with multi generation parent>child. Done in #dbvisualizer" — Ronald Rood (@Ik_zelf) August 26, 2012. Now that I'm back from vacation, I can tell Ronald and everyone else that the answer is 'Yes!' And here's how.

    Time to Get Out Your XML Editor

    Don't have one? That's OK, SQL Developer can edit XML files. While the Reporting interface doesn't surface the ability to create multi-generational reports, the underlying code definitely supports it. We just need to hack away at the XML that powers a report. For this example I'm going to start simple: a query that brings back DEPARTMENTs, then EMPLOYEES, then JOBs. We can build the first two parts of the report using the report editor (a Parent-Child report in Oracle SQL Developer: Departments – Employees).

    Save the Report to XML

    Once you've generated the XML file, open it with your favorite XML editor. For this example I'll be using the built-in XML editor in SQL Developer, with the reports in their raw XML glory. Right after the PDF element in the XML document, we can start a new 'child' report by inserting a DISPLAY element. I just copied and pasted the existing 'display' down so I wouldn't have to worry about screwing anything up. Note I also needed to change the 'master' name so it wouldn't confuse SQL Developer when I try to import/open a report that has the same name. I also needed to update the binds tags to reflect the names from the child versus the original parent report. This is pretty easy to figure out on your own, actually – I mean, I'm no real developer and I got it pretty quick.
    <?xml version="1.0" encoding="UTF-8" ?>
    <displays>
      <display id="92857fce-0139-1000-8006-7f0000015340" type="" style="Table" enable="true">
        <name><![CDATA[Grandparent]]></name>
        <description><![CDATA[]]></description>
        <tooltip><![CDATA[]]></tooltip>
        <drillclass><![CDATA[null]]></drillclass>
        <CustomValues>
          <TYPE>horizontal</TYPE>
        </CustomValues>
        <query>
          <sql><![CDATA[select * from hr.departments]]></sql>
        </query>
        <pdf version="VERSION_1_7" compression="CONTENT">
          <docproperty title="" author="" subject="" keywords="" />
          <cell toppadding="2" bottompadding="2" leftpadding="2" rightpadding="2" horizontalalign="LEFT" verticalalign="TOP" wrap="true" />
          <column>
            <heading font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="FIRST_PAGE" />
            <footing font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="NONE" />
            <blob blob="NONE" zip="false" />
          </column>
          <table font="Courier" size="10" style="NORMAL" color="-16777216" userowshading="false" oddrowshading="-1" evenrowshading="-1" showborders="true" spacingbefore="12" spacingafter="12" horizontalalign="LEFT" />
          <header enable="false" generatedate="false">
            <data> null </data>
          </header>
          <footer enable="false" generatedate="false">
            <data value="null" />
          </footer>
          <security enable="false" useopenpassword="false" openpassword="" encryption="EXCLUDE_METADATA">
            <permission enable="false" permissionpassword="" allowcopying="true" allowprinting="true" allowupdating="false" allowaccessdevices="true" />
          </security>
          <pagesetup papersize="LETTER" orientation="1" measurement="in" margintop="1.0" marginbottom="1.0" marginleft="1.0" marginright="1.0" />
        </pdf>
        <display id="null" type="" style="Table" enable="true">
          <name><![CDATA[Parent]]></name>
          <description><![CDATA[]]></description>
          <tooltip><![CDATA[]]></tooltip>
          <drillclass><![CDATA[null]]></drillclass>
          <CustomValues>
            <TYPE>horizontal</TYPE>
          </CustomValues>
          <query>
            <sql><![CDATA[select * from hr.employees where department_id = :DEPARTMENT_ID]]></sql>
            <binds>
              <bind id="DEPARTMENT_ID">
                <prompt><![CDATA[DEPARTMENT_ID]]></prompt>
                <tooltip><![CDATA[DEPARTMENT_ID]]></tooltip>
                <value><![CDATA[NULL_VALUE]]></value>
              </bind>
            </binds>
          </query>
          <pdf version="VERSION_1_7" compression="CONTENT">
            <docproperty title="" author="" subject="" keywords="" />
            <cell toppadding="2" bottompadding="2" leftpadding="2" rightpadding="2" horizontalalign="LEFT" verticalalign="TOP" wrap="true" />
            <column>
              <heading font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="FIRST_PAGE" />
              <footing font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="NONE" />
              <blob blob="NONE" zip="false" />
            </column>
            <table font="Courier" size="10" style="NORMAL" color="-16777216" userowshading="false" oddrowshading="-1" evenrowshading="-1" showborders="true" spacingbefore="12" spacingafter="12" horizontalalign="LEFT" />
            <header enable="false" generatedate="false">
              <data> null </data>
            </header>
            <footer enable="false" generatedate="false">
              <data value="null" />
            </footer>
            <security enable="false" useopenpassword="false" openpassword="" encryption="EXCLUDE_METADATA">
              <permission enable="false" permissionpassword="" allowcopying="true" allowprinting="true" allowupdating="false" allowaccessdevices="true" />
            </security>
            <pagesetup papersize="LETTER" orientation="1" measurement="in" margintop="1.0" marginbottom="1.0" marginleft="1.0" marginright="1.0" />
          </pdf>
          <display id="null" type="" style="Table" enable="true">
            <name><![CDATA[Child]]></name>
            <description><![CDATA[]]></description>
            <tooltip><![CDATA[]]></tooltip>
            <drillclass><![CDATA[null]]></drillclass>
            <CustomValues>
              <TYPE>horizontal</TYPE>
            </CustomValues>
            <query>
              <sql><![CDATA[select * from hr.jobs where job_id = :JOB_ID]]></sql>
              <binds>
                <bind id="JOB_ID">
                  <prompt><![CDATA[JOB_ID]]></prompt>
                  <tooltip><![CDATA[JOB_ID]]></tooltip>
                  <value><![CDATA[NULL_VALUE]]></value>
                </bind>
              </binds>
            </query>
            <pdf version="VERSION_1_7" compression="CONTENT">
              <docproperty title="" author="" subject="" keywords="" />
              <cell toppadding="2" bottompadding="2" leftpadding="2" rightpadding="2" horizontalalign="LEFT" verticalalign="TOP" wrap="true" />
              <column>
                <heading font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="FIRST_PAGE" />
                <footing font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="NONE" />
                <blob blob="NONE" zip="false" />
              </column>
              <table font="Courier" size="10" style="NORMAL" color="-16777216" userowshading="false" oddrowshading="-1" evenrowshading="-1" showborders="true" spacingbefore="12" spacingafter="12" horizontalalign="LEFT" />
              <header enable="false" generatedate="false">
                <data> null </data>
              </header>
              <footer enable="false" generatedate="false">
                <data value="null" />
              </footer>
              <security enable="false" useopenpassword="false" openpassword="" encryption="EXCLUDE_METADATA">
                <permission enable="false" permissionpassword="" allowcopying="true" allowprinting="true" allowupdating="false" allowaccessdevices="true" />
              </security>
              <pagesetup papersize="LETTER" orientation="1" measurement="in" margintop="1.0" marginbottom="1.0" marginleft="1.0" marginright="1.0" />
            </pdf>
          </display>
        </display>
      </display>
    </displays>

    Save the file and 'Open Report…' — you'll see your new report name in the tree. You just need to double-click it to open it. Here's what it looks like running: a 3-generation family.

    Now Let's Build an AWR Text Report

    Ronald wanted the ability to query AWR snapshots and generate the AWR reports. That requires a few inputs, including a START and STOP snapshot ID; that basically tells AWR what time period to use for generating the report. And here's where it gets tricky: we'll need to use aliases for the SNAP_ID column. Since we're using the same column name from 2 different queries, we need to use different bind variables. Fortunately for us, SQL Developer is clever enough to use the column alias as the BIND. Here's what I mean:

    Grandparent Query

    SELECT snap_id start1, begin_interval_time, end_interval_time
    FROM dba_hist_snapshot
    ORDER BY 1 asc

    Parent Query

    SELECT snap_id stop1, begin_interval_time, end_interval_time, :START1 carry
    FROM dba_hist_snapshot
    WHERE snap_id > :START1
    ORDER BY 1 asc

    And here's where it gets even trickier – you can't reference a bind from outside the parent query. My grandchild report can't reference a value from the grandparent report. So I just carry the selected value down to the parent. In my parent query SELECT, you see the ':START1' at the end? That's making that value available to me when I use it in my grandchild query. To complicate things a bit further, I can't have a column name with a ':' in it, or SQL Developer will get confused when I try to reference the value of the variable with the ':' – and '::Name' doesn't work. But that's OK, just alias it.

    Grandchild Query

    Select Output From Table(Dbms_Workload_Repository.Awr_Report_Text(1298953802, 1, :CARRY, :STOP1));

    Ok, and the last trick – I hard-coded my database's DB_ID and INST_ID into the AWR package call. Now a smart person could figure out a way to make that work on any database, but I got lazy and ran out of time. But this should be far enough for you to take it from here. Here's what my report looks like now. Caution: don't run this if you haven't licensed Enterprise Edition with the Diagnostic Pack.

    The Raw XML for this AWR Report

    <?xml version="1.0" encoding="UTF-8" ?>
    <displays>
      <display id="927ba96c-0139-1000-8001-7f0000015340" type="" style="Table" enable="true">
        <name><![CDATA[AWR Start Stop Report Final]]></name>
        <description><![CDATA[]]></description>
        <tooltip><![CDATA[]]></tooltip>
        <drillclass><![CDATA[null]]></drillclass>
        <CustomValues>
          <TYPE>horizontal</TYPE>
        </CustomValues>
        <query>
          <sql><![CDATA[SELECT snap_id start1, begin_interval_time, end_interval_time FROM dba_hist_snapshot ORDER BY 1 asc]]></sql>
        </query>
        <display id="null" type="" style="Table" enable="true">
          <name><![CDATA[Stop SNAP_ID]]></name>
          <description><![CDATA[]]></description>
          <tooltip><![CDATA[]]></tooltip>
          <drillclass><![CDATA[null]]></drillclass>
          <CustomValues>
            <TYPE>horizontal</TYPE>
          </CustomValues>
          <query>
            <sql><![CDATA[SELECT snap_id stop1, begin_interval_time, end_interval_time, :START1 carry FROM dba_hist_snapshot WHERE snap_id > :START1 ORDER BY 1 asc]]></sql>
          </query>
          <display id="null" type="" style="Table" enable="true">
            <name><![CDATA[AWR Report]]></name>
            <description><![CDATA[]]></description>
            <tooltip><![CDATA[]]></tooltip>
            <drillclass><![CDATA[null]]></drillclass>
            <CustomValues>
              <TYPE>horizontal</TYPE>
            </CustomValues>
            <query>
              <sql><![CDATA[Select Output From Table(Dbms_Workload_Repository.Awr_Report_Text(1298953802, 1, :CARRY, :STOP1 ))]]></sql>
            </query>
          </display>
        </display>
      </display>
    </displays>

    Should We Build Support for Multiple Levels of Reports into the User Interface?

    Let us know! A comment here or a suggestion on our SQL Developer Exchange might help your case!


  • HPC Server Dynamic Job Scheduling: when jobs spawn jobs

    - by JoshReuben
    HPC Job Types

    HPC has 3 types of jobs (http://technet.microsoft.com/en-us/library/cc972750(v=ws.10).aspx):
    · Task Flow – vanilla sequence
    · Parametric Sweep – concurrently run multiple instances of the same program, each with a different work unit input
    · MPI – message passing between master & slave tasks

    But when you try to go outside the box – job tasks that spawn jobs, blocking the parent task – you run the risk of resource starvation, deadlocks, and recursive, non-converging or exponential blow-up. The solution to this is to write some performance monitoring and job scheduling code. You can do this in 2 ways:
    · Manually control scheduling – allocate/de-allocate resources, change job priorities, pause & resume tasks, restrict long-running tasks to specific compute clusters
    · Semi-automatically – set threshold params for scheduling

    How – Control Job Scheduling

    In order to manage the tasks and resources that are associated with a job, you will need to access the ISchedulerJob interface (http://msdn.microsoft.com/en-us/library/microsoft.hpc.scheduler.ischedulerjob_members(v=vs.85).aspx). This really allows you to control how a job is run – you can access & tweak the following features: max/min resource values; whether job resources can grow/shrink and whether jobs can be pre-empted; whether the job is exclusive per node; the creator process id & the job pool; timestamps of job creation & completion; job priority, hold time & run time limit; re-queue count; job progress; max/min number of cores, nodes, sockets, RAM; a dynamic task list – you can add/cancel jobs on the fly; job counters. (A minimal code sketch follows at the end of this post.)

    When – Poll Perf Counters

    Tweaking the job scheduler should be done on the basis of resource utilization according to PerfMon counters – HPC exposes 2 Perf objects: Compute Clusters and Compute Nodes (http://technet.microsoft.com/en-us/library/cc720058(v=ws.10).aspx). You can monitor running jobs according to dynamic thresholds – use your own discretion: percentage processor time, number of running jobs, number of running tasks, total number of processors, number of processors in use, number of processors idle, number of serial tasks, number of parallel tasks.

    Design Your Algorithms Correctly

    Finally, don't assume you have unlimited compute resources in your cluster – design your algorithms with the following factors in mind:
    · Branching factor (http://en.wikipedia.org/wiki/Branching_factor) – dynamically optimize the number of children per node
    · Cutoffs to prevent explosions (http://en.wikipedia.org/wiki/Limit_of_a_sequence) – not all functions converge after n attempts. You also need a threshold of "good enough", diminishing returns
    · Heuristic shortcuts (http://en.wikipedia.org/wiki/Heuristic) – sometimes an exhaustive search is impractical and shortcuts are suitable
    · Pruning (http://en.wikipedia.org/wiki/Pruning_(algorithm)) – remove/de-prioritize unnecessary tree branches
    · Avoid local minima/maxima (http://en.wikipedia.org/wiki/Local_minima) – sometimes an algorithm can't converge because it gets stuck in a local saddle – try simulated annealing, hill climbing or genetic algorithms to get out of these ruts
    · Watch out for rounding errors (http://en.wikipedia.org/wiki/Round-off_error) – multiple iterations in parallel can quickly amplify & blow up your algo! Use an epsilon; avoid floating point errors, truncations, approximations

    Happy Coding!
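
    Here is the promised sketch of the manual-control option, in C# against the HPC scheduler API. Treat it as a sketch only: the head node name and job id are placeholder values, and error handling is omitted.

        using Microsoft.Hpc.Scheduler;

        class JobTuner
        {
            static void Main()
            {
                IScheduler scheduler = new Scheduler();
                scheduler.Connect("HEADNODE");              // placeholder: your cluster head node

                ISchedulerJob job = scheduler.OpenJob(42);  // placeholder job id

                // Throttle the job before its tasks start spawning child jobs
                job.MaximumNumberOfCores = 16;
                job.MinimumNumberOfCores = 2;
                job.CanShrink = true;                       // let the scheduler reclaim cores

                job.Commit();                               // push the changes back to the scheduler
            }
        }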


  • MVC Architecture

    Model-View-Controller (MVC) is an architectural design pattern first written about and implemented by Trygve Reenskaug in 1978. Trygve developed this pattern during the year he spent working with Xerox PARC on a Smalltalk application. According to Trygve, "The essential purpose of MVC is to bridge the gap between the human user's mental model and the digital model that exists in the computer. The ideal MVC solution supports the user illusion of seeing and manipulating the domain information directly. The structure is useful if the user needs to see the same model element simultaneously in different contexts and/or from different viewpoints." – Trygve Reenskaug on MVC. The MVC pattern is composed of 3 core components: Model, View, and Controller. The Model component referenced in the MVC pattern pertains to the encapsulation of core application data and functionality. The primary goal of the Model is to maintain its independence from the View and Controller components, which together form the user interface of the application. The View component retrieves data from the Model and displays it to the user; it represents the output of the application. Traditionally the View has read-only access to the Model because it should not change the Model's data. The Controller component receives and translates input into requests on the Model or View components. The Controller is responsible for invoking methods on the Model that can change the Model's state. The primary benefit of using MVC as an architectural pattern in a project, compared to other patterns, is flexibility. The flexibility of MVC is due to the distinct separation of concerns it establishes with three distinct components. Because of this separation, interaction between the components is limited to interfaces instead of concrete classes. This allows each of the components to be hot-swappable when the needs of the application or of availability change. MVC can easily be applied to C# and the .NET Framework. In fact, Microsoft created an MVC project template that allows new projects of this type to be created with the standard MVC structure in place before any coding begins. The template also creates folders for the three key components, along with default Model, View and Controller classes added to the project. Personally I think that MVC is a great pattern for dealing with web applications because they can be viewed from a myriad of devices: standard web browsers, text-only web browsers, mobile phones, smart phones, iPads and iPhones, just to get started. Given potentially increasing accessibility needs, hot-swappable components are a perfect fit: the core functionality of the application can be retained while the View component is altered based on the client's environment, or swapped out entirely based on the calling device so that the display is targeted to that specific device.
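
    To make the three roles concrete, here is a bare-bones C# sketch. All names are invented for illustration; this is not the Microsoft template code.

        // The Model: core data and state changes, independent of any UI.
        public class ProductModel
        {
            public string Name { get; private set; }
            public ProductModel(string name) { Name = name; }
            public void Rename(string newName) { Name = newName; }
        }

        // The View: read-only access, it only displays the model.
        public interface IProductView
        {
            void Render(ProductModel model);
        }

        public class ConsoleProductView : IProductView
        {
            public void Render(ProductModel model)
            {
                System.Console.WriteLine("Product: " + model.Name);
            }
        }

        // The Controller: translates user input into requests on the model,
        // then asks the view to refresh. Note it depends on the view interface,
        // not a concrete class - that is what makes the view hot-swappable.
        public class ProductController
        {
            private readonly ProductModel _model;
            private readonly IProductView _view;

            public ProductController(ProductModel model, IProductView view)
            {
                _model = model;
                _view = view;
            }

            public void HandleRename(string input)
            {
                _model.Rename(input);
                _view.Render(_model);
            }
        }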


  • Dependency injection with n-tier Entity Framework solution

    - by Matthew
    I am currently designing an n-tier solution which is using Entity Framework 5 (.NET 4) as its data access strategy, but I am concerned about how to incorporate dependency injection to make it testable/flexible. My current solution layout is as follows (my solution is called Alcatraz):
    - Alcatraz.WebUI: an ASP.NET WebForms project, the front-end user interface; references projects Alcatraz.Business and Alcatraz.Data.Models.
    - Alcatraz.Business: a class library project containing the business logic; references projects Alcatraz.Data.Access and Alcatraz.Data.Models.
    - Alcatraz.Data.Access: a class library project housing AlcatrazModel.edmx and the AlcatrazEntities DbContext; references project Alcatraz.Data.Models.
    - Alcatraz.Data.Models: a class library project containing POCOs for the Alcatraz model; no references.

    My vision for how this solution would work is that the web UI would instantiate a repository within the business library; this repository would have a dependency (through the constructor) on a connection string (not an AlcatrazEntities instance). The web UI would know the database connection string, but not that it was an Entity Framework connection string.

    In the Business project:

        public class InmateRepository : IInmateRepository
        {
            private string _connectionString;

            public InmateRepository(string connectionString)
            {
                if (connectionString == null)
                {
                    throw new ArgumentNullException("connectionString");
                }

                EntityConnectionStringBuilder connectionBuilder = new EntityConnectionStringBuilder();
                connectionBuilder.Metadata = "res://*/AlcatrazModel.csdl|res://*/AlcatrazModel.ssdl|res://*/AlcatrazModel.msl";
                connectionBuilder.Provider = "System.Data.SqlClient";
                connectionBuilder.ProviderConnectionString = connectionString;

                _connectionString = connectionBuilder.ToString();
            }

            public IQueryable<Inmate> GetAllInmates()
            {
                AlcatrazEntities ents = new AlcatrazEntities(_connectionString);
                return ents.Inmates;
            }
        }

    In the Web UI:

        IInmateRepository inmateRepo = new InmateRepository(@"data source=MATTHEW-PC\SQLEXPRESS;initial catalog=Alcatraz;integrated security=True;");
        List<Inmate> deathRowInmates = inmateRepo.GetAllInmates().Where(i => i.OnDeathRow).ToList();

    I have a few related questions about this design.
    1) Does this design even make sense in terms of Entity Framework's capabilities? I heard that Entity Framework uses the unit-of-work pattern already; am I just adding another layer of abstraction unnecessarily?
    2) I don't want my web UI to directly communicate with Entity Framework (or even reference it, for that matter). I want all database access to go through the business layer, as in the future I will have multiple projects using the same business layer (web service, Windows application, etc.) and I want it to be easy to maintain/update by having the business logic in one central area. Is this an appropriate way to achieve that?
    3) Should the business layer even contain repositories, or should they be contained within the access layer? If where they are is all right, is passing a connection string a good dependency to assume?

    Thanks for taking the time to read!


  • RequestValidation Changes in ASP.NET 4.0

    - by Rick Strahl
    There's been a change in the way the ValidateRequest attribute on WebForms works in ASP.NET 4.0. I noticed this today while updating a post on my WebLog, where posts all contain raw HTML and so pretty much all trigger request validation. I recently upgraded this app from ASP.NET 2.0 to 4.0 and it's now failing to update posts. At first this was difficult to track down because of custom error handling in my app – the custom error handler traps the exception and logs it with only basic error information, so the full detail of the error was initially hidden. After some more experimentation in development mode, the error that occurs is the typical ASP.NET validate request error ('A potentially dangerous Request.Form value was detected…'), which looks like this in ASP.NET 4.0. At first when I got this I was really perplexed, as I didn't read the entire error message and because my page does have:

    <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="NewEntry.aspx.cs"
        Inherits="Westwind.WebLog.NewEntry"
        MasterPageFile="~/App_Templates/Standard/AdminMaster.master"
        ValidateRequest="false"
        EnableEventValidation="false"
        EnableViewState="false" %>

    WTF? ValidateRequest would seem like it should be enough, but alas, in ASP.NET 4.0 that setting alone is no longer enough. Reading the fine print in the error explains that you need to explicitly set the requestValidationMode for the application back to V2.0 in web.config:

    <httpRuntime executionTimeout="300" requestValidationMode="2.0" />

    Kudos to the ASP.NET team for putting up a nice error message that tells me how to fix this problem, but excuse me, why the heck would you change this behavior to require an explicit override of an optional and by-default-disabled page-level switch? You've just turned a relatively simple fix into a nasty morass of hard-to-discover configuration settings??? The original way this worked was perfectly discoverable via attributes in the page. Now you can set this setting in the page, get completely unexpected behavior, and be required to set what effectively amounts to a backwards compatibility flag in the configuration file. It turns out the real reason for the .config flag is that the request validation behavior has moved from the WebForms pipeline down into the entire ASP.NET/IIS request pipeline and is now applied against all requests. Here's what the breaking changes page from Microsoft says about it:

    "The request validation feature in ASP.NET provides a certain level of default protection against cross-site scripting (XSS) attacks. In previous versions of ASP.NET, request validation was enabled by default. However, it applied only to ASP.NET pages (.aspx files and their class files) and only when those pages were executing. In ASP.NET 4, by default, request validation is enabled for all requests, because it is enabled before the BeginRequest phase of an HTTP request. As a result, request validation applies to requests for all ASP.NET resources, not just .aspx page requests. This includes requests such as Web service calls and custom HTTP handlers. Request validation is also active when custom HTTP modules are reading the contents of an HTTP request. As a result, request validation errors might now occur for requests that previously did not trigger errors. To revert to the behavior of the ASP.NET 2.0 request validation feature, add the following setting in the Web.config file:

    <httpRuntime requestValidationMode="2.0" />

    However, we recommend that you analyze any request validation errors to determine whether existing handlers, modules, or other custom code accesses potentially unsafe HTTP inputs that could be XSS attack vectors."

    Ok, so ValidateRequest on the form still works as it always has, but it's actually the ASP.NET event pipeline, not WebForms, that's throwing the above exception, as request validation is applied to every request that hits the pipeline. Creating the runtime override removes the HttpRuntime checking and restores the WebForms-only behavior. That fixes my immediate problem, but it still leaves me wondering, especially given the vague wording of the above explanation. One important detail is missing from the description above: the request validation is applied only to application/x-www-form-urlencoded POST content, not to all inbound POST data. When I first read this it freaked me out, because it sounds like literally ANY request hitting the pipeline is affected. To make sure this is not really so, I created a quick handler:

    public class Handler1 : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "text/plain";
            context.Response.Write("Hello World <hr>" + context.Request.Form.ToString());
        }

        public bool IsReusable
        {
            get { return false; }
        }
    }

    and called it with Fiddler by posting some XML to the handler using a default form-urlencoded POST content type. Sure enough, hitting the handler also causes the request validation error and a 500 server response. Changing the content type to text/xml effectively fixes the problem, however, bypassing the request validation filter, so Web Services/AJAX handlers and custom modules/handlers that implement custom protocols aren't affected as long as they work with special input content types. It also looks like multipart encoding does not trigger request validation in the runtime either, so this request also works fine:

    POST http://rasnote/weblog/handler1.ashx HTTP/1.1
    Content-Type: multipart/form-data; boundary=------7cf2a327f01ae
    User-Agent: West Wind Internet Protocols 5.53
    Host: rasnote
    Content-Length: 40
    Pragma: no-cache

    <xml>asdasd</xml>--------7cf2a327f01ae

    *That* probably should trigger request validation – since it is a potential HTML form submission – but it doesn't.

    New Runtime Feature, Global Scope Only?

    Ok, so request validation is now a runtime feature, but sadly it's a feature that's scoped to the ASP.NET runtime – effectively scoped to the entire running application/app domain. You can still manually force validation using Request.ValidateInput(), which gives you the option to do this in code, but that realistically will only work with requestValidationMode set to V2.0 as well, since the 4.0 mode auto-fires before code ever gets a chance to intercept the call. Given all that, the new setting in ASP.NET 4.0 seems to limit options and makes things more difficult and less flexible. Of course Microsoft gets to say ASP.NET is more secure by default because of it, but what good is that if you have to turn off this flag the very first time you need to allow one single request that bypasses request validation??? This is really shortsighted design… <sigh>

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET


  • Do unit tests sometimes break encapsulation?

    - by user1288851
    I very often hear the following: "If you want to test private methods, you'd better put them in another class and expose them." While sometimes that's the case and we have a hidden concept inside our class, other times you end up with classes that have the same attributes (or, worse, every attribute of one class becomes an argument on a method in the other class) and that expose functionality which is, in fact, implementation detail. Especially with TDD, when you refactor public methods out of a previously tested class, the new class is now part of your interface but has no tests of its own (since you refactored it, and it is an implementation detail). Now, I may be missing an obvious better answer, but if my reasoning is correct, that means that sometimes writing unit tests can break encapsulation and divide the same responsibility into different classes. A simple example would be testing a setter method when a getter is not actually needed for anything in the real code. Please, when answering, don't provide simple answers to the specific cases I may have written. Rather, try to explain the generic case and the theoretical approach. And this is not language specific. Thanks in advance.

    EDIT: The answer given by Matthew Flynn was really insightful, but it didn't quite answer the question. Although he made the fair point that you either don't test private methods or extract them because they really are another concern and responsibility (or at least that was what I could understand from his answer), I think there are situations where unit testing private methods is useful. My primary example is when you have a class that has one responsibility but the output (or input) it produces (takes) is just too complex. For example, a hashing function. There's no good way to break a hashing function apart and maintain cohesion and encapsulation. However, testing a hashing function can be really tough, since you would need to calculate the hash by hand (you can't use code calculation to test code calculation!) and test multiple cases where the hash changes. In that way (and this may be a question worthy of its own topic) I think private method testing is the best way to handle it (one pragmatic compromise is sketched at the end of this post). Now, I'm not sure if I should ask another question, or ask it here, but is there any better way to test such complex output (input)?

    OBS: Please, if you think I should ask another question on that topic, leave a comment. :)
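
    As an illustration of the hash-function dilemma above, one pragmatic compromise in C# is to make the helper internal rather than private and grant the test assembly access via InternalsVisibleTo. This is a sketch only: the type, member, and assembly names are hypothetical, and the assertion style assumes NUnit.

        // In AssemblyInfo.cs of the production assembly:
        // [assembly: System.Runtime.CompilerServices.InternalsVisibleTo("MyLib.Tests")]

        public class Hasher
        {
            public int Hash(string input)
            {
                return Mix(input.Length, 31);   // public behavior built on the helper
            }

            // internal rather than private, so only the test assembly can reach it;
            // it stays invisible to ordinary consumers of the library
            internal int Mix(int value, int salt)
            {
                unchecked { return value * 397 ^ salt; }
            }
        }

        // In MyLib.Tests:
        // [Test]
        // public void Mix_CombinesValueAndSalt()
        // {
        //     Assert.AreEqual(397 ^ 31, new Hasher().Mix(1, 31));
        // }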


  • How to Customize Your How-To Geek RSS Feeds (We’re Changing Things)

    - by The Geek
    If you're an RSS subscriber, you'll soon notice that we're making a few changes. Why? It's time to simplify our system, while providing you a little more control over which articles you want to see. The point, of course, is that people like different things, and that's OK. What's not so great is getting complaints: Linux users are always whining about Windows posts, and Windows users are whining when we write Linux posts. It's also worth pointing out that if you aren't interested in a post, you don't have to click on it to read it. This is probably fairly obvious to reasonable people.

    The New Feeds

    Here's the new set of feeds you can subscribe to. We'll probably add more fine-grained feeds in the future, as we get some more things straightened out.
    - Everything we publish (news, how-tos, features)
    - Just the Feature Articles (the absolute best stuff)
    - Just News (ETC) Posts
    - Just Windows Articles
    - Just Linux Articles
    - Just Apple Articles
    - Just Desktop Fun Articles

    You can obviously subscribe to one or many of them if you feel like it.

    The Once Daily Summary Feed!

    If you'd rather get all your How-To Geek in a single dose each day, you can subscribe to the summary feed, which is pretty much the same as our daily email newsletter. You can subscribe to this summary feed by clicking here. Note: we're working on a lot of backend changes to hopefully make things a little better for you, the reader. One of the things we've consistently had feedback on is the comment system, which we'll tackle a little later. Also, if you suddenly saw a barrage of posts earlier… oops! Our mistake.


  • How do you handle authentication across domains?

    - by William Ratcliff
    I'm trying to save users of our services from having to have multiple accounts/passwords. I'm in a large organization, and there's one group that handles part of user authentication for users who are from outside the facility (primarily for administrative functions). They store a secure cookie to establish a session and communicate only via HTTPS through the browser. Sessions expire through: 1) explicit logout by the user, 2) inactivity, or 3) the browser closing.

    My team is trying to write a web application to help users analyze data that they've taken (or are currently taking) while at our facility. We need to determine 1) whether a user is authenticated, and 2) some identifier for that user so we can store state for them (what analysis they are working on, etc.). So, the problem is how you authenticate across domains (the authentication server for the other application lives in a border region between public and private; we will live in the public region). We have come up with some scenarios and I'd like advice about what is best practice, or whether there is one we haven't considered. Let's start with the case where the user is authenticated with the authentication server.

    1) The authentication server leaves a public cookie in the browser with their primary key for a user. If this is deemed sensitive, they encrypt it on their server and we have the key to decrypt it on our server. When the user visits our site, we check for this public cookie. We extract the user_id and use a public API of the authentication server to ask whether the user is logged in. If they are, they send us a response with:

        response = {
            userid: we can then map this to our own user ids. If necessary, we can request additional information such as email address/display name once (to notify them if long-running jobs are done, or to share results with other people, as with google_docs),
            account_is_active: make sure that the account is still valid,
            session_is_active: is their session still active? If we query this for a valid user, a side effect is that the last_time_session_activated value is reset, prolonging their session with the authentication server,
            last_time_session_activated: lets us know how much time they have left,
            ip_address_session_started_from: make sure the person at our site is coming from the same IP the session was started at
        }

    Given this response, we either accept them as authenticated and move on with our app, or redirect them to the login page for the authentication server (question: if we give an encrypted portion of the response (signed by us) with the page to redirect them to, do we open any gaping security holes in the authentication server?). The flaw we've found with this is that if the user visits evilsite.com, it can look at the session cookie and send a query to the public API of the authentication server to keep the session alive; and if our original user leaves the machine without logging out, the next user will be able to access their session (this was possible before, but having the session alive eternally makes it worse).

    2) The authentication server redirects all requests made to our domain to us, and we send responses back through them to the user. Essentially, they act as a proxy. The advantage of this is that we can handshake with the authentication server, so it's safe for us to be trusted with the email address/name of the user and they don't have to reenter it. So, if the user tries to go to authentication_site/mysite_page1, they are redirected to mysite.

    Which would you choose, or is there a better way? The goal is to minimize the "Yet Another Password/Yet Another Username" problem... Thanks!!!!


  • “Cloud Integration in Minutes” – True or False?

    - by Bruce Tierney
    The short answer is "yes". Connecting on-premise and cloud applications "in minutes" is true… provided you only consider the connectivity subset of integration and have a small number of cloud integration touch points. At the recent Gartner AADI conference, 230 attendees filled up the Oracle session to get a more comprehensive answer to this question. During the session, titled "Simplifying Integration – The Cloud & Mobile Pre-requisite", Oracle's Tim Hall described cloud connectivity and then, equally importantly, the other essential and sometimes overlooked aspects of integration required to ensure a long-term application and service integration strategy. To understand the challenges and opportunities faced by cloud integration, the session started off with a slide describing how connectivity can quickly transition from simplicity to complexity as the number of applications and service vendor instances grows: increased complexity puts increased demand on the integration platform. As companies expand from on-premise applications into a hybrid on-premise/cloud infrastructure with support for mobile, cloud, and social, there is a new sense of urgency to implement a unified and comprehensive service integration platform. Without getting this unified platform in place, companies face increased complexity and cost managing a growing patchwork of niche integration toolsets, as well as the disparate standards mandated by each SaaS vendor: incomplete and overlapping offerings from a patchwork of niche vendors. Also at Gartner AADI, Oracle SOA Suite customer Geeta Pyne, Director of Middleware at BMC, presented their successful strategy for how BMC efficiently manages their cloud integration despite disparate requirements from each vendor. From one of Geeta's slides: interfaces are dictated by SaaS vendors in a wide variety (SOAP, REST, Socket, HTTP/POX, SFTP), and the flexibility of Oracle Service Bus/SOA Suite helps to support them; every vendor has their own way to handle security (WS-Security, custom headers), and support in Oracle Service Bus helps to adhere to these disparate requirements. At BMC, the flexibility of Oracle Service Bus and Oracle SOA Suite allowed them to support the wide variation in functional requirements mandated by their SaaS vendors. In contrast to the patchwork platform approach of escalating complexity from overlapping SaaS toolkits, Oracle's strategy is to provide a unified platform to support disparate requirements from your SaaS vendors, on-premise apps, legacy apps, and more. Furthermore, Oracle SOA Suite includes the many aspects of comprehensive integration beyond basic connectivity, including orchestration, analytics (BAM, events…), service virtualization and more, in a single unified interface: Oracle SOA Suite, unified and comprehensive. To summarize, yes, you can achieve "cloud integration in minutes" when considering the connectivity subset of integration, but be sure to look for ways to simplify as you take a more comprehensive view of integration beyond basic connectivity, covering service virtualization, management, event processing and more. And finally, be sure your integration platform has the deep flexibility to handle the requirements of all your future SaaS applications… many of which are unknown to you now.


  • Introducing the Oracle MDM Blog - Why All MDM Solutions Aren't Equal

    - by ken.pulverman
    Welcome to the Oracle MDM Blog. Dave Butler, Tony Ouk, and I - Ken Pulverman - will be bringing you news and information from the world of MDM at Oracle. Dave is our resident expert, with more than 30 years of experience in data and information management. Tony has deep expertise in our Exadata product line, which provides a strong hardware synergy with MDM. I come from Siebel Systems, where I helped found the team that built our integration product line and then our Universal Customer Master, which is part of our MDM offering at Oracle. I thought I'd hit the ground running with a topic we are going to want to continue to bend your ear about. We had a recent meeting with Ford Goodman, our head of MDM commercial sales in the US, and he was very fired up about an important topic. He's irked that all MDM solutions get painted with the same brush, even though they aren't the same at all. There are companies out there trying to represent frameworks and toolkits as out-of-the-box solutions. They give you the pleasure (read: pain) of doing things like developing your own multi-application data model, building your own web services, or creating your own APIs. Huh? What gets sold as flexibility is in reality a barrier to ever going live. At Siebel Systems we obsessed over the notion of a customer. Our data model took over 10 years to perfect, as defining a customer is a very complex task indeed. There are divisions, subsidiaries, branches, acquisitions, sites, etc., etc. You'll want to do your homework, but trust me - you aren't going to want to take the time or resources to build these canonical data structures yourself. And what about APIs? Again, it sounds flexible. In reality it's a lot of work. Our DNA at Oracle is to reduce the cost of information technology, so we pre-integrate our technology with all of our major applications and pre-build integrations and connectors for all the major systems you work with. This is tedious work that requires detailed knowledge of the interfaces of all the applications involved. It is also version specific, as interface features and technology are always changing. We have a substantial organization to manage this complexity so you don't have to. Suffice it to say, we'd like to help our customers peel back the rhetoric of companies that fly the MDM flag without a real offering you can quickly benefit from. Please watch this space for more information on this storyline, as well as news and information around Oracle MDM.


  • How to set conditional activation to taskflows?

    - by shantala.sankeshwar(at)oracle.com
    This article describes implementing conditional activation for task flows.

    Use Case Description

    Suppose we have a task flow dropped as a region on a page, and this region is enclosed in a popup. By default, when the page is loaded the respective region also gets loaded; hence a region model needs to provide a viewId whenever one is requested. A consequence of this is that the TaskFlowRegionModel always has to initialize its task flow and execute the task flow's default activity in order to determine a viewId, even if the region is not visible on the page. This can lead to the unnecessary performance overhead of executing a task flow to generate viewIds for regions that are never visible. In order to increase performance, we need to set the task flow binding's activation property to 'conditional'. Below is a simple use case that shows exactly how we can set conditional activation on task flow bindings.

    Steps:
    1. Create an ADF Fusion web application.
    2. Create business components for the EMP table.
    3. Create a view criteria where deptno = :some_bind_variable.
    4. Generate the EmpViewImpl.java file and write the below code, then expose it to the client interface:

        public void filterEmpRecords(Number deptNo){
            // Code to filter the deptnos
            ensureVariableManager().setVariableValue("some_bind_variable", deptNo);
            this.applyViewCriteria(this.getViewCriteria("EmpViewCriteria"));
            this.executeQuery();
        }

    5. Create an ADF task flow with page fragments and drop the above method on the task flow.
    6. Also drop the view activity (showEmp.jsff). Define a control flow case from the above method activity to the view activity. Set the method activity as the default activity.
    7. Create the main.jspx page and drop the above task flow as a region on this page.
    8. Surround the region with a dialog, and surround the dialog with a popup (id is Popup1).
    9. Drop a commandButton on the above page and insert af:showPopupBehavior inside the commandButton:

        <af:commandButton text="show popup" id="cb1">
            <af:showPopupBehavior popupId="::Popup1"/>
        </af:commandButton>

    10. Now if we execute this main page, we will notice that the method action gets called even before the popup is launched. We can avoid this by setting the activation property of the task flow to conditional.
    11. Go to the bindings of the above main page, select the task flow binding, set its activation property to 'conditional' and its active property to the Boolean value #{Somebean.popupVisible}. By default its value should be false. (The resulting binding is sketched after these steps.)
    12. We need to set the above Boolean value to true only when the popup is launched. This can be achieved by inserting a setPropertyListener inside the popup:

        <af:setPropertyListener from="true" to="#{Somebean.popupVisible}" type="popupFetch"/>

    13. Now if we run the page, we will notice that the method action is not called, and only when we click the 'show popup' button does the method action get called.
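
    For reference, after steps 11 and 12 the task flow binding in the page definition ends up looking roughly like this. This is a sketch only: the id and task flow path will differ in your application.

        <taskFlow id="taskflowdefinition1"
                  taskFlowId="/WEB-INF/taskflow-definition.xml#taskflow-definition"
                  activation="conditional"
                  active="#{Somebean.popupVisible}"
                  xmlns="http://xmlns.oracle.com/adf/controller/binding"/>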


  • PowerShell PowerPack Download

    - by BuckWoody
    I read Jeffery Hicks' article in this month's Redmond Magazine on a new add-in for Windows PowerShell 2.0. It's called the PowerShell Pack, and it has some great new features that I plan to put into place on my production systems as soon as I've finished learning and testing them. You can download the pack here if you have PowerShell 2.0. I'm having a lot of fun with it, and I'll blog about what I'm learning here in the near future, but you should check it out. The only issue I have with it right now is that you have to load a module and then use get-help to find out what it does, because I haven't found a lot of other documentation so far (a quick discovery pattern is sketched at the end of this post). The most interesting modules for me are the ones that can run a command elevated (in PSUserTools), the task scheduling commands (in TaskScheduler) and the file system checks and tools (in FileSystem). There's also a way to create simple Graphical User Interface panels (in ). I plan to string all these together to install a management set of tools on my SQL Server Express instances, giving the user "task buttons" to back up or restore a database, add or delete users and so on. Yes, I'll be careful, and yes, I'll make sure the user is allowed to do that. For now, I'm testing the download, but I thought I would share what I'm up to. If you have PowerShell 2.0 and you download the pack, let me know how you use it.

    Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or virtual machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It's just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.
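
    To the point about thin documentation, here's the quick discovery pattern I mentioned. A sketch only: the module name is one the article lists, and the cmdlet you drill into will come from your own listing.

        Import-Module TaskScheduler                # one of the pack's modules
        Get-Command -Module TaskScheduler          # see what the module actually exposes
        Get-Help Some-Cmdlet -Full                 # hypothetical name: substitute one from the listing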


  • Web application / Domain model integration using JSON capable DTOs [on hold]

    - by g-makulik
    I'm a bit confused about architectural choices for the web-applications/Java/Python world. In the C/C++ world, the available (open source) choices to implement web applications are pretty much limited to zero; involving Java or Python, the choices explode into a hard-to-sort-out mess of available 'frameworks' and application approaches. I want to sort out a clean MVC model, where the M stands for a fully blown (POCO/POJO-driven) domain model (according to M. Fowler's EAA patterns), using a mature OO language (Java, C++) for implementation. The background: I have a system with certain hardware components (that introduce system-immanent active behavior) and a configuration database for system meta and HW-component configuration data (these are usually self-contained, since the HW components are capable of persisting their configuration data anyway). For realization of the configuration/status data exchange protocol with the HW components, we have chosen the Google Protobuf format, which works well for directly wired communication with these components. This protocol is already used successfully with a Java-based GUI application via a TCP/IP connection to the main system-controlling HW component. That application has some drawbacks and design flaws for historical reasons. Now we want to develop an abstract model (domain model) for configuration and monitoring of those HW components that represents a more use-case-oriented view of the overall system behavior. I have the feeling that a plain Java class model would fit best for this (a C++ implementation seems to have too much implementation/integration overhead with viable language-bridge interfaces). Google Protobuf message definitions could still serve well to describe DTO objects used to interact with a domain model API. But integrating Google Protobuf messages client side, e.g. for data binding in the current view, doesn't seem to be a good choice. I'm thinking about some extra serialization features, e.g. for JSON-based data exchange with the views/controllers. Most lightweight solutions seem to involve a Python-based presentation layer using JSON-based data transfer (I'm at least not sure I'm fully informed about this). Is there some lightweight framework available (applicable to a limited ARM Linux platform) that supports such an architecture for realizing a web application?

    UPDATE: According to my recent research and comments from colleagues, I've noticed that using Java (and some JVM) might not be the preferable choice for integration with Python on a limited Linux system like ours (running on ARM9 with hard-to-discuss memory and MCU costs), but C/C++ modules would do well for this (since they form the native interface to Python extensions, don't they?). I can imagine providing a domain model from an appropriate C/C++ API (though I still think it means more effort and higher skill requirements for the developers involved with these languages). Still, I'm searching for a good approach that supports such an architecture. I'll appreciate any pointers!


  • How to Run Apache Commands From Oracle HTTP Server 11g Home

    - by Daniel Mortimer
    Every now and then you come across a problem where there is nothing in the "troubleshooting manual" that can help you. Instead you need to think outside the box. This happened to me two or three years back: Oracle HTTP Server (OHS) 11g did not start. The error reported back by OPMN was generic and gave no clue, and worse, the HTTP Server error log was empty, and remained so even after I had increased the OPMN and HTTP Server log levels. After checking configuration files, operating system resources, etc., I was still no nearer the solution. And then the light bulb moment! OHS is based on Apache – what happens if I attempt to start HTTP Server using the native apache command? Trouble was, the OHS 11g solution has its binaries and configuration files in separate "home" directories: ORACLE_HOME contains the binaries, and ORACLE_INSTANCE contains the configuration files. How to set the environment so that native apache commands run without error? Eventually, with help from a colleague, the knowledge article How to Start Oracle HTTP Server 11g Without Using opmnctl [ID 946532.1] was born! To be honest, I cannot remember the exact cause of and solution to that OHS problem two or three years ago. But I do remember that an attempt to start HTTP Server using the native apache command threw back an error to the console, which led me to discover the culprit was some unusual filesystem fault. The other day, I was asked to review and publish a new knowledge article describing how to use the apache command to dump a list of static and shared loaded modules. This got me thinking that it was time [ID 946532.1] was given an update. The result: How To Run Native Apache Commands in an Oracle HTTP Server 11g Environment [ID 946532.1]. Highlights:
    - Title change
    - Improved environment-setting scripts
    - Interactive; there should be no need to manually edit the scripts (although readers are welcome to do so)
    - Automatic dump of some diagnostic information
    - Inclusion of some links to other troubleshooting collateral

    To view the knowledge article you need a My Oracle Support login. For convenience, you can obtain the scripts via the links below (a rough sketch of what they set up appears at the end of this post).
    MS Windows:
    - Wrapper cmd script – calls the main cmd script [after download, remove the ".txt" file extension]
    - Main cmd script – sets the OHS 11g environment to run Apache commands [after download, remove the ".txt" file extension]
    Unix:
    - Shell script – sets the OHS 11g environment to run Apache commands on Unix

    Please note: I cannot guarantee that the scripts held in the blog repository will be maintained. Any enhancements or fixes will be applied to the scripts attached to the knowledge article. Lastly, to find out more about native apache commands, refer to the Apache documentation:
    - apachectl – Apache HTTP Server Control Interface [http://httpd.apache.org/docs/2.2/programs/apachectl.html]
    - httpd – Apache Hypertext Transfer Protocol Server [http://httpd.apache.org/docs/2.2/programs/httpd.html]
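
    As a flavor of what the environment-setting scripts do, here is a rough Unix sketch. All paths are examples only, flags may vary with your Apache version, and the scripts attached to [ID 946532.1] handle more (library paths, per-instance configuration):

        # Example values - substitute your own middleware home and instance
        export ORACLE_HOME=/u01/app/oracle/middleware/ohs_home
        export ORACLE_INSTANCE=/u01/app/oracle/middleware/instances/instance1
        export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH

        # Native apache commands can then run against the OHS configuration, e.g.:
        $ORACLE_HOME/ohs/bin/httpd -f $ORACLE_INSTANCE/config/OHS/ohs1/httpd.conf -t   # syntax check
        $ORACLE_HOME/ohs/bin/httpd -l                                                  # list compiled-in (static) modules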


  • JavaFX Dialogs, Anyone?

    - by HecklerMark
    A common question about JavaFX, especially for those coming from a Swing background, is "How do I do Dialogs?" The reason this is a question at all is that, currently, there is no baked-in capability to do dialog boxes within a pure JavaFX 2.x application. But come on...you wouldn't be reading about this at all if you weren't a resourceful programmer. You have ways of making things happen.  :-) I ran across a decent patch of code recently that handles many of the dialog chores for you. Pros and cons follow, but pointing your browser to this link on Github (appropriately named JavaFXDialog) will get you off to a good start. Here are some screen shots the original code author, Anton Smirnov, provided: Nothing fancy, just clean and functional. Now, about those pros and cons. From my perspective, here's the bottom line: Pros Already developed. Time required to implement is limited to downloading and decompressing the file, doing a bit of reading, and writing a few lines of code to try things out. Easy. Most of the work is done, and the interface is pretty simple. Open source. If you want to make changes - and I'm already thinking along those lines, so you may as well admit you will, too - you can do it. Cons Documentation. What you see on the Wiki page is the extent of it. Lack of activity. As of the date this article was published, the code hasn't been updated in several months...so the project is a bit stale. To be fair, the cons listed above won't cause anyone to lose sleep. After all, you don't expect constant revisions against something that works well enough for most purposes, and if your needs exceed what is there, it's easy to mod the code yourself or "roll your own" if you prefer. The lack of documentation isn't a show-stopper either due to the limited functionality and complexity of the code. Wrapping It Up If you need a quick, drop-in dialog capability for your JavaFX 2.x app, give it a try and see what you think. And if you're already using something you like, please share it as well! I'd love to hear from you, take a look at what you pass along, and maybe do a "dialog shoot-out" article in the future. So..what works for you?  :-) All the best, Mark


  • Is SugarCRM really adequate for custom development?

    - by dukeofgaming
    I used SugarCRM for a project about two years ago. I ran into errors from the very installation, having to hack the actual installation file to deploy the software on the server... and other errors that I can't recall now. Two years later, I'm picking it up for a project once again... oh dear, I'm feeling like I should have developed the whole thing from scratch myself. Some examples:
    - I couldn't install it on the server (again)... I had to install it locally, then copy the files and database over to the server and manually edit the config file.
    - I constantly get deployment errors from the module builder... one reason is that SugarCRM keeps creating a record in the upgrade_history table for a file that does not exist; I keep deleting that record and it keeps coming back corrupt.
    - I get other deployment errors, but have not figured them out... then I have to roll back all files and the database to try again.
    - I deleted a custom module with relationships; the relationships stayed in the other modules and cannot be deleted anymore, with PHP warnings all over the place.
    - Quick create for custom modules does not appear; a hack is needed.
    - Its whole cache directory is a joke: permanent data/files are stored there.
    - Required fields disappear from the module builder interface.
    - Edit the wrong thing and the module builder won't deploy again... then pray Quick Repair and/or Rebuild Relationships do the trick.

    My impression of SugarCRM now is that, regardless of its pretty exterior and apparent functionality, it is a very low quality piece of software. This scared me even more: http://amplicate.com/hate/sugarcrm; a quote: "I wish this info had been available when I tried to implement it 2 years ago... I searched high and low and the only info I found was positive. Yes, it's a piece of crap. The community edition was full of bugs... nothing worked. Essentially I got fired for implementing it. I'm glad though, because now I work for myself, am much happier and make more money... so, I should really thank SugarCRM for sucking so much I guess!" I figured that perhaps some of you have had similar experiences, and have either stuck with SugarCRM or moved on to another solution. I'm very interested in knowing what your resolutions were (or your current situations are) to make up my own mind, since the project I'm working on is long term and I'm feeling SugarCRM will be more an obstacle than an aid. Thanks in advance.

    Read the article

  • ArchBeat Link-o-Rama for 11/15/2011

    - by Bob Rhubart
    • Java Magazine - November/December 2011 - by and for the Java Community: Java Magazine is an essential source of knowledge about Java technology, the Java programming language, and Java-based applications for people who rely on them in their professional careers, or who aspire to.
    • Enterprise 2.0 Conference: November 14-17 | Kellsey Ruppel: "Oracle is proud to be a Gold sponsor of the Enterprise 2.0 West Conference, November 14-17, 2011 in Santa Clara, CA. You will see the latest collaboration tools and technologies, and learn from thought leaders in Enterprise 2.0's comprehensive conference."
    • The Return of Oracle Wikis: Bigger and Better | @oracletechnet: The Oracle Wikis are back - this time with Oracle SSO on top and powered by Atlassian's Confluence technology. These wikis offer quite a bit more functionality than the old platform.
    • Cloud Migration Lifecycle | Tom Laszewski: Laszewski breaks down the four steps in the Set Up phase of the Cloud Migration lifecycle.
    • Architecture all day: Oracle Technology Network Architect Day - Phoenix, AZ - Dec 14: Spend the day with your peers learning from Oracle experts in engineered systems, cloud computing, Oracle Coherence, Oracle WebLogic, and more. Registration is free, but seating is limited.
    • SOA all the Time; Architects in AZ; Clearing Info Integration Hurdles: This week on the Architect Home Page on OTN.
    • Live Webcast: New Innovations in Oracle Linux. Date: Tuesday, November 15, 2011. Time: 9:00 AM PT / Noon ET. Speakers: Chris Mason, Elena Zannoni.
    • People in glass futures should throw stones | Nicholas Carr: "Remember that Microsoft video on our glassy future? Or that one from Corning? Or that one from Toyota?" asks Carr. "What they all suggest, and assume, is that our rich natural 'interface' with the world will steadily wither away as we become more reliant on software mediation."
    • Integration of SABSA Security Architecture Approaches with TOGAF ADM | Jeevak Kasarkod: Jeevak Kasarkod's overview of a new paper from the Open Group and the SABSA Institute, "which delves into the incorporation of risk management and security architecture approaches into a well established enterprise architecture methodology - TOGAF."
    • Cloud Computing at the Tactical Edge | Grace Lewis - SEI: Lewis describes the SEI's work with cloudlets, "lightweight servers running one or more virtual machines (VMs), [that] allow soldiers in the field to offload resource-consumptive and battery-draining computations from their handheld devices to nearby cloudlets."
    • Simplicity Is Good | James Morle: "When designing cluster and storage networking for database platforms, keep the architecture simple and avoid the complexities of multi-tier topologies," says Morle. "Complexity is the enemy of availability."
    • Mainframe as the cloud? | Tom Laszewski: There's nothing new about using the mainframe in the cloud, says Laszewski.
    • Let Devoxx 2011 begin! | The Aquarium: The Aquarium marks the kick-off of Devoxx 2011 with "a quick rundown of the Java EE and GlassFish side of things."

    Read the article

  • Should I be using a JavaScript SPA design when security is important?

    - by ryanzec
    I asked something kind of similar on Stack Overflow with a particular piece of code; however, I want to try to ask this in a broader sense.

    I have a web application that I have started to write in Backbone using a Single Page Architecture (SPA); however, I am starting to second-guess myself because of security. Now, we are not storing and sending credit card information or anything like that through this web application, but we are storing sensitive information that people are uploading to us and will have the ability to re-download too. The obvious security concern I have with JavaScript is that you can't trust anything that comes from JavaScript; however, in a Backbone SPA application, everything is being sent through JavaScript.

    There are two security features that I will have to build in JavaScript: permissions and authentication. The authentication piece is just me overriding the Backbone.Router.prototype.navigate method to check the fragment it is trying to load; if the JavaScript application.session.loggedIn is not set to true (and they are not viewing a non-authenticated page), they are redirected to the login page automatically. The user could easily modify application.session.loggedIn to equal true (or modify the Backbone.Router.prototype.navigate method), but then they would also have to - not so easily - dynamically embed a link into the page (or modify a current one) that has the proper classes, data-* attributes, and href values to load a page that should only be loaded when the user has logged in (and has the permissions).

    So I have an acl object that deals with the permissions stuff. All someone would have to do to view pages, or parts of pages, they should not be able to is call acl.addPermission(resource, permission) with the proper permissions, or modify acl.hasPermission() to always return true, and then navigate away and back to the page.

    Now, certain things in ECMAScript 5 like Object.seal() or Object.freeze() would help with some of this; however, we have to support IE 8, which does not support those pieces of functionality. The REST API also performs security checks on every request, so technically, even if they are able to see parts of the interface that they should not be able to, they still should not be able to actually affect any data.

    The main benefits for me in developing a JavaScript SPA application are that the application is a lot more responsive, since it is only transferring the minimum amount of JSON data for the requested action and performing the minimum amount of work too. There are also other benefits: you are going to have to develop an API for the data (which is good if you want to expand your application to different platforms/technologies), and there is more of a separation between front-end and back-end. However, if security is a concern, is it really wise to go down the road of a JavaScript SPA application for the front-end?
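    For reference, the router-level guard the question describes can be sketched in a few lines of Backbone-style JavaScript. This is a minimal illustration of the pattern, assuming the application.session and acl objects mentioned above already exist; the isPublicRoute and resourceFor helpers are hypothetical stand-ins for an app's own route map, and - as the question itself notes - this is a usability gate only, since the server must still re-check every request:

        // Sketch of the client-side guard described above; illustrative only.
        // Assumes Backbone, application.session, and acl are already loaded.
        var originalNavigate = Backbone.Router.prototype.navigate;

        // Hypothetical helpers: which fragments are public, and which ACL
        // resource a fragment maps to. A real app would have its own maps.
        function isPublicRoute(fragment) {
            return fragment === 'login' || fragment === 'signup';
        }
        function resourceFor(fragment) {
            return fragment.split('/')[0];
        }

        Backbone.Router.prototype.navigate = function (fragment, options) {
            if (!application.session.loggedIn && !isPublicRoute(fragment)) {
                // Not logged in: bounce to the login page.
                return originalNavigate.call(this, 'login', { trigger: true });
            }
            if (!acl.hasPermission(resourceFor(fragment), 'view')) {
                // Logged in but not permitted: refuse to route.
                return originalNavigate.call(this, 'denied', { trigger: true });
            }
            return originalNavigate.call(this, fragment, options);
        };

    Anything in this snippet can be redefined from the browser console, which is exactly the questioner's point: it keeps honest users out of dead-end screens, while the REST API remains the real security boundary.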

    Read the article

  • We’ve Got 10 Free Copies of Microsoft’s Networking Windows 7 eBook to Give Away. Get Yours!

    - by The Geek
    Last month, we reviewed our friend Ciprian's new book from Microsoft Press, Network Your Computers & Devices: Step by Step - and we've twisted his arm until he decided to give away 10 free copies for our readers.

    First, the book: it's a great book that covers networking between computers running Windows 7, XP, Vista, Linux, and even Mac OS X. Just as the title suggests, he's got step-by-step tutorials that explain how to get your network up and running with a minimum of fuss. Want to see for yourself? You can grab a copy of the free sample chapter if you'd like, or you can look through the chapter outline:

    Chapter 1: Setting Up a Router and Devices
    Chapter 2: Setting User Accounts on All Computers
    Chapter 3: Setting Up Your Libraries on All Windows 7 Computers
    Chapter 4: Creating the Network
    Chapter 5: Customizing Network Sharing Settings in Windows 7
    Chapter 6: Creating the Homegroup and Joining Windows 7 Computers
    Chapter 7: Sharing Libraries and Folders
    Chapter 8: Sharing and Working with Devices
    Chapter 9: Streaming Media Over the Network and the Internet
    Chapter 10: Sharing Between Windows XP, Windows Vista, and Windows 7 Computers
    Chapter 11: Sharing Between Mac OS X and Windows 7 Computers
    Chapter 12: Sharing Between Ubuntu Linux and Windows 7 Computers
    Chapter 13: Keeping the Network Secure
    Chapter 14: Setting Up Parental Controls
    Chapter 15: Troubleshooting Network and Internet Problems

    Whether you believe it's the perfect book or not, we're giving away one for free, so keep reading.

    Giveaway Details: Or What You Need to Do

    Since we've got an awful lot of subscribers, and we've only got 10 ebooks to give away, we need a few rules. So here's how you can put your name into the hat for the giveaway:

    Method 1: Leave a comment on the giveaway post over on our Facebook fan page. Obviously you'll need to Like us before you can leave a comment.
    Method 2: If you don't use Facebook, you can tweet this post using the Tweet button at the top of the article.

    Winners: We'll randomly pick 10 winners from those who participate.
    Expiration: This giveaway expires in 3 days, give or take a day. We'll announce the winners and contact them directly.

    So go forth, and get yourself a free ebook! Of course, if you want the print version, you can get that for a discount over on Amazon at the moment.

    Read the article

  • Calling a REST Based JSON Endpoint with HTTP POST and WCF

    - by Wallym
    Note: I always forget this stuff, so I'm putting it on my blog to help me remember it.

    Calling a JSON REST-based service with some params isn't that hard. I have an endpoint that has this interface:

        [WebInvoke(UriTemplate = "/Login",
            Method = "POST",
            BodyStyle = WebMessageBodyStyle.Wrapped,
            RequestFormat = WebMessageFormat.Json,
            ResponseFormat = WebMessageFormat.Json)]
        [OperationContract]
        bool Login(LoginData ld);

    The LoginData class is defined like this:

        [DataContract]
        public class LoginData
        {
            [DataMember]
            public string UserName { get; set; }

            [DataMember]
            public string PassWord { get; set; }

            [DataMember]
            public string AppKey { get; set; }
        }

    Now that you see my method to call to log in, as well as the class that is passed for the login, the body of the login request looks like this:

        { "ld" : { "UserName":"testuser", "PassWord":"ackkkk", "AppKey":"blah" } }

    The header (in Fiddler) looks like this:

        User-Agent: Fiddler
        Host: hostname
        Content-Length: 76
        Content-Type: application/json

    And finally, my URL to POST against is:

        http://www.something.com/...../someservice.svc/Login

    And there you have it: calling a WCF JSON endpoint through REST (and HTTP POST).
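    As a usage note not in the original post: from .NET, one quick way to drive an endpoint like this is WebClient. This is a hedged sketch - the address below is a placeholder for the elided URL above, and the wrapped response shape shown in the comment is my assumption from the Wrapped body style:

        // Hedged client sketch (not from the original post). The URL is a
        // placeholder; substitute the real .svc address.
        using System;
        using System.Net;

        class LoginClient
        {
            static void Main()
            {
                var client = new WebClient();
                client.Headers[HttpRequestHeader.ContentType] = "application/json";

                // WebMessageBodyStyle.Wrapped means the parameter name "ld"
                // wraps the serialized LoginData object.
                string body =
                    "{ \"ld\" : { \"UserName\":\"testuser\", \"PassWord\":\"ackkkk\", \"AppKey\":\"blah\" } }";

                string response = client.UploadString(
                    "http://example.com/someservice.svc/Login", "POST", body);

                // With a wrapped response, expect something like {"LoginResult":true}.
                Console.WriteLine(response);
            }
        }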

    Read the article

  • What I like about WIF’s Claims-based Authorization

    - by Your DisplayName here!
    In "traditional" .NET, with its IPrincipal interface and IsInRole method, developers were encouraged to write code like this:

        public void AddCustomer(Customer customer)
        {
            if (Thread.CurrentPrincipal.IsInRole("Sales"))
            {
                // add customer
            }
        }

    In code reviews I've seen tons of code like this. What I don't like about it is that two concerns in your application get tightly coupled: business logic and security logic. But what happens when the security requirements change - and they will (e.g., members of the Sales role and some other people from different roles need to create customers)? Well, since your security logic is sprinkled across your project, you need to change the security checks in all relevant places (and make sure you don't forget one), and you need to re-test, re-stage and re-deploy the complete app. This is clearly not what we want.

    WIF's claims-based authorization encourages developers to separate business code from authorization policy evaluation. This is a good thing. So the same security check with WIF's out-of-the-box APIs would look like this:

        public void AddCustomer(Customer customer)
        {
            try
            {
                ClaimsPrincipalPermission.CheckAccess("Customer", "Add");

                // add customer
            }
            catch (SecurityException ex)
            {
                // access denied
            }
        }

    Notice the fundamental difference? The security check only describes what the code is doing (represented by a resource/action pair) - it does not state who is allowed to invoke the code. As I mentioned earlier, the who will most probably change over time; the what most probably won't.

    The call to ClaimsPrincipalPermission hands off to another class called the ClaimsAuthorizationManager. This class handles the evaluation of your security policy and is ideally in a separate assembly, to allow updating the security logic independently from the application logic (and vice versa). The claims authorization manager features a method called CheckAccess that receives three values (wrapped inside an AuthorizationContext instance) - the action ("Add"), the resource ("Customer") and the principal (including its claims) in question. CheckAccess then evaluates those three values and returns true/false.

    I really like the separation-of-concerns part here. Unfortunately, there is not much support from Microsoft beyond that point, and without further tooling and abstractions the CheckAccess method quickly becomes *very* complex. But I still think that is the way to go. In the next post I will tell you what I don't like about it (and how to fix it).
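    To make the hand-off concrete, here is a minimal sketch of a custom manager. It assumes the WIF 1.0-era Microsoft.IdentityModel namespaces, and the "Sales" rule is purely illustrative - this is not the tooling the post alludes to:

        // Minimal sketch of a custom policy class (illustrative rule only).
        using System.Linq;
        using Microsoft.IdentityModel.Claims;

        public class AuthorizationManager : ClaimsAuthorizationManager
        {
            public override bool CheckAccess(AuthorizationContext context)
            {
                // Resource and Action arrive as claim collections.
                string resource = context.Resource.First().Value;
                string action = context.Action.First().Value;

                if (resource == "Customer" && action == "Add")
                {
                    // The "who" lives here, not in the business code.
                    return context.Principal.IsInRole("Sales");
                }

                return false;
            }
        }

    The manager is then typically registered in config (the claimsAuthorizationManager element under microsoft.identityModel/service), which is what lets the policy assembly ship and update separately from the application.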

    Read the article

  • Fair Comments

    - by Tony Davis
    To what extent is good code self-documenting? In one of the most entertaining sessions I saw at the recent PASS Summit, Jeremiah Peschka (blog | twitter) got a laugh out of a sleepy post-lunch audience with the following remark: "Some developers say good code is self-documenting; I say, get off my team."

    I silently applauded the sentiment. It's not that all comments are useful, but that I mistrust the basic premise that "my code is so clearly written, it doesn't need any comments". I've read many pieces describing the road to self-documenting code, and my problem with most of them is that they feed the myth that comments in code are a sign of weakness. They aren't; in fact, used correctly, I'd say they are essential.

    Regardless of how far intelligent naming can get you in describing what the code does, or how well any accompanying unit tests can explain to your fellow developers why it works that way, it's no excuse not to document fully the public interfaces to your code. Maybe I just mixed with the wrong crowd while learning my favorite language, but when I open a stored procedure I lose the will even to read it unless I see a big Phil Factor- or Jeff Moden-style header summarizing in plain English what the code does, how it fits into the broader application, and a usage example. This public interface describes the high-level process and should explain the role of the code, clearly, for fellow developers, language non-experts, and even any non-technical stakeholders in the project.

    When you step into the body of the code - the low-level details - then I agree that the rules are somewhat different, especially when code is subject to frequent refactoring that can quickly render comments redundant or misleading. At their worst, here, inline comments are sticking plaster to cover up the scars caused by poor naming conventions, failure in clarity when mapping a complex domain into code, or just by not entirely understanding the problem (// this is the clever part). If you design and refactor your code carefully so that it is as simple as possible, your functions do one thing only, you avoid having two completely different algorithms in the same piece of code, and your functions, classes and variables are intelligently named, then, yes, the need for inline comments should be minimal.

    And yet, even given this, I'd still argue that many languages (T-SQL certainly being one) just don't lend themselves to readability when performing even moderately complex tasks. If the algorithm is complex, I still like to see the occasional helpful comment. Please, therefore, be as liberal as you see fit in the detail of the comments you apply to this editorial, for, like code, it is bound to increase its clarity and usefulness.

    Cheers, Tony.
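    For anyone who hasn't seen one, a header in that style is nothing exotic - just a structured comment at the top of the routine. The sketch below is an invented example (the procedure, columns and history entry are all hypothetical), not one of Phil's or Jeff's actual headers:

        /*
          Summary: Returns a customer's open orders for the nightly
                   fulfilment report.
          Usage:   EXEC dbo.GetOpenOrders @CustomerID = 42;
          Depends: dbo.Orders; called by the ReportRunner agent job.
          History: 2011-11-01  TD  Created.
        */
        CREATE PROCEDURE dbo.GetOpenOrders @CustomerID INT
        AS
        SELECT OrderID, OrderDate
        FROM   dbo.Orders
        WHERE  CustomerID = @CustomerID
          AND  Status = 'Open';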

    Read the article

  • ERP in a Flash! Latest News on JD Edwards and Oracle VM Templates

    - by Kem Butller-Oracle
    Oracle Announces the Availability of Oracle VM Templates for JD Edwards EnterpriseOne 9.1 Update 2 and Tools 9.1 Update 4.4

    Continuing the commitment to rapid and predictable deployments of JD Edwards EnterpriseOne, Oracle announces the general availability of Oracle VM Templates for JD Edwards EnterpriseOne Applications release 9.1 Update 2 and Tools release 9.1 Update 4.4. These templates can be used with Oracle VM for x86, on the Oracle Exalogic Elastic Cloud, and on the Oracle Database Machine.

    Oracle VM Templates for JD Edwards EnterpriseOne accelerate the process of setting up a working environment compared to the traditional installation process. The templates can be a key component of a well-managed cloud infrastructure, allowing system administrators to quickly provision fully functional JD Edwards EnterpriseOne environments for evaluation, development, or production use. The templates contain preconfigured images of the major JD Edwards EnterpriseOne server components, including:

    • Enterprise server
    • HTML server
    • Database server
    • BI Publisher (for use with One View Reporting)
    • Business Services Server and ADF Runtime (for use with Mobile Smartphone Applications)
    • Application Interface Services (new with this release, for use with Mobile Enterprise Applications)
    • Server Manager (new with this release)

    The virtual server images are built on a complete Oracle technology stack, including Oracle VM for x86, Oracle Linux, Oracle WebLogic Server, Oracle Database, and Oracle Business Intelligence Publisher. The templates can be installed into an Oracle VM for x86 system running on standard x86 servers, the Oracle Exalogic Elastic Cloud, and the Oracle Database Appliance as a composite "all-in-one" system. The database can be deployed as a fully preconfigured VM template, or it can be deployed to a preexisting database server, for example, the Oracle Exadata Database Machine or the Oracle Database Appliance.

    This latest set of templates includes the following applications and technology components:

    • JD Edwards EnterpriseOne Applications Release 9.1 Update 2, with ESUs as of April 8, 2014
    • JD Edwards EnterpriseOne Tools 9.1 Update 4, maintenance pack 4 (9.1.4.4)
    • Oracle Database 12c (12.1.0.1)
    • Oracle WebLogic Server 12c (12.1.2)
    • Oracle Linux 5 Update 8, 64-bit
    • Oracle Business Intelligence Publisher 11.1.1.7.1, for use with JD Edwards EnterpriseOne One View Reporting
    • JD Edwards EnterpriseOne Business Services Server and Oracle Application Development Framework (ADF) 11.1.1.5, for use with the JD Edwards EnterpriseOne Mobile Applications

    The delivery also includes a JD Edwards EnterpriseOne deployment server preconfigured to match the content of the templates. This edition of the templates also includes enhanced configuration utilities that greatly simplify the process of configuring the templates for deployment into a running system. The templates are immediately available for download from the Oracle Software Delivery Cloud.

    For more information see:
    • My Oracle Support article 884592.1
    • Oracle Technology Network

    Read the article

  • Deploying Socket.IO App to Windows Azure Web Site with Azure CLI

    - by shiju
    In this blog post, I will demonstrate how to deploy a Socket.IO app to a Windows Azure Web Site using the Windows Azure Cross-Platform Command-Line Interface, which leverages Windows Azure Web Sites' new support for Web Sockets. Recently, Windows Azure announced a lot of enhancements, including support for Web Sockets in Windows Azure Web Sites, which lets Node.js developers deploy Socket.IO apps to Windows Azure Web Sites. In this post, I am using the Windows Azure CLI to create and deploy a Windows Azure Web Site.

    Install the Windows Azure CLI

    The Windows Azure CLI is available as an NPM module, so you can install it using NPM. After installing azure-cli, just enter the command "azure", which will show the useful commands provided by the Azure CLI.

    Import Your Windows Azure Subscription Account

    In order to import our Azure subscription account, we need to download the Windows Azure subscription profile. The Azure CLI command "account download" lets you download the Windows Azure subscription profile; it redirects you to log in to the Windows Azure portal and allows you to download the Windows Azure publish settings file. The "account import" command lets you import the downloaded publish settings file so that you can create and manage Web Sites, Cloud Services, Virtual Machines and Mobile Services in Windows Azure.

    Create a Windows Azure Web Site and Enable Web Sockets

    In this post, we are going to deploy a Socket.IO app to a Windows Azure Web Site using the Web Socket support provided by Windows Azure. Let's create a Web Site named "socketiochatapp" using the Azure CLI. This will create a Windows Azure Web Site and will also initialize a Git repository with a remote named azure. We can see the newly created Web Site from the Azure portal. By default, Web Sockets will be disabled, so let's enable them by navigating to the Configure tab of the Web Site, selecting "ON" for the Web Sockets option, and saving the configuration changes.

    Deploy a Node.js Socket.IO App to Windows Azure

    Now our Windows Azure Web Site supports Web Sockets, so we can easily deploy a Socket.IO app to it. Let's add a Node.js chat app which leverages the Socket.IO module. Please note that you have to add the NPM module dependencies in the package.json file so that Windows Azure can install them when deploying the app. Let's add the Node.js app, add the files to the Git repository, commit the changes to the local repository, and then push the changes to the Windows Azure production environment. The successful deployment can be seen from the Windows Azure portal by navigating to the Deployments tab of the selected Windows Azure Web Site. Our chat app is now running successfully.

    You can follow me on Twitter @shijucv
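    The original post illustrated each step with screenshots of the commands, which don't survive in this excerpt. The transcript below is a reconstruction of those steps using the 2012-era azure-cli; treat the exact flags and the publish settings file name as assumptions to verify against the CLI's built-in help:

        # Install the cross-platform CLI (ships as an NPM module).
        npm install azure-cli -g

        # Download the subscription profile, then import it.
        azure account download
        azure account import <your-subscription>.publishsettings

        # Create the site with a Git deployment remote named "azure".
        azure site create socketiochatapp --git

        # Commit the chat app and push it to the azure remote to deploy.
        git add .
        git commit -m "Socket.IO chat app"
        git push azure master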

    Read the article

  • How I do VCS

    - by Wes McClure
    After years of dabbling with different version control systems and techniques, I wanted to share some of what I like and dislike in a few blog posts. To start this out, I want to talk about how I use VCS in a team environment. These come as a series of tips or best practices that I try to follow. Note: this list is subject to change in the future.

    • Always use some form of version control for all aspects of software development.
      - Development is an evolution. Looking back at where we were is an invaluable asset in that process. This includes data schemas and documentation.
      - Reverting / reapplying changes is absolutely critical for efficient development.
      - The tools I use: Code: Hg (preferred), SVN. Database: TSqlMigrations. Documents: sometimes in the code repository, also SharePoint with versioning.
    • Always tag a commit (changeset) with comments.
      - This is a quick way to describe to someone else (or your future self) what the changeset entails.
      - Be brief but courteous: one or two sentences about the task, not the actual changes.
      - Use precommit hooks or set up the central repository to reject changes without comments.
    • Link changesets to documentation.
      - If your project management system integrates with version control, or has a way to externally reference stories, tasks etc., then leave a reference in the commit. This helps locate more information about the commit and/or related changesets.
      - It's best to have a precommit hook or system that requires this information; otherwise it's easy to forget.
    • The ability to work offline is required, including commits and history.
      - Yes, this requires a DVCS locally, but it doesn't require the central repository to be a DVCS. I prefer to use either Git or Hg, but if it isn't possible to migrate the central repository, it's still possible for a developer to push / pull changes to that repository from a local Hg or Git repository.
    • Never lock resources (files) in a central repository... rude!
      - We have merge tools for a reason; merging sucked a long time ago, but it doesn't anymore... stop locking files!
      - This is unproductive, rude and annoying to other team members.
    • Always review everything in your commit.
      - Never, ever commit a set of files without reviewing the changes in each.
      - Never add a file without asking yourself, deep down inside, does this belong?
      - If you leave to make changes during a review, start the review over when you come back. Never assume you didn't touch a file; double-check.
      - This is another reason why you want to avoid large, infrequent commits.
    • Requirements for tools:
      - Quickly show pending changes for the entire repository.
      - The default action for a resource with pending changes is a diff.
      - A pluggable diff & merge tool.
      - Produce a unified diff or a diff of all changes. This is helpful to bulk-review changes instead of opening each file.
    • The central repository is not your own personal dump yard. Breaking this rule is a sure-fire way to get the F bomb dropped in front of your name, multiple times.
      - If you turn on Visual Studio's commit-on-closing-studio option, I will personally break your fingers. By the way, the person(s) in charge of this feature should be fired and never be allowed near programming, ever again.
    • Commit (integrate) to the central repository / branch frequently.
      - I try to do this before leaving each day, especially without a DVCS. One never knows when they might need to work remotely the following day.
    • Never commit commented-out code.
      - If it isn't needed anymore, delete it! If you aren't sure whether it might be useful in the future, delete it! This is why we have history.
      - If you don't know why it's commented out, figure it out and then either uncomment it or delete it.
    • Don't commit build artifacts, user preferences and temporary files.
      - Build artifacts do not belong in VCS; everything in them is present in the code (ie: bin\*, obj\*, *.dll, *.exe).
      - User preferences are your settings; stop overriding my preferences files! (ie: *.suo and *.user files)
      - Most tools allow you to ignore certain files, and Hg/Git allow you to version this as an ignore file. Set this up as a first step when creating a new repository! (A sample ignore file is sketched after this excerpt.)
    • Be polite when merging unresolved conflicts.
      - Count to 10, cuss, grab a stress ball and realize it's not a big deal. Actually, it's an opportunity to learn that someone else is working in the same area, and you might want to communicate with them.
      - Following the other rules, especially committing frequently, will reduce the likelihood of this.
      - Suck it up; we all have to deal with this unintended consequence at times. Just be careful and GET FAMILIAR with your merge tool. It's really not as scary as you think. I personally prefer KDiff3, as its merging capabilities rock.
      - Don't blindly merge and then blindly commit your changes; this is rude and unprofessional. Make sure you understand why the conflict occurred and which parts of the code you want to keep. Apply scrutiny when you commit a manual merge: review the diff! Make sure you test the changes (build and run automated tests).
    • Become intimate with your version control system and the tools you use with it.
      - Avoid trial and error as much as is possible; sit down and test the tool out, read some tutorials, etc. Create test repositories and walk through common scenarios.
      - Find the most efficient way to do your work. These tools will be used repetitively, so inefficiencies will add up. Sometimes this involves a mix of tools, both GUI and CLI. I like a combination of TortoiseHg and the hg CLI to get the job done efficiently.
    • Always tag releases.
      - Create a way to find a given release, whether this be in comments or an explicit tag / branch. This should be readily discoverable.
      - Create release branches to patch bugs, and then merge the changes back to the other development branch(es).
    • If using feature branches, strive for periodic integrations.
      - Feature branches often cause forked code that becomes irreconcilable. Strive to re-integrate somewhat frequently with the branch this code will ultimately be merged into. This will avoid merge conflicts in the future.
      - Feature branches are best when they are mutually exclusive of active development in other branches.
    • Use and abuse local commits - at least one per task in a story.
      - This builds a trail of changes in your local repository that can be pushed to a central repository when the story is complete.
    • Never commit a broken build or failing tests to the central repository.
      - It's OK for a local commit to break the build and/or tests. In fact, I encourage this if it helps group the changes more logically. This is one of the main reasons I got excited about DVCS: I wanted more than one changeset for a set of pending changes, where some files could be grouped into both changesets (like solution file / project file changes).
    • If you have more than a dozen outstanding changed resources, there should probably be more than one commit involved.
      - Exceptions: when maintaining code bases that require shotgun surgery - in this case, it's a design smell :)
    • Don't version sensitive information.
      - Especially usernames / passwords.

    There is one area I haven't found a solution I like yet: versioning 3rd-party libraries and/or code. I really dislike keeping any assemblies in the repository, but it seems to be a common practice for external libraries. Please feel free to share your ideas about this below.

    -Wes
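    As flagged in the ignore-file tip above, here is a starter Mercurial ignore file built from the exact patterns the post calls out. A minimal sketch, assuming a typical .NET project layout; adjust the globs to your own tree:

        # .hgignore - version this as the first changeset of a new repository.
        syntax: glob

        # Build artifacts (reproducible from the code, so keep them out)
        bin/*
        obj/*
        *.dll
        *.exe

        # Per-user IDE preference files
        *.suo
        *.user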

    Read the article
