Search Results

Search found 6981 results on 280 pages for 'force flow'.


  • Create a Smoother Period Close

    - by Get Proactive Customer Adoption Team
Do You Use Oracle E-Business Suite Products Involved in Accounting Period Closes?

We understand that closing the periods in your system at the end of an accounting period enables your company to make the right business decisions. We also know this requires prior preparation, good procedures, and quality data. To help you meet that need, Oracle E-Business Suite’s proactive support team developed the Period Close Advisor to help your organization conduct a smooth period close for its Oracle E-Business Suite 12 products. The Period Close Advisor is composed of logical steps you can follow, aligned to the business requirement flow. It will help with an orderly close of the product sub-ledgers before posting to the General Ledger. It combines recommendations and industry best practices with tips from subject matter experts for troubleshooting. You will find the patches needed and references to assist you during each phase.

Get to know the E-Business Suite Period Close Advisor
The Period Close Advisor does more than help the users of Oracle E-Business Suite products close their period. You can use it before and throughout the period to stay on track. Proactively, it assists you as you set up your company’s period close process. During the period, it helps you evaluate your system’s readiness for initiating the period close procedures and prepare the system for a smooth period close experience. The Period Close Advisor gets you to answers when you have questions and gives you the latest news from us on Oracle E-Business Suite’s period close. The Period Close Advisor is the right place to start.

How to Use the E-Business Suite Period Close Advisor
The Period Close Advisor graphically guides you through your period close. The tabs show you the products (also called applications or sub-ledgers) covered, and the order in which the products must be processed to handle any dependencies between them. Users of all the products it covers can benefit from the information it contains.

Structure of the Period Close Advisor
Clicking on a tab gives you the details for that particular step in the process. This includes an overview, showing how the products fit into the overall period close process, and step-by-step information on each phase needed to complete the period close for the tab. You will also find multimedia training and related resources you can access if you need more information. Once you click on any of the phases, you see guidance for that phase. This can include:

    • Tips from the subject-matter experts—here are examples from a Cash Management specialist: “For organizations with high transaction volumes, bank statements should be loaded and reconciled on a daily basis.” “The automatic reconciliation process can be set up to create miscellaneous transactions automatically.”
    • References to useful Knowledge Base documents: Information Centers for the products and features; FAQs on functionality; Known Issues and patches with both the errors and their solutions; How-to documents that explain in detail how to use a feature or complete a process; White papers that give an overview of a feature, list the setup required to use the feature, etc.
    • Links to diagnostics that help debug issues you may find in a process
    • Additional information and alerts about a process or reports that can help you prevent issues from surfacing

This excerpt from the “Process Transaction” phase for the Receivables product lists documents you’ll find helpful. 
How to Get Started with the Period Close Advisor
The Period Close Advisor is a great resource that can be used both as a proactive tool (while setting up your period-end procedures) and as the first document to refer to when you encounter an issue during the period close procedures. As mentioned earlier, the order of the product tabs in the Period Close Advisor gives you the recommended order of closing. The first thing to do is to ensure that you are following the prescribed order for closing the period, if you are using more than one sub-ledger. Next, review the information shared in the Evaluate and Prepare and Process Transactions phases. Make sure that you are following the recommended best practices and have applied the recommended patches. The Reconcile phase gives you the recommended steps to follow for reconciling a sub-ledger with the General Ledger. Ensure that your reconciliation procedure aligns with those steps. At any stage during the period close processing, if you encounter an issue, you can revisit the Period Close Advisor. Choose the product you have an issue with and then select the phase you are in. You will be able to review information that can help you find a solution to the issue you are facing.

Stay Informed
Oracle updates the Period Close Advisor as we learn of new issues and information. Bookmark the Oracle E-Business Suite Period Close Advisor [ID 335.1] and keep coming back to it for the latest information on period close.

    Read the article

  • Pages in IE render differently when served through the ASP.NET Development server and Production Server

    - by rajbk
    You see differences in the way IE renders your web application locally on the ASP.NET Development server compared to your production server. Comparing the responses from both servers, including response headers and CSS, shows no difference. The issue may occur because of a setting in IE. In IE, go to Tools –> Compatibility View Settings. When the checkbox “Display intranet sites in Compatibility View” is turned on, it forces IE8 to display web application content in a way similar to how Internet Explorer 7 handles standards-mode web pages. Since your local web server is considered to be in the intranet zone, IE uses “Compatibility View” to render your pages. While you could uncheck this setting or propagate the change to all developers through group policy settings, a different way is described below. To force IE to mimic the behavior of a certain version of IE when rendering pages, you use the meta element to include an “X-UA-Compatible” http-equiv header in your web page, or have it sent as part of the response headers by adding it to your web.config file. The values are listed below:

        <meta http-equiv="X-UA-Compatible" content="IE=4">   <!-- IE5 mode -->
        <meta http-equiv="X-UA-Compatible" content="IE=7.5"> <!-- IE7 mode -->
        <meta http-equiv="X-UA-Compatible" content="IE=100"> <!-- IE8 mode -->
        <meta http-equiv="X-UA-Compatible" content="IE=a">   <!-- IE5 mode -->

    This value can also be set in web.config like so:

        <?xml version="1.0" encoding="utf-8"?>
        <configuration>
          <system.webServer>
            <httpProtocol>
              <customHeaders>
                <clear />
                <add name="X-UA-Compatible" value="IE=EmulateIE7" />
              </customHeaders>
            </httpProtocol>
          </system.webServer>
        </configuration>

    The setting can also be added in the IIS metabase as described here. Similarly, you can do the same in Apache by adding the directive in httpd.conf:

        <Location /store>
          Header set X-UA-Compatible "IE=EmulateIE7"
        </Location>

    Even though it can be done at the site level, I recommend you do it at a per-application level to avoid confusing developers. References: Defining Document Compatibility; Implementing the META Switch on IIS; Implementing the META Switch on Apache
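    If you prefer setting the header in code rather than configuration, a minimal ASP.NET sketch along the same lines should also work (this variant is an addition, not from the original article; verify the emulation value against the IE versions you target):

        // Global.asax.cs - a minimal sketch, not from the original article.
        // Sends X-UA-Compatible on every response so IE renders the application
        // in IE7 emulation mode regardless of intranet-zone compatibility settings.
        using System;
        using System.Web;

        public class Global : HttpApplication
        {
            protected void Application_BeginRequest(object sender, EventArgs e)
            {
                // Mirrors the web.config customHeaders entry shown above.
                Response.AppendHeader("X-UA-Compatible", "IE=EmulateIE7");
            }
        }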

    Read the article

  • C#/.NET Little Wonders: Static Char Methods

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. Oftentimes in our code we deal with the bigger classes and types in the BCL, and occasionally forget that there are some nice methods on the primitive types as well.  Today we will discuss some of the handy static methods that exist on the char (the C# alias of System.Char) type.

    The Background
    I was examining a piece of code this week where I saw the following:

        // need to get the 5th (offset 4) character in upper case
        var type = symbol.Substring(4, 1).ToUpper();

        // test to see if the type is P
        if (type == "P")
        {
            // ... do something with P type...
        }

    Is there really any error in this code?  No, but it still struck me wrong because it is allocating two very short-lived throw-away strings, just to store and manipulate a single char:

        • The call to Substring() generates a new string of length 1
        • The call to ToUpper() generates a new upper-case version of the string from Step 1

    In my mind this is similar to using ToUpper() to do a case-insensitive compare: it isn’t wrong, it’s just much heavier than it needs to be (for more info on case-insensitive compares, see #2 in 5 More Little Wonders). One of my favorite books is C++ Coding Standards: 101 Rules, Guidelines, and Best Practices by Sutter and Alexandrescu.  True, it’s about C++ standards, but there’s also some great general programming advice in there, including two rules I love:

        8. Don’t Optimize Prematurely
        9. Don’t Pessimize Prematurely

    We all know what #8 means: don’t optimize when there is no immediate need, especially at the expense of readability and maintainability.  I firmly believe this and in the axiom: it’s easier to make correct code fast than to make fast code correct.  Optimizing code to the point that it becomes difficult to maintain often gives you little bang for the buck. But what about #9?  Well, for that they state: “All other things being equal, notably code complexity and readability, certain efficient design patterns and coding idioms should just flow naturally from your fingertips and are no harder to write than the pessimized alternatives. This is not premature optimization; it is avoiding gratuitous pessimization.” Or, if I may paraphrase: “where it doesn’t increase the code complexity or hurt readability, prefer the more efficient option”. The example code above was one of those times I feel where we are violating a tacit C# coding idiom: avoid creating unnecessary temporary strings.  The code creates temporary strings to hold one char, which is just unnecessary.  I think the original coder thought he had to do this because ToUpper() is an instance method on string but not on char.  What he didn’t know, however, is that ToUpper() does exist on char, it’s just a static method instead (though you could write an extension method to make it look instance-ish). This leads me (in a long-winded way) to my Little Wonders for the day…

    Static Methods of System.Char
    So let’s look at some of these handy, and often overlooked, static methods on the char type:

        IsDigit(), IsLetter(), IsLetterOrDigit(), IsPunctuation(), IsWhiteSpace() – methods that tell you whether a char (or position in a string) belongs to a category of characters.
        IsLower(), IsUpper() – methods that check whether a char (or position in a string) is lower or upper case.
        ToLower(), ToUpper() – methods that convert a single char to its lower or upper equivalent.

    For example, if you wanted to see if a string contained any lower case characters, you could do the following:

        if (symbol.Any(c => char.IsLower(c)))
        {
            // ...
        }

    Which, incidentally, we could shorten with a method group:

        if (symbol.Any(char.IsLower))
        {
            // ...
        }

    Or, if you wanted to verify that all of the characters in a string are digits:

        if (symbol.All(char.IsDigit))
        {
            // ...
        }

    Also, for the IsXxx() methods, there are overloads that take either a char, or a string and an index; this means that these two calls are logically identical:

        // check given a character
        if (char.IsUpper(symbol[0])) { ... }

        // check given a string and index
        if (char.IsUpper(symbol, 0)) { ... }

    Obviously, if you just have a char, then you’d just use the first form.  But if you have a string you can use either form equally well. As a side note, care should be taken when examining all the available static methods on the System.Char type, as some seem to be redundant but actually have very different purposes.  For example, there are IsDigit() and IsNumber() methods, which sound the same on the surface, but give you different results: IsDigit() returns true if it is a base-10 digit character (‘0’, ‘1’, … ‘9’), where IsNumber() returns true if it’s any numeric character, including the characters for ½, ¼, etc.

    Summary
    To come full circle back to our opening example, I would have preferred the code be written like this:

        // grab 5th char and take upper case version of it
        var type = char.ToUpper(symbol[4]);

        if (type == 'P')
        {
            // ... do something with P type...
        }

    Not only is it just as readable (if not more so), but it performs over 3x faster on my machine:

        1,000,000 iterations of char method took: 30 ms, 0.000050 ms/item.
        1,000,000 iterations of string method took: 101 ms, 0.000101 ms/item.

    It’s not only faster because we don’t allocate temporary strings, but as an added bonus there is less garbage to collect later as well.  To me this qualifies as a case where we are using a common C# performance idiom (don’t create unnecessary temporary strings) to make our code better. Technorati Tags: C#,CSharp,.NET,Little Wonders,char,string
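    The post does not include the timing harness itself; here is a minimal sketch of how such numbers could be gathered (the iteration count matches the output above, but the harness details are an assumption, not the author's code):

        // Minimal benchmark sketch; the symbol value and structure are illustrative.
        using System;
        using System.Diagnostics;

        class CharBenchmark
        {
            static void Main()
            {
                const int iterations = 1000000;
                string symbol = "ABCDP";

                var sw = Stopwatch.StartNew();
                for (int i = 0; i < iterations; i++)
                {
                    var type = char.ToUpper(symbol[4]);          // char method: no allocations
                }
                sw.Stop();
                Console.WriteLine("char method took: {0} ms", sw.ElapsedMilliseconds);

                sw.Restart();
                for (int i = 0; i < iterations; i++)
                {
                    var type = symbol.Substring(4, 1).ToUpper(); // string method: two temporary strings
                }
                sw.Stop();
                Console.WriteLine("string method took: {0} ms", sw.ElapsedMilliseconds);
            }
        }

    In a real measurement you would also want a warm-up pass and a Release build so the JIT does not skew the first loop.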

    Read the article

  • Standards Corner: Preventing Pervasive Monitoring

    - by independentid
    Phil Hunt is an active member of multiple industry standards groups and committees and has spearheaded discussions, creation and ratification of industry standards, including the Kantara Identity Governance Framework, among others. Being an active voice in the industry standards development world, we have invited him to share his discussions, thoughts, news & updates, and discuss use cases, implementation success stories (and even failures) around industry standards on this monthly column.

    Author: Phil Hunt

    On Wednesday night, I watched NBC’s interview of Edward Snowden. The past year has been a tumultuous one in the IT security industry. There have been some amazing revelations about the activities of governments around the world, and we have had several instances of major security bugs in key security libraries: Apple's ‘gotofail’ bug, the OpenSSL Heartbleed bug, not to mention Java’s zero-day bug, and others. Snowden’s information showed the IT industry has been underestimating the need for security, and highlighted a general trend of lax use of TLS and poorly implemented security on the Internet. This did not go unnoticed in the standards community and in particular the IETF. Last November, the IETF (Internet Engineering Task Force) met in Vancouver, Canada, where the issue of “Internet Hardening” was discussed in a plenary session. Presentations were given by Bruce Schneier, Brian Carpenter, and Stephen Farrell describing the problem, the work done so far, and potential IETF activities to address the problem of pervasive monitoring. At the end of the presentation, the IETF called for consensus on the issue. If you know engineers, you know that it takes a while for a large group to arrive at a consensus, and this group numbered approximately 3000. When asked whether the IETF should respond to pervasive surveillance attacks, there was an overwhelming response for ‘Yes'. When it came to 'No', the room echoed in silence. This was just the first of several consensus questions that were each overwhelmingly in favour of a response. This is the equivalent of a unanimous opinion for the IETF. Since the meeting, the IETF has followed through with the recent publication of a new “best practices” document on Pervasive Monitoring (RFC 7258). This document is extremely sensitive in its approach and separates the politics of monitoring from the technical issues:

        Pervasive Monitoring (PM) is widespread (and often covert) surveillance through intrusive gathering of protocol artefacts, including application content, or protocol metadata such as headers. Active or passive wiretaps and traffic analysis, (e.g., correlation, timing or measuring packet sizes), or subverting the cryptographic keys used to secure protocols can also be used as part of pervasive monitoring. PM is distinguished by being indiscriminate and very large scale, rather than by introducing new types of technical compromise.

        The IETF community's technical assessment is that PM is an attack on the privacy of Internet users and organisations. The IETF community has expressed strong agreement that PM is an attack that needs to be mitigated where possible, via the design of protocols that make PM significantly more expensive or infeasible. Pervasive monitoring was discussed at the technical plenary of the November 2013 IETF meeting [IETF88Plenary] and then through extensive exchanges on IETF mailing lists. This document records the IETF community's consensus and establishes the technical nature of PM. 
    The document goes on to further qualify what it means by “attack”, clarifying that:

        The term is used here to refer to behavior that subverts the intent of communicating parties without the agreement of those parties. An attack may change the content of the communication, record the content or external characteristics of the communication, or through correlation with other communication events, reveal information the parties did not intend to be revealed. It may also have other effects that similarly subvert the intent of a communicator.

    The past year has shown that Internet specification authors need to put more emphasis on information security and integrity. The year also showed that specifications alone are not good enough: the implementations of security and protocol specifications have to be of high quality and backed by rigorous testing. I’m proud to say Oracle has been a strong proponent of this, having already established its own secure coding practices. 

    Read the article

  • Using Event Driven Programming in games, when is it beneficial?

    - by Arthur Wulf White
    I am learning ActionScript 3 and I see the event flow adheres to the W3C recommendations. From what I have learned, events can only be captured by the dispatcher unless the listener capturing the event is a DisplayObject on stage and a parent of the object firing the event. You can capture events in the capture (before) or bubbling (after) phase depending on the listener and event setup you use. Does this system lend itself well to game programming? When is this system useful? Could you give an example of a case where using events is a lot better than going without them? Are they somehow better for performance in games? Please do not mention events you must use to get a game running, like Event.ENTER_FRAME, or events that are required to get input from the user, like KeyboardEvent.KEY_DOWN and MouseEvent.CLICK. I am asking if there is any use in firing events that have nothing to do with user input, frame rendering and the like (which are necessary). I am referring to cases where objects are communicating. Is this used to avoid storing a collection of objects that are on the stage? Thanks. Here is some code I wrote as an example of event behavior in ActionScript 3, enjoy.

        package regression
        {
            import flash.display.Shape;
            import flash.display.Sprite;
            import flash.events.Event;
            import flash.events.EventDispatcher;
            import flash.events.KeyboardEvent;
            import flash.events.MouseEvent;
            import flash.events.EventPhase;

            /**
             * ...
             * @author ...
             */
            public class Check_event_listening_1 extends Sprite
            {
                public const EVENT_DANCE : String = "dance";
                public const EVENT_PLAY : String = "play";
                public const EVENT_YELL : String = "yell";

                private var baby : Shape = new Shape();
                private var mom : Sprite = new Sprite();
                private var stranger : EventDispatcher = new EventDispatcher();

                public function Check_event_listening_1()
                {
                    if (stage) init();
                    else addEventListener(Event.ADDED_TO_STAGE, init);
                }

                private function init(e:Event = null):void
                {
                    trace("test begun");
                    addChild(mom);
                    mom.addChild(baby);

                    stage.addEventListener(EVENT_YELL, onEvent);
                    this.addEventListener(EVENT_YELL, onEvent);
                    mom.addEventListener(EVENT_YELL, onEvent);
                    baby.addEventListener(EVENT_YELL, onEvent);
                    stranger.addEventListener(EVENT_YELL, onEvent);

                    trace("\nTest1 - Stranger yells with no bubbling");
                    stranger.dispatchEvent(new Event(EVENT_YELL, false));
                    trace("\nTest2 - Stranger yells with bubbling");
                    stranger.dispatchEvent(new Event(EVENT_YELL, true));

                    stage.addEventListener(EVENT_PLAY, onEvent);
                    this.addEventListener(EVENT_PLAY, onEvent);
                    mom.addEventListener(EVENT_PLAY, onEvent);
                    baby.addEventListener(EVENT_PLAY, onEvent);
                    stranger.addEventListener(EVENT_PLAY, onEvent);

                    trace("\nTest3 - baby plays with no bubbling");
                    baby.dispatchEvent(new Event(EVENT_PLAY, false));
                    trace("\nTest4 - baby plays with bubbling");
                    baby.dispatchEvent(new Event(EVENT_PLAY, true));
                    trace("\nTest5 - baby plays with bubbling but is not a child of mom");
                    mom.removeChild(baby);
                    baby.dispatchEvent(new Event(EVENT_PLAY, true));
                    mom.addChild(baby);

                    stage.addEventListener(EVENT_DANCE, onEvent, true);
                    this.addEventListener(EVENT_DANCE, onEvent, true);
                    mom.addEventListener(EVENT_DANCE, onEvent, true);
                    baby.addEventListener(EVENT_DANCE, onEvent);

                    trace("\nTest6 - Mom dances without bubbling - everyone is listening during capture phase (not target and bubble phase)");
                    mom.dispatchEvent(new Event(EVENT_DANCE, false));
                    trace("\nTest7 - Mom dances with bubbling - everyone is listening during capture phase (not target and bubble phase)");
                    mom.dispatchEvent(new Event(EVENT_DANCE, true));
                }

                private function onEvent(e : Event):void
                {
                    trace("Event was captured");
                    trace("\nTYPE : ", e.type, "\nTARGET : ", objToName(e.target),
                          "\nCURRENT TARGET : ", objToName(e.currentTarget),
                          "\nPHASE : ", phaseToString(e.eventPhase));
                }

                private function phaseToString(phase : int):String
                {
                    switch (phase)
                    {
                        case EventPhase.AT_TARGET :
                            return "TARGET";
                        case EventPhase.BUBBLING_PHASE :
                            return "BUBBLING";
                        case EventPhase.CAPTURING_PHASE :
                            return "CAPTURE";
                        default:
                            return "UNKNOWN";
                    }
                }

                private function objToName(obj : Object):String
                {
                    if (obj == stage) return "STAGE";
                    else if (obj == this) return "MAIN";
                    else if (obj == mom) return "Mom";
                    else if (obj == baby) return "Baby";
                    else if (obj == stranger) return "Stranger";
                    else return "Unknown";
                }
            }
        }

        /* result:
        test begun

        Test1 - Stranger yells with no bubbling
        Event was captured  TYPE : yell  TARGET : Stranger  CURRENT TARGET : Stranger  PHASE : TARGET

        Test2 - Stranger yells with bubbling
        Event was captured  TYPE : yell  TARGET : Stranger  CURRENT TARGET : Stranger  PHASE : TARGET

        Test3 - baby plays with no bubbling
        Event was captured  TYPE : play  TARGET : Baby  CURRENT TARGET : Baby  PHASE : TARGET

        Test4 - baby plays with bubbling
        Event was captured  TYPE : play  TARGET : Baby  CURRENT TARGET : Baby  PHASE : TARGET
        Event was captured  TYPE : play  TARGET : Baby  CURRENT TARGET : Mom  PHASE : BUBBLING
        Event was captured  TYPE : play  TARGET : Baby  CURRENT TARGET : MAIN  PHASE : BUBBLING
        Event was captured  TYPE : play  TARGET : Baby  CURRENT TARGET : STAGE  PHASE : BUBBLING

        Test5 - baby plays with bubbling but is not a child of mom
        Event was captured  TYPE : play  TARGET : Baby  CURRENT TARGET : Baby  PHASE : TARGET

        Test6 - Mom dances without bubbling - everyone is listening during capture phase (not target and bubble phase)
        Event was captured  TYPE : dance  TARGET : Mom  CURRENT TARGET : STAGE  PHASE : CAPTURE
        Event was captured  TYPE : dance  TARGET : Mom  CURRENT TARGET : MAIN  PHASE : CAPTURE

        Test7 - Mom dances with bubbling - everyone is listening during capture phase (not target and bubble phase)
        Event was captured  TYPE : dance  TARGET : Mom  CURRENT TARGET : STAGE  PHASE : CAPTURE
        Event was captured  TYPE : dance  TARGET : Mom  CURRENT TARGET : MAIN  PHASE : CAPTURE
        */

    Read the article

  • BPM 11g - Dynamic Task Assignment with Multi-level Organization Units

    - by Mark Foster
    I've seen several requirements to have a more granular level of task assignment in BPM 11g based on some value in the data passed to the process. Parametric Roles is normally the first port of call to try to satisfy this requirement, but in this blog we will show how a lot of use-cases can be satisfied by the easier to implement and flexible Organization Unit. The Use-Case Task assignment is to an approval group containing several users. At runtime, a location value in the input data determines which of the particular users the task is ultimately assigned to. In this case we use the Demo Community referenced in the SOA Admin Guide, and specifically the "LoanAnalyticGroup" which contains three users; "szweig", "mmitch" & "fkafka". In our scenario we would like to assign a task to "szweig" if the input data specifies that the location is "JapanCentral", to "fkafka" if the location is "JapanNorth" and to "mmitch" if "JapanSouth", and to all of them if the location is "Japan" i.e....   The Process Simple one human task process.... In the output data association of the "Start" activity we need to set the value of the "Organization Unit" predefined variable based on the input data (note that the  predefined variables can only be set on output data associations)....  ...and in the output data association of the human activity we will reset the "Organization Unit" to empty, always good practice to ensure that the Organization Unit will not be used for any subsequent human activities for which we do not require it.... Set Up the Organization Unit  Log in to the BPM Workspace with an administrator user (weblogic/welcome1 in our case) and choose the "Administration" option. Within "Roles" assign the "ProcessOwner" swim-lane for our process to "LoanAnalyticGroup".... Within "Organization Units" we can model our organization.... "Root Organization Unit" as "Japan" and "Child Organization Unit" as "Central", "South" & "North" as shown. As described previously, add user "szweig" to "Central", "mmitch" to "South" and "fkafka" to "North"....   Test the Process Invalid Data  First let us test with invalid data in the input to see what the consequences are, here we use "X" as input.... ...and looking at the instance we can see it has errored.... Organization Unit Root Level Assignment  Now let us see what happens if we have "Japan" in the input data.... ...looking in the "flow trace" we can see that the task has been assigned....  ... but who has the task been assigned to ? Let us look in the BPM Workspace for user "szweig"....  ...and for "mmitch"....  ... and for "fkafka"....  ...so we can see that with an Organization Unit at "Root" level we have successfully assigned the task to all users. Organization Unit Child Level Assignment  Now let us test with "Japan/North" in the input data.... ...and looking in "fkafka" workspace we see the task has been assigned, remember, he was associated with "JapanNorth"....   ... but what about the workspace of "szweig"....  ...no tasks assigned, neither has "mmitch", just as we expected. Summary  We have seen in this blog how to easily implement multi-level dynamic task routing using Organization Units, a common use-case and a simpler solution than Parametric Roles. 

    Read the article

  • Data Security Through Structure, Procedures, Policies, and Governance

    Security Structure and Procedures
    One of the easiest ways to implement security is through the use of structure, in particular the structure in which data is stored. The preferred method for this is through the use of User Roles. These Roles allow specific access to be granted based on what role a user plays in relation to the data they are manipulating. Typical data access actions are defined by the CRUD Principle:

        • Create New Data
        • Read Existing Data
        • Update Existing Data
        • Delete Existing Data

    Based on the actions assigned to a role, users can manipulate data as they need to in order to perform daily business operations.  An example of this can be seen in a hospital, where doctors have been assigned Create, Read, Update, and Delete access to their patients' prescriptions so that a doctor can prescribe and adjust any existing prescriptions as necessary. A nurse, however, will only have Read access on the patient's prescriptions so that they will know what medicines to give to the patients. Notice that nurses do not have access to prescribe new prescriptions, or to update or delete existing ones, because only the patient's doctor has access to perform those actions (a code sketch of this idea appears at the end of this article). With User Roles comes responsibility: companies need to constantly monitor data access to ensure that the proper roles have the most appropriate access levels and that users are not exposed to inappropriate data.  In addition, this also protects against rogue employees gaining access to critical business information that could be destroyed, altered or stolen. It is important that all data access is monitored because of this threat.

    Security Governance
    Current data governance laws regarding security:

        • Health Insurance Portability and Accountability Act (HIPAA)
        • Sarbanes-Oxley Act
        • Database Breach Notification Act

    The US Department of Health and Human Services defines HIPAA as a Privacy Rule. This legislation protects the privacy of individually identifiable health information. Currently, HIPAA sets the national standards for securing electronically protected health records. Additionally, its confidentiality provisions protect identifiable information being used to analyze patient safety events and improve patient safety. In 2002, in the wake of the Enron and WorldCom financial scandals, Senator Paul Sarbanes and Representative Michael Oxley led the creation of the Sarbanes-Oxley Act. This act, administered by the Securities and Exchange Commission (SEC), dramatically altered corporate financial practices and data governance. In addition, it also set specific deadlines for compliance. The Sarbanes-Oxley Act is not a set of standard business rules and does not specify how a company should retain its records; rather, it outlines which pieces of data are to be stored as well as the storage duration. The Database Breach Notification Act requires companies, in the event of a data breach containing personally identifiable information, to notify all California residents whose information was stored on the compromised system at the time of the event, according to Gregory Manter. He further explains that this act is only California legislation. However, it does affect “any person or business that conducts business in California, and that owns or licenses computerized data that includes personal information,” regardless of where the compromised data is located.  Any business that maintains even limited interactions with California residents will find itself subject to the Act's provisions. 
    Security Policies
    All companies must work in accordance with the appropriate city, county, state, and federal laws. One way to ensure that a company is legally compliant is to enforce security policies that adhere to the appropriate legislation in the area or areas that they serve. These types of policies need to be mandated by a company's Security Officer. For smaller companies, these policies need to come from executives, directors, and owners.
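    To tie the User Roles discussion above back to code, here is a minimal sketch of role-based CRUD checks (the types, role names, and access map are hypothetical illustrations, not from the article):

        // A minimal sketch of per-role CRUD permissions, mirroring the
        // doctor/nurse prescription example. All names are illustrative.
        using System;
        using System.Collections.Generic;

        [Flags]
        enum CrudAccess
        {
            None   = 0,
            Create = 1,
            Read   = 2,
            Update = 4,
            Delete = 8,
            All    = Create | Read | Update | Delete
        }

        static class PrescriptionSecurity
        {
            // Doctors get full CRUD on prescriptions; nurses are read-only.
            static readonly Dictionary<string, CrudAccess> RoleAccess =
                new Dictionary<string, CrudAccess>
                {
                    { "Doctor", CrudAccess.All },
                    { "Nurse",  CrudAccess.Read }
                };

            public static bool CanPerform(string role, CrudAccess action)
            {
                CrudAccess granted;
                return RoleAccess.TryGetValue(role, out granted)
                    && (granted & action) == action;
            }
        }

    For example, PrescriptionSecurity.CanPerform("Nurse", CrudAccess.Update) returns false while the same call for "Doctor" returns true; auditing such checks is exactly what the monitoring paragraph above calls for.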

    Read the article

  • BizTalk: Suspend shape and Convoy

    - by Leonid Ganeline
    Part 1: BizTalk: Instance Subscription and Convoys: Details. This is Part 2. I am discussing the Suspend shape together with Convoys and am going to show that using them together is undesirable. In the previous article we investigated Instance Subscriptions and how they can create situations with dangerous zones in processing. Let's start with the Suspend shape. [See the BizTalk Help] "You can use the Suspend shape to make an orchestration instance stop running until an administrator explicitly intervenes, perhaps to reflect an error condition that requires attention beyond the scope of the orchestration. All of the state information for the orchestration instance is saved, and will be reinstated when the administrator resumes the orchestration instance. When an orchestration instance is suspended, an error is raised. You can specify a message string to accompany the error to help the administrator diagnose the situation." On the Suspend shape the orchestration is stopped in the Suspended (Resumable) state. Next we have two choices: one is to resume, and the second is to terminate the orchestration. Is the orchestration stopped or unenlisted? You won't find a note about it anywhere. The fact is the orchestration is stopped and still enlisted. This is very important. So again, the suspended orchestration can be resumed or terminated, and the moment when the operator or the operation script resumes or terminates it can be far away. That is also important. Let's go back to the case from the previous article. Make sure you notice the convoy and the dangerous zone after the last Receive shape. Now we have a Suspend shape inside the orchestration. The first orchestration instance is suspended. The next messages start a new orchestration instance and are consumed by it, right? Wrong! The orchestration is stopped on the Suspend shape but still enlisted. Now the dangerous zone, the "zombie zone", is expanded to the interval between the last Receive and the moment of termination or the end of the orchestration. The new orchestration instance for this convoy will not start until that moment. How fast will an operator find this suspended orchestration? Maybe in hours or days. All this time the orchestration is still enlisted and gathering the convoy messages. We can resume the orchestration, but we cannot resume these messages together with the orchestration. The name Suspended seems misleading. The orchestration can be in the Started (and Enlisted) / Stopped (and Enlisted) / Unenlisted state. The Suspend shape switches the orchestration exactly to the Stopped state. The Stop name would describe the shape clearly and unambiguously, and the Stopped state would describe the orchestration. Imagine we could change BizTalk. The Orchestration Editor could search for these situations and return a compile error. In a similar case the Orchestration Editor forces us to use only an ordered-delivery port with convoys. The run-time core could force an orchestration with a convoy to be suspended in an Unresumable state, meaning the run-time unenlists the orchestration instance subscriptions. The Suspend shape name should be changed: the "Suspend" name is misleading; the "Stop" name is clear and unambiguous. The same goes for the orchestration state; it should be “Stopped”, not “Suspended (Resumable)”.

    Conclusion: Using a Suspend shape together with convoy orchestrations is not recommended.

    Read the article

  • First Foray – About timeout

    - by SQLMonger
    It has been quite a while since I signed up for this blog site, and it is high time that something was posted.  I have a list of topics that I will be working through and posting.  Some, I am sure, will have been posted by others, but I will be sticking to the technical problems and challenges that I’ve recently faced, and the solutions that worked for me.  My motto when learning something new has always been “My kingdom for an example!”, and I plan on delivering useful examples here so others can learn from my efforts, failures and successes.

    A bit of background about me: My name is Clayton Groom. I am a founding partner of a consulting firm in St. Louis, Missouri, Covenant Technology Partners, LLC, and focus on SQL Server Data Warehouse design, Analysis Services and Enterprise Reporting solutions.  I have been working with SQL Server since the early nineties, when it still only ran on OS/2. I love solving puzzles and technical challenges.

    Enough about me. On to a real problem:

    SSIS Connection Time-outs versus Command Time-outs
    Last week, I was working on automating the processing for a large Analysis Services cube.  I had reworked an SSIS package and script task originally posted by Vidas Matelis that automates the process of adding new and dropping old partitions to/from an Analysis Services cube.  I had the package working great, tested, and ready for deployment.  It basically performs a query against the source system to determine if there is new data in the warehouse that will require a new partition to be added to the cube, and it checks the cube to see if there are any partitions present that are no longer needed in a rolling 60-month window. My client uses Tivoli for running all their production jobs, not SQL Agent, so I had to build a command-line file for Tivoli to use to run the package. Everything was going great. I had tested the command file from my development workstation using an XML configuration file to pass server-specific parameters into the package when executed using the DTExec utility. With all the pieces ready, I updated the dtsconfig file to point to the UAT environment and started working with the Tivoli developer to test the job.  On the first run, the job failed, and from what I could see in the SSIS log, it had failed because of a timeout. Other errors in the log made me think that perhaps the connection string had not been passed into the package correctly. We bumped the Connection Manager timeout values from 20 seconds to 120 seconds and tried again. The job still failed. After changing the command line to use the /SET option instead of the /CONFIGFILE option, we tested again, and again failure. After a number of further failed attempts, and after getting the Teradata DBA involved to monitor and see if we were connecting and failing or just failing to connect, we determined that the job was indeed connecting to the server and then disconnecting itself after 30 seconds.  This seemed odd, as we had the timeout values for the connection manager set to 180 seconds by then.  At this point one of the DBAs found a post on the Teradata forum that had the clues to the puzzle: there is a separate “CommandTimeout” custom property on the data source object that may need to be adjusted for longer-running queries.  I opened up the SSIS package, opened the data flow task that generated the partition list table and right-clicked on the data source. From the context menu, I selected “Show Advanced Editor” and found the property. Sure enough, it was set to 30 seconds. 
    The CommandTimeout property can also be edited in the SSIS Properties sheet. In order to determine how long the timeout needed to be, I ran the query from the task in the development environment and received a response in a matter of seconds.  I then tried the same query against the production database and waited several minutes for a response. This did not seem to be a reasonable response time for the query involved, and indeed it wasn’t. The Teradata DBAs adjusted the query governor settings for the service account I was testing with, and we were able to get the response back down under a minute.  Still, I set the CommandTimeout property to a much higher value in case the job was ever started during a time of high demand on the production server. With this change in place, the job finally completed successfully.  The lesson learned for me was two-fold:

        • Always compare query execution times between development and production environments, and don’t assume that production will always be faster.  With higher user demands, query governors, and a whole lot more data, the execution time of even what might seem to be simple queries can vary greatly.
        • SSIS connection time-outs do not affect command time-outs.  Connection timeouts control how long the package will wait for a response from the server before assuming the server is not available or is not responding. Command timeouts control how long a task will wait for results to start being returned before deciding that the server is not responding.

    Both lessons seem pretty straightforward, and I felt pretty sheepish once I finally figured out what the issue was.  To be fair though, in the 5+ years that I have been working with SSIS, I can recall only one other time where I had to set the CommandTimeout property, and that memory only resurfaced while I was penning this post.
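    The same connection-versus-command distinction exists in plain ADO.NET, which may be a more familiar place to see it; here is a minimal sketch (the connection string, query, and timeout values are illustrative, not from this post):

        // Connection timeout vs. command timeout in ADO.NET - a minimal sketch.
        using System.Data.SqlClient;

        class TimeoutDemo
        {
            static void Main()
            {
                // "Connect Timeout" governs how long we wait to establish the session.
                var connStr = "Server=myServer;Database=myDb;" +
                              "Integrated Security=true;Connect Timeout=120";
                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand("SELECT 1", conn)) // placeholder query
                {
                    // CommandTimeout governs how long we wait for results to start
                    // returning - the property that silently defaulted to 30 seconds above.
                    cmd.CommandTimeout = 600;
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read()) { /* consume rows */ }
                    }
                }
            }
        }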

    Read the article

  • External USB 3 drive not recognized

    - by ilan123
    Ubuntu 12.10 64 bit seems not to recognize my external hard disk. It is a Vantec NST-310S3 external disk enclosure with a WD 3TB drive. The disk has two NTFS partitions. My PC is a dual-boot system. Under Windows 7 the hard disk works fine, but I can't make it work with Ubuntu. When the drive is connected to the PC, the command sudo fdisk -l seems to hang forever. Below is the output of lsusb and cat /proc/partitions, first without the external drive and then with it connected. I have also added the last lines of the dmesg command at the end.

    First without the drive:

        ilan@linux:~$ lsusb
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 003: ID 13ba:0017 Unknown PS/2 Keyboard+Mouse Adapter
        Bus 001 Device 004: ID 046d:c50e Logitech, Inc. Cordless Mouse Receiver
        Bus 001 Device 005: ID 0ac8:3420 Z-Star Microelectronics Corp. Venus USB2.0 Camera

        ilan@linux:~$ cat /proc/partitions
        major minor  #blocks  name
           8        0 1953514584 sda
           8        1     102400 sda1
           8        2  629043200 sda2
           8        3  367001600 sda3
           8        4          1 sda4
           8        5  471859200 sda5
           8        6  157286400 sda6
           8        7  324115456 sda7
           8        8    4101120 sda8
          11        0    1048575 sr0

    Second with the USB 3 drive:

        ilan@linux:~$ lsusb
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 004 Device 002: ID 174c:55aa ASMedia Technology Inc.
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 003: ID 13ba:0017 Unknown PS/2 Keyboard+Mouse Adapter
        Bus 001 Device 004: ID 046d:c50e Logitech, Inc. Cordless Mouse Receiver
        Bus 001 Device 005: ID 0ac8:3420 Z-Star Microelectronics Corp. Venus USB2.0 Camera

        ilan@linux:~$ cat /proc/partitions
        major minor  #blocks  name
           8        0 1953514584 sda
           8        1     102400 sda1
           8        2  629043200 sda2
           8        3  367001600 sda3
           8        4          1 sda4
           8        5  471859200 sda5
           8        6  157286400 sda6
           8        7  324115456 sda7
           8        8    4101120 sda8
          11        0    1048575 sr0
           8       16 2930266584 sdb

        ilan@linux:~$ lsusb -v -s 004:002
        Bus 004 Device 002: ID 174c:55aa ASMedia Technology Inc.
        Couldn't open device, some information will be missing
        Device Descriptor:
          bLength                18
          bDescriptorType         1
          bcdUSB               3.00
          bDeviceClass            0 (Defined at Interface level)
          bDeviceSubClass         0
          bDeviceProtocol         0
          bMaxPacketSize0         9
          idVendor           0x174c ASMedia Technology Inc.
          idProduct          0x55aa
          bcdDevice            1.00
          iManufacturer           2
          iProduct                3
          iSerial                 1
          bNumConfigurations      1
          Configuration Descriptor:
            bLength                 9
            bDescriptorType         2
            wTotalLength           44
            bNumInterfaces          1
            bConfigurationValue     1
            iConfiguration          0
            bmAttributes         0xc0
              Self Powered
            MaxPower                0mA
            Interface Descriptor:
              bLength                 9
              bDescriptorType         4
              bInterfaceNumber        0
              bAlternateSetting       0
              bNumEndpoints           2
              bInterfaceClass         8 Mass Storage
              bInterfaceSubClass      6 SCSI
              bInterfaceProtocol     80 Bulk-Only
              iInterface              0
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x81  EP 1 IN
                bmAttributes            2
                  Transfer Type            Bulk
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0400  1x 1024 bytes
                bInterval               0
                bMaxBurst              15
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x02  EP 2 OUT
                bmAttributes            2
                  Transfer Type            Bulk
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0400  1x 1024 bytes
                bInterval               0
                bMaxBurst              15

        ilan@linux:~$ sudo fdisk -l
        [sudo] password for ilan:

        Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
        255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xf1b4f1ee

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048      206847      102400    7  HPFS/NTFS/exFAT
        /dev/sda2          206848  1258293247   629043200    7  HPFS/NTFS/exFAT
        /dev/sda3      1258293248  1992296447   367001600    7  HPFS/NTFS/exFAT
        /dev/sda4      1992298494  3907028991   957365249    f  W95 Ext'd (LBA)
        /dev/sda5      1992298496  2936016895   471859200    7  HPFS/NTFS/exFAT
        /dev/sda6      2936018944  3250591743   157286400    7  HPFS/NTFS/exFAT
        /dev/sda7      3250593792  3898824703   324115456   83  Linux
        /dev/sda8      3898826752  3907028991     4101120   82  Linux swap / Solaris

    dmesg output after connecting the external drive:

        [   23.740567] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
        [   23.740786] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
        [   49.144673] usb 4-1: >new SuperSpeed USB device number 2 using xhci_hcd
        [   49.163039] usb 4-1: >Parent hub missing LPM exit latency info. Power management will be impacted.
        [   49.166789] usb 4-1: >New USB device found, idVendor=174c, idProduct=55aa
        [   49.166793] usb 4-1: >New USB device strings: Mfr=2, Product=3, SerialNumber=1
        [   49.166796] usb 4-1: >Product: AS2105
        [   49.166799] usb 4-1: >Manufacturer: ASMedia
        [   49.166801] usb 4-1: >SerialNumber: 0123456789ABCDEF
        [   49.206372] usbcore: registered new interface driver uas
        [   49.228891] Initializing USB Mass Storage driver...
        [   49.229042] scsi6 : usb-storage 4-1:1.0
        [   49.229115] usbcore: registered new interface driver usb-storage
        [   49.229116] USB Mass Storage support registered.
        [   64.045528] scsi 6:0:0:0: >Direct-Access     WDC WD30 EZRX-00MMMB0     80.0 PQ: 0 ANSI: 0
        [   64.046224] sd 6:0:0:0: >Attached scsi generic sg2 type 0
        [   64.046881] sd 6:0:0:0: >[sdb] Very big device. Trying to use READ CAPACITY(16).
        [   64.047610] sd 6:0:0:0: >[sdb] 5860533168 512-byte logical blocks: (3.00 TB/2.72 TiB)
        [   64.048368] sd 6:0:0:0: >[sdb] Write Protect is off
        [   64.048373] sd 6:0:0:0: >[sdb] Mode Sense: 23 00 00 00
        [   64.048984] sd 6:0:0:0: >[sdb] No Caching mode page present
        [   64.048987] sd 6:0:0:0: >[sdb] Assuming drive cache: write through
        [   64.049297] sd 6:0:0:0: >[sdb] Very big device. Trying to use READ CAPACITY(16).
        [   64.050942] sd 6:0:0:0: >[sdb] No Caching mode page present
        [   64.050944] sd 6:0:0:0: >[sdb] Assuming drive cache: write through
        [   94.245006] usb 4-1: >reset SuperSpeed USB device number 2 using xhci_hcd
        [   94.262553] usb 4-1: >Parent hub missing LPM exit latency info. Power management will be impacted.
        [   94.263805] xhci_hcd 0000:03:00.0: >xHCI xhci_drop_endpoint called with disabled ep ffff8800d37d1c00
        [   94.263808] xhci_hcd 0000:03:00.0: >xHCI xhci_drop_endpoint called with disabled ep ffff8800d37d1c40
        [  125.262722] usb 4-1: >reset SuperSpeed USB device number 2 using xhci_hcd
        [  125.280304] usb 4-1: >Parent hub missing LPM exit latency info. Power management will be impacted.
        [  125.281511] xhci_hcd 0000:03:00.0: >xHCI xhci_drop_endpoint called with disabled ep ffff8800d37d1c00
        [  125.281516] xhci_hcd 0000:03:00.0: >xHCI xhci_drop_endpoint called with disabled ep ffff8800d37d1c40

    Read the article

  • Oracle Fusion Middleware gives you Choice and Portability for Public and Private Cloud

    - by Michelle Kimihira
    Author: Margaret Lee, Senior Director, Product Management, Oracle Fusion Middleware

    Cloud computing allows customers to quickly develop and deploy applications in a shared environment.  The environment can span hardware (IaaS), foundation-layer software (PaaS), and end-user software (SaaS). Cloud computing provides compelling benefits in terms of business agility and IT cost savings.  However, with complex, existing heterogeneous architectures, and concerns for security and manageability, enterprises are challenged to define their cloud strategy.  For most enterprises, the solution is a hybrid of private and public cloud.  Fusion Middleware supports customers’ cloud requirements through choice and portability. Fusion Middleware supports a variety of cloud development and deployment models: Oracle [Public] Cloud; customer private cloud; a hybrid of these two; and the traditional dedicated, on-premise model. Customers can develop applications in any of these models and deploy them in another, providing the flexibility and portability they need. Oracle Cloud is a public cloud offering.  Within Oracle Cloud, Fusion Middleware provides two key offerings: the Developer Cloud Service and the Java Cloud Service.

    Developer Cloud Service
        • Simplify Development: automated provisioned environment; pre-configured and integrated; web-based administration
        • Deploy Automatically: fully integrated with Oracle Cloud for Java deployment; workflow ensures build & test
        • Collaborate & Manage: fits any size team; integrated team source repository; continuous integration; task/defect tracking
        • Integrated with all major IDEs: Oracle JDeveloper; NetBeans; Eclipse

    Java Cloud Service
    Java Cloud Service provides a flexible Java deployment environment for departmental applications and development, staging, QA, training, and demo environments.  It also supports customization deployments for SaaS-based Fusion Applications customers.  Some key features of Java Cloud Service include:
        • WebLogic Server on Exalogic: secure, highly available infrastructure
        • Database Service & IDE integration
        • Open, standards-based
        • Deploy web apps, web services, REST services
        • Fully managed and supported by Oracle

    For more information, please visit Oracle Cloud, Oracle Cloud Java Service and Oracle Cloud Developer Service. If your enterprise prefers a private cloud, for reasons such as security, control, manageability, and complex integration that prevent your applications from being deployed on a public cloud, Fusion Middleware also provides you with the products and tools you need.  Sometimes called private PaaS, private clouds have their predecessors in the shared-services arrangements many large companies have been building in the past decade.  The difference, however, is in the scope of the services and the depth of their capabilities.  In terms of vertical stack depth, private clouds not only provide the hardware and software infrastructure to run your applications, they also provide services, such as integration and security, that your applications need.  Horizontally, private clouds provide monitoring, management, lifecycle, and charge-back capabilities out of the box that shared-services platforms did not have before. 
    Oracle Fusion Middleware includes the complete stack of hardware and software for you to build private clouds:
        • SOA Suite and BPM Suite to support systems integration and process flow between applications deployed on your private cloud and the rest of your organization
        • Identity and Access Management Suite to provide security, provisioning, and access services for applications deployed on your private cloud
        • WebLogic Server to run your applications
        • Enterprise Manager's Cloud Management pack to monitor, manage, and upgrade applications running on your private cloud
        • Exalogic or optimized Oracle-Sun hardware to build out your private cloud

    The most important differentiator for Oracle's cloud solutions is portability between private and public clouds.  This is unique to Oracle because portability requires the vendor to have product depth and breadth in both public cloud services and private cloud product offerings.  Most public cloud vendors cannot provide the infrastructure and tools customers need to build their own private clouds.  Conversely, traditional software tools vendors typically do not have the product and expertise breadth to build out and offer a public cloud.  Oracle can.  It is important for customers that the products and technologies Oracle uses to build its public cloud are the same set that it sells to customers for them to build private clouds.  Fundamentally, that enables skills reuse as well as application portability. For more information on Oracle PaaS offerings, please visit Oracle's product information page.

    Resources
    Follow us on Twitter and Facebook
    Subscribe to our regular Fusion Middleware Newsletter

    Read the article

  • Best Method For Evaluating Existing Software or New Software

    How many of us have been faced with having to decide on an off-the-shelf or a custom-built component, application, or solution to integrate into an existing system or to be the core foundation of a new system? What is the best method for evaluating existing software or new software still in the design phase? One of the industry-preferred methodologies is the Active Reviews for Intermediate Designs (ARID) evaluation process.  ARID is a hybrid of the Active Design Review (ADR) methodology and the Architecture Tradeoff Analysis Method (ATAM).

    So what is ARID? ARID's main goal is to ensure quality, detailed designs in software. One way in which it does this is by empowering reviewers through generic, open-ended survey questions. This approach attempts to remove the possibility of standard answers such as “Yes” or “No”. The ADR process avoids “Yes”/”No” questions because they can be leading, depending on how the question is asked, and because they tend to receive less thought in comparison to more open-ended questions.

    Common Active Design Review Questions:
        • What possible exceptions can occur in this component, application, or solution?
        • How should exceptions be handled in this component, application, or solution?
        • Where should exceptions be handled in this component, application, or solution?
        • How should the component, application, or solution flow, based on the design?
        • What is the maximum execution time for every component, application, or solution?
        • In what environments can this component, application, or solution run?
        • What data dependencies does this component, application, or solution have?
        • What kind of data does this component, application, or solution require?

    Ok, now I know what ARID is; how can I apply it? Let’s imagine that your organization is going to purchase an off-the-shelf (OTS) solution for its customer-relationship management software. What process would we use to ensure that the correct purchase is made? If we use ARID, then we have a series of 9 steps broken up into 2 phases to ensure that the correct OTS solution is purchased.

    Phase 1
        • Identify the Reviewers
        • Prepare the Design Briefing
        • Prepare the Seed Scenarios
        • Prepare the Materials

    When identifying reviewers for a design, it is preferred that they be pulled from a candidate pool of developers who are going to implement the design. The belief is that developers actually implementing the design will have more of a vested interest in ensuring that the design is correct prior to the start of coding. The design briefing consists of a summary of the design and examples of the design solving real-world problems, and should typically be no longer than two hours. The primary goal of this briefing is to summarize the design well enough that the review members could actually implement it. In the example of purchasing an OTS product, I would review my briefing prior to its distribution with the review facilitator to ensure that nothing was excluded that should not have been. This practice also allows me to test the length of the briefing to ensure it can be delivered in an appropriate amount of time. Seed scenarios are designed to illustrate conceptualized scenarios when applied to a set of sample data. These scenarios can then be used by the reviewers in the actual evaluation of the software. All materials needed for the evaluation should be prepared ahead of time so that they can be reviewed prior to and during the meeting. 
    Materials included:
        • Presentation
        • Seed Scenarios
        • Review Agenda

    Phase 2
        • Present ARID
        • Present the Design
        • Brainstorm and prioritize scenarios
        • Apply scenarios
        • Summarize

    Prior to the start of any ARID review meeting, the facilitator should define the remaining steps of ARID so that all the participants know exactly what they will be doing prior to the start of the review process. Once the ARID rules have been laid out, the lead designer presents an overview of the design, which typically takes about two hours. During this time, no questions about the design or its rationale are allowed to be asked by the review panel as a standard, but they are written down for use later in the process. After the presentation, the list of compiled questions is summarized and sent back to the lead designer as areas that need to be addressed further. In the example of purchasing an OTS product, issues could arise regarding security, the implementation needed, or even whether this is the correct product to solve the needed solution. After the design presentation, a brainstorming and prioritization process begins by reducing the seed scenarios down to just the highest-priority scenarios.  These will then be used to test the design for suitability. Once the selected scenarios have been defined, the reviewers apply the examples provided in the presentation to the scenarios. The intended output of this process is code or pseudocode that makes use of the examples provided while solving the selected seed scenarios. As a standard rule, the designers of the system are not allowed to help the review board unless they all become stuck. When this occurs, it is documented along with the reason why the designer needed to get the review panel back on track. Once all of the scenarios have been completed, the review facilitator reviews with the group the issues that arose during the process. The reviewers are then polled as to the efficacy of the review experience.

    References:
    Clements, Paul., Kazman, Rick., Klein, Mark. (2002). Evaluating Software Architectures: Methods and Case Studies. Indianapolis, IN: Addison-Wesley

    Read the article

  • How to Mentor a Junior Developer

    - by Josh Johnson
This title is a little broad but I may need to give a little background before I can ask my question properly. I know that similar questions have been asked here already. But in my case I'm not asking if I should be mentoring someone or if the person is a good fit for being a software developer. That is not my place to judge. I have not been asked outright, but it is apparent that myself and other fellow senior developers are to mentor the new developers that start here. I have no problem with this whatsoever and, in many cases, it lends me a fresh perspective on things and I end up learning in the process. Also, I remember how beneficial it was in the beginning of my career when someone would take some time to teach me something. When I say "new developer" they could be anywhere from fresh out of college to having a year or two of experience.

Recently and in the past we've had people start here who seem to have an attitude toward development/programming which is different from mine and hard for me to reconcile; they seem to extract just enough information to get the task done but not really learn from it. I find myself going over and over the same issues with them. I understand that part of this could be a personality thing, but I feel it's my job to do my best and sort of push them out of the nest while they're under my wing, so to speak. How can I impart just enough information so that they will learn but not give so much as to solve the problem for them? Or perhaps: What's the proper response to questions that are designed to take the path of least resistance and, in essence, force them to learn instead of taking the easy way out? These questions are probably more general teaching questions and don't have that much to do specifically with software development.

Note: I do not get a say in what tasks they are working on. Management doles the tasks out, and they could be anything from a very simple bug fix to starting an entire application by themselves. While this is not ideal by any means and obviously presents its own gauntlet of challenges, I feel it's a topic best left for another question. So the best I can do is help them with the problem at hand, try to help them break it down into simpler problems, and check their commit logs and point out mistakes they made.

My main objectives are to:
- Help them out and give them the tools they need to start becoming more self-reliant.
- Steer them in the right direction and break bad development habits early on.
- Lessen the amount of time I spend with them (the personality type described above seems to need much more one-on-one time and does not do well over IM or email. While that's generally fine, I can't always stop what I'm working on, break my stride, and help them debug an error at a moment's notice; I have my own projects that need to get done).

    Read the article

  • Implement Tree/Details With Taskflow Regions Using EJB

    - by Deepak Siddappa
This article describes how to display a tree/details layout using task flow regions.

Use Case Description
Let us take a scenario where we need to display a tree/details layout: the left region contains a category hierarchy with items listed in a tree structure (for example, Region-Countries-Locations-Departments in tree format), and the right region contains the employees list. In detail, the user may drill down through categories using the tree until employees are listed; clicking a tree node name displays the employee list in the adjacent pane for that tree node.

Implementation Steps
The script for creating the tables and inserting the data required for this application: CreateSchema.sql

Let's create a Java EE web application with entities based on the Regions, Countries, Locations, Departments and Employees tables. Create a stateless session bean and a data control for the stateless session bean. Add the code below to the session bean, expose the method in the local/remote interface, and generate a data control for it. Note: in the code below, "em" is an EntityManager, and java.util.regex.Pattern is assumed to be imported.

public List<Employees> empFilteredByTreeNode(String treeNodeType, String paramValue) {
    String queryString = null;
    try {
        // Use equals() for string comparison; == only compares references in Java.
        if ("null".equals(treeNodeType)) {
            queryString = "select * from Employees emp ORDER BY emp.employee_id ASC";
        } else if (Pattern.matches("[a-zA-Z]+[_]+[a-zA-Z]+[_]+[[0-9]+]+", treeNodeType)) {
            // Region node: join all the way down from regions to employees.
            queryString = "select * from employees emp INNER JOIN departments dept "
                + "ON emp.department_id = dept.department_id JOIN locations loc "
                + "ON dept.location_id = loc.location_id JOIN countries cont "
                + "ON loc.country_id = cont.country_id JOIN regions reg "
                + "ON cont.region_id = reg.region_id and reg.region_name = '" + paramValue
                + "' ORDER BY emp.employee_id ASC";
        } else if (treeNodeType.contains("regionsFindAll_bc_countriesList_1")) {
            // Country node.
            queryString = "select * from employees emp INNER JOIN departments dept "
                + "ON emp.department_id = dept.department_id JOIN locations loc "
                + "ON dept.location_id = loc.location_id JOIN countries cont "
                + "ON loc.country_id = cont.country_id and cont.country_name = '" + paramValue
                + "' ORDER BY emp.employee_id ASC";
        } else if (treeNodeType.contains("regionsFindAll_bc_locationsList_1")) {
            // Location node.
            queryString = "select * from employees emp INNER JOIN departments dept "
                + "ON emp.department_id = dept.department_id JOIN locations loc "
                + "ON dept.location_id = loc.location_id and loc.city = '" + paramValue
                + "' ORDER BY emp.employee_id ASC";
        } else if (treeNodeType.trim().contains("regionsFindAll_bc_departmentsList_1")) {
            // Department node.
            queryString = "select * from Employees emp INNER JOIN Departments dept "
                + "ON emp.DEPARTMENT_ID = dept.DEPARTMENT_ID and dept.DEPARTMENT_NAME = '"
                + paramValue + "'";
        }
    } catch (NullPointerException e) {
        System.out.println(e.getMessage());
    }
    return em.createNativeQuery(queryString, Employees.class).getResultList();
}

In the ViewController project, create two ADF task flows with page fragments and name them FirstTaskflow and SecondTaskflow respectively. Open FirstTaskflow and, from the component palette, drop a view (page fragment) named TreeList.jsff. Open SecondTaskflow, drop a view (page fragment) named EmpList.jsff, and create two parameters in its overview Parameters tab as shown in the image below. Open TreeList.jsff and, from the data control palette, drop regionsFindAll->Tree as an ADF Tree.
In the Edit Tree Binding dialog, for Tree Level Rules, select the display attributes as follows:
model.Regions - regionName
model.Countries - countryName
model.Locations - city
model.Departments - departmentName

In the structure panel, click on af:Tree - t1 and edit its selectionListener property. Create a "TreeBean" managed bean with scope "session" as shown in the image below. Create a new method named getTreeNodeSelectedValue and click OK. Open the TreeBean managed bean and add the code below:

private String treeNodeType;
private String paramValue;

public void getTreeNodeSelectedValue(SelectionEvent selectionEvent) {
    RichTree tree = (RichTree) selectionEvent.getSource();
    RowKeySet addedSet = selectionEvent.getAddedSet();
    Iterator i = addedSet.iterator();
    TreeModel model = (TreeModel) tree.getValue();
    model.setRowKey(i.next());
    JUCtrlHierNodeBinding node = (JUCtrlHierNodeBinding) tree.getRowData();
    Object selectedTreeNode = node.getAttribute(0);   // display value of the clicked node
    Object treeListType = node.getBindings();         // identifies which tree level was clicked
    this.setParamValue(selectedTreeNode.toString());
    this.setTreeNodeType(treeListType.toString());
}

public void setTreeNodeType(String treeNodeType) { this.treeNodeType = treeNodeType; }
public String getTreeNodeType() { return treeNodeType; }
public void setParamValue(String paramValue) { this.paramValue = paramValue; }
public String getParamValue() { return paramValue; }

Open EmpList.jsff and, from the data control palette, drop empFilteredByTreeNode->Employees->Table as an ADF read-only table. After selecting the Employees result set, in the Edit Action Binding dialog window, pass the pageFlowScope parameters as shown in the image below. In the EmpList.jsff page, click the Bindings tab, click Create Executable Binding, select Invoke Action, and follow the steps shown in the image below. Edit the executeEmpFiltered invoke action properties and set Refresh to ifNeeded, so that the method is executed whenever the page needs it. Create a Main.jspx page with the Oracle Three Column Layout page template. Drop FirstTaskflow as a region in the start facet and SecondTaskflow as a region in the center facet; in the Edit Task Flow Binding dialog window, pass the input parameters as shown in the image below. Run Main.jspx: the tree is displayed in the left region and the employee details are displayed in the right region. Click on Americas in the tree; all employees related to the Americas region are displayed. Click on Americas->United States of America->South San Francisco->Accounting; only the employees belonging to the Accounting department are displayed.
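One caution about the session-bean method above: it splices paramValue directly into the SQL text, which leaves it open to SQL injection and defeats statement caching. A safer variant, sketched below under the assumption that the same Employees entity and EntityManager are available (the method name is invented for illustration), binds the value as a positional parameter:

// Hedged sketch, not the article's code: the region filter rewritten with
// a JPA positional bind parameter instead of string concatenation.
public List<Employees> empFilteredByRegion(String regionName) {
    String queryString =
        "select emp.* from employees emp"
        + " JOIN departments dept ON emp.department_id = dept.department_id"
        + " JOIN locations loc ON dept.location_id = loc.location_id"
        + " JOIN countries cont ON loc.country_id = cont.country_id"
        + " JOIN regions reg ON cont.region_id = reg.region_id"
        + " WHERE reg.region_name = ? ORDER BY emp.employee_id ASC";
    return em.createNativeQuery(queryString, Employees.class)
             .setParameter(1, regionName) // bound by the provider, not concatenated
             .getResultList();
}

The same change applies to each branch of empFilteredByTreeNode.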

    Read the article

  • 5 Android Keyboard Replacements to Help You Type Faster

    - by Chris Hoffman
    Android allows developers to replace its keyboard with their own keyboard apps. This has led to experimentation and great new features, like the gesture-typing feature that’s made its way into Android’s official keyboard after proving itself in third-party keyboards. This sort of customization isn’t possible on Apple’s iOS or even Microsoft’s modern Windows environments. Installing a third-party keyboard is easy — install it from Google Play, launch it like another app, and it will explain how to enable it. Google Keyboard Google Keyboard is Android’s official keyboard, as seen on Google’s Nexus devices. However, there’s a good chance your Android smartphone or tablet comes with a keyboard designed by its manufacturer instead. You can install the Google Keyboard from Google Play, even if your device doesn’t come with it. This keyboard offers a wide variety of features, including a built-in gesture-typing feature, as popularized by Swype. It also offers prediction, including full next-word prediction based on your previous word, and includes voice recognition that works offline on modern versions of Android. Google’s keyboard may not offer the most accurate swiping feature or the best autocorrection, but it’s a great keyboard that feels like it belongs in Android. SwiftKey SwiftKey costs $4, although you can try it free for one month. In spite of its price, many people who rarely buy apps have been sold on SwiftKey. It offers amazing auto-correction and word-prediction features. Just mash away on your touch-screen keyboard, typing as fast as possible, and SwiftKey will notice your mistakes and type what you actually meant to type. SwiftKey also now has built-in support for gesture-typing via SwiftKey Flow, so you get a lot of flexibility. At $4, SwiftKey may seem a bit pricey, but give the month-long trial a try. A great keyboard makes all the typing you do everywhere on your phone better. SwiftKey is an amazing keyboard if you tap-to-type rather than swipe-to-type. Swype While other keyboards have copied Swype’s swipe-to-type feature, none have completely matched its accuracy. Swype has been designing a gesture-typing keyboard for longer than anyone else and its gesture feature still seems more accurate than its competitors’ gesture support. If you use gesture-typing all the time, you’ll probably want to use Swype. Swype can now be installed directly from Google Play without the old, tedious process of registering a beta account and sideloading the Swype app. Swype offers a month-long free trial and the full version is available for $1 afterwards. Minuum Minuum is a crowdfunded keyboard that is currently still in beta and only supports English. We include it here because it’s so interesting — it’s a great example of the kind of creativity and experimentation that happens when you allow developers to experiment with their own forms of keyboard. Minuum uses a tiny, minimum keyboard that frees up your screen space, so your touch-screen keyboard doesn’t hog your device’s screen. Rather than displaying a full keyboard on your screen, Minuum displays a single row of letters.  Each letter is small and may be difficult to hit, but that doesn’t matter — Minuum’s smart autocorrection algorithms interpret what you intended to type rather than typing the exact letters you press. Just swipe to the right to type a space and accept Minuum’s suggestion. At $4 for a beta version with no trial, Minuum may seem a bit pricy. But it’s a great example of the flexibility Android allows. 
If there’s a problem with this keyboard, it’s that it’s a bit late — in an age of 5″ smartphones with 1080p screens, full-size keyboards no longer feel as cramped. MessagEase MessagEase is another example of a new take on text input. Thankfully, this keyboard is available for free. MessagEase presents all letters in a nine-button grid. To type a common letter, you’d tap the button. To type an uncommon letter, you’d tap the button, hold down, and swipe in the appropriate direction. This gives you large buttons that can work well as touch targets, especially when typing with one hand. Like any other unique twist on a traditional keyboard, you’d have to give it a few minutes to get used to where the letters are and the new way it works. After giving it some practice, you may find this is a faster way to type on a touch-screen — especially with one hand, as the targets are so large. Google Play is full of replacement keyboards for Android phones and tablets. Keyboards are just another type of app that you can swap in. Leave a comment if you’ve found another great keyboard that you prefer using. Image Credit: Cheon Fong Liew on Flickr     

    Read the article

  • Incomplete upgrade 12.04 to 12.10

    - by David
Everything was running smoothly. Everything had been downloaded from the Internet, packages had been installed, and a prompt asked for some obsolete programs/files to be removed or kept. After that the computer crashed and I had to manually force a shutdown. I turned it on again and, surprise, I was on 12.10! Still, the upgrade was not finished! How can I properly finish that upgrade? Here's the output I got on the command line after following the posted instructions:

i astrill - Astrill VPN client software
i dayjournal - Simple, minimal, digital journal.
i gambas2-gb-form - A gambas native form component
i gambas2-gb-gtk - The Gambas gtk component
i gambas2-gb-gtk-ext - The Gambas extended gtk GUI component
i gambas2-gb-gui - The graphical toolkit selector component
i gambas2-gb-qt - The Gambas Qt GUI component
i gambas2-gb-settings - Gambas utilities class
i A gambas2-runtime - The Gambas runtime
i google-chrome-stable - The web browser from Google
i google-talkplugin - Google Talk Plugin
i indicator-keylock - Indicator for Lock Keys
i indicator-ubuntuone - Indicator for Ubuntu One synchronization s
i A language-pack-kde-zh-hans - KDE translation updates for language Simpl
i language-pack-kde-zh-hans-base - KDE translations for language Simplified C
i libapt-inst1.4 - deb package format runtime library
idA libattica0.3 - a Qt library that implements the Open Coll
idA libbabl-0.0-0 - Dynamic, any to any, pixel format conversi
idA libboost-filesystem1.46.1 - filesystem operations (portable paths, ite
idA libboost-program-options1.46.1 - program options library for C++
idA libboost-python1.46.1 - Boost.Python Library
idA libboost-regex1.46.1 - regular expression library for C++
i libboost-serialization1.46.1 - serialization library for C++
idA libboost-signals1.46.1 - managed signals and slots library for C++
idA libboost-system1.46.1 - Operating system (e.g. diagnostics support
idA libboost-thread1.46.1 - portable C++ multi-threading
i libcamel-1.2-29 - Evolution MIME message handling library
i libcmis-0.2-0 - CMIS protocol client library
i libcupsdriver1 - Common UNIX Printing System(tm) - Driver l
i libdconf0 - simple configuration storage system - runt
i libdvdcss2 - Simple foundation for reading DVDs - runti
i libebackend-1.2-1 - Utility library for evolution data servers
i libecal-1.2-10 - Client library for evolution calendars
i libedata-cal-1.2-13 - Backend library for evolution calendars
i libedataserver-1.2-15 - Utility library for evolution data servers
i libexiv2-11 - EXIF/IPTC metadata manipulation library
i libgdu-gtk0 - GTK+ standard dialog library for libgdu
i libgdu0 - GObject based Disk Utility Library
idA libgegl-0.0-0 - Generic Graphics Library
idA libglew1.5 - The OpenGL Extension Wrangler - runtime en
i libglew1.6 - OpenGL Extension Wrangler - runtime enviro
i libglewmx1.6 - OpenGL Extension Wrangler - runtime enviro
i libgnome-bluetooth8 - GNOME Bluetooth tools - support library
i libgnomekbd7 - GNOME library to manage keyboard configura
idA libgsoap1 - Runtime libraries for gSOAP
i libgweather-3-0 - GWeather shared library
i libimobiledevice2 - Library for communicating with the iPhone
i libkdcraw20 - RAW picture decoding library
i libkexiv2-10 - Qt like interface for the libexiv2 library
i libkipi8 - library for apps that want to use kipi-plu
i libkpathsea5 - TeX Live: path search library for TeX (run
i libmagickcore4 - low-level image manipulation library
i libmagickwand4 - image manipulation library
i libmarblewidget13 - Marble globe widget library
idA libmusicbrainz4-3 - Library to access the MusicBrainz.org data
i libnepomukdatamanagement4 - Basic Nepomuk data manipulation interface
i libnux-2.0-0 - Visual rendering toolkit for real-time app
i libnux-2.0-common - Visual rendering toolkit for real-time app
i libpoppler19 - PDF rendering library
i libqt3-mt - Qt GUI Library (Threaded runtime version),
i librhythmbox-core5 - support library for the rhythmbox music pl
i libusbmuxd1 - USB multiplexor daemon for iPhone and iPod
i libutouch-evemu1 - KernelInput Event Device Emulation Library
i libutouch-frame1 - Touch Frame Library
i libutouch-geis1 - Gesture engine interface support
i libutouch-grail1 - Gesture Recognition And Instantiation Libr
idA libx264-120 - x264 video coding library
i libyajl1 - Yet Another JSON Library
i linux-headers-3.2.0-29 - Header files related to Linux kernel versi
i linux-headers-3.2.0-29-generic - Linux kernel headers for version 3.2.0 on
i linux-image-3.2.0-29-generic - Linux kernel image for version 3.2.0 on 64
i mplayerthumbs - video thumbnail generator using mplayer
i myunity - Unity configurator
i A openoffice.org-calc - office productivity suite -- spreadsheet
i A openoffice.org-writer - office productivity suite -- word processo
i python-brlapi - Python bindings for BrlAPI
i python-louis - Python bindings for liblouis
i rts-bpp-dkms - rts-bpp driver in DKMS format.
i system76-driver - Universal driver for System76 computers.
i systemconfigurator - Unified Configuration API for Linux Instal
i systemimager-client - Utilities for creating an image and upgrad
i systemimager-common - Utilities and libraries common to both the
i systemimager-initrd-template-am - SystemImager initrd template for amd64 cli
i touchpad-indicator - An indicator for the touchpad
i ubuntu-tweak - Ubuntu Tweak
i A unity-lens-utilities - Unity Utilities lens
i A unity-scope-calculator - Calculator engine
i unity-scope-cities - Cities engine
i unity-scope-rottentomatoes - Unity Scope Rottentomatoes
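A standard way to let dpkg and apt finish what the interrupted upgrade started is the sequence below. This is a general recovery recipe rather than something from the thread above, so treat it as a starting point and back up your data first:

sudo dpkg --configure -a    # finish configuring any half-installed packages
sudo apt-get -f install     # repair broken or missing dependencies
sudo apt-get update
sudo apt-get dist-upgrade   # install anything the release upgrade never got to
sudo apt-get autoremove     # optionally clear out obsolete packages such as those listed above

If these complete without errors and lsb_release -a reports 12.10, the upgrade can be considered finished.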

    Read the article

  • Wisdom Lies in Collaborative Power and Intelligence

    - by kellsey.ruppel
By Alakh Verma, Director, Platform Technology Solutions

In my recent blog posts, I shared insights on Predictive Analytics (Will Predictive Analytics at 'Speed of Thoughts' Help Businesses?), Real Time Decisions (How critical are Real Time decisions in business today?) and their significance in our lives in general and in businesses today. In the current business paradigm shift, with evolutionary social business, it is paramount that businesses look for wisdom in collaborative power and intelligence and equip their employees with the tools to engage with one another. There is an old saying that five sticks tied together are stronger, and harder to break, than an individual stick. We have recently witnessed the power of ordinary people uniting and fighting collaboratively, using Facebook and Twitter, to topple dictators in Tunisia, Egypt and Libya, and they are threatening absolute rule in Syria. And in India, one man's (Anna Hazare) campaign against corruption went viral, bringing thousands to the streets in support.

As anyone who has worked in a sizeable organization knows, there is no guarantee that the organization as a whole will perform efficiently and achieve its goals, even if each employee is individually efficient and every team has a high level of productivity. To achieve enterprise productivity, it is necessary not only for individuals and groups to "do things right" by working productively but also for the enterprise as a whole to "do the right things": form the right teams, make the right decisions, allocate resources correctly, and effectively coordinate activities across the entire organization. Most organizations fall short of the optimal level of enterprise productivity because of one or more of these reasons, all at a great cost to the business. They are disconnected from themselves, with various parts of the organization unintentionally working at cross-purposes with each other. Information that exists is not getting shared or reused. Human talent is not being applied where it is most needed. The same problems are being solved repeatedly by multiple groups.

Intelligent collaboration through automated business processes has the ability to alter the course of any important business activity, with a potentially dramatic impact on the financial performance of the business. Whether it is a simple email exchange, a physical or virtual meeting, a task force, or a large-scale project, the activity is inherently collaborative. In fact, collaboration can be defined as the work that takes place among people when a business process is not pre-determining how the work should take place. Collaboration is many things: information sharing, brainstorming, problem solving, best practice negotiation, innovation, coordination of activity, alignment of purpose, and so forth. Collaboration is the "white space" between the business processes; it is the glue that holds an organization together, and the lubricant that allows the machinery to keep running. Real-time search and collaborative capabilities, connecting the right people with the right content and supported by defined processes, will provide unparalleled wisdom in the organization in today's most competitive business environment. Interestingly, technologies such as Oracle WebCenter offer these capabilities for our web-based business transactions and complement the overall collaborative intelligence and power to truly transform organizations into social businesses.
Looking to learn more about engaging your employees to collaborate together and providing a complete user experience for your customers? You won't want to miss our webcast today! Drive Online Engagement with Intuitive Portals and Websites

    Read the article

  • What You Can Learn from the NFL Referee Lockout

    - by Christina McKeon
American football is a lot like religion. The fans are devoted followers who take brand loyalty to a whole new level. These fans who worship their teams each week have shown that they are powerful customers whose voice has an impact. Yesterday, these fans proved that their opinion could force the hand of a large and powerful institution.

With a three-month NFL referee lockout that seemed nowhere close to resolution, the Green Bay Packers and the Seattle Seahawks competed last Monday night. For those of you who might have been out of the news cycle the past few days, Green Bay lost the game due to a controversial call that many experts and analysts agree should have resulted in Green Bay winning the game. Outrage ensued. The NFL had pulled replacement referees from the high school ranks, and these replacements did not have the knowledge and experience to handle high-intensity NFL games. Fans protested about their customer experience. Their anger-filled rants were heard in social media, in the headlines of newspapers, on radio, and on national TV. Suddenly, the NFL was moved to reach an agreement with the referees. That agreement was reached late in the night on Wednesday, with many believing that the referees had the upper hand, forcing the owners into submission. Some might argue that the referees benefited, not the fans. Since the fans wanted qualified and competent referees, I would say the fans did benefit. The referees are scheduled to return to the field this Sunday, so the fans got what they wanted.

What can you learn from this negative customer experience?

Customers are in control. NFL owners thought they were controlling this situation with the upper hand over the referees. The owners figured out they weren't in control when their fans reacted negatively. Customers can make or break you more now than ever before, which is why it is more important to connect with them, engage them in a personal manner, and create rewarding relationships.

Protect your brand. Whether knowingly or unknowingly, the NFL put its brand and each team's brand at risk with replacement referees. Think about each business decision you make and how it may impact your brand at different points in time. A decision that results in a gain today could result in a larger loss down the road.

Customer experience matters. The NFL likely foresaw declining revenues in ticket sales, merchandising, advertising, and other areas if the lockout continued. While fans primarily spoke with their minds in the days following the Green Bay debacle, their wallets would be the next things to speak. Customer experience directly affects your success and is one of the few areas where you can differentiate your business.

What would you do if your brand got such negative attention? Would you be prepared to navigate such stormy waters? Would you be able to prevent such a fiasco? If you don't have a good answer to these questions, consider joining us October 3-5, 2012 at the Oracle Customer Experience Summit in San Francisco. You'll have the opportunity to learn even more about customer experience from industry experts such as best-selling author Seth Godin, Paul Hagen and Kerry Bodine from Forrester Research, Inc., George Kembel from the Stanford d.School, Bruce Temkin of The Temkin Group, and Gene Alvarez from Gartner, Inc. There will also be plenty of your peers and customer experience experts available for networking and discussions.

    Read the article

  • Pace Layering Comes Alive

    - by Tanu Sood
Rick Beers is Senior Director of Product Management for Oracle Fusion Middleware. Prior to joining Oracle, Rick held a variety of executive operational positions at Corning, Inc. and Bausch & Lomb. With a professional background that includes senior management positions in manufacturing, supply chain and information technology, Rick brings a unique set of experiences to cover the impact that technology can have on business models, processes and organizations. Rick hosts the IT Leaders Editorial on a monthly basis.

By now, readers of this column are quite familiar with Oracle AppAdvantage, a unified framework of middleware technologies, infrastructure and applications utilizing a pace layered approach to enterprise systems platforms.

1. Standardize and Consolidate core Enterprise Applications by removing invasive customizations, costly workarounds and the complexity that multiple instances create.
2. Move business-specific processes and applications to the Differentiate layer, thus creating greater business agility with process extensions and best-of-breed applications managed by cross-application process orchestration.
3. The Innovate layer contains all the business capabilities required for engagement, collaboration and intuitive decision making. This is the layer where innovation will occur, as people engage one another in a secure yet open and informed way.
4. Simplify IT by minimizing complexity, improving performance and lowering cost with secure, reliable and managed systems across the entire enterprise.

But what hasn't been discussed is the pace layered architecture that Oracle AppAdvantage adopts. What is it, what are its origins and why is it relevant to enterprise-scale applications and technologies? It's actually a fascinating tale that spans the past 20 years, and a basic understanding of it provides a wonderful context to what is evolving as the future of enterprise systems platforms.

It all begins in 1994 with a book by noted architect Stewart Brand, of 'Whole Earth Catalog' fame. In his 1994 book How Buildings Learn, Brand popularized the term 'Shearing Layers', arguing that any building is actually a hierarchy of pieces, each of which inherently changes at different rates. In 1997 he produced a six-part BBC series adapted from the book, in which Part 6 focuses on Shearing Layers. In this segment Brand begins to introduce the concept of 'pace'. Brand further refined this idea in his subsequent book, The Clock of the Long Now, which began to link the concept of Shearing Layers to computing and introduced the term 'pace layering', where he proposes that: "An imperative emerges: an adaptive [system] has to allow slippage between the differently-paced systems … otherwise the slow systems block the flow of the quick ones and the quick ones tear up the slow ones with their constant change.
Embedding the systems together may look efficient at first but over time it is the opposite and destructive as well." In 2000, IBM architects Ian Simmonds and David Ing published a paper entitled A Shearing Layers Approach to Information Systems Development, which applied the concept of Shearing Layers to systems design and development. It argued that systems at the time were still too rigid; that they constrained organizations by their inability to adapt to change. The findings in the Conclusions section are particularly striking: "Our starting motivation was that enterprises need to become more adaptive, and that an aspect of doing that is having adaptable computer systems. The challenge is then to optimize information systems development for change (high maintenance) rather than stability (low maintenance). Our response is to make it explicit within software engineering the notion of shearing layers, and explore it as the principle that systems should be built to be adaptable in response to the qualitatively different rates of change to which they will be subjected. This allows us to separate functions that should legitimately change relatively slowly and at significant cost from that which should be changeable often, quickly and cheaply."

The problem at the time, of course, was that this vision of adaptable systems was simply not possible within the confines of first-generation ERP, which was conceived, designed and developed for standardization and compliance. It wasn't until the maturity of open, standards-based integration, and the middleware innovation that followed, that pace layering became an achievable goal. And Oracle is leading the way. Oracle's AppAdvantage framework makes pace layering come alive by taking a strategic vision 20 years in the making and transforming it into reality. It allows enterprises to retain and even optimize their existing ERP systems, while wrapping around those ERP systems three layers of capabilities that inherently adapt as needed, at a pace that's optimal for the enterprise.

    Read the article

  • Seven Random Thoughts on JavaOne

    - by HecklerMark
    As most people reading this blog may know, last week was JavaOne. There are a lot of summary/recap articles popping up now, and while I didn't want to just "add to pile", I did want to share a few observations. Disclaimer: I am an Oracle employee, but most of these observations are either externally verifiable or based upon a collection of opinions from Oracle and non-Oracle attendees alike. Anyway, here are a few take-aways: The Java ecosystem is alive and well, with a breadth and depth that is impossible to adequately describe in a short post...or a long post, for that matter. If there is any one area within the Java language or JVM that you would like to - or need to - know more about, it's well-represented at J1. While there are several IDEs that are used to great effect by the developer community, NetBeans is on a roll. I lost count how many sessions mentioned or used NetBeans, but it was by far the dominant IDE in use at J1. As a recent re-convert to NetBeans, I wasn't surprised others liked it so well, only how many. OpenJDK, OpenJFX, etc. Many developers were understandably concerned with the change of sponsorship/leadership when Java creator and longtime steward Sun Microsystems was acquired by Oracle. The read I got from attendees regarding Oracle's stewardship was almost universally positive, and the push for "openness" is deep and wide within the current Java environs. Few would probably have imagined it to be this good, this soon. Someone observed that "Larry (Ellison) is competitive, and he wants to be the best...so if he wants to have a community, it will be the best community on the planet." Like any company, Oracle is bound to make missteps, but leadership seems to be striking an excellent balance between embracing open efforts and innovating in competitive paid offerings. JavaFX (2.x) isn't perfect or comprehensive, but a great many people (myself included) see great potential, are developing for it, and are really excited about where it is and where it may be headed. This is another part of the Java ecosystem that has impressive depth for being so new (JavaFX 1.x aside). If you haven't kicked the tires yet, give it a try! You'll be surprised at how capable and versatile it is, and you'll probably catch yourself smiling while coding again.  :-) JavaEE is everywhere. Not exactly a newsflash, but there is a lot of buzz around EE still/again/anew. Sessions ranged from updated component specs/technologies to Websockets/HTML5, from frameworks to profiles and application servers. Programming "server-side" Java isn't confined to the server (as you no doubt realize), and if you still consider JavaEE a cumbersome beast, you clearly haven't been using the last couple of versions. Download GlassFish or the WebLogic Zip distro (or another JavaEE 6 implementation) and treat yourself. JavaOne is not inexpensive, but to paraphrase an old saying, "If you think that's expensive, you should try ignorance." :-) I suppose it's possible to attend J1 and learn nothing, but you'd have to really work at it! Attending even a single session is bound to expand your horizons and make you approach your code, your problem domain, differently...even if it's a session about something you already know quite well. The various presenters offer vastly different perspectives and challenge you to re-think your own approach(es). 
And finally, if you think the scheduled sessions are great - and make no mistake, most are clearly outstanding - wait until you see what you pick up from what I like to call the "hallway sessions". Between the presentations, people freely mingle in the hallways, go to lunch and dinner together, and talk. And talk. And talk. Ideas flow freely, sparking other ideas and the "crowdsourcing" of knowledge in a way that is hard to imagine outside of a conference of this magnitude. Consider this the "GO" part of a "BOGO" (Buy One, Get One) offer: you buy the ticket to the "structured" part of JavaOne and get the hallway sessions at no additional charge. They're really that good. If you weren't able to make it to JavaOne this year, you can still watch/listen to the sessions online by visiting the JavaOne course catalog and clicking the media link(s) in the right column - another demonstration of Oracle's commitment to the Java community. But make plans to be there next year to get the full benefit! You'll be glad you did. All the best,Mark P.S. - I didn't mention several other exciting developments in areas like the embedded space and the "internet of things" (M2M), robotics, optimization, and the cloud (among others), but I think you get the idea. JavaOne == brainExpansion;  Hope to see you there next year!

    Read the article

  • How to Speed Up Any Android Phone By Disabling Animations

    - by Chris Hoffman
    Android phones — and tablets, too — display animations when moving between apps and screens. These animations look very slick, but they waste time — especially on fast phones, which could switch between apps instantly if not for the animations. Disabling these animations will speed up navigating between different apps and interface screens on your phone, saving you time. You can also speed up the animations if you’d rather see them. Access the Developer Options Menu First, we’ll need to access the Developer Options menu. It’s hidden by default so Android users won’t stumble across it unless they’re actually looking for it. To access the Developer Options menu, open the Settings screen, scroll down to the bottom of the list, and tap the About phone or About tablet option. Scroll down to the Build number field and tap it repeatedly. Eventually, you’ll see a message appear saying “You are now a developer!”. The Developer options submenu now appears on the Settings screen. You’ll find it near the bottom of the list, just above the About phone or About tablet option. Disable Interface Animations Open the Developer Options screen and slide the switch at the top of the screen to On. This allows you to change the hidden options on this screen. If you ever want to re-enable the animations and revert your changes, all you have to do is slide the Developer Options switch back to Off. Scroll down to the Drawing section. You’ll find the three options we want here — Window animation scale, Transition animation scale, and Animator duration scale. Tap each option and set it to Animation off to disable the associated animations. If you’d like to speed up the animations without disabling them entirely, select the Animation .5x option instead. If you’re feeling really crazy, you can even select longer animation durations. You can make the animations take as much as ten times longer with the Animation 10x setting. The Animator duration scale option applies to the transition animation that appears when you tap the app drawer button on your home screen.  Your change here won’t take effect immediately — you’ll have to restart Android’s launcher after changing the Animator duration scale setting. To restart Android’s launcher, open the Settings screen, tap Apps, swipe over to the All category, scroll down, and tap the Launcher app. Tap the Force stop button to forcibly close the launcher, then tap your device’s home button to re-launch the launcher. Your app drawer will now open immediately, too. Now whenever you open an app or transition to a new screen, it will pop up as quickly as possible — no waiting for animations and wasting processing power rendering them. How much of a speed improvement you’ll see here depends on your Android device and how fast it is. On our Nexus 4, this change makes many apps appear and become usable instantly if they’re running in the background. If you have a slower device, you may have to wait a moment for apps to be usable. That’s one of the big reasons why Android and other operating systems use animations. Animations help paper over delays that can occur while the operating system loads the app.     
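If you're comfortable with adb, the same three animation scales can also be set from a computer instead of tapping through the Developer Options screen. This is an aside, not part of the original article; the keys below are the standard global settings on recent Android versions, but verify them on your own device:

adb shell settings put global window_animation_scale 0
adb shell settings put global transition_animation_scale 0
adb shell settings put global animator_duration_scale 0

Setting each value back to 1 restores the default animation speed, and 0.5 matches the "Animation .5x" option described above.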

    Read the article

  • Social Search: Looking for Love

    - by Mike Stiles
For marketers and enterprise executives who have placed a higher priority on and allocated bigger budgets to search over social, it might be time to notice yet another shift that's well underway. Social is search.

Search marketing was always more of an internal slam-dunk than other digital initiatives. Even a C-suite that understood little about the new technology world knew it's a good thing when people are able to find you. Google was the new Yellow Pages. Only with Google, you could get your listing first without naming yourself "AAAA Plumbing." There were wizards out there who could give your business prominence in front of people who were specifically looking for what you offered. Other search giants like Bing also came along to offer such ideal matchmaking possibilities. But what if the consumer isn't using a search engine to find what they're looking for? And what if the search engines started altering their algorithms so that search placement manipulation was more difficult? Both of those things have started to happen. Experian Hitwise's numbers show that visits to the major search engines in the UK dropped 100 million through August. Search engines are far from dead, or even challenged. But more and more, the public is discovering the sites and brands they need through advice they get via social, not search.

You'll find the worlds of social and search increasingly co-mingling as well. Search behemoths Google and Bing are integrating Facebook and Google+ into their engines, while Facebook and Twitter have done some integration of global web search into their platforms. So what makes social such a worthwhile search entity for brands? First and foremost, the consumer has demonstrated a behavior of acting on recommendations from social connections. A cry in the wilderness like, "Anybody know any good catering companies?" will usually yield a link (and an endorsement) from a friend, such as "Yeah, check out Just-Cheese-Balls Catering." There's no such human-driven force/influence behind the big search engines. Facebook's Mark Zuckerberg and others call it "Friend Mining." It is, in essence, searching for answers from friends' experiences as opposed to faceless code. And Facebook has all of those friends' experiences already stored as data. eMarketer says search is an $18 billion business, and investors are really into it. So it's no shock that Facebook is ready to leverage its social graph into relevant search.

What do you do about all this as a brand? For one thing, it's going to lead to some interesting paid marketing opportunities around the corner, including Sponsored Stories bought against certain queries, inserting deals into search results, capitalizing on social search results on mobile, etc. Apart from that, it might be time to stop mentally separating social and search in your strategic planning and budgeting. Courting your fans on social will cumulatively add up to more valuable, personally endorsed recommendations for your company when a consumer conducts a search on social. Fail to foster those relationships, fail to engage, fail to provide knock-em-dead customer service, fail to wow them with your actual products and services… and you'll wind up with the visibility you deserve in social search results.

    Read the article

  • Conflict Minerals - Design to Compliance

    - by C. Chadwick
Dr. Christina Schröder - Principal PLM Consultant, Enterprise PLM Solutions EMEA

What does the Conflict Minerals regulation mean?
Conflict Minerals has recently become a new buzzword in the manufacturing industry, particularly in electronics and medical devices. Known as "Dodd-Frank Section 1502", this regulation requires SEC-listed companies to declare the origin of certain minerals by 2014. The intention is to reduce the use of tantalum, tungsten, tin, and gold which originate from mines in the Democratic Republic of Congo (DRC) and adjoining countries that are controlled by violent armed militias abusing human rights. Manufacturers now request information from their suppliers to see if their raw materials are sourced from this region and which smelters are used to extract the metals from the minerals. A standardized questionnaire has been developed for this purpose (download and further information). Soon, even companies which are not directly affected by the Conflict Minerals legislation will have to collect and maintain this information, since their customers will request the data from their suppliers. Furthermore, it is expected that public opinion and consumer interests will force manufacturers to avoid the use of metals with questionable origin.

Impact on existing products
Several departments are involved in the process of collecting data and providing Conflict Minerals compliance information. For already marketed products, purchasing typically requests Conflict Minerals declarations from the suppliers. In order to address requests from customers, technical operations or product management are usually responsible for keeping track of all parts, raw materials and their suppliers so that the required information can be provided. For complex BOMs, it is very tedious to maintain complete, accurate, up-to-date, and traceable data. Any product change or new supplier can, in addition to all other implications, have an effect on the Conflict Minerals compliance status.

Influence on product development
It makes sense to consider compliance early in the planning and design of new products. Companies should evaluate which metals are needed or contained in supplier parts and whether these could originate from problematic sources. The answer influences the cost and risk analysis during development. If it is known early on that a part could be non-compliant with respect to Conflict Minerals, alternatives can be evaluated, and thus costly changes at a later stage can be avoided.

Integrated compliance management
Ideally, compliance data for Conflict Minerals, but also for other regulations like REACH and RoHS, should be managed in an integrated supply chain system. The compliance status is directly visible across the entire BOM, at any part level and for the finished product. If data is missing, a request to the supplier can be triggered right away without having to switch to another system. The entire process, from identification of the relevant parts, requesting information, handling responses, and data entry, to compliance calculation, is fully covered end-to-end while being transparent for all stakeholders.

Agile PLM Product Governance and Compliance (PG&C)
The PG&C module extends Agile PLM with exactly this integrated functionality. As with the entire Agile product suite, PG&C can be configured according to customer requirements: data fields, attributes, workflows, routing, notifications, permissions, and so on can be quickly and easily tailored to a customer's needs.
Optionally, external databases can be interfaced to query commercially available sources of Conflict Minerals declarations, which obviates the need for a separate supplier request in many cases. Suppliers can access the system directly for data entry through a special portal. The responses to the standard EICC-GeSI questionnaire can be imported by the supplier or internally; manual data entry is also supported. A set of compliance-specific dashboards and reports complements the functionality.

Conclusion
The increasing number of product compliance regulations, of which Conflict Minerals is just one example, requires companies to implement efficient data and process management in this area. Consumer awareness in this matter is increasing as well, so an integrated system from development to production also provides a competitive advantage. Follow this link to learn more about Agile's PG&C solution.
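To illustrate the kind of roll-up an integrated system performs across a BOM, here is a minimal sketch in Java. The Part model and the roll-up rule are invented for this example (this is not Agile PG&C code): a part is only as compliant as its worst descendant, and a missing declaration propagates upward so it surfaces at the finished-product level.

import java.util.ArrayList;
import java.util.List;

// Minimal illustration of a BOM compliance roll-up; the data model is a
// stand-in for this sketch, not Agile PG&C's.
public class BomRollup {
    enum Status { COMPLIANT, NON_COMPLIANT, UNKNOWN }

    static class Part {
        final String name;
        final Status declared; // status from the supplier declaration, if any
        final List<Part> children = new ArrayList<>();
        Part(String name, Status declared) { this.name = name; this.declared = declared; }
    }

    // Worst status wins: one non-compliant part taints the whole assembly,
    // and one missing declaration leaves the assembly UNKNOWN.
    static Status rollUp(Part part) {
        Status worst = part.declared;
        for (Part child : part.children) {
            Status s = rollUp(child);
            if (s == Status.NON_COMPLIANT || worst == Status.NON_COMPLIANT) {
                worst = Status.NON_COMPLIANT;
            } else if (s == Status.UNKNOWN || worst == Status.UNKNOWN) {
                worst = Status.UNKNOWN;
            }
        }
        return worst;
    }

    public static void main(String[] args) {
        Part product = new Part("finished-product", Status.COMPLIANT);
        Part board = new Part("circuit-board", Status.COMPLIANT);
        board.children.add(new Part("solder", Status.UNKNOWN)); // tin declaration still missing
        product.children.add(board);
        System.out.println(rollUp(product)); // UNKNOWN: request data from the solder supplier
    }
}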

    Read the article

  • UPOS RFIDScanner data format

    - by Robert Snyder
A lot of work that I do currently is based in the OPOS/UPOS world. My company has a device that can read 13.56MHz tags (RFID), smart cards, and mag stripe cards. Up until somewhat recently I have only been working with RFID for a very specific scenario: reading UltraLight C and Desfire cards. These cards were all set up very specifically so that I could take the data read from those cards and force it into an MSR track2 format. The past couple of weeks, however, I have been working on reading RFID credit cards (since I have a Visa card, I've been using mine) and smart card credit cards. (The Visa card I have has both.) In learning how to communicate with smart cards and reading ISO 7816 and EMVCo documents, I became a little more familiar with how the info is stored. But now I have a question regarding UPOS.

The RFID data on my Visa is stored (and read) very similarly to how the data is stored and read from the smart card on my Visa. Cool. Well, in the UPOS spec for SmartCardRW, the ReadData method returns a byte array. That's fine; I can just return all that data and then parse it as my heart desires. The RFID though has a LinkedList of Tags. That makes sense in terms of my Visa card (and reminds me of a question I have in regards to SmartCard, but that is for another question), but what about ULC and Desfire, or for that matter any Mifare card? Pages, files, and purses don't exactly fit the Tag profile. For instance, let's just say I read pages 4-12 on my ULC card. Each page I read is 4 bytes long. Does this mean I have 9 tags in my LinkedList? Is my tag id the page number? And how does that translate to Desfire? If I open application 123456 and read file 1 and file 2, do I have 2 tags? And if so, what is my tag id? At least with my Visa I think I have to use the tag id (e.g., 5F24 for my expiration date) with a value of {0x15, 0x10, 0x31}. Part of me says yes, that makes sense. Another part of me says, "Well, if that is the case, then why doesn't SmartCardRW have Tags?"

So that is my question: how do I format my data from those different types of media? Or is that the job of my Control Object (the application)? If so, how does it know? The only protocols I have are:

// Summary:
//     Enumerates the available predefined RFID tag protocols the device supports.
[Flags]
public enum RFIDProtocols
{
    EpcClass0 = 1,
    RFIDSdt0Plus = 2,
    EpcClass1 = 4,
    EpcClass1Gen2 = 8,
    EpcClass2 = 16,
    Iso14443A = 4096,
    Iso14443B = 8192,
    Iso15693 = 12288,
    Iso180006B = 16384,
    Other = 16777216,
    All = 1073741824,
}

If I use that, all of the cards I have are Iso14443A. I use the ATQA and the SAK to know what type of card I really have. There is no RFID property that lets me specify that. So I'm lost.
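For the EMV-style data in the Visa example, those tag/value pairs are BER-TLV encoded, so one practical bridge between the SmartCardRW byte array and the RFID Tag list is to parse the TLV stream yourself and build the tag list from it. Below is a minimal sketch of such a parser in Java; it assumes one- or two-byte tags and short-form lengths only, which is enough for the 5F24 example but not for every real EMV record.

import java.util.LinkedHashMap;
import java.util.Map;

// Minimal BER-TLV parser sketch for EMV-style records (e.g. tag 5F24,
// the application expiration date, encoded as BCD YYMMDD). Handles
// short-form lengths only; real EMV data can also use long-form lengths.
public class TlvSketch {
    public static Map<Integer, byte[]> parse(byte[] data) {
        Map<Integer, byte[]> tags = new LinkedHashMap<>();
        int i = 0;
        while (i < data.length) {
            int tag = data[i++] & 0xFF;
            if ((tag & 0x1F) == 0x1F) {      // low 5 bits set: two-byte tag, e.g. 0x5F24
                tag = (tag << 8) | (data[i++] & 0xFF);
            }
            int len = data[i++] & 0xFF;      // short-form length only
            byte[] value = new byte[len];
            System.arraycopy(data, i, value, 0, len);
            i += len;
            tags.put(tag, value);
        }
        return tags;
    }

    public static void main(String[] args) {
        // 5F24 03 15 10 31 -> expiration date 2015-10-31 (BCD YYMMDD)
        byte[] record = { (byte) 0x5F, 0x24, 0x03, 0x15, 0x10, 0x31 };
        byte[] exp = parse(record).get(0x5F24);
        System.out.printf("expires 20%02X-%02X-%02X%n", exp[0], exp[1], exp[2]);
    }
}

For the ULC/Desfire side, nothing like this exists in the data itself: pages and files are just addressed binary blocks, which is why mapping them onto a Tag list feels arbitrary and is arguably the control object's job, as the question suggests.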

    Read the article

  • SQL Server Optimizer Malfunction?

    - by Tony Davis
    There was a sharp intake of breath from the audience when Adam Machanic declared the SQL Server optimizer to be essentially "stuck in 1997". It was during his fascinating "Query Tuning Mastery: Manhandling Parallelism" session at the recent PASS SQL Summit. Paraphrasing somewhat, Adam (blog | @AdamMachanic) offered a convincing argument that the optimizer often delivers flawed plans based on assumptions that are no longer valid with today’s hardware. In 1997, when Microsoft engineers re-designed the database engine for SQL Server 7.0, SQL Server got its initial implementation of a cost-based optimizer. Up to SQL Server 2000, the developer often had to deploy a steady stream of hints in SQL statements to combat the occasionally wilful plan choices made by the optimizer. However, with each successive release, the optimizer has evolved and improved in its decision-making. It is still prone to the occasional stumble when we tackle difficult problems, join large numbers of tables, perform complex aggregations, and so on, but for most of us, most of the time, the optimizer purrs along efficiently in the background. Adam, however, challenged further any assumption that the current optimizer is competent at providing the most efficient plans for our more complex analytical queries, and in particular of offering up correctly parallelized plans. He painted a picture of a present where complex analytical queries have become ever more prevalent; where disk IO is ever faster so that reads from disk come into buffer cache faster than ever; where the improving RAM-to-data ratio means that we have a better chance of finding our data in cache. Most importantly, we have more CPUs at our disposal than ever before. To get these queries to perform, we not only need to have the right indexes, but also to be able to split the data up into subsets and spread its processing evenly across all these available CPUs. Improvements such as support for ColumnStore indexes are taking things in the right direction, but, unfortunately, deficiencies in the current Optimizer mean that SQL Server is yet to be able to exploit properly all those extra CPUs. Adam’s contention was that the current optimizer uses essentially the same costing model for many of its core operations as it did back in the days of SQL Server 7, based on assumptions that are no longer valid. One example he gave was a "slow disk" bias that may have been valid back in 1997 but certainly is not on modern disk systems. Essentially, the optimizer assesses the relative cost of serial versus parallel plans based on the assumption that there is no IO cost benefit from parallelization, only CPU. It assumes that a single request will saturate the IO channel, and so a query would not run any faster if we parallelized IO because the disk system simply wouldn’t be able to handle the extra pressure. As such, the optimizer often decides that a serial plan is lower cost, often in cases where a parallel plan would improve performance dramatically. It was challenging and thought provoking stuff, as were his techniques for driving parallelism through query logic based on subsets of rows that define the "grain" of the query. I highly recommend you catch the session if you missed it. I’m interested to hear though, when and how often people feel the force of the optimizer’s shortcomings. Barring mistakes, such as stale statistics, how often do you feel the Optimizer fails to find the plan you think it should, and what are the most common causes? 
Is it fighting to induce it toward parallelism? Combating unexpected plans, arising from table partitioning? Something altogether more prosaic? Cheers, Tony.

    Read the article

< Previous Page | 221 222 223 224 225 226 227 228 229 230 231 232  | Next Page >