Search Results

Search found 5460 results on 219 pages for 'composite component'.


  • Massive Silverlight Giveaway! DevExpress, Syncfusion, Crypto Obfuscator and SL Spy!

    - by mbcrump
    Oh my, have we grown! Maybe I should change the name to Multiple Silverlight Giveaways. So far, my Silverlight giveaways have been such a success that I’m going to be able to give away more than one Silverlight product every month. Last month, we gave away 3 great products: 1) ComponentOne Silverlight Controls, 2) ComponentOne XAP Optimizer (with obfuscation) and 3) Silverlight Spy. This month, we will give away 4 great Silverlight products and have 4 different winners. This way the Silverlight community can grow with more than just one person winning all the prizes. This month we will be giving away: DevExpress Silverlight Controls – Over 50+ Silverlight Controls Syncfusion User Interface Edition - Create stunning line-of-business Silverlight applications with a wide range of components including a high-performance grid, docking manager, chart, gauge, scheduler and much more. Crypto Obfuscator – Works for all .NET including Silverlight/Windows Phone 7. Silverlight Spy – provides a license EVERY month for this giveaway. ----------------------------------------------------------------------------------------------------------------------------------------------------------- Win a FREE developer’s license of one of the products listed above! 4 winners will be announced on April 1st, 2011! To be entered into the contest, do the following things: Subscribe to my feed. – Use Google Reader, email or whatever is best for you. Leave a comment below with a valid email account (I WILL NOT share this info with anyone.) Retweet the following: I just entered to win free #Silverlight controls from @mbcrump. Register here: http://mcrump.me/fTSmB8 ! Don’t change the URL because this will allow me to track the users that Tweet this page. Don’t forget to visit each of the vendors’ sites because they made this possible. MichaelCrump.Net provides Silverlight Giveaways every month. You can also see the latest giveaway by bookmarking http://giveaways.michaelcrump.net . ---------------------------------------------------------------------------------------------------------------------------------------------------------- DevExpress Silverlight Controls Let’s take a quick look at some of the software that is provided in this giveaway. Before we get started with the Silverlight Controls, here are a couple of links to bookmark for the DevExpress Silverlight Controls: The Live Demos of the Silverlight Controls are located here. Great Video Tutorials of the Silverlight Controls are here. One thing that I liked about DevExpress is how easy it was to find demos of each control. After you install the controls, the following Program Group appears, complete with “demos” that include full source. So, the first question that you may ask is, “What is included?” Here is the official list below. I wanted to show several of the controls that I think developers will use the most. The Book – Very rich animation between switching pages. Very easy to add your own images and custom text. The Menu – This is another control that just looked great. You can easily add images to the menu items with a few lines of XAML. The Window / Dialog Box – You can use this control to make a very beautiful “Wizard” to help your users navigate between pages. This is useful in setup or installation. Calculator – This would be useful for any type of Banking app. Also a first that I’ve seen from a 3rd-party control company. DatePicker – This control feels a lot smoother than the one provided by Microsoft. 
It also provides the ability to “Clear” the selection. Overall, the DevExpress Silverlight Controls feature a lot of quality controls that you should check out. You can go ahead and download a trial version of it right now by clicking here. If you win the contest you can simply enter your registration key and continue using the product without reinstalling. Syncfusion User Interface Edition Before we get started with the Syncfusion User Interface Edition, here are a couple of links to bookmark. The Live Demos can be found here. You can download a demo of it now at http://www.syncfusion.com/downloads/evalstart. After you install Syncfusion, you can view the dashboard to run locally installed samples. You may also download the documentation to your local machine if needed. Since the name of the package is “User Interface Edition”, I decided to share several samples that struck me as “awesome”. Dashboard Gauges – I was very impressed with the various gauges they have included. The digital clock also looks very impressive. Diagram – The diagrams are also very easy to build. In the sample project below you can drag/drop the shapes onto the content pane. More complex lines like Bezier lines are also easy to create using Syncfusion. Scheduling – Another strong component was the Scheduling, with built-in support for Themes. Tools – If all of that wasn’t enough, it also comes with a nice pack of essential tools. Syncfusion has a nice variety of Silverlight Controls that you should check out. You can go ahead and download a trial version of it right now by clicking here. Crypto Obfuscator The following feature set is what is important to me in an Obfuscator since I am a Silverlight/WP7 Developer: And thankfully this is what you get in Crypto Obfuscator. You can download a trial version right now if you want to go ahead and play with it. Let’s spend a few moments taking a look at the application. After you have installed Crypto Obfuscator you will see the following screen: After you click on Assemblies, you have the option to add your .XAP file in: I went ahead and loaded my .xap file from a Silverlight application. At this point, you can simply save your project, hit “Obfuscate”, and you’re done. You don’t have to mess with any of the other settings if you don’t want to. Of course, you can change the settings and add obfuscation rules, watermarks and signing if you wish. After obfuscation, it looks like this in .NET Reflector: I was trying to browse through methods and it actually crashed Reflector. This confirms the level of protection the obfuscator is providing. If this were a commercial application that my team built, I would have a huge smile on my face right now. Crypto Obfuscator is a great product and I hope you will spend the time learning more about it. Silverlight Spy Silverlight Spy is a runtime inspector tool that will tell you pretty much everything that is going on with the application. Basically, you give it a URL that contains a Silverlight application and you can explore the element tree, events, XAML and so much more. This has already been reviewed on MichaelCrump.net. _________________________________________________________________________________________ Thanks for reading and don’t forget to leave a comment below in order to win one of the four prizes available! Subscribe to my feed

    Read the article

  • Do I need to store a generic rotation point/radius for rotating around a point other than the origin for object transforms?

    - by Casey
    I'm having trouble implementing a non-origin point rotation. I have a class Transform that stores each component separately in three 3D vectors for position, scale, and rotation. This is fine for local rotations based on the center of the object. The issue is how do I determine/concatenate non-origin rotations in addition to origin rotations. Normally this would be achieved as a Transform-Rotate-Transform for the center rotation followed by a Transform-Rotate-Transform for the non-origin point. The problem is because I am storing the individual components, the final Transform matrix is not calculated until needed by using the individual components to fill an appropriate Matrix. (See GetLocalTransform()) Do I need to store an additional rotation (and radius) for world rotations as well or is there a method of implementation that works while only using the single rotation value? Transform.h #ifndef A2DE_CTRANSFORM_H #define A2DE_CTRANSFORM_H #include "../a2de_vals.h" #include "CMatrix4x4.h" #include "CVector3D.h" #include <vector> A2DE_BEGIN class Transform { public: Transform(); Transform(Transform* parent); Transform(const Transform& other); Transform& operator=(const Transform& rhs); virtual ~Transform(); void SetParent(Transform* parent); void AddChild(Transform* child); void RemoveChild(Transform* child); Transform* FirstChild(); Transform* LastChild(); Transform* NextChild(); Transform* PreviousChild(); Transform* GetChild(std::size_t index); std::size_t GetChildCount() const; std::size_t GetChildCount(); void SetPosition(const a2de::Vector3D& position); const a2de::Vector3D& GetPosition() const; a2de::Vector3D& GetPosition(); void SetRotation(const a2de::Vector3D& rotation); const a2de::Vector3D& GetRotation() const; a2de::Vector3D& GetRotation(); void SetScale(const a2de::Vector3D& scale); const a2de::Vector3D& GetScale() const; a2de::Vector3D& GetScale(); a2de::Matrix4x4 GetLocalTransform() const; a2de::Matrix4x4 GetLocalTransform(); protected: private: a2de::Vector3D _position; a2de::Vector3D _scale; a2de::Vector3D _rotation; std::size_t _curChildIndex; Transform* _parent; std::vector<Transform*> _children; }; A2DE_END #endif Transform.cpp #include "CTransform.h" #include "CVector2D.h" #include "CVector4D.h" A2DE_BEGIN Transform::Transform() : _position(), _scale(1.0, 1.0), _rotation(), _curChildIndex(0), _parent(nullptr), _children() { /* DO NOTHING */ } Transform::Transform(Transform* parent) : _position(), _scale(1.0, 1.0), _rotation(), _curChildIndex(0), _parent(parent), _children() { /* DO NOTHING */ } Transform::Transform(const Transform& other) : _position(other._position), _scale(other._scale), _rotation(other._rotation), _curChildIndex(0), _parent(other._parent), _children(other._children) { /* DO NOTHING */ } Transform& Transform::operator=(const Transform& rhs) { if(this == &rhs) return *this; this->_position = rhs._position; this->_scale = rhs._scale; this->_rotation = rhs._rotation; this->_curChildIndex = 0; this->_parent = rhs._parent; this->_children = rhs._children; return *this; } Transform::~Transform() { _children.clear(); _parent = nullptr; } void Transform::SetParent(Transform* parent) { _parent = parent; } void Transform::AddChild(Transform* child) { if(child == nullptr) return; _children.push_back(child); } void Transform::RemoveChild(Transform* child) { if(_children.empty()) return; _children.erase(std::remove(_children.begin(), _children.end(), child), _children.end()); } Transform* Transform::FirstChild() { if(_children.empty()) return nullptr; return 
*(_children.begin()); } Transform* Transform::LastChild() { if(_children.empty()) return nullptr; return *(_children.end()); } Transform* Transform::NextChild() { if(_children.empty()) return nullptr; std::size_t s(_children.size()); if(_curChildIndex >= s) { _curChildIndex = s; return nullptr; } return _children[_curChildIndex++]; } Transform* Transform::PreviousChild() { if(_children.empty()) return nullptr; if(_curChildIndex == 0) { return nullptr; } return _children[_curChildIndex--]; } Transform* Transform::GetChild(std::size_t index) { if(_children.empty()) return nullptr; if(index > _children.size()) return nullptr; return _children[index]; } std::size_t Transform::GetChildCount() const { if(_children.empty()) return 0; return _children.size(); } std::size_t Transform::GetChildCount() { return static_cast<const Transform&>(*this).GetChildCount(); } void Transform::SetPosition(const a2de::Vector3D& position) { _position = position; } const a2de::Vector3D& Transform::GetPosition() const { return _position; } a2de::Vector3D& Transform::GetPosition() { return const_cast<a2de::Vector3D&>(static_cast<const Transform&>(*this).GetPosition()); } void Transform::SetRotation(const a2de::Vector3D& rotation) { _rotation = rotation; } const a2de::Vector3D& Transform::GetRotation() const { return _rotation; } a2de::Vector3D& Transform::GetRotation() { return const_cast<a2de::Vector3D&>(static_cast<const Transform&>(*this).GetRotation()); } void Transform::SetScale(const a2de::Vector3D& scale) { _scale = scale; } const a2de::Vector3D& Transform::GetScale() const { return _scale; } a2de::Vector3D& Transform::GetScale() { return const_cast<a2de::Vector3D&>(static_cast<const Transform&>(*this).GetScale()); } a2de::Matrix4x4 Transform::GetLocalTransform() const { Matrix4x4 p((_parent ? _parent->GetLocalTransform() : a2de::Matrix4x4::GetIdentity())); Matrix4x4 t(a2de::Matrix4x4::GetTranslationMatrix(_position)); Matrix4x4 r(a2de::Matrix4x4::GetRotationMatrix(_rotation)); Matrix4x4 s(a2de::Matrix4x4::GetScaleMatrix(_scale)); return (p * t * r * s); } a2de::Matrix4x4 Transform::GetLocalTransform() { return static_cast<const Transform&>(*this).GetLocalTransform(); } A2DE_END
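One way to answer the question above, without storing any extra rotation state per object, is to build the non-origin rotation at matrix-construction time as translate-to-pivot * rotate * translate-back and fold it into the final transform (for example inside GetLocalTransform, given an optional pivot). The sketch below is a minimal, self-contained Java illustration of that composition; the tiny 4x4 helpers are hypothetical stand-ins and are not the a2de Matrix4x4/Vector3D API from the question.

```java
// Minimal sketch: rotating a point about an arbitrary pivot via T(pivot) * R * T(-pivot).
// The matrix helpers are illustrative stand-ins, not the a2de API from the question.
public final class PivotRotationSketch {

    // 4x4 matrices stored as [row][col]; points are treated as column vectors (result = M * v).
    static double[][] identity() {
        double[][] m = new double[4][4];
        for (int i = 0; i < 4; i++) m[i][i] = 1.0;
        return m;
    }

    static double[][] translation(double x, double y, double z) {
        double[][] m = identity();
        m[0][3] = x; m[1][3] = y; m[2][3] = z;
        return m;
    }

    // Rotation about the Z axis by 'radians' (one axis is enough to show the idea).
    static double[][] rotationZ(double radians) {
        double[][] m = identity();
        double c = Math.cos(radians), s = Math.sin(radians);
        m[0][0] = c; m[0][1] = -s;
        m[1][0] = s; m[1][1] = c;
        return m;
    }

    static double[][] multiply(double[][] a, double[][] b) {
        double[][] r = new double[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    r[i][j] += a[i][k] * b[k][j];
        return r;
    }

    static double[] transformPoint(double[][] m, double x, double y, double z) {
        double[] v = {x, y, z, 1.0};
        double[] out = new double[4];
        for (int i = 0; i < 4; i++)
            for (int k = 0; k < 4; k++)
                out[i] += m[i][k] * v[k];
        return out;
    }

    public static void main(String[] args) {
        // Rotate the point (2, 0, 0) by 90 degrees about the pivot (1, 0, 0).
        double[][] toPivot = translation(1, 0, 0);
        double[][] rotate = rotationZ(Math.PI / 2.0);
        double[][] fromPivot = translation(-1, 0, 0);

        // Rotation about the pivot: move the pivot to the origin, rotate, move back.
        double[][] aboutPivot = multiply(toPivot, multiply(rotate, fromPivot));

        double[] p = transformPoint(aboutPivot, 2, 0, 0);
        // Expected: approximately (1, 1, 0) -- the point swings around the pivot, not the origin.
        System.out.printf("(%.3f, %.3f, %.3f)%n", p[0], p[1], p[2]);
    }
}
```

Under that approach the only extra data a Transform needs is the pivot point itself (and a radius only if the object should orbit at a fixed distance); alternatively, parenting the object to a Transform positioned at the pivot lets the existing parent * child multiplication do the same work.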

    Read the article

  • ANTS Memory Profiler 7.0

    - by James Michael Hare
    I had always been a fan of ANTS products (Reflector is absolutely invaluable, and their performance profiler is great as well – very easy to use!), so I was curious to see what the ANTS Memory Profiler could show me. Background While a performance profiler will track how much time is typically spent in each unit of code, a memory profiler gives you much more detail on how and where your memory is being consumed and released in a program. As an example, I’d been working on a data access layer at work to call a market data web service. This web service would take a list of symbols to quote and would return the quote data. To help consolidate the thousands of web requests per second we get and reduce load on the web services, we implemented a 5-second cache of quote data. Not quite long enough to where customers will typically notice a quote go “stale”, but just long enough to be able to collapse multiple quote requests for the same symbol in a short period of time. A 5-second cache may not sound like much, but it actually pays off by saving us roughly 42% of our web service calls, while still providing relatively up-to-date information. The question is whether or not the extra memory involved in maintaining the cache was worth it, so I decided to fire up the ANTS Memory Profiler and take a look at memory usage. First Impressions The main thing I’ve always loved about the ANTS tools is their ease of use. Pretty much everything is right there in front of you in a way that makes it easy for you to find what you need with little digging required. I’ve worked with other, older profilers before (that shall remain nameless other than to hint it was created by a very large chip maker) where it was a mind-boggling experience to figure out how to do simple tasks. Not so with AMP. The opening dialog is very straightforward. You can choose from here whether to debug an executable, a web application (either in IIS or from VS’s web development server), Windows services, etc. So I chose a .NET Executable and navigated to the build location of my test harness. Then I began profiling. At this point, while the application is running, you can see a chart of the memory as it ebbs and wanes with allocations and collections. At any given point in time, you can take snapshots (to compare states), zoom in, or choose to stop at any time. Snapshots Taking a snapshot also gives you a breakdown of the managed memory heaps for each generation so you get an idea how many objects are staying around for extended periods of time (as an object lives and survives collections, it gets promoted into higher generations where collection becomes less frequent). Generating a snapshot brings up an analysis view with very handy graphs that show your generation sizes. Almost all my memory is in Generation 1 in the managed memory component of the first graph, which is good news to me, because Gen 2 collections are much rarer. I once made the mistake of caching data for 30 minutes and found it didn’t get collected very quickly after I released my reference because it had been promoted to Gen 2 – doh! Analysis It looks like (from the second pie chart) the majority of the allocations were in the string class. 
This also is expected for me because the majority of the memory allocated is in the web service responses, so it doesn’t seem the entities I’m adapting to (to prevent being too tightly coupled to the web service proxy classes, which can change easily out from under me) are taking a significant portion of memory. I also appreciate that they have clear summary text in key places such as “No issues with large object heap fragmentation were detected”. For novice users, this type of summary information can be critical to getting them to use a tool and develop a good working knowledge of it. There is also a handy link at the bottom for “What to look for on the summary” which loads a web page of help on key points to look for. Clicking over to the session overview, it’s easy to compare the samples at each snapshot to see how your memory is growing, shrinking, or staying relatively the same. Looking at my snapshots, I’m pretty happy with the fact that memory allocation and heap size seem to be fairly stable and in control: Once again, you can check on the large object heap, generation one heap, and generation two heap across each snapshot to spot trends. Back on the analysis tab, we can go to the [Class List] button to get an idea what classes are making up the majority of our memory usage. As was little surprise to me, System.String was the clear majority of my allocations, though I found it surprising that System.Reflection.RuntimeMethodInfo came in second. I was curious about this, so I selected it and went into the [Instance Categorizer]. This view let me see where these instances of RuntimeMethodInfo were coming from. So I scrolled back through the graph, and discovered that these were being held by the System.ServiceModel.ChannelFactoryRefCache and I was satisfied this was just an artifact of my WCF proxy. I also like that down at the bottom of the Instance Categorizer it gives you a series of filters and offers to guide you on which filter to use based on the problem you are trying to find. For example, if I suspected a memory leak, I might try to filter for survivors in growing classes. That is, for a class whose instances are growing in memory (more are being created than cleaned up), it shows which ones are survivors of garbage collection (not collected). This might allow me to drill down and find places where I’m holding onto references by mistake and not freeing them! Finally, if you want to really see all your instances and who is holding onto them (preventing collection), you can go to the “Instance Retention Graph” which creates a graph showing what references are being held in memory and who is holding onto them. Visual Studio Integration Of course, VS has its own profiler built in – and for a free bundled profiler it is quite capable – but AMP gives a much cleaner and easier-to-use experience, and when you install it you also get the option of letting it integrate directly into VS. So once you go back into VS after installation, you’ll notice an ANTS menu which lets you launch the ANTS profiler directly from Visual Studio. Clicking on one of these options fires up the project in the profiler immediately, allowing you to get right in. It doesn’t integrate with the Visual Studio windows themselves (like the VS profiler does), but still the plethora of information it provides and the clear and concise manner in which it presents it makes it well worth it. Summary If you like the ANTS series of tools, you shouldn’t be disappointed with the ANTS Memory Profiler. 
It was so easy to use that I was able to jump in with very little product knowledge and get the information I was looking for. I’ve used other profilers before that came with 3-inch-thick tomes that you had to read in order to get anywhere with the tool, and this one is not like that at all. It’s built for your everyday developer to get in and find their problems quickly, and I like that! Technorati Tags: Influencers, ANTS, Memory, Profiler
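As an aside, the 5-second quote cache described at the top of this post is easy to picture as a time-stamped map keyed by symbol. The sketch below is a rough, generic Java illustration of that idea only; the article's actual data access layer is .NET and is not shown, and names like QuoteCache and fetchQuote are made up.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Rough illustration of a short-lived (e.g. 5-second) quote cache that collapses
// repeated requests for the same symbol. Hypothetical names; not the article's C# code.
public final class QuoteCache {

    private static final class Entry {
        final String quote;
        final long fetchedAtMillis;
        Entry(String quote, long fetchedAtMillis) {
            this.quote = quote;
            this.fetchedAtMillis = fetchedAtMillis;
        }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final long ttlMillis;
    private final Function<String, String> fetchQuote; // stands in for the real web-service call

    public QuoteCache(long ttlMillis, Function<String, String> fetchQuote) {
        this.ttlMillis = ttlMillis;
        this.fetchQuote = fetchQuote;
    }

    public String get(String symbol) {
        long now = System.currentTimeMillis();
        Entry e = cache.get(symbol);
        if (e != null && now - e.fetchedAtMillis < ttlMillis) {
            return e.quote;                      // fresh enough: no web-service call
        }
        String quote = fetchQuote.apply(symbol); // stale or missing: refresh the entry
        cache.put(symbol, new Entry(quote, now));
        return quote;
    }

    public static void main(String[] args) {
        QuoteCache cache = new QuoteCache(5_000, symbol -> {
            System.out.println("web-service call for " + symbol);
            return symbol + "=100.25";
        });
        System.out.println(cache.get("ORCL")); // triggers the simulated web-service call
        System.out.println(cache.get("ORCL")); // served from the 5-second cache
    }
}
```

The trade-off the profiler was used to check is exactly the one visible here: every cached entry keeps a string alive for up to the TTL, in exchange for collapsing duplicate web-service calls.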

    Read the article

  • Nashorn, the rhino in the room

    - by costlow
    Nashorn is a new runtime within JDK 8 that allows developers to run code written in JavaScript and call back and forth with Java. One advantage of the Nashorn scripting engine is that it allows for quick prototyping of functionality or basic shell scripts that use Java libraries. The previous JavaScript runtime, named Rhino, was introduced in JDK 6 (released 2006, end of public updates Feb 2013). In keeping with tradition amongst the global developer community, "Nashorn" is the German word for rhino. The Java platform and runtime is an intentional home to many languages beyond the Java language itself. OpenJDK’s Da Vinci Machine helps coordinate work amongst language developers and tool designers and has helped different languages by introducing the Invoke Dynamic instruction in Java 7 (2011), which resulted in two major benefits: speeding up execution of dynamic code, and providing the groundwork for Java 8’s lambda executions. Many of these improvements are discussed at the JVM Language Summit, where language and tool designers get together to discuss experiences and issues related to building these complex components. There are a number of benefits to running JavaScript applications on JDK 8’s Nashorn technology beyond writing scripts quickly: Interoperability with Java and JavaScript libraries. Scripts do not need to be compiled. Fast execution and multi-threading of JavaScript running in Java’s JRE. The ability to remotely debug applications using an IDE like NetBeans, Eclipse, or IntelliJ (instructions on the Nashorn blog). Automatic integration with Java monitoring tools, such as performance, health, and SIEM. In the remainder of this blog post, I will explain how to use Nashorn and how to benefit from those features. Nashorn execution environment The Nashorn scripting engine is included in all versions of Java SE 8, both the JDK and the JRE. Unlike Java code, scripts written in Nashorn are interpreted and do not need to be compiled before execution. Developers and users can access it in two ways: Users running JavaScript applications can call the binary directly: jre8/bin/jjs This mechanism can also be used in shell scripts by specifying a shebang like #!/usr/bin/jjs Developers can use the API and obtain a ScriptEngine through: ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn"); When using a ScriptEngine, please understand that they execute code. Avoid running untrusted scripts or passing in untrusted/unvalidated inputs. During compilation, consider isolating access to the ScriptEngine and using Type Annotations to only allow @Untainted String arguments. One noteworthy difference between JavaScript executed in or outside of a web browser is that certain objects will not be available. For example, when run outside a browser, there is no access to a document object or DOM tree. Other than that, all syntax, semantics, and capabilities are present. Examples of Java and JavaScript The Nashorn script engine allows developers of all experience levels the ability to write and run code that takes advantage of both languages. The specific dialect is ECMAScript 5.1 as identified by the User Guide and its standards definition through ECMA International. In addition to the example below, Benjamin Winterberg has a very well-written Java 8 Nashorn Tutorial that provides a large number of code samples in both languages. 
Basic Operations A basic Hello World application written to run on Nashorn would look like this: #!/usr/bin/jjs print("Hello World"); The first line is a standard script indication, so that Linux or Unix systems can run the script through Nashorn. On Windows where scripts are not as common, you would run the script like: jjs helloWorld.js. Receiving Arguments In order to receive program arguments your jjs invocation needs to use the -scripting flag and a double-dash to separate which arguments are for jjs and which are for the script itself: jjs -scripting print.js -- "This will print" #!/usr/bin/jjs var whatYouSaid = $ARG.length==0 ? "You did not say anything" : $ARG[0] print(whatYouSaid); Interoperability with Java libraries (including 3rd party dependencies) Another goal of Nashorn was to allow for quick scriptable prototypes, allowing access into Java types and any libraries. Resources operate in the context of the script (either in-line with the script or as separate threads) so if you open network sockets and your script terminates, those sockets will be released and available for your next run. Your code can access Java types the same as regular Java classes. The “import statements” are written somewhat differently to accommodate the language. There is a choice of two styles: For standard classes, just name the class: var ServerSocket = java.net.ServerSocket For arrays or other items, use Java.type: var ByteArray = Java.type("byte[]"). You could technically do this for all. The same technique will allow your script to use Java types from any library or 3rd party component and quickly prototype items. Building a user interface One major difference between JavaScript inside and outside of a web browser is the availability of a DOM object for rendering views. When run outside of the browser, JavaScript has full control to construct the entire user interface with pre-fabricated UI controls, charts, or components. The example below is a variation from the Nashorn and JavaFX guide to show how items work together. Nashorn has a -fx flag to make the user interface components available. With the example script below, just specify: jjs -fx -scripting fx.js -- "My title" #!/usr/bin/jjs -fx var Button = javafx.scene.control.Button; var StackPane = javafx.scene.layout.StackPane; var Scene = javafx.scene.Scene; var clickCounter=0; $STAGE.title = $ARG.length>0 ? $ARG[0] : "You didn't provide a title"; var button = new Button(); button.text = "Say 'Hello World'"; button.onAction = myFunctionForButtonClicking; var root = new StackPane(); root.children.add(button); $STAGE.scene = new Scene(root, 300, 250); $STAGE.show(); function myFunctionForButtonClicking(){   var text = "Click Counter: " + clickCounter;   button.setText(text);   clickCounter++;   print(text); } For a more advanced post on using Nashorn to build a high-performing UI, see the JavaFX with Nashorn Canvas example. Interoperable with frameworks like Node, Backbone, or Facebook React The major benefit of any language is the interoperability gained by people and systems that can read, write, and use it for interactions. Because Nashorn is built for the ECMAScript specification, developers familiar with JavaScript frameworks can write their code and then have system administrators deploy and monitor the applications the same as any other Java application. A number of projects are also running Node applications on Nashorn through Project Avatar and the supported modules. 
In addition to the previously mentioned Nashorn tutorial, Benjamin has also written a post about Using Backbone.js with Nashorn. To show the multi-language power of the Java Runtime, there is another interesting example that unites Facebook React and Clojure on JDK 8’s Nashorn. Summary Nashorn provides a simple and fast way of executing JavaScript applications and bridging between the best of each language. By making the full range of Java libraries available to JavaScript applications, and bringing the quick prototyping style of JavaScript to Java applications, developers are free to work as they see fit. Software Architects and System Administrators can take advantage of one runtime and leverage any work that they have done to tune, monitor, and certify their systems. Additional information is available within: The Nashorn Users’ Guide Java Magazine’s article "Next Generation JavaScript Engine for the JVM." The Nashorn team’s primary blog or a very helpful collection of Nashorn links.
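As a small, concrete companion to the embedding API mentioned earlier (ScriptEngineManager and ScriptEngine), here is a minimal sketch of calling into a Nashorn script from Java on JDK 8; it is an illustrative example rather than code taken from the post.

```java
import javax.script.Invocable;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

// Minimal sketch of embedding Nashorn from Java on JDK 8.
public class NashornEmbeddingSketch {
    public static void main(String[] args) throws Exception {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");

        // Evaluate a small script (remember: only run trusted script sources).
        engine.eval("function greet(name) { return 'Hello, ' + name; }");

        // Call back into the script from Java.
        Invocable invocable = (Invocable) engine;
        Object result = invocable.invokeFunction("greet", "Nashorn");
        System.out.println(result); // Hello, Nashorn
    }
}
```

As the post notes, treat any evaluated script as code execution and only feed the engine trusted sources and validated inputs.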

    Read the article

  • Cost Comparison Hard Disk Drive to Solid State Drive on Price per Gigabyte - dispelling a myth!

    - by tonyrogerson
    It is often said that Hard Disk Drive storage is significantly cheaper per GiByte than Solid State Devices – this is wholly inaccurate within the database space. People need to look at the cost of the complete solution and not just a single component part in isolation from what is really required to meet the business requirement. Buying a single Hitachi Ultrastar 600GB 3.5” SAS 15Krpm hard disk drive will cost approximately £239.60 (http://scan.co.uk, 22nd March 2012) compared to an OCZ 600GB Z-Drive R4 CM84 PCIe costing £2,316.54 (http://scan.co.uk, 22nd March 2012); I’ve not included FusionIO ioDrive because there is no public pricing available for it – something I never understand, and personally when companies do this I immediately think “what are they hiding?”; luckily in FusionIO’s case the product is proven, though it is expensive compared to OCZ enterprise offerings. On the face of it the single 15Krpm hard disk has a price per GB of £0.39, the SSD £3.86; this is what you will see in the press and this is what sales people will use in comparing the two technologies – do not be fooled by this bullshit, people! What is the requirement? The requirement is the database will have a static size of 400GB kept static through archiving so growth and trim will balance the database size, the client requires resilience, there will be several hundred call centre staff querying the database where queries will read a small amount of data but there will be no hot spot in the data so the randomness will come across the entire 400GB of the database, estimates predict that the IOps required will be approximately 4,000 IOps at peak times, and because it’s a call centre system the IO latency is important and must remain below 5ms per IO. The balance between read and write is 70% read, 30% write. The requirement is now defined and we have three of the most important pieces of the puzzle – space required, estimated IOps and maximum latency per IO. Something to consider with regard to SQL Server: write activity requires synchronous IO to the storage media, specifically the transaction log; that means the write thread will wait until the IO is completed and hardened off before the thread can continue execution. The requirement has stated that 30% of the system activity will be write, so we can expect a high amount of synchronous activity. The hardware solution needs to be defined; there are two possible solutions: hard disk or solid state based; the real question now is how many hard disks are required to achieve the IO throughput, the latency and resilience, ditto for the solid state. Hard Drive solution On a test on an HP DL380, P410i controller using IOMeter against a single 15Krpm 146GB SAS drive, the throughput given on a transfer size of 8KiB against a 40GiB file on a freshly formatted disk where the partition is the only partition on the disk, thus the 40GiB file is on the outer edge of the drive so more sectors can be read before head movement is required: For 100% sequential IO at a queue depth of 16 with 8 worker threads 43,537 IOps at an average latency of 2.93ms (340 MiB/s), for 100% random IO at the same queue depth and worker threads 3,733 IOps at an average latency of 34.06ms (34 MiB/s). The same test was done on the same disk but the test file was 130GiB: For 100% sequential IO at a queue depth of 16 with 8 worker threads 43,537 IOps at an average latency of 2.93ms (340 MiB/s), for 100% random IO at the same queue depth and worker threads 528 IOps at an average latency of 217.49ms (4 MiB/s). 
From the result it is clear random performance gets worse as the disk fills up – I’m currently writing an article on short stroking which will cover this in detail. Given the work load is random in nature, looking at the random performance of the single drive when only 40 GiB of the 146 GB is used gives near the IOps required, but the latency is way out. Luckily I have tested 6 x 146GB SAS 15Krpm drives in a RAID 0 using the same test methodology; for the same test above on a 130 GiB file per drive, the performance boost is near linear: for each drive added, throughput goes up by 5 MiB/sec, IOps by 700 and latency reduces by nearly 50% per drive added (172 ms, 94 ms, 65 ms, 47 ms, 37 ms, 30 ms). This is because the same 130GiB is spread out more as you add drives (130 / 1, 130 / 2, 130 / 3, etc.), so implicit short stroking is occurring because there is less file on each drive so less head movement is required. The best latency is still 30 ms but we have the IOps required now, but that’s on a 130GiB file and not the 400GiB we need. Some reality check here: the drive randomness is more likely to be 50/50 and not a full 100%, but the above has highlighted the effect randomness has on the drive, and the more a drive fills with data the worse the effect. For argument’s sake let us assume that for the given workload we need 8 disks to do the job; for resilience reasons we will need 16 because we need to RAID 1+0 them in order to get the throughput and the resilience, RAID 5 would degrade performance. Cost for hard drives: 16 x £239.60 = £3,833.60 For the hard drives we will need disk controllers and a separate external disk array because the likelihood is that the server itself won’t take the drives; a quick spec off DELL for a PowerVault MD1220 which gives the dual pathing with 16 x 146GB 15Krpm 2.5” disks is priced at £7,438.00, note it’s probably more once we add the two controller cards to sit in the server, racking etc. Minimum cost taking the DELL quote as an example is therefore: {Cost of Hardware} / {Storage Required} £7,438.00 / 400 = £18.595 per GB £18.59 per GiB is a far cry from the £0.39 we had been told by the salesman and the myth. Yes, the storage array is composed of 16 x 146GB disks in RAID 10 (therefore 8 usable) giving an effective usable storage availability of 1168GB, but the actual storage requirement is only 400 and the extra disks have had to be purchased to get the IOps up. Solid State Drive solution A single card significantly exceeds the IOps and latency required; for resilience two will be required. ( £2,316.54 * 2 ) / 400 = £11.58 per GB With the SSD solution only two PCIe sockets are required, no external disk units, no additional controllers, no redundant controllers etc. Conclusion I hope by showing you an example that the myth that hard disk drives are cheaper per GiB than Solid State has now been dispelled - £11.58 per GB for SSD compared to £18.59 for Hard Disk. I’ve not even touched on the running costs; compare the costs of running 18 hard disks, that’s a lot of heat and power compared to two PCIe cards! Just a quick note: I've left a fair amount of information out due to this being a blog! If in doubt, email me :) I'll also deal with the myth that SSDs wear out at a later date as well - that concern is way overdone now; it was valid 5 years ago, but not today.
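To make the arithmetic above explicit, here is a tiny sketch that reproduces the per-usable-GB calculations with the prices quoted in the post (22nd March 2012); the key point it encodes is that the divisor is the 400GB the business actually needs, not the raw capacity purchased.

```java
// Reproduces the post's price-per-usable-GB arithmetic (22 March 2012 prices).
public class CostPerGigabyteSketch {
    public static void main(String[] args) {
        double requiredGb = 400.0;

        // HDD solution: DELL PowerVault MD1220 quote with 16 x 146GB 15Krpm disks (RAID 1+0).
        double hddSolutionCost = 7438.00;
        System.out.printf("HDD: £%.2f per GB%n", hddSolutionCost / requiredGb);   // ~£18.59

        // SSD solution: two OCZ 600GB Z-Drive R4 PCIe cards for resilience.
        double ssdSolutionCost = 2316.54 * 2;
        System.out.printf("SSD: £%.2f per GB%n", ssdSolutionCost / requiredGb);   // ~£11.58

        // The headline figure divides a single bare drive's price by its raw capacity,
        // which ignores the extra spindles needed to reach the required IOps and resilience.
        System.out.printf("Single bare 600GB 15Krpm drive: £%.2f per GB%n", 239.60 / 600.0); // ~£0.40 (the post rounds to £0.39)
    }
}
```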

    Read the article

  • How to Tell a Hardware Problem From a Software Problem

    - by Chris Hoffman
    Your computer seems to be malfunctioning — it’s slow, programs are crashing or Windows may be blue-screening. Is your computer’s hardware failing, or does it have a software problem that you can fix on your own? This can actually be a bit tricky to figure out. Hardware problems and software problems can lead to the same symptoms — for example, frequent blue screens of death may be caused by either software or hardware problems. Computer is Slow We’ve all heard the stories — someone’s computer slows down over time because they install too much software that runs at startup or it becomes infected with malware. The person concludes that their computer is slowing down because it’s old, so they replace it. But they’re wrong. If a computer is slowing down, it has a software problem that can be fixed. Hardware problems shouldn’t cause your computer to slow down. There are some rare exceptions to this — perhaps your CPU is overheating and it’s downclocking itself, running slower to stay cooler — but most slowness is caused by software issues. Blue Screens Modern versions of Windows are much more stable than older versions of Windows. When used with reliable hardware with well-programmed drivers, a typical Windows computer shouldn’t blue-screen at all. If you are encountering frequent blue screens of death, there’s a good chance your computer’s hardware is failing. Blue screens could also be caused by badly programmed hardware drivers, however. If you just installed or upgraded hardware drivers and blue screens start, try uninstalling the drivers or using system restore — there may be something wrong with the drivers. If you haven’t done anything with your drivers recently and blue screens start, there’s a very good chance you have a hardware problem. Computer Won’t Boot If your computer won’t boot, you could have either a software problem or a hardware problem. Is Windows attempting to boot and failing part-way through the boot process, or does the computer no longer recognize its hard drive or not power on at all? Consult our guide to troubleshooting boot problems for more information. When Hardware Starts to Fail… Here are some common components that can fail and the problems their failures may cause: Hard Drive: If your hard drive starts failing, files on your hard drive may become corrupted. You may see long delays when you attempt to access files or save to the hard drive. Windows may stop booting entirely. CPU: A failing CPU may result in your computer not booting at all. If the CPU is overheating, your computer may blue-screen when it’s under load — for example, when you’re playing a demanding game or encoding video. RAM: Applications write data to your RAM and use it for short-term storage. If your RAM starts failing, an application may write data to part of the RAM, then later read it back and get an incorrect value. This can result in application crashes, blue screens, and file corruption. Graphics Card: Graphics card problems may result in graphical errors while rendering 3D content or even just while displaying your desktop. If the graphics card is overheating, it may crash your graphics driver or cause your computer to freeze while under load — for example, when playing demanding 3D games. Fans: If any of the fans fail in your computer, components may overheat and you may see the above CPU or graphics card problems. Your computer may also shut itself down abruptly so it doesn’t overheat any further and damage itself. 
Motherboard: Motherboard problems can be extremely tough to diagnose. You may see occasional blue screens or similar problems. Power Supply: A malfunctioning power supply is also tough to diagnose — it may deliver too much power to a component, damaging it and causing it to malfunction. If the power supply dies completely, your computer won’t power on and nothing will happen when you press the power button. Other common problems — for example, a computer slowing down — are likely to be software problems. It’s also possible that software problems can cause many of the above symptoms — malware that hooks deep into the Windows kernel can cause your computer to blue-screen, for example. The Only Way to Know For Sure We’ve tried to give you some idea of the difference between common software problems and hardware problems with the above examples. But it’s often tough to know for sure, and troubleshooting is usually a trial-and-error process. This is especially true if you have an intermittent problem, such as your computer blue-screening a few times a week. You can try scanning your computer for malware and running System Restore to restore your computer’s system software back to its previous working state, but these aren’t  guaranteed ways to fix software problems. The best way to determine whether the problem you have is a software or hardware one is to bite the bullet and restore your computer’s software back to its default state. That means reinstalling Windows or using the Refresh or reset feature on Windows 8. See whether the problem still persists after you restore its operating system to its default state. If you still see the same problem – for example, if your computer is blue-screening and continues to blue-screen after reinstalling Windows — you know you have a hardware problem and need to have your computer fixed or replaced. If the computer crashes or freezes while reinstalling Windows, you definitely have a hardware problem. Even this isn’t a completely perfect method — for example, you may reinstall Windows and install the same hardware drivers afterwards. If the hardware drivers are badly programmed, the blue-screens may continue. Blue screens of death aren’t as common on Windows these days — if you’re encountering them frequently, you likely have a hardware problem. Most blue screens you encounter will likely be caused by hardware issues. On the other hand, other common complaints like “my computer has slowed down” are easily fixable software problems. When in doubt, back up your files and reinstall Windows. Image Credit: Anders Sandberg on Flickr, comedy_nose on Flickr     

    Read the article

  • Evaluating Solutions to Manage Product Compliance? Don't Wait Much Longer

    - by Kerrie Foy
    Depending on severity, product compliance issues can cause all sorts of problems from run-away budgets to business closures. But effective policies and safeguards can create a strong foundation for innovation, productivity, market penetration and competitive advantage. If you’ve been putting off a systematic approach to product compliance, it is time to reconsider that decision, or indecision. Why now?  No matter what industry, companies face a litany of worldwide and regional regulations that require proof of product compliance and environmental friendliness for market access.  For example, Restriction of Hazardous Substances (RoHS) is a regulation that restricts the use of six dangerous materials used in the manufacture of electronic and electrical equipment.  ROHS was originally adopted by the European Union in 2003 for implementation in 2006, and it has evolved over time through various regional versions for North America, China, Japan, Korea, Norway and Turkey.  In addition, the RoHS directive allowed for material exemptions used in Medical Devices, but that exemption ends in 2014.   Additional regulations worth watching are the Battery Directive, Waste Electrical and Electronic Equipment (WEEE), and Registration, Evaluation, Authorization and Restriction of Chemicals (REACH) directives.  Additional evolving regulations are coming from governing bodies like the Food and Drug Administration (FDA) and the International Organization for Standardization (ISO). Corporate sustainability initiatives are also gaining urgency and influencing product design. In a survey of 405 corporations in the Global 500 by Carbon Disclosure Project, co-written by PwC (CDP Global 500 Climate Change Report 2012 entitled Business Resilience in an Uncertain, Resource-Constrained World), 48% of the respondents indicated they saw potential to create new products and business services as a response to climate change. Just 21% reported a dedicated budget for the research. However, the report goes on to explain that those few companies are winning over new customers and driving additional profits by exploiting their abilities to adapt to environmental needs. The article cites Dell as an example – Dell has invested in research to develop new products designed to reduce its customers’ emissions by more than 10 million metric tons of CO2e per year. This reduction in emissions should save Dell’s customers over $1billion per year as a result! Over time we expect to see many additional companies prove that eco-design provides marketplace benefits through differentiation and direct customer value. How do you meet compliance requirements and also successfully invest in eco-friendly designs? No doubt companies struggle to answer this question. After all, the journey to get there may involve transforming business models, go-to-market strategies, supply networks, quality assurance policies and compliance processes per the rapidly evolving global and regional directives. There may be limited executive focus on the initiative, inability to quantify noncompliance, or not enough resources to justify investment. To make things even more difficult to address, compliance responsibility can be a passionate topic within an organization, making the prospect of change on an enterprise scale problematic and time-consuming. Without a single source of truth for product data and without proper processes in place, ensuring product compliance burgeons into a crushing task that is cost-prohibitive and overwhelming to an organization. 
With all the overhead, certain markets or demographics become simply inaccessible. Therefore, the risk to consumer goodwill and satisfaction, revenue, business continuity, and market potential is too great not to solve the compliance challenge. Companies are beginning to adapt and even thrive in today’s highly regulated and transparent environment by implementing systematic approaches to product compliance that are more than functional bandages but revenue-generating engines. Consider partnering with Oracle to help you address your compliance needs. Many of the world’s most innovative leaders and pioneers are leveraging Oracle’s Agile Product Lifecycle Management (PLM) portfolio of enterprise applications to manage the product value chain, centralize product data, automate processes, and launch more eco-friendly products to market faster.   Particularly, the Agile Product Governance & Compliance (PG&C) solution provides out-of-the-box functionality to integrate actionable regulatory information into the enterprise product record from the ideation to the disposal/recycling phase. Agile PG&C makes it possible to efficiently manage compliance per corporate green initiatives as well as regional and global directives. Options are critical, but so is ease-of-use. Anyone who’s grappled with compliance policy knows legal interpretation plays a major role in determining how an organization responds to regulation. Agile PG&C gives you the freedom to configure product compliance per your needs, while maintaining rigorous control over the product record in an easy-to-use interface that facilitates adoption efforts. It allows you to assign regulations as specifications for a part or BOM roll-up. Each specification has a threshold value that alerts you to a non-compliance issue if the threshold value is exceeded. Set however many regulations as specifications you need to make sure a product can be sold in your target countries. Another option is to implement like one of our leading consumer electronics customers and define your own “catch-all” specification to ensure compliance in all markets. You can give your suppliers secure access to enter their component data or integrate a third party’s data. With Agile PG&C you are able to design compliance earlier into your products to reduce cost and improve quality downstream when stakes are higher. Agile PG&C is a comprehensive solution that makes product compliance more reliable and efficient. Throughout product lifecycles, use the solution to support full material disclosures, efficiently manage declarations with your suppliers, feed compliance data into a corrective action if a product must be changed, and swiftly satisfy audits by showing all due diligence tracked in one solution. Given the compounding regulation and consumer focus on urgent environmental issues, now is the time to act. Implementing an enterprise, systematic approach to product compliance is a competitive investment. From the start, Agile Product Governance & Compliance enables companies to confidently design for compliance and sustainability, reduce the cost of compliance, minimize the risk of business interruption, deliver responsible products, and inspire new innovation.  Don’t wait any longer! To find out more about Agile Product Governance & Compliance download the data sheet, contact your sales representative, or call Oracle at 1-800-633-0738. Many thanks to Shane Goodwin, Senior Manager, Oracle Agile PLM Product Management, for contributions to this article. 
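As a generic illustration of the threshold idea described above (a regulation modeled as a specification with a limit value that flags non-compliance when a BOM roll-up exceeds it), here is a small sketch; the class and field names are hypothetical and this is not the Agile PG&C data model or API.

```java
import java.util.List;

// Generic illustration of "specification with a threshold" compliance checking.
// Hypothetical names; not the Agile PG&C data model or API.
public class ComplianceCheckSketch {

    record Specification(String regulation, String substance, double thresholdPpm) {}
    record SubstanceDeclaration(String substance, double concentrationPpm) {}

    static boolean isCompliant(List<SubstanceDeclaration> rollup, List<Specification> specs) {
        boolean compliant = true;
        for (Specification spec : specs) {
            for (SubstanceDeclaration decl : rollup) {
                if (decl.substance().equals(spec.substance())
                        && decl.concentrationPpm() > spec.thresholdPpm()) {
                    System.out.printf("Non-compliance: %s %s %.0f ppm exceeds %.0f ppm%n",
                            spec.regulation(), decl.substance(),
                            decl.concentrationPpm(), spec.thresholdPpm());
                    compliant = false;
                }
            }
        }
        return compliant;
    }

    public static void main(String[] args) {
        // A RoHS-style lead limit of 1000 ppm, checked against a BOM roll-up.
        List<Specification> specs = List.of(new Specification("RoHS (EU)", "Lead", 1000));
        List<SubstanceDeclaration> bomRollup = List.of(new SubstanceDeclaration("Lead", 1200));
        System.out.println("Compliant: " + isCompliant(bomRollup, specs));
    }
}
```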

    Read the article

  • From J2EE to Java EE: what has changed?

    - by Bruno.Borges
    See original @Java_EE tweet on 29 May 2014 Yeap, it has been 8 years since the term J2EE was replaced, and still some people refer to it (mostly recruiters, luckily!). But then comes the question: what has changed besides the name? Our community friend Abhishek Gupta worked on this question and provided an excellent response titled "What's in a name? Java EE? J2EE?". But let me give you a few highlights here so you don't lose yourself with YATO (yet another tab opened): J2EE used to be an infrastructure and resources provider only, requiring developers to depend on external 3rd-party frameworks to then implement application requirements or improve productivity J2EE used to require hundreds of lines of XML code to define just a dozen resources like EJBs, MDBs, Servlets, and so on J2EE used to support only EAR (Enterprise Archives) with a bunch of other archives like JARs and WARs just to run a simple Web application And so on, and so on! It was a great technology but still required a lot of work to get something up and running. Remember xDoclet? Remember Struts? The old days of pure Hibernate code? Or when Ajax became a trending topic and we were all implementing it with DWR Servlet? Still, we J2EE developers survived, and learned, and helped evolve the platform to a whole new level of DX (Developer Experience). A new DX for J2EE suggested a new name. One that referred to the platform as the Enterprise Edition of Java, because "Java is why we're here", quoting Bill Shannon. The release of Java EE 5 included so many features that clearly showed developers the platform was going after all those DX gaps. Radical simplification of the persistence model with the introduction of JPA Support of Annotations following the launch of Java SE 5.0 Updated XML APIs with the introduction of StAX Drastic simplification of the EJB component model (with annotations!) Convention over Configuration and Dependency Injection A few bullets you may say, but they represented a whole new DX and a vision for upcoming versions. Clearly, the release of Java EE 5 helped drive the future of the platform by reducing the number of XML files and Java interfaces, simplifying configuration, providing convention over configuration, etc.! We then saw the release of Java EE 6 with even more great features like Managed Beans, CDI, Bean Validation, improved JSP and Servlets APIs, JASPIC, the possibility to deploy plain WARs and so many other improvements that it is difficult to list them in one sentence. And we've gotta give Spring Framework some credit here: thanks to Rod Johnson and team, concepts like Dependency Injection fit perfectly into the Java EE Platform. Clearly, Spring used to be one of the most inspiring frameworks for the Java EE platform, and it is great to see things like Pivotal and Spring supporting the JSR 352 Batch API standard! Cooperation to keep improving DX to the maximum in the server-side Java landscape. The masterpiece result of these previous releases is what we see and call today Java EE 7, which, by providing a new and improved JavaServer Faces release, new features for Web Development like the WebSockets API, improved JAX-RS, and JSON-P, plus the Batch API and so many other great improvements, has increased developer productivity and brought innovation to server-side Java developers. Java EE is not just a new name (which was introduced back in May 2006!) but a new Developer Experience for server-side Java developers. 
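As a tiny illustration of the annotation-driven simplification listed above (JPA replacing the pages of XML descriptors of the J2EE era), a persistent entity in Java EE 5 and later can be as small as the sketch below; the Customer class itself is a made-up example.

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// A minimal JPA entity: annotations replace the XML mapping files the J2EE era required.
// The class is a made-up example, not taken from the post.
@Entity
public class Customer {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```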
To show you why we are here and where we are going (see the Java EE 8 update), we wanted to share with you a draft of the new Java EE logos that the evangelist team created, to help you spread the word about Java EE. You can get access to these images at the Java EE Platform Facebook Album, or the Google+ Java EE Platform Album, whichever is better for you, but don't forget to like and/or +1 those social network profiles :-) A message to all job recruiters: stop using J2EE and start using Java EE if you want to find great Java EE 5, Java EE 6, or Java EE 7 developers. To not only save you, recruiter, valuable characters when tweeting that job opportunity but also to match the correct term, we invite you to replace long terms like "Java/J2EE", or even worse "#Java #J2EE #JEE", or all these awkward combinations, with the only acceptable hashtag: #JavaEE. And to prove that Java EE is catching on among developers and even recruiters, and that J2EE is past, let me highlight the job trends! The image below is from the Indeed.com trends page, for the following keywords: J2EE, Java/J2EE, Java/JEE, JEE. As you can see, J2EE is indeed going away, while JEE saw some increase. Perhaps that is because some people are just too lazy to type "Java" but at the same time are aware that J2EE (the '2') is past. We shall forgive that for a while :-) Another proof that J2EE is going away comes from looking at its trending statistics at Google. People have been showing less and less interest in the term J2EE. See the chart below:  Recruiter, if you still need proof that J2EE is past, that Java EE is trending, that other job recruiters are seeking Java EE developers, and that the developer community is aware of the new term, perhaps these other charts can show you which term you should be using. See for example the Job Trends for Java EE at Indeed.com and notice where it started... 2006! 8 years ago :-) Last but not least, the Google Trends for the Java EE term (including the still wrong but forgivable JavaEE term) shows us that the new term is catching up very well. J2EE is past. Oh, and don't worry about the curves going down. We developers like to be hipsters sometimes, and today only AngularJS, NodeJS and BigData are going up. Java EE and other traditional server-side technologies such as Spring, or even ones from other platforms such as Ruby on Rails, PHP and Grails, are pretty much consolidated and the curves... well, they are consolidated too. So if you are a Java EE developer, drop that J2EE from your résumé, and let recruiters also know that this term is past. Embrace Java EE, and enjoy a new developer experience for server-side Java developers. Java EE on Twitter | Java EE on Google+ | Java EE on Facebook

    Read the article

  • Master Data

    - by david.butler(at)oracle.com
    Let's take a deeper look at what we mean when we talk about 'Master' data. In its most general sense, master data is data that exists in more than one operational application. These are the applications that automate business processes. These applications require significant amounts of data to function correctly.  This includes data about the objects that are involved in transactions, as well as the transaction data itself.  For example, when a customer buys a product, the transaction is managed by a sales application.  The objects of the transaction are the Customer and the Product.  The transactional data is the time, place, price, discount, payment methods, etc. used at the point of sale. Many thousands of transactional data attributes are needed within the application. These important data elements are local to the applications and have no bearing on other applications. Harmonization and synchronization across applications is not necessary. The Customer and Product objects of the transaction also have a large number of attributes. Customer for example, includes hierarchies, hierarchical and matrixed relationships, contacts, classifications, preferences, accounts, identifiers, profiles, and addresses galore for 'ship to', 'mail to'; 'service at'; etc. Dozens of attributes exist for individuals, hundreds for organizations, and thousands for products. This data has meaning beyond any particular application. It exists in many applications and drives the vital cross application enterprise business processes. These are the processes that define and differentiate the organization. At every decision point, information about the objects of the process determines the direction of the process flow. This is the nature of the data that exists in more than one application, and this is why we call it 'master data'. Let me elaborate. Parties Oracle has developed a party schema to model all participants in your daily business operations. It models people, organizations, groups, customers, contacts, employees, and suppliers. It models their accounts, locations, classifications, and preferences.  And most importantly, it models the vast array of hierarchical and matrixed relationships that exist between all the participants in your real world operations.  The model logically separates people and organizations from their relationships and accounts.  This separation creates flexibility unmatched in the industry and accounts for the fact that the Oracle schema for Customers, Suppliers, and Accounts is a true superset of the wide variety of commercial and homegrown customer models in existence. Sites Sites are places where business is conducted. They can be addresses, clusters such as retail malls, locations within a cluster, floors within a building, places where meters are located, rooms on floors, etc.  Fully understanding all attributes of a site is key to many business processes. Attributes such as 'noise abatement policy' at a point of delivery, or the size of an oven in a business kitchen drive day-to-day activities such as delivery schedules or food promotions. Typically this kind of data is siloed in departments and scattered across applications and spreadsheets.  This leads to conflicting information and poor operational efficiencies. Oracle's Global Single Schema can hold all site attributes in one place and enables a single version of authoritative site information across the enterprise. 
Products and Services The Oracle Global Single Schema also includes a number of entities that define the products and services a company creates and offers for sale. Key entities include Items organized into Catalogs and Price Lists. The Catalog structures provide for the ability to capture different views of a product such as engineering, manufacturing, and service which are based on a unified product model. As a result, designers, manufacturing engineers, purchasers and partners can work simultaneously on a common product definition. The Catalog schema allows for unlimited attributes, combines them into meaningful groups, and maps them to catalog categories to track these different types of information. The model also maps an unlimited number of functional structures for each item. For example, multiple Bills of Material (BOMs) can be constructed representing requirements BOM, features BOM, and packaging BOM for an item. The Catalog model also supports hierarchical information about each item and all standard Global Data Synchronization attributes. Business Processes Utilizing Linked Data Entities Each business entity codified into a centralized master data environment significantly improves the efficiency of the automated business processes that use the consolidated data.  When all the key business entities used by an organization's process are so consolidated, the advantages are multiplied.  The primary reason for business process breakdowns (i.e. data errors across application boundaries) is eliminated. All processes are positively impacted and business process automation is itself automated.  I like to use the "Call to Resolution" business process as an example to help illustrate this important point. It involves call center applications, service applications, RMA applications, transportation applications, inventory applications, etc. Customer, Site, Product and Supplier master data must all be correct and consistent across these applications.  What's more, the data relationships between customer and product, and product and suppliers must be right. This is the minimum quality needed to insure the business process flows without error. But that is not the end of the story. Critical master data attributes such as customer loyalty, profitability, credit worthiness, and propensity to buy can optimize the call center point of contact component of the process. Critical product information such as alternative parts or equivalent products can optimize the resolution selected by the process. A comprehensive understanding of the 'service at' location can help insure multiple trips are avoided in the process. Full supplier information on reliability, delivery delays, and potential alternates can prevent supplier exceptions and play a significant role in optimizing the process.  In other words, these master data attributes enable the optimization of the "Call to Resolution" enterprise business process. Master data supports and guides business process flows. Thus the phrase 'Master Data' is indeed appropriate. MDM is the software that houses, manages, and governs the master data that resides in all applications and controls the enterprise business processes. A complete master data solution takes a data model that holds fully attributed master data entities and their inter-relationships. Oracle has this model. 
Oracle, with its deep understanding of application data, is the logical choice for managing all your master data within the enterprise, whether or not your organization actually runs any Oracle Applications.

    Read the article

  • Securing an ADF Application using OES11g: Part 1

    - by user12587121
Future releases of the Oracle stack should allow ADF applications to be secured natively with Oracle Entitlements Server (OES). In a sequence of postings here I explore one way to achieve this with the current technology, namely OES 11.1.1.5 and ADF 11.1.1.6. ADF Security Basics The Application Development Framework (ADF) is Oracle’s preferred technology for developing GUI based Java applications.  It can be used to develop a UI for Swing applications or, more typically in the Oracle stack, for Web and J2EE applications.  ADF is based on and extends the Java Server Faces (JSF) technology.  To get an idea, Oracle provides an online demo to showcase ADF components. ADF can be used to develop just the UI part of an application, where, for example, the data access layer is implemented using some custom Java beans or EJBs.  However ADF also has its own data access layer, ADF Business Components (ADF BC), that allows rapid integration of data from databases and WebService interfaces into the ADF UI components.   In this way ADF helps implement the MVC approach to building applications with UI and data components. The canonical tutorial for ADF is to open JDeveloper, define a connection to a database, drag and drop a table from the database view to a UI page, build and deploy.  One has an application up and running very quickly with the ability to quickly integrate changes to, for example, the DB schema. ADF allows web pages to be created graphically and components like tables, forms, text fields, graphs and so on to be easily added to a page.  On top of JSF Oracle has added drag and drop tooling with JDeveloper and declarative binding of the UI to the data layer, be it database, WebService or Java beans.  An important addition is the bounded task flow, which is a reusable set of pages and transitions.   ADF adds some steps to the page lifecycle defined in JSF and adds extra widgets including powerful visualizations. It is worth pointing out that the Oracle WebCenter product (portal, content management and so on) is based on and extends ADF. ADF Security ADF comes with its own security mechanism that is exposed by JDeveloper at development time and in the WLS Console and Enterprise Manager (EM) at run time. The security elements that need to be addressed in an ADF application are: authentication, and authorization of access to web pages, task flows, components within the pages, and data being returned from the model layer. One typically relies on WLS to handle authentication, and because of this users and groups will also be handled by WLS.  Typically in a Dev environment, users and groups are stored in the WLS embedded LDAP server. One has a choice when enabling ADF security (Application->Secure->Configure ADF Security) about whether to turn on ADF authorization checking or not: In the case where authorization is enabled for ADF, one defines a set of roles in which we place users, and we then grant access to these roles to the different ADF elements (pages, task flows or elements in a page). An important notion here is the difference between Enterprise Roles and Application Roles. The idea behind an enterprise role is that it is defined in terms of users and LDAP groups from the WLS identity store.  “Enterprise” in the sense that these are things available for use by all applications that use that store.  The other kind of role is an Application Role, and the idea is that a given application will make use of Enterprise roles and users to build up a set of roles for its own use.  
These application roles will be available only to that application. The general idea here is that the enterprise roles are relatively static (for example an Employees group in the LDAP directory) while application roles are more dynamic, possibly depending on time, location, accessed resource and so on.  One of the things that OES adds is that we can define these dynamic membership conditions in Role Mapping Policies. To make this concrete, here is how one assigns these rights at design time in JDeveloper, which puts them into a file called jazn-data.xml: When the ADF app is deployed to a WLS, this JAZN security data is pushed to the system-jazn-data.xml file of the WLS deployment for the policies and application roles, and to the WLS backing LDAP for the users and enterprise roles.  Note the difference here: after deploying the application we will see the users and enterprise roles show up in the WLS LDAP server, but the policies and application roles are defined in the system-jazn-data.xml file.  Consult the embedded WLS LDAP server to manage users and enterprise roles by going to the domain console and then Security Realms->myrealm->Users and Groups: For production environments (or, in future, to share this data with OES) one would then perform the operation of “reassociating” this security policy and application role data to a DB schema (or an LDAP).  This is done in the EM console by reassociating the Security Provider.  This blog posting has more explanations and references on this reassociation process. If ADF Authentication and Authorization are enabled, then the Security Policies for a deployed application can be managed in EM.  Our goal is to be able to manage security policies for the application via OES and its console instead. Security Requirements for an ADF Application With this package tour of ADF security behind us, we can see that to secure an ADF application with OES we would expect to be able to take care of at least the following items: Authentication, including a user and user-group store Authorization for page access Authorization for bounded Task Flow access.  A bounded task flow has only one point of entry, and so if we protect that entry point by calling to OES then all the pages in the flow are protected.  Authorization for viewing data coming from the data access layer In the next posting we will describe a sample ADF application and the required security policies. 
References ADF Dev Guide: Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework: Enabling ADF Security in a Fusion Web Application. Oracle tutorial on securing a sample ADF application (appears to require ADF 11.1.2).

    Read the article

  • Why you need to learn async in .NET

    - by PSteele
I had an opportunity to teach a quick class yesterday about what’s new in .NET 4.0.  One of the topics was the TPL (Task Parallel Library) and how it can make async programming easier.  I also stressed that this is the direction Microsoft is going in for C# 5.0 and that learning the TPL will greatly benefit their understanding of the new async stuff.  We had a little time left over and I was able to show some code that uses the Async CTP to accomplish some stuff, but it wasn’t a simple demo that you could jump into and understand, so I thought I’d throw one together and put it in a blog post. The entire solution file with all of the sample projects is located here. A Simple Example Let’s start with a super-simple example (WindowsApplication01 in the solution). I’ve got a form that displays a label and a button.  When the user clicks the button, I want to start displaying the current time for 15 seconds and then stop. What I’d like to write is this: lblTime.ForeColor = Color.Red; for (var x = 0; x < 15; x++) { lblTime.Text = DateTime.Now.ToString("HH:mm:ss"); Thread.Sleep(1000); } lblTime.ForeColor = SystemColors.ControlText; (Note that I also changed the label’s color while counting – not quite an ILM-level effect, but it adds something to the demo!) As I’m sure most of my readers are aware, you can’t write WinForms code this way.  WinForms apps, by default, only have one thread running and its main job is to process messages from the windows message pump (for a more thorough explanation, see my Visual Studio Magazine article on multithreading in WinForms).  If you put a Thread.Sleep in the middle of that code, your UI will be locked up and unresponsive for those 15 seconds.  Not a good UX and something that needs to be fixed.  Sure, I could throw an “Application.DoEvents()” in there, but that’s hacky. The Windows Timer Then I think, “I can solve that.  I’ll use the Windows Timer to handle the timing in the background and simply notify me when the time has changed”.  Let’s see how I could accomplish this with a Windows timer (WindowsApplication02 in the solution): public partial class Form1 : Form { private readonly Timer clockTimer; private int counter;   public Form1() { InitializeComponent(); clockTimer = new Timer {Interval = 1000}; clockTimer.Tick += UpdateLabel; }   private void UpdateLabel(object sender, EventArgs e) { lblTime.Text = DateTime.Now.ToString("HH:mm:ss"); counter++; if (counter == 15) { clockTimer.Enabled = false; lblTime.ForeColor = SystemColors.ControlText; } }   private void cmdStart_Click(object sender, EventArgs e) { lblTime.ForeColor = Color.Red; counter = 0; clockTimer.Start(); } } Holy cow – things got pretty complicated here.  I use the timer to fire off a Tick event every second.  Inside there, I can update the label.  Granted, I can’t use a simple for loop and have to maintain a global counter for the number of iterations.  And my “end” code (when the loop is finished) is now buried at the bottom of the Tick event (inside an “if” statement).  I do, however, get a responsive application that doesn’t hang or stop repainting while the 15 seconds are ticking away. But doesn’t .NET have something that makes background processing easier? 
The BackgroundWorker Next I try .NET’s BackgroundWorker component – it’s specifically designed to do processing in a background thread (leaving the UI thread free to process the windows message pump) and allows updates to be performed on the main UI thread (WindowsApplication03 in the solution): public partial class Form1 : Form { private readonly BackgroundWorker worker;   public Form1() { InitializeComponent(); worker = new BackgroundWorker {WorkerReportsProgress = true}; worker.DoWork += StartUpdating; worker.ProgressChanged += UpdateLabel; worker.RunWorkerCompleted += ResetLabelColor; }   private void StartUpdating(object sender, DoWorkEventArgs e) { var workerObject = (BackgroundWorker) sender; for (int x = 0; x < 15; x++) { workerObject.ReportProgress(0); Thread.Sleep(1000); } }   private void UpdateLabel(object sender, ProgressChangedEventArgs e) { lblTime.Text = DateTime.Now.ToString("HH:mm:ss"); }   private void ResetLabelColor(object sender, RunWorkerCompletedEventArgs e) { lblTime.ForeColor = SystemColors.ControlText; }   private void cmdStart_Click(object sender, EventArgs e) { lblTime.ForeColor = Color.Red; worker.RunWorkerAsync(); } } Well, this got a little better (I think).  At least I now have my simple for/next loop back.  Unfortunately, I’m still dealing with event handlers spread throughout my code to co-ordinate all of this stuff in the right order. Time to look into the future. The async way Using the Async CTP, I can go back to much simpler code (WindowsApplication04 in the solution): private async void cmdStart_Click(object sender, EventArgs e) { lblTime.ForeColor = Color.Red; for (var x = 0; x < 15; x++) { lblTime.Text = DateTime.Now.ToString("HH:mm:ss"); await TaskEx.Delay(1000); } lblTime.ForeColor = SystemColors.ControlText; } This code will run just like the Timer or BackgroundWorker versions – fully responsive during the updates – yet is way easier to implement.  In fact, it’s almost a line-for-line copy of the original version of this code.  All of the async plumbing is handled by the compiler and the framework.  My code goes back to representing the “what” of what I want to do, not the “how”. I urge you to download the Async CTP.  All you need is .NET 4.0 and Visual Studio 2010 sp1 – no need to set up a virtual machine with the VS2011 beta (unless, of course, you want to dive right into the C# 5.0 stuff!).  Start playing around with this today and see how much easier it will be in the future to write async-enabled applications.
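To give a feel for how this composes beyond the demo, here is a minimal sketch of my own (it is not part of the original sample solution, and the ShowClockAsync helper name is made up) that factors the clock loop into an awaitable method and adds basic exception handling. It assumes the same form and the Async CTP's TaskEx.Delay helper used above; on .NET 4.5 or later you would use Task.Delay instead.

private async void cmdStart_Click(object sender, EventArgs e)
{
    cmdStart.Enabled = false;            // keep the user from double-starting the clock
    lblTime.ForeColor = Color.Red;
    try
    {
        await ShowClockAsync(TimeSpan.FromSeconds(15));
    }
    catch (Exception ex)
    {
        // Exceptions thrown by awaited code surface here, back on the UI thread.
        MessageBox.Show(ex.Message);
    }
    finally
    {
        lblTime.ForeColor = SystemColors.ControlText;
        cmdStart.Enabled = true;
    }
}

private async Task ShowClockAsync(TimeSpan duration)
{
    var stop = DateTime.Now + duration;
    while (DateTime.Now < stop)
    {
        lblTime.Text = DateTime.Now.ToString("HH:mm:ss");
        await TaskEx.Delay(1000);        // yields to the message pump instead of blocking it
    }
}

The point of the sketch is that the async method still reads top-to-bottom like the naive version at the start of the post, yet the UI stays responsive and ordinary try/catch/finally error handling keeps working.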

    Read the article

  • Oracle OpenWorld 2013 – Wrap up by Sven Bernhardt

    - by JuergenKress
OOW 2013 is over and we’re heading home, so it is time to lean back and reflect on the impressions we have from the conference. First of all: OOW was great! It was a pleasure to be a part of it. As already mentioned in our last blog article, it was the biggest OOW ever. Parallel to the conference the America’s Cup took place in San Francisco and the Oracle Team America won. Amazing job by the team and, again, congratulations from our side. Back to the conference. The main topics for us are: Oracle SOA / BPM Suite 12c Adaptive Case Management (ACM) Big Data Fast Data Cloud Mobile Below we will go into a little more detail about the key takeaways regarding the mentioned points: Oracle SOA / BPM Suite 12c During the five days at OOW, first details of the upcoming major release of Oracle SOA Suite 12c and Oracle BPM Suite 12c were introduced. Some new key features are: Managed File Transfer (MFT) for transferring big files from a source to a target location Enhanced REST support by introducing a new REST binding Introduction of a generic cloud adapter, which can be used to connect to different cloud providers, like Salesforce Enhanced analytics with BAM, which has been totally reengineered (BAM Console now also runs in Firefox!) Introduction of templates (OSB pipelines, component templates, BPEL activity templates) EM as a single monitoring console OSB design-time integration into JDeveloper (Really great!) Enterprise modeling capabilities in BPM Composer These are only a few points from what is coming with 12c. We are really looking forward to the new release coming out, because this seems to be really great stuff. The suite becomes more and more integrated. From 10g to 11g it was an evolution in terms of developing SOA-based applications. With 12c, Oracle continues its way – very impressive. Adaptive Case Management Another fantastic topic was Adaptive Case Management (ACM). The Oracle PMs did a great job, especially at the demo grounds, in showing the upcoming Case Management UI (which will be available in 11g with the next BPM Suite MLR Patch), the roadmap and the differences from traditional business process modeling. They have been very busy during the conference because a lot of partners and customers have been interested. Big Data Big Data is one of the current hype themes. Because of huge data amounts from different internal or external sources, the handling of this data becomes more and more challenging. Companies have a need for analyzing the data to optimize their business. The challenge here is that the amount of data is growing daily! To store and analyze the data efficiently, it is necessary to have a scalable and flexible infrastructure. Here it is important that hardware and software are engineered to work together. Therefore several new features of the Oracle Database 12c, like the new in-memory option, have been presented by Larry Ellison himself. On the hardware side, new server machines like the Fujitsu M10 and new processors, such as Oracle’s new M6-32, have been announced. The performance improvements, when using one of these hardware components in connection with the improved software solutions, were really impressive. For more details about this, please take a look at our previous blog post. 
Regarding Big Data, Oracle also introduced their Big Data architecture, which consists of: Oracle Big Data Appliance that is preconfigured with Hadoop Oracle Exadata which stores a huge amount of data efficiently, to achieve optimal query performance Oracle Exalytics as a fast and scalable Business analytics system Analysis of the stored data can be performed using SQL, by streaming the data directly from Hadoop to an Oracle Database 12c. Alternatively the analysis can be implemented directly in Hadoop using “R”. In addition Oracle BI Tools can be used to analyze the data. Fast Data Fast Data is a complementary approach to Big Data. A huge amount of mostly unstructured data comes in via different channels with a high frequency. The analysis of these data streams is also important for companies, because the incoming data has to be analyzed for business-relevant patterns in real time. Therefore these patterns must be identified efficiently and with high performance. To do so, in-memory grid solutions in combination with Oracle Coherence and Oracle Event Processing demonstrated very impressively how efficient real-time data processing can be. One example of the Fast Data solutions that was shown during the OOW was the analysis of twitter streams regarding customer satisfaction. The feeds with negative words like “bad” or “worse” were filtered and, after a defined threshold had been reached in a certain timeframe, a business event was triggered. Cloud Another key trend in the IT market is of course Cloud Computing and what it means for companies and their businesses. Oracle announced their Cloud strategy and vision – companies can focus on their real business while all of the applications are available via the Cloud. This also includes Oracle Database and Oracle WebLogic, so that companies can also build, deploy and run their own applications within the cloud. Three different approaches have been introduced: Infrastructure as a Service (IaaS) Platform as a Service (PaaS) Software as a Service (SaaS) Using the IaaS approach, only the infrastructure components will be managed in the Cloud. Customers will be very flexible regarding memory, storage or number of CPUs because those parameters can be adjusted elastically. The PaaS approach means that, besides the infrastructure, the platforms (such as databases or application servers) necessary for running applications will also be provided within the Cloud. Here customers can also decide whether installation and management of these infrastructure components should be done by Oracle. The SaaS approach is the most complete one: all applications a company uses are managed in the Cloud. Oracle is planning to provide all of their applications, like ERP systems or HR applications, as Cloud services. In conclusion this seems to be a very forward-thinking strategy, which opens up new possibilities for customers to manage their infrastructure and applications in a flexible, scalable and future-oriented manner. As you can see, our OOW days have been very interesting. We collected much helpful information for our projects. The innovations presented at the conference are great and being part of this was even greater! We are looking forward to next year’s conference! 
Links: http://www.oracle.com/openworld/index.html http://thecattlecrew.wordpress.com/2013/09/23/first-impressions-from-oracle-open-world-2013 SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: cattleCrew,Sven Bernhard,OOW2013,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • How to create a virtual network with Azure Connect

    - by Herve Roggero
    If you are trying to establish a virtual network between machines located in disparate networks, you can either use VPN, Virtual Network or Azure Connect. If you want to establish a connection between machines located in Windows Azure, you should consider using the Virtual Network service. If you want to establish a connection between local machines and Virtual Machines in Windows Azure, you may be able to use your existing VPN device (assuming you have one), as long as the device is supported by Microsoft. If the VPN device you are using isn’t supported, or if you are trying to create a virtual network between machines from disparate networks (such as machines located in another cloud provider), you can use Azure Connect. This blog post explains how Azure Connect can help you create virtual networks between multiple servers in the cloud, various servers in different cloud environments, and on-premise. Note: Azure Connect is currently in Technical Preview. About Azure Connect Let’s do a quick review of Azure Connect. This technology implements an IPSec tunnel from machines to to a relay service located in the Microsoft cloud (Azure). So in essence, Azure Connect doesn’t provide a point-to-point connection between machines; the network communication is tunneled through the relay service. The relay service in turn offers a mechanism to enforce basic communication rules that you define through Groups. We will review this later. You could network two or more VMs in the Azure cloud (although you should consider using a Virtual Network if you go this route), or servers in the Azure cloud and other machines in the Amazon cloud for example, or even two or more on-premise servers located in different locations for which a direct network connection is not an option. You can place any number of machines in your topology. Azure Connect gives you great flexibility on how you want to build your virtual network across various environments. So Azure Connect makes sense when you want to: Connect machines located in different cloud providers Connect on-premise machines running in different locations Connect Azure VMs with on-premise (if you do not have a VPN device, or if your device is not supported) Connect Azure Roles (Worker Roles, Web Roles) with on-premise servers or in other cloud providers The diagram below shows you a high level network topology that involves machines in the Windows Azure cloud, other cloud providers and on-premise. You should note that the only required component in this diagram is the Relay itself. The other machines are optional (although your network is useful only if you have two or more machines involved). Relay agents are currently available in three geographic areas: US, Europe and Asia. You can change which region you want to use in the Windows Azure management portal. High Level Network Topology With Azure Connect Azure Connect Agent Azure Connect establishes a virtual network and creates virtual adapters on your machines; these virtual adapters communicate through the Relay using IPSec. This is achieved by installing an agent (the Azure Connect Agent) on all the machines you want in your network topology. However, you do not need to install the agent on Worker Roles and Web Roles; that’s because the agent is already installed for you. Any other machine, including Virtual Machines in Windows Azure, needs the agent installed.  To install the agent, simply go to your Windows Azure portal (http://windows.azure.com) and click on Networks on the bottom left panel. 
You will see a list of subscriptions under Connect. If you select a subscription, you will be able to click on the Install Local Endpoint icon on top. Clicking on this icon will begin the download and installation process for the agent. Activating Roles for Azure Connect As previously mentioned, you do not need to install the Azure Connect Agent on Worker Roles and Web Roles because it is already loaded. However, you do need to activate them if you want the roles to participate in your network topology. To do this, you will need to click on the Get Activation Token icon. The activation token must then be copied and placed in the configuration file of your roles. For more information on how to perform this step, visit MSDN at http://msdn.microsoft.com/en-us/library/windowsazure/gg432964.aspx. Firewall Rules Note that specific firewall rules must exist to allow the agent to communicate through the Relay. You will need to allow TCP 443 and ICMPv6. For additional information, please visit MSDN at http://msdn.microsoft.com/en-us/library/windowsazure/gg433061.aspx. CA Certificates You can optionally require agents to sign their activation request with the Relay using a trusted certificate issued by a Certificate Authority (CA). Click on Activation Options to learn more. Groups To create your network topology you must first create a group. A group represents a logical container of endpoints (or machines) that can communicate through the Relay. You can create multiple groups allowing you to manage network communication differently. For example you could create a DEVELOPMENT group and a PRODUCTION group. To add an endpoint you must first install an agent that will create a virtual adapter on the machine on which it is installed (as discussed in the previous section). Once you have created a group and installed the agents, the machines will appear in the Windows Azure management portal and you can start assigning machines to groups. The next figure shows you that I created a group called LocalGroup and assigned two machines (both on-premise) to that group. Groups and Computers in Azure Connect As I mentioned previously you can allow these machines to establish a network connection. To do this, you must enable the Interconnected option in the group. The following diagram shows you the definition of the group. In this topology I chose to include local machines only, but I could also add worker roles and web roles in the Azure Roles section (you must first activate your roles, as discussed previously). You could also add other Groups, allowing you to manage inter-group communication. Defining a Group in Azure Connect Testing the Connection Now that my agents have been installed on my two machines, the group defined and the Interconnected option checked, I can test the connection between my machines. The next screenshot shows you that I sent a PING request to DEVLAP02 from DEVDSK02. The PING request was successful. Note however that the time is in the hundreds of milliseconds on average. That is to be expected because the machines are connecting through the Relay located in the cloud. Going through the Relay introduces an extra hop in the communication chain, so if your systems rely on high performance, you may want to conduct some basic performance tests. Sending a PING Request Through The Relay Conclusion As you can see, creating a network topology between machines using the Azure Connect service is simple. 
It took me less than five minutes to create the above configuration, including the time it took to install the Azure Connect agents on the two machines. The flexibility of Azure Connect allows you to create a virtual network between disparate environments, as long as your operating systems are supported by the agent. For more information on Azure Connect, visit the MSDN website at http://msdn.microsoft.com/en-us/library/windowsazure/gg432997.aspx. About Herve Roggero Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net. Special Thanks I would like thank those that helped me figure out how Azure Connect works: Marcel Meijer - http://blogs.msmvps.com/marcelmeijer/ Michael Wood - Http://www.mvwood.com Glenn Block - http://www.codebetter.com/glennblock Yves Goeleven - http://cloudshaper.wordpress.com/ Sandrino Di Mattia - http://fabriccontroller.net/ Mike Martin - http://techmike2kx.wordpress.com

    Read the article

  • RiverTrail - JavaScript GPGPU Data Parallelism

    - by JoshReuben
Where is WebCL? The Khronos WebCL working group is working on a JavaScript binding to the OpenCL standard so that HTML 5 compliant browsers can host GPGPU web apps – e.g. for image processing or physics for WebGL games - http://www.khronos.org/webcl/ . While Nokia & Samsung have some prototype WebCL APIs, Intel has one-upped them with a higher level of abstraction: RiverTrail. Intro to RiverTrail Intel Labs JavaScript RiverTrail provides GPU accelerated SIMD data-parallelism in web applications via a familiar JavaScript programming paradigm. It extends JavaScript with simple deterministic data-parallel constructs that are translated at runtime into a low-level hardware abstraction layer. With its high-level JS API, programmers do not have to learn a new language or explicitly manage threads, orchestrate shared data synchronization or scheduling. It has been proposed as a draft specification to ECMA (known as an ECMA strawman). RiverTrail runs in all popular browsers (except I.E. of course). To get started, download a prebuilt version https://github.com/downloads/RiverTrail/RiverTrail/rivertrail-0.17.xpi , install Intel's OpenCL SDK http://www.intel.com/go/opencl and try out the interactive River Trail shell http://rivertrail.github.com/interactive For a video overview, see http://www.youtube.com/watch?v=jueg6zB5XaM . ParallelArray The ParallelArray type is the central component of this API & is a JS object that contains ordered collections of scalars – i.e. multidimensional uniform arrays. A shape property describes the dimensionality and size – e.g. a 2D RGBA image will have shape [height, width, 4]. ParallelArrays are immutable & fluent – they are manipulated by invoking methods on them which produce new ParallelArray objects. ParallelArray supports several constructors over arrays, functions & even the canvas. // Create an empty Parallel Array var pa = new ParallelArray(); // pa0 = <>   // Create a ParallelArray out of a nested JS array. // Note that the inner arrays are also ParallelArrays var pa = new ParallelArray([ [0,1], [2,3], [4,5] ]); // pa1 = <<0,1>, <2,3>, <4,5>>   // Create a two-dimensional ParallelArray with shape [3, 2] using the comprehension constructor var pa = new ParallelArray([3, 2], function(iv){return iv[0] * iv[1];}); // pa7 = <<0,0>, <0,1>, <0,2>>   // Create a ParallelArray from canvas.  This creates a PA with shape [w, h, 4], var pa = new ParallelArray(canvas); // pa8 = CanvasPixelArray   ParallelArray exposes fluent API functions that take an elemental JS function for data manipulation: map, combine, scan, filter, and scatter that return a new ParallelArray. Other functions are scalar - reduce returns a scalar value & get returns the value located at a given index. The onus is on the developer to ensure that the elemental function does not defeat data parallelization optimization (avoid global var manipulation, recursion). For reduce & scan, order is not guaranteed - the onus is on the dev to provide an elemental function that is commutative and associative so that scan will be deterministic – e.g. Sum is associative, but Avg is not. map Applies a provided elemental function to each element of the source array and stores the result in the corresponding position in the result array. The map method is shape preserving & index free - it cannot inspect neighboring values. // Adding one to each element. 
var source = new ParallelArray([1,2,3,4,5]); var plusOne = source.map(function inc(v) {     return v+1; }); //<2,3,4,5,6> combine Combine is similar to map, except an index is provided. This allows elemental functions to access elements from the source array relative to the one at the current index position. While the map method operates on the outermost dimension only, combine can choose how deep to traverse - it provides a depth argument to specify the number of dimensions it iterates over. The elemental function of combine accesses the source array & the current index within it - the element is computed by calling the get method of the source ParallelArray object with index i as argument. It requires more code but is more expressive. var source = new ParallelArray([1,2,3,4,5]); var plusOne = source.combine(function inc(i) { return this.get(i)+1; }); reduce Reduces the elements from an array to a single scalar result – e.g. Sum. // Calculate the sum of the elements var source = new ParallelArray([1,2,3,4,5]); var sum = source.reduce(function plus(a,b) { return a+b; }); scan Like reduce, but stores the intermediate results – returns a ParallelArray whose ith element is the result of using the elemental function to reduce the elements between 0 and i in the original ParallelArray. // do a partial sum var source = new ParallelArray([1,2,3,4,5]); var psum = source.scan(function plus(a,b) { return a+b; }); //<1, 3, 6, 10, 15> scatter A reordering function - specifies, for a certain source index, where it should be stored in the result array. An optional conflict function can prevent an exception if two source values are assigned the same position of the result: var source = new ParallelArray([1,2,3,4,5]); var reorder = source.scatter([4,0,3,1,2]); // <2, 4, 5, 3, 1> // if there is a conflict use the max. use 33 as a default value. var reorder = source.scatter([4,0,3,4,2], 33, function max(a, b) {return a>b?a:b; }); //<2, 33, 5, 3, 4> filter // filter out values that are not even var source = new ParallelArray([1,2,3,4,5]); var even = source.filter(function even(iv) { return (this.get(iv) % 2) == 0; }); // <2,4> Flatten Used to collapse the outer dimensions of an array into a single dimension. pa = new ParallelArray([ [1,2], [3,4] ]); // <<1,2>,<3,4>> pa.flatten(); // <1,2,3,4> Partition Used to restore the original shape of the array. var pa = new ParallelArray([1,2,3,4]); // <1,2,3,4> pa.partition(2); // <<1,2>,<3,4>> Get Returns the value found at the indices, or undefined if no such value exists. var pa = new ParallelArray([0,1,2,3,4], [10,11,12,13,14], [20,21,22,23,24]) pa.get([1,1]); // 11 pa.get([1]); // <10,11,12,13,14>

    Read the article

  • Why do we (really) program to interfaces?

    - by Kyle Burns
One of the earliest lessons I was taught in Enterprise development was "always program against an interface".  This was back in the VB6 days and I quickly learned that no code would be allowed to move to the QA server unless my business objects and data access objects were each defined as an interface with a matching implementation class.  Why?  "It's more reusable" was one answer.  "It doesn't tie you to a specific implementation" a slightly more knowing answer.  And let's not forget the discussion ending "it's a standard".  The problem with these responses was that senior people didn't really understand the reason we were doing the things we were doing and because of that, we were entirely unable to realize the intent behind the practice - we simply used interfaces and had a bunch of extra code to maintain to show for it. It wasn't until a few years later that I finally heard the term "Inversion of Control".  Simply put, "Inversion of Control" takes the creation of objects that used to be within the control (and therefore a responsibility) of your component and moves it to some outside force.  For example, consider the following code which follows the old "always program against an interface" rule in the manner of many corporate development shops: ICatalog catalog = new Catalog(); Category[] categories = catalog.GetCategories(); In this example, I met the requirement of the rule by declaring the variable as ICatalog, but I didn't hit "it doesn't tie you to a specific implementation" because I explicitly created an instance of the concrete Catalog object.  If I want to test the functionality of the code I just wrote I have to have an environment in which Catalog can be created along with any of the resources upon which it depends (e.g. configuration files, database connections, etc) in order to test my functionality.  That's a lot of setup work and one of the things that I think ultimately discourages real buy-in of unit testing in many development shops. So how do I test my code without needing Catalog to work?  A very primitive approach I've seen is to change the line that instantiates catalog to read: ICatalog catalog = new FakeCatalog();   once the test is run and passes, the code is switched back to the real thing.  This obviously poses a huge risk for introducing test code into production and in my opinion is worse than just keeping the dependency and its associated setup work.  Another popular approach is to make use of Factory methods which use an object whose "job" is to know how to obtain a valid instance of the object.  Using this approach, the code may look something like this: ICatalog catalog = CatalogFactory.GetCatalog();   The code inside the factory is responsible for deciding "what kind" of catalog is needed.  This is a far better approach than the previous one, but it does make projects grow considerably because now in addition to the interface, the real implementation, and the fake implementation(s) for testing you have added a minimum of one factory (or at least a factory method) for each of your interfaces.  Once again, developers say "that's too complicated and has me writing a bunch of useless code" and quietly slip back into just creating a new Catalog and chalking any test failures up to "it will probably work on the server". This is where software intended specifically to facilitate Inversion of Control comes into play.  
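As an aside, here is a rough sketch (my own naming, not from the original post) of what the CatalogFactory approach above might look like, with a hook that lets a test swap in the fake without touching production call sites:

public static class CatalogFactory
{
    // Swapped by configuration or by test setup code; defaults to the real thing.
    public static Func<ICatalog> Create = () => new Catalog();

    public static ICatalog GetCatalog()
    {
        return Create();
    }
}

// In a unit test, the factory can be redirected before the code under test runs:
// CatalogFactory.Create = () => new FakeCatalog();

Even with this hook, every interface still drags along its own factory, which is exactly the overhead that the IoC containers discussed next are meant to remove.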
There are many libraries that take on the Inversion of Control responsibilities in .Net and most of them have many pros and cons.  From this point forward I'll discuss concepts from the standpoint of the Unity framework produced by Microsoft's Patterns and Practices team.  I'm primarily focusing on this library because questions about it inspired this posting. At Unity's core and that of most any IoC framework is a catalog or registry of components.  This registry can be configured either through code or using the application's configuration file and in the most simple terms says "interface X maps to concrete implementation Y".  It can get much more complicated, but I want to keep things at the "what does it do" level instead of "how does it do it".  The object that exposes most of the Unity functionality is the UnityContainer.  This object exposes methods to configure the catalog as well as the Resolve<T> method which is used to obtain an instance of the type represented by T.  When using the Resolve<T> method, Unity does not necessarily have to just "new up" the requested object, but also can track dependencies of that object and ensure that the entire dependency chain is satisfied. There are three basic ways that I have seen Unity used within projects.  Those are through classes directly using the Unity container, classes requiring injection of dependencies, and classes making use of the Service Locator pattern. The first usage of Unity is when classes are aware of the Unity container and directly call its Resolve method whenever they need the services advertised by an interface.  The up side of this approach is that IoC is utilized, but the down side is that every class has to be aware that Unity is being used and tied directly to that implementation. Many developers don't like the idea of as close a tie to a specific IoC implementation as is represented by using Unity within all of your classes and for the most part I agree that this isn't a good idea.  As an alternative, classes can be designed for Dependency Injection.  Dependency Injection is where a force outside the class itself manipulates the object to provide implementations of the interfaces that the class needs to interact with the outside world.  This is typically done either through constructor injection where the object has a constructor that accepts an instance of each interface it requires or through property setters accepting the service providers.  When using dependency injection, I lean toward the use of constructor injection because I view the constructor as being a much better way to "discover" what is required for the instance to be ready for use.  During resolution, Unity looks for an injection constructor and will attempt to resolve instances of each interface required by the constructor, throwing an exception if unable to meet the advertised needs of the class.  The up side of this approach is that the needs of the class are very clearly advertised and the class is unaware of which IoC container (if any) is being used.  The down side of this approach is that you're required to maintain the objects passed to the constructor as instance variables throughout the life of your object and that objects which coordinate with many external services require a lot of additional constructor arguments (this gets ugly and may indicate a need for refactoring). 
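To make the constructor injection idea concrete, here is a hedged sketch; the CatalogService class and its CountCategories method are invented purely for illustration, while Resolve<T> is the container call described above and RegisterType is its companion registration call on UnityContainer:

public class CatalogService
{
    private readonly ICatalog _catalog;

    // The constructor advertises exactly what this class needs from the outside world.
    public CatalogService(ICatalog catalog)
    {
        _catalog = catalog;
    }

    public int CountCategories()
    {
        return _catalog.GetCategories().Length;
    }
}

// Composition root - the only place that knows about concrete types.
var container = new UnityContainer();
container.RegisterType<ICatalog, Catalog>();

// Unity builds CatalogService and satisfies its constructor with the registered Catalog.
var service = container.Resolve<CatalogService>();

In a test, the same class can be constructed directly with a fake - new CatalogService(new FakeCatalog()) - with no container involved at all, which is the real payoff of advertising dependencies through the constructor.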
The final way that I've seen and used Unity is to make use of the ServiceLocator pattern, of which the Patterns and Practices team has also provided a Unity-compatible implementation.  When using the ServiceLocator, your class calls ServiceLocator.Retrieve in places where it would have called Resolve on the Unity container.  Like using Unity directly, it does tie you directly to the ServiceLocator implementation and makes your code aware that dependency injection is taking place, but it does have the up side of giving you the freedom to swap out the underlying IoC container if necessary.  I'm not hugely concerned with hiding IoC entirely from the class (I view this as a "nice to have"), so the single biggest problem that I see with the ServiceLocator approach is that it provides no way to proactively advertise needs in the way that constructor injection does, allowing more opportunity for difficult to track runtime errors. This blog entry has not been intended in any way to be a definitive work on IoC, but rather as something to spur thought about why we program to interfaces and some ways to reach the intended value of the practice instead of having it just complicate your code.  I hope that it helps somebody begin or continue a journey away from being a "Cargo Cult Programmer".

    Read the article

  • Create Auto Customization Criteria OAF Search Page

    - by PRajkumar
1. Create a New Workspace and Project Right click Workspaces and click Create New OA Workspace and name it PRajkumarCustSearch. Automatically a new OA Project will also be created. Name the project CustSearchDemo and the package prajkumar.oracle.apps.fnd.custsearchdemo   2. Create a New Application Module (AM) Right Click on CustSearchDemo > New > ADF Business Components > Application Module Name -- CustSearchAM Package -- prajkumar.oracle.apps.fnd.custsearchdemo.server   3. Enable Passivation for the Root UI Application Module (AM) Right Click on CustSearchAM > Edit SearchAM > Custom Properties > Name – RETENTION_LEVEL Value – MANAGE_STATE Click Add > Apply > OK   4. Create a Test Table and insert some data into it (for testing purposes)   CREATE TABLE xx_custsearch_demo (   -- ---------------------     -- Data Columns     -- ---------------------     column1                  VARCHAR2(100),     column2                  VARCHAR2(100),     column3                  VARCHAR2(100),     column4                  VARCHAR2(100),     -- ---------------------     -- Who Columns     -- ---------------------     last_update_date    DATE         NOT NULL,     last_updated_by     NUMBER   NOT NULL,     creation_date          DATE         NOT NULL,     created_by               NUMBER   NOT NULL,     last_update_login   NUMBER  );   INSERT INTO xx_custsearch_demo VALUES('v1','v2','v3','v4',SYSDATE,0,SYSDATE,0,0); INSERT INTO xx_custsearch_demo VALUES('v1','v3','v4','v5',SYSDATE,0,SYSDATE,0,0); INSERT INTO xx_custsearch_demo VALUES('v2','v3','v4','v5',SYSDATE,0,SYSDATE,0,0); INSERT INTO xx_custsearch_demo VALUES('v3','v4','v5','v6',SYSDATE,0,SYSDATE,0,0); Now we have 4 records in our custom table   5. Create a New Entity Object (EO) Right click on SearchDemo > New > ADF Business Components > Entity Object Name – CustSearchEO Package -- prajkumar.oracle.apps.fnd.custsearchdemo.schema.server Database Objects -- XX_CUSTSEARCH_DEMO   Note – By default, ROWID will be the primary key if we do not make any column the primary key   Check the Accessors, Create Method, Validation Method and Remove Method   6. Create a New View Object (VO) Right click on CustSearchDemo > New > ADF Business Components > View Object Name -- CustSearchVO Package -- prajkumar.oracle.apps.fnd.custsearchdemo.server   In Step 2 on the Entity page select CustSearchEO and shuttle it to the selected list   In Step 3 in the Attributes window select columns Column1, Column2, Column3, Column4, and shuttle them to the selected list   On the Java page deselect Generate Java File for View Object Class: CustSearchVOImpl and select Generate Java File for View Row Class: CustSearchVORowImpl   7. Add Your View Object to the Root UI Application Module Right click on CustSearchAM > Application Modules > Data Model Select CustSearchVO and shuttle it to the Data Model list   8. Create a New Page Right click on CustSearchDemo > New > Web Tier > OA Components > Page Name -- CustSearchPG Package -- prajkumar.oracle.apps.fnd.custsearchdemo.webui   9. Select the CustSearchPG and go to the structure pane where a default region has been created   10. Select region1 and set the following properties: ID -- PageLayoutRN Region Style -- PageLayout AM Definition -- prajkumar.oracle.apps.fnd.custsearchdemo.server.CustSearchAM Window Title – AutoCustomization Search Page Title – AutoCustomization Search Page Auto Footer -- True   11. 
Add a Query Bean to Your Page Right click on PageLayoutRN > New > Region Select the new region region1 and set the following properties ID – QueryRN Region Style – query Construction Mode – autoCustomizationCriteria Include Simple Panel – False Include Views Panel – False Include Advanced Panel – False   12. Create a New Region of style table Right Click on QueryRN > New > Region Using Wizard Application Module – prajkumar.oracle.apps.fnd.custsearchdemo.server.CustSearchAM Available View Usages – CustSearchVO1   In Step 2 in Region Properties set the following properties Region ID – CustSearchTable Region Style – Table   In Step 3 in View Attributes shuttle all the items (Column1, Column2, Column3, Column4) available in “Available View Attributes” to Selected View Attributes   In Step 4 on the Region Items page set the style to “messageStyledText” for all items   13. Select CustSearchTable in the Structure Panel and set the property Width to 100%   14. Include Simple Search Panel Right Click on QueryRN > New > simpleSearchPanel Region2 (header region) and region1 (messageComponentLayout region) are created automatically Set the following properties for region2 Id – SimpleSearchHeader Text -- Simple Search   15. Now right click on the message Component Layout Region (SimpleSearchMappings) and create two message text input beans and set the below properties on each   Message TextInputBean1 Id – SearchColumn1 Search Allowed – True Data Type – VARCHAR2 Maximum Length – CSS Class – OraFieldText Prompt – Column1   Message TextInputBean2 Id – SearchColumn2 Search Allowed -- True Data Type – VARCHAR2 Maximum Length – 100 CSS Class – OraFieldText Prompt – Column2   16. Now Right Click on Query Components and create Simple Search Mappings. SimpleSearchMappings and QueryCriteriaMap1 are then created automatically   17. Now select QueryCriteriaMap1 and set the below properties Id – SearchColumn1Map Search Item – SearchColumn1 Result Item – Column1   18. Now again right click on simpleSearchMappings -> New -> queryCriteriaMap, and then set the below properties Id – SearchColumn2Map Search Item – SearchColumn2 Result Item – Column2   19. Congratulations, you have successfully finished the Auto Customization Search page. Run your CustSearchPG page and test your work

    Read the article

  • Who could ask for more with LESS CSS? (Part 3 of 3 – Clrizr)

    - by ToString(theory);
Welcome back!  In the first two posts in this series, I covered some of the awesome features in CSS precompilers such as SASS and LESS, as well as how to get an initial project setup up and running in ASP.Net MVC 4. In this post, I will cover an actual advanced example of using LESS in a project, and show some of the great productivity features we gain from its usage. Introduction In the first post, I mentioned two subjects that I will be using in this example – constants, and color functions.  I’ve always enjoyed using online color scheme utilities such as Adobe Kuler or Color Scheme Designer to come up with a scheme based off of one primary color.  Using these tools and requesting a complementary scheme, you can get a couple of shades of your primary color, and a couple of shades of a complementary/accent color to display. Because there is no way in regular css to do color operations or store variables, there was no way to accomplish something like defining a primary color and having a site theme cascade off of that.  However with tools such as LESS, that impossibility becomes a reality!  So, if you haven’t guessed it by now, this post is on the creation of a plugin/module/less file to drop into your project, plug in one color, and have your primary theme cascade from it.  I only went through the trouble of creating a module for getting Complementary colors.  However, it wouldn’t be too much trouble to go through other options such as Triad or Monochromatic to get a module that you could use off of that. Step 1 – Analysis I decided to mimic Adobe Kuler’s Complementary theme algorithm as I liked its simplicity and aesthetics.  Color Scheme Designer is great, but I do believe it can give you too many color options, which can lead to chaos and overload.  The first thing I had to check was whether the complementary values for the color schemes were actually hues rotated by 180 degrees at all times – they aren’t.  Apparently Adobe applies some variance to the complementary colors to get colors that are actually more aesthetically appealing to users.  So, I opened up Excel and began to plot complementary hues based on rotation in increments of 10: Long story short, I completed the same calculations for Hue, Saturation, and Lightness.  For Hue, I only had to record the Complementary hue values; however, for saturation and lightness, I had to record the values for ALL of the shades.  Since the functions were too complicated to put into LESS (they aren’t constant/linear, but rather interval functions), I instead opted to extrapolate the HSL values using the trendline function for each major interval, onto intervals of spacing 1. For example, the hue extraction produced a fitted trendline function for each of the intervals 0-60, 60-140, 140-270, and 270-360. Saturation and Lightness were much worse, but in the end, I finally had functions for all of the intervals, and then went the route of just grabbing each shade’s value in intervals of 1.  Step 2 – Mapping I declared variable names for each of these sections as something that shouldn’t ever conflict with a variable someone would define in their own file.  
After I had each of the values, I extracted the values and put them into files of their own for hue variables, saturation variables, and lightness variables…  Example: /*HUE CONVERSIONS*/@clrizr-hue-source-0deg: 133.43;@clrizr-hue-source-1deg: 135.601;@clrizr-hue-source-2deg: 137.772;@clrizr-hue-source-3deg: 139.943;@clrizr-hue-source-4deg: 142.114;.../*SATURATION CONVERSIONS*/@clrizr-saturation-s2SV0px: 0;@clrizr-saturation-s2SV1px: 0;@clrizr-saturation-s2SV2px: 0;@clrizr-saturation-s2SV3px: 0;@clrizr-saturation-s2SV4px: 0;.../*LIGHTNESS CONVERSIONS*/@clrizr-lightness-s2LV0px: 30;@clrizr-lightness-s2LV1px: 31;@clrizr-lightness-s2LV2px: 32;@clrizr-lightness-s2LV3px: 33;@clrizr-lightness-s2LV4px: 34;...   In the end, I have 973 lines of mapping/conversion from source HSL to shade HSL for two extra primary shades, and two complementary shades. The last bit of the work was the file to compose each of the shades from these mappings. Step 3 – Clrizr Mapper The final step was the hardest to overcome as I was still trying to understand LESS to its fullest extent.  Imports As mentioned previously, I had separated the HSL mappings into different files, so the first necessary step is to import those for use into the Clrizr plugin: @import url("hue.less");@import url("saturation.less");@import url("lightness.less"); Extract Component Values For Each Shade Next, I extracted the necessary information for each shade HSL before shade composition: @clrizr-input-saturation: 1px+floor(saturation(@clrizr-input))-1;@clrizr-input-lightness: 1px+floor(lightness(@clrizr-input))-1; @clrizr-complementary-hue: formatstring("clrizr-hue-source-{0}", ceil(hue(@clrizr-input))); @clrizr-primary-2-saturation: formatstring("clrizr-saturation-s2SV{0}",@clrizr-input-saturation);@clrizr-primary-1-saturation: formatstring("clrizr-saturation-s1SV{0}",@clrizr-input-saturation);@clrizr-complementary-1-saturation: formatstring("clrizr-saturation-c1SV{0}",@clrizr-input-saturation); @clrizr-primary-2-lightness: formatstring("clrizr-lightness-s2LV{0}",@clrizr-input-lightness);@clrizr-primary-1-lightness: formatstring("clrizr-lightness-s1LV{0}",@clrizr-input-lightness);@clrizr-complementary-1-lightness: formatstring("clrizr-lightness-c1LV{0}",@clrizr-input-lightness); Here, you can see a couple of odd things…  On the first line, I am using operations to add units to the saturation and lightness.  This is due to some limitations in the operations that would give me saturation or lightness in %, which can’t be in a variable name.  So, I use first add 1px to it, which casts the result of the following functions as px instead of %, and then at the end, I remove that pixel.  You can also see here the formatstring method which is exactly what it sounds like – something like String.Format(string str, params object[] obj). Get Primary & Complementary Shades Now that I have components for each of the different shades, I can now compose them into each of their pieces.  
For this, I use the @@ operator, which will look for a variable with the name specified in a string and then resolve that variable:

@clrizr-primary-2: hsl(hue(@clrizr-input), @@clrizr-primary-2-saturation, @@clrizr-primary-2-lightness);
@clrizr-primary-1: hsl(hue(@clrizr-input), @@clrizr-primary-1-saturation, @@clrizr-primary-1-lightness);
@clrizr-primary: @clrizr-input;
@clrizr-complementary-1: hsl(@@clrizr-complementary-hue, @@clrizr-complementary-1-saturation, @@clrizr-complementary-1-lightness);
@clrizr-complementary-2: hsl(@@clrizr-complementary-hue, saturation(@clrizr-input), lightness(@clrizr-input));

That's it, for the most part. These variables now hold the theme for the one input color – @clrizr-input. However, I have one last addition...

Perceptive Luminance

After I got the colors, I decided I also wanted the best font color to go on top of each one: black or white, depending on whether the shade is light or dark. I couldn't just check the lightness, as that is only half the story. You see, the human eye doesn't see ALL colors equally well; it has more cells for interpreting green light than blue or red. So, using those ratios, we can calculate the perceptive luminance of each of the shades and get the font color that best matches it!

@clrizr-perceptive-luminance-ps2: round(1 - ( (0.299 * red(@clrizr-primary-2) ) + ( 0.587 * green(@clrizr-primary-2) ) + (0.114 * blue(@clrizr-primary-2)))/255)*255;
@clrizr-perceptive-luminance-ps1: round(1 - ( (0.299 * red(@clrizr-primary-1) ) + ( 0.587 * green(@clrizr-primary-1) ) + (0.114 * blue(@clrizr-primary-1)))/255)*255;
@clrizr-perceptive-luminance-ps: round(1 - ( (0.299 * red(@clrizr-primary) ) + ( 0.587 * green(@clrizr-primary) ) + (0.114 * blue(@clrizr-primary)))/255)*255;
@clrizr-perceptive-luminance-pc1: round(1 - ( (0.299 * red(@clrizr-complementary-1)) + ( 0.587 * green(@clrizr-complementary-1)) + (0.114 * blue(@clrizr-complementary-1)))/255)*255;
@clrizr-perceptive-luminance-pc2: round(1 - ( (0.299 * red(@clrizr-complementary-2)) + ( 0.587 * green(@clrizr-complementary-2)) + (0.114 * blue(@clrizr-complementary-2)))/255)*255;

@clrizr-col-font-on-primary-2: rgb(@clrizr-perceptive-luminance-ps2, @clrizr-perceptive-luminance-ps2, @clrizr-perceptive-luminance-ps2);
@clrizr-col-font-on-primary-1: rgb(@clrizr-perceptive-luminance-ps1, @clrizr-perceptive-luminance-ps1, @clrizr-perceptive-luminance-ps1);
@clrizr-col-font-on-primary: rgb(@clrizr-perceptive-luminance-ps, @clrizr-perceptive-luminance-ps, @clrizr-perceptive-luminance-ps);
@clrizr-col-font-on-complementary-1: rgb(@clrizr-perceptive-luminance-pc1, @clrizr-perceptive-luminance-pc1, @clrizr-perceptive-luminance-pc1);
@clrizr-col-font-on-complementary-2: rgb(@clrizr-perceptive-luminance-pc2, @clrizr-perceptive-luminance-pc2, @clrizr-perceptive-luminance-pc2);

Conclusion

That's it! I have posted a project on clrizr.codePlex.com for this, and included a testing page for you to test out how it works. Feel free to use it in your own project, and if you have any questions, comments or suggestions, please feel free to leave them here as a comment, or on the contact page!
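To round this out, here is a minimal usage sketch showing how a stylesheet might consume the module. It assumes the mapper above is saved as clrizr.less and exposes the @clrizr-* variables shown in this post; the selectors and the input color are made up for illustration:

// Illustrative usage only - assumes clrizr.less is the mapper described above.
@import url("clrizr.less");

// The single input color the whole theme cascades from.
@clrizr-input: #336699;

.site-header {
    background-color: @clrizr-primary;
    color: @clrizr-col-font-on-primary;           // black or white via perceptive luminance
}

.site-header .accent {
    background-color: @clrizr-complementary-1;
    color: @clrizr-col-font-on-complementary-1;
}

Because LESS evaluates variables lazily, defining @clrizr-input in the importing stylesheet is enough for the mapper's calculations to pick it up.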

    Read the article

  • Windows Azure Virtual Machine Readiness and Capacity Assessment for SQL Server

    - by SQLOS Team
Windows Azure Virtual Machine Readiness and Capacity Assessment for Windows Server Machines Running SQL Server

With the release of MAP Toolkit 8.0 Beta, we have added a new scenario to assess your Windows Azure Virtual Machine readiness. The MAP 8.0 Beta performs a comprehensive assessment of Windows Servers running SQL Server to determine your level of readiness to migrate an on-premise physical or virtual machine to Windows Azure Virtual Machines. The MAP Toolkit then offers suggested changes to prepare the machines for migration, such as upgrading the operating system or SQL Server. MAP Toolkit 8.0 Beta is available for download here. Your participation and feedback is very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys.

Now, let's walk through the MAP Toolkit tasks for completing the Windows Azure Virtual Machine assessment and capacity planning. The tasks include the following:
- Perform an inventory
- View the Windows Azure VM Readiness results and report
- Collect performance data to determine VM sizing
- View the Windows Azure Capacity results and report

Perform an inventory:

1. To perform an inventory against a single machine or across a complete environment, choose Perform an Inventory to launch the Inventory and Assessment Wizard as shown below.

2. After the Inventory and Assessment Wizard launches, select either the Windows computers or SQL Server scenario to inventory Windows machines. HINT: If you don't care about completely inventorying a machine, just select the SQL Server scenario. Click Next to continue.

3. On the Discovery Methods page, select how you want to discover computers and then click Next to continue. Description of discovery methods:
- Use Active Directory Domain Services -- This method allows you to query a domain controller via the Lightweight Directory Access Protocol (LDAP) and select computers in all or specific domains, containers, or OUs. Use this method if all computers and devices are in AD DS.
- Windows networking protocols -- This method uses the WIN32 LAN Manager application programming interfaces to query the Computer Browser service for computers in workgroups and Windows NT 4.0–based domains. If the computers on the network are not joined to an Active Directory domain, use only the Windows networking protocols option to find computers.
- System Center Configuration Manager (SCCM) -- This method enables you to inventory computers managed by System Center Configuration Manager (SCCM). You need to provide credentials to the System Center Configuration Manager server in order to inventory the managed computers. When you select this option, the MAP Toolkit will query SCCM for a list of computers and then MAP will connect to these computers.
- Scan an IP address range -- This method allows you to specify the starting address and ending address of an IP address range. The wizard will then scan all IP addresses in the range and inventory only those computers. Note: this option can perform poorly if many IP addresses aren't being used within the range.
- Manually enter computer names and credentials -- Use this method if you want to inventory a small number of specific computers.
- Import computer names from a file -- Using this method, you can create a text file with a list of computer names that will be inventoried.

4. On the All Computers Credentials page, enter the accounts that have administrator rights to connect to the discovered machines. This does not need to be a domain account, but it needs to be a local administrator. I have entered my domain account that is an administrator on my local machine. Click Next after one or more accounts have been added. NOTE: The MAP Toolkit primarily uses Windows Management Instrumentation (WMI) to collect hardware, device, and software information from the remote computers. In order for the MAP Toolkit to successfully connect to and inventory computers in your environment, you have to configure your machines for inventory through WMI and also allow your firewall to enable remote access through WMI. The MAP Toolkit also requires remote registry access for certain assessments. In addition to enabling WMI, you need accounts with administrative privileges to access desktops and servers in your environment.

5. On the Credentials Order page, select the order in which you want the MAP Toolkit to connect to the machine and SQL Server. Generally just accept the defaults and click Next.

6. On the Enter Computers Manually page, click Create to pull up a dialog to enter one or more computer names.

7. On the Summary page, confirm your settings and then click Finish. After clicking Finish, the inventory process will start, as shown below.

Windows Azure Readiness results and report

After the inventory process has completed, you can review the results under the Database scenario. On the tile, you will see the number of Windows Server machines with SQL Server that were analyzed, the number of machines that are ready to move without changes, and the number of machines that require further changes. If you click this Azure VM Readiness tile, you will see additional details and can generate the Windows Azure VM Readiness Report. After the report is generated, select View | Saved Reports and Proposals to view the location of the report. Open up the WindowsAzureVMReadiness* report in Excel. On the Windows tab, you can see the results of the assessment. This report has a column for the Operating System and SQL Server assessment and provides a recommendation on how to resolve the issue if a component is not supported.

Collect performance data

Launch the Performance Wizard to collect performance information for the Windows Server machines that you would like the MAP Toolkit to suggest a Windows Azure VM size for.

Windows Azure Capacity results and report

After the performance metrics are collected, the Azure VM Capacity tile will display the number of Virtual Machine sizes that are suggested for the Windows Server and Linux machines that were analyzed. You can then click on the Azure VM Capacity tile to see the capacity details and generate the Windows Azure VM Capacity Report. Within this report, you can view the performance data that was collected and the suggested Virtual Machine sizes.

MAP Toolkit 8.0 Beta is available for download here. Your participation and feedback is very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys.

Useful references: Windows Azure Homepage | How to guides for Windows Azure Virtual Machines | Provisioning a SQL Server Virtual Machine on Windows Azure | Windows Azure Pricing

Peter Saddow, Senior Program Manager – MAP Toolkit Team

    Read the article

  • Projected Results

    - by Sylvie MacKenzie, PMP
Excerpt from PROFIT - ORACLE - by Monica Mehta

Yasser Mahmud has seen a revolution in project management over the past decade. During that time, the former Primavera product strategist (who joined Oracle when his company was acquired in 2008) has not only observed a transformation in the way IT systems support corporate projects but also in the role project portfolio management (PPM) plays in the enterprise. "15 years ago project management was the domain of the project management office (PMO)," Mahmud recalls of earlier days. "But over the course of the past decade, we've seen it transform into a mission-critical enterprise discipline that has made Primavera indispensable in the board room. Now, as a senior manager, a board member, or a C-level executive you have direct and complete visibility into what's going on in the organization—at a level of detail at which you're able to consume that information." Now serving as Oracle's vice president of product strategy and industry marketing, Mahmud shares his thoughts on how Oracle's Primavera solutions have evolved and how best-in-class project portfolio management systems can help businesses stay competitive.

Profit: What do you feel are the market dynamics that are changing project management today?

Mahmud: First, the data explosion. We're generating data at twice the rate at which we can actually store it. The same concept applies for project-intensive organizations. A lot of data is gathered, but what are we really doing with it? Are we turning data into insight? Are we using that insight and turning it into foresight with analytics tools? This is a key driver that will separate the very good companies—the very competitive companies—from those that are not as competitive. Another trend is centered on the explosion of mobile computing. By the year 2013, an estimated 35 percent of the world's workforce is going to be mobile. That's one billion people. So the question is not if you're going to go mobile, it's how fast you are going to go mobile. What kind of impact does that have on how the workforce participates in projects? What worked ten to fifteen years ago is not going to work today. It requires a real rethink around the interfaces and how data is actually presented.

Profit: What is the role of project management in this new landscape?

Mahmud: We recently conducted a PPM study with the Economist Intelligence Unit to determine how important project management is considered within organizations. Our target was primarily CFOs, CIOs, and senior managers, and we discovered that while 95 percent of participants believed it critical to their business, only six percent were confident that projects were delivered on time and on budget. That's a huge gap. Most organizations are looking for efficiency, especially in these volatile financial times. But senior management can't keep track of every project in a large organization. As a result, executives are attempting to inventory the work being conducted under their watch. What is often needed is a very high-level assessment conducted at the board level to say, "Here are the 50 initiatives that we have underway. How do they line up with our strategic drivers?" This line of questioning can provide early warning that work and strategy are out of alignment; it finds the gap between what the business needs to do and the actual performance scorecard. That's low-hanging fruit for any executive looking to increase efficiency and save money.
But it can only be obtained through proper assessment of existing projects—and you need a project system of record to get that done. Over the next decade or so, project management is going to transform into holistic work management. Business leaders will want to make sure key projects align with corporate strategy, but they will also want the ability to drill down into daily activity and smaller projects to make sure those line up as well. Keeping employees from working on tasks—even for a few hours—that don't line up with corporate goals will, in many ways, become a competitive differentiator.

Profit: How do all of these market challenges and shifting trends impact Oracle's Primavera solutions and meeting customers' needs?

Mahmud: For Primavera, it's a transformation from being a project management application to a PPM system in the enterprise. It also means making that system a mission-critical application by connecting it to other key applications within the ecosystem, such as the enterprise resource planning (ERP), supply chain, and CRM systems. Analytics have also become a huge component. Business analytics have made Oracle's Primavera applications pertinent in the boardroom. Now, as a senior manager, a board member, a CXO, CIO, or CEO, you have direct visibility into what's going on in the organization at a level that you're able to consume that information. In addition, all of this information pairs up really well with your financials and other data. Certainly, when you're an Oracle shop, you have that visibility that you didn't have before from a project execution perspective.

Profit: What new strategies and tools are being implemented to create a more efficient workplace for users?

Mahmud: We believe very strongly that just because you call something an enterprise project portfolio management system doesn't make it so—you have to get people to want to participate in the system. This can't be mandated down from the top. It simply doesn't work that way. A truly adoptable solution is one that makes it super easy for all types of users to participate, by providing them interfaces where they live. Keeping that in mind, a major area of development has been alternative user interfaces. This is increasingly resulting in the creation of lighter-weight, targeted interfaces such as iOS applications and smartphone interfaces for the iPhone and Android platforms.

Profit: How does this translate into the development of Oracle's Primavera solutions?

Mahmud: Let me give you a few examples. We recently announced the launch of our Primavera P6 Team Member application, which is a native iOS application for the iPhone. This interface makes it easier for team members to do their jobs quickly and effectively. Similarly, we introduced the Primavera analytics application, which can be consumed via mobile devices, and when married with Oracle Spatial capabilities, users can get a geographical view of what's going on and which projects are occurring in various locations around the world. Lastly, we introduced advanced email integration that allows project team members to status work via e-mail. This functionality leverages the fact that users are in their e-mail system throughout the day and allows them to status their work without the need to launch the Primavera application. It comes back to a mantra: provide as many alternative user interfaces as possible, so you can give people the ability to work, to participate, to raise issues, to create projects, in the places where they live.
Do it in such a way that it’s non-intrusive, do it in such a way that it’s easy and intuitive and they can get it done in a short amount of time. If you do that, workers can get back to doing what they're actually getting paid for.

    Read the article

  • Webcast Q&A: Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter

    - by kellsey.ruppel
Last Thursday we had the second webcast in our WebCenter in Action webcast series, "Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter," where customer Michael Chander from Qualcomm and Vince Casarez & Gourav Goyal from Oracle Partner Keste shared how Oracle WebCenter is powering Qualcomm's externally facing website and providing a seamless experience for their customers. In case you missed it, here's a recap of the Q&A.

Mike Chandler, Qualcomm

Q: Did you run into any issues when integrating all of the different applications together?
A: Definitely. Our main challenges were in the area of user provisioning and security propagation, all the standard stuff you might expect when hooking up SSO for authentication and authorization. In addition, we spent several iterations getting the UIs in sync. While everyone was given the same digital material to build to, each team interpreted and implemented it their own way. Initially, as a user navigated, if you were looking for it, you could see slight variations in color or font or width, stuff like that. So we had to pull all the developers responsible for the UI together and get pixel-level agreement on a lot of things so we could ensure seamless transitions across applications.

Q: What has been the biggest benefit your end users have seen?
A: Wow, there have been several. An SSO-enabled environment was a huge win for our users. The portal application that this replaced had not really been invested in by the business. With this project, we had full business participation and backing, and it really showed in some key areas like the shopping experience. For example, while ordering in the previous site, the items did not have any pictures or really usable descriptions. A tremendous amount of work was done to try and make the site more intuitive and user friendly. Site performance has also drastically improved thanks to new hardware, improved database design, and of course the fact that ADF has made great strides in runtime performance.

Q: Was there any resistance internally when implementing the solution? If so, how did you overcome that?
A: Within a large company, I'm sure there is always going to be competition for large projects, as there was here. Once we got through the technical analysis and settled on the technology choices, there was actually no resistance to implementing the solution. This project was fully driven by the business with the aim of long-term growth. I can confidently say that the fact that this project was given the utmost importance by both the business and IT really helped put down any resistance that you would typically see while implementing a new solution.

Q: Given the performance, what do you estimate to be the top end capacity of the system?
A: I think our top end capacity is really only limited by our hardware. I'm comfortable saying we could grow 10x on our current hardware, both in terms of transactions and users. We can easily spin up new JVM instances if needed. We already use fewer JVMs than we had planned. In addition, ADF is doing a very good job with its connection pooling and application module pooling, so we see a very good ratio of users connected to the systems vs. DB connections, without impacting performance.

Q: What's the overview or summary of feedback from the users interacting with the site?
A: Feedback has been overwhelmingly positive from both the business and our customers. They're very happy with the new SSO environment, the new LAF, and the performance of the site.
Of course, it's not all roses. No matter what, there are always going to be people that don't like the layout or the color scheme, etc. By and large though, customers are happy and the business is happy.

Q: Can you describe the impressions about the site before and after the project within Qualcomm?
A: Before the project, the site worked and people were using it, but most people were not happy with it. It was slow and tended to be a bit temperamental; for example, a user would perform a transaction and the system would throw an unexpected error. The user could back up and retry the steps and things would work fine, so why didn't it work the first time? From a UI perspective, we'd hear comments like it looked like it was built by a high school student.

Vince Casarez & Gourav Goyal, Keste

Q: Did you run into any obstacles when implementing the solution?
A: It's interesting: some people call them "obstacles"; on this project we just called them "dependencies". There were both technical and business-related dependencies that we had to work out. Mike points out the SSO dependencies and the coordination and synchronization between the teams to have a seamless login experience and a seamless end user experience. There was also a set of dependencies on the User Acceptance testing to make sure that everyone understood the use cases for how the system would be used. With branching into a new market and trying to match the simple user experience many consumer sites have today, there was always a tendency for the team members to provide their suggestions on how things could be simpler. But with all the work up front on the user design and getting the business driving this set of experiences, this minimized the downstream suggestions that tend to distract a team. In this case, all the work up front allowed us to enumerate the "dependencies" and keep the distractions to a minimum.

Q: Was there a lot of custom work that needed to be done for this particular solution?
A: The focus for this particular solution was really on the custom processes. The interesting thing is that with the data flows and the integration with applications, there are some pre-built integrations, but realistically for the process flow, we had to build those. The framework and tooling we used made things easier so we didn't have to implement core functionality, like transitioning from screen to screen or from flow to flow. The design feature of Task Flows really helped speed the development and keep the component infrastructure in line with the dynamic processes. Task Flows and other elements like Skins are core to the infrastructure or technology stack of Oracle. This then allowed the team to center the project focus around the business flows and use cases to meet the core requirements and keep the project on time.

Q: What do you think were the keys to success for rolling out WebCenter?
A: The 5 main keys to success were: 1) sponsorship from the whole organization around this project, from senior executive agreement, business owners driving functionality, and IT development alignment; 2) upfront design planning and use case definition to clearly define the project scope and requirements; 3) focused development and project management aligned with the top-level goals and drivers; 4) user acceptance and usability testing along the way to identify potential issues and direct resolution of the issues; and 5) constant prioritization of the issues for development to fix by the business.
It also helps to have great team chemistry and really smart people working on the project. If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action!

    Read the article

  • CodePlex Daily Summary for Friday, September 28, 2012

    CodePlex Daily Summary for Friday, September 28, 2012Popular ReleasesWPUtils: WPUtils 1.2: Just fixed an issue related to isolated storage path for ChoosePhotoBehavior. Specifically CreateDirectory method only accepts relative path, but was given a "/photos/" path which would result in exception. Please make sure you have this fix if you are using ChoosePhotoBehavior! NOTE: Windows Phone SDK 7.1 or higher is required.TFS Timesheets: TFS Timesheets 2.0: New features: Visual Studio 2012 support Bug fixes: Scaling mode inherited rather than font scalingCRM 2011 Visual Ribbon Editor: Visual Ribbon Editor 1.1 Beta: Visual Ribbon Editor 1.1 Beta What's New: Fixed scrolling issue in UnHide dialog Added support for connecting via ADFS / IFD Added support for more than one action for a button Added support for empty StringParameter for Javascript functions Fixed bug in rule CrmClientTypeRule when selecting Outlook option Extended Prefix field in New Button dialogFree Aspx Image Gallery: Free Aspx Image Gallery Release V1: This is first basic release of my free aspx image gallery project. It is free to use and modify by the user without any need of providing any credit to me.Simple Microsoft Excel Document Converter (Convert To XLS, XLSX, PDF, XPS): ExcelDocConverter 0.1 Beta: Initial Release Microsoft Excel Documents Converter. Microsoft Excel 1997-2003 (XLS) Microsoft Excel 2007/2010 (XLSX) Portable Document Format (PDF) Microsoft XPS Document (XPS) Difference between NET2.0 and NET3.5 This program uses .NET Framework runtime library to run. Basically, they are no differences. Only the runtime library version is different. For older computers, i.e. Windows XP, might not have .NET Framework 3.5 installed, then use NET2.0 in stead. But, some Windows XP SP2 mig...Office File Properties: Office File Properties 3.3.1: Bug fix. Convert file extension to lowercase before checking.LoBDb.NET: LoBDb.NET 1.0.9: Centido.Core library: 1) SQL Server script bug fix: an error when changing the MaxLength property of an indexed string column or when changing the Precision-Scale properties of a decimal column. LobDb.NET Manager: 1) Changing the Precision, Scale, Default Value, Minimum and Maximum properties of a decimal column now enables the Save button. 2) The MaxLength property of a string column and the Precision+Scale values of a decimal column are now displayed in the column list. 3) Changing the Min...Chaos games: Chaos games: Small app for generating fractals using chaos gamesVisual Studio Icon Patcher: Version 1.5.2: This version contains no new images from v1.5.1 Contains the following improvements: Better support for detecting the installed languages The extract & inject commands won’t run if Visual Studio is running You may now run in extract or inject mode The p/invoke code was cleaned up based on Code Analysis recommendations When a p/invoke method fails the Win32 error message is now displayed Error messages use red text Status messages use green textMCEBuddy 2.x: MCEBuddy 2.2.16: Changelog for 2.2.16 (32bit and 64bit) Now a standalone remote client also available to control the Engine remotely. 1. Added support for remote connections for status and configuration. MCEBuddy now uses port 23332. The remote server name, remote server port and local server port can be updated from the MCEBuddy.conf file BUT the Service or GUI needs to be restarted (i.e. reboot or restart service or restart program) for it to take effect. 
Refer to documentation for more details http://mce...ZXing.Net: ZXing.Net 0.9.0.0: On the way to a release 1.0 the API should be stable now with this version. sync with rev. 2393 of the java version improved api better Unity support Windows RT binaries Windows CE binaries new Windows Service demo new WPF demo WindowsCE Hotfix: Fixes an error with ISO8859-1 encoding and scannning of QR-Codes. The hotfix is only needed for the WindowsCE platform.SSIS GoogleAnalyticsSource: Version 1.1 Alpha 2: The component uses now the Google API V2.4 including the management API.MVC Bootstrap: MVC Boostrap 0.5.1: A small demo site, based on the default ASP.NET MVC 3 project template, showing off some of the features of MVC Bootstrap. This release uses Entity Framework 5 for data access and Ninject 3 for dependency injection. If you download and use this project, please give some feedback, good or bad!menu4web: menu4web 1.0 - free javascript menu for web sites: menu4web 1.0 has been tested with all major browsers: Firefox, Chrome, IE, Opera and Safari. Minified m4w.js library is less than 9K. Includes 21 menu examples of different styles. Can be freely distributed under The MIT License (MIT).Rawr: Rawr 5.0.0: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr Addon (NOT UPDATED YET FOR MOP)We now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including ba...Coevery - Free CRM: Coevery 1.0.0.26: The zh-CN issue has been solved. We also add a project management module.VidCoder: 1.4.1 Beta: Updated to HandBrake 4971. This should fix some issues with stuck PGS subtitles. Fixed build break which prevented pre-compiled XML serializers from showing up. Fixed problem where a preset would get errantly marked as modified when re-opening the encode settings window or importing a new preset.JSLint for Visual Studio 2010: 1.4.0: VS2012 support is alphaBlackJumboDog: Ver5.7.2: 2012.09.23 Ver5.7.2 (1)InetTest?? (2)HTTP?????????????????100???????????Player Framework by Microsoft: Player Framework for Windows 8 (Preview 6): IMPORTANT: List of breaking changes from preview 5 Added separate samples download with .vsix dependencies instead of source dependencies Support for FreeWheel SmartXML ad responses Support for Smooth Streaming SDK DownloaderPlugins Support for VMAP and TTML polling for live scenarios Support for custom smooth streaming byte stream and scheme handlers Support for new play time and position tracking plugin Added IsLiveChanged event Added AdaptivePlugin.MaxBitrate property Add...New ProjectsChaos games: Small app to generating fractals using chaos gamesDocument Digitalization System: This system will allow the users with on one or more PCs to digitalize pdf files and store it or export it to other file formats.ExternalTokenAnalysisOffline: SPUser?UserToken????????????????。FinalProjectSeniorProject: ***Unfinished*** Senior project build GL Ponpes Selamat Kendal: Aplikasi Akuntansi Sekolah Pondok Pesantren Modern Selamat KendalHealth Care Manager: One of keynote planned for the Brazzaville Microsoft event coming soon.Orchard Commerce History with PayPal: Project expands on Nwazet.Commerce module (and is required for this module to work). 
Adds a purchase history, product role associations, and PayPal.PDF.NET: PWMIS ?????? Ver 4.5 ???? SMS Egypt: This project is intended to make it easy for people to send SMS to their customers using SMS gateways inside and outside Egypt. Strong Caml: Use the familiar CAML syntax, but now do it in strongly-typed, dynamic code. Just follow Visual Studio's IntelliSense, and your CAML query can't go wrong!TrainingFrameWork: TrainingFrameWork

    Read the article

  • How to Plug a Small Hole in NetBeans JSF (Join Table) Code Generation

    - by MarkH
I was asked recently to provide an assist with designing and building a small-but-vital application that had at its heart some basic CRUD (Create, Read, Update, & Delete) functionality, built upon an Oracle database, to be accessible from various locations. Working from the stated requirements, I fleshed out the basic application and database designs and, once validated, set out to complete the first iteration for review. Using SQL Developer, I created the requisite tables, indices, and sequences for our first run. One of the tables was a many-to-many join table with three fields: one a primary key for that table, the other two being primary keys for the other tables, represented as foreign keys in the join table. Here is a simplified example of the trio of tables (a rough sketch appears at the end of this post). Once the database was in decent shape, I fired up NetBeans to let it have first shot at the code. NetBeans does a great job of generating a mountain of essential code, saving developers what must be millions of hours of effort each year by building a basic foundation with a few clicks and keystrokes. Lest you think it (or any tool) can do everything for you, however, occasionally something tosses a paper clip into the delicate machinery and makes you open things up to fix them. Join tables apparently qualify. :-)

In the case above, the entity class generated for the join table (New Entity Classes from Database) included an embedded object consisting solely of the two foreign key fields as attributes, in addition to an object referencing each one of the "component" tables. The Create page generated (New JSF Pages from Entity Classes) worked well to a point, but when trying to save, we were greeted with an error: Transaction aborted. Hmm. A quick debugger session later and I'd identified the issue: when trying to persist the new join-table object, the embedded "foreign-keys-only" object still had null values for its two (required value) attributes...even though the embedded table objects had populated key attributes.

Here's the simple fix: In the join-table controller class, find the public String create() method. It will look something like this:

    public String create() {
        try {
            getFacade().create(current);
            JsfUtil.addSuccessMessage(ResourceBundle.getBundle("/Bundle").getString("JoinEntityCreated"));
            return prepareCreate();
        } catch (Exception e) {
            JsfUtil.addErrorMessage(e, ResourceBundle.getBundle("/Bundle").getString("PersistenceErrorOccured"));
            return null;
        }
    }

To restore balance to the force, modify the create() method as follows (the two added lines are marked with a comment):

    public String create() {
        try {
            // Add the next two lines to resolve:
            current.getJoinEntityPK().setTbl1id(current.getTbl1().getId().toBigInteger());
            current.getJoinEntityPK().setTbl2id(current.getTbl2().getId().toBigInteger());
            getFacade().create(current);
            JsfUtil.addSuccessMessage(ResourceBundle.getBundle("/Bundle").getString("JoinEntityCreated"));
            return prepareCreate();
        } catch (Exception e) {
            JsfUtil.addErrorMessage(e, ResourceBundle.getBundle("/Bundle").getString("PersistenceErrorOccured"));
            return null;
        }
    }

I'll be refactoring this code shortly, but for now, it works. Iteration one is complete and being reviewed, and we've met the milestone. Here's to happy endings (and customers)!

All the best,
Mark
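For readers who want to picture the classes involved, here is a rough, hypothetical sketch of the join entity and its embedded key for a simplified trio of tables like the one described above. The class, column, and accessor names are illustrative only (chosen to line up with the fix shown earlier) and are not taken from the actual project:

    // Hypothetical sketch of a NetBeans-style join entity with an embedded key.
    // In a real project each class lives in its own file.
    import java.io.Serializable;
    import java.math.BigInteger;
    import javax.persistence.*;

    @Embeddable
    class JoinEntityPK implements Serializable {
        @Column(name = "TBL1ID", nullable = false)
        private BigInteger tbl1id;   // foreign key to the first component table
        @Column(name = "TBL2ID", nullable = false)
        private BigInteger tbl2id;   // foreign key to the second component table

        public void setTbl1id(BigInteger tbl1id) { this.tbl1id = tbl1id; }
        public void setTbl2id(BigInteger tbl2id) { this.tbl2id = tbl2id; }
        // getters, equals() and hashCode() omitted for brevity
    }

    @Entity
    public class JoinEntity implements Serializable {
        @EmbeddedId
        private JoinEntityPK joinEntityPK;   // holds only the two foreign-key values

        @ManyToOne(optional = false)
        @JoinColumn(name = "TBL1ID", referencedColumnName = "ID",
                    insertable = false, updatable = false)
        private Tbl1 tbl1;                   // full reference to the first component table

        @ManyToOne(optional = false)
        @JoinColumn(name = "TBL2ID", referencedColumnName = "ID",
                    insertable = false, updatable = false)
        private Tbl2 tbl2;                   // full reference to the second component table

        public JoinEntityPK getJoinEntityPK() { return joinEntityPK; }
        public Tbl1 getTbl1() { return tbl1; }
        public Tbl2 getTbl2() { return tbl2; }
        // setters omitted for brevity
    }

Because the @JoinColumn mappings are marked insertable = false, updatable = false, the key columns are persisted from the embedded JoinEntityPK rather than from the tbl1/tbl2 references, which is why the two setTbl1id/setTbl2id calls in the fix must run before getFacade().create(current).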

    Read the article

  • Why JSF Matters (to You)

    - by reza_rahman
          "Those who have knowledge, don’t predict. Those who predict, don’t have knowledge."                                                                                                    – Lao Tzu You may have noticed Thoughtworks recently crowned the likes AngularJS, etc imminent successors to server-side web frameworks. They apparently also deemed it necessary to single out JSF for righteous scorn. I have to say as I was reading the analysis I couldn't help but remember they also promptly jumped on the Ruby, Rails, Clojure, etc bandwagon a good few years ago seemingly similarly crowing these dynamic languages imminent successors to Java. I remember thinking then as I do now whether the folks at Thoughtworks are really that much smarter than me or if they are simply more prone to the Hipster buzz of the day. I'll let you make the final call on that one. I also noticed mention of "J2EE" in the context of JSF and had to wonder how up-to-date or knowledgeable the person writing the analysis actually was given that the term was basically retired almost a decade ago. There's one thing that I am absolutely sure about though - as a long time pretty happy user of JSF, I had no choice but to speak up on what I believe JSF offers. If you feel the same way, I would encourage you to support the team behind JSF whose hard work you may have benefited from over the years. True to his outspoken character PrimeFaces lead Cagatay Civici certainly did not mince words making the case for the JSF ecosystem - his excellent write-up is well worth a read. He specifically pointed out the practical problems in going whole hog with bare metal JavaScript, CSS, HTML for many development teams. I'll admit I had to smile when I read his closing sentence as well as the rather cheerful comments to the post from actual current JSF/PrimeFaces users that are apparently supposed to be on a gloomy death march. In a similar vein, OmniFaces developer Arjan Tijms did a great job pointing out the fact that despite the extremely competitive server-side Java Web UI space, JSF seems to manage to always consistently come out in either the number one or number two spot over many years and many data sources - do give his well-written message in the JAX-RS user forum a careful read. I don't think it's really reasonable to expect this to be the case for so many years if JSF was not at least a capable if not outstanding technology. If fact if you've ever wondered, Oracle itself is one of the largest JSF users on the planet. As Oracle's Shay Shmeltzer explains in a recent JSF Central interview, many of Oracle's strategic products such as ADF, ADF Mobile and Fusion Applications itself is built on JSF. There are well over 3,000 active developers working on these codebases. I don't think anyone can think of a more compelling reason to make sure that a technology is as effective as possible for practical development under real world conditions. Standing on the shoulders of the above giants, I feel like I can be pretty brief in making my own case for JSF: JSF is a powerful abstraction that brings the original Smalltalk MVC pattern to web development. This means cutting down boilerplate code to the bare minimum such that you really can think of just writing your view markup and then simply wire up some properties and event handlers on a POJO. The best way to see what this really means is to compare JSF code for a pretty small case to other approaches. 
You should then multiply the additional work for the typical enterprise project to try to understand what the productivity trade-offs are. This is reason alone for me to personally never take any other approach seriously as my primary web UI solution unless it can match the sheer productivity of JSF.

Thanks to JSF's focus on components from the ground up, JSF has an extremely strong ecosystem that includes projects like PrimeFaces, RichFaces, OmniFaces, ICEFaces and of course ADF Faces/Mobile. These component libraries taken together constitute perhaps the largest widget set ever developed and optimized for a single web UI technology. To begin to grasp what this really means, just briefly browse the excellent PrimeFaces showcase and think about the fact that you can readily use the widgets on that showcase by just using some simple markup and knowing next to nothing about AJAX, JavaScript or CSS.

JSF has the fair and legitimate advantage of being an open, vendor-neutral standard. This means that no single company, individual or insular clique controls JSF - openness, transparency, accountability, plurality, collaboration and inclusiveness are virtually guaranteed by the standards process itself. You have the option to choose between compatible implementations, escape any form of lock-in, or even create your own compatible implementation!

As you might gather from the quote at the top of the post, I am not a fan of crystal ball gazing and certainly don't want to engage in it myself. Who knows? However far-fetched it may seem, maybe AngularJS is the only future we all have after all. If that is the case, so be it. Unlike what you might have been told, Java EE is about choice at heart and it can certainly work extremely well as a back-end for AngularJS. Likewise, you are also most certainly not limited to just JSF for working with Java EE - you have a rich set of choices like Struts 2, Vaadin, Errai, VRaptor 4, Wicket or perhaps even the new action-oriented web framework being considered for Java EE 8 based on the work in Jersey MVC...

Please note that any views expressed here are my own and certainly do not reflect the position of Oracle as a company.

    Read the article

  • What's new in EJB 3.2 ? - Java EE 7 chugging along!

    - by arungupta
EJB 3.1 added a whole ton of features for simplicity and ease of use, such as @Singleton, @Asynchronous, @Schedule, portable JNDI names, EJBContainer.createEJBContainer, EJB 3.1 Lite, and many others. As part of Java EE 7, EJB 3.2 (JSR 345) is making progress, and this blog will provide highlights from the work done so far. This release has been deliberately kept small but includes several minor improvements and tweaks for usability.

More features in EJB Lite: asynchronous session beans and the non-persistent EJB Timer Service. This also means these features can be used in the embeddable EJB container, thereby improving the testability of your application.

Pruning - The following features were made Proposed Optional in Java EE 6 and are now made optional:
- EJB 2.1 and earlier Entity Bean Component Contract for CMP and BMP
- Client View of an EJB 2.1 and earlier Entity Bean
- EJB QL: Query Language for CMP Query Methods
- JAX-RPC-based Web Service Endpoints and Client View
The optional features are moved to a separate document, and as a result the EJB specification is now split into Core and Optional documents. This allows the specification to be more readable and better organized.

Updates and Improvements

Transactional lifecycle callbacks in Stateful Session Beans, only for CMT. In EJB 3.1, the transaction context for lifecycle callback methods (@PostConstruct, @PreDestroy, @PostActivate, @PrePassivate) is defined as shown:

            | @PostConstruct                     | @PreDestroy                        | @PrePassivate | @PostActivate
Stateless   | Unspecified                        | Unspecified                        | N/A           | N/A
Stateful    | Unspecified                        | Unspecified                        | Unspecified   | Unspecified
Singleton   | Bean's transaction management type | Bean's transaction management type | N/A           | N/A

In EJB 3.2, stateful session bean lifecycle callback methods can opt in to being transactional. These methods are then executed in a transaction context as shown:

            | @PostConstruct                     | @PreDestroy                        | @PrePassivate                      | @PostActivate
Stateless   | Unspecified                        | Unspecified                        | N/A                                | N/A
Stateful    | Bean's transaction management type | Bean's transaction management type | Bean's transaction management type | Bean's transaction management type
Singleton   | Bean's transaction management type | Bean's transaction management type | N/A                                | N/A

For example, the following stateful session bean requires a new transaction to be started for the @PostConstruct and @PreDestroy lifecycle callback methods:

@Stateful
public class HelloBean {
    @PersistenceContext(type=PersistenceContextType.EXTENDED)
    private EntityManager em;

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    @PostConstruct
    public void init() {
        myEntity = em.find(...);
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    @PreDestroy
    public void destroy() {
        em.flush();
    }
}

Notice that by default the lifecycle callback methods are not transactional, for backwards compatibility. They need to explicitly opt in to be made transactional.

Opt-out of passivation for stateful session beans - If your stateful session bean needs to stick around or has a non-serializable field, then the bean can opt out of passivation as shown:

@Stateful(passivationCapable=false)
public class HelloBean {
    private NonSerializableType ref = ...;
    . . .
}

Simplified rules to define all local/remote views of the bean. For example, if the bean is defined as:

@Stateless
public class Bean implements Foo, Bar {
    . . .
}

where Foo and Bar have no annotations of their own, then Foo and Bar are exposed as local views of the bean. The bean may be explicitly marked @Local as:

@Local
@Stateless
public class Bean implements Foo, Bar {
    . . .
}

and this is the same behavior as explained above, i.e. Foo and Bar are local views. If the bean is marked @Remote as:

@Remote
@Stateless
public class Bean implements Foo, Bar {
    . . .
}

then Foo and Bar are remote views. If an interface is marked @Local or @Remote, then each interface needs to be explicitly marked to be exposed as a view. For example:

@Remote
public interface Foo { . . . }

@Stateless
public class Bean implements Foo, Bar {
    . . .
}

only exposes one remote interface, Foo. Section 4.9.7 of the specification provides more details about this feature.

TimerService.getAllTimers is a newly added convenience API that returns all timers in the same EJB module. This is only for displaying the list of timers, as a timer can only be canceled by its owner.

Removed the restriction on obtaining the current class loader, and allowed use of the java.io package. This is handy if you want to do file access within your beans.

JMS 2.0 alignment - A standard list of activation-config properties is now defined: destinationLookup, connectionFactoryLookup, clientId, subscriptionName, and shareSubscriptions.

Tons of other clarifications throughout the spec. Appendix A provides a comprehensive list of changes since EJB 3.1. The ThreadContext in a Singleton is guaranteed to be thread-safe. The embeddable container implements AutoCloseable.

A complete replay of Enterprise JavaBeans Today and Tomorrow from JavaOne 2012 can be seen here (click on CON4654_mp4_4654_001 in Media). The specification is still evolving, so the actual property or method names or their actual behavior may be different from the currently proposed ones. Are there any improvements that you'd like to see in EJB 3.2? The EJB 3.2 Expert Group would love to hear your feedback. An Early Draft of the specification is available. The latest version of the specification can always be downloaded from here.

Java EE 7 Specification Status | EJB Specification Project | JIRA of EJB Specification | JSR Expert Group Discussion Archive

These features will start showing up in GlassFish 4 Promoted Builds soon.
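As a small illustration of the JMS 2.0 alignment item above, a message-driven bean using the newly standardized activation-config property names might look roughly like this (the JNDI names and class names are made up for the example):

    // Illustrative only: the destination and connection factory JNDI names are assumptions.
    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationLookup",
                                  propertyValue = "jms/OrderQueue"),
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "connectionFactoryLookup",
                                  propertyValue = "jms/OrderConnectionFactory")
    })
    public class OrderListener implements MessageListener {
        @Override
        public void onMessage(Message message) {
            // process the incoming message here
        }
    }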

    Read the article

  • Java EE 7 Survey Results!

    - by reza_rahman
On November 8th, the Java EE EG posted a survey to gather broad community feedback on a number of critical open issues. For reference, you can find the original survey here. We kept the survey open for about three weeks, until November 30th. To our delight, over 1100 developers took time out of their busy lives to let their voices be heard! The results of the survey were sent to the EG on December 12th. The subsequent EG discussion is available here. The exact summary sent to the EG is available here.

We would like to take this opportunity to thank each and every one of the individuals who took the survey. It is very appreciated, encouraging and worth its weight in gold. In particular, I tried to capture just some of the high-quality, intelligent, thoughtful and professional comments in the summary to the EG. I highly encourage you to continue to stay involved, perhaps through the Adopt-a-JSR program. We would also like to sincerely thank java.net, JavaLobby, TSS and InfoQ for helping spread the word about the survey. Below is a brief summary of the results...

APIs to Add to Java EE 7 Full/Web Profile

The first question asked which of the four new candidate APIs (WebSocket, JSON-P, JBatch and JCache) should be added to the Java EE 7 Full and Web Profiles, respectively. As the following graph shows, there was significant support for adding all the new APIs to the full profile. Support is relatively the weakest for Batch 1.0, but still good. A lot of folks saw WebSocket 1.0 as a critical technology, with comments such as this one: "A modern web application needs Web Sockets as first class citizens". While it is clearly seen as being important, a number of commenters expressed dissatisfaction with the lack of a higher-level JSON data binding API, as illustrated by this comment: "How come we don't have a Data Binding API for JSON". JCache was also seen as being very important, as expressed with comments like: "JCache should really be that foundational technology on which other specs have no fear to depend on".

The results for the Web Profile are not surprising. While there is strong support for adding WebSocket 1.0 and JSON-P 1.0 to the Web Profile, support for adding JCache 1.0 and Batch 1.0 is relatively weak. There was actually significant opposition to adding Batch 1.0 (with 51.8% casting a 'No' vote).

Enabling CDI by Default

The second question asked whether CDI should be enabled in Java EE environments by default. A significant majority of 73.3% of developers supported enabling CDI; only 13.8% opposed. Comments such as these two reflect a strong general support for CDI as well as a desire for better Java EE alignment with CDI: "CDI makes Java EE quite valuable!" "Would prefer to unify EJB, CDI and JSF lifecycles". There is, however, a palpable concern around the performance impact of enabling CDI by default, as exemplified by this comment: "Java EE projects in most cases use CDI, hence it is sensible to enable CDI by default when creating a Java EE application. However, there are several issues if CDI is enabled by default: scanning can be slow - not all libs use CDI (hence, scanning is not needed)". Another significant concern appears to be around backwards compatibility and conflict with other JSR 330 implementations like Spring: "I am leaning towards yes, however can easily imagine situations where errors would be caused by automatically activating CDI, especially in cases of backward compatibility where another DI engine (such as Spring and the like) happens to use the same mechanics to inject dependencies and in that case there would be an overlap in injections and probably an uncertain outcome". Some commenters, such as this one, attempt to suggest solutions to these potential issues: "If you have Spring in use and use javax.inject.Inject then you might get some unexpected behavior that could be equally confusing. I guess there will be a way to switch CDI off. I'm tempted to say yes but am cautious for this reason".

Consistent Usage of @Inject

The third question was around using CDI/JSR 330 @Inject consistently vs. allowing JSRs to create their own injection annotations. A slight majority of 53.3% of developers supported using @Inject consistently across JSRs. 28.8% said using custom injection annotations is OK, while 18.0% were not sure. The vast majority of commenters were strongly supportive of CDI and general Java EE alignment with CDI, as illustrated by these comments: "Dependency Injection should be standard from now on in EE. It should use CDI as that is the DI mechanism in EE and is quite powerful. Having a new JSR specific DI mechanism to deal with just means more reflection, more proxies. JSRs should also be constructed to allow some of their objects Injectable. @Inject @TransactionalCache or @Inject @JMXBean etc...they should define the annotations and stereotypes to make their code less procedural. Dog food it. If there is a shortcoming in CDI for a JSR fix it and we will all be grateful" "We're trying to make this a comprehensive platform, right? Injection should be a fundamental part of the platform; everything else should build on the same common infrastructure. Each-having-their-own is just a recipe for chaos and having to learn the same thing 10 different ways".

Expanding the Use of @Stereotype

The fourth question was about expanding CDI @Stereotype to cover annotations across Java EE beyond just CDI. A significant majority of 62.3% of developers supported expanding the use of @Stereotype; only 13.3% opposed. A majority of commenters supported the idea as well as the theme of general CDI/Java EE alignment, as expressed in these examples: "Just like defining new types for (compositions of) existing classes, stereotypes can help make software development easier" "This is especially important if many EJB services are decoupled from the EJB component model and can be applied via individual annotations to Java EE components. @Stateless is a nicely compact annotation. Code will not improve if that will have to be applied in the future as @Transactional, @Pooled, @Secured, @Singlethreaded, @....". Some, however, expressed concerns around increased complexity, such as this commenter: "Could be very convenient, but I'm afraid if it wouldn't make some important class annotations less visible".

Expanding Interceptor Use

The final set of questions was about expanding interceptors further across Java EE... A very solid 96.3% of developers wanted to expand interceptor use to all Java EE components.
35.7% even wanted to expand interceptors to other Java EE managed classes. Most developers (54.9%) were not sure if there is any place that injection is supported that should not support interceptors. 32.8% thought any place that supports injection should also support interceptors. Only 12.2% were certain that there are places where injection should be supported but not interceptors. The comments reflected the diversity of opinions, generally supportive of interceptors: "I think interceptors are as fundamental as injection and should be available anywhere in the platform" "The whole usage of interceptors still needs to take hold in Java programming, but it is a powerful technology that needs some time in the Sun. Basically it should become part of Java SE, maybe the next step after lambas?" A distinct chain of thought separated interceptors from filters and listeners: "I think that the Servlet API already provides a rich set of possibilities to hook yourself into different Servlet container events. I don't find a need to 'pollute' the Servlet model with the Interceptors API"

    Read the article
