Search Results

Search found 532 results on 22 pages for 'anthony hiscox'.

Page 4/22

  • Framework 4 Features: Login Id Support

    - by Anthony Shorten
    Given that Oracle Utilities Application Framework 4 is available as part of Mobile Work Force Management and other products progressively, I am preparing a number of short but sweet blog entries highlighting some of the new functionality that has been implemented. This is the first entry, and it covers a new security feature called Login Id.

    In past releases of the Oracle Utilities Application Framework, the userid used for authentication and authorization was limited to eight (8) characters in length. This mirrored what the market required in the past, with LAN userids and even legacy userids being that length. The technology market has since progressed to longer userid lengths. It is now very common for email addresses to be used as credentials for production systems. To achieve this in past versions of the Oracle Utilities Application Framework, sites had to introduce a short userid (8 characters in length) as an alias in their preferred security store. You then configured your J2EE Web Application Server to use the alias as credentials. This was sometimes a standard feature of the security store and/or the J2EE Web Application Server, if you were lucky. If not, some Java code had to be written to implement the solution.

    In Oracle Utilities Application Framework 4 we introduced a new attribute on the user object called Login Id. The Login Id can be up to 256 characters in length and is an alternative to the existing userid stored on the user object. This means the Oracle Utilities Application Framework can support both long and short userids. For backward compatibility we use the Login Id for authentication but the short userid for authorization and auditing. The user object within the Oracle Utilities Application Framework holds the translation. Backward compatibility is always a consideration in any of our designs for new or changed functionality. You will see reference to this fact in the blog entries I will be composing over the next few months.

    We have also thought about flexibility in implementing this feature. The Login Id can be the same value as the Userid (the default, for backward compatibility) or can be different. Both the Login Id and the Userid have to be unique. This avoids sharing of credentials and is also backward compatible. You can manually enter the Login Id or provision it from Oracle Identity Manager (or another tool). If you use the Login Id only, then we will not autogenerate a short userid automatically, as the rules for this can vary from site to site. You have a number of options there. Most identity provisioning tools can generate a short userid at user creation time and this can be used. If you do not use provisioning tools, then you can write a class extension using the SDK to auto-generate the userid based upon your site's preference. When we designed the feature there were lots of styles of generating userids (random, initial and surname, numbers, etc.). We could not really see a clear winner in that respect, so we just allowed an extension to be inserted if necessary. Most customers indicated to us that identity provisioning was the preferred way. This is why we released an Oracle Identity Manager integration with the framework. The Login Id is also case sensitive, which was not supported for the userid.

    The introduction of the Login Id allows the product to offer flexible options when configuring security whilst maintaining backward compatibility.
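
    To make the class-extension option concrete, here is a minimal sketch of a site-specific userid generator, assuming a site rule of "truncate the local part of an email-style Login Id to eight characters". The class and method names are illustrative, not the SDK's actual API:

        public class SiteUserIdGenerator {
            // Derive an eight-character userid from an email-style Login Id,
            // e.g. "charlie.brown@example.com" -> "CHARLIEB".
            public static String generate(String loginId) {
                String local = loginId.split("@")[0];  // drop the domain part
                String clean = local.replaceAll("[^A-Za-z0-9]", "").toUpperCase();
                return clean.length() <= 8 ? clean : clean.substring(0, 8);
            }
        }

    A real extension would also have to handle collisions (for example, by replacing trailing characters with a sequence number), because both the Userid and the Login Id must be unique.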

    Read the article

  • Configurable Objects - Introduction

    - by Anthony Shorten
    One of the interesting facilities in the framework is the Configurable Object functionality (it is also known as Task Optimization and also known as Cool Tools). The idea is that any implementation can create their own views of the base product objects and services and implement functionality against those new views.

    For example, in Oracle Utilities Customer Care and Billing, there is a Person object. That object is used to store and manage information about individuals as well as companies. In the base product you would use the Person Maintenance screen and fill in some of the screen when you wanted to register or maintain an individual, and fill out other parts of the screen when you wanted to register or maintain a company. This can be somewhat confusing to some customers. Using Configurable Objects this can be simplified. A business object can be created that is a view of any object. For example, you could create a Human business object which would cover the aspects of the Person object pertaining to an individual, and a Company business object to cover the aspects unique to a company. Even the tag names (i.e. field names) in the object can be changed to terms the implementation is more familiar with. The business object can also restructure the underlying object. For example, a common identifier for an individual in the USA is the Social Security number; this value is a Person Identifier (as identifiers vary in each country). In the new Human object you can remap the Person Identifier as a Social Security number.

    To define a business object you use a schema editor built into the browser user interface and use a mapping language to set up the business objects. An example of the language, as an extract of the schema for the Human business object, is sketched at the end of this entry. As you can see, there are mapping as well as formatting and other tags. This information can be built manually or using a wizard which generates the base structure for you to alter. This is all stored as metadata when saved.

    Once a business object is built it can be used as a basis for code, for other business objects (we support inheritance), called by a screen (called a UI Map) or even exposed as a Web Service. This is just a start with Configurable Objects, as you can also create views of base services called Business Services, Service Scripts used for non-object or complex object processing (as well as other things), UI Maps used for screens, and Data Areas to reuse definitions across multiple objects.

    Configurable Objects are powerful and I have only really touched on them here. Over the next few months I hope to add lots more entries about them.
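
    The original post included a screenshot of the schema at this point. As a stand-in, here is a hypothetical extract in the spirit of that example; the element and attribute names are illustrative only, not the exact base-product schema syntax:

        <schema>
            <!-- Rename base tags to terminology the site knows -->
            <firstName mapField="FIRST_NAME"/>
            <lastName mapField="LAST_NAME"/>
            <!-- Remap the generic Person Identifier as a Social Security number -->
            <socialSecurityNumber mapField="PER_ID_NBR" required="true"/>
        </schema>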

    Read the article

  • Has RFC2324 been implemented?

    - by anthony-arnold
    I know RFC2324 was an April Fools joke. However, it seems pretty well thought out, and after reading it I figured it wouldn't be out of the question to design an automated coffee machine that used this extension to HTTP. We programmers love to reference this RFC when arguing web standards ("418 I'm a Teapot lolz!") but the joke's kind of on us. Ubiquitous computing research assumes that network-connected coffee machines are probably going to be quite common in the future, along with Internet-connected fruit and just about everything else. Has anyone actually implemented a coffee machine that is controlled via HTCPCP? Not necessarily commercial, but hacked together in a garage, maybe? I'm not talking about just a web server that responds to HTCPCP requests; I mean a real coffee machine that actually makes coffee. I haven't seen an example of that.
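
    For reference, a BREW request under RFC 2324 looks roughly like the sketch below (the pot URI and the addition are illustrative picks from the RFC's grammar):

        BREW /pot-0 HTCPCP/1.0
        Content-Type: application/coffee-pot-command
        Accept-Additions: Cream

        start

    A teapot addressed this way is obliged to answer with the famous "418 I'm a teapot".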

    Read the article

  • Smart Grid Gateway and New Meter Data Management released

    - by Anthony Shorten
    Two products have just been released and are available from edelivery.oracle.com:

    - Smart Grid Gateway 2.0.0 - a new product to integrate to Smart Grid networks
    - Meter Data Management 2.0.1 - a new version of the Meter Data Management product

    These products are the first to use the brand new version of the Oracle Utilities Application Framework (V4.1). The new framework builds on FW2.2 and FW4.0.2 to add exciting new features (this is just a subset):

    - Support for Database Vault
    - Enhancements to Business Object Maintenance
    - Batch Statistics Portal for benchmarking
    - Custom template user exit support
    - File permissions now consistent with other Oracle products
    - Use of Universal Connection Pool for all database pool access
    - Ability to manage the batch data cache

    Over the next few weeks I will be publishing articles and updates to existing whitepapers to highlight all the new features.

    Read the article

  • SCSF for Visual Studio 2010

    - by Anthony Trudeau
    The Smart Client Software Factory (SCSF) for Visual Studio 2010 was uploaded tonight. You can get it, the source code, and the documentation on the patterns & practices page. Note: do not forget to "unblock" the documentation (CHM) file after you download it. To unblock it, right-click the file, choose Properties, and click the Unblock button.

    Read the article

  • Code Analysis Rule Sets in Visual Studio 2010

    - by Anthony Trudeau
    Microsoft Visual Studio 2010 introduces the concept of rule sets when configuring code analysis. This is a valuable change from Visual Studio 2008 that I didn't even realize I wanted. Visual Studio 2008 by default selected all rules, and then you had to remove rules on an item-by-item basis.

    The rule sets fall into logical groups including "Microsoft All Rules", "Microsoft Basic Correctness Rules", "Microsoft Security Rules", et al. And within the project properties you can select one rule set, multiple rule sets, or you can define your own rule set based upon another.

    Selecting a single rule set is obviously the easiest option. The default rule set when you create a new project is the "Microsoft Minimum Recommended Rules". However, in my opinion the recommended rules are just too permissive. For that reason you might want to change your rule set to "Microsoft All Rules" until you get around to creating your own rule set; or alternately you can select multiple rule sets, which is an option from the rule set combo box. The Visual Studio documentation has comprehensive help on what is contained within the rule sets.

    Creating your own rule set is easy if not obvious. You need to start a rule set from an existing rule set. To get started, select a rule set in the combo box within the Code Analysis tab of the project properties. I selected the "Microsoft All Rules" for my rule set, but you may find it easier to start with the "Microsoft Minimum Recommended Rules" if your rules are on the more permissive side.

    Once your rule set is selected, click the Open button. This will display a dialog that is similar in composition to the rules selection from Visual Studio 2008. Browsing through the tree view you can select or deselect individual rules within their categories, and you can indicate that the rules are flagged as errors instead of the default, which is a warning. A nice touch to the form is that you get a help pane when you select an individual rule. That helped me considerably when I first configured my rule set.

    Once you have finished selecting your rules, click the Save tool button, specify a location and name, and click the Save button on the Save As dialog. Once you're back on the Code Analysis tab you'll choose the Browse option within the combo box and open the file you just created.
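
    Under the covers, a rule set is saved as an XML .ruleset file. A minimal sketch of one (the included base set, the rule id, and the names are illustrative) looks like this:

        <?xml version="1.0" encoding="utf-8"?>
        <RuleSet Name="My Team Rules" Description="All rules, one promoted to error" ToolsVersion="10.0">
          <Include Path="allrules.ruleset" Action="Default" />
          <Rules AnalyzerId="Microsoft.Analyzers.ManagedCodeAnalysis" RuleNamespace="Microsoft.Rules.Managed">
            <!-- CA1031: Do not catch general exception types -->
            <Rule Id="CA1031" Action="Error" />
          </Rules>
        </RuleSet>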

    Read the article

  • Custom Templates: Using user exits

    - by Anthony Shorten
    One of the features of Oracle Utilities Application Framework V4.1 is the ability to use templates and user exits to extend the base configuration files. The configuration files used by the product are based upon a set of templates shipped with the product. When the configureEnv utility asks for configuration settings, they are stored in a configuration file, ENVIRON.INI, which outlines the environment settings. These settings are then used by the initialSetup utility to populate the various configuration files used by the product, using templates located in the templates directory of the installation.

    Now, whilst the majority of installations at any site are non-production and the templates provided are generally adequate for that need, there are circumstances where extension of templates is needed to take advantage of more advanced facilities (such as advanced security and environment settings). The issue then becomes that if you alter the configuration files manually (directly or indirectly), you may lose all your custom settings the next time you run initialSetup. To counter this we allow customers to either override templates with their own template, or to use the user exits we now provide in the templates to add fragments of configuration unique to that part of the configuration file. The latter means that the base template is still used but additions are included to provide the extensions. The provision of custom templates is supported, but as soon as you use a custom template you are then responsible for reflecting any changes we put in the base template over time. Not a big task, but annoying if you have to do it for multiple copies of the product. I prefer to use user exits, as they seem to represent the least-effort solution.

    The way to find the available user exits is to either read the Server Administration Guide that comes with your product or look at individual templates for lines of the form:

        #ouaf_user_exit <user exit name>

    where <user exit name> is the name of the user exit. User exits are not always present, but they are in the places that we feel are the most likely to be changed. If a user exit does not exist, then you can always use a custom template instead.

    Now let's look at an example. By default, the product generates a config.xml file to be used with Oracle WebLogic. This configuration file has the basic settings contained in it to manage the product. If you want to take advantage of the Oracle WebLogic advanced settings, you can use the console to make those changes and they will be reflected in the config.xml automatically. To retain those changes across invocations of initialSetup, you need to alter the template that generates the config.xml or use user exits. The technique is this: make the change in the console and, when you save the change, WebLogic will reflect it in the config.xml for you. Compare the old version and new version of the config.xml to determine what to add, and then find the user exit to put it in by examining the base template.

    For example, by default, the console is not automatically deployed (it is deployed on demand) in the base config.xml. To make the console deploy, you can add the following line to the templates/CM_config.xml.win.exit_3.include file (for Windows) or the templates/CM_config.xml.exit_3.include file (for Linux/UNIX):

        <internal-apps-deploy-on-demand-enabled>false</internal-apps-deploy-on-demand-enabled>

    Now run initialSetup to reflect the change and, if you check the splapp/config/config.xml file, you will see the change applied for you.

    Now, how did I know which include file to use? I checked the template for config.xml and found there was a user exit at the right place, as sketched below. I prefixed my include filename with "CM_" to denote it as a custom user exit. This will tell the upgrade tools to leave that file alone whenever you decide to upgrade (or even apply fixes). User exits can be powerful and allow customizations to be added for advanced configuration. You will see products using the Oracle Utilities Application Framework use these exits themselves (usually prefixed with the product code). You are also taking advantage of them.
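
    As a sketch, the relationship between the base template and the custom include looks something like this (the surrounding template content and the exit name are illustrative; the file name and fragment are from the example above):

        # Extract of the base template that generates config.xml
        ...
        #ouaf_user_exit config.xml.exit_3
        ...

        # templates/CM_config.xml.exit_3.include - your fragment, spliced in
        # at the matching user exit when initialSetup runs
        <internal-apps-deploy-on-demand-enabled>false</internal-apps-deploy-on-demand-enabled>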

    Read the article

  • Updated Whitepapers for OUFW 4.0.1

    - by Anthony Shorten
    The whitepapers are progressively being updated for new facilities in Oracle Utilities Application Framework V4.0.1. The documents will now cover other versions of the Oracle Utilities Application Framework as well (V2.1, V2.2, V4.0.1). The first round of updates is now available on My Oracle Support:

    - 942074.1 - XAI Best Practices
    - 836362.1 - Batch Best Practices
    - 773473.1 - Oracle Utilities Application Framework Security Overview

    These have been updated for the new whitepaper format as well as the content. Content that is related to specific versions of the Framework is marked accordingly. New content since the last update is also indicated accordingly.

    Read the article

  • Message Driven Bean JMS integration

    - by Anthony Shorten
    In Oracle Utilities Application Framework V4.1 and above, the product introduced the concept of real-time JMS integration within the Framework for interfacing. Customers familiar with older versions of the Framework will recall that we used a component called the Multi-purpose Listener (MPL), which was a very light service bus for calling interface channels (including JMS). The MPL is not supplied with all products, and customers prefer to use Oracle SOA Suite and native methods rather than the MPL.

    In Oracle Utilities Application Framework V4.1 (and for Oracle Utilities Application Framework V2.2 via Patches 9454971, 9256359, 9672027 and 9838219) we introduced real-time JMS integration natively: outbound JMS integration, and Message Driven Beans (MDBs) for incoming integration.

    The outbound integration has not changed a lot between releases. You create an Outbound Message Type to indicate the record types to send out, create a JMS Sender (though now you use the Real Time Sender) and then create an External System definition to complete the configuration. When an outbound message of the configured type and external system appears in the table (via a business event such as an algorithm or plug-in script), the Oracle Utilities Application Framework will place the message on the queue linked to the JMS Sender.

    The inbound integration has changed. In the past you created XAI Receivers and specified configuration about what types of transactions to process. This is now all configuration-file driven. The configuration files for the Business Application Server (ejb-jar.xml and weblogic-ejb-jar.xml) define the Message Driven Beans and the queues to monitor. When a message appears on the queue, the MDB processes it through our web services interface. Configuration of the MDB can be native (by editing the configuration files) or through the new user exit capabilities (which are aimed at maintaining custom configuration across upgrades). The latter is better, as you build fragments of configuration which are easier to maintain.

    In the next few weeks a number of new whitepapers will be released to illustrate the features of the Oracle WebLogic JMS and Oracle SOA Suite integration capabilities.
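
    As an illustration of the kind of fragments involved, the sketch below shows a Message Driven Bean declared in ejb-jar.xml and bound to a queue in weblogic-ejb-jar.xml. The bean and queue names are hypothetical, not the product's shipped configuration:

        <!-- ejb-jar.xml: declare the Message Driven Bean -->
        <message-driven>
            <ejb-name>InboundMessageMDB</ejb-name>
            <transaction-type>Container</transaction-type>
        </message-driven>

        <!-- weblogic-ejb-jar.xml: bind the bean to the queue it monitors -->
        <weblogic-enterprise-bean>
            <ejb-name>InboundMessageMDB</ejb-name>
            <message-driven-descriptor>
                <destination-jndi-name>jms/CM_InboundQueue</destination-jndi-name>
            </message-driven-descriptor>
        </weblogic-enterprise-bean>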

    Read the article

  • How do I configure sound with PulseAudio and Multiseat?

    - by Anthony
    In the spirit of full disclosure, I just posted this question to the Ubuntu forums, but I figure more heads working on it couldn't hurt.

    I have a multi-seat setup working quite well. Hot-plugging input devices works as expected and such. The only issue I am still not able to resolve is getting the audio working for each seat. Here is a summary of my attempts at getting audio to work:

    - Make ~/.pulse/default.pa dynamically configured based on which $DISPLAY the user logs in at. See this pastebin for the details.
    - Load pulseaudio as a system-wide instance. Couldn't get this to work. None of the audio hardware was accessible to the users.
    - Use udev rules to mark seats in ConsoleKit, following the udev guidelines found here: http://www.freedesktop.org/wiki/Software/systemd/multiseat. I didn't think this would work, although it was "guaranteed" to work by someone in irc.freenode #pulseaudio.

    None of those attempts yielded success, which is why I now turn to the community for help. It is quite possible that the suggested methods work and I just messed some aspect of it up, idk. This is the last piece of the puzzle which is needed before I can go and update the MultiseatX page to include instructions for Ubuntu 12.04.

    My understanding of the situation: access to PulseAudio is restricted to the active session as marked by ConsoleKit (something about an ACL), and ConsoleKit can only mark one session as active at a time. This simple little fact of life leads me to believe that the solution should involve PulseAudio being run as a system-wide instance. Each user should connect to the pulse server and be limited to a subset of all the hardware. Maybe each user connects to the pulse server via localhost, idk. I do know that regardless of my attempts and their failed results, I was always able to use sudo aplay -D plughw:0,0 /usr/share/sounds/alsa/Front_Center.wav to play something to any of the hardware.

    I'm grasping at straws and am now down to the last few hairs I can pull out of my head. Please help me figure this out so we can share the wealth. Any additional information needed will be provided at your request.

    Read the article

  • Exit Infragistics, Enter Telerik

    - by Anthony Trudeau
    Today I made the purchase of the Premium Collection of components from Telerik. This follows an evaluation I’ve been doing to replace the Infragistics components we currently use for Windows Forms, ASP.NET MVC, and WPF.

    It was not a formal evaluation. I had already decided to move the company away from Infragistics. That decision was mostly born out of frustration with support over using the Infragistics components in my first production MVC application. One such issue was a simple scenario where you have a model that has a scalar property that can be one value out of a list. The built-in combobox does this, but I was told by Infragistics support that they didn’t support it – and it took them several emails and days of waiting between responses to determine that. I implemented this in Telerik in a minute, not including the several minutes it took me to get a rudimentary understanding of the component and its API.

    Here’s the code using the built-in combobox:

        @Html.DropDownListFor(x => x.VendorId,
            new SelectList(ViewBag.Vendors, "VendorId", "VendorName", Model.VendorId),
            "Select Id")

    Here’s the code using the Telerik combobox:

        @(Html.Telerik().ComboBoxFor(model => model.VendorId)
            .AutoFill(true)
            .BindTo(new SelectList(ViewBag.Vendors, "VendorId", "VendorName", Model.VendorId))
        )

    I chose Telerik over other competitors based on the professional appearance of their website, and how easy it was to find information. I’d like to say I had time to evaluate other Infragistics competitors. Due to time constraints I had to make an initial decision based on superficial, but still important, things. I picked Telerik with the plan to only look further at other companies if my evaluation didn’t meet my expectations. Luckily they did, because I didn’t relish the thought of carving out more time to evaluate another set of components.

    Overall my experience with Telerik has been superior to Infragistics in every way. The installation was easy using their control panel installer application. Getting up to speed has been easy. And the communication from Telerik has met my expectations. And we’ll continue to be good as long as I don’t start getting email messages from a sales rep saying that they want to talk to me about training and consulting – I’m looking at you, Infragistics.

    Read the article

  • Ubuntu 12.10 automatic logout

    - by anthony
    I just had a new desktop built and installed Ubuntu 12.10, which was successful for the most part. But every time I open up some applications, like Thunderbird, I get automatically logged out and all my windows are closed. System error messages also come up from time to time. I'm not sure if this helps, but if I select "Use system title bar and borders" in Chromium I will also get logged out, and from that point on I won't be able to use it without getting logged out. The same happens if I click on particular links. If anyone has any tips on how I can isolate and fix the problem I would greatly appreciate it.

    Read the article

  • First PC Build (Part 1)

    - by Anthony Trudeau
    Originally posted on: http://geekswithblogs.net/tonyt/archive/2014/08/05/157959.aspx

    A couple of months ago I made the decision to build myself a new computer. The intended use is gaming and using the last real version of Photoshop. I was motivated by the poor state of console gaming and a simple desire to do something I haven’t done before: build a PC from the ground up.

    I’ve been using PCs for more than two decades. I’ve replaced a component here and there, but for the last 10 years or so I’ve only used laptops. Therefore, this article will be written from the perspective of someone familiar with PCs, but completely new at building. I’m not an expert and this is not a definitive guide for building a PC, but I do hope that it encourages you to try it yourself.

    Component List

    Research

    There was a lot of research necessary, because building a PC is completely new to me, and I haven’t kept up with what’s out there. The first thing you want to do is nail down what your goals are. Your goals are going to be driven by what you want to do with your computer and personal choice. Don’t neglect the second one, because if you’re doing this for fun you want to get what you want.

    In my case, I focused on three things: performance, longevity, and aesthetics. The performance aspect is important for gaming and Photoshop. This will drive what components you get. For example, heavy gaming use is going to drive your choice of graphics card. Longevity is relevant to me, because I don’t want to be changing things out anytime soon for the next hot game. The consequence of performance and longevity is cost. Finally, aesthetics was my next consideration. I could have just built a box, but it wouldn’t have been nearly as fun for me. Aesthetics might not be important to you. They are for me. I also like gadgets, and that played into at least one purchase for this build.

    I used PC Part Picker to put together my component list. I found it invaluable during the process and I’d recommend it to everyone. One caveat is that I wouldn’t trust the compatibility aspects. It does a pretty good job of not steering you wrong, but do your own research.

    The rest of it isn’t really sexy. I started out with what appealed to me and then I made changes and additions as I dived deep into researching each component and interaction I could find. The resources I used are innumerable. I used reviews, product descriptions, forum posts (praises and problems), et al. to assist me. I also asked friends into gaming what they thought about my component list. And when I got near the end I posted my list to the Reddit /r/buildapc forum. I cannot stress enough the value of extra sets of eyeballs and first-hand experiences. Some of the resources I used:

    - PC Part Picker
    - Tom’s Hardware
    - bit-tech
    - Reddit

    Purchase

    PC Part Picker favors certain vendors. You should look at others too. In my case I found their favorites to be the best. My priorities were out-the-door price and shipping time. I knew that once I started getting parts I’d want to start building. Luckily, I timed it well and everything arrived within the span of a few days. Here are my opinions on the vendors I ended up using, in alphabetical order.

    Amazon.com is a good, reliable choice. They have excellent customer service in my experience, and I knew I wouldn’t have trouble with them. However, shipping time is often a problem when you use their free shipping unless you order expensive items (I’ve found items over $100 ship quickly). Ultimately though, the price wasn’t always the best, and their collection of sales tax in my state turned me off them. I did purchase my case from them. I ordered the mouse as well, but I cancelled after it was stuck four days in a “shipping soon” state. I purchased the mouse locally.

    Best Buy is not my favorite place to do business. There’s a lot of history with poor, uninterested sales representatives, and they used to have a lot of bad anti-consumer policies. That’s a lot better now, but the bad taste is still in my mouth. I ended up purchasing the accessories from them, including the mouse (locally) and headphones.

    NCIX is a company that I’d never heard of before. It popped up as a recommendation for my CPU cooler on PC Part Picker. I didn’t do a lot of research on the company, because their policy of you buying insurance for your orders turned me off. That policy makes it clear to me that the company finds me responsible for the shipment once it leaves their dock. That’s not right, and may run afoul of state laws. Regardless, they shipped my CPU cooler quickly and I didn’t have a problem.

    NewEgg.com is a well-known company. I had never done business with them, but I’m glad I did. They shipped quickly and provided good visibility over everything. The prices were also the best in most cases. My main complaint is that they have a lot of exchange-only return policies on components. To their credit, those policies are listed in the cart underneath each item. The visibility tells me that they’re not playing any shenanigans and made me comfortable dealing with that risk. The vast majority of what I ordered came from them.

    Coming Next

    In the next part I’ll tackle my build experience.

    Read the article

  • Can't install Lua on 12.04 on a Chromebook

    - by Anthony
    I am currently trying to install Lua on Ubuntu 12.04 on my Chromebook and I keep getting an error (shown in the attached screenshot). I'm doing well at teaching myself different programming languages, and I wanted to start learning Lua. That error is keeping me from doing that, unfortunately. I've tried with and without root access and still no success. Does anyone know what might be causing the problem, or of a different way of installing Lua? Thanks a ton! Don't mind the top of the image; that's just me taking out my frustration.

    Read the article

  • Database Vault integration available

    - by Anthony Shorten
    One of the major features of Oracle Utilities Application Framework V4.1 is the provision of a base solution for integration with the Database Vault product. Database Vault is part of Oracle’s security portfolio of products and allows database user permissions to be locked down so that only appropriate users have appropriate access to the product data.

    By default, when you install the product database, administrators and SYSDBA users have full DML access (SELECT, INSERT, UPDATE and DELETE) to the schemas they own and, in the case of the SYSDBA users, to all schemas on the database. This can be perceived as an issue. Database Vault allows an additional layer of security to disable inappropriate access.

    In Oracle Utilities Application Framework, a prebuilt Database Vault solution has been provided to give base DML access to product data for product users only. The solution is shipped with the database installation files and includes a set of SQL files to create, disable, enable and delete the Database Vault objects. The solution contains a Database Vault Realm, RuleSets, Rules and Command Rules that can be used as-is or extended to meet site-specific needs. The solution is consistent with the Database Vault solutions provided for other Oracle applications such as PeopleSoft, E-Business Suite, JD Edwards and Siebel. Customers familiar with the Database Vault solutions for those products will recognize the similarities between the solutions.

    For more details of the solution, refer to the Database Vault Integration for Oracle Utilities Application Framework Based Products on My Oracle Support at KB Id: 1290700.1.
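
    For a flavor of what such a solution contains, Database Vault realms are defined through the DBMS_MACADM package. The sketch below is illustrative only; the realm name and options are assumptions rather than the shipped scripts (CISADM is the usual product schema owner):

        BEGIN
          DVSYS.DBMS_MACADM.CREATE_REALM(
            realm_name    => 'Utilities Product Realm',
            description   => 'Restrict product data access to product users',
            enabled       => DVSYS.DBMS_MACUTL.G_YES,
            audit_options => DVSYS.DBMS_MACUTL.G_REALM_AUDIT_FAIL);

          DVSYS.DBMS_MACADM.ADD_OBJECT_TO_REALM(
            realm_name   => 'Utilities Product Realm',
            object_owner => 'CISADM',   -- product schema owner
            object_name  => '%',
            object_type  => '%');
        END;
        /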

    Read the article

  • NDepend 4.0 Released

    - by Anthony Trudeau
    Last week version 4.0 of NDepend was released. NDepend is a Visual Studio add-in designed for intense code analysis with the goal of high quality code. A month ago I wrapped up my evaluation of the previous version of NDepend.

    The new version contains many minor changes, several bug fixes, and adds about 50 new code rules. The version also adds support for Visual Studio 11, .NET Framework 4.5, and Silverlight 5.0. But the biggest change was the shift from CQL to CQLinq.

    Introducing CQLinq

    The latest version replaces the CQL rules language with CQLinq (CQL is still an option, although the editor is buried). As you might guess, CQLinq is a flavor of Linq designed specifically for the code rules. The best way to illustrate the differences is with an example. I used the following CQL example in Part 3 of my review:

        WARN IF Count > 0 IN SELECT TYPES WHERE IsInterface AND !NameLike "I"

    This same query looks like this when implemented in CQLinq:

        warnif count > 0
        from t in Types
        where t.IsInterface == true && !t.NameLike("I")
        select t

    I like the syntax and it is a natural fit, but I found writing the queries frustrating in the Queries and Rules Edit window. The Queries and Rules Edit window replaces the CQL Query Edit window. The new editor has the same style of Intellisense as the previous editor. However, it has a few annoyances. The error indicator is a red block. It has the tendency of obscuring your cursor. Additionally, writing CQLinq queries is like writing plain old Linq queries, so the fact that the editor uses Enter to select from Intellisense instead of Tab is jarring. These issues can be an obstacle to writing queries quickly.

    CQLinq makes it possible to write rules that weren't possible before. Additionally, a JustMyCode domain is now possible, making it easy to eliminate generated code from the analysis.

    Should you Buy?

    I recommend NDepend overall. It has some rough points for me that I have detailed in my earlier evaluation (starting here). But it’s definitely worth the money. The bigger question is: should I pay for the upgrade to 4.0? At this point I’m on the fence, but I would go for it if you need support for Visual Studio 11, .NET Framework 4.5, or Silverlight 5.0; or if you need one of the many rules that weren't possible before CQLinq.

    Disclaimer: Patrick Smacchia contacted me about reviewing NDepend. I received a free license in return for sharing my experiences and talking about the capabilities of the add-in on this site. There is no expectation of a positive review elicited from the author of NDepend.

    Resources: NDepend Release Notes

    Read the article

  • Analyzing Memory Usage: Java vs C++ Negligible?

    - by Anthony
    How does the memory usage of an integer object written in Java compare/contrast with the memory usage of an integer object written in C++? Is the difference negligible? No difference? A big difference? I'm guessing it's the same because an int is an int regardless of the language (?)

    The reason why I ask is because I was reading about the importance of knowing when a program's memory requirements will prevent the programmer from solving a given problem. What fascinated me is the amount of memory required for creating a single Java object. Take, for example, an integer object. Correct me if I'm wrong, but a Java integer object requires 24 bytes of memory:

    - 4 bytes for its int instance variable
    - 16 bytes of overhead (reference to the object's class, garbage collection info & synchronization info)
    - 4 bytes of padding

    As another example, a Java array (which is implemented as an object) requires 48+ bytes:

    - 24 bytes of header info
    - 16 bytes of object overhead
    - 4 bytes for length
    - 4 bytes for padding
    - plus the memory needed to store the values

    How do these memory usages compare with the same code written in C++? I used to be oblivious about the memory usage of the C++ and Java programs I wrote, but now that I'm beginning to learn about algorithms, I'm having a greater appreciation for the computer's resources.
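
    As a back-of-envelope illustration using the question's own 16-byte-overhead model (actual sizes vary by JVM, word size, and settings, so treat these as estimates rather than guarantees):

        int primitive = 42;          // 4 bytes of data, no object header
        Integer boxed = 42;          // ~24 bytes: 16 overhead + 4 int + 4 padding
        int[] array = new int[100];  // ~24-byte header + 100 * 4 bytes of values
        // In C++, "int x = 42;" occupies exactly sizeof(int) bytes (typically 4)
        // with no per-object header; a heap allocation ("new int") adds only
        // allocator bookkeeping, not a language-mandated header.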

    Read the article

  • SOA Suite Integration: Part 3: Loading files

    - by Anthony Shorten
    One of the most common scenarios in SOA integration is the loading of a file into the product from an external source. In Oracle SOA Suite there is a File Adapter that can process many file types into your BPEL process. For this example I will use the File Adapter to load a file of users and emails to update the user object within the Oracle Utilities Application Framework. Remember, you can repeat this process with other objects and other file types. Again, I am illustrating the ease of integration.

    The first thing is to create an empty BPEL process that will hold our flow. In Oracle JDeveloper this can be achieved by specifying the Define Service Later template (as other templates have predefined inputs and outputs, and in this case we want to specify those ourselves). So I will create a simpleFileLoad process to house our process.

    You will start with an empty canvas, so you need to first specify the load part of the process using the File Adapter. Select the File Adapter from the Component Palette under BPEL Services and drag and drop it onto the left-side Partner Links (left is input). You name the service; in this case I chose LoadFile. Press Next. We will define the interface as part of the wizard, so select Define from operation and schema (specified later). Press Next. We are going to choose Read File to denote that we will read the file, and accept the default Operation Name of Read. Press Next.

    The next step is to tell the adapter the location of the files, how to process them, and what to do with them after they have been processed. I am using hardcoded locations in this example, but you can have logical locations as well. Press Next. I am now going to tell the adapter how to recognize the files I want to load. In my case I am using CSV files and, more importantly, I am telling the adapter to run the process for each record in the file it encounters. Press Next. Now I tell the adapter how often I want to poll for the files. I have taken the defaults. Press Next.

    At this stage I have no definition of the format of the input, so I am going to invoke the Native Format Wizard, which will guide me through the process of creating the file input format. Clicking the purple cog icon will start the wizard. After an introduction screen (not shown), you specify the format of the input file. The File Adapter supports multiple format types. For this example I will use Delimited, as I am going to load a CSV file. Press Next.

    The best way for the wizard to work is with a sample. I have a sample file, and the wizard will ask how much of the file to use as a template. I will use the defaults. Note: if you are using a character set other than US-ASCII, it is at this point that you specify it. Press Next. The sample contains multiple instances of a single record type. The wizard supports complex types as well. We will use the appropriate setting for our file. Press Next.

    You have to specify the file element and the record element. These will be used by the wizard to translate the CSV data into an XML structure (this will make sense later). I am using LoadUsers as my file root element and UserRecord as my record root element. Press Next. As the file is CSV, the delimiter is "," and I will also specify that the End Of Line (EOL) indicator marks the end of a record. Press Next. Up until this point you have not given the columns their names. In my case my sample includes the column names in the first record. This is not always the case, but you can specify the names and formats of columns in this dialog (not shown). Press Next.

    The wizard now generates the schema for the input file. You can specify a name for the schema; I have used userupdate.xsd. We want to verify the schema, so press Test. You can test the schema by specifying an input sample and pressing the green play button. You will see the delimiters you specified earlier for the file and the records. Press OK to continue. A confirmation screen will be displayed showing you the location of the schema in your project. Press Finish to return to the File Adapter configuration. You will now see the schema and elements prepopulated from the wizard. Press Next. The File Adapter configuration is now complete. Press Finish.

    Now you need to receive the input from the LoadFile component, so place a Receive node in the BPEL process by dragging and dropping the Receive component from the Component Palette under BPEL Constructs onto the BPEL process. We link the Receive node with the LoadFile component by dragging the left-most connect node of the Receive node to the LoadFile component. Once the link is established, you need to name the Receive node appropriately and, as in the last post of this series, generate input variables for the BPEL process to hold the input records.

    You now need to add the product Web Service. The process is the same as described in the last post of this series. You drop the Web Service BPEL Service onto the right side of the process and fill in the details of the WSDL URL. You also have to add an Invoke node to call the service and generate the input and output variables for the call in the Invoke node.

    Now, to get the inputs from the file to the service, you have to use a Transform (you can use an Assign action, but a Transform action is more flexible). Drag and drop the Transform component from the Component Palette under Oracle Extensions and place it between the Receive and Invoke nodes. We name the Transform node MapperFile and associate the source of the mapping with the schema from the Receive node; the target will be the input variable of the Invoke node.

    We now build the transform. We first map the user and email attributes by dragging and dropping the elements from the left to the right. The reason we needed to use the Transform is that we will be telling the AS-User service that we want to issue an update action. Remember, when we registered the service we actually used Read as the default action. If we do not otherwise inform the service to use the Update action, it will use the Read action instead (which is not desired). To specify the update action you need to click on the transactionType node on the right and select Set Text to set the action. You need to specify a transactionType of UPD (for update). The mapping is now complete.

    The final BPEL process is ready for deployment. You then deploy the BPEL process to the server and test the service by simply dropping a file, matching the pattern/name you specified, in the directory you specified in the File Adapter. You will see each record as a separate instance entry in the Fusion Middleware Control console.

    You can now load files into the product, and you can repeat this process for each type of file to process. While this was a simple example, it illustrates how loading data can be achieved using SOA Suite in conjunction with our products.
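
    To make the shape of the input concrete, a hypothetical input file for this flow might look like the following (the column names and data are illustrative). The Native Format translation turns each line into a UserRecord element under the LoadUsers root:

        USERID,EMAILID
        CBROWN,charlie.brown@example.com
        LVANPELT,lucy.vanpelt@example.com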

    Read the article

  • Oracle BI Publisher integration Guidelines published

    - by Anthony Shorten
    The Oracle Utilities Application Framework integrates with Oracle BI Publisher 10g/11g to allow reports to be submitted and viewed from within an Oracle Utilities Application Framework based product. The integration is achieved using base-provided algorithms. A new whitepaper has been released that outlines how to configure the algorithm for integration, and also gives some guidelines on how to define a new report to the product for submission and viewing. The whitepaper is available from My Oracle Support at KB Id 1299732.1, titled “BI Publisher Guidelines for Oracle Utilities Application Framework”.

    Read the article

  • NDepend Evaluation: Part 3

    - by Anthony Trudeau
    NDepend is a Visual Studio add-in designed for intense code analysis with the goal of high code quality. NDepend uses a number of metrics and aggregates the data in pleasing static and active visual reports. My evaluation of NDepend is broken up into several different parts. In the first part of the evaluation I looked at installing the add-in, and in the last part I went over my first impressions, including an overview of the features. In this installment I provide a little more detail on a few of the features that I really like.

    Dependency Matrix

    The dependency matrix is one of the rich visual components provided with NDepend. At a glance it lets you know where you have coupling problems, including cycles. It does this with a number indicating the weight of the dependency and a color-coding that indicates the nature of the dependency. Green and blue cells are direct dependencies (with the difference being whether the relationship is from row-to-column or column-to-row). Black cells are the ones that you really want to know about. These indicate that you have a cycle. That is, type A refers to type B and type B also refers to type A.

    But that’s not the end of the story. A handy pop-up appears when you hover over the cell in question. It explains the color, the dependency, and provides several interesting links that will teach you more than you want to know about the dependency. You can double-click the problem cells to explode the dependency. That will show the dependencies on a method-by-method basis, allowing you to more easily target and fix the problem. When you’re done you can click the back button on the toolbar.

    Dependency Graph

    The dependency graph is another component provided. It’s complementary to the dependency matrix, but it isn’t as easy to identify dependency issues using the window. On a positive note, it does provide more information than the matrix.

    My biggest issue with the dependency graph is determining what is shown. This was not readily obvious. I ended up using the navigation buttons to get an acceptable view. I would have liked to choose what I see.

    Once you see the types you want, you can get a decent idea of coupling strength based on the width of the dependency lines. Double-arrowed lines are problematic and are shown in red. The size of the boxes is related to the metric being displayed. This is controlled using the Box Size drop-down in the toolbar. Personally, I don’t find the size of the box to be helpful, so I change it to Constant Font.

    One nice thing about the display is that you can see the entire path of dependencies when you hover over a type. This is done by color-coding the dependencies and dependants. It would be nice if selecting the box for the type would lock the highlighting in place. I did find a perhaps unintended work-around to the color-coding. You can lock the color-coding in by hovering over the type, right-clicking, and then clicking on the canvas area to clear the pop-up menu. You can then do whatever you like with it, including saving it to an image file with the color-coding.

    CQL

    NDepend uses a code query language (CQL) to work with your code just like it was a database. CQL cannot match the robustness of T-SQL or even LINQ, but it represents an impressive attempt at providing an expressive way to enumerate and interrogate your code.

    There are two main windows you’ll use when working with CQL. The CQL Query Explorer allows you to define which queries (rules) are run as part of a report; I immediately unselected rules that I don’t want in my results. The CQL Query Edit window is where you can view or author your own rules. The explorer window is pretty self-explanatory, so I won’t mention it further other than to say that any queries you author will appear in the custom group.

    Authoring your own queries is really hard to screw up. The Intellisense-like pop-ups tell you what you can do while making composition easy. I was able to create a query within two minutes of playing with the editor. My query warns if any types that are interfaces don’t start with an “I”:

        WARN IF Count > 0 IN SELECT TYPES WHERE IsInterface AND !NameLike "I"

    The results from the CQL Query Edit window are immediate. That fact makes it useful for ad hoc querying. It’s worth mentioning two things that could make the experience smoother. First, out of habit from using Visual Studio, I expect to be able to scroll and press Tab to select an item in the list (like Intellisense). You have to press Enter when you scroll to the item you want. Second, the commands are case-sensitive. I don’t see a really good reason to enforce that.

    CQL has a lot of potential, not just in enforcing code quality, but also in enforcing architectural constraints that your enterprise has defined.

    Up Next

    My next update will be the final part of the evaluation. I will summarize my experience and provide my conclusions on the NDepend add-in.

    ** View Part 1 of the Evaluation **
    ** View Part 2 of the Evaluation **

    Disclaimer: Patrick Smacchia contacted me about reviewing NDepend. I received a free license in return for sharing my experiences and talking about the capabilities of the add-in on this site. There is no expectation of a positive review elicited from the author of NDepend.

    Read the article

  • Foolproof way to ensure Google News pulls the correct image for its thumbnails?

    - by Anthony
    Google News results have an accompanying thumbnail next to the articles that show up in the results. If Google's crawler can't find a thumbnail to pull from our site, it uses its next best guess from another site, thereby linking the image to another site while still using our headline. Example: headline from Reuters, image from Livemint. Our pages absolutely have images; they are not massive in file size or dimensions, yet they are not being pulled/crawled correctly. We have read up on the suggestions from Google and from others around the web, and nothing is panning out. Has anyone had any experience where they can ensure Google News will pull a thumbnail of our choosing?

    Read the article

  • Probably the dumbest and most pointless question to ask

    - by Anthony Adams
    How can I dual boot Ubuntu with Windows 8? I've tried to burn it to a CD, but I either never have enough space on the disc or the program tells me that the CD isn't writable. So I want to run it from USB. But I never understood how to run the program from the USB, how to get it onto the USB, or how to set up the computer to boot from the USB before the hard drive. I am a beginner trying to learn Linux; if anyone could help a newbie like me, that would be much appreciated.

    Read the article

  • Intel 5100 AGN disconnects and then disables my Verizon FiOS Actiontec router

    - by Anthony
    I am a new user of Ubuntu (my first trial) and booted up my Windows Vista laptop with the Ubuntu CD. I was able to connect wirelessly to my router and stay connected for about a minute. After that it would disconnect and my router would be disabled: none of my other wireless devices would see the signal any longer, as if it had disappeared. I would have to cycle the router's power toggle off/on to get it to come back on and put out a signal again. This happened repeatedly. I experienced no other problems trying the software (i.e., I accessed my files without issue). I did not attempt to connect with an Ethernet cable. Here are the specs on my system:

    - laptop: HP Pavilion dv5 Notebook PC
    - system type: 64-bit operating system
    - CPU: Intel Core 2 Duo P8600 2.4GHz
    - RAM: 4 GB
    - network card: Intel WiFi Link 5100 AGN

    I read elsewhere in this forum that 64-bit systems may not be compatible with Ubuntu. Can anyone help me with this? I'd really like to be able to use this operating system.

    Read the article

  • InvalidProgramException Running Unit Test

    - by Anthony Trudeau
    There is a bug in the unit testing framework in Visual Studio 2010. The bug appears in a very special circumstance involving an internal generic type and causes the following exception to be thrown:

        System.InvalidProgramException: JIT Compiler encountered an internal limitation.

    This occurs under the following circumstances:

    - the type being tested is internal or private
    - the method being tested is generic
    - the method being tested has an out parameter
    - the type accessor functionality is used to access the internal type

    The exception is not thrown if the InternalsVisibleToAttribute is assigned to the source assembly and the accessor type is not used; nor is it thrown if the method is not generic. Bug #635093 has been added through Microsoft Connect.
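
    As a sketch of the shape of code that can trigger the exception (the names here are hypothetical, not taken from the bug report):

        // Internal type under test
        internal class Parser
        {
            // Generic method with an out parameter
            internal bool TryParse<T>(string input, out T result)
            {
                result = default(T);
                return false;
            }
        }

        // A test method that reaches TryParse<T> through a generated type
        // accessor (e.g. Parser_Accessor) instead of InternalsVisibleTo can
        // throw the InvalidProgramException when the test is JIT-compiled.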

    Read the article

  • Where is Oracle Utilities Application Framework V3?

    - by Anthony Shorten
    You may have noticed that the latest version of the Oracle Utilities Application Framework is V4.0.1, while the release before that was V2.2. So what happened to V3? The short answer is that there is no V3 of the Framework.

    The long answer is that the Oracle Utilities Application Framework has long been associated with Oracle Utilities Customer Care and Billing and Oracle Enterprise Taxation Management only. As more and more of the Oracle tax and utilities products are migrated onto the Framework, the association between the Framework and the original products on it is less appropriate. Therefore it was decided to pick a version number to emphasize the decoupling of releases of the Framework from any particular product.

    To illustrate this, the Oracle Mobile Workforce Management (MWM) V2.0.0 product uses Oracle Utilities Application Framework V4.0.1. If we used the old numbering scheme then MWM would be V4.0.1, which makes no sense given that the last release of MWM was V1.x.

    The Framework has its own development team and product management, and it basically has its own schedule (though it is still influenced by the products that use it, which makes sense). So that is the reasoning around the version numbering change for the Framework.

    Read the article
