Search Results


  • App Stores–In All Things, It's Quality Over Quantity

    - by D'Arcy Lussier
    Everybody has an opinion about Windows 8. People love it, people hate it, people are meh about it, people are apparently buying it from Microsoft stores in NYC as if it were water before a natural disaster…if there's one thing that Microsoft product launches do well, it's the ability to bring out strong emotional responses. Over at eweek.com, Don Reisinger wrote about 5 good and bad things about Windows 8. Yes, another opinion piece on Windows 8. I figured since this one had good and bad it might be worthwhile to read. I then came across #10 on his list, and figured "What the hell…might as well post a bit of a rant on Windows 8 myself!" Here's #10:

    10. Bad: Too few apps
    Unfortunately, Microsoft wasn't able to get too many developers to start producing applications for its Windows 8 Store. Microsoft hasn't yet released official numbers, but some have said that the marketplace has less than 8,000 programs. Considering Apple's App Store has 100 times that, it's about time Microsoft starts leaning on developers to get more programs into its store.

    Believe me, Microsoft *has* been leaning on developers to get apps into the store. I've been asked at least 5 or 6 times by 5 or 6 different friends at Microsoft whether I was going to write a Windows 8 app. I think Microsoft felt they had to try and address the number of apps available in their marketplace, since some people (like Don) would draw comparisons to the number of apps in the Apple marketplace. I feel for Microsoft in this, since the number of apps in a marketplace is an empty stat.

    Quality over Quantity
    I have an iPad that my family (wife, 10yo daughter, 3yo daughter) use. We all have our own apps installed on it. In addition, my wife has an iPhone 4S that she also installs apps on. As someone who gets asked by his kids often whether they can buy/download an app, the vast majority of the vast catalogue of iOS marketplace apps are crap! Do you realize how many "free" games are out there, only to really be not-free because you have to purchase in-game content to make the game actually playable? And how about searching – with such a vast array of apps and such high numbers of craptastic ones, trying to find something is incredibly difficult and can be frustrating. I would rather see Microsoft have 8,000 high quality apps in their store at launch, instead of 800,000 that were mostly junk.

    Too Few Apps?!
    And seriously, 8000 is not a small number. How many iOS apps have I actually bought between the iPad and iPhone? I'll be generous and say 30…heck, let's round it up to 40. It's not like I have 10,000 apps installed on my iPad, nor will that ever happen! So if people have, at the *launch* of a new platform ecosystem, EIGHT THOUSAND apps to choose from, I don't see that as a fail at all! It should be noted that most of the common apps (Netflix, Skype, etc.) are available for Windows 8 at launch – I guess I'll have to wait a few weeks for My Pony Ranch and all its clones to start showing up; pity.

    Let's Check Back in a Year
    So look, let's check back in a year's time and see what the app store looks like. My hope is that Microsoft doesn't continue to push quantity over quality. Even knowing the optics that the number of apps in the store carries, and the pressure to catch the Apple and Android marketplaces, I hope Microsoft avoids the scenario where there's a good percentage of apps in the Windows Store that are utter rubbish and finding the gems will be cumbersome.
    But if that happens, we can thank guys like Don who raised the false issue of app count at the launch for it.

    Read the article

  • Thoughts on C# Extension Methods

    - by Damon
    I'm not a huge fan of extension methods. When they first came out, I remember seeing a method on an object that was fairly useful, but when I went to use it in another piece of code that method wasn't available. Turns out it was an extension method and I hadn't included the appropriate assembly and imports statement in my code to use it. I remember being a bit confused at first about how the heck that could happen (hey, extension methods were new, cut me some slack) and it took a bit of time to track down exactly what it was that I needed to include to get that method back. I just imagined a new developer trying to figure out why a method was missing and fruitlessly searching on MSDN for a method that didn't exist, and it just didn't sit well with me.

    I am of the opinion that if you have an object, then you shouldn't have to include additional assemblies to get additional instance level methods out of that object. That opinion applies to namespaces as well - I do not like it when the contents of a namespace are split out into multiple assemblies. I prefer to have static utility classes instead of extension methods to keep things nicely packaged into a cohesive unit. It also makes it abundantly clear where utility methods are used in code. I will concede, however, that it can make code a bit more verbose and lengthy. There is always a trade-off.

    Some people harp on extension methods because they break the tenets of object oriented development and allow you to add methods to sealed classes. Whatever. Extension methods are just utility methods that you can tack onto an object after the fact. Extension methods do not give you any more access to an object than the developer of that object allows, so I say that those who cry OO foul on extension methods really don't have much of an argument on which to stand. In fact, I have to concede that my dislike of them is really more about style than anything of great substance.

    One interesting thing that I found regarding extension methods is that you can call them on null objects. Take a look at this extension method:

        namespace ExtensionMethods
        {
            public static class StringUtility
            {
                public static int WordCount(this string str)
                {
                    if (str == null) return 0;
                    return str.Split(new char[] { ' ', '.', '?' },
                        StringSplitOptions.RemoveEmptyEntries).Length;
                }
            }
        }

    Notice that the extension method checks to see if the incoming string parameter is null. I was worried that the runtime would perform a check on the object instance to make sure it was not null before calling an extension method, but that is apparently not the case. So, if you call the following code, it runs just fine:

        string s = null;
        int words = s.WordCount();

    I am a big fan of things working, but this seems to go against everything I've come to know about instance level methods. However, an extension method is really a static method masquerading as an instance-level method, so I suppose it would be far more frustrating if it failed since there is really no reason it shouldn't succeed.

    Although I'm not a fan of extension methods, I will say that if you ever find yourself at an impasse with a die-hard fan of either the utility class or extension method approach, then there is a common ground. Extension methods are defined in static classes, and you call them from those static classes as well as directly from the objects they extend.
So if you build your utility classes using extension methods, then you can have it your way and they can have it theirs. 
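    To see what that common ground looks like in practice, here is a quick sketch reusing the WordCount extension from above (it assumes a using ExtensionMethods; directive is in scope for the instance-style call):

        string title = "a quick brown fox";

        // Instance-style call - the compiler turns this into a static call.
        int count1 = title.WordCount();

        // Explicit static utility call - the very same method, called directly.
        int count2 = ExtensionMethods.StringUtility.WordCount(title);

        // Both return 4; the two call styles are interchangeable.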

    Read the article

  • Sharing configuration settings between Windows Azure roles

    - by theo.spears
    If you are working on a medium-large Windows Azure project, it's likely it will involve more than one role, for example separate web and worker roles. Unfortunately, although all the Windows Azure configuration settings are stored in a single cscfg file, there is no way to share configuration settings between multiple roles. This means you have to duplicate common settings like connection strings across all your roles. There is an open Connect issue about this topic, but Microsoft have not said when they will fix it. In the meantime I've put together a cunning workaround (read: dirty hack) that creates a fake role containing your shared configuration settings, and copies it to all roles as part of the build process. Here's how you set it up:

    1. Download the zip file attached to this post, and unzip it into the folder containing your Azure project (not your solution folder).

    2. Edit your csdef and cscfg files to include the placeholder project.

    ServiceDefinition.csdef:

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceDefinition name="AzureSpendNotifier" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
          <WorkerRole name="GLOBAL">
            <ConfigurationSettings>
              <Setting name="ExampleSetting" />
            </ConfigurationSettings>
          </WorkerRole>
          <WorkerRole name="MyWorker">
            <ConfigurationSettings>
            </ConfigurationSettings>
          </WorkerRole>
          <WebRole name="MyWeb">
            <Sites>
              <Site name="Web">
                <Bindings>
                  <Binding name="WebEndpoint" endpointName="WebEndpoint" />
                </Bindings>
              </Site>
            </Sites>
            <ConfigurationSettings>
            </ConfigurationSettings>
          </WebRole>
        </ServiceDefinition>

    ServiceConfiguration.cscfg:

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceConfiguration serviceName="AzureSpendNotifier" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
          <Role name="GLOBAL">
            <ConfigurationSettings>
              <Setting name="ExampleSetting" value="Hello World" />
            </ConfigurationSettings>
            <Instances count="1" />
          </Role>
          <Role name="MyWorker">
            <Instances count="1" />
            <ConfigurationSettings>
            </ConfigurationSettings>
          </Role>
          <Role name="MyWeb">
            <Instances count="1" />
            <ConfigurationSettings>
            </ConfigurationSettings>
          </Role>
        </ServiceConfiguration>

    It is important that all your roles contain a ConfigurationSettings entry in both cscfg and csdef files, even if it's empty - otherwise the shared configuration settings will not be inserted.

    3. Open your Azure deployment (.ccproj) project in Notepad, and add the second import line shown below (the globalsettings.targets import):

        ...
        <Import Project="$(CloudExtensionsDir)Microsoft.CloudService.targets" />
        <Import Project="globalsettings/globalsettings.targets" />
        </Project>

    It is important you add this below the Microsoft.CloudService.targets import line, as it replaces some of the rules defined in that file. Visual Studio will prompt you to reload the project; say yes.

    At this point you will have a new Azure role called 'GLOBAL' with settings you can edit through the Visual Studio properties panel as normal. This role will never be deployed, but any settings you add to it will be copied to all your other roles when deployed or tested locally within Visual Studio.
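    Once the build step has copied the GLOBAL settings across, each role reads the shared value exactly as it would one of its own settings. A minimal sketch, assuming the ExampleSetting placeholder from above and the standard Azure runtime API:

        using Microsoft.WindowsAzure.ServiceRuntime;

        public class WorkerRole : RoleEntryPoint
        {
            public override void Run()
            {
                // The GLOBAL role's setting was copied into this role's
                // configuration at build time, so the normal API finds it.
                string example =
                    RoleEnvironment.GetConfigurationSettingValue("ExampleSetting");
                System.Diagnostics.Trace.WriteLine("Shared setting: " + example);
                base.Run();
            }
        }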

    Read the article

  • OTN ArchBeat Top 10 for September 2012

    - by Bob Rhubart
    The results are in... Listed below are the Top 10 most popular items shared via the OTN ArchBeat Facebook Page for the month of September 2012.

    The Real Architects of Los Angeles - OTN Architect Day - Oct 25
    No gossip. No drama. No hair pulling. Just a full day of technical sessions and peer interaction focused on using Oracle technologies in today's cloud and SOA architectures. The event is free, but seating is limited, so register now. Thursday October 25, 2012. 8:00 a.m. – 5:00 p.m. Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048.

    Oracle Fusion Middleware Security: Attaching OWSM policies to JRF-based web services clients
    "OWSM (Oracle Web Services Manager) is Oracle's recommended method for securing SOAP web services," says Oracle Fusion Middleware A-Team member Andre Correa. "It provides agents that encapsulate the necessary logic to interact with the underlying software stack on both service and client sides. Such agents have their behavior driven by policies. OWSM ships with a bunch of policies that are adequate to most common real world scenarios." His detailed post shows how to make it happen.

    Oracle 11gR2 RAC on Software Defined Network (SDN) (OpenvSwitch, Floodlight, Beacon) | Gilbert Stan
    "The SDN [software defined network] idea is to separate the control plane and the data plane in networking and to virtualize networking the same way we have virtualized servers," explains Gil Standen. "This is an idea whose time has come because VMs and vmotion have created all kinds of problems with how to tell networking equipment that a VM has moved and to preserve connectivity to VPN end points, preserve IP, etc." H/T to Oracle ACE Director Tim Hall for the recommendation.

    Process Oracle OER Events using a simple Web Service | Bob Webster
    Bob Webster's post "provides an example of a simple web service that processes Oracle Enterprise Repository (OER) Events. The service receives events from OER and utilizes the OER REX API to implement simple OER automations for selected event types."

    Understanding Oracle BI 11g Security vs Legacy Oracle BI 10g | Christian Screen
    "After conducting a large amount of Oracle BI 10g to Oracle BI 11g upgrades and after writing the Oracle BI 11g book," says Oracle ACE Christian Screen, "I still continually get asked one of the most basic questions regarding security in Oracle BI 11g: how does it compare to Oracle BI 10g? The trail of questions typically goes on to what are the differences? And, how do we leverage our current Oracle BI 10g security table schema in Oracle BI 11g?"

    OIM-OAM-OAAM integration using TAP – Request Flow you must understand!! | Atul Kumar
    Atul Kumar's post addresses "key points and request flow that you must understand" when integrating three Oracle Identity Management products: Oracle Identity Manager, Oracle Access Manager, and Oracle Adaptive Access Manager.

    Adding a runtime LOV for a taskflow parameter in WebCenter | Yannick Ongena
    Oracle ACE Yannick Ongena illustrates how to customize the parameters tab for a taskflow in WebCenter.

    Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET | Scott Nelson
    "It has been a very winding path and this blog entry is intended to share both the lessons learned and relevant approaches that led to those learnings," says Scott Nelson. "Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there."

    15 Lessons from 15 Years as a Software Architect | Ingo Rammer
    In this presentation from the GOTO Conference in Copenhagen, Ingo Rammer shares 15 tips regarding people, complexity and technology that he learned doing software architecture for 15 years.

    WebCenter Content (WCC) Trace Sections | ECM Architect
    ECM Architect Kevin Smith shares a detailed technical post covering WebCenter Content (WCC) Trace Sections.

    Thought for the Day
    "Eventually everything connects - people, ideas, objects. The quality of the connections is the key to quality per se." — Charles Eames (June 17, 1907 – August 21, 1978)
    Source: SoftwareQuotes.com

    Read the article

  • What is the SharePoint Action Framework and Why do I need it?

    - by SAF
    For those out there that are a little curious as to whether SAF is of any use to your organisation, please read this FAQ.

    What is SAF?
    SAF is free to use. SAF is the "SharePoint Action Framework"; it was built by myself and Hugo (plus a few others along the way). SAF is written entirely in C#, with code available from http://saf.codeplex.com. SAF is a way to automate SharePoint configuration changes.
    - An Action is a command/class/task/script written in C# that performs a unit of execution against SharePoint, such as "CreateWeb" or "AddLookupColumn".
    - A SAF Macro is a collection of one or more Actions.
    - A SAF Macro can be run from MSBuild, a Feature, StsAdm or common plain old .NET code.
    - Parameters can be passed to a Macro at run-time from a variety of sources such as "Environment Variable", "*.config", "MSBuild Properties", Feature Properties, command line args, and .NET code.
    - SAF emits lots of trace statements at run-time; these can be viewed using "DebugView".
    - One Action can pass parameters to another Action.
    - Parameters can be set using Expression Syntax such as "DateTime.Now".

    You should consider SAF if you suffer from one of the following symptoms...
    - "Our developers write lots of code to deploy changes at release time - it's always rushed."
    - "I don't want my developers shelling out to PowerShell or Stsadm from a Feature."
    - "We have loads of Console applications now; I have lost track of where they are, or what they do."
    - "We seem to be writing similar scripts against SharePoint in lots of ways; testing is hard."
    - "My scripts often have lots of errors - they are done at the last minute."
    - "When something goes wrong - I have no idea what went wrong or how to solve it."
    - "Our Features get stuck and bomb out half way through - there's no way to roll them back."
    - "We have tons of Features now - I can't keep track."
    - "We deploy Features to run one-off tasks."
    - "We have a library of reusable scripts, but we can only run it in one way; sometimes we want to run it from MSBuild and a Feature."
    - "I want to automate the deployment of changes to our development environment."
    - "I would like to run a housekeeping task on a scheduled basis."

    So I like the sound of SAF - what are the problems?
    Realistically, there are a few things that need to be considered:
    - Someone on your team will need to spend a day or 2 understanding SAF and deciding exactly how you want to use it. I would suggest a Tech Lead, SysAdmin or SP Architect will need to download it, try out the examples, and look through the unit tests. Ask us questions.
    - Although SAF can be downloaded and set up in a few minutes, you will still need to address issues such as "Do you want to execute your Macros in MSBuild or from a Feature?"
    - You will need to decide who is going to do your deployments - does each developer do it for themselves, or do you require a dedicated Build Manager?
    - As most environments (Dev, QA, Live etc.) require different settings (e.g. URLs, database names, accounts etc.), you will more than likely want to define these and set up a properties file for each environment. (These can then be injected into SAF at run-time.)
    - There may be no Action to solve your particular problem. If this is the case, suggest it to us - we can try and write it, or you can write it yourself. It's very easy to write a new Action - we have an approach to easily unit test it, document it and author it. For example, I wrote one to deploy a WSP in 2 hours the other day (a rough sketch of the shape of an Action appears at the end of this post). Alternatively, SAF can also call Stsadm commands and PowerShell scripts.

    Anyway, I do hope this helps!
    If you still need help, or a quick start, we can also offer consultancy around SAF. If you want to know more, give us a call or drop an email to [email protected]
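    Here is the rough sketch promised above. To be clear, the class and member names below are hypothetical illustrations, not SAF's actual base class or API - see the examples and unit tests in the codeplex download for the real thing:

        // Hypothetical sketch only - SAF's real Action base class and
        // parameter-injection mechanism will differ; check the download.
        public class CreateWebAction
        {
            // Parameters like these would be supplied by the Macro at
            // run-time (from *.config, MSBuild properties, Feature
            // properties, etc.).
            public string SiteUrl { get; set; }
            public string WebName { get; set; }
            public string Title { get; set; }

            public void Execute()
            {
                // The unit of execution against SharePoint.
                using (var site = new Microsoft.SharePoint.SPSite(SiteUrl))
                using (var web = site.AllWebs.Add(
                    WebName, Title, "", 1033, "STS#0", false, false))
                {
                    // SAF emits trace statements viewable in DebugView.
                    System.Diagnostics.Trace.WriteLine("Created web: " + web.Url);
                }
            }
        }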

    Read the article

  • Too Clever for My Own Good

    - by AjarnMark
    Yesterday I caught myself being a little too clever for my own good with some ASP.NET code. It seems that I have forgotten some of my good old classic HTML and JavaScript skills, and become too dependent on the .NET Framework and WebControls to do the work for me. Here's the scenario…

    In order to improve the User Interface and better communicate to the user when something is happening that they need to wait for, we have started to modify some of our larger (slower) pages to display messages like Processing… or Reloading… while they are cycling through a postback. (Yes, I understand this could be improved by using AJAX / Callbacks and so on, but even then, you need to let your user know that they need to wait for that section to be re-rendered, so for the moment these pages will continue to use good ol' Postbacks.) It's a very simple trick, really. All I want to do is when some control triggers a postback, first run a little client-side JavaScript to hide the main contents of the page (such as a GridView) and display the appropriate message. This lets the user know, "Hey, we're doing something, don't click another link or scroll and try to take action right now."

    The first places I hooked this up were easy. Most common cause of a postback: Buttons. And when you're writing the markup or declarative code for an ASP:Button control, there is the handy OnClientClick property which is designed for just this purpose…to run client-side JavaScript before the postback occurs. This is distinguished from the OnClick property which tells the control what server-side code to run. Great! Done! Easy!

    But then there are other controls like DropDownLists and CheckBoxes that we use on our pages with the AutoPostback=True setting, which causes postbacks. And these don't have OnClientClick or OnClientSelectedIndexChanged events. So I started getting creative, using an ASP:CustomValidator control in conjunction with setting the CausesValidation and ValidationGroup settings on these controls, which basically caused the action on the control to fire the Custom Validator, which was defined with a client-side validation function which then did the hide content/show message code (and returned a meaningless IsValid setting). This also caused me to define a different ValidationGroup setting for my real data entry validator controls so that I could control them separately and only have them fire when I really wanted validation, and not just my show/hide trick.

    For a little while I was pretty proud of myself for coming up with this clever approach to get around what I considered to be a serious oversight on the DropDownList and CheckBox controls' declarative syntax. Then, in the midst of my smugness, just as I was about to commit my changes to the source code repository, it dawned on me that there is a much simpler and much more appropriate way to accomplish this. All that I really needed to do was to put in my server-side code (I used the Page_Init section) a call to MyControl.Attributes.Add("onClick", "myJavaScriptFunctionName()") for the checkboxes, and for the DropDownLists (which become select tags) use "onChange" instead of "onClick". This is exactly the type of thing that the Attributes collection is there for…so you can add attributes to be rendered with the control that you would have otherwise stuck right into the HTML markup if you had been doing this by hand in the first place. Ugh!
    A few hours wasted on clever tricks that I ended up completely removing, but I did learn a lot more about custom validators and validation groups in the process. And got a good reminder that all that stuff (HTML, JavaScript, and CSS) I learned back when I wrote classic ASP pages is still valuable today. Oh, and one more thing…don't get lulled into too much reliance on the whiz-bang tool to do it for you. After all, WebControls are just another layer of abstraction, and sometimes you need to dig down through the layers and get a little closer to the native language.
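    To make the simpler approach concrete, here is a minimal sketch; the control names and the showWaitMessage() client-side function are invented for the example:

        protected void Page_Init(object sender, EventArgs e)
        {
            // Render client-side handlers onto the AutoPostback controls so
            // the hide-content/show-message JavaScript runs before the
            // postback submits the form.
            MyCheckBox.Attributes.Add("onClick", "showWaitMessage();");

            // A DropDownList renders as a <select> tag, so hook onChange.
            MyDropDownList.Attributes.Add("onChange", "showWaitMessage();");
        }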

    Read the article

  • Framework 4 Features: Login Id Support

    - by Anthony Shorten
    Given that Oracle Utilities Application Framework 4 is available as part of Mobile Workforce Management and other products progressively, I am preparing a number of short but sweet blog entries highlighting some of the new functionality that has been implemented. This is the first entry and it is on a new security feature called Login Id.

    In past releases of the Oracle Utilities Application Framework, the userid used for authentication and authorization was limited to eight (8) characters in length. This mirrored what the market required in the past, with LAN userids and even legacy userids being that length. The technology market has since progressed to longer userid lengths. It is very common to hear that email addresses are being used as credentials for production systems. To achieve this in past versions of the Oracle Utilities Application Framework, sites had to introduce a short userid (8 characters in length) as an alias in your preferred security store. You then configured your J2EE Web Application Server to use the alias as credentials. This sometimes was a standard feature of the security store and/or the J2EE Web Application Server, if you were lucky. If not, some Java code had to be written to implement the solution.

    In Oracle Utilities Application Framework 4 we introduced a new attribute on the user object called Login Id. The Login Id can be up to 256 characters in length and is an alternative to the existing userid stored on the user object. This means the Oracle Utilities Application Framework can support both long and short userids. For backward compatibility we use the Login Id for authentication but the short userid for authorization and auditing. The user object within the Oracle Utilities Application Framework holds the translation.

    Backward compatibility is always a consideration in any of our designs for future or changed functionality. You will see reference to this fact in the blog entries I will be composing over the next few months. We have also thought about the flexibility in implementing this feature. The Login Id can be the same value as the Userid (the default for backward compatibility) or can be different. Both the Login Id and Userid have to be unique. This avoids sharing of credentials and is also backward compatible.

    You can manually enter the Login Id or provision it from Oracle Identity Manager (or another tool). If you use the Login Id only, then we will not auto-generate a short userid, as the rules for this can vary from site to site. You have a number of options there. Most identity provisioning tools can generate a short userid at user creation time and this can be used. If you do not use provisioning tools, then you can write a class extension using the SDK to auto-generate the userid based upon your site's preference (a purely illustrative sketch of one such scheme appears at the end of this post). When we designed the feature there were lots of styles of generating userids (random, initial and surname, numbers etc.). We could not really see a clear winner in that respect, so we just allowed the extension to be inserted if necessary. Most customers indicated to us that identity provisioning was the preferred way. This is why we released an Oracle Identity Manager integration with the framework.

    The Login Id is now case sensitive, which was not supported under userid. The introduction of the Login Id allows the product to offer flexible options when configuring security whilst maintaining backward compatibility.
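    As promised, a purely illustrative sketch of the "initial plus surname" style mentioned above. This is not the Oracle Utilities SDK API (the framework's extension points are site-specific); it only shows the shape of the logic a site might implement, including the 8-character limit:

        // Illustrative only - shows one common scheme for deriving an
        // 8-character short userid; uniqueness must still be checked by
        // the caller, passing an incremented sequence on collision.
        static string GenerateShortUserId(string firstName, string surname, int sequence)
        {
            // e.g. "John" + "Smithers" -> "JSMITHER"
            string candidate = (firstName.Substring(0, 1) + surname).ToUpperInvariant();
            if (candidate.Length > 8) candidate = candidate.Substring(0, 8);
            if (sequence > 0)
            {
                // Append a numeric suffix without exceeding 8 characters.
                string suffix = sequence.ToString();
                int keep = System.Math.Min(candidate.Length, 8 - suffix.Length);
                candidate = candidate.Substring(0, keep) + suffix;
            }
            return candidate;
        }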

    Read the article

  • Oracle Tutor: XPDL conversion (and why you should care)

    - by mary.keane
    You may have noticed that the Oracle Business Process Converter feature in Tutor 14 supports "XPDL" conversion to Oracle Business Process Analysis Suite (BPA), Oracle Business Process Management Suite (BPM), and Oracle Tutor, and you may have briefly wondered "what is XPDL?" before you moved on to the Visio import feature (a very popular feature in Tutor 14). This posting is for those who do not yet understand (or care) about XPDL and process modeling.

    Many of us (and I'm including myself) have spent years working in the process definition arena: we've written procedures, designed systems and software to help others write procedures, and have been responsible for embedding policies and procedures into training material for employees. We've worked with tools such as Oracle Tutor, Microsoft Visio, Microsoft Word, and UPK. Most of us have never worked with "modeling tools" before, and we certainly never had to understand BPMN. It's a brave new world in this arena, and companies desperately need people with policy and procedural system expertise to be able to work with system analysts so there is a seamless transfer of knowledge from IT to employees. When working with applications, a picture is worth a thousand words, so eventually you're going to need to understand and be able to work with business process models.

    XPDL is an acronym for XML Process Definition Language, and it is an interchange format for business process models. It allows you to take a BPMN model that was developed in one workflow application such as BizAgi and import it into another workflow application or a true BPMN management system such as Oracle BPM. Specifically, the XPDL format contains the graphical information of a model as well as any executable information. By using a common format, models can be moved from a basic modeling application used by business owners to applications used by system architects. Over 80 applications support the XPDL format, including MetaStorm ProVision, BEA ALBPM, BizAgi, and Tibco. I mention these applications because we have provided XSLT mapping files specifically for these vendors. Oracle Business Process Converter was designed with user extensibility in mind, and thus users can add their own XML files so that additional XPDL models from other vendors can be converted to BPM, BPA, and Oracle Tutor. Instructions on how to add your own files can be found in Appendix 4 of the Oracle Business Converter manual.

    Let's take a visual look at how this works. Here is an example of a model developed in BizAgi:

    [Screenshot: BizAgi model]

    This model can be created by the average business user without a large learning curve, and it's a good start for the system analyst who will be adding web services as well as for the business manager who manages the process described in the model. By exporting this model as XPDL, the information can be converted into Oracle BPA and Oracle BPM as well as converted to Oracle Tutor to become the framework for a procedure. Through this conversion feature, one graphic illustration of a business process can be used by a system analyst, business analyst, business manager, and employee, as seen below.

    [Screenshot: Model converted to Tutor procedure - the task section of the procedure after conversion from an XPDL file]
    [Screenshot: Model converted to BPA]
    [Screenshot: Model converted to BPM]

    End users still want step by step instructions on how to perform their jobs, so procedures (Oracle Tutor) and application simulations (UPK) are still a critical piece of the solution.
    But IT professionals need graphic descriptions of how the applications work, regardless of whether there are any tasks involving humans. Now there is a way to convert procedures (Oracle Tutor docx files) and basic models (XPDL files) so that business managers and system analysts can share process information.

    References
    - Wikipedia, XPDL
    - Workflow Management Coalition, XPDL Support and Resources
    - Oracle Business Process Converter manual, Oracle Tutor 14
    - Oracle Business Process Management 11g

    If you have any XPDL conversion stories to share, we'd love to hear from you. Best wishes for the coming new year, Mary Keane, Senior Development Manager, Oracle Tutor and BPM

    Read the article

  • Why you shouldn't add methods to interfaces in APIs

    - by Simon Cooper
    It is an oft-repeated maxim that you shouldn't add methods to a publicly-released interface in an API. Recently, I was hit hard when this wasn't followed. As part of the work on ApplicationMetrics, I've been implementing auto-reporting of MVC action methods; whenever an action was called on a controller, ApplicationMetrics would automatically report it without the developer needing to add manual ReportEvent calls. Fortunately, MVC provides an easy hook for when a controller is created, letting me log when it happens - the IControllerFactory interface.

    Now, the dll we provide to instrument an MVC webapp has to be compiled against .NET 3.5 and MVC 1, as the lowest common denominator. This MVC 1 dll will still work when used in an MVC 2, 3 or 4 webapp because all MVC 2+ webapps have a binding redirect redirecting all references to previous versions of System.Web.Mvc to the correct version, and type forwards taking care of any moved types in the new assemblies. Or at least, it should.

    IControllerFactory
    In MVC 1 and 2, IControllerFactory was defined as follows:

        public interface IControllerFactory
        {
            IController CreateController(RequestContext requestContext, string controllerName);
            void ReleaseController(IController controller);
        }

    So, to implement the logging controller factory, we simply wrap the existing controller factory:

        internal sealed class LoggingControllerFactory : IControllerFactory
        {
            private readonly IControllerFactory m_CurrentController;

            public LoggingControllerFactory(IControllerFactory currentController)
            {
                m_CurrentController = currentController;
            }

            public IController CreateController(
                RequestContext requestContext, string controllerName)
            {
                // log the controller being used
                FeatureSessionData.ReportEvent("Controller used:", controllerName);
                return m_CurrentController.CreateController(requestContext, controllerName);
            }

            public void ReleaseController(IController controller)
            {
                m_CurrentController.ReleaseController(controller);
            }
        }

    Easy. This works as expected in MVC 1 and 2. However, in MVC 3 this type was throwing a TypeLoadException, saying a method wasn't implemented. It turns out that, in MVC 3, the definition of IControllerFactory was changed to this:

        public interface IControllerFactory
        {
            IController CreateController(RequestContext requestContext, string controllerName);
            SessionStateBehavior GetControllerSessionBehavior(
                RequestContext requestContext, string controllerName);
            void ReleaseController(IController controller);
        }

    There's a new method in the interface. So when our MVC 1 dll was redirected to reference System.Web.Mvc v3, LoggingControllerFactory tried to implement version 3 of IControllerFactory, was missing the GetControllerSessionBehavior method, and so couldn't be loaded by the CLR.

    Implementing the new method
    Fortunately, there was a workaround. Because interface methods are normally implemented implicitly in the CLR, if we simply declare a public virtual method matching the signature of the new method in MVC 3, then it will be ignored in MVC 1 and 2 and implement the extra method in MVC 3 (the class can no longer be sealed, since it now has a virtual member):

        internal class LoggingControllerFactory : IControllerFactory
        {
            ...
            public virtual SessionStateBehavior GetControllerSessionBehavior(
                RequestContext requestContext, string controllerName)
            {
                return SessionStateBehavior.Default;
            }
            ...
        }

    However, this also has problems - the SessionStateBehavior type only exists in .NET 4, and we're limited to .NET 3.5 by support for MVC 1 and 2.
    This means that the only solutions to support all MVC versions are:
    1. Construct the LoggingControllerFactory type at runtime using reflection
    2. Produce entirely separate dlls for MVC 1&2 and MVC 3

    Ugh. And all because of that blasted extra method!

    Another solution?
    Fortunately, in this case, there is a third option - System.Web.Mvc also provides a DefaultControllerFactory type that can provide the implementation of GetControllerSessionBehavior for us in MVC 3, while still allowing us to override CreateController and ReleaseController (a sketch of this appears below). However, this does mean that LoggingControllerFactory won't be able to wrap any calls to GetControllerSessionBehavior. This is an acceptable bug, given the other options, as very few developers will be overriding GetControllerSessionBehavior in their own custom controller factory.

    So, if you're providing an interface as part of an API, then please please please don't add methods to it. Especially if you don't provide a 'default' implementing type. Any code compiled against the previous version that can't be updated will have some very tough decisions to make to support both versions.
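    For completeness, here is a sketch of that third option, reusing the FeatureSessionData helper from the earlier snippets. Note it derives from DefaultControllerFactory instead of wrapping an arbitrary inner factory, which is the simplification the post describes:

        // Deriving from DefaultControllerFactory rather than implementing
        // IControllerFactory directly: in MVC 3 the base class supplies
        // GetControllerSessionBehavior, so the type loads on all versions.
        internal sealed class LoggingControllerFactory : DefaultControllerFactory
        {
            public override IController CreateController(
                RequestContext requestContext, string controllerName)
            {
                // Log the controller being used, as before.
                FeatureSessionData.ReportEvent("Controller used:", controllerName);
                return base.CreateController(requestContext, controllerName);
            }
        }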

    Read the article

  • SQL SERVER – Puzzle #1 – Querying Pattern Ranges and Wild Cards

    - by Pinal Dave
    Note: Read at the end of the blog post how you can get five Joes 2 Pros Book #1 and a surprise gift.

    I have been blogging for almost 7 years and every other day I receive questions about Querying Pattern Ranges. The most common way to solve the problem is to use Wild Cards. However, not everyone knows how to use wild cards properly.

    SQL Queries 2012 Joes 2 Pros Volume 1 – The SQL Queries 2012 Hands-On Tutorial for Beginners
    Book On Amazon | Book On Flipkart
    Learn SQL Server - get all the five parts combo kit
    Kit on Amazon | Kit on Flipkart

    Many people know wildcards are great for finding patterns in character data. There are also some special sequences with wildcards that can give you even more power. This series from SQL Queries 2012 Joes 2 Pros® Volume 1 will show you some of these cool tricks. All supporting files are available with a free download from the www.Joes2Pros.com web site. This example is from the SQL 2012 series Volume 1 in the file SQLQueries2012Vol1Chapter2.2Setup.sql. If you need help setting up then look in the "Free Videos" section on Joes2Pros under "Getting Started" called "How to install your labs".

    Querying Pattern Ranges
    The % wildcard character represents any number of characters of any length. Let's find all first names that end in the letter 'A'. By using the percentage '%' sign with the letter 'A', we achieve this goal using the code sample below:

        SELECT * FROM Employee WHERE FirstName LIKE '%A'

    To find all FirstName values beginning with the letters 'A' or 'B' we can use two predicates in our WHERE clause, by separating them with the OR statement. Finding names beginning with an 'A' or 'B' is easy and this works fine until we want a larger range of letters, as in the example below for 'A' thru 'K':

        SELECT * FROM Employee WHERE FirstName LIKE 'A%'
        OR FirstName LIKE 'B%'
        OR FirstName LIKE 'C%'
        OR FirstName LIKE 'D%'
        OR FirstName LIKE 'E%'
        OR FirstName LIKE 'F%'
        OR FirstName LIKE 'G%'
        OR FirstName LIKE 'H%'
        OR FirstName LIKE 'I%'
        OR FirstName LIKE 'J%'
        OR FirstName LIKE 'K%'

    The previous query does find FirstName values beginning with the letters 'A' thru 'K'. However, when a query requires a large range of letters, the LIKE operator has an even better option. Since the first letter of the FirstName field can be 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J' or 'K', simply list all these choices inside a set of square brackets followed by the '%' wildcard, as in the example below:

        SELECT * FROM Employee WHERE FirstName LIKE '[ABCDEFGHIJK]%'

    A more elegant example of this technique recognizes that all these letters are in a continuous range, so we really only need to list the first and last letter of the range inside the square brackets, followed by the '%' wildcard allowing for any number of characters after the first letter in the range.

    Note: A predicate that uses a range will not work with the '=' operator (equals sign). It will neither raise an error, nor produce a result set.

        --Bad query (will not error or return any records)
        SELECT * FROM Employee WHERE FirstName = '[A-K]%'

    Question: You want to find all first names that start with the letters A-M in your Customer table and end with the letter Z. Which SQL code would you use?

    a. SELECT * FROM Customer WHERE FirstName LIKE 'm%z'
    b. SELECT * FROM Customer WHERE FirstName LIKE 'a-m%z'
    c. SELECT * FROM Customer WHERE FirstName LIKE 'a-m%z'
    d. SELECT * FROM Customer WHERE FirstName LIKE '[a-m]%z'
    e. SELECT * FROM Customer WHERE FirstName LIKE '[a-m]z%'
    f. SELECT * FROM Customer WHERE FirstName LIKE '[a-m]%z'
    g. SELECT * FROM Customer WHERE FirstName LIKE '[a-m]z%'

    Contest
    Leave a valid answer before June 18, 2013 in the comment section. 5 winners will be selected from all the valid answers and will receive Joes 2 Pros Book #1. 1 lucky person will get a surprise gift from Joes 2 Pros. The contest is open for all the countries where Amazon ships the book (USA, UK, Canada, India and many others).
    Special Note: Read all the options before you provide a valid answer, as there is a small trick hidden in the answers.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • PASS Summit 2011 – Part II

    - by Tara Kizer
    I arrived in Seattle last Monday afternoon to attend PASS Summit 2011. I had really wanted to attend Gail Shaw's (blog|twitter) and Grant Fritchey's (blog|twitter) pre-conference seminar "All About Execution Plans" on Monday, but that would have meant flying out on Sunday, which I couldn't do. On Tuesday, I attended Allan Hirt's (blog|twitter) pre-conference seminar entitled "A Deep Dive into AlwaysOn: Failover Clustering and Availability Groups". Allan is a great speaker, and his seminar was packed with demos and information about AlwaysOn in SQL Server 2012. Unfortunately, I have lost my notes from this seminar and the presentation materials are only available on the pre-con DVD. Hmpf!

    On Wednesday, I attended Gail Shaw's "Bad Plan! Sit!", Andrew Kelly's (blog|twitter) "SQL 2008 Query Statistics", Dan Jones' (blog|twitter) "Improving your PowerShell Productivity", and Brent Ozar's (blog|twitter) "BLITZ! The SQL – More One Hour SQL Server Takeovers". In Gail's session, she went over how to fix bad plans and bad query patterns. Update your stale statistics!

    How to fix bad plans:
    - Use local variables – the optimizer can't sniff them, so it'll optimize for the "average" value
    - Use RECOMPILE (at the query or stored procedure level) – CPU hit
    - OPTIMIZE FOR hint – most common value you'll pass

    How to fix bad query patterns:
    - Don't use them – ha!
    - Catch-all queries: use dynamic SQL, or OPTION (RECOMPILE)
    - Multiple execution paths: split into multiple stored procedures, or OPTION (RECOMPILE)
    - Modifying parameter values: use local variables, split into outer and inner procedures, or OPTION (RECOMPILE)

    She also went into "last resort" and "very last resort" options, but those are risky unless you know what you are doing. For the average Joe, she wouldn't recommend these. Examples are query hints and plan guides.

    While I enjoyed Andrew's session, I didn't take any notes as it was familiar material. Andrew is a great speaker though, and I'd highly recommend attending his sessions in the future.

    Next up was Dan's PowerShell session. I need to look into profiles, manifests, function modules, and function import scripts more as I just didn't quite grasp these concepts. I am attending a PowerShell training class at the end of November, so maybe that'll help clear it up. I really enjoyed the Excel integration demo. It was very cool watching PowerShell build the spreadsheet in real-time. I must look into this more! On a side note, I am jealous of Dan's hair. Fabulous hair!

    Brent's session showed us how to quickly gather information about a server that you will be taking over database administration duties for. He wrote a script to do a fast health check and then later wrapped it into a stored procedure, sp_Blitz. I can't wait to use this at my work even on systems where I've been the primary DBA for years; maybe there's something I've overlooked. We are using EPM to help standardize our environment and uncover problems, but sp_Blitz will definitely still help us out. He even provides a cloud-based update feature, sp_BlitzUpdate, for sp_Blitz so you don't have to constantly update it when he makes a change. I think I'll utilize his update code for some other challenges that we face at my work.

    Read the article

  • OEG11gR2 integration with OES11gR2 Authorization with condition

    - by pgoutin
    Introduction
    This OES use-case was defined originally by Subbu Devulapalli (http://accessmanagement.wordpress.com/). Based on this OES museum use-case, I have developed an OEG11gR2 policy able to deal with OES authorization with conditions. From an OEG point of view, the way to deal with an OES condition is to provide some environmental/context attributes along with the OES request.

    Museum Use-Case
    All paintings in the museum have security sensors; an alarm goes off when a person comes too close to a painting. The employee designated for maintenance needs to use their ID and disable the alarm before maintenance. You are the Security Administrator for the museum and you have been tasked with creating authorization policies to manage authorization for different paintings. Your first task is to understand how paintings are organized. Asking around, you are surprised to see that there is no formal process in place, so you need to start from scratch. The museum tracks the following attributes for each painting:
    1. Name of the work
    2. Painter
    3. Condition (good/poor)
    4. Cost

    You compile the list of paintings:

        Name of Painting   Painter             Paint Condition   Cost
        Mona Lisa          Leonardo da Vinci   Good              100
        Magi               Leonardo da Vinci   Poor              40
        Starry Night       Vincent Van Gogh    Poor              75
        Still Life         Vincent Van Gogh    Good              25

    Being a software geek who doesn't (yet) understand art, you feel that the price (or insurance price) of a painting is the most important criterion. So you feel that, based on years of experience, employees can be tasked with maintaining different paintings. You decide that paintings with a cost over 50 should only be handled by employees with over 20 years of experience, and employees with less than 10 years of experience should not handle any painting.

    Let us start with policy modeling. All paintings have a common set of attributes and actions, so it will be good to have them under a single Resource Type. Based on this resource type we will create the actual resources. So our high level model is:
    1) Resource Type: Painting, which has the action manage and the following four attributes: a) Name of the work, b) Painter, c) Condition (good/poor), d) Cost
    2) To keep things simple, let's use the painting name for the Resource name (in the real world you would try to use some identifier which is unique, because in the future we may end up with more than one painting which has the same name)
    3) Create Resources based on the previous table
    4) Create an identity attribute Experience (Integer)
    5) Create the following authorization policies:
       a) Allow employees with over 20 years of experience to access all paintings
       b) Allow employees with 10–20 years of experience to access paintings which cost less than 50
       c) Deny access to all paintings for employees with less than 10 years of experience

    OES Authorization Configuration
    We need to create two authorization policies with specific conditions:
    a) Allow employees with over 20 years of experience to access all paintings
    b) Allow employees with 10–20 years of experience to access paintings which cost less than 50
    We don't need an explicit policy for "Deny access to all paintings for employees with less than 10 years of experience", because Oracle Entitlements Server will automatically deny if there is no matching policy.
    OEG Policy
    The OEG policy and the 11g Authorization filter configuration are shown in screenshots in the original post. The ${PAINTING_NAME} and ${USER_EXPERIENCE} variables are initialized by "Retrieve from the HTTP header" filters for testing purposes. That is to say, under Service Explorer, we need to provide the two attributes "Experience" and "Painting" to the OES 11g Authorization filter described above.

    Read the article

  • Free and Open Source Software in Oracle Solaris 11.1

    - by user13277799
    Oracle Solaris 11.1 contains a number of Free and Open Source packages. The following list contains important FOSS packages with the versions available in this latest Oracle Solaris release. a2ps 4.14 aalib 1.4.0 pmtools 20071116 apache-ant 1.7.1 httpd 2.2.22 mod_dtrace 0.3.1 mod_fcgid 2.3.6 tomcat-connectors 1.2.28 mod_perl 2.0.4 mod_proxy_html 3.1.1 modsecurity-apache 2.5.9 mod_wsgi 3.3 apr 1.3.9 apr-util 1.3.9 areca 7.1 autoconf 2.68 autogen 5.9 automake 1.10 automake 1.11.2 automake 1.9.6 bash 4.1 bcc 0.16.17 beanshell 2.0b4 db 5.1.25 bind 9.6-ESV-R7-P2 binutils 2.21.1 bison 2.3 bzip2 1.0.6 cdrtools 3.00 clisp 2.47 cmake 2.8.6 gnu 0.5.11 conflict 20100627 convmv 1.15 coreutils 8.5 cups 1.4.5 curl 7.21.2 cvs 1.12.13 diffutils 2.8.7 doxygen 1.7.6.1 ejabberd 2.1.8 elinks 0.11.7 emacs 23.4 otp_src R12B-5 fcgi 2.4.0 fetchmail 6.3.22 flex 2.5.35 foomatic-db 20080903 foomatic-db-engine 3.0-20080903 foomatic-filters 4.0.15 foomatic-filters-ppds 20080818 fping 2.4b2_to gawk 3.1.8 gcc 3.4.3 gcc 4.5.2 gd 2.0.35 gdb 6.8 gdbm 1.8.3 gettext 0.16.1 grep 2.10 ghostscript 9.00 git 1.7.9.2 gnu-gs-fonts-other 6.0 gnu-gs-fonts-std 6.0 gmp 4.3.2 gnupg 2.0.17 gnuplot 4.6.0 pth 2.0.7 gocr 0.48 gperf 3.0.3 gpgme 1.1.8 grails 1.0.3 graphviz 2.28.0 tar 1.26 guile 1.8.6 gutenprint 5.2.7 gzip 1.4 hal-cups-utils 0.6.19 hexedit 1.2.12 hplip 3.10.9 httping 1.4.4 hwdata 0.5.11 iftop 0.17 ilmbase 1.0.1 ImageMagick 6.3.4 iperf 2.0.4 ipmitool 1.8.11 ircii 20060725 dhcp 4.1-ESV-R7 junit 4.10 INIT 2011-02-08 lcms 1.19 less 436 lftp 4.3.1 libassuan 2.0.1 confuse 2.6 libedit 20110802-3.0 libee 0.3.2 libestr 0.1.2 libevent 1.4.14b expat 2.1.0 libidn 1.19 libksba 1.1.0 libmcrypt 2.5.8 libmemcached 0.16 libmng 1.0.10 neon 0.29.5 libnet 1.1.5 libpcap 1.1.1 librsync 0.9.7 libsigsegv 2.6 libsndfile 1.0.23 libtecla 1.6.1 libtool 2.4.2 libtorrent 0.12.2 libusbugen 0.1.8 libusb 0.1.8 libxml2 2.7.6 libxslt 1.1.26 lighttpd 1.4.23 links 1.03 logilab-astng 0.19.0 logilab-common 0.40.0 lua 5.1.4 m4 1.4.12 make 3.82 mc 4.7.5.2 meld 1.4.0 memcached 1.4.5 memcached-java 2.0.1 mercurial 2.2.1 mpc 0.9 mpfr 2.4.2 mutt 1.5.21 mysql 5.1.37 ncftp 3.2.3 net-snmp 5.4.1 nethack 3.4.3 nmap 5.51 ntp-dev 4.2.5 open-fabrics 1.5.3 openexr 1.6.1 openldap 2.4.30 openscap 0.8.1 openssl 0.9.8q openssl 1.0.0j libopenusb 1.0.1 p7zip 9.20.1 pam_pkcs11 0.6.0 patch 2.5.9 pconsole 1.0 pcre 8.21 perl 5.12.4 DBI 1.58 Net-SSLeay 1.36 pmtools 1.10 XML-Parser 2.36 XML-Simple 2.18 PHP 5.2.17 PHP 5.3.14 pinentry 0.7.6 privoxy 3.0.17 proftpd 1.3.3 psutils p17 pv 1.2.0 pwgen 2.06 pylint 0.18.0 CherryPy 3.1.2 coverage 3.5 jsonrpclib 0.1.3 ldtp 2.1.1 M2Crypto 0.21.1 Mako 0.4.1 nose 1.1.2 ply 3.1 pybonjour 1.1.1 pycups 1.9.46 pycurl 7.19.0 lxml 2.3.3 pyOpenSSL 0.11 Python 2.6.8 Python 2.7.3 setuptools 0.6 quagga 0.99.19 quilt 0.60 rdiff-backup 1.3.3 readline 5.2 rpm2cpio 0.5.11 rsync 3.0.8 rsyslog 6.2.0 rtorrent 0.8.2 ruby 1.8.7 samba 3.6.6 sane-backends 1.0.19 sane-frontends 1.0.14 screen 4.0.3 sed 4.2.1 sendmail 8.14.5 slang 2.2.4 slib 3b1 slrn 0.9.9 snort 2.8.4.1 sox 14.3.2 spawn-fcgi 1.6.3 squid 3.1.18 stdcxx 4.2.1 subversion 1.7.5 sudo 1.8.4.5 swig 1.3.35 expect 5.45 tcl 8.5.9 tk 8.5.9 tls 1.6 tcpdump 4.1.1 tcsh 6.17.00 texinfo 4.7 tidy 1.0.0 timezone apache-tomcat 6.0.35 top 3.8beta1 trousers 0.3.6 unixODBC 2.3.0 unrar 4.1.4 unzip 6.0 vim 7.3 visual-panels wget 1.12 which 2.16 wireshark 1.8.2 wxGTK 2.8.12 xorriso 0.6.0 xz 5.0.1 zip 3.0 zlib 1.2.3 zsh 4.3.17

    Read the article

  • When to use Aspect Oriented Architecture (AOA/AOD)

    When is it appropriate to use aspect oriented architecture? I think the only honest answer to this question is that it depends on the context for which the question is being asked. There really are no hard and fast rules regarding the selection of architectural model(s) for a project because each model provides good and bad benefits. Every system is built with a unique set of requirements and constraints. This context will dictate when to use one type of architecture over another or in conjunction with others. To me, aspect oriented architecture models should be a sub-phase in the architectural modeling and design process, especially when creating enterprise level models. Personally, I like to use this approach to create a base architectural model that is defined by non-functional requirements and system quality attributes. This general model can then be used as a starting point for additional models because it targets all of the key business quality attributes required by the system.

    Aspect oriented architecture is a method for modeling the non-functional requirements and quality attributes of a system, known as aspects. These models do not deal directly with specific functionality; they do categorize the functionality of the system. This approach allows a system to be created with a strong emphasis on separating system concerns into individual components. These cross cutting components enable systems to be created with compartmentalization in regards to non-functional requirements or quality attributes. This allows for a reduction in code because each component maintains an aspect of a system that can be called by other aspects. This approach also allows for a much cleaner and smaller code base during the implementation and support of a system. Additionally, by enabling developers to develop systems based on aspect-oriented design, projects will be completed faster and will be more reliable because existing components can be shared across a system; thus, the time needed to create and test the functionality is reduced.

    Example of an effective use of Aspect Oriented Architecture
    In my experience, aspect oriented architecture can be very effective with large or more complex systems. Typically, these types of systems have a large number of concerns, so the act of defining them is very beneficial for reducing the system's complexity because components can be developed to address each concern while exposing functionality to the other system components. The benefit of using the aspect oriented approach as the starting point for a system is that it promotes communication between IT and the business due to the fact that aspect oriented models are quality-attribute focused, so not much technical understanding is needed to understand the model.

    An example of this can be seen in developing a new intranet website. Common intranet concerns:
    - Error Handling
    - Security
    - Logging
    - Notifications
    - Database connectivity

    Example of a not as effective use of Aspect Oriented Architecture
    Again in my experience, aspect oriented architecture is not as effective with small or less complex systems in comparison. There is no need to model concerns for a system that has a limited number of them because the added overhead would not be justified by the actual benefits of creating the aspect oriented architecture model. Furthermore, these types of projects typically have a reduced time schedule and a limited budget.
    The creation of the aspect oriented models would increase the overhead of the project and thus increase the time needed to implement the system. An example of this is seen in creating a small application to poll a network share for new files and then FTP them to a new location. The two primary concerns for this project are to monitor a network drive and FTP files to a new location. There is no need to create an aspect model for this system because there will never be a need to share functionality between either of these concerns. To add to my point, this system is so small that it could be created with just a few classes, so the added layer of componentizing the concerns would be complete overkill for this situation.

    References:
    Brichau, Johan; D'Hondt, Theo. (2006). Aspect-Oriented Software Development (AOSD) - An Introduction. Retrieved from: http://www.info.ucl.ac.be/~jbrichau/courses/introductionToAOSD.pdf

    Read the article

  • SQL SERVER – #TechEdIn – Presenting Tomorrow on Speed Up! – Parallel Processes and Unparalleled Performance at TechEd India 2012

    - by pinaldave
    Performance tuning is always a very hot topic when it comes to SQL Server. SQL Server performance tuning is a very challenging subject that requires expertise in both Database Administration and Database Development. I have always enjoyed talking about the SQL Server performance tuning subject. However, in India, it's actually the very first time someone is presenting on this interesting subject, so this time I had the biggest challenge to present this session.

    Frequently enough, we get these two kinds of questions:
    - How to turn off parallelism, as it is reducing performance?
    - How to turn on parallelism, as I want more performance?

    The reality is that not everyone knows what exactly is needed by their system. In this session, I have attempted to answer this very question. I've decided to provide a balanced view but stay away from theory, which leads us to say "It depends". The session will have a clear message about this towards its end.

    Deck Details
    - Slides: 45+
    - Demos: 7+
    - Bonus Quiz: 5
    - Images: 10+
    - Session delivery time: 52 mins + 8 mins of Q & A

    I have presented this session a couple of times to my friends and so far have received good feedback. Oftentimes, when people hear that I am going to present 45 slides, they all say it is too much to cover. However, when I am done with the session the usual reaction is that I truly did justice to those slides.

    Action Items
    Here are a few action items for all of those who are going to attend this session:
    - If you want to attend the session, just come early. There's a good chance that you may not get a seat, because right before me there is a session from SQL guru Vinod Kumar. He performs a powerful delivery of a million concepts in just a little time.
    - Quiz. I will be asking a few questions during the session as well as before the session starts. If you get the correct answer, I will give unique learning material to you. You do not want to miss this learning opportunity at any cost.

    Session Details
    Title: Speed Up! – Parallel Processes and Unparalleled Performance (Add to Calendar)
    Abstract: "More CPU, More Performance" – A very common understanding is that usage of multiple CPUs can improve the performance of the query. To get maximum performance out of any query, one has to master various aspects of the parallel processes. In this deep-dive session, we will explore this complex subject with a very simple interactive demo. Attendees will walk away with a proper understanding of CX_PACKET wait types, MAXDOP, parallelism threshold and various other concepts.
    Date and Time: March 23, 2012, 12:15 to 13:15
    Location: Hotel Lalit Ashok - Kumara Krupa High Grounds, Bengaluru – 560001, Karnataka, India. Add to Calendar

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Interview Questions and Answers, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • DataContractSerializer: type is not serializable because it is not public?

    - by Michael B. McLaughlin
    I recently ran into an odd and annoying error when working with the DataContractSerializer class for a WP7 project. I thought I'd share it to save others who might encounter it the same annoyance I had. I had an instance of ObservableCollection<T> that I was trying to serialize (with T being a class I wrote for the project), and whenever it would hit the code to save it, it would give me:

    The data contract type 'ProjectName.MyMagicItemsClass' is not serializable because it is not public. Making the type public will fix this error. Alternatively, you can make it internal, and use the InternalsVisibleToAttribute attribute on your assembly in order to enable serialization of internal members - see documentation for more details. Be aware that doing so has certain security implications.

    This, of course, was malarkey. I was trying to write an instance of MyAwesomeClass that looked like this:

    [DataContract]
    public class MyAwesomeClass
    {
        [DataMember]
        public ObservableCollection<MyMagicItemsClass> GreatItems { get; set; }

        [DataMember]
        public ObservableCollection<MyMagicItemsClass> SuperbItems { get; set; }

        public MyAwesomeClass()
        {
            GreatItems = new ObservableCollection<MyMagicItemsClass>();
            SuperbItems = new ObservableCollection<MyMagicItemsClass>();
        }
    }

    That's all well and fine. And MyMagicItemsClass was also public with a parameterless public constructor. It too had DataContractAttribute applied to it, and it had DataMemberAttribute applied to all the properties and fields I wanted to serialize. Everything should have been cool, but it wasn't, because I kept getting that "not public" exception. I could tell you about all the things I tried (generating a List<T> on the fly to make sure it wasn't ObservableCollection<T>, trying to serialize the collections directly, moving it all to a separate library project, etc.), but I want to keep this short. In the end, I remembered the "Debug->Exceptions…" VS menu option that brings up the list of exception-related circumstances under which the Visual Studio debugger will break. I checked the "Thrown" checkbox for "Common Language Runtime Exceptions", started the project under the debugger, and voilà: the true problem revealed itself.

    Some of my properties had fairly elaborate setters whose logic I wanted to ignore. So for some of them, I applied an IgnoreDataMember attribute to the property and applied the DataMember attribute to the underlying field instead. All of those fields, in line with good programming practices, were private. Well, it just so happens that WP7 apps run in a "partial trust" environment, and outside of "full trust"-land, DataContractSerializer refuses to serialize or deserialize non-public members. Of course that exception was swallowed up internally by .NET, so all I ever saw was that bizarre message about things that I knew for certain were public being "not public". I changed all the private fields I was serializing to public and everything worked just fine.

    In hindsight it all makes perfect sense. The serializer uses reflection to build up its graph of the object in order to write it out. In partial trust, you don't want people using reflection to get at non-public members of an object, since there are potential security problems with allowing that (you could break out of the sandbox pretty quickly by reflecting on and calling the appropriate methods, and cause some havoc by reflecting on and setting the appropriate fields in certain circumstances). The fact that you cannot reflect over even your own assembly seems a bit heavy-handed, but then again I'm not a compiler writer or a framework designer, and I have no idea what sorts of difficulties would go into allowing that from a compilation standpoint or what sorts of security problems it could present (if any).

    So, lesson learned. If you get an incomprehensible exception message, turn on break-on-all-thrown-exceptions, run it again (it might take a couple of tries, depending), and see what pops out. Chances are you'll find the buried exception that actually explains what was going on. And if you're getting a weird exception when trying to use DataContractSerializer complaining about public types not being public, chances are you're trying to serialize a private or protected field/property.
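    For illustration, here is a minimal sketch of the pattern that triggers the misleading exception under partial trust, along with the fix (a hand-rolled example of my own; the field and property names are made up):

    using System.Runtime.Serialization;

    [DataContract]
    public class MyMagicItemsClass
    {
        // Serializing a private field works under full trust, but under
        // WP7's partial trust it produces the misleading "not public"
        // exception. Changing 'private' to 'public' here is the fix.
        [DataMember]
        private string itemName;

        // The elaborate setter logic is skipped during serialization
        // by ignoring the property and serializing the field instead.
        [IgnoreDataMember]
        public string ItemName
        {
            get { return itemName; }
            set { itemName = value; /* elaborate setter logic elided */ }
        }
    }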

    Read the article

  • Tuning Default WorkManager - Advantages and Disadvantages

    - by Murali Veligeti
    Before discussing tuning the Default WorkManager, let's have a brief introduction to what the Default WorkManager is. Before the WebLogic Server 9.0 release, we had the concept of Execute Queues. In WebLogic Server (before WLS 9.0), processing was performed in multiple execute queues. Different classes of work were executed in different queues, based on priority and ordering requirements, and to avoid deadlocks. In addition to the default execute queue, weblogic.kernel.default, there were pre-configured queues dedicated to internal administrative traffic, such as weblogic.admin.HTTP and weblogic.admin.RMI. Users could control thread usage by altering the number of threads in the default queue, or configure custom execute queues to ensure that particular applications had access to a fixed number of execute threads, regardless of overall system load.

    From the WLS 9.0 release onwards, WebLogic Server uses a single thread pool (called the Default WorkManager), in which all types of work are executed. WebLogic Server prioritizes work based on rules you define and on run-time metrics, including the actual time it takes to execute a request and the rate at which requests are entering and leaving the pool. The common thread pool changes its size automatically to maximize throughput. The queue monitors throughput over time and, based on history, determines whether to adjust the thread count. For example, if historical throughput statistics indicate that a higher thread count increased throughput, WebLogic increases the thread count. Similarly, if statistics indicate that fewer threads did not reduce throughput, WebLogic decreases the thread count. This strategy makes it easier for administrators to allocate processing resources and manage performance, avoiding the effort and complexity involved in configuring, monitoring, and tuning custom execute queues.

    The Default WorkManager is used to handle thread management and perform self-tuning. This Work Manager is used by an application when no other Work Managers are specified in the application's deployment descriptors. In many situations, the Default WorkManager may be sufficient for most application requirements. WebLogic Server's thread-handling algorithms assign each application its own fair share by default. Applications are given equal priority for threads and are prevented from monopolizing them.

    The default work-manager, as its name suggests, is the work-manager used by default. Thus, all applications deployed on WLS will use it. Sometimes, when your application is already in production, it's obvious you can't take your EAR/WAR, update the deployment descriptor(s), and redeploy it. The default work-manager belongs to a thread pool that initially comes with only five threads, which is not much. If your application has to face a large number of hits, you may want to start with more than that. Well, that's quite easy; you have two options to do so.

    1) Modify the config.xml. Just add the following line(s) in your server definition:

    <server>
        <name>AdminServer</name>
        <self-tuning-thread-pool-size-min>100</self-tuning-thread-pool-size-min>
        <self-tuning-thread-pool-size-max>200</self-tuning-thread-pool-size-max>
        [...]
    </server>

    2) Add some JVM parameters. Add the following system properties in setDomainEnv.sh/setDomainEnv.cmd or startWebLogic.sh/startWebLogic.cmd:

    -Dweblogic.threadpool.MinPoolSize=100 -Dweblogic.threadpool.MaxPoolSize=100

    Reboot WLS and see that the option has been taken into account.

    Disadvantage: So far so good, but there is a disadvantage to tuning the Default WorkManager. Internally, WebLogic Server has many work managers configured for different types of work. If we run out of threads in the self-tuning pool (because of the system property -Dweblogic.threadpool.MaxPoolSize) due to its being undersized, then important work that WLS might need to do could be starved. So, capping the self-tuning pool not only limits the Default WorkManager but also limits all the other internal WorkManagers that WLS uses. The best alternative is therefore to override the Default WorkManager: create a WorkManager for the application and assign it to the application, instead of tuning the Default WorkManager.

    Read the article

  • Oracle MDM Maturity Model

    - by David Butler
    A few weeks ago, I discussed the results of a survey conducted by Oracle's Insight team. The survey was based on the data management maturity model that the Oracle Insight team has developed over the years as they analyzed customer IT organizations to help them get more out of everything they already have. I thought you might like to learn more about the maturity model itself. It can help you figure out where you stand when it comes to getting your organization's data management act together. The model covers maturity levels around five key areas: profiling data sources; defining a data strategy; defining a data consolidation plan; data maintenance; and data utilization.

    Profile data sources: Profiling data sources involves taking an inventory of all data sources from across your IT landscape, then evaluating the quality of the data in each source system. This enables the scoping of what data to collect into an MDM hub and what rules are needed to ensure data harmonization across systems.

    Define data strategy: A data strategy requires an understanding of data usage. Given that usage, various data governance requirements need to be developed. This includes data controls and security rules as well as data structure and usage policies.

    Define data consolidation strategy: Consolidation requires defining your operational data model and how integration is to be accomplished. Cross-referencing common data attributes from multiple systems is needed. Synchronization policies also need to be developed.

    Data maintenance: The desired standardization needs to be defined, including what constitutes a 'match' once the data has been standardized. Cleansing rules are a part of this methodology. Data quality monitoring requirements also need to be defined.

    Utilize the data: What data gets published, and who consumes the data, must be determined. How to get the right data to the right place in the right format, given its intended use, must be understood. Validating the data and ensuring security rules are in place and enforced are crucial aspects of full, no-risk data utilization.

    For each of the above data management areas, a maturity level needs to be assessed. Where your organization wants to be should also be identified using the same maturity levels. This results in a sound gap analysis your organization can use to create action plans to achieve its ultimate goals. Marginal is the lowest level. It is characterized by manually maintained trusted sources; lacking or inconsistent, siloed structures with limited integration; and gaps in automation. Stable is the next leg up the MDM maturity staircase. It is characterized by tactical MDM implementations that are limited in scope and target a specific division. It includes limited data stewardship capabilities as well. Best Practice is a serious MDM maturity level characterized by process automation improvements. The scope is enterprise-wide. It is a business solution that provides a single version of the truth, with closed-loop data quality capabilities. It is typically driven by an enterprise architecture group with both business and IT representation. Transformational is the highest MDM maturity level. At this level, MDM is quantitatively managed. It is integrated with Business Intelligence, SOA, and BPM, and MDM is leveraged in business process orchestration.

    Take an inventory using this MDM Maturity Model and see where you are in your journey to full MDM maturity, with all the business benefits that accrue to organizations who have mastered their data for the benefit of all operational applications, business processes, and analytical systems. To learn more, Trevor Naidoo and I have written the Oracle MDM Maturity Model whitepaper. It's free, so go ahead and download it and use it as you see fit.

    Read the article

  • DallasXAML.com – A New User Group for Silverlight, WPF, XBAP, etc.

    - by vblasberg
    http://DallasXAML.com

    I've devoted much of the last month to starting the DallasXAML User Group. I finally got back into user group management after two years away from leading the Dallas C# SIG, and now I'm having fun getting a Silverlight/WPF user group going strong for the Dallas/Ft. Worth community. Our first meeting was March 3rd at the Improving Enterprises offices in North Dallas. We had about 25 to 35 attendees at the first meeting and it went well. We covered the most important topic that everyone should understand well - data binding (for a taste, see the P.S. below).

    I chose a XAML user group so we can get together for common group improvement in the Dallas/Ft. Worth area and learn cross-technology information that we can use now. It is not a lecture hall. The great thing is that we'll provide hands-on experience with most every meeting. The goal is to gain experience that we can use the next work day. I unfortunately broke that rule by speaking all through the first meeting, but next month is part two, with more hands-on data binding.

    The differentiation is that this group concentrates on XAML, not Silverlight or Windows Client alone. What we learn in one area, we gain for all areas. That includes Silverlight for Windows Phone 7, coming later this year. Next year it may be Windows Phone 8, 9, or whatever.

    I started developing WPF seriously almost a year ago and experienced the painful learning curve. Anyone who reports that there isn't a big learning curve either thought in XAML before it was developed, is on the Silverlight or WPF development team, or has already conquered the learning curve and forgotten the pain. So I wanted to share the pain or make it easier for others - same thing. I have found that the more I learn and use good, disciplined techniques, the more interesting and rewarding development is again.

    A few months ago, I was sitting in the iPhone development session at the Dallas C# SIG. After the meeting, the audience was polled for future topics. After a few suggestions, Silverlight got the big hands up. That makes sense because it's still the hot topic for many Microsoft developers. So I surfed around and found that there aren't enough user groups to help in this area. I polled a few local group leaders and did the work to start the group.

    This week I got a Telerik controls license and improved the site with some great controls, namely the RadHtmlPlaceholder control. It provides a Silverlight control to show HTML in an IFrame-like area. On DallasXAML.com, the newsletters and resource pages display in HTML because Silverlight just isn't there yet. I'm looking forward to a Silverlight XPS viewer with flow documents. There are some good commercial versions available, but this is a non-profit group.

    The DallasXAML.com site points to many other resources, such as podcasts and webcasts. I would rather give them the credit than try to out-do them. So check out the DallasXAML user group site and attend our meetings if you can. We meet the first Tuesday of the month.

    -Vince, DallasXAML User Group Leader
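    P.S. For anyone new to the data-binding topic from that first meeting, here is the gist in a few lines of C# (a hand-rolled sketch of my own, not the meeting material; the class and property names are made up). A WPF or Silverlight binding such as {Binding Title} stays current because the view model raises PropertyChanged:

    using System.ComponentModel;

    // Minimal view model: any XAML element bound to Title refreshes
    // automatically whenever the setter raises PropertyChanged.
    public class SessionViewModel : INotifyPropertyChanged
    {
        private string _title;

        public string Title
        {
            get { return _title; }
            set
            {
                if (_title == value) return;
                _title = value;
                OnPropertyChanged("Title");
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;

        private void OnPropertyChanged(string propertyName)
        {
            var handler = PropertyChanged;
            if (handler != null)
            {
                handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }
    }

    Set an instance of it as the DataContext of a page or window, and the binding engine handles the plumbing between the view and the object.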

    Read the article

  • How to Convert Videos to 3GP for Mobile Phones

    - by DigitalGeekery
    Would you like to play videos on your phone, but the device only supports 3GP files? We'll show you how to convert popular video files into the 3GP mobile phone video format with Pazera Free Video to 3GP Converter.

    Download the Pazera Free Video to 3GP Converter (download link below). It will allow you to convert popular video files (AVI, MPEG, MP4, FLV, MKV, and MOV) to work on your mobile phone. There is no installation to run; you'll just need to unzip the download folder and double-click the videoto3gp.exe file to run the application.

    To add video files to the queue, click on the Add files button. Browse for your file, and click Open. Your video will be added to the queue. You can add multiple files to the queue and convert them all at one time.

    The converter comes with several pre-configured profiles for conversion settings. To load a profile, select one from the Profile drop-down list and then click the Load button. The settings in the panels at the bottom of the application will be updated automatically. More advanced users can adjust the settings in the lower panels to their liking: you can choose between the 3GP and 3G2 (for some older phones) container formats, the H.263, MPEG-4, and XviD video codecs, and the AAC or AMR-NB audio codecs, as well as a variety of bitrates, resolutions, and so on.

    By default, the converted file will be output to the same location as the input directory. You can change this by clicking the text box input radio button and browsing for a different folder. Click Convert to start the conversion process. A conversion output box will open and display the progress. When finished, click Close. Now you're ready to load the video onto your phone and enjoy.

    Conclusion: Pazera Free Video to 3GP Converter is not exactly the ultimate video conversion tool, but it is quick and simple enough for the average user to convert most video formats to 3GP. Plus, it's portable: you can copy the folder to a USB drive and take it with you. Do you have some 3GP video files you'd like to convert to more common formats? Check out our earlier article on how to convert 3GP to AVI and MPEG for free.

    Download link: Pazera Free Video to 3GP Converter

    Read the article

  • Educational, well-written FOSS projects to read, study or discuss

    - by Godot
    Before you say it: yes, this "question" has been asked at other times. However, I could not find many such questions, and not that easily, and those I found had similar results. What I'm trying to say is that there are no comprehensive lists of well-written Open Source projects, so I decided to set some requirements for the entries (one or possibly more):

    - Idiomatic use of the language in which they are written.
    - The project should be lightweight: not as in "a few KBs", but as in "clean", possibly following the UNIX philosophy, making efficient use of resources and performing its duty and nothing more. No code bloat, most importantly; projects like Firefox and GNOME wouldn't qualify, for example.
    - Minimal reliance on external, non-standard libraries, with exceptions for some common FOSS libraries (curses, Xlib, OpenGL, and possibly "usual suspects" like gtk+, WebKit, and Boost). Reliance on well-written libraries is welcome.
    - No reliance on proprietary software, for obvious reasons (programs that rely on XNA, DirectX, Cocoa, and similar, for example).
    - Well-documented code is welcome.
    - Include a link to a web interface to the repository if possible.

    Here are some sample projects that often pop up in these threads:

    Operating Systems
    - Plan 9 from Bell Labs: more or less the official "sequel" to UNIX. Written in C by the same people who invented C!
    - NetBSD: the most portable BSD implementation, written in C, and also a good example of portable and organized code.

    Network and Databases
    - SQLite: extremely lightweight and extremely efficient, one of the best pieces of C software I've seen. Count the lines yourself!
    - Lighttpd: a small but pretty reliable web server written in C.

    Programming Languages and VMs
    - Lua: an extremely lightweight multi-paradigm programming language. Written in C.
    - Tiny C Compiler: a really tiny C compiler. Not really comparable to GCC or Clang, but it does its job.
    - PyPy: a Python implementation written in Python.
    - Pharo: OK, I admit it, I'm not really a Smalltalk expert, but Pharo is a fork of Squeak and looked rather interesting.
    - Stackless Python: an implementation of Python that doesn't rely on the C call stack, written in C (with some parts in Python).

    Games and 3D
    - Angband: one of the most accessible roguelike codebases around, written in C.
    - Ogre3D: a cross-platform 3D engine. It gets bloated if you don't skip the platform-specific implementation code; otherwise it is a pretty solid example of good C++ OO.
    - Simon Tatham's Portable Puzzle Collection: the title says it all.

    Other
    - dwm: a lightweight window manager. Written in C.

    Emulation and Reverse Engineering
    - Bochs: an x86 emulator, written in C++ and tiny enough.
    - MAME: if you want to see C at one of its lowest levels, MAME is for you. It may not be as clean as the other projects, but it can teach you A LOT.

    Before you ask: I didn't mention Linux because it has become quite bloated in the last few years; Linus has also confirmed it. Nonetheless, it'd be a great educational read all the same, even if for other reasons. The same goes for GCC. Feel free to edit or wikify my post. I hope you won't lock my question; I'm only trying to organize a little community effort for the good of all those people who want to enhance their coding skills.

    Read the article

  • HOWTO Turn off SPARC T4 or Intel AES-NI crypto acceleration.

    - by darrenm
    Since we released hardware crypto acceleration for SPARC T4 and Intel AES-NI support, we have had a common question come up: "How do I test without the hardware crypto acceleration?". Initially this came up just for development use, so developers can do unit testing on a machine that has hardware offload but still cover the code paths for a machine that doesn't (our integration and release testing would run on all supported types of hardware anyway). I've also seen it asked in a customer context, so that we can show that there is a performance gain from the hardware crypto acceleration (not just the fact that SPARC T4 is a much faster performing processor than T3) and measure what it is for their application.

    With SPARC T2/T3 we could easily disable the hardware crypto offload by running 'cryptoadm disable provider=n2cp/0'. We can't do that with SPARC T4 or with Intel AES-NI, because in both of those classes of processor the encryption doesn't require a device driver; instead it uses unprivileged, user-land-callable instructions. It turns out there is a way to do this by using features of the Solaris runtime loader (ld.so.1).

    First I need to expose a little bit of implementation detail about how the Solaris Cryptographic Framework is implemented in Solaris 11. One of the new Solaris 11 features of the linker/loader is the ability to have a single ELF object that has multiple different implementations of the same functions, selected at runtime based on the capabilities of the machine. The alternative to this is having the application coded to call getisax() and make the choice itself. We use this functionality of the linker/loader when we build the userland libraries for the Solaris Cryptographic Framework (specifically libmd.so and the, unfortunately misnamed for historical reasons, libsoftcrypto.so).

    The Solaris linker/loader allows control of a lot of its functionality via environment variables; we can use that to control the version of the cryptographic functions we run. To do this, we simply export the LD_HWCAP environment variable with values that tell ld.so.1 not to select the HWCAP section matching certain features, even if isainfo says they are present. For SPARC T4 that would be:

    export LD_HWCAP="-aes -des -md5 -sha256 -sha512 -mont -mpul"

    and for Intel systems with AES-NI support:

    export LD_HWCAP="-aes"

    This will work for consumers of the Solaris Cryptographic Framework that use the Solaris PKCS#11 libraries or use libmd.so interfaces directly. It also works for the Oracle DB and Java JCE. However, it does not work for the default-enabled OpenSSL "t4" or "aes-ni" engines (unfortunately), because they make explicit calls to getisax() themselves rather than using multiple ELF cap sections. However, we can still use OpenSSL to demonstrate this by explicitly selecting the "pkcs11" engine, using only a single process and thread.

    $ openssl speed -engine pkcs11 -evp aes-128-cbc
    ...
    type            16 bytes     64 bytes     256 bytes    1024 bytes   8192 bytes
    aes-128-cbc     54170.81k    187416.00k   489725.70k   805445.63k   1018880.00k

    $ LD_HWCAP="-aes" openssl speed -engine pkcs11 -evp aes-128-cbc
    ...
    type            16 bytes     64 bytes     256 bytes    1024 bytes   8192 bytes
    aes-128-cbc     29376.37k    58328.13k    79031.55k    86738.26k    89191.77k

    We can clearly see the difference this makes when AES offload to the SPARC T4 is disabled. The "t4" engine is faster than the pkcs11 one because there is less overhead (again, on a SPARC T4-1 using only a single process/thread; using -multi you will get even bigger numbers).

    $ openssl speed -evp aes-128-cbc
    ...
    type            16 bytes     64 bytes     256 bytes    1024 bytes   8192 bytes
    aes-128-cbc     85526.61k    89298.84k    91970.30k    92662.78k    92842.67k

    Yet another cool feature of the Solaris linker/loader; thanks Rod and Ali. Note that the openssl speed output above is not intended to show the actual performance of any particular benchmark, just that there is a significant improvement from using hardware acceleration on SPARC T4. For cryptographic performance benchmarks see the http://blogs.oracle.com/BestPerf/ postings.

    Read the article

  • Oracle EMEA News Digest - May 2014

    - by Steve Walker
    Systems

    Oracle introduced a technology preview of an OpenStack® distribution that allows Oracle Linux and Oracle VM users to work with the open source cloud software. This provides customers with additional choices and interoperability while taking advantage of the efficiency, performance, scalability, and security of Oracle Linux and Oracle VM. The distribution is delivered as part of the Oracle Linux and Oracle VM Premier Support offerings, at no additional cost. Oracle plans to work further with the OpenStack community to develop and enhance its enterprise-class capabilities to meet customer demands. Also in the open source arena, Oracle announced the general availability of MySQL Fabric. MySQL Fabric provides an integrated system that makes it simpler to manage groups of MySQL databases. It delivers both high availability - via failure detection and failover - and scalability through automated data sharding.

    Oracle Database, Middleware and Technology

    The company made two announcements for Oracle Tuxedo, the #1 application server for C, C++, COBOL and Java deployments in private cloud or traditional data center environments. With enhanced management and monitoring features and tighter integration with Oracle technologies, the latest release of Oracle Tuxedo 12c enables organizations to dramatically increase application throughput while reducing total cost of ownership and time to market for new application development and deployment. Oracle also introduced the latest release of its mainframe application rehosting platform, Oracle Tuxedo ART 12c, to help organizations speed up migration projects and accelerate the adoption of the new environment by current IT staff. It enables organizations to accelerate the rehosting of IBM mainframe applications and greatly enhance management and supportability of the rehosted applications while reducing costs and risk.

    Applications

    According to new Oracle studies, B2B and B2C commerce professionals find integrated, omni-channel customer experiences increasingly valuable to their organizations, and are continuing to invest in technologies and digital content strategies to facilitate them. The studies - one for B2B and one for B2C - surveyed e-commerce professionals in business and technology departments from around the world. Although the priorities, success metrics, and technology investments differed between the two groups, customer acquisition and retention emerged as common themes across B2B and B2C. Growing market share and enhancing customer experience are cited as top investment areas for all e-commerce professionals. In product news, Oracle announced the latest release of Oracle Business Intelligence (BI) Applications (version 11.1.1.8.1, in case anyone asks). It includes prebuilt connectors between Oracle Procurement and Spend Analytics and Oracle's JD Edwards. Additionally, a new Oracle Human Resources Analytics module for developing and maintaining a skilled workforce has been introduced. In use at more than 4,000 companies worldwide, Oracle BI Applications support leading enterprise applications, including Oracle E-Business Suite, Oracle's PeopleSoft, Oracle's Siebel CRM, and Oracle's JD Edwards EnterpriseOne, offering high-performing analytics at a lower cost.

    Industries

    For the communications industry, Oracle has launched a new release of the Oracle Communications Core Session Manager. This gives CSPs a new way to design, deploy, and manage complex networking services and embrace next-generation technology. It provides them with an immediate entry point for network function virtualization (NFV) efforts, allowing them to realize immediate benefits associated with network virtualization, including increased service agility and improved network resource sharing. And for the utilities industry, Oracle is releasing solutions with new business features and an enhanced technical architecture that help position utilities for success now and into the future. Oracle has provided new releases of its customer information system, meter data management system, customer self-service solution, and mobile workforce management solution.

    Read the article

  • Moving monarchs and dragons: migrating the JDK bugs to JIRA

    - by darcy
    Among insects, monarch butterflies and dragonflies have the longest migrations; migrating JDK bugs involves a long journey as well! As previously announced by Mark back in March, we've been working according to a revised plan to transition JDK bug management from Sun's legacy system to an Oracle-internal JIRA instance, which will afterward be made visible and usable externally. I've been busily working on this project for the last few months and the team has made good progress on many aspects of the effort.

    JDK bugs will be imported into JIRA regardless of age; bugs will also be imported regardless of state, including closed bugs. Consequently, the JDK bug project will start pre-populated with over 100,000 existing bugs, some dating all the way back to 1994. This will allow a continuity of information and allow new issues to be linked to old ones.

    Using a custom import process, the Sun bug numbers will be preserved in JIRA. For example, the Sun bug with bug number 4040458 will become "JDK-4040458" in JIRA. In JIRA the project name, "JDK" in our case, is part of the bug's identifier. Bugs created after the JIRA migration will be numbered starting at 8000000; bugs imported from the legacy system have numbers ranging between 1000000 and 7999999.

    We're working with the bugs.sun.com team to try to maintain continuity of the ability both to read JDK bug information and to file new incidents. At least for now, the overall architecture of bugs.sun.com will be the same as it is today: it will be a gateway bridging to an Oracle-internal system, but the internal system will change from the legacy database to JIRA. Generally we are aiming to preserve the visibility of bugs currently viewable on bugs.sun.com; however, bugs in areas not related to the JDK will not be visible after the transition to JIRA. New incoming incidents will be sent to a separate JIRA project for initial triage before possibly being moved into the JDK project.

    JDK bug management leans heavily on being able to track the state of bugs in multiple releases, especially to coordinate delivering synchronized security releases (known as CPUs, critical patch updates, in Oracle parlance). For a security release, it is common for half a dozen or more release trains to be affected (for example, JDK 5, JDK 6 update, OpenJDK 6, JDK 7 update, JDK 8, virtual releases for HotSpot express, etc.). We've determined we need to track at least the tuple of (release, responsible engineer/assignee for the release, status in the release) for the release trains a fix is going into. To do this in JIRA, we are creating a separate port/backport issue type along with a custom link type to allow the multiple-release information to be easily grouped and presented together.

    The Sun legacy system had a three-level classification scheme: product, category, and subcategory. Out of the box, JIRA has only a one-level classification: component. We've implemented a custom second-level classification: subcomponent. As part of the bug migration we've taken the opportunity to think about how bugs should be grouped under a two-level system, and the new system will be simpler and more regular. The main top-level components of the JDK product will include:

    core-libs
    client-libs
    deploy
    install
    security-libs
    other-libs
    tools
    hotspot

    For the libs areas, the primary name of the subcomponent will be the package of the API in question. In the core-libs component, there will be subcomponents like:

    java.lang
    java.lang.class_loading
    java.math
    java.util
    java.util:i18n

    In the tools component, subcomponents will primarily correspond to command names in $JDK/bin, like jar, javac, and javap. The first several bulk imports of the JDK bugs into JIRA have gone well and we're continuing to refine the import to have greater fidelity to the current data, including by reconstructing information not brought over in a structured fashion during the previous large JDK bug system migration back in 2004. We don't currently have a firm timeline for when the new system will be usable externally, but as it becomes available, I'll share further information in follow-up blog posts.

    Read the article

  • Tweeting about Oracle Applications Usability: Points to Consider

    - by ultan o'broin
    Here are a few pointers for anyone interested in tweeting about Oracle Applications usability or user experience (UX). These are based on my own experiences and practice, and may not necessarily reflect the views of Oracle, of course (touché, see the footer). If you are an Oracle employee and tweet about our offerings, then read up on and follow the corporate social media policy. For the record, I tweet under the following account names: @ultan, @localization, @gamifyOracle, and @usableapps. The last two are supposedly Oracle subject-dedicated, but I mix it up on occasion.

    Fill out your Twitter account profile, and add a profile picture too. Disclose your interest. Don't leave either the profile or the picture blank if you want to be taken seriously (or followed by me).

    Don't tweet from a locked-down Twitter account, as the message cannot be circulated to anyone who doesn't follow you. Open up the account if you really want to get that UX message out.

    Stay on message. The usable apps website, Misha Vaughan's VoX blog, and the Oracle Applications blog are good sources of UX messages and information, but you can find many other product team, individual, and corporate-wide sources with a little bit of searching. Set up a Google Alert with pertinent related keywords to get a daily digest of new information right in your inbox.

    Be original about it. Add your own insight and wit to the message, where relevant. Just circulating and RTing stock headlines adds no value to your effort or to the reader, and is somewhat lazy, in my opinion.

    Leave room for RTing of your tweet. So, don't max out those 140 characters. Keep it under 130 if you want to be RTed without modification (or at all; I am not a fan of modifying tweets [MT], way too much effort for the medium). Remove articles and punctuation marks and use fragments, abbreviations, and so on at will to keep the tweet short enough, but leave keywords intact, as people search on those.

    Follow any Fusion UX Advocates who are on Twitter too (you can search for these names), and not just Oracle employees. Don't follow only the people you like or who are like-minded. Take a look at who is following or being followed by other tweeters and, er, follow up.

    Create and socialize an easily remembered or typed hashtag, or use what's already popularized (for an event or conference, for example). We used #gamifyOracle for the applications UX gamification design jam, and other popular applications UX hashtags are #fusionapps and #usableapps (or at least I'm trying to popularize the latter). But, before you start the messaging, if you want to keep a record of the hashtag traffic, then set it up with an archiving service; Twitter's own tweet lifespan is short.

    Don't mix up hashtags (#) with Twitter handles (@) that have the same name. Sending a tweet to @gamifyOracle will just be seen by @gamifyOracle (me) and any followers we have in common. Sending it to #gamifyOracle is seen by anyone following or searching for that hashtag.

    No dissing the competition. But there is no rule against following them on Twitter to see the market reactions to Oracle announcements, and this can even let you tailor your own message accordingly.

    Don't be boring. Mix it up a bit. Every tenth or so tweet, divert into other areas of interest, even personal ones. No constant "I just received K+ in this and that" or "I just checked into wherever" on foursquare pouring into the Twitterstream, please. I just don't care and will probably unfollow such people pretty quickly.

    And now, your Twitter tips and experiences with this subject? They go in the comments...

    Read the article
