Search Results

Search found 10741 results on 430 pages for 'self improvement'.


  • MEF CompositionInitializer for WPF

    - by Reed
    The Managed Extensibility Framework is an amazingly useful addition to the .NET Framework.  I was very excited to see System.ComponentModel.Composition added to the core framework.  Personally, I feel that MEF is one tool I’ve always been missing in my .NET development. Unfortunately, one scenario where MEF tends to fall short of its full potential is Windows Presentation Foundation development.  In particular, there are many times when the XAML parser constructs objects in WPF development, which makes composition of those parts difficult.  The current release of MEF (Preview Release 9) addresses this for Silverlight developers via System.ComponentModel.Composition.CompositionInitializer.  However, there is no equivalent class for WPF developers. The CompositionInitializer class provides the means for an object to compose itself.  This is very useful with WPF and Silverlight development, since it allows a View, such as a UserControl, to be generated via the standard XAML parser, and still automatically pull in the appropriate ViewModel in an extensible manner.  Glenn Block has demonstrated the usage for Silverlight in detail, but the same issues apply in WPF. As an example, let’s take a look at a very simple case.  Take the following XAML for a Window: <Window x:Class="WpfApplication1.MainView" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="MainWindow" Height="220" Width="300"> <Grid> <TextBlock Text="{Binding TheText}" /> </Grid> </Window> This does nothing but create a Window, add a simple TextBlock control, and use it to display the value of our “TheText” property in our DataContext class.  Since this is our main window, WPF will automatically construct and display this Window, so we need to handle constructing the DataContext and setting it ourselves. We could do this in code or in XAML, but in order to do it directly, we would need to hard code the ViewModel type directly into our XAML code, or we would need to construct the ViewModel class and set it in the code behind.  Both have disadvantages, and the disadvantages grow if we’re using MEF to compose our ViewModel. Ideally, we’d like to be able to have MEF construct our ViewModel for us.  This way, it can provide any construction requirements for our ViewModel via [ImportingConstructor], and it can handle fully composing the imported properties on our ViewModel.  CompositionInitializer allows this to occur. We use CompositionInitializer within our View’s constructor, and use it for self-composition of our View.  Using CompositionInitializer, we can modify our code behind to: public partial class MainView : Window { public MainView() { InitializeComponent(); CompositionInitializer.SatisfyImports(this); } [Import("MainViewModel")] public object ViewModel { get { return this.DataContext; } set { this.DataContext = value; } } } We then can add an Export on our ViewModel class like so: [Export("MainViewModel")] public class MainViewModel { public string TheText { get { return "Hello World!"; } } } MEF will automatically compose our application, deferring the injection of the ViewModel into our View’s DataContext until runtime.  When we run this, we’ll see the TextBlock display “Hello World!” from our composed ViewModel. There are many other approaches for using MEF to wire up the extensible parts within your application, of course.  
However, any time an object is going to be constructed by code outside of your control, CompositionInitializer allows us to continue to use MEF to satisfy the import requirements of that object. In order to use this from WPF, I’ve ported the code from MEF Preview 9 and Glenn Block’s (now obsolete) PartInitializer port to Windows Presentation Foundation.  There are some subtle changes from the Silverlight port, mainly to handle running in a desktop application context.  The default behavior of my port is to construct an AggregateCatalog containing a DirectoryCatalog set to the location of the entry assembly of the application.  In addition, if an “Extensions” folder exists under the entry assembly’s directory, a second DirectoryCatalog for that folder will be included.  This behavior can be overridden by specifying a CompositionContainer or one or more ComposablePartCatalogs to the System.ComponentModel.Composition.Hosting.CompositionHost static class prior to the first use of CompositionInitializer. Please download CompositionInitializer and CompositionHost for VS 2010 RC, and contact me with any feedback. Composition.Initialization.Desktop.zip Edit on 3/29: Glenn Block has since updated his version of CompositionInitializer (and ExportFactory<T>!), and made it available here: http://cid-f8b2fd72406fb218.skydrive.live.com/self.aspx/blog/Composition.Initialization.Desktop.zip This is a .NET 3.5 solution, and should soon be pushed to CodePlex, and made available on the main MEF site.
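
    As a quick illustration of overriding the default catalog behavior described above, here is a minimal sketch. The CompositionHost.Initialize call is based on the API of this port and Glenn Block's version; treat the exact overload as an assumption and check the downloaded source:

        using System.ComponentModel.Composition.Hosting;
        using System.Windows;

        public partial class App : Application
        {
            protected override void OnStartup(StartupEventArgs e)
            {
                base.OnStartup(e);

                // Replace the default entry-assembly + "Extensions" folder catalogs
                // with an explicit set of catalogs of our own choosing.
                var catalog = new AggregateCatalog(
                    new AssemblyCatalog(typeof(App).Assembly),
                    new DirectoryCatalog(@".\Plugins"));

                // Must run before the first call to CompositionInitializer.SatisfyImports.
                CompositionHost.Initialize(new CompositionContainer(catalog));
            }
        }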

    Read the article

  • Building a database installer with WiX, datadude and Visual Studio 2010

    - by jamiet
    Today I have been using Windows Installer XML (WiX) to build an installer (.msi file) that would install a SQL Server database on a server of my choosing; the source code for that database lives in datadude (a tool which you may know by one of quite a few other names). The basis for this work was a most excellent blog post by Duke Kamstra entitled Implementing a WIX installer that calls the GDR version of VSDBCMD.EXE which covers the delicate intricacies of doing this – particularly how to call Vsdbcmd.exe in a CustomAction. Unfortunately there are a couple of things wrong with Duke’s post: Searching for “datadude wix” didn’t turn it up in the first page of search results and hence it took me a long time to find it. And I knew that it existed. If someone else were after a post on using WiX with datadude it’s likely that they would never have come across Duke’s post and that would be a great shame because it’s the definitive post on the matter. It was written in October 2009 and had not been updated for Visual Studio 2010. Well, this blog post is an attempt to solve those problems. Hopefully I’ve solved the first one just by following a few of my blogging SEO tips while writing this blog post; in the rest of it I will explain how I took Duke’s code and updated it to work in Visual Studio 2010. If you need to build a database installer using WiX, datadude and Visual Studio 2010 then you still need to follow Duke’s blog post so go and do that now. Below are the amendments that I made that enabled the project to get built in Visual Studio 2010: In VS2010 datadude’s output files have changed from being called Database.<suffix> to <ProjectName>_Database.<suffix>. Duke’s code was referencing the old file name formats. Duke used $(var.SolutionDir) and relative paths to point to datadude artefacts; I have replaced these with Votive Project References http://wix.sourceforge.net/manual-wix3/votive_project_references.htm I commented out all references to MicrosoftSqlTypesDbschema in DatabaseArtifacts.wxi. I don't think this is produced in VS2010 (I may be wrong about that but it wasn't in the output from my project) Similarly I commented out component MicrosoftSqlTypesDbschema in VsdbcmdArtifacts.wxi. It wasn't where Duke's code said it should have been so I am assuming/hoping it isn't needed. Duke's ?define block to work out appropriate SrcArchPath actually wasn't working for me (i.e. <?if $(var.Platform)=x64 ?> was evaluating to false)  so I just took out the conditional stuff and declared the path explicitly to the “Program Files (x86)” path. The old code is still there though if you need to put it back. None of the <RegistrySearch> stuff is needed for VS2010 - so I commented it all out! Changed to use /manifest option rather than /model option on vsdbcmd.exe command-line. Personal preference is all! (An example call is sketched at the end of this post.) Added a new component in order to bundle along the vsdbcmd.exe.config file. Made the install of the Custom Action dependent on the relevant feature being selected for install. This one is actually really important – deselecting the database feature for installation does not, by default, stop the CustomAction from executing and so would cause an error - so that scenario needs to be catered for. I have made my amended solution available for download at: http://cid-550f681dad532637.office.live.com/self.aspx/Public/BlogShare/20110210/InstallMyDatabase.zip It contains two projects: the WiX project and the datadude project that is the source to be deployed (for demo purposes it only contains one table). 
    I have also made the .msi available although in order that it gets through file blockers I changed the name from InstallMyDatabase.msi to InstallMyDatabase.ms_ – simply rename the file back once you have downloaded it from: http://cid-550f681dad532637.office.live.com/self.aspx/Public/BlogShare/20110210/InstallMyDatabase.ms%5E_ . You can try it out for yourself – the only thing it does is dump the files into %Program Files%\MyDatabase and uses them to install a database onto a server of your choosing with a name of your choosing - no damaging side-effects. I will caveat this by saying “it works on my machine” and, not having access to a plethora of different machines, I haven’t tested it anywhere else. One potential issue that I know of is that Vsdbcmd.exe has a dependency on SQL Server CE although if you have SQL Server tools or Visual Studio installed you should be fine. Unfortunately it’s not possible to bundle along the SQL Server CE installer in the .msi because Windows will not allow you to call one installer from inside another – the recommended way to get around this problem is to build a bootstrapper to bundle the whole lot together but doing that is outside the scope of this blog post. If you discover any other issues then please let me know. Here are the screenshots from the installer: And once installed…. Hope this is useful! @jamiet 
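
    For reference, the /manifest-style Vsdbcmd.exe call that the CustomAction ends up making looks roughly like the following. This is a hedged sketch only - the file name follows the VS2010 <ProjectName>_Database.* convention described above, and the server, database name and exact switches will differ for your setup, so verify them against the VSDBCMD documentation:

        vsdbcmd.exe /a:Deploy /dd:+ /manifest:"MyDatabase_Database.deploymanifest" /cs:"Data Source=MYSERVER;Integrated Security=True" /p:TargetDatabase=MyDatabase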

    Read the article

  • Pinning Projects and Solutions with Visual Studio 2010

    - by ScottGu
    This is the twenty-fourth in a series of blog posts I’m doing on the VS 2010 and .NET 4 release. Today’s blog post covers a very small, but still useful, feature of VS 2010 – the ability to “pin” projects and solutions to both the Windows 7 taskbar as well as the VS 2010 Start Page.  This makes it easier to quickly find and open projects in the IDE. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] VS 2010 Jump List on Windows 7 Taskbar Windows 7 added support for customizing the taskbar at the bottom of your screen.  You can “pin” and re-arrange your application icons on it however you want. Most developers using Visual Studio 2010 on Windows 7 probably already know that they can “pin” the Visual Studio icon to the Windows 7 taskbar – making it always present.  What you might not yet have discovered, though, is that Visual Studio 2010 also exposes a Taskbar “jump list” that you can use to quickly find and load your most recently used projects as well. To activate this, simply right-click on the VS 2010 icon in the task bar and you’ll see a list of your most recent projects.  Clicking one will load it within Visual Studio 2010: Pinning Projects on the VS 2010 Jump List with Windows 7 One nice feature also supported by VS 2010 is the ability to optionally “pin” projects to the jump-list as well – which makes them always listed at the top.  To enable this, simply hover over the project you want to pin and then click the “pin” icon that appears on the right of it: When you click the pin the project will be added to a new “Pinned” list at the top of the jumplist: This enables you to always display your own list of projects at the top of the list.  You can optionally click and drag them to display in any order you want. VS 2010 Start Page and Project Pinning VS 2010 has a new “start page” that displays by default each time you launch a new instance of Visual Studio.  In addition to displaying learning and help resources, it also includes a “Recent Projects” section that you can use to quickly load previous projects that you have recently worked on: The “Recent Projects” section of the start page also supports the concept of “pinning” a link to projects you want to always keep in the list – regardless of how recently they’ve been accessed. To “pin” a project to the list you simply select the “pin” icon that appears when you hover over an item within the list: Once you’ve pinned a project to the start page list it will always show up in it (at least until you “unpin” it). Summary This project pinning support is a small but nice usability improvement with VS 2010 and can make it easier to quickly find and load projects/solutions.  If you work with a lot of projects at the same time it offers a nice shortcut to load them. Hope this helps, Scott

    Read the article

  • James Atkinson - New Blog Home

    - by jatkinson
    I'm migrating my blog that is currently hosted over at vbCity.com (which is an outstanding developer community!) to a new home at geekswithblogs.net. I truly appreciate the comradery of Serge B, Ged Mead, and the other team members at the "City". What you can expect to find here (my interests): Most .NET programming topics General computing Language examples in C#, VB.NET, and Boo WCF WPF Mathematical / GPS solutions F# (in progress... if you can say that much) Obsessed with code performance (speed) Some photography My background: Kansas State University Grad (Agriculture Technology Management) From Richmond, VA Self taught programmer (started with C# in VS2002) NOT a professional programmer (enables free thinking?!)  I'm no Jeff Atwood or Beth Massi, but you should expect to see some interesting stuff to follow.

    Read the article

  • ADNOC talks about 50x increase in performance

    - by KLaker
    If you are still wondering about how Exadata can revolutionise your business then I would recommend watching this great video which was recorded at this year's OpenWorld. First a little background... The Abu Dhabi National Oil Company for Distribution (ADNOC) is an integrated energy company that was founded in 1973. ADNOC Distribution markets and distributes petroleum products and services within the United Arab Emirates and internationally. As one of the largest and most innovative government-owned petroleum companies in the Arab Gulf, ADNOC Distribution is renowned and respected for the exceptional quality and reliability of its products and services. Its five corporate divisions include more than 200 filling stations (a number that is growing at 8% annually), more than 150 convenience stores, 10 vehicle inspection stations, as well as wholesale and retail sales of bulk fuel, gas, oil, diesel, and lubricants. ADNOC selected Oracle Exadata Database Machine after extensive research because it provided them with a single platform that can run mixed workloads in a single unified machine: "We chose Oracle Exadata Database Machine because it offered a fully integrated and highly engineered system that was ready to deploy. With our infrastructure running all the same technology, we can operate any type of Oracle Database without restrictions and be prepared for business growth," said Ali Abdul Aziz Al-Ali, IT division manager, ADNOC Distribution. ".....we could consolidate our transaction processing and business intelligence onto one platform. Competing solutions are just not capable of doing that." - Awad Ahmed Ali El-Sidiq, Senior Database Administrator, ADNOC Distribution In this new video Awad Ahmed Ali El-Sidiq, Senior DBA at ADNOC, talks about the impact that Exadata has had on his team and the whole business. ADNOC is using our engineered systems to drive and manage all their workloads: from transaction systems to payments system to data warehouse to BI environment. A true Disk-to-Dashboard revolution using Engineered Systems. This engineered approach is delivering a 50x improvement in performance, with some queries running 100x faster! The IT team has even revolutionised some of their data warehouse related processes with the help of Exadata and jobs that were taking over 4 hours now run in a few minutes.  To watch the video click on the image below which will take you to our Oracle YouTube page: (if the above link does not work, click here: http://www.youtube.com/watch?v=zcRpxc6u5Ic) Now that queries are running 100x faster and jobs are completing in minutes not hours, what is next for the IT team at ADNOC? Like many of our customers ADNOC is now looking to take advantage of big data to help them better align their business operations with customer behaviour and customer insights. To help deliver this next level of insight the IT team is looking at the new features in Oracle Database 12c such as the new in-memory feature to deliver even more performance gains.  The great news is that Awad Ahmed Ali El-Sidiq was awarded DBA of the Year - EMEA within our Data Warehouse Global Leaders programme and you can see the badge for this award pop up at the start of the video. Well done to everyone at ADNOC and thanks for spending the time with us at OOW to create this great video.

    Read the article

  • Webcast with Brian Griffin, Ancestry, 2013 Winner 10 Best Web Support Sites

    - by Tuula Fai
    The web is one of the fastest growing channels for providing service, support and information, as seen in The Service Council's (TSC) latest multi-channel research survey. Join TSC's Chief Customer Officer Sumair Dutta as he shares key findings from his current customer experience research from over 200 organizations. Sumair will be joined by Brian Griffin, Senior Program Manager, Global Support Experience, Ancestry.com who will show how Ancestry is using the web as a powerful tool to enhance self-service opportunities and increase customer engagement. Smarter Web Service Educast Thursday, November 14th 2 pm ET / 11 am PT Register: http://bit.ly/1cwz4Ns  

    Read the article

  • Suitable Hosting for Web Development Company: VPS Hosting, Cloud hosting, Shared Hosting

    - by KoolKabin
    Hi, we are a web development company here in Nepal, still in our growth phase. Our URL: http://www.outsourcingnepal.com. Our clients are demanding web hosting, and we are now ready to get a hosting package to host their websites. Since we have multiple clients and they each want their own cPanel for self-configuration, only creating an FTP channel is not appropriate; they need their own cPanel. So I thought of a reseller package. When I searched for reseller packages I came across VPS, cloud hosting and shared hosting. So now we are confused: which one is better for our company?

    Read the article

  • SQLAuthority News – Community Tech Days – A SQL Legends in Ahmedabad – December 11, 2010

    - by pinaldave
    Ahmedabad is going to be a fortunate city again on December 11. We are going to have SQL Server Legends present at the prestigious event of Community Tech Days in Ahmedabad. The venue details are as follows: H K Hall, H K College Campus, Near Handloom House, Opp. Natraj Cinema, Ashram Road, Ahmedabad – 380009. Click here to register for the event. The agenda of the event is as follows:
    10:15am – 10:30am     Welcome – Pinal Dave
    10:30am – 11:15am     SQL Tips and Tricks for .NET Developers by Jacob Sebastian
    11:15am – 11:30am     Tea Break
    11:30am – 12:15pm     Best Database Practice for SharePoint Server by Pinal Dave
    12:15pm – 01:00pm     Self Service Business Intelligence by Rushabh Mehta
    01:00pm – 02:00pm     Lunch
    02:00pm – 02:45pm     Managing your future, Managing your time by Vinod Kumar
    02:45pm – 03:30pm     Windows Azure News and Introducing Storage Services by Mahesh Devjibhai Dhola
    03:30pm – 03:45pm     Tea Break
    03:45pm – 04:30pm     Improve Silverlight application with Threads and MEF by Prabhjot Singh Bakshi
    04:30pm – 04:45pm     Thank you – Mahesh Devjibhai Dhola
    Ahmedabad considers itself extremely fortunate when there are SQL Legends presenting on various subjects in front of the community. Here is a brief introduction about them in my own words. (Their names are in order of the agenda.) 1) Jacob Sebastian (SQL Server MVP) – This person needs no introduction. Every developer and programmer in Ahmedabad and India knows him. He is the one man who is the founder of various community-related ideas like SQL Challenges, SQL Quiz and BeyondRelational. He works with me on all the community-related activities; we are extremely good friends. 2) Rushabh Mehta (SQL Server MVP) – If you use SQL Server – you know this man. He is the President of the Professional Association for SQL Server (PASS) and one of the leading Business Intelligence (BI) experts in the world. He has blessed Ahmedabad once before and is now doing so once again this year. 3) Vinod Kumar (Microsoft Evangelist – SQL Server & BI) – Ahmedabad remembers him very well. During his last visit to Ahmedabad, a fight almost broke out outside the hall amidst the rush to listen to him. There were more people standing and listening to him than those who were seated. This is one man Ahmedabad will never forget. 4) And myself. I will not rate myself in the league of the abovementioned experts, but I must say that I am fortunate to have friends like those above. We also have two strong .NET presenters – Mahesh and Prabhjot. During this event, there will be plenty of giveaways, lots of fun, demos and pure technical talk, specifically no marketing and promotion – just pure technical talk. The most interesting part is that all the SQL Legends – Jacob, Rushabh and Vinod – are for sure presenting on SQL Server but with a twist. Jacob – He is going to talk about .NET and SQL – Optimization Techniques. Rushabh – He is going to talk about SQL and BI – Self Service BI. Vinod – He is going to talk about professional development of developers – Managing Time. Pinal – Best Practices for SharePoint Database Administrators – SharePoint DBA – I have presented this session earlier. I promise this event is going to be one of the best events ever held. You can read about the earlier event over here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Cut Caseload Costs, Speed Service Delivery For Social Services

    - by michael.seback
    Lower Caseload Costs, Speedier Service Delivery with New Oracle Social Services Solution Oracle has just introduced a new solution for social services agencies that's designed to help case workers address the challenges of rising workloads and growing demands by citizens for additional services. In the past, IT departments developed custom software in an effort to meet program outcomes. "Because this capability is out of the box with the Oracle solution, there's less complexity for organizations and an overall lower total cost of ownership," says Kimberly Ellison-Taylor, Oracle's executive director of health and human services. "Self service brings costs down to just pennies per interaction and makes it possible for clients to receive government services more quickly," Ellison-Taylor says. read more

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blogpost about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific for a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I'm in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database. I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it's mainly coming down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spend inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), it won't make a difference for your application as that 10ms difference won't be noticed. That's why it's very important to find the real locations of the problems so developers can fix them properly and don't get frustrated because their quest to get a fast, performing application failed. Performance tuning basics and rules Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason of a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or when you assume things will be bad/slow without doing analysis will lead to the path of premature optimization and won't actually solve your problems, only create new ones. The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic. 
If I solely look at the Linq query, the code consuming the resultset of the 10 rows and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always do realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem. The second most important rule you have to understand is based on the old saying "Penny wise, Pound Foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger part of the total time T for that same given task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: No analysis -> no problem -> no solution. One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O-characteristics so you know the performance you'll get plus you know the algorithm will work. The time taken by the algorithm implementing code is inevitable: you already implemented the best algorithm. You might find some optimizations on the technical level but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems. Isolate The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area with a clear begin and end and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task or the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.  Analyze Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem. 
This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eat up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating task is essential. Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. What does the code do along the way, verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths, just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing which means we collect data about what could be wrong, for each participating part of the complete application. Reviewing the code you wrote is a good tool to get deeper understanding of what is going on for a given task but ultimately it lacks precision and overview what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get deeper understanding about which parts contribute how much time to the total task, triggered by which other parts and for example how many times are they called. There are two different kind of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers. .NET profiling .NET profilers (e.g. dotTrace by JetBrains or Ants by Red Gate software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, how many times that code was called by other code and thus reveals where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise pound foolish saying: if a profiler reveals that a group of methods are fast, or don't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.  As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet. 
You navigate to the particular part which is slow, start profiling in the profiler, in your application you perform the actions which are considered slow, and afterwards you get a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area. O/R mapper and RDBMS profiling .NET profilers give you a good insight in the .NET side of things, but not in the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand which parts of the O/R mapper and database participate how much to the total time taken for task T, we need different tools. There are two kind of tools focusing on O/R mappers and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by hibernating rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating rhinos also have profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf) and work the same as LLBLGen Prof. For RDBMS profilers, you have to look whether the RDBMS vendor has a profiler. For example for SQL Server, the profiler is shipped with SQL Server, for Oracle it's build into the RDBMS, however there are also 3rd party tools. Which tool you're using isn't really important, what's important is that you get insight in which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage as they collect the time it took to execute the query from the application's perspective so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset or a resultset with large blob/clob/ntext/image fields takes more time to get transported across the network than a small resultset and a database profiler doesn't take this into account most of the time. Another tool to use in this case, which is more low level and not all O/R mappers support it (though LLBLGen Pro and NHibernate as well do) is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight in what's going on. Interpret After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid and which parts actually take place and if the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. 
    don't contribute significantly to the total time taken) can be ignored from now on; we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now have to first look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example if most of the time is consumed by DbCommand.ExecuteReader, the time it takes to complete the task depends on the time it takes to fetch the data from the database. If there was just 1 query executed, according to tracing or O/R mapper profilers / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data. Interpret means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10 row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time is spent in the .NET code, meaning the RDBMS part contributes the most to the total time taken; the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide whether further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected and there is room for improvement in the area which contributes the most to the total time taken, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future when someone else looks at the application and starts asking questions you can answer them properly, and new analysis is only necessary if the situation changes. Fix After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromise memory consumption over performance) to avoid unnecessary re-querying data and re-consuming the results. After applying a change, it's key you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether it moved the problem to a different part of the application. Don't fall into the trap of doing partial analysis: do the full analysis again: .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all. 
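
    As a tiny illustration of the caching compromise mentioned above, here is a sketch of the idea in plain C#. The types and method names are placeholders rather than part of any O/R mapper API; the point is only that identical fetches identified as a hotspot can be traded for memory:

        using System.Collections.Concurrent;
        using System.Collections.Generic;

        public class Order { /* entity fields omitted */ }

        public class OrderRepository
        {
            // Naive in-memory cache: trades memory consumption for fewer identical queries.
            private static readonly ConcurrentDictionary<int, IList<Order>> _cache =
                new ConcurrentDictionary<int, IList<Order>>();

            public IList<Order> GetOrdersForCustomer(int customerId)
            {
                // Only hits the database (via the O/R mapper) the first time per customer.
                return _cache.GetOrAdd(customerId, id => FetchOrdersFromDatabase(id));
            }

            // Placeholder for the real fetch that the profiler identified as the hotspot.
            private IList<Order> FetchOrdersFromDatabase(int customerId)
            {
                return new List<Order>();
            }
        }
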
Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly onto the RDBMS, these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data-access and databases in general. In most cases it's not a big issue: alternatives are often good choices too and the compromises aren't that hard to deal with. What is important is that you document why you made a choice, a compromise: which analysis data, which interpretation led you to the choice made. This is key for good maintainability in the years to come. Most common performance problems with O/R mappers Below is an incomplete list of common performance problems related to data-access / O/R mappers / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step. SELECT N+1: (Lazy-loading specific). Lazy loading triggered performance bottlenecks. Consider a list of Orders bound to a grid. You have a Field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) for each row the Customer row. This means you'll get for the single list not 1 query (for the orders) but 1+(the number of orders shown) queries. To solve this: use eager loading using a prefetch path to fetch the customers with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem. Prefetch paths using many path nodes or sorting, or limiting. Eager loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, it can be the number of data fetched in a child node is bigger than you think. Also consider that data in every node is merged on the client within the parent. This is fast, but it also can take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries. Deep inheritance hierarchies of type Target Per Entity/Type. If you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype- and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take. Fetching massive amounts of data by fetching large lists of entities. LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to process through large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure that you switch server-side paging on on the datasourcecontrol used. In this case, paging on the grid alone is not enough: this can lead to fetching a lot of data which is then loaded into the grid and paged there. Keep note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. 
    when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have (e.g. due to a join): the datareader will do DISTINCT filtering on the client. This is a little slower but it does perform paging functionality on the data-reader so it won't fetch all rows even if the query suggests it does. Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded. LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data transport across the network. Use this optimization if you see that there's a big difference between query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call. Doing client-side aggregates/scalar calculations by consuming a lot of data. If possible, try to formulate a scalar query or group by query using the projection system or GetScalar functionality of LLBLGen Pro to do data consumption on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all in memory, then traverse the data in-memory to calculate a value. Using .ToList() constructs inside Linq queries. It might be you use .ToList() somewhere in a Linq query which makes the query run partially in-memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers in-memory and do an in-memory filtering, as the Linq query is defined on an IEnumerable<T>, and not on the IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run (a corrected version of this example is sketched at the end of this post). Fetching all entities to delete into memory first. To delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache. Fetching all entities to update with an expression into memory first. Similar to the previous point: it is more efficient to update a set of entities directly with a single UPDATE query using an expression instead of fetching the entities into memory first and then updating the entities in a loop, and afterwards saving them. It might however be a compromise you don't want to take as it is working around the idea of having an object graph in memory which is manipulated and instead makes the code fully aware there's an RDBMS somewhere. Conclusion Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success as well as knowing what's going on inside the application you built. 
I hope you'll find this guide useful in tracking down performance problems and dealing with them in a useful way.  
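
    As a final concrete illustration of the .ToList() pitfall listed above, here are the in-memory and server-side forms of the same query side by side. This is a sketch reusing the metaData object from the example in the list; with the premature .ToList() removed, the filter stays on the IQueryable<T> and is translated to SQL by the O/R mapper:

        // Runs the filter in memory: all customers are fetched first.
        var inMemory = from c in metaData.Customers.ToList()
                       where c.Country == "Norway"
                       select c;

        // Runs the filter in the database: the WHERE clause is part of the generated SQL.
        var onServer = from c in metaData.Customers
                       where c.Country == "Norway"
                       select c;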

    Read the article

  • Oracle Unveils Industry’s Broadest Cloud Strategy

    - by kellsey.ruppel
    Oracle Unveils Industry’s Broadest Cloud Strategy Adds Social Cloud and Showcases early customers Redwood Shores, Calif. – June 6, 2012 “Almost seven years of relentless engineering and innovation plus key strategic acquisitions. An investment of billions. We are now announcing the most comprehensive Cloud on the planet Earth,” said Oracle CEO, Larry Ellison. “Most cloud vendors only have niche assets. They don’t have platforms to extend. Oracle is the only vendor that offers a complete suite of modern, socially-enabled applications, all based on a standards-based platform.” News Facts In a major strategy update today, Larry Ellison announced the industry’s broadest and most advanced Cloud strategy and introduced Oracle Cloud Social Services, a broad Enterprise Social Platform offering. Oracle Cloud delivers a broad set of industry-standards based, integrated services that provide customers with subscription-based access to Oracle Platform Services, Application Services, and Social Services, all completely managed, hosted and supported by Oracle. Offering a wide range of business applications and platform services, the Oracle Cloud is the only cloud to enable customers to avoid the data and business process fragmentation that occurs when using multiple, siloed public clouds. Oracle Cloud is powered by leading enterprise-grade infrastructure, including Oracle Exadata and Oracle Exalogic, providing customers and partners with a high-performance, reliable, and secure infrastructure for running critical business applications. Oracle Cloud enables easy self-service for both business users and developers. Business users can order, configure, extend, and monitor their applications. Developers and administrators can easily develop, deploy, monitor and manage their applications. As part of the event, Oracle also showcased several early Oracle Cloud customers and partners including system integrators and independent software vendors. Oracle Cloud Platform Services Built on a common, complete, standards-based and enterprise-grade set of infrastructure components, Oracle Cloud Platform Services enable customers to speed time to market and lower costs by quickly building, deploying and managing bespoke applications. Oracle Cloud Platform Services will include: Database Services to manage data and build database applications with the Oracle Database. Java Services to develop, deploy and manage Java applications with Oracle WebLogic. Developer Services to allow application developers to collaboratively build applications. Web Services to build Web applications rapidly using PHP, Ruby, and Python. Mobile Services to allow developers to build cross-platform native and HTML5 mobile applications for leading smartphones and tablets. Documents Services to allow project teams to collaborate and share documents through online workspaces and portals. Sites Services to allow business users to develop and maintain visually engaging .com sites Analytics Services to allow business users to quickly build and share analytic dashboards and reports through the Cloud. Oracle Cloud Application Services Oracle Cloud Application Services provides customers access to the industry’s broadest range of enterprise applications available in the cloud today, with built-in business intelligence, social and mobile capabilities. 
Easy to setup, configure, extend, use and administer, Oracle Cloud Application Services will include: ERP Services: A complete set of Financial Accounting, Project Management, Procurement, Sourcing, and Governance, Risk & Compliance solutions. HCM Services: A complete Human Capital Management solution including Global HR, Workforce Lifecycle Management, Compensation, Benefits, Payroll and other solutions. Talent Management Services: A complete Talent Management solution including Recruiting, Sourcing, Performance Management, and Learning. Sales and Marketing Services: A complete Sales and Marketing solution including Sales Planning, Territory Management, Leads & Opportunity Management, and Forecasting. Customer Experience Services: A complete Customer Service solution including Web Self-Service, Contact Centers, Knowledge Management, Chat, and e-mail Management. Oracle Cloud Social Services Oracle Cloud Social Services provides the most broad and complete enterprise social platform available in the cloud today.  With Oracle Cloud Social Services, enterprises can engage with their customers on a range of social media properties in a comprehensive and meaningful fashion including social marketing, commerce, service and listening. The platform also provides enterprises with a rich social networking solution for their employees to collaborate effectively inside the enterprise. Oracle’s integrated social platform will include: Oracle Social Network to enable secure enterprise collaboration and purposeful social networking for business. Oracle Social Data Services to aggregate data from social networks and enterprise data sources to enrich business applications. Oracle Social Marketing and Engagement Services to enable marketers to centrally create, publish, moderate, manage, measure and report on their social marketing campaigns. Oracle Social Intelligence Services to enable marketers to analyze social media interactions and to enable customer service and sales teams to engage with customers and prospects effectively. Supporting Resources Oracle Cloud – learn more cloud.oracle.com – sign up now Webcast – watch the replay About Oracle Oracle engineers hardware and software to work together in the cloud and in your data center. For more information about Oracle (NASDAQ:ORCL), visit www.oracle.com. TrademarksOracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

    Read the article

  • Rendering ASP.NET MVC Razor Views outside of MVC revisited

    - by Rick Strahl
    Last year I posted a detailed article on how to render Razor Views to string both inside of ASP.NET MVC and outside of it. In that article I showed several different approaches to capture the rendering output. The first and easiest is to use an existing MVC Controller Context to render a view by simply passing the controller context which is fairly trivial and I demonstrated a simple ViewRenderer class that simplified the process down to a couple lines of code. However, if no Controller Context is available the process is not quite as straight forward and I referenced an old, much more complex example that uses my RazorHosting library, which is a custom self-contained implementation of the Razor templating engine that can be hosted completely outside of ASP.NET. While it works inside of ASP.NET, it’s an awkward solution when running inside of ASP.NET, because it requires a bit of setup to run efficiently.Well, it turns out that I missed something in the original article, namely that it is possible to create a ControllerContext, if you have a controller instance, even if MVC didn’t create that instance. Creating a Controller Instance outside of MVCThe trick to make this work is to create an MVC Controller instance – any Controller instance – and then configure a ControllerContext through that instance. As long as an HttpContext.Current is available it’s possible to create a fully functional controller context as Razor can get all the necessary context information from the HttpContextWrapper().The key to make this work is the following method:/// <summary> /// Creates an instance of an MVC controller from scratch /// when no existing ControllerContext is present /// </summary> /// <typeparam name="T">Type of the controller to create</typeparam> /// <returns>Controller Context for T</returns> /// <exception cref="InvalidOperationException">thrown if HttpContext not available</exception> public static T CreateController<T>(RouteData routeData = null) where T : Controller, new() { // create a disconnected controller instance T controller = new T(); // get context wrapper from HttpContext if available HttpContextBase wrapper = null; if (HttpContext.Current != null) wrapper = new HttpContextWrapper(System.Web.HttpContext.Current); else throw new InvalidOperationException( "Can't create Controller Context if no active HttpContext instance is available."); if (routeData == null) routeData = new RouteData(); // add the controller routing if not existing if (!routeData.Values.ContainsKey("controller") && !routeData.Values.ContainsKey("Controller")) routeData.Values.Add("controller", controller.GetType().Name .ToLower() .Replace("controller", "")); controller.ControllerContext = new ControllerContext(wrapper, routeData, controller); return controller; }This method creates an instance of a Controller class from an existing HttpContext which means this code should work from anywhere within ASP.NET to create a controller instance that’s ready to be rendered. This means you can use this from within an Application_Error handler as I needed to or even from within a WebAPI controller as long as it’s running inside of ASP.NET (ie. not self-hosted). Nice.So using the ViewRenderer class from the previous article I can now very easily render an MVC view outside of the context of MVC. 
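
    As a quick usage sketch (the view path, model and SendEmail helper below are hypothetical, not part of the article's code), rendering a Razor template to a string from non-controller code now boils down to:

        // Works anywhere HttpContext.Current is available, e.g. an HttpModule or a
        // Web API controller hosted inside ASP.NET.
        var renderer = new ViewRenderer();   // falls back to the generic controller context
        string body = renderer.RenderView("~/Views/Templates/OrderConfirmation.cshtml", order);
        SendEmail(customer.Email, "Your order confirmation", body);
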
Here's what I ended up with in my application's custom error HttpModule:

    protected override void OnDisplayError(WebErrorHandler errorHandler, ErrorViewModel model)
    {
        var Response = HttpContext.Current.Response;
        Response.ContentType = "text/html";
        Response.StatusCode = errorHandler.OriginalHttpStatusCode;

        var context = ViewRenderer.CreateController<ErrorController>().ControllerContext;
        var renderer = new ViewRenderer(context);
        string html = renderer.RenderView("~/Views/Shared/GenericError.cshtml", model);

        Response.Write(html);
    }

That's pretty sweet, because it's now possible to use ViewRenderer just about anywhere in any ASP.NET application, not only inside of controller code. This also allows the constructor for the ViewRenderer from the last article to work without a controller context parameter, using a generic controller as a base for the controller context when none is passed:

    public ViewRenderer(ControllerContext controllerContext = null)
    {
        // Create a known controller from HttpContext if no context is passed
        if (controllerContext == null)
        {
            if (HttpContext.Current != null)
                controllerContext = CreateController<ErrorController>().ControllerContext;
            else
                throw new InvalidOperationException(
                    "ViewRenderer must run in the context of an ASP.NET " +
                    "Application and requires HttpContext.Current to be present.");
        }
        Context = controllerContext;
    }

In this case I use the ErrorController class, which is a generic controller that exists in the same assembly as my ViewRenderer class, and that works just fine since 'generically' rendered views tend not to rely on anything from the controller other than the model, which is explicitly passed. While these days most of my apps use MVC, I do still have a number of generic pieces in most of these applications where Razor comes in handy. This includes modules like the above, which when they error often need to display error output. In other cases I need to generate string template output for emailing or logging data to disk. Being able to simply render an arbitrary View and pass in a model makes this super nice and easy, at least within the context of an ASP.NET application! You can check out the updated ViewRenderer class below to render your 'generic views' from anywhere within your ASP.NET applications. Hope some of you find this useful.
Resources:
ViewRenderer Class in Westwind.Web.Mvc Library (Github)
Original ViewRenderer Article
Razor Hosting Library (GitHub)
Original Razor Hosting Article
© Rick Strahl, West Wind Technologies, 2005-2013
Posted in ASP.NET MVC
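As an aside on the string-template scenario mentioned above (e-mail or log output), here is a minimal sketch of how the ViewRenderer described in this article could be used from arbitrary ASP.NET code. It assumes the ViewRenderer class and its RenderView(viewPath, model) method shown above; the view path, model type and method name are hypothetical placeholders, not part of the original article.

    // Minimal sketch: render a Razor template to string for an e-mail body.
    // Assumes the ViewRenderer class described above; WelcomeEmail.cshtml and
    // UserModel are hypothetical names used only for illustration.
    public string BuildWelcomeEmailBody(UserModel user)
    {
        // No ControllerContext passed - ViewRenderer creates one internally
        // from HttpContext.Current via CreateController<ErrorController>()
        var renderer = new ViewRenderer();

        // Render the template with the model and return the resulting HTML string
        return renderer.RenderView("~/Views/Templates/WelcomeEmail.cshtml", user);
    }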

    Read the article

  • Finding a problem in some task [closed]

    - by nagisa
    Recently I competed in a nation-wide programming contest final. Not unexpectedly, all problems were algorithmic. I lost (40 points out of 600; the winner got ~300). I know very well why I lost - I don't know how to find the actual problem in those obfuscated tasks which are the life-blood of every competition. I think that being self-taught and not well versed in algorithms hurt me too. As a side effect of learning things myself I know how to search for information; however, all I could find were a couple of questions about learning algorithms. For now I have put Python Algorithms: Mastering Basic Algorithms in the Python Language and Analysis of Algorithms, which I found in those questions, on my "to read" list. That leaves my first problem, of not knowing how to find the problem, unsolved. Will that ability come with learning algorithms? Or does it need some special attention? Any suggestions are welcome.

    Read the article

  • Ubuntu Software Center does not allow changes to software sources

    - by Michael Goldshteyn
    The checkboxes that appear to be changeable under the Edit / Software Sources dialog box cannot be changed. I click on them and they just turn gray and stay at their current setting.
    Update: When I run software-center from a terminal window and try to change one of the checkbox settings, I get:

    Traceback (most recent call last):
      File "/usr/lib/python2.7/dist-packages/softwareproperties/gtk/SoftwarePropertiesGtk.py", line 649, in on_isv_source_toggled
        self.backend.ToggleSourceUse(str(source_entry))
      File "/usr/lib/python2.7/dist-packages/dbus/proxies.py", line 143, in __call__
        **keywords)
      File "/usr/lib/python2.7/dist-packages/dbus/connection.py", line 630, in call_blocking
        message, timeout)
    dbus.exceptions.DBusException: com.ubuntu.SoftwareProperties.PermissionDeniedByPolicy: com.ubuntu.softwareproperties.applychanges

    These things happen instead of it properly prompting me for a password (for root privs).

    Read the article

  • Google I/O 2010 - The joys of engineering leadership

    Google I/O 2010 - How to lose friends and alienate people: The joys of engineering leadership. Tech Talks. Brian W. Fitzpatrick, Ben Collins-Sussman. Are you considered the 'point' person for your team? Do you have sweaty palms, headaches, and a calendar full of meetings? You may have an affliction called 'manager'. This is treatable through careful analysis and therapy. We'll examine how you may have arrived at this state and how you can once again regain your self-respect and the respect of your peers. Hear real-life stories of both good and bad leadership. Learn to lead by following. For all I/O 2010 sessions, please go to code.google.com/events/io/2010/sessions.html. From: GoogleDevelopers. Time: 56:02.

    Read the article

  • Microsoft Press Deal of the Day - 4/April/2012 - Accessing Data with Microsoft® .NET Framework 4

    - by TATWORTH
    The Microsoft Press half-price deal of the day at http://shop.oreilly.com/product/9780735627390.do is the training material for the MCTS Self-Paced Training Kit (Exam 70-516), "Accessing Data with Microsoft® .NET Framework 4":
    "EXAM PREP GUIDE Ace your preparation for the skills measured by MCTS Exam 70-516—and on the job—with this official Microsoft study guide. Work at your own pace through a series of lessons and reviews that fully cover each exam objective. Then, reinforce and apply what you've learned through real-world case scenarios and practice exercises. Maximize your performance on the exam by mastering the skills and experience measured by these objectives:
    Modeling data
    Managing connections and context
    Querying data
    Manipulating data
    Developing and deploying reliable applications"

    Read the article

  • Donald Farmer comes to SQLBits

    What do medieval archaeology, fish farming, Southwestern University of Chongqing and Microsoft Business Intelligence have in common? If you know, you should tell Donald Farmer, because he has been deeply involved in all of them at various times. Donald has worked in the Microsoft Business Intelligence team for 8 years covering many subject areas: data integration, information quality, metadata intelligence, master data management, OLAP, predictive analytics and self-service BI. He is a well-known speaker at Microsoft and other industry events, and the author of several books and articles. Great news from SQLBits! We can now confirm that Donald Farmer has agreed to do a pre-conference training day and the keynote for our SQL Server 2008 and SQL Server 2008 R2 day. As Program Manager for Project Gemini, no-one is better placed to tell you what is going to be in R2 and what is not! More information about the Pre-conference Training Day and SQL 2008 and R2 Friday will be released soon.

    Read the article

  • Legal question.

    - by Kjow
    Hi all, a question has been bouncing around in my head for some time. Copyright laws differ from nation to nation, but generally where is the border line for breaking copyright? Suppose you make a game that is very similar to another that came out in the past, e.g. a Pacman clone or a Space Invaders clone, but nothing from the original titles is grabbed, and maybe the games are not even made in 2d, but in 3d. The titles aren't "Pacman clone - the return" or "Space Invaders - they did it again", and not "Pocman" or "Space Evaders" either (maybe this last one could be fun for some "creative financers" that need to escape from earth :D). Finally, suppose you call them something like "Popcorn, fruit and ghosts" (or the acronym "PFG") and "Kill all enemy" (or the acronym "KAE"). In this case (nothing grabbed - all self-made), with no references to the original titles, but with a game that feels very similar to the ones that inspired it... could they be sold somewhere like Valve's Steam? Regards, Kjow

    Read the article

  • AutoVue at the Oracle Asset Lifecycle Management Summit

    - by celine.beck
    I recently had the opportunity to attend and present the integration between AutoVue and Primavera P6 during the Oracle ALM Summit, which was held in March at Redwood Shores, on Oracle Headquarters grounds. The ALM Summit brought together over 300 Oracle maintenance practitioners who endured the foggy and rainy San Francisco weather to attend the 4th edition of this Oracle-driven conference. Attendees have roles in maintenance management and IT. Following a general session, Ralph Rio from ARC Advisory Group provided a very interesting keynote session discussing Asset Management directions, both in the short and long run. An interesting point that Ralph raised is that most organizations have done a good job at improving performance in the design/build, operate-and-maintain, and portfolio management phases by leveraging solutions like Asset Lifecycle Management and Project & Portfolio Management; however, there seems to be room for improvement in between those phases, when information flows from one group to the other, during the data handover phase or when the time comes to update or modify drawings to reflect the reality of physical assets. This is where AutoVue comes into play. By integrating with enterprise applications like content management systems, asset lifecycle management applications and project management solutions, AutoVue can be a real process enabler, streamlining information flows from concept/design to decommissioning and ensuring that all project stakeholders have access to asset information and engineering data throughout the asset lifecycle. AutoVue's built-in digital annotation capabilities allow maintenance workers and technicians to report changes in configuration and visually capture the delta between as-built and as-maintained versions of asset documents. This information can then be easily handed over to engineers who can identify changes and incorporate these modifications into the drawings during the next round of document revisions. PPL Power Generation, an electric utility headquartered in Allentown, Pennsylvania, discussed this usage of AutoVue during an interesting webcast around AutoVue's role in the Utilities space. After the keynote sessions, participants broke off into product-centric tracks around Oracle's Asset Lifecycle Management solutions (E-Business Suite, PeopleSoft, and JD Edwards). The second day of the conference was the occasion for us to present the integration between AutoVue and Primavera P6 to the Maintenance Summit audience. The presentation was a great success and generated much discussion with partners and customers during breaks. People seemed highly interested in learning more about our plans for integrating AutoVue and Primavera P6 with Oracle's ALM solutions... stay tuned for further information on the subject!

    Read the article

  • Integrate Microsoft Translator into your ASP.Net application

    - by sreejukg
    In this article I am going to explain how easily you can integrate the Microsoft Translator API into your ASP.Net application. Why do we need a translation API? Once you publish a website, you are opening a channel to a global audience, so making the web content available in only one language doesn't cover all of your audience. Especially when you are offering products or services, it is important to provide content in multiple languages. Users will be more comfortable when they see the content in their native language. How do you achieve this? Hiring translators to translate the content into all your users' languages will cost you a lot of money, and it is not a one-time job; you need to translate content on the go. What is the alternative? We need to look at machine translation. Thankfully there are some translation engines available that give you API-level access, so that you can automatically translate content and display it to the user. The Microsoft Translator API is an excellent set of web service APIs that allows developers to use machine translation technology in their own applications. The Microsoft Translator API is offered through the Windows Azure Marketplace. In order to access the data services published in the Windows Azure Marketplace, you need to have an account. The registration process is simple, and it is common for all the services offered through the marketplace. Last year I wrote an article about the Bing Search API, where I covered the registration process. You can refer to that article here: http://weblogs.asp.net/sreejukg/archive/2012/07/04/integrate-bing-search-api-to-asp-net-application.aspx Once you have registered with the Windows marketplace, you will get your APP ID. Now you can visit the Microsoft Translator page and click on the sign up button. http://datamarket.azure.com/dataset/bing/microsofttranslator As you can see, there are several options available for you to subscribe to. There is a free version available - great. Click on the sign up button under the package that suits you. Clicking on the sign up button will bring up the sign up form, where you need to agree to the terms and conditions and go ahead. You need to have a Windows Live account in order to sign up for any service available in the Windows Azure Marketplace. Once you have signed up successfully, you will receive the thank you page. You can download the C# class library from here so that the integration can be made without writing much code. The C# file name is TranslatorContainer.cs. At any point of time, you can visit https://datamarket.azure.com/account/datasets to see the applications you are subscribed to. Clicking on the Use link next to each service will give you the details of the application. You need to note the primary account key and URL of the service to use in your application. Now let us start our ASP.Net project. I have created an empty ASP.Net web application using Visual Studio 2010 and named it Translator Sample; any name would work. By default, the web application in solution explorer looks as follows. Now right click the project, select Add -> Existing Item and then browse to the TranslatorContainer.cs. Now let us create a page where the user enters some data and performs the translation. I have added a new web form to the project with the name Translate.aspx. I have placed one textbox control for the user to type the text to translate, a dropdown list to select the target language, a label to display the translated text and a button to perform the translation.
For the dropdown list I have selected some languages supported by Microsoft Translator. You can get all the supported languages with their codes from the link below. http://msdn.microsoft.com/en-us/library/hh456380.aspx The form looks as below in the design surface of Visual Studio. All the class libraries in the Windows marketplace require a reference to System.Data.Services.Client, so let us add that reference. You can find the documentation on how to use the downloaded class library at the link below. http://msdn.microsoft.com/en-us/library/gg312154.aspx Let us examine the TranslatorContainer.cs file. You can refer to the code; it is self-explanatory. Note the namespace name used (Microsoft); you need to add the namespace reference to your page. I have added the following event for the translate button. The code is self-explanatory. You create an object of the TranslatorContainer class by passing the translation service URL. Then you set the credentials for your TranslatorContainer object, which will be your account key. The TranslatorContainer supports a method that accepts a text input, source language and destination language and returns DataServiceQuery<Translation>. Let us see this working. I ran the application, entered Good Morning in the textbox, selected the target language and saw the output as follows. It is easy to build great translator applications using the Microsoft Translator API, and there is a reasonable amount of translation you can perform in your application for free. For enterprises, you can subscribe to the appropriate package and make your application multi-lingual.
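Since the click handler itself is only referenced above (it appeared as a screenshot in the original post), here is a minimal sketch of what such a handler could look like. It assumes the generated TranslatorContainer class exposes a Translate(text, to, from) method returning DataServiceQuery<Translation> as described, and that the Translation entity carries the translated string in a Text property; the control names, the placeholder account key and the choice of "en" as the source language are assumptions for illustration only.

    // Minimal sketch of the translate button handler (requires System.Net, System.Linq,
    // System.Data.Services.Client and the Microsoft namespace from TranslatorContainer.cs).
    protected void btnTranslate_Click(object sender, EventArgs e)
    {
        // Service URL and account key come from your Windows Azure Marketplace subscription
        var serviceUri = new Uri("https://api.datamarket.azure.com/Bing/MicrosoftTranslator/");
        var container = new Microsoft.TranslatorContainer(serviceUri);
        container.Credentials = new System.Net.NetworkCredential("accountKey", "YOUR-ACCOUNT-KEY");

        // Translate the entered text from English into the selected target language
        var query = container.Translate(txtSource.Text, ddlLanguage.SelectedValue, "en");
        var result = query.Execute().FirstOrDefault();

        if (result != null)
            lblTranslated.Text = result.Text;
    }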

    Read the article

  • The 2012 Gartner-FEI CFO Technology Survey -- Reviewed by Jeff Henley, Oracle Chairman

    - by Di Seghposs
    Jeff Henley and Oracle Business Analytics VP Rich Clayton break down the findings of the 2012 Gartner-FEI CFO Technology Survey. The survey, produced by Gartner, gathers CFOs' perceptions about technology, trends and planned improvements to operations. Financial executives and IT professionals can use these findings to align spending and organizational priorities and understand how technology should support corporate performance. Listen to the webcast with Jeff Henley and Rich Clayton - Watch Now » Download the full report for all the details - Read the Report »
    Key Findings
    Despite slow economic growth, CFOs expect conservative, steady IT spending.
    The CFO's role in IT investment has increased again in 2012.
    The 45% of IT leaders that report to the CFO is a larger share than reports to any other executive, and represents an increase of 3%.
    Business analytics needs technology improvement.
    CFOs are focused on business analytics and business applications more than on technology.
    Information, social, cloud and mobile technology trends are on CFOs' radar.
    Focusing on corporate performance management (CPM) projects, 63% of CFOs plan to upgrade business intelligence (BI), analytics and performance management in 2012.
    Despite advancements in strategy management technologies, CFOs still focus on lagging key performance indicators (KPIs) only.
    A pace-layered strategy for applications is needed (92% of CFOs believe IT doesn't provide transformation/differentiation).
    New applications in financial governance rank high on improving compliance and efficiency.

    Read the article

  • Hello With Oracle Identity Manager Architecture

    - by mustafakaya
    Hi, my name is Mustafa! I'm a Senior Consultant in the Fusion Middleware Team, living in Istanbul, Turkey. I have worked on many different Java-based software development projects such as end-to-end web applications, CRM, telco VAS and integration projects. I want to share my experiences and research about Fusion Middleware products in this column. The customer always wants the best solution from software consultants or developers. That solution may be a code snippet or a complete change of architecture; we face different requests depending on the customer's case. In my posts I want to discuss Fusion Middleware product architectures and how you can extend usability with APIs or UI customization, and I look forward to engaging with you on your experiences and thoughts on this. In this first post I will be discussing the Oracle Identity Manager architecture, and I plan to discuss Oracle Identity Manager 11g features in later posts.
    Oracle Identity Manager System Architecture
    Oracle Identity Governance includes Oracle Identity Manager, Oracle Identity Analytics and Oracle Privileged Account Manager. I will discuss the Oracle Identity Manager architecture in this post. Basically, Oracle Identity Manager is a standard n-tier Java EE application that is deployed on Oracle WebLogic Server and uses a database.
    The Oracle Identity Manager presentation tier has three different screens and two different clients. Identity Self Service and Identity System Administration are web-based thin clients, while the Design Console is a Java Swing client that communicates directly with the business service tier. Identity Self Service provides end-user operations and delegated administration features. System Administration provides system administration functions. The Design Console is mostly used for development and management operations such as creating and managing adapters and process forms, notifications, workflow design, reconciliation rules, etc.
    The business service tier is implemented as an Enterprise JavaBeans (EJB) application, so you can extend Oracle Identity Manager's capabilities:
    - The SPML and EJB APIs allow you to develop custom plug-ins, for example for managing roles or identities.
    - Identity Services let you use the core business capabilities of Oracle Identity Manager, such as the user provisioning or reconciliation services.
    - Integration Services allow you to develop custom connectors or adapters for various deployment needs.
    - Platform Services let you use the Entitlement Server, the Scheduler or SOA composites.
    The middleware tier gives you the capabilities of ADF Faces, SOA Suite, the Scheduler, Entitlement Server and BI Publisher reports. So OIM allows you to configure workflows using Oracle SOA Suite or define authorization policies with Oracle Entitlement Server. You can also customize the OIM UI without needing to write code, and using the ADF Business Editor you can add custom attributes to user, role, catalog and other objects.
    Data tier: Oracle Identity Manager is driven by data and metadata, which provides flexibility and adaptability to Oracle Identity Manager's functionality.
    - Database: the database has five schemas - OIM, SOA, MDS, OPSS and OES. Oracle Identity Manager uses the database to store runtime and configuration data, and all entity, transactional and audit data is stored there.
    - Metadata Store: customizations and personalizations are stored in a file-based or database-based repository. In the Oracle Identity Manager architecture, the metadata is kept in the Oracle Identity Manager database to take advantage of some of the advanced performance and availability features that this mode provides.
    - Identity Store: Oracle Identity Manager provides the ability to integrate an LDAP-based identity store into the Oracle Identity Manager architecture.
    Oracle Identity Manager uses the human workflow module of Oracle Service Oriented Architecture Suite. OIM connects to SOA using the T3 URL, which is the front-end URL for the SOA server. Oracle Identity Manager uses the embedded Oracle Entitlement Server for authorization checks in the OIM engine. Several Oracle Identity Manager modules use JMS queues. Each queue is processed by a separate Message Driven Bean (MDB), which is also part of the Oracle Identity Manager application; message producers are part of the Oracle Identity Manager application as well. Oracle Identity Manager uses scheduled jobs for some background activities. Some scheduled jobs come out of the box, such as disabling users after their end date, or you can define your own custom scheduled jobs with the Oracle Identity Manager APIs. You can use Oracle BI Publisher for reporting on Oracle Identity Manager transaction or audit data, which is stored in the database.
    About me: Mustafa Kaya is a Senior Consultant in the Oracle Fusion Middleware Team, living in Istanbul. Before coming to Oracle, he worked in teams developing web applications and backend services at a telco company. He is a Java technology enthusiast and software engineer, addicted to learning new technologies and developing new ideas. Follow Mustafa on Twitter, connect on LinkedIn, and visit his site for Oracle Fusion Middleware related tips.

    Read the article

  • OTN Architect Day Headed to Reston, VA - May 16

    - by Bob Rhubart
    In 2011 OTN Architect Day made stops in Chicago, Denver, Phoenix, Redwood Shores, and Toronto. The 2012 series begins with OTN Architect Day in Reston, VA on Wednesday, May 16. Registration is now open for this free event, but don't get caught napping -- seating is limited, and the event is just 5 weeks away. The information below reflects the most recent updates to the event agenda, including the addition of Oracle ACE Director Kai Yu as the guest keynote speaker. Kai is Senior System Engineer / Architect at Dell, Inc., and has been very busy of late as a speaker at various industry and Oracle User Group events. I'm very happy Kai has agreed to make the trek from his hometown in Austin, TX to share his insight at the Architect Day event in Reston. If you're in the area, put this one on your calendar. You won't be sorry.
    Venue
    Sheraton Reston Hotel
    11810 Sunrise Valley Drive
    Reston, VA 20191
    Event Agenda
    8:30 am - 9:00 am Registration and Continental Breakfast
    9:00 am - 9:15 am Welcome and Opening Comments
    9:15 am - 10:00 am Engineered Systems: Oracle's Vision for the Future | Ralf Dossman
    Oracle's Exadata and Exalogic are impressive products in their own right. But working in combination they deliver unparalleled transaction processing performance with up to a 30x increase over existing legacy systems, and a lower cost of ownership over a 3 or 5 year basis than any other hardware. In this session you'll learn how to leverage Oracle's Engineered Systems within your enterprise to deliver record-breaking performance at the lowest TCO.
    10:00 am - 10:30 am High Availability Infrastructure for Cloud Computing | Kai Yu
    Infrastructure high availability is extremely critical to Cloud Computing. In a Cloud system that hosts a large number of databases and applications with different SLAs, any unplanned outage can be devastating, and even a small planned downtime may be unacceptable. This presentation will discuss various technology solutions and the related best practices that system architects should consider in cloud infrastructure design to ensure high availability.
    10:30 am - 10:45 am Break
    10:45 am - 11:30 am Breakout Sessions (pick one):
    Innovations in Grid Computing with Oracle Coherence | Bjorn Boe
    Learn how Coherence can increase the availability, scalability and performance of your existing applications with its advanced low-latency data-grid technologies. Also hear some interesting industry-specific use cases that customers had implemented and how Oracle is integrating Coherence into its Enterprise Java stack.
    Cloud Computing - Making IT Simple | Scott Mattoon
    The road to Cloud Computing is not without a few bumps. This session will help to smooth out your journey by tackling some of the potential complications. We'll examine whether standardization is a prerequisite for the Cloud. We'll look at why refactoring isn't just for application code. We'll check out deployable entities and their simplification via higher levels of abstraction. And we'll close out the session with a look at engineered systems and modular clouds.
    11:30 am - 12:15 pm Breakout Sessions (pick one):
    Oracle Enterprise Manager | Joe Diemer
    Oracle Enterprise Manager (EM) provides complete lifecycle management for the cloud - from automated cloud setup to self-service delivery to cloud operations. In this session you'll learn how to take control of your cloud infrastructure with EM features including Consolidation Planning and Self-Service provisioning with Metering and Chargeback.
    Come hear how Oracle is expanding its management capabilities into the cloud!
    Rationalization and Defense in Depth - Two Steps Closer to the Clouds | Dave Chappelle
    Security represents one of the biggest concerns about cloud computing. In this session we'll get past the FUD with a real-world look at some key issues. We'll discuss the infrastructure necessary to support rationalization and security services, explore architecture for defense-in-depth, and deal frankly with the good, the bad, and the ugly in Cloud security.
    12:15 pm - 1:15 pm Lunch
    1:40 pm - 2:00 pm Panel Discussion - Q&A
    2:00 pm - 2:45 pm Breakout Sessions (pick one):
    21st Century SOA | Peter Belknap
    Service Oriented Architecture has evolved from concept to reality in the last decade. The right methodology coupled with mature SOA technologies has helped customers demonstrate success in both innovation and ROI. In this session you will learn how Oracle SOA Suite's orchestration, virtualization, and governance capabilities provide the infrastructure to run mission critical business and system applications. And we'll take a special look at the convergence of SOA & BPM using Oracle's Unified technology stack.
    Oracle Cloud Reference Architecture | Anbu Krishnaswamy
    Cloud initiatives are beginning to dominate enterprise IT roadmaps. Successful adoption of Cloud and the subsequent governance challenges warrant a Cloud reference architecture that is applied consistently across the enterprise. This presentation gives an overview of Oracle's Cloud Reference Architecture, which is part of the Cloud Enterprise Technology Strategy (ETS). Concepts covered include common management layer capabilities, service models, resource pools, and use cases.
    2:45 pm - 3:00 pm Break
    3:00 pm - 4:00 pm Roundtable Discussions
    4:00 pm - 4:15 pm Closing Comments & Readouts from Roundtable
    4:15 pm - 5:00 pm Cocktail Reception / Networking
    Session schedule and content subject to change.

    Read the article

  • Not so long ago in a city not so far away by Carlos Martin

    - by Maria Sandu
    This is the story of how the EMEA Presales Center turned an Oracle intern into a trusted technology advisor for both Oracle's Sales organization and customers. It was the summer of 2011, as I was finishing my Computer Engineering studies as well as my internship at Oracle, when I was offered what could possibly be THE dream job for any young European Computer Engineer. Apart from that, it also seemed like the role was particularly tailored to me, as I could leverage almost everything I learned at University and during the internship. And all of it in one of the best cities to live in, not only in my home country but arguably in Europe: Malaga!
    A day at EPC
    As part of the EPC Technology pillar, and later on completely focused on WebCenter, there was no way to describe a normal day on the job, as each day had something unique. Some days I was researching documentation in order to elaborate accurate answers to a customer's questions within a Request for Information or Proposal (RFI/RFP), other days I was doing heavy programming in order to bring a Proof of Concept (PoC) for a customer to life, and last but not least, some days I presented to the customer via webconference the demo I had built for them over the past weeks. So as you can see, the role has research, development and presentation - could you ask for more? Well, don't worry, because there IS more!
    Internationality
    As the organization's name suggests, the EMEA Presales Center is the center of presales within Europe, the Middle East and Africa, so I got the chance to work with great professionals from all of these regions, expanding my network and learning things from one country to apply them to others. In addition to that, the teams based in the Malaga office are comprised of many young professionals hailing mainly from Western and Central European countries (although there are a couple of exceptions!) with very different backgrounds and personalities, which guaranteed many laughs and stories during lunch or coffee breaks (or even while working on projects!). Furthermore, having EPC offices in Bucharest and Bangalore and thanks to today's tele-presence technologies, I was working every day with people from India or Romania as if they were sitting right next to me, and the bonding with them got stronger day by day.
    Career development
    Apart from the research and self-study I mentioned earlier, one of the EPC's Key Performance Indicators (KPIs) is that 15% of your time is spent on training, so you get lots and lots of training in order to develop both your technical product knowledge and your presentation, negotiation and other soft skills. Sometimes the training is via webcast, sometimes the trainer comes to the office, and sometimes, the best times, you get to travel abroad in order to attend a training, which also helps you to further develop your network by meeting face to face with many people you only know from some email or instant messaging interaction. And as the months go by, your skills improving at a very fast pace, your relevance increasing with each new project you successfully deliver, it's only a matter of time (and a bit of self-promotion!) before you get the attention of the manager of a more senior team and are offered the opportunity to take a new step in your professional career. For me it took 2 years to move to my current position, Technology Sales Consultant at the Oracle Direct organization.
During those 2 years I built a good relationship with the Oracle Direct Spanish sales team and sales managers, who are also based in the Malaga office. I supported their former Sales Consultant in a couple of presentations and demos, and they were very happy with my overall performance and attitude, so even before the position eventually became vacant, I got a heads-up from them in advance that their current Sales Consultant was going to move to a different position. To me it felt like a natural step; just as when I joined EPC, I had at least 50% of the "homework" already done, but I wanted to experience that extra 50% to add new product and soft skills to my arsenal. The rest is history: I've been in the role for more than half a year as I write this, have already achieved some important wins, gained a lot of trust and confidence in front of customers and broadened my view of Oracle's Fusion Middleware portfolio. I look back at the 2 years I spent in EPC and think: "boy, I'd recommend that experience to absolutely anyone with the slightest interest in IT; there are so many different things you can do, as there are so many different kinds of roles you can end up taking thanks to the experience gained at EPC."

    Read the article

  • How to export ECC key and Cert from NSS DB and import into JKS keystore and Oracle Wallet

    - by mv
    In this blog I will write about how to extract a cert and key from an NSS DB, import them into a JKS keystore, and then import that JKS keystore into an Oracle Wallet.
    1. Set Java Home
    I pointed it to JRE 1.6.0_22:
    $ export JAVA_HOME=/usr/java/jre1.6.0_22/
    2. Create a self-signed ECC cert in NSS DB
    I created an NSS DB with a self-signed ECC certificate. If you already have an NSS DB with an ECC cert (and key), skip this step.
    $ export NSS_DIR=/export/home/nss/
    $ $NSS_DIR/certutil -N -d .
    $ $NSS_DIR/certutil -S -x -s "CN=test,C=US" -t "C,C,C" -n ecc-cert -k ec -q nistp192 -d .
    3. Export the ECC cert and key using pk12util
    Use the NSS tool pk12util to export this cert and key into a p12 file:
    $ $NSS_DIR/pk12util -o ecc-cert.p12 -n ecc-cert -d . -W password
    4. Use keytool to create a JKS keystore and import this p12 file
    4.1 Import the p12 file created above into a JKS keystore
    $JAVA_HOME/bin/keytool -importkeystore -srckeystore ecc-cert.p12 -srcstoretype PKCS12 -deststoretype JKS -destkeystore ecc.jks -srcstorepass password -deststorepass password -srcalias ecc-cert -destalias ecc-cert -srckeypass password -destkeypass password -v
    But an error such as the following may be encountered:
    keytool error: java.security.UnrecoverableKeyException: Get Key failed: EC KeyFactory not available
    java.security.UnrecoverableKeyException: Get Key failed: EC KeyFactory not available
            at com.sun.net.ssl.internal.pkcs12.PKCS12KeyStore.engineGetKey(Unknown Source)
            at java.security.KeyStoreSpi.engineGetEntry(Unknown Source)
            at java.security.KeyStore.getEntry(Unknown Source)
            at sun.security.tools.KeyTool.recoverEntry(Unknown Source)
            at sun.security.tools.KeyTool.doImportKeyStoreSingle(Unknown Source)
            at sun.security.tools.KeyTool.doImportKeyStore(Unknown Source)
            at sun.security.tools.KeyTool.doCommands(Unknown Source)
            at sun.security.tools.KeyTool.run(Unknown Source)
            at sun.security.tools.KeyTool.main(Unknown Source)
    Caused by: java.security.NoSuchAlgorithmException: EC KeyFactory not available
            at java.security.KeyFactory.<init>(Unknown Source)
            at java.security.KeyFactory.getInstance(Unknown Source)
            ... 9 more
    4.2 Create a new PKCS11 provider
    If you didn't get an error as shown above, skip this step. Since we already have NSS libraries built with ECC, we can create a new PKCS11 provider.
    Create ${java.home}/jre/lib/security/nss.cfg as follows:
    name = NSS
    nssLibraryDirectory = ${nsslibdir}
    nssDbMode = noDb
    attributes = compatibility
    where nsslibdir should contain NSS libs with ECC support.
    Add the following line to ${java.home}/jre/lib/security/java.security:
    security.provider.9=sun.security.pkcs11.SunPKCS11 ${java.home}/lib/security/nss.cfg
    Note that for those who are using Oracle iPlanet Web Server or Oracle Traffic Director, the NSS libs built with ECC are in <ws_install_dir>/lib or <otd_install_dir>/lib.
    4.3 Now keytool should work
    Now you can try the same keytool command and see that it succeeds:
    $JAVA_HOME/bin/keytool -importkeystore -srckeystore ecc-cert.p12 -srcstoretype PKCS12 -deststoretype JKS -destkeystore ecc.jks -srcstorepass password -deststorepass password -srcalias ecc-cert -destalias ecc-cert -srckeypass password -destkeypass password -v
    [Storing ecc.jks]
    5. Convert the JKS keystore into an Oracle Wallet
    You can export this cert and key from the JKS keystore and import them into an Oracle Wallet if needed, using the orapki tool as shown below.
Make sure the orapki you use supports ECC. Also, for ECC you MUST use the "-jsafe" option.
$ orapki wallet create -pwd password -wallet . -jsafe
$ orapki wallet jks_to_pkcs12 -wallet . -pwd password -keystore ecc.jks -jkspwd password -jsafe
$ orapki wallet display -wallet . -pwd welcome1 -jsafe
Oracle PKI Tool : Version 11.1.2.0.0
Copyright (c) 2004, 2012, Oracle and/or its affiliates. All rights reserved.
Requested Certificates:
User Certificates:
Subject:        CN=test,C=US
Trusted Certificates:
Subject:        OU=Class 3 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US
Subject:        CN=GTE CyberTrust Global Root,OU=GTE CyberTrust Solutions\, Inc.,O=GTE Corporation,C=US
Subject:        OU=Class 2 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US
Subject:        OU=Class 1 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US
Subject:        CN=test,C=US
As you can see, our ECC cert is in the wallet. You can follow the same steps for RSA certs as well.
6. References
http://icedtea.classpath.org/bugzilla/show_bug.cgi?id=356
http://old.nabble.com/-PATCH-FOR-REVIEW-%3A-Support-PKCS11-cryptography-via-NSS-p25282932.html
http://www.mozilla.org/projects/security/pki/nss/tools/pk12util.html

    Read the article

< Previous Page | 238 239 240 241 242 243 244 245 246 247 248 249  | Next Page >