Search Results

Search found 8406 results on 337 pages for 'platform independent'.


  • WinInet Apps failing when Internet Explorer is set to Offline Mode

    - by Rick Strahl
    Ran into a nasty issue last week when all of a sudden many of my old applications that use WinInet for HTTP access started failing. Specifically, the WinInet HttpSendRequest() call started failing with an error of 2, which when retrieving the error boils down to:

        WinInet Error 2: The system cannot find the file specified

    Now this error can pop up in many legitimate scenarios with WinInet, such as when no Internet connection is available or the HTTP configuration (usually configured in Internet Explorer’s options) is misconfigured. The error typically means that the server in question cannot be found or, more specifically, that an Internet connection can’t be established.

    In this case the problem started suddenly and was causing some of my own applications (old Visual FoxPro apps using my own wwHttp library), all Adobe Air applications (which apparently use WinInet for their basic HTTP stack), and a few more oddball applications to fail instantly when trying to connect via HTTP. Most other applications – all of my installed browsers, email clients, various social network updaters – all worked just fine. It seemed it was only WinInet apps that were failing, yet oddly Internet Explorer appeared to be working. So the problem seemed to be isolated to those ‘classic’ applications using WinInet.

    WinInet’s base configuration uses the Internet Explorer options dialog. To check this out I typically go to the Internet Explorer options, find the Connection tab, and check out the LAN Setup: make sure there are no rogue proxy settings or configuration scripts that are invalid. Trying with Auto-configuration on and off can also often fix ‘real’ configuration errors. This time, however, this wasn’t the problem – nothing in the LAN configuration was set (all default). I also played with the automatic detection of settings, which also had no effect. I also tried to use Fiddler to see if that would tell me something, since Fiddler exposes a few additional WinInet configuration options. Running Fiddler and hitting an HTTP request using WinInet would never actually hit Fiddler – the failure would occur before WinInet ever fired up the HTTP connection to go through the Fiddler HTTP proxy.

    And the Culprit is: Internet Explorer’s Work Offline Option

    The culprit in this situation was Internet Explorer, which at some point, unknown to me, switched into Offline Mode and was then shut down. When this Offline Mode is checked while IE is running *or* if IE gets shut down with this flag set, all applications using WinInet by default assume that the system is running in offline mode. Depending on your caching HTTP headers and whether the page was cached previously, you may or may not get a response or an error. For an independent non-browser application this is highly unpredictable and will likely result in failures getting online – especially if the application forces requests to always reload by disabling HTTP caching (as I do on most of my dynamic HTTP clients).

    What makes this especially tricky is that even when IE is in offline mode in the browser, you can still browse around the Web *if* you have a connection. IE will try to load anything it has cached from the local cache, but as soon as you hit a URL that isn’t cached it will automatically try to access that URL and uncheck the Work Offline option. Conversely, if you get knocked off the Internet and browse in IE 9, IE will automatically go into offline mode.
    I never explicitly set offline mode – it just automatically sets itself on and off depending on the connection. The problem is, if you’re not using IE all the time (as I do – rarely and just for testing, so usually a few commonly used URLs) and you leave it in offline mode when you exit, offline mode stays set, which results in the above head scratcher. Ack. This isn’t new behavior in IE 9, BTW – this behavior has always been there, but I think what’s different is that IE now automatically switches between online and offline modes without notifying you at all, so it’s hard to tell when you are offline.

    Fixing the Issue in your Code

    If you have an application that is using WinInet, there’s a WinInet option called INTERNET_OPTION_IGNORE_OFFLINE. I just checked this out in my own applications and Internet Explorer 9 and it works, but apparently it’s been broken for some older releases (I can’t confirm how far back though) – lots of posts seem to suggest the flag doesn’t work. However, in IE 9 at least it does seem to work if you call InternetSetOption before you call HttpOpenRequest with the HTTP session handle. In FoxPro code I use:

        DECLARE INTEGER InternetSetOption ;
           IN WININET.DLL ;
           INTEGER HINTERNET,;
           INTEGER dwFlags,;
           INTEGER @dwValue,;
           INTEGER cbSize

        lnOptionValue = 1   && BOOL TRUE, passed by reference

        *** Set the ignore-offline option (INTERNET_OPTION_IGNORE_OFFLINE = 77)
        lnResult = InternetSetOption(this.hHttpSession,;
           INTERNET_OPTION_IGNORE_OFFLINE,;
           @lnOptionValue, 4)

        DECLARE INTEGER HttpOpenRequest ;
           IN WININET.DLL ;
           INTEGER hHTTPHandle,;
           STRING lpzReqMethod,;
           STRING lpzPage,;
           STRING lpzVersion,;
           STRING lpzReferer,;
           STRING lpzAcceptTypes,;
           INTEGER dwFlags,;
           INTEGER dwContext

        hHTTPResult = HttpOpenRequest(THIS.hHttpSession,;
           lcVerb,;
           tcPage,;
           NULL, NULL, NULL,;
           INTERNET_FLAG_RELOAD + ;
           IIF(THIS.lsecurelink, INTERNET_FLAG_SECURE, 0) + ;
           this.nHTTPServiceFlags, 0)

    And this fixes the issue, at least for IE 9. In my FoxPro wwHttp class I now call this by default to never get bitten by this again. This solves the problem permanently for my HTTP client. I never want to see offline operation in an HTTP client API – it’s just too unpredictable in handling errors, and the last thing you want is getting unpredictably stale data. Problem solved, but this behavior is – well, ugly. But then that’s to be expected from an API that’s based on Internet Explorer, eh?

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in HTTP, Windows
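    For .NET applications that sit on top of WinInet through P/Invoke, the same idea applies. The snippet below is a minimal sketch (not from the original post) of how the flag could be set from C# before opening a request; the constant value 77 is the one quoted in the article, and the class, method, and handle names are just placeholders.

        using System;
        using System.Runtime.InteropServices;

        static class WinInetOfflineHelper
        {
            // Value quoted in the post for INTERNET_OPTION_IGNORE_OFFLINE.
            private const int INTERNET_OPTION_IGNORE_OFFLINE = 77;

            [DllImport("wininet.dll", SetLastError = true)]
            private static extern bool InternetSetOption(
                IntPtr hInternet, int dwOption, ref int lpBuffer, int dwBufferLength);

            // Call this with the HINTERNET session handle before calling HttpOpenRequest.
            public static bool IgnoreOfflineMode(IntPtr hSession)
            {
                int enable = 1; // BOOL TRUE
                return InternetSetOption(hSession, INTERNET_OPTION_IGNORE_OFFLINE,
                                         ref enable, sizeof(int));
            }
        }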

    Read the article

  • Windows Azure Learning Plan - Security

    - by BuckWoody
    This is one in a series of posts on a Windows Azure Learning Plan. You can find the main post here. This one deals with Security for Windows Azure.

    General Security Information
    Overview and general information about Windows Azure Security - what it is, how it works, and where you can learn more.
    - General Security Whitepaper – answers most questions: http://blogs.msdn.com/b/usisvde/archive/2010/08/10/security-white-paper-on-windows-azure-answers-many-faq.aspx
    - Windows Azure Security Notes from the Patterns and Practices site: http://blogs.msdn.com/b/jmeier/archive/2010/08/03/now-available-azure-security-notes-pdf.aspx
    - Overview of Azure Security: http://www.windowsecurity.com/articles/Microsoft-Azure-Security-Cloud.html
    - Azure Security Resources: http://reddevnews.com/articles/2010/08/19/microsoft-releases-windows-azure-security-resources.aspx
    - Cloud Computing Security Considerations: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=68fedf9c-1c27-4642-aa5b-0a34472303ea&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+MicrosoftDownloadCenter+%28Microsoft+Download+Center
    - Security in Cloud Computing – a Microsoft Perspective: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=7c8507e8-50ca-4693-aa5a-34b7c24f4579&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+MicrosoftDownloadCenter+%28Microsoft+Download+Center

    Physical Security for Microsoft’s Online Computing
    Information on the Infrastructure and Locations for Azure Physical Security.
    - The Global Foundation Services Group at Microsoft handles physical security: http://www.globalfoundationservices.com/security/index.html
    - Microsoft’s Security Response Center: http://www.microsoft.com/security/msrc/

    Software Security for Microsoft’s Online Computing
    Steps we take as a company to develop secure software.
    - Windows Azure is developed using the Trustworthy Computing Initiative: http://www.microsoft.com/about/twc/en/us/default.aspx and http://msdn.microsoft.com/en-us/library/ms995349.aspx
    - Identity and Access in the Cloud: http://blogs.msdn.com/b/technology_titbits_by_rajesh_makhija/archive/2010/10/29/identity-and-access-in-the-cloud.aspx

    Security Steps you should take
    While Microsoft takes great pains to secure the infrastructure, platform and code for Windows Azure, you have a responsibility to write secure code. These pointers can help you do that.
    - Securing your cloud architecture, step-by-step: http://technet.microsoft.com/en-us/magazine/gg296364.aspx
    - Security Guidelines for Windows Azure: http://redmondmag.com/articles/2010/06/15/microsoft-issues-security-guidelines-for-windows-azure.aspx
    - Best Practices for Windows Azure Security: http://blogs.msdn.com/b/vbertocci/archive/2010/06/14/security-best-practices-for-developing-windows-azure-applications.aspx
    - Active Directory and Windows Azure: http://blogs.msdn.com/b/plankytronixx/archive/2010/10/22/projecting-your-active-directory-identity-to-the-azure-cloud.aspx
    - Understanding Encryption (great overview and tutorial): http://blogs.msdn.com/b/plankytronixx/archive/2010/10/23/crypto-primer-understanding-encryption-public-private-key-signatures-and-certificates.aspx
    - Securing your Connection Strings (SQL Azure): http://blogs.msdn.com/b/sqlazure/archive/2010/09/07/10058942.aspx
    - Getting started with Windows Identity Foundation (WIF) quickly: http://blogs.msdn.com/b/alikl/archive/2010/10/26/windows-identity-foundation-wif-fast-track.aspx

    Read the article

  • Security Trimmed Cross Site Collection Navigation

    - by Sahil Malik
    This article will serve as documentation of a fully functional CodePlex project that I just created. This project will give you a WebPart that provides security trimmed navigation across site collections.

    The first question is, why create such a project? In every single SharePoint project you will do, one question you will always be faced with is: what should the boundaries of sites be, and what should the boundaries of site collections be? There is no good or bad answer to this, because it really depends on your needs. There are some factors in play here:

    - Site collections will allow you to scale, as a site collection is the smallest entity you can put inside a content database.
    - Site collections will allow you to offer different levels of SLAs, because you can put a site collection on a separate content database, and put that database on a separate server.
    - Site collections are a security boundary – and they can be moved around at will without affecting other site collections.
    - Site collections are also a branding boundary.
    - They are also a feature deployment boundary, so you can have two site collections on the same web application with a completely different nature of services.

    But site collections break navigation, i.e. a site collection at “/” and a site collection at “/sites/mySiteCollection” are completely independent of each other. If you have access to both, the navigation of / won’t show you a link to /sites/mySiteCollection. Some people refer to this as a huge issue in SharePoint. Luckily, some workarounds exist. A long time ago, I had blogged about “Implementing Consistent Navigation across Site Collections”. That approach was a no-code solution, and it worked – it gave you consistent navigation across site collections. But it didn’t work in a security trimmed fashion! i.e., if I don’t have access to Site Collection ‘X’, it would still show me a link to ‘X’. Well, this project gets around that issue.

    Simply deploy this project, and it’ll give you a WebPart. You can use that WebPart as either a webpart or as a server control dropped via SharePoint Designer, and it will give you Security Trimmed Cross Site Collection Navigation. The code has been written for SP2010, but it will work in SP2007 with the help of http://spwcfsupport.codeplex.com .

    What do I need to do to make it work? I’m glad you asked! Simple! Deploy the .wsp (which you can download here). This will give you a site collection feature called “Winsmarts Cross Site Collection Navigation” as shown below. Go ahead and activate it, and this will give you a WebPart called “Winsmarts Navigation Web Part” as shown below. Just drop this WebPart on your page, and it will show you all site collections that the currently logged in user has access to. Really it’s that easy! This is shown below.

    In the above example, I have two site collections that I created at /sites/SiteCollection1 and /sites/SiteCollection2. The navigation shows the titles. You see some extraneous crap as well, you might want to clean that – I’ll talk about that in a minute.

    What? You’re running into problems? If the problem you’re running into is that you are prompted to login three times, and then it shows a blank webpart that says “Loading your applications ..” and then craps out, then most probably you’re using a different authentication scheme. Behind the scenes I use a custom WCF service to perform this job.
    OOTB, I’ve set it to work with NTLM, but if you need to make it work with alternate authentication schemes such as forms based auth or client side certs, you will need to edit the %14%\ISAPI\Winsmarts.CrossSCNav\web.config file, specifically this section:

        <bindings>
          <webHttpBinding>
            <binding name="customWebHttpBinding">
              <security mode="TransportCredentialOnly">
                <transport clientCredentialType="Ntlm"/>
              </security>
            </binding>
          </webHttpBinding>
        </bindings>

    - For Kerberos, change the “clientCredentialType” to “Windows”.
    - For Forms auth, remove that transport line.
    - For client certs – well, that’s a bit more involved, but it’s just web.config changes – hit a good book on WCF or hire me for a billion trillion $. But fair warning, I might be too busy to help immediately.

    If you’re running into a different problem, please leave a comment below, but the code is pretty rock solid, so .. hmm .. check what you’re doing! BTW, I don’t make any guarantee/warranty on this – if this code makes you sterile, unpopular, bad hairstyle, anything else, that is your problem!

    But, there are some known issues:

    - I wrote this as a concept – you can easily extend it to be more flexible. For example, hierarchical nav, horizontal nav, jazzy effects with jQuery or Silverlight – all those are possible very very easily.
    - This webpart is not smart enough to co-exist with another instance of itself on the same page. I can easily extend it to do so, which I will do in my spare(!?) time!

    Okay good! But that’s not all! As you can see, just dropping the WebPart may show you many extraneous site collections, or maybe you want to restrict which site collections are shown, or exclude a certain site collection from the navigation. To support that, I created a property on the WebPart called “UrlMatchPattern”, which is a regex expression you specify to trim the results :). So, just edit the WebPart, and specify a string property of “http://sp2010/sites/” as shown below. Note that you can put in whatever regex expression you want! So go crazy, I don’t care! And this gives you a cleaner look.

    w00t! Enjoy!
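    To make the UrlMatchPattern idea concrete, here is a small illustrative sketch (not taken from the project's source) of how a regex property like that could be used to trim a set of site collection URLs; the class name, the public field standing in for the WebPart property, and the sample URLs are all hypothetical.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text.RegularExpressions;

        class NavigationTrimmer
        {
            // Hypothetical stand-in for the WebPart's UrlMatchPattern property.
            public string UrlMatchPattern = ".*";

            // Keeps only the site collection URLs that match the configured pattern.
            public IEnumerable<string> Trim(IEnumerable<string> accessibleSiteCollections)
            {
                var pattern = new Regex(UrlMatchPattern, RegexOptions.IgnoreCase);
                return accessibleSiteCollections.Where(url => pattern.IsMatch(url));
            }
        }

        // Example usage: only site collections under /sites/ are kept.
        // var trimmer = new NavigationTrimmer { UrlMatchPattern = "http://sp2010/sites/" };
        // var links = trimmer.Trim(new[] { "http://sp2010/", "http://sp2010/sites/SiteCollection1" });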

    Read the article

  • Managing Database Clusters - A Whole Lot Simpler

    - by mat.keep(at)oracle.com
    Clustered computing brings with it many benefits: high performance, high availability, scalable infrastructure, etc. But it also brings with it more complexity. Why? Well, by its very nature, there are more "moving parts" to monitor and manage – from physical, virtual and logical hosts, to fault detection and failover software, to redundant networking components – the list goes on. And a cluster that isn't effectively provisioned and managed will cause more downtime than the standalone systems it is designed to improve upon. Not so great....

    When it comes to the database industry, analysts already estimate that 50% of a typical database's Total Cost of Ownership is attributable to staffing and downtime costs. These costs will only increase if a database cluster is too hard to properly administer.

    Over the past 9 months, monitoring and management has been a major focus in the development of the MySQL Cluster database, and on Tuesday 12th January, the product team will be presenting the output of that development in a new webinar. Even if you can't make the date, it is still worth registering so you will receive automatic notification when the on-demand replay is available. In the webinar, the team will cover:

    - NDBINFO: released with MySQL Cluster 7.1, NDBINFO presents real-time status and usage statistics, providing developers and DBAs with a simple means of pro-actively monitoring and optimizing database performance and availability (a small query sketch follows at the end of this post).
    - MySQL Cluster Manager (MCM): available as part of the commercial MySQL Cluster Carrier Grade Edition, MCM simplifies the creation and management of MySQL Cluster by automating common management tasks, delivering higher administration productivity and enhancing cluster agility. Tasks that used to take 46 commands can be reduced to just one!
    - MySQL Cluster Advisors & Graphs: part of the MySQL Enterprise Monitor and available in the commercial MySQL Cluster Carrier Grade Edition, the Enterprise Advisor includes automated best practice rules that alert on key performance and availability metrics from MySQL Cluster data nodes.

    You'll also learn how you can get started evaluating and using all of these tools to simplify MySQL Cluster management. This session will last around an hour and will include interactive Q&A throughout. You can learn more about MySQL Cluster Manager from this whitepaper and on-line demonstration. You can also download the packages from eDelivery (just select "MySQL Database" as the product pack, select your platform, click "Go" and then scroll down to get the software).

    While managing clusters will never be easy, the webinar will show you how it just got a whole lot simpler!
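    As a rough illustration of the kind of monitoring NDBINFO enables, the sketch below (not from the article) queries one of the ndbinfo status tables from C# using MySQL Connector/Net. The connection string is made up, and the choice of the memoryusage table is an assumption based on the MySQL Cluster 7.1 documentation, so check the tables available in your release.

        using System;
        using MySql.Data.MySqlClient;

        class NdbInfoSample
        {
            static void Main()
            {
                // Hypothetical connection string - point it at a mysqld attached to your cluster.
                var connectionString = "server=localhost;user=monitor;password=secret;database=ndbinfo";

                using (var connection = new MySqlConnection(connectionString))
                {
                    connection.Open();

                    // memoryusage is one of the ndbinfo status tables documented for MySQL Cluster 7.1.
                    using (var command = new MySqlCommand("SELECT * FROM ndbinfo.memoryusage", connection))
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            for (int i = 0; i < reader.FieldCount; i++)
                            {
                                Console.Write("{0}={1}  ", reader.GetName(i), reader.GetValue(i));
                            }
                            Console.WriteLine();
                        }
                    }
                }
            }
        }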

    Read the article

  • Top 31 Favorite Features in Windows Server 2012

    - by KeithMayer
    Over the past month, my fellow IT Pro Technical Evangelists and I have authored a series of articles about our Top 31 Favorite Features in Windows Server 2012. Now that our series is complete, I’m providing a clickable index below of all of the articles in the series for your convenience, just in case you perhaps missed any of them when they were first released. Hope you enjoy our Favorite Features in Windows Server 2012!

    Top 31 Favorite Features in Windows Server 2012
    - The Cloud OS Platform by Kevin Remde
    - Server Manager in Windows Server 2012 by Brian Lewis
    - Feel the Power of PowerShell 3.0 by Matt Hester
    - Live Migrate Your VMS in One Line of PowerShell by Keith Mayer
    - Windows Server 2012 and Hyper-V Replica by Kevin Remde
    - Right-size IT Budgets with “Storage Spaces” by Keith Mayer
    - Yes, there is an “I” in Team – the NIC Team! by Kevin Remde
    - Hyper-V Network Virtualization by Keith Mayer
    - Get Happy over the FREE Hyper-V Server 2012 by Matt Hester
    - Simplified BranchCache in Windows Server 2012 by Brian Lewis
    - Getting Snippy with PowerShell 3.0 by Matt Hester
    - How to Get Unbelievable Data Deduplication Results by Chris Henley of Veeam
    - Simplified VDI Configuration and Management by Brian Lewis
    - Taming the New Task Manager by Keith Mayer
    - Improve File Server Resiliency with ReFS by Keith Mayer
    - Simplified DirectAccess by Sumeeth Evans
    - SMB 3.0 – The Glue in Windows Server 2012 by Matt Hester
    - Continuously Available File Shares by Steven Murawski of Edgenet
    - Server Core - Improved Taste, Less Filling, More Uptime by Keith Mayer
    - Extend Your Hyper-V Virtual Switch by Kevin Remde
    - To NIC or to Not NIC Hardware Requirements by Brian Lewis
    - Simplified Licensing and Server Versions by Kevin Remde
    - I Think, Therefore IPAM! by Kevin Remde
    - Windows Server 2012 and the RSATs by Kevin Remde
    - Top 3 New Tricks in the Active Directory Admin Center by Keith Mayer
    - Dynamic Access Control by Brian Lewis
    - Get the Gremlin out of Your Active Directory Virtualized Infrastructure by Matt Hester
    - Scoping out the New DHCP Failover by Keith Mayer
    - Gone in 8 Seconds – The New CHKDSK by Matt Hester
    - New Remote Desktop Services (RDS) by Brian Lewis
    - No Better Time Than Now to Choose Hyper-V by Matt Hester

    What’s Next? Keep Learning!
    Want to learn more about Windows Server 2012 and Hyper-V Server 2012? Want to prepare for certification on Windows Server 2012? Do It: Join our Windows Server 2012 “Early Experts” Challenge online peer study group for FREE at http://earlyexperts.net. You’ll get FREE access to video-based lectures, structured study materials and hands-on lab activities to help you study and prepare! Along the way, you’ll be part of an IT Pro community of over 1,000+ IT Pros that are all helping each other learn Windows Server 2012!

    What are Your Favorite Features?
    Do you have a Favorite Feature in Windows Server 2012 that we missed in our list above? Feel free to share your favorites in the comments below!
    Keith

    Build Your Lab! Download Windows Server 2012
    Don’t Have a Lab? Build Your Lab in the Cloud with Windows Azure Virtual Machines
    Want to Get Certified? Join our Windows Server 2012 "Early Experts" Study Group

    Read the article

  • SQL SERVER – What is MDS? – Master Data Services in Microsoft SQL Server 2008 R2

    - by pinaldave
    What is MDS? Master Data Services helps enterprises standardize the data people rely on to make critical business decisions. With Master Data Services, IT organizations can centrally manage critical data assets company wide and across diverse systems, enable more people to securely manage master data directly, and ensure the integrity of information over time. (Source: Microsoft)

    Today I will be talking about the same subject at Microsoft TechEd India. If you want to learn how to standardize your data and apply business rules to validate data, you must attend my session. MDS is a very interesting concept, and I will cover it in a super short but very interesting 10 quick slides. I will make sure that in the very first 20 minutes you understand the following topics:

    - Introduction to Master Data Management
    - What is Master Data and Challenges
    - MDM Challenges and Advantage
    - Microsoft Master Data Services
    - Benefits and Key Features
    - Uses of MDS
    - Capabilities
    - Key Features of MDS

    This slide deck will be followed by an approximately 30 minute demo that tells the story of entities, hierarchies, versions, security, consolidation and collections. I will tell this story keeping business rules at the center. We take one business rule, which starts as a simple validation rule, and make it much more complex and yet very useful to the product. I will also demonstrate a few real-life scenarios where I will be talking about MDS and its usage. Do not miss this session. At the end of the session a book will be awarded to the best participant.

    My session details:
    Session: Master Data Services in Microsoft SQL Server 2008 R2
    Date: April 12, 2010
    Time: 2:30pm-3:30pm

    SQL Server Master Data Services will ship with SQL Server 2008 R2 and will improve Microsoft’s platform appeal. This session provides an in-depth demonstration of MDS features and highlights important usage scenarios. Master Data Services enables consistent decision making by allowing you to create, manage and propagate changes from a single master view of your business entities. Also, the Master Data hub, which is a vital component of MDS, helps ensure reporting consistency across systems and deliver faster, more accurate results across the enterprise. We will talk about establishing the basis for a centralized approach to defining, deploying, and managing master data in the enterprise.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Business Intelligence, Data Warehousing, MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology Tagged: TechEd, TechEdIn

    Read the article

  • Bye Bye Year of the Dragon, Hello BPM

    - by Ajay Khanna
    As 2012 fades and we usher in a New Year, let’s look back at some of the hottest BPM trends and those we’ll be seeing more of in the coming months. BPM is as much about people as it is about technology. As people adopt new ways of engagement, new channels of communication and new devices to interact, the changes are reflected in BPM practices. As Social and Mobile have become an integral part of our personal and professional lives, we’ll see tighter integration of social and mobile with BPM, and more use cases emerging for smarter process management in 2013. And with products and services becoming less differentiated, organizations will strive to differentiate on Customer Experience. Concepts like Pace Layered Architecture and Dynamic Case Management will provide more flexibility and agility to IT groups and knowledge workers. Take a look at some of these capabilities we showcased (see video) at Oracle OpenWorld 2012.

    Some of these trends will continue to gain momentum in 2013:

    Social networks and social media have provided a new way for businesses to engage with customers. A prospect is likely to reach out to their social network before making any purchase. Companies are increasingly engaging with customers in social networks to influence their purchasing decisions, as well as listening to customers via tools like sentiment analysis to see what customers think about a particular product or process. These insights are valuable as companies look to improve their processes. Inside organizations, workers are using social tools to engage with each other to design new products and processes. Social collaboration tools are being used to resolve issues where an employee needs consultation to reach a decision. Oracle BPM Suite includes social interaction as an integral part of its process design and work management to empower today’s business users.

    Ubiquitous smart mobile devices are trending as a tool of choice for many workers. Many companies are adopting the policy of “Bring Your Own Device,” and the device of choice is a tablet. Devices like smart phones and tablets not only provide mobility to workers and customers, but they also provide additional important information – the context. By integrating the mobile context (location, photos, and preferences) into your processes, organizations can make much more informed decisions, as well as offer more personalized service to customers. Using Oracle ADF Mobile, you can easily create user interfaces for mobile devices and also capture location data for process execution.

    Customer experience was at the forefront of trending topics in 2012. Organizations are trying to understand their customers better and offer them more personalized and differentiated services.
    Customer experience is paramount when companies design sales and support processes. Companies are looking to BPM to consistently and efficiently orchestrate customer facing processes across disparate systems, departments and channels of communication. Oracle BPM Suite provides just the right capabilities for organizations to design and deliver an excellent customer experience.

    Pace Layered Architecture strategy is gaining traction as a way to maximize agility and minimize disruption in organizations. It provides a framework to manage the evolution of your information system when different pieces of it are changing at different rates and need to be updated independently of one another. Oracle Fusion Middleware and Oracle BPM Suite are designed with this in mind. The database layer, integration layer, application layer, and process layer should not be required to change at the same time. Most business changes to policy or process can be made at the process layer without disrupting the whole infrastructure. By understanding the type of change needed at a particular level, organizations can become much more agile and efficient.

    Adaptive Case Management proposes more flexibility to manage processes or cases that do not follow a structured process flow. In such situations, the knowledge worker managing the case needs to evaluate what step should occur next because the sequence of steps can’t be predetermined. Another characteristic is that it requires much more collaboration than a straight-through process. As simple processes become automated, and customers adopt more and more self-service, the cases that reach the case workers are much more complex and need more investigation. Oracle BPM Suite includes comprehensive adaptive case management capability to manage such unstructured and complex processes.

    Smart BPM, or making your BPM intelligent, has been the holy grail for BPM practitioners who imagined that one day BPM would become one with Business Intelligence, Business Activity Monitoring and Complex Event Processing, making it much more responsive and helpful in organizational decision making. In 2013, organizations will begin to deploy these intelligent BPM solutions. Oracle offers an integrated solution that brings together the powerful functionality of BI, BAM, event processing, and Real Time Decisions to help organizations create smart process based solutions.

    In order to help customers reach their BPM goals faster and remove risks associated with BPM initiatives, Oracle has introduced Oracle Process Accelerators, pre-built best practices applications built on Oracle BPM Suite that are fully production grade and ready to deploy.

    These are exciting times for BPM practitioners and there is so much to look forward to in 2013. We wish you a very happy and prosperous New Year 2013. Happy BPMing!

    Read the article

  • C#/.NET Little Wonders: The Concurrent Collections (1 of 3)

    - by James Michael Hare
    Once again we consider some of the lesser known classes and keywords of C#. In the next few weeks, we will discuss the concurrent collections and how they have changed the face of concurrent programming. This week’s post will begin with a general introduction and discuss the ConcurrentStack<T> and ConcurrentQueue<T>. Then in the following post we’ll discuss the ConcurrentDictionary<TKey,TValue> and ConcurrentBag<T>. Finally, we shall close on the third post with a discussion of the BlockingCollection<T>. For more of the "Little Wonders" posts, see the index here.

    A brief history of collections

    In the beginning was the .NET 1.0 Framework. And out of this framework emerged the System.Collections namespace, and it was good. It contained all the basic things a growing programming language needs, like the ArrayList and Hashtable collections. The main problem, of course, with these original collections is that they held items of type object, which means you had to be disciplined enough to use them correctly or you could end up with runtime errors if you got an object of a type you weren't expecting.

    Then came .NET 2.0 and generics, and our world changed forever! With generics the C# language finally got an equivalent of the very powerful C++ templates. As such, the System.Collections.Generic namespace was born and we got type-safe versions of all our favorite collections. The List<T> succeeded the ArrayList and the Dictionary<TKey,TValue> succeeded the Hashtable, and so on. The new versions of the library were not only safer because they checked types at compile-time, in many cases they were more performant as well. So much so that it's Microsoft's recommendation that the original System.Collections collections only be used for backwards compatibility.

    So we as developers came to know and love the generic collections and took them into our hearts and embraced them. The problem is, thread safety in both the original collections and the generic collections can be problematic, for very different reasons.

    Now, if you are only doing single-threaded development you may not care – after all, no locking is required. Even if you do have multiple threads, if a collection is “load-once, read-many” you don’t need to do anything to protect that container from multi-threaded access, as illustrated below:

        public static class OrderTypeTranslator
        {
            // because this dictionary is loaded once before it is ever accessed, we don't need to
            // synchronize multi-threaded read access
            private static readonly Dictionary<string, char> _translator = new Dictionary<string, char>
                {
                    {"New", 'N'},
                    {"Update", 'U'},
                    {"Cancel", 'X'}
                };

            // the only public interface into the dictionary is for reading, so inherently thread-safe
            public static char? Translate(string orderType)
            {
                char charValue;
                if (_translator.TryGetValue(orderType, out charValue))
                {
                    return charValue;
                }

                return null;
            }
        }

    Unfortunately, most of our computer science problems cannot get by with just single-threaded applications or with multi-threading in a load-once manner. Looking at today's trends, it's clear to see that computers are not so much getting faster because of faster processor speeds -- we've nearly reached the limits we can push through with today's technologies -- but more because we're adding more cores to the boxes.
    With this new hardware paradigm, it is even more important to use multi-threaded applications to take full advantage of parallel processing to achieve higher application speeds. So let's look at how to use collections in a thread-safe manner.

    Using historical collections in a concurrent fashion

    The early .NET collections (System.Collections) had a Synchronized() static method that could be used to wrap the early collections to make them completely thread-safe. This paradigm was dropped in the generic collections (System.Collections.Generic) because having a synchronized wrapper resulted in atomic locks for all operations, which could prove overkill in many multithreading situations. Thus the paradigm shifted to having the user of the collection specify their own locking, usually with an external object:

        public class OrderAggregator
        {
            private static readonly Dictionary<string, List<Order>> _orders = new Dictionary<string, List<Order>>();
            private static readonly object _orderLock = new object();

            public void Add(string accountNumber, Order newOrder)
            {
                List<Order> ordersForAccount;

                // a complex operation like this should all be protected
                lock (_orderLock)
                {
                    if (!_orders.TryGetValue(accountNumber, out ordersForAccount))
                    {
                        _orders.Add(accountNumber, ordersForAccount = new List<Order>());
                    }

                    ordersForAccount.Add(newOrder);
                }
            }
        }

    Notice how we’re performing several operations on the dictionary under one lock. With the Synchronized() static methods of the early collections, you wouldn’t be able to specify this level of locking (a more macro-level). So in the generic collections, it was decided that if a user needed synchronization, they could implement their own locking scheme instead so that they could provide synchronization as needed.

    The need for better concurrent access to collections

    Here’s the problem: it’s relatively easy to write a collection that locks itself down completely for access, but anything more complex than that can be difficult and error-prone to write, much less to make perform efficiently! For example, what if you have a Dictionary that has frequent reads but infrequent updates? Do you want to lock down the entire Dictionary for every access? This would be overkill and would prevent concurrent reads. In such cases you could use something like a ReaderWriterLockSlim, which allows for multiple readers in a lock, and then once a writer grabs the lock it blocks all further readers until the writer is done (in a nutshell). This is all very complex stuff to consider.

    Fortunately, this is where the Concurrent Collections come in. The Parallel Computing Platform team at Microsoft went through great pains to determine how to make a set of concurrent collections that would have the best performance characteristics for general case multi-threaded use.

    Now, as in all things involving threading, you should always make sure you evaluate all your container options based on the particular usage scenario and the degree of parallelism you wish to achieve. This article should not be taken to mean that these collections are always superior to the generic collections. Each fills a particular need for a particular situation. Understanding what each container is optimized for is key to the success of your application, whether it be single-threaded or multi-threaded.
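    For contrast, here is a speculative sketch (mine, not the author's) of how a similar aggregation could be written with the ConcurrentDictionary<TKey,TValue> and ConcurrentBag<T> types that this series covers later, assuming the per-account order list does not need to preserve insertion order. The Order type is borrowed from the example above.

        using System.Collections.Concurrent;

        public class ConcurrentOrderAggregator
        {
            private static readonly ConcurrentDictionary<string, ConcurrentBag<Order>> _orders =
                new ConcurrentDictionary<string, ConcurrentBag<Order>>();

            public void Add(string accountNumber, Order newOrder)
            {
                // GetOrAdd returns whichever bag actually ended up stored for the key,
                // so adding to it is safe without any external lock.
                var ordersForAccount = _orders.GetOrAdd(accountNumber, _ => new ConcurrentBag<Order>());
                ordersForAccount.Add(newOrder);
            }
        }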
    General points to consider with the concurrent collections

    The MSDN documentation points out that the concurrent collections all support the ICollection interface. However, since the collections are already synchronized, the IsSynchronized property always returns false, and SyncRoot always returns null. Thus you should not attempt to use these properties for synchronization purposes.

    Note that the concurrent collections may also have different operations than the traditional data structures you are used to. Now you may ask why they did this, but it was done out of necessity to keep operations safe and atomic. For example, in order to do a Pop() on a stack you have to know the stack is non-empty, but between the time you check the stack’s IsEmpty property and then do the Pop() another thread may have come in and made the stack empty! This is why some of the traditional operations have been changed to make them safe for concurrent use.

    In addition, some properties and methods in the concurrent collections achieve concurrency by creating a snapshot of the collection, which means that some operations that were traditionally O(1) may now be O(n) in the concurrent models. I’ll try to point these out as we talk about each collection so you can be aware of any potential performance impacts.

    Finally, all the concurrent containers are safe for enumeration even while being modified, but some of the containers support this in different ways (snapshot vs. dirty iteration). Once again I’ll highlight how thread-safe enumeration works for each collection.

    ConcurrentStack<T>: The thread-safe LIFO container

    The ConcurrentStack<T> is the thread-safe counterpart to the System.Collections.Generic.Stack<T>, which as you may remember is your standard last-in-first-out container. If you think of algorithms that favor stack usage (for example, depth-first searches of graphs and trees) then you can see how using a thread-safe stack would be of benefit.

    The ConcurrentStack<T> achieves thread-safe access by using System.Threading.Interlocked operations. This means that multi-threaded access to the stack requires no traditional locking and is very, very fast!

    For the most part, the ConcurrentStack<T> behaves like its Stack<T> counterpart with a few differences:

    - Pop() was removed in favor of TryPop(), which returns true if an item existed and was popped and false if the stack was empty.
    - PushRange() and TryPopRange() were added, allowing you to push multiple items and pop multiple items atomically.
    - Count takes a snapshot of the stack and then counts the items. This means it is an O(n) operation; if you just want to check for an empty stack, call IsEmpty instead, which is O(1).
    - ToArray() and GetEnumerator() both also take snapshots. This means that iteration over a stack will give you a static view at the time of the call and will not reflect updates.

    Pushing on a ConcurrentStack<T> works just like you’d expect, except for the aforementioned PushRange() method that was added to allow you to push a range of items concurrently:

        var stack = new ConcurrentStack<string>();

        // adding to stack is much the same as before
        stack.Push("First");

        // but you can also push multiple items in one atomic operation (no interleaves)
        stack.PushRange(new [] { "Second", "Third", "Fourth" });

    For looking at the top item of the stack (without removing it), the Peek() method has been removed in favor of a TryPeek().
    This is because in order to do a peek the stack must be non-empty, but between the time you check for empty and the time you execute the peek the stack contents may have changed. Thus the TryPeek() was created to be an atomic check for empty, and then peek if not empty:

        // to look at top item of stack without removing it, can use TryPeek.
        // Note that there is no Peek(), this is because you need to check for empty first. TryPeek does.
        string item;
        if (stack.TryPeek(out item))
        {
            Console.WriteLine("Top item was " + item);
        }
        else
        {
            Console.WriteLine("Stack was empty.");
        }

    Finally, to remove items from the stack, we have TryPop() for single items and TryPopRange() for multiple items. Just like the TryPeek(), these operations replace Pop() since we need to ensure atomically that the stack is non-empty before we pop from it:

        // to remove items, use TryPop or TryPopRange to get multiple items atomically (no interleaves)
        if (stack.TryPop(out item))
        {
            Console.WriteLine("Popped " + item);
        }

        // TryPopRange will only pop up to the number of spaces in the array, the actual number popped is returned.
        var poppedItems = new string[2];
        int numPopped = stack.TryPopRange(poppedItems);

        foreach (var theItem in poppedItems.Take(numPopped))
        {
            Console.WriteLine("Popped " + theItem);
        }

    Finally, note that as stated before, GetEnumerator() and ToArray() get a snapshot of the data at the time of the call. That means if you are enumerating the stack you will get a snapshot of the stack at the time of the call. This is illustrated below:

        var stack = new ConcurrentStack<string>();

        // adding to stack is much the same as before
        stack.Push("First");

        var results = stack.GetEnumerator();

        // but you can also push multiple items in one atomic operation (no interleaves)
        stack.PushRange(new [] { "Second", "Third", "Fourth" });

        while (results.MoveNext())
        {
            Console.WriteLine("Stack only has: " + results.Current);
        }

    The only item that will be printed out in the above code is "First", because the snapshot was taken before the other items were added. This may sound like an issue, but it’s really for safety and is more correct. You don’t want to enumerate a stack and have half a view of the stack before an update and half a view of the stack after an update, after all. In addition, note that this is still thread-safe, whereas iterating through a non-concurrent collection while updating it in the old collections would cause an exception.

    ConcurrentQueue<T>: The thread-safe FIFO container

    The ConcurrentQueue<T> is the thread-safe counterpart of the System.Collections.Generic.Queue<T> class. The concurrent queue uses an underlying list of small arrays and lock-free System.Threading.Interlocked operations on the head and tail arrays. Once again, this allows us to do thread-safe operations without the need for heavy locks!

    The ConcurrentQueue<T> (like the ConcurrentStack<T>) has some departures from the non-concurrent counterpart. Most notably:

    - Dequeue() was removed in favor of TryDequeue(), which returns true if an item existed and was dequeued and false if the queue was empty.
    - Count does not take a snapshot; it subtracts the head and tail index to get the count. This results overall in O(1) complexity, which is quite good. It’s still recommended, however, that for empty checks you call IsEmpty instead of comparing Count to zero.
    - ToArray() and GetEnumerator() both take snapshots, so iteration over a queue will give you a static view at the time of the call and will not reflect updates.
    The Enqueue() method on the ConcurrentQueue<T> works much the same as on the generic Queue<T>:

        var queue = new ConcurrentQueue<string>();

        // adding to queue is much the same as before
        queue.Enqueue("First");
        queue.Enqueue("Second");
        queue.Enqueue("Third");

    For front item access, the TryPeek() method must be used to attempt to see the first item of the queue. There is no Peek() method since, as you’ll remember, we can only peek on a non-empty queue, so we must have an atomic TryPeek() that checks for empty and then returns the first item if the queue is non-empty:

        // to look at first item in queue without removing it, can use TryPeek.
        // Note that there is no Peek(), this is because you need to check for empty first. TryPeek does.
        string item;
        if (queue.TryPeek(out item))
        {
            Console.WriteLine("First item was " + item);
        }
        else
        {
            Console.WriteLine("Queue was empty.");
        }

    Then, to remove items you use TryDequeue(). Once again this is for the same reason we have TryPeek() and not Peek():

        // to remove items, use TryDequeue. If queue is empty returns false.
        if (queue.TryDequeue(out item))
        {
            Console.WriteLine("Dequeued first item " + item);
        }

    Just like the concurrent stack, the ConcurrentQueue<T> takes a snapshot when you call ToArray() or GetEnumerator(), which means that subsequent updates to the queue will not be seen when you iterate over the results. Thus once again the code below will only show the first item, since the other items were added after the snapshot:

        var queue = new ConcurrentQueue<string>();

        // adding to queue is much the same as before
        queue.Enqueue("First");

        var iterator = queue.GetEnumerator();

        queue.Enqueue("Second");
        queue.Enqueue("Third");

        // only shows First
        while (iterator.MoveNext())
        {
            Console.WriteLine("Dequeued item " + iterator.Current);
        }

    Using collections concurrently

    You’ll notice in the examples above I stuck to using single-threaded examples so as to make them deterministic and the results obvious. Of course, if we used these collections in a truly multi-threaded way the results would be less deterministic, but would still be thread-safe and with no locking on your part required!

    For example, say you have an order processor that takes an IEnumerable<Order> and handles each order in a multi-threaded fashion, then groups the responses together in a concurrent collection for aggregation. This can be done easily with the TPL’s Parallel.ForEach():

        public static IEnumerable<OrderResult> ProcessOrders(IEnumerable<Order> orderList)
        {
            var proxy = new OrderProxy();
            var results = new ConcurrentQueue<OrderResult>();

            // notice that we can process all these in parallel and put the results
            // into our concurrent collection without needing any external locking!
            Parallel.ForEach(orderList,
                order =>
                {
                    var result = proxy.PlaceOrder(order);

                    results.Enqueue(result);
                });

            return results;
        }

    Summary

    Obviously, if you do not need multi-threaded safety, you don’t need to use these collections, but when you do need multi-threaded collections these are just the ticket! The plethora of features (I always think of the movie The Three Amigos when I say plethora) built into these containers and the amazing way they achieve thread-safe access in an efficient manner is wonderful to behold.
Stay tuned next week where we’ll continue our discussion with the ConcurrentBag<T> and the ConcurrentDictionary<TKey,TValue>. For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this wonderful whitepaper by the Microsoft Parallel Computing Platform team here.   Tweet Technorati Tags: C#,.NET,Concurrent Collections,Collections,Multi-Threading,Little Wonders,BlackRabbitCoder,James Michael Hare

    Read the article

  • Evolution Of High Definition TV Viewing

    - by Gopinath
    The following guest post is written by Rob, who is also blogging on entertainment technology topics on iwantsky.com Gone are the days when you need to squint to be able to see the emotions on the faces of Humphrey Bogart and Ingrid Bergman as the lovers bid each other adieu in the classic film Casablanca. These days, watching an ordinary ant painstakingly carry a leaf in Animal Planet can be an exhilarating experience as you get to see not only the slightest movement but also the demarcation line between the insect’s head, thorax and abdomen. The crystal clear imagery was made possible by the sharp minds and the tinkering hands of the scientists that have designed the modern world’s HDTV. What is HDTV and what makes people so agog to have this new innovation in TV watching? HDTV stands for High Definition TV. Television viewing has indeed made a big leap. From the grainy black and whites, TV viewing had moved to colored TVs, progressed to SD TVs and now to HDTV. HDTV is the emerging trend in TV viewing as it delivers bigger and clearer pictures and better audio. Viewers can have a cinema-like TV viewing experience right in the comforts of their own home. With HDTV the viewer is allowed to have a better viewing range. With Standard (SD) TV, the viewer has to be at a distance that is from 3 to 6 times the size of the screen. HDTV allows the viewer to enjoy sharper and clearer images as it is possible to sit at a distance that is 1.5 or 3 times the size of the screen without noticing any image pixilation. Although HDTV appears to be a fairly new innovation, this system has actually existed in various forms years ago. Development of the HDTV was started in Europe as early as 1940s. However, the NTSC and the PAL/SECAM, the two analog TV standards became dominant and became popular worldwide. The analog TV was replaced by the digital TV platform in the 1990s. Even during the analog era, attempts have been made to develop HDTV. Japan has come out with MUSE system. However, due to channel bandwidth requirement concerns, the program was shelved. The entry of four organizations into the HDTV market spurred the development of a beneficial coalition. The AT&T, ATRC, MIT and Zenith HDTV combined forces. In 1993, a Grand Alliance was formed. This group is composed of researchers and HDTV manufacturers. A common standard for the broadcast system of HDTV was developed. In 1995, the system was tested and found successful. With the higher screen resolution of HDTV, viewing has never been more enjoyable. [Image courtesy: samsung] This article titled,Evolution Of High Definition TV Viewing, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • Windows Phone 7 Design using Expression Blend - Resources

    - by Nikita Polyakov
    I’ve been doing a series of talks across Florida regarding Windows Phone 7 Design using Microsoft Expression Blend 4. I discuss the WP7 phone and application experience, and show how to use the Expression Blend toolset to effectively design such apps. The next presentation, on 5/4/2010 at 6:30PM EST, will be a webcast over LiveMeeting for the Ft. Lauderdale Online group. Registration and the LiveMeeting link are both here: http://www.fladotnet.com/Reg.aspx?EventID=459 [I will post a link if it’s recorded]

    Here are the resources from my presentations:

    - The biggest source is the Windows Phone UI and Design Language video from MIX10
    - Windows Phone 7 Design Guide as it’s found on the WP7 Dev Home Page
    - Study the Silverlight Mobile Tutorials on the official Silverlight website

    I will be blogging a separate entry for a new demo app that will showcase the elements I presented. I suggest you actually watch all of the MIX videos about SL and Design as a great primer to get you thinking the WP7 way.

    A lot is happening with WP7Dev and it’s just the beginning! So watch these Twitter accounts and blogs:

    - @Ckindel - Charlie Kindel - WP7 Dev Head - http://blogs.msdn.com/ckindel
    - @WP7Dev - Official Dev Twitter
    - @WP7 - Official WP7 Twitter
    - Peter Torr - http://blogs.msdn.com/ptorr
    - Mike Harsh - http://blogs.msdn.com/mharsh
    - Shawn Oster - http://www.shawnoster.com

    Other worthwhile mentions are my local friends speaking and blogging about Windows Phone 7:

    - Bill Reiss is doing great presentations on building games with XNA for Windows Phone 7. Be on the lookout for those around Florida. Bill is a Silverlight MVP and has a legacy of XNA and Silverlight games, see his site.
    - Kevin Wolf aka ByteMaster is a Device Application Developer MVP with tremendous experience building mobile applications. He has developed WinMo-GF, a multi-platform gaming framework.

    Get these tools and get creating! You will need the following components installed in this order:

    - Expression Blend 4 Beta
    - Windows Phone Developer Tools
    - Microsoft Expression Blend Add-in Preview for Windows Phone
    - Microsoft Expression Blend SDK Preview for Windows Phone

    Want more training? Don’t forget that Channel 9 has complete walkthroughs of their WP7 Training Kit posted online.

    PS: To continue with all this design talk, check out Microsoft .toolbox – “Learn to create Silverlight applications using Expression Studio and to apply fundamental design principles.” A great website with a lot of design tutorials set up as a wonderful full course on design, all for free, including a great forum community and neat little avatars you can build yourself.

    Read the article

  • Suggestions on switching from lamp based web design-development to game design-development

    - by Sandeepan Nath
    I have around 2.5 years of experience as a web developer cum designer working mainly on the LAMP platform. Now, I want to try out game development (of the likes of First Person Shooter games like Call of Duty (COD)). It is one of my dreams to some day succeed in making a profitable, popular, commercial game of this type. However, I have never done any kind of business nor even freelancing yet even in the web domain. Okay, first things first, I am just starting and I don't yet have any idea about the technologies, languages, engines (game engines) etc involved in that. I would like this question to be a complete guide for people with similar interests. Best resources for getting hold really fast What would be the best approach to get the basic hold of the domain really fast? Any resource(s) for programmers coming from other domains/experienced in other domains would be the ideal ones for me. E.g., if anybody would ask me some good resource for quickly learning PHP/Mysql, I would suggest books like "How to do everything with PHP & MySql" - because - it introduces all the basics of the domain (not the advanced things which can be later learnt by practice and also a lot by searching in stackoverflow questions) it contains some very nice working projects in the end, which help in applying the skills learnt in the chapters of the book. This is the best way for self learners, I feel. I would appreciate some similar resource which connects all concepts together to get the bigger picture. I have read about C, C++, C#, JAVA being used in game programming but not sure which language to go for (I have previously learnt a little of C and JAVA). I have also read about game engines but there would be various other concepts. Commonly accepted ways of learning Should 3D games like these be tried after 2D games? Are there some commonly accepted ways of learning such kind of games? Like in web development, we should go for frameworks after practising well with basic language, AJAX after getting properly done with simple page-reload processing etc. Apart from these, any useful tips (like language choices etc.) would be much appreciated. Like it is highly recommended to contribute to open source web projects for getting recognition, are there similar open source game projects? Thanks, Sandeepan

    Read the article

  • Deployment Options for AutoVue 20.0 Users

    - by celine.beck
    AutoVue release 20.0 boasts a brand new architecture. As part of this product rearchitecture, AutoVue can now be deployed either as a desktop deployment, to serve the needs of individual users in their personal productivity, or in a Client/Server deployment for those that require connections to enterprise applications / back-end systems. The most common question that we hear from our customers about this new architecture is the following: "Is AutoVue Desktop Version still part of release 20.0 and, if so, what is the difference between AutoVue Desktop Version and the Desktop deployment of AutoVue release 20.0?" A detailed answer to these questions is provided in a very complete article entitled Understanding Deployment Options for AutoVue 19.3 Desktop Version users upgrading to AutoVue 20.0 (note 1058254.1), which was posted on My Oracle Support.

    Is AutoVue Desktop Version still part of AutoVue 20.0?
    Yes, AutoVue Desktop Version 20.0 is still available to customers and partners, as a maintenance release of AutoVue 19.3. As such, it will not contain any of the new capabilities featured in AutoVue release 20.0. All format enhancements and new format support have been added to release 20.0 Desktop Version though.

    What is the difference between AutoVue Desktop Version 20.0 and the Desktop deployment of AutoVue release 20.0?
    The AutoVue 20.0 Desktop deployment works like the AutoVue Desktop Version. It is installed as a standalone product on each user's machine and runs a local instance of AutoVue. The AutoVue 20.0 Desktop deployment includes all new features, formats and performance enhancements included in release 20.0 (walkthrough capability, improved compare, ...).

    What deployment options are available to AutoVue 19.3 Desktop Version customers?
    AutoVue Desktop Version users can evolve at their own pace to the new AutoVue platform. With release 20.0, customers can opt to:
    - Option 1: Stay on AutoVue Desktop Version 20.0
    - Option 2: Migrate to AutoVue and select the desktop deployment method
    - Option 3: Migrate to AutoVue and select the Client/Server deployment method

    What is the Client/Server deployment of AutoVue 20.0?
    The Client/Server deployment has AutoVue installed on a server, to which local client machines connect to access and view documents. The AutoVue 20.0 Client/Server deployment allows users to leverage the new online/offline capabilities in release 20.0 and easily switch between online and offline modes of operation. With the Client/Server deployment, customers also get a complete, open and standards-based set of integration tools that allows them to tie AutoVue to any enterprise application to provide users with a consistent view of data and business objects and expand workflow automation to document-based processes.

    Related articles: AutoVue Release 20.0 Now Available, New Walkthrough Capability in AutoVue 20.0, Watch the AutoVue 20.0 Release Webcast, April 27 at 12pm EST

    Read the article

  • CodePlex Daily Summary for Monday, March 22, 2010

    CodePlex Daily Summary for Monday, March 22, 2010New Projects[Tool] Vczh Non-public DLL Classes Caller: Generate C# code for you to call non-public classes in DLLs very easily.Artefact Animator: Artefact Animator provides an easy to use framework for procedural time-based animations in Silverlight and WPF.cacheroo: Cacheroo is a social networking community that will make it easier for people who love geocaching to get connected.Data Processing Toolkit: An utility app to collected data from different sources (i.e. bugzilla bug reports) in a structured way. We are currently setting up the site. Mo...eXternal SQL Bridge (PHP): The eXternal SQL Bridge (XSB) allows you to bridge two websites together in a secure manner through pre-shared keys. XSB is resilient against repla...'G' - Language to Define Gestures for Touch Based Applications: A cross plat form multi-touch application framework with a language to define gestures. The application is build on Silverlight 4.0 and the languag...IIS Network Diagnostic Tools: Web implementation of "looking glass" like services (ping, traceroute) as HTTP modules for Internet Information Services.Interop Router: This project establishes a communication framework and job dispatcher for a mixed operating system cluster environment.L2 Commander: L2Commander makes it easier for both new and old l2j users to manage your server.You no longer have to waste time on finding the files you need and...MediaHelper: A utility to help clean up empty/unwanted files and folders in your filesystem.mhinze: matt hinze stuffOneMan: Focus on Silverlight and WCF technology.Rss Photo Frame Android Widget: RSS Photo Frame Android Widget permits showing pictures from any RSS feed on your Android device's desktopSingle Web Session: Web Tool Kits Current project provide developer with different tools that help to enhance web site performance, security, and other common functio...Work Item Visualization: Use DGML to visualize and analyze your TFS Work Items. Included is the ability to perform basic risk/impact analysis. It helps answer the question,...New Releases[Tool] Vczh Non-public DLL Classes Caller: Wrapper Coder (beta): Click "<Click Me To Open Assembly File>", WrapperCoder will load the assembly and referenced assembly. Check the non-public classes that you want...APS - Automatic Print Screen: APS 1.0: APS automatizes the tasks of paste the image in Paint and save it after print screen or alt+print screen. Choose directory, name and file extension...BTP Tools: e-Sword generator build 20100321: 1. Modify the indent after subtitle. 2. Add 2 spaces after subtitle.Combres - WebForm & MVC Client-side Resource Combine Library: Combres 2.0: Changes since last version (1.2) Support ignore Combres pipeline in debug mode - see issue #6088 Debug mode generates comment helping identify in...Desafio Office 2010 Brasil: DesafioOutlook: Controlando um robo com o Outlook 2010dylan.NET: dylan.NET v. 9.4: Adding Platform Invocation Services Support, full Managed Pointer Support, Charset,Dllimport,Callconv setting for P/Invoke, MarshalAs for parametersFamily Tree Analyzer: Version 1.3.2.0: Version 1.3.2.0 Add open folder button to IGI Search Form Fixes to Fact Location processing - IGIName renamed to RegionID Fix if Region ID not fou...Fasterflect - A Fast and Simple Reflection API: Fasterflect 2.0: We are pleased to release version 2.0 of Fasterflect, which contains a lot of additions and improvements from the previous version. 
Please refer t...IIS Network Diagnostic Tools: 1.0: Initial public release.Informant: Informant (Desktop) v0.1: This release allows users to send sms messages to 1-Many Groups or 1-Many contacts. It is a very basic release of the application. No styling has b...InfoService: InfoService v1.5 - MPE1 Package: InfoService Release v1.5.0.65 Please read Plugin installation for installation instructions.InfoService: InfoService v1.5 - RAR Package: InfoService Release v1.5.0.65 Please read Plugin installation for installation instructions.L2 Commander: Source Code Link: Where to find our source.ModularCMS: ModularCMS 1.2: Minor bug fixes.NMTools: NMTools-v40b0-20100321-0: The most noticeable aspect of this release is that NMTools is now an independent project. It will no longer tied to OpenSLIM. Nevertheless, OpenSLI...SharePoint LogViewer: SharePoint LogViewer 1.5.3: Log loading performance enhanced. Search text box now has auto complete feature.Single Web Session: Single Web Session: !Single Web Session! <httpModules> <add name="SingleSession" type="SingleWebSession.Model.WebSessionModule, SingleWebSession"/> </httpModules>Sprite Sheet Packer: 2.1 Release: Made a few crucial fixes from 2.0: - Fixed error with paths having spaces. - Fixed error with UI not unlocking. - Fixed NullReferenceException on ...uManage - AD Self-Service Portal: uManage v1.1 (.NET 4.0 RC): Updated Releasev1.1 Adds the primary ability to setup and configure the application through a setup wizard. The setup wizard will continue to evol...VCC: Latest build, v2.1.30321.0: Automatic drop of latest buildVS ChessMania: VS ChessMania V2 March Beta: Second Beta Release with move correction and making application more safe for user. New features will be added soon.WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.9.00: Whats New Added New Toolbar Plugin (By Kent Safransk) 'MediaEmbed' to Include Embed Media from Youtube, Vimeo, etc. Media Embed Plugin Added New ...WeatherBar: WeatherBar 1.0 [No Installation]: Extract the ZIP archive and run WeatherBar.exe. Current release contains some bugs that will be fixed in the next version. Check the Issue Tracker...Work Item Visualization: Release 1.0: This is the initial release of the Work Item Visualization tool. There are no known issues when it comes to the visualization aspects of the tool b...WPF Application Framework (WAF): WPF Application Framework (WAF) 1.0.0.10: Version: 1.0.0.10 (Milestone 10): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requi...WPF AutoComplete TextBox Control: Version 1.2: What's Newadds AutoAppend feature adds a new provider: UrlHistoryDataProvider sample application is updated to reflect the new things Bug Fixe...ZoomBarPlus: V2 (Beta): - Fixed bug: if the active window changed while you were in the middle of a single tap delay, long tap delay, or swipe-repeat, it would continue re...Most Popular ProjectsMetaSharpSavvy DateTimeRawrWBFS ManagerSilverlight ToolkitASP.NET Ajax LibraryMicrosoft SQL Server Product Samples: DatabaseAJAX Control ToolkitLiveUpload to FacebookWindows Presentation Foundation (WPF)Most Active ProjectsLINQ to TwitterRawrOData SDK for PHPjQuery Library for SharePoint Web ServicesDirectQPHPExcelFarseer Physics Enginepatterns & practices – Enterprise LibraryBlogEngine.NETNB_Store - Free DotNetNuke Ecommerce Catalog Module

    Read the article

  • Facebook Sponsored Results: Is It Getting Results?

    - by Mike Stiles
    Social marketers who like to focus on the paid aspect of the paid/earned hybrid Facebook represents may want to keep themselves aware of how the network’s new Sponsored Results ad product is performing. The ads, which appear when a user conducts a search from the Facebook search bar, have only been around a week or so. But the first statistics coming out of them are not bad. Marketer Nanigans says click-through rates on the Sponsored Results have been nearly 23 times better than those of regular Facebook ads. Some click-through rates have even gone over 3%. Just to give you some perspective, a TechCrunch article points out that’s the same kind of click-through rate that was being enjoyed during the go-go dot-com boom of the '90s. The average across the Internet in its entirety is now somewhere around 0.3% on a good day, so a 3% number should be enough to raise an eyebrow. Plus the cost-per-click price is turning up 78% lower than regular Facebook ads, so that should raise the other eyebrow. Marketers have gotten pretty used to being able to buy ads against certain keywords. Most any digital property worth its salt that sells ads offers this, and so does Facebook with its Sponsored Results product. But the unique prize Facebook brings to the table is the ability to also buy based on demographic and interest information gleaned from Facebook user profiles. With almost 950 million users logging in, this is exactly the kind of leveraging of those users conventional wisdom says is necessary for Facebook to deliver on its amazing potential. So how does the Facebook user fit into this? Notorious for finding out exactly where sponsored marketing messages are appearing and training their eyeballs to avoid those areas, will the Facebook user reject these Sponsored Results? Well, Facebook may have found an area in addition to the News Feed where paid elements can’t be avoided and will be tolerated. If users want to read their News Feed, and they do, they’re going to see sponsored posts. Likewise, if they want to search for friends or Pages, and they do, they’re going to see Sponsored Results. The paid results are clearly marked as such. As long as their organic search results are not tainted or compromised, they will continue using search. But something more is going on. The early click-through rate numbers say not only do users not mind seeing these Sponsored Results, they’re finding them relevant enough to click on. And once they click, they seem to like what they find, with a reported 14% higher install rate than Marketplace Ads. It’s early, and obviously the jury is still out. But this is a new social paid marketing opportunity that’s well worth keeping an eye on, and that may wind up hitting the trifecta of being effective for the platform, the consumer, and the marketer.

    Read the article

  • SOA’s People Problem by Bob Rhubart

    - by JuergenKress
    Are reluctant passengers slowing down your SOA train? Based on my conversations with various experts in service-oriented architecture (SOA), the consensus is that SOA tools and technology have achieved a high level of maturity. Some even use the term industrialization to describe the current state of SOA. Given that scenario, one might assume that SOA has been wildly successful for every organization that has adopted its principles. Obviously SOA could not have achieved its current level of maturity and industrialization without having reached a tipping point in the volume of success stories to drive continued adoption. But some organizations continue to struggle with SOA. The problem, according to some experts, has little to do with tools or technologies. “One of the greatest challenges to implementing SOA has nothing to do with the intrinsic complexity behind a SOA technology platform,” says Oracle ACE Luis Augusto Weir, senior Oracle solution director at HCL AXON. “The real difficulty lies in dealing with people and processes from different parts of the business and aligning them to deliver enterprisewide solutions.” What can an organization do to meet that challenge? “Staff the right people,” says Weir. “For example, the role of a SOA architect should be as much about integrating people as it is about integrating systems. Dealing with people from different departments, backgrounds, and agendas is a huge challenge. The SOA architect role requires someone that not only has a sound architectural and technological background but also has charisma and human skills, and can communicate equally well to the business and technical teams.” The SOA architect’s communication skills are instrumental in establishing service orientation as the guiding principle across the organization. “A consistent architecture comprising both business services and IT services can comprehensively redefine the role of IT at the process level,” says Danilo Schmiedel, solution architect at Opitz Consulting. That helps to shift the focus from siloes to services and get SOA on track. To that end, Oracle ACE Director Lonneke Dikmans, a managing partner at Vennster, stresses the importance of replacing individual, uncoordinated projects with a focused program that promotes communication, cooperation, and service reuse. “Having support among lead developers and architects helps, as does having sponsors that see the business case and understand the strategic value,” she says. Read the complete article here. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: Bob Rhubard,OTN,Lonneke Dikmans,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Introducing AutoVue Document Print Service

    - by celine.beck
    We recently announced the availability of our new AutoVue Document Print Service products. For more information, please read the article entitled Print Any Document Type with AutoVue Document Print Services that was posted on our blog. The AutoVue Document Print Service products help address a trivial, yet very common challenge: printing and batch printing documents. The AutoVue Document Print Service is a Web-Services based interface, which allows developers to complement their print server solutions by leveraging AutoVue's printing capabilities within broader enterprise applications like Asset Lifecycle Management, Product Lifecycle Management, Enterprise Content Management solutions, etc. This means that you can leverage the AutoVue Document Print Service products as part of your printing solution to automate the printing of virtually any document type required in any business process. Clients that consume AutoVue's Document Print Service can be written in any language (for example Java or .NET) as long as they understand Web Services Description Language (WSDL) and communicate using Simple Object Access Protocol (SOAP). The print solution consists of three main components: a print server (not included in the AutoVue Document Print Service offering) that interacts with your application to identify the files that need to be printed, the printer to send each file to, and the print options needed for each file (paper size, page orientation, etc.), and that collates the print job requests; the print server also takes care of calling the AutoVue Document Print Service to perform the actual printing. The AutoVue Document Print Services, which send files to a printer for printing; the AutoVue Document Print Service products leverage AutoVue's format- and platform-agnostic technology to let you print or batch-print virtually any type of file, without requiring the authoring application to be installed on your machine. And the printers themselves. As shown above, you can trigger printing from your application either programmatically through automated business processes or manually through human interaction. If documents that need to be printed from your application are stored inside a content repository/Document Management System (DMS) such as Oracle Universal Content Management System (UCM), then the Print Server will need to identify the list of documents and pass the ID of each document to the AutoVue DPS to print. In this case, AutoVue DPS leverages the AutoVue VueLink integration (note: AutoVue VueLink integrations are pre-packaged AutoVue integrations with most common enterprise systems. Check our Website for more information on the subject) to fetch documents out of the document management system for printing. In lieu of the AutoVue VueLink integration, you can also leverage the AutoVue Integration Software Development Kit (iSDK) to build your own connector. If the documents you need to print from your application are not stored in a content management system, the Print Server will need to ensure that files are made available to the AutoVue Document Print Service. The Print Server could for example fetch the files out of your application, or an extension to the application could be developed to fetch the files and make them available to the AutoVue DPS. More information on methods to pass file information to the AutoVue Document Print Service products can be found in the AutoVue Document Print Service Overview documentation available on the Oracle Technology Network.
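    To make the client side of this concrete, below is a minimal sketch of a .NET component posting a SOAP request to a WSDL-described print service like the one discussed above. It is only an illustration: the endpoint URL, XML namespace, operation name (SubmitPrintJob) and its elements are hypothetical placeholders, not taken from the actual AutoVue DPS WSDL, which defines the real contract.
    using System;
    using System.Net;

    class PrintClientSketch
    {
        static void Main()
        {
            // Hypothetical SOAP 1.1 envelope; the real element names and namespace
            // come from the AutoVue Document Print Service WSDL, not from this sketch.
            string envelope =
                "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                "<soap:Body>" +
                "<SubmitPrintJob xmlns=\"http://example.com/print\">" +
                "<documentId>DOC-1234</documentId>" +
                "<printerName>HP-Engineering-Plotter</printerName>" +
                "<paperSize>A3</paperSize>" +
                "<orientation>Landscape</orientation>" +
                "</SubmitPrintJob>" +
                "</soap:Body>" +
                "</soap:Envelope>";

            using (var web = new WebClient())
            {
                web.Headers.Add("Content-Type", "text/xml; charset=utf-8");
                // SOAPAction value is also a hypothetical placeholder.
                web.Headers.Add("SOAPAction", "\"http://example.com/print/SubmitPrintJob\"");

                // Hypothetical endpoint; a real deployment would use the DPS service URL.
                string response = web.UploadString("http://printserver:8080/dps/DocumentPrintService", envelope);
                Console.WriteLine(response);
            }
        }
    }
    In practice one would usually generate a typed proxy from the service WSDL (for example with "Add Service Reference" in Visual Studio or svcutil.exe) rather than building the envelope by hand, but the raw request shows how little a client needs beyond plain HTTP and SOAP.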
Related article: Print Any Document Type with AutoVue Document Print Services

    Read the article

  • OWB 11gR2 &ndash; Flexible and extensible

    - by David Allan
    The Oracle data integration extensibility capabilities are something I love; nothing is more frustrating than a tool or platform that is very constraining. I think extensibility and flexibility are invaluable capabilities in the data integration arena. I liked Uli Bethke's posting on some extensibility capabilities with ODI (see Nesting ODI Substitution Method Calls here); he has some useful guidance on making customizations to existing KMs, and it is nice to learn by example. I thought I'd illustrate the same capabilities with ODI's partner OWB for the OWB community. There is a whole new world of potential. The LKM/IKM/CKM/JKMs are the primary templates that are supported (plus the Oracle Target code template), so there is a lot of potential for customizing and extending the product in this release. Enough waffle... Diving in at the deep end from Uli's post: in OWB 11gR2 the table operator has a number of additional properties that let you annotate the column usage with ODI-like properties, such as the slowly changing usage, or for your own user-defined purpose as in Uli's post. Below you see that for the target table SALES_TARGET we can use the UD5 property; when the assigned code template (knowledge module) has been modified with Uli's change, we can do custom things such as creating indices. The code template used by the mapping has the additional step, which is basically the code illustrated in Uli's posting used directly; the ODI 10g substitution references are also supported from within OWB's runtime. To see whether this does what we expect before we execute it, we can check out the generated code, similar to how the traditional mapping generation and preview works; you do this by clicking on the 'Inspect Code' button on the execution unit's code template assignment. This creates another tab with the prefix 'Code - <mapping name>' where the generated code is put. Scrolling down we find the last step with the indices being created; it looks good, so we are ready to deploy and execute. After executing the mapping we can then use the 'Audit Information' panel (select the mapping in the designer tree and click on View/Audit Information). This gives us a view of the execution where we can drill into the tasks that were executed and inspect the template, the generated code that was executed, and any potential errors. Reflecting back on earlier versions of OWB, these were the kinds of features that were always highly desirable: getting under the hood of the code generation and tweaking bits and pieces - fun and powerful stuff! We can step it up a bit here and explore some further ideas. The example below is a daisy-chained set of execution units where the intermediate table is a target of one unit and the source for another. We want that table to be a global temporary table, so we can tweak the templates. Back in the copy of SQL Control Append (for demo purposes), we modify the create target table step to make the table a global temporary table, with the option of on commit preserve rows. You can get a feel for some of the customizations and changes possible, providing some great flexibility and extensibility for the data integration tools.

    Read the article

  • I can't install using Wubi due to permission denied error

    - by Taksh Sharma
    I can't install ubuntu 11.10 inside my windows 7. It shows permission denied while installation. It gave a log file having the following data: 03-29 20:19 DEBUG TaskList: # Running tasklist... 03-29 20:19 DEBUG TaskList: ## Running select_target_dir... 03-29 20:19 INFO WindowsBackend: Installing into D:\ubuntu 03-29 20:19 DEBUG TaskList: ## Finished select_target_dir 03-29 20:19 DEBUG TaskList: ## Running create_dir_structure... 03-29 20:19 DEBUG CommonBackend: Creating dir D:\ubuntu 03-29 20:19 DEBUG CommonBackend: Creating dir D:\ubuntu\disks 03-29 20:19 DEBUG CommonBackend: Creating dir D:\ubuntu\install 03-29 20:19 DEBUG CommonBackend: Creating dir D:\ubuntu\install\boot 03-29 20:19 DEBUG CommonBackend: Creating dir D:\ubuntu\disks\boot 03-29 20:19 DEBUG CommonBackend: Creating dir D:\ubuntu\disks\boot\grub 03-29 20:19 DEBUG CommonBackend: Creating dir D:\ubuntu\install\boot\grub 03-29 20:19 DEBUG TaskList: ## Finished create_dir_structure 03-29 20:19 DEBUG TaskList: ## Running uncompress_target_dir... 03-29 20:19 DEBUG TaskList: ## Finished uncompress_target_dir 03-29 20:19 DEBUG TaskList: ## Running create_uninstaller... 03-29 20:19 DEBUG WindowsBackend: Copying uninstaller E:\wubi.exe -> D:\ubuntu\uninstall-wubi.exe 03-29 20:19 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi UninstallString D:\ubuntu\uninstall-wubi.exe 03-29 20:19 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi InstallationDir D:\ubuntu 03-29 20:19 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi DisplayName Ubuntu 03-29 20:19 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi DisplayIcon D:\ubuntu\Ubuntu.ico 03-29 20:19 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi DisplayVersion 11.10-rev241 03-29 20:19 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi Publisher Ubuntu 03-29 20:19 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi URLInfoAbout http://www.ubuntu.com 03-29 20:19 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi HelpLink http://www.ubuntu.com/support 03-29 20:19 DEBUG TaskList: ## Finished create_uninstaller 03-29 20:19 DEBUG TaskList: ## Running copy_installation_files... 03-29 20:19 DEBUG WindowsBackend: Copying C:\Users\Home\AppData\Local\Temp\pylB911.tmp\data\custom-installation -> D:\ubuntu\install\custom-installation 03-29 20:19 DEBUG WindowsBackend: Copying C:\Users\Home\AppData\Local\Temp\pylB911.tmp\winboot -> D:\ubuntu\winboot 03-29 20:19 DEBUG WindowsBackend: Copying C:\Users\Home\AppData\Local\Temp\pylB911.tmp\data\images\Ubuntu.ico -> D:\ubuntu\Ubuntu.ico 03-29 20:19 DEBUG TaskList: ## Finished copy_installation_files 03-29 20:19 DEBUG TaskList: ## Running get_iso... 03-29 20:19 DEBUG TaskList: New task copy_file 03-29 20:19 DEBUG TaskList: ### Running copy_file... 
03-29 20:23 ERROR TaskList: [Errno 13] Permission denied Traceback (most recent call last): File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\common\utils.py", line 202, in copy_file IOError: [Errno 13] Permission denied 03-29 20:23 DEBUG TaskList: # Cancelling tasklist 03-29 20:23 DEBUG TaskList: New task check_iso 03-29 20:23 ERROR root: [Errno 13] Permission denied Traceback (most recent call last): File "\lib\wubi\application.py", line 58, in run File "\lib\wubi\application.py", line 130, in select_task File "\lib\wubi\application.py", line 205, in run_cd_menu File "\lib\wubi\application.py", line 120, in select_task File "\lib\wubi\application.py", line 158, in run_installer File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\common\utils.py", line 202, in copy_file IOError: [Errno 13] Permission denied 03-29 20:23 ERROR TaskList: 'WindowsBackend' object has no attribute 'iso_path' Traceback (most recent call last): File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\common\backend.py", line 579, in get_iso File "\lib\wubi\backends\common\backend.py", line 565, in use_iso AttributeError: 'WindowsBackend' object has no attribute 'iso_path' 03-29 20:23 DEBUG TaskList: # Cancelling tasklist 03-29 20:23 DEBUG TaskList: # Finished tasklist 03-29 20:29 INFO root: === wubi 11.10 rev241 === 03-29 20:29 DEBUG root: Logfile is c:\users\home\appdata\local\temp\wubi-11.10-rev241.log 03-29 20:29 DEBUG root: sys.argv = ['main.pyo', '--exefile="E:\\wubi.exe"'] 03-29 20:29 DEBUG CommonBackend: data_dir=C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\data 03-29 20:29 DEBUG WindowsBackend: 7z=C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\bin\7z.exe 03-29 20:29 DEBUG WindowsBackend: startup_folder=C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Startup 03-29 20:29 DEBUG CommonBackend: Fetching basic info... 03-29 20:29 DEBUG CommonBackend: original_exe=E:\wubi.exe 03-29 20:29 DEBUG CommonBackend: platform=win32 03-29 20:29 DEBUG CommonBackend: osname=nt 03-29 20:29 DEBUG CommonBackend: language=en_IN 03-29 20:29 DEBUG CommonBackend: encoding=cp1252 03-29 20:29 DEBUG WindowsBackend: arch=amd64 03-29 20:29 DEBUG CommonBackend: Parsing isolist=C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\data\isolist.ini 03-29 20:29 DEBUG CommonBackend: Adding distro Xubuntu-i386 03-29 20:29 DEBUG CommonBackend: Adding distro Xubuntu-amd64 03-29 20:29 DEBUG CommonBackend: Adding distro Kubuntu-amd64 03-29 20:29 DEBUG CommonBackend: Adding distro Mythbuntu-i386 03-29 20:29 DEBUG CommonBackend: Adding distro Ubuntu-amd64 03-29 20:29 DEBUG CommonBackend: Adding distro Ubuntu-i386 03-29 20:29 DEBUG CommonBackend: Adding distro Mythbuntu-amd64 03-29 20:29 DEBUG CommonBackend: Adding distro Kubuntu-i386 03-29 20:29 DEBUG WindowsBackend: Fetching host info... 
03-29 20:29 DEBUG WindowsBackend: registry_key=Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi 03-29 20:29 DEBUG WindowsBackend: windows version=vista 03-29 20:29 DEBUG WindowsBackend: windows_version2=Windows 7 Home Basic 03-29 20:29 DEBUG WindowsBackend: windows_sp=None 03-29 20:29 DEBUG WindowsBackend: windows_build=7601 03-29 20:29 DEBUG WindowsBackend: gmt=5 03-29 20:29 DEBUG WindowsBackend: country=IN 03-29 20:29 DEBUG WindowsBackend: timezone=Asia/Calcutta 03-29 20:29 DEBUG WindowsBackend: windows_username=Home 03-29 20:29 DEBUG WindowsBackend: user_full_name=Home 03-29 20:29 DEBUG WindowsBackend: user_directory=C:\Users\Home 03-29 20:29 DEBUG WindowsBackend: windows_language_code=1033 03-29 20:29 DEBUG WindowsBackend: windows_language=English 03-29 20:29 DEBUG WindowsBackend: processor_name=Intel(R) Core(TM) i3 CPU M 370 @ 2.40GHz 03-29 20:29 DEBUG WindowsBackend: bootloader=vista 03-29 20:29 DEBUG WindowsBackend: system_drive=Drive(C: hd 61135.1523438 mb free ntfs) 03-29 20:29 DEBUG WindowsBackend: drive=Drive(C: hd 61135.1523438 mb free ntfs) 03-29 20:29 DEBUG WindowsBackend: drive=Drive(D: hd 12742.5507813 mb free ntfs) 03-29 20:29 DEBUG WindowsBackend: drive=Drive(E: cd 0.0 mb free cdfs) 03-29 20:29 DEBUG WindowsBackend: drive=Drive(F: cd 0.0 mb free ) 03-29 20:29 DEBUG WindowsBackend: drive=Drive(G: hd 93.22265625 mb free fat32) 03-29 20:29 DEBUG WindowsBackend: drive=Drive(Q: hd 0.0 mb free ) 03-29 20:29 DEBUG WindowsBackend: uninstaller_path=D:\ubuntu\uninstall-wubi.exe 03-29 20:29 DEBUG WindowsBackend: previous_target_dir=D:\ubuntu 03-29 20:29 DEBUG WindowsBackend: previous_distro_name=Ubuntu 03-29 20:29 DEBUG WindowsBackend: keyboard_id=67699721 03-29 20:29 DEBUG WindowsBackend: keyboard_layout=us 03-29 20:29 DEBUG WindowsBackend: keyboard_variant= 03-29 20:29 DEBUG CommonBackend: python locale=('en_IN', 'cp1252') 03-29 20:29 DEBUG CommonBackend: locale=en_IN 03-29 20:29 DEBUG WindowsBackend: total_memory_mb=3893.859375 03-29 20:29 DEBUG CommonBackend: Searching ISOs on USB devices 03-29 20:29 DEBUG CommonBackend: Searching for local CDs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Ubuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Ubuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Kubuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Kubuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Xubuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Xubuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid 
Mythbuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Mythbuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Ubuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Ubuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Kubuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Kubuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Xubuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Xubuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Mythbuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Mythbuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether E:\ is a valid Ubuntu CD 03-29 20:29 DEBUG Distro: parsing info from str=Ubuntu 11.10 "Oneiric Ocelot" - Release i386 (20111012) 03-29 20:29 DEBUG Distro: parsed info={'name': 'Ubuntu', 'subversion': 'Release', 'version': '11.10', 'build': '20111012', 'codename': 'Oneiric Ocelot', 'arch': 'i386'} 03-29 20:29 INFO Distro: Found a valid CD for Ubuntu: E:\ 03-29 20:29 INFO root: Running the CD menu... 03-29 20:29 DEBUG WindowsFrontend: __init__... 03-29 20:29 DEBUG WindowsFrontend: on_init... 03-29 20:29 INFO WinuiPage: appname=wubi, localedir=C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\translations, languages=['en_IN', 'en'] 03-29 20:29 INFO WinuiPage: appname=wubi, localedir=C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\translations, languages=['en_IN', 'en'] 03-29 20:29 INFO root: CD menu finished 03-29 20:29 INFO root: Already installed, running the uninstaller... 03-29 20:29 INFO root: Running the uninstaller... 03-29 20:29 INFO CommonBackend: This is the uninstaller running 03-29 20:29 INFO WinuiPage: appname=wubi, localedir=C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\translations, languages=['en_IN', 'en'] 03-29 20:29 INFO root: Received settings 03-29 20:29 INFO WinuiPage: appname=wubi, localedir=C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\translations, languages=['en_IN', 'en'] 03-29 20:29 DEBUG TaskList: # Running tasklist... 03-29 20:29 DEBUG TaskList: ## Running Remove bootloader entry... 
03-29 20:29 DEBUG WindowsBackend: Could not find bcd id 03-29 20:29 DEBUG WindowsBackend: undo_bootini C: 03-29 20:29 DEBUG WindowsBackend: undo_configsys Drive(C: hd 61135.1523438 mb free ntfs) 03-29 20:29 DEBUG WindowsBackend: undo_bootini D: 03-29 20:29 DEBUG WindowsBackend: undo_configsys Drive(D: hd 12742.5507813 mb free ntfs) 03-29 20:29 DEBUG WindowsBackend: undo_bootini G: 03-29 20:29 DEBUG WindowsBackend: undo_configsys Drive(G: hd 93.22265625 mb free fat32) 03-29 20:29 DEBUG WindowsBackend: undo_bootini Q: 03-29 20:29 DEBUG WindowsBackend: undo_configsys Drive(Q: hd 0.0 mb free ) 03-29 20:29 DEBUG TaskList: ## Finished Remove bootloader entry 03-29 20:29 DEBUG TaskList: ## Running Remove target dir... 03-29 20:29 DEBUG CommonBackend: Deleting D:\ubuntu 03-29 20:29 DEBUG TaskList: ## Finished Remove target dir 03-29 20:29 DEBUG TaskList: ## Running Remove registry key... 03-29 20:29 DEBUG TaskList: ## Finished Remove registry key 03-29 20:29 DEBUG TaskList: # Finished tasklist 03-29 20:29 INFO root: Almost finished uninstalling 03-29 20:29 INFO root: Finished uninstallation 03-29 20:29 DEBUG CommonBackend: Fetching basic info... 03-29 20:29 DEBUG CommonBackend: original_exe=E:\wubi.exe 03-29 20:29 DEBUG CommonBackend: platform=win32 03-29 20:29 DEBUG CommonBackend: osname=nt 03-29 20:29 DEBUG WindowsBackend: arch=amd64 03-29 20:29 DEBUG CommonBackend: Parsing isolist=C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\data\isolist.ini 03-29 20:29 DEBUG CommonBackend: Adding distro Xubuntu-i386 03-29 20:29 DEBUG CommonBackend: Adding distro Xubuntu-amd64 03-29 20:29 DEBUG CommonBackend: Adding distro Kubuntu-amd64 03-29 20:29 DEBUG CommonBackend: Adding distro Mythbuntu-i386 03-29 20:29 DEBUG CommonBackend: Adding distro Ubuntu-amd64 03-29 20:29 DEBUG CommonBackend: Adding distro Ubuntu-i386 03-29 20:29 DEBUG CommonBackend: Adding distro Mythbuntu-amd64 03-29 20:29 DEBUG CommonBackend: Adding distro Kubuntu-i386 03-29 20:29 DEBUG WindowsBackend: Fetching host info... 
03-29 20:29 DEBUG WindowsBackend: registry_key=Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi 03-29 20:29 DEBUG WindowsBackend: windows version=vista 03-29 20:29 DEBUG WindowsBackend: windows_version2=Windows 7 Home Basic 03-29 20:29 DEBUG WindowsBackend: windows_sp=None 03-29 20:29 DEBUG WindowsBackend: windows_build=7601 03-29 20:29 DEBUG WindowsBackend: gmt=5 03-29 20:29 DEBUG WindowsBackend: country=IN 03-29 20:29 DEBUG WindowsBackend: timezone=Asia/Calcutta 03-29 20:29 DEBUG WindowsBackend: windows_username=Home 03-29 20:29 DEBUG WindowsBackend: user_full_name=Home 03-29 20:29 DEBUG WindowsBackend: user_directory=C:\Users\Home 03-29 20:29 DEBUG WindowsBackend: windows_language_code=1033 03-29 20:29 DEBUG WindowsBackend: windows_language=English 03-29 20:29 DEBUG WindowsBackend: processor_name=Intel(R) Core(TM) i3 CPU M 370 @ 2.40GHz 03-29 20:29 DEBUG WindowsBackend: bootloader=vista 03-29 20:29 DEBUG WindowsBackend: system_drive=Drive(C: hd 61134.8632813 mb free ntfs) 03-29 20:29 DEBUG WindowsBackend: drive=Drive(C: hd 61134.8632813 mb free ntfs) 03-29 20:29 DEBUG WindowsBackend: drive=Drive(D: hd 12953.140625 mb free ntfs) 03-29 20:29 DEBUG WindowsBackend: drive=Drive(E: cd 0.0 mb free cdfs) 03-29 20:29 DEBUG WindowsBackend: drive=Drive(F: cd 0.0 mb free ) 03-29 20:29 DEBUG WindowsBackend: drive=Drive(G: hd 93.22265625 mb free fat32) 03-29 20:29 DEBUG WindowsBackend: drive=Drive(Q: hd 0.0 mb free ) 03-29 20:29 DEBUG WindowsBackend: uninstaller_path=None 03-29 20:29 DEBUG WindowsBackend: previous_target_dir=None 03-29 20:29 DEBUG WindowsBackend: previous_distro_name=None 03-29 20:29 DEBUG WindowsBackend: keyboard_id=67699721 03-29 20:29 DEBUG WindowsBackend: keyboard_layout=us 03-29 20:29 DEBUG WindowsBackend: keyboard_variant= 03-29 20:29 DEBUG WindowsBackend: total_memory_mb=3893.859375 03-29 20:29 DEBUG CommonBackend: Searching ISOs on USB devices 03-29 20:29 DEBUG CommonBackend: Searching for local CDs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Ubuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Ubuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Kubuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Kubuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Xubuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Xubuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Mythbuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG 
Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Mythbuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Ubuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Ubuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Kubuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Kubuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Xubuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Xubuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Mythbuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Mythbuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether E:\ is a valid Ubuntu CD 03-29 20:29 INFO Distro: Found a valid CD for Ubuntu: E:\ 03-29 20:29 INFO root: Running the installer... 03-29 20:29 INFO WinuiPage: appname=wubi, localedir=C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\translations, languages=['en_IN', 'en'] 03-29 20:29 INFO WinuiPage: appname=wubi, localedir=C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\translations, languages=['en_IN', 'en'] 03-29 20:30 DEBUG WinuiInstallationPage: target_drive=C:, installation_size=8000MB, distro_name=Ubuntu, language=en_US, locale=en_US.UTF-8, username=taksh 03-29 20:30 INFO root: Received settings 03-29 20:30 INFO WinuiPage: appname=wubi, localedir=C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\translations, languages=['en_US', 'en'] 03-29 20:30 DEBUG TaskList: # Running tasklist... 03-29 20:30 DEBUG TaskList: ## Running select_target_dir... 03-29 20:30 INFO WindowsBackend: Installing into C:\ubuntu 03-29 20:30 DEBUG TaskList: ## Finished select_target_dir 03-29 20:30 DEBUG TaskList: ## Running create_dir_structure... 03-29 20:30 DEBUG CommonBackend: Creating dir C:\ubuntu 03-29 20:30 DEBUG CommonBackend: Creating dir C:\ubuntu\disks 03-29 20:30 DEBUG CommonBackend: Creating dir C:\ubuntu\install 03-29 20:30 DEBUG CommonBackend: Creating dir C:\ubuntu\install\boot 03-29 20:30 DEBUG CommonBackend: Creating dir C:\ubuntu\disks\boot 03-29 20:30 DEBUG CommonBackend: Creating dir C:\ubuntu\disks\boot\grub 03-29 20:30 DEBUG CommonBackend: Creating dir C:\ubuntu\install\boot\grub 03-29 20:30 DEBUG TaskList: ## Finished create_dir_structure 03-29 20:30 DEBUG TaskList: ## Running uncompress_target_dir... 03-29 20:30 DEBUG TaskList: ## Finished uncompress_target_dir 03-29 20:30 DEBUG TaskList: ## Running create_uninstaller... 
03-29 20:30 DEBUG WindowsBackend: Copying uninstaller E:\wubi.exe -> C:\ubuntu\uninstall-wubi.exe 03-29 20:30 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi UninstallString C:\ubuntu\uninstall-wubi.exe 03-29 20:30 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi InstallationDir C:\ubuntu 03-29 20:30 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi DisplayName Ubuntu 03-29 20:30 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi DisplayIcon C:\ubuntu\Ubuntu.ico 03-29 20:30 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi DisplayVersion 11.10-rev241 03-29 20:30 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi Publisher Ubuntu 03-29 20:30 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi URLInfoAbout http://www.ubuntu.com 03-29 20:30 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi HelpLink http://www.ubuntu.com/support 03-29 20:30 DEBUG TaskList: ## Finished create_uninstaller 03-29 20:30 DEBUG TaskList: ## Running copy_installation_files... 03-29 20:30 DEBUG WindowsBackend: Copying C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\data\custom-installation -> C:\ubuntu\install\custom-installation 03-29 20:30 DEBUG WindowsBackend: Copying C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\winboot -> C:\ubuntu\winboot 03-29 20:30 DEBUG WindowsBackend: Copying C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\data\images\Ubuntu.ico -> C:\ubuntu\Ubuntu.ico 03-29 20:30 DEBUG TaskList: ## Finished copy_installation_files 03-29 20:30 DEBUG TaskList: ## Running get_iso... 03-29 20:30 DEBUG TaskList: New task copy_file 03-29 20:30 DEBUG TaskList: ### Running copy_file... 03-29 20:34 ERROR TaskList: [Errno 13] Permission denied Traceback (most recent call last): File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\common\utils.py", line 202, in copy_file IOError: [Errno 13] Permission denied 03-29 20:34 DEBUG TaskList: # Cancelling tasklist 03-29 20:34 DEBUG TaskList: New task check_iso 03-29 20:34 ERROR root: [Errno 13] Permission denied Traceback (most recent call last): File "\lib\wubi\application.py", line 58, in run File "\lib\wubi\application.py", line 130, in select_task File "\lib\wubi\application.py", line 205, in run_cd_menu File "\lib\wubi\application.py", line 120, in select_task File "\lib\wubi\application.py", line 158, in run_installer File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\common\utils.py", line 202, in copy_file IOError: [Errno 13] Permission denied 03-29 20:34 ERROR TaskList: 'WindowsBackend' object has no attribute 'iso_path' Traceback (most recent call last): File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\common\backend.py", line 579, in get_iso File "\lib\wubi\backends\common\backend.py", line 565, in use_iso AttributeError: 'WindowsBackend' object has no attribute 'iso_path' 03-29 20:34 DEBUG TaskList: # Cancelling tasklist 03-29 20:34 DEBUG TaskList: # Finished tasklist I have no idea what's the problem is. I'm a kind of newbie. I'm using win7 64bit, and installing as an administrator. Please help me out!

    Read the article

  • Run Your Tests With Any NUnit Version

    - by Alois Kraus
    I always thought that the NUnit test runners and the test assemblies need to reference the same NUnit.Framework version. I wanted to be able to run my test assemblies with the newest GUI runner (currently 2.5.3). OK, so all I need to do is reference both NUnit versions: the newest one and the official one for the current project. There is a nice article from Kent Bogart online on how to reference the same assembly multiple times with different versions. The magic works by referencing one NUnit assembly with an alias, which prefixes all types inside it. Then I could decorate my tests with the TestFixture and Test attribute from both NUnit versions and everything worked fine, except that this was ugly. After playing around a little bit to make it simpler I found that I did not need to reference both NUnit.Framework assemblies. The test runners do not require the TestFixture and Test attribute in their specific version. That is really neat: since the test runners are instructed what to do by attributes in a declarative way, there is really no need to tie the runners to a specific version. At its core NUnit has this little method hidden to find matching TestFixtures and Tests:   public bool CanBuildFrom(Type type) {     if (!(!type.IsAbstract || type.IsSealed))     {         return false;     }     return (((Reflect.HasAttribute(type,           "NUnit.Framework.TestFixtureAttribute", true) ||               Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TestAttribute"       , true)) ||               Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TestCaseAttribute"   , true)) ||               Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TheoryAttribute"     , true)); } That is versioning and backwards compatibility at its best. I tell NUnit what to do by decorating my test classes with NUnit attributes and the runner executes my intent without the need to bind me to a specific version. The contract between NUnit versions is actually a bit more complex (think of AssertExceptions) but this is also handled nicely by not using the concrete type but simply checking for the caught exception type by string. What can we learn from this? Versioning can be easy if the contract is small and the users of your library use it in a declarative way (attributes). Everything beyond that will force you to reference several versions of the same assembly with all its consequences. Type equality is lost between versions so none of your casts will work. That means that you cannot simply use IBigInterface in two versions. You will need a wrapper to call the correct versioned one. To get out of this mess you can use one (and only one) version-agnostic driver to encapsulate your business logic from the concrete versions. This is of course more work but as NUnit shows it can be easy. Simplicity is therefore not just a nice-to-have but requirement number one if you intend to make things more complex in version two and want to support any version (older and newer). Any interaction model beyond 'easy' will not be maintainable. There are different approaches to versioning. Below are my own personal observations on how versioning works within the .NET Framework and NUnit.   Versioning Models 1. Bug Fixing and New Isolated Features When you only need to fix bugs there is no need to break anything. This is especially true when you have a big API surface.
Microsoft did this with the .NET Framework 3.0, which left the CLR as is but delivered new assemblies for the WPF, WCF and Windows Workflow Foundation features. Their basic model was that the .NET 2.0 assemblies were declared as red assemblies which must not change (well, mostly; each change was carefully reviewed to minimize the risk of breaking changes as much as possible), whereas the new green assemblies of .NET 3.0/3.5 did not have such obligations since they implemented new, unrelated features which did not have any impact on the red assemblies. This is a versioning strategy aimed at maximum compatibility and the delivery of new unrelated features. If you have a big API surface you should strive hard to do the same or you will break your customers' code with every release. 2. New Breaking Features There are times when really new things need to be added to an existing product. The .NET Framework 4.0 changed the CLR in many ways, which caused subtly different behavior although the APIs remained largely unchanged. Sometimes it is possible to simply recompile an application to make it work (e.g. changed method signature void Func() -> bool Func()) but behavioral changes need much more thought and cannot be automated. To minimize the impact, .NET 2.0/3.0/3.5 applications will not automatically use the .NET 4.0 runtime when installed but will keep using the “old” one. What is interesting is that a side-by-side execution model of both CLR versions (2 and 4) within one process is possible. The key to success was total isolation. You will have 2 GCs, 2 JIT compilers, 2 finalizer threads within one process. The two .NET runtimes cannot talk to each other (except via the usual IPC mechanisms). Both runtimes share nothing and run independently within the same process. This enables Explorer plugins written for CLR 2.0 to work even when a CLR 4 plugin is already running inside the Explorer process. The price for isolation is an increased memory footprint because everything is loaded and running twice.   3. New Non Breaking Features It really depends on where you break things. NUnit has evolved and many different Assert, Expect… methods have been added. These changes are all localized in the NUnit.Framework assembly which can be easily extended. As long as the test execution contract (TestFixture, Test, AssertException) remains stable it is possible to write test executors which can run tests written for NUnit 10 because the execution contract has not changed. It is possible to write software which executes other components in a version-independent way but this is only feasible if the interaction model is relatively simple.   Versioning software is hard and it looks like it will remain hard since you suddenly work in a severely constrained environment when you try to innovate and keep everything backwards compatible at the same time. These are contradictory goals and do not play well together. The easiest way out of this is to carefully watch what your customers are doing with your software. Minimizing the impact is much easier when you do not need to guess how many people will be broken when this or that is removed.
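The by-name trick is easy to reproduce outside of NUnit. Below is a small illustrative sketch (not NUnit's actual implementation) of version-agnostic discovery: attributes are matched by their full type name via reflection, so the discovering code never needs a compile-time reference to any particular NUnit.Framework version.
    using System;
    using System.Linq;
    using System.Reflection;

    static class VersionAgnosticDiscovery
    {
        static bool HasAttribute(MemberInfo member, string attributeFullName)
        {
            // Compare by name only -- no compile-time dependency on the attribute's assembly.
            return member.GetCustomAttributes(false)
                         .Any(a => a.GetType().FullName == attributeFullName);
        }

        public static bool LooksLikeTestFixture(Type type)
        {
            // Skip abstract classes, but allow static ones (which are abstract and sealed).
            if (type.IsAbstract && !type.IsSealed)
                return false;

            return HasAttribute(type, "NUnit.Framework.TestFixtureAttribute")
                || type.GetMethods().Any(m => HasAttribute(m, "NUnit.Framework.TestAttribute"));
        }

        static void Main()
        {
            // Scan an assembly (here, the one containing this class) for candidate fixtures.
            foreach (Type t in Assembly.GetExecutingAssembly().GetTypes())
            {
                if (LooksLikeTestFixture(t))
                    Console.WriteLine("Fixture candidate: " + t.FullName);
            }
        }
    }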

    Read the article

  • AxCMS.net 10 with Microsoft Silverlight 4 and Microsoft Visual Studio 2010

    - by Axinom
    Axinom, European WCM vendor, today announced the next version of its WCM solution AxCMS.net 10, which streamlines the processes involved in creating, managing and distributing corporate content on the internet. The new solution helps reduce ongoing costs for managing and distributing to large audiences, while at the same time drastically reducing time-to-market and one-time setup costs. http://www.AxCMS.net Axinom’s WCM portfolio, based on the Microsoft .NET Framework 4, Microsoft Visual Studio 2010 and Microsoft Silverlight 4, allows enterprises to increase process efficiency, reduce operating costs and more effectively manage delivery of rich media assets on the Web and mobile devices. Axinom solutions are widely used by major European online brands in the IT, telco, retail, media and entertainment industries such as Siemens, American Express, Microsoft Corp., ZDF, Pro7Sat1 Media, and Deutsche Post. Brand New User Interface built with Silverlight 4. By using Silverlight 4, Axinom’s team created a new user interface for AxCMS.net 10 that is optimized for improved usability and speed. WYSIWYG mode, integrated image editor, extended list views, and detail views of objects allow a substantial acceleration of typical editor tasks. Axinom’s team worked with Silverlight Rough Cut Editor for video management and Silverlight Analytics Framework for extended reporting to complete the wide range of capabilities included in the new release. “Axinom’s release of AxCMS.net 10 enables developers to take advantage of the latest features in Silverlight 4,” said Brian Goldfarb, director of the developer platform group at Microsoft Corp. “Microsoft is excited about the opportunity this creates for Web developers to streamline the creating, managing and distributing of online corporate content using AxCMS.net 10 and Silverlight.” Rapid Web Development with Visual Studio 2010. AxCMS.net 10 is extended by additional products that enable developers to get productive quickly and help solve typical customer scenarios. AxCMS.net template projects come with documented source code that helps kick-start projects and teach best practices in all aspects of Web application development. AxCMS.net overcomes many hard-to-solve technical obstacles in an out-of-the-box manner by providing a set of ready-to-use vertical solutions such as corporate Web site, Web shop, Web campaign management, email marketing, multi-channel distribution, management of rich Internet applications, and Web business intelligence. Extended Multi-Site Management. AxCMS.net has been supporting the management of an unlimited number of Web sites for a long time. The new version 10 of AxCMS.net will further improve multi-site management and provide features to editors and developers that will simplify and accelerate multi-site and multi-language management. Extended publication workflow will take into account additional dependencies of dynamic objects, pages, and documents. “The customer requests evolved from static HTML pages to dynamic Web application content with the emergence of rich media assets seamlessly combined across many channels including Web, mobile and IPTV. With the .NET Framework 4 and Silverlight 4, we’re on the fast track to making the three screen strategy a reality for our customers,” said Damir Tomicic, CEO of Axinom Group. “Our customers enjoy substantial competitive advantages from using the latest Microsoft technologies.
We have a long-standing relationship with Microsoft and are committed to continued development using Microsoft tools and technologies to deliver innovative Web solutions in the future.”

    Read the article

  • A Few Words from Oracle’s Channel Chief

    - by Meghan Fritz-Oracle
    As Oracle enters a new fiscal year, I want to take a moment and reflect on my time at Oracle thus far. The technology industry is currently at an inflection point trying to figure out where growth will come from. When you look at Oracle’s portfolio of products, it's a complete stack from applications to disk, offering differentiation in the marketplace. I was initially drawn to Oracle’s leadership, strategy, and world-class technology. Since joining the Oracle team in October 2013, I’ve had the privilege of traveling around the globe visiting our partners and customers, and wanted to share several common themes that came up during these meetings. Cloud: Many partners are trying to figure out how to build a business around the cloud. Oracle partners can currently resell or refer our cloud services. We saw over 300 percent growth from cloud resale last quarter. Engineered Systems: Hardware and software integrated together to simplify IT allows our joint customers to focus on the innovation they need to compete in a complex marketplace. We're seeing great success in several areas, with more partners saying, “Let’s start with Oracle on Oracle.” The Internet of Things: This is the next big opportunity for device manufacturers and ISVs to capture market share in what is projected to be a multi-trillion-dollar opportunity, according to Gartner.  Competition: We've got a tremendous middleware platform and a tremendous database install base. We’re not just a database company; we are a complete provider. So looking ahead, what are my priorities for fiscal 2015? Oracle PartnerNetwork has some very exciting plans on the horizon. There’s a lot more leadership and announcements to unfold, especially at this year’s Global Partner Kickoff taking place on June 25 + 26 depending on your region and time zone. I, along with several other Oracle executives, will be shedding light on Oracle’s strategy for the upcoming year, the latest opportunities within the OPN Specialized Program and sales strategies that will help you to continue to grow and profit with Oracle. Stay tuned for registration information next week. We also have Oracle OpenWorld and JavaOne to look forward to. These conferences are taking place in San Francisco from September 28 – October 2. We’ll have a variety of partner-specific activities for you at OPN Central @ OpenWorld including the OPN keynote, the famed AfterDark networking reception, access to the OPN Lounge and more. In the meantime, I hope that everyone has a great end to fiscal 2014. Best regards, Rich Geraffo, Senior Vice President, Worldwide Alliances and Channels

    Read the article

  • A few things I learned regarding Azure billing policies

    - by Vincent Grondin
    An hour of small compute time: $0.12. A gig of storage in the cloud: $0.15 per month. 1 GB of relational database using Azure SQL: $9.99 per month. A Visual Studio Professional with MSDN Premium account: $2,500 per year. Winning an MSDN Professional account that comes preloaded with 750 free hours of Azure per month: PRICELESS!!!

    But was it really free? Hmmm… let’s see. Here are a few things I learned regarding Azure billing policies when I attended a promotional training at Microsoft last week.

    1) An instance deployed in the cloud means whatever you upload there, whether it is in STAGING or PRODUCTION! Your MSDN account comes with 750 free hours of small compute time per month, which should be enough for one instance of one application deployed in the cloud. So we’re cool, the application you run in the cloud doesn’t cost you a penny… BUT the one sitting in staging is still consuming hours! So if you don’t want to end up paying $42 on your credit card at the end of the month, like a friend of mine did, DELETE those staging applications once you’ve put them in production! This also applies to the instance count you can modify in the configuration file, so stop and think before you decide you want to spawn 50 of those hello world apps.

    2) If you have an MSDN account, you have the promotional 750 hours of Azure credits per month and can use them to explore the cloud. But be aware: this promotion ends in 8 months (maybe more like 7 now), and then you will most likely go back to the standard 250 hours of Azure credits. If you do not delete your applications by then, you’ll get billed for the extra hours, believe me. There is a switch you can toggle that will STOP your enrollment after the promotion and prevent the Azure account from renewing automatically. Yes, the default setting is to renew your account automatically, and remember, you entered your credit card information during registration, so yes, you WILL be billed. Go disable that ASAP: log into your account, go to “Windows Azure Platform”, click the “Subscriptions” tab, and on the right side you’ll see a drop-down with different “Actions” in it. Choose “Opt out of auto renew” and NOW you’re safe.

    Still, this is a great offer by Microsoft, and I think everyone who has the chance should play a bit with Azure to get to know this technology a little more. Happy cloud computing, all!
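    To put the staging gotcha in numbers, here is a rough back-of-the-envelope sketch, not an official Azure calculator: the hourly rate and free-hour figure come from the post, while the 30-day month, the instance counts and the monthly_bill helper are assumptions for illustration.

        # Rough estimate of an Azure bill when a staging deployment is left running
        # alongside production (rates from the post; 750 free hours from the MSDN promo).
        RATE_PER_HOUR = 0.12      # small compute instance, dollars per hour
        FREE_HOURS = 750          # promotional MSDN credit per month
        HOURS_IN_MONTH = 24 * 30  # assumed 30-day month, ~720 hours

        def monthly_bill(instance_count):
            used = instance_count * HOURS_IN_MONTH
            billable = max(0, used - FREE_HOURS)
            return billable * RATE_PER_HOUR

        print(f"${monthly_bill(1):.2f}")  # production only: 720 h, fully covered -> $0.00
        print(f"${monthly_bill(2):.2f}")  # production + forgotten staging: 1440 h -> $82.80

    Running two instances all month blows well past the free hours, which is exactly how a “free” account turns into a surprise credit card charge.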

    Read the article

  • Four Emerging Payment Stories

    - by David Dorf
    The world of alternate payments has been moving fast of late. Innovation in this area will help both consumers and retailers, but will probably hurt the banks (at least that’s the plan). Here are four recent news items in this area:

    Dwolla, a start-up in Iowa, is trying to make credit cards obsolete. Twelve guys in Des Moines are using the $1.3M they raised to allow businesses to skip the credit card networks and avoid the fees. Today they move about $1M a day across their network with an average transaction size of $500. Instead of charging merchants 2.9% plus $0.30 per transaction, Dwolla charges a quarter -- yep, that coin featuring George Washington. Dwolla (Web + Dollar = Dwolla) avoids the credit networks and connects directly to bank accounts using the banks’ ACH network. They are signing up banks and merchants, targeting B2B and C2B as well as P2P payments. They leverage social networks to notify people they have a money transfer, and also have a mobile app that uses GPS location. However, all is not rosy: there have been complaints about unexpected chargebacks, and with debit fees being reduced by the big banks, the need is not as pronounced. The big banks are working on their own network, called clearXchange, which could provide stiff competition.

    VeriFone just bought European payment processor Point for around $1B. By itself this would not have caught my attention, except that VeriFone also announced the acquisition of GlobalBay earlier this month. In addition to their core business of selling stand-beside payment terminals, with GlobalBay they get employee-operated mobile selling tools, and with Point they get a very big payment processing platform.

    MasterCard and Intel announced a partnership around payments, starting with PayPass, MasterCard’s new payment technology. Intel will lend its expertise to add additional levels of security, which seems to be the biggest barrier to consumer adoption. Everyone is scrambling to get their piece of cash transactions, which still represent 85% of all transactions.

    Apple was awarded another mobile payment patent, further fueling the rumors that the iPhone 5 will support NFC payments. As usual, Apple is upsetting the apple cart (sorry) by moving control of key data from the carriers to Apple. With Apple’s vast number of iTunes accounts, they have a ready-made customer base for the payment infrastructure, which I bet will slowly transition people away from credit cards and toward cheaper ACH. Gary Schwartz explains the three-step process Apple is taking to become a payment processor.

    Below is a picture I drew representing payments in the retail industry. There’s certainly a lot of innovation happening.
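    A quick sketch of the fee math behind the Dwolla comparison, using only the figures quoted above (the typical card rate of 2.9% plus $0.30, Dwolla’s flat quarter, and Dwolla’s reported $500 average transaction); the helper functions are just for illustration.

        # Per-transaction fee comparison at Dwolla's reported $500 average ticket.
        def card_fee(amount, rate=0.029, fixed=0.30):
            """Typical card-network fee: percentage plus a fixed per-transaction charge."""
            return amount * rate + fixed

        def dwolla_fee(_amount):
            """Dwolla's flat fee, independent of transaction size."""
            return 0.25

        amount = 500.00
        print(f"card:   ${card_fee(amount):.2f}")    # $14.80
        print(f"dwolla: ${dwolla_fee(amount):.2f}")  # $0.25

    At that average ticket size the flat quarter is roughly 60 times cheaper for the merchant, which is the whole pitch for bypassing the card networks via ACH.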

    Read the article

  • SQL SERVER – Step by Step Guide to Beginning Data Quality Services in SQL Server 2012 – Introduction to DQS

    - by pinaldave
    Data Quality Services is a very important feature of SQL Server. I have recently started to explore it and I am learning some good concepts. Here are two very important blog posts one should go over before continuing with this one: Installing Data Quality Services (DQS) on SQL Server 2012, and Connecting Error to Data Quality Services (DQS) on SQL Server 2012. This article is an introduction to Data Quality Services for beginners; we will be using an Excel file as the source data. In the first article we learned how to install DQS. In this article we will see how to build a Knowledge Base and use it to identify the quality of the data as well as to correct bad-quality data. Here are the two very important steps we will be learning in this tutorial: 1) building a new Knowledge Base, and 2) creating a new Data Quality Project.

    Let us start by building the Knowledge Base. Click on New Knowledge Base. In our project we will be using the Excel file as the knowledge base. There are two columns: one is Colors and the other is Shade. They are independent columns, not related to each other. The point I am trying to show is that Column A contains unique data while Column B contains duplicate records. Clicking on New Knowledge Base brings up a screen where you enter the name of the new knowledge base. Clicking NEXT brings up the following screen, which lets you select the Excel file and the source columns; I selected both Colors and Shade as source columns. Creating a domain is very important: you can create individual domains, or a composite domain built from Colors and Shade. As this is the first example, I will create individual domains: for Colors I will create the domain Colors, and for Shade the domain Shade. Clicking NEXT brings you to the data discovery screen, and clicking START starts processing the source data. The pre-processed data shows various information about the source: in our case, the Colors column has unique data whereas Shade has non-unique data, with only two unique rows. On the next screen you can add more rows as well as see the frequency of the data, since the values are listed uniquely. Clicking NEXT publishes the knowledge base that was just created.

    Now that the knowledge base is created, we will take some random data and run DQS over it. I am using another Excel sheet here for simplicity; in reality you can just as easily use a SQL Server table. Click on New Data Quality Project to start a DQS project. The next screen asks which knowledge base to use; we will use the Colors knowledge base we just created. In the Colors knowledge base we had two columns, 1) Colors and 2) Shade, and we will use both mappings here; a user can select one or multiple column mappings. Now comes the most important phase of the whole project: click on Start and it will run the cleansing process and show the results. In our case there were two columns to be processed, and it completed the task with the necessary information. It shows that in the Colors column it has not corrected any value by itself, but for the Shade column it has a suggestion.

    We can train DQS to correct values, but let us keep that subject for future blog posts. Now click Next and keep the Colors domain selected on the left side. It shows that there are two incorrect values which need to be corrected. This is the place where, once a value is corrected, it will be auto-corrected in the future. I manually corrected the values here and clicked the Approve radio buttons. As soon as I click the Approve buttons, the rows disappear from this tab and move to the Corrected tab; if I had rejected them, the rows would have moved to the Invalid tab instead. On this screen you can see the 2 corrected rows, and you can click on the Correct tab to see the 6 previously validated rows which passed the DQS process. Now let us click on the Shade domain on the left side of the screen. This domain shows very interesting details, as the DQS system guessed the correct answer, Dark, with a confidence level of 77%. That is quite a high confidence level, and manual observation also confirms that Dark is the correct answer. I clicked on Approve and the row moved to the Corrected tab. On the next screen DQS shows a summary of all the activities and demonstrates how the correction of the data quality was performed. The user can export the data to a SQL Server table, a CSV file or Excel, and has the option to export either the data with all the associated cleansing info, or the data only. I will select Data Only for demonstration purposes. Clicking Export generates the file. Let us open the generated file: it looks complete and corrected.

    Well, we have successfully completed the DQS process. The process is indeed very easy, and I suggest you try it out yourself; you will find it very easy to learn. In future posts we will go over advanced concepts. Are you using this feature on your production server? If yes, would you please leave a comment with your environment and business need? It would be interesting to see where it is implemented.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Business Intelligence, Data Warehousing, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Data Quality Services, DQS
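    Conceptually, the knowledge base pairs a set of known-good domain values with approved corrections, and the cleansing project applies them to incoming rows. The toy Python sketch below is only an illustration of that idea, not DQS itself; the sample color values, the misspellings and the cleanse helper are all assumptions made up for this example.

        # Toy illustration of knowledge-base-driven cleansing (NOT DQS itself).
        knowledge_base = {
            "Colors": {"Red", "Blue", "Green", "Yellow", "Black", "White"},  # assumed sample values
            "Shade":  {"Dark", "Light"},
        }
        corrections = {"Shade": {"Drak": "Dark", "Darkk": "Dark"}}  # approved fixes learned earlier

        def cleanse(domain, value):
            """Return (value, status) for one cell, mimicking Correct/Corrected/Invalid tabs."""
            if value in knowledge_base[domain]:
                return value, "correct"
            fixed = corrections.get(domain, {}).get(value)
            if fixed:
                return fixed, "corrected"
            return value, "invalid"  # would need manual approval or rejection in DQS

        print(cleanse("Shade", "Dark"))   # ('Dark', 'correct')
        print(cleanse("Shade", "Drak"))   # ('Dark', 'corrected')
        print(cleanse("Colors", "Rde"))   # ('Rde', 'invalid')

    In real DQS the approved corrections and valid values live in the published knowledge base, and the confidence scores (like the 77% suggestion above) come from its matching engine rather than a simple dictionary lookup.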

    Read the article

  • Nginx and PHP Fundamentals

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/08/01/nginx-and-php-fundamentals.aspx

    Hot on the heels of my .NET caching course, I’ve had my first “fundamentals” course released on Pluralsight: Nginx and PHP Fundamentals. It’s a practical look at two of the biggest technologies on the web: Nginx, the fastest-growing HTTP server around (currently hosting 100+ million sites), and PHP, which powers more websites than any other server-side framework (currently 240+ million sites). The two technologies work well together; both are open-source and cross-platform, and both are lightweight and easy to get started with - you just need to download and unzip the runtimes, and with a text editor you can create and host dynamic websites. I’ve used PHP as a second (sometimes third) language since 2005, when I was brought cold into an established codebase to help improve performance, and I’ve used Nginx to host tier-2 apps for the last couple of years. As with any training course, you learn new things as you produce it, and it was good to focus on a different stack from my commercial .NET world.

    In the course I start with a website in two parts: one which is just static content, and one which processes a user registration form using ASP.NET MVC, both running in IIS. Over four modules I migrate the app to Nginx and PHP:

    Hosting Static Content in Nginx – how to deploy and configure Nginx for a basic website;
    PHP Part 1: Basic Web Forms – installing PHP and an IDE, and building a simple form with server-side validation;
    PHP Part 2: Packages and Integration – using PECL and Composer for packages to connect to Azure, AWS, Mongo and reCAPTCHA;
    Hosting PHP in Nginx – configuring Nginx to host our PHP site.

    Along the way I run some performance stats with JMeter. The headlines are that Nginx running on Linux outperforms IIS on Windows for static content by 800 requests per second over 1000 concurrent requests, and that Linux+Nginx+PHP outperforms Windows+IIS+ASP.NET MVC by 700 requests per second under the same load. Of course, the headline stats don’t tell the whole story, and when you add OpCode caching for PHP and the ASP.NET Output Cache, the results are very different. As Web architecture moves away from heavy server-side processing to Single Page Apps with client-side frameworks like AngularJS and Knockout, I think there’s an increasing need for high-performance, low-cost server technologies, and the combination of Nginx and PHP makes a compelling case.
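    For anyone who wants a rough sanity check of that kind of requests-per-second number without setting up JMeter, here is a minimal Python sketch; the URL, request count and worker count are placeholders, and this is nowhere near as rigorous as a proper JMeter test plan.

        # Quick-and-dirty throughput check (not JMeter): fire concurrent GETs at a URL
        # and report requests per second.
        import time
        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        URL = "http://localhost:8080/"   # assumed test site
        REQUESTS = 1000
        WORKERS = 50

        def fetch(_):
            with urllib.request.urlopen(URL) as resp:
                resp.read()

        start = time.time()
        with ThreadPoolExecutor(max_workers=WORKERS) as pool:
            list(pool.map(fetch, range(REQUESTS)))
        elapsed = time.time() - start
        print(f"{REQUESTS / elapsed:.0f} requests/second")

    Numbers from a sketch like this are only indicative; client-side threading overhead and connection reuse matter a lot, which is why a dedicated load-testing tool gives more trustworthy comparisons.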

    Read the article

< Previous Page | 259 260 261 262 263 264 265 266 267 268 269 270  | Next Page >