Search Results


  • NHibernate 2 Beginner's Guide Review

    - by Ricardo Peres
    OK, here's the review I promised a while ago. This is a beginner's introduction to NHibernate, so if you already have some experience with NHibernate, you will notice it lacks a lot of concepts and information.

    It starts with a good description of NHibernate and why we would use it. It goes on to describe basic mapping scenarios with primary keys generated by the HiLo or Identity algorithms, without actually explaining why we would choose one over the other. As for mapping, the book talks about XML mappings and provides a simple example of Fluent NHibernate, comparing it to its XML counterpart (see the sketch after this review). When it comes to relations, it covers one-to-many/many-to-one and many-to-many, but not one-to-one relations, and only talks briefly about lazy loading, which is, IMO, an important concept. Only Bags are described, none of the other collection types. The log4net configuration description gets its own chapter, which I find excessive. The chapter on configuration merely lists the most common properties for configuring NHibernate, both in XML and in code.

    Querying only covers loading by ID (using Get, not Load) and the Criteria API, for which a paging example is presented as well as some common filtering options (property equals/like/between; no examples on conjunction/disjunction, however). There's a chapter fully dedicated to ASP.NET, which explains how we can use NHibernate in web applications; it basically talks about ASP.NET concepts, though. Following it, another chapter explains how we can build our own ASP.NET providers (Membership, Role) using NHibernate.

    The available entity generators for NHibernate are listed and evaluated in a chapter of their own. The list is fine (CodeSmith, nhib-gen, AjGenesis, Visual NHibernate, MyGeneration, NGen, NHModeler, Microsoft T4 (?) and hbm2net), and examples are provided whenever possible. However, I have some problems with some of the evaluations: for example, Visual NHibernate scores 5 out of 5 on Visual Studio integration, which simply does not exist! I suspect the author means to say that it can be launched from inside Visual Studio, but then, what can't?

    Finally, there's a chapter I really don't understand. It seems like a bag where a lot of things are thrown in, like NHibernate Burrow (which actually isn't explained at all), Blog.Net components, CSS template conversion and web.config settings related to the maximum request length for file uploads, ending with XML configuration with the help of GhostDoc.

    Like I said, the book is only good for absolute beginners. It does a fair job of explaining the very basics, but lacks a lot of not-so-basic concepts. Among other things, it lacks:

    - Inheritance mapping strategies (table per class hierarchy, table per class, table per concrete class)
    - Load versus Get usage
    - Other useful ISession methods
    - First level cache (Identity Map pattern)
    - Collection types other than Bag (Set, List, Map, IdBag, etc.)
    - Fetch options
    - User Types
    - Filters
    - Named queries
    - LINQ examples
    - HQL examples

    And that's it! I hope you find this review useful. The link to the book site is https://www.packtpub.com/nhibernate-2-x-beginners-guide/book
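    For readers curious about the XML-versus-Fluent comparison the book makes, here is a minimal sketch in C# of a Fluent NHibernate class map using the HiLo generator mentioned above. The entity and table names are invented for illustration; the equivalent XML mapping would live in a .hbm.xml file.

        using FluentNHibernate.Mapping;

        public class Customer
        {
            // NHibernate requires virtual members so it can build runtime proxies.
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
        }

        // Fluent equivalent of a simple <class> element in a .hbm.xml mapping.
        public class CustomerMap : ClassMap<Customer>
        {
            public CustomerMap()
            {
                Table("Customers");
                // HiLo lets NHibernate assign ids client-side in batches, so inserts
                // can be batched; Identity instead delegates id generation to the
                // database, forcing a round-trip per insert. That trade-off is the
                // "why choose one over the other" the review finds missing.
                Id(x => x.Id).GeneratedBy.HiLo("100");
                Map(x => x.Name);
            }
        }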

    Read the article

  • It’s time that you ought to know what you don’t know

    - by fatherjack
    There is a famous quote about unknown unknowns and known knowns and so on, but I'll let you review that if you are interested. What I am worried about is that there are things going on in your environment that you ought to know about, indeed you have asked to be told about, but you are not getting the information. When you schedule a SQL Agent job you can set it to send an email to an inbox monitored by someone who needs to know and indeed can do something about it. However, what happens if the email process isn't successful? Check your servers with this:

        USE [msdb]
        GO
        /* This code selects the top 10 most recent SQLAgent jobs that failed to complete
           successfully and where the email notification failed too. Jonathan Allen Jul 2012 */
        DECLARE @Date DATETIME
        SELECT @Date = DATEADD(d, DATEDIFF(d, '19000101', GETDATE()) - 1, '19000101')

        SELECT TOP 10
                [s].[name] ,
                [sjh].[step_name] ,
                [sjh].[sql_message_id] ,
                [sjh].[sql_severity] ,
                [sjh].[message] ,
                [sjh].[run_date] ,
                [sjh].[run_time] ,
                [sjh].[run_duration] ,
                [sjh].[operator_id_emailed] ,
                [sjh].[operator_id_netsent] ,
                [sjh].[operator_id_paged] ,
                [sjh].[retries_attempted]
        FROM    [dbo].[sysjobhistory] AS sjh
                INNER JOIN [dbo].[sysjobs] AS s ON [sjh].[job_id] = [s].[job_id]
        WHERE   EXISTS ( SELECT *
                         FROM   [dbo].[sysjobs] AS s
                                INNER JOIN [dbo].[sysjobhistory] AS s2 ON [s].[job_id] = [s2].[job_id]
                         WHERE  [sjh].[job_id] = [s2].[job_id]
                                AND [s2].[message] LIKE '%failed to notify%'
                                AND CONVERT(DATETIME, CONVERT(VARCHAR(15), [s2].[run_date])) >= @Date
                                AND [s2].[run_status] = 0 )
                AND sjh.[run_status] = 0
                AND sjh.[step_id] != 0
                AND CONVERT(DATETIME, CONVERT(VARCHAR(15), [run_date])) >= @Date
        ORDER BY [sjh].[run_date] DESC ,
                 [sjh].[run_time] DESC
        GO

        USE [msdb]
        GO
        /* This code summarises details of SQLAgent jobs that failed to complete
           successfully and where the email notification failed too. Jonathan Allen Jul 2012 */
        DECLARE @Date DATETIME
        SELECT @Date = DATEADD(d, DATEDIFF(d, '19000101', GETDATE()) - 1, '19000101')

        SELECT  [s].name ,
                [s2].[step_id] ,
                CONVERT(DATETIME, CONVERT(VARCHAR(15), [s2].[run_date])) AS [rundate] ,
                COUNT(*) AS [execution count]
        FROM    [dbo].[sysjobs] AS s
                INNER JOIN [dbo].[sysjobhistory] AS s2 ON [s].[job_id] = [s2].[job_id]
        WHERE   [s2].[message] LIKE '%failed to notify%'
                AND CONVERT(DATETIME, CONVERT(VARCHAR(15), [s2].[run_date])) >= @Date
                AND [s2].[run_status] = 0
        GROUP BY name ,
                [s2].[step_id] ,
                [s2].[run_date]
        ORDER BY [s2].[run_date] DESC

    These two result sets will show whether any SQL Agent jobs have run on your servers that failed, and also failed to successfully email about the failure. I hope it's of use to you.

    Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided “as is” and no guarantee, warranty or accuracy is applicable or inferred. Run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.
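    If you would rather run this check from outside SQL Server entirely, say from a scheduled task on a monitoring box (since SQL Agent's own notification path is the very thing in doubt here), a minimal C# sketch along these lines could work. The connection string is a placeholder, and the query is a trimmed-down version of the summary query above:

        using System;
        using System.Data.SqlClient;

        class NotifyFailureCheck
        {
            static void Main()
            {
                // Placeholder connection string; point it at your own server.
                const string connectionString =
                    "Server=.;Database=msdb;Integrated Security=true";

                // Same filter as the summary query above: failed steps whose
                // email notification also failed.
                const string query = @"
        SELECT [s].[name], [s2].[step_id], [s2].[run_date]
        FROM [dbo].[sysjobs] AS s
        INNER JOIN [dbo].[sysjobhistory] AS s2 ON [s].[job_id] = [s2].[job_id]
        WHERE [s2].[message] LIKE '%failed to notify%'
          AND [s2].[run_status] = 0";

                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(query, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine("Job '{0}', step {1}, run date {2}",
                                reader.GetString(0), reader.GetInt32(1), reader.GetInt32(2));
                    }
                }
            }
        }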

    Read the article

  • Open Data, Government and Transparency

    - by Tori Wieldt
    A new track at TDC (The Developer's Conference in Sao Paulo, Brazil) is titled Open Data. It deals with open data, government and transparency. Saturday will be a "transparency hacker day" where developers are invited to create applications using open data from the Brazilian government. Alexandre Gomes, co-lead of the track, says: "I want to inspire developers to become 'civic hackers': developers who create apps to make society better." It is a chance for developers to do well and do good. There are many opportunities for developers, including monitoring government expenditures and getting citizens involved via social networks.

    The open data movement is growing worldwide. One initiative, the Open Government Partnership, is working to make government data easier to find and access. Making this data easily available means that, with the right applications, it will be easier for people to make decisions and suggestions about government policies based on detailed information. Last April, the Open Government Partnership held its annual meeting in Brasilia, the capital of Brazil. It was a great success, showcasing the innovative work being done in open data by governments, civil societies and individuals around the world. For example, Bulgaria now publishes daily data on budget spending for all public institutions.

    Alexandre Gomes Explains Open Data

    At TDC, the Open Data track will include a presentation of examples of successful open data projects, an introduction to the semantic web, how to handle big data sets, techniques of data visualization, and how to design APIs. The other track lead is Christian Moryah Miranda, a systems analyst for the Brazilian Government's Ministry of Planning. "The Brazilian government wholeheartedly supports this effort. In order to make our data available to the public, it forces us to be more consistent with our data across ministries, and that's a good step forward for us," he said. He explained that the government knows it cannot achieve everything it would like without help from the public. "It is not the government versus the people; rather, citizens are partners with the government, and together we can achieve great things!" Miranda exclaimed.

    Saturday at TDC will be a "transparency hacker day" where developers will be invited to create applications using open data from the Brazilian government. Attendees are invited to pitch their ideas, work in small groups, and present their projects at the end of the conference. "For example," Gomes said, "the Brazilian government just released the salaries of all government employees and I can't wait to see what developers can do with that."

    Resources:
    - Open Government Partnership
    - U.S. Government Open Data Project
    - Brazilian Government Open Data Project
    - U.K. Government Open Data Project
    - 2012 International Open Government Data Conference

    Read the article

  • Wine is no longer able to initialize OpenGL

    - by nebukadnezzar
    For a while now, Wine has been unable to initialize OpenGL on my 64-bit Linux. This is by no means a problem unique to me; lots of people with NVIDIA cards running 64-bit Linux seem to have this problem with Wine on Oneiric:

    http://forum.winehq.org/viewtopic.php?p=66856&sid=9d6e5ad628ee6fb6e5ef04577275daed
    http://forum.pinguyos.com/Thread-Wine-OpenGl-Problem
    https://bbs.archlinux.org/viewtopic.php?id=137696

    And while some Launchpad bug reports say one should use this workaround:

        LD_PRELOAD=/usr/lib32/nvidia-current/libGL.so.1 wine <app>

    it unfortunately does not solve the problem at all for me. That is, if I run CS:S, the game runs just fine for a while, but aborts after some time with a range of GLSL-related errors. Here are the startup errors from simply running Steam:

        + wine steam.exe
        fixme:process:GetLogicalProcessorInformation ((nil),0x33e488): stub
        [.. snip ...]
        fixme:dwmapi:DwmSetWindowAttribute (0x1009a, 3, 0x33d384, 4) stub
        fixme:dwmapi:DwmSetWindowAttribute (0x1009a, 4, 0x33d374, 4) stub
        err:wgl:is_extension_supported No OpenGL extensions found, check if your OpenGL setup is correct!
        err:wgl:is_extension_supported No OpenGL extensions found, check if your OpenGL setup is correct!
        [... this error is reported a few dozen times, so snip again ...]
        err:wgl:is_extension_supported No OpenGL extensions found, check if your OpenGL setup is correct!
        fixme:iphlpapi:NotifyAddrChange (Handle 0x47cdba8, overlapped 0x45dba80): stub
        fixme:winsock:WSALookupServiceBeginW (0x47cdbc8 0x00000ff0 0x47cdbc4) Stub!
        [... snip ...]

    Here are the errors reported while running, and after running (because the log is huge-ish, it's pasted elsewhere): http://paste.ubuntu.com/901925/

    Now, 32-bit OpenGL works just fine; the 32-bit executables of Nexuiz, for example, work just fine. That being said, I suspect this is a problem with Wine itself. I've already manually built the git version of Wine, to no avail. So what's going on? Is something broken? How do I check (correctly) whether something is broken? How do I solve this?

    Read the article

  • Renewed as MVP

    - by Sahil Malik
    Ad:: SharePoint 2007 Training in .NET 3.5 technologies (more information).

    It is with great humbleness and honor that I accept Microsoft's MVP award for 2010. This will be my .. I forget how many years as an MVP. So suffice to say, I was a lot younger when I first got the MVP award, but the excitement never dies. Don't get me wrong, I'm still young, foolish and weird :). (And good looking, might I add.)

    I'd like to share a few things with you that I have learnt being a part of this very prestigious program that I am so unworthy of:

    - Never aim to be an MVP. Let it be a consequence of what you already are.
    - Always be down to earth. Just because you're an MVP doesn't mean you're better than anyone else.
    - The biggest reward of the MVP program, yes much bigger than the free top notch MSDN subscription, is the amazing interaction you will have with other fellow MVPs, and incredibly smart people in the community in general.
    - Get involved in the community, for your own sake! You will learn so much from your peers; it is a very, very rewarding experience.
    - Learn, learn and learn! Never underestimate the power of knowledge, both technical and otherwise.

    I thank each one of you for all the attention you have given me over the past many years. And a very special thanks to my MVP lead, Melissa Travers, and my previous MVP lead, Rafael Munoz (who isn't with Microsoft anymore, but I am sure is kicking butt wherever he is).

    We are truly entering a very, very exciting time in the technology space. Both Google and Apple are challenging Microsoft, forcing Microsoft to innovate at a pace like never before. Microsoft is coming out with an incredible amount of good, new and exciting stuff: Windows Mobile 7, Azure, .NET 4.0, Silverlight 4.0, IE9, and of course SharePoint 2010. The level of innovation in the tech industry is simply unprecedented. A truly exciting time for anyone who lives, breathes, sleeps and dreams of technology even when awake! (Like me!)

    As you know, I've been working on my SP2010 book lately. I'm happy to also inform you that the book is DONE. WOOHOO!! :) So this means I'll have more time to blog, and cause more trouble in general. Once again: THANK YOU!

    Comment on the article ....

    Read the article

  • [MISC GEEKERY] Lucid Lynx to Come Loaded with Ubuntu One Music Store

    - by Vivek
    Ubuntu 10.04 (code name Lucid Lynx) will come loaded with the Ubuntu One Music Store. Rhythmbox will have the Ubuntu One Music Store integrated into it, and it'll also allow users to download purchased music to their local machine.

    Ubuntu One Music Store

    Users will be able to access the Ubuntu One Music Store from the sidebar of Rhythmbox. The music store is a web page that opens in the Rhythmbox player, with albums listed on its home page. The Ubuntu One Music Store is powered by 7digital, a leading digital B2B media delivery company based in London and operating globally. Canonical, the company behind Ubuntu, has partnered with 7digital to bring the music store to its users, integrating it with Rhythmbox and its cloud storage service Ubuntu One, which was launched last year.

    The home screen of the Ubuntu One Music Store displays popular albums and functionality to browse and search. You can search for artists, tracks, albums, or a combination of all three. Users will also be able to browse the store alphabetically, or based on different music genres. Once you select a specific artist, all their available albums are arranged in a grid. Once an album is selected, you'll be able to download specific songs or the whole album. You'll also be allowed to preview different songs for 60 seconds.

    You'll be able to buy tracks using a credit card or with PayPal. The purchased tracks will be visible under Library \ Purchased from Ubuntu One. The downloaded tracks are also synced with your Ubuntu One account, which means that you'll be able to access your tracks from anywhere on the web. The default Ubuntu One account comes with 2 GB free storage; however, you can also purchase additional space if you need it.

    All the music is in mp3 format, which is not supported by default in Ubuntu. However, you can get mp3 playback functionality using the GStreamer multimedia framework.

    Conclusion

    All in all, the Ubuntu One Music Store is a positive move to enhance the user experience and also increase the popularity of Canonical by bringing Ubuntu closer to regular users. It would also allow Canonical to make some revenue in collaboration with 7digital.

    Ubuntu One Music Store Wiki

    Read the article

  • SQL SERVER – Automated Type Conversion using Expressor Studio

    - by pinaldave
    Recently I had an interesting situation during my consultation project. Let me share with you how I solved the problem using Expressor Studio.

    Consider a situation in which you need to read a field, such as customer_identifier, from a text file and pass that field into a database table. In the source file's metadata structure, customer_identifier is described as a string; however, in the target database table, customer_identifier is described as an integer. Legitimately, all the source values for customer_identifier are valid numbers, such as "109380". To implement this in an ETL application, you probably would have hard-coded a type conversion function call, such as:

        output.customer_identifier = stringToInteger(input.customer_identifier)

    That wasn't so bad, was it? For this instance, programming this hard-coded type conversion function call was relatively easy. However, hard-coding, whether type conversion code or other business rule code, almost always means that the application containing hard-coded fields, function calls, and values is: a) specific to an instance of use; b) difficult to adapt to new situations; and c) short on reusable sub-parts. Therefore, in the long run, applications with hard-coded type conversion function calls don't scale well. In addition, they increase the overall level of effort and degree of difficulty to write and maintain the ETL applications.

    To get around the trappings of hard-coded type conversion function calls, developers need access to smarter typing systems. The Expressor Studio product offers exactly this feature, by providing developers with a type conversion automation engine based on type abstraction. The theory behind the engine is quite simple: a user specifies abstract data fields in the engine, and then writes applications against the abstractions (whereas in most ETL software, developers develop applications against the physical model). When a Studio-built application is run, Studio's engine automatically converts the source type to the abstracted data field's type and converts the abstracted data field's type to the target type. The engine can do this because it has built-in rules for type conversions.

    So, using the example above, a developer could specify customer_identifier as an abstract data field with a type of integer when using Expressor Studio. Upon reading the string value from the text file, Studio's type conversion engine automatically converts the source field from the type specified in the source's metadata structure to the abstract field's type. At the time of writing the data value to the target database, the engine doesn't have any work to do because the abstract data type and the target data type are the same. Had they been different, the engine would have automatically provided the conversion.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Database, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: SSIS
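    To make the abstraction idea concrete, here is a minimal C# sketch of how such a type-conversion engine could work. This is purely illustrative: the class and method names (AbstractField, FromSource, ToTarget) are hypothetical and are not Expressor Studio's actual API.

        using System;

        // Hypothetical illustration of type abstraction: the application is written
        // against abstract fields, and the engine converts source -> abstract -> target.
        public class AbstractField
        {
            public string Name { get; private set; }
            public Type AbstractType { get; private set; }

            public AbstractField(string name, Type abstractType)
            {
                Name = name;
                AbstractType = abstractType;
            }

            // Convert an incoming source value (e.g. the string "109380" read from
            // a text file) to the abstract type (e.g. int) declared for this field.
            public object FromSource(object sourceValue)
            {
                return Convert.ChangeType(sourceValue, AbstractType);
            }

            // Convert the abstract value to whatever type the target expects. If the
            // types already match, no work is needed, the same short-circuit the
            // article describes for Studio's engine.
            public object ToTarget(object abstractValue, Type targetType)
            {
                return targetType == AbstractType
                    ? abstractValue
                    : Convert.ChangeType(abstractValue, targetType);
            }
        }

        public static class Demo
        {
            public static void Main()
            {
                var customerId = new AbstractField("customer_identifier", typeof(int));
                object fromFile = "109380";                              // source metadata: string
                object abstractValue = customerId.FromSource(fromFile);  // engine converts to int
                object forDb = customerId.ToTarget(abstractValue, typeof(int)); // no-op: types match
                Console.WriteLine(forDb);                                // 109380
            }
        }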

    Read the article

  • Introducing AutoVue Document Print Service

    - by celine.beck
    We recently announced the availability of our new AutoVue Document Print Service products. For more information, please read the article entitled Print Any Document Type with AutoVue Document Print Services that was posted on our blog.

    The AutoVue Document Print Service products help address a seemingly trivial, yet very common challenge: printing and batch printing documents. The AutoVue Document Print Service is a Web Services-based interface, which allows developers to complement their print server solutions by leveraging AutoVue's printing capabilities within broader enterprise applications like Asset Lifecycle Management, Product Lifecycle Management, Enterprise Content Management solutions, etc. This means that you can leverage the AutoVue Document Print Service products as part of your printing solution to automate the printing of virtually any document type required in any business process. Clients that consume AutoVue's Document Print Service can be written in any language (for example Java or .NET) as long as they understand Web Services Description Language (WSDL) and communicate using Simple Object Access Protocol (SOAP); a rough sketch of such a client follows below.

    The print solution consists of three main components, as described in the diagram below:

    - A print server (not included in the AutoVue Document Print Service offering) that will interact with your application to identify the files that need to be printed, the printer to send each file to, as well as the print options needed for each file (paper size, page orientation, etc.), and collate the print job requests. The print server will also take care of calling the AutoVue Document Print Service to perform the actual printing.
    - The AutoVue Document Print Service, which sends files to a printer for printing. The AutoVue Document Print Service products leverage AutoVue's format- and platform-agnostic technology to let you print or batch print virtually any type of file, without requiring the authoring application installed on your machine.
    - Printers.

    As shown above, you can trigger printing from your application either programmatically through automated business processes or manually through human interaction. If documents that need to be printed from your application are stored inside a content repository/Document Management System (DMS) such as Oracle Universal Content Management System (UCM), then the print server will need to identify the list of documents and pass the ID of each document to the AutoVue DPS to print. In this case, AutoVue DPS leverages the AutoVue VueLink integration (note: AutoVue VueLink integrations are pre-packaged AutoVue integrations with most common enterprise systems; check our website for more information on the subject) to fetch documents out of the document management system for printing. In lieu of the AutoVue VueLink integration, you can also leverage the AutoVue Integration Software Development Kit (iSDK) to build your own connector.

    If the documents you need to print from your application are not stored in a content management system, the print server will need to ensure that files are made available to the AutoVue Document Print Service. The print server could, for example, fetch the files out of your application, or an extension to the application could be developed to fetch the files and make them available to the AutoVue DPS. More information on methods to pass file information to the AutoVue Document Print Service products can be found in the AutoVue Document Print Service Overview documentation available on the Oracle Technology Network.
Related article: Print Any Document Type with AutoVue Document Print Services
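    As a rough illustration of such a Web Services client, here is a minimal C# sketch that posts a SOAP request to a print service endpoint. The endpoint URL, SOAP action, and element names below are hypothetical placeholders invented for this sketch; the real operation names and message shapes come from the WSDL published by the AutoVue Document Print Service.

        using System;
        using System.Net;

        class PrintServiceClientSketch
        {
            static void Main()
            {
                // Hypothetical endpoint and SOAP action; take the real values from the service's WSDL.
                const string endpoint = "http://printhost.example.com/AutoVueDPS/PrintService";
                const string soapEnvelope =
        @"<soap:Envelope xmlns:soap=""http://schemas.xmlsoap.org/soap/envelope/"">
          <soap:Body>
            <PrintDocument xmlns=""urn:example:print"">
              <DocumentUrl>http://fileserver.example.com/docs/drawing.dwg</DocumentUrl>
              <PrinterName>EngineeringLaser01</PrinterName>
              <PaperSize>A3</PaperSize>
            </PrintDocument>
          </soap:Body>
        </soap:Envelope>";

                using (var client = new WebClient())
                {
                    client.Headers.Add("Content-Type", "text/xml; charset=utf-8");
                    client.Headers.Add("SOAPAction", "urn:example:print/PrintDocument");
                    string response = client.UploadString(endpoint, soapEnvelope);
                    Console.WriteLine(response); // SOAP response from the print service
                }
            }
        }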

    Read the article

  • Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET

    - by user647124
    This year I embarked on a journey to migrate a group of ASP.NET web applications, developed to integrate with WebLogic Portal 9.2 via the AquaLogic® Interaction .NET Application Accelerator 1.0, to instead use the Oracle WebCenter WSRP Producer for .NET, integrated with WebLogic Portal 10.3.4. It has been a very winding path and this blog entry is intended to share both the lessons learned and the relevant approaches that led to those learnings. Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there.

    For the Curious

    From the perspective of necessity, this section would be better at the end. If it were there, though, it would probably be read by far fewer people, including those that are actually interested in these types of sections. Those in a hurry may skip past and be none the worse for it in dealing with the hands-on bits of performing a migration from .NET Accelerator to WSRP Producer. For others who want to talk about why they did what they did after they did it, or just want to know for themselves, enjoy.

    A Brief (and edited) History of the WSRP for .NET Technologies (as Relevant to this Post)

    Note: This section is for those who are curious about why the migration path is not as simple as many other Oracle technologies. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it.

    The currently deployed architecture that was to be migrated and upgraded achieved initial integration between .NET and J2EE over the WSRP protocol through the use of the AquaLogic Interaction .NET Application Accelerator. The .NET Accelerator allowed the applications that were written in ASP.NET and deployed on a Microsoft Internet Information Server (IIS) to interact with a WebLogic Portal application deployed on a WebLogic (J2EE application) server (both version 9.2, the state of the art at the time of its creation).

    At the time this architectural decision for the application was made, both the AquaLogic and WebLogic brands were owned by BEA Systems. The AquaLogic brand included products acquired by BEA through the acquisition of Plumtree, whose flagship product was a portal platform available in both J2EE and .NET versions. As part of this dual technology support, an adaptor was created to facilitate the use of WSRP as a communication protocol where customers wished to integrate components from both versions of the Plumtree portal. The adapter evolved over several product generations to include a broad array of both standard and proprietary WSRP integration capabilities.

    Later, BEA Systems was acquired by Oracle. Over the course of several years Oracle has acquired a large number of portal applications and has taken the strategic direction to migrate users of these myriad (and formerly competitive) products to the Oracle WebCenter technology stack. As part of Oracle's strategic technology roadmap, older portal products are being scheduled for end of life, including the portal products that were part of the BEA acquisition. The .NET Accelerator has been modified over a very long period of time with features driven by users of that product and developed under three different vendors (each a direct competitor in the same solution space prior to merger).
    The Oracle WebCenter WSRP Producer for .NET was introduced much more recently, with the key objective to specifically address the needs of WebCenter customers developing solutions accessible through both J2EE and .NET platforms utilizing the WSRP specifications. The Oracle Product Development Team also provides these insights on the drivers for developing the WSRP Producer:

    - Support for ASP.NET AJAX. Controls using the ASP.NET AJAX script manager do not function properly in the Application Accelerator for .NET.
    - Support for 2-way SSL in WLP. This was not possible with the proxy/bridge set up in the existing Application Accelerator for .NET.
    - Allow developers to code portlets (Web Parts) using the .NET framework rather than a proprietary framework. Developers had to use the Application Accelerator for .NET plug-ins to Visual Studio to manage preferences and profile data. This is now replaced with the .NET Framework Personalization (for preferences) and Profile providers.
    - The WSRP Producer for .NET was created as a new way of developing .NET portlets. It was never designed to be an upgrade path for the Application Accelerator for .NET. .NET developers would create new .NET portlets with the WSRP Producer for .NET and leave any existing .NET portlets running in the Application Accelerator for .NET.

    The advantage of creating a new solution for WSRP is a product that is far easier for Oracle to maintain and support, which in turn improves quality, reliability and maintainability for their customers. No changes to J2EE applications consuming the WSRP portlets previously rendered by the .NET Accelerator are required to migrate from the AquaLogic WSRP solution. For some customers using the .NET Accelerator, the challenge is adapting their current .NET applications to work with the WSRP Producer (or any other WSRP adapter, as they are proprietary by nature). Part of this adaptation is the need to deploy the .NET applications as children of the WSRP Producer web application as root.

    Differences between .NET Accelerator and WSRP Producer

    Note: This section is for those who are curious about why the migration is not as pluggable as something such as changing security providers in WebLogic Server. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it.

    The basic terminology used to describe the participating applications in a WSRP environment is the same when applied to either the .NET Accelerator or the WSRP Producer: Producer and Consumer. In both cases the .NET application serves in the WSRP environment as the Producer. The difference lies in how the two adapters create the WSRP translation of the .NET application.

    The .NET Accelerator, as the name implies, is meant to serve as a quick way of adding WSRP capability to a .NET application. As such, at a high level, the .NET Accelerator behaves as a proxy for requests between the .NET application and the WSRP Consumer. A WSRP request is sent from the consumer to the .NET Accelerator, the .NET Accelerator transforms this request into an ASP.NET request, receives the response, then transforms the response into a WSRP response. The .NET Accelerator is deployed as a stand-alone application on IIS. The WSRP Producer is deployed as a parent application on IIS, and all ASP.NET modules that will be made available over WSRP are deployed as children of the WSRP Producer application.
    In this manner, the WSRP Producer acts more as a request filter than a proxy in the WSRP transactions between Producer and Consumer.

    Highly Recommended: Enabling Logging

    Note: You can skip this section now, but you will most likely want to come back to it later, so why not just read it now?

    Logging is very helpful in tracking down the causes of any anomalies during testing of migrated portlets. To enable WSRP Producer logging, update the Application_Start method in the Global.asax.cs for your .NET application by adding:

        log4net.Config.XmlConfigurator.Configure();

    IIS logs will usually (in a standard configuration) be in a subfolder under C:\WINDOWS\system32\LogFiles\W3SVC. WSRP Producer logs will be found at C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\Logs\WSRPProducer.log. InputTrace.webinfo and OutputTrace.webinfo are located under C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault and can be useful in debugging issues related to markup transformations.

    Things You Must Do

    Merge Web.Config

    Note: If you have been skipping all the sections that you can, now is the time to stop and pay attention :)

    Because the existing .NET application will become a sub-application of the WSRP Producer, you will want to merge the required settings from the existing Web.Config into the one in the WSRP Producer.

    Use the WSRP Producer Master Page

    The Master Page installed for the WSRP Producer provides common hidden form fields and JavaScripts to facilitate portlet instance management and display configuration when the child page is being rendered over WSRP. You add the Master Page by including it in the <%@ Page declaration with MasterPageFile="~/portlets/Resources/MasterPages/WSRP.Master". You then replace:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" >
        <HTML>
        <HEAD>

    with

        <asp:Content ID="ContentHead1" ContentPlaceHolderID="wsrphead" Runat="Server">

    and

        </HEAD>
        <body>
        <form id="theForm" method="post" runat="server">

    with

        </asp:Content>
        <asp:Content ID="ContentBody1" ContentPlaceHolderID="Main" Runat="Server">

    and finally

        </form>
        </body>
        </HTML>

    with

        </asp:Content>

    In the event you already use Master Pages, adapt your existing Master Pages to be sub-masters. See Nested ASP.NET Master Pages for a detailed reference on how to do this.

    It Happened to Me, It Might Happen to You…Or Not

    Watch for Use of Session or Request in OnInit

    In the event the .NET application being modified has pages developed to assume the user has been authenticated in an earlier page request, there may be direct or indirect references in the OnInit method to Request or Session objects that may not have been created yet. This will vary from application to application, so the recommended approach is to test first. If there is an issue with a page running as a WSRP portlet, then check for potential references in the OnInit method (including references by methods called within OnInit) to Session or Request objects. If there are, the simplest solution is to create a new method and then call that method once the necessary object(s) is fully available. I find doing this at the start of the Page_Load method to be the simplest solution.

    Case Sensitivity

    .NET markup is not case sensitive, but Java is. This means it is possible to have many variations of SRC= and src=, or .JPG and .jpg. The preferred solution is to make these markup instances all lower case in your .NET application. This will allow the default Rewriter rules in wsrp-producer.xml to work as is.
    If this is not practical, then make duplicates of any rules where an issue is occurring due to upper or mixed case usage in the .NET application markup, and match the case in use with the duplicate rule. For example:

        <RewriterRule>
            <LookFor>(href=\"([^\"]+)</LookFor>
            <ChangeToAbsolute>true</ChangeToAbsolute>
            <ApplyTo>.axd,.css</ApplyTo>
            <MakeResource>true</MakeResource>
        </RewriterRule>

    may need to be duplicated as:

        <RewriterRule>
            <LookFor>(HREF=\"([^\"]+)</LookFor>
            <ChangeToAbsolute>true</ChangeToAbsolute>
            <ApplyTo>.axd,.css</ApplyTo>
            <MakeResource>true</MakeResource>
        </RewriterRule>

    While it is possible to write a regular expression that will handle mixed case usage, it would be long and strenuous to test and maintain, so the recommendation is to use duplicate rules.

    Is it Still Relative?

    Some .NET applications base relative paths on a fixed root location. With the introduction of the WSRP Producer, the root has moved up one level. References to ~/ will need to be updated to ~/portlets, and many ../ paths will need another ../ in front.

    I Can See You But I Can’t Find You

    This issue was first discovered while debugging modules with code that referenced the form on a page from the code-behind by name and/or id. The initial error presented itself as a run-time error that was difficult to interpret over WSRP but seemed clear when run as straight ASP.NET, as it indicated that the object with the form name did not exist. Since the form name was no longer valid after implementing the WSRP Master Page, the likely fix seemed to be simply updating the references in the code. However, as the WSRP Master Page is external to the code, a compile-time error resulted:

        Error 155: The name 'form1' does not exist in the current context
        C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\portlets\legacywebsite\module\Screens\Reporting.aspx.cs
        line 51, column 52, project legacywebsite.module

    Much hair-pulling research later, it was discovered that the use of the FindControl method was causing the issue. FindControl doesn't work quite as expected once a Master Page has been introduced, as the controls become embedded in other controls, requiring a recursion to find them that is not part of the FindControl method. In code where the page form is referenced by name, there are two steps to the solution. First, the form needs to be referenced in code generically with Page.Form. For example, this:

        ToggleControl ctrl = new ToggleControl(frmManualEntry, FunctionLibrary.ParseArrayLst(userObj.Roles));

    becomes this:

        ToggleControl ctrl = new ToggleControl(Page.Form, FunctionLibrary.ParseArrayLst(userObj.Roles));

    Generally the form id is referenced in most ASP.NET applications as a path to a control on the form. To reach the control once a Master Page has been added requires an additional method to recurse through the control collections within the form and find the control ID. The following method (found at Rick Strahl's Web Log) corrects this very nicely:

        public static Control FindControlRecursive(Control Root, string Id)
        {
            if (Root.ID == Id)
                return Root;
            foreach (Control Ctl in Root.Controls)
            {
                Control FoundCtl = FindControlRecursive(Ctl, Id);
                if (FoundCtl != null)
                    return FoundCtl;
            }
            return null;
        }

    Where the form name is not referenced, simply using the FindControlRecursive method in place of FindControl will be all that is necessary.
    Following the second part of the example referenced earlier, the method called with Page.Form changes its value extraction code block from this:

        Label lblErrMsg = (Label)frmRef.FindControl("lblBRMsg");

    to this:

        Label lblErrMsg = (Label)FunctionLibrary.FindControlRecursive(frmRef, "lblBRMsg");

    The Master That Won’t Step Aside

    In most migrations it is preferable to make as few changes as possible. In one case I ran across an existing Master Page that would not function as a sub-Master Page. While it would probably have been educational to trace down why, the expedient approach of updating it to take the place of the WSRP Master Page is the route I took. The changes are highlighted below:

        …
        <asp:ContentPlaceHolder ID="wsrphead" runat="server"></asp:ContentPlaceHolder>
        </head>
        <body leftMargin="0" topMargin="0">
        <form id="TheForm" runat="server">
        <input type="hidden" name="key" id="key" value="" />
        <input type="hidden" name="formactionurl" id="formactionurl" value="" />
        <input type="hidden" name="handle" id="handle" value="" />
        <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="true" >
        </asp:ScriptManager>

    This approach did not work for all existing Master Pages, but fortunately all of the other existing Master Pages I have run across worked fine as sub-masters to the WSRP Master Page.

    Moving On

    In enterprise portals, even after you get everything working, the work is not finished. Next you need to get it where everyone will work with it.

    Migration Planning

    Provided that the server where IIS is running is adequately sized, it is possible to run both the .NET Accelerator and the WSRP Producer on the same server during the upgrade process. The upgrade can be performed incrementally, i.e., one portlet at a time, if server administration processes support it. Those processes would include the ability to manage a second producer in the consuming portal and to change over individual portlet instances from one provider to the other.

    If processes or requirements demand that all portlets be cut over at the same time, it needs to be determined whether this cutover should include a new producer, updating all of the portlets in the consumer, or whether the WSRP Producer portlet configuration must maintain the naming conventions used by the .NET Accelerator and simply change the WSRP end point configured in the consumer. In some enterprises it may even be necessary to maintain the same WSDL end point, in which case the IIS configuration will be where the updates occur. The downside to such a requirement is that it makes rolling back very difficult, should the need arise.

    Location, Location, Location

    Not everyone wants the web application to have the descriptively obvious wsrpdefault location, or needs to create a second WSRP site on the same server. The instructions below are from the product team and, while targeted towards making a second site, will work for creating a site with a different name and then removing the old site. You can also change just the name in IIS.

    Manually creating a WSRP Producer site (note: all executables used are the same ones used by the installer, and "wsrpdev" will be the name of the new instance):

    1. Copy C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault to C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev.
    2. Bring up a command window as an administrator.
    3. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\IISAppAccelSiteCreator.exe install WSRPProducers wsrpdev "C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev" 8678 2.0.50727
    4. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev "NETWORK SERVICE" 3 1
    5. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev EVERYONE 1 1
    6. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\1.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev.
    7. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\2.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev.

    Tests:

    1. Bring up a browser on the host itself and go to http://localhost:8678/wsrpdev/wsdl/1.0/WSRPService.wsdl and make sure that the URLs in the XML returned include the wsrpdev changes you made in step 6.
    2. Bring up a browser on the host itself and see if the default sample comes up: http://localhost:8678/wsrpdev/portlets/ASPNET_AJAX_sample/default.aspx
    3. Register the producer in WLP and test the portlet.

    Changing the Port used by WSRP Producer

    The pre-configured port for the WSRP Producer is 8678. You can change this port by updating both the IIS configuration and C:\Oracle\Middleware\WSRPProducerForDotNet\[WSRP_APP_NAME]\wsdl\1.0\WSRPService.wsdl.

    Do You Need to Migrate?

    Oracle Premier Support ended in November of 2010 for AquaLogic Interaction .NET Application Accelerator 1.x, and Extended Support ends in November 2012 (see http://www.oracle.com/us/support/lifetime-support/lifetime-support-software-342730.html for other related dates). This means that integration with products released after November of 2010 is not supported. If having such support is the policy within your enterprise, you do indeed need to migrate. If changes in your enterprise cause your current solution with the .NET Accelerator to no longer function properly, you may need to migrate. Migration is a choice, and if the goals of your enterprise are to take full advantage of newer technologies, then migration is certainly one activity you should be planning for.

    Read the article

  • Best of "The Moth" 2010

    - by Daniel Moth
    It is the time again (like in 2004, 2005, 2006, 2007, 2008, 2009) to look back at my blog for the past year and identify areas of interest that seem to be more prominent than others. After doing so, representative posts follow in my top 5 list (in random order).

    1. This was the year where, for the first time since 2004, I had to move my blog engine (blogger.com –> dasBlog), host provider (zen –> godaddy), and web server technology and OS (Apache on Linux –> IIS on Windows Server). My goal was not to break any permalinks or the look and feel of this website. A series of posts covered how I achieved that goal, culminating in a tool for others to use if they wanted to do the same: Tool to convert blogger.com content to dasBlog. Going forward I aim to be sharing more small code utilities like that one…

    2. At work I am known for being fairly responsive on email, and more importantly never dropping email balls on the floor. This is due to my email processing system, which I shared here: Processing Email in Outlook. I will be sharing more tips with regards to making the best of the Office products.

    3. There is no doubt in my mind that this is the year people will remember as the one where Microsoft finally fights back in the mobile space. Even though the new platform means my Windows Mobile book sales will dwindle :-), I am ecstatic about Windows Phone 7, both as a consumer and as a developer. On the release day, to get you started, I shared the top 10 Windows Phone 7 developer resources. I will be sharing my tips from my experience in writing code for and consuming this new platform…

    4. For my HPC developer friends using Visual Studio, I shared Slides and code for MPI Cluster Debugger and also gave you all the links you need for getting started with Dryad and DryadLINQ from MSR. Expect more from me on cluster development in the coming year…

    5. Still in the HPC space, but actually also in game and even mainstream development, the big disruption and opportunity comes in the form of GPGPU and, on the Microsoft platform, (currently) DirectCompute. Expect more from me on GPGPU development in the coming year…

    Subscribe via the link on the left to stay tuned for 2011… I wish you a very Happy New Year (with whatever definition of happiness works for you)!

    Comments about this post welcome at the original blog.

    Read the article

  • SQLAuthority News – Social Media Series – Twitter and Myself

    - by pinaldave
    Pinal Dave on Twitter!

    Frequent readers of my blog might know that I am trying to get more involved in all social media sites, both professionally and personally. Readers might also know that I have often struggled with finding the purpose of some social media sites, Twitter especially. One of the great uses of social media is to stay connected and updated with followers. Twitter's 140 character limit means that Twitter is a great place to get quick updates from the world, but not a lot of deep information. In fact, I have the feeling that Twitter's form might actually limit its usefulness, especially for complex subjects like SQL Server. However, the #sqlhelp hash tag has for sure overcome that belief: you can instantly talk about SQL and get help with your SQL problems on Twitter.

    I believe in keeping up with the changing times, and it didn't feel right to give up on Twitter. So I have determined a good way to use Twitter and set rules for myself. The problem I was facing was that if I followed everyone who interested me and let anyone follow me, I was completely overwhelmed by the amount of information Twitter could give me every day. It didn't seem like 140 characters should be able to take up so much of my time, but it took hours to sort through all the updates to find things that were of interest to me and to SQL Server.

    First, I was forced to unfollow anyone who made too many updates every day. This was not an easy decision, but just for my own sake I had to limit the amount of information I could take in every day. I still let anyone follow me who wants to, because I didn't want to limit my readership, and I hope that they do not feel the way I did, that there are too many updates!

    Next, I made sure that the information I put on Twitter is useful and to the point. I try to announce new blog posts on Twitter at least once a day, and I also try to find five posts from other people every day that are worth re-tweeting. This forces me to stay active in the community. But it is not all business on Twitter. It is also a place for me to post updates about my family and home life, for anyone who is interested. In simple words, I talk about everything and anything on Twitter.

    If you'd like to follow me, my Twitter handle is www.twitter.com/pinaldave. It is a good place to start if you'd like to keep updated with my blog and find out who I follow and who my influences are. Twitter is perfect for getting little "tastes" of things you're interested in. If you are interested in my blog, SQL Server, or both, I hope that my Twitter updates will be interesting and helpful.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: Social Media

    Read the article

  • A few things I learned regarding Azure billing policies

    - by Vincent Grondin
    An hour of small computing time: 0,12$ per hour
    A gig of storage in the cloud: 0,15$ per hour
    1 gig of relational database using Azure SQL: 9,99$ per month
    A Visual Studio Professional with MSDN Premium account: 2500$ per year
    Winning an MSDN Professional account that comes preloaded with 750 free hours of Azure per month: PRICELESS !!!

    But was it really free???? Hmmm… Let's see.....

    Here are a few things I learned regarding Azure billing policies when I attended a promotional training at Microsoft last week:

    1) An instance deployed in the cloud really means whatever you upload in there... it doesn't matter if it's in STAGING OR PRODUCTION!!!! Your MSDN account comes with 750 free hours of small computing time per month, which should be enough hours per month for one instance of one application deployed in the cloud... So we're cool, the application you run in the cloud doesn't cost you a penny.... BUT the one that's in staging is still consuming time!!! So if you don't want to end up having to pay 42$ at the end of the month on your credit card, as happened to a friend of mine, DELETE them staging applications once you've put them in production! This also applies to the instance count you can modify in the configuration file… So stop and think before you decide you want to spawn 50 of those hello world apps.

    2) If you have an MSDN account, then you have the promotional 750 hours of Azure credits per month and can use the Azure credits to explore the cloud! But be aware: this promotion ends in 8 months (maybe more like 7 now) and then you will most likely go back to the standard 250 hours of Azure credits. If you do not delete your applications by then, you'll get billed for the extra hours, believe me… There is a switch that you can toggle which will STOP your automatic enrollment after the promotion and prevent you from renewing the Azure account automatically. Yes, the default setting is to automatically renew your account, and remember, you entered your credit card information in the registration process, so yes, you WILL be billed… Go disable that ASAP.

    Log into your account, go to "Windows Azure Platform", then click the "Subscriptions" tab, and on the right side you'll see a drop down with different "Actions" in it… Choose "Opt out of auto renew" and NOW you're safe…

    Still, this is a great offer by Microsoft and I think everyone who has a chance should play a bit with Azure to get to know this technology a bit more...

    Happy Cloud Computing All

    Read the article

  • What’s New in Delphi XE6 Regular Expressions

    - by Jan Goyvaerts
    There’s not much new in the regular expression support in Delphi XE6. The big change that should be made, upgrading to PCRE 8.30 or later and switching to the pcre16 functions that use UTF-16, still hasn’t been made. XE6 still uses PCRE 7.9 and thus continues to require conversion from the UTF-16 strings that Delphi uses natively to the UTF-8 strings that older versions of PCRE require.

    Delphi XE6 does fix one important issue that has plagued TRegEx since it was introduced in Delphi XE. Previously, TRegEx could not find zero-length matches. So a regex like (?m)^ that should find a zero-length match at the start of each line would not find any matches at all with TRegEx. The reason for this is that TRegEx uses TPerlRegEx to do the heavy lifting. TPerlRegEx sets its State property to [preNotEmpty] in its constructor, which tells it to skip zero-length matches. This is not a problem with TPerlRegEx because users of this class can change the State property. But TRegEx does not provide a way to change this property. So in Delphi XE5 and prior, TRegEx cannot find zero-length matches.

    In Delphi XE6, TPerlRegEx’s constructor was changed to initialize State to the empty set. This means TRegEx is now able to find zero-length matches. TRegex.Replace() using the regex (?m)^ now inserts the replacement at the start of each line, as you would expect. If you use TPerlRegEx directly, you’ll need to set State to [preNotEmpty] in your own code if you relied on its behavior to skip zero-length matches.

    You will need to check existing applications that use TRegEx for regular expressions that incorrectly allow zero-length matches. In XE5 and prior, TRegEx using \d* would match all numbers in a string. In XE6, the same regex still matches all numbers, but also finds a zero-length match at each position in the string. RegexBuddy 4 warns about zero-length matches on the Create panel if you set it to Detailed mode. At the bottom of the regex tree there will be a node saying either “your regular expression may find zero-length matches” or “zero-length matches will be skipped” depending on whether your application allows zero-length matches (XE6 TRegEx) or not (XE–XE5 TRegEx).
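    For readers who want to see the zero-length match behavior in action, here is a small demonstration using .NET's Regex class in C# (chosen purely for illustration; Delphi's TRegEx wraps PCRE, a different engine, but the concept is the same). It shows (?m)^ matching at the start of each line and \d* producing zero-length matches in between the "real" matches:

        using System;
        using System.Text.RegularExpressions;

        class ZeroLengthDemo
        {
            static void Main()
            {
                // (?m)^ finds a zero-length match at the start of each line.
                foreach (Match m in Regex.Matches("line1\nline2", "(?m)^"))
                    Console.WriteLine("Zero-length match at index " + m.Index); // 0 and 6

                // \d* matches "12", but also a zero-length string at every other
                // position. This is the behavior XE6's TRegEx now exposes, which
                // is why migrated code using patterns like \d* needs reviewing.
                foreach (Match m in Regex.Matches("a12b", @"\d*"))
                    Console.WriteLine("'" + m.Value + "' at index " + m.Index);
            }
        }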

    Read the article

  • Using Pandora in Boxee

    - by Mysticgeek
    Boxee is a very cool multimedia app that lets you access and stream your digital media in many different ways. There are also a lot of extra apps included with it, and today we take a look at the Pandora application in Boxee. Pandora has been a favorite free music streaming service for some time now. Though newer services like Grooveshark and Spotify are competing, Pandora is still a reliable choice. It's now included in Boxee, and here we take a look at using it.

    Create a Pandora Account

    If you don't already have a Pandora account, you can easily create one at their website (link below).

    Pandora in Boxee

    To start using Pandora from Boxee, launch Boxee and from the main menu select Apps. Now from the My Apps section select Pandora. When the Pandora app menu comes up, select Start. Now you need to log into your Pandora account. After signing in you can start listening to your stations and viewing artist info and cover art, all while enjoying some cool visuals in the background.

    From the controls at the top you can control playback, skip songs, control volume, get information on why a song was picked, and give a song a thumbs up or down. Of course you can also pull up your stations, switch between them, and add more. The same features you've come to expect from Pandora are available. One thing we noticed missing is not being able to click on the band or artist to get additional information about them, which you can do on the Pandora site and desktop app. But that isn't a deal breaker by any means, and we're hoping the feature will be added in the future. Then while you're checking out other apps, shows, and settings within Boxee, the cool visuals continue and the songs from your stations keep playing.

    Conclusion

    Pandora is a great streaming music service and a welcome addition to Boxee. If you're a fan of Pandora, now you can listen to it on your home theater system. If you're new to Boxee, make sure to check out our article on getting started with Boxee.

    Create a Pandora Account
    Download Boxee

    Read the article

  • Silverlight User Group of Switzerland (SLUGS)

    - by Laurent Bugnion
    Last Thursday, the Silverlight Firestarter event took place in Redmond and was streamed live to a large audience worldwide (around 20’000 people). Approximately 30 of them were in Wallisellen near Zurich, in Microsoft Switzerland’s offices. This was not only a great occasion to learn more about the future of Silverlight and to see great demos, it was also the very first meeting of the Silverlight User Group of Switzerland (SLUGS).

    Having 30 people at a first meeting was a great success, especially considering that it was REALLY cold that night and that it had snowed 20 cm the night before! We all had a good time, and 3 lucky winners went back home with a prize: one LG Optimus 7 Windows Phone and two copies of Silverlight 4 Unleashed. Congratulations to the winners!

    After the keynote (which went by in a whirlwind, shortest 90 minutes ever!), we all had pizza and beverages generously sponsored by the Swiss DPE team, of which no fewer than 5 guys came to the event. Thanks to Stefano, Ronnie, Sascha, Big Mike and Ken for attending!

    We decided to have meetings every month. Stay tuned for announcements on when and where the events will take place. We are also in the process of creating various groups online where attendees can find more information. For instance, I created a group on Flickr where the pictures taken at events will be published. The group is public, and the pictures of the first event are already online! We also have the already known page at http://www.slugs.ch/, check it out.

    A national group: Even though the first event was in Zurich, and 3 of the founding members live nearby, we would like to try to be a national group. That means having events sometimes in other parts of Switzerland, collaborating with other local user groups, etc. Stay tuned for more!

    Join! We want you, we need you: If you are doing Silverlight, for a living or as a hobby, or if you are interested in user experience, XAML, Expression Blend and many more topics, you should consider joining! This is a great occasion to exchange experiences, to learn from Silverlight experts, and to hear sessions about various topics related to Silverlight. If you want to talk about a topic that is of interest to you, if you want to propose a topic of discussion, or if you just want to hang out, then go to http://www.slugs.ch and register!

    Cheers, Laurent

    Read the article

  • Windows Azure Use Case: Fast Acquisitions

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Many organizations absorb, take over or merge with other organizations. In these cases, one of the most difficult parts of the process is merging or changing the IT systems that the employees use to do their work, process payments, and even get paid. Normally the two companies have disparate systems, and several approaches can be used to bridge the technology between the two organizations. An organization may choose to retain both systems and manage them separately; the advantage here is speed, and keeping the profit/loss sheets separate. Another choice is to slowly "sunset" or stop using one organization's system, cutting over to the other system immediately or at a later date. Although a popular choice, one of the most difficult methods is to extract data and processes from one system and import them into the other: employees on the transitioning system have to be trained on the new one, the data must be examined and cleansed, and there is inevitable disruption when this happens. Still another option is to integrate the systems. This may prove to be as much work as a transitional strategy, but may have less impact on the users or the balance sheet.

    Implementation: A distributed computing paradigm can be a good strategic solution for most of these strategies. Retaining both systems is made simpler by allowing the users at the second organization immediate access to the new system, because security accounts can be created quickly inside an application; there is no need to set up a VPN or any connection other than the Internet. Having the users stop using one system and start with the other is also simple in Windows Azure, for the same reason. Extracting data to Azure holds the same limitations as an on-premise system, and may even be more problematic because of the large data transfers that might be required. In a distributed environment you pay for the data transfer, so a mixed migration strategy is not recommended; however, if the data is slowly migrated over time with a defined cutover, this can be an effective strategy. If done properly, an integration strategy works very well in a distributed computing environment like Windows Azure. If the Azure code is architected as a series of services, then endpoints can expose the services into and out of not only the Azure platform, but internally as well. This is a form of the Hybrid Application use case documented here.

    References: Designing for Cloud Optimized Architecture: http://blogs.msdn.com/b/dachou/archive/2011/01/23/designing-for-cloud-optimized-architecture.aspx 5 Enterprise steps for adopting a Platform as a Service: http://blogs.msdn.com/b/davidmcg/archive/2010/12/02/5-enterprise-steps-for-adopting-a-platform-as-a-service.aspx?wa=wsignin1.0
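
    To make the integration option less abstract, here is a hedged sketch of what "a series of services" might look like. The contract below is entirely hypothetical (the names IPayrollBridge, EmployeeRecord and PaymentRequest are invented for illustration); the point is simply that one WCF-style contract can be exposed on both an Internet-facing Azure endpoint and an internal one:

        // Hypothetical WCF-style service contract for bridging two merged
        // organizations' systems. All names here are invented for this sketch.
        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class EmployeeRecord
        {
            [DataMember] public string Id;
            [DataMember] public string Name;
        }

        [DataContract]
        public class PaymentRequest
        {
            [DataMember] public string EmployeeId;
            [DataMember] public decimal Amount;
        }

        [ServiceContract]
        public interface IPayrollBridge
        {
            // The same contract can be bound to an Azure-facing endpoint for the
            // acquired organization and to an internal endpoint for the acquirer.
            [OperationContract]
            EmployeeRecord GetEmployee(string id);

            [OperationContract]
            void SubmitPayment(PaymentRequest request);
        }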

    Read the article

  • LLBLGen Pro feature highlights: model views

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system.)

    To be able to work with large(r) models, it's key that you can view subsets of them, so you can take a better, more focused look at them, for example because you want to display how a subset of entities relate to one another in a different way than the list of entities. LLBLGen Pro offers this in the form of Model Views. Model Views are views on parts of the entity model of a project, and the subsets are displayed in a graphical way. Additionally, one can add documentation to a Model View.

    As Model Views display parts of the model graphically, they're easier to explain to people who aren't familiar with entity models, e.g. the stakeholders you're interviewing for your project. The documentation can then be used to communicate specifics of the elements on the model view to the developers who have to write the actual code. Below I've included an example: a model view on a subset of the entities of AdventureWorks. It displays several entities, their relationships (both relational and inheritance relationships) and also some specifics gathered from the interview with the stakeholder. As the information is inside the actual project the developer will work with, it doesn't have to be converted to and from e.g. Word documents or other intermediate formats; it's the same project. This makes for fewer errors and misunderstandings. (Of course you can hide the docked documentation pane or dock it to another corner.)

    The Model View can contain entities which are placed in different groups. This makes it ideal for grouping entities together for close examination even though they're stored in different groups. The Model View is also a first-class citizen of the code generator. This means you can write templates which consume Model Views and generate code accordingly; e.g. you can write a template which generates a service per Model View and exposes the entities in the Model View as a single entity graph, fetched through a method. (This template isn't included in the LLBLGen Pro package, but it's easy to write it yourself with the built-in template editor.)

    Viewing an entity model in different ways is key to fully understanding it, and Model Views help with that.

    Read the article

  • NHibernate Pitfalls: Fetch and Paging

    - by Ricardo Peres
    This is part of a series of posts about NHibernate Pitfalls. See the entire collection here.

    NHibernate allows you to force loading of additional references (many-to-one, one-to-one) or collections (one-to-many, many-to-many) in a query. You must know, however, that this is incompatible with paging. It’s easy to see why. Let’s say you want to get 5 products starting at the fifth; you can issue the following LINQ query:

        session.Query<Product>().Take(5).Skip(5).ToList();

    This will produce the following SQL in SQL Server:

        SELECT TOP (@p0) product1_4_, name4_, price4_
        FROM
            (select product0_.product_id as product1_4_,
                    product0_.name as name4_,
                    product0_.price as price4_,
                    ROW_NUMBER() OVER(ORDER BY CURRENT_TIMESTAMP) as __hibernate_sort_row
             from product product0_) as query
        WHERE query.__hibernate_sort_row > @p1
        ORDER BY query.__hibernate_sort_row;

    If, however, you also wanted to bring along the associated order details, you might be tempted to try this:

        session.Query<Product>().Fetch(x => x.OrderDetails).Take(5).Skip(5).ToList();

    Which, in turn, will produce this SQL:

        SELECT TOP (@p0) product1_4_0_, order1_3_1_, name4_0_, price4_0_,
               order2_3_1_, product3_3_1_, quantity3_1_, product3_0__, order1_0__
        FROM
            (select product0_.product_id as product1_4_0_,
                    orderdetai1_.order_detail_id as order1_3_1_,
                    product0_.name as name4_0_,
                    product0_.price as price4_0_,
                    orderdetai1_.order_id as order2_3_1_,
                    orderdetai1_.product_id as product3_3_1_,
                    orderdetai1_.quantity as quantity3_1_,
                    orderdetai1_.product_id as product3_0__,
                    orderdetai1_.order_detail_id as order1_0__,
                    ROW_NUMBER() OVER(ORDER BY CURRENT_TIMESTAMP) as __hibernate_sort_row
             from product product0_
             left outer join order_detail orderdetai1_
                 on product0_.product_id=orderdetai1_.product_id) as query
        WHERE query.__hibernate_sort_row > @p1
        ORDER BY query.__hibernate_sort_row;

    However, because of the JOIN, if your products have more than one order detail, you will get several records per product (one per order detail), which means that pagination will be broken.

    There is a workaround, which forces you to write your LINQ query in another way:

        session.Query<OrderDetail>()
            .Where(x => session.Query<Product>()
                .Select(y => y.ProductId)
                .Take(5).Skip(5)
                .Contains(x.Product.ProductId))
            .Select(x => x.Product)
            .ToList();

    Or, using HQL:

        session.CreateQuery("select od.Product from OrderDetail od where od.Product.ProductId in (select p.ProductId from Product p skip 5 take 5)").List<Product>();

    The generated SQL will then be:

        select product1_.product_id as product1_4_,
               product1_.name as name4_,
               product1_.price as price4_
        from order_detail orderdetai0_
        left outer join product product1_
            on orderdetai0_.product_id=product1_.product_id
        where orderdetai0_.product_id in (
            SELECT TOP (@p0) product_id
            FROM
                (select product2_.product_id,
                        ROW_NUMBER() OVER(ORDER BY CURRENT_TIMESTAMP) as __hibernate_sort_row
                 from product product2_) as query
            WHERE query.__hibernate_sort_row > @p1
            ORDER BY query.__hibernate_sort_row);

    This will get you what you want: for 5 products, all of their order details.

    Read the article

  • Large File Upload in SharePoint 2010

    - by Sahil Malik
    Okay, this is a big BIG B-I-G problem. And with SP2010 it’s going to be more prominent, because, at least on the server side, SharePoint can support large files much, much better than SharePoint 2007 ever did. The issues with very large files being uploaded through any browser-based API are:

    - Reliably transferring gigabyte or bigger files without breakages over a protocol like HTTP, which is better suited to tiny transfers like images and text.
    - Not killing your browser, because it has to load all of that in memory.
    - Not killing your web server, because everything you upload through an HTTP POST first gets streamed into IIS memory (w3wp.exe memory) before the ENTIRE FILE finishes uploading and is stored. Which means: you cannot show an accurate and live progress bar of the upload, because IIS gives you no accurate metric of an upload (all the counters it gives you are approximate); your w3wp.exe eats up all server memory (4 GB of it, for a 4 GB upload); and a thread is kept busy for the entire duration of the upload, greatly limiting your web server’s capability to serve new requests and killing effective load balancing.
    - Not killing your content database, because a very large file gets written sequentially into the DB as it uploads, which severely impacts database performance.

    I had put together another video showing RBS usage in SharePoint 2010, in which I talked about many practical ramifications of using RBS in SharePoint. Note that enabling large file support will never be a point-and-click job, simply because there are too many questions one needs to ask and too many things one needs to plan for. However, one part that remains common across all large file upload scenarios, in SharePoint or outside of it, is doing the upload efficiently while not killing the web server. In this video, I describe using the Telerik Silverlight Upload control with SharePoint 2010 to enable efficient large file uploads in SharePoint. Presenting: the video.
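
    The efficiency point generalizes beyond any one control. As a hedged illustration only (this is not the Telerik control’s API nor a SharePoint API; the endpoint and the X-Chunk-Offset header are invented for the sketch), a client can keep memory flat on both ends by posting a large file in fixed-size chunks:

        // Generic chunked-upload sketch in C#. Assumptions: a server endpoint
        // exists that accepts sequential chunks and reassembles them; the
        // "X-Chunk-Offset" header naming is hypothetical.
        using System;
        using System.IO;
        using System.Net.Http;
        using System.Threading.Tasks;

        class ChunkedUploader
        {
            const int ChunkSize = 4 * 1024 * 1024; // 4 MB per request

            static async Task UploadAsync(string path, string endpoint)
            {
                using var client = new HttpClient();
                using var file = File.OpenRead(path);
                var buffer = new byte[ChunkSize];
                long offset = 0;
                int read;
                while ((read = await file.ReadAsync(buffer, 0, ChunkSize)) > 0)
                {
                    using var content = new ByteArrayContent(buffer, 0, read);
                    // Tell the (hypothetical) server where this chunk belongs.
                    content.Headers.TryAddWithoutValidation("X-Chunk-Offset", offset.ToString());
                    var response = await client.PostAsync(endpoint, content);
                    response.EnsureSuccessStatusCode();
                    offset += read;
                    // Progress is now exact, because the client controls the chunking.
                    Console.WriteLine($"Uploaded {offset:N0} bytes");
                }
            }
        }

    Because only one chunk is ever in flight, neither the browser-side buffer nor w3wp.exe memory grows with file size, and no single server thread is pinned for the whole transfer.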

    Read the article

  • Need to Know

    - by Tony Davis
    Sometimes, I wonder whether writers of documentation, tutorials and articles stop to ask themselves one very important question: does the reader really need to know this?

    I recently took on the task of writing a concise series of articles about the transaction log: what it is, how it works and why it's important. It was an enjoyable task; rather like peering inside a giant, complex clock mechanism. Initially, one sees only the basic components, which work to guarantee the integrity of database transactions, and preserve these transactions so that data can be restored to a previous point in time. On closer inspection, one notices all of the small, arcane mechanisms that are necessary to make this happen: LSNs, virtual log files, log chains, database checkpoints, and so on.

    It was engrossing, escapist stuff; what I'd written looked weighty and steeped in mysterious significance. Suddenly, however, I jolted myself back to reality with the awful thought "does anyone really need to know all this?"

    The driver of a car needs only to be dimly aware of what goes on under the hood, however exciting the mechanism is to the engineer. Similarly, while everyone who uses SQL Server ought to be aware of the transaction log, its role in guaranteeing the ACID properties, and how to control its growth, the intricate mechanisms ticking away under its clock face are a world away from the daily work of the harassed developer. The DBA needs to know more, such as the correct rituals for ensuring optimal performance and data integrity, setting the appropriate growth characteristics, backup routines, restore procedures, and so on. However, even then, the average DBA only needs to understand enough about the arcane processes to spot problems and react appropriately, or to know how to Google for the best way of dealing with them.

    The art of technical writing is tied up in intimate knowledge of your audience and what they need to know at any point. It means serving up just enough at each point to help the reader in a practical way, but not overcooking it, or stuffing the reader with information that does them no good. When I think of the books and articles that have helped me the most, they have been full of brief, practical, and well-informed guidance, based on experience. This seems far removed from the 900-page "beginner's guides" that one now sees everywhere. The more I write and edit, the more I become convinced that the real art of technical communication lies in knowing what to leave out.

    In what areas do the SQL Server technical materials suffer from "information overload"? Where else does it seem that concise, practical advice is drowned out by endless discussion of the "clock mechanisms"? Cheers, Tony.

    Read the article

  • How to upgrade boost lib using apt-get?

    - by sam
    I use Ubuntu 11.04. My boost version:

        sam@sam:~/code/ros/pcl$ apt-cache showpkg libboost-all-dev
        Package: libboost-all-dev
        Versions:
        1.42.0.1ubuntu1 (/var/lib/apt/lists/tw.archive.ubuntu.com_ubuntu_dists_natty_universe_binary-amd64_Packages) (/var/lib/dpkg/status)
        Description Language:
                File: /var/lib/apt/lists/tw.archive.ubuntu.com_ubuntu_dists_natty_universe_binary-amd64_Packages
                 MD5: 72efad05a3c79394c125b79e1d4eb3a7
        Reverse Depends:
          libvtk5-dev,libboost-all-dev
          libfeel++-dev,libboost-all-dev
        Dependencies:
        1.42.0.1ubuntu1 - libboost-dev (0 (null)) libboost-date-time-dev (0 (null)) libboost-filesystem-dev (0 (null)) libboost-graph-dev (0 (null)) libboost-iostreams-dev (0 (null)) libboost-math-dev (0 (null)) libboost-program-options-dev (0 (null)) libboost-python-dev (0 (null)) libboost-regex-dev (0 (null)) libboost-serialization-dev (0 (null)) libboost-signals-dev (0 (null)) libboost-system-dev (0 (null)) libboost-test-dev (0 (null)) libboost-thread-dev (0 (null)) libboost-wave-dev (0 (null))
        Provides:
        1.42.0.1ubuntu1 -
        Reverse Provides:
        sam@sam:~/code/ros/pcl$

    How can I upgrade boost to 1.44+ using apt tools? Thank you~

    When I run apt-add-repository, it shows:

        sam@sam:~/code/ros/pcl$ sudo apt-add-repository ppa:timklingt/ppa
        Error reading https://launchpad.net/api/1.0/~timklingt/+archive/ppa: GnuTLS recv error (-9): A TLS packet with unexpected length was received.
        sam@sam:~/code/ros/pcl$

    How do I fix it? Thank you~

    I tried to install libboost1.46-all-dev:

        sam@sam:~/code/ros/pcl$ sudo apt-get install libboost1.46-all-dev
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         libboost1.46-all-dev : Depends: libboost1.46-dev but it is not going to be installed
                                Depends: libboost-date-time1.46-dev but it is not going to be installed
                                Depends: libboost-filesystem1.46-dev but it is not going to be installed
                                Depends: libboost-graph1.46-dev but it is not going to be installed
                                Depends: libboost-iostreams1.46-dev but it is not going to be installed
                                Depends: libboost-math1.46-dev but it is not going to be installed
                                Depends: libboost-program-options1.46-dev but it is not going to be installed
                                Depends: libboost-python1.46-dev but it is not going to be installed
                                Depends: libboost-regex1.46-dev but it is not going to be installed
                                Depends: libboost-serialization1.46-dev but it is not going to be installed
                                Depends: libboost-signals1.46-dev but it is not going to be installed
                                Depends: libboost-system1.46-dev but it is not going to be installed
                                Depends: libboost-test1.46-dev but it is not going to be installed
                                Depends: libboost-thread1.46-dev but it is not going to be installed
                                Depends: libboost-wave1.46-dev but it is not going to be installed
        E: Broken packages
        sam@sam:~/code/ros/pcl$

    What do these errors mean? And how do I solve them? Thank you~

    Read the article

  • Tom Cruise: Meet Fusion Apps UX and Feel the Speed

    - by ultan o'broin
    Unfortunately, I am old enough to remember, and now to admit that I really loved, the movie Top Gun. You know the one: Tom Cruise, US Navy F-14 ace pilot, Mr Maverick, crisis of confidence, meets woman, etc., etc. Anyway, one of the more memorable lines (there were a few) was: "I feel the need, the need for speed."

    I was reminded of Tom Cruise recently. Paraphrasing a certain Senior Vice President talking about Oracle Fusion Applications and user experience at an all-hands meeting, I heard that: Applications can never be too easy to use. Performance can never be too fast. Developers, assume that your code is always "on". Perfect. You cannot overstate the user experience importance of application speed to users, or at least their perception of speed. We all want that super speed of execution and performance, and increasingly so as enterprise users bring the expectations of consumer IT into the work environment.

    Sten Vesterli (@stenvesterli), an Oracle Fusion Applications User Experience Advocate, also addressed the speed point artfully at an Oracle Usability Advisory Board meeting in Geneva. Sten asked us, the next time we Google something, to think about the message telling us that Google has found hundreds of thousands or millions of results for us in a split second (for example, About 8,340,000 results (0.23 seconds)). Now, how many results can we see and how many can we use immediately? Yet this simple message, communicating the total results available to us, works a special magic about speed, delight, and excitement that Google has made its own in the search space. And, guess what? The Oracle Application Development Framework table component relies on a similar "virtual performance boost", says Sten, when it displays the first 50 records in a table and uses a scrollbar indicating the total size of the data record set. The user scrolls, and the application automatically retrieves more records as needed.

    Application speed, and its perception by users, is worth bearing in mind the next time you're at a customer site and the IT Department demands that you retrieve every record from the database. Just think of... Dave Ensor: "I'll give you all the rows you ask for in one second. If you promise to use them." (Again, hat tip to Sten.) And then maybe think of... Tom Cruise. And if you want to read about the speed of Oracle Fusion Applications, and what that really means in terms of user productivity for your entire business, then check out the Oracle Applications User Experience Oracle Fusion Applications white papers on the usable apps website.
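
    The ADF pattern Sten describes (report the full row count immediately, materialize rows only as the user scrolls) is simple to sketch. The C# below is purely illustrative and assumes nothing about ADF's actual API; the class name and the fetchPage delegate are invented for the example:

        // Hypothetical lazy "table model": it reports the total row count up
        // front (so a scrollbar can be sized correctly), but fetches rows one
        // page at a time, only when they are first asked for.
        using System;
        using System.Collections.Generic;

        class LazyTable<T>
        {
            const int PageSize = 50;
            readonly Func<int, int, IReadOnlyList<T>> fetchPage; // (offset, count) -> rows
            readonly Dictionary<int, IReadOnlyList<T>> cache = new Dictionary<int, IReadOnlyList<T>>();

            public int TotalRows { get; }

            public LazyTable(int totalRows, Func<int, int, IReadOnlyList<T>> fetchPage)
            {
                TotalRows = totalRows;
                this.fetchPage = fetchPage;
            }

            public T RowAt(int index)
            {
                if (index < 0 || index >= TotalRows)
                    throw new ArgumentOutOfRangeException(nameof(index));
                int page = index / PageSize;
                if (!cache.TryGetValue(page, out var rows))
                    cache[page] = rows = fetchPage(page * PageSize, PageSize);
                return rows[index % PageSize];
            }
        }

    The user perceives the full data set as instantly available, while the backing store only ever serves the pages actually viewed; the same perceptual trick as Google's instant result count.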

    Read the article

  • links for 2010-04-27

    - by Bob Rhubart
    @oracletechnet: Oracle Technology Network Newsletters Revisited
    "You may find this hard to believe, but some analysts contend that email newsletters are still among the most preferred methods of 'information awareness' by developers today. And in our experience, the numbers back it up: subscriptions to Oracle Technology Network newsletters grow organically by 15% every year, even after you take continual list cleanup into account." -- Justin Kestelyn
    (tags: oracle otn newsletters developers architects)

    Sylvain Duloutre: Directory Services as a Web Service
    Sylvain Duloutre shares a WSDL file he created to deal with issues involved in XML binding generation.
    (tags: oracle sun wsdl webservices DSEE netbeans jdeveloper)

    Nick Wooler: Iron-Clad Cloud: Secure Cloud Computing
    "One solution to the security problem with cloud services can be overcome using Service Oriented Security. The Oracle approach to using Service Oriented Security allows developers to pull from a centralized, authoritative source of identity services. This allows developers to build security into every application from the inside-out. This is critical to ensuring this is done in a standardized manner and most importantly it allows developers to develop without being security experts." -- Nick Wooler
    (tags: oracle sun security cloud saas)

    Andy Mulholland: A week of visits; Cisco, HP, Oracle, SAP and VMware (in alphabetical order!)
    "I now am considering that we should be thinking about 'clouds' in a virtual way, by which I mean that a succession of virtual 'clouds' will need to exist, each possessing specific characteristics that suit certain types of services. Really it's no different to what we see with servers today. Adding a hypervisor to a server adds new flexibility, but creating a virtualised environment means much more. What I suspect will happen is that we will start to use vendor specific approaches to building what I will term a physical cloud solution using their technology and approach to supporting a specific objective, but with time we will find these physical clouds will interoperate as a fully virtualised cloud environment." -- Andy Mulholland
    (tags: entarch enterprisearchitecture cloudcomputing virtualization)

    @fteter: Highlights From The Bright Lights - Tuesday #c10
    Oracle Ace Director Floyd Teter of JPL with one last wrap-up of Collaborate 10.
    (tags: oracle otn collaborate2010 las vegas)

    Rittman Mead India – Call for very good Oracle BI Developers/Architects
    "Now that we have an office in India and if you are interested in joining us, do drop us a line at [email protected], and we will be glad to have technical discussions with you. If you are also an Oracle BI, DW or EPM customer looking for help on projects in the Asia-Pacific region, again we'll be pleased to hear from you and to let you know how we can help." -- Venkatakrishnan J
    (tags: otn oracle jobs india developers architects software)

    Read the article

  • Is Nick Clegg a man or a mouse?

    - by BizTalk Visionary
    Well, we got the hung election so many of us wanted! I believe it really is time for electoral change. Why? Consider: the ConMen under Cameroon have polled 36% of the great British voting public – well, those that got to vote!! That means 64% of us don’t want him as PM. So what gives him the right to govern? An ancient voting system ideal for two-party politics. But for the last 30 years we’ve had multi-party politics, and going forward we may see 4 or 5 parties stepping up. We have to set in place a system that makes this work!

    So what does that mean today? Nick has a golden chance to push forward the case, and in fact the absolute right, for the change. He needs to keep this in mind when he discusses coalition with both Labour and the ConMen.

    The mouse approach: decide it is only fair to side with the ‘biggest’ vote and team up with the ConMen. Chances of electoral change? Big fat zero. Chance of achieving any of his other targets? Big fat zero. Why? Simple (as the Meerkat would say). Cameroon needs to become PM by hook or by crook. Once PM, he holds the whip hand. Labour will dump Brown and head off into leadership-race land, Clegg will be knocking on Number 10, having meaningless meetings and seeing no reward. Finally, while Labour is at 6’s and 7’s, the ‘new’ PM will call a new election, gain the majority they need and dump luckless Nick!!

    The man approach: team up with Labour. As one of the conditions, Brown to go. Run a referendum for PR. Get PR through, then force Labour to have a new election under PR. Nick is now a hero and should be in a much better place following a PR election!! The man bit is standing up to the media attack for supporting Labour.

    Come Nick – be a man for a better Britain!!

    Read the article

< Previous Page | 227 228 229 230 231 232 233 234 235 236 237 238  | Next Page >