Search Results

Search found 1699 results on 68 pages for 'skip'.


  • CodePlex Daily Summary for Tuesday, October 09, 2012

    CodePlex Daily Summary for Tuesday, October 09, 2012Popular ReleasesScript SQL Server Configuration: Release 3.0.9: Release 3.0.9 Rewrote trigger scripting. If encrypted triggers are encountered they are listed in a commented line. Scripts event notifications Added an option to script a single database (in addition to the instance) using the /scriptdb parameter. Script user-defined end points Script Service Broker objects Skip database mail on Express EditionMicrosoft Ajax Minifier: Microsoft Ajax Minifier 4.69: Fix for issue #18766: build task should not build the output if it's newer than all the input files. Fix for Issue #18764: build taks -res switch not working. update build task to concatenate input source and then minify, rather than minify and then concatenate. include resource string-replacement root name in the assumed globals list. Stop replacing new Date().getTime() with +new Date -- the latter is smaller, but turns out it executes up to 45% slower. add CSS support for single-...D3 Loot Tracker: 1.5.2: now recording server ip for each drop.WinRT XAML Toolkit: WinRT XAML Toolkit - 1.3.3: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features Attachable Behaviors AwaitableUI extensions Controls Converters Debugging helpers Extension methods Imaging helpers IO helpers VisualTree helpers Samples Recent changes NOTE:...DevLib: 69721 binary dll: 69721 binary dllVidCoder: 1.4.4 Beta: Fixed inability to create new presets with "Save As".MCEBuddy 2.x: MCEBuddy 2.3.2: Changelog for 2.3.2 (32bit and 64bit) 1. Added support for generating XBMC XML NFO files for files in the conversion queue (store it along with the source video with source video name.nfo). Right click on the file in queue and select generate XML 2. UI bugifx, start and end trim box locations interchanged 3. Added support for removing commercials from non DVRMS/WTV files (MP4, AVI etc) 4. Now checking for Firewall port status before enabling (might help with some firewall problems) 5. User In...Sandcastle Help File Builder: SHFB v1.9.5.0 with Visual Studio Package: General InformationIMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This release supports the Sandcastle October 2012 Release (v2.7.1.0). It includes full support for generating, installing, and removing MS Help Viewer files. This new release suppor...Sofire Suite v1.6: XSqlModelGenerator.AddIn: 1、?? VS2010/2012 2、?? .NET FRAMEWORK 2.0 3、?? SOFIRE V1.6ClosedXML - The easy way to OpenXML: ClosedXML 0.68.0: ClosedXML now resolves formulas! Yes it finally happened. If you call cell.Value and it has a formula the library will try to evaluate the formula and give you the result. 
For example: var wb = new XLWorkbook(); var ws = wb.AddWorksheet("Sheet1"); ws.Cell("A1").SetValue(1).CellBelow().SetValue(1); ws.Cell("B1").SetValue(1).CellBelow().SetValue(1); ws.Cell("C1").FormulaA1 = "\"The total value is: \" & SUM(A1:B2)"; var...Json.NET: Json.NET 4.5 Release 10: New feature - Added Portable build to NuGet package New feature - Added GetValue and TryGetValue with StringComparison to JObject Change - Improved duplicate object reference id error message Fix - Fixed error when comparing empty JObjects Fix - Fixed SecAnnotate warnings Fix - Fixed error when comparing DateTime JValue with a DateTimeOffset JValue Fix - Fixed serializer sometimes not using DateParseHandling setting Fix - Fixed error in JsonWriter.WriteToken when writing a DateT...Readable Passphrase Generator: KeePass Plugin 0.7.2: Changes: Tested against KeePass 2.20.1 Tested under Ubuntu 12.10 (and KeePass 2.20) Added GenerateAsUtf8 method returning the encrypted passphrase as a UTF8 byte array.patterns & practices: Prism: Prism for .NET 4.5: This is a release does not include any functionality changes over Prism 4.1 Desktop. These assemblies target .NET 4.5. These assemblies also were compiled against updated dependencies: Unity 3.0 and Common Service Locator (Portable Class Library).Snoop, the WPF Spy Utility: Snoop 2.8.0: Snoop 2.8.0Announcing Snoop 2.8.0! It's been exactly six months since the last release, and this one has a bunch of goodies in it. In particular, there is now a PowerShell scripting tab, compliments of Bailey Ling. With this tab, the possibilities are limitless. It basically lets you automate/script the application that you are Snooping. Bailey has a couple blog posts (one and two) on his tab already, and I am sure more is to come. Please note that if you do not have PowerShell installed, y...Z3: Z3 4.1.2: Minor fixes. Now, z3 compiles with gcc 4.7.x.NET Micro Framework: .NET MF 4.3 (Beta): This is the 4.3 Beta version of the .NET Micro Framework. Feature List for v4.3 Support for Visual Studio 2012 (including the Windows Desktop Express version) All v4.2 QFEs features and bug fixes (PWM enhancements, lwIP and network driver reliability improvements, Analog Output, WinUSB and latest GCC support) Improved diagnostic information for deployment Decreased boot time Bug fixes Work Item 1736 - Create link for MFDeploy under start menu Work Item 1504 - Customizing lwIP o...NTCPMSG: V1.2.0.0: Allocate an identify cableid for each single connection cable. * Server can asend to specified cableid directly.Team Foundation Server Word Add-in: Version 1.0.12.0622: Welcome to the Visual Studio Team Foundation Server Word Add-in Supported Environments Microsoft Office Word 2007 and 2010 X86 (32-bit) Team Foundation Server 2010 Object Model TFS 2010, 2012 and TFS Service supported, using TFS OM / Explorer 2010. Quality-Bar Details Tool has been reviewed by Visual Studio ALM Rangers Tool has been through an independent technical and quality review All critical bugs have been resolved Known Issues / Bugs WI#43553 - The Acceptance criteria is not pu...UMD????? - PC?: UMDEditor?????V2.7: ??:http://jianyun.org/archives/948.html =============================================================================== UMD??? ???? =============================================================================== 2.7.0 (2012-10-3) ???????“UMD???.exe”??“UMDEditor.exe” ?????????;????????,??????。??????,????! ??64????,??????????????bug ?????????????,???? ???????????????? 
???????????????,??????????bug ------------------------------------------------------- ?? reg.bat ????????????。 ????,??????txt/u...Untangler: Untangler 1.0.0: Add a missing file from first releaseNew Projects3dBuzz: Denise's website for 3d projection mapping artists to develop a web community to show and discuss their work.Advanced DataGridView with Excel-like auto filter: Windows Forms DataGridView Control with Excel-Like auto-filter context menu Windows Forms DataGridView ??????? ? Excel-???????? ????-????????azure media services admin panel: Simple azure media services dashboard. Upload media assets, queue encoder tasks and stream your audio\video assets.BackupCleaner.Net: A C#.Net-based tool for automatically removing old backups selectively. It can be used when you already make daily backups on disk, and want to clean them up while still keeping some of the ones made weeks, months or years ago.Bible Lib: BibleLib is a .net compatible library containing information for books, chapters and verses in the Old Testament.CRM 4.0 to CRM 2011 Queues Upgrades: for more information and documentation, please refer to http://mayankp.wordpress.com/2012/05/25/crm-4-0-to-crm-2011-queues-upgrades/ CSV File Reader and Writer for .NET: C# classes for reading and writing CSV files. Support for multi-line fields, custom delimiter and quote characters, options for how empty lines are handled.DotNetNuke Boards: DNN Boards is a task management solution that allows each user to have their own task board or social group members can collaborate within a single board.FarmaciasCruzAzul: Sistema Cruz AzulFindValueInDatabase: This solution finds the value you input in the hole given database and tells you which columns of which tables have the value you are looking for.Fish Tank: Social networking site for Aquarium enthusiasts.GSBA Apps: GSBA App for Windows 8Heuristics for the Vehicle Routing Problem: This is the code for our class, IEMS 482.Icaro - UPN: Icaro UPN es la recopilación de todos los preyectos realizados en clases de los diferentes proyectos que Enseño, espero les Sirva.JSLogger - free logging library for JavaScript: JSLogger is a free JavaScript Library for log information during the duration of your client script. The target is very easy >>Find every javasvript error<<Just little strategy: Just little strategy writen in C# using XNA Game Studio Now under developmentLightweight Medata Reader: The Lightweight Metadata Reader (LMR) is a tooling friendly version of the CLR Reflection APIs that takes no dependency on the CLR Loader.My Solution: No description for it nowoden????: ???????????????^^ooaavee.net: TODOPath Splitter: Path Splitter uses Roslyn to convert a method into a set of methods each equivalent to a distinct execution path. Assume annotations are added for use with Pex.Planning Poker for Azure: Planning Poker application allows distributed teams to play planning poker just using web browser. 
It can be deployed to IIS or Azure cloud service.plasmatrim.net: A library for controlling the USB PlasmaTrim from a .net application.powersaver: A simple utility, which will turn off your monitor when you lock your work station.Project13251008: gterProject13261008: asdsaProject13271008: sdfPyfus Reload: Server-side framework for Dofus 1.29.1 gameRei do Biscuit: E-commerce do rei do biscuitSofire Suite v1.6: Sofire Suite v1.6Sofire XSql: Sofire XSqlSosa Analysis Module (SAM): Numerical Analysis and data visualization program for SOSA psychological experiment software.SyncProject: The summary is coming soon... Just be patient ! Tistory Syntax Highlighter: ???? SyntaxHighlighter ????TJBHJJ: this is a website for company!tricogol App: nonetuts: My first SVNWatchr Change Control Management: Change control management for development and administrative teams focused on lean or agile processes.WCF Simple multi chat: This is a simple Windows form application that it's like a 'chat room'. Multiple users can login and chat with others users. The chat's core is powered by WCFWindows Event Log Email Notification: This will read the windows event logs based on the supplied xpath filters and email the specified addresses if there are matches.

    Read the article

  • Office 2010: It’s not just DOC(X) and XLS(X)

    - by andrewbrust
    Office 2010 has released to manufacturing.  The bits have left the (product team’s) building.  Will you upgrade? This version of Office is officially numbered 14, a designation that correlates with the various releases, through the years, of Microsoft Word.  There were six major versions of Word for DOS, during whose release cycles came three 16-bit Windows versions.  Then, starting with Word 95 and counting through Word 2007, there have been six more versions – all for the 32-bit Windows platform.  Skip version 13 to ward off folksy bad luck (and, perhaps, the bugs that could come with it) and that brings us to version 14, which includes implementations for both 32- and 64-bit Windows platforms.  We’ve come a long way, baby.  Or have we? As it does every three years or so, debate will now start to rage over whether we need a “14th” version of the PC platform’s standard word processor, or a “13th” version of the spreadsheet.  If you accept the premise of that question, then you may be on a slippery slope toward answering it in the negative.  Thing is, that premise is valid for certain customers and not others. The Microsoft Office product has morphed from one that offered core word processing, spreadsheet, presentation and email functionality to a suite of applications that provides unique, new value-added features, and even whole applications, in the context of those core services.  The core apps thus grow in mission: Excel is a BI tool.  Word is a collaborative editorial system for the production of publications.  PowerPoint is a media production platform for live presentations and, increasingly, for delivering more effective presentations online.  Outlook is a time and task management system.  Access is a rich client front-end for data-driven self-service SharePoint applications.  OneNote helps you capture ideas, corral random thoughts in a semi-structured way, and then tie them back to other, more rigidly structured, Office documents. Google Docs and other cloud productivity platforms like Zoho don’t really do these things.  And there is a growing chorus of voices who say that they shouldn’t, because those ancillary capabilities are over-engineered, over-produced and “under-necessary.”  They might say Microsoft is layering on superfluous capabilities to avoid admitting that Office’s core capabilities, the ones people really need, have become commoditized. It’s hard to take sides in that argument, because different people, and the different companies that employ them, have different needs.  For my own needs, it all comes down to three basic questions: will the new version of Office save me time, will it make the mundane parts of my job easier, and will it augment my services to customers?  I need my time back.  I need to spend more of it with my family, and more of it focusing on my own core capabilities rather than the administrative tasks around them.  And I also need my customers to be able to get more value out of the services I provide. Help me triage my inbox, help me get proposals done more quickly and make them easier to read.  Let me get my presentations done faster, make them more effective and make it easier for me to reuse materials from other presentations.  And, since I’m in the BI and data business, help me and my customers manage data and analytics more easily, both on the desktop and online. Those are my criteria.  And, with those in mind, Office 2010 is looking like a worthwhile upgrade.
Perhaps it’s not earth-shattering, but it offers a combination of incremental improvements and a few new major capabilities that I think are quite compelling.  I provide a brief roundup of them here.  It’s admittedly arbitrary and not comprehensive, but I think it tells the Office 2010 story effectively. Across the Suite More than any other, this release of Office aims to give collaboration a real workout.  In certain apps, for the first time, documents can be opened simultaneously by multiple users, with colleagues’ changes appearing in near real-time.  Web-browser-based versions of Word, Excel, PowerPoint and OneNote will be available to extend collaboration to contributors who are off the corporate network. The ribbon user interface is now more pervasive (for example, it appears in OneNote and in Outlook’s main window).  It’s also customizable, allowing users to add, easily, buttons and options of their choosing, into new tabs, or into new groups within existing tabs. Microsoft has also taken the File menu (which was the “Office Button” menu in the 2007 release) and made it into a full-screen “Backstage” view where document-wide operations, like saving, printing and online publishing are performed. And because, more and more, heavily formatted content is cut and pasted between documents and applications, Office 2010 makes it easier to manage the retention or jettisoning of that formatting right as the paste operation is performed.  That’s much nicer than stripping it off, or adding it back, afterwards. And, speaking of pasting, a number of Office apps now make it especially easy to insert screenshots within their documents.  I know that’s useful to me, because I often document or critique applications and need to show them in action.  For the vast majority of users, I expect that this feature will be more useful for capturing snapshots of Web pages, but we’ll have to see whether this feature becomes popular.   Excel At first glance, Excel 2010 looks and acts nearly identically to the 2007 version.  But additional glances are necessary.  It’s important to understand that lots of people in the working world use Excel as more of a database, analytics and mathematical modeling tool than merely as a spreadsheet.  And it’s also important to understand that Excel wasn’t designed to handle such workloads past a certain scale.  That all changes with this release. The first reason things change is that Excel has been tuned for performance.  It’s been optimized for multi-threaded operation; previously lengthy processes have been shortened, especially for large data sets; more rows and columns are allowed and, for the first time, Excel (and the rest of Office) is available in a 64-bit version.  For Excel, this means users can take advantage of more than the 2GB of memory that the 32-bit version is limited to. On the analysis side, Excel 2010 adds Sparklines (tiny charts that fit into a single cell and can therefore be presented down an entire column or across a row) and Slicers (a more user-friendly filter mechanism for PivotTables and charts, which visually indicates what the filtered state of a given data member is).  But most important, Excel 2010 supports the new PowerPIvot add-in which brings true self-service BI to Office.  PowerPivot allows users to import data from almost anywhere, model it, and then analyze it.  
Rather than forcing users to build “spreadmarts” or use corporate-built data warehouses, PowerPivot models function as true columnar, in-memory OLAP cubes that can accommodate millions of rows of data and deliver fast drill-down performance. And speaking of OLAP, Excel 2010 now supports an important Analysis Services OLAP feature called write-back.  Write-back is especially useful in financial forecasting scenarios for which Excel is the natural home.  Support for write-back is long overdue, but I’m still glad it’s there, because I had almost given up on it.   PowerPoint This version of PowerPoint marks its progression from a presentation tool to a video and photo editing and production tool.  Whether or not it’s successful in this pursuit, and if offering this is even a sensible goal, is another question. Regardless, the new capabilities are kind of interesting.  A greatly enhanced set of slide transitions with 3D effects; in-product photo and video editing; accommodation of embedded videos from services such as YouTube; and the ability to save a presentation as a video each lay testimony to PowerPoint’s transformation into a media tool and away from a pure presentation tool. These capabilities also recognize the importance of the Web as both a source for materials and a channel for disseminating PowerPoint output. Congruent with that is PowerPoint’s new ability to broadcast a slide presentation, using a quickly-generated public URL, without involving the hassle or expense of a Web meeting service like GoToMeeting or Microsoft’s own LiveMeeting.  Slides presented through this broadcast feature retain full color fidelity and transitions and animations are preserved as well.   Outlook Microsoft’s ubiquitous email/calendar/contact/task management tool gains long overdue speed improvements, especially against POP3 email accounts.  Outlook 2010 also supports multiple Exchange accounts, rather than just one; tighter integration with OneNote; and a new Social Connector providing integration with, and presence information from, online social network services like LinkedIn and Facebook (not to mention Windows Live).  A revamped conversation view now includes messages that are part of a given thread regardless of which folder they may be stored in. I don’t know yet how well the Social Connector will work or whether it will keep Outlook relevant to those who live on Facebook and LinkedIn.  But among the other features, there’s very little not to like.   OneNote To me, OneNote is the part of Office that just keeps getting better.  There is one major caveat to this, which I’ll cover in a moment, but let’s first catalog what new stuff OneNote 2010 brings.  The best part of OneNote, is the way each of its versions have managed hierarchy: Notebooks have sections, sections have pages, pages have sub pages, multiple notes can be contained in either, and each note supports infinite levels of indentation.  None of that is new to 2010, but the new version does make creation of pages and subpages easier and also makes simple work out of promoting and demoting pages from sub page to full page status.  And relationships between pages are quite easy to create now: much like a Wiki, simply typing a page’s name in double-square-brackets (“[[…]]”) creates a link to it. OneNote is also great at integrating content outside of its notebooks.  
With a new Dock to Desktop feature, OneNote becomes aware of what window is displayed in the rest of the screen and, if it’s an Office document or a Web page, links the notes you’re typing, at the time, to it.  A single click from your notes later on will bring that same document or Web page back on-screen.  Embedding content from Web pages and elsewhere is also easier.  Using OneNote’s Windows Key+S combination to grab part of the screen now allows you to specify the destination of that bitmap instead of automatically creating a new note in the Unfiled Notes area.  Using the Send to OneNote buttons in Internet Explorer and Outlook result in the same choice. Collaboration gets better too.  Real-time multi-author editing is better accommodated and determining author lineage of particular changes is easily carried out. My one pet peeve with OneNote is the difficulty using it when I’m not one a Windows PC.  OneNote’s main competitor, Evernote, while I believe inferior in terms of features, has client versions for PC, Mac, Windows Mobile, Android, iPhone, iPad and Web browsers.  Since I have an Android phone and an iPad, I am practically forced to use it.  However, the OneNote Web app should help here, as should a forthcoming version of OneNote for Windows Phone 7.  In the mean time, it turns out that using OneNote’s Email Page ribbon button lets you move a OneNote page easily into EverNote (since every EverNote account gets a unique email address for adding notes) and that Evernote’s Email function combined with Outlook’s Send to OneNote button (in the Move group of the ribbon’s Home tab) can achieve the reverse.   Access To me, the big change in Access 2007 was its tight integration with SharePoint lists.  Access 2010 and SharePoint 2010 continue this integration with the introduction of SharePoint’s Access Services.  Much as Excel Services provides a SharePoint-hosted experience for viewing (and now editing) Excel spreadsheet, PivotTable and chart content, Access Services allows for SharePoint browser-hosted editing of Access data within the forms that are built in the Access client itself. To me this makes all kinds of sense.  Although it does beg the question of where to draw the line between Access, InfoPath, SharePoint list maintenance and SharePoint 2010’s new Business Connectivity Services.  Each of these tools provide overlapping data entry and data maintenance functionality. But if you do prefer Access, then you’ll like  things like templates and application parts that make it easier to get off the blank page.  These features help you quickly get tables, forms and reports built out.  To make things look nice, Access even gets its own version of Excel’s Conditional Formatting feature, letting you add data bars and data-driven text formatting.   Word As I said at the beginning of this post, upgrades to Office are about much more than enhancing the suite’s flagship word processing application. So are there any enhancements in Word worth mentioning?  I think so.  The most important one has to be the collaboration features.  Essentially, when a user opens a Word document that is in a SharePoint document library (or Windows Live SkyDrive folder), rather than the whole document being locked, Word has the ability to observe more granular locks on the individual paragraphs being edited.  Word also shows you who’s editing what and its Save function morphs into a sync feature that both saves your changes and loads those made by anyone editing the document concurrently. 
There’s also a new navigation pane that lets you manage sections in your document in much the same way as you manage slides in a PowerPoint deck.  Using the navigation pane, you can reorder sections, insert new ones, or promote and demote sections in the outline hierarchy.  Not earth shattering, but nice.   Other Apps and Summarized Findings What about InfoPath, Publisher, Visio and Project?  I haven’t looked at them yet.  And for this post, I think that’s fine.  While those apps (and, arguably, Access) cater to specific tasks, I think the apps we’ve looked at in this post service the general purpose needs of most users.  And the theme in those 2010 apps is clear: collaboration is key, the Web and productivity are indivisible, and making data and analytics into a self-service amenity is the way to go.  But perhaps most of all, features are still important, as long as they get you through your day faster, rather than adding complexity for its own sake.  I would argue that this is true for just about every product Microsoft makes: users want utility, not complexity.

    Read the article

  • C#/.NET Little Wonders: Interlocked CompareExchange()

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. Two posts ago, I discussed the Interlocked Add(), Increment(), and Decrement() methods (here) for adding and subtracting values in a thread-safe, lightweight manner.  Then, last post I talked about the Interlocked Read() and Exchange() methods (here) for safely and efficiently reading and setting 32 or 64 bit values (or references).  This week, we’ll round out the discussion by talking about the Interlocked CompareExchange() method and how it can be put to use to exchange a value if the current value is what you expected it to be. Dirty reads can lead to bad results Many of the uses of Interlocked that we’ve explored so far have centered around either reading, setting, or adding values.  But what happens if you want to do something more complex such as setting a value based on the previous value in some manner? Perhaps you were creating an application that reads a current balance, applies a deposit, and then saves the new modified balance, where of course you’d want that to happen atomically.  If you read the balance, then go to save the new balance and between that time the previous balance has already changed, you’ll have an issue!  Think about it, if we read the current balance as $400, and we are applying a new deposit of $50.75, but meanwhile someone else deposits $200 and sets the total to $600, but then we write a total of $450.75 we’ve lost $200! Now, certainly for int and long values we can use Interlocked.Add() to handle these cases, and it works well for that.  But what if we want to work with doubles, for example?  Let’s say we wanted to add the numbers from 0 to 99,999 in parallel.  We could do this by spawning several parallel tasks to continuously add to a total: 1: double total = 0; 2:  3: Parallel.For(0, 100000, next => 4: { 5: total += next; 6: }); Were this run on one thread using a standard for loop, we’d expect an answer of 4,999,950,000 (the sum of all numbers from 0 to 99,999).  But when we run this in parallel as written above, we’ll likely get something far off.  The result of one of my runs, for example, was 1,281,880,740.  That is way off!  If this were banking software we’d be in big trouble with our clients.  So what happened?  The += operator is not atomic: it will read in the current value, add the result, then store it back into the total.  At any point in all of this another thread could read a “dirty” current total and accidentally “skip” our add.  So, to clean this up, we could use a lock to guarantee exclusive access: 1: double total = 0.0; 2: object locker = new object(); 3:  4: Parallel.For(0, 100000, next => 5: { 6: lock (locker) 7: { 8: total += next; 9: } 10: }); Which will give us the correct result of 4,999,950,000.  One thing to note is that locking can be heavy, especially if the operation being locked over is trivial, or the life of the lock is a high percentage of the work being performed concurrently.  In the case above, the lock consumes pretty much all of the time of each parallel task – and the task being locked on is relatively trivial. Now, let me put in a disclaimer here before we go further: For most uses, lock is more than sufficient for your needs, and is often the simplest solution!  So, if lock is sufficient for most needs, why would we ever consider another solution?
The problem with locking is that it can suspend execution of your thread while it waits for the signal that the lock is free.  Moreover, if the operation being locked over is trivial, the lock can add a very high level of overhead.  This is why things like Interlocked.Increment() perform so well, instead of locking just to perform an increment, we perform the increment with an atomic, lockless method. As with all things performance related, it’s important to profile before jumping to the conclusion that you should optimize everything in your path.  If your profiling shows that locking is causing a high level of waiting in your application, then it’s time to consider lighter alternatives such as Interlocked. CompareExchange() – Exchange existing value if equal some value So let’s look at how we could use CompareExchange() to solve our problem above.  The general syntax of CompareExchange() is: T CompareExchange<T>(ref T location, T newValue, T expectedValue) If the value in location == expectedValue, then newValue is exchanged.  Either way, the value in location (before exchange) is returned. Actually, CompareExchange() is not one method, but a family of overloaded methods that can take int, long, float, double, pointers, or references.  It cannot take other value types (that is, can’t CompareExchange() two DateTime instances directly).  Also keep in mind that the version that takes any reference type (the generic overload) only checks for reference equality, it does not call any overridden Equals(). So how does this help us?  Well, we can grab the current total, and exchange the new value if total hasn’t changed.  This would look like this: 1: // grab the snapshot 2: double current = total; 3:  4: // if the total hasn’t changed since I grabbed the snapshot, then 5: // set it to the new total 6: Interlocked.CompareExchange(ref total, current + next, current); So what the code above says is: if the amount in total (1st arg) is the same as the amount in current (3rd arg), then set total to current + next (2nd arg).  This check and exchange pair is atomic (and thus thread-safe). This works if total is the same as our snapshot in current, but the problem, is what happens if they aren’t the same?  Well, we know that in either case we will get the previous value of total (before the exchange), back as a result.  Thus, we can test this against our snapshot to see if it was the value we expected: 1: // if the value returned is != current, then our snapshot must be out of date 2: // which means we didn't (and shouldn't) apply current + next 3: if (Interlocked.CompareExchange(ref total, current + next, current) != current) 4: { 5: // ooops, total was not equal to our snapshot in current, what should we do??? 6: } So what do we do if we fail?  That’s up to you and the problem you are trying to solve.  It’s possible you would decide to abort the whole transaction, or perhaps do a lightweight spin and try again.  Let’s try that: 1: double current = total; 2:  3: // make first attempt... 4: if (Interlocked.CompareExchange(ref total, current + i, current) != current) 5: { 6: // if we fail, go into a spin wait, spin, and try again until succeed 7: var spinner = new SpinWait(); 8:  9: do 10: { 11: spinner.SpinOnce(); 12: current = total; 13: } 14: while (Interlocked.CompareExchange(ref total, current + i, current) != current); 15: } 16:  This is not trivial code, but it illustrates a possible use of CompareExchange().  
What we are doing is first checking to see if we succeed on the first try, and if so great!  If not, we create a SpinWait and then repeat the process of SpinOnce(), grab a fresh snapshot, and repeat until CompareExchange() succeeds.  You may wonder why we don’t just use a do-while here; the reason is that it’s more efficient to create the SpinWait only once we know we actually need one. Though not as simple (or maintainable) as a simple lock, this will perform better in many situations.  Comparing an unlocked (and wrong) version, a version using lock, and the Interlocked version of the code, we get the following average times for multiple iterations of adding the sum of 100,000 numbers: 1: Unlocked money average time: 2.1 ms 2: Locked money average time: 5.1 ms 3: Interlocked money average time: 3 ms So the Interlocked.CompareExchange(), while heavier to code, came in lighter than the lock, offering a good compromise of safety and performance when we need to reduce contention. CompareExchange() - it’s not just for adding stuff… So that was one simple use of CompareExchange() in the context of adding double values -- which meant we couldn’t have used the simpler Interlocked.Add() -- but it has other uses as well. If you think about it, this really works anytime you want to create something new based on a current value without using a full lock.  For example, you could use it to create a simple lazy instantiation implementation.  In this case, we want to set the lazy instance only if the previous value was null: 1: public static class Lazy<T> where T : class, new() 2: { 3: private static T _instance; 4:  5: public static T Instance 6: { 7: get 8: { 9: // if current is null, we need to create new instance 10: if (_instance == null) 11: { 12: // attempt create, it will only set if previous was null 13: Interlocked.CompareExchange(ref _instance, new T(), (T)null); 14: } 15:  16: return _instance; 17: } 18: } 19: } So, if _instance == null, this will create a new T() and attempt to exchange it with _instance.  If _instance is not null, then it does nothing and we discard the new T() we created. This is a way to create lazy instances of a type where we are more concerned about locking overhead than creating an accidental duplicate which is not used.  In fact, the BCL implementation of Lazy<T> offers a similar thread-safety choice for Publication thread safety, where it will not guarantee only one instance was created, but it will guarantee that all readers get the same instance.  Another possible use would be in concurrent collections.  Let’s say, for example, that you are creating your own brand new super stack that uses a linked list paradigm and is “lock free”.  We could use Interlocked.CompareExchange() to be able to do a lockless Push() which could be more efficient in multi-threaded applications where several threads are pushing and popping on the stack concurrently. Yes, there are already concurrent collections in the BCL (in .NET 4.0 as part of the TPL), but it’s a fun exercise!
So let’s assume we have a node like this: 1: public sealed class Node<T> 2: { 3: // the data for this node 4: public T Data { get; set; } 5:  6: // the link to the next instance 7: internal Node<T> Next { get; set; } 8: } Then, perhaps, our stack’s Push() operation might look something like: 1: public sealed class SuperStack<T> 2: { 3: private volatile Node<T> _head; 4:  5: public void Push(T value) 6: { 7: var newNode = new Node<T> { Data = value, Next = _head }; 8:  9: if (Interlocked.CompareExchange(ref _head, newNode, newNode.Next) != newNode.Next) 10: { 11: var spinner = new SpinWait(); 12:  13: do 14: { 15: spinner.SpinOnce(); 16: newNode.Next = _head; 17: } 18: while (Interlocked.CompareExchange(ref _head, newNode, newNode.Next) != newNode.Next); 19: } 20: } 21:  22: // ... 23: } Notice a similar paradigm here as with adding our doubles before.  What we are doing is creating the new Node with the data to push, and with a Next value being the original node referenced by _head.  This will create our stack behavior (LIFO – Last In, First Out).  Now, we have to set _head to refer to the newNode, but we must first make sure it hasn’t changed! So we check to see if _head has the same value we saved in our snapshot as newNode.Next, and if so, we set _head to newNode.  This is all done atomically, and the result is _head’s original value; as long as that original value was what we assumed it was in newNode.Next, we are good and we set it without a lock!  If not, we SpinWait and try again. Once again, this is much lighter than locking in highly parallelized code with lots of contention.  If I compare the method above with a similar class using lock, I get the following results for pushing 100,000 items: 1: Locked SuperStack average time: 6 ms 2: Interlocked SuperStack average time: 4.5 ms So, once again, we can get more efficient than a lock, though there is the cost of added code complexity.  Fortunately for you, most of the concurrent collections you’d ever need are already created for you in the System.Collections.Concurrent (here) namespace – for more information, see my Little Wonders – The Concurrent Collections Part 1 (here), Part 2 (here), and Part 3 (here). Summary We’ve seen before how the Interlocked class can be used to safely and efficiently add, increment, decrement, read, and exchange values in a multi-threaded environment.  In addition to these, Interlocked CompareExchange() can be used to perform more complex logic without the need of a lock when lock contention is a concern. The added efficiency, though, comes at the cost of more complex code.  As such, the standard lock is often sufficient for most thread-safety needs.  But if profiling indicates you spend a lot of time waiting for locks, or if you just need a lock for something simple such as an increment, decrement, read, exchange, etc., then consider using the Interlocked class’s methods to reduce wait. Technorati Tags: C#,CSharp,.NET,Little Wonders,Interlocked,CompareExchange,threading,concurrency
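As a footnote to the post above, here is a minimal, self-contained sketch that consolidates the CompareExchange() retry pattern from the earlier snippets into a single reusable helper for atomically adding doubles. It is an illustration of the pattern the post describes, not code from the original article; the AtomicDouble class and its Add() method are names invented for this sketch.

using System;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical helper name, used only for this sketch.
static class AtomicDouble
{
    // Atomically adds 'value' to 'target' using the CompareExchange retry pattern
    // described in the post, and returns the new total.
    public static double Add(ref double target, double value)
    {
        var spinner = new SpinWait();
        while (true)
        {
            double snapshot = target;            // grab the current value
            double computed = snapshot + value;  // compute the new value from that snapshot
            // Publish only if nothing changed 'target' since the snapshot was taken.
            if (Interlocked.CompareExchange(ref target, computed, snapshot) == snapshot)
                return computed;
            spinner.SpinOnce();                  // lost the race; back off briefly and retry
        }
    }
}

class Program
{
    static void Main()
    {
        double total = 0;
        Parallel.For(0, 100000, next => AtomicDouble.Add(ref total, next));
        Console.WriteLine(total);                // expected: 4999950000
    }
}

Run serially or in parallel, this version should produce the same 4,999,950,000 total, because an update is only published when it was computed from an unchanged snapshot.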

    Read the article

  • Using FiddlerCore to capture HTTP Requests with .NET

    - by Rick Strahl
    Over the last few weeks I’ve been working on my Web load testing utility West Wind WebSurge. One of the key components of a load testing tool is the ability to capture URLs effectively so that you can play them back later under load. One of the options in WebSurge for capturing URLs is to use its built-in capture tool which acts as an HTTP proxy to capture any HTTP and HTTPS traffic from most Windows HTTP clients, including Web Browsers as well as standalone Windows applications and services. To make this happen, I used Eric Lawrence’s awesome FiddlerCore library, which provides most of the functionality of his desktop Fiddler application, all rolled into an easy to use library that you can plug into your own applications. FiddlerCore makes it almost too easy to capture HTTP content! For WebSurge I needed to capture all HTTP traffic in order to capture the full HTTP request – URL, headers and any content posted by the client. The result of what I ended up creating is this semi-generic capture form: In this post I’m going to demonstrate how easy it is to use FiddlerCore to build this HTTP Capture Form.  If you want to jump right in here are the links to get Telerik’s Fiddler Core and the code for the demo provided here. FiddlerCore Download FiddlerCore on NuGet Show me the Code (WebSurge Integration code from GitHub) Download the WinForms Sample Form West Wind Web Surge (example implementation in live app) Note that FiddlerCore is bound by a license for commercial usage – see license.txt in the FiddlerCore distribution for details. Integrating FiddlerCore FiddlerCore is a library that simply plugs into your application. You can download it from the Telerik site and manually add the assemblies to your project, or you can simply install the NuGet package via:       PM> Install-Package FiddlerCore The library consists of the FiddlerCore.dll as well as a couple of support libraries (CertMaker.dll and BCMakeCert.dll) that are used for installing SSL certificates. I’ll have more on SSL captures and certificate installation later in this post. But first let’s see how easy it is to use FiddlerCore to capture HTTP content by looking at how to build the above capture form. Capturing HTTP Content Once the library is installed it’s super easy to hook up Fiddler functionality. Fiddler includes a number of static class methods on the FiddlerApplication object that can be called to hook up callback events as well as actual start monitoring HTTP URLs. In the following code directly lifted from WebSurge, I configure a few filter options on Form level object, from the user inputs shown on the form by assigning it to a capture options object. In the live application these settings are persisted configuration values, but in the demo they are one time values initialized and set on the form. 
Once these options are set, I hook up the AfterSessionComplete event to capture every URL that passes through the proxy after the request is completed and start up the Proxy service:void Start() { if (tbIgnoreResources.Checked) CaptureConfiguration.IgnoreResources = true; else CaptureConfiguration.IgnoreResources = false; string strProcId = txtProcessId.Text; if (strProcId.Contains('-')) strProcId = strProcId.Substring(strProcId.IndexOf('-') + 1).Trim(); strProcId = strProcId.Trim(); int procId = 0; if (!string.IsNullOrEmpty(strProcId)) { if (!int.TryParse(strProcId, out procId)) procId = 0; } CaptureConfiguration.ProcessId = procId; CaptureConfiguration.CaptureDomain = txtCaptureDomain.Text; FiddlerApplication.AfterSessionComplete += FiddlerApplication_AfterSessionComplete; FiddlerApplication.Startup(8888, true, true, true); } The key lines for FiddlerCore are just the last two lines of code that include the event hookup code as well as the Startup() method call. Here I only hook up to the AfterSessionComplete event but there are a number of other events that hook various stages of the HTTP request cycle you can also hook into. Other events include BeforeRequest, BeforeResponse, RequestHeadersAvailable, ResponseHeadersAvailable and so on. In my case I want to capture the request data and I actually have several options to capture this data. AfterSessionComplete is the last event that fires in the request sequence and it’s the most common choice to capture all request and response data. I could have used several other events, but AfterSessionComplete is one place where you can look both at the request and response data, so this will be the most common place to hook into if you’re capturing content. The implementation of AfterSessionComplete is responsible for capturing all HTTP request headers and it looks something like this:private void FiddlerApplication_AfterSessionComplete(Session sess) { // Ignore HTTPS connect requests if (sess.RequestMethod == "CONNECT") return; if (CaptureConfiguration.ProcessId > 0) { if (sess.LocalProcessID != 0 && sess.LocalProcessID != CaptureConfiguration.ProcessId) return; } if (!string.IsNullOrEmpty(CaptureConfiguration.CaptureDomain)) { if (sess.hostname.ToLower() != CaptureConfiguration.CaptureDomain.Trim().ToLower()) return; } if (CaptureConfiguration.IgnoreResources) { string url = sess.fullUrl.ToLower(); var extensions = CaptureConfiguration.ExtensionFilterExclusions; foreach (var ext in extensions) { if (url.Contains(ext)) return; } var filters = CaptureConfiguration.UrlFilterExclusions; foreach (var urlFilter in filters) { if (url.Contains(urlFilter)) return; } } if (sess == null || sess.oRequest == null || sess.oRequest.headers == null) return; string headers = sess.oRequest.headers.ToString(); var reqBody = sess.GetRequestBodyAsString(); // if you wanted to capture the response //string respHeaders = session.oResponse.headers.ToString(); //var respBody = session.GetResponseBodyAsString(); // replace the HTTP line to inject full URL string firstLine = sess.RequestMethod + " " + sess.fullUrl + " " + sess.oRequest.headers.HTTPVersion; int at = headers.IndexOf("\r\n"); if (at < 0) return; headers = firstLine + "\r\n" + headers.Substring(at + 1); string output = headers + "\r\n" + (!string.IsNullOrEmpty(reqBody) ? 
reqBody + "\r\n" : string.Empty) + Separator + "\r\n\r\n"; BeginInvoke(new Action<string>((text) => { txtCapture.AppendText(text); UpdateButtonStatus(); }), output); } The code starts by filtering out some requests based on the CaptureOptions I set before the capture is started. These options/filters are applied when requests actually come in. This is very useful to help narrow down the requests that are captured for playback based on options the user picked. I find it useful to limit requests to a certain domain for captures, as well as filtering out some request types like static resources – images, css, scripts etc. This is of course optional, but I think it’s a common scenario and WebSurge makes good use of this feature. AfterSessionComplete like other FiddlerCore events, provides a Session object parameter which contains all the request and response details. There are oRequest and oResponse objects to hold their respective data. In my case I’m interested in the raw request headers and body only, as you can see in the commented code you can also retrieve the response headers and body. Here the code captures the request headers and body and simply appends the output to the textbox on the screen. Note that the Fiddler events are asynchronous, so in order to display the content in the UI they have to be marshaled back the UI thread with BeginInvoke, which here simply takes the generated headers and appends it to the existing textbox test on the form. As each request is processed, the headers are captured and appended to the bottom of the textbox resulting in a Session HTTP capture in the format that Web Surge internally supports, which is basically raw request headers with a customized 1st HTTP Header line that includes the full URL rather than a server relative URL. When the capture is done the user can either copy the raw HTTP session to the clipboard, or directly save it to file. This raw capture format is the same format WebSurge and also Fiddler use to import/export request data. While this code is application specific, it demonstrates the kind of logic that you can easily apply to the request capture process, which is one of the reasonsof why FiddlerCore is so powerful. You get to choose what content you want to look up as part of your own application logic and you can then decide how to capture or use that data as part of your application. The actual captured data in this case is only a string. The user can edit the data by hand or in the the case of WebSurge, save it to disk and automatically open the captured session as a new load test. Stopping the FiddlerCore Proxy Finally to stop capturing requests you simply disconnect the event handler and call the FiddlerApplication.ShutDown() method:void Stop() { FiddlerApplication.AfterSessionComplete -= FiddlerApplication_AfterSessionComplete; if (FiddlerApplication.IsStarted()) FiddlerApplication.Shutdown(); } As you can see, adding HTTP capture functionality to an application is very straight forward. FiddlerCore offers tons of features I’m not even touching on here – I suspect basic captures are the most common scenario, but a lot of different things can be done with FiddlerCore’s simple API interface. Sky’s the limit! The source code for this sample capture form (WinForms) is provided as part of this article. 
Adding Fiddler Certificates with FiddlerCore One of the sticking points in West Wind WebSurge has been that if you wanted to capture HTTPS/SSL traffic, you needed to have the full version of Fiddler and have HTTPS decryption enabled. Essentially you had to use Fiddler to configure HTTPS decryption and the associated installation of the Fiddler local client certificate that is used for local decryption of incoming SSL traffic. While this works just fine, requiring to have Fiddler installed and then using a separate application to configure the SSL functionality isn’t ideal. Fortunately FiddlerCore actually includes the tools to register the Fiddler Certificate directly using FiddlerCore. Why does Fiddler need a Certificate in the first Place? Fiddler and FiddlerCore are essentially HTTP proxies which means they inject themselves into the HTTP conversation by re-routing HTTP traffic to a special HTTP port (8888 by default for Fiddler) and then forward the HTTP data to the original client. Fiddler injects itself as the system proxy in using the WinInet Windows settings  which are the same settings that Internet Explorer uses and that are configured in the Windows and Internet Explorer Internet Settings dialog. Most HTTP clients running on Windows pick up and apply these system level Proxy settings before establishing new HTTP connections and that’s why most clients automatically work once Fiddler – or FiddlerCore/WebSurge are running. For plain HTTP requests this just works – Fiddler intercepts the HTTP requests on the proxy port and then forwards them to the original port (80 for HTTP and 443 for SSL typically but it could be any port). For SSL however, this is not quite as simple – Fiddler can easily act as an HTTPS/SSL client to capture inbound requests from the server, but when it forwards the request to the client it has to also act as an SSL server and provide a certificate that the client trusts. This won’t be the original certificate from the remote site, but rather a custom local certificate that effectively simulates an SSL connection between the proxy and the client. If there is no custom certificate configured for Fiddler the SSL request fails with a certificate validation error. The key for this to work is that a custom certificate has to be installed that the HTTPS client trusts on the local machine. For a much more detailed description of the process you can check out Eric Lawrence’s blog post on Certificates. If you’re using the desktop version of Fiddler you can install a local certificate into the Windows certificate store. Fiddler proper does this from the Options menu: This operation does several things: It installs the Fiddler Root Certificate It sets trust to this Root Certificate A new client certificate is generated for each HTTPS site monitored Certificate Installation with FiddlerCore You can also provide this same functionality using FiddlerCore which includes a CertMaker class. 
Using CertMaker is straight forward to use and it provides an easy way to create some simple helpers that can install and uninstall a Fiddler Root certificate:public static bool InstallCertificate() { if (!CertMaker.rootCertExists()) { if (!CertMaker.createRootCert()) return false; if (!CertMaker.trustRootCert()) return false; } return true; } public static bool UninstallCertificate() { if (CertMaker.rootCertExists()) { if (!CertMaker.removeFiddlerGeneratedCerts(true)) return false; } return true; } InstallCertificate() works by first checking whether the root certificate is already installed and if it isn’t goes ahead and creates a new one. The process of creating the certificate is a two step process – first the actual certificate is created and then it’s moved into the certificate store to become trusted. I’m not sure why you’d ever split these operations up since a cert created without trust isn’t going to be of much value, but there are two distinct steps. When you trigger the trustRootCert() method, a message box will pop up on the desktop that lets you know that you’re about to trust a local private certificate. This is a security feature to ensure that you really want to trust the Fiddler root since you are essentially installing a man in the middle certificate. It’s quite safe to use this generated root certificate, because it’s been specifically generated for your machine and thus is not usable from external sources, the only way to use this certificate in a trusted way is from the local machine. IOW, unless somebody has physical access to your machine, there’s no useful way to hijack this certificate and use it for nefarious purposes (see Eric’s post for more details). Once the Root certificate has been installed, FiddlerCore/Fiddler create new certificates for each site that is connected to with HTTPS. You can end up with quite a few temporary certificates in your certificate store. To uninstall you can either use Fiddler and simply uncheck the Decrypt HTTPS traffic option followed by the remove Fiddler certificates button, or you can use FiddlerCore’s CertMaker.removeFiddlerGeneratedCerts() which removes the root cert and any of the intermediary certificates Fiddler created. Keep in mind that when you uninstall you uninstall the certificate for both FiddlerCore and Fiddler, so use UninstallCertificate() with care and realize that you might affect the Fiddler application’s operation by doing so as well. When to check for an installed Certificate Note that the check to see if the root certificate exists is pretty fast, while the actual process of installing the certificate is a relatively slow operation that even on a fast machine takes a few seconds. Further the trust operation pops up a message box so you probably don’t want to install the certificate repeatedly. Since the check for the root certificate is fast, you can easily put a call to InstallCertificate() in any capture startup code – in which case the certificate installation only triggers when a certificate is in fact not installed. Personally I like to make certificate installation explicit – just like Fiddler does, so in WebSurge I use a small drop down option on the menu to install or uninstall the SSL certificate:   This code calls the InstallCertificate and UnInstallCertificate functions respectively – the experience with this is similar to what you get in Fiddler with the extra dialog box popping up to prompt confirmation for installation of the root certificate. 
Once the cert is installed you can then capture SSL requests. There’s a gotcha however… Gotcha: FiddlerCore Certificates don’t stick by Default When I originally tried to use the Fiddler certificate installation I ran into an odd problem. I was able to install the certificate and immediately after installation was able to capture HTTPS requests. Then I would exit the application and come back in and try the same HTTPS capture again and it would fail due to a missing certificate. CertMaker.rootCertExists() would return false after every restart, and if I re-installed the certificate a new certificate would get added to the certificate store, resulting in a bunch of duplicated root certificates with different keys. What the heck? CertMaker and BcMakeCert create non-sticky Certificates It turns out that FiddlerCore by default uses different components from what the full version of Fiddler uses. Fiddler uses a Windows utility called MakeCert.exe to create the Fiddler Root certificate. FiddlerCore however installs the CertMaker.dll and BCMakeCert.dll assemblies, which use a different crypto library (Bouncy Castle) for certificate creation than MakeCert.exe, which uses the Windows Crypto API. The assemblies provide support for non-Windows operation for Fiddler under Mono, as well as support for some non-Windows certificate platforms like iOS and Android for decryption. The bottom line is that the FiddlerCore-provided Bouncy Castle assemblies are not sticky by default, as the certificates created with them are not cached as they are in Fiddler proper. To get certificates to ‘stick’ you have to explicitly cache the certificates in Fiddler’s internal preferences. A cache-aware version of InstallCertificate looks something like this:public static bool InstallCertificate() { if (!CertMaker.rootCertExists()) { if (!CertMaker.createRootCert()) return false; if (!CertMaker.trustRootCert()) return false; App.Configuration.UrlCapture.Cert = FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.cert", null); App.Configuration.UrlCapture.Key = FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.key", null); } return true; } public static bool UninstallCertificate() { if (CertMaker.rootCertExists()) { if (!CertMaker.removeFiddlerGeneratedCerts(true)) return false; } App.Configuration.UrlCapture.Cert = null; App.Configuration.UrlCapture.Key = null; return true; } In this code I store the Fiddler cert and private key in application configuration settings that are stored with the application settings (App.Configuration.UrlCapture object). These settings automatically persist when WebSurge is shut down. The values are read out of Fiddler’s internal preferences store, which is set after a new certificate has been created. Likewise I clear out the configuration settings when the certificate is uninstalled. In order for these settings to be used you have to also load the configuration settings into the Fiddler preferences *before* a call to rootCertExists() is made.
I do this in the capture form's constructor:

public FiddlerCapture(StressTestForm form)
{
    InitializeComponent();
    CaptureConfiguration = App.Configuration.UrlCapture;
    MainForm = form;

    if (!string.IsNullOrEmpty(App.Configuration.UrlCapture.Cert))
    {
        FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.key", App.Configuration.UrlCapture.Key);
        FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.cert", App.Configuration.UrlCapture.Cert);
    }
}

This is kind of a drag to do and not documented anywhere that I could find, so hopefully this will save you some grief if you want to work with the stock certificate logic that installs with FiddlerCore.

MakeCert provides sticky Certificates and the same functionality as Fiddler
But there's actually an easier way. If you want to skip the above Fiddler preference configuration code in your application, you can choose to distribute MakeCert.exe instead of CertMaker.dll and BCMakeCert.dll. When you use MakeCert.exe, the certificate settings are stored in Windows, so they are available without any custom configuration inside of your application. It's easier to integrate, and as long as you run on Windows and don't need to support iOS or Android devices it's simply easier to deal with. To integrate it into your project, remove the reference to CertMaker.dll (and the BCMakeCert.dll assembly) from your project and instead copy MakeCert.exe into your output folder. To make sure MakeCert.exe gets pushed out, include MakeCert.exe in your project and set the Build Action to None and Copy to Output Directory to Copy if newer. Note that the CertMaker.dll reference has been removed from the project, as well as the CertMaker.dll and BCMakeCert.dll files on disk. Keep in mind that these DLLs are resources of the FiddlerCore NuGet package, so updating the package may end up pushing those files back into your project. Once MakeCert.exe is distributed, FiddlerCore checks for it first before using the assemblies, so as long as MakeCert.exe exists it'll be used for certificate creation (at least on Windows).

Summary
FiddlerCore is a pretty sweet tool, and it's absolutely awesome that we get to plug most of the functionality of Fiddler right into our own applications. A few years back I tried to build this sort of functionality myself for an app and ended up giving up because it's a big job to get HTTP right – especially if you need to support SSL. FiddlerCore now provides that functionality as a turnkey solution that can be plugged into your own apps easily. The only downside is FiddlerCore's documentation for more advanced features like certificate installation, which is pretty sketchy. While for the most part FiddlerCore's feature set is easy to work with without any documentation, advanced features are often not intuitive to glean by just using IntelliSense or the FiddlerCore help file reference (which is not terribly useful). While Eric Lawrence is very responsive on his forum and on Twitter, there simply isn't much useful documentation on Fiddler/FiddlerCore available online. If you run into trouble, the forum is probably the first place to look, and then ask a question if you can't find the answer. The best documentation you can find is Eric's Fiddler Book, which covers a ton of functionality of Fiddler and FiddlerCore. The book is a great reference to Fiddler's feature set as well as providing great insights into the HTTP protocol.
The second half of the book that gets into the innards of HTTP is an excellent read for anybody who wants to know more about some of the more arcane aspects and special behaviors of HTTP – it's well worth the read. While the book has tons of information in a very readable format, it's unfortunately not a great reference, as it's hard to find things in the book, and because it's not available online you can't electronically search for the great content in it. But it's hard to complain about any of this given the obvious effort and love that's gone into this awesome product for all of these years. A mighty big thanks to Eric Lawrence for having created this useful tool that so many of us use all the time, and also to Telerik for picking up Fiddler/FiddlerCore and providing Eric the resources to support and improve this wonderful tool full time while keeping it free for all. Kudos! Resources: FiddlerCore Download, FiddlerCore NuGet, Fiddler Capture Sample Form, Fiddler Capture Form in West Wind WebSurge (GitHub), Eric Lawrence's Fiddler Book. © Rick Strahl, West Wind Technologies, 2005-2014. Posted in .NET, HTTP.

    Read the article

  • Content Management for WebCenter Installation Guide

    - by Gary Niu
Overview As we know, there are two ways to install Content Management for WebCenter. One way is to install it with the WebCenter installer wizard; the other is to install it using its own installer. This guide is for the latter. For SSO purposes, I also mention how to configure an OID identity store for Content Management for WebCenter. Content Management for WebCenter (10.1.3.5.1) Oracle Enterprise Linux R5U4 Basic Installation -bash-3.2$ ./setup.sh Please select your locale from the list.           1. Chinese-Simplified           2. Chinese-Traditional           3. Deutsch          *4. English-US           5. English-UK           6. Español           7. Français           8. Italiano           9. Japanese          10. Korean          11. Nederlands          12. Português-Brazil Choice? Throughout the install, when entering a text value, you can press Enter to accept the default that appears between square brackets ([]). When selecting from a list, you can select the choice followed by an asterisk by pressing Enter. Select installation type from the list.         *1. Install new server          2. Update a server Choice? Content Server Installation Directory Please enter the full pathname to the installation directory. Content Server Core Folder [/oracle/ucm/server]:/opt/oracle/ucm/server Create Directory         *1. yes          2. no Choice? Java virtual machine         *1. Sun Java 1.5.0_11 JDK          2. Specify a custom Java virtual machine Choice? Installing with Java version 1.5.0_11. Enter the location of the native file repository. This directory contains the native files checked in by contributors. Content Server Native Vault Folder [/opt/oracle/ucm/server/vault/]: Create Directory         *1. yes          2. no Choice? Enter the location of the web-viewable file repository. This directory contains files that can be accessed through the web server. Content Server Weblayout Folder [/opt/oracle/ucm/server/weblayout/]: Create Directory         *1. yes          2. no Choice? This server can be configured to manage its own authentication or to allow another master to act as an authentication proxy. Configure this server as a master or proxied server.         *1. Configure as a master server.          2. Configure as server proxied by a local master server. Choice? During installation, an admin server can be installed and configured to manage this server. If there is already an admin server on this system, you can have the installer configure it to administrate this server instead. Select admin server configuration.         *1. Install an admin server to manage this server.          2. Configure an existing admin server to manage this server.          3. Don't configure an admin server. Choice? Enter the location of an executable to start your web browser. This browser will be used to display the online help. Web Browser Path [/usr/bin/firefox]: Content Server System locale           1. Chinese-Simplified           2. Chinese-Traditional           3. Deutsch          *4. English-US           5. English-UK           6. Español           7. Français           8. Italiano           9. Japanese          10. Korean          11. Nederlands          12. Português-Brazil Choice? Please select the region for your timezone from the list.         *1. Use the timezone setting for your operating system          2. Pacific          3. America          4. Atlantic          5. Europe          6. Africa          7. Asia          8. Indian          9. Australia Choice? 
Please enter the port number that will be used to connect to the Content Server. This port must be otherwise unused. Content Server Port [4444]: Please enter the port number that will be used to connect to the Admin Server. This port must be otherwise unused. Admin Server Port [4440]: Enter a security filter for the server port. Hosts which are allowed to communicate directly with the server port may access any resources managed by the server. Insure that hosts which need access are included in the filter. See the installation guide for more details. Incoming connection address filter [127.0.0.1]:*.*.*.* *** Content Server URL Prefix The URL prefix specified here is used when generating HTML pages that refer to the contents of the weblayout directory within the installation. This prefix must be mapped in the web server Additional Document Directories section of the Content Management administration menu to the physical location of the weblayout directory. For example, "/idc/" would be used in your installation to refer to the URL http://ucm.company.com/idc which would be mapped in the web server to the physical location /oracle/ucm/server/weblayout. Web Server Relative Root [/idc/]: Enter the name of the local mail server. The server will contact this system to deliver email. Company Mail Server [mail]: Enter the e-mail address for the system administrator. Administrator E-Mail Address [sysadmin@mail]: *** Web Server Address Many generated HTML pages refer to the web server you are using. The address specified here will be used when generating those pages. The address should include the host and domain name in most cases. If your webserver is running on a port other than 80, append a colon and the port number. Examples: www.company.com, ucm.company.com:90 Web Server HTTP Address [yekki]:yekki.cn.oracle.com:7777 Enter the name for this instance. This name should be unique across your entire enterprise. It may not contain characters other than letters, numbers, and underscores. Server Instance Name [idc]: Enter a short label for this instance. This label is used on web pages to identify this instance. It should be less than 12 characters long. Server Instance Label [idc]: Enter a long description for this instance. Server Description [Content Server idc]: Web Server         *1. Apache          2. Sun ONE          3. Configure manually Choice? Please select a database from the list below to use with the Content Server. Content Server Database         *1. Oracle          2. Microsoft SQL Server 2005          3. Microsoft SQL Server 2000          4. Sybase          5. DB2          6. Custom JDBC settings          7. Skip database configuration Choice? Manually configure JDBC settings for this database          1. yes         *2. no Choice? Oracle Server Hostname [localhost]: Oracle Listener Port Number [1521]: *** Database User ID The user name is used to log into the database used by the content server. Oracle User [user]:YEKKI_OCSERVER *** Database Password The password is used to log into the database used by the content server. Oracle Password []:oracle Oracle Instance Name [ORACLE]:orcl Configure the JVM to find the JDBC driver in a specific jar file          1. yes         *2. no Choice? The installer can attempt to create the database tables or you can manually create them. If you choose to manually create the tables, you should create them now. Attempt to create database tables          1. yes         *2. no Choice? Select components to install.          1. 
ContentFolios: Collect related items in folios          2. Folders_g: Organize content into hierarchical folders          3. LinkManager8: Hypertext link management support          4. OracleTextSearch: External Oracle 11g database as search indexer support          5. ThreadedDiscussions: Threaded discussion management Enter numbers separated by commas to toggle, 0 to unselect all, F to finish: 1,2,3,4,5         *1. ContentFolios: Collect related items in folios         *2. Folders_g: Organize content into hierarchical folders         *3. LinkManager8: Hypertext link management support         *4. OracleTextSearch: External Oracle 11g database as search indexer support         *5. ThreadedDiscussions: Threaded discussion management Enter numbers separated by commas to toggle, 0 to unselect all, F to finish: F Checking configuration. . . Configuration OK. Review install settings. . . Content Server Core Folder: /opt/oracle/ucm/server Java virtual machine: Sun Java 1.5.0_11 JDK Content Server Native Vault Folder: /opt/oracle/ucm/server/vault/ Content Server Weblayout Folder: /opt/oracle/ucm/server/weblayout/ Proxy authentication through another server: no Install admin server: yes Web Browser Path: /usr/bin/firefox Content Server System locale: English-US Content Server Port: 4444 Admin Server Port: 4440 Incoming connection address filter: *.*.*.* Web Server Relative Root: /idc/ Company Mail Server: mail Administrator E-Mail Address: sysadmin@mail Web Server HTTP Address: yekki.cn.oracle.com:7777 Server Instance Name: idc Server Instance Label: idc Server Description: Content Server idc Web Server: Apache Content Server Database: Oracle Manually configure JDBC settings for this database: false Oracle Server Hostname: localhost Oracle Listener Port Number: 1521 Oracle User: YEKKI_OCSERVER Oracle Password: 6GP1gBgzSyKa4JW10U8UqqPznr/lzkNn/Ojf6M8GJ8I= Oracle Instance Name: orcl Configure the JVM to find the JDBC driver in a specific jar file: false Attempt to create database tables: no Components: ContentFolios,Folders_g,LinkManager8,OracleTextSearch,ThreadedDiscussions Proceed with install         *1. Proceed          2. Change configuration          3. Recheck the configuration          4. Abort installation Choice? Finished install type Install with warnings at 4/2/10 12:32 AM. Run Scripts -bash-3.2$ ./wc_contentserverconfig.sh /opt/oracle/ucm/server /mnt/hgfs/SOFTWARE/ofm_ucm_generic_10.1.3.5.1_disk1_1of1/ContentServer/webcenter-conf Installing '/mnt/hgfs/SOFTWARE/ofm_ucm_generic_10.1.3.5.1_disk1_1of1/ContentServer/webcenter-conf/CS10gR35UpdateBundle.zip' Service 'DELETE_DOC' Extended Service 'DELETE_BYREV_REVISION' Extended Installing '/mnt/hgfs/SOFTWARE/ofm_ucm_generic_10.1.3.5.1_disk1_1of1/ContentServer/webcenter-conf/ContentAccess/ContentAccess-linux.zip' (internal)      04.02 00:40:38.019      main    updateDocMetaDefinitionV11: adding decimal column Installing '/opt/oracle/ucm/server/custom/CS10gR35UpdateBundle/extras/Folders_g.zip' Installing '/opt/oracle/ucm/server/custom/CS10gR35UpdateBundle/extras/FusionLibraries.zip' Installing '/opt/oracle/ucm/server/custom/CS10gR35UpdateBundle/extras/JpsUserProvider.zip' Installing '/mnt/hgfs/SOFTWARE/ofm_ucm_generic_10.1.3.5.1_disk1_1of1/ContentServer/webcenter-conf/WcConfigure.zip' Apr 2, 2010 12:41:24 AM oracle.security.jps.internal.core.util.JpsConfigUtil getPasswordCredential WARNING: A password credential is expected; instead found . 
Apr 2, 2010 12:41:24 AM oracle.security.jps.internal.idstore.util.IdentityStoreUtil getUnamePwdFromCredStore WARNING: The credential with map JPS and key ldap.credential does not exist. Apr 2, 2010 12:41:27 AM oracle.security.jps.internal.core.util.JpsConfigUtil getPasswordCredential WARNING: A password credential is expected; instead found . Apr 2, 2010 12:41:27 AM oracle.security.jps.internal.idstore.util.IdentityStoreUtil getUnamePwdFromCredStore WARNING: The credential with map JPS and key ldap.credential does not exist. Apr 2, 2010 12:41:28 AM oracle.security.jps.internal.core.util.JpsConfigUtil getPasswordCredential WARNING: A password credential is expected; instead found . Apr 2, 2010 12:41:28 AM oracle.security.jps.internal.idstore.util.IdentityStoreUtil getUnamePwdFromCredStore WARNING: The credential with map JPS and key ldap.credential does not exist. Restart Content Server to apply updates. Configuring Apache Web Server append the following lines at httpd.conf: include "/opt/oracle/ucm/server/data/users/apache22/apache.conf" Configuring the Identity Store( Optional ) 1.  Stop Oracle Content Server and the Admin Server 2.  Update the Oracle Content Server's JPS configuration file, jps-config.xml: a. add a service instance <serviceInstance provider="idstore.ldap.provider" name="idstore.oid"> <property name="subscriber.name" value="dc=cn,dc=oracle,dc=com"></property> <property name="idstore.type" value="OID"></property> <property name="security.principal.key" value="ldap.credential"></property> <property name="security.principal.alias" value="JPS"></property> <property name="ldap.url" value="ldap://yekki.cn.oracle.com:3060"></property> <extendedProperty> <name>user.search.bases</name> <values> <value>cn=users,dc=cn,dc=oracle,dc=com</value> </values> </extendedProperty> <extendedProperty> <name>group.search.bases</name> <values> <value>cn=groups,dc=cn,dc=oracle,dc=com</value> </values> </extendedProperty> <property name="username.attr" value="uid"></property> <property name="user.login.attr" value="uid"></property> <property name="groupname.attr" value="cn"></property> </serviceInstance> b. Ensure that the <jpsContext> entry in the jps-config.xml file refers to the new serviceInstance, that is, idstore.oid and not idstore.ldap: <jpsContext name="default"> <serviceInstanceRef ref="idstore.oid"/> 3. Run the new script to setup the credentials for idstore.oid in the credential store: cd CONTENT_SERVER_HOME/custom/FusionLibraries/tools -bash-3.2$ ./run_credtool.sh Buildfile: ./../tools/credtool.xml     [input] skipping input as property action has already been set.     [input] Alias: [JPS]     [input] Key: [ldap.credential]     [input] User Name: cn=orcladmin     [input] Password: welcome1     [input] JPS Config: [/opt/oracle/ucm/server/custom/FusionLibraries/tools/../../../config/jps-config.xml] manage-creds:      [echo] @@@ Help: run 'ant manage-creds' command to see the detailed usage      [java] Using default context in /opt/oracle/ucm/server/custom/FusionLibraries/tools/../../../config/jps-config.xml file for credential store.      [java] Credential store location : /opt/oracle/ucm/server/config      [java] Credential with map JPS key ldap.credential stored successfully!      [java]      [java]      [java]     Credential for map JPS and key ldap.credential is:      [java]             PasswordCredential name : cn=orcladmin      [java]             PasswordCredential password : welcome1 BUILD SUCCESSFUL Total time: 1 minute 27 seconds Testing 1. 
access http://yekki.cn.oracle.com:7777/idc 2. log in with an OID user, for example: orcladmin/welcome1 3. make sure your JpsUserProvider status is "good"

    Read the article

  • ASP.NET Web API Exception Handling

    - by Fredrik N
When I talk about exceptions in my product team I often talk about two kinds of exceptions: business and critical exceptions. Business exceptions are exceptions thrown based on "business rules", for example if you aren't allowed to make a purchase. Business exceptions in most cases aren't important to log to a log file; they can be shown directly to the user. An example of a business exception could be "DeniedToPurchaseException", or some validation exception such as "FirstNameIsMissingException" etc. Critical exceptions are all other kinds of exceptions, such as the SQL Server being down. Those kinds of exception messages need to be logged and should not reach the user, because they can contain information that can be harmful if it reaches the wrong kind of users. I often distinguish business exceptions from critical exceptions by creating a base class called BusinessException (a small sketch of this follows below); in my error handling code I then catch on the type BusinessException, and all other exceptions are handled as critical exceptions. This blog post will be about different ways to handle exceptions and how business and critical exceptions could be handled.

Web API and Exceptions – the basics
When an exception is thrown in an ApiController, a response message will be returned with the status code set to 500 and a response formatted by the formatters based on the "Accept" or "Content-Type" HTTP header, for example JSON or XML. Here is an example:

public IEnumerable<string> Get()
{
    throw new ApplicationException("Error!!!!!");
    return new string[] { "value1", "value2" };
}

The response message will be:

HTTP/1.1 500 Internal Server Error
Content-Length: 860
Content-Type: application/json; charset=utf-8

{ "ExceptionType":"System.ApplicationException","Message":"Error!!!!!","StackTrace":" at ..."}

The stack trace is returned to the client to make debugging easier. Be careful so you don't leak sensitive information to the client. As long as you are developing your API this is not harmful, but in a production environment it can be better to log exceptions and return a user friendly exception instead of the original exception.
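As a minimal sketch (my own illustration, not code from the post), the BusinessException base class described above could simply derive from Exception, with the concrete business exceptions deriving from it:

// Marker base class: anything deriving from this is considered safe to show to the user.
public class BusinessException : Exception
{
    public BusinessException(string message) : base(message) { }
}

// Example exceptions matching the names mentioned in the text.
public class DeniedToPurchaseException : BusinessException
{
    public DeniedToPurchaseException() : base("You are not allowed to make this purchase.") { }
}

public class FirstNameIsMissingException : BusinessException
{
    public FirstNameIsMissingException() : base("First name is missing.") { }
}

Error handling code can then catch (or type-check) BusinessException to separate the two categories, as the ExceptionFilterAttribute further down does.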
There is a specific exception shipped with ASP.NET Web API that will not use the formatters based on the "Accept" or "Content-Type" HTTP header: the HttpResponseException class. Here is an example where the HttpResponseException is used:

// GET api/values
[ExceptionHandling]
public IEnumerable<string> Get()
{
    throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.InternalServerError));
    return new string[] { "value1", "value2" };
}

The response will not contain any content, only header information and the status code based on the HttpStatusCode passed as an argument to the HttpResponseMessage. Because the HttpResponseException takes an HttpResponseMessage as an argument, we can give the response a content:

public IEnumerable<string> Get()
{
    throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.InternalServerError)
    {
        Content = new StringContent("My Error Message"),
        ReasonPhrase = "Critical Exception"
    });
    return new string[] { "value1", "value2" };
}

The code above will produce the following response:

HTTP/1.1 500 Critical Exception
Content-Length: 16
Content-Type: text/plain; charset=utf-8

My Error Message

The Content property of the HttpResponseMessage doesn't need to be just plain text; it can also be other formats, for example JSON or XML.
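To illustrate that last point, here is a hedged sketch (my own example, not from the original post) that uses an ObjectContent with the JSON formatter so the error body is serialized as JSON; the anonymous error object is purely illustrative:

// Requires System.Net, System.Net.Http, System.Net.Http.Formatting and System.Web.Http.
public IEnumerable<string> Get()
{
    var error = new { Message = "My Error Message", Code = 42 };

    throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.InternalServerError)
    {
        // Serialize the error object as JSON instead of plain text.
        Content = new ObjectContent<object>(error, new JsonMediaTypeFormatter()),
        ReasonPhrase = "Critical Exception"
    });
}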
By using the HttpResponseException we can for example catch an exception and throw a user friendly exception instead:

public IEnumerable<string> Get()
{
    try
    {
        DoSomething();
        return new string[] { "value1", "value2" };
    }
    catch (Exception e)
    {
        throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.InternalServerError)
        {
            Content = new StringContent("An error occurred, please try again or contact the administrator."),
            ReasonPhrase = "Critical Exception"
        });
    }
}

Adding a try/catch to every ApiController method will only end in code duplication; by using a custom ExceptionFilterAttribute or our own ApiController base class we can reduce the duplication and also have a more general exception handler for our ApiControllers. By creating a custom ApiController and overriding the ExecuteAsync method we can add a try/catch around the base.ExecuteAsync call, but I prefer to skip creating my own custom ApiController – better to use a solution that requires few files to be modified. The ExceptionFilterAttribute has an OnException method that we can override and add our exception handling to. Here is an example:

using System;
using System.Diagnostics;
using System.Net;
using System.Net.Http;
using System.Web.Http;
using System.Web.Http.Filters;

public class ExceptionHandlingAttribute : ExceptionFilterAttribute
{
    public override void OnException(HttpActionExecutedContext context)
    {
        if (context.Exception is BusinessException)
        {
            throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.InternalServerError)
            {
                Content = new StringContent(context.Exception.Message),
                ReasonPhrase = "Exception"
            });
        }

        //Log Critical errors
        Debug.WriteLine(context.Exception);

        throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.InternalServerError)
        {
            Content = new StringContent("An error occurred, please try again or contact the administrator."),
            ReasonPhrase = "Critical Exception"
        });
    }
}

Note: Something to keep in mind is that the ExceptionFilterAttribute will be ignored if the ApiController action method throws an HttpResponseException. The code above makes sure an HttpResponseException is always returned, and it also makes sure that critical exceptions show a more user friendly message.
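For completeness, registering the filter globally so it applies to every ApiController is a one-liner; a hedged sketch that would typically go in Global.asax's Application_Start or the WebApiConfig class (the placement is an assumption on my part, the Filters collection is the standard Web API configuration surface):

// Applies ExceptionHandlingAttribute to all ApiControllers.
GlobalConfiguration.Configuration.Filters.Add(new ExceptionHandlingAttribute());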
The OnException method can also be used to log exceptions. By using an ExceptionFilterAttribute, the Get() method from the previous example can now look like this:

public IEnumerable<string> Get()
{
    DoSomething();
    return new string[] { "value1", "value2" };
}

To use an ExceptionFilterAttribute we can for example add it to our ApiController methods or to the ApiController class definition, or register it globally for all ApiControllers. You can read more about this here. Note: If something goes wrong in the ExceptionFilterAttribute and an exception is thrown that is not of type HttpResponseException, a formatted exception with stack trace etc. will be returned to the client.

How about using a custom IHttpActionInvoker?
We can create our own IHttpActionInvoker and add exception handling to the invoker. The IHttpActionInvoker is used to invoke the ApiController's ExecuteAsync method. Here is an example where the default IHttpActionInvoker, ApiControllerActionInvoker, is used to add exception handling:

public class MyApiControllerActionInvoker : ApiControllerActionInvoker
{
    public override Task<HttpResponseMessage> InvokeActionAsync(HttpActionContext actionContext, System.Threading.CancellationToken cancellationToken)
    {
        var result = base.InvokeActionAsync(actionContext, cancellationToken);

        if (result.Exception != null && result.Exception.GetBaseException() != null)
        {
            var baseException = result.Exception.GetBaseException();

            if (baseException is BusinessException)
            {
                return Task.Run<HttpResponseMessage>(() => new HttpResponseMessage(HttpStatusCode.InternalServerError)
                {
                    Content = new StringContent(baseException.Message),
                    ReasonPhrase = "Error"
                });
            }
            else
            {
                //Log critical error
                Debug.WriteLine(baseException);

                return Task.Run<HttpResponseMessage>(() => new HttpResponseMessage(HttpStatusCode.InternalServerError)
                {
                    Content = new StringContent(baseException.Message),
                    ReasonPhrase = "Critical Error"
                });
            }
        }

        return result;
    }
}

You can register the IHttpActionInvoker with your own IoC to resolve the MyApiControllerActionInvoker, or add it in the Global.asax:

GlobalConfiguration.Configuration.Services.Remove(typeof(IHttpActionInvoker), GlobalConfiguration.Configuration.Services.GetActionInvoker());
GlobalConfiguration.Configuration.Services.Add(typeof(IHttpActionInvoker), new MyApiControllerActionInvoker());

How about using a Message Handler for Exception Handling?
By creating a custom Message Handler we can handle errors after the ApiController and the ExceptionFilterAttribute have been invoked and in that way create a global exception handler, BUT the only thing we can look at is the HttpResponseMessage; we can't add a try/catch around the Message Handler's SendAsync method. The last Message Handler used in the Web API pipeline is the HttpControllerDispatcher, and this Message Handler is added to the HttpServer at an early stage. The HttpControllerDispatcher will use the IHttpActionInvoker to invoke the ApiController method. The HttpControllerDispatcher has a try/catch that will turn ALL exceptions into an HttpResponseMessage, and that is the reason why a try/catch around SendAsync in a custom Message Handler won't help us. If we create our own host for the Web API we could create our own custom HttpControllerDispatcher and add our exception handler to that class, but that would be a little tricky – possible, but tricky. We can, in a Message Handler, look at the HttpResponseMessage's IsSuccessStatusCode property to see if the request has failed, and if we throw HttpResponseExceptions in our ApiControllers we can give them a Reason Phrase and use that to identify business or critical exceptions. I wouldn't add an exception handler into a Message Handler; instead I would use the ExceptionFilterAttribute and register it globally for all ApiControllers. BUT, now to another interesting issue. What will happen if we have a Message Handler that throws an exception? Those exceptions will not be caught and handled by the ExceptionFilterAttribute. I found a bug in my previous blog post about "Log message Request and Response in ASP.NET WebAPI", in the MessageHandler I use to log incoming and outgoing messages.
Here is the code from my blog before I fixed the bug:

public abstract class MessageHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var corrId = string.Format("{0}{1}", DateTime.Now.Ticks, Thread.CurrentThread.ManagedThreadId);
        var requestInfo = string.Format("{0} {1}", request.Method, request.RequestUri);

        var requestMessage = await request.Content.ReadAsByteArrayAsync();

        await IncommingMessageAsync(corrId, requestInfo, requestMessage);

        var response = await base.SendAsync(request, cancellationToken);

        var responseMessage = await response.Content.ReadAsByteArrayAsync();

        await OutgoingMessageAsync(corrId, requestInfo, responseMessage);

        return response;
    }

    protected abstract Task IncommingMessageAsync(string correlationId, string requestInfo, byte[] message);
    protected abstract Task OutgoingMessageAsync(string correlationId, string requestInfo, byte[] message);
}

If an ApiController throws an HttpResponseException, the Content property of the HttpResponseMessage returned from SendAsync will be NULL, so a null reference exception is thrown within the MessageHandler. The yellow screen of death will be returned to the client – the content is HTML and the HTTP status code is 500.
The bug in the MessageHandler was solved by adding a check against the HttpResponseMessage's IsSuccessStatusCode property:

public abstract class MessageHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var corrId = string.Format("{0}{1}", DateTime.Now.Ticks, Thread.CurrentThread.ManagedThreadId);
        var requestInfo = string.Format("{0} {1}", request.Method, request.RequestUri);

        var requestMessage = await request.Content.ReadAsByteArrayAsync();

        await IncommingMessageAsync(corrId, requestInfo, requestMessage);

        var response = await base.SendAsync(request, cancellationToken);

        byte[] responseMessage;

        if (response.IsSuccessStatusCode)
            responseMessage = await response.Content.ReadAsByteArrayAsync();
        else
            responseMessage = Encoding.UTF8.GetBytes(response.ReasonPhrase);

        await OutgoingMessageAsync(corrId, requestInfo, responseMessage);

        return response;
    }

    protected abstract Task IncommingMessageAsync(string correlationId, string requestInfo, byte[] message);
    protected abstract Task OutgoingMessageAsync(string correlationId, string requestInfo, byte[] message);
}

If we don't handle the exceptions that can occur in a custom Message Handler, we can have a hard time finding the problem causing the exception. The savior in this case is the Global.asax's Application_Error:

protected void Application_Error()
{
    var exception = Server.GetLastError();
    Debug.WriteLine(exception);
}

I would recommend adding Application_Error to the Global.asax and logging all exceptions there, to make sure every kind of exception is handled.

Summary
There are different ways we could add exception handling to the Web API: we can use a custom ApiController, an ExceptionFilterAttribute, an IHttpActionInvoker or a Message Handler. The ExceptionFilterAttribute would be a good place to add global exception handling – it requires very few modifications, just register it globally for all ApiControllers – and even the IHttpActionInvoker can be used to minimize the number of files modified. Adding Application_Error to the Global.asax is a good way to catch all unhandled exceptions that can occur, for example exceptions thrown in a Message Handler.
If you want to know when I have posted a blog post, you can follow me on twitter @fredrikn

    Read the article

  • IIS SSL Certificate Renewal Pain

    - by Rick Strahl
I'm in the middle of my annual certificate renewal for the West Wind site and I can honestly say that I hate IIS's certificate system. When it works it's fine, but when it doesn't, man, can it be a pain. Because I deal with public certificates on my site merely once a year, and you have to perform the certificate dance just the right way, I seem to run into some sort of trouble every year, thinking that Microsoft surely must have addressed the issues I ran into previously – HA! Not so. Don't ever use the Renew Certificate Feature in IIS! The first rule that I should have never forgotten is that certificate renewals in IIS (7 is what I'm using, but I think it's no different in 7.5 and 8) simply don't work if you're submitting to get a public certificate from a certificate authority. I use DNSimple for my DNS domain management and SSL certificates because they provide ridiculously easy domain management and good prices for SSL certs – especially wildcard certificates, which is what I use on west-wind.com. Certificates in IIS can be found pegged to the machine root. If you go into the IIS Manager, go to the machine root of the tree and then click on Certificates, and you then get various certificate options: Both of these options create a new Certificate Request (CSR), which is just a text file. But if you're silly enough like me to click on the Renew button on your old certificate, you'll find that you end up generating a very long certificate request that looks nothing like the original certificate request, and the format that's used for this is not accepted by most certificate authorities. While I'm not sure exactly what the problem is, it simply looks like IIS is respecting none of your original certificate bit size choices and is generating a huge certificate request that is 3 times the size of a 'normal' certificate request. The end result (and I've done this at least twice now) is that the certificate processor is likely to fail processing those renewals. Always create a new Certificate While it's a little more work and you have to remember how to fill out the certificate request properly, this is the safe way to make sure your certificate generates properly. First comes the Distinguished Name Properties dialog: Ah yes, you have to love the nomenclature of this stuff. Distinguished name, Common name – WTF is a common name? It doesn't look common to me! Make sure this form gets filled out correctly. Common Name: This is the domain name of the Web site. In my case I'm creating a wildcard certificate so I'm using the * prefix. If you're purchasing a certificate for a specific domain use www.west-wind.com or store.west-wind.com for example. Make sure this matches the EXACT domain you're trying to use secure access on, because that's all the certificate is going to work on unless you get a wildcard certificate. Organization: The name of your company or organization. Depending on the kind of certificate you purchase, this name will show up on your certificate. Most low end SSL certificates (i.e. those that cost under $100 for single domains) don't list the organization; the higher signature certificates that also require extensive validation by the cert authority do. Regardless, you should make sure this matches the right company/organization. Organizational Unit: This can be anything. Not really sure what this is for, but traditionally I've always set this to Web because – well this is a Web thing after all right?
I’ve never seen this used anywhere that I can tell other than to internally reference the cert. State and CountryPretty obvious. Should reflect the location of the business/organization/person or site.   Next you have to configure the bit size used for the certificate: The default on this dialog is 1024, but I’ve found that most providers these days request a minimum bit length of 2048, as did my DNSimple provider. Again check with the provider when you submit to make sure. Bit length mismatches can cause problems if you use a size that isn’t supported by the provider. I had that happen last year when I submitted my CSR and it got rejected quite a bit later, when the certs usually are issued within an hour or less. When you’re done here, the certificate is saved to disk as a .txt file and it should look something like this (this is a 2048 bit length CSR):-----BEGIN NEW CERTIFICATE REQUEST----- MIIEVGCCAz0CAQAwdjELMAkGA1UEBhMCVVMxDzANBgNVBAgMBkhhd2FpaTENMAsG A1UEBwwEUGFpYTEfMB0GA1UECgwWV2VzdCBXaW5kIFRlY2hub2xvZ2llczEMMAoG B1UECwwDV2ViMRgwFgYDVQQDDA8qLndlc3Qtd2luZC5jb20wggEiMA0GCSqGSIb3 DQEBAQUAA4IBDwAwggEKAoIBAQDIPWOFMkMVRp2Ftj9w/cCVV4OYYhoZYtl+8lTk oqDwKca0xWHLgioX/9v0rZLS6a82MHqKEBxVXu+cuCmSE4AQtB/1YH9lS4tpc/be OZDvnTotP6l4MCEzzAfROcw4CiIg6X0RMSnl8IATAvv2V5LQM9TDdt9oDdMpX2IY +vVC9RZ7PMHBmR9kwI2i/lrKitzhQKaHgpmKcRlM6iqpALUiX28w5HJaDKK1MDHN 607tyFJLHijuJKx7PdTqZYf50KkC3NupfZ2avVycf18Q13jHWj59tvwEOczoVzRL l4LQivAqbhyiqMpWnrZunIOUZta5aGm+jo7O1knGWJjxuraTAgMBAAGgggGYMBoG CisGAQQBgjcNAgMxDBYKNi4yLjkyMDAuMjA0BgkrBgEEAYI3FRQxJzAlAgEFDAZS QVNYUFMMC1JBU1hQU1xSaWNrDAtJbmV0TWdyLmV4ZTByBgorBgEEAYI3DQICMWQw YgIBAR5aAE0AaQBjAHIAbwBzAG8AZgB0ACAAUgBTAEEAIABTAEMAaABhAG4AbgBl AGwAIABDAHIAeQBwAHQAbwBnAHIAYQBwAGgAaQBjACAAUAByAG8AdgBpAGQAZQBy AwEAMIHPBgkqhkiG9w0BCQ4xgcEwgb4wDgYDVR0PAQH/BAQDAgTwMBMGA1UdJQQM MAoGCCsGAQUFBwMBMHgGCSqGSIb3DQEJDwRrMGkwDgYIKoZIhvcNAwICAgCAMA4G CCqGSIb3DQMEAgIAgDALBglghkgBZQMEASowCwYJYIZIAWUDBAEtMAsGCWCGSAFl AwQBAjALBglghkgBZQMEAQUwBwYFKw4DAgcwCgYIKoZIhvcNAwcwHQYDVR0OBBYE FD/yOsTbXE+GVFCFMmldzQvyloz9MA0GCSqGSIb3DQEBBQUAA4IBAQCK6LlsCuIM 1AU0niB6QZ9v0FTsGFxP1dYvVUnJyY6VEKNiGFiQjZac7UCs0p58yScdXWEFOE8V OsjAYD3xYNc05+ckyD67UHRGEUAVB9RBvbKW23KeR/8kBmEzc8PemD52YOgExxAJ 57xWmAwEHAvbgYzQvhO8AOzH3TGvvHbg5UKM1pYgNmuwZq5DkL/IDoeIJwfk/wrI wghNTuxxIFgbH4YrgLgv4PRvrS/LaTCRBdboaCgzATMczaOb1nd/DVNR+3fCtMhM W0psTAjzRbmXF3nJyAQa7jF/52gkY0RfFX2lG5tJnG+XDsVNvKNvh9Qa5Tlmkm06 ILKCm9ciWCKk -----END NEW CERTIFICATE REQUEST----- You can take that certificate request and submit that to your certificate provider. Since this is base64 encoded you can typically just paste it into a text box on the submission page, or some providers will ask you to upload the CSR as a file. What does a Renewal look like? 
Note the length of the CSR will vary somewhat with key strength, but compare this to a renewal request that IIS generated from my existing site:-----BEGIN NEW CERTIFICATE REQUEST----- MIIPpwYFKoZIhvcNAQcCoIIPmDCCD5QCAQExCzAJBgUrDgMCGgUAMIIIqAYJKoZI hvcNAQcBoIIImQSCCJUwggiRMIIH+gIBADBdMSEwHwYDVQQLDBhEb21haW4gQ29u dHJvbCBWYWxpFGF0ZWQxHjAcBgNVBAsMFUVzc2VudGlhbFNTTCBXaWxkY2FyZDEY MBYGA1UEAwwPKi53ZXN0LXdpbmQuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCB iQKBgQCK4OuIOR18Wb8tNMGRZiD1c9X57b332Lj7DhbckFqLs0ys8kVDHrTXSj+T Ye9nmAvfPpZmBtE5p9qRNN79rUYugAdl+qEtE4IJe1bRfxXzcKa1SXa8+TEs3zQa zYSmcR2dDuC8om1eAdeCtt0NnkvANgm1VLwGOor/UHMASaEhCQIDAQABoIIG8jAa BgorBgEEAYI3DQIDMQwWCjYuMi45MjAwLjIwNAYJKwYBBAGCNxUUMScwJQIBBQwG UkFTWFBTDAtSQVNYUFNcUmljawwLSW5ldE1nci5leGUwZgYKKwYBBAGCNw0CAjFY MFYCAQIeTgBNAGkAYwByAG8AcwBvAGYAdAAgAFMAdAByAG8AbgBnACAAQwByAHkA cAB0AG8AZwByAGEAcABoAGkAYwAgAFAAcgBvAHYAaQBkAGUAcgMBADCCAQAGCSqG SIb3DQEJDjGB8jCB7zAOBgNVHQ8BAf8EBAMCBaAwDAYDVR0TAQH/BAIwADA0BgNV HSUELTArBggrBgEFBQcDAQYIKwYBBQUHAwIGCisGAQQBgjcKAwMGCWCGSAGG+EIE ATBPBgNVHSAESDBGMDoGCysGAQQBsjEBAgIHMCswKQYIKwYBBQUHAgEWHWh0dHBz Oi8vc2VjdXJlLmNvbW9kby5jb20vQ1BTMAgGBmeBDAECATApBgNVHREEIjAggg8q Lndlc3Qtd2luZC5jb22CDXdlc3Qtd2luZC5jb20wHQYDVR0OBBYEFEVLAyO8gDiv lsfovKrx9mHPyrsiMIIFMAYJKwYBBAGCNw0BMYIFITCCBR0wggQFoAMCAQICEQDu 1E1T5Jvtkm5LOfSHabWlMA0GCSqGSIb3DQEBBQUAMHIxCzAJBgNVBAYTAkdCMRsw GQYDVQQIExJHcmVhdGVyIE1hbmNoZXN0ZXIxEDAOBgNVBAcTB1NhbGZvcmQxGjAY BgNVBAoTEUNPTU9ETyBDQSBMaW1pdGVkMRgwFgYDVQQDEw9Fc3NlbnRpYWxTU0wg Q0EwHhcNMTQwNTA3MDAwMDAwWhcNMTUwNjA2MjM1OTU5WjBdMSEwHwYDVQQLExhE b21haW4gQ29udHJvbCBWYWxpZGF0ZWQxHjAcBgNVBAsTFUVzc2VudGlhbFNTTCBX aWxkY2FyZDEYMBYGA1UEAxQPKi53ZXN0LXdpbmQuY29tMIIBIjANBgkqhkiG9w0B AQEFAAOCAQ8AMIIBCgKCAQEAiyKfL66XB51DlUfm6xXqJBcvMU2qorRHxC+WjEpB amvg8XoqNfCKzDAvLMbY4BLhbYCTagqtslnP3Gj4AKhXqRKU0n6iSbmS1gcWzCJM CHufZ5RDtuTuxhTdJxzP9YqZUfKV5abWQp/TK6V1ryaBJvdqM73q4tRjrQODtkiR PfZjxpybnBHFJS8jYAf8jcOjSDZcgN1d9Evc5MrEJCp/90cAkozyF/NMcFtD6Yj8 UM97z3MzDT2JPDoH3kAr3cCgpUNyQ2+wDNCnL9eWYFkOQi8FZMsZol7KlZ5NgNfO a7iZMVGbqDg6rkS//2uGe6tSQJTTs+mAZB+na+M8XT2UqwIDAQABo4IBwTCCAb0w HwYDVR0jBBgwFoAU2svqrVsIXcz//CZUzknlVcY49PgwHQYDVR0OBBYEFH0AmLiL RSEL9+sQD/n5O4N7/nnqMA4GA1UdDwEB/wQEAwIFoDAMBgNVHRMBAf8EAjAAMDQG A1UdJQQtMCsGCCsGAQUFBwMBBggrBgEFBQcDAgYKKwYBBAGCNwoDAwYJYIZIAYb4 QgQBME8GA1UdIARIMEYwOgYLKwYBBAGyMQECAgcwKzApBggrBgEFBQcCARYdaHR0 cHM6Ly9zZWN1cmUuY29tb2RvLmNvbS9DUFMwCAYGZ4EMAQIBMDsGA1UdHwQ0MDIw MKAuoCyGKmh0dHA6Ly9jcmwuY29tb2RvY2EuY29tL0Vzc2VudGlhbFNTTENBLmNy bDBuBggrBgEFBQcBAQRiMGAwOAYIKwYBBQUHMAKGLGh0dHA6Ly9jcnQuY29tb2Rv Y2EuY29tL0Vzc2VudGlhbFNTTENBXzIuY3J0MCQGCCsGAQUFBzABhhhodHRwOi8v b2NzcC5jb21vZG9jYS5jb20wKQYDVR0RBCIwIIIPKi53ZXN0LXdpbmQuY29tgg13 ZXN0LXdpbmQuY29tMA0GCSqGSIb3DQEBBQUAA4IBAQBqBfd6QHrxXsfgfKARG6np 8yszIPhHGPPmaE7xq7RpcZjY9H+8l6fe4jQbGFjbA5uHBklYI4m2snhPaW2p8iF8 YOkm2V2hEsSTnkf5/flw9mZtlCFEDFXSsBxBdNz8RYTthPMu1h09C0XuDB30sztg nR692FrxJN5/bXsk+MC9nEweTFW/t2HW+XZ8bhM7vsAS+pZionR4MyuQ0mYIt/lD csZVZ91KxTsIm8rNMkkYGFoSIXjQ0+0tCbxMF0i2qnpmNRpA6PU8l7lxxvPkplsk 9KB8QIPFrR5p/i/SUAd9vECWh5+/ktlcrfFP2PK7XcEwWizsvMrNqLyvQVNXSUPT MA0GCSqGSIb3DQEBBQUAA4GBABt/NitwMzc5t22p5+zy4HXbVYzLEjesLH8/v0ot uLQ3kkG8tIWNh5RplxIxtilXt09H4Oxpo3fKUN0yw+E6WsBfg0sAF8pHNBdOJi48 azrQbt4HvKktQkGpgYFjLsormjF44SRtToLHlYycDHBNvjaBClUwMCq8HnwY6vDq xikRoIIFITCCBR0wggQFoAMCAQICEQDu1E1T5Jvtkm5LOfSHabWlMA0GCSqGSIb3 DQEBBQUAMHIxCzAJBgNVBAYTAkdCMRswGQYDVQQIExJHcmVhdGVyIE1hbmNoZXN0 ZXIxEDAOBgNVBAcTB1NhbGZvcmQxGjAYBgNVBAoTEUNPTU9ETyBDQSBMaW1pdGVk MRgwFgYDVQQDEw9Fc3NlbnRpYWxTU0wgQ0EwHhcNMTQwNTA3MDAwMDAwWhcNMTUw 
NjA2MjM1OTU5WjBdMSEwHwYDVQQLExhEb21haW4gQ29udHJvbCBWYWxpZGF0ZWQx HjAcBgNVBAsTFUVzc2VudGlhbFNTTCBXaWxkY2FyZDEYMBYGA1UEAxQPKi53ZXN0 LXdpbmQuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAiyKfL66X B51DlUfm6xXqJBcvMU2qorRHxC+WjEpBamvg8XoqNfCKzDAvLMbY4BLhbYCTagqt slnP3Gj4AKhXqRKU0n6iSbmS1gcWzCJMCHufZ5RDtuTuxhTdJxzP9YqZUfKV5abW Qp/TK6V1ryaBJvdqM73q4tRjrQODtkiRPfZjxpybnBHFJS8jYAf8jcOjSDZcgN1d 9Evc5MrEJCp/90cAkozyF/NMcFtD6Yj8UM97z3MzDT2JPDoH3kAr3cCgpUNyQ2+w DNCnL9eWYFkOQi8FZMsZol7KlZ5NgNfOa7iZMVGbqDg6rkS//2uGe6tSQJTTs+mA ZB+na+M8XT2UqwIDAQABo4IBwTCCAb0wHwYDVR0jBBgwFoAU2svqrVsIXcz//CZU zknlVcY49PgwHQYDVR0OBBYEFH0AmLiLRSEL9+sQD/n5O4N7/nnqMA4GA1UdDwEB /wQEAwIFoDAMBgNVHRMBAf8EAjAAMDQGA1UdJQQtMCsGCCsGAQUFBwMBBggrBgEF BQcDAgYKKwYBBAGCNwoDAwYJYIZIAYb4QgQBME8GA1UdIARIMEYwOgYLKwYBBAGy MQECAgcwKzApBggrBgEFBQcCARYdaHR0cHM6Ly9zZWN1cmUuY29tb2RvLmNvbS9D UFMwCAYGZ4EMAQIBMDsGA1UdHwQ0MDIwMKAuoCyGKmh0dHA6Ly9jcmwuY29tb2Rv Y2EuY29tL0Vzc2VudGlhbFNTTENBLmNybDBuBggrBgEFBQcBAQRiMGAwOAYIKwYB BQUHMAKGLGh0dHA6Ly9jcnQuY29tb2RvY2EuY29tL0Vzc2VudGlhbFNTTENBXzIu Y3J0MCQGCCsGAQUFBzABhhhodHRwOi8vb2NzcC5jb21vZG9jYS5jb20wKQYDVR0R BCIwIIIPKi53ZXN0LXdpbmQuY29tgg13ZXN0LXdpbmQuY29tMA0GCSqGSIb3DQEB BQUAA4IBAQBqBfd6QHrxXsfgfKARG6np8yszIPhHGPPmaE7xq7RpcZjY9H+8l6fe 4jQbGFjbA5uHBklYI4m2snhPaW2p8iF8YOkm2V2hEsSTnkf5/flw9mZtlCFEDFXS sBxBdNz8RYTthPMu1h09C0XuDB30sztgnR692FrxJN5/bXsk+MC9nEweTFW/t2HW +XZ8bhM7vsAS+pZionR4MyuQ0mYIt/lDcsZVZ91KxTsIm8rNMkkYGFoSIXjQ0+0t CbxMF0i2qnpmNRpA6PU8l7lxxvPkplsk9KB8QIPFrR5p/i/SUAd9vECWh5+/ktlc rfFP2PK7XcEwWizsvMrNqLyvQVNXSUPTMYIBrzCCAasCAQEwgYcwcjELMAkGA1UE BhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UEBxMHU2Fs Zm9yZDEaMBgGA1UEChMRQ09NT0RPIENBIExpbWl0ZWQxGDAWBgNVBAMTD0Vzc2Vu dGlhbFNTTCBDQQIRAO7UTVPkm+2Sbks59IdptaUwCQYFKw4DAhoFADANBgkqhkiG 9w0BAQEFAASCAQB8PNQ6bYnQpWfkHyxnDuvNKw3wrqF2p7JMZm+SuN2qp3R2LpCR mW2LrGtQIm9Iob/QOYH+8houYNVdvsATGPXX2T8gzn+anof4tOG0vCTK1Bp9bwf9 MkRP+1c8RW/vkYmUW4X5/C+y3CZpMH5dDTaXBIpXFzjX/fxNpH/rvLzGiaYYL3Cn OLO+aOADr9qq5yoqwpiYCSfYNNYKTUNNGfYIidQwYtbHXEYhSukB2oR89xD2sZZ4 bOqFjUPgTa5SsERLDDeg3omMKiIXVYGxlqBEq51Kge6IQt4qQV9P9VgInW7cWmKe dTqNHI9ri3ttewdEnT++TKGKKfTjX9SR8Waj -----END NEW CERTIFICATE REQUEST----- Clearly there’s something very different between this an my original request! And it didn’t work. IIS creates a custom CSR that is encoded in a format that no certificate authority I’ve ever used uses. If you want the gory details of what’s in there look at this ServerFault question (thanks to Mika in the comments). In the end it doesn’t matter  though – no certificate authority knows what to do with this CSR. So create a new CSR and skip the renewal. Always! Use the same Server Keep in mind that on IIS at least you should always create your certificate on a single server and then when you receive the final certificate from your provider import it on that server. IIS tracks the CSR it created and requires it in order to import the final certificate properly. So if for some reason you try to install the certificate on another server, it won’t work. I’ve also run into trouble trying to install the same certificate twice – this time around I didn’t give my certificate the proper friendly name and IIS failed to allow me to assign the certificate to any of my Web sites. So I removed the certificate and tried to import again, only to find it failed the second time around. There are other ways to fix this, but in my case I had to have the certificate re-issued to work – not what you want to do. 
Regardless of what you do though, when you import make sure you do it right the first time by crossing all your t's and dotting your i's – it'll save you a lot of grief! You don't actually have to use the server that the certificate gets installed on to generate the CSR and first install it, but it is generally a good idea to do so just so you can get the certificate installed into the right place right away. If you have access to the server where you need to install the certificate you might as well use it. But you can use another machine to generate the CSR and install the certificate, then export the certificate and move it to another machine as needed. So you can use your dev machine to create a certificate, then export it and install it on a live server. More on installation and backup/export later. Installing the Certificate Once you've submitted a CSR, your provider will process the request and eventually issue you a new final certificate that contains another text file with the final key to import into your certificate store. IIS does this by combining the content in your certificate request with the original CSR. If all goes well your new certificate shows up in the certificate list and you're ready to assign the certificate to your sites. Make sure you use a friendly name that matches the domain name of your site. So use *.mysite.com or www.mysite.com or store.mysite.com to ensure IIS recognizes the certificate. I made the mistake of not naming my friendly name this way and found that IIS was unable to link my sites to my wildcard certificate. It needed to have the *. as part of the certificate, otherwise the Hostname input field was blanked out. Changing the Friendly Name If you accidentally used an invalid friendly name you can change it later in the Windows certificate store. Bring up a Run Box Type MMC File | Add/Remove Snap In Add Certificates | Computer Account | Local Computer Drill into Certificates | Personal | Certificates Find your Certificate | Right Click | Properties Edit the Friendly Name | Click OK Backing up your Certificate The first thing you should do once your certificate is successfully installed is to back it up! In case your server crashes or you otherwise lose your configuration, this will ensure you have an easy way to recover and reinstall your certificate either on the same server or a different one. If you're running a server farm or using a wildcard certificate you also need to get the certificate onto other machines, and a PFX file import is the easiest way to do this. To back up your certificate, select your certificate and choose Export from the context or sidebar menu: The Export Certificate option allows you to export a password protected binary file that you can import in a single step. You can copy the resulting binary PFX file to back up or copy to other machines to install on. Importing the certificate on another machine is as easy as pointing at the PFX file and specifying the password. IIS handles the rest. Assigning a new certificate to your Site Once you have the new certificate installed, all that's left to do is assign it to your site. In IIS select your Web site and bring up the Site Bindings from the right sidebar. Add a new binding for https, bind it to port 443, specify your hostname and pick the certificate from the pick list. If you're using a root site make sure to set up your certificate for www.yoursite.com and also for yoursite.com so that both work properly with SSL.
Note that you need to explicitly configure each hostname for a certificate if you plan to use SSL. Luckily, if you update your SSL certificate in the following year, IIS prompts you and asks whether you'd like to update all other sites that are using the existing cert to the newer cert. And you're done.

So what's the Pain?
So, all of this is old hat and it doesn't look all that bad, right? So what's the pain here? Well, if you follow the instructions and do everything right, then the process is about as straightforward as you would expect it to be. You create a cert request, you import it and assign it to your sites. Those are the basic steps, and to be perfectly fair it works well – if nothing goes wrong. However, renewing tends to be the problem. The first unintuitive issue is that you simply shouldn't renew, but create a new CSR and generate your new certificate from that. Over the years I've fallen prey to the belief that Microsoft eventually will fix this so that the renewal creates the same type of CSR as the old cert, but apparently that will just never happen. Booo!

The other problem I ran into is that I accidentally misnamed my imported certificate, which in turn set off a chain of events that caused my originally issued certificate to become uninstallable. When I received my completed certificate I installed it and it installed just fine, but the friendly name was wrong. As a result IIS refused to assign the certificate to any of my host headered sites. That's strike number one. Why the heck should the friendly name have any effect on the ability to attach the certificate??? Next I uninstalled the certificate because I figured that would be the easiest way to make sure I get it right. But I found that I could not reinstall my certificate. I kept getting these stop errors: "ASN1 bad tag value met" that would prevent the installation from completing. After searching around for this error and reading countless long messages on forums, I found that this error supposedly does not actually mean the install failed – it's just that the list doesn't refresh. Comodo has this to say:

Note: There is a known issue in IIS 7 giving the following error: "Cannot find the certificate request associated with this certificate file. A certificate request must be completed on the computer where it was created." You may also receive a message stating "ASN1 bad tag value met". If this is the same server that you generated the CSR on then, in most cases, the certificate is actually installed. Simply cancel the dialog and press "F5" to refresh the list of server certificates. If the new certificate is now in the list, you can continue with the next step. If it is not in the list, you will need to reissue your certificate using a new CSR (see our CSR creation instructions for IIS 7). After creating a new CSR, login to your Comodo account and click the 'replace' button for your certificate.

Not sure if this issue is fixed in IIS 8, but that's an insane bug to have crop up. As it turns out, in my case the refresh didn't work and the certificate didn't show up in the IIS list after the reinstall. In fact, when looking at the certificate store I could see my certificate was installed in the right place, but the private key was missing, which is most likely why IIS was not picking it up. It looks like IIS could not match the final cert to the original CSR it generated. But again, some sort of message to that effect might be helpful instead of "ASN1 bad tag value met".
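Before going through the recovery steps below, here is a quick programmatic way to confirm that symptom – a certificate sitting in the store without its private key. This is a minimal sketch using the .NET X509 store API, not part of the original post; the serial number is just the example value from the post.

// Sketch: check whether a certificate in the LocalMachine\Personal store still has its
// private key attached. Strip the spaces from the serial number shown in the MMC dialog.
using System;
using System.Security.Cryptography.X509Certificates;

class KeyCheckSketch
{
    static void Main()
    {
        var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
        store.Open(OpenFlags.ReadOnly);

        var matches = store.Certificates.Find(
            X509FindType.FindBySerialNumber,
            "00a79ba1a49d916357d69f26b8ee79b5cb",   // example serial from the post
            false);                                  // validOnly: include untrusted/expired certs

        foreach (var cert in matches)
        {
            Console.WriteLine("{0} - HasPrivateKey: {1}", cert.Subject, cert.HasPrivateKey);
        }

        store.Close();
    }
}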
Recovering the Private Key
So it turns out my original problem was that I received the issued certificate, but when I imported it the private key was missing. There's a relatively easy way to recover from this. If your certificate doesn't show up in IIS, check in the certificate store for the local machine (see the steps above on how to bring this up). If you look at the certificate in Certificates | Personal | Certificates, make sure you see the small key icon overlaid on the certificate. If the key icon is missing, it most likely means the certificate is missing its private key. To fix a certificate you can do the following:
Double click the certificate
Go to the Details tab
Copy down the Serial number

The serial number will be in a format like 00 a7 9b a1 a4 9d 91 63 57 d6 9f 26 b8 ee 79 b5 cb and you'll need to strip out the spaces in order to use it in the next step. Next open up an administrative command prompt and issue the following command:

certutil -repairstore my 00a79ba1a49d916357d69f26b8ee79b5cb

You should get a confirmation message that the repair worked. If you now go back to the certificate store you should see the key icon show up on the certificate. Your certificate is fixed. Now go back into IIS Manager and refresh the list of certificates, and if all goes well you should see all the certificates that show up in the cert store. Remember – back up the key first, then map it to your site…

Summary
I deal with a lot of customers who run their own IIS servers, and I can't tell you how often I hear about botched SSL installations. When I posted some of my issues on Twitter yesterday I got a hell storm of "me too" responses. I'm clearly not the only one who's run into this, especially with renewals. I feel pretty comfortable with IIS configuration and I do a lot of it for support purposes, but the SSL configuration is one that never seems to go seamlessly. This blog post is meant as a reminder to myself to read next time I do a renewal, so I can dot my i's and cross my t's before I get caught in the mess I'm dealing with today. Hopefully some of you find this useful as well.

© Rick Strahl, West Wind Technologies, 2005-2014. Posted in IIS7, Security

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blog post about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific to a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I've been in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all the tricks to pull by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database.

I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it mainly comes down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spent inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), it won't make a difference for your application as that 10ms difference won't be noticed. That's why it's very important to find the real locations of the problems so developers can fix them properly and don't get frustrated because their quest to get a fast, performing application failed.

Performance tuning basics and rules
Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason for a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or assuming things will be bad/slow without doing analysis, will lead down the path of premature optimization and won't actually solve your problems, only create new ones. The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic.
If I solely look at the Linq query, the code consuming the resultset of the 10 rows, and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem.

The second most important rule you have to understand is based on the old saying "Penny wise, Pound Foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger part of the total time T for that same given task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: no analysis -> no problem found -> no solution.

One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance as possible out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O characteristics, so you know the performance you'll get plus you know the algorithm will work. The time taken by the code implementing the algorithm is inevitable: you already implemented the best algorithm. You might find some optimizations at the technical level but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems.

Isolate
The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area with a clear begin and end and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task, the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.

Analyze
Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem.
This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, web service, Windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eats up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating part is essential.

Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. Verify what the code does along the way and whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths, just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing, which means we collect data about what could be wrong, for each participating part of the complete application.

Reviewing the code you wrote is a good tool to get a deeper understanding of what is going on for a given task, but ultimately it lacks precision and an overview of what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get a deeper understanding of which parts contribute how much time to the total task, triggered by which other parts, and for example how many times they are called. There are two different kinds of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers.

.NET profiling
.NET profilers (e.g. dotTrace by JetBrains or ANTS by Red Gate Software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, how many times that code was called by other code, and thus reveal where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise pound foolish saying: if a profiler reveals that a group of methods is fast, or doesn't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter. As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet.
You navigate to the particular part which is slow, start profiling in the profiler, perform the actions in your application which are considered slow, and afterwards take a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area.

O/R mapper and RDBMS profiling
.NET profilers give you good insight into the .NET side of things, but not into the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand how much the various parts of the O/R mapper and database contribute to the total time taken for task T, we need different tools. There are two kinds of tools focusing on O/R mapper and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by Hibernating Rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating Rhinos also has profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf), which work the same as LLBLGen Prof. For RDBMS profilers, you have to check whether the RDBMS vendor ships a profiler. For example, for SQL Server the profiler is shipped with SQL Server, for Oracle it's built into the RDBMS; there are also 3rd party tools. Which tool you're using isn't really important; what's important is that you get insight into which queries are executed during the task / area we're currently focused on and how long they took.

Here, the O/R mapper profilers have an advantage as they collect the time it took to execute the query from the application's perspective, so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset, or a resultset with large blob/clob/ntext/image fields, takes more time to get transported across the network than a small resultset, and a database profiler doesn't take this into account most of the time. Another tool to use in this case, which is more low level and not supported by all O/R mappers (though LLBLGen Pro and NHibernate both do), is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed, and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight into what's going on.

Interpret
After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid, which parts actually take part, and whether the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed, and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e.
don't contribute significantly to the total time taken) can be ignored from now on, we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now first have to look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task depends on the time it takes to fetch the data from the database. If there was just 1 query executed, according to tracing or O/R mapper profilers / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data.

Interpret means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10 row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time is spent in the .NET code, meaning the RDBMS part contributes the most to the total time taken; the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide whether further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected and there is room for improvement in the area which contributes the most to the total time taken, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future, when someone else looks at the application and starts asking questions, you can answer them properly, and new analysis is only necessary if the situation has changed.

Fix
After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (trading memory consumption for performance) to avoid unnecessarily re-querying data and re-consuming the results. After applying a change, it's key that you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether they moved the problem to a different part of the application. Don't fall into the trap of doing partial analysis: do the full analysis again: .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all.
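The full profiling pass described above is the real comparison tool, but a crude wall-clock check of a single isolated task before and after a change can also guard against assuming a fix helped. The sketch below is an illustration only; FetchOrdersForCustomer is a hypothetical task method, not something from LLBLGen Pro.

// Crude before/after timing for a single isolated task. This does not replace
// .NET / O/R mapper / RDBMS profiling - it only checks whether a change moved the needle.
using System;
using System.Diagnostics;

static class MeasureSketch
{
    public static TimeSpan Time(Action task, int iterations = 5)
    {
        task();                                   // warm-up run (JIT, connection pools, caches)
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            task();
        }
        sw.Stop();
        return TimeSpan.FromTicks(sw.Elapsed.Ticks / iterations);   // average per run
    }
}

// usage (hypothetical task method):
// var avg = MeasureSketch.Time(() => FetchOrdersForCustomer("CHOPS"));
// Console.WriteLine("average: {0} ms", avg.TotalMilliseconds);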
Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to move away from the strict-OO path and execute queries directly against the RDBMS – these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data access and databases in general. In most cases it's not a big issue: alternatives are often good choices too and the compromises aren't that hard to deal with. What is important is that you document why you made a choice, a compromise: which analysis data and which interpretation led you to the choice made. This is key for good maintainability in the years to come.

Most common performance problems with O/R mappers
Below is an incomplete list of common performance problems related to data access / O/R mappers / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step.

SELECT N+1 (lazy-loading specific): lazy loading triggered performance bottlenecks. Consider a list of Orders bound to a grid. You have a field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) the Customer row for each Order row. This means you'll get for the single list not 1 query (for the orders) but 1 + (the number of orders shown) queries. To solve this: use eager loading using a prefetch path to fetch the customers with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem.

Prefetch paths using many path nodes, or sorting, or limiting: an eager loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, it can be that the amount of data fetched in a child node is bigger than you think. Also consider that data in every node is merged on the client within the parent. This is fast, but it can still take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries.

Deep inheritance hierarchies of type Target Per Entity/Type: if you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take.

Fetching massive amounts of data by fetching large lists of entities: LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to processing large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure to switch server-side paging on in the datasource control used. In this case, paging on the grid alone is not enough: this can lead to fetching a lot of data which is then loaded into the grid and paged there. Keep in mind that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g.
when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the datareader will do DISTINCT filtering on the client. This is a little slower, but it does perform paging functionality on the data-reader, so it won't fetch all rows even if the query suggests it does.

Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded: LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data transport across the network. Use this optimization if you see that there's a big difference between query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call.

Doing client-side aggregates/scalar calculations by consuming a lot of data: if possible, try to formulate a scalar query or group by query using the projection system or GetScalar functionality of LLBLGen Pro to do data consumption on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all in memory, then traverse the data in-memory to calculate a value.

Using .ToList() constructs inside Linq queries: it might be that you use .ToList() somewhere in a Linq query, which makes the query run partially in-memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers in-memory and do in-memory filtering, as the Linq query is defined on an IEnumerable<T>, and not on an IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run (a corrected version of this query is sketched at the end of this post).

Fetching all entities to delete into memory first: to delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache.

Fetching all entities to update with an expression into memory first: similar to the previous point: it is more efficient to update a set of entities directly with a single UPDATE query using an expression, instead of fetching the entities into memory first, updating them in a loop, and saving them afterwards. It might however be a compromise you don't want to take, as it works around the idea of having an object graph in memory which is manipulated, and instead makes the code fully aware there's an RDBMS somewhere.

Conclusion
Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success, as well as knowing what's going on inside the application you built.
I hope you'll find this guide useful in tracking down performance problems and dealing with them in a useful way.  
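As referenced in the .ToList() item above, here is the same query kept on IQueryable<T> so the filter is translated to SQL and executed by the database instead of filtering all customers in memory. This is a sketch; metaData.Customers is the example context from the post and the surrounding types are assumed.

// Corrected version of the .ToList() example: no .ToList() on the source, so the
// Where clause stays on IQueryable<T> and is executed by the RDBMS, not in memory.
var q = from c in metaData.Customers          // stays IQueryable<T>
        where c.Country == "Norway"
        select c;

foreach (var customer in q)                   // query executes here, filtered server-side
{
    Console.WriteLine(customer.CompanyName);
}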

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris.

This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

The Long Road To Stubs
This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

4631488 lib/Makefile is too patient: .WAITs should be reduced

This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach.

Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot.

In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

# DATA(i386) __iob 0x3c0
# DATA(amd64,sparcv9) __iob 0xa00
# DATA(sparc) __iob 0x140
iob;

A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script is used after both objects have been built to compare the real and stub objects, using data from elfdump, and validate that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for a shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

At that point, we needed to apply this prototype to building Solaris. As you might imagine, modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years.

Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had stubs in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

6916788 ld version 2 mapfile syntax
PSARC/2009/688 Human readable and extensible ld mapfile syntax

In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

6916796 OSnet mapfiles should use version 2 link-editor syntax

That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

% ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
    -ztext -zdefs -Bdirect ...

real 0.019708910
user 0.010101680
sys 0.008528431

In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects however was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging.

Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target is added to the Makefiles called stubinstall, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them.

After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate made merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timing between the stub and non-stub cases was just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

6993877 ld should produce stub objects
PSARC/2010/397 ELF Stub Objects

followed by the work to convert the ON consolidation in snv_161 (February 2011) with

7009826 OSnet should use stub objects
4631488 lib/Makefile is too patient: .WAITs should be reduced

This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

Conclusions and Looking Forward
Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • MVC2 EditorTemplate for DropDownList

    - by tschreck
I've spent the majority of the past week knee deep in the new templating functionality baked into MVC2. I had a hard time trying to get a DropDownList template working. The biggest problem I've been working to solve is how to get the source data for the drop down list to the template. I saw a lot of examples where you can put the source data in the ViewData dictionary (ViewData["DropDownSourceValuesKey"]) then retrieve it in the template itself (var sourceValues = ViewData["DropDownSourceValuesKey"];). This works, but I did not like having a silly string as the linchpin for making this work. Below is an approach I've come up with and wanted to get opinions on. Here are my design goals:

The view model should contain the source data for the drop down list
Limit silly strings
Not use the ViewData dictionary
The controller is responsible for filling the property with the source data for the drop down list

Here's my View Model:

public class CustomerViewModel
{
    [ScaffoldColumn(false)]
    public String CustomerCode { get; set; }

    [UIHint("DropDownList")]
    [DropDownList(DropDownListTargetProperty = "CustomerCode")]
    [DisplayName("Customer Code")]
    public IEnumerable<SelectListItem> CustomerCodeList { get; set; }

    public String FirstName { get; set; }
    public String LastName { get; set; }
    public String PhoneNumber { get; set; }
    public String Address1 { get; set; }
    public String Address2 { get; set; }
    public String City { get; set; }
    public String State { get; set; }
    public String Zip { get; set; }
}

My View Model has a CustomerCode property, which is a value that the user selects from a list of values. I have a CustomerCodeList property that is a list of possible CustomerCode values and is the source for a drop down list. I've created a DropDownList attribute with a DropDownListTargetProperty. DropDownListTargetProperty points to the property which will be populated based on the user's selection from the generated drop down (in this case, the CustomerCode property). Notice that the CustomerCode property has [ScaffoldColumn(false)], which forces the generator to skip the field in the generated output. My DropDownList.ascx file will generate a drop down list form element with the source data from the CustomerCodeList property. The generated drop down list will use the value of the DropDownListTargetProperty from the DropDownList attribute as the Id and Name attributes of the Select form element. So the generated code will look like this:

<select id="CustomerCode" name="CustomerCode">
    <option>...
</select>

This works out great because when the form is submitted, MVC will populate the target property with the selected value from the drop down list, because the name of the generated drop down list IS the target property. I kinda visualize it as the CustomerCodeList property being an extension of sorts of the CustomerCode property. I've coupled the source data to the property.
Here's my code for the controller: public ActionResult Create() { //retrieve CustomerCodes from a datasource of your choosing List<CustomerCode> customerCodeList = modelService.GetCustomerCodeList(); CustomerViewModel viewModel= new CustomerViewModel(); viewModel.CustomerCodeList = customerCodeList.Select(s => new SelectListItem() { Text = s.CustomerCode, Value = s.CustomerCode, Selected = (s.CustomerCode == viewModel.CustomerCode) }).AsEnumerable(); return View(viewModel); } Here's my code for the DropDownListAttribute: namespace AutoForm.Attributes { public class DropDownListAttribute : Attribute { public String DropDownListTargetProperty { get; set; } } } Here's my code for the template (DropDownList.ascx): <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<IEnumerable<SelectListItem>>" %> <%@ Import Namespace="AutoForm.Attributes"%> <script runat="server"> DropDownListAttribute GetDropDownListAttribute() { var dropDownListAttribute = new DropDownListAttribute(); if (ViewData.ModelMetadata.AdditionalValues.ContainsKey("DropDownListAttribute")) { dropDownListAttribute = (DropDownListAttribute)ViewData.ModelMetadata.AdditionalValues["DropDownListAttribute"]; } return dropDownListAttribute; } </script> <% DropDownListAttribute attribute = GetDropDownListAttribute();%> <select id="<%= attribute.DropDownListTargetProperty %>" name="<%= attribute.DropDownListTargetProperty %>"> <% foreach(SelectListItem item in ViewData.Model) {%> <% if (item.Selected == true) {%> <option value="<%= item.Value %>" selected="true"><%= item.Text %></option> <% } %> <% else {%> <option value="<%= item.Value %>"><%= item.Text %></option> <% } %> <% } %> </select> I tried using the Html.DropDownList helper, but it would not allow me to change the Id and Name attributes of the generated Select element. NOTE: you have to override the CreateMetadata method of the DataAnnotationsModelMetadataProvider for the DropDownListAttribute. Here's the code for that: public class MetadataProvider : DataAnnotationsModelMetadataProvider { protected override ModelMetadata CreateMetadata(IEnumerable<Attribute> attributes, Type containerType, Func<object> modelAccessor, Type modelType, string propertyName) { var metadata = base.CreateMetadata(attributes, containerType, modelAccessor, modelType, propertyName); var additionalValues = attributes.OfType<DropDownListAttribute>().FirstOrDefault(); if (additionalValues != null) { metadata.AdditionalValues.Add("DropDownListAttribute", additionalValues); } return metadata; } } Then you have to make a call to the new MetadataProvider in Application_Start of Global.asax.cs: protected void Application_Start() { RegisterRoutes(RouteTable.Routes); ModelMetadataProviders.Current = new MetadataProvider(); } Well, I hope this makes sense and I hope this approach may save you some time. I'd like some feedback on this approach please. Is there a better approach?
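    A minimal sketch of the postback side of this approach, assuming a hypothetical [HttpPost] Create action that reuses the modelService from the GET action above; the point is simply that the default model binder fills CustomerCode because the rendered select element carries that name:
        [HttpPost]
        public ActionResult Create(CustomerViewModel viewModel)
        {
            // The <select> rendered by DropDownList.ascx is named "CustomerCode",
            // so the default model binder copies the selected value into
            // viewModel.CustomerCode; CustomerCodeList itself does not round-trip.
            if (!ModelState.IsValid)
            {
                // Re-populate the drop down source before redisplaying the form.
                viewModel.CustomerCodeList = modelService.GetCustomerCodeList()
                    .Select(s => new SelectListItem { Text = s.CustomerCode, Value = s.CustomerCode });
                return View(viewModel);
            }
            // ... persist the customer here ...
            return RedirectToAction("Index");
        }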

    Read the article

  • Tomcat6 can't connect to MySql (The driver has not received any packets from the server)

    - by Tobias Wiesenthal
    Hi all, i'm running an Apache Tomcat 6.0.20 / MySQL 5.1.37-lubuntu / sun-java6-jdk /sun-java6-jre / sun-java6-bin on my local machine using Ubuntu 9.10 as OS. I'm trying to get a simple DB-query example running for 2 days now, but i still get this Exception: org.apache.jasper.JasperException: javax.servlet.ServletException: javax.servlet.jsp.JspException: Unable to get connection, DataSource invalid: "org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.)" org.apache.jasper.servlet.JspServletWrapper.handleJspException(JspServletWrapper.java:522) org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:398) org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:342) org.apache.jasper.servlet.JspServlet.service(JspServlet.java:267) javax.servlet.http.HttpServlet.service(HttpServlet.java:717) root cause javax.servlet.ServletException: javax.servlet.jsp.JspException: Unable to get connection, DataSource invalid: "org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.)" org.apache.jasper.runtime.PageContextImpl.doHandlePageException(PageContextImpl.java:862) org.apache.jasper.runtime.PageContextImpl.handlePageException(PageContextImpl.java:791) org.apache.jsp.index_jsp._jspService(index_jsp.java:104) org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70) javax.servlet.http.HttpServlet.service(HttpServlet.java:717) org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:374) org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:342) org.apache.jasper.servlet.JspServlet.service(JspServlet.java:267) javax.servlet.http.HttpServlet.service(HttpServlet.java:717) root cause javax.servlet.jsp.JspException: Unable to get connection, DataSource invalid: "org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. 
The driver has not received any packets from the server.)" org.apache.taglibs.standard.tag.common.sql.QueryTagSupport.getConnection(QueryTagSupport.java:285) org.apache.taglibs.standard.tag.common.sql.QueryTagSupport.doStartTag(QueryTagSupport.java:168) org.apache.jsp.index_jsp._jspx_meth_sql_005fquery_005f0(index_jsp.java:274) org.apache.jsp.index_jsp._jspx_meth_c_005fotherwise_005f0(index_jsp.java:216) org.apache.jsp.index_jsp._jspx_meth_c_005fchoose_005f0(index_jsp.java:130) org.apache.jsp.index_jsp._jspService(index_jsp.java:93) org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70) javax.servlet.http.HttpServlet.service(HttpServlet.java:717) org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:374) org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:342) org.apache.jasper.servlet.JspServlet.service(JspServlet.java:267) javax.servlet.http.HttpServlet.service(HttpServlet.java:717) my web.xml looks like this : <?xml version="1.0" encoding="utf-8"?> <web-app xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" version="2.5"> <resource-ref> <description>DB Connection</description> <res-ref-name>jdbc/testDB</res-ref-name> <res-type>javax.sql.DataSource</res-type> <res-auth>Container</res-auth> </resource-ref> </web-app> the context.xml looks like this : <?xml version="1.0" encoding="UTF-8"?> <Context path="/my1stApp" docBase="/var/www/jsp/my1stApp" debug="5" reloadable="true" crossContext="true"> <Resource name="jdbc/testDB" auth="Container" type="javax.sql.DataSource" maxActive="5" maxIdle="5" maxWait="10000" username="user" password="password" driverClassName="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost:3306/some"/> </Context> and the jsp file looks like this: <%@ page contentType="text/html" %> <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %> <%@ taglib prefix="fn" uri="http://java.sun.com/jsp/jstl/functions" %> <%@ taglib prefix="fmt" uri="http://java.sun.com/jsp/jstl/fmt" %> <%@ taglib prefix="sql" uri="http://java.sun.com/jsp/jstl/sql" %> <html> <head> <title>DroneLootTool</title> </head> <body bgcolor="white"> <sql:query var="res" dataSource="jdbc/testDB"> select name, othername from mytable </sql:query> <h2>Results</h2> <c:forEach var="row" items="${res.rows}"> Name ${row.name}<br/> MoreName ${row.othername}<br/><br/> </c:forEach> </body> </html> read lots of forum entries / tried lots of different settings (always changed back to original settings when it didnt' work) set TOMCAT6_SECURITY=no in /etc/default/tomcat6 because TOMCAT6_SECURITY=yes was causing trouble too the skip-networking flag is not set for the DB (BIND 127.0.0.1 is set) firewall is swiched off (sudo ufw disable) MySQL works (tested several times with user used in this skript) telnet localhost 3306 says Trying ::1... Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. Connection closed by foreign host. The TestConnection.java produced the following output: me@my-laptop:~/Desktop$ java -classpath '/usr/share/java/mysql.jar:./' TestConnection com.mysql.jdbc.Driver jdbc:mysql://localhost:3306/testDB myuser mypassword com.mysql.jdbc.CommunicationsException: Communications link failure Last packet sent to the server was 0 ms ago. 
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1070) at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2103) at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:718) at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:298) at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:282) at java.sql.DriverManager.getConnection(DriverManager.java:582) at java.sql.DriverManager.getConnection(DriverManager.java:185) at TestConnection.checkConnection(TestConnection.java:40) at TestConnection.main(TestConnection.java:21) Caused by: com.mysql.jdbc.CommunicationsException: Communications link failure Last packet sent to the server was 0 ms ago. at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1070) at com.mysql.jdbc.MysqlIO.readPacket(MysqlIO.java:666) at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1069) at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2031) ... 7 more Caused by: java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost. at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:2431) at com.mysql.jdbc.MysqlIO.readPacket(MysqlIO.java:590) ... 9 more Connection failed. i don't know if there is a difference between the way the java driver connects to the DB and the Perl DBI module does, but this PERL skript works #!/usr/bin/perl -w use CGI; use DBI; use strict; print CGI::header(); my $dbh = DBI->connect("dbi:mysql:some:localhost", "user", "password"); my $sSql = "SELECT * from mytable"; my $ppl = $dbh->selectall_arrayref( $sSql ); foreach my $pl (@$ppl) { my @array = @$pl; print @array; } $dbh->disconnect; enabled --log-warnings on the mysql, but i didn't get any new warnings. When i was searching the logs for warnings i found this messages when i restart the tomcat, don't know if it helps to find the problem : Feb 2 19:50:37 tobias-laptop jsvc.exec[3129]: 02.02.2010 19:50:37 org.apache.catalina.startup.HostConfig checkResources#012INFO: Undeploying context [/myapp] Feb 2 19:50:37 tobias-laptop jsvc.exec[3129]: 02.02.2010 19:50:37 org.apache.catalina.loader.WebappClassLoader clearReferencesThreads#012SCHWERWIEGEND: A web application appears to have started a thread named [MySQL Statement Cancellation Timer] but has failed to stop it. This is very likely to create a memory leak. Feb 2 19:50:37 tobias-laptop jsvc.exec[3129]: 02.02.2010 19:50:37 org.apache.catalina.startup.HostConfig deployDescriptor#012INFO: Deploying configuration descriptor myapp.xml

    Read the article

  • Impossible to do POSTs with appengine-jruby/RoR: Reflection is not allowed

    - by Joel Cuevas
    I'm trying to build a site with RoR on Google App Engine. I'm using the google-appengine gem (http://appengine-jruby.googlecode.com) and following the instructions in (http://gist.github.com/268192). The problem is that I can't submit ANY form! I've already tried this in two diferent clean Win 7 Pro envs and the result is the same. After install Ruby 1.8.6 (One-Click Installer): 1. gem update --system 2. gem install rails 3. gem install google-appengine 4. gem install rails_dm_datastore 5. gem install activerecord-nulldb-adapter 6. curl -O http://appengine-jruby.googlecode.com/hg/demos/rails2/rails2_appengine.rb 7. ruby rails2_appengine.rb (previously downloaded) 8. rails myproj 9. chmod myproj 10. ruby script/generate dd_model MyModel f1:string f2:float f3:float f4:float f5:integer f6:integer f7:integer -f 11. ruby script/generate scaffold MyModel f1:string f2:float f3:float f4:float f5:integer f6:integer f7:integer -f --skip-migration 12. dev_appserver.rb -p 3000 . At this point, I manually test the scaffold in (http://localhost:3000/my_models). The index is OK, then I create a new registry with the generated form, everything's fine, but when I try to create a second one, I get a "java.lang.RuntimeException: DummyDynamicScope should never be used for backref storage" in the console. As far as I read this is a won't-fix behavior in JRuby 1.4.1, but it's converted to a debug only warning in 1.5.0, so I proceed to install the pre release. 13. gem install appengine-jruby-jars --pre With this, that exception is solved and everything works great... until I move the project to the GAE server. 14. ruby appcfg.rb update . And now, in (http://myproj.appspot.com/my_models), again, the index is fine, also the new form, but in the moment that I submit it with valid data, I get a 500 error: "java.lang.IllegalAccessException: Reflection is not allowed on public int". As I said, this behavior is not present in the local SDK. In both cases, I'm completely unable to post anything. 
This is what I have right now in the GAE environment: Ruby version 1.8.7 (java) RubyGems disabled Rack version 1.1 Rails version 2.3.5 Action Pack version 2.3.5 Active Support version 2.3.5 DataMapper version 0.10.2 Environment production JRuby Runtime version 1.5.0.pre JRuby-Rack version 0.9.7 AppEngine SDK version Google App Engine/1.3.3 AppEngine APIs version 0.0.15 And this are my intalled gems: actionmailer (2.3.5) actionpack (2.3.5) activerecord (2.3.5) activerecord-nulldb-adapter (0.2.0) activeresource (2.3.5) activesupport (2.3.5) addressable (2.1.2) appengine-apis (0.0.15) appengine-jruby-jars (0.0.8.pre, 0.0.7) appengine-rack (0.0.8) appengine-sdk (1.3.3.1) appengine-tools (0.0.12) bundler08 (0.8.5) dm-appengine (0.0.8) dm-ar-finders (0.10.2) dm-core (0.10.2) dm-timestamps (0.10.2) dm-validations (0.10.2) extlib (0.9.14) fxri (0.3.7, 0.3.6) google-appengine (0.0.12) hpricot (0.8.2 x86-mswin32, 0.6 mswin32) jruby-rack (0.9.8, 0.9.7) log4r (1.1.7, 1.0.5) rack (1.1.0, 1.0.1) rails (2.3.5) rails_appengine (0.0.3) rails_dm_datastore (0.2.9) rake (0.8.7, 0.7.3) rubygems-update (1.3.7, 1.3.6) rubyzip (0.9.4) sources (0.0.1) win32-api (1.4.6 x86-mswin32-60, 1.0.4 mswin32) win32-clipboard (0.5.2, 0.4.3) win32-dir (0.3.6, 0.3.2) win32-eventlog (0.5.2, 0.4.6) win32-file (0.6.3, 0.5.4) win32-file-stat (1.3.4, 1.2.7) win32-process (0.6.2, 0.5.3) win32-sapi (0.1.5, 0.1.4) win32-sound (0.4.2, 0.4.1) windows-api (0.4.0, 0.2.0) windows-pr (1.0.9, 0.7.2) I'm unable to attach the full logs of the exceptions because of the character limits, but I can provide them under request. Here's an abstract of them: DummyDynamicScope (dev and prod envs): 14-may-2010 7:18:40 com.google.appengine.tools.development.ApiProxyLocalImpl log SEVERE: [1273821520195000] javax.servlet.ServletContext log: Application Error java.lang.RuntimeException: DummyDynamicScope should never be used for backref storage at org.jruby.runtime.scope.DummyDynamicScope.getBackRef(DummyDynamicScope.java:49) at org.jruby.RubyRegexp.updateBackRef(RubyRegexp.java:1404) at org.jruby.RubyRegexp.updateBackRef(RubyRegexp.java:1396) at org.jruby.RubyRegexp.search(RubyRegexp.java:1386) at org.jruby.RubyRegexp.op_match(RubyRegexp.java:1301) at org.jruby.RubyString.op_match(RubyString.java:1446) at org.jruby.RubyString$i_method_1_0$RUBYINVOKER$op_match.call(org/jruby/RubyString$i_method_1_0$RUBYINVOKER$op_match.gen) at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodOneOrN.call(JavaMethod.java:721) at org.jruby.RubyClass.finvoke(RubyClass.java:472) at org.jruby.RubyObject.send(RubyObject.java:1442) at org.jruby.RubyObject$i_method_multi$RUBYINVOKER$send.call(org/jruby/RubyObject$i_method_multi$RUBYINVOKER$send.gen) at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodZeroOrOneOrTwoOrNBlock.call(JavaMethod.java:276) at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:330) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:189) at ruby.jit.ruby.C_3a_.Desarrollo.AppEngine.gorgory.WEB_minus_INF.lib.gems_dot_jar.bundler_gems.jruby.$1_dot_8.gems.dm_minus_validations_minus_0_dot_10_dot_2.lib.dm_minus_validations.validators.numeric_validator.validate_with_comparison at ruby.jit.ruby.C_3a_.Desarrollo.AppEngine.gorgory.WEB_minus_INF.lib.gems_dot_jar.bundler_gems.jruby.$1_dot_8.gems.dm_minus_validations_minus_0_dot_10_dot_2.lib.dm_minus_validations.validators.numeric_validator.validate_with_comparison at org.jruby.internal.runtime.methods.JittedMethod.call(JittedMethod.java:102) at 
org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:144) at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:280) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:69) at org.jruby.ast.FCallManyArgsNode.interpret(FCallManyArgsNode.java:60) at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104) at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:229) at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:193) at org.jruby.RubyClass.finvoke(RubyClass.java:491) at org.jruby.RubyObject.send(RubyObject.java:1448) at org.jruby.RubyObject$i_method_multi$RUBYINVOKER$send.call(org/jruby/RubyObject$i_method_multi$RUBYINVOKER$send.gen) at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodZeroOrOneOrTwoOrThreeOrNBlock.call(JavaMethod.java:293) at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:350) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:229) at ruby.jit.ruby.C_3a_.Desarrollo.AppEngine.gorgory.WEB_minus_INF.lib.gems_dot_jar.bundler_gems.jruby.$1_dot_8.gems.dm_minus_validations_minus_0_dot_10_dot_2.lib.dm_minus_validations.validators.numeric_validator.validate_with28985350_50 at ruby.jit.ruby.C_3a_.Desarrollo.AppEngine.gorgory.WEB_minus_INF.lib.gems_dot_jar.bundler_gems.jruby.$1_dot_8.gems.dm_minus_validations_minus_0_dot_10_dot_2.lib.dm_minus_validations.validators.numeric_validator.validate_with28985350_50 at org.jruby.internal.runtime.methods.JittedMethod.call(JittedMethod.java:221) at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:201) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:227) at org.jruby.ast.FCallThreeArgNode.interpret(FCallThreeArgNode.java:40) Reflection (only prod env): Java::JavaLang::SecurityException (java.lang.IllegalAccessException: Reflection is not allowed on public int java.lang.String$CaseInsensitiveComparator.compare(java.lang.String,java.lang.String)): com.google.appengine.runtime.Request.process-92563a0605f433ea(Request.java) java.lang.reflect.AccessibleObject.setAccessible(AccessibleObject.java:40) org.jruby.javasupport.JavaMethod.<init>(JavaMethod.java:176) org.jruby.javasupport.JavaMethod.create(JavaMethod.java:183) org.jruby.java.invokers.MethodInvoker.createCallable(MethodInvoker.java:23) org.jruby.java.invokers.RubyToJavaInvoker.<init>(RubyToJavaInvoker.java:63) org.jruby.java.invokers.MethodInvoker.<init>(MethodInvoker.java:13) org.jruby.java.invokers.InstanceMethodInvoker.<init>(InstanceMethodInvoker.java:15) org.jruby.javasupport.JavaClass$InstanceMethodInvokerInstaller.install(JavaClass.java:339) org.jruby.javasupport.JavaClass.installClassMethods(JavaClass.java:723) org.jruby.javasupport.JavaClass.setupProxy(JavaClass.java:586) org.jruby.javasupport.Java.createProxyClass(Java.java:506) org.jruby.javasupport.Java.getProxyClass(Java.java:445) org.jruby.javasupport.Java.getInstance(Java.java:354) org.jruby.javasupport.JavaUtil.convertJavaToUsableRubyObject(JavaUtil.java:143) org.jruby.javasupport.JavaClass$ConstantField.install(JavaClass.java:360) org.jruby.javasupport.JavaClass.installClassFields(JavaClass.java:711) org.jruby.javasupport.JavaClass.setupProxy(JavaClass.java:585) org.jruby.javasupport.Java.createProxyClass(Java.java:506) org.jruby.javasupport.Java.getProxyClass(Java.java:445) org.jruby.javasupport.Java.getProxyOrPackageUnderPackage(Java.java:885) 
org.jruby.javasupport.Java.get_proxy_or_package_under_package(Java.java:918) org.jruby.javasupport.JavaUtilities.get_proxy_or_package_under_package(JavaUtilities.java:54) org.jruby.javasupport.JavaUtilities$s_method_2_0$RUBYINVOKER$get_proxy_or_package_under_package.call(org/jruby/javasupport/JavaUtilities$s_method_2_0$RUBYINVOKER$get_proxy_or_package_under_package.gen:65535) org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:329) org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:188) org.jruby.ast.CallTwoArgNode.interpret(CallTwoArgNode.java:59) org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104) org.jruby.ast.BlockNode.interpret(BlockNode.java:71) org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:113) org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:138) org.jruby.javasupport.util.RuntimeHelpers$MethodMissingMethod.call(RuntimeHelpers.java:389) org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:182) What should I do now? Any hint would be wellcome. Thanks!

    Read the article

  • Linq to LLBLGen query problem

    - by Jeroen Breuer
    Hello, I've got a Stored Procedure and i'm trying to convert it to a Linq to LLBLGen query. The query in Linq to LLBGen works, but when I trace the query which is send to sql server it is far from perfect. This is the Stored Procedure: ALTER PROCEDURE [dbo].[spDIGI_GetAllUmbracoProducts] -- Add the parameters for the stored procedure. @searchText nvarchar(255), @startRowIndex int, @maximumRows int, @sortExpression nvarchar(255) AS BEGIN SET @startRowIndex = @startRowIndex + 1 SET @searchText = '%' + @searchText + '%' -- SET NOCOUNT ON added to prevent extra result sets from -- interfering with SELECT statements. SET NOCOUNT ON; -- This is the query which will fetch all the UmbracoProducts. -- This query also supports paging and sorting. WITH UmbracoOverview As ( SELECT ROW_NUMBER() OVER( ORDER BY CASE WHEN @sortExpression = 'productName' THEN umbracoProduct.productName WHEN @sortExpression = 'productCode' THEN umbracoProduct.productCode END ASC, CASE WHEN @sortExpression = 'productName DESC' THEN umbracoProduct.productName WHEN @sortExpression = 'productCode DESC' THEN umbracoProduct.productCode END DESC ) AS row_num, umbracoProduct.umbracoProductId, umbracoProduct.productName, umbracoProduct.productCode FROM umbracoProduct INNER JOIN product ON umbracoProduct.umbracoProductId = product.umbracoProductId WHERE (umbracoProduct.productName LIKE @searchText OR umbracoProduct.productCode LIKE @searchText OR product.code LIKE @searchText OR product.description LIKE @searchText OR product.descriptionLong LIKE @searchText OR product.unitCode LIKE @searchText) ) SELECT UmbracoOverview.UmbracoProductId, UmbracoOverview.productName, UmbracoOverview.productCode FROM UmbracoOverview WHERE (row_num >= @startRowIndex AND row_num < (@startRowIndex + @maximumRows)) -- This query will count all the UmbracoProducts. -- This query is used for paging inside ASP.NET. SELECT COUNT (umbracoProduct.umbracoProductId) AS CountNumber FROM umbracoProduct INNER JOIN product ON umbracoProduct.umbracoProductId = product.umbracoProductId WHERE (umbracoProduct.productName LIKE @searchText OR umbracoProduct.productCode LIKE @searchText OR product.code LIKE @searchText OR product.description LIKE @searchText OR product.descriptionLong LIKE @searchText OR product.unitCode LIKE @searchText) END This is my Linq to LLBLGen query: using System.Linq.Dynamic; var q = ( from up in MetaData.UmbracoProduct join p in MetaData.Product on up.UmbracoProductId equals p.UmbracoProductId where up.ProductCode.Contains(searchText) || up.ProductName.Contains(searchText) || p.Code.Contains(searchText) || p.Description.Contains(searchText) || p.DescriptionLong.Contains(searchText) || p.UnitCode.Contains(searchText) select new UmbracoProductOverview { UmbracoProductId = up.UmbracoProductId, ProductName = up.ProductName, ProductCode = up.ProductCode } ).OrderBy(sortExpression); //Save the count in HttpContext.Current.Items. This value will only be saved during 1 single HTTP request. HttpContext.Current.Items["AllProductsCount"] = q.Count(); //Returns the results paged. 
return q.Skip(startRowIndex).Take(maximumRows).ToList<UmbracoProductOverview>(); This is my Initial expression to process: value(SD.LLBLGen.Pro.LinqSupportClasses.DataSource`1[Eurofysica.DB.EntityClasses.UmbracoProductEntity]).Join(value(SD.LLBLGen.Pro.LinqSupportClasses.DataSource`1[Eurofysica.DB.EntityClasses.ProductEntity]), up => up.UmbracoProductId, p => p.UmbracoProductId, (up, p) => new <>f__AnonymousType0`2(up = up, p = p)).Where(<>h__TransparentIdentifier0 => (((((<>h__TransparentIdentifier0.up.ProductCode.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText) || <>h__TransparentIdentifier0.up.ProductName.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText)) || <>h__TransparentIdentifier0.p.Code.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText)) || <>h__TransparentIdentifier0.p.Description.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText)) || <>h__TransparentIdentifier0.p.DescriptionLong.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText)) || <>h__TransparentIdentifier0.p.UnitCode.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText))).Select(<>h__TransparentIdentifier0 => new UmbracoProductOverview() {UmbracoProductId = <>h__TransparentIdentifier0.up.UmbracoProductId, ProductName = <>h__TransparentIdentifier0.up.ProductName, ProductCode = <>h__TransparentIdentifier0.up.ProductCode}).OrderBy( => .ProductName).Count() Now this is how the queries look like that are send to sql server: Select query: Query: SELECT [LPA_L2].[umbracoProductId] AS [UmbracoProductId], [LPA_L2].[productName] AS [ProductName], [LPA_L2].[productCode] AS [ProductCode] FROM ( [eurofysica].[dbo].[umbracoProduct] [LPA_L2] INNER JOIN [eurofysica].[dbo].[product] [LPA_L3] ON [LPA_L2].[umbracoProductId] = [LPA_L3].[umbracoProductId]) WHERE ( ( ( ( ( ( ( ( [LPA_L2].[productCode] LIKE @ProductCode1) OR ( [LPA_L2].[productName] LIKE @ProductName2)) OR ( [LPA_L3].[code] LIKE @Code3)) OR ( [LPA_L3].[description] LIKE @Description4)) OR ( [LPA_L3].[descriptionLong] LIKE @DescriptionLong5)) OR ( [LPA_L3].[unitCode] LIKE @UnitCode6)))) Parameter: @ProductCode1 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @ProductName2 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @Code3 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @Description4 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @DescriptionLong5 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @UnitCode6 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". 
Count query: Query: SELECT TOP 1 COUNT(*) AS [LPAV_] FROM (SELECT [LPA_L2].[umbracoProductId] AS [UmbracoProductId], [LPA_L2].[productName] AS [ProductName], [LPA_L2].[productCode] AS [ProductCode] FROM ( [eurofysica].[dbo].[umbracoProduct] [LPA_L2] INNER JOIN [eurofysica].[dbo].[product] [LPA_L3] ON [LPA_L2].[umbracoProductId] = [LPA_L3].[umbracoProductId]) WHERE ( ( ( ( ( ( ( ( [LPA_L2].[productCode] LIKE @ProductCode1) OR ( [LPA_L2].[productName] LIKE @ProductName2)) OR ( [LPA_L3].[code] LIKE @Code3)) OR ( [LPA_L3].[description] LIKE @Description4)) OR ( [LPA_L3].[descriptionLong] LIKE @DescriptionLong5)) OR ( [LPA_L3].[unitCode] LIKE @UnitCode6))))) [LPA_L1] Parameter: @ProductCode1 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @ProductName2 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @Code3 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @Description4 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @DescriptionLong5 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @UnitCode6 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". As you can see no sorting or paging is done (like in my Stored Procedure). This is probably done inside the code after all the results are fetched. This costs a lot of performance! Does anybody know how I can convert my Stored Procedure to Linq to LLBLGen the proper way?
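    For comparison, a minimal sketch in plain LINQ of the ordering-plus-paging pattern the stored procedure implements, using a strongly typed OrderBy instead of the Dynamic LINQ string overload. Here q stands for the projection before the .OrderBy(sortExpression) call, and sortExpression, startRowIndex and maximumRows are the names from the question; whether LLBLGen then folds the ORDER BY and Skip/Take into the generated SQL is provider-specific, so treat this purely as an illustration:
        // Map the incoming sort expression (same values the stored procedure
        // accepts) onto a typed ordering so the expression tree stays simple.
        IQueryable<UmbracoProductOverview> ordered;
        switch (sortExpression)
        {
            case "productCode":      ordered = q.OrderBy(x => x.ProductCode); break;
            case "productCode DESC": ordered = q.OrderByDescending(x => x.ProductCode); break;
            case "productName DESC": ordered = q.OrderByDescending(x => x.ProductName); break;
            default:                 ordered = q.OrderBy(x => x.ProductName); break;
        }

        // Compose paging onto the ordered query; ideally this becomes a
        // ROW_NUMBER()-style query on the server instead of in-memory paging.
        var page = ordered.Skip(startRowIndex).Take(maximumRows).ToList();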

    Read the article

  • Passing XML markers to Google Map

    - by djmadscribbler
    I've been creating a V3 Google map based on this example from Mike Williams http://www.geocodezip.com/v3_MW_example_map3.html I've run into a bit of a problem though. If I have no parameters in my URL then I get the error "id is undefined idmarkers [id.toLowerCase()] = marker;" in Firebug and only one marker will show up. If I have a parameter (?id=105 for example) then all the sidebar links say 105 (or whatever the parameter in the URL was) instead of their respective label as listed in the XML file and a random infowindow will be opened instead of the window for the id in the URL. Here is my javascript: var map = null; var lastmarker = null; // ========== Read paramaters that have been passed in ========== // Before we go looking for the passed parameters, set some defaults // in case there are no parameters var id; var index = -1; // these set the initial center, zoom and maptype for the map // if it is not specified in the query string var lat = 42.194741; var lng = -121.700301; var zoom = 18; var maptype = google.maps.MapTypeId.HYBRID; function MapTypeId2UrlValue(maptype) { var urlValue = 'm'; switch (maptype) { case google.maps.MapTypeId.HYBRID: urlValue = 'h'; break; case google.maps.MapTypeId.SATELLITE: urlValue = 'k'; break; case google.maps.MapTypeId.TERRAIN: urlValue = 't'; break; default: case google.maps.MapTypeId.ROADMAP: urlValue = 'm'; break; } return urlValue; } // If there are any parameters at eh end of the URL, they will be in location.search // looking something like "?marker=3" // skip the first character, we are not interested in the "?" var query = location.search.substring(1); // split the rest at each "&" character to give a list of "argname=value" pairs var pairs = query.split("&"); for (var i = 0; i < pairs.length; i++) { // break each pair at the first "=" to obtain the argname and value var pos = pairs[i].indexOf("="); var argname = pairs[i].substring(0, pos).toLowerCase(); var value = pairs[i].substring(pos + 1).toLowerCase(); // process each possible argname - use unescape() if theres any chance of spaces if (argname == "id") { id = unescape(value); } if (argname == "marker") { index = parseFloat(value); } if (argname == "lat") { lat = parseFloat(value); } if (argname == "lng") { lng = parseFloat(value); } if (argname == "zoom") { zoom = parseInt(value); } if (argname == "type") { // from the v3 documentation 8/24/2010 // HYBRID This map type displays a transparent layer of major streets on satellite images. // ROADMAP This map type displays a normal street map. // SATELLITE This map type displays satellite images. // TERRAIN This map type displays maps with physical features such as terrain and vegetation. 
if (value == "m") { maptype = google.maps.MapTypeId.ROADMAP; } if (value == "k") { maptype = google.maps.MapTypeId.SATELLITE; } if (value == "h") { maptype = google.maps.MapTypeId.HYBRID; } if (value == "t") { maptype = google.maps.MapTypeId.TERRAIN; } } } // this variable will collect the html which will eventually be placed in the side_bar var side_bar_html = ""; // arrays to hold copies of the markers and html used by the side_bar // because the function closure trick doesnt work there var gmarkers = []; var idmarkers = []; // global "map" variable var map = null; // A function to create the marker and set up the event window function function createMarker(point, icon, label, html) { var contentString = html; var marker = new google.maps.Marker({ position: point, map: map, title: label, icon: icon, zIndex: Math.round(point.lat() * -100000) << 5 }); marker.id = id; marker.index = gmarkers.length; google.maps.event.addListener(marker, 'click', function () { lastmarker = new Object; lastmarker.id = marker.id; lastmarker.index = marker.index; infowindow.setContent(contentString); infowindow.open(map, marker); }); // save the info we need to use later for the side_bar gmarkers.push(marker); idmarkers[id.toLowerCase()] = marker; // add a line to the side_bar html side_bar_html += '<a href="javascript:myclick(' + (gmarkers.length - 1) + ')">' + id + '<\/a><br>'; } // This function picks up the click and opens the corresponding info window function myclick(i) { google.maps.event.trigger(gmarkers[i], "click"); } function makeLink() { var mapinfo = "lat=" + map.getCenter().lat().toFixed(6) + "&lng=" + map.getCenter().lng().toFixed(6) + "&zoom=" + map.getZoom() + "&type=" + MapTypeId2UrlValue(map.getMapTypeId()); if (lastmarker) { var a = "/about/map/default.aspx?id=" + lastmarker.id + "&" + mapinfo; var b = "/about/map/default.aspx?marker=" + lastmarker.index + "&" + mapinfo; } else { var a = "/about/map/default.aspx?" 
+ mapinfo; var b = a; } document.getElementById("idlink").innerHTML = '<a href="' + a + '" id=url target=_new>- Link directly to this page by id</a> (id in xml file also entry &quot;name&quot; in sidebar menu)'; document.getElementById("indexlink").innerHTML = '<a href="' + b + '" id=url target=_new>- Link directly to this page by index</a> (position in gmarkers array)'; } function initialize() { // create the map var myOptions = { zoom: zoom, center: new google.maps.LatLng(lat, lng), mapTypeId: maptype, mapTypeControlOptions: { style: google.maps.MapTypeControlStyle.DROPDOWN_MENU }, navigationControl: true, mapTypeId: google.maps.MapTypeId.HYBRID }; map = new google.maps.Map(document.getElementById("map_canvas"), myOptions); var stylesarray = [ { featureType: "poi", elementType: "labels", stylers: [ { visibility: "off" } ] }, { featureType: "landscape.man_made", elementType: "labels", stylers: [ { visibility: "off" } ] } ]; var options = map.setOptions({ styles: stylesarray }); // Make the link the first time when the page opens makeLink(); // Make the link again whenever the map changes google.maps.event.addListener(map, 'maptypeid_changed', makeLink); google.maps.event.addListener(map, 'center_changed', makeLink); google.maps.event.addListener(map, 'bounds_changed', makeLink); google.maps.event.addListener(map, 'zoom_changed', makeLink); google.maps.event.addListener(map, 'click', function () { lastmarker = null; makeLink(); infowindow.close(); }); // Read the data from example.xml downloadUrl("example.xml", function (doc) { var xmlDoc = xmlParse(doc); var markers = xmlDoc.documentElement.getElementsByTagName("marker"); for (var i = 0; i < markers.length; i++) { // obtain the attribues of each marker var lat = parseFloat(markers[i].getAttribute("lat")); var lng = parseFloat(markers[i].getAttribute("lng")); var point = new google.maps.LatLng(lat, lng); var html = markers[i].getAttribute("html"); var label = markers[i].getAttribute("label"); var icon = markers[i].getAttribute("icon"); // create the marker var marker = createMarker(point, icon, label, html); } // put the assembled side_bar_html contents into the side_bar div document.getElementById("side_bar").innerHTML = side_bar_html; // ========= If a parameter was passed, open the info window ========== if (id) { if (idmarkers[id]) { google.maps.event.trigger(idmarkers[id], "click"); } else { alert("id " + id + " does not match any marker"); } } if (index > -1) { if (index < gmarkers.length) { google.maps.event.trigger(gmarkers[index], "click"); } else { alert("marker " + index + " does not exist"); } } }); } var infowindow = new google.maps.InfoWindow( { size: new google.maps.Size(150, 50) }); google.maps.event.addDomListener(window, "load", initialize); And here is an example of my XML formatting <marker lat="42.196175" lng="-121.699224" html="This is the information about 104" iconimage="/about/map/images/104.png" label="104" />

    Read the article

  • Problem while running the j2me application

    - by Paru
    I am not able to view any content in the emulator while running the application. The Build is not failed and i am able run the application successfully. While i am closing the emulator i am getting an error. i can provide both code and log here. import javax.microedition.lcdui.; import javax.microedition.midlet.; import java.io.; import java.lang.; import javax.microedition.io.; import javax.microedition.rms.; public class Login extends MIDlet implements CommandListener { TextField ItemName=null; TextField ItemNo=null; TextField UserName=null; TextField Password=null; Form authForm,mainscreen; TextBox t = null; StringBuffer b = new StringBuffer(); private Display myDisplay = null; private Command okCommand = new Command("Login", Command.OK, 1); private Command exitCommand = new Command("Exit", Command.EXIT, 2); private Command sendCommand = new Command("Send", Command.OK, 1); private Command backCommand = new Command("Back", Command.BACK, 2); private Alert alert = null; public Login() { ItemName=new TextField("Item Name","",10,TextField.ANY); ItemNo=new TextField("Item No","",10,TextField.ANY); myDisplay = Display.getDisplay(this); UserName=new TextField("Your Name","",10,TextField.ANY); Password=new TextField("Password","",10,TextField.PASSWORD); authForm=new Form("Identification"); mainscreen=new Form("Logging IN"); mainscreen.addCommand(sendCommand); mainscreen.addCommand(backCommand); authForm.append(UserName); authForm.append(Password); authForm.addCommand(okCommand); authForm.addCommand(exitCommand); authForm.setCommandListener(this); myDisplay.setCurrent(authForm); } public void startApp() throws MIDletStateChangeException { } public void pauseApp() { } protected void destroyApp(boolean unconditional) throws MIDletStateChangeException { } public void commandAction(Command c, Displayable d) { if ((c == okCommand) && (d == authForm)) { if (UserName.getString().equals("") || Password.getString().equals("")){ alert = new Alert("Error", "You should enter Username and Password", null, AlertType.ERROR); alert.setTimeout(Alert.FOREVER); myDisplay.setCurrent(alert); } else{ //myDisplay.setCurrent(mainscreen); login(UserName.getString(),Password.getString()); } } if ((c == backCommand) && (d == mainscreen)) { myDisplay.setCurrent(authForm); } if ((c == exitCommand) && (d == authForm)) { notifyDestroyed(); } if ((c == sendCommand) && (d == mainscreen)) { if(ItemName.getString().equals("") || ItemNo.getString().equals("")){ } else{ sendItem(ItemName.getString(),ItemNo.getString()); } } } public void login(String UserName,String PassWord) { HttpConnection connection=null; DataInputStream in=null; String url="http://olario.net/submitpost/submitpost/login.php"; OutputStream out=null; try { connection=(HttpConnection)Connector.open(url); connection.setRequestMethod(HttpConnection.POST); connection.setRequestProperty("IF-Modified-Since", "2 Oct 2002 15:10:15 GMT"); connection.setRequestProperty("User-Agent","Profile/MIDP-1.0 Configuration/CLDC-1.0"); connection.setRequestProperty("Content-Language", "en-CA"); connection.setRequestProperty("Content-Length",""+ (UserName.length()+PassWord.length())); connection.setRequestProperty("username",UserName); connection.setRequestProperty("password",PassWord); out = connection.openDataOutputStream(); out.flush(); in = connection.openDataInputStream(); int ch; while((ch = in.read()) != -1) { b.append((char) ch); //System.out.println((char)ch); } //t = new TextBox("Reply",b.toString(),1024,0); //mainscreen.append(b.toString()); String auth=b.toString(); 
if(in!=null) in.close(); if(out!=null) out.close(); if(connection!=null) connection.close(); if(auth.equals("ok")){ mainscreen.setCommandListener(this); myDisplay.setCurrent(mainscreen); } } catch(IOException x){ } } public void sendItem(String itemname,String itemno){ HttpConnection connection=null; DataInputStream in=null; String url="http://www.olario.net/submitpost/submitpost/submitPost.php"; OutputStream out=null; try { connection=(HttpConnection)Connector.open(url); connection.setRequestMethod(HttpConnection.POST); connection.setRequestProperty("IF-Modified-Since", "2 Oct 2002 15:10:15 GMT"); connection.setRequestProperty("User-Agent","Profile/MIDP-1.0 Configuration/CLDC-1.0"); connection.setRequestProperty("Content-Language", "en-CA"); connection.setRequestProperty("Content-Length",""+ (itemname.length()+itemno.length())); connection.setRequestProperty("itemCode",itemname); connection.setRequestProperty("qty",itemno); out = connection.openDataOutputStream(); out.flush(); in = connection.openDataInputStream(); int ch; while((ch = in.read()) != -1) { b.append((char) ch); //System.out.println((char)ch); } //t = new TextBox("Reply",b.toString(),1024,0); //mainscreen.append(b.toString()); String send=b.toString(); if(in!=null) in.close(); if(out!=null) out.close(); if(connection!=null) connection.close(); if(send.equals("added")){ alert = new Alert("Error", "Send Successfully", null, AlertType.INFO); alert.setTimeout(Alert.FOREVER); myDisplay.setCurrent(alert); } } catch(IOException x){ } } } and the log is pre-init: pre-load-properties: exists.config.active: exists.netbeans.user: exists.user.properties.file: load-properties: exists.platform.active: exists.platform.configuration: exists.platform.profile: basic-init: cldc-pre-init: cldc-init: cdc-init: ricoh-pre-init: ricoh-init: semc-pre-init: semc-init: savaje-pre-init: savaje-init: sjmc-pre-init: sjmc-init: ojec-pre-init: ojec-init: cdc-hi-pre-init: cdc-hi-init: nokiaS80-pre-init: nokiaS80-init: nsicom-pre-init: nsicom-init: post-init: init: conditional-clean-init: conditional-clean: deps-jar: pre-preprocess: do-preprocess: Pre-processing 0 file(s) into /home/sreekumar/NetBeansProjects/Login/build/preprocessed directory. post-preprocess: preprocess: pre-compile: extract-libs: do-compile: post-compile: compile: pre-obfuscate: proguard-init: skip-obfuscation: proguard: post-obfuscate: obfuscate: lwuit-build: pre-preverify: do-preverify: post-preverify: preverify: pre-jar: set-password-init: set-keystore-password: set-alias-password: set-password: create-jad: add-configuration: add-profile: do-extra-libs: nokiaS80-prepare-j9: nokiaS80-prepare-manifest: nokiaS80-prepare-manifest-no-icon: nokiaS80-create-manifest: jad-jsr211-properties.check: jad-jsr211-properties: semc-build-j9: do-jar: nsicom-create-manifest: do-jar-no-manifest: update-jad: Updating application descriptor: /home/sreekumar/NetBeansProjects/Login/dist/Login.jad Generated "/home/sreekumar/NetBeansProjects/Login/dist/Login.jar" is 3501 bytes. 
sign-jar: ricoh-init-dalp: ricoh-add-app-icon: ricoh-build-dalp-with-icon: ricoh-build-dalp-without-icon: ricoh-build-dalp: savaje-prepare-icon: savaje-build-jnlp: post-jar: jar: pre-run: netmon.check: open-netmon: cldc-run: Copying 1 file to /home/sreekumar/NetBeansProjects/Login/dist/nbrun4244989945642509378 Copying 1 file to /home/sreekumar/NetBeansProjects/Login/dist/nbrun4244989945642509378 Jad URL for OTA execution: http://localhost:8082/servlet/org.netbeans.modules.mobility.project.jam.JAMServlet//home/sreekumar/NetBeansProjects/Login/dist//Login.jad Starting emulator in execution mode Running with storage root /home/sreekumar/j2mewtk/2.5.2/appdb/temp.DefaultColorPhone1 /home/sreekumar/NetBeansProjects/Login/nbproject/build-impl.xml:915: Execution failed with error code 143. BUILD FAILED (total time: 35 seconds)

    Read the article

  • SQL error - Cannot convert nvarchar to decimal

    - by jakesankey
    I have a C# application that simply parses all of the txt documents within a given network directory and imports the data to a SQL server db. Everything was cruising along just fine until about the 1800th file when it happend to have a few blanks in columns that are called out as DBType.Decimal (and the value is usually zero in the files, not blank). So I got this error, "cannot convert nvarchar to decimal". I am wondering how I could tell the app to simply skip the lines that have this issue?? Perhaps I could even just change the column type to varchar even tho values are numbers (what problems could this create?) Thanks for any help! using System; using System.Data; using System.Data.SQLite; using System.IO; using System.Text.RegularExpressions; using System.Threading; using System.Collections.Generic; using System.Linq; using System.Data.SqlClient; namespace JohnDeereCMMDataParser { internal class Program { public static List<string> GetImportedFileList() { List<string> ImportedFiles = new List<string>(); using (SqlConnection connect = new SqlConnection(@"Server=FRXSQLDEV;Database=RX_CMMData;Integrated Security=YES")) { connect.Open(); using (SqlCommand fmd = connect.CreateCommand()) { fmd.CommandText = @"SELECT FileName FROM CMMData;"; fmd.CommandType = CommandType.Text; SqlDataReader r = fmd.ExecuteReader(); while (r.Read()) { ImportedFiles.Add(Convert.ToString(r["FileName"])); } } } return ImportedFiles; } private static void Main(string[] args) { Console.Title = "John Deere CMM Data Parser"; Console.WriteLine("Preparing CMM Data Parser... done"); Console.WriteLine("Scanning for new CMM data..."); Console.ForegroundColor = ConsoleColor.Gray; using (SqlConnection con = new SqlConnection(@"Server=FRXSQLDEV;Database=RX_CMMData;Integrated Security=YES")) { con.Open(); using (SqlCommand insertCommand = con.CreateCommand()) { Console.WriteLine("Connecting to SQL server..."); SqlCommand cmdd = con.CreateCommand(); string[] files = Directory.GetFiles(@"C:\Documents and Settings\js91162\Desktop\CMM WENZEL\", "*_*_*.txt", SearchOption.AllDirectories); List<string> ImportedFiles = GetImportedFileList(); insertCommand.Parameters.Add(new SqlParameter("@FeatType", DbType.String)); insertCommand.Parameters.Add(new SqlParameter("@FeatName", DbType.String)); insertCommand.Parameters.Add(new SqlParameter("@Axis", DbType.String)); insertCommand.Parameters.Add(new SqlParameter("@Actual", DbType.Decimal)); insertCommand.Parameters.Add(new SqlParameter("@Nominal", DbType.Decimal)); insertCommand.Parameters.Add(new SqlParameter("@Dev", DbType.Decimal)); insertCommand.Parameters.Add(new SqlParameter("@TolMin", DbType.Decimal)); insertCommand.Parameters.Add(new SqlParameter("@TolPlus", DbType.Decimal)); insertCommand.Parameters.Add(new SqlParameter("@OutOfTol", DbType.Decimal)); foreach (string file in files.Except(ImportedFiles)) { var FileNameExt1 = Path.GetFileName(file); cmdd.Parameters.Clear(); cmdd.Parameters.Add(new SqlParameter("@FileExt", FileNameExt1)); cmdd.CommandText = @" IF (EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'RX_CMMData' AND TABLE_NAME = 'CMMData')) BEGIN SELECT COUNT(*) FROM CMMData WHERE FileName = @FileExt; END"; int count = Convert.ToInt32(cmdd.ExecuteScalar()); con.Close(); con.Open(); if (count == 0) { Console.WriteLine("Preparing to parse CMM data for SQL import..."); if (file.Count(c => c == '_') > 5) continue; insertCommand.CommandText = @" INSERT INTO CMMData (FeatType, FeatName, Axis, Actual, Nominal, Dev, TolMin, TolPlus, OutOfTol, PartNumber, 
CMMNumber, Date, FileName) VALUES (@FeatType, @FeatName, @Axis, @Actual, @Nominal, @Dev, @TolMin, @TolPlus, @OutOfTol, @PartNumber, @CMMNumber, @Date, @FileName);"; string FileNameExt = Path.GetFullPath(file); string RNumber = Path.GetFileNameWithoutExtension(file); int index2 = RNumber.IndexOf("~"); Match RNumberE = Regex.Match(RNumber, @"^(R|L)\d{6}(COMP|CRIT|TEST|SU[1-9])(?=_)", RegexOptions.IgnoreCase); Match RNumberD = Regex.Match(RNumber, @"(?<=_)\d{3}[A-Z]\d{4}|\d{3}[A-Z]\d\w\w\d(?=_)", RegexOptions.IgnoreCase); Match RNumberDate = Regex.Match(RNumber, @"(?<=_)\d{8}(?=_)", RegexOptions.IgnoreCase); string RNumE = Convert.ToString(RNumberE); string RNumD = Convert.ToString(RNumberD); if (RNumberD.Value == @"") continue; if (RNumberE.Value == @"") continue; if (RNumberDate.Value == @"") continue; if (index2 != -1) continue; DateTime dateTime = DateTime.ParseExact(RNumberDate.Value, "yyyyMMdd", Thread.CurrentThread.CurrentCulture); string cmmDate = dateTime.ToString("dd-MMM-yyyy"); string[] lines = File.ReadAllLines(file); bool parse = false; foreach (string tmpLine in lines) { string line = tmpLine.Trim(); if (!parse && line.StartsWith("Feat. Type,")) { parse = true; continue; } if (!parse || string.IsNullOrEmpty(line)) { continue; } Console.WriteLine(tmpLine); foreach (SqlParameter parameter in insertCommand.Parameters) { parameter.Value = null; } string[] values = line.Split(new[] { ',' }); for (int i = 0; i < values.Length - 1; i++) { if (i = "" || i = null) continue; SqlParameter param = insertCommand.Parameters[i]; if (param.DbType == DbType.Decimal) { decimal value; param.Value = decimal.TryParse(values[i], out value) ? value : 0; } else { param.Value = values[i]; } } insertCommand.Parameters.Add(new SqlParameter("@PartNumber", RNumE)); insertCommand.Parameters.Add(new SqlParameter("@CMMNumber", RNumD)); insertCommand.Parameters.Add(new SqlParameter("@Date", cmmDate)); insertCommand.Parameters.Add(new SqlParameter("@FileName", FileNameExt)); insertCommand.ExecuteNonQuery(); insertCommand.Parameters.RemoveAt("@PartNumber"); insertCommand.Parameters.RemoveAt("@CMMNumber"); insertCommand.Parameters.RemoveAt("@Date"); insertCommand.Parameters.RemoveAt("@FileName"); } } } Console.WriteLine("CMM data successfully imported to SQL database..."); } con.Close(); } } } }
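    A minimal sketch of the "skip the line" idea, assuming the six decimal parameters are the ones declared above (@Actual through @OutOfTol, which correspond to columns 3..8 of the split line); the helper name is hypothetical:
        // Returns true when any of the six fields that feed decimal parameters
        // is blank, so the caller can skip that line before building the INSERT.
        private static bool HasBlankDecimalField(string[] values)
        {
            return values.Skip(3).Take(6)
                         .Any(v => v == null || v.Trim().Length == 0);
        }
    It would be called right after string[] values = line.Split(...), as if (HasBlankDecimalField(values)) continue;. Alternatively, if a blank should simply count as zero, the existing decimal.TryParse(values[i], out value) ? value : 0 branch already does that, and the non-compiling if (i = "" || i = null) check can be removed.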

    Read the article

  • How to use iterator in nested arraylist

    - by Muhammad Abrar
    I am trying to build an NFA with a special purpose of searching, which is totally different from regex. The State has following format class State implements List{ //GLOBAL DATA static int depth; //STATE VALUES String stateName; ArrayList<String> label = new ArrayList<>(); //Label for states //LINKS TO OTHER STATES boolean finalState; ArrayList<State> nextState ; // Link with multiple next states State preState; // previous state public State() { stateName = ""; finalState = true; nextState = new ArrayList<>(); } public void addlabel(String lbl) { if(!this.label.contains(lbl)) this.label.add(lbl); } public State(String state, String lbl) { this.stateName = state; if(!this.label.contains(lbl)) this.label.add(lbl); depth++; } public State(String state, String lbl, boolean fstate) { this.stateName = state; this.label.add(lbl); this.finalState = fstate; this.nextState = new ArrayList<>(); } void displayState() { System.out.print(this.stateName+" --> "); for(Iterator<String> it = label.iterator(); it.hasNext();) { System.out.print(it.next()+" , "); } System.out.println("\nNo of States : "+State.depth); } Next, the NFA class is public class NFA { static final String[] STATES= {"A","B","C","D","E","F","G","H","I","J","K","L","M" ,"N","O","P","Q","R","S","T","U","V","W","X","Y","Z"}; State startState; State currentState; static int level; public NFA() { startState = new State(); startState = null; currentState = new State(); currentState = null; startState = currentState; } /** * * @param st */ NFA(State startstate) { startState = new State(); startState = startstate; currentState = new State(); currentState = null; currentState = startState ; // To show that their is only one element in NFA } boolean insertState(State newState) { newState.nextState = new ArrayList<>(); if(currentState == null && startState == null ) //if empty NFA { newState.preState = null; startState = newState; currentState = newState; State.depth = 0; return true; } else { if(!Exist(newState.stateName))//Exist is used to check for duplicates { newState.preState = currentState ; currentState.nextState.add(newState); currentState = newState; State.depth++; return true; } } return false; } boolean insertState(State newState, String label) { newState.label.add(label); newState.nextState = null; newState.preState = null; if(currentState == null && startState == null) { startState = newState; currentState = newState; State.depth = 0; return true; } else { if(!Exist(newState.stateName)) { newState.preState = currentState; currentState.nextState.add(newState); currentState = newState; State.depth++; return true; } else { ///code goes here } } return false; } void markFinal(State s) { s.finalState = true; } void unmarkFinal(State s) { s.finalState = false; } boolean Exist(String s) { State temp = startState; if(startState.stateName.equals(s)) return true; Iterator<State> it = temp.nextState.iterator(); while(it.hasNext()) { Iterator<State> i = it ;//startState.nextState.iterator(); { while(i.hasNext()) { if(i.next().stateName.equals(s)) return true; } } //else // return false; } return false; } void displayNfa() { State st = startState; if(startState == null && currentState == null) { System.out.println("The NFA is empty"); } else { while(st != null) { if(!st.nextState.isEmpty()) { Iterator<State> it = st.nextState.iterator(); do { st.displayState(); st = it.next(); }while(it.hasNext()); } else { st = null; } } } System.out.println(); } /** * @param args the command line arguments */ /** * * @param args the command line arguments 
*/ public static void main(String[] args) { // TODO code application logic here NFA l = new NFA(); State s = new State("A11", "a",false); NFA ll = new NFA(s); s = new State("A111", "a",false); ll.insertState(s); ll.insertState(new State("A1","0")); ll.insertState(new State("A1111","0")); ll.displayNfa(); int j = 1; for(int i = 0 ; i < 2 ; i++) { int rand = (int) (Math.random()* 10); State st = new State(STATES[rand],String.valueOf(i), false); if(l.insertState(st)) { System.out.println(j+" : " + STATES[rand]+" and "+String.valueOf(i)+ " inserted"); j++; } } l.displayNfa(); System.out.println("No of states inserted : "+ j--); } I want to do the following This program always skip to display the last state i.e. if there are 10 states inserted, it will display only 9. In exist() method , i used two iterator but i do not know why it is working I have no idea how to perform searching for the existing class name, when dealing with iterators. How should i keep track of current State, properly iterate through the nextState List, Label List in a depth first order. How to insert unique States i.e. if State "A" is inserted once, it should not insert it again (The exist method is not working) Best Regards

    Read the article

  • C++ Deck and Card Class Error with bad alloc

    - by user3702164
    Just started learn to code in school. Our assignment requires us to create a card game with card,deck and hand class. I am having troubles with it now and i keep getting exception: std::bad_alloc at memory location. Here are my codes right now CardType h: #ifndef cardType_h #define cardType_h #include <string> using namespace std; class cardType{ public: void print(); int getValue() const; string getSymbol() const; string getSpecial() const; string getSuit() const; int checkSpecial(int gscore) const; cardType(); cardType(string suit,int value); private: int value; string special; string symbol; string suit; }; #endif CardType cpp: #include "cardType.h" #include <iostream> #include <string> using namespace std; void cardType::print() { cout << getSymbol() << " of " << getSuit() << ", having the value of " << getValue() << "."<< endl <<"This card's special is " << getSpecial() << endl; } int cardType::getValue() const { return value; } string cardType::getSymbol() const { return symbol; } string cardType::getSpecial() const { return special; } string cardType::getSuit() const { return suit; } cardType::cardType(){ value=0; symbol="?"; special='?'; suit='?'; } cardType::cardType(string s, int v){ suit = s; value = v; switch(v){ case 1: // Ace cards have a value of 1 and have no special type symbol="Ace"; special="None"; break; case 2: // 2 cards have a value of 2 and have no special type symbol="2"; special="None"; break; case 3: symbol="3"; // 3 cards have a value of 3 and have no special type special="None"; break; case 4: symbol="4"; // 4 cards have a value of 0 and have a special type "Reverse" which reverses the flow of the game special="Reverse"; value=0; break; case 5: symbol="5"; // 5 cards have a value of 5 and have no special type special="None"; break; case 6: symbol="6"; // 6 cards have a value of 6 and have no special type special="None"; break; case 7: symbol="7"; // 7 cards have a value of 7 and have no special type special="None"; break; case 8: symbol="8"; // 8 cards have a value of 8 and have no special type special="None"; break; case 9: symbol="9"; // 9 cards have a value of 0 and have a special type "Pass" which does not add any value to the game and lets the player skip his turn. special="Pass"; value=0; break; case 10: symbol="10"; // 10 cards have a value of 10 and have a special type "subtract" which instead of adding the 10 value to the total game it is subtracted instead. 
special="Subtract"; value=10; break; case 11: // Jack cards have a value of 10 and have no special type symbol="Jack"; special="None"; value=10; break; case 12: // Queens cards have a value of 10 and have no special type symbol="Queen"; special="None"; value=10; break; case 13: symbol="King"; // King cards have a value of 0 and have a special type "NinetyNine" which changes the total game score to 99 reguardless what number it was previously special="NinetyNine"; value=0; break; } } int cardType::checkSpecial(int gscore) const{ if(special=="Pass"){ return gscore; } if(special=="Reverse"){ return gscore; } if(special=="Subtract"){ return gscore - value; } if(special=="NinetyNine"){ return 99; } else{ return gscore + value; } } DeckType h: #ifndef deckType_h #define deckType_h #include "cardType.h" #include <string> using namespace std; class deckType { public: void shuffle(); cardType dealCard(); deckType(); private: cardType *deck; int current; }; #endif DeckType cpp: #include <iostream> #include "deckType.h" using namespace std; deckType::deckType() { int index = 0; int current=0; deck = new cardType[52]; string suit[] = {"Hearts","Diamonds","Clubs","Spades"}; int value[] = {1,2,3,4,5,6,7,8,9,10,11,12,13}; for ( int i = 0; i <= 3; i++ ) { for ( int j = 1; j <= 13; j++ ) { deck[index] = cardType(suit[i],value[j]); index++; } } } cardType deckType::dealCard() { return deck[current]; current++; } Main cpp : #include "deckType.h" #include <iostream> using namespace std; int main() { deckType gamedeck; cout << "1" <<endl; cardType currentCard; cout << "2" <<endl; currentCard = gamedeck.dealCard(); cout << "3" <<endl; return 0; } I keep getting bad_alloc at the currentCard = gamedeck.dealCard(); I really do not know what i have done wrong.

    Read the article

  • CSS Positioning

    - by Davey
    Trying to mess with this wordpress theme and can't figure out why the sidebar is stacking underneath the content block. Any help would be very appreciated. http://www.buffalostreetbooks.com/events CSS: body { font-family: Arial, Helvetica, Verdana, Sans-serif; font-size: 10pt; background-color: #692022; background-image:url("http://www.buffalostreetbooks.com/wp-content/themes/autumn-leaves/images/repeatflower.png"); } body,h1#blog-title { margin: 0; padding: 0; } a { color: blue; } a:hover { color: #FF8C00; } a img { border: 0 none; } #wrapper { width: 960px; margin: 0 auto; background-color: #F4FBF4; border-left: 1px solid #ccc; border-right: 1px solid #ccc; } #header { background-image:url("http://www.buffalostreetbooks.com/wp-content/themes/autumn-leaves/images/headertime.png"); width:768px; height: 200px; } #inner-header { padding: 125px 1em 0; } h1#blog-title { font-size: 2em; } h1#blog-title a { color: #800000; } .entry-title a { color: #CD853F; } h1#blog-title a, .entry-title a, #footer a { text-decoration: none; } h1#blog-title a:hover, .entry-title a:hover, #footer a:hover { text-decoration: underline; } div.skip-link { display: none; } #menu { border-bottom: 1px solid #ccc; } #menu a { color: #000; } #menu a:hover { text-decoration: underline; } #menu li.current_page_item a, #menu li.current_page_item a:hover { background-color: #DFC28B; text-decoration: none; } #content { padding: 1em; width:600px; } .entry-title { font-size: 1.5em; margin: 1em 0 0 0; } abbr.published { color: #666; border: 0 none; } .entry-meta, .entry-date { color: #666; } #comments-list .avatar { float: left; margin-right: 1em; } #comments-list .n { font-weight: bold; } .entry-meta, .comment-meta { font-style: italic; } #comments-list p { clear: left; } #primary { padding-left: 1em; font-size: 0.9em; border-left: 1px solid #ccc; border-bottom: 1px solid #ccc; background-color: #FFFACD; } #footer { text-align: center; font-size: 0.8em; border-top: 1px solid #ccc; border-bottom: 1px solid #ccc; margin-bottom: 1em; } #inner-footer { padding: 1em 0; } .entry-meta, .entry-meta a, .comment-meta, .comment-meta a, .sidebar, .sidebar a, #footer, #footer a { color: #666; } /* LAYOUT: Two-Column (Right) DESCRIPTION: Two-column fluid layout with one sidebars right of content */ div#container { margin:0 0 0 0; width:960px; height:100%; } div#content { margin:0 0 0 0; } div.sidebar { overflow:hidden; width:280px; min-height:500px; clear:both; } div#secondary { clear:right; } div#footer { clear:both; width:100%; } /* Just some example content */ div#menu { height:2em; width:100%; } div#menu ul,div#menu ul ul { line-height:2em; list-style:none; margin:0; padding:0; } div#menu ul a { display:block; margin-right:1em; padding:0 0.5em; text-decoration:none; } div#menu ul ul ul a { font-style:italic; } div#menu ul li ul { left:-999em; position:absolute; } div#menu ul li:hover ul { left:auto; } .entry-title,.entry-meta { clear:both; } div#primary { } form#commentform .form-label { margin:1em 0 0; } form#commentform span.required { background:#fff; color:#c30; } form#commentform,form#commentform p { padding:0; } input#author,input#email,input#url,textarea#comme nt { padding:0.2em; } div.comments ol li { margin:0 0 3.5em; } textarea#comment { height:13em; margin:0 0 0.5em; overflow:auto; width:66%; } .alignright,img.alignright{ float:right; margin:1em 0 0 1em; } .alignleft,img.alignleft{ float:left; margin:1em 1em 0 0; } .aligncenter,img.aligncenter{ display:block; margin:1em auto; text-align:center; } div.gallery { clear:both; 
height:180px; margin:1em 0; width:100%; } p.wp-caption-text{ font-style:italic; } div.gallery dl{ margin:1em auto; overflow:hidden; text-align:center; } div.gallery dl.gallery-columns-1 { width:100%; } div.gallery dl.gallery-columns-2 { width:49%; } div.gallery dl.gallery-columns-3 { width:33%; } div.gallery dl.gallery-columns-4 { width:24%; } div.gallery dl.gallery-columns-5 { width:19%; } div#nav-above { margin-bottom:1em; } div#nav-below { margin-top:1em; } div#nav-images { height:150px; margin:1em 0; } div.navigation { height:1.25em; } div.navigation div.nav-next { float:right; text-align:right; } div.sidebar h3 { font-size:1.2em; } div.sidebar input#s { width:7em; } div.sidebar li { list-style:none; margin:0 0 2em; } div.sidebar li form { margin:0.2em 0 0; padding:0; } div.sidebar ul ul { margin:0 0 0 2em; } div.sidebar ul ul li { list-style:disc; margin:0; } div.sidebar ul ul ul { margin:0 0 0 0.5em; } div.sidebar ul ul ul li { list-style:circle; } div#menu ul li,div.gallery dl,div.navigation div.nav-previous { float:left; } input#author,input#email,input#url,div.navigation div { width:50%; } div.gallery *,div.sidebar div,div.sidebar h3,div.sidebar ul { margin:0; padding:0; }
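    A hedged reading of the stacking problem from the stylesheet alone: #content is given a 600px width but is never floated, and div.sidebar carries clear:both, so the sidebar is pushed below the content instead of sitting beside it inside the 960px container. A sketch of the usual two-column fix, assuming the markup places #content before the sidebar (selectors are the theme's own, values illustrative):

        /* Sketch only: float the two columns; stop the sidebar from clearing them. */
        #content {
            float: left;
            width: 600px;
            padding: 1em;
        }
        div.sidebar {
            float: right;
            width: 280px;
            clear: none;      /* was clear: both, which forces it under #content */
        }
        div#footer {
            clear: both;      /* the footer still clears both columns */
        }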

    Read the article

  • A little more help with writing to an output buffer with libjpeg

    - by Richard Knop
    So I have managed to find another question discussing how to use the libjpeg to compress an image to jpeg. I have found this code which is supposed to work: Compressing IplImage to JPEG using libjpeg in OpenCV Here's the code (it compiles ok): /* This a custom destination manager for jpeglib that enables the use of memory to memory compression. See IJG documentation for details. */ typedef struct { struct jpeg_destination_mgr pub; /* base class */ JOCTET* buffer; /* buffer start address */ int bufsize; /* size of buffer */ size_t datasize; /* final size of compressed data */ int* outsize; /* user pointer to datasize */ int errcount; /* counts up write errors due to buffer overruns */ } memory_destination_mgr; typedef memory_destination_mgr* mem_dest_ptr; /* ------------------------------------------------------------- */ /* MEMORY DESTINATION INTERFACE METHODS */ /* ------------------------------------------------------------- */ /* This function is called by the library before any data gets written */ METHODDEF(void) init_destination (j_compress_ptr cinfo) { mem_dest_ptr dest = (mem_dest_ptr)cinfo->dest; dest->pub.next_output_byte = dest->buffer; /* set destination buffer */ dest->pub.free_in_buffer = dest->bufsize; /* input buffer size */ dest->datasize = 0; /* reset output size */ dest->errcount = 0; /* reset error count */ } /* This function is called by the library if the buffer fills up I just reset destination pointer and buffer size here. Note that this behavior, while preventing seg faults will lead to invalid output streams as data is over- written. */ METHODDEF(boolean) empty_output_buffer (j_compress_ptr cinfo) { mem_dest_ptr dest = (mem_dest_ptr)cinfo->dest; dest->pub.next_output_byte = dest->buffer; dest->pub.free_in_buffer = dest->bufsize; ++dest->errcount; /* need to increase error count */ return TRUE; } /* Usually the library wants to flush output here. I will calculate output buffer size here. Note that results become incorrect, once empty_output_buffer was called. This situation is notified by errcount. */ METHODDEF(void) term_destination (j_compress_ptr cinfo) { mem_dest_ptr dest = (mem_dest_ptr)cinfo->dest; dest->datasize = dest->bufsize - dest->pub.free_in_buffer; if (dest->outsize) *dest->outsize += (int)dest->datasize; } /* Override the default destination manager initialization provided by jpeglib. Since we want to use memory-to-memory compression, we need to use our own destination manager. */ GLOBAL(void) jpeg_memory_dest (j_compress_ptr cinfo, JOCTET* buffer, int bufsize, int* outsize) { mem_dest_ptr dest; /* first call for this instance - need to setup */ if (cinfo->dest == 0) { cinfo->dest = (struct jpeg_destination_mgr *) (*cinfo->mem->alloc_small) ((j_common_ptr) cinfo, JPOOL_PERMANENT, sizeof (memory_destination_mgr)); } dest = (mem_dest_ptr) cinfo->dest; dest->bufsize = bufsize; dest->buffer = buffer; dest->outsize = outsize; /* set method callbacks */ dest->pub.init_destination = init_destination; dest->pub.empty_output_buffer = empty_output_buffer; dest->pub.term_destination = term_destination; } /* ------------------------------------------------------------- */ /* MEMORY SOURCE INTERFACE METHODS */ /* ------------------------------------------------------------- */ /* Called before data is read */ METHODDEF(void) init_source (j_decompress_ptr dinfo) { /* nothing to do here, really. I mean. I'm not lazy or something, but... we're actually through here. */ } /* Called if the decoder wants some bytes that we cannot provide... 
*/ METHODDEF(boolean) fill_input_buffer (j_decompress_ptr dinfo) { /* we can't do anything about this. This might happen if the provided buffer is either invalid with regards to its content or just a to small bufsize has been given. */ /* fail. */ return FALSE; } /* From IJG docs: "it's not clear that being smart is worth much trouble" So I save myself some trouble by ignoring this bit. */ METHODDEF(void) skip_input_data (j_decompress_ptr dinfo, INT32 num_bytes) { /* There might be more data to skip than available in buffer. This clearly is an error, so screw this mess. */ if ((size_t)num_bytes > dinfo->src->bytes_in_buffer) { dinfo->src->next_input_byte = 0; /* no buffer byte */ dinfo->src->bytes_in_buffer = 0; /* no input left */ } else { dinfo->src->next_input_byte += num_bytes; dinfo->src->bytes_in_buffer -= num_bytes; } } /* Finished with decompression */ METHODDEF(void) term_source (j_decompress_ptr dinfo) { /* Again. Absolute laziness. Nothing to do here. Boring. */ } GLOBAL(void) jpeg_memory_src (j_decompress_ptr dinfo, unsigned char* buffer, size_t size) { struct jpeg_source_mgr* src; /* first call for this instance - need to setup */ if (dinfo->src == 0) { dinfo->src = (struct jpeg_source_mgr *) (*dinfo->mem->alloc_small) ((j_common_ptr) dinfo, JPOOL_PERMANENT, sizeof (struct jpeg_source_mgr)); } src = dinfo->src; src->next_input_byte = buffer; src->bytes_in_buffer = size; src->init_source = init_source; src->fill_input_buffer = fill_input_buffer; src->skip_input_data = skip_input_data; src->term_source = term_source; /* IJG recommend to use their function - as I don't know **** about how to do better, I follow this recommendation */ src->resync_to_restart = jpeg_resync_to_restart; } All I need to do is replace the jpeg_stdio_dest in my program with this code: int numBytes = 0; //size of jpeg after compression char * storage = new char[150000]; //storage buffer JOCTET *jpgbuff = (JOCTET*)storage; //JOCTET pointer to buffer jpeg_memory_dest(&cinfo,jpgbuff,150000,&numBytes); So I need some help to incorporate the above four lines into this function which now works but writes to a file instead of a memory: int write_jpeg_file( char *filename ) { struct jpeg_compress_struct cinfo; struct jpeg_error_mgr jerr; /* this is a pointer to one row of image data */ JSAMPROW row_pointer[1]; FILE *outfile = fopen( filename, "wb" ); if ( !outfile ) { printf("Error opening output jpeg file %s\n!", filename ); return -1; } cinfo.err = jpeg_std_error( &jerr ); jpeg_create_compress(&cinfo); jpeg_stdio_dest(&cinfo, outfile); /* Setting the parameters of the output file here */ cinfo.image_width = width; cinfo.image_height = height; cinfo.input_components = bytes_per_pixel; cinfo.in_color_space = color_space; /* default compression parameters, we shouldn't be worried about these */ jpeg_set_defaults( &cinfo ); /* Now do the compression .. */ jpeg_start_compress( &cinfo, TRUE ); /* like reading a file, this time write one row at a time */ while( cinfo.next_scanline < cinfo.image_height ) { row_pointer[0] = &raw_image[ cinfo.next_scanline * cinfo.image_width * cinfo.input_components]; jpeg_write_scanlines( &cinfo, row_pointer, 1 ); } /* similar to read file, clean up after we're done compressing */ jpeg_finish_compress( &cinfo ); jpeg_destroy_compress( &cinfo ); fclose( outfile ); /* success code is 1! */ return 1; } Anybody could help me out a bit with it? I've tried meddling with it but I am not sure how to do it. I I just replace this line: jpeg_stdio_dest(&cinfo, outfile); It's not going to work. 
There is more that needs to change in that function, and I am a little lost among all those pointers and the memory management.
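    A hedged sketch of how the four lines could slot into write_jpeg_file, using only pieces already quoted above (jpeg_memory_dest is the custom destination manager from the question; width, height, bytes_per_pixel, color_space and raw_image are the same globals the original function uses, and the usual <stdio.h>, <stdlib.h> and jpeglib.h includes are assumed). The buffer size is the question's 150000 placeholder; after jpeg_finish_compress the first numBytes bytes of the buffer hold the JPEG, written to a file here only to show the round trip:

        /* Sketch only: write_jpeg_file reworked to compress into memory first. */
        int write_jpeg_file( char *filename )
        {
            struct jpeg_compress_struct cinfo;
            struct jpeg_error_mgr jerr;
            JSAMPROW row_pointer[1];

            int numBytes = 0;                              /* size after compression */
            JOCTET *jpgbuff = (JOCTET *) malloc( 150000 ); /* in-memory destination  */
            if ( !jpgbuff ) return -1;

            cinfo.err = jpeg_std_error( &jerr );
            jpeg_create_compress( &cinfo );
            jpeg_memory_dest( &cinfo, jpgbuff, 150000, &numBytes ); /* replaces jpeg_stdio_dest */

            cinfo.image_width = width;
            cinfo.image_height = height;
            cinfo.input_components = bytes_per_pixel;
            cinfo.in_color_space = color_space;
            jpeg_set_defaults( &cinfo );

            jpeg_start_compress( &cinfo, TRUE );
            while ( cinfo.next_scanline < cinfo.image_height ) {
                row_pointer[0] = &raw_image[ cinfo.next_scanline * cinfo.image_width * cinfo.input_components ];
                jpeg_write_scanlines( &cinfo, row_pointer, 1 );
            }
            jpeg_finish_compress( &cinfo );   /* term_destination fills in numBytes */
            jpeg_destroy_compress( &cinfo );

            /* jpgbuff[0 .. numBytes-1] now holds the compressed image. */
            FILE *outfile = fopen( filename, "wb" );
            if ( outfile ) { fwrite( jpgbuff, 1, numBytes, outfile ); fclose( outfile ); }

            free( jpgbuff );
            return 1;
        }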

    Read the article

  • Questions regarding the use of A* with the 15-square puzzle

    - by Cheeso
I'm trying to build an A* solver for a 15-square puzzle. The goal is to re-arrange the tiles so that they appear in their natural positions. You can only slide one tile at a time. Each possible state of the puzzle is a node in the search graph. For the h(x) function, I am using an aggregate sum, across all tiles, of the tile's dislocation from the goal state. In the above image, the 5 is at location 0,0, and it belongs at location 1,0, therefore it contributes 1 to the h(x) function. The next tile is the 11, located at 0,1, and belongs at 2,2, therefore it contributes 3 to h(x). And so on. EDIT: I now understand this is what they call "Manhattan distance", or "taxicab distance". I have been using a step count for g(x). In my implementation, for any node in the state graph, g is just +1 from the prior node's g. To find successive nodes, I just examine where I can possibly move the "hole" in the puzzle. There are 3 neighbors for the puzzle state (aka node) that is displayed: the hole can move north, west, or east. My A* search sometimes converges to a solution in 20s, sometimes 180s, and sometimes doesn't converge at all (waited 10 mins or more). I think h is reasonable. I'm wondering if I've modeled g properly. In other words, is it possible that my A* function is reaching a node in the graph via a path that is not the shortest path? Or have I just not waited long enough? Maybe 10 minutes is not long enough? For a fully random arrangement (assuming no parity problems), what is the average number of permutations an A* solution will examine? (please show the math) I'm going to look for logic errors in my code, but in the meantime, any tips? (ps: it's done in Javascript). Also, no, this isn't CompSci homework. It's just a personal exploration thing. I'm just trying to learn Javascript. EDIT: I've found that the run-time is highly dependent upon the heuristic. I saw the 10x factor applied to the heuristic in the article someone mentioned, and it made me wonder - why 10x? Why linear? Because this is done in javascript, I could modify the code to dynamically update an html table with the node currently being considered. This allowed me to peek at the algorithm as it was progressing. With a regular taxicab distance heuristic, I watched as it failed to converge. There were 5's and 12's in the top row, and they kept hanging around. I'd see 1,2,3,4 creep into the top row, but then they'd drop out, and other numbers would move up there. What I was hoping to see was 1,2,3,4 sort of creeping up to the top, and then staying there. I thought to myself - this is not the way I solve this personally. Doing it manually, I solve the top row, then the 2nd row, then the 3rd and 4th rows sort of concurrently. So I tweaked the h(x) function to more heavily weight the higher rows and the "lefter" columns. The result was that the A* converged much more quickly. It now runs in 3 minutes instead of "indefinitely". With the "peek" I talked about, I can see the smaller numbers creep up to the higher rows and stay there. Not only does this seem like the right thing, it runs much faster. I'm in the process of trying a bunch of variations. It seems pretty clear that A* runtime is very sensitive to the heuristic. Currently the best heuristic I've found uses the summation of dislocation * ((4-i) + (4-j)) where i and j are the row and column, and dislocation is the taxicab distance. One interesting part of the result I got: with a particular heuristic I find a path very quickly, but it is obviously not the shortest path. 
I think this is because I am weighting the heuristic. In one case I got a path of 178 steps in 10s. My own manual effort produce a solution in 87 moves. (much more than 10s). More investigation warranted. So the result is I am seeing it converge must faster, and the path is definitely not the shortest. I have to think about this more. Code: var stop = false; function Astar(start, goal, callback) { // start and goal are nodes in the graph, represented by // an array of 16 ints. The goal is: [1,2,3,...14,15,0] // Zero represents the hole. // callback is a method to call when finished. This runs a long time, // therefore we need to use setTimeout() to break it up, to avoid // the browser warning like "Stop running this script?" // g is the actual distance traveled from initial node to current node. // h is the heuristic estimate of distance from current to goal. stop = false; start.g = start.dontgo = 0; // calcHeuristic inserts an .h member into the array calcHeuristicDistance(start); // start the stack with one element var closed = []; // set of nodes already evaluated. var open = [ start ]; // set of nodes to evaluate (start with initial node) var iteration = function() { if (open.length==0) { // no more nodes. Fail. callback(null); return; } var current = open.shift(); // get highest priority node // update the browser with a table representation of the // node being evaluated $("#solution").html(stateToString(current)); // check solution returns true if current == goal if (checkSolution(current,goal)) { // reconstructPath just records the position of the hole // through each node var path= reconstructPath(start,current); callback(path); return; } closed.push(current); // get the set of neighbors. This is 3 or fewer nodes. // (nextStates is optimized to NOT turn directly back on itself) var neighbors = nextStates(current, goal); for (var i=0; i<neighbors.length; i++) { var n = neighbors[i]; // skip this one if we've already visited it if (closed.containsNode(n)) continue; // .g, .h, and .previous get assigned implicitly when // calculating neighbors. n.g is nothing more than // current.g+1 ; // add to the open list if (!open.containsNode(n)) { // slot into the list, in priority order (minimum f first) open.priorityPush(n); n.previous = current; } } if (stop) { callback(null); return; } setTimeout(iteration, 1); }; // kick off the first iteration iteration(); return null; }
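    One remark on the last observation, plus a sketch: multiplying or re-weighting the heuristic (the 10x factor, or dislocation * ((4-i)+(4-j))) lets h(x) overestimate the true remaining cost, so A* is no longer admissible and can return a non-shortest path - which matches the 178-step result. A plain Manhattan-distance h stays admissible and guarantees shortest paths, at the price of speed. A sketch of that heuristic for the 16-entry state array assumed by the code above (the .h-on-the-array convention mirrors the question; the function name is illustrative):

        // Sketch: admissible Manhattan-distance heuristic for the 4x4 puzzle.
        // `state` is an array of 16 ints; 0 is the hole and contributes nothing.
        function calcHeuristicDistance(state) {
          var h = 0;
          for (var pos = 0; pos < 16; pos++) {
            var tile = state[pos];
            if (tile === 0) continue;                 // skip the hole
            var goalPos = tile - 1;                   // goal layout is [1,2,...,15,0]
            h += Math.abs(Math.floor(pos / 4) - Math.floor(goalPos / 4))  // rows
               + Math.abs((pos % 4) - (goalPos % 4));                     // columns
          }
          state.h = h;                                // same convention as the question
          return h;
        }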

    Read the article

  • Erroneous/Incorrect C2248 error using Visual Studio 2010

    - by Dylan Bourque
    I'm seeing what I believe to be an erroneous/incorrect compiler error using the Visual Studio 2010 compiler. I'm in the process of up-porting our codebase from Visual Studio 2005 and I ran across a construct that was building correctly before but now generates a C2248 compiler error. Obviously, the code snippet below has been generic-ized, but it is a compilable example of the scenario. The ObjectPtr<T> C++ template comes from our codebase and is the source of the error in question. What appears to be happening is that the compiler is generating a call to the copy constructor for ObjectPtr<T> when it shouldn't (see my comment block in the SomeContainer::Foo() method below). For this code construct, there is a public cast operator for SomeUsefulData * on ObjectPtr<SomeUsefulData> but it is not being chosen inside the true expression if the ?: operator. Instead, I get the two errors in the block quote below. Based on my knowledge of C++, this code should compile. Has anyone else seen this behavior? If not, can someone point me to a clarification of the compiler resolution rules that would explain why it's attempting to generate a copy of the object in this case? Thanks in advance, Dylan Bourque Visual Studio build output: c:\projects\objectptrtest\objectptrtest.cpp(177): error C2248: 'ObjectPtr::ObjectPtr' : cannot access private member declared in class 'ObjectPtr' with [ T=SomeUsefulData ] c:\projects\objectptrtest\objectptrtest.cpp(25) : see declaration of 'ObjectPtr::ObjectPtr' with [ T=SomeUsefulData ] c:\projects\objectptrtest\objectptrtest.cpp(177): error C2248: 'ObjectPtr::ObjectPtr' : cannot access private member declared in class 'ObjectPtr' with [ T=SomeUsefulData ] c:\projects\objectptrtest\objectptrtest.cpp(25) : see declaration of 'ObjectPtr::ObjectPtr' with [ T=SomeUsefulData ] Below is a minimal, compilable example of the scenario: #include <stdio.h> #include <tchar.h> template<class T> class ObjectPtr { public: ObjectPtr<T> (T* pObj = NULL, bool bShared = false) : m_pObject(pObj), m_bObjectShared(bShared) {} ~ObjectPtr<T> () { Detach(); } private: // private, unimplemented copy constructor and assignment operator // to guarantee that ObjectPtr<T> objects are not copied ObjectPtr<T> (const ObjectPtr<T>&); ObjectPtr<T>& operator = (const ObjectPtr<T>&); public: T * GetObject () { return m_pObject; } const T * GetObject () const { return m_pObject; } bool HasObject () const { return (GetObject()!=NULL); } bool IsObjectShared () const { return m_bObjectShared; } void ObjectShared (bool bShared) { m_bObjectShared = bShared; } bool IsNull () const { return !HasObject(); } void Attach (T* pObj, bool bShared = false) { Detach(); if (pObj != NULL) { m_pObject = pObj; m_bObjectShared = bShared; } } void Detach (T** ppObject = NULL) { if (ppObject != NULL) { *ppObject = m_pObject; m_pObject = NULL; m_bObjectShared = false; } else { if (HasObject()) { if (!IsObjectShared()) delete m_pObject; m_pObject = NULL; m_bObjectShared = false; } } } void Detach (bool bDeleteIfNotShared) { if (HasObject()) { if (bDeleteIfNotShared && !IsObjectShared()) delete m_pObject; m_pObject = NULL; m_bObjectShared = false; } } bool IsEqualTo (const T * pOther) const { return (GetObject() == pOther); } public: T * operator -> () { ASSERT(HasObject()); return m_pObject; } const T * operator -> () const { ASSERT(HasObject()); return m_pObject; } T & operator * () { ASSERT(HasObject()); return *m_pObject; } const T & operator * () const { ASSERT(HasObject()); return (const C &)(*m_pObject); } operator T * () 
{ return m_pObject; } operator const T * () const { return m_pObject; } operator bool() const { return (m_pObject!=NULL); } ObjectPtr<T>& operator = (T * pObj) { Attach(pObj, false); return *this; } bool operator == (const T * pOther) const { return IsEqualTo(pOther); } bool operator == (T * pOther) const { return IsEqualTo(pOther); } bool operator != (const T * pOther) const { return !IsEqualTo(pOther); } bool operator != (T * pOther) const { return !IsEqualTo(pOther); } bool operator == (const ObjectPtr<T>& other) const { return IsEqualTo(other.GetObject()); } bool operator != (const ObjectPtr<T>& other) const { return !IsEqualTo(other.GetObject()); } bool operator == (int pv) const { return (pv==NULL)? IsNull() : (LPVOID(m_pObject)==LPVOID(pv)); } bool operator != (int pv) const { return !(*this == pv); } private: T * m_pObject; bool m_bObjectShared; }; // Some concrete type that holds useful data class SomeUsefulData { public: SomeUsefulData () {} ~SomeUsefulData () {} }; // Some concrete type that holds a heap-allocated instance of // SomeUsefulData class SomeContainer { public: SomeContainer (SomeUsefulData* pUsefulData) { m_pData = pUsefulData; } ~SomeContainer () { // nothing to do here } public: bool EvaluateSomeCondition () { // fake condition check to give us an expression // to use in ?: operator below return true; } SomeUsefulData* Foo () { // this usage of the ?: operator generates a C2248 // error b/c it's attempting to call the copy // constructor on ObjectPtr<T> return EvaluateSomeCondition() ? m_pData : NULL; /**********[ DISCUSSION ]********** The following equivalent constructs compile w/out error and behave correctly: (1) explicit cast to SomeUsefulData* as a comiler hint return EvaluateSomeCondition() ? (SomeUsefulData *)m_pData : NULL; (2) if/else instead of ?: if (EvaluateSomeCondition()) return m_pData; else return NULL; (3) skip the condition check and return m_pData as a SomeUsefulData* directly return m_pData; **********[ END DISCUSSION ]**********/ } private: ObjectPtr<SomeUsefulData> m_pData; }; int _tmain(int argc, _TCHAR* argv[]) { return 0; }
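    A hedged note on why the diagnostic may be legitimate rather than erroneous: in EvaluateSomeCondition() ? m_pData : NULL the second operand is an ObjectPtr<SomeUsefulData> and the third is the integer constant 0, and one conversion the compiler is allowed to try is turning 0 into an ObjectPtr<SomeUsefulData> through the non-explicit ObjectPtr(T*, bool) constructor; the conditional expression then yields an rvalue of that class type, which has to be copy-constructed from m_pData - and the copy constructor is private, which is exactly what C2248 reports. Any form that makes both branches a raw SomeUsefulData* avoids the copy; a minimal sketch of that change to Foo(), equivalent in spirit to discussion point (1):

        // Sketch only: force the common type of ?: to SomeUsefulData* so no
        // ObjectPtr<T> temporary (and hence no copy constructor) is needed.
        SomeUsefulData* SomeContainer::Foo()
        {
            return EvaluateSomeCondition()
                ? m_pData.GetObject()                     // SomeUsefulData*
                : static_cast<SomeUsefulData*>(NULL);     // SomeUsefulData*
        }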

    Read the article

  • Java: Cannot find a method's symbol even though that method is declared later in the class. The remaining code is looking for a class.

    - by Midimistro
    This is an assignment that we use strings in Java to analyze a phone number. The error I am having is anything below tester=invalidCharacters(c); does not compile because every line past tester=invalidCharacters(c); is looking for a symbol or the class. In get invalidResults, all I am trying to do is evaluate a given string for non-alphabetical characters such as *,(,^,&,%,@,#,), and so on. What to answer: Why is it producing an error, what will work, and is there an easier method WITHOUT using regex. Here is the link to the assignment: http://cis.csuohio.edu/~hwang/teaching/cis260/assignments/assignment9.html public class PhoneNumber { private int areacode; private int number; private int ext; /////Constructors///// //Third Constructor (given one string arg) "xxx-xxxxxxx" where first three are numbers and the remaining (7) are numbers or letters public PhoneNumber(String newNumber){ //Note: Set default ext to 0 ext=0; ////Declare Temporary Storage and other variables//// //for the first three numbers String areaCodeString; //for the remaining seven characters String newNumberString; //For use in testing the second half of the string boolean containsLetters; boolean containsInvalid; /////Separate the two parts of string///// //Get the area code part of the string areaCodeString=newNumber.substring(0,2); //Convert the string and set it to the area code areacode=Integer.parseInt(areaCodeString); //Skip the "-" and Get the remaining part of the string newNumberString=newNumber.substring(4); //Create an array of characters from newNumberString to reuse in later methods for int length=newNumberString.length(); char [] myCharacters= new char [length]; int i; for (i=0;i<length;i++){ myCharacters [i]=newNumberString.charAt(i); } //Test if newNumberString contains letters & converting them into numbers String reNewNumber=""; //Test for invalid characters containsInvalid=getInvalidResults(newNumberString,length); if (containsInvalid==false){ containsLetters=getCharResults(newNumberString,length); if (containsLetters==true){ for (i=0;i<length;i++){ myCharacters [i]=(char)convertLetNum((myCharacters [i])); reNewNumber=reNewNumber+myCharacters[i]; } } } if (containsInvalid==false){ number=Integer.parseInt(reNewNumber); } else{ System.out.println("Error!"+"\t"+newNumber+" contains illegal characters. 
This number will be ignored and skipped."); } } //////Primary Methods/Behaviors/////// //Compare this phone number with the one passed by the caller public boolean equals(PhoneNumber pn){ boolean equal; String concat=(areacode+"-"+number); String pN=pn.toString(); if (concat==pN){ equal=true; } else{ equal=false; } return equal; } //Convert the stored number to a certain string depending on extension public String toString(){ String returned; if(ext==0){ returned=(areacode+"-"+number); } else{ returned=(areacode+"-"+number+" ext "+ext); } return returned; } //////Secondary Methods/////// //Method for testing if the second part of the string contains any letters public static boolean getCharResults(String newNumString,int getLength){ //Recreate a character array int i; char [] myCharacters= new char [getLength]; for (i=0;i<getLength;i++){ myCharacters [i]=newNumString.charAt(i); } boolean doesContainLetter=false; int j; for (j=0;j<getLength;j++){ if ((Character.isDigit(myCharacters[j])==true)){ doesContainLetter=false; } else{ doesContainLetter=true; return doesContainLetter; } } return doesContainLetter; } //Method for testing if the second part of the string contains any letters public static boolean getInvalidResults(String newNumString,int getLength){ boolean doesContainInvalid=false; int i; char c; boolean tester; char [] invalidCharacters= new char [getLength]; for (i=0;i<getLength;i++){ invalidCharacters [i]=newNumString.charAt(i); c=invalidCharacters [i]; tester=invalidCharacters(c); if(tester==true)){ doesContainInvalid=false; } else{ doesContainInvalid=true; return doesContainInvalid; } } return doesContainInvalid; } //Method for evaluating string for invalid characters public boolean invalidCharacters(char letter){ boolean returnNum=false; switch (letter){ case 'A': return returnNum; case 'B': return returnNum; case 'C': return returnNum; case 'D': return returnNum; case 'E': return returnNum; case 'F': return returnNum; case 'G': return returnNum; case 'H': return returnNum; case 'I': return returnNum; case 'J': return returnNum; case 'K': return returnNum; case 'L': return returnNum; case 'M': return returnNum; case 'N': return returnNum; case 'O': return returnNum; case 'P': return returnNum; case 'Q': return returnNum; case 'R': return returnNum; case 'S': return returnNum; case 'T': return returnNum; case 'U': return returnNum; case 'V': return returnNum; case 'W': return returnNum; case 'X': return returnNum; case 'Y': return returnNum; case 'Z': return returnNum; default: return true; } } //Method for converting letters to numbers public int convertLetNum(char letter){ int returnNum; switch (letter){ case 'A': returnNum=2;return returnNum; case 'B': returnNum=2;return returnNum; case 'C': returnNum=2;return returnNum; case 'D': returnNum=3;return returnNum; case 'E': returnNum=3;return returnNum; case 'F': returnNum=3;return returnNum; case 'G': returnNum=4;return returnNum; case 'H': returnNum=4;return returnNum; case 'I': returnNum=4;return returnNum; case 'J': returnNum=5;return returnNum; case 'K': returnNum=5;return returnNum; case 'L': returnNum=5;return returnNum; case 'M': returnNum=6;return returnNum; case 'N': returnNum=6;return returnNum; case 'O': returnNum=6;return returnNum; case 'P': returnNum=7;return returnNum; case 'Q': returnNum=7;return returnNum; case 'R': returnNum=7;return returnNum; case 'S': returnNum=7;return returnNum; case 'T': returnNum=8;return returnNum; case 'U': returnNum=8;return returnNum; case 'V': returnNum=8;return returnNum; case 'W': 
returnNum=9;return returnNum; case 'X': returnNum=9;return returnNum; case 'Y': returnNum=9;return returnNum; case 'Z': returnNum=9;return returnNum; default: return 0; } } } Note: Please Do not use this program to cheat in your own class. To ensure of this, I will take this question down if it has not been answered by the end of 2013, if I no longer need an explanation for it, or if the term for the class has ended.
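    Two hedged guesses at the compile errors, based only on the listing above: invalidCharacters is an instance method but it is called from the static method getInvalidResults (so the call needs a static method or an object to call it on), and the line if(tester==true)){ has one closing parenthesis too many, which derails the parser for everything after it. A sketch of the loop with both fixed, using the standard Character class to express the stated intent (accept letters and digits, reject symbols) without the long switch - the helper name is illustrative:

        // Sketch only: static helper, balanced parentheses, same intent as described.
        public static boolean getInvalidResults(String newNumString, int getLength) {
            for (int i = 0; i < getLength; i++) {
                char c = newNumString.charAt(i);
                if (!isValidCharacter(c)) {   // one '(' ... one ')'
                    return true;              // found an invalid character
                }
            }
            return false;                     // only letters and digits seen
        }

        // Static so the static method above can call it directly.
        public static boolean isValidCharacter(char letter) {
            return Character.isLetterOrDigit(letter);
        }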

    Read the article

  • Server compromised. Bounce message contains many email addresses the message was not sent to

    - by Tim Duncklee
    This is not a dupe. Please read and understand the issue before marking this as a duplicate question that has been answered already. Several customers are reporting bounce messages like the one below. At first I thought their computers had a virus but then I received one that was server generated so the problem is with the server. I've inspected the logs and these email addresses do not appear in the logs. The only thing I see that I do not remember seeing in the past are entries like this: Apr 30 13:34:49 psa86 qmail-queue-handlers[20994]: hook_dir = '/var/qmail//handlers/before-queue' Apr 30 13:34:49 psa86 qmail-queue-handlers[20994]: recipient[3] = '[email protected]' Apr 30 13:34:49 psa86 qmail-queue-handlers[20994]: handlers dir = '/var/qmail//handlers/before-queue/recipient/[email protected]' I've searched here and the web and maybe I'm just not entering the right search terms but I find nothing on this issue. Does anyone know how a hacker would attach additional email addresses to a message at the server and have them not appear in the logs? CentOS release 5.4, Plesk 8.6, QMail 1.03 Hi. This is the qmail-send program at psa.aaaaaa.com. I'm afraid I wasn't able to deliver your message to the following addresses. This is a permanent error; I've given up. Sorry it didn't work out. <[email protected]>: 82.201.133.227 does not like recipient. Remote host said: 550 #5.1.0 Address rejected. Giving up on 82.201.133.227. <[email protected]>: 64.18.7.10 does not like recipient. Remote host said: 550 No such user - psmtp Giving up on 64.18.7.10. <[email protected]>: 173.194.68.27 does not like recipient. Remote host said: 550-5.1.1 The email account that you tried to reach does not exist. Please try 550-5.1.1 double-checking the recipient's email address for typos or 550-5.1.1 unnecessary spaces. Learn more at 550 5.1.1 http://support.google.com/mail/bin/answer.py?answer=6596 w8si1903qag.18 - gsmtp Giving up on 173.194.68.27. <[email protected]>: 207.115.36.23 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.36.23. <[email protected]>: 207.115.37.22 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.37.22. <[email protected]>: 207.115.37.20 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.37.20. <[email protected]>: 207.115.37.23 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.37.23. <[email protected]>: 207.115.36.22 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.36.22. <[email protected]>: 74.205.16.140 does not like recipient. Remote host said: 553 sorry, that domain isn't in my list of allowed rcpthosts; no valid cert for gatewaying (#5.7.1) Giving up on 74.205.16.140. <[email protected]>: 207.115.36.20 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.36.20. <[email protected]>: 207.115.37.21 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.37.21. <[email protected]>: 192.169.41.23 failed after I sent the message. 
Remote host said: 554 qq Sorry, no valid recipients (#5.1.3) --- Below this line is a copy of the message. Return-Path: <[email protected]> Received: (qmail 15962 invoked from network); 1 May 2013 06:49:34 -0400 Received: from exprod6mo107.postini.com (64.18.1.18) by psa.aaaaaa.com with (DHE-RSA-AES256-SHA encrypted) SMTP; 1 May 2013 06:49:34 -0400 Received: from aaaaaa.com (exprod6lut001.postini.com [64.18.1.199]) by exprod6mo107.postini.com (Postfix) with SMTP id 47F80B8CA4 for <[email protected]>; Wed, 1 May 2013 03:49:33 -0700 (PDT) From: "Support" <[email protected]> To: [email protected] Subject: Detected Potential Junk Mail Date: Wed, 1 May 2013 03:49:33 -0700 Dear [email protected], junk mail protection service has detected suspicious email message(s) since your last visit and directed them to your Message Center. You can inspect your suspicious email at: ... UPDATE: After not seeing this problem for a while, I personally sent a message and immediately got a bounce with several bad addresses that I know I did not send to. These are addresses that are not on my system or on the server. This problem happens with both Mac and Windows clients and with messages generated from Postini and sent to users on my system. This is NOT backscatter. If it was backscatter it would not have the contents of my message in it. UPDATE #2 Here is another bounce. This one was sent by me and the bounce came back immediately. Hi. This is the qmail-send program at psa.aaaaaa.com. I'm afraid I wasn't able to deliver your message to the following addresses. This is a permanent error; I've given up. Sorry it didn't work out. <[email protected]>: 71.74.56.227 does not like recipient. Remote host said: 550 5.1.1 <[email protected]>... User unknown Giving up on 71.74.56.227. <[email protected]>: Connected to 208.34.236.3 but sender was rejected. Remote host said: 550 5.7.1 This system is configured to reject mail from 174.142.62.210 [174.142.62.210] (Host blacklisted - Found on Realtime Black List server 'bl.mailspike.net') <[email protected]>: 66.96.80.22 failed after I sent the message. Remote host said: 552 sorry, mailbox [email protected] is over quota temporarily (#5.1.1) <[email protected]>: 83.145.109.52 does not like recipient. Remote host said: 550 5.1.1 <[email protected]>: Recipient address rejected: User unknown in virtual mailbox table Giving up on 83.145.109.52. <[email protected]>: 69.49.101.234 does not like recipient. Remote host said: 550 5.7.1 <[email protected]>... H:M12 [174.142.62.210] Connection refused due to abuse. Please see http://mailspike.org/anubis/lookup.html or contact your E-mail provider. Giving up on 69.49.101.234. <[email protected]>: 212.55.154.36 does not like recipient. Remote host said: 550-The account has been suspended for inactivity 550 A conta do destinatario encontra-se suspensa por inactividade (#5.2.1) Giving up on 212.55.154.36. <[email protected]>: 199.168.90.102 failed after I sent the message. Remote host said: 552 Transaction [email protected] failed, remote said "550 No such user" (#5.1.1) <[email protected]>: 98.136.217.192 failed after I sent the message. Remote host said: 554 delivery error: dd Sorry your message to [email protected] cannot be delivered. This account has been disabled or discontinued [#102]. - mta1210.sbc.mail.gq1.yahoo.com --- Below this line is a copy of the message. 
Return-Path: <[email protected]> Received: (qmail 2618 invoked from network); 2 Jun 2013 22:32:51 -0400 Received: from 75-138-254-239.dhcp.jcsn.tn.charter.com (HELO ?192.168.0.66?) (75.138.254.239) by psa.aaaaaa.com with SMTP; 2 Jun 2013 22:32:48 -0400 User-Agent: Microsoft-Entourage/12.34.0.120813 Date: Sun, 02 Jun 2013 21:32:39 -0500 Subject: Refinance From: Tim Duncklee <[email protected]> To: Scott jones <[email protected]> Message-ID: <CDD16A79.67344%[email protected]> Thread-Topic: Reference Thread-Index: Ac5gAp2QmTs+LRv0SEOy7AJTX2DWzQ== Mime-version: 1.0 Content-type: multipart/mixed; boundary="B_3453053568_12034440" > This message is in MIME format. Since your mail reader does not understand this format, some or all of this message may not be legible. --B_3453053568_12034440 Content-type: multipart/related; boundary="B_3453053568_11982218" --B_3453053568_11982218 Content-type: multipart/alternative; boundary="B_3453053568_12000660" --B_3453053568_12000660 Content-type: text/plain; charset="ISO-8859-1" Content-transfer-encoding: quoted-printable Scott, ... email body here ... Here are the relevant log entries: Jun 2 22:32:50 psa qmail-queue[2616]: mail: all addreses are uncheckable - need to skip scanning (by deny mode) Jun 2 22:32:50 psa qmail-queue[2616]: scan: the message(drweb.tmp.i2SY0n) sent by [email protected] to [email protected] should be passed without checks, because contains uncheckable addresses Jun 2 22:32:50 psa qmail-queue-handlers[2617]: Handlers Filter before-queue for qmail started ... Jun 2 22:32:50 psa qmail-queue-handlers[2617]: [email protected] Jun 2 22:32:50 psa qmail-queue-handlers[2617]: [email protected] Jun 2 22:32:50 psa qmail-queue-handlers[2617]: hook_dir = '/var/qmail//handlers/before-queue' Jun 2 22:32:50 psa qmail-queue-handlers[2617]: recipient[3] = '[email protected]' Jun 2 22:32:50 psa qmail-queue-handlers[2617]: handlers dir = '/var/qmail//handlers/before-queue/recipient/[email protected]' Jun 2 22:32:51 psa qmail: 1370226771.060211 starting delivery 57: msg 1540285 to remote [email protected] Jun 2 22:32:51 psa qmail: 1370226771.060402 status: local 0/10 remote 1/20 Jun 2 22:32:51 psa qmail: 1370226771.060556 new msg 4915232 Jun 2 22:32:51 psa qmail: 1370226771.060671 info msg 4915232: bytes 687899 from <[email protected]> qp 2618 uid 2020 Jun 2 22:32:51 psa qmail-remote-handlers[2619]: Handlers Filter before-remote for qmail started ... Jun 2 22:32:51 psa qmail-queue-handlers[2617]: starter: submitter[2618] exited normally Jun 2 22:32:51 psa qmail-remote-handlers[2619]: from= Jun 2 22:32:51 psa qmail-remote-handlers[2619]: [email protected] Jun 2 22:32:51 psa qmail: 1370226771.078732 starting delivery 58: msg 4915232 to remote [email protected] Jun 2 22:32:51 psa qmail: 1370226771.078825 status: local 0/10 remote 2/20 Jun 2 22:32:51 psa qmail-remote-handlers[2621]: Handlers Filter before-remote for qmail started ... Jun 2 22:32:51 psa qmail-remote-handlers[2621]: [email protected] Jun 2 22:32:51 psa qmail-remote-handlers[2621]: [email protected]

    Read the article

  • Webserver horribly slow, sometimes incredibly fast

    - by dhanke
    i am running a small community ( 6000+ Members ) on a non-virtual 64-bit ubuntu 11.04 system. I am not a Linux-pro, not even advanced, i just tried to setup a webserver, which does nothing special actually. Delivering some dynamic PHP and RoR websites is its task. So it might be that my configuration files do look horrible bad. Also, i might use the wrong vocabulary, so in doubt, please ask. Having a current all-time record of 520 registered users (board-accounts, no system-users) online at same time, average server-load is about 2.0 - 5.0. Meantime (~250 users) average server load value is at about 0.4 - 0.8, sometimes, on some expensive searches a bit higher. everything fine. From time to time however, the load increases up to 120 (120.0, not 12.0 ;) ). In this time, its hard to even connect via SSH, but when i reach the server, and use top/htop/iotop to see whats happening, i cannot identify any process causing high CPU load. iotop tells me about a current reading/writing speed of about approx. 70kb/s, which is quite equal to power-off i think. Memory-Usage is max. at ~ 12GB of 16GB, so swap remains empty. now the odd (at least for me:) waiting some minutes ( since i always get a bit into a panic when this happens, it feels like 5 minutes, but i suppose its more like 20-30 minutes) and the server is back to normal. everything continues as normal. another odd fact: when i run hdparm -tT /dev/sda, i get answer like: /dev/sda: Timing cached reads: 7180 MB in 2.00 seconds = 3591.13 MB/sec Timing buffered disk reads: 348 MB in 3.02 seconds = 115.41 MB/sec when i run the same command while the server is "frozen", the answer is like /dev/sda: <- takes about 5 minutes until this line appears Timing cached reads: 7180 MB in 2.00 seconds = 3591.13 MB/sec <- 5 more minutes Timing buffered disk reads: 348 MB in 3.02 seconds = 115.41 MB/sec <- another 5 minutes so the values are the same, but the quoted time is completely wrong. using time command as prefix also tells me that ~ 15 minutes were used. I searched in dmesg, /var/log/[messages|syslog] - nothing found. /var/log/errors however tells me that: Jul 4 20:28:30 localhost kernel: [19080.671415] INFO: task php5-fpm:27728 blocked for more than 120 seconds. Jul 4 20:28:30 localhost kernel: [19080.671419] "echo 0 /proc/sys/kernel/hung_task_timeout_secs" disables this message. multiple times. now that message does tell me that php5-fpm task was blocked or did block ? - but not if that is the cause or just one of the results of that "freeze". Anyone? to cut the long story short, i dont know where even to start analyzing. So if you can give me any advice by looking at following specs and configs, or ask me to provide more information, i`d be glad. Specs: 6 Core AMD Phenom(tm) II X6 1055T Processor * 16 Gigabyte Ram 2x 1.5 TB Seagate ST1500DL003-9VT16L via SATA 3 via SoftwareRaid (i suppose) Services: (due to service --status-all, those with [ + ]) nginx Webserver 1.0.14 mySQL 5.1.63 Server Ruby on Rails 2.3.11 ( passenger-nginx-module ) php5-fpm 5.3.6-13ubuntu3.7 SSH ido2db Further services: default crontab + nightly backup. syslog-ng Website consists of 2 subdomains, forum. and www. where forum is a phpBB3.x PHP-Board, and www a Ruby on Rails 2.3.11 application (portal). Mini-Note: sometimes i notice that the forum is pretty slow, in contrast to the always-fast (except for this "freeze") portal. Both share the same Database, but the portal is using it read-only. 
The Webserver is nginx, using phusion passenger module to communicate with the ruby-application. Also, for the forum it communicates with php5-fpm via socket: relevant nginx configuration parts ( with comments/questions starting by ; ) ; in case of freeze due to too high Filesystem activity, maybe adding a limit? #worker_rlimit_nofile 50000; user www-data; ; 6 cores, so i read 6 fits. maybe already wrong? worker_processes 6; pid /var/run/nginx.pid; events { worker_connections 1024; } http { passenger_root /var/lib/gems/1.8/gems/passenger-3.0.11; passenger_ruby /usr/bin/ruby1.8; ; the forum once featured a chat, which was working w/o websockets. ; so it was a hell of pull requests (deactivated now, freeze still happening) keepalive_timeout 65; keepalive_requests 50; gzip on; server { listen 80; server_name www.domain.tld; root /var/www/domain/rails/public; passenger_enabled on; } server { listen 80; server_name forum.domain.tld; location / { root /var/www/domain/forum; index index.php; } ; satic stuff to be handled by nginx location ~* ^/style/.+.(jpg|jpeg|gif|css|png|js|ico|xml)$ { access_log off; expires 30d; root /var/www/domain/forum/; } ; now the php magic, note the "backend"-fcgi_pass location ~ .php$ { fastcgi_split_path_info ^(.+\.php)(.*)$; fastcgi_pass backend; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /var/www/domain/forum$fastcgi_script_name; include fastcgi_params; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_intercept_errors on; fastcgi_ignore_client_abort off; fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 256 16k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_max_temp_file_size 0; } location ~ /\.ht { deny all; } } ;the php5-fpm socket. i read that /dev/shm/ whould be the fastes place for this. bad idea in general? upstream backend { server unix:/dev/shm/phpfpm; } ... } php5-fpm settings (i changed this values due to php5-fpm error log messages higher and higher.. (freeze-problem was there before as well)* listen = /dev/shm/phpfpm user = www-data group = www-data pm = dynamic ; holy, 4000! well, shinking this value to earth-level gave me ; 100s of 502 bad gateway commands. this values were quite stable. ; since there are only max 520 users online i dont get it, why i would need ; as many children as configured here. due to keep-alive maybe? ; asking questions is easier for me since restarting server will make ; my community-members angry ;) pm.max_children = 4000 pm.start_servers = 100 pm.min_spare_servers = 50 pm.max_spare_servers = 150 pm.max_requests = 10 pm.status_path = /status ping.path = /ping ping.response = pong slowlog = log/$pool.log.slow ;should i use rlimit? ;rlimit_files = 1024 chdir = / mysql/my.cnf [client] port = 3306 socket = /var/run/mysqld/mysqld.sock [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking bind-address = 127.0.0.1 key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 myisam-recover = BACKUP ; high number, but less gives some phpBB errors. max_connections = 450 table_cache = 512 ; i read twice the cpu cores, bad? 
thread_concurrency = 12 join_buffer_size = 2084K concurrent_insert = 3 query_cache_limit = 64M query_cache_size = 512M query_cache_type = 1 log_error = /var/log/mysql/error.log log_slow_queries = /var/log/mysql/mysql-slow.log long_query_time = 2 expire_logs_days = 10 max_binlog_size = 100M low_priority_updates=1 [mysqldump] quick quote-names max_allowed_packet = 16M [isamchk] key_buffer = 16M !includedir /etc/mysql/conf.d/ I used smartctl already, hdds seem to be fine. /proc/mdstatus quotes: Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] md3 : active raid1 sda3[1] 1459264192 blocks [2/1] [_U] md1 : active raid1 sda1[0] 3911680 blocks [2/1] [U_] unused devices: ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 127727 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 127727 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited I quote some questions in my configuration files, these are not (intentional) directly problem-related, but would be nice for me to know wether they are indeed questionable or done right. One additional Fact: my MYSQL-database is at 12GB size. i dont know if that does matter, but mytop sometimes shows me 4-5 seconds long insert queries, some are 20-30 seconds long. Its just a feeling that i am unable to prove (because i dont know how), but when i disable the database, the freeze seems not to happen. Example: i created a dummy rails application to see the development log. the app made some sql-queries, reads and inserts. the log quite often was like: DbTest Load (0.3ms) SELECT * FROM `db_test` WHERE (`db_test`.`id` = 31722) LIMIT 1 SQL (0.1ms) BEGIN DbTest Update (0.3ms) UPDATE `db_test` SET `updated_at` = '2012-07-04 23:32:34' WHERE `id` = 31722 - now the log stands still for 5-60 seconds. SQL (49.1ms) COMMIT - SQL-Update time in the log does not include freeze time Rendering test/index Completed in 96ms (View: 16, DB: 59) | 200 OK [http://localhost:9000/test] Bad part is: this mini-freeze here only happens from time to time as well. note: meanwhile i cannot even upload files via scp. I currently feel like running form bad to worse and back by googling for my server-problem due to immense lack of knowledge regarding server configurations. It still makes me wonder, why those problems even appear, since 250 users a time is not such a high amount, right? So my questions: whats wrong and how to fix? ;) or: what information can i provide to make the situation more clear? can you point at some critical bad configuration-line which i should consider to catch up in the documentation? are there any tools i can run to see some possible bottlenecks? any further advice? (next to: "pay someone who knows what he does" - its a private project, server costs enough already. :)) Thanks for your time and help. Best Regards, Daniel P.S.: i renamed the configfiles to domain.tld since i dont want to have any % more load to the server until its fixed. might be a exaggeratedly thought.. P.P.S: if i asked a complete duplicate question, sorry. my search results seemed to be quite specific in their own way.
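    Not a diagnosis, just a hedged illustration of the spot the inline comments already question: with 6 cores and 16 GB, pm.max_children = 4000 allows far more PHP workers than the machine can run, and a burst of them, each holding a MySQL connection against max_connections = 450 while a 20-30 second query finishes, can pile up and look exactly like a freeze with low CPU and low I/O. A pool sketch sized to the hardware - the numbers are placeholders to be checked against the real memory footprint per child, not recommendations:

        ; Sketch only: illustrative values, to be measured, not copied.
        listen = /dev/shm/phpfpm
        user = www-data
        group = www-data

        pm = static
        pm.max_children = 60          ; roughly (RAM left for PHP) / (MB per child)
        pm.max_requests = 500         ; recycle occasionally, not every 10 requests

        ; keep the slow log to catch requests stalled behind MySQL
        slowlog = log/$pool.log.slow
        request_slowlog_timeout = 5s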

    Read the article

< Previous Page | 63 64 65 66 67 68  | Next Page >