Search Results

Search found 1856 results on 75 pages for 'hits lucky'.


  • HTG Explains: Just How Bad Are Android Tablet Apps?

    - by Chris Hoffman
    Apple loves to criticize the state of Android tablet apps when pushing its own iPad tablets. But just how bad is the Android tablet app situation? Should you avoid Android tablets like the Nexus 7 because of the apps? It’s clear that Apple’s iPad is way ahead when it comes to the sheer quantity of tablet-optimized apps. It’s also clear that some popular apps — particularly touch-optimized games — only show up on iPad. But that’s not the whole story. The Basics First, let’s get an idea of the basic stuff that will work well for you on Android. An excellent web browser. Chrome has struggled with performance on Android, but hits its stride on the Nexus 7 (2013). Great, tablet-optimized apps for all of Google’s services, from YouTube to Gmail and Google Maps. Everything you need for reading, from Amazon’s Kindle app for eBooks to Flipboard and Feedly for news articles from websites, and other services like the popular Pocket read-it-later service. Apps for most popular media services, from Netflix, Hulu, and YouTube for videos to Pandora, Spotify, and Rdio for music. A few things aren’t available — you won’t find Apple’s iTunes, and Amazon still doesn’t offer an Amazon Instant Video app for Android, though it does for the iPad and even its own Android-based Kindle Fire devices. Android has very good app coverage when it comes to consuming content, whether you’re reading websites and ebooks or watching videos and listening to music. You can play almost any Android smartphone game, too. For content consumption, Android is better than something like Windows 8, which lacks apps for Google services like YouTube and still doesn’t have apps for popular media services like Spotify and Rdio. How Android Scales Smartphone Apps Let’s look at how Android scales smartphone apps. Now, bear with us here — we know “scaling” is a dirty word considering how poorly Apple’s iPad scales iPhone apps, but it’s not as bad on Android. When an iPad runs an iPhone app, it simply doubles the pixels and effectively zooms in. For example, if you had a Twitter app with five tweets visible at once on an iPhone and ran the same app on an iPad, the iPad would simply “zoom in” and enlarge the same screen — you’d still see five tweets, but each tweet would appear larger. This is why developers create optimized iPad apps with their own interfaces; it’s especially important on Apple’s iOS. Android devices come in all shapes and sizes, so Android apps have a smarter way of adapting to different screen sizes. Let’s say you have a Twitter app designed for smartphones and it only shows five tweets at once when run on a phone. If you ran the same app on a tablet, you wouldn’t see the same five tweets — you’d see ten or more tweets. Rather than simply zooming in, the app can show more content at the same time on a tablet, even if it was never optimized for tablet-size screens. While apps designed for smartphones aren’t generally ideal, they adapt much better on Android than they do on an iPad. This is particularly true when it comes to games. You can play almost any Android smartphone game on an Android tablet, and games generally adapt very well to the larger screen. This gives you access to a huge catalog of games. It’s a great option to have, especially when you look at Microsoft’s Windows 8 and consider how much better the touch-based app and game selection would be if Microsoft allowed its users to run Windows Phone games on Windows 8.
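    To make the scaling behavior described above concrete, here is a minimal sketch (not from the article) of how an Android app can show more content on bigger screens instead of just zooming in. The layout name and the tweet_grid id are hypothetical; real apps more often do this declaratively with resource qualifiers such as res/layout-sw600dp, but the effect is the same.

        import android.app.Activity;
        import android.content.res.Configuration;
        import android.os.Bundle;
        import android.widget.GridView;

        public class TweetListActivity extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main); // hypothetical layout containing a GridView

                // smallestScreenWidthDp is roughly 320 on phones, 600 on 7-inch tablets
                // and 720 on 10-inch tablets (available since API level 13).
                Configuration config = getResources().getConfiguration();
                int columns = config.smallestScreenWidthDp >= 720 ? 3
                            : config.smallestScreenWidthDp >= 600 ? 2 : 1;

                // Show more items side by side on larger screens rather than enlarging them.
                GridView grid = (GridView) findViewById(R.id.tweet_grid);
                grid.setNumColumns(columns);
            }
        }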
7-inch vs 10-inch Tablets The Twitter example above wasn’t just an example. The official Twitter app for Android still doesn’t have a tablet-optimized interface, so this is the sort of situation you’d have to deal with on an Android tablet. On the popular Nexus 7, Twitter is an example of a smartphone app that actually works fairly well — in portrait mode, you can see many more tweets on screen at the same time and none of the space really feels all that wasted. This is important to consider — smartphone apps like Twitter often scale quite well to 7-inch screens because a 7-inch screen is much closer in form factor to a smartphone than a 10-inch screen is. When you begin to look at 10-inch Android tablets that are the same size as an iPad, the situation changes. While the Twitter app works well enough on a Nexus 7, it looks horrible on a Nexus 10 or other 10-inch tablet. Running many smartphone-designed apps — possibly with the exception of games — on a 10-inch tablet is a frustrating, poor experience. There’s much more white, empty space in the interface. It feels like you’re using a smartphone app on a large screen, and what’s the point of that? A tablet-optimized Twitter app for Android is finally on its way, but this same situation will repeat with many other types of apps. For example, Facebook doesn’t offer a tablet-optimized interface, but it’s okay on a Nexus 7 anyway. On a 10-inch screen, it probably wouldn’t be anywhere near as nice an experience. It goes without saying that Facebook and Twitter both offer iPad apps with interfaces designed for a tablet-size screen. Here’s another problematic app — the official Yelp app for Android. Even just using it on a 7-inch Nexus 7 will be a poor experience, while it would be much worse on a larger 10-inch tablet. Now, it’s true that many — maybe even most — of the popular apps you might want to run today are optimized for Android tablets. But when you look at popular apps like Twitter, Facebook, and Yelp, it’s clear Android is still behind in a meaningful way. Price Let’s be honest. The thing that really makes Android tablets compelling — and the only reason Android tablets started seeing real traction after years of almost complete dominance by Apple’s iPads — is that Android tablets are so much cheaper than iPads. Google’s latest Nexus 7 (2013) is available for only $230. Apple’s non-retina iPad Mini is available at $300, which is already $70 more. Despite the higher price, the iPad Mini has much older, slower internals and a much lower-resolution screen. It’s not as nice to look at when it comes to reading or watching movies, and the iPad Mini reportedly struggles to run Apple’s latest iOS 7. In contrast, the new Nexus 7 has a very high resolution screen, speedy internals, and runs Android very well with little-to-no lag in real use. We haven’t had any problems with it, unlike all the problems we unfortunately encountered with the first Nexus 7. For a really comparable experience to the current Nexus 7, you’d want to get one of Apple’s new retina iPad Minis. That would cost you $400, another $170 over the Nexus 7. In fact, it’s possible to regularly find sales on the Nexus 7, so if you waited you could get it for just $200 — half the price of the iPad Mini with a comparable screen and internals. (In fairness, the iPad certainly has better hardware — but you won’t feel it if you’re just using your tablet to browse the web, watch videos, and do other typical tablet things.)
This makes a tablet like the popular Nexus 7 a very good option for budget-conscious users who just want a high-quality device they can use to browse the web, watch videos, play games, and generally do light computing. There’s a reason we’re focusing on the Nexus 7 here. The combination of price and size brings it to a very good place. It’s awfully cheap for the high-quality experience you get, and the 7-inch screen means that even the non-tablet-optimized apps you may stumble across will often work fairly well. On the other hand, more expensive 10-inch Android tablets are still a tougher sell. For $400-$500, you’re getting awfully close to Apple’s full-size iPad price range and Android tablets don’t have as good an app ecosystem as an iPad. It’s hard to recommend an expensive, 10-inch Android tablet over a full-size iPad to average users. In summary, the Android tablet app situation is nowhere near as bad as it was a few years ago. The success of the Nexus 7 proves that Android tablets can be compelling experiences, and there is a wide variety of strong apps. That said, more expensive 10-inch Android tablets that compete directly with the full-size iPad on price still don’t make much sense for most people. Unless you have a specific reason for preferring an Android tablet, it’s tough not to recommend an iPad if you’re looking at spending $400+ on a 10-inch tablet. Image Credit: Christian Ghanime on Flickr

    Read the article

  • How can I convert this C calendar code into Objective-C syntax and have it work with matrices?

    - by Alec Niemy
    #define TRUE 1 #define FALSE 0 int days_in_month[]={0,31,28,31,30,31,30,31,31,30,31,30,31}; char *months[]= { " ", "\n\n\nJanuary", "\n\n\nFebruary", "\n\n\nMarch", "\n\n\nApril", "\n\n\nMay", "\n\n\nJune", "\n\n\nJuly", "\n\n\nAugust", "\n\n\nSeptember", "\n\n\nOctober", "\n\n\nNovember", "\n\n\nDecember" }; int inputyear(void) { int year; printf("Please enter a year (example: 1999) : "); scanf("%d", &year); return year; } int determinedaycode(int year) { int daycode; int d1, d2, d3; d1 = (year - 1.)/ 4.0; d2 = (year - 1.)/ 100.; d3 = (year - 1.)/ 400.; daycode = (year + d1 - d2 + d3) %7; return daycode; } int determineleapyear(int year) { if(year% 4 == FALSE && year%100 != FALSE || year%400 == FALSE) { days_in_month[2] = 29; return TRUE; } else { days_in_month[2] = 28; return FALSE; } } void calendar(int year, int daycode) { int month, day; for ( month = 1; month <= 12; month++ ) { printf("%s", months[month]); printf("\n\nSun Mon Tue Wed Thu Fri Sat\n" ); // Correct the position for the first date for ( day = 1; day <= 1 + daycode * 5; day++ ) { printf(" "); } // Print all the dates for one month for ( day = 1; day <= days_in_month[month]; day++ ) { printf("%2d", day ); // Is day before Sat? Else start next line Sun. if ( ( day + daycode ) % 7 > 0 ) printf(" " ); else printf("\n " ); } // Set position for next month daycode = ( daycode + days_in_month[month] ) % 7; } } int main(void) { int year, daycode, leapyear; year = inputyear(); daycode = determinedaycode(year); determineleapyear(year); calendar(year, daycode); printf("\n"); } This code generates a calendar for the entered year in the terminal. My question is: how can I convert this into Objective-C syntax instead of this C syntax? I'm sure it is a simple process, but I'm quite a novice at Objective-C and I need it for a Cocoa project. This code outputs the calendar as a continuous series of strings until the last month is reached. So instead of drawing the calendar in the terminal, how can I present the calendar as a series of NSMatrixes depending on the entered year? I hope someone can help me with this; thanks for any help (you will be in the credits of the finished program). :) (The calendar is just a small part of the program I am making, and it is one of the important parts!)
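    As a side note, the heart of that C code is just two facts per month: which weekday the month starts on and how many days it has. Below is a minimal sketch of that computation (written in Java purely to illustrate the data an NSMatrix-style grid would need, not the Objective-C port the asker is after); the year is only an example value.

        import java.time.DayOfWeek;
        import java.time.LocalDate;
        import java.time.YearMonth;

        public class CalendarSketch {
            public static void main(String[] args) {
                int year = 1999; // example input, like the prompt in the C code
                for (int month = 1; month <= 12; month++) {
                    YearMonth ym = YearMonth.of(year, month);
                    DayOfWeek first = LocalDate.of(year, month, 1).getDayOfWeek();

                    // Shift so Sunday lands in column 0, matching the Sun..Sat header.
                    int startColumn = first.getValue() % 7;
                    int daysInMonth = ym.lengthOfMonth(); // leap years handled by the library

                    System.out.printf("%-9s starts in column %d and has %d days%n",
                            ym.getMonth(), startColumn, daysInMonth);
                }
            }
        }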

    Read the article

  • multiple droppable

    - by Sune
    I have multiple droppable divs (which all have the same size) and one draggable div. The draggable div is 3 times bigger than one droppable. When I drag the draggable div onto the droppable divs I want to find which droppables have been affected. My code looks like this: $(function () { $(".draggable").draggable({ drag: function (event, ui) { } }); $(".droppable").droppable({ drop: function (event, ui) { alert(this.id); } }); }); the html <div style="height:100px; width:200px; border-bottom:1px solid red; " id="Div0" class="droppable"> drop in me1! </div> <div style="height:100px; width:200px; border-bottom:1px solid red;" id="Div1" class="droppable"> drop in me2! </div> <div style="height:100px; width:200px; border-bottom:1px solid red; " id="Div2" class="droppable"> drop in me2! </div> <div style="height:100px; width:200px; border-bottom:1px solid red; " id="Div3" class="droppable"> drop in me2! </div> <div style="height:100px; width:200px; border-bottom:1px solid red; " id="Div4" class="droppable"> drop in me2! </div> <div style="height:100px; width:200px; border-bottom:1px solid red; " id="Div5" class="droppable"> drop in me2! </div> <div style="height:100px; width:200px; border-bottom:1px solid red; " id="Div6" class="droppable"> drop in me2! </div> <div style="height:100px; width:200px; border-bottom:1px solid red; " id="Div7" class="droppable"> drop in me2! </div> <div class="draggable" id="drag" style="height:300px; width:50px; border:1px solid black;"><span>Drag</span></div> But my alert only shows the first droppable that my draggable div hits (Div0); how can I find the missing ones (Div1 and Div2), which are also affected? Here's a guy with the same problem: http://forum.jquery.com/topic/drop-onto-multiple-droppable-elements-at-once

    Read the article

  • Why is my code returning a null object reference error when using WatiN?

    - by Fuzz Evans
    I keep getting a Null Object Refrence Error, but can't tell why. I have a CSV file that contains 100 urls. The file is read into an array called "lines". public partial class Form1 : System.Windows.Forms.Form { string[] lines; public Form1() ... private void ReadLinksIntoMemory() { //this reads the chosen csv file into our "lines" array //and splits on comma's and new lines to create new objects within the array using (StreamReader sr = new StreamReader(@"C:\temp.csv")) { //reads everything in our csv into 1 long line string fileContents = sr.ReadToEnd(); //splits the 1 long line read in into multiple objects of the lines array lines = fileContents.Split(new string[] { ",", Environment.NewLine }, StringSplitOptions.RemoveEmptyEntries); sr.Dispose(); } } The next part is where I get the null object error. When I try to use WatIn to go to the first item in the lines array it says I'm referencing a null object. private void GoToEditLinks() { for (int i = 0; i < lines.Length; i++) { //go to each link sequentially myIE.GoTo(lines[i].ToString()); //sleep so we can make sure the page loads System.Threading.Thread.Sleep(5000); } } When I debug the code it says that the GoTo request calls lines which is null. It seems like I need to declare the array, but don't I need to tell it an exact size to do that? Example: lines = new string[10] I thought I could use the lines.Length to tell it how big to make the array but that didn't work. What is weird to me is I can use the following code without problem: //returns the accurate number of urls that were in the CSV we read in earlier txtbx1.text = lines.Length; //or //this returns the last entry in the csv, as well as the last entry in the array TextBox2.Text = lines[lines.Length - 1]; I am confused why the array clearly has items in it, they can be called to fill a text box, but when I try to call them in my for loop it says its a null reference? UPDATE: By placing my cursor on both calls to lines and pressing f12 I find they both go to the same instance. The thought next is that I am not calling ReadLinksIntoMemory in time, below is my code: private void button1_Click(object sender, EventArgs e) { button1.Enabled = false; ReadLinksIntoMemory(); GoToEditLinks(); button1.Enabled = true; } Unless I'm mistaken the code says that the ReadLinksIntoMemory method must complete before GoToEditLinks can be called? If ReadLinksIntoMemory didn't finish in time I shouldn't be able to fill my text boxes with the lines array length and/or last entry. UPDATE: Stepping into the method GoToEditLinks() I see that lines is null before it calls: myIE.GoTo(lines[i]); but when it hits the goto command the value changes from null to the url it is suppose to go to, but at that same time it gives me the null object error? UPDATE: I added a IsNullOrEmpty check method and lines array passes it without any issue. I'm beginning to think it is an issue with WatIn and the myIE.GoTo command. I think this is the stack trace/call stack? Program.exe!Program.Form1.GoToEditLinks() Line 284 C# Program.exe!Program.Form1.button1_Click(object sender, System.EventArgs e) Line 191 + 0x8 bytes C# [External Code] Program.exe!Program.Program.Main() Line 18 + 0x1d bytes C# [External Code]

    Read the article

  • CodePlex Daily Summary for Thursday, June 05, 2014

    CodePlex Daily Summary for Thursday, June 05, 2014Popular Releases51Degrees - Device Detection and Redirection: 3.1.2.3: Version 3.1 HighlightsDevice detection algorithm is over 100 times faster. Regular expressions and levenshtein distance calculations are no longer used. The device detection algorithm performance is no longer limited by the number of device combinations contained in the dataset. Two modes of operation are available: Memory – the detection data set is loaded into memory and there is no continuous connection to the source data file. Slower initialisation time but faster detection performanc...CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.27.0: CodeMap now indicates the type name for all members Implemented running scripts 'as administrator'. Just add '//css_npp asadmin' to the script and run it as usual. 'Prepare script for distribution' now aggregates script dependency assemblies. Various improvements in CodeSnipptet, Autcompletion and MethodInfo interactions with each other. Added printing line number for the entries in CodeMap (subject of configuration value) Improved debugging step indication for classless scripts ...Load Runner - HTTP Pressure Test Tool: Load Runner 1.1: 1. added support for measuring total in time / hits / traffic / requests 2. abstracted log, default log to console / text file 3. refactored codeKartris E-commerce: Kartris v2.6003: Bug fixes: Fixed issue where category could not be saved if parent categories not altered Updated split string function to 500 chars otherwise problems with attribute filtering in PowerPack Improvements: If a user has group pricing below that available through qty discounts, that will show in place of the qty discountMagicaVoxel: MagicaVoxel Renderer ver 0.01: MagicaVoxel Renderer (Win 0.01)ClosedXML - The easy way to OpenXML: ClosedXML 0.72.3: 70426e13c415 ClosedXML for .Net 4.0 now uses Open XML SDK 2.5 b9ef53a6654f Merge branch 'master' of https://git01.codeplex.com/forks/vbjay/closedxml 727714e86416 Fix range.Merge(Boolean) for .Net 3.5 eb1ed478e50e Make public range.Merge(Boolean checkIntersects) 6284cf3c3991 More performance improvements when saving.MCE Controller: MCE Controller V1.8.6: BETA. Adds option to disable all internal commands. If the "Disable Internal Commands" checkbox in settings is checked, MCE Controller will disable all internally defined commands (e.g. VK key codes, single characters, mouse commands, etc...) and only respond to commands defined in the MCEControl.commmands file. Attempts to fix inability for clients to reconnect after they forcibly close the connection. Build 1.8.6.37445Visual F# Tools: Daily Builds Preview 06-04-2014: This preview is released for use under a proprietary license.SEToolbox: SEToolbox 01.032.018 Release 1: Added ability to merge/join two ships, regardless of origin. Added Language selection menu to set display text language (for SE resources only), and fixed inherent issues. Added full support for Dedicated Servers, allowing use on PC without Steam present, and only the Dedicated Server install. Added Browse button for used specified save game selection in Load dialog. Installation of this version will replace older version.Service monitor: Version 3.1: Added the ability of the tool to restart itself in 'Admin' mode without UAC prompt. Note that this only works after the tool has run in 'Admin' mode at least once before. This is only supported on Vista, Windows 7 or later. 
See my blog post on how it is done.DNN Blog: 06.00.07: Highlights: Enhancement: Option to show management panel on child modules Fix: Changed SPROC and code to ensure the right people are notified of pending comments. (CP-24018) Fix: Fix to have notification actions authentication assume the right module id so these will work from the messaging center (CP-24019) Fix: Fix to issue in categories in a post that will not save when no categories and no tags selectedTEncoder: 4.0.0: --4.0.0 -Added: Video downloader -Added: Total progress will be updated more smoothly -Added: MP4Box progress will be shown -Added: A tool to create gif image from video -Added: An option to disable trimming -Added: Audio track option won't be used for mpeg sources as default -Fixed: Subtitle position wasn't used -Fixed: Duration info in the file list wasn't updated after trimming -Updated: FFMpegQuickMon: Version 3.14 (Pie release): This is unofficially the 'Pie' release. There are two big changes.1. 'Presets' - basically templates. Future releases might build on this to allow users to add more presets. 2. MSI Installer now allows you to choose components (in case you don't want all collectors etc.). This means you don't have to download separate components anymore (AllAgents.zip still included in case you want to use them separately) Some other changes:1. Add/changed default file extension for monitor packs to *.qmp (...VeraCrypt: VeraCrypt version 1.0d: Changes between 1.0c and 1.0d (03 June 2014) : Correct issue while creating hidden operating system. Minor fixes (look at git history for more details).Keepass2Android: 0.9.4-pre1: added plug-in support: See settings for how to get plug-ins! published QR plug-in (scan passwords, display passwords as QR code, transfer entries to other KP2A devices) published InputStick plugin (transfer credentials to your PC via bluetooth - requires InputStick USB stick) Third party apps can now simply implement querying KP2A for credentials. Are you a developer? Please add this to your app if suitable! added TOTP support (compatible with KeeOTP and TrayTotp) app should no l...Microsoft Web Protection Library: AntiXss Library 4.3.0: Download from nuget or the Microsoft Download Center This issue finally addresses the over zealous behaviour of the HTML Sanitizer which should now function as expected once again. HTML encoding has been changed to safelist a few more characters for webforms compatibility. This will be the last version of AntiXSS that contains a sanitizer. Any new releases will be encoding libraries only. We recommend you explore other sanitizer options, for example AntiSamy https://www.owasp.org/index....Z SqlBulkCopy Extensions: SqlBulkCopy Extensions 1.0.0: SqlBulkCopy Extensions provide MUST-HAVE methods with outstanding performance missing from the SqlBulkCopy class like Delete, Update, Merge, Upsert. Compatible with .NET 2.0, SQL Server 2000, SQL Azure and more! Bulk MethodsBulkDelete BulkInsert BulkMerge BulkUpdate BulkUpsert Utility MethodsGetSqlConnection GetSqlTransaction You like this library? Find out how and why you should support Z Project Become a Memberhttp://zzzproject.com/resources/images/all/become-a-member.png|ht...Tweetinvi a friendly Twitter C# API: Tweetinvi 0.9.3.x: Timelines- Added all the parameters available from the Timeline Endpoints in Tweetinvi. 
- This is available for HomeTimeline, UserTimeline, MentionsTimeline // Simple query var tweets = Timeline.GetHomeTimeline(); // Create a parameter for queries with specific parameters var timelineParameter = Timeline.CreateHomeTimelineRequestParameter(); timelineParameter.ExcludeReplies = true; timelineParameter.TrimUser = true; var tweets = Timeline.GetHomeTimeline(timelineParameter); Tweetinvi 0.9.3.1...Sandcastle Help File Builder: Help File Builder and Tools v2014.5.31.0: General InformationIMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This release completes removal of the branding transformations and implements the new VS2013 presentation style that utilizes the new lightweight website format. Several breaking cha...Magick.NET: Magick.NET 6.8.9.101: Magick.NET linked with ImageMagick 6.8.9.1. Breaking changes: - Int/short Set methods of WritablePixelCollection are now unsigned. - The Q16 build no longer uses HDRI, switch to the new Q16-HDRI build if you need HDRI.New Projects12306helper: 12306 ????2112110044: dasda2112110202: asdfsdfsdfsd2112110298: TieuDan2112110315: vftgcdxfrBaidu PCS: BaiduPCSExcel2Sobek: Here the project Excel2Sobek is hosted. Excel2Sobek is a preprocessor in Excel's VBA for the SOBEK 2 hydrological and hydraulic simulation model by Deltares.Help Desk Pro: Manage a business of customer serviceMARGYE: MARGYE CMS CMR E-CommerceMemberBS: MemberBS20140605MiKroTik API: Api untuk menghubungkan aplikasi anda dengan router Mikrotik.MiniProfiler + Log4Net: Use MiniProfiler and Log4Net in ASP.NET projectStore Apps Unity App Base for Prism: This simple library provides an Application Base for a Windows Store application that automatically connects Unity to provide injection as the container.SuperCaptcha for MVC: Custom captcha control for MVCTC760240: myappwithfailuresWindows Phone 8 Dilbert Comic Reader: Basic WP8.1 app that displays comics from http://www.dilbert.com.x86 Proved: Prove what you run!

    Read the article

  • Windows Form hangs when running threads

    - by Benjamin Ortuzar
    I have written a .NET C# Windows Form app in Visual Studio 2008 that uses a Semaphore to run multiple jobs as threads when the Start button is pressed. It’s experiencing an issue where the Form goes into a coma after being run for 40 minutes or more. The log files indicate that the current jobs complete, it picks a new job from the list, and there it hangs. I have noticed that the Windows Form becomes unresponsive when this happens. The form is running in its own thread. This is a sample of the code I am using: protected void ProcessJobsWithStatus (Status status) { int maxJobThreads = Convert.ToInt32(ConfigurationManager.AppSettings["MaxJobThreads"]); Semaphore semaphore = new Semaphore(maxJobThreads, maxJobThreads); // Available=3; Capacity=3 int threadTimeOut = Convert.ToInt32(ConfigurationManager.AppSettings["ThreadSemaphoreWait"]);//in Seconds //gets a list of jobs from a DB Query. List<Job> jobList = jobQueue.GetJobsWithStatus(status); //we need to create a list of threads to check if they all have stopped. List<Thread> threadList = new List<Thread>(); if (jobList.Count > 0) { foreach (Job job in jobList) { logger.DebugFormat("Waiting green light for JobId: [{0}]", job.JobId.ToString()); if (!semaphore.WaitOne(threadTimeOut * 1000)) { logger.ErrorFormat("Semaphore Timeout. A thread did NOT complete in time[{0} seconds]. JobId: [{1}] will start", threadTimeOut, job.JobId.ToString()); } logger.DebugFormat("Acquired green light for JobId: [{0}]", job.JobId.ToString()); // Only N threads can get here at once job.semaphore = semaphore; ThreadStart threadStart = new ThreadStart(job.Process); Thread thread = new Thread(threadStart); thread.Name = job.JobId.ToString(); threadList.Add(thread); thread.Start(); } logger.Info("Waiting for all threads to complete"); //check that all threads have completed. foreach (Thread thread in threadList) { logger.DebugFormat("About to join thread(jobId): {0}", thread.Name); if (!thread.Join(threadTimeOut * 1000)) { logger.ErrorFormat("Thread did NOT complete in time[{0} seconds]. JobId: [{1}]", threadTimeOut, thread.Name); } else { logger.DebugFormat("Thread did complete in time. JobId: [{0}]", thread.Name); } } } logger.InfoFormat("Finished Processing Jobs in Queue with status [{0}]...", status); } //form methods private void button1_Click(object sender, EventArgs e) { buttonStop.Enabled = true; buttonStart.Enabled = false; ThreadStart threadStart = new ThreadStart(DoWork); workerThread = new Thread(threadStart); serviceStarted = true; workerThread.Start(); } private void DoWork() { EmailAlert emailAlert = new EmailAlert (); // start an endless loop; loop will abort only when "serviceStarted" flag = false while (serviceStarted) { emailAlert.ProcessJobsWithStatus(0); // yield if (serviceStarted) { Thread.Sleep(new TimeSpan(0, 0, 1)); } } // time to end the thread Thread.CurrentThread.Abort(); } //job.process() public void Process() { try { //sets the status, DateTimeStarted, and the processId this.UpdateStatus(Status.InProgress); //do something logger.Debug("Updating Status to [Completed]"); //hits, status,DateFinished this.UpdateStatus(Status.Completed); } catch (Exception e) { logger.Error("Exception: " + e.Message); this.UpdateStatus(Status.Error); } finally { logger.Debug("Relasing semaphore"); semaphore.Release(); } I have tried to log what I can into a file to detect where the problem is happening, but so far I haven't been able to identify where this happens.
Losing control of the Windows Form makes me think that this has nothing to do with processing the jobs. Any ideas?
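    For comparison, here is a minimal sketch of the same throttling pattern in Java (an analogy, not the asker's C#; the permit count and timeout are placeholders). The points it illustrates: acquire a permit before starting a job, release it exactly once in a finally block, never start a job when the acquire failed (otherwise its release unbalances the semaphore), and keep all of the waiting and joining off the UI thread.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Semaphore;
        import java.util.concurrent.TimeUnit;

        public class JobRunner {
            private final Semaphore semaphore = new Semaphore(3); // "MaxJobThreads" placeholder

            // Call this from a worker thread only; blocking waits like these on the
            // UI/event thread are what make a form unresponsive.
            public void processJobs(List<Runnable> jobs) throws InterruptedException {
                List<Thread> threads = new ArrayList<>();
                for (Runnable job : jobs) {
                    if (!semaphore.tryAcquire(60, TimeUnit.SECONDS)) {
                        System.err.println("No job slot freed within 60s; skipping this job");
                        continue; // do not start a job without holding a permit
                    }
                    Thread t = new Thread(() -> {
                        try {
                            job.run();
                        } finally {
                            semaphore.release(); // exactly one release per successful acquire
                        }
                    });
                    threads.add(t);
                    t.start();
                }
                for (Thread t : threads) {
                    t.join(TimeUnit.SECONDS.toMillis(60));
                }
            }
        }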

    Read the article

  • HTML button elements and the onserverclick attribute in ASP.NET

    - by Adrian Adkison
    I have been experiencing some very strange behavior using html buttons with the onserverclick attribute. What I am trying to do is use jQuery to designate the default button on the page. By default button I mean the button that is "clicked" when a user hits enter. In testing, I would hit enter while in an input field and sometimes the intended "default" button was clicked and the server side method defined in the corresponding onserverclick attribute was hit. Other times a post back was made without hitting the server method. In order to isolate the issue I created a minimal test case and found some very interesting results. client side: <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>Untitled Page</title> <script type="text/javascript" src="/scripts/jquery-1.4.min.js"></script> </head> <body> <form id="form1" runat="server"> <asp:Label ID="_response" runat="server"></asp:Label> <input id="input1" type="text" /> <input id="input2" type="text" /> <button id="first" type="button" class="defaultBtn" runat="server" onserverclick="ServerMethod1">test1</button> <button id="second" type="button" runat="server" onserverclick="ServerMethod2">test2</button> </form> <script type="text/javascript"> jQuery('form input').keyup( function(e) { if ((e.which && e.which == 13) || (e.keyCode && e.keyCode == 13)) { $('button.defaultBtn').click(); return true; } } ); </script> </body> </html> server side: public partial class admin_spikes_ButtonSubmitTest : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { } protected void ServerMethod1(object sender, EventArgs e) { _response.Text = "server method1 was hit"; } protected void ServerMethod2(object sender, EventArgs e) { _response.Text = "server method2 was hit"; } } What I found was that everything worked as expected with this code except when I removed one of the input elements. In Firefox 3.6 and IE 8 when only one input exists on the page, hitting enter does not trigger the onserverclick, it makes a post back without even being jQuery "clicked" or actually "clicked". You can test this by starting with two inputs and clicking "test2". This will output "server method2 was hit". Then just hit enter and the output will be "server method1 was hit. Now take away an input and run the same test. Click "test2" see the results, then hit enter and you will see that nothing will change because the "test1" method was never hit. Does anyone know what is going on? Thanks p.s. Chrome worked as expected

    Read the article

  • Best way to add an extra (nested) form in the middle of a tabbed form

    - by Scharrels
    I've got a web application, consisting mainly of a big form with information. The form is split into multiple tabs, to make it more readable for the user: <form> <div id="tabs"> <ul> <li><a href="#tab1">Tab1</a></li> <li><a href="#tab2">Tab2</a></li> </ul> <div id="tab1">A big table with a lot of input rows</div> <div id="tab2">A big table with a lot of input rows</div> </div> </form> The form is dynamically extended (extra rows are added to the tables). Every 10 seconds the form is serialized and synchronized with the server. I now want to add an interactive form on one of the tabs: when a user enters a name in a field, this information is sent to the server and an id associated with that name is returned. This id is used as an identifier for some dynamically added form fields. A quick sketchup of such a page would look like this: <form action="bigform.php"> <div id="tabs"> <ul> <li><a href="#tab1">Tab1</a></li> <li><a href="#tab2">Tab2</a></li> </ul> <div id="tab1">A big table with a lot of input rows</div> <div id="tab2"> <div class="associatedinfo"> <p>Information for Joe</p> <ul> <li><input name="associated[26][]" /></li> <li><input name="associated[26][]" /></li> </ul> </div> <div class="associatedinfo"> <p>Information for Jill</p> <ul> <li><input name="associated[12][]" /></li> <li><input name="associated[12][]" /></li> </ul> </div> <div id="newperson"> <form action="newform.php"> <p>Add another person:</p> <input name="extra" /><input type="submit" value="Add" /> </form> </div> </div> </div> </form> The above will not work: nested forms are not allowed in HTML. However, I really need to display the form on that tab: it's part of the functionality of that page. I also want the behaviour of a separate form: when the user hits return in the form field, the "Add" submit button is pressed and a submit action is triggered on the partial form. What is the best way to solve this problem?

    Read the article

  • About global.asax and the events there

    - by eski
    So what i'm trying to understand is the whole global.asax events. I doing a simple counter that records website visits. I am using MSSQL. Basicly i have two ints. totalNumberOfUsers - The total visist from begining. currentNumberOfUsers - Total of users viewing the site at the moment. So the way i understand global.asax events is that every time someone comes to the site "Session_Start" is fired once. So once per user. "Application_Start" is fired only once the first time someone comes to the site. Going with this i have my global.asax file here. <script runat="server"> string connectionstring = ConfigurationManager.ConnectionStrings["ConnectionString1"].ConnectionString; void Application_Start(object sender, EventArgs e) { // Code that runs on application startup Application.Lock(); Application["currentNumberOfUsers"] = 0; Application.UnLock(); string sql = "Select c_hit from v_counter where (id=1)"; SqlConnection connect = new SqlConnection(connectionstring); SqlCommand cmd = new SqlCommand(sql, connect); cmd.Connection.Open(); cmd.ExecuteNonQuery(); SqlDataReader reader = cmd.ExecuteReader(); while (reader.Read()) { Application.Lock(); Application["totalNumberOfUsers"] = reader.GetInt32(0); Application.UnLock(); } reader.Close(); cmd.Connection.Close(); } void Application_End(object sender, EventArgs e) { // Code that runs on application shutdown } void Application_Error(object sender, EventArgs e) { // Code that runs when an unhandled error occurs } void Session_Start(object sender, EventArgs e) { // Code that runs when a new session is started Application.Lock(); Application["totalNumberOfUsers"] = (int)Application["totalNumberOfUsers"] + 1; Application["currentNumberOfUsers"] = (int)Application["currentNumberOfUsers"] + 1; Application.UnLock(); string sql = "UPDATE v_counter SET c_hit = @hit WHERE c_type = 'totalNumberOfUsers'"; SqlConnection connect = new SqlConnection(connectionstring); SqlCommand cmd = new SqlCommand(sql, connect); SqlParameter hit = new SqlParameter("@hit", SqlDbType.Int); hit.Value = Application["totalNumberOfUsers"]; cmd.Parameters.Add(hit); cmd.Connection.Open(); cmd.ExecuteNonQuery(); cmd.Connection.Close(); } void Session_End(object sender, EventArgs e) { // Code that runs when a session ends. // Note: The Session_End event is raised only when the sessionstate mode // is set to InProc in the Web.config file. If session mode is set to StateServer // or SQLServer, the event is not raised. Application.Lock(); Application["currentNumberOfUsers"] = (int)Application["currentNumberOfUsers"] - 1; Application.UnLock(); } </script> In the page_load i have this protected void Page_Load(object sender, EventArgs e) { l_current.Text = Application["currentNumberOfUsers"].ToString(); l_total.Text = Application["totalNumberOfUsers"].ToString(); } So if i understand this right, every time someone comes to the site both the currentNumberOfUsers and totalNumberOfUsers are incremented with 1. But when the session is over the currentNumberOfUsers is decremented with 1. If i go to the site with 3 types of browsers with the same computer i should have 3 in hits on both counters. Doing this again after hours i should have 3 in current and 6 in total, right ? The way its working right now is the current goes up to 2 and the total is incremented on every postback on IE and Chrome but not on firefox. And one last thing, is this the same thing ? Application["value"] = 0; value = Application["value"] //OR Application.Set("Value", 0); Value = Application.Get("Value");
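    The lifecycle described above is the usual one: Application_Start fires once when the application spins up, Session_Start once per new session, and Session_End only with InProc session state (as the generated comment notes). For what it's worth, here is a minimal sketch of the same counter idea in Java servlets (an analogy, not ASP.NET; registering the listener in web.xml or via @WebListener is assumed, and persisting the total back to the database is left out):

        import java.util.concurrent.atomic.AtomicInteger;
        import javax.servlet.ServletContextEvent;
        import javax.servlet.ServletContextListener;
        import javax.servlet.http.HttpSessionEvent;
        import javax.servlet.http.HttpSessionListener;

        public class VisitCounter implements ServletContextListener, HttpSessionListener {
            private final AtomicInteger current = new AtomicInteger();
            private final AtomicInteger total = new AtomicInteger();

            @Override
            public void contextInitialized(ServletContextEvent sce) {
                // Runs once at application start; load the persisted total here.
                total.set(0); // e.g. replace with a SELECT from the counter table
                sce.getServletContext().setAttribute("totalNumberOfUsers", total);
                sce.getServletContext().setAttribute("currentNumberOfUsers", current);
            }

            @Override
            public void sessionCreated(HttpSessionEvent se) {
                // Runs once per new session (per visitor, not per request).
                total.incrementAndGet();
                current.incrementAndGet();
            }

            @Override
            public void sessionDestroyed(HttpSessionEvent se) {
                // Runs when the session times out or is invalidated.
                current.decrementAndGet();
            }

            @Override
            public void contextDestroyed(ServletContextEvent sce) { }
        }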

    Read the article

  • android program crashing (new to platform)

    - by mutio
    So it is my first real Android program (!hello world), but i do have java experience.The program compiles fine, but on running it crashes as soon as it opens (tried debugging, but it crashes before it hits my breakpoint). Was looking for any advice from anyone who is more experienced with android. package org.me.tipcalculator; import android.app.Activity; import android.os.Bundle; import android.widget.TextView; import android.view.View; import android.widget.Button; import android.widget.EditText; import java.text.NumberFormat; import android.util.Log; public class TipCalculator extends Activity { public static final String tag = "TipCalculator"; /** Called when the activity is first created. */ @Override public void onCreate(Bundle icicle) { super.onCreate(icicle); setContentView(R.layout.main); final EditText mealpricefield = (EditText) findViewById(R.id.mealprice); final TextView answerfield = (TextView) findViewById(R.id.answer); final Button button = (Button) findViewById(R.id.calculate); button.setOnClickListener(new Button.OnClickListener() { public void onClick(View v) { try { Log.i(tag, "onClick invoked."); String mealprice = mealpricefield.getText().toString(); Log.i(tag, "mealprice is [" + mealprice + "]"); String answer = ""; if (mealprice.indexOf("$") == -1) { mealprice = "$" + mealprice; } float fmp = 0.0F; NumberFormat nf = java.text.NumberFormat.getCurrencyInstance(); fmp = nf.parse(mealprice).floatValue(); fmp *= 1.2; Log.i(tag, "Total Meal Price (unformatted) is [" + fmp + "]"); answer = "Full Price, including 20% Tip: " + nf.format(fmp); answerfield.setText(answer); Log.i(tag, "onClick Complete"); } catch(java.text.ParseException pe){ Log.i (tag ,"Parse exception caught"); answerfield.setText("Failed to parse amount?"); } catch(Exception e){ Log.e (tag ,"Failed to Calculate Tip:" + e.getMessage()); e.printStackTrace(); answerfield.setText(e.getMessage()); } } } ); } Just in case it helps heres the xml <?xml version="1.0" encoding="UTF-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent"> <TextView android:layout_width="fill_parent" android:layout_height="wrap_content" android:text="Android Tip Calculator"/> <EditText android:id="@+id/mealprice" android:layout_width="fill_parent" android:layout_height="wrap_content" android:autoText="true"/> <Button android:id="@+id/calculate" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Calculate Tip"/> <TextView android:id= "@+id/answer" android:layout_width="fill_parent" android:layout_height="wrap_content" android:text=""/> </LinearLayout>
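    Since the crash happens before any breakpoint is reached, the usual first step is to read the stack trace in logcat ("adb logcat") rather than guess. Below is a minimal diagnostic sketch reusing the question's ids; the extra logging and null checks are additions, and the real cause will be whatever exception logcat names (a layout that fails to inflate, a missing manifest entry, a NullPointerException, and so on).

        import android.app.Activity;
        import android.os.Bundle;
        import android.util.Log;
        import android.widget.Button;
        import android.widget.EditText;
        import android.widget.TextView;

        public class TipCalculatorDebug extends Activity {
            private static final String TAG = "TipCalculator";

            @Override
            public void onCreate(Bundle icicle) {
                super.onCreate(icicle);
                Log.i(TAG, "onCreate: inflating layout");
                setContentView(R.layout.main); // a broken layout resource fails here

                EditText mealPrice = (EditText) findViewById(R.id.mealprice);
                TextView answer = (TextView) findViewById(R.id.answer);
                Button calculate = (Button) findViewById(R.id.calculate);
                Log.i(TAG, "views: " + mealPrice + ", " + answer + ", " + calculate);

                if (mealPrice == null || answer == null || calculate == null) {
                    // findViewById returns null when the id is not in the inflated layout,
                    // and the next call on that view throws a NullPointerException.
                    Log.e(TAG, "A view id used in the code is missing from main.xml");
                }
            }
        }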

    Read the article

  • Foolishness Check: PHP Class finds Class file but not Class in the file.

    - by Daniel Bingham
    I'm at a loss here. I've defined an abstract superclass in one file and a subclass in another. I have required the super-classes file and the stack trace reports to find an include it. However, it then returns an error when it hits the 'extends' line: Fatal error: Class 'HTMLBuilder' not found in View/Markup/HTML/HTML4.01/HTML4_01Builder.php on line 7. I had this working with another class tree that uses factories a moment ago. I just added the builder layer in between the factories and the consumer. The factory layer looked almost exactly the same in terms of includes and dependencies. So that makes me think I must have done something silly that's causes the HTMLBuilder.php file to not be included correctly or interpreted correctly or some such. Here's the full stack trace (paths slightly altered): # Time Memory Function Location 1 0.0001 53904 {main}( ) ../index.php:0 2 0.0002 67600 require_once( 'View/Page.php' ) ../index.php:3 3 0.0003 75444 require_once( 'View/Sections/SectionFactory.php' ) ../Page.php:4 4 0.0003 81152 require_once( 'View/Sections/HTML/HTMLSectionFactory.php' ) ../SectionFactory.php:3 5 0.0004 92108 require_once( 'View/Sections/HTML/HTMLTitlebarSection.php' ) ../HTMLSectionFactory.php:5 6 0.0005 99716 require_once( 'View/Markup/HTML/HTMLBuilder.php' ) ../HTMLTitlebarSection.php:3 7 0.0005 103580 require_once( 'View/Markup/MarkupBuilder.php' ) ../HTMLBuilder.php:3 8 0.0006 124120 require_once( 'View/Markup/HTML/HTML4.01/HTML4_01Builder.php' ) ../MarkupBuilder.php:3 Here's the code in question: Parent class (View/Markup/HTML/HTMLBuilder.php): <?php require_once('View/Markup/MarkupBuilder.php'); abstract class HTMLBuilder extends MarkupBuilder { public abstract function getLink($text, $href); public abstract function getImage($src, $alt); public abstract function getDivision($id, array $classes=NULL, array $children=NULL); public abstract function getParagraph($text, array $classes=NULL, $id=NULL); } ?> Child Class, (View/Markup/HTML/HTML4.01/HTML4_01Builder.php): <?php require_once('HTML4_01Factory.php'); require_once('View/Markup/HTML/HTMLBuilder.php'); class HTML4_01Builder extends HTMLBuilder { private $factory; public function __construct() { $this->factory = new HTML4_01Factory(); } public function getLink($href, $text) { $link = $this->factory->getA(); $link->addAttribute('href', $href); $link->addChild($this->factory->getText($text)); return $link; } public function getImage($src, $alt) { $image = $this->factory->getImg(); $image->addAttribute('src', $src); $image->addAttribute('alt', $alt); return $image; } public function getDivision($id, array $classes=NULL, array $children=NULL) { $div = $this->factory->getDiv(); $div->setID($id); if(!empty($classes)) { $div->addClasses($classes); } if(!empty($children)) { $div->addChildren($children); } return $div; } public function getParagraph($text, array $classes=NULL, $id=NULL) { $p = $this->factory->getP(); $p->addChild($this->factory->getText($text)); if(!empty($classes)) { $p->addClasses($classes); } if(!empty($id)) { $p->setID($id); } return $p; } } ?> I would appreciate any and all ideas. I'm at a complete loss here as to what is going wrong. I'm sure it's something stupid I just can't see...

    Read the article

  • "Could not establish secure channel for SSL/TLS" in .NET CF application on smart phone

    - by Stefan Mohr
    I have a stubborn communications issue with an application running on the .NET Compact Framework 3.5 on Windows Mobile smartphones. I am constructing a web request using this code: UTF8Encoding encoding = new System.Text.UTF8Encoding(); byte[] Data = encoding.GetBytes(HttpUtility.ConstructQueryString(parameters)); httpRequest = WebRequest.Create((domain)) as HttpWebRequest; httpRequest.Timeout = 10000000; httpRequest.ReadWriteTimeout = 10000000; httpRequest.Credentials = CredentialCache.DefaultCredentials; httpRequest.Method = "POST"; httpRequest.ContentType = "application/x-www-form-urlencoded"; httpRequest.ContentLength = Data.Length; Stream SendReq = httpRequest.GetRequestStream(); SendReq.Write(Data, 0, Data.Length); SendReq.Close(); HttpWebResponse httpResponse = (HttpWebResponse)httpRequest.GetResponse(); return httpResponse.GetResponseStream(); The web service functions by receiving a JSON-encoded document as part of the URL (eg. https://site.com/ws/sync??document={"version":"1.0.0","items":[{"item_1":"item1"}]}&user=usr&password=pw), and as a response receives another JSON document as response data. This code runs fine on all emulators and PDAs running WM 5 and 6. We have seen an issue with a couple of customers running Treo smartphones (and only on the Sprint network). We have tested the code on an identical device on the AT&T network (via DeviceAnywhere) and once again the code worked as we expected. This has to be some sort of security policy on the phone, but we've been unable to determine a workaround or diagnose it thoroughly as we cannot reproduce it in house and have had to resort to getting users to assist with running test drivers for us. When this code executes, the user's device throws the following exception: System.Net.WebException Could not establish secure channel for SSL/TLS Stack trace: at System.Net.HttpWebRequest.finishGetRequestStream() at System.Net.HttpWebRequest.GetRequestStream() at OurApp.GetResponseStream(String domain, Hashtable parameters) inner exception: System.IO.IOException Authentication failed because the remote party has closed the transport stream. Stack trace: at System.Net.SslConnectionState.ClientSideHandshake() at System.Net.SslConnectionState.PerformClientHandShake() at System.Net.Connection.connect(Object ignored) at System.Threading.ThreadPool.WorkItem.doWork(Object o) at System.Threading.Timer.ring() Examining the server Apache logs shows no hits from the user's IP - I don't think the device is even attempting to send a packet before failing. If relevant, the server is running Apache on Linux and is written using the TurboGears Python framework. The server certificate is issued by a CA and is still valid. The test driver where this error was copied from was not code signed, however the same error (without the error messages) is signed with a GeoTrust certificate so we don't believe this is a code signing issue. The application installs and launches without issue on all phones - it's just establishing this SSL connection that is breaking for these users. One significant issue in troubleshooting is that there is a substantial inconvenience each time we try out a solution (need to find a "volunteer" customer), so we're really looking for a silver bullet or a better understanding of the handshaking process so we can be reasonably confident we only need to ask the user to test it one or two more times. One final mention: we have tried the sync both over ActiveSync and also over GPRS with identical results. 
Any thoughts would be greatly appreciated!

    Read the article

  • Removing Google Maps markers with jQuery after getting them from XML

    - by sebastian
    Hi there, I'm trying to create a page that contains a google map. The map is filled with markers from an xml file. I just can't figure out how to remove "old" markers, that don't match the latest user input. At the moment my js stops after the very first xml item. The clearList.push(marker); is supposed to put the generated marker away for later usage. When the user hits the search button I want all markers to be gone and use clearMarkers();. Maybe someone here can help Sebastian Here is my JavaScript: $(document).ready(function() { $("#map").css({height: 650}); var clearList = []; var myLatLng = new google.maps.LatLng(52.518143, 13.372879); MYMAP.init('#map', myLatLng, 11); $("#showmarkers").click(function(e){ clearMarkers(); MYMAP.placeMarkers('markers.xml'); }); }); function clearMarkers() { $(clearList).each(function () { this.setmap(null); }); clearList = []; } var MYMAP = { map: null, bounds: null } MYMAP.init = function(selector, latLng, zoom) { var myOptions = { zoom:zoom, center: latLng, mapTypeId: google.maps.MapTypeId.HYBRID } this.map = new google.maps.Map($(selector)[0], myOptions); this.bounds = new google.maps.LatLngBounds(); } MYMAP.placeMarkers = function(filename) { $.get(filename, function(xml){ $(xml).find("marker").each(function(){ // read values from xml for searching var platzart = $(this).find('platzart').text(); var ort = $(this).find('ort').text(); var open = $(this).find('open').text(); if (platzart =="Kunstrasen" && $('#kunstrasen').attr('checked') || platzart =="Rasen" && $('#rasen').attr('checked') || platzart =="Tartan" && $('#tartan').attr('checked') || platzart =="Boltzplatz" && $('#boltzplatz').attr('checked') ){ // read values from xml for additional info var name = $(this).find('name').text(); var plz = $(this).find('plz').text(); var note = $(this).find('note').text(); var adress = $(this).find('adress').text(); // create a new LatLng point for the marker var lat = $(this).find('lat').text(); var lng = $(this).find('lng').text(); var point = new google.maps.LatLng(parseFloat(lat),parseFloat(lng)); // extend the bounds to include the new point MYMAP.bounds.extend(point); // create new marker var marker = new google.maps.Marker({ position: point, map: MYMAP.map }); clearList.push(marker); // add onclick overlay var infoWindow = new google.maps.InfoWindow(); var html='<strong>'+name+'</strong.><br />'+platzart; google.maps.event.addListener(marker, 'click', function() { infoWindow.setContent(html); infoWindow.open(MYMAP.map, marker); }); } MYMAP.map.fitBounds(MYMAP.bounds); }); }); } Thanks in advance

    Read the article

  • gzip compression using varnish cache

    - by Ali Raza
    Im trying to provide gzip compression using varnish cache. But when I set content-encoding as gzip using my below mentioned configuration for varnish (default.vcl). Browser failed to download those content for which i set content-encoding as gzipped. Varnish configuration file: backend default { .host = "127.0.0.1"; .port = "9000"; } backend socketIO { .host = "127.0.0.1"; .port = "8083"; } acl purge { "127.0.0.1"; "192.168.15.0"/24; } sub vcl_fetch { /* If the request is for pictures, javascript, css, etc */ if (req.url ~ "^/public/" || req.url ~ "\.js"){ unset req.http.cookie; set beresp.http.Content-Encoding= "gzip"; set beresp.ttl = 86400s; set beresp.http.Cache-Control = "public, max-age=3600"; /*set the expires time to response header*/ set beresp.http.expires=beresp.ttl; /* marker for vcl_deliver to reset Age: */ set beresp.http.magicmarker = "1"; } if (!beresp.cacheable) { return (pass); } return (deliver); } sub vcl_deliver { if (resp.http.magicmarker) { /* Remove the magic marker */ unset resp.http.magicmarker; /* By definition we have a fresh object */ set resp.http.age = "0"; } if(obj.hits > 0) { set resp.http.X-Varnish-Cache = "HIT"; }else { set resp.http.X-Varnish-Cache = "MISS"; } return (deliver); } sub vcl_recv { if (req.http.x-forwarded-for) { set req.http.X-Forwarded-For = req.http.X-Forwarded-For ", " client.ip; } else { set req.http.X-Forwarded-For = client.ip; } if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { /* Non-RFC2616 or CONNECT which is weird. */ return (pipe); } # Pass requests that are not GET or HEAD if (req.request != "GET" && req.request != "HEAD") { return(pass); } #pipe websocket connections directly to Node.js if (req.http.Upgrade ~ "(?i)websocket") { set req.backend = socketIO; return (pipe); } # Properly handle different encoding types if (req.http.Accept-Encoding) { if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|js|css)$") { # No point in compressing these remove req.http.Accept-Encoding; } elsif (req.http.Accept-Encoding ~ "gzip") { set req.http.Accept-Encoding = "gzip"; } elsif (req.http.Accept-Encoding ~ "deflate") { set req.http.Accept-Encoding = "deflate"; } else { # unkown algorithm remove req.http.Accept-Encoding; } } # allow PURGE from localhost and 192.168.15... if (req.request == "PURGE") { if (!client.ip ~ purge) { error 405 "Not allowed."; } return (lookup); } return (lookup); } sub vcl_hit { if (req.request == "PURGE") { purge_url(req.url); error 200 "Purged."; } } sub vcl_miss { if (req.request == "PURGE") { purge_url(req.url); error 200 "Purged."; } } sub vcl_pipe { if (req.http.upgrade) { set bereq.http.upgrade = req.http.upgrade; } } Response Header: Cache-Control:public, max-age=3600 Connection:keep-alive Content-Encoding:gzip Content-Length:11520 Content-Type:application/javascript Date:Fri, 06 Apr 2012 04:53:41 GMT ETag:"1330493670000--987570445" Last-Modified:Wed, 29 Feb 2012 05:34:30 GMT Server:Play! Framework;1.2.x-localbuild;dev Via:1.1 varnish X-Varnish:118464579 118464571 X-Varnish-Cache:HIT age:0 expires:86400.000 Any suggestion on how to fix it and how to provide gzip compression using varnish.

    Read the article

  • extensible database design: automatic ALTER TABLE or serialize() field BLOB ?

    - by mario
    I want an adaptable database scheme. But still use a simple table data gateway in my application, where I just pass an $data[] array for storing. The basic columns are settled in the initial table scheme. There will however arise a couple of meta fields later (ca 10-20). I want some flexibility there and not adapt the database manually each time, or -worse- change the application logic just because of new fields. So now there are two options which seem workable yet not overkill. But I'm not sure about the scalability or database drawbacks. (1) Automatic ALTER TABLE. Whenever the $data array is to be saved, the keys are compared against the current database columns. New columns get defined before the $data is INSERTed into the table. Actually seems simple enough in test code: function save($data, $table="forum") { // columns if ($new_fields = array_diff(array_keys($data), known_fields($table))) { extend_schema($table, $new_fields, $data); } // save $columns = implode("`, `", array_keys($data)); $qm = str_repeat(",?", count(array_keys($data)) - 1); echo ("INSERT INTO `$table` (`$columns`) VALUES (?$qm);"); function known_fields($table) { return unserialize(@file_get_contents("db:$table")) ?: array("id"); function extend_schema($table, $new_fields, $data) { foreach ($new_fields as $field) { echo("ALTER TABLE `$table` ADD COLUMN `$field` VARCHAR;"); Since it is mostly meta information fields, adding them just as VARCHAR seems sufficient. Nobody will query by them anyway. So the database is really just used as storage here. However, while I might want to add a lot of new $data fields on the go, they will not always be populated. (2) serialize() fields into BLOB. Any new/extraneous meta fields could be opaque to the database. Simply sorting out the virtual fields from the real database columns is simple. And the meta fields can just be serialize()d into a blob/text field then: function ext_save($data, $table="forum") { $db_fields = array("id", "content", "flags", "ext"); // disjoin foreach (array_diff(array_keys($data),$db_fields) as $key) { $data["ext"][$key] = $data[$key]; unset($data[$key]); } $data["ext"] = serialize($data["ext"]); Unserializing and unpacking this 'ext' column on read queries is a minor overhead. The advantage is that there won't be any sparsely filled columns in the database, so I guess it's compacter and faster than the AUTO ALTER TABLE approach. Of course, this method prevents ever using one of the new fields in a WHERE or GROUP BY clause. But I think none of the possible meta fields (user_agent, author_ip, author_img, votes, hits, last_modified, ..) would/should ever be used there anyway. So I currently prefer the 'ext' blob approach, even if it's a one-way ticket. How are such columns called usually? (looking for examples/doc) Would you use XML serialization for (very theoretical) in-database queries? The adapting table scheme seems a "cleaner" interface, even if most columns might remain empty then. How does that impact speed? How many such sparse VARCHAR fields can MySQL/innodb stomach? But most importantly: Is there any standard implementation for this? A pseudo ORM with automatic ALTER TABLE tricks? Storing a simple column list seems workable, but something like pdo::getColumnMeta would be more robust.
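    For what it's worth, here is a minimal sketch of option (2) in Java/JDBC (an analogy only; the question's code is PHP, and the table name, column list and the placeholder serializer below are all assumptions). The point is how little the gateway changes when the extra meta fields are packed into one opaque column:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.util.HashMap;
        import java.util.Map;
        import java.util.Set;

        public class ForumGateway {
            private static final Set<String> DB_FIELDS = Set.of("id", "content", "flags");

            public void save(Connection conn, Map<String, Object> data) throws SQLException {
                // Disjoin: anything that is not a real column goes into the "ext" map.
                Map<String, Object> ext = new HashMap<>();
                for (String key : new HashMap<>(data).keySet()) {
                    if (!DB_FIELDS.contains(key)) {
                        ext.put(key, data.remove(key));
                    }
                }
                String sql = "INSERT INTO forum (id, content, flags, ext) VALUES (?, ?, ?, ?)";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setObject(1, data.get("id"));
                    ps.setObject(2, data.get("content"));
                    ps.setObject(3, data.get("flags"));
                    ps.setString(4, toSerializedForm(ext)); // opaque blob, not queryable
                    ps.executeUpdate();
                }
            }

            private String toSerializedForm(Map<String, Object> ext) {
                return ext.toString(); // placeholder; real code would use JSON or similar
            }
        }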

    Read the article

  • JPA2 adding referential constraint to table complicates criteria query with lazy fetch, need advice

    - by Quaternion
    Following is a lot of writing for what I feel is a pretty simple issue. Root of issue is my ignorance, not looking so much for code but advice. Table: Ininvhst (Inventory-schema inventory history) column ihtran (inventory history transfer code) using an old entity mapping I have: @Basic(optional = false) @Column(name = "IHTRAN") private String ihtran; ihtran is really a foreign key to table Intrnmst ("Inventory Transfer Master" which contains a list of "transfer codes"). This was not expressed in the database so placed a referential constraint on Ininvhst re-generating JPA2 entity classes produced: @JoinColumn(name = "IHTRAN", referencedColumnName = "TMCODE", nullable = false) @ManyToOne(optional = false) private Intrnmst intrnmst; Now previously I was using JPA2 to select the records/(Ininvhst entities) from the Ininvhst table where "ihtran" was one of a set of values. I used in.value() to do this... here is a snippet: cq = cb.createQuery(Ininvhst.class); ... In in = cb.in(transactionType); //Get in expression for transacton types for (String s : transactionTypes) { //has a value in = in.value(s);//check if the strings we are looking for exist in the transfer master } predicateList.add(in); My issue is that the Ininvhst used to contain a string called ihtran but now it contains Ininvhst... So I now need a path expression: this.predicateList = new ArrayList<Predicate>(); if (transactionTypes != null && transactionTypes.size() > 0) { //list of strings has some values Path<Intrnmst> intrnmst = root.get(Ininvhst_.intrnmst); //get transfermaster from Ininvhst Path<String> transactionType = intrnmst.get(Intrnmst_.tmcode); //get transaction types from transfer master In<String> in = cb.in(transactionType); //Get in expression for transacton types for (String s : transactionTypes) { //has a value in = in.value(s);//check if the strings we are looking for exist in the transfer master } predicateList.add(in); } Can I add ihtran back into the entity along with a join column that is both references "IHTRAN"? Or should I use a projection to somehow return Ininvhst along with the ihtran string which is now part of the Intrnmst entity. Or should I use a projection to return Ininvhst and somehow limit Intrnmst just just the ihtran string. Further information: I am using the resulting list of selected Ininvhst objects in a web application, the class which contains the list of Ininvhst objects is transformed into a json object. There are probably quite a few serialization methods that would navigate the object graph the problem is that my current fetch strategy is lazy so it hits the join entity (Intrnmst intrnmst) and there is no Entity Manager available at that point. At this point I have prevented the object from serializing the join column but now I am missing a critical piece of data. I think I've said too much but not knowing enough I don't know what you JPA experts need. What I would like is my original object to have both a string object and be able to join on the same column (ihtran) and have it as a string too, but if this isn't possible or advisable I want to hear what I should do and why. Pseudo code/English is more than fine.
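    One common way out (a pattern worth considering, not necessarily the poster's final choice) is to map the IHTRAN column twice: keep the new @ManyToOne for queries that need the join, and re-add the plain String as a read-only mapping, so existing criteria code and the JSON serialization can keep using the bare transfer code without touching the lazy Intrnmst proxy. A minimal sketch (the @Id and the remaining columns are omitted):

        import javax.persistence.Basic;
        import javax.persistence.Column;
        import javax.persistence.Entity;
        import javax.persistence.JoinColumn;
        import javax.persistence.ManyToOne;

        @Entity
        public class Ininvhst {

            // ... @Id and the other columns omitted ...

            @ManyToOne(optional = false)
            @JoinColumn(name = "IHTRAN", referencedColumnName = "TMCODE", nullable = false)
            private Intrnmst intrnmst;

            // Read-only duplicate mapping of the same foreign-key column.
            @Basic(optional = false)
            @Column(name = "IHTRAN", insertable = false, updatable = false)
            private String ihtran;

            public String getIhtran() {
                return ihtran; // usable directly in criteria queries: root.get(Ininvhst_.ihtran)
            }
        }

    With that in place the original In-expression over the plain string keeps working, and only queries that genuinely need transfer-master data have to join through the association.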

    Read the article

  • Do you know of a C macro to compute Unix time and date?

    - by Alexis Wilke
    I'm wondering if someone knows/has a C macro to compute a static Unix time from a hard coded date and time as in: time_t t = UNIX_TIMESTAMP(2012, 5, 10, 9, 26, 13); I'm looking into that because I want to have a numeric static timestamp. This will be done hundred of times throughout the software, each time with a different date, and I want to make sure it is fast because it will run hundreds of times every second. Converting dates that many times would definitively slow down things (i.e. calling mktime() is slower than having a static number compiled in place, right?) [made an update to try to render this paragraph clearer, Nov 23, 2012] Update I want to clarify the question with more information about the process being used. As my server receives requests, for each request, it starts a new process. That process is constantly updated with new plugins and quite often such updates require a database update. Those must be run only once. To know whether an update is necessary, I want to use a Unix date (which is better than using a counter because a counter is much more likely to break once in a while.) The plugins will thus receive an update signal and have their on_update() function called. There I want to do something like this: void some_plugin::on_update(time_t last_update) { if(last_update < UNIX_TIMESTAMP(2010, 3, 22, 20, 9, 26)) { ...run update... } if(last_update < UNIX_TIMESTAMP(2012, 5, 10, 9, 26, 13)) { ...run update... } // as many test as required... } As you can see, if I have to compute the unix timestamp each time, this could represent thousands of calls per process and if you receive 100 hits a second x 1000 calls, you wasted 100,000 calls when you could have had the compiler compute those numbers once at compile time. Putting the value in a static variable is of no interest because this code will run once per process run. Note that the last_update variable changes depending on the website being hit (it comes from the database.) Code Okay, I got the code now: // helper (Days in February) #define _SNAP_UNIX_TIMESTAMP_FDAY(year) \ (((year) % 400) == 0 ? 29LL : \ (((year) % 100) == 0 ? 28LL : \ (((year) % 4) == 0 ? 29LL : \ 28LL))) // helper (Days in the year) #define _SNAP_UNIX_TIMESTAMP_YDAY(year, month, day) \ ( \ /* January */ static_cast<qint64>(day) \ /* February */ + ((month) >= 2 ? 31LL : 0LL) \ /* March */ + ((month) >= 3 ? _SNAP_UNIX_TIMESTAMP_FDAY(year) : 0LL) \ /* April */ + ((month) >= 4 ? 31LL : 0LL) \ /* May */ + ((month) >= 5 ? 30LL : 0LL) \ /* June */ + ((month) >= 6 ? 31LL : 0LL) \ /* July */ + ((month) >= 7 ? 30LL : 0LL) \ /* August */ + ((month) >= 8 ? 31LL : 0LL) \ /* September */+ ((month) >= 9 ? 31LL : 0LL) \ /* October */ + ((month) >= 10 ? 30LL : 0LL) \ /* November */ + ((month) >= 11 ? 31LL : 0LL) \ /* December */ + ((month) >= 12 ? 30LL : 0LL) \ ) #define SNAP_UNIX_TIMESTAMP(year, month, day, hour, minute, second) \ ( /* time */ static_cast<qint64>(second) \ + static_cast<qint64>(minute) * 60LL \ + static_cast<qint64>(hour) * 3600LL \ + /* year day (month + day) */ (_SNAP_UNIX_TIMESTAMP_YDAY(year, month, day) - 1) * 86400LL \ + /* year */ (static_cast<qint64>(year) - 1970LL) * 31536000LL \ + ((static_cast<qint64>(year) - 1969LL) / 4LL) * 86400LL \ - ((static_cast<qint64>(year) - 1901LL) / 100LL) * 86400LL \ + ((static_cast<qint64>(year) - 1601LL) / 400LL) * 86400LL ) WARNING: Do not use these macros to dynamically compute a date. It is SLOWER than mktime(). 
This being said, if you have a hard coded date, then the compiler will compute the time_t value at compile time. Slower to compile, but faster to execute over and over again.

    Read the article

  • How to search Multiple Sites using Lucene Search engine API?

    - by Wael Salman
    I hope that someone can help me as soon as possible :-) I would like to know how we can search multiple sites using Lucene (all sites are in one index). I have succeeded in searching one website, and in indexing multiple sites, however I am not able to search all websites at once. Consider this method that I have: private void PerformSearch() { DateTime start = DateTime.Now; //Create the Searcher object string strIndexDir = Server.MapPath("index") + @"\" + mstrURL; IndexSearcher objSearcher = new IndexSearcher(strIndexDir); //Parse the query, "text" is the default field to search Query objQuery = QueryParser.Parse(mstrQuery, "text", new StandardAnalyzer()); //Create the result DataTable mobjDTResults.Columns.Add("title", typeof(string)); mobjDTResults.Columns.Add("path", typeof(string)); mobjDTResults.Columns.Add("score", typeof(string)); mobjDTResults.Columns.Add("sample", typeof(string)); mobjDTResults.Columns.Add("explain", typeof(string)); //Perform search and get hit count Hits objHits = objSearcher.Search(objQuery); mintTotal = objHits.Length(); //Create Highlighter QueryHighlightExtractor highlighter = new QueryHighlightExtractor(objQuery, new StandardAnalyzer(), "<B>", "</B>"); //Initialize "Start At" variable mintStartAt = GetStartAt(); //How many items should we show? int intResultsCt = GetSmallerOf(mintTotal, mintMaxResults + mintStartAt); //Loop through results and display for (int intCt = mintStartAt; intCt < intResultsCt; intCt++) { //Get the document from the results index Document doc = objHits.Doc(intCt); //Get the document's ID and set the cache location string strID = doc.Get("id"); string strLocation = ""; if (mstrURL.Substring(0,3) == "www") strLocation = Server.MapPath("cache") + @"\" + mstrURL + @"\" + strID + ".htm"; else strLocation = doc.Get("path") + doc.Get("filename"); //Load the HTML page from cache string strPlainText; using (StreamReader sr = new StreamReader(strLocation, System.Text.Encoding.Default)) { strPlainText = ParseHTML(sr.ReadToEnd()); } //Add result to results datagrid DataRow row = mobjDTResults.NewRow(); if (mstrURL.Substring(0,3) == "www") row["title"] = doc.Get("title"); else row["title"] = doc.Get("filename"); row["path"] = doc.Get("path"); row["score"] = String.Format("{0:f}", (objHits.Score(intCt) * 100)) + "%"; row["sample"] = highlighter.GetBestFragments(strPlainText, 200, 2, "..."); Explanation objExplain = objSearcher.Explain(objQuery, intCt); row["explain"] = objExplain.ToHtml(); mobjDTResults.Rows.Add(row); } objSearcher.Close(); //Finalize results information mTsDuration = DateTime.Now - start; mintFromItem = mintStartAt + 1; mintToItem = GetSmallerOf(mintStartAt + mintMaxResults, mintTotal); } As you can see, I use the site URL 'mstrURL' when I create the searcher: string strIndexDir = Server.MapPath("index") + @"\" + mstrURL; How can I do the same when I want to search multiple sites? Actually I am using the code from http://www.keylimetie.com/blog/2005/8/4/lucenenet/
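One common way to run a single query against several per-site indexes - not something from the original code, just a sketch assuming the same (older) Lucene.Net version as the snippet above, where the static QueryParser.Parse and the Hits API are available - is to wrap one IndexSearcher per index folder in a MultiSearcher; the folder names and the "text" field are placeholders:

using Lucene.Net.Analysis.Standard;
using Lucene.Net.QueryParsers;
using Lucene.Net.Search;

public class MultiSiteSearch
{
    // Runs one query across all per-site index folders (e.g. "index\www.site-a.com").
    public static Hits SearchAllSites(string indexRoot, string[] siteUrls, string queryText)
    {
        Searchable[] searchers = new Searchable[siteUrls.Length];
        for (int i = 0; i < siteUrls.Length; i++)
        {
            searchers[i] = new IndexSearcher(System.IO.Path.Combine(indexRoot, siteUrls[i]));
        }

        // MultiSearcher merges hits and scores across all of the underlying indexes.
        MultiSearcher multiSearcher = new MultiSearcher(searchers);
        Query query = QueryParser.Parse(queryText, "text", new StandardAnalyzer());
        return multiSearcher.Search(query);
    }
}

The result loop from PerformSearch() should work unchanged, since a MultiSearcher returns an ordinary Hits object; closing the MultiSearcher should also close the wrapped searchers.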

    Read the article

  • qemu-kvm virtual machine virtio network freeze under load

    - by Rick Koshi
    I'm having a problem with my virtual machines, where the network will freeze under heavy load. I'm using CentOS 6.2 as both host and guest, not using libvirt, just running qemu-kvm directly as follows: /usr/libexec/qemu-kvm \ -drive file=/data2/vm/rb-dev2-www1-vm.img,index=0,media=disk,cache=none,if=virtio \ -boot order=c \ -m 2G \ -smp cores=1,threads=2 \ -vga std \ -name rb-dev2-www1-vm \ -vnc :84,password \ -net nic,vlan=0,macaddr=52:54:20:00:00:54,model=virtio \ -net tap,vlan=0,ifname=tap84,script=/etc/qemu-ifup \ -monitor unix:/var/run/vm/rb-dev2-www1-vm.mon,server,nowait \ -rtc base=utc \ -device piix3-usb-uhci \ -device usb-tablet /etc/qemu-ifup (used by the above command) is a very simple script, containing the following: #!/bin/sh sudo /sbin/ifconfig $1 0.0.0.0 promisc up sudo /usr/sbin/brctl addif br0 $1 sleep 2 And here's the info on br0 and other interfaces: avl-host3 14# brctl show bridge name bridge id STP enabled interfaces br0 8000.180373f5521a no bond0 tap84 virbr0 8000.525400858961 yes virbr0-nic avl-host3 15# ip addr show 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: em1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000 link/ether 18:03:73:f5:52:1a brd ff:ff:ff:ff:ff:ff 3: em2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000 link/ether 18:03:73:f5:52:1a brd ff:ff:ff:ff:ff:ff 4: em3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000 link/ether 18:03:73:f5:52:1e brd ff:ff:ff:ff:ff:ff 5: em4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000 link/ether 18:03:73:f5:52:20 brd ff:ff:ff:ff:ff:ff 6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether 18:03:73:f5:52:1a brd ff:ff:ff:ff:ff:ff inet6 fe80::1a03:73ff:fef5:521a/64 scope link valid_lft forever preferred_lft forever 7: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 18:03:73:f5:52:1a brd ff:ff:ff:ff:ff:ff inet 172.16.1.46/24 brd 172.16.1.255 scope global br0 inet6 fe80::1a03:73ff:fef5:521a/64 scope link valid_lft forever preferred_lft forever 8: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 52:54:00:85:89:61 brd ff:ff:ff:ff:ff:ff inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0 9: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 500 link/ether 52:54:00:85:89:61 brd ff:ff:ff:ff:ff:ff 12: tap84: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500 link/ether ba:e8:9b:2a:ff:48 brd ff:ff:ff:ff:ff:ff inet6 fe80::b8e8:9bff:fe2a:ff48/64 scope link valid_lft forever preferred_lft forever bond0 is a bond of em1 and em2. virbr0 and virbr0-nic are vestigial interfaces left over from CentOS's default installation. They are unused (as far as I know). The guest runs perfectly until I run a large 'rsync', when the network will freeze after some seemingly-random time (usually under a minute). When it freezes, there is no network activity in or out of the guest. I can still connect to the guest's console via vnc, but it is unable to speak out its network interface. Any attempt to 'ping' from the guest gives a "Destination Host Unreachable" error for 3/4 packets and no reply for every fourth packet. 
Sometimes (perhaps two thirds of the time), I can bring the interface back to life by doing a "service network restart" from the guest's console. If this works (and if I do it before the rsync times out), the rsync will resume. Usually it will freeze again within a minute or two. If I repeat, the rsync will eventually finish, and I presume the machine goes back to waiting for another period of heavy load. Throughout the whole process, there are no console errors or relevant (that I can see) syslog messages on either guest or host machine. If the "service network restart" doesn't work the first time, trying again (and again and again) never seems to work. The command completes normally, with normal output, but the interface stays frozen. However, a soft reboot of the guest machine (without restarting qemu-kvm) always seems to bring it back. I am aware of the "lowest mac address" assignment problem, where the bridge takes on the mac address of the slave interface with the lowest mac address. This causes temporary network freezes, but is definitely not what's happening for me. My freezes are permanent until manual intervention, and you can see from the 'ip addr show' output above that the mac address being used by br0 is that of the physical ethernet. There are no other virtual machines running on the host. I've verified that each virtual machine on the subnet has its own unique mac address. I have rebuilt the guest machine several times, and I have tried this on three different host machines (identical hardware, built identically). Oddly, I do have one virtual host (the second of this series) which never seemed to have a problem. It never had its network freeze when it was running the same rsync during its build. It's particularly odd because it was the second build. The first, on a different host, did have the freezing problem, but the second did not. I assumed at the time that I had done something wrong with the first build, and that the problem was resolved. Unfortunately, the problem reappeared when I built the third VM. Also unfortunately, I can't do many tests with the working VM, as it's now in production use, and I'm hoping I can find the cause of this issue before that machine starts having problems. It's possible that I just got really lucky while running the rsync on the working machine, and that one time it didn't freeze. Of course it's possible that I somehow changed the build scripts without realizing it and re-broke something, but I can't find any such thing. In any case, I'm hoping someone has some idea what could cause this. Addendum: Preliminary tests suggest that I don't have the problem if I substitute e1000 for virtio in the first -net flag to qemu-kvm. I don't consider this a solution, but it is suitable for a stopgap. Has anyone else had (or better yet, solved) this problem with the virtio network driver?

    Read the article

  • Backing up my data causes my server to crash using Symantec Backup Exec 12, or How I Came to Loathe Irony

    - by Kyle Noland
    I have a Dell PowerEdge 2850 running Windows Server 2003. It is the primary file server for one of my clients. I have another server also running Windows Server 2003 that acts as the core media server for Symantec Backup Exec 12. I recently upgraded from Backup Exec 11d to 12. This upgrade was necessary because we also just upgraded from Exchange 2003 to Exchange 2007. After the upgrade I had to push-install the new version 12 Backup Exec Remote Agents to each of the servers I am backing up (about 6 total). 5 of my servers are doing just fine, faithfully completing backups every night. My file server routinely crashes. Observations: When the server crashes, it does not blue screen, it just locks up completely. Even the mouse is unresponsive. If you leave the server locked up long enough, it will eventually reboot itself and hang on the Windows splash screen. There is absolutely zero useful Event Viewer evidence of a problem. The logs go from routine logging to an Unexplained Shutdown Event the next morning when I have to hard reset the server to get it to boot. 90% of the time the server does not boot cleanly, it hangs on the Windows splash screen. I don't have any light to shed here. When the server hangs all I can do is hard reset it and try again. Even after a successful boot and chkdsk /r operation, if you reboot the machine, you have a 90% chance it won't back up again cleanly. The back story: This server started crashing during nightly backups about a month ago. I tried everything I could think of to troubleshoot the problem and eventually had to give up because I could not keep coming to the office at 4 AM to try to get the server back online. One Friday I got lucky and the server stayed up for its entire full backup. I took this opportunity to restore the full backup to a temporary server I set up and switched all my users to the temporary. Then I reloaded the ailing file server. I kept all my users on the temporary file server for about 3 weeks. I installed the same Backup Exec Remote Agent and Trend Micro A/V client on the temporary server that I was using on the regular file server. During this time, I had absolutely no problems backing up the temporary server. I tested the reloaded file server extensively. I rebooted the server once an hour every day for 3 weeks trying to make it fail. It never did. I felt confident that the reload was the answer to my problems. I moved all of the data from the temporary server back to the regular server. I got 3 nightly backups out of it before it locked up again and started the familiar failure to boot cleanly behavior. This weekend I decided to monitor the file server through the entire backup job. I RDPd into the file server and also into the server running Backup Exec. On the file server I opened the Task Manager so I could view the processes and watch CPU and memory usage. Everything was running smoothly for about 60GB worth of backup. Then I noticed that the byte count of the backup job in Backup Exec had stopped progressing. I looked back over at my RDP session into the file server, and I was getting real time updates about CPU and memory usage still - both nearly 0%, which is unusual. Backups usually hover around 40% usage for the duration of the backup job. Let me reiterate this point: The screen was refreshing and I was getting real time Task Manager updates - until I clicked on the Start menu. The screen went black and the server locked up. 
In truth, I think the server had already locked up; the video card just hadn't figured it out yet. I went back into my bag of tricks: driving to the office and hard resetting the server over and over again when it hangs up at the Windows splash screen. I did this for 2 hours without getting a successful boot. I started panicking because I did not have a decent backup to use to get everything back onto the working temporary file server. Once I exhausted everything I knew to do, I took a deep breath, booted to the Windows Server 2003 CD and performed a repair installation of Windows. The server came back up fine, with all of my data intact. I can now reboot the server at will and it will come back up cleanly. The problem is that I'm afraid as soon as I try to back that data up again I will be back at square one. So let me sum things up: Here is what I've done so far to troubleshoot this server: Deleted and recreated the RAID 5 sets. Initialized the drives. Reloaded the server with a fresh Server 2003 install. Confirmed with Dell that I have installed the latest, Dell approved BIOS and NIC drivers. Uninstalled / reinstalled the Backup Exec Remote Agent. Uninstalled the Trend Micro A/V client. Configured the server not to reboot itself after a blue screen so I can see any stop error. I used to think the server was blue screening, but since I enabled this setting I now know that the server just completely locks up. Ran chkdsk /r from the Windows Recovery Console. Several errors were found and corrected, but did not help my problem. Help confirm or deny the following assumptions: There are two problems at work here. Why the server is locking up in the first place, and why the server won't boot cleanly after a lockup. This is ultimately a software problem. The server works fine and can be rebooted cleanly all day long - until the first lockup - following a fresh OS load or even a Repair installation. This is not a problem with Backup Exec in general. All of my other servers back up just fine. For the record, all of the other servers run Server 2003, and some of them house more data than the file server in question here. Any help is appreciated. The irony is almost too much to bear. Backing up my data is what is jeopardizing it.

    Read the article

  • WCF WS-Security and WSE Nonce Authentication

    - by Rick Strahl
    WCF makes it fairly easy to access WS-* Web Services, except when you run into a service format that it doesn't support. Even then WCF provides a huge amount of flexibility to make the service clients work, however the proper interfaces to make that happen are not easy to discover and are for the most part undocumented unless you're lucky enough to run into a blog, forum or StackOverflow post on the matter. This is definitely true for the Password Nonce as part of the WS-Security/WSE protocol, which is not natively supported in WCF. Specifically I had a need to create a WCF message on the client that includes a WS-Security header that looks like this from their spec document:<soapenv:Header> <wsse:Security soapenv:mustUnderstand="1" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <wsse:UsernameToken wsu:Id="UsernameToken-8" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"> <wsse:Username>TeStUsErNaMe1</wsse:Username> <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText" >TeStPaSsWoRd1</wsse:Password> <wsse:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" >f8nUe3YupTU5ISdCy3X9Gg==</wsse:Nonce> <wsu:Created>2011-05-04T19:01:40.981Z</wsu:Created> </wsse:UsernameToken> </wsse:Security> </soapenv:Header> Specifically, the Nonce and Created keys are what WCF doesn't create or have built-in formatting for. Why is there a nonce? My first thought here was WTF? The username and password are there in clear text, what does the Nonce accomplish? The Nonce and Created keys are part of the WSE Security specification and are meant to allow the server to detect and prevent replay attacks. The hashed nonce should be unique per request, which the server can store and check before running another request, thus ensuring that a request is not replayed with exactly the same values. Basic ServiceUtl Import - not much Luck The first thing I did when I imported this service with a service reference was to simply import it as a Service Reference. The Add Service Reference import automatically detects that WS-Security is required and appropriately adds the WS-Security to the basicHttpBinding in the config file:<?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <bindings> <basicHttpBinding> <binding name="RealTimeOnlineSoapBinding"> <security mode="Transport" /> </binding> <binding name="RealTimeOnlineSoapBinding1" /> </basicHttpBinding> </bindings> <client> <endpoint address="https://notarealurl.com:443/services/RealTimeOnline" binding="basicHttpBinding" bindingConfiguration="RealTimeOnlineSoapBinding" contract="RealTimeOnline.RealTimeOnline" name="RealTimeOnline" /> </client> </system.serviceModel> </configuration> If I run this as is, using code like this:var client = new RealTimeOnlineClient(); client.ClientCredentials.UserName.UserName = "TheUsername"; client.ClientCredentials.UserName.Password = "ThePassword"; … I get nothing in terms of WS-Security headers. The request is sent, but the binding expects transport level security to be applied, rather than message level security.
To fix this so that a WS-Security message header is sent the security mode can be changed to: <security mode="TransportWithMessageCredential" /> Now if I re-run I at least get a WS-Security header which looks like this:<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"> <s:Header> <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <u:Timestamp u:Id="_0"> <u:Created>2012-11-24T02:55:18.011Z</u:Created> <u:Expires>2012-11-24T03:00:18.011Z</u:Expires> </u:Timestamp> <o:UsernameToken u:Id="uuid-18c215d4-1106-40a5-8dd1-c81fdddf19d3-1"> <o:Username>TheUserName</o:Username> <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText" >ThePassword</o:Password> </o:UsernameToken> </o:Security> </s:Header> Closer! Now the WS-Security header is there along with a timestamp field (which might not be accepted by some WS-Security expecting services), but there's no Nonce or created timestamp as required by my original service. Using a CustomBinding instead My next try was to go with a CustomBinding instead of basicHttpBinding as it allows a bit more control over the protocol and transport configurations for the binding. Specifically I can explicitly specify the message protocol(s) used. Using configuration file settings here's what the config file looks like:<?xml version="1.0"?> <configuration> <system.serviceModel> <bindings> <customBinding> <binding name="CustomSoapBinding"> <security includeTimestamp="false" authenticationMode="UserNameOverTransport" defaultAlgorithmSuite="Basic256" requireDerivedKeys="false" messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10"> </security> <textMessageEncoding messageVersion="Soap11"></textMessageEncoding> <httpsTransport maxReceivedMessageSize="2000000000"/> </binding> </customBinding> </bindings> <client> <endpoint address="https://notrealurl.com:443/services/RealTimeOnline" binding="customBinding" bindingConfiguration="CustomSoapBinding" contract="RealTimeOnline.RealTimeOnline" name="RealTimeOnline" /> </client> </system.serviceModel> <startup> <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> </startup> </configuration> This ends up creating a cleaner header that's missing the timestamp field which can cause some services problems. The WS-Security header output generated with the above looks like this:<s:Header> <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <o:UsernameToken u:Id="uuid-291622ca-4c11-460f-9886-ac1c78813b24-1"> <o:Username>TheUsername</o:Username> <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText" >ThePassword</o:Password> </o:UsernameToken> </o:Security> </s:Header> This is closer as it includes only the username and password. The key here is the protocol for WS-Security:messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10" which explicitly specifies the protocol version. There are several variants of this specification but none of them seem to support the nonce unfortunately. 
This protocol does allow for optional omission of the Nonce and created timestamp provided (which effectively makes those keys optional). With some services I tried that requested a Nonce just using this protocol actually worked where the default basicHttpBinding failed to connect, so this is a possible solution for access to some services. Unfortunately for my target service that was not an option. The nonce has to be there. Creating Custom ClientCredentials As it turns out WCF doesn't have support for the Digest Nonce as part of WS-Security, and so as far as I can tell there's no way to do it just with configuration settings. I did a bunch of research on this trying to find workarounds for this, and I did find a couple of entries on StackOverflow as well as on the MSDN forums. However, none of these are particularily clear and I ended up using bits and pieces of several of them to arrive at a working solution in the end. http://stackoverflow.com/questions/896901/wcf-adding-nonce-to-usernametoken http://social.msdn.microsoft.com/Forums/en-US/wcf/thread/4df3354f-0627-42d9-b5fb-6e880b60f8ee The latter forum message is the more useful of the two (the last message on the thread in particular) and it has most of the information required to make this work. But it took some experimentation for me to get this right so I'll recount the process here maybe a bit more comprehensively. In order for this to work a number of classes have to be overridden: ClientCredentials ClientCredentialsSecurityTokenManager WSSecurityTokenizer The idea is that we need to create a custom ClientCredential class to hold the custom properties so they can be set from the UI or via configuration settings. The TokenManager and Tokenizer are mainly required to allow the custom credentials class to flow through the WCF pipeline and eventually provide custom serialization. 
Here are the three classes required and their full implementations:public class CustomCredentials : ClientCredentials { public CustomCredentials() { } protected CustomCredentials(CustomCredentials cc) : base(cc) { } public override System.IdentityModel.Selectors.SecurityTokenManager CreateSecurityTokenManager() { return new CustomSecurityTokenManager(this); } protected override ClientCredentials CloneCore() { return new CustomCredentials(this); } } public class CustomSecurityTokenManager : ClientCredentialsSecurityTokenManager { public CustomSecurityTokenManager(CustomCredentials cred) : base(cred) { } public override System.IdentityModel.Selectors.SecurityTokenSerializer CreateSecurityTokenSerializer(System.IdentityModel.Selectors.SecurityTokenVersion version) { return new CustomTokenSerializer(System.ServiceModel.Security.SecurityVersion.WSSecurity11); } } public class CustomTokenSerializer : WSSecurityTokenSerializer { public CustomTokenSerializer(SecurityVersion sv) : base(sv) { } protected override void WriteTokenCore(System.Xml.XmlWriter writer, System.IdentityModel.Tokens.SecurityToken token) { UserNameSecurityToken userToken = token as UserNameSecurityToken; string tokennamespace = "o"; DateTime created = DateTime.Now; string createdStr = created.ToString("yyyy-MM-ddThh:mm:ss.fffZ"); // unique Nonce value - encode with SHA-1 for 'randomness' // in theory the nonce could just be the GUID by itself string phrase = Guid.NewGuid().ToString(); var nonce = GetSHA1String(phrase); // in this case password is plain text // for digest mode password needs to be encoded as: // PasswordAsDigest = Base64(SHA-1(Nonce + Created + Password)) // and profile needs to change to //string password = GetSHA1String(nonce + createdStr + userToken.Password); string password = userToken.Password; writer.WriteRaw(string.Format( "<{0}:UsernameToken u:Id=\"" + token.Id + "\" xmlns:u=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">" + "<{0}:Username>" + userToken.UserName + "</{0}:Username>" + "<{0}:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText\">" + password + "</{0}:Password>" + "<{0}:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">" + nonce + "</{0}:Nonce>" + "<u:Created>" + createdStr + "</u:Created></{0}:UsernameToken>", tokennamespace)); } protected string GetSHA1String(string phrase) { SHA1CryptoServiceProvider sha1Hasher = new SHA1CryptoServiceProvider(); byte[] hashedDataBytes = sha1Hasher.ComputeHash(Encoding.UTF8.GetBytes(phrase)); return Convert.ToBase64String(hashedDataBytes); } } Realistically only the CustomTokenSerializer has any significant code in. The code there deals with actually serializing the custom credentials using low level XML semantics by writing output into an XML writer. I can't take credit for this code - most of the code comes from the MSDN forum post mentioned earlier - I made a few adjustments to simplify the nonce generation and also added some notes to allow for PasswordDigest generation. Per spec the nonce is nothing more than a unique value that's supposed to be 'random'. I'm thinking that this value can be any string that's unique and a GUID on its own probably would have sufficed. Comments on other posts that GUIDs can be potentially guessed are highly exaggerated to say the least IMHO. 
To satisfy even that aspect though I added the SHA1 encryption and binary decoding to give a more random value that would be impossible to 'guess'. The original example from the forum post used another level of encoding and decoding to string in between - but that really didn't accomplish anything but extra overhead. The header output generated from this looks like this:<s:Header> <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <o:UsernameToken u:Id="uuid-f43d8b0d-0ebb-482e-998d-f544401a3c91-1" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"> <o:Username>TheUsername</o:Username> <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">ThePassword</o:Password> <o:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" >PjVE24TC6HtdAnsf3U9c5WMsECY=</o:Nonce> <u:Created>2012-11-23T07:10:04.670Z</u:Created> </o:UsernameToken> </o:Security> </s:Header> which is exactly as it should be. Password Digest? In my case the password is passed in plain text over an SSL connection, so there's no digest required so I was done with the code above. Since I don't have a service handy that requires a password digest,  I had no way of testing the code for the digest implementation, but here is how this is likely to work. If you need to pass a digest encoded password things are a little bit trickier. The password type namespace needs to change to: http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#Digest and then the password value needs to be encoded. The format for password digest encoding is this: Base64(SHA-1(Nonce + Created + Password)) and it can be handled in the code above with this code (that's commented in the snippet above): string password = GetSHA1String(nonce + createdStr + userToken.Password); The entire WriteTokenCore method for digest code looks like this:protected override void WriteTokenCore(System.Xml.XmlWriter writer, System.IdentityModel.Tokens.SecurityToken token) { UserNameSecurityToken userToken = token as UserNameSecurityToken; string tokennamespace = "o"; DateTime created = DateTime.Now; string createdStr = created.ToString("yyyy-MM-ddThh:mm:ss.fffZ"); // unique Nonce value - encode with SHA-1 for 'randomness' // in theory the nonce could just be the GUID by itself string phrase = Guid.NewGuid().ToString(); var nonce = GetSHA1String(phrase); string password = GetSHA1String(nonce + createdStr + userToken.Password); writer.WriteRaw(string.Format( "<{0}:UsernameToken u:Id=\"" + token.Id + "\" xmlns:u=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">" + "<{0}:Username>" + userToken.UserName + "</{0}:Username>" + "<{0}:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#Digest\">" + password + "</{0}:Password>" + "<{0}:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">" + nonce + "</{0}:Nonce>" + "<u:Created>" + createdStr + "</u:Created></{0}:UsernameToken>", tokennamespace)); } I had no service to connect to to try out Digest auth - if you end up needing it and get it to work please drop a comment… How to use the custom Credentials The easiest way to use the custom credentials is to create the client in code. 
Here's a factory method I use to create an instance of my service client:  public static RealTimeOnlineClient CreateRealTimeOnlineProxy(string url, string username, string password) { if (string.IsNullOrEmpty(url)) url = "https://notrealurl.com:443/cows/services/RealTimeOnline"; CustomBinding binding = new CustomBinding(); var security = TransportSecurityBindingElement.CreateUserNameOverTransportBindingElement(); security.IncludeTimestamp = false; security.DefaultAlgorithmSuite = SecurityAlgorithmSuite.Basic256; security.MessageSecurityVersion = MessageSecurityVersion.WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10; var encoding = new TextMessageEncodingBindingElement(); encoding.MessageVersion = MessageVersion.Soap11; var transport = new HttpsTransportBindingElement(); transport.MaxReceivedMessageSize = 20000000; // 20 megs binding.Elements.Add(security); binding.Elements.Add(encoding); binding.Elements.Add(transport); RealTimeOnlineClient client = new RealTimeOnlineClient(binding, new EndpointAddress(url)); // to use full client credential with Nonce uncomment this code: // it looks like this might not be required - the service seems to work without it client.ChannelFactory.Endpoint.Behaviors.Remove<System.ServiceModel.Description.ClientCredentials>(); client.ChannelFactory.Endpoint.Behaviors.Add(new CustomCredentials()); client.ClientCredentials.UserName.UserName = username; client.ClientCredentials.UserName.Password = password; return client; } This returns a service client that's ready to call other service methods. The key item in this code is the ChannelFactory endpoint behavior modification that that first removes the original ClientCredentials and then adds the new one. The ClientCredentials property on the client is read only and this is the way it has to be added.   Summary It's a bummer that WCF doesn't suport WSE Security authentication with nonce values out of the box. From reading the comments in posts/articles while I was trying to find a solution, I found that this feature was omitted by design as this protocol is considered unsecure. While I agree that plain text passwords are rarely a good idea even if they go over secured SSL connection as WSE Security does, there are unfortunately quite a few services (mosly Java services I suspect) that use this protocol. I've run into this twice now and trying to find a solution online I can see that this is not an isolated problem - many others seem to have struggled with this. It seems there are about a dozen questions about this on StackOverflow all with varying incomplete answers. Hopefully this post provides a little more coherent content in one place. Again I marvel at WCF and its breadth of support for protocol features it has in a single tool. And even when it can't handle something there are ways to get it working via extensibility. But at the same time I marvel at how freaking difficult it is to arrive at these solutions. I mean there's no way I could have ever figured this out on my own. It takes somebody working on the WCF team or at least being very, very intricately involved in the innards of WCF to figure out the interconnection of the various objects to do this from scratch. Luckily this is an older problem that has been discussed extensively online and I was able to cobble together a solution from the online content. 
I'm glad it worked out that way, but it feels dirty and incomplete in that there's a whole learning path that was omitted to get here… Man am I glad I'm not dealing with SOAP services much anymore. REST service security - even when using some sort of federation - is a piece of cake by comparison :-) I'm sure once standards bodies get involved we'll be right back in security standard hell… © Rick Strahl, West Wind Technologies, 2005-2012. Posted in WCF, Web Services

    Read the article

  • Where does ASP.NET Web API Fit?

    - by Rick Strahl
    With the pending release of ASP.NET MVC 4 and the new ASP.NET Web API, there has been a lot of discussion of where the new Web API technology fits in the ASP.NET Web stack. There are a lot of choices for building HTTP based applications available now on the stack - we've come a long way from when WebForms and Http Handlers/Modules were the only real options. Today we have WebForms, MVC, ASP.NET Web Pages, ASP.NET AJAX, WCF REST and now Web API as well as the core ASP.NET runtime to choose to build HTTP content with. Web API squarely addresses the 'API' aspect - building consumable services - rather than HTML content, but even to that end there are a lot of choices you have today. So where does Web API fit, and when doesn't it? But before we get into that discussion, let's talk about what a Web API is and why we should care. What's a Web API? HTTP 'APIs' (Microsoft's new terminology for a service I guess) are becoming increasingly important with the rise of the many devices in use today. Most mobile devices like phones and tablets run Apps that are using data retrieved from the Web over HTTP. Desktop applications are also moving in this direction, with more and more online content and synching moving into even traditional desktop applications. The pending Windows 8 release promises an app like platform for both the desktop and other devices, that also emphasizes consuming data from the Cloud. Likewise many Web browser hosted applications these days are relying on rich client functionality to create and manipulate the browser user interface, using AJAX rather than server generated HTML to load up the user interface with data. These mobile or rich Web applications use their HTTP connection to retrieve data, typically in the form of JSON or XML rather than HTML markup. But an API can also serve other kinds of data, like images or other binary files, or even text data and HTML (although that's less common). A Web API is what feeds rich applications with data. ASP.NET Web API aims to service this particular segment of Web development by providing easy semantics to route and handle incoming requests and an easy to use platform to serve HTTP data in just about any content format you choose to create and serve from the server. But .NET already has various HTTP Platforms The .NET stack already includes a number of technologies that provide the ability to create HTTP service back ends, and it has done so since the very beginnings of the .NET platform. From raw HTTP Handlers and Modules in the core ASP.NET runtime, to high level platforms like ASP.NET MVC, Web Forms, ASP.NET AJAX and the WCF REST engine (which technically is not ASP.NET, but can integrate with it), you've always been able to handle just about any kind of HTTP request and response with ASP.NET. The beauty of the raw ASP.NET platform is that it provides you everything you need to build just about any type of HTTP application you can dream up, from low level APIs/custom engines to high level HTML generation engines. ASP.NET as a core platform has clearly stood the test of time 10+ years later, and all other frameworks like Web API are built on top of this ASP.NET core. However, although it's possible to create Web APIs / Services using any of the existing out of box .NET technologies, none of them have been a really nice fit for building arbitrary HTTP based APIs.
Sure, you can use an HttpHandler to create just about anything, but you have to build a lot of plumbing to build something more complex like a comprehensive API that serves a variety of requests, handles multiple output formats and can easily pass data up to the server in a variety of ways. Likewise you can use ASP.NET MVC to handle routing and creating content in various formats fairly easily, but it doesn't provide a great way to automatically negotiate content types and serve various content formats directly (it's possible to do with some plumbing code of your own but not built in). Prior to Web API, Microsoft's main push for HTTP services has been WCF REST, which was always an awkward technology that had a severe personality conflict, not being clear on whether it wanted to be part of WCF or purely a separate technology. In the end it didn't do either WCF compatibility or WCF agnostic pure HTTP operation very well, which made for a very developer-unfriendly environment. Personally I didn't like any of the implementations at the time, so much so that I ended up building my own HTTP service engine (as part of the West Wind Web Toolkit), as have a few other third party tools that provided much better integration and ease of use. With the release of Web API for the first time I feel that I can finally use the tools in the box and not have to worry about creating and maintaining my own toolkit as Web API addresses just about all the features I implemented on my own and much more. ASP.NET Web API provides a better HTTP Experience ASP.NET Web API differentiates itself from the previous Microsoft in-box HTTP service solutions in that it was built from the ground up around the HTTP protocol and its messaging semantics. Unlike WCF REST or ASP.NET AJAX with ASMX, it’s a brand new platform rather than bolted on technology that is supposed to work in the context of an existing framework. The strength of the new ASP.NET Web API is that it combines the best features of the platforms that came before it, to provide a comprehensive and very usable HTTP platform. Because it's based on ASP.NET and borrows a lot of concepts from ASP.NET MVC, Web API should be immediately familiar and comfortable to most ASP.NET developers. Here are some of the features that Web API provides that I like: Strong Support for URL Routing to produce clean URLs using familiar MVC style routing semantics Content Negotiation based on Accept headers for request and response serialization Support for a host of supported output formats including JSON, XML, ATOM Strong default support for REST semantics but they are optional Easily extensible Formatter support to add new input/output types Deep support for more advanced HTTP features via HttpResponseMessage and HttpRequestMessage classes and strongly typed Enums to describe many HTTP operations Convention based design that drives you into doing the right thing for HTTP Services Very extensible, based on MVC like extensibility model of Formatters and Filters Self-hostable in non-Web applications  Testable using testing concepts similar to MVC Web API is meant to handle any kind of HTTP input and produce output and status codes using the full spectrum of HTTP functionality available in a straight forward and flexible manner. Looking at the list above you can see that a lot of functionality is very similar to ASP.NET MVC, so many ASP.NET developers should feel quite comfortable with the concepts of Web API. 
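To make the routing and content negotiation points above concrete, here is a minimal, hypothetical ApiController sketch (the Album type, the data and the route are made up for illustration, not taken from the article). With the default "api/{controller}/{id}" route, a GET to /api/albums returns the list serialized as JSON or XML depending on the client's Accept header:

using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;

// Album is a made-up model type used only for this illustration.
public class Album
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public class AlbumsController : ApiController
{
    private static readonly List<Album> Albums = new List<Album>
    {
        new Album { Id = 1, Title = "First Album" },
        new Album { Id = 2, Title = "Second Album" }
    };

    // GET api/albums
    // The framework serializes the result based on the Accept header (content negotiation).
    public IEnumerable<Album> GetAlbums()
    {
        return Albums;
    }

    // GET api/albums/2
    // HttpResponseMessage gives full control over the status code and headers.
    public HttpResponseMessage GetAlbum(int id)
    {
        Album album = Albums.FirstOrDefault(a => a.Id == id);
        if (album == null)
            return Request.CreateResponse(HttpStatusCode.NotFound);
        return Request.CreateResponse(HttpStatusCode.OK, album);
    }
}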
The Routing and core infrastructure of Web API are very similar to how MVC works providing many of the benefits of MVC, but with focus on HTTP access and manipulation in Controller methods rather than HTML generation in MVC. There’s much improved support for content negotiation based on HTTP Accept headers with the framework capable of detecting automatically what content the client is sending and requesting and serving the appropriate data format in return. This seems like such a little and obvious thing, but it's really important. Today's service backends often are used by multiple clients/applications and being able to choose the right data format for what fits best for the client is very important. While previous solutions were able to accomplish this using a variety of mixed features of WCF and ASP.NET, Web API combines all this functionality into a single robust server side HTTP framework that intrinsically understands the HTTP semantics and subtly drives you in the right direction for most operations. And when you need to customize or do something that is not built in, there are lots of hooks and overrides for most behaviors, and even many low level hook points that allow you to plug in custom functionality with relatively little effort. No Brainers for Web API There are a few scenarios that are a slam dunk for Web API. If your primary focus of an application or even a part of an application is some sort of API then Web API makes great sense. HTTP ServicesIf you're building a comprehensive HTTP API that is to be consumed over the Web, Web API is a perfect fit. You can isolate the logic in Web API and build your application as a service breaking out the logic into controllers as needed. Because the primary interface is the service there's no confusion of what should go where (MVC or API). Perfect fit. Primary AJAX BackendsIf you're building rich client Web applications that are relying heavily on AJAX callbacks to serve its data, Web API is also a slam dunk. Again because much if not most of the business logic will probably end up in your Web API service logic, there's no confusion over where logic should go and there's no duplication. In Single Page Applications (SPA), typically there's very little HTML based logic served other than bringing up a shell UI and then filling the data from the server with AJAX which means the business logic required for data retrieval and data acceptance and validation too lives in the Web API. Perfect fit. Generic HTTP EndpointsAnother good fit are generic HTTP endpoints that to serve data or handle 'utility' type functionality in typical Web applications. If you need to implement an image server, or an upload handler in the past I'd implement that as an HTTP handler. With Web API you now have a well defined place where you can implement these types of generic 'services' in a location that can easily add endpoints (via Controller methods) or separated out as more full featured APIs. Granted this could be done with MVC as well, but Web API seems a clearer and more well defined place to store generic application services. This is one thing I used to do a lot of in my own libraries and Web API addresses this nicely. Great fit. Mixed HTML and AJAX Applications: Not a clear Choice  For all the commonality that Web API and MVC share they are fundamentally different platforms that are independent of each other. A lot of people have asked when does it make sense to use MVC vs. 
Web API when you're dealing with a typical Web application that creates HTML and also uses AJAX for rich functionality. While it's easy to say that all 'service'/AJAX logic should go into a Web API and all HTML related generation into MVC, that can often result in a lot of code duplication. Also MVC supports JSON and XML result data fairly easily as well, so there's some confusion over where that 'trigger point' is of when you should switch to Web API vs. just implementing functionality as part of MVC controllers. Ultimately there's a tradeoff between isolation of functionality and duplication. A good rule of thumb I think works is that if a large chunk of the application's functionality serves data, Web API is a good choice, but if you have a couple of small AJAX requests to serve data to a grid or autocomplete box it'd be overkill to separate out that logic into a separate Web API controller. Web API does add overhead to your application (it's yet another framework that sits on top of core ASP.NET) so it should be worth it. Keep in mind that MVC can generate HTML and JSON/XML and just about any other content easily and that functionality is not going away, so just because Web API is there it doesn't mean you have to use it. Web API is not a full replacement for MVC obviously either, since there's not the same level of support to feed HTML from Web API controllers (although you can host a RazorEngine easily enough if you really want to go that route), so if your HTML is part of your API or application in general, MVC is still a better choice either alone or in combination with Web API. I suspect (and hope) that in the future Web API's functionality will merge even closer with MVC so that you might even be able to mix functionality of both into single Controllers so that you don't have to make any trade offs, but at the moment that's not the case. Some Issues To think about Web API is similar to MVC but not the Same Although Web API looks a lot like MVC it's not the same, and some common functionality of MVC behaves differently in Web API. For example, the way single POST variables are handled is different than in MVC and doesn't lend itself particularly well to some AJAX scenarios with POST data. Code Duplication I already touched on this in the Mixed HTML and Web API section, but if you build an MVC application that also exposes a Web API it's quite likely that you end up duplicating a bunch of code and - potentially - infrastructure. You may have to create authentication logic both for an HTML application and for the Web API which might need something different altogether. More often than not though the same logic is used, and there's no easy way to share. If you implement an MVC ActionFilter and you want that same functionality in your Web API you'll end up creating the filter twice. AJAX Data or AJAX HTML On a recent post's comments, David made some really good points regarding the commonality of MVC and Web API and its place. One comment that caught my eye was a little more generic, regarding data services vs. HTML services. David says: I see a lot of merit in the combination of Knockout.js, client side templates and view models, calling Web API for a responsive UI, but sometimes late at night that still leaves me wondering why I would no longer be using some of the nice tooling and features that have evolved in MVC ;-) You know what - I can totally relate to that.
AJAX Data or AJAX HTML

On a recent post's comments, David made some really good points regarding the commonality of MVC and Web API and their place. One comment that caught my eye was a little more generic, regarding data services vs. HTML services. David says:

I see a lot of merit in the combination of Knockout.js, client side templates and view models, calling Web API for a responsive UI, but sometimes late at night that still leaves me wondering why I would no longer be using some of the nice tooling and features that have evolved in MVC ;-)

You know what - I can totally relate to that. On the last Web-based mobile app I worked on, we decided to serve HTML partials to the client via AJAX for many (but not all!) things, rather than sending down raw data to inject into the DOM on the client via templating or direct manipulation. While there are definitely more bytes on the wire with this approach, the overhead ended up being fairly small if you keep the 'data' requests small and atomic, and the cost was often made up for by not having to render HTML on the client. Server-rendered HTML for AJAX templating gives so much better infrastructure support without having to screw around with 20 mismatched client libraries. Especially with MVC and partials it's pretty easy to break out your HTML logic into very small, atomic chunks, so it's actually easy to create small rendering islands that can be used via composition on the server, or via AJAX calls to small, tight partials that return HTML to the client. Although this approach is often frowned upon as too 'heavy', it worked really well in terms of developer effort and provided surprisingly good performance on devices. There's still plenty of jQuery and AJAX logic happening on the client, but it's more manageable in small doses rather than trying to do the entire UI composition with JavaScript and/or 'not-quite-there-yet' template engines that are very difficult to debug. This is not an issue directly related to Web API of course, but it's something to think about, especially for AJAX or SPA style applications.

Summary

Web API is a great new addition to the ASP.NET platform and it addresses a serious need for consolidation of the many half-baked HTTP service API technologies that came before it. Web API feels 'right' and hits the right combination of usability and flexibility, at least for me, and it's a good fit for true API scenarios. However, just because a new platform is available doesn't mean that the tools or tech that came before it should be discarded, or that existing code should be upgraded to the new platform. There's nothing wrong with continuing to use MVC controller methods to handle API tasks if that's what your app is doing now - there's very little to be gained by upgrading to Web API just because. But going forward, Web API clearly is the way to go when building HTTP data interfaces, and it's good to see that Microsoft got this one right - it was sorely needed!

Resources
ASP.NET Web API
AspConf Ask the Experts Session (first 5 minutes)

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web Api

    Read the article

  • Use IIS Application Initialization for keeping ASP.NET Apps alive

    - by Rick Strahl
I've been working quite a bit with Windows Services in recent months, and it turns out that Windows Services are quite a bear to debug, deploy, update and maintain. The process of getting services set up, debugged and updated is a major chore that has to be extensively documented and/or automated specifically. On most projects, when a service is built, people end up scrambling for the right 'process' to use for administration. Web app deployment and maintenance, on the other hand, are common and well understood today, since we deal with Web apps constantly. There's plenty of infrastructure and tooling built into Web tools like Visual Studio to facilitate the process. By comparison, Windows Services - or anything self-hosted, for that matter - seem convoluted.

In fact, in a recent blog post I mentioned that on a recent project I'd been using self-hosting for SignalR inside of a Windows Service, because the application is in fact a 'service' that also needs to send out lots of messages via SignalR. But the reality is that it could just as well be an IIS application with a service component that runs in the background. Either way you look at it, it's either a Windows Service with a built-in Web server, or an IIS application running a service application - neither of which follows the standard service or Web app template.

Personally I much prefer Web applications. Running inside of IIS I get all the benefits of the IIS platform, including service lifetime management (crash and restart), controlled shutdowns, the whole security infrastructure including easy certificate support, hot-swapping of code, and the ability to publish directly to IIS from within Visual Studio with ease. Because of these benefits we set out to move from the self-hosted service to an ASP.NET Web app instead.

The Missing Link for ASP.NET as a Service: Auto-Loading

I've had moments in the past where I wanted to run a 'service-like' application in ASP.NET, because when you think about it, it's so much easier to control a Web application remotely. Services are locked into start/stop operations, but if you host inside of a Web app you can write your own ticket and control it from anywhere. In fact, nearly 10 years ago I built a background scheduling application that ran inside of ASP.NET; it worked great and it's still running, doing its job today.

The tricky part of running an app as a service inside of IIS, then and now, is how to get IIS and ASP.NET launched so your 'service' stays alive even after an Application Pool reset. Seven years ago I faked it by using a web monitor (my own West Wind Web Monitor app) that I was running anyway to monitor my various web sites for uptime, and having the monitor ping my 'service' every 20 seconds to effectively keep ASP.NET alive or fire it back up after a reload. I used a simple scheduler class that also includes some logic for 'self-reloading'. Hacky for sure, but it worked reliably.

Luckily, today it's much easier and better integrated to get IIS to launch ASP.NET as soon as an Application Pool is started, by using the Application Initialization Module. The Application Initialization Module basically allows you to turn on preloading on the Application Pool and the Site/IIS App, which fires a request through the IIS pipeline as soon as the Application Pool has been launched. This means that your ASP.NET app effectively becomes active immediately, Application_Start is fired, and your app stays up and running at all times.
All the other features like Application Pool recycling and auto-shutdown after idle time still work, but IIS will then always immediately re-launch the application.

Getting started with Application Initialization

As of IIS 8, Application Initialization is part of the IIS feature set. For IIS 7 and 7.5 there's a separate download available via the Web Platform Installer. On IIS 8, Application Initialization is an optional install component in Windows or the Windows Server Role Manager, so make sure you explicitly select it.

IIS Configuration for Application Initialization

Initialization needs to be applied at the Application Pool level as well as the IIS Application level. As of IIS 8 these settings can be made through the IIS Administration console.

Start with the Application Pool: here you need to set both Start Automatically, which is always set, and StartMode, which should be set to AlwaysRunning. Both have to be set - the Start Automatically flag is true by default and controls the starting of the application pool itself, while the Always Running flag is required in order to launch the application. Without the latter flag set, the site settings have no effect.

Now on the Site/Application level you can specify whether the site should preload: set the Preload Enabled flag to true. At this point ASP.NET apps should auto-load. This is all that's needed to preload the site if all you want is to get your site launched automatically.

If you want a little more control over the load process, you can add a few more settings to your web.config file that allow you to show a static page while the app is starting up. This can be useful if startup is really slow, so rather than displaying a blank screen while the user is twiddling their thumbs, you can display a static HTML page instead:

<system.webServer>
  <applicationInitialization remapManagedRequestsTo="Startup.htm" skipManagedModules="true">
    <add initializationPage="ping.ashx" />
  </applicationInitialization>
</system.webServer>

This allows you to specify a page to execute in a dry run. IIS basically fakes a request and pushes it directly into the IIS pipeline without hitting the network. You specify a page and IIS will fake a request to that page - in this case ping.ashx, which just returns a simple OK string, i.e. a fast pipeline request. This request is run immediately after an Application Pool restart, and while this request is running and your app is warming up, IIS can display an alternate static page - Startup.htm above. So instead of showing users an empty loading page when clicking a link on your site, you can optionally show some sort of static status page that says "we'll be right back". I'm not sure that's such a brilliant idea, since it can be pretty disruptive in some cases. Personally I think I prefer letting people wait, but at least get the response they were supposed to get back rather than a random page. But it's there if you need it.

Note that the web.config stuff is optional. If you don't provide it, IIS hits the default site link (/), and even if there's no matching resource at the end of that request it'll still fire the request through the IIS pipeline. Ideally, though, you want to make sure that an ASP.NET endpoint is hit - either your default page or the initializationPage - to ensure ASP.NET actually gets hit, since it's possible for IIS to fire unmanaged requests only for static pages (depending on how your pipeline is configured).
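The article doesn't show the warm-up handler itself, but a minimal ping.ashx along these lines would do - a sketch only, assuming all it needs to do is touch the ASP.NET pipeline and return quickly (the class name and registration are up to you):

using System.Web;

// Minimal warm-up handler: returns a tiny response so the
// initializationPage request completes quickly once ASP.NET is up.
public class Ping : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("OK");
    }
}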
What about AppDomain Restarts?

In addition to full worker process recycles at the IIS level, ASP.NET also has to deal with AppDomain shutdowns, which can occur for a variety of reasons: files are updated in the BIN folder, a Web Deploy to your site, web.config is changed, or a hard application crash. These operations don't cause the worker process to restart, but they do cause ASP.NET to unload the current AppDomain and start up a new one. Because the features above only apply to Application Pool restarts, AppDomain restarts could also cause your 'ASP.NET service' to stop processing in the background. In order to keep the app running across AppDomain recycles, you can resort to a simple ping in the Application_End event:

protected void Application_End()
{
    // fire a request at the site as it shuts down so it immediately spins back up
    var client = new WebClient();
    var url = App.AdminConfiguration.MonitorHostUrl + "ping.aspx";
    client.DownloadString(url);
    Trace.WriteLine("Application Shut Down Ping: " + url);
}

which fires a request at an ASP.NET URL on the current site at the very end of the pipeline shutdown, which in turn ensures that the site immediately starts back up.

Manual Configuration in ApplicationHost.config

The above UI corresponds to the following ApplicationHost.config settings. If you're using IIS 7, there's no UI for these flags so you'll have to edit them manually. When you install the Application Initialization component into IIS it should auto-configure the module into ApplicationHost.config. Unfortunately for me, with Mr. Murphy in top form, the module registration did not occur and I had to add it manually:

<globalModules>
  <add name="ApplicationInitializationModule" image="%windir%\System32\inetsrv\warmup.dll" />
</globalModules>

Most likely you won't ever need to add this, but if things are not working it's worth checking whether the module is actually registered. Next you need to configure the Application Pool and the Web site. The following are the two relevant entries in ApplicationHost.config:

<system.applicationHost>
  <applicationPools>
    <add name="West Wind West Wind Web Connection" autoStart="true" startMode="AlwaysRunning"
         managedRuntimeVersion="v4.0" managedPipelineMode="Integrated">
      <processModel identityType="LocalSystem" setProfileEnvironment="true" />
    </add>
  </applicationPools>
  <sites>
    <site name="Default Web Site" id="1">
      <application path="/MPress.Workflow.WebQueueMessageManager"
                   applicationPool="West Wind West Wind Web Connection"
                   preloadEnabled="true">
        <virtualDirectory path="/" physicalPath="C:\Clients\…" />
      </application>
    </site>
  </sites>
</system.applicationHost>

On the Application Pool make sure to set the autoStart and startMode flags to true and AlwaysRunning respectively. On the site make sure to set the preloadEnabled flag to true. And that's all you should need. You can still set the web.config settings described above as well.

ASP.NET as a Service?

In the particular application I'm working on currently, we have a queue manager that runs as a standalone service, polling a database queue, picking out jobs and processing them on several threads. The service can spin up any number of threads and keep them alive in the background while IIS is running and doing its own thing. These threads are newly created threads, so they sit completely outside of the IIS thread pool.
In order for this service to work, all it needs is a long-running reference that keeps it alive for the lifetime of the application. In this particular app there are two components that run in the background on their own threads: a scheduler that runs various scheduled tasks and handles things like picking up emails to send outside of IIS's scope, and the QueueManager. Here's what this looks like in global.asax:

public class Global : System.Web.HttpApplication
{
    private static ApplicationScheduler scheduler;
    private static ServiceLauncher launcher;

    protected void Application_Start(object sender, EventArgs e)
    {
        // Pings the service and ensures it stays alive
        scheduler = new ApplicationScheduler() { CheckFrequency = 600000 };
        scheduler.Start();

        launcher = new ServiceLauncher();
        launcher.Start();

        // register so shutdown is controlled
        HostingEnvironment.RegisterObject(launcher);
    }
}

By keeping these objects around as static instances that are set only once on startup, they survive the lifetime of the application. The code in these classes is essentially unchanged from the Windows Service code, except that I could remove the various overrides required for the Windows Service interface (OnStart, OnStop, OnResume etc.). Otherwise the behavior and operation is very similar.

In this application ASP.NET serves two purposes: it acts as the host for SignalR and provides the administration interface which allows remote management of the 'service'. I can start and stop the service remotely by shutting down the ApplicationScheduler very easily. I can also very easily feed stats from the queue out directly via a couple of Web requests or (as we do now) through the SignalR service.

Registering a Background Object with ASP.NET

Notice also the use of HostingEnvironment.RegisterObject(). This function registers an object with ASP.NET to let it know that it's a background task that should be notified if the AppDomain shuts down. RegisterObject() requires an interface with a Stop() method that's fired and allows your code to respond to a shutdown request (a minimal skeleton of such a launcher class is sketched at the end of this section). Here's what the IRegisteredObject.Stop() method looks like on the launcher:

public void Stop(bool immediate = false)
{
    LogManager.Current.LogInfo("QueueManager Controller Stopped.");
    Controller.StopProcessing();
    Controller.Dispose();
    Thread.Sleep(1500); // give background threads some time
    HostingEnvironment.UnregisterObject(this);
}

Implementing IRegisteredObject should help with reliability on AppDomain shutdowns. Thanks to Justin Van Patten for pointing this out to me on Twitter. RegisterObject() is not required, but I would highly recommend implementing it on whatever object controls your background processing, to allow clean shutdowns when the AppDomain shuts down.

Testing it out

I'm still in the testing phase with this particular service to see if there are any side effects, but so far it doesn't look like it. With about 50 lines of code I was able to replace the Windows Service startup with Web startup - everything else just worked as is. An honorable mention goes to SignalR 2.0's OWIN hosting: with the new OWIN-based hosting no code changes at all were required, merely a couple of configuration file settings and an assembly directive to point at the SignalR startup class. Sweet! It also seems like SignalR is noticeably faster running inside of IIS compared to self-host. Startup feels faster because of the preload.
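For reference, here's roughly what a launcher class registered this way could look like - a minimal sketch with hypothetical names, not the article's actual ServiceLauncher, and the background work stubbed out with a timer:

using System;
using System.Threading;
using System.Web.Hosting;

// Minimal sketch of a background 'service' object registered with ASP.NET.
// A real launcher would start its queue controller threads in Start().
public class BackgroundLauncher : IRegisteredObject
{
    private Timer timer;

    public void Start()
    {
        // hypothetical background work: poll every 30 seconds on a non-IIS thread
        timer = new Timer(state => DoWork(), null, TimeSpan.Zero, TimeSpan.FromSeconds(30));
    }

    private void DoWork()
    {
        // pick up queued jobs, send emails, etc.
    }

    // Called by ASP.NET when the AppDomain is shutting down.
    public void Stop(bool immediate)
    {
        if (timer != null)
            timer.Dispose();

        Thread.Sleep(500); // give in-flight work a moment to finish
        HostingEnvironment.UnregisterObject(this);
    }
}

It would be registered from Application_Start exactly like the launcher above: call Start(), then HostingEnvironment.RegisterObject() on the instance.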
Starting and Stopping the 'Service'

Because the application is running as a Web application, it's easy to have a Web interface for starting and stopping the services running inside of it. For our queue manager, the SignalR service and front-end monitoring app have a play and a stop button for toggling the queue. If you want more administrative control and have it work more like a Windows Service, you can also stop the application pool explicitly from the command line, which is equivalent to stopping and restarting a service. To start and stop from the command line you can use the IIS appCmd tool. To stop:

> %windir%\system32\inetsrv\appcmd stop apppool /apppool.name:"Weblog"

and to start:

> %windir%\system32\inetsrv\appcmd start apppool /apppool.name:"Weblog"

Note that when you explicitly force the AppPool to stop running, either in the UI (on the Application Pools page use Start/Stop) or via command line tools, the application pool will not auto-restart immediately. You have to manually start it back up.

What's not to like?

There are certainly a lot of benefits to running a background service in IIS, but… ASP.NET applications do have more overhead in terms of memory footprint, and startup time is a little slower, but generally for server applications this is not a big deal. If the application is stable the service should fire up and stay running indefinitely. A lot of times this kind of service interface can simply be attached to an existing Web application, or, if scalability requires it, offloaded to its own Web server.

Easier to work with

But the ultimate benefit here is that it's much easier to work with a Web app than with a service. While developing I can simply turn off the auto-launch features and launch the service on demand through IIS, simply by hitting a page on the site. If I want to shut it down, an IISRESET -stop will shut down the service easily enough. I can then attach a debugger anywhere I want, and this works like any other ASP.NET application. Yes, you end up on a background thread for debugging, but Visual Studio handles that just fine, and if you stay on a single thread this is no different from debugging any other code.

Summary

Using ASP.NET to run background service operations is probably not a super common scenario, but it should probably be considered carefully when building services. Many applications have service-like features, and with the auto-start functionality of the Application Initialization module it's easy to build this functionality into ASP.NET. Especially when combined with the notification features of SignalR, it becomes very, very easy to create rich services that can also communicate their status to the outside world. Whether it's an existing application that needs some background processing for scheduling-related tasks, or a separate site created just to host your service, it's easy to do, and you can leverage the same tool chain you're already using for other Web projects.
If you have lots of service projects it's worth considering… give it some thought…

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in ASP.NET  SignalR  IIS

    Read the article

  • Built-in GZip/Deflate Compression on IIS 7.x

    - by Rick Strahl
IIS 7 improves internal compression functionality dramatically, making it much easier than in previous versions to take advantage of the compression that's built into the Web server. IIS 7 also supports dynamic compression, which allows automatic compression of content created in your own applications (ASP.NET or otherwise!). The scheme is based on content-type sniffing, so it works with any kind of Web application framework.

While static compression on IIS 7 is super easy to set up and turned on by default for most text content (text/*, which includes HTML and CSS, as well as JavaScript, Atom, XAML and XML), setting up dynamic compression is a bit more involved, mostly because the various default compression settings are set in multiple places down the IIS –> ASP.NET hierarchy. Let's take a look at each of the two approaches available:

Static Compression
Compresses static content from the hard disk. IIS can cache this content by compressing the file once, storing the compressed file on disk and serving the compressed alias whenever the static content is requested and hasn't changed. The overhead for this is minimal and it should be aggressively enabled.

Dynamic Compression
Works against application-generated output, such as the output of your ASP.NET apps. Unlike static content, dynamic content must be compressed every time a page that requests it regenerates its content. As such, dynamic compression has a much bigger impact than static compression.

How Compression is configured

Compression in IIS 7.x is configured with two .config file elements in the <system.webServer> space. The elements can be set anywhere in the IIS/ASP.NET configuration pipeline, all the way from ApplicationHost.config down to the local web.config file. The following is the default setting from ApplicationHost.config (in the %windir%\System32\inetsrv\config folder) on IIS 7.5, with a couple of small adjustments (added JSON output and enabled dynamic compression):

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
      <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />
      <dynamicTypes>
        <add mimeType="text/*" enabled="true" />
        <add mimeType="message/*" enabled="true" />
        <add mimeType="application/x-javascript" enabled="true" />
        <add mimeType="application/json" enabled="true" />
        <add mimeType="*/*" enabled="false" />
      </dynamicTypes>
      <staticTypes>
        <add mimeType="text/*" enabled="true" />
        <add mimeType="message/*" enabled="true" />
        <add mimeType="application/x-javascript" enabled="true" />
        <add mimeType="application/atom+xml" enabled="true" />
        <add mimeType="application/xaml+xml" enabled="true" />
        <add mimeType="*/*" enabled="false" />
      </staticTypes>
    </httpCompression>
    <urlCompression doStaticCompression="true" doDynamicCompression="true" />
  </system.webServer>
</configuration>

You can find documentation on the httpCompression and urlCompression keys here, respectively:

http://msdn.microsoft.com/en-us/library/ms690689%28v=vs.90%29.aspx
http://msdn.microsoft.com/en-us/library/aa347437%28v=vs.90%29.aspx

The httpCompression Element – What and How to compress

Basically, httpCompression configures what types to compress and how to compress them. It specifies the DLL that handles gzip encoding and the types of documents that are to be compressed. Types are matched on mime types, which means IIS looks at the returned Content-Type headers of HTTP responses.
For example, I added the application/json mime type to my dynamic compression types above to allow that content to be compressed as well, since I have quite a bit of AJAX content that gets sent to the client.

The UrlCompression Element – Enables and Disables Compression

The urlCompression element is a quick way to turn compression on and off. By default, static compression is enabled server-wide and dynamic compression is disabled server-wide. This might be a bit confusing because the httpCompression element also has a doDynamicCompression attribute which is set to true by default, but the urlCompression attribute of the same name actually overrides it. The urlCompression element only has three attributes: doStaticCompression, doDynamicCompression and dynamicCompressionBeforeCache. The do*Compression attributes are the final determining factor for whether compression is enabled, so it's a good idea to be explicit! The default is doDynamicCompression="false" but doStaticCompression="true"!

Static Compression is enabled by Default, Dynamic Compression is not

Because static compression is very efficient in IIS 7, it's enabled by default server-wide and there's probably no reason to ever change that setting. Dynamic compression however, since it's more resource intensive, is turned off by default. If you want to enable dynamic compression there are a few quirks you have to deal with, namely that enabling it in ApplicationHost.config doesn't work. Setting:

<urlCompression doDynamicCompression="true" />

in ApplicationHost.config appears to have no effect, and I had to move this element into my local web.config to make dynamic compression work. This is actually a smart choice, because you're not likely to want dynamic compression in every application on a server; rather, dynamic compression should be applied selectively where it makes sense. However, nowhere is it documented that the setting in ApplicationHost.config doesn't work (or, more likely, is overridden somewhere and disabled lower in the configuration hierarchy). So: remember to set doDynamicCompression="true" in web.config!!!

How Static Compression works

Static compression works against static content loaded from files on disk. Because this content is static and not bound to change frequently – such as .js, .css and static HTML content – it's fairly easy for IIS to compress it and then cache the compressed content. The way this works is that IIS compresses the files into a special folder on the server's hard disk and then reads the content from this location whenever already-compressed content is requested and the underlying file resource has not changed. The semantics of serving an already compressed file are very efficient – IIS still checks for file changes, but otherwise just serves the already compressed file from the compression folder. The compression folder is located at:

%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files\ApplicationPool\

If you look into the subfolders you'll find the pre-compressed files, which IIS serves directly to the client until the underlying files are changed. As I mentioned before, static compression is on by default and there's very little reason to turn that functionality off, as it is efficient and just works out of the box. The one tweak you might want to make is to set the compression level to maximum. Since IIS compresses static content only very infrequently, it makes sense to apply maximum compression.
You can do this with the staticCompressionLevel setting on the scheme element:

<scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />

Other than that, the default settings are probably just fine.

Dynamic Compression – not so fast!

By default dynamic compression is disabled, and that's actually quite sensible – you should use dynamic compression very carefully and think about what content you want to compress. In most applications it wouldn't make sense to compress *all* generated content, as that would generate a significant amount of overhead. Scott Forsyth has a great post that details some of the performance numbers and how much impact dynamic compression has. Depending on how busy your server is, you can play around with compression and see what impact it has on your server's performance.

There are also a few settings you can tweak to minimize the overhead of dynamic compression. Specifically, the httpCompression key has a couple of CPU-related attributes that can help minimize the impact of dynamic compression on a busy server: dynamicCompressionDisableCpuUsage and dynamicCompressionEnableCpuUsage. By default these are set to 90 and 50, which means that when the CPU hits 90% utilization compression will be disabled until utilization drops back down to 50%. Again, this is actually quite sensible, as it uses CPU power for compression when it's available and backs off when the threshold has been hit. It's a good way to use some of that extra CPU power on your big servers when utilization is low. These settings are something you likely have to play with. I would probably set the upper limit a little lower than 90%, maybe around 70%, to make this a feature that kicks in only if there's lots of power to spare. I'm not really sure how accurate the CPU readings that IIS uses are, as CPU usage on Web servers can spike drastically even during low loads. Don't just trust the settings – do some load testing or monitor your server in a live environment to see what values make sense for your environment.

Finally, for dynamic compression I tend to add one mime type for JSON data, since a lot of my applications send large chunks of JSON data over the wire. You can do that with the application/json content type:

<add mimeType="application/json" enabled="true" />

What about Deflate Compression?

The default compression is GZip. The documentation hints that you can use a different compression scheme and mentions deflate compression. And sure enough, you can change the compression settings to:

<scheme name="deflate" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />

to get deflate-style compression. The deflate algorithm produces slightly more compact output, so I tend to prefer it over GZip, but more HTTP clients (other than browsers) support GZip than deflate, so be careful with this option if you build Web APIs. I also had some issues with the above value actually being applied right away. Changing the scheme in ApplicationHost.config didn't show up on the site immediately; it required a full IISReset before I saw the change over to deflate-compressed content. Content was slightly more compressed with deflate – not sure if it's worth the slightly less common compression type, but the option at least is available.
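If you want to check which encoding the server actually returns (handy after switching between gzip and deflate), a small client-side test along these lines works; this is a sketch, not from the article, and the URL is hypothetical:

using System;
using System.IO;
using System.IO.Compression;
using System.Net;

class CompressionCheck
{
    static void Main()
    {
        // Any dynamic endpoint returning text/* or application/json will do (hypothetical URL).
        var request = (HttpWebRequest)WebRequest.Create("http://localhost/myapp/api/customers");
        request.Headers[HttpRequestHeader.AcceptEncoding] = "gzip,deflate";

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            // "gzip" or "deflate" means IIS compressed the response; empty means it did not.
            Console.WriteLine("Content-Encoding: " + response.ContentEncoding);

            Stream body = response.GetResponseStream();
            if (response.ContentEncoding.IndexOf("gzip", StringComparison.OrdinalIgnoreCase) >= 0)
                body = new GZipStream(body, CompressionMode.Decompress);
            else if (response.ContentEncoding.IndexOf("deflate", StringComparison.OrdinalIgnoreCase) >= 0)
                body = new DeflateStream(body, CompressionMode.Decompress);

            using (var reader = new StreamReader(body))
                Console.WriteLine(reader.ReadToEnd());
        }
    }
}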
IIS 7 finally makes GZip Easy

In summary, IIS 7 finally makes GZip easy, even if the configuration settings are a bit obtuse and the documentation is seriously lacking. But once you know the basic settings I've described here, and the fact that you can override all of this in your local web.config, it's pretty straightforward to configure GZip support and tweak it exactly to your needs. Static compression is a total no-brainer, as it adds very little overhead compared to direct static file serving and provides solid compression. Dynamic compression is a little more tricky, as it does add some overhead to servers, so it will probably require some tweaking to get the right balance of CPU load vs. compression ratios. Looking at large sites like Amazon, Yahoo, NewEgg etc. – they all use gzip compression.

Related Content
Code based ASP.NET GZip Caveats
HttpWebRequest and GZip Responses

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in IIS7   ASP.NET

    Read the article
