Search Results



  • Pinterest and Social Commerce: The Social Networking Site Retailers Shouldn’t Ignore

    - by Jeri Kelley
    If you are in the midst of remodeling your home, researching the latest spring fashion trends, or simply trying to figure out what to cook for dinner, you’ve probably been on Pinterest and, like me, you probably find it extremely useful for generating new ideas and storing them all in one place. Gone are the days of folding over corners of magazines or bookmarking the URL of a Web page – Pinterest makes it easy for you to “pin” ideas, photos, links, and more to virtual bulletin boards where your “followers” can repin, like, and share. As a consumer, Pinterest has gained my attention and I’m definitely not the only one. According to a Monetate infographic, Pinterest’s unique visitors increased 329% from September to December 2011. With this explosion of users, what does it mean for social commerce? Also according to Monetate, Pinterest is one of the top traffic drivers for retailers – driving even more traffic than popular social networking sites like Google+. For businesses, creating a presence on Pinterest is a great way to extend the reach of your brand, increase inbound links, and drive more traffic to your site. Socialnomics has a great post on how some of the biggest retail brands are using Pinterest to connect with consumers, offer cool content, and engage on a more personal level. When evaluating your social commerce program, remember that while Facebook still delivers the most referrals, Pinterest shouldn’t be ignored as a way to help reach and connect with as many consumers as possible.

    Read the article

  • How to Make Sure Your Company Doesn't Go Under if Your Programmers are Hit by a Bus

    - by Graviton
    I have a few programmers under me; they are all doing great work and are obviously very smart. Thank you very much. But the problem is that each and every one of them is responsible for one core area, about which no one else on the team has the foggiest idea. This means that if any one of them is taken out, my company as a business is dead because they aren't replaceable. I'm thinking about bringing in new programmers to cover them, just in case they are hit by a bus, or resign, or whatever. But I am afraid that:
    1. The old programmers might actively resist the idea of knowledge transfer, fearing that a backup might reduce their value.
    2. I don't have a system to facilitate technology transfer between different developers, so even if I ask them to do it, I have no assurance that they will do it properly.
    My questions are:
    1. How do I put this to the old programmers in such a way that they would agree?
    2. What systems do you use to facilitate this kind of "backup"? I can understand that you can do code review, but is there a simple way to conduct it? I think we are not ready for a full-blown, check-in by check-in code review.

    Read the article

  • Mobile Java, shiny and new: Nokia Asha and Nokia SDK 2.0

    - by terrencebarr
    Nokia has announced a series of new S40 phones called “Asha” – mass-market devices with smart-phone features: Good-sized touch screens, 1 GHz processors, WiFi connectivity, social networking integration, and more. Prices starting around €60 retail. In case you don’t know, the S40 series is built on Java ME and has a huge deployed base in many parts of the world where price/performance is critical. Along with the new phones, Nokia is also making available the new Nokia SDK 2.0 for Java (beta), which enables developers to build rich Java applications with multi-touch, sensor support, an improved Maps API, and the Lightweight UI Toolkit (LWUIT) (more API & tools details). Furthermore, there is a host of developer information, the remote device access service, and even a porting guide to help you port your Android app to the new Asha platform. Last, but not least: More and better options to monetize your applications. Nokia has enabled in-app advertising and in-app purchasing, and improved the way applications can be discovered by customers. Nokia has seen downloads from the Nokia app store rise by 63%, now totaling billions. From what I’m hearing, the revenue opportunities on S40 for developers are often way better than what is typical for other smart-phone platforms (where competition is huge and consumers are fickle). Cheers, – Terrence Filed under: Mobile & Embedded Tagged: Asha Series, Java ME, Java ME SDK, Mobile Java, monetization, Nokia, S40

    Read the article

  • Oracle VM and JRockit Virtual Edition: Oracle Introduces Java Virtualization Solution for Oracle®

    - by adam.hawley
    Since the beginning, we've been talking to customers about how our approach to virtualization is different and more powerful than any other company's, because Oracle has the "full stack" of software (and even hardware these days!) to work with to create more comprehensive, more powerful solutions. Having the virtualization layer, two enterprise-class operating systems in Solaris and Enterprise Linux, and the leading enterprise software in nearly every layer of the data center stack allows us to not just do virtualization for virtualization's sake, but rather to provide complete virtualization solutions focused on making enterprise software easier to deploy, easier to manage, and easier to support through integration up and down the stack. Today, we announced the availability of a significant demonstration of that capability by announcing a WebLogic Suite option that permits the Oracle WebLogic Server 11g to run on a Java JVM (JRockit Virtual Edition) that itself runs directly on the Oracle VM Server for x86 / x64 without needing any operating system. Why would you want that? Better performance and better consolidation density, not to mention great security due to a lower "attack surface area". Oracle also announced the Oracle Virtual Assembly Builder product. Oracle Virtual Assembly Builder provides a framework for automatically capturing the configuration of existing software components and packaging them as self-contained building blocks known as appliances. So you know that complex application you've tweaked on your physical servers (or on other virtual environments, for that matter)? Virtual Assembly Builder will allow the automated collection of all the configuration data for the various application components that make up that multi-tier application, and then use the information to create and package each component as a virtual machine so that the application can be deployed in your Oracle VM virtualization environment quickly and easily, just as it was configured in your original environment. A slick, drag-and-drop GUI also serves as a powerful, intuitive interface for viewing and editing your assembly as needed. No one else can do complete virtualization solutions the way Oracle can, and I think these offerings show what's possible when you have the right resources for elegantly solving the larger problems in the data center rather than just having to make do with tools that are only operating at one layer of the stack. For more information, read the press release, including the links to more information on various Oracle websites.

    Read the article

  • ASP.NET MVC Postbacks and HtmlHelper Controls ignoring Model Changes

    - by Rick Strahl
    So here's a binding behavior in ASP.NET MVC that I didn't really get until today: HtmlHelper controls (like .TextBoxFor() etc.) don't bind to model values on Postback, but rather get their value directly out of the POST buffer from ModelState. Effectively it looks like you can't change the display value of a control via model value updates on a Postback operation. To demonstrate, here's an example. I have a small section in a document where I display an editable email address: This is what the form displays on a GET operation, and as expected I get the email value displayed in both the textbox and the plain value display below, which reflects the value in the model. I added a plain text value to demonstrate the model value compared to what's rendered in the textbox. The relevant markup is the email address, which needs to be manipulated via the model in the Controller code. Here's the Razor markup:

        <div class="fieldcontainer">
            <label>
                Email: &nbsp; <small>(username and <a href="http://gravatar.com">Gravatar</a> image)</small>
            </label>
            <div>
                @Html.TextBoxFor( mod => mod.User.Email, new { type="email", @class="inputfield" })
                @Model.User.Email
            </div>
        </div>

    So, I have this form and the user can change their email address. On postback the POST controller code then asks the business layer whether the change is allowed. If it's not, I want to reset the email address back to the old value which exists in the database and was previously stored. The obvious thing to do would be to modify the model. Here's the Controller logic block that deals with that:

        // did user change email?
        if (!string.IsNullOrEmpty(oldEmail) && user.Email != oldEmail)
        {
            if (userBus.DoesEmailExist(user.Email))
            {
                userBus.ValidationErrors.Add("New email address exists already. Please…");
                user.Email = oldEmail;
            }
            else
                // allow email change but require verification by forcing a login
                user.IsVerified = false;
        }
        …
        model.user = user;
        return View(model);

    The logic is straightforward - if the new email address is not valid because it already exists, I don't want to display the new email address the user entered, but rather the old one. To do this I change the value on the model, which effectively does this:

        model.user.Email = oldEmail;
        return View(model);

    So when I press the Save button after entering my new email address ([email protected]), here's what comes back in the rendered view: Notice that the textbox value and the raw displayed model value are different. The textbox displays the POST value, the raw value displays the actual model value. This means that MVC renders the textbox value from the POST data rather than from the view data when an HTTP POST is active. Now I don't know about you, but this is not the behavior I expected - initially. This behavior effectively means that I cannot modify the contents of the textbox from the Controller code if using HtmlHelpers for binding. Updating the model for display purposes in a POST has, in effect, no effect. (Apr. 25, 2012 - edited the post heavily based on comments and more experimentation)

    What should the behavior be?
    After getting quite a few comments on this post I quickly realized that the behavior I described above is actually the behavior you'd want in 99% of the binding scenarios. You do want to get the POST values back into your input controls at all times, so that the data displayed on a form for the user matches what they typed.
    So if an error occurs, the error doesn't mysteriously disappear, getting replaced either with a default value or some value that you changed on the model on your own. Makes sense. Still, it is a little non-obvious, because the way you create the UI elements with MVC, it certainly looks like you are binding to the model value:

        @Html.TextBoxFor( mod => mod.User.Email, new { type="email", @class="inputfield", required="required" })

    and so unless one understands a little bit about how the model binder works, this is easy to trip up on. At least it was for me. Even though I'm telling the control which model value to bind to, that model value is only used initially on GET operations. After that, ModelState/POST values provide the display value.

    Workarounds
    The default behavior should be fine for 99% of binding scenarios. But if you do need to fix up values based on your model rather than the default POST values, there are a number of ways that you can work around this. Initially when I ran into this, I couldn't figure out how to set the value using code, and so the simplest solution to me was simply to not use the MVC Html Helper for the specific control and explicitly bind the model via HTML markup and a @Razor expression:

        <input type="text" name="User.Email" id="User_Email" value="@Model.User.Email" />

    And this produces the right result. This is easy enough to create, but feels a little out of place when using the @Html helpers for everything else. As you can see by the difference in the name and id values, you also are forced to remember the naming conventions that MVC imposes in order for ModelBinding to work properly, which is a pain to remember and set manually (name is the same as the property with . syntax, id replaces dots with underscores).

    Use the ModelState
    Some of my original confusion came because I didn't understand how the model binder works. The model binder basically maintains ModelState on a postback, which holds a value and binding errors for each of the postback values submitted on the page that can be mapped to the model. In other words, there's one ModelState entry for each bound property of the model. Each ModelState entry contains a value property that holds AttemptedValue and RawValue properties. The AttemptedValue is essentially the POST value retrieved from the form. The RawValue is the value that the model holds. When MVC binds controls like @Html.TextBoxFor() or @Html.TextBox(), it always binds model values on a GET operation. On a POST operation, however, it'll always use the AttemptedValue to display the control. MVC binds using the ModelState on a POST operation, not the model's value. So, if you want the behavior that I was expecting originally, you can actually get it by clearing the ModelState in the controller code:

        ModelState.Clear();

    This clears out all the captured ModelState values and effectively binds to the model. Note this will produce very similar results - in fact, if there are no binding errors you see exactly the same behavior as if binding from ModelState, because the model has been updated from the ModelState already, and binding to the updated values most likely produces the same values you would get with POST back values. The big difference though is that any values that couldn't bind - like, say, putting a string into a numeric field - will now not display back the value the user typed, but the default field value or whatever you changed the model value to. This is the behavior I was actually expecting previously. But - clearing out all values might be a bit heavy-handed.
    You might want to fix up one or two values in a model, but rarely would you want every value to rebind from the model. So, you can also clear out individual values on an as-needed basis:

        if (userBus.DoesEmailExist(user.Email))
        {
            userBus.ValidationErrors.Add("New email address exists already. Please…");
            user.Email = oldEmail;
            ModelState.Remove("User.Email");
        }

    This allows you to remove a single value from the ModelState, and effectively allows you to replace that value for display from the model.

    Why?
    While researching this I came across a post from Microsoft's Brad Wilson, who describes the default binding behavior best in a forum post:

    The reason we use the posted value for editors rather than the model value is that the model may not be able to contain the value that the user typed. Imagine in your "int" editor the user had typed "dog". You want to display an error message which says "dog is not valid", and leave "dog" in the editor field. However, your model is an int: there's no way it can store "dog". So we keep the old value. If you don't want the old values in the editor, clear out the Model State. That's where the old value is stored and pulled from the HTML helpers.

    There you have it. It's not the most intuitive behavior, but in hindsight this behavior does make some sense, even if at first glance it looks like you should be able to update values from the model. The solution of clearing ModelState works and is a reasonable one, but you have to know about some of the innards of ModelState and how it actually works to figure that out.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET MVC.

    Read the article

  • SQL SERVER – What is Page Life Expectancy (PLE) Counter

    - by pinaldave
    During performance tuning consultations, there are plenty of counters and values I come across. Today we will quickly talk about the Page Life Expectancy counter, which is commonly known as PLE as well. You can find the value of the PLE by running the following query.

        SELECT [object_name], [counter_name], [cntr_value]
        FROM sys.dm_os_performance_counters
        WHERE [object_name] LIKE '%Manager%'
        AND [counter_name] = 'Page life expectancy'

    The recommended value of the PLE counter is 300 seconds. I have seen this value as low as 45 seconds on busy systems, and as high as 1250 seconds on unused systems. Page Life Expectancy is the number of seconds a page will stay in the buffer pool without references. In simple words, if your page stays longer in the buffer pool (the in-memory cache area), your PLE is higher, leading to better performance, as every time a request comes in there is a good chance it will find its data in the cache itself instead of going to the hard drive to read the data. Now check your system and post back what this counter's value is for you at various times of the day. Does this counter relate in any way to performance issues on your system? Note: There are various other counters which are important to discuss during performance tuning, and this counter is not everything. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
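    If you want to poll this counter from application code rather than SSMS, below is a minimal C# sketch built around the same query. The program shell and connection string are assumptions for illustration (they are not part of the original post); point the connection string at your own instance.

        using System;
        using System.Data.SqlClient;

        class PleCheck
        {
            static void Main()
            {
                // Assumed connection string - adjust for your own server.
                const string connectionString = "Server=.;Database=master;Integrated Security=true";
                const string query = @"
                    SELECT [cntr_value]
                    FROM sys.dm_os_performance_counters
                    WHERE [object_name] LIKE '%Manager%'
                    AND [counter_name] = 'Page life expectancy'";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(query, connection))
                {
                    connection.Open();
                    // cntr_value is a bigint, so ExecuteScalar returns a boxed long.
                    long ple = (long)command.ExecuteScalar();
                    Console.WriteLine("Page Life Expectancy: {0} seconds", ple);
                }
            }
        }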

    Read the article

  • That’s a wrap! Almost, there’s still one last chance to attend a SQL in the City event in 2012

    - by Red and the Community
    The communities team are back from the SQL in the City multi-city US tour, and we are delighted to have met so many happy SQL Server professionals and Red Gate customers. We set out to run a series of back-to-back events in order to meet, talk to and delight as many SQL Server and Red Gate enthusiasts as possible in 5 different cities in 11 days. We did it! The attendees had a good time too, and 99% of them would attend another SQL in the City event in 2013 – so it seems we left an impression. There were a range of topics on the event agenda, including ‘The Whys & Hows of Continuous Integration’, ‘Database Maintenance Essentials’, ‘Red Gate tools – The Complete Lifecycle’, ‘Automated Deployment: Application And Database Releases Without The Headache’, ‘The Ten Commandments of SQL Server Monitoring’ and many more. Videos and slides from the events will be posted to the event website in November, after our last event of 2012.

    SQL in the City Seattle – November 5
    Join us for free and hear from some of the very best names in the SQL Server world. SQL Server MVPs such as Steve Jones, Grant Fritchey, Brent Ozar, Gail Shaw and more will be presenting at the Bell Harbor conference center for one day only. We’re even taking on board some of the recent attendee suggestions for how we can improve the events (feedback from the 65% of attendees who came to our US tour events). First off, we’re extending the drinks celebration in the evening! Rather than just a 30-minute drink and run, attendees will have up to 2 hours to enjoy free drinks, relax and network in a fantastic environment amongst some really smart, like-minded professionals. If you’re interested in expanding your SQL Server knowledge or would like to learn more about Red Gate tools, get yourself registered for the last SQL in the City event of 2012. It’s free, fun and we’re very friendly! I look forward to seeing you in Seattle on Monday November 5. Cheers, Annabel.

    Read the article

  • sudo: /usr/lib/sudo/sudoers.so must be owned by uid 0

    - by 7UR7L3
    Whenever I try to do anything at all that requires my password, it returns this:

        u7ur7l3@ubuntu:~$ sudo
        sudo: /usr/lib/sudo/sudoers.so must be owned by uid 0
        sudo: fatal error, unable to load plugins
        u7ur7l3@ubuntu:~$

    So I can't install anything from the Software Center / package manager or run any commands in terminal that require my password. I can log in, but that's pretty much it. I accidentally changed the permissions of some files, then changed some more trying to fix it :/. Now I'm completely lost as to what to do. This is what happened when I tried to get sudo working again using pkexec:

        u7ur7l3@ubuntu:~$ pkexec chown root /usr/lib/sudo/sudoers.so
        Error getting authority: Error initializing authority: Error calling StartServiceByName for org.freedesktop.PolicyKit1: GDBus.Error:org.freedesktop.DBus.Error.Spawn.ExecFailed: Failed to execute program /usr/lib/dbus-1.0/dbus-daemon-launch-helper: Success
        u7ur7l3@ubuntu:~$ sudo ls
        sudo: /usr/lib/sudo/sudoers.so must be owned by uid 0
        sudo: fatal error, unable to load plugins

    To change permissions I was using Root Actions (a Dolphin service/plugin), so history doesn't show me the permission changes. I just realized that sounds don't work at all anymore. When I go into Phonon, my default settings and playback devices aren't even there. Also I don't have the option to shut down; I can only log out or leave.

    Read the article

  • NorthWest Arkansas TechFest

    - by dmccollough
    David Walker is taking Tulsa TechFest on the road to NorthWest Arkansas.

    When: Thursday, July 8th 2010
    Where: Center for Nonprofits @ St. Mary’s, 1200 West Walnut Street, Rogers, AR 72756. 479-936-8218. Map it with Bing!

    What is NorthWest Arkansas TechFest? It is a technical conference with a primary focus on providing training/teaching sessions that are immediately beneficial to the broadest range of IT professionals in their day-to-day jobs. We accomplish this with numerous national and international speakers delivering 75-minute sessions. It is a charitable non-profit event organized by local area volunteers. Even though it is a free event, we ask that you support the community and PLEASE bring TWO CANS or TWO BUCKS. All canned food will be donated to the NWA Food Bank and all proceeds will be donated to The Jones Center. Since our first event in the Tulsa area back in 1996, many other communities have been following our example by hosting their own TechFest events: Vancouver TechFest, Houston TechFest, Dallas TechFest, Alberta TechFest and Indy TechFest. We are very PROUD to now bring the event to NorthWest Arkansas!

    Who should attend?
    Every IT Professional
    IT Job seekers, IT Recruiters and Hiring Managers
    Developers of all languages
    Graphic and Web Designers
    Infrastructure, IT and System Administrators
    eMarketing Professionals
    Project Managers
    Compliance Managers
    IT Directors and Managers
    Chief Compliance Officers
    Chief Security Officers
    CIOs/CTOs
    CEOs/Executive Officers

    With this many hours of training, anyone in the IT industry, or wanting to get into it, will definitely find interesting and instructional presentations by professional speakers. Want to keep informed? More information can be found here.

    Read the article

  • How to implement RLE in a tilemap?

    - by Smallbro
    Currently I've been using a 3D array for my tiles in a 2D world; the 3D side comes in when moving down into caves and whatnot. Now this is not memory efficient, so I switched over to a 2D array and can now have much larger maps. The only issue I'm having now is that it seems my tiles cannot occupy the same space as a tile on a different z level. My current structure means that each block has its own z variable. This is what it used to look like:

        map.blockData[x][y][z] = new Block();

    however now it works like this:

        map.blockData[x][y] = new Block(z);

    I'm not sure why, but if I decide to use the same space on, say, the floor below, it won't allow me to. Does anyone have any ideas on how I can add a z-axis to my 2D array? I'm using Java but I reckon the concept carries across different languages. Edit: As Will posted, RLE sounds like the best method for achieving a fast 3D array. However I'm struggling to understand how I would even start to implement it. Would I create a 4D array, the 4th dimension being something which controls how many to skip? Or would the x-axis simply change altogether and have large gaps in between - for example, [5][y][z] would skip 5 tiles? Is there something really obvious here which I am missing? The number of z levels I'm trying to have is around 66; it would be preferable to have up to or more than 1000 in x and y.
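    To make the RLE idea concrete, here is a minimal sketch of one common approach: run-length encode each (x, y) column along z, so long runs of identical blocks collapse into (count, id) pairs. Neither the question nor its replies include an implementation, so the RleColumn type and all its member names below are illustrative assumptions; it is written in C# to match the other code on this page, but as the asker notes, the concept carries across languages.

        using System;
        using System.Collections.Generic;

        // One run-length encoded column of tiles at a fixed (x, y) position.
        // Instead of storing 1000+ z-levels individually, the column stores runs:
        // "37 levels of stone, then 1 level of air, then 62 levels of stone, ...".
        public sealed class RleColumn
        {
            // Each run is (Count, BlockId), ordered from z = 0 upward.
            private readonly List<(int Count, byte BlockId)> runs = new List<(int Count, byte BlockId)>();

            public RleColumn(int depth, byte fillBlock = 0)
            {
                runs.Add((depth, fillBlock)); // a new column is one solid run
            }

            public byte Get(int z)
            {
                foreach (var (count, blockId) in runs)
                {
                    if (z < count) return blockId;
                    z -= count; // skip past this run
                }
                throw new ArgumentOutOfRangeException(nameof(z));
            }

            public void Set(int z, byte blockId)
            {
                for (int i = 0; i < runs.Count; i++)
                {
                    var (count, existing) = runs[i];
                    if (z >= count) { z -= count; continue; } // not in this run yet
                    if (existing == blockId) return;          // no change needed

                    // Split the run into: [z of existing] [1 of new] [rest of existing].
                    runs.RemoveAt(i);
                    int insertAt = i;
                    if (z > 0) runs.Insert(insertAt++, (z, existing));
                    runs.Insert(insertAt++, (1, blockId));
                    if (count - z - 1 > 0) runs.Insert(insertAt, (count - z - 1, existing));
                    return; // (merging adjacent equal runs is left as an exercise)
                }
                throw new ArgumentOutOfRangeException(nameof(z));
            }
        }

    The world is then a plain 2D array of columns, e.g. columns[x][y] = new RleColumn(66), and columns[x][y].Set(5, 3) places block id 3 at z level 5 without disturbing anything else in the column.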

    Read the article

  • Creating HTML5 Offline Web Applications with ASP.NET

    - by Stephen Walther
    The goal of this blog entry is to describe how you can create HTML5 Offline Web Applications when building ASP.NET web applications. I describe the method that I used to create an offline Web application when building the JavaScript Reference application. You can read about the HTML5 Offline Web Application standard by visiting the following links:

    Offline Web Applications
    Firefox Offline Web Applications
    Safari Offline Web Applications

    Currently, the HTML5 Offline Web Applications feature works with all modern browsers with one important exception. You can use Offline Web Applications with Firefox, Chrome, and Safari (including iPhone Safari). Unfortunately, however, Internet Explorer does not support Offline Web Applications (not even IE 9).

    Why Build an HTML5 Offline Web Application?
    The official reason to build an Offline Web Application is so that you do not need to be connected to the Internet to use it. For example, you can use the JavaScript Reference Application when flying in an airplane, riding a subway, or hiding in a cave in Borneo. The JavaScript Reference Application works great on my iPhone even when I am completely disconnected from any network. The following screenshot shows the JavaScript Reference Application running on my iPhone when airplane mode is enabled (notice the little orange airplane). Admittedly, it is becoming increasingly difficult to find locations where you can’t get Internet access. A second, and possibly better, reason to create Offline Web Applications is speed. An Offline Web Application must be downloaded only once. After it gets downloaded, all of the files required by your Web application (HTML, CSS, JavaScript, images) are stored persistently on your computer. Think of Offline Web Applications as providing you with a super browser cache. Normally, when you cache files in a browser, the files are cached on a file-by-file basis. For each HTML, CSS, image, or JavaScript file, you specify how long the file should remain in the cache by setting cache headers. Unlike the normal browser caching mechanism, the HTML5 Offline Web Application cache is used to specify a caching policy for an entire set of files. You use a manifest file to list the files that you want to cache, and these files are cached until the manifest is changed. Another advantage of using the HTML5 offline cache is that the HTML5 standard supports several JavaScript events and methods related to the offline cache. For example, you can be notified in your JavaScript code whenever the offline application has been updated. You can use JavaScript methods, such as the ApplicationCache.update() method, to update the cache programmatically.

    Creating the Manifest File
    The HTML5 Offline Cache uses a manifest file to determine the files that get cached.
    Here’s what the manifest file looks like for the JavaScript Reference application:

        CACHE MANIFEST
        # v30
        Default.aspx

        # Standard Script Libraries
        Scripts/jquery-1.4.4.min.js
        Scripts/jquery-ui-1.8.7.custom.min.js
        Scripts/jquery.tmpl.min.js
        Scripts/json2.js

        # App Scripts
        App_Scripts/combine.js
        App_Scripts/combine.debug.js

        # Content (CSS & images)
        Content/default.css
        Content/logo.png
        Content/ui-lightness/jquery-ui-1.8.7.custom.css
        Content/ui-lightness/images/ui-bg_glass_65_ffffff_1x400.png
        Content/ui-lightness/images/ui-bg_glass_100_f6f6f6_1x400.png
        Content/ui-lightness/images/ui-bg_highlight-soft_100_eeeeee_1x100.png
        Content/ui-lightness/images/ui-icons_222222_256x240.png
        Content/ui-lightness/images/ui-bg_glass_100_fdf5ce_1x400.png
        Content/ui-lightness/images/ui-bg_diagonals-thick_20_666666_40x40.png
        Content/ui-lightness/images/ui-bg_gloss-wave_35_f6a828_500x100.png
        Content/ui-lightness/images/ui-icons_ffffff_256x240.png
        Content/ui-lightness/images/ui-icons_ef8c08_256x240.png
        Content/browsers/c8.png
        Content/browsers/es3.png
        Content/browsers/es5.png
        Content/browsers/ff3_6.png
        Content/browsers/ie8.png
        Content/browsers/ie9.png
        Content/browsers/sf5.png

        NETWORK:
        Services/EntryService.svc
        http://superexpert.com/resources/JavaScriptReference/

    A Cache Manifest file always starts with the line of text CACHE MANIFEST. In the manifest above, all of the CSS, image, and JavaScript files required by the JavaScript Reference application are listed. For example, the Default.aspx ASP.NET page, jQuery library, jQuery UI library, and several images are listed. Notice that you can add comments to a manifest by starting a line with the hash character (#). I use comments in the manifest above to group JavaScript and image files. Finally, notice that there is a NETWORK: section of the manifest. You list any file that you do not want to cache (any file that requires network access) in this section. In the manifest above, the NETWORK: section includes the URL for a WCF Service named EntryService.svc. This service is called to get the JavaScript entries displayed by the JavaScript Reference. There are two important things that you need to be aware of when using a manifest file. First, all relative URLs listed in a manifest are resolved relative to the manifest file. The URLs listed in the manifest above are all resolved relative to the root of the application because the manifest file is located in the application root. Second, whenever you make a change to the manifest file, browsers will download all of the files contained in the manifest (all of them). For example, if you add a new file to the manifest, then any browser that supports the Offline Cache standard will detect the change in the manifest and download all of the files listed in the manifest automatically. If you make changes to files in the manifest (for example, modify a JavaScript file), then you need to make a change in the manifest file in order for the new version of the file to be downloaded. The standard way of updating a manifest file is to include a comment with a version number. The manifest above includes a # v30 comment. If you make a change to a file, then you need to modify the comment to be # v31 in order for the new file to be downloaded.

    When Are Updated Files Downloaded?
    When you make changes to a manifest, the changes are not reflected the very next time you open the offline application in your web browser. Your web browser will download the updated files in the background.
    This can be very confusing when you are working with JavaScript files. If you make a change to a JavaScript file, and you have cached the application offline, then the changes to the JavaScript file won’t appear when you reload the application. The HTML5 standard includes new JavaScript events and methods that you can use to track changes and make changes to the Application Cache. You can use the ApplicationCache.update() method to initiate an update to the application cache, and you can use the ApplicationCache.swapCache() method to switch to the latest version of a cached application. My heartfelt recommendation is that you do not enable your application for offline storage until after you finish writing your application code. Otherwise, debugging the application can become a very confusing experience.

    Offline Web Applications versus Local Storage
    Be careful to not confuse the HTML5 Offline Web Application feature and the HTML5 Local Storage (aka DOM storage) feature. The JavaScript Reference Application uses both features. HTML5 Local Storage enables you to store key/value pairs persistently. Think of Local Storage as a super cookie. I describe how the JavaScript Reference Application uses Local Storage to store the database of JavaScript entries in a separate blog entry. Offline Web Applications enable you to store static files persistently. Think of Offline Web Applications as a super cache.

    Creating a Manifest File in an ASP.NET Application
    A manifest file must be served with the MIME type text/cache-manifest. In order to serve the JavaScript Reference manifest with the proper MIME type, I added two files to the JavaScript Reference Application project:

    Manifest.txt – This text file contains the actual manifest file.
    Manifest.ashx – This generic handler sends the Manifest.txt file with the MIME type text/cache-manifest.

    Here’s the code for the generic handler:

        using System.Web;

        namespace JavaScriptReference {

            public class Manifest : IHttpHandler {

                public void ProcessRequest(HttpContext context) {
                    context.Response.ContentType = "text/cache-manifest";
                    context.Response.WriteFile(context.Server.MapPath("Manifest.txt"));
                }

                public bool IsReusable {
                    get { return false; }
                }
            }
        }

    The Default.aspx file contains a reference to the manifest. The opening HTML tag in the Default.aspx file looks like this:

        <html manifest="Manifest.ashx">

    Notice that the HTML tag contains a manifest attribute that points to the Manifest.ashx generic handler. Internet Explorer simply ignores this attribute. Every other modern browser will download the manifest when the Default.aspx page is requested.

    Seeing the Offline Web Application in Action
    The experience of using an HTML5 Web Application is different with different browsers. When you first open the JavaScript Reference application with Firefox, you get the following warning: Notice that you are provided with the choice of whether you want to use the application offline or not. Browsers other than Firefox, such as Chrome and Safari, do not provide you with this choice. Chrome and Safari will create an offline cache automatically. If you click the Allow button, then Firefox will download all of the files listed in the manifest. You can view the files contained in the Firefox offline application cache by typing about:cache in the Firefox address bar. You can view the actual items being cached by clicking the List Cache Entries link. The Offline Web Application experience is different in the case of Google Chrome.
    You can view the entries in the offline cache by opening the Developer Tools (hit Shift+CTRL+I), selecting the Storage tab, and selecting Application Cache. Notice that you can view the status of the Application Cache. In the screenshot above, the status is UNCACHED, which means that the files listed in the manifest have not been downloaded and cached yet. The different possible values for the status are included in the HTML5 Offline Web Application standard:

    UNCACHED – The Application Cache has not been initialized.
    IDLE – The Application Cache is not currently being updated.
    CHECKING – The Application Cache is being fetched and checked for updates.
    DOWNLOADING – The files in the Application Cache are being updated.
    UPDATEREADY – There is a new version of the Application.
    OBSOLETE – The contents of the Application Cache are obsolete.

    Summary
    In this blog entry, I provided a description of how you can use the HTML5 Offline Web Application feature in the context of an ASP.NET application. I described how this feature is used with the JavaScript Reference Application to store the entire application on a user’s computer. By taking advantage of this new feature of the HTML5 standard, you can improve the performance of your ASP.NET web applications by requiring users of your web application to download your application once and only once. Furthermore, you can enable users to take advantage of your applications anywhere -- regardless of whether or not they are connected to the Internet.
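    As a footnote to the versioning discussion above: rather than hand-editing the # v30 comment on every deployment, the handler can generate it. The variation below on the Manifest.ashx handler is a sketch of my own, not code from the original post; it appends a version comment derived from the newest last-write time of the files the manifest lists, so touching any cached file automatically changes the manifest and triggers a re-download.

        using System;
        using System.IO;
        using System.Linq;
        using System.Web;

        namespace JavaScriptReference {

            // Serves Manifest.txt but appends an auto-computed version comment.
            public class VersionedManifest : IHttpHandler {

                public void ProcessRequest(HttpContext context) {
                    context.Response.ContentType = "text/cache-manifest";

                    string manifestPath = context.Server.MapPath("Manifest.txt");
                    string[] lines = File.ReadAllLines(manifestPath);

                    // Newest write time across the files listed in the manifest,
                    // skipping the header, comments, section names, and external URLs.
                    DateTime latest = lines
                        .Where(l => l.Length > 0 && !l.StartsWith("#") &&
                                    !l.EndsWith(":") && !l.Contains("//") &&
                                    l != "CACHE MANIFEST")
                        .Select(l => context.Server.MapPath(l))
                        .Where(File.Exists)
                        .Select(File.GetLastWriteTimeUtc)
                        .DefaultIfEmpty(File.GetLastWriteTimeUtc(manifestPath))
                        .Max();

                    context.Response.Write(string.Join(Environment.NewLine, lines));
                    context.Response.Write(Environment.NewLine + "# version " + latest.Ticks);
                }

                public bool IsReusable {
                    get { return false; }
                }
            }
        }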

    Read the article

  • ASP.NET MVC 2 RTM Unit Tests not compiling

    - by nmarun
    I found something weird this time when it came to the ASP.NET MVC 2 release. Only a handful of people ‘made noise’ about the release… at least on the asp.net blog site; usually there’s a big ‘WOOHAA… <something> is released’ kind of thing. Hmm… but here’s the reason I’m writing this post. I’m not sure how many of you read the release notes before downloading the version… I did, I did, I did. Now there’s a ‘Known issues’ section in the document, and I’m quoting the text as is from this section:

    Unit test project does not contain reference to ASP.NET MVC 2 project: If the Solution Explorer window is hidden in Visual Studio, when you create a new ASP.NET MVC 2 Web application project and you select the option Yes, create a unit test project in the Create Unit Test Project dialog box, the unit test project is created but does not have a reference to the associated ASP.NET MVC 2 project. When you build the solution, Visual Studio will display compilation errors and the unit tests will not run. There are two workarounds. The first workaround is to make sure that the Solution Explorer is displayed when you create a new ASP.NET MVC 2 Web application project. If you prefer to keep Solution Explorer hidden, the second workaround is to manually add a project reference from the unit test project to the ASP.NET MVC 2 project.

    This definitely looks like a bug to me; see below for a visual: At the top right corner you’ll see that the Solution Explorer is set to auto-hide, and there’s no reference in the TestMvc2 project, and that is the reason we get compilation errors without even writing a single line of code. So thanks to <VeryBigFont>ME</VeryBigFont> and <VerySmallFont>Microsoft</VerySmallFont>, we’ve shown the world how to resolve a major issue and to live in Peace with the rest of humanity!

    Read the article

  • "Mac" vs "OS X" vs "Mac OS X"

    - by Brian Campbell
    I am writing server software that gives administrators options that apply only to Mac OS X clients. When naming those options, I need to decide how I am going to refer to such clients. I can see three choices: "Mac", "OS X", and "Mac OS X". In context, I feel that "Mac OS X" might be a little clumsy: "Default protocol for Mac OS X clients"; "Change Mac OS X protocol". In discussing this with my boss, he suggests "OS X", as that's the name for the OS itself, while I think that "Mac" is more recognizable. While "OS X" is technically correct and is what Apple recommends, I feel that it has a lot less name recognition than "Mac"; in particular, administrators who work in a Windows-only environment may not even recognize "OS X" and may wonder what the option is about, while I think everyone knows what "Mac" refers to. In looking at what several popular pieces of software choose, I see that Microsoft has "Office for Mac". Adobe calls it "Macintosh" (which sounds very outdated; I believe that Apple stopped using "Macintosh" years ago). Firefox uses "Mac OS X". Google has "Google Software Downloads for Mac". I don't see many popular pieces of software that refer to it solely as OS X; it seems that either "Mac OS X" or "Mac" is used most often. Apple does refer to it as OS X, but I think the fact that it's coming from Apple provides the disambiguation that you need, so it's not confusing coming from them, while it may be in another context. Is there any good solution for this? Should I just use the somewhat clumsy "Mac OS X" everywhere (or at least, the first time I refer to it in any given screen)? Obviously, my boss has final say, but I'd like to be able to provide a coherent argument.

    Read the article

  • How Hard Can It Be?

    - by David Totzke
    I mean seriously.  Let’s imagine for a moment that by some stroke of luck or genius or cosmic accident you come to be the owner of sex.com.  You’d think you had won the lottery.  That would be like having a license to print money.  I mean really.  Sex is the most searched term on the entire Internet.  Even without any SEO you’d think that your site would show up on the first page of results on Google. You would think that; and you’d be wrong.  At least in the case of the current owners of that domain name anyways.  The details can be found here, but suffice it to say that Escom LLC has managed to fuck it up.  They’ve been forced into bankruptcy by their creditors.  Something doesn’t smell quite right with the whole thing.  Some guy named Mike Mann (please God, don’t let it be this Mike Mann) is an investor in all three creditors.  WTF? Seriously.  How hard can it be?

    Dave
    Just because I can…

    Read the article

  • Add Free Windows Live Apps to Your Website or Blog

    - by Matthew Guay
    Would you like to use Hotmail, Office Web Apps, Messenger, and more on your website domain?  Here’s how you can add Windows Live to your website for free. Microsoft offers a popular suite of online communications products including Hotmail and Messenger.  Although Hotmail hasn’t been as popular in recent years as Gmail, it is getting a refresh this summer that might make it an even better email solution.  Additionally, the new Office Web Apps offer great compatibility with Office documents, and SkyDrive offers 25 GB of free online file storage for all users, so Windows Live can make a great communications solution for your domain. Note: To sign up for Windows Live for your domain, you will need to be able to add info to your WordPress.com blog or change domain settings manually.

    Getting Started
    Open the Windows Live Custom Domains page (link below) to get started adding Windows Live to your domain.  Your free Windows Live account will let you create up to 500 accounts, so it’s great for teams and groups that want to have customized email addresses, in addition to those who just want an email account for their website. Enter the domain or subdomain you want to add to Windows Live in the box, and then select whether you want to set up Hotmail with it or not.  We want to add email to our domain, so select Set up Windows Live Hotmail for my domain and click Continue. You’ll need to sign in with a Windows Live ID to create the account, or choose to create a new Windows Live account associated with your domain.   Sign in with your Windows Live ID…this can be a Hotmail, Live Messenger, XBOX Live, Zune ID, or Microsoft.com account. Or, enter your information to create a new Windows Live ID if you selected the second option. Now, review your settings and make sure everything looks correct.  Click the I Accept button to set up your account.   Your account is now fully set up, but you’ll need to add or edit DNS information on your site.  The steps are slightly different depending on whether your site is hosted on WordPress.com, on your own server, or with a hosting service.  We’ll show you how to do it either way. First, though, note the information below this box.  You’ll see settings for your Mail setup, security settings, and Messenger integration.  Make note of the settings, especially the circled ones, as we’ll need them in the next step.

    Integrate Windows Live with Your WordPress Blog
    If the domain you added to Windows Live is for your WordPress blog, login to your WordPress dashboard in a separate browser window or tab.  Click the arrow beside Upgrades, and select Domains from the menu. Click the Edit DNS link beside the domain name you’re adding to Windows Live. In the text box on this page, enter the following, replacing Your_info with your code from the Mail Setup box in your Windows Live Dashboard.  Note that this is the blurred section in our screenshots.  It should be a numerical code like 1234567890.pamx1.hotmail.com.

        MX 10 Your_info.pamx1.hotmail.com.
        TXT v=spf1 include:hotmail.com ~all
        CNAME Your_info domains.live.com.

    Click Save DNS records, and your settings are saved to WordPress.  Note that this will only integrate email with your WordPress account; you cannot integrate Messenger with a domain hosted on WordPress.com. Finally, return to your Windows Live Settings page and click Refresh.  If your settings are correct, you’ll now be ready to use Windows Live on your WordPress.com domain.
    Integrate Windows Live with Your Own Server
    If your website is hosted on your own server or hosting account, you’ll need to take a few more steps to add Windows Live to your domain.  This is fairly easy, but the steps may be different depending on your hosting company or registrar.  With some hosts, you may have to contact support to have them add the MX records for you.  Our site’s host uses the popular cPanel for website administration, so here’s how we added the MX entries through cPanel. Log in to your website’s cPanel, and select MX Entry under the Mail section. In the text box on this page, enter the following, replacing Your_info with your code from the Mail Setup box in your Windows Live Dashboard.  Note that this is the blurred section in our screenshots.  It should be a numerical code like 1234567890.pamx1.hotmail.com.

        MX 10 Your_info.pamx1.hotmail.com.

    Now, go back to your cPanel home, and select Advanced DNS Zone Editor under Domains. Here, add a TXT record with the following info:

        Name: yoursite.com.
        TTL: 3600
        TXT Data: v=spf1 include:hotmail.com ~all

    Click Add Record, and your mail integration data is all configured. To integrate Messenger with your own domain, you’ll have to add an SRV entry to your DNS settings.  cPanel doesn’t have an option for this, so we had to contact our site’s hosting company and they added the entry for us.  Copy all of the information in the Messenger box and send it to your domain support, and they should be able to add this for you.  Alternately, if you don’t want or need Messenger, you can simply skip this step. Once all of your settings are in place, return to your Windows Live Settings page and click Refresh.  If your settings are correct, you’ll now be ready to use Windows Live on your domain.

    Create a New Email Account On Your Domain
    Welcome to your new Windows Live admin page!  Now you can add email accounts so you and anyone else you want can access Hotmail and the other Windows Live apps with your domain.  Click Add to add an account. Enter an account name, which will be the email address of the account, e.g. [email protected].  Then enter the user’s name and a password for the account.  By default this will be a temporary password, and the user will have to change it on first log-in, but if you’re setting up this account for yourself, you can uncheck the box and keep this as your standard password. Now, go to www.mail.live.com, and sign in with your new email address and password.  Remember, your email address is the username previously entered followed by @yourdomain.com. To finish setting up the email account, enter your password, secret question and answer, alternate email, and location information.  Click I accept to finish setting up your new email account. Enter the characters in the Captcha to confirm you’re a human, and click Continue. Your new Hotmail inbox will now load, and you’ll have a welcome email in your inbox.  This works the same as normal Hotmail, except this time, your email address is with your own domain. You can now access any of the Windows Live services from the top-level menu. Here’s an Excel spreadsheet open in the new Office Web Apps via SkyDrive on our new Windows Live account. If you set up Messenger access previously, you can now sign in to Windows Live Messenger using your new @yourdomain.com account as well.

    Important Links
    Accessing your Windows Live accounts is easy.
    Simply go to any Windows Live site, such as www.hotmail.com or www.skydrive.com, and sign in with your new Windows Live ID from your domain as normal.  You don’t need a special address to access your account; it works just like the standard public Hotmail accounts. To administer Windows Live for your domain, go to https://domains.live.com/ and sign in with the Windows Live ID you used to create the account.  Here you can add more users, change settings, and view usage details for the Windows Live accounts on your domain.

    Conclusion
    Windows Live is easy to add to your domain, and lets you create up to 500 email addresses for it.  With the upcoming updates to Hotmail and Office Web Apps coming this summer, this can be a nice way to make your domain even more useful.  And with 500 email accounts, you can easily let your team take advantage of your unique address as well. If you’d rather use Google’s online applications with your domain, check out our article on how to add free Google apps to your website or blog.

    Link: Signup for Windows Live for Your Domain

    Read the article

  • Introducing Ben Barreth, Community Builder & Software Developer at GWB

    - by Staff of Geeks
    Please extend a warm welcome to Ben Barreth as the new community builder and full-time software developer at Geeks With Blogs. We've been wanting to add some cool features to the site but haven't had the opportunity until now. Adding Ben to the team should give us a big kick in the right direction. Ben has several years of .NET development experience and is heavily involved in the startup community in Kansas City, including the KC Startup Village as well as his own startup initiatives: Homes for Hackers and FreeIdeas.co. He loves working with people even more than coding and is excited to serve the GWB community in any way possible. Ben originally met Matt Watson as a beta tester for Stackify, the software company that gives developers safe & secure access to troubleshoot in production. Jeff Julian and Matt are old friends and recently decided the site needed new ownership to carry it forward and build the enhancements it deserves. The site management transferred in October, and Matt quickly began looking for a full-time community builder to lead the charge. Ben bumped into Matt once again at a Tech Cocktail event at the Boulevard Brewery where Stackify was presenting, and an alliance was forged. Yes, the beer really IS that good! Which brings us to the biggest question of all: Where do you want Geeks with Blogs to go next? As a contributor to the GWB community, now is your chance to be heard! What are we missing? Features on our radar:

    New templates
    Add a code "formatter" to posts
    Add categories to blog feeds
    Re-skin the site and redesign the logo

    Feel free to contact Ben with further questions and ideas below. We need your help!

    @BenBarreth
    [email protected]
    Cell: 816-332-9770
    www.linkedin.com/in/benbarreth

    Read the article

  • How do I create statistics to make ‘small’ objects appear ‘large’ to the Optimizer?

    - by Maria Colgan
    I recently spoke with a customer who has a development environment that is a tiny fraction of the size of their production environment. His team has been tasked with identifying problem SQL statements in this development environment before new code is released into production. The problem is that the objects in the development environment are so small, the execution plans selected in the development environment rarely reflect what actually happens in production. To ensure the development environment accurately reflects production, in the eyes of the Optimizer, the statistics used in the development environment must be the same as the statistics used in production. This can be achieved by exporting the statistics from production and importing them into the development environment. Even though the underlying objects are a fraction of the size of production, the Optimizer will see them as the same size and treat them the same way as it would in production. Below are the necessary steps to achieve this in their environment. I am using the SH sample schema as the application schema whose statistics we want to move from production to development.

    Step 1. Create a staging table, in the production environment, where the statistics can be stored.
    Step 2. Export the statistics for the application schema, from the data dictionary in production, into the staging table.
    Step 3. Create an Oracle directory on the production system where the export of the staging table will reside, and grant the SH user the necessary privileges on it.
    Step 4. Export the staging table from production using Data Pump export.
    Step 5. Copy the dump file containing the staging table from production to development.
    Step 6. Create an Oracle directory on the development system where the export of the staging table resides, and grant the SH user the necessary privileges on it.
    Step 7. Import the staging table into the development environment using Data Pump import.
    Step 8. Import the statistics from the staging table into the dictionary in the development environment.

    You can get a copy of the script I used to generate this post here. +Maria Colgan

    Read the article

  • Windows Phone 7 ActiveSync error 86000C09 (My First Post!)

    - by Chris Heacock
    Hello fellow geeks! I'm kicking off this new blog with an issue that was a real nuisance but was relatively easy to fix. During a recent Exchange 2003 to 2010 migration, one of the users was getting an error on his Windows Phone 7 device. The error code that popped up on the phone on every sync attempt was 86000C09. We tested the following:

    Different user on the same device: WORKED
    Problem user on a different device: FAILED

    This seemed to point (conclusively) at the user's account as the crux of the issue. This error can come up if a user has too many devices syncing, but he had no other phones. We verified that using the following command:

        Get-ActiveSyncDeviceStatistics -Identity USERID

    Turns out, it was the old familiar inheritable-permissions issue in Active Directory. :-/ This user was not an admin, nor had he ever been one. HOWEVER, his account was cloned from an ex-admin user, so the unchecked box stayed unchecked. We checked the box and voila, data started flowing to his device(s). Here's a refresher on enabling inheritable permissions: Open ADUC, and enable Advanced Features. Then open properties and go to the Security tab for the user in question. Click on Advanced, and the following screen should pop up. Verify that "Include inheritable permissions from this object's parent" is *checked*. You will notice that for certain users, this box keeps getting unchecked. This is normal behavior due to the inbuilt security of Active Directory. People that are in the following groups will have this flag altered by AD:

    Account Operators
    Administrators
    Backup Operators
    Domain Admins
    Domain Controllers
    Enterprise Admins
    Print Operators
    Read-Only Domain Controllers
    Replicator
    Schema Admins
    Server Operators

    Once the box is checked, permissions will flow and the user will be set correctly. Even if the box later becomes unchecked, the user will function normally, as they now have the proper permissions configured. You need to perform this same exercise when enabling users for Lync, but that's another blog. :-)

    -Chris
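    If you have to check (or re-enable) this flag on more than a handful of accounts, it can also be done programmatically. The snippet below is a C# sketch using System.DirectoryServices, not something from the original post, and the distinguished name is a hypothetical placeholder.

        using System;
        using System.DirectoryServices;

        class FixInheritance
        {
            static void Main()
            {
                // Hypothetical DN - substitute the real user's distinguished name.
                string path = "LDAP://CN=Problem User,OU=Users,DC=example,DC=com";

                using (var user = new DirectoryEntry(path))
                {
                    ActiveDirectorySecurity security = user.ObjectSecurity;

                    // True means "Include inheritable permissions..." is UNchecked.
                    Console.WriteLine("Inheritance blocked: " + security.AreAccessRulesProtected);

                    if (security.AreAccessRulesProtected)
                    {
                        // (false, false) re-enables inheritance from the parent container.
                        security.SetAccessRuleProtection(false, false);
                        user.CommitChanges();
                        Console.WriteLine("Inheritance re-enabled.");
                    }
                }
            }
        }

    Note that for members of the protected groups listed above, Active Directory's built-in protection will periodically clear the flag again, which is the "keeps getting unchecked" behavior described in the post.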

    Read the article

  • Domain registration and DNS, what am I actually paying for? [on hold]

    - by jozxyqk
    Long story short, I'm quite confused as to exactly what is offered by domain registration and DNS service sites. When I go to the URL "http://google.com", my PC connects to a name server and gets the IP for "google.com", then connects to the IP and says, "give me the page for http://google.com". AFAIK there are many name servers, and they all cache these bits of information in some hierarchical network, but ultimately a DNS record must come from a single source (not sure what this is called). There are different kinds of records, which might not be an IP but an alias/redirect to other records, for example. Let's say I want my own domain name for some server. Maybe it even has a static IP but I want a nicer thing for people to remember, or my ISP assigns dynamic IPs and I want a URL that always works, or my website is hosted on a shared machine so the browser needs to send "http://mydnsname.com" to the webserver to distinguish it from other requests to the same IP but for different sites. Registering a domain costs a small amount of money per year. Where does this money go (not that I'm complaining :P)? Is that really all it costs to maintain the entire DNS system of nameservers? If I just register the domain and nothing else, what do I get? Is that just reserving a name, or hosting WHOIS information, or have I paid for a DNS record to be hosted? Can a domain alone have a record, such as an IP, or be an alias to another? A bunch of sites out there offer other services in addition to domain registration (I'm assuming they register the domain through another party for me). One example is "dynamic DNS" (DDNS), but isn't this just a regular DNS record that's updated regularly? Does it cost extra to update more often? Without DDNS, can a DNS record still point to an IP? I've also seen the term "managed DNS" and have no idea where that fits in.

    Read the article

  • Should Development / Testing / QA / Staging environments be similar?

    - by Walter White
Hi all, after much time and effort, we're finally using Maven to manage our application lifecycle for development. We still, unfortunately, use Ant to build an EAR before deploying to Test / QA / Staging.

While we've made that leap forward, developers are still free to do as they please when testing their code. One issue we have is that half our team tests on Tomcat and the other half on Jetty. I slightly prefer Jetty over Tomcat, but regardless, we use WebSphere (WAS) for all the other environments.

My question is: should we develop on the same application server we're deploying to? We've had numerous bugs come up from these differences in environments; Tomcat, Jetty, and WAS are different under the hood. My opinion is that we should all develop on what we're deploying to production with, so we avoid the classic "well, it worked fine on my machine" problem. While I prefer Jetty, I'd just as soon we all work in the same environment, even if it means deploying to WAS, which is slow and cumbersome.

What are your team dynamics like? Our lead developers stepped down from the team and development has been a free-for-all since then.

Walter
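
If we do end up standardizing on one container for local testing, pinning it in the shared POM would at least make the choice automatic for everyone. A rough sketch using the Jetty plugin (the plugin coordinates and configuration are assumptions to check against whatever Maven and Jetty versions you actually run):

<!-- In the shared pom.xml: every developer gets the same container via "mvn jetty:run" -->
<plugin>
    <groupId>org.mortbay.jetty</groupId>
    <artifactId>maven-jetty-plugin</artifactId>
    <configuration>
        <!-- Rescan classes every 5 seconds and hot-redeploy on changes -->
        <scanIntervalSeconds>5</scanIntervalSeconds>
    </configuration>
</plugin>

That still wouldn't make local runs match WAS under the hood, of course; it only removes the Tomcat-vs-Jetty split within the team.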

    Read the article

  • AngularJS in 60-ish Minutes – The eBook

    - by dwahlin
Back in April of 2013 I published a video titled AngularJS in 60-ish Minutes on YouTube that focused on learning the fundamentals of AngularJS such as data binding, controllers, modules, factories/services and more (watch it by clicking the link above or by scrolling to the bottom of this post). One of the people who watched the video was Ian Smith (his blog is at http://fastandfluid.blogspot.com). But Ian did much more than just watch it. He took the time to transcribe the audio into text, added screenshots, and included the time at which each topic appears in the original video (an example page is shown in the original post). The funny thing about this whole story is that I’m currently working on an AngularJS eBook concept that I plan to publish to Amazon.com, called AngularJS JumpStart, which is also based on the video. It follows the same general format, and I even paid a transcription company to generate a document for me a few months back. Ian and I have both developed training materials before, and it turns out we were both thinking along the same lines, which was funny to see when he first showed me what he created. I’m extremely appreciative of Ian for taking the time to transcribe the video (thank him if you use the document) and hope you find it useful! Download the AngularJS in 60-ish Minutes eBook here, or watch the AngularJS in 60-ish Minutes video itself (embedded in the original post). If you’re interested in more articles, blog posts, and additional information on AngularJS, check out The AngularJS Magazine (a Flipboard magazine) that I started.

    Read the article

  • Learn Domain-Driven Design

    - by Ben Griswold
I just wrote about how I like to present on unfamiliar topics. With that said, Domain-Driven Design (DDD) is no exception. This is yet another area I knew enough about to be dangerous, but I certainly was no expert. As it turns out, researching this topic wasn’t easy. I could be wrong, but it is as if DDD is a secret to which few are privy. If you search the Interwebs, you will likely find little information about DDD until you start rolling over rocks to find that one great write-up, a handful of podcasts and videos, and the Readers’ Digest version of the Blue Book – which apparently you must read if you really want the complete, unabridged skinny on DDD. Even Wikipedia’s write-up is skimpy, which I didn’t know was possible…

Here’s a list of valuable resources. If you, too, are interested in DDD, this is a good starting place:

- Domain-Driven Design: Tackling Complexity in the Heart of Software by Eric Evans
- Domain-Driven Design Quickly, by Abel Avram & Floyd Marinescu
- An Introduction to Domain-Driven Design by David Laribee
- Talking Domain-Driven Design with David Laribee Part 1, Deep Fried Bytes
- Talking Domain-Driven Design with David Laribee Part 2, Deep Fried Bytes
- Eric Evans on Domain Driven Design, .NET Rocks
- Domain-Driven Design Community
- Eric Evans on Domain Driven Design
- Jimmy Nilsson on Domain Driven Design
- Domain-Driven Design, Wikipedia
- What I’ve Learned About DDD Since the Book, Eric Evans
- Domain Driven Design, Alt.Net Podcast
- Applying Domain-Driven Design and Patterns: With Examples in C# and .NET, Jimmy Nilsson
- Domain-Driven Design Discussion Group
- DDD: Putting the Model to Work by Eric Evans
- The Official DDD Site

    Read the article

  • Heaps of Trouble?

    - by Paul White NZ
If you’re not already a regular reader of Brad Schulz’s blog, you’re missing out on some great material.  In his latest entry, he is tasked with optimizing a query run against tables that have no indexes at all.  The problem is, predictably, that performance is not very good.  The catch is that we are not allowed to create any indexes (or even new statistics) as part of our optimization efforts.

In this post, I’m going to look at the problem from a slightly different angle, and present an alternative solution to the one Brad found.  Inevitably, there’s going to be some overlap between our entries, and while you don’t necessarily need to read Brad’s post before this one, I do strongly recommend that you read it at some stage; he covers some important points that I won’t cover again here.

The Example

We’ll use data from the AdventureWorks database, copied to temporary unindexed tables.  A script to create these structures is shown below:

CREATE TABLE #Custs
(
    CustomerID INTEGER NOT NULL,
    TerritoryID INTEGER NULL,
    CustomerType NCHAR(1) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
);
GO
CREATE TABLE #Prods
(
    ProductMainID INTEGER NOT NULL,
    ProductSubID INTEGER NOT NULL,
    ProductSubSubID INTEGER NOT NULL,
    Name NVARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
);
GO
CREATE TABLE #OrdHeader
(
    SalesOrderID INTEGER NOT NULL,
    OrderDate DATETIME NOT NULL,
    SalesOrderNumber NVARCHAR(25) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
    CustomerID INTEGER NOT NULL,
);
GO
CREATE TABLE #OrdDetail
(
    SalesOrderID INTEGER NOT NULL,
    OrderQty SMALLINT NOT NULL,
    LineTotal NUMERIC(38,6) NOT NULL,
    ProductMainID INTEGER NOT NULL,
    ProductSubID INTEGER NOT NULL,
    ProductSubSubID INTEGER NOT NULL,
);
GO
INSERT #Custs
    (CustomerID, TerritoryID, CustomerType)
SELECT C.CustomerID, C.TerritoryID, C.CustomerType
FROM AdventureWorks.Sales.Customer C WITH (TABLOCK);
GO
INSERT #Prods
    (ProductMainID, ProductSubID, ProductSubSubID, Name)
SELECT P.ProductID, P.ProductID, P.ProductID, P.Name
FROM AdventureWorks.Production.Product P WITH (TABLOCK);
GO
INSERT #OrdHeader
    (SalesOrderID, OrderDate, SalesOrderNumber, CustomerID)
SELECT H.SalesOrderID, H.OrderDate, H.SalesOrderNumber, H.CustomerID
FROM AdventureWorks.Sales.SalesOrderHeader H WITH (TABLOCK);
GO
INSERT #OrdDetail
    (SalesOrderID, OrderQty, LineTotal, ProductMainID, ProductSubID, ProductSubSubID)
SELECT D.SalesOrderID, D.OrderQty, D.LineTotal, D.ProductID, D.ProductID, D.ProductID
FROM AdventureWorks.Sales.SalesOrderDetail D WITH (TABLOCK);

The query itself is a simple join of the four tables:

SELECT
    P.ProductMainID AS PID,
    P.Name,
    D.OrderQty,
    H.SalesOrderNumber,
    H.OrderDate,
    C.TerritoryID
FROM #Prods P
JOIN #OrdDetail D
    ON P.ProductMainID = D.ProductMainID
    AND P.ProductSubID = D.ProductSubID
    AND P.ProductSubSubID = D.ProductSubSubID
JOIN #OrdHeader H
    ON D.SalesOrderID = H.SalesOrderID
JOIN #Custs C
    ON H.CustomerID = C.CustomerID
ORDER BY P.ProductMainID ASC
OPTION (RECOMPILE, MAXDOP 1);

Remember that these tables have no indexes at all, and only the single-column sampled statistics SQL Server automatically creates (assuming default settings).  The estimated query plan produced for the test query is shown as a screenshot in the original post.

The Problem

The problem here is one of cardinality estimation – the number of rows SQL Server expects to find at each step of the plan.  The lack of indexes and useful statistical information means that SQL Server does not have the information it needs to make a good estimate.
Every join in the plan shown above estimates that it will produce just a single row as output.  Brad covers the factors that lead to the low estimates in his post.

In reality, the join between the #Prods and #OrdDetail tables will produce 121,317 rows.  It should not surprise you that this has rather dire consequences for the remainder of the query plan.  In particular, it makes a nonsense of the optimizer’s decision to use Nested Loops to join to the two remaining tables.  Instead of scanning the #OrdHeader and #Custs tables once (as it expected), it has to perform 121,317 full scans of each.  The query takes somewhere in the region of twenty minutes to run to completion on my development machine.

A Solution

At this point, you may be thinking the same thing I was: if we really are stuck with no indexes, the best we can do is to use hash joins everywhere.  We can force the exclusive use of hash joins in several ways, the two most common being join and query hints.  A join hint means writing the query using the INNER HASH JOIN syntax; using a query hint involves adding OPTION (HASH JOIN) at the bottom of the query.  The difference is that using join hints also forces the order of the join, whereas the query hint gives the optimizer freedom to reorder the joins at its discretion.

Adding the OPTION (HASH JOIN) hint produces a new estimated plan (again, see the original post for the screenshot).  That version produces the correct output in around seven seconds, which is quite an improvement!  As a purely practical matter, and given the rigid rules of the environment we find ourselves in, we might leave things there.  (We can improve the hashing solution a bit – I’ll come back to that later on.)

Faster Nested Loops

It might surprise you to hear that we can beat the performance of the hash join solution shown above using nested loops joins exclusively, and without breaking the rules we have been set.  The key to this part is to realize that a condition like (A = B) can be expressed as (A <= B) AND (A >= B).  Armed with this tremendous new insight, we can rewrite the join predicates like so:

SELECT
    P.ProductMainID AS PID,
    P.Name,
    D.OrderQty,
    H.SalesOrderNumber,
    H.OrderDate,
    C.TerritoryID
FROM #OrdDetail D
JOIN #OrdHeader H
    ON D.SalesOrderID >= H.SalesOrderID
    AND D.SalesOrderID <= H.SalesOrderID
JOIN #Custs C
    ON H.CustomerID >= C.CustomerID
    AND H.CustomerID <= C.CustomerID
JOIN #Prods P
    ON P.ProductMainID >= D.ProductMainID
    AND P.ProductMainID <= D.ProductMainID
    AND P.ProductSubID = D.ProductSubID
    AND P.ProductSubSubID = D.ProductSubSubID
ORDER BY D.ProductMainID
OPTION (RECOMPILE, LOOP JOIN, MAXDOP 1, FORCE ORDER);

I’ve also added LOOP JOIN and FORCE ORDER query hints to ensure that only nested loops joins are used, and that the tables are joined in the order they appear.  The new estimated execution plan appears in the original post.  This new query runs in under 2 seconds.

Why Is It Faster?

The main reason for the improvement is the appearance of the eager Index Spools, which are also known as index-on-the-fly spools.  If you read my Inside The Optimiser series you might be interested to know that the rule responsible is called JoinToIndexOnTheFly.

An eager index spool consumes all rows from the table it sits above, and builds an index suitable for the join to seek on.  Taking the index spool above the #Custs table as an example, it reads all the CustomerID and TerritoryID values with a single scan of the table, and builds an index keyed on CustomerID.  The term ‘eager’ means that the spool consumes all of its input rows when it starts up.
The index is built in a work table in tempdb, has no associated statistics, and only exists until the query finishes executing.  The result is that each unindexed table is only scanned once, and just for the columns necessary to build the temporary index.  From that point on, every execution of the inner side of the join is answered by a seek on the temporary index – not the base table.

A second optimization is that the sort on ProductMainID (required by the ORDER BY clause) is performed early, on just the rows coming from the #OrdDetail table.  The optimizer has a good estimate for the number of rows it needs to sort at that stage – it is just the cardinality of the table itself.  The accuracy of the estimate there is important because it helps determine the memory grant given to the sort operation.  Nested loops join preserves the order of rows on its outer input, so sorting early is safe.  (Hash joins do not preserve order in this way, of course.)

The extra lazy spool on the #Prods branch is a further optimization that avoids executing the seek on the temporary index if the value being joined (the ‘outer reference’) hasn’t changed from the last row received on the outer input.  It takes advantage of the fact that rows are still sorted on ProductMainID, so if duplicates exist, they will arrive at the join operator one after the other.

The optimizer is quite conservative about introducing index spools into a plan, because creating and dropping a temporary index is a relatively expensive operation.  Its presence in a plan is often an indication that a useful index is missing.

I want to stress that I rewrote the query in this way primarily as an educational exercise – I can’t imagine having to do something so horrible to a production system.

Improving the Hash Join

I promised I would return to the solution that uses hash joins.  You might be puzzled that SQL Server can create three new indexes (and perform all those nested loops iterations) faster than it can perform three hash joins.  The answer, again, is down to the poor information available to the optimizer.  Looking at the hash join plan again: two of the hash joins have single-row estimates on their build inputs.  SQL Server fixes the amount of memory available for the hash table based on this cardinality estimate, so at run time the hash join very quickly runs out of memory.

This results in the join spilling hash buckets to disk, and any rows from the probe input that hash to the spilled buckets also get written to disk.  The join process then continues, and may again run out of memory.  This is a recursive process, which may eventually result in SQL Server resorting to a bailout join algorithm, which is guaranteed to complete eventually, but may be very slow.  The data sizes in the example tables are not large enough to force a hash bailout, but it does result in multiple levels of hash recursion.  You can see this for yourself by tracing the Hash Warning event using the Profiler tool.

The final sort in the plan also suffers from a similar problem: it receives very little memory and has to perform multiple sort passes, saving intermediate runs to disk (the Sort Warnings Profiler event can be used to confirm this).  Notice also that because hash joins don’t preserve sort order, the sort cannot be pushed down the plan toward the #OrdDetail table, as in the nested loops plan.

OK, so now we understand the problems – what can we do to fix them?
We can address the hash spilling by forcing a different order for the joins:

SELECT
    P.ProductMainID AS PID,
    P.Name,
    D.OrderQty,
    H.SalesOrderNumber,
    H.OrderDate,
    C.TerritoryID
FROM #Prods P
JOIN #Custs C
    JOIN #OrdHeader H
        ON H.CustomerID = C.CustomerID
    JOIN #OrdDetail D
        ON D.SalesOrderID = H.SalesOrderID
    ON P.ProductMainID = D.ProductMainID
    AND P.ProductSubID = D.ProductSubID
    AND P.ProductSubSubID = D.ProductSubSubID
ORDER BY D.ProductMainID
OPTION (MAXDOP 1, HASH JOIN, FORCE ORDER);

With this plan, each of the inputs to the hash joins has a good estimate, and no hash recursion occurs.  The final sort still suffers from the one-row estimate problem, and we get a single-pass sort warning as it writes rows to disk.  Even so, the query runs to completion in three or four seconds.  That’s around half the time of the previous hashing solution, but still not as fast as the nested loops trickery.

Final Thoughts

SQL Server’s optimizer makes cost-based decisions, so it is vital to provide it with accurate information.  We can’t really blame the performance problems highlighted here on anything other than the decision to use completely unindexed tables, and not to allow the creation of additional statistics.

I should probably stress that the nested loops solution shown above is not one I would normally contemplate in the real world.  It’s there primarily for its educational and entertainment value.  I might perhaps use it to demonstrate to the sceptical that SQL Server itself is crying out for an index.

Be sure to read Brad’s original post for more details.  My grateful thanks to him for granting permission to reuse some of his material.

Paul White
Email: [email protected]
Twitter: @PaulWhiteNZ

    Read the article

  • A whole site for reviewing of SQL Server MVP Deep Dives

    - by Rob Farley
This book just keeps amazing me. Not only as I read through some chapters for the first time, and others for the second and third times, but also as I read reviews of it written by other people. The guys over at http://sqlperspectives.wordpress.com are a prime example. They’ve been going through each chapter, each writing a review on it, and often getting a guest blogger to write something as well – and they’re clearly getting a lot of stuff out of this brilliant book. Back when I first heard about them doing this, I had offered to be involved, and recently did an interview with them about my chapters (chapter seven and chapter forty). That interview can be found at http://sqlperspectives.wordpress.com/2010/03/20/interview-with-rob-farley/ – and covers how I got into databases, and how I think the database roles in the IT industry are changing. If you don’t have a copy of SQL Server MVP Deep Dives yet, why not get a copy from http://www.sqlservermvpdeepdives.com (or persuade your local bookstore to get some copies in), and read through chapters with these guys? Treat it like a book club, discussing each chapter with others (guest blogging perhaps?), and you’ll probably end up getting even more out of it. Remember that the proceeds of the book go to charity (instead of the authors – we get nothing), so you don’t need to consider that you’re splashing out on a treat for yourself. Think of the kids helped by War Child instead.

    Read the article

  • Adding Custom Fields to RadScheduler for WinForms Appointments

When using RadScheduler for WinForms, it will almost always need to be customized in some way. This could come in the form of custom dialogs, context menus, or even custom appointments. In this blog entry, I am going to explain the steps required to add a custom field to RadScheduler. Click here to download the source code and follow along...

Adding Custom Fields to Appointments

In this example, a custom field for Email will be added to the Appointment class being used by RadScheduler. The process of accomplishing this involves five simple steps.

Step 1: Add a Custom Field to the Data Source

In order to display a custom field in RadScheduler, the field will first need to already exist in, or be added to, the data source. In this example, the database being used is based on the structure of the SchedulerData sample database included with the installation of RadControls for WinForms. Figure 1 shows the structure of the database.
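
The later steps typically revolve around a custom appointment class that carries the new field. Here is a rough sketch of that pattern, based on Telerik's documented approach for custom appointments; treat the exact base-class members (OnPropertyChanged, CreateOccurrenceInstance) as assumptions to verify against your RadControls version:

using Telerik.WinControls.UI;

// Sketch: an Appointment subclass carrying a custom Email field
public class AppointmentWithEmail : Appointment
{
    private string email = string.Empty;

    public string Email
    {
        get { return this.email; }
        set
        {
            if (this.email != value)
            {
                this.email = value;
                // Tell RadScheduler's binding infrastructure the field changed
                this.OnPropertyChanged("Email");
            }
        }
    }

    // Ensure occurrences of recurring appointments also carry the field
    protected override Event CreateOccurrenceInstance()
    {
        return new AppointmentWithEmail();
    }
}

The scheduler's appointment mapping can then bind Email to the new column created in Step 1.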

    Read the article

< Previous Page | 668 669 670 671 672 673 674 675 676 677 678 679  | Next Page >