Search Results

Search found 1423 results on 57 pages for 'crazy'.


  • Using methods on 2 input files - 2nd is printing multiple times - Java

    - by Aaa
    I have the following code to read in text and store it in a hashmap as bigrams (with other methods to sort them by frequency and do very, very basic additive smoothing). I had it working great for one language input file (English) and then I wanted to expand it for a second language input file (Japanese - doesn't matter what it is, I suppose) using the same methods, but the Japanese bigram hashmap is printing out 3 times in a row with different values. I've tried using different text in the input file, making sure there are no gaps in the text, etc. I've also put print statements at certain places in the Japanese part of the code to see if I can get any clues, but all the print statements are printing each time so I can't work out if it is looping at a certain place. I have gone through it with a fine-tooth comb but am obviously missing something and slowly going crazy here - any help would be appreciated. Thanks in advance...

        package languagerecognition2;

        import java.io.*;
        import java.util.*;

        public class Main {

            public static void main(String[] args) {
                // Training English ---------------------------------------------------
                File file = new File("english1.txt");
                StringBuffer contents = new StringBuffer();
                BufferedReader reader = null;
                try {
                    reader = new BufferedReader(new FileReader(file));
                    String test = null;
                    // repeat until all lines are read
                    while ((test = reader.readLine()) != null) {
                        test = test.toLowerCase();
                        char[] charArrayEng = test.toCharArray();
                        HashMap<String, Integer> hashMapEng =
                            new HashMap<String, Integer>(bigrams(charArrayEng));
                        LinkedHashMap<String, Integer> sortedListEng =
                            new LinkedHashMap<String, Integer>(sort(hashMapEng));
                        int sizeEng = sortedListEng.size();
                        System.out.println("Total count of English bigrams is " + sizeEng);
                        LinkedHashMap<String, Integer> smoothedListEng =
                            new LinkedHashMap<String, Integer>(smooth(sortedListEng, sizeEng));
                        // print LinkedHashMap to check values
                        Set set = smoothedListEng.entrySet();
                        Iterator iter = set.iterator();
                        System.out.println("Beginning English");
                        while (iter.hasNext()) {
                            Map.Entry entry = (Map.Entry) iter.next();
                            Object key = entry.getKey();
                            Object value = entry.getValue();
                            System.out.println(key + " : " + value);
                        }
                        System.out.println("End English");
                    } // end while
                } // end try
                catch (FileNotFoundException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                } finally {
                    try {
                        if (reader != null) {
                            reader.close();
                        }
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
                // End training English -----------------------------------------------

                // Training Japanese --------------------------------------------------
                File file2 = new File("japanese1.txt");
                StringBuffer contents2 = new StringBuffer();
                BufferedReader reader2 = null;
                try {
                    reader2 = new BufferedReader(new FileReader(file2));
                    String test2 = null;
                    // repeat until all lines are read
                    while ((test2 = reader2.readLine()) != null) {
                        test2 = test2.toLowerCase();
                        char[] charArrayJap = test2.toCharArray();
                        HashMap<String, Integer> hashMapJap =
                            new HashMap<String, Integer>(bigrams(charArrayJap));
                        //System.out.println("bigrams stage");
                        LinkedHashMap<String, Integer> sortedListJap =
                            new LinkedHashMap<String, Integer>(sort(hashMapJap));
                        //System.out.println("sort stage");
                        int sizeJap = sortedListJap.size();
                        //System.out.println("Total count of Japanese bigrams is " + sizeJap);
                        LinkedHashMap<String, Integer> smoothedListJap =
                            new LinkedHashMap<String, Integer>(smooth(sortedListJap, sizeJap));
                        System.out.println("smooth stage");
                        // print LinkedHashMap to check values
                        Set set2 = smoothedListJap.entrySet();
                        Iterator iter2 = set2.iterator();
                        System.out.println("Beginning Japanese");
                        while (iter2.hasNext()) {
                            Map.Entry entry2 = (Map.Entry) iter2.next();
                            Object key = entry2.getKey();
                            Object value = entry2.getValue();
                            System.out.println(key + " : " + value);
                        } // end while
                        System.out.println("End Japanese");
                    } // end while
                } // end try
                catch (FileNotFoundException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                } finally {
                    try {
                        if (reader2 != null) {
                            reader2.close();
                        }
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
                // end training Japanese ----------------------------------------------
            } // end main (inner)

    Read the article

  • MEF + Plug-In not updating

    - by mybrokengnome
    I asked this on the MEF Codeplex forum already, but I haven't gotten a response yet, so I figured I'd try StackOverflow. Here's the original post if anyone's interested (this is just a copy from it): MEF Codeplex

    "Let me first say that I'm completely new to MEF (just discovered it today) and am very happy with it so far. However, I've run into a problem that is very frustrating. I'm creating an app that will have a plugin architecture and the plugins will only be stored in a single DLL file (or coded into the main app). The DLL file needs to be able to be recompiled during run-time and the app should recognize this and re-load the plugins (I know this is difficult, but it's a requirement). To accomplish this I took the approach covered at http://blog.maartenballiauw.be/category/MEF.aspx (look for WebServerDirectoryCatalog). Basically the idea is to "monitor the plugins folder, copy the new/modified assemblies to the web application's /bin folder and instruct MEF to load its exports from there." This is my code, which is probably not the correct way to do it but it's what I found in some samples around the net. In main():

        string myExecName = Assembly.GetExecutingAssembly().Location;
        string myPath = System.IO.Path.GetDirectoryName(myExecName);

        catalog = new AggregateCatalog();
        pluginCatalog = new MyDirectoryCatalog(myPath + @"/Plugins");
        catalog.Catalogs.Add(pluginCatalog);

        exportContainer = new CompositionContainer(catalog);

        CompositionBatch compBatch = new CompositionBatch();
        compBatch.AddPart(this);
        compBatch.AddPart(catalog);
        exportContainer.Compose(compBatch);

    and:

        private FileSystemWatcher fileSystemWatcher;
        public DirectoryCatalog directoryCatalog;
        private string path;
        private string extension;

        public MyDirectoryCatalog(string path)
        {
            Initialize(path, "*.dll", "*.dll");
        }

        private void Initialize(string path, string extension, string modulePattern)
        {
            this.path = path;
            this.extension = extension;
            fileSystemWatcher = new FileSystemWatcher(path, modulePattern);
            fileSystemWatcher.Changed += new FileSystemEventHandler(fileSystemWatcher_Changed);
            fileSystemWatcher.Created += new FileSystemEventHandler(fileSystemWatcher_Created);
            fileSystemWatcher.Deleted += new FileSystemEventHandler(fileSystemWatcher_Deleted);
            fileSystemWatcher.Renamed += new RenamedEventHandler(fileSystemWatcher_Renamed);
            fileSystemWatcher.IncludeSubdirectories = false;
            fileSystemWatcher.EnableRaisingEvents = true;
            Refresh();
        }

        void fileSystemWatcher_Renamed(object sender, RenamedEventArgs e)
        {
            RemoveFromBin(e.OldName);
            Refresh();
        }

        void fileSystemWatcher_Deleted(object sender, FileSystemEventArgs e)
        {
            RemoveFromBin(e.Name);
            Refresh();
        }

        void fileSystemWatcher_Created(object sender, FileSystemEventArgs e)
        {
            Refresh();
        }

        void fileSystemWatcher_Changed(object sender, FileSystemEventArgs e)
        {
            Refresh();
        }

        private void Refresh()
        {
            // Determine /bin path
            string binPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Plugins");
            string newPath = "";

            // Copy files to /bin
            foreach (string file in Directory.GetFiles(path, extension, SearchOption.TopDirectoryOnly))
            {
                try
                {
                    DirectoryInfo dInfo = new DirectoryInfo(binPath);
                    DirectoryInfo[] dirs = dInfo.GetDirectories();
                    int count = dirs.Count() + 1;
                    newPath = binPath + "/" + count;
                    DirectoryInfo dInfo2 = new DirectoryInfo(newPath);
                    if (!dInfo2.Exists)
                        dInfo2.Create();
                    File.Copy(file, System.IO.Path.Combine(newPath, System.IO.Path.GetFileName(file)), true);
                }
                catch
                {
                    // Not that big a deal... Blog readers will probably kill me for this bit of code :-)
                }
            }

            // Create new directory catalog
            directoryCatalog = new DirectoryCatalog(newPath, extension);
            directoryCatalog.Refresh();
        }

        public override IQueryable<ComposablePartDefinition> Parts
        {
            get { return directoryCatalog.Parts; }
        }

        private void RemoveFromBin(string name)
        {
            string binPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "");
            File.Delete(Path.Combine(binPath, name));
        }

    So all this actually works, and after the end of the code in main my IEnumerable variable is actually filled with all the plugins in the DLL (which, if you follow the code, is located in Plugins/1 so that I can modify the dll in the plugins folder). So now at this point I should be able to re-compile the plugins DLL, drop it into the Plugins folder, have my FileSystemWatcher detect that it's changed, copy it into folder "2", and directoryCatalog should point to the new folder. All this actually works! The problem is, even though it seems like everything is pointed to the right place, my IEnumerable variable is never updated with the new plugins. So close, but yet so far! Any suggestions? I know the downsides of doing it this way, that no dll is actually getting unloaded and causing a memory leak, but it's a Windows App and will probably be started at least once a day, and the plugins are unlikely to change that often, but it's still a requirement from the client that it does this without re-loading the app. Thanks for any help you all can provide, it's driving me crazy not being able to figure this out."
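    For what it's worth, two MEF requirements commonly bite in exactly this scenario, and a minimal sketch of both follows (IPlugin is a hypothetical contract, not from the post). First, an [ImportMany] is only re-filled after the initial composition if it opts in with AllowRecomposition. Second, the container only recomposes when the catalog announces a change via INotifyComposablePartCatalogChanged; the built-in DirectoryCatalog raises that event from Refresh(), but a custom catalog that silently swaps in a new inner DirectoryCatalog never does, so forwarding the inner catalog's events is one possible fix.

        using System;
        using System.Collections.Generic;
        using System.ComponentModel.Composition;
        using System.ComponentModel.Composition.Hosting;
        using System.ComponentModel.Composition.Primitives;

        public interface IPlugin { }  // hypothetical plugin contract

        public class Host
        {
            // Without AllowRecomposition = true this collection is filled once
            // and never updated, no matter what the catalog does later.
            [ImportMany(AllowRecomposition = true)]
            public IEnumerable<IPlugin> Plugins { get; set; }
        }

        // Sketch of a catalog wrapper that tells the container when its parts change.
        public class NotifyingCatalog : ComposablePartCatalog, INotifyComposablePartCatalogChanged
        {
            private readonly DirectoryCatalog inner;

            public NotifyingCatalog(string path)
            {
                inner = new DirectoryCatalog(path);
                // Forward the inner catalog's change notifications so the
                // CompositionContainer knows to recompose imports.
                inner.Changed += (s, e) => { var h = Changed; if (h != null) h(this, e); };
                inner.Changing += (s, e) => { var h = Changing; if (h != null) h(this, e); };
            }

            // DirectoryCatalog.Refresh() raises Changed on the inner catalog.
            public void Refresh() { inner.Refresh(); }

            public override IQueryable<ComposablePartDefinition> Parts
            {
                get { return inner.Parts; }
            }

            public event EventHandler<ComposablePartCatalogChangeEventArgs> Changed;
            public event EventHandler<ComposablePartCatalogChangeEventArgs> Changing;
        }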

    Read the article

  • Top things web developers should know about the Visual Studio 2013 release

    - by Jon Galloway
    Summary for lazy readers:

    - Visual Studio 2013 is now available for download (on the Visual Studio site and on MSDN subscriber downloads)
    - Visual Studio 2013 installs side by side with Visual Studio 2012 and supports round-tripping between Visual Studio versions, so you can try it out without committing to a switch
    - Visual Studio 2013 ships with the new version of ASP.NET, which includes ASP.NET MVC 5, ASP.NET Web API 2, Razor 3, Entity Framework 6 and SignalR 2.0
    - The new ASP.NET release focuses on One ASP.NET, so core features and web tools work the same across the platform (e.g. adding ASP.NET MVC controllers to a Web Forms application)
    - New core features include new templates based on Bootstrap, a new scaffolding system, and a new identity system
    - Visual Studio 2013 is an incredible editor for web files, including HTML, CSS, JavaScript, Markdown, LESS, CoffeeScript, Handlebars, Angular, Ember, Knockout, etc.

    Top links:

    - Visual Studio 2013 content on the ASP.NET site is in the standard new releases area: http://www.asp.net/vnext
    - ASP.NET and Web Tools for Visual Studio 2013 Release Notes
    - Short intro videos on the new Visual Studio web editor features from Scott Hanselman and Mads Kristensen
    - Announcing release of ASP.NET and Web Tools for Visual Studio 2013 post on the official .NET Web Development and Tools Blog
    - Scott Guthrie's post: Announcing the Release of Visual Studio 2013 and Great Improvements to ASP.NET and Entity Framework

    Okay, for those of you who are still with me, let's dig in a bit.

    Quick web dev notes on downloading and installing Visual Studio 2013

    I found Visual Studio 2013 to be a pretty fast install. According to Brian Harry's release post, installing over pre-release versions of Visual Studio is supported. I've installed the release version over pre-release versions, and it worked fine. If you're only going to be doing web development, you can speed up the install if you just select Web Developer tools. Of course, as a good Microsoft employee, I'll mention that you might also want to install some of those other features, like the Store apps for Windows 8 and the Windows Phone 8.0 SDK, but they do download and install a lot of other stuff (e.g. the Windows Phone SDK sets up Hyper-V and downloads several GBs of VMs). So if you're planning just to do web development for now, you can pick just the Web Developer Tools and install the other stuff later. If you've got a fast internet connection, I recommend using the web installer instead of downloading the ISO. The ISO includes all the features, whereas the web installer just downloads what you're installing.

    Visual Studio 2013 development settings and color theme

    When you start up Visual Studio, it'll prompt you to pick some defaults. These are totally up to you - whatever suits your development style - and you can change them later. As I said, these are completely up to you. I recommend either the Web Development or Web Development (Code Only) settings. The only real difference is that Code Only hides the toolbars, and you can switch between them using Tools / Import and Export Settings / Reset.

    [Screenshot: Web Development settings]
    [Screenshot: Web Development (code only) settings]

    Usually I've just gone with Web Development (code only) in the past because I just want to focus on the code, although the Standard toolbar does make it easier to switch default web browsers. More on that later.

    Color theme

    Sigh.
    Okay, everyone's got their favorite colors. I alternate between Light and Dark depending on my mood, and I personally like how the low contrast on the window chrome in those themes puts the emphasis on my code rather than the tabs and toolbars. I know some people got pretty worked up over that, though, and wanted the blue theme back. I personally don't like it - it reminds me of ancient versions of Visual Studio that I don't want to think about anymore. So here's the thing: if you install Visual Studio Ultimate, it defaults to Blue. The other versions default to Light. If you use Blue, I won't criticize you - out loud, that is. You can change themes really easily - either Tools / Options / Environment / General, or the smart way: ctrl+q for quick launch, then type Theme and hit enter.

    Signing in

    During the first run, you'll be prompted to sign in. You don't have to - you can click the "Not now, maybe later" link at the bottom of that dialog. I recommend signing in, though. It's not hooked in with licensing or tracking the kind of code you write to sell you components. It is doing good things, like syncing your Visual Studio settings between computers. More about that here. So, you don't have to, but I sure do.

    Overview of shiny new things in ASP.NET land

    There are a lot of good new things in ASP.NET. I'll list some of my favorites here, but you can read more on the ASP.NET site.

    One ASP.NET

    You've heard us talk about this for a while. The idea is that options are good, but choice can be a burden. When you start a new ASP.NET project, why should you have to make a tough decision - with long-term consequences - about how your application will work? If you want to use ASP.NET Web Forms, but have the option of adding in ASP.NET MVC later, why should that be hard? It's all ASP.NET, right? Ideally, you'd just decide that you want to use ASP.NET to build sites and services, and you could use the appropriate tools (the green blocks below) as you needed them. So, here it is. When you create a new ASP.NET application, you just create an ASP.NET application. Next, you can pick from some templates to get you started... but these are different. They're not "painful decision" templates, they're just some starting pieces. And, most importantly, you can mix and match. I can pick a "mostly" Web Forms template, but include MVC and Web API folders and core references. If you've tried to mix and match in the past, you're probably aware that it was possible, but not pleasant. ASP.NET MVC project files contained special project type GUIDs, so you'd only get controller scaffolding support in a Web Forms project if you manually edited the csproj file. Features in one stack didn't work in others. Project templates were painful choices. That's no longer the case. Hooray! I just did a demo in a presentation last week where I created a new Web Forms + MVC + Web API site, built a model, scaffolded MVC and Web API controllers with EF Code First, added data in the MVC view, viewed it in Web API, then added a GridView to the Web Forms Default.aspx page and bound it to the Model. In about 5 minutes. Sure, it's a simple example, but it's great to be able to share code and features across the whole ASP.NET family.

    Authentication

    In the past, authentication was built into the templates. So, for instance, there was an ASP.NET MVC 4 Intranet Project template which created a new ASP.NET MVC 4 application that was preconfigured for Windows Authentication.
    All of that authentication stuff was built into each template, so they varied between the stacks, and you couldn't reuse them. You didn't see a lot of changes to the authentication options, since they required big changes to a bunch of project templates. Now, the new project dialog includes a common authentication experience. When you hit the Change Authentication button, you get some common options that work the same way regardless of the template or reference settings you've made. These options work on all ASP.NET frameworks, and all hosting environments (IIS, IIS Express, or OWIN for self-host). The default is Individual User Accounts: this is the standard "create a local account, using username / password or OAuth" thing; however, it's all built on the new Identity system. More on that in a second. The one setting that has some configuration to it is Organizational Accounts, which lets you configure authentication using Active Directory, Windows Azure Active Directory, or Office 365.

    Identity

    There's a new identity system. We've taken the best parts of the previous ASP.NET Membership and Simple Identity systems, rolled in a lot of feedback and made big enhancements to support important developer concerns like unit testing and extensibility. I've written long posts about ASP.NET identity, and I'll do it again. Soon. This is not that post. The short version is that I think we've finally got just the right Identity system. Some of my favorite features:

    - There are simple, sensible defaults that work well - you can File / New / Run / Register / Login, and everything works.
    - It supports standard username / password as well as external authentication (OAuth, etc.).
    - It's easy to customize without having to re-implement an entire provider. It's built using pluggable pieces, rather than one large monolithic system.
    - It's built using interfaces like IUser and IRole that allow for unit testing, dependency injection, etc.
    - You can easily add user profile data (e.g. URL, twitter handle, birthday). You just add properties to your ApplicationUser model and they'll automatically be persisted.
    - Complete control over how the identity data is persisted. By default, everything works with Entity Framework Code First, but it's built to support changes from small (modify the schema) to big (use another ORM, store your data in a document database or in the cloud or in XML or in the EXIF data of your desktop background or whatever).
    - It's configured via OWIN. More on OWIN and Katana later, but the fact that it's built using OWIN means it's portable.

    You can find out more in the Authentication and Identity section of the ASP.NET site (and lots more content will be going up there soon).

    New Bootstrap based project templates

    The new project templates are built using Bootstrap 3. Bootstrap (formerly Twitter Bootstrap) is a front-end framework that brings a lot of nice benefits:

    - It's responsive, so your projects will automatically scale to device width using CSS media queries. For example, menus are full size on a desktop browser, but on narrower screens you automatically get a mobile-friendly menu.
    - The built-in Bootstrap styles make your standard page elements (headers, footers, buttons, form inputs, tables etc.) look nice and modern.
    - Bootstrap is themeable, so you can reskin your whole site by dropping in a new Bootstrap theme. Since Bootstrap is pretty popular across the web development community, this gives you a large and rapidly growing variety of templates (free and paid) to choose from.
    Bootstrap also includes a lot of very useful things: components (like progress bars and badges), useful glyphicons, and some jQuery plugins for tooltips, dropdowns, carousels, etc. Here's a look at how the responsive part works. When the page is full screen, the menu and header are optimized for a wide screen display: When I shrink the page down (this is all based on page width, not useragent sniffing) the menu turns into a nice mobile-friendly dropdown: For a quick example, I grabbed a new free theme off bootswatch.com. For simple themes, you just need to download the bootstrap.css file and replace the /content/bootstrap.css file in your project. Now when I refresh the page, I've got a new theme:

    Scaffolding

    The big change in scaffolding is that it's one system that works across ASP.NET. You can create a new Empty Web project or Web Forms project and you'll get the Scaffold context menus. For release, we've got MVC 5 and Web API 2 controllers. We had a preview of Web Forms scaffolding in the preview releases, but they weren't fully baked for RTM. Look for them in a future update, expected pretty soon. This scaffolding system wasn't just changed to work across the ASP.NET frameworks, it's also built to enable future extensibility. That's not in this release, but should also hopefully be out soon.

    Project Readme page

    This is a small thing, but I really like it. When you create a new project, you get a Project_Readme.html page that's added to the root of your project and opens in the Visual Studio built-in browser. I love it. A long time ago, when you created a new project we just dumped it on you and left you scratching your head about what to do next. Not ideal. Then we started adding a bunch of Getting Started information to the new project templates. That told you what to do next, but you had to delete all of that stuff out of your website. It doesn't belong there. Not ideal. This is a simple HTML file that's not integrated into your project code at all. You can delete it if you want. But, it shows a lot of helpful links that are current for the project you just created. In the future, if we add new wacky project types, they can create readme docs with specific information on how to do appropriately wacky things. Side note: I really like that they used the internal browser in Visual Studio to show this content rather than popping open an HTML page in the default browser. I hate that. It's annoying. If you're doing that, I hope you'll stop. What if some unnamed person has 40 or 90 tabs saved in their browser session? When you pop open your "Thanks for installing my Visual Studio extension!" page, all eleventy billion tabs start up and I wish I'd never installed your thing. Be like these guys and pop Visual Studio specific HTML docs in the Visual Studio browser.

    ASP.NET MVC 5

    The biggest change with ASP.NET MVC 5 is that it's no longer a separate project type. It integrates well with the rest of ASP.NET. In addition to that and the other common features we've already looked at (Bootstrap templates, Identity, authentication), here's what's new for ASP.NET MVC.

    Attribute routing

    ASP.NET MVC now supports attribute routing, thanks to a contribution by Tim McCall, the author of http://attributerouting.net. With attribute routing you can specify your routes by annotating your actions and controllers. This supports some pretty complex, customized routing scenarios, and it allows you to keep your route information right with your controller actions if you'd like.
    Here's a controller that includes an action whose method name is Hiding, but I've used AttributeRouting to configure it to /spaghetti/with-nesting/where-is-waldo:

        public class SampleController : Controller
        {
            [Route("spaghetti/with-nesting/where-is-waldo")]
            public string Hiding()
            {
                return "You found me!";
            }
        }

    I enable that in my RouteConfig.cs, and I can use that in conjunction with my other MVC routes like this:

        public class RouteConfig
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

                routes.MapMvcAttributeRoutes();

                routes.MapRoute(
                    name: "Default",
                    url: "{controller}/{action}/{id}",
                    defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
                );
            }
        }

    You can read more about Attribute Routing in ASP.NET MVC 5 here.

    Filter enhancements

    There are two new additions to filters: Authentication Filters and Filter Overrides. Authentication filters are a new kind of filter in ASP.NET MVC that run prior to authorization filters in the ASP.NET MVC pipeline and allow you to specify authentication logic per-action, per-controller, or globally for all controllers. Authentication filters process credentials in the request and provide a corresponding principal. Authentication filters can also add authentication challenges in response to unauthorized requests. Override filters let you change which filters apply to a given action method or controller. Override filters specify a set of filter types that should not be run for a given scope (action or controller). This allows you to configure filters that apply globally but then exclude certain global filters from applying to specific actions or controllers.

    ASP.NET Web API 2

    ASP.NET Web API 2 includes a lot of new features.

    Attribute Routing

    ASP.NET Web API supports the same attribute routing system that's in ASP.NET MVC 5. You can read more about the Attribute Routing features in Web API in this article.

    OAuth 2.0

    ASP.NET Web API picks up OAuth 2.0 support, using security middleware running on OWIN (discussed below). This is great for features like authenticated Single Page Applications.

    OData Improvements

    ASP.NET Web API now has full OData support. That required adding in some of the most powerful operators: $select, $expand, $batch and $value. You can read more about OData operator support in this article by Mike Wasson.

    Lots more

    There's a huge list of other features, including CORS (cross-origin resource sharing), IHttpActionResult, IHttpRequestContext, and more. I think the best overview is in the release notes.

    OWIN and Katana

    I've written about OWIN and Katana recently. I'm a big fan. OWIN is the Open Web Interfaces for .NET. It's a spec, like HTML or HTTP, so you can't install OWIN. The benefit of OWIN is that it's a community specification, so anyone who implements it can plug into the ASP.NET stack, either as middleware or as a host. Katana is the Microsoft implementation of OWIN. It leverages OWIN to wire up things like authentication, handlers, modules, IIS hosting, etc., so ASP.NET can host OWIN components and Katana components can run in someone else's OWIN implementation. Howard Dierking just wrote a cool article in MSDN magazine describing Katana in depth: Getting Started with the Katana Project. He had an interesting example showing an OWIN based pipeline which leveraged SignalR, ASP.NET Web API and NancyFx components in the same stack. If this kind of thing makes sense to you, that's great. If it doesn't, don't worry, but keep an eye on it.
    You're going to see some cool things happen as a result of ASP.NET becoming more and more pluggable.

    Visual Studio Web Tools

    Okay, this stuff's just crazy. Visual Studio has been adding some nice web dev features over the past few years, but they've really cranked it up for this release. Visual Studio is by far my favorite code editor for all web files: CSS, HTML, JavaScript, and lots of popular libraries. Stop thinking of Visual Studio as a big editor that you only use to write back-end code. Stop editing HTML and CSS in Notepad (or Sublime, Notepad++, etc.). Visual Studio starts up in under 2 seconds on a modern computer with an SSD. Misspelling HTML attributes or your CSS classes or jQuery or Angular syntax is stupid. It doesn't make you a better developer, it makes you a silly person who wastes time.

    Browser Link

    Browser Link is a real-time, two-way connection between Visual Studio and all connected browsers. It's only attached when you're running locally, in debug, but it applies to any and all connected browsers, including emulators. You may have seen demos that showed the browsers refreshing based on changes in the editor, and I'll agree that's pretty cool. But it's really just the start. It's a two-way connection, and it's built for extensibility. That means you can write extensions that push information from your running application (in IE, Chrome, a mobile emulator, etc.) back to Visual Studio. Mads and team have shown off some demonstrations where they enabled edit mode in the browser which updated the source HTML back in Visual Studio. It's also possible to look at how the rendered HTML performs, check for compatibility issues, watch for unused CSS classes, the sky's the limit.

    New HTML editor

    The previous HTML editor had a lot of old code that didn't allow for improvements. The team rewrote the HTML editor to take advantage of the new(ish) extensibility features in Visual Studio, which then allowed them to add in all kinds of features - things like CSS Class and ID IntelliSense (so you type style="" and get a list of classes and IDs for your project), smart indent based on how your document is formatted, JavaScript reference auto-sync, etc. Here's a 3 minute tour from Mads Kristensen.

    Lots more Visual Studio web dev features

    That's just a sampling - there's a ton of great features for JavaScript editing, CSS editing, publishing, and Page Inspector (which shows real-time rendering of your page inside Visual Studio). Here are some more short videos showing those features.

    Lots, lots more

    Okay, that's just a summary, and it's still quite a bit. Head on over to http://asp.net/vnext for more information, and download Visual Studio 2013 now to get started!
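    To make the OWIN and Katana idea from the section above concrete, here is a minimal self-hosted Katana sketch. It assumes the Microsoft.Owin.SelfHost NuGet package; the port and class names are illustrative, not from the post.

        using System;
        using Microsoft.Owin.Hosting;
        using Owin;

        // Katana discovers this Startup class and calls Configuration
        // to build the OWIN pipeline.
        public class Startup
        {
            public void Configuration(IAppBuilder app)
            {
                // Simplest possible middleware: answer every request directly.
                app.Run(context => context.Response.WriteAsync("Hello from OWIN!"));
            }
        }

        public class Program
        {
            public static void Main()
            {
                // WebApp.Start wires the Startup class to an HttpListener-based host.
                using (WebApp.Start<Startup>("http://localhost:9000/"))
                {
                    Console.WriteLine("Listening on http://localhost:9000/ - press Enter to quit.");
                    Console.ReadLine();
                }
            }
        }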

    Read the article

  • Deploy ASP.NET Web Applications with Web Deployment Projects

    - by Ben Griswold
    One may quickly build and deploy an ASP.NET web application via the Publish option in Visual Studio.  This option works great for most simple deployment scenarios but it won't always cut it.  Let's say you need to automate your deployments. Or you have environment-specific configuration settings. Or you need to execute pre/post build operations when you do your builds.  If so, you should consider using Web Deployment Projects.

    The Web Deployment Project type doesn't come out-of-the-box with Visual Studio 2008.  You'll need to Download Visual Studio® 2008 Web Deployment Projects – RTW and install it if you want to follow along with this tutorial.

    I've created a shiny new ASP.NET MVC project.  Web Deployment Projects work with websites, web applications and MVC projects so feel free to go with any web project type you'd like.  Once your web application is in place, it's time to add the Web Deployment project.  You can hunt and peck around the File > New > New Project… dialog as long as you'd like, but you aren't going to find what you need.  Instead, select the web project and then choose the "Add Web Deployment Project…" option hiding behind the Build menu.

    I prefer to name my projects based on the environment in which I plan to deploy.  In this case, I'll be rolling to the QA machine. Don't expect too much to happen at this point.  A seemingly empty project with a funny icon will be added to your solution.  That's it.

    I want to take a minute and talk about configuration settings before we continue.  Some of the common settings which might change from environment to environment are appSettings, connectionStrings and mailSettings.  Here's a look at my updated web.config:

        <appSettings>
          <add key="MvcApplication293.Url" value="http://localhost:50596/" />
        </appSettings>

        <connectionStrings>
          <add name="ApplicationServices"
               connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true"
               providerName="System.Data.SqlClient"/>
        </connectionStrings>

        <system.net>
          <mailSettings>
            <smtp from="[email protected]">
              <network host="server.com" userName="username" password="password" port="587" defaultCredentials="false"/>
            </smtp>
          </mailSettings>
        </system.net>

    I want to update these values prior to deploying to the QA environment.  There are variations to this approach, but I like to maintain environment-specific settings for each of the web.config sections in the Config/[Environment] project folders.  I've provided a screenshot of the QA environment settings below. It may be obvious what one should include in each of the three files.  Basically, it is a copy of the associated web.config section with updated setting values.  For example, the AppSettings.config file may include a reference to the QA web url, the Db.config would include the QA database server and login information and the SmtpSettings.config would include a QA SMTP server and user information.
        <?xml version="1.0" encoding="utf-8" ?>
        <appSettings>
          <add key="MvcApplication293.Url" value="http://qa.MvcApplication293.com/" />
        </appSettings>

    AppSettings.config

        <?xml version="1.0" encoding="utf-8" ?>
        <connectionStrings>
          <add name="ApplicationServices"
               connectionString="server=QAServer;integrated security=SSPI;database=MvcApplication293"
               providerName="System.Data.SqlClient"/>
        </connectionStrings>

    Db.config

        <?xml version="1.0" encoding="utf-8" ?>
        <smtp from="[email protected]">
          <network host="qaserver.com" userName="qausername" password="qapassword" port="587" defaultCredentials="false"/>
        </smtp>

    SmtpSettings.config

    I think our web project is ready to deploy.  Now, it's time to concentrate on the Web Deployment Project itself.  Right-click on the project file and open the Property Pages.

    The first thing to call out is the Configuration dropdown.  I only deploy a project which is built in Release Mode so I only set up the Web Deployment Project for this mode.  (This is when you change the Configuration selection to "Release.")  I typically keep the Output Folder default value – .\Release\.  When the application is built, all artifacts will be dropped in the .\Release\ folder relative to the Web Deployment Project root.  The final option may be up for some debate.  I like to roll out updatable websites so I select the "Allow this precompiled site to be updatable" option.  I really do like to follow standard SDLC processes when I release my software but there are those times when you just have to make a hotfix to production and I like to keep this option open if need be.  If you are strongly opposed to this idea, please, by all means, don't check the box.

    The next tab is boring.  I don't like to deploy a crazy number of DLLs so I merge all outputs to a single assembly.  Again, you may prefer another option, so feel free to change this selection if you so wish. If you follow my lead, take care when choosing a single assembly name.  The Assembly Name cannot be the same as the website or any other project in your solution otherwise you'll receive a circular reference build error.  In other words, I can't name the assembly MvcApplication293 or my output window would start yelling at me.

    Remember when we called out our QA configuration files?  Click on the Deployment tab and you'll see where we're going to use them.  Notice the Web.config file section replacements value.  All this does is swap the called-out web.config sections with the content of the Config\QA\* files.  You can reduce or extend this list as you deem fit.  Did you see the "Use external configuration source file" option?  You know how you can point any of your web.config sections to an external file via the configSource attribute?  This option allows you to leverage that technique and instead of replacing the content of the sections, you will replace the configSource attribute value instead.

        <appSettings configSource="Config\QA\AppSettings.config" />

    Go ahead and Apply your changes.  I'd like to take a look at the project file we just updated.  Right-click on the Web Deployment Project and select "Open Project File."

    One of the first configuration blocks reflects core Release build settings.  There are a couple of points I'd like to call out here: DebugSymbols=false ensures the compilation debug attribute in your web.config is flipped to false as part of the build process.  There's some crummy (more likely old) documentation which implies you need a ToggleDebugCompilation task to make this happen.  Nope.
    Just make sure DebugSymbols is set to false.  EnableUpdateable implies a single dll for the web application rather than a dll for each object and an empty view file. I think updatable applications are cleaner and include the benefit (or risk, based on your perspective) that portions of the application can be updated directly on the server.  I called this out earlier but I wanted to reiterate.

        <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
          <DebugSymbols>false</DebugSymbols>
          <OutputPath>.\Release</OutputPath>
          <EnableUpdateable>true</EnableUpdateable>
          <UseMerge>true</UseMerge>
          <SingleAssemblyName>MvcApplication293</SingleAssemblyName>
          <DeleteAppCodeCompiledFiles>true</DeleteAppCodeCompiledFiles>
          <UseWebConfigReplacement>true</UseWebConfigReplacement>
          <ValidateWebConfigReplacement>true</ValidateWebConfigReplacement>
          <DeleteAppDataFolder>true</DeleteAppDataFolder>
        </PropertyGroup>

    The next section is self-explanatory.  The content merely reflects the replacement values you provided via the Property Pages.

        <ItemGroup Condition="'$(Configuration)|$(Platform)' == 'Release|AnyCPU'">
          <WebConfigReplacementFiles Include="Config\QA\AppSettings.config">
            <Section>appSettings</Section>
          </WebConfigReplacementFiles>
          <WebConfigReplacementFiles Include="Config\QA\Db.config">
            <Section>connectionStrings</Section>
          </WebConfigReplacementFiles>
          <WebConfigReplacementFiles Include="Config\QA\SmtpSettings.config">
            <Section>system.net/mailSettings/smtp</Section>
          </WebConfigReplacementFiles>
        </ItemGroup>

    You'll want to extend the ItemGroup section to include the files you wish to exclude from the build.  The sample ExcludeFromBuild nodes exclude all obj, svn, csproj, user, pdb artifacts from the build. Even though these files aren't included in your web project, you'll need to exclude them or they'll show up along with the required deployment artifacts.

        <ItemGroup Condition="'$(Configuration)|$(Platform)' == 'Release|AnyCPU'">
          <WebConfigReplacementFiles Include="Config\QA\AppSettings.config">
            <Section>appSettings</Section>
          </WebConfigReplacementFiles>
          <WebConfigReplacementFiles Include="Config\QA\Db.config">
            <Section>connectionStrings</Section>
          </WebConfigReplacementFiles>
          <WebConfigReplacementFiles Include="Config\QA\SmtpSettings.config">
            <Section>system.net/mailSettings/smtp</Section>
          </WebConfigReplacementFiles>
          <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\obj\**\*.*" />
          <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\**\.svn\**\*.*" />
          <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\**\.svn\**\*" />
          <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\**\*.csproj" />
          <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\**\*.user" />
          <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\bin\*.pdb" />
          <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\Notes.txt" />
        </ItemGroup>

    Pre/post build and pre/post merge tasks are added to the final code block.  By default, your project file should look like the following - a completely commented-out section.
        <!-- To modify your build process, add your task inside one of
             the targets below and uncomment it. Other similar extension
             points exist, see Microsoft.WebDeployment.targets.

          <Target Name="BeforeBuild">
          </Target>

          <Target Name="BeforeMerge">
          </Target>

          <Target Name="AfterMerge">
          </Target>

          <Target Name="AfterBuild">
          </Target>
        -->

    Update the section to remove all temporary Config folders and files after the build.

        <!-- To modify your build process, add your task inside one of
             the targets below and uncomment it. Other similar extension
             points exist, see Microsoft.WebDeployment.targets.

          <Target Name="BeforeMerge">
          </Target>

          <Target Name="AfterMerge">
          </Target>

          <Target Name="BeforeBuild">
          </Target>
        -->

        <Target Name="AfterBuild">
          <!-- WebConfigReplacement requires the Config files. Remove after build. -->
          <RemoveDir Directories="$(OutputPath)\Config" />
        </Target>

    That's it for setup.  Save the project file, flip the solution to Release Mode and build.  If there's an issue, consult the Output window for details.  If all went well, you will find your deployment artifacts in your Web Deployment Project folder like so. Both the code source and the published application will be there. Inside the Release folder you will find your "published files" and you'll notice the Config folder is nowhere to be found.  In the Source folder, all project files are found with the exception of the items which were excluded from the build.

    I'll wrap up this tutorial by calling out a little Web Deployment pet peeve of mine: there doesn't appear to be a way to add an existing web deployment project to a solution.  The best I can come up with is to create a new web deployment project and then copy and paste the contents of the existing project file into the new project file.  It's not a big deal but it bugs me.

    Download the Solution
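    Since one motivation called out at the top was automating deployments, it's worth noting that a Web Deployment Project is just an MSBuild project, so the same Release build can be produced from a command line or build server without opening Visual Studio. A sketch (the project file name below is illustrative, matching the sample project in this tutorial):

        REM Build the QA web deployment project in Release mode from a
        REM Visual Studio 2008 command prompt (project name is illustrative).
        msbuild MvcApplication293.QA.wdproj /p:Configuration=Release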

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing.

    Test Data

    The two tables in this example use a common partition scheme. The partition function uses 41 equal-size partitions:

        CREATE PARTITION FUNCTION PFT (integer)
        AS RANGE RIGHT
        FOR VALUES
        (
            125000, 250000, 375000, 500000,
            625000, 750000, 875000, 1000000,
            1125000, 1250000, 1375000, 1500000,
            1625000, 1750000, 1875000, 2000000,
            2125000, 2250000, 2375000, 2500000,
            2625000, 2750000, 2875000, 3000000,
            3125000, 3250000, 3375000, 3500000,
            3625000, 3750000, 3875000, 4000000,
            4125000, 4250000, 4375000, 4500000,
            4625000, 4750000, 4875000, 5000000
        );
        GO
        CREATE PARTITION SCHEME PST
        AS PARTITION PFT
        ALL TO ([PRIMARY]);

    The two tables are:

        CREATE TABLE dbo.T1
        (
            TID integer NOT NULL IDENTITY(0,1),
            Column1 integer NOT NULL,
            Padding binary(100) NOT NULL DEFAULT 0x,

            CONSTRAINT PK_T1
                PRIMARY KEY CLUSTERED (TID)
                ON PST (TID)
        );

        CREATE TABLE dbo.T2
        (
            TID integer NOT NULL,
            Column1 integer NOT NULL,
            Padding binary(100) NOT NULL DEFAULT 0x,

            CONSTRAINT PK_T2
                PRIMARY KEY CLUSTERED (TID, Column1)
                ON PST (TID)
        );

    The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID:

        INSERT dbo.T1 WITH (TABLOCKX)
            (Column1)
        SELECT
            (ABS(CHECKSUM(NEWID())) % 5) + 1
        FROM dbo.Numbers AS N
        WHERE
            n BETWEEN 1 AND 5000000;

    In case you don't already have an auxiliary table of numbers lying around, here's a script to create one with 10 million rows:

        CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);

        WITH
            L0 AS(SELECT 1 AS c UNION ALL SELECT 1),
            L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B),
            L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B),
            L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B),
            L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B),
            L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B),
            Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5)
        INSERT dbo.Numbers WITH (TABLOCKX)
        SELECT TOP (10000000) n
        FROM Nums
        ORDER BY n
        OPTION (MAXDOP 1);

    Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains 'n' rows for each row in table 1, where 'n' is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way.

        INSERT dbo.T2 WITH (TABLOCKX)
            (TID, Column1)
        SELECT
            T.TID,
            N.n
        FROM dbo.T1 AS T
        JOIN dbo.Numbers AS N
            ON N.n >= 1
            AND N.n <= T.Column1;

    Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone.

    Partition Distribution

    The following query shows the number of rows in each partition of table T1:

        SELECT
            PartitionID = CA1.P,
            NumRows = COUNT_BIG(*)
        FROM dbo.T1 AS T
        CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
        GROUP BY CA1.P
        ORDER BY CA1.P;

    There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2:

        SELECT
            PartitionID = CA1.P,
            NumRows = COUNT_BIG(*)
        FROM dbo.T2 AS T
        CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
        GROUP BY CA1.P
        ORDER BY CA1.P;

    There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that's the test data done.
    Test Query and Execution Plan

    The task is to count the rows resulting from joining tables 1 and 2 on the TID column:

        SET STATISTICS IO ON;
        DECLARE @s datetime2 = SYSUTCDATETIME();

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID;

        SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
        SET STATISTICS IO OFF;

    The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms.

    Execution Plan Analysis

    The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value '1234' is placed in thread 5's hash table, the execution plan must guarantee that any rows from T2 that also have join key value '1234' probe thread 5's hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread.

    Expensive Exchanges

    This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        WHERE
            $PARTITION.PFT(T1.TID) = 1
            AND $PARTITION.PFT(T2.TID) = 1
        OPTION (MAXDOP 1);

    The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query?
    Forcing a Merge Join

    Let's force the optimizer to use a merge join on the test query using a hint:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (MERGE JOIN);

    This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads.

    Parallel Merge Join

    We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (MERGE JOIN, QUERYTRACEON 8649);

    The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving 'merging' exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning.

    Collocated Joins

    In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next.

    Costing and Plan Selection

    The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query.
    Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use a collocated join with our test query:

        -- Pretend IOs are 50x cost temporarily
        DBCC SETIOWEIGHT(50);

        -- Collocated hash join
        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (RECOMPILE);

        -- Reset IO costing
        DBCC SETIOWEIGHT(1);

    Collocated Join Plan

    The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange... and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition 'n' is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges.

    CPU and Memory Efficiency Improvements

    The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time.

    Collocated Hash Join Performance

    The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows.
    This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn't sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions).

    Collocated Merge Join

    We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know:

    - Merge join does not require a memory grant; and
    - Merge join was the optimizer's preferred join option for a single partition join.

    Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us?

    CROSS APPLY sys.partitions

    We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea:

        SELECT
            row_count = SUM(Subtotals.cnt)
        FROM
        (
            -- Partition numbers
            SELECT p.partition_number
            FROM sys.partitions AS p
            WHERE
                p.[object_id] = OBJECT_ID(N'T1', N'U')
                AND p.index_id = 1
        ) AS P
        CROSS APPLY
        (
            -- Count per collocated join
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE
                $PARTITION.PFT(T1.TID) = p.partition_number
                AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals;

    The estimated plan is: The cardinality estimates aren't all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole - summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:

    Using a Temporary Table

    Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table.
We can work around that by writing the partition numbers to a temporary table (or table variable):

SET STATISTICS IO ON;
DECLARE @s datetime2 = SYSUTCDATETIME();

CREATE TABLE #P
(
    partition_number integer PRIMARY KEY
);

INSERT #P (partition_number)
SELECT p.partition_number
FROM sys.partitions AS p
WHERE
    p.[object_id] = OBJECT_ID(N'T1', N'U')
    AND p.index_id = 1;

SELECT
    row_count = SUM(Subtotals.cnt)
FROM #P AS p
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE
        $PARTITION.PFT(T1.TID) = p.partition_number
        AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals;

DROP TABLE #P;

SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
SET STATISTICS IO OFF;

Using the temporary table adds a few logical reads, but the overall execution time is still around 3,500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to:

In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time):

Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan.

Parallel Collocated Merge Join

We can produce the desired parallel plan using trace flag 8649 again:

SELECT
    row_count = SUM(Subtotals.cnt)
FROM #P AS p
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE
        $PARTITION.PFT(T1.TID) = p.partition_number
        AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals
OPTION (QUERYTRACEON 8649);

The actual execution plan is:

One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post.

Performance

The important thing is the performance of this parallel collocated merge join – just 1,350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate), from quickest to slowest:

Collocated parallel merge join: 1,350ms
Parallel hash join: 2,600ms
Collocated serial merge join: 3,500ms
Serial merge join: 5,000ms
Parallel merge join: 8,400ms
Collocated parallel hash join: 25,300ms (hash spill per partition)

The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers).
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you.

Parallel Collocated Merge Join with Demand Partitioning

This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions):

SELECT
    row_count = SUM(Subtotals.cnt)
FROM
(
    VALUES
        (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),
        (11),(12),(13),(14),(15),(16),(17),(18),(19),(20),
        (21),(22),(23),(24),(25),(26),(27),(28),(29),(30),
        (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41)
) AS P (partition_number)
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE
        $PARTITION.PFT(T1.TID) = p.partition_number
        AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals
OPTION (QUERYTRACEON 8649);

The actual execution plan is:

The parallel collocated hash join plan is reproduced below for comparison:

The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1,350ms.

Final Words

It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2,600ms to 1,350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated – down from 569MB to 1.2MB.

The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning.

The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition.
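To make the demand-partitioning idea concrete outside of a query plan, here is a rough C# sketch of the same pattern. Everything in it is invented for illustration (SQL Server does not expose its exchanges or join internals like this, and the join is modelled with a simple lookup rather than a true merge join, purely to keep the sketch short):

using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

static class CollocatedJoinSketch
{
    // Counts join matches the way the collocated plan does: each worker
    // pulls one partition id at a time and joins only within that partition.
    public static long CollocatedCount(
        ILookup<int, int> t1ByPartition,  // partition id -> TID values from T1
        ILookup<int, int> t2ByPartition,  // partition id -> TID values from T2
        int partitionCount,
        int degreeOfParallelism)
    {
        // Plays the role of the Distribute Streams (Demand) exchange:
        // workers pull the next unprocessed partition id on demand.
        var demand = new ConcurrentQueue<int>(Enumerable.Range(1, partitionCount));
        long total = 0;

        Parallel.For(0, degreeOfParallelism, _ =>
        {
            long threadSubtotal = 0;  // the per-thread Stream Aggregate

            while (demand.TryDequeue(out int p))
            {
                // Per-partition join: both inputs are restricted to the same
                // partition, so no repartitioning exchange is needed.
                var t2Counts = t2ByPartition[p]
                    .GroupBy(tid => tid)
                    .ToDictionary(g => g.Key, g => (long)g.Count());

                foreach (int tid in t1ByPartition[p])
                {
                    if (t2Counts.TryGetValue(tid, out long matches))
                        threadSubtotal += matches;  // per-partition count
                }
            }

            Interlocked.Add(ref total, threadSubtotal);
        });

        return total;
    }
}

For the example in this post, CollocatedCount(t1, t2, 41, 8) would mirror the plan shape above. The ConcurrentQueue stands in for the Distribute Streams (Demand) exchange: a worker only takes a new partition id when it has finished its current one, which is why a slow partition does not hold up the other threads.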
From a thread’s point of view…

If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators.

Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread.

Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once.

Related Reading

Understanding and Using Parallelism in SQL Server
Parallel Execution Plans Suck

© 2013 Paul White – All Rights Reserved
Twitter: @SQL_Kiwi

    Read the article

  • Extending NerdDinner: Adding Geolocated Flair

    - by Jon Galloway
    NerdDinner is a website with the audacious goal of “Organizing the world’s nerds and helping them eat in packs.” Because nerds aren’t likely to socialize with others unless a website tells them to do it. Scott Hanselman showed off a lot of the cool features we’ve added to NerdDinner lately during his popular talk at MIX10, Beyond File | New Company: From Cheesy Sample to Social Platform. Did you miss it? Go ahead and watch it, I’ll wait.

    One of the features we wanted to add was flair. You know about flair, right? It’s a way to let folks who like your site show it off on their own site. For example, here’s my StackOverflow flair: Great! So how could we add some of this flair stuff to NerdDinner?

    What do we want to show?

    If we’re going to encourage our users to give up a bit of their beautiful website to show off a bit of ours, we need to think about what they’ll want to show. For instance, my StackOverflow flair is all about me, not StackOverflow. So how will this apply to NerdDinner? Since NerdDinner is all about organizing local dinners, in order for the flair to be useful it needs to make sense for the person viewing the web page. If someone from Egypt visits my blog, they should see information about NerdDinners in Egypt. That’s geolocation – localizing site content based on where the browser’s sitting – and it makes sense for flair as well as entire websites. So we’ll set up a simple little callout that prompts them to host a dinner in their area: Hopefully our flair works and there is a dinner near your viewers, so they’ll see another view which lists upcoming dinners near them:

    The Geolocation Part

    Generally website geolocation is done by mapping the requestor’s IP address to a geographic area. It’s not an exact science, but I’ve always found it to be pretty accurate. There are (at least) three ways to handle it:

    You pay somebody like MaxMind for a database (with regular updates) that sits on your server, and you use their API to do lookups. I used this on a pretty big project a few years ago and it worked well.

    You use the HTML 5 Geolocation API or Google Gears or some other browser-based solution. I think those are cool (I use Google Gears a lot), but they’re both in flux right now and I don’t think either has a wide enough install base yet to rely on them. You might want to, but I’ve heard you do all kinds of crazy stuff, and sometimes it gets you in trouble. I don’t mean talk out of line, but we all laugh behind your back a bit. But, hey, it’s up to you. It’s your flair or whatever.

    There are some free webservices out there that will take an IP address and give you location information. Easy, and works for everyone. That’s what we’re doing. I looked at a few different services and settled on IPInfoDB. It’s free, has a great API, and even returns JSON, which is handy for Javascript use.

    The IP query is pretty simple.
    We hit a URL like this: http://ipinfodb.com/ip_query.php?ip=74.125.45.100&timezone=false … and we get an XML response back like this…

    <?xml version="1.0" encoding="UTF-8"?>
    <Response>
        <Ip>74.125.45.100</Ip>
        <Status>OK</Status>
        <CountryCode>US</CountryCode>
        <CountryName>United States</CountryName>
        <RegionCode>06</RegionCode>
        <RegionName>California</RegionName>
        <City>Mountain View</City>
        <ZipPostalCode>94043</ZipPostalCode>
        <Latitude>37.4192</Latitude>
        <Longitude>-122.057</Longitude>
    </Response>

    So we’ll build some data transfer classes to hold the location information, like this:

    public class LocationInfo
    {
        public string Country { get; set; }
        public string RegionName { get; set; }
        public string City { get; set; }
        public string ZipPostalCode { get; set; }
        public LatLong Position { get; set; }
    }

    public class LatLong
    {
        public float Lat { get; set; }
        public float Long { get; set; }
    }

    And now hitting the service is pretty simple:

    public static LocationInfo HostIpToPlaceName(string ip)
    {
        string url = "http://ipinfodb.com/ip_query.php?ip={0}&timezone=false";
        url = String.Format(url, ip);
        var result = XDocument.Load(url);
        var location = (from x in result.Descendants("Response")
                        select new LocationInfo
                        {
                            City = (string)x.Element("City"),
                            RegionName = (string)x.Element("RegionName"),
                            Country = (string)x.Element("CountryName"),
                            ZipPostalCode = (string)x.Element("ZipPostalCode"),
                            Position = new LatLong
                            {
                                Lat = (float)x.Element("Latitude"),
                                Long = (float)x.Element("Longitude")
                            }
                        }).First();
        return location;
    }

    Getting The User’s IP

    Okay, but first we need the end user’s IP, and you’d think it would be as simple as reading the value from HttpContext:

    HttpContext.Current.Request.UserHostAddress

    But you’d be wrong. Sorry. UserHostAddress just wraps HttpContext.Current.Request.ServerVariables["REMOTE_ADDR"], but that doesn’t get you the IP for users behind a proxy. That’s in another header, "HTTP_X_FORWARDED_FOR". So you can either hit a wrapper and then check a header, or just check two headers. I went for uniformity:

    string SourceIP = string.IsNullOrEmpty(Request.ServerVariables["HTTP_X_FORWARDED_FOR"])
        ? Request.ServerVariables["REMOTE_ADDR"]
        : Request.ServerVariables["HTTP_X_FORWARDED_FOR"];

    We’re almost set to wrap this up, but first let’s talk about our views. Yes, views, because we’ll have two.

    Selecting the View

    We wanted to make it easy for people to include the flair in their sites, so we looked around at how other people were doing this. The StackOverflow folks have a pretty good flair system, which allows you to include the flair in your site as either an IFRAME reference or a Javascript include. We’ll do both. We have a ServicesController to handle use of the site information outside of NerdDinner.com, so this fits in pretty well there. We’ll be displaying the same information for both HTML and Javascript flair, so we can use one Flair controller action which will return a different view depending on the requested format. Here’s our general flow for our controller action:

    Get the user’s IP
    Translate it to a location
    Grab the top three upcoming dinners that are near that location
    Select the view based on the format (defaulted to “html”)
    Return a FlairViewModel which contains the list of dinners and the location information

    public ActionResult Flair(string format = "html")
    {
        string SourceIP = string.IsNullOrEmpty(
            Request.ServerVariables["HTTP_X_FORWARDED_FOR"]) ?
            Request.ServerVariables["REMOTE_ADDR"] :
            Request.ServerVariables["HTTP_X_FORWARDED_FOR"];
        var location = GeolocationService.HostIpToPlaceName(SourceIP);
        var dinners = dinnerRepository.
            FindByLocation(location.Position.Lat, location.Position.Long).
            OrderByDescending(p => p.EventDate).Take(3);

        // Select the view we'll return.
        // Using a switch because we'll add in JSON and other formats later.
        string view;
        switch (format.ToLower())
        {
            case "javascript":
                view = "JavascriptFlair";
                break;
            default:
                view = "Flair";
                break;
        }

        return View(
            view,
            new FlairViewModel
            {
                Dinners = dinners.ToList(),
                LocationName = string.IsNullOrEmpty(location.City)
                    ? "you"
                    : String.Format("{0}, {1}", location.City, location.RegionName)
            }
        );
    }

    Note: I’m not in love with the logic here, but it seems like overkill to extract the switch statement away when we’ll probably just have two or three views. What do you think?

    The HTML View

    The HTML version of the view is pretty simple – the only thing of any real interest here is the use of an extension method to truncate strings that would otherwise cause the titles to wrap.

    public static string Truncate(this string s, int maxLength)
    {
        if (string.IsNullOrEmpty(s) || maxLength <= 0)
            return string.Empty;
        else if (s.Length > maxLength)
            return s.Substring(0, maxLength) + "...";
        else
            return s;
    }

    So here’s how the HTML view ends up looking:

    <%@ Page Title="" Language="C#" Inherits="System.Web.Mvc.ViewPage<FlairViewModel>" %>
    <%@ Import Namespace="NerdDinner.Helpers" %>
    <%@ Import Namespace="NerdDinner.Models" %>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
        <title>Nerd Dinner</title>
        <link href="/Content/Flair.css" rel="stylesheet" type="text/css" />
    </head>
    <body>
        <div id="nd-wrapper">
            <h2 id="nd-header">NerdDinner.com</h2>
            <div id="nd-outer">
                <% if (Model.Dinners.Count == 0) { %>
                <div id="nd-bummer">
                    Looks like there's no Nerd Dinners near <%:Model.LocationName %> in the near future. Why not <a target="_blank" href="http://www.nerddinner.com/Dinners/Create">host one</a>?</div>
                <% } else { %>
                <h3>Dinners Near You</h3>
                <ul>
                    <% foreach (var item in Model.Dinners) { %>
                    <li>
                        <%: Html.ActionLink(String.Format("{0} with {1} on {2}", item.Title.Truncate(20), item.HostedBy, item.EventDate.ToShortDateString()), "Details", "Dinners", new { id = item.DinnerID }, new { target = "_blank" })%></li>
                    <% } %>
                </ul>
                <% } %>
                <div id="nd-footer">
                    More dinners and fun at <a target="_blank" href="http://nrddnr.com">http://nrddnr.com</a></div>
            </div>
        </div>
    </body>
    </html>

    You’d include this in a page using an IFRAME, like this:

    <IFRAME height=230 marginHeight=0 src="http://nerddinner.com/services/flair" frameBorder=0 width=160 marginWidth=0 scrolling=no></IFRAME>

    The Javascript view

    The Javascript flair is written so you can include it in a webpage with a simple script include, like this:

    <script type="text/javascript" src="http://nerddinner.com/services/flair?format=javascript"></script>

    The goal of this view is very similar to the HTML embed view, with a few exceptions: We’re creating a script element and adding it to the head of the document, which will then document.write out the content. Note that you have to consider if your users will actually have a <head> element in their documents, but for website flair use cases I think that’s a safe bet. Since the content is being added to the existing page rather than shown in an IFRAME, all links need to be absolute.
That means we can’t use Html.ActionLink, since it generates relative routes. We need to escape everything since it’s being written out as strings. We need to set the content type to application/x-javascript. The easiest way to do that is to use the <%@ Page ContentType%> directive. <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<NerdDinner.Models.FlairViewModel>" ContentType="application/x-javascript" %> <%@ Import Namespace="NerdDinner.Helpers" %> <%@ Import Namespace="NerdDinner.Models" %> document.write('<script>var link = document.createElement(\"link\");link.href = \"http://nerddinner.com/content/Flair.css\";link.rel = \"stylesheet\";link.type = \"text/css\";var head = document.getElementsByTagName(\"head\")[0];head.appendChild(link);</script>'); document.write('<div id=\"nd-wrapper\"><h2 id=\"nd-header\">NerdDinner.com</h2><div id=\"nd-outer\">'); <% if (Model.Dinners.Count == 0) { %> document.write('<div id=\"nd-bummer\">Looks like there\'s no Nerd Dinners near <%:Model.LocationName %> in the near future. Why not <a target=\"_blank\" href=\"http://www.nerddinner.com/Dinners/Create\">host one</a>?</div>'); <% } else { %> document.write('<h3> Dinners Near You</h3><ul>'); <% foreach (var item in Model.Dinners) { %> document.write('<li><a target=\"_blank\" href=\"http://nrddnr.com/<%: item.DinnerID %>\"><%: item.Title.Truncate(20) %> with <%: item.HostedBy %> on <%: item.EventDate.ToShortDateString() %></a></li>'); <% } %> document.write('</ul>'); <% } %> document.write('<div id=\"nd-footer\"> More dinners and fun at <a target=\"_blank\" href=\"http://nrddnr.com\">http://nrddnr.com</a></div></div></div>'); Getting IP’s for Testing There are a variety of online services that will translate a location to an IP, which were handy for testing these out. I found http://www.itouchmap.com/latlong.html to be most useful, but I’m open to suggestions if you know of something better. Next steps I think the next step here is to minimize load – you know, in case people start actually using this flair. There are two places to think about – the NerdDinner.com servers, and the services we’re using for Geolocation. I usually think about caching as a first attack on server load, but that’s less helpful here since every user will have a different IP. Instead, I’d look at taking advantage of Asynchronous Controller Actions, a cool new feature in ASP.NET MVC 2. Async Actions let you call a potentially long-running webservice without tying up a thread on the server while waiting for the response. There’s some good info on that in the MSDN documentation, and Dino Esposito wrote a great article on Asynchronous ASP.NET Pages in the April 2010 issue of MSDN Magazine. But let’s think of the children, shall we? What about ipinfodb.com? Well, they don’t have specific daily limits, but they do throttle you if you put a lot of traffic on them. From their FAQ: We do not have a specific daily limit but queries that are at a rate faster than 2 per second will be put in "queue". If you stay below 2 queries/second everything will be normal. If you go over the limit, you will still get an answer for all queries but they will be slowed down to about 1 per second. This should not affect most users but for high volume websites, you can either use our IP database on your server or we can whitelist your IP for 5$/month (simply use the donate form and leave a comment with your server IP). 
    Good programming practices such as not querying our API for all page views (you can store the data in a cookie or a database) will also help not reaching the limit. So the first step there is to save the geolocation information in a time-limited cookie, which will allow us to look up the local dinners immediately without having to hit the geolocation service.
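    A minimal sketch of what that cookie caching might look like inside the ServicesController. The cookie name, the pipe-delimited format, and the seven-day lifetime are illustrative choices, not anything from the real codebase; only GeolocationService.HostIpToPlaceName and the LocationInfo/LatLong classes come from the code above:

    private LocationInfo GetLocationCached(string ip)
    {
        // Reuse a previously stored location if the visitor already has the cookie.
        var cookie = Request.Cookies["nd-location"];
        if (cookie != null && !string.IsNullOrEmpty(cookie.Value))
        {
            var parts = cookie.Value.Split('|');
            if (parts.Length == 6)
            {
                return new LocationInfo
                {
                    City = parts[0],
                    RegionName = parts[1],
                    Country = parts[2],
                    ZipPostalCode = parts[3],
                    Position = new LatLong
                    {
                        Lat = float.Parse(parts[4], CultureInfo.InvariantCulture),
                        Long = float.Parse(parts[5], CultureInfo.InvariantCulture)
                    }
                };
            }
        }

        // Cache miss: hit the geolocation service once and store the result.
        var location = GeolocationService.HostIpToPlaceName(ip);

        var newCookie = new HttpCookie("nd-location", string.Join("|", new[]
        {
            location.City, location.RegionName, location.Country,
            location.ZipPostalCode,
            location.Position.Lat.ToString(CultureInfo.InvariantCulture),
            location.Position.Long.ToString(CultureInfo.InvariantCulture)
        }));
        newCookie.Expires = DateTime.UtcNow.AddDays(7); // time-limited, so stale data ages out
        Response.Cookies.Add(newCookie);

        return location;
    }

    (This assumes using System.Globalization and System.Web.) The Flair action would then call GetLocationCached(SourceIP) instead of hitting GeolocationService.HostIpToPlaceName directly on every request.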

    Read the article

  • Red Gate Coder interviews: Robin Hellen

    - by Michael Williamson
    Robin Hellen is a test engineer here at Red Gate, and is also the latest coder I’ve interviewed. We chatted about debugging code, the roles of software engineers and testers, and why Vala is currently his favourite programming language.

    How did you get started with programming?
    It started when I was about six. My dad’s a professional programmer, and he gave me and my sister one of his old computers and taught us a bit about programming. It was an old Amiga 500 with a variant of BASIC. I don’t think I ever successfully completed anything! It was just faffing around. I didn’t really get anywhere with it.

    But then presumably you did get somewhere with it at some point.
    At some point. The PC emerged as the dominant platform, and I learnt a bit of Visual Basic. I didn’t really do much, just a couple of quick hacky things. A bit of demo animation. Took me a long time to get anywhere with programming, really.

    When did you feel like you did start to get somewhere?
    I think it was when I started doing things for someone else, which was my sister’s final year of university project. She called up my dad two days before she was due to submit, saying “We need something to display a graph!”. Dad says, “I’m too busy, go talk to your brother”. So I hacked up this ugly piece of code, sent it off and they won a prize for that project. Apparently, the graph, the bit that I wrote, was the reason they won a prize! That was when I first felt that I’d actually done something that was worthwhile. That was my first real bit of code, and the ugliest code I’ve ever written. It’s basically an array of pre-drawn line elements that I shifted round the screen to draw a very spikey graph.

    When did you decide that programming might actually be something that you wanted to do as a career?
    It’s not really a decision I took, I always wanted to do something with computers. And I had to take a gap year for uni, so I was looking for twelve month internships. I applied to Red Gate, and they gave me a job as a tester. And that’s where I really started having to write code well. To a better standard than I had been up to that point.

    How did you find coming to Red Gate and working with other coders?
    I thought it was really nice. I learnt so much just from other people around. I think one of the things that’s really great is that people are just willing to help you learn. Instead of “Don’t you know that, you’re so stupid”, it’s “You can just do it this way”.

    If you could go back to the very start of that internship, is there something that you would tell yourself?
    Write shorter code. I have a tendency to write massive, many-thousand line files that I break out of right at the end. And then half-way through a project I’m doing something, I think “Where did I write that bit that does that thing?”, and it’s almost impossible to find. I wrote some horrendous code when I started. Just that principle, just keep things short. Even if it looks a bit crazy to be jumping around all over the place all of the time, it’s actually a lot more understandable.

    And how do you hold yourself to that?
    Generally, if a function’s going off my screen, it’s probably too long. That’s what I tell myself, and within the team here we have code reviews, so the guys I’m with at the moment are pretty good at pulling me up on, “Doesn’t that look like it’s getting a bit long?”. It’s more just the subjective standard of readability than anything.

    So you’re an advocate of code review?
    Yes, definitely. Both to spot errors that you might have made, and to improve your knowledge.
    The person you’re reviewing will say “Oh, you could have done it that way”. That’s how we learn, by talking to others, and also just sharing knowledge of how your project works around the team, or even outside the team. Definitely a very firm advocate of code reviews.

    Do you think there’s more we could do with them?
    I don’t know. We’re struggling with how to add them as part of the process without it becoming too cumbersome. We’ve experimented with a few different ways, and we’ve not found anything that just works.

    To get more into the nitty gritty: how do you like to debug code?
    The first thing is to do it in my head. I’ll actually think what piece of code is likely to have caused that error, and take a quick look at it, just to see if there’s anything glaringly obvious there. The next thing I’ll probably do is throw in print statements, or throw some exceptions from various points, just to check: is it going through the code path I expect it to? A last resort is to actually debug code using a debugger.

    Why is the debugger the last resort?
    Probably because of the environments I learnt programming in. VB and early BASIC didn’t have much of a debugger, the only way to find out what your program was doing was to add print statements. Also, because a lot of the stuff I tend to work with is non-interactive, if it’s something that takes a long time to run, I can throw in the print statements, set a run off, go and do something else, and look at it again later, rather than trying to remember what happened at that point when I was debugging through it. So it also gives me the record of what happens. I hate just sitting there pressing F5, F5, continually. If you’re having to find out what your code is doing at each line, you’ve probably got a very wrong mental model of what your code’s doing, and you can find that out just as easily by inspecting a couple of values through the print statements.

    If I were on some codebase that you were also working on, what should I do to make it as easy as possible to understand?
    I’d say short and well-named methods. The one thing I like to do when I’m looking at code is to find out where a value comes from, and the more layers of indirection there are, particularly DI [dependency injection] frameworks, the harder it is to find out where something’s come from. I really hate that. I want to know if the value comes from the user here or is a constant here, and if I can’t find that out, that makes code very hard to understand for me.

    As a tester, where do you think the split should lie between software engineers and testers?
    I think the split is less on areas of the code you write and more what you’re designing and creating. The developers put a structure on the code, while my major role is to say which tests we should have, whether we should test that, or it’s not worth testing that because it’s a tiny function in code that nobody’s ever actually going to see. So it’s not a split in the code, it’s a split in what you’re thinking about. Saying what code we should write, but alternatively what code we should take out.

    In your experience, do the software engineers tend to do much testing themselves?
    They tend to control the lowest layer of tests. And, depending on how the balance of people is in the team, they might write some of the higher levels of test. Or that might go to the testers.
    I’m the only tester on my team with three other developers, so they’ll be writing quite a lot of the actual test code, with input from me as to whether we should test that functionality, whereas on other teams, where it’s been more equal numbers, the testers have written pretty much all of the high level tests, just because that’s the best use of resource.

    If you could shuffle resources around however you liked, do you think that the developers should be writing those high-level tests?
    I think they should be writing them occasionally. It helps when they have an understanding of how testing code works and possibly what assumptions we’ve made in tests, and they can say “actually, it doesn’t work like that under the hood so you’ve missed this whole area”. It’s one of those agile things that everyone on the team should be at least comfortable doing the various jobs. So if the developers can write test code then I think that’s a very good thing.

    So you think testers should be able to write production code?
    Yes, although given most testers’ skills at coding, I wouldn’t advise it too much! I have written a few things, and I did make a few changes that have actually gone into our production code base. They’re not necessarily running every time but they are there. I think having that mix of skill sets is really useful. In some ways we’re using our own product to test itself, so being able to make those changes where it’s not working saves me a round-trip through the developers. It can be really annoying if the developers have no time to make a change, and I can’t touch the code.

    If the software engineers are consistently writing tests at all levels, what do you think the role of a tester is?
    I think on a team like that, those distinctions aren’t quite so useful. There’ll be two cases. There’s either the case where the developers think they’ve written good tests, but you still need someone with a test engineer mind-set to go through the tests and validate that it’s a useful set, or the correct set for that code. Or they won’t actually be pure developers, they’ll have that mix of test ability in there.

    I think having slightly more distinct roles is useful. When it starts to blur, then you lose that view of the tests as a whole. The tester job is not to create tests, it’s to validate the quality of the product, and you don’t do that just by writing tests. There’s more things you’ve got to keep in your mind. And I think when you blur the roles, you start to lose that end of the tester.

    So because you’re working on those features, you lose that holistic view of the whole system?
    Yeah, and anyone who’s worked on the feature shouldn’t be testing it. You always need to have it tested by someone who didn’t write it. Otherwise you’re a bit too close and you assume “yes, people will only use it that way”, but the tester will come along and go “how do people use this? How would our most idiotic user use this?”. I might not test that because it might be completely irrelevant. But it’s coming in and trying to have a different set of assumptions.

    Are you a believer that it should all be automated if possible?
    Not entirely. So an automated test is always better than a manual test for the long-term, but there’s still nothing that beats a human sitting in front of the application and thinking “What could I do at this point?”. The automated test is very good but they follow that strict path, and they never check anything off the path.
    The human tester will look at things that they weren’t expecting, whereas the automated test can only ever go “Is that value correct?” in many respects, and it won’t notice that on the other side of the screen you’re showing something completely wrong. And that value might have been checked independently, but you always find a few odd interactions when you’re going through something manually, and you always need to go through something manually to start with anyway, otherwise you won’t know where the important bits to write your automation are.

    When you’re doing that manual testing, do you think it’s important to do that across the entire product, or just the bits that you’ve touched recently?
    I think it’s important to do it mostly on the bits you’ve touched, but you can’t ignore the rest of the product. Unless you’re dealing with a very, very self-contained bit, you’ll almost always encounter other bits of the product along the way. Most testers I know, even if they are looking at just one path, they’ll keep open and move around a bit anyway, just because they want to find something that’s broken. If we find that your path is right, we’ll go out and hunt something else.

    How do you think this fits into the idea of continuously deploying, so long as the tests pass?
    With deploying a website it’s a bit different because you can always pull it back. If you’re deploying an application to customers, when you’ve released it, it’s out there, you can’t pull it back. Someone’s going to keep it, no matter how hard you try there will be a few installations that stay around. So I’d always have at least a human element on that path. With websites, you could probably automate straight out, or at least straight out to an internal environment or a single server in a cloud of fifty that will serve some people. But I don’t think you should release to everyone just on automated tests passing.

    You’ve already mentioned using BASIC and C# — are there any other languages that you’ve used?
    I’ve used a few. That’s something that has changed more recently, I’ve become familiar with more languages. Before I started at Red Gate I learnt a bit of C. Then last year, I taught myself Python which I actually really enjoyed using. I’ve also come across another language called Vala, which is sort of a C#-like language. It’s basically a pre-processor for C, but it has very nice syntax. I think that’s currently my favourite language.

    Any particular reason for trying Vala?
    I have a completely Linux environment at home, and I’ve been looking for a nice language, and C# just doesn’t cut it because I won’t touch Mono. So, I was looking for something like C# but that was useable in an open source environment, and Vala’s what I found. C#’s got a few features that Vala doesn’t, and Vala’s got a few features where I think “It would be awesome if C# had that”.

    What are some of the features that it’s missing?
    Extension methods. And I think that’s the only one that really bugs me. I like to use them when I’m writing C# because it makes some things really easy, especially with libraries that you can’t touch the internals of. It doesn’t have method overloading, which is sometimes annoying.

    Where does it win over C#?
    Everything is non-nullable by default, you never have to check that something’s unexpectedly null. Also, Vala has code contracts.
    This is starting to come in C# 4, but the way it works in Vala is that you specify requirements in short phrases as part of your function signature and they stick to the signature, so that when you inherit it, it has exactly the same code contract as the base one, or when you inherit from an interface, you have to match the signature exactly. Just using those makes you think a bit more about how you’re writing your method, it’s not an afterthought when you’ve got contracts from base classes given to you, you can’t change it. Which I think is a lot nicer than the way C# handles it.

    When are those actually checked?
    They’re checked both at compile and run-time. The compile-time checking isn’t very strong yet, it’s quite a new feature in the compiler, and because it compiles down to C, you can write C code and interface with your methods, so you can bypass that compile-time check anyway. So there’s an extra runtime check, and if you violate one of the contracts at runtime, it’s game over for your program, there’s no exception to catch, it’s just goodbye!

    One thing I dislike about C# is the exceptions. You write a bit of code and fifty exceptions could come from any point in your ten lines, and you can’t mentally model how those exceptions are going to come out, and you can’t even predict them based on the functions you’re calling, because if you’ve accidentally got a derived class there instead of a base class, that can throw a completely different set of exceptions. So I’ve got no way of mentally modelling those, whereas in Vala they’re checked like Java, so you know only these exceptions can come out. You know in advance the error conditions. I think Raymond Chen on Old New Thing says “the only thing you know when you throw an exception is that you’re in an invalid state somewhere in your program, so just kill it and be done with it!”

    You said you’ve also learnt bits of Python. How did you find that compared to Vala and C#?
    Very different because of the dynamic typing. I’ve been writing a website for my own use. I’m quite into photography, so I take photos off my camera, post-process them, dump them in a file, and I get a webpage with all my thumbnails. So sort of like Picasa, but written by myself because I wanted something to learn Python with. There are some things that are really nice, I just found it really difficult to cope with the fact that I’m not quite sure what this object type that I’m passed is, I might not ever be sure, so it can randomly blow up on me. But once I train myself to ignore that and just say “well, I’m fairly sure it’s going to be something that looks like this, so I’ll use it like this”, then it’s quite nice.

    Any particular features that you’ve appreciated?
    I don’t like any particular feature, it’s just very straightforward to work with. It’s very quick to write something in, particularly as you don’t have to worry that you’ve changed something that affects a different part of the program. If you have, then that part blows up, but I can get this part working right now.

    If you were doing a big project, would you be willing to do it in Python rather than C# or Vala?
    I think I might be willing to try something bigger or long term with Python. We’re currently doing an ASP.NET MVC project on C#, and I don’t like the amount of reflection. There’s a lot of magic that pulls values out, and it’s all done behind the scenes.
    It’s almost managed to put a dynamic type system on top of C#, which in many ways destroys the language to me, whereas if you’re already in a dynamic language, having things done dynamically is much more natural. In many ways, you get the worst of both worlds. I think for web projects, I would go with Python again, whereas for anything desktop, command-line or GUI-based, I’d probably go for C# or Vala, depending on what environment I’m in. It’s the fact that you can gain from the strong typing in ways that you can’t so much on the web app. Or, in a web app, you have to use dynamic typing at some point, or you have to write a hell of a lot of boilerplate, and I’d rather use the dynamic typing than write the boilerplate.

    What do you think separates great programmers from everyone else?
    Probably design choices. Choosing to write a piece of code one way or another. For any given program you ask me to write, I could probably do it five thousand ways. A programmer who is capable will see four or five of them, and choose one of the better ones. The excellent programmer will see the largest proportion and manage to pick the best one very quickly without having to think too much about it. I think that’s probably what separates them: the speed at which they can see the best path to write the program in.

    More Red Gate Coder interviews

    Read the article

  • Web Application Problems (web.config errors) HTTP 500.19 with IIS7.5 and ASP.NET v2

    - by Django Reinhardt
    This is driving the whole team crazy. There must be some simple mis-configured part of IIS or our Web Server, but every time we try to run our ASP.NET Web Application on IIS 7.5 we get the following error... Here's the error in full:

    HTTP Error 500.19 - Internal Server Error
    The requested page cannot be accessed because the related configuration data for the page is invalid.

    Detailed Error Information
    Module: IIS Web Core
    Notification: Unknown
    Handler: Not yet determined
    Error Code: 0x8007000d
    Config Error:
    Config File: \\?\E:\wwwroot\web.config
    Requested URL: http://localhost:80/Default.aspx
    Physical Path:
    Logon Method: Not yet determined
    Logon User: Not yet determined
    Config Source: -1: 0:

    The machine is running Windows Server 2008 R2. We're developing our Web Application using Visual Studio 2008. According to Microsoft the code 8007000d means there's a syntax error in our web.config -- except the project builds and runs fine locally. Looking at the web.config in XML Notepad doesn't bring up any syntax errors, either. I'm assuming it must be some sort of poor configuration on my part...? Does anyone know where I might find further information about the error? Nothing is showing in EventViewer, either :( Not sure what else would be helpful to mention... Assistance is greatly appreciated. Thanks!

    UPDATES! - POSTED WEB.CONFIG BELOW

    Ok, since I posted the original question above, I've tracked down the precise lines in the web.config that were causing the error. Here are the lines (they appear between <System.webServer> tags)...

    <httpHandlers>
        <remove verb="*" path="*.asmx"/>
        <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/>
    </httpHandlers>

    Note: If I delete the lines between the <httpHandlers> I STILL get the error. I literally have to delete <httpHandlers> (and the lines in between) to stop getting the above error. Once I've done this I get a new 500.19 error, however. Thankfully, this time IIS actually tells me which bit of the web.config is causing a problem...

    <handlers>
        <remove name="WebServiceHandlerFactory-Integrated"/>
        <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory,System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/>
        <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/>
        <add name="ScriptResource" preCondition="integratedMode" verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/>
    </handlers>

    Looking at these lines it's clear the problem has migrated further within the same <system.webServer> tag to the <handlers> tag. The new error is also more explicit and specifically complains that it doesn't recognize the attribute "validate" (as seen on the third line above). Removing this attribute then makes it complain that the same line doesn't have the required "name" attribute. Adding this attribute then brings up ASP.NET error...

    Could not load file or assembly 'System.web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56' or one of its dependencies. The system cannot find the file specified.
    Obviously I think these new errors have just arisen from me deleting the <httpHandlers> tags in the first place -- they're obviously needed by the application -- so the question remains: Why would these tags kick up an error in IIS in the first place??? Do I need to install something into IIS to make it work with them? Thanks again for any help.

    WEB.CONFIG

    Here are the troublesome bits of our web.config... I hope this helps someone find our problem!

    <system.Web>
        <!-- stuff cut out -->
        <httpHandlers>
            <remove verb="*" path="*.asmx"/>
            <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/>
            <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/>
            <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56" validate="false"/>
        </httpHandlers>
        <httpModules>
            <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/>
        </httpModules>
    </system.web>
    <system.webServer>
        <validation validateIntegratedModeConfiguration="false"/>
        <modules>
            <add name="ScriptModule" preCondition="integratedMode" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/>
        </modules>
        <remove verb="*" path="*.asmx"/>
        <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/>
        <handlers>
            <remove name="WebServiceHandlerFactory-Integrated"/>
            <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory,System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/>
            <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/>
            <add name="ScriptResource" preCondition="integratedMode" verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/>
        </handlers>
    </system.webServer>

    Read the article

  • AutoMapper is not working for a Container class

    - by rboarman
    Hello, I have an AutoMapper issue that has been driving me crazy for way too long now. A similar question was also posted on the AutoMapper user site but has not gotten much love. The summary is that I have a container class that holds a Dictionary of components. The components are a derived object of a common base class. I also have a parallel structure that I am using as DTO objects to which I want to map. The error that gets generated seems to say that the mapper cannot map between two of the classes that I have included in the CreateMap calls. I think the error has to do with the fact that I have a Dictionary of objects that are not part of the container‘s hierarchy. I apologize in advance for the length of the code below. My simple test cases work. Needless to say, it’s only the more complex case that is failing. Here are the classes: #region Dto objects public class ComponentContainerDTO { public Dictionary<string, ComponentDTO> Components { get; set; } public ComponentContainerDTO() { this.Components = new Dictionary<string, ComponentDTO>(); } } public class EntityDTO : ComponentContainerDTO { public int Id { get; set; } } public class ComponentDTO { public EntityDTO Owner { get; set; } public int Id { get; set; } public string Name { get; set; } public string ComponentType { get; set; } } public class HealthDTO : ComponentDTO { public decimal CurrentHealth { get; set; } } public class PhysicalLocationDTO : ComponentDTO { public Point2D Location { get; set; } } #endregion #region Domain objects public class ComponentContainer { public Dictionary<string, Component> Components { get; set; } public ComponentContainer() { this.Components = new Dictionary<string, Component>(); } } public class Entity : ComponentContainer { public int Id { get; set; } } public class Component { public Entity Owner { get; set; } public int Id { get; set; } public string Name { get; set; } public string ComponentType { get; set; } } public class Health : Component { public decimal CurrentHealth { get; set; } } public struct Point2D { public decimal X; public decimal Y; public Point2D(decimal x, decimal y) { X = x; Y = y; } } public class PhysicalLocation : Component { public Point2D Location { get; set; } } #endregion The code: var entity = new Entity() { Id = 1 }; var healthComponent = new Health() { CurrentHealth = 100, Owner = entity, Name = "Health", Id = 2 }; entity.Components.Add("1", healthComponent); var locationComponent = new PhysicalLocation() { Location = new Point2D() { X = 1, Y = 2 }, Owner = entity, Name = "PhysicalLocation", Id = 3 }; entity.Components.Add("2", locationComponent); Mapper.CreateMap<ComponentContainer, ComponentContainerDTO>() .Include<Entity, EntityDTO>(); Mapper.CreateMap<Entity, EntityDTO>(); Mapper.CreateMap<Component, ComponentDTO>() .Include<Health, HealthDTO>() .Include<PhysicalLocation, PhysicalLocationDTO>(); Mapper.CreateMap<Component, ComponentDTO>(); Mapper.CreateMap<Health, HealthDTO>(); Mapper.CreateMap<PhysicalLocation, PhysicalLocationDTO>(); Mapper.AssertConfigurationIsValid(); var targetEntity = Mapper.Map<Entity, EntityDTO>(entity); The error when I call Map() (abbreviated stack crawls): AutoMapper.AutoMapperMappingException was unhandled Message=Trying to map MapperTest1.Entity to MapperTest1.EntityDTO. Using mapping configuration for MapperTest1.Entity to MapperTest1.EntityDTO Exception of type 'AutoMapper.AutoMapperMappingException' was thrown. 
Source=AutoMapper StackTrace: at AutoMapper.MappingEngine.AutoMapper.IMappingEngineRunner.Map(ResolutionContext context) . . . InnerException: AutoMapper.AutoMapperMappingException Message=Trying to map System.Collections.Generic.Dictionary`2[[System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],[MapperTest1.Component, ElasticTest1, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]] to System.Collections.Generic.Dictionary`2[[System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],[MapperTest1.ComponentDTO, ElasticTest1, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]]. Using mapping configuration for MapperTest1.Entity to MapperTest1.EntityDTO Destination property: Components Exception of type 'AutoMapper.AutoMapperMappingException' was thrown. Source=AutoMapper StackTrace: at AutoMapper.Mappers.TypeMapObjectMapperRegistry.PropertyMapMappingStrategy.MapPropertyValue(ResolutionContext context, IMappingEngineRunner mapper, Object mappedObject, PropertyMap propertyMap) . . InnerException: AutoMapper.AutoMapperMappingException Message=Trying to map System.Collections.Generic.Dictionary`2[[System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],[MapperTest1.Component, ElasticTest1, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]] to System.Collections.Generic.Dictionary`2[[System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],[MapperTest1.ComponentDTO, ElasticTest1, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]]. Using mapping configuration for MapperTest1.Entity to MapperTest1.EntityDTO Destination property: Components Exception of type 'AutoMapper.AutoMapperMappingException' was thrown. Source=AutoMapper StackTrace: at AutoMapper.MappingEngine.AutoMapper.IMappingEngineRunner.Map(ResolutionContext context) . InnerException: AutoMapper.AutoMapperMappingException Message=Trying to map MapperTest1.Component to MapperTest1.ComponentDTO. Using mapping configuration for MapperTest1.Health to MapperTest1.HealthDTO Destination property: Components Exception of type 'AutoMapper.AutoMapperMappingException' was thrown. Source=AutoMapper StackTrace: at AutoMapper.MappingEngine.AutoMapper.IMappingEngineRunner.Map(ResolutionContext context) . . InnerException: AutoMapper.AutoMapperMappingException Message=Trying to map System.Decimal to System.Decimal. Using mapping configuration for MapperTest1.Health to MapperTest1.HealthDTO Destination property: CurrentHealth Exception of type 'AutoMapper.AutoMapperMappingException' was thrown. Source=AutoMapper StackTrace: at AutoMapper.Mappers.TypeMapObjectMapperRegistry.PropertyMapMappingStrategy.MapPropertyValue(ResolutionContext context, IMappingEngineRunner mapper, Object mappedObject, PropertyMap propertyMap) . . InnerException: System.InvalidCastException Message=Unable to cast object of type 'MapperTest1.ComponentDTO' to type 'MapperTest1.HealthDTO'. Source=Anonymously Hosted DynamicMethods Assembly StackTrace: at SetCurrentHealth(Object , Object ) . . Thank you in advance. Rick

    Read the article

  • MVC 2 Entity Framework View Model Insert

    - by cannibalcorpse
    This is driving me crazy. Hopefully my question makes sense... I'm using MVC 2 and Entity Framework 1 and am trying to insert a new record with two navigation properties. I have a SQL table, Categories, that has a lookup table CategoryTypes and another self-referencing lookup CategoryParent. EF makes two nav properties on my Category model, one called Parent and another called CategoryType, both instances of their respective models. On my view that creates the new Category, I have two dropdowns, one for the CategoryType and another for the ParentCategory. When I try and insert the new Category WITHOUT the ParentCategory, which allows nulls, everything is fine. As soon as I add the ParentCategory, the insert fails, and oddly (or so I think) complains about the CategoryType in the form of this error: 0 related 'CategoryTypes' were found. 1 'CategoryTypes' is expected. When I step through, I can verifiy that both ID properties coming in on the action method parameter are correct. I can also verify that when I go to the db to get the CategoryType and ParentCategory with the ID's, the records are being pulled fine. Yet it fails on SaveChanges(). All that I can see is that my CategoryParent dropdownlistfor in my view, is somehow causing the insert to bomb. Please see my comments in my httpPost Create action method. My view model looks like this: public class EditModel { public Category MainCategory { get; set; } public IEnumerable<CategoryType> CategoryTypesList { get; set; } public IEnumerable<Category> ParentCategoriesList { get; set; } } My Create action methods look like this: // GET: /Categories/Create public ActionResult Create() { return View(new EditModel() { CategoryTypesList = _db.CategoryTypeSet.ToList(), ParentCategoriesList = _db.CategorySet.ToList() }); } // POST: /Categories/Create [HttpPost] public ActionResult Create(Category mainCategory) { if (!ModelState.IsValid) return View(new EditModel() { MainCategory = mainCategory, CategoryTypesList = _db.CategoryTypeSet.ToList(), ParentCategoriesList = _db.CategorySet.ToList() }); mainCategory.CategoryType = _db.CategoryTypeSet.First(ct => ct.Id == mainCategory.CategoryType.Id); // This db call DOES get the correct Category, but fails on _db.SaveChanges(). // Oddly the error is related to CategoryTypes and not Category. // Entities in 'DbEntities.CategorySet' participate in the 'FK_Categories_CategoryTypes' relationship. // 0 related 'CategoryTypes' were found. 1 'CategoryTypes' is expected. //mainCategory.Parent = _db.CategorySet.First(c => c.Id == mainCategory.Parent.Id); // If I just use the literal ID of the same Category, // AND comment out the CategoryParent dropdownlistfor in the view, all is fine. 
mainCategory.Parent = _db.CategorySet.First(c => c.Id == 2); _db.AddToCategorySet(mainCategory); _db.SaveChanges(); return RedirectToAction("Index"); } Here is my Create form on the view : <% using (Html.BeginForm()) {%> <%= Html.ValidationSummary(true) %> <fieldset> <legend>Fields</legend> <div> <%= Html.LabelFor(model => model.MainCategory.Parent.Id) %> <%= Html.DropDownListFor(model => model.MainCategory.Parent.Id, new SelectList(Model.ParentCategoriesList, "Id", "Name")) %> <%= Html.ValidationMessageFor(model => model.MainCategory.Parent.Id) %> </div> <div> <%= Html.LabelFor(model => model.MainCategory.CategoryType.Id) %> <%= Html.DropDownListFor(model => model.MainCategory.CategoryType.Id, new SelectList(Model.CategoryTypesList, "Id", "Name"))%> <%= Html.ValidationMessageFor(model => model.MainCategory.CategoryType.Id)%> </div> <div> <%= Html.LabelFor(model => model.MainCategory.Name) %> <%= Html.TextBoxFor(model => model.MainCategory.Name)%> <%= Html.ValidationMessageFor(model => model.MainCategory.Name)%> </div> <div> <%= Html.LabelFor(model => model.MainCategory.Description)%> <%= Html.TextAreaFor(model => model.MainCategory.Description)%> <%= Html.ValidationMessageFor(model => model.MainCategory.Description)%> </div> <div> <%= Html.LabelFor(model => model.MainCategory.SeoName)%> <%= Html.TextBoxFor(model => model.MainCategory.SeoName, new { @class = "large" })%> <%= Html.ValidationMessageFor(model => model.MainCategory.SeoName)%> </div> <div> <%= Html.LabelFor(model => model.MainCategory.HasHomepage)%> <%= Html.CheckBoxFor(model => model.MainCategory.HasHomepage)%> <%= Html.ValidationMessageFor(model => model.MainCategory.HasHomepage)%> </div> <p><input type="submit" value="Create" /></p> </fieldset> <% } %> Maybe I've just been staying up too late playing with MVC 2? :) Please let me know if I'm not being clear enough.

    Read the article

  • A problem with the asp.net create user control

    - by Sir Psycho
    Hi, I've customised the ASP.NET login control and it seems to create new accounts fine, but if I duplicate a user id that's already registered or enter an email that's already used, the error messages aren't displaying. It's driving me crazy. The page just refreshes without showing an error. I've included the ErrorMessage literal as instructed on the MSDN site, but nothing: http://msdn.microsoft.com/en-us/library/ms178342.aspx <asp:CreateUserWizard ErrorMessageStyle-BorderColor="Azure" ID="CreateUserWizard1" runat="server" ContinueDestinationPageUrl="~/home.aspx"> <WizardSteps> <asp:CreateUserWizardStep ID="CreateUserWizardStep1" runat="server"> <ContentTemplate> <asp:Literal ID="ErrorMessage" runat="server"></asp:Literal> <div class="fieldLine"> <asp:Label ID="lblFirstName" runat="server" Text="First Name:" AssociatedControlID="tbxFirstName"></asp:Label> <asp:Label ID="lblLastName" runat="server" Text="Last Name:" AssociatedControlID="tbxLastName"></asp:Label> </div> <div class="fieldLine"> <asp:TextBox ID="tbxFirstName" runat="server"></asp:TextBox> <asp:TextBox ID="tbxLastName" runat="server"></asp:TextBox> </div> <asp:Label ID="lblEmail" runat="server" Text="Email:" AssociatedControlID="Email"></asp:Label> <asp:TextBox ID="Email" runat="server" CssClass="wideInput"></asp:TextBox><br /> <asp:RequiredFieldValidator ID="RequiredFieldValidator1" runat="server" CssClass="aspValidator" Display="Dynamic" ControlToValidate="Email" ErrorMessage="Required"></asp:RequiredFieldValidator> <asp:RegularExpressionValidator ID="RegularExpressionValidator1" runat="server" Display="Dynamic" CssClass="aspValidator" ControlToValidate="Email" SetFocusOnError="true" ValidationExpression="^(?:[a-zA-Z0-9_'^&amp;/+-])+(?:\.(?:[a-zA-Z0-9_'^&amp;/+-])+)*@(?:(?:\[?(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))\.){3}(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\]?)|(?:[a-zA-Z0-9-]+\.)+(?:[a-zA-Z]){2,}\.?)$" ErrorMessage="Email address not valid"></asp:RegularExpressionValidator> <asp:Label ID="lblEmailConfirm" runat="server" Text="Confirm Email Address:" AssociatedControlID="tbxEmailConfirm"></asp:Label> <asp:TextBox ID="tbxEmailConfirm" runat="server" CssClass="wideInput"></asp:TextBox><br /> <asp:RequiredFieldValidator ID="RequiredFieldValidator2" runat="server" CssClass="aspValidator" Display="Dynamic" ControlToValidate="tbxEmailConfirm" ErrorMessage="Required"></asp:RequiredFieldValidator> <asp:RegularExpressionValidator ID="RegularExpressionValidator2" runat="server" Display="Dynamic" CssClass="aspValidator" ControlToValidate="tbxEmailConfirm" SetFocusOnError="true" ValidationExpression="^(?:[a-zA-Z0-9_'^&amp;/+-])+(?:\.(?:[a-zA-Z0-9_'^&amp;/+-])+)*@(?:(?:\[?(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))\.){3}(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\]?)|(?:[a-zA-Z0-9-]+\.)+(?:[a-zA-Z]){2,}\.?)$" ErrorMessage="Email address not valid"></asp:RegularExpressionValidator> <asp:CompareValidator ID="CompareValidator1" runat="server" Display="Dynamic" SetFocusOnError="true" CssClass="aspValidator" ControlToCompare="Email" ControlToValidate="tbxEmailConfirm" ErrorMessage="Email address' do not match"></asp:CompareValidator> <asp:Label ID="lblUsername" runat="server" Text="Username:" AssociatedControlID="UserName"></asp:Label> <asp:TextBox ID="UserName" runat="server" MaxLength="12"></asp:TextBox><br /> <asp:CustomValidator ID="CustomValidatorUserName" runat="server" Display="Dynamic" SetFocusOnError="true" CssClass="aspValidator" ValidateEmptyText="true" ControlToValidate="UserName" ErrorMessage="Username can be between 6 and 12 characters."
ClientValidationFunction="ValidateLength" OnServerValidate="ValidateUserName"></asp:CustomValidator> <div class="fieldLine"> <asp:Label ID="lblPassword" runat="server" Text="Password:" AssociatedControlID="Password"></asp:Label> <asp:Label ID="lblPasswordConfirm" runat="server" Text="Confirm Password:" AssociatedControlID="ConfirmPassword" CssClass="confirmPassword"></asp:Label> </div> <div class="fieldLine"> <asp:TextBox ID="Password" runat="server" TextMode="Password"></asp:TextBox> <asp:TextBox ID="ConfirmPassword" runat="server" TextMode="Password"></asp:TextBox><br /> <asp:CustomValidator ID="CustomValidatorPassword" runat="server" Display="Dynamic" SetFocusOnError="true" CssClass="aspValidator" ControlToValidate="Password" ValidateEmptyText="true" ErrorMessage="Password can be between 6 and 12 characters" ClientValidationFunction="ValidateLength" OnServerValidate="ValidatePassword"></asp:CustomValidator> <asp:CustomValidator ID="CustomValidatorConfirmPassword" runat="server" Display="Dynamic" SetFocusOnError="true" CssClass="aspValidator" ControlToValidate="ConfirmPassword" ValidateEmptyText="true" ErrorMessage="Password can be between 6 and 12 characters" ClientValidationFunction="ValidateLength" OnServerValidate="ValidatePassword"></asp:CustomValidator> <asp:CompareValidator ID="CompareValidator2" runat="server" Enabled="false" Display="Dynamic" SetFocusOnError="true" CssClass="aspValidator" ControlToCompare="Password" ControlToValidate="ConfirmPassword" ErrorMessage="Passwords do not match"></asp:CompareValidator> </div> <asp:Label ID="lblCaptch" runat="server" Text="Captcha:" AssociatedControlID="imgCaptcha"></asp:Label> <div class="borderBlue" style="width:200px;"> <asp:Image ID="imgCaptcha" runat="server" ImageUrl="~/JpegImage.aspx" /><br /> </div> <asp:TextBox ID="tbxCaptcha" runat="server" CssClass="captchaText"></asp:TextBox> <asp:RequiredFieldValidator ControlToValidate="tbxCaptcha" CssClass="aspValidator" ID="RequiredFieldValidator3" runat="server" ErrorMessage="Required"></asp:RequiredFieldValidator> <asp:CustomValidator ID="CustomValidator1" ControlToValidate="tbxCaptcha" runat="server" OnServerValidate="ValidateCaptcha" ErrorMessage="Captcha incorrect"></asp:CustomValidator> </ContentTemplate> <CustomNavigationTemplate> <div style="float:left;"> <asp:Button ID="CreateUser" runat="server" Text="Register Now!" CausesValidation="true" CommandName="CreateUser" OnCommand="CreateUserClick" CssClass="registerButton" /> </div> </CustomNavigationTemplate> </asp:CreateUserWizardStep> <asp:CompleteWizardStep ID="CompleteWizardStep1" runat="server"> <ContentTemplate> <table border="0" style="font-size: 100%; font-family: Verdana" id="TABLE1" > <tr> <td align="center" colspan="2" style="font-weight: bold; color: white; background-color: #5d7b9d; height: 18px;"> Complete</td> </tr> <tr> <td> Your account has been successfully created.<br /> </td> </tr> <tr> <td align="right" colspan="2"> <asp:Button ID="Button1" PostBackUrl="~/home.aspx" runat="server" Text="Button" /> </td> </tr> </table> </ContentTemplate> </asp:CompleteWizardStep> </WizardSteps> </asp:CreateUserWizard>

    Read the article

  • Problems using HibernateTemplate: java.lang.NoSuchMethodError: org.hibernate.SessionFactory.openSession()Lorg/hibernate/classic/Session;

    - by user2104160
    I am quite new to the Spring world and I am going crazy trying to integrate Hibernate into a Spring application using the HibernateDaoSupport abstract support class. I have the following class to persist to a database table: package org.andrea.myexample.HibernateOnSpring.entity; import javax.persistence.Entity; import javax.persistence.GeneratedValue; import javax.persistence.GenerationType; import javax.persistence.Id; import javax.persistence.Table; @Entity @Table(name="person") public class Person { @Id @GeneratedValue(strategy=GenerationType.AUTO) private int pid; private String firstname; private String lastname; public int getPid() { return pid; } public void setPid(int pid) { this.pid = pid; } public String getFirstname() { return firstname; } public void setFirstname(String firstname) { this.firstname = firstname; } public String getLastname() { return lastname; } public void setLastname(String lastname) { this.lastname = lastname; } } Next to it I have created an interface named PersonDAO in which I only define my CRUD methods. I have implemented this interface with a class named PersonDAOImpl, which also extends the Spring abstract class HibernateDaoSupport: package org.andrea.myexample.HibernateOnSpring.dao; import java.util.List; import org.andrea.myexample.HibernateOnSpring.entity.Person; import org.springframework.orm.hibernate3.support.HibernateDaoSupport; public class PersonDAOImpl extends HibernateDaoSupport implements PersonDAO{ public void addPerson(Person p) { getHibernateTemplate().saveOrUpdate(p); } public Person getById(int id) { // TODO Auto-generated method stub return null; } public List<Person> getPersonsList() { // TODO Auto-generated method stub return null; } public void delete(int id) { // TODO Auto-generated method stub } public void update(Person person) { // TODO Auto-generated method stub } } (at the moment I am trying to implement only the addPerson() method) Then I created a main class to test inserting a new object into the database table: package org.andrea.myexample.HibernateOnSpring; import org.andrea.myexample.HibernateOnSpring.dao.PersonDAO; import org.andrea.myexample.HibernateOnSpring.entity.Person; import org.springframework.context.ApplicationContext; import org.springframework.context.support.ClassPathXmlApplicationContext; public class MainApp { public static void main(String[] args) { ApplicationContext context = new ClassPathXmlApplicationContext("Beans.xml"); System.out.println("Context retrieved: " + context); Person persona1 = new Person(); persona1.setFirstname("Pippo"); persona1.setLastname("Blabla"); System.out.println("Created persona1: " + persona1); PersonDAO dao = (PersonDAO) context.getBean("personDAOImpl"); System.out.println("Created dao object: " + dao); dao.addPerson(persona1); System.out.println("persona1 saved in the database"); } } As you can see, the PersonDAOImpl class extends HibernateDaoSupport, so I think it must already contain the machinery for setting the sessionFactory...
The problem is that when I try to run this MainApp class I obtain the following exception: Exception in thread "main" java.lang.NoSuchMethodError: org.hibernate.SessionFactory.openSession()Lorg/hibernate/classic/Session; at org.springframework.orm.hibernate3.SessionFactoryUtils.doGetSession(SessionFactoryUtils.java:323) at org.springframework.orm.hibernate3.SessionFactoryUtils.getSession(SessionFactoryUtils.java:235) at org.springframework.orm.hibernate3.HibernateTemplate.getSession(HibernateTemplate.java:457) at org.springframework.orm.hibernate3.HibernateTemplate.doExecute(HibernateTemplate.java:392) at org.springframework.orm.hibernate3.HibernateTemplate.executeWithNativeSession(HibernateTemplate.java:374) at org.springframework.orm.hibernate3.HibernateTemplate.saveOrUpdate(HibernateTemplate.java:737) at org.andrea.myexample.HibernateOnSpring.dao.PersonDAOImpl.addPerson(PersonDAOImpl.java:12) at org.andrea.myexample.HibernateOnSpring.MainApp.main(MainApp.java:26) Why do I have this problem? How can I solve it? To be complete, I also include my pom.xml containing my dependency list: <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>org.andrea.myexample</groupId> <artifactId>HibernateOnSpring</artifactId> <version>0.0.1-SNAPSHOT</version> <packaging>jar</packaging> <name>HibernateOnSpring</name> <url>http://maven.apache.org</url> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> </properties> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3.8.1</version> <scope>test</scope> </dependency> <!-- Spring Framework dependencies --> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-core</artifactId> <version>3.2.1.RELEASE</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-beans</artifactId> <version>3.2.1.RELEASE</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> <version>3.2.1.RELEASE</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context-support</artifactId> <version>3.2.1.RELEASE</version> </dependency> <dependency> <!-- Used by Hibernate 4 for LocalSessionFactoryBean --> <groupId>org.springframework</groupId> <artifactId>spring-orm</artifactId> <version>3.2.0.RELEASE</version> </dependency> <!-- Dependencies for AOP --> <dependency> <groupId>cglib</groupId> <artifactId>cglib</artifactId> <version>2.2.2</version> </dependency> <!-- Dependencies for Persistence Management --> <dependency> <!-- Apache BasicDataSource --> <groupId>commons-dbcp</groupId> <artifactId>commons-dbcp</artifactId> <version>1.4</version> </dependency> <dependency> <!-- MySQL database driver --> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>5.1.23</version> </dependency> <dependency> <!-- Hibernate --> <groupId>org.hibernate</groupId> <artifactId>hibernate-core</artifactId> <version>4.1.9.Final</version> </dependency> </dependencies> </project>
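    The stack trace pins down the cause: everything under org.springframework.orm.hibernate3 was compiled against Hibernate 3, whose SessionFactory.openSession() returned org.hibernate.classic.Session, but the pom pulls in hibernate-core 4.1.9.Final, where that signature no longer exists, hence the NoSuchMethodError at runtime. The two consistent fixes are to pin hibernate-core to a 3.x release (e.g. 3.6.10.Final) to match spring-orm's hibernate3 support, or to stay on Hibernate 4 and drop HibernateDaoSupport/HibernateTemplate entirely (Spring's hibernate4 package deliberately has no HibernateTemplate). Below is a minimal sketch of the second option, assuming Beans.xml is changed to define an org.springframework.orm.hibernate4.LocalSessionFactoryBean, a matching HibernateTransactionManager, and annotation-driven transactions (none of which are shown in the question):

    package org.andrea.myexample.HibernateOnSpring.dao;

    import java.util.List;
    import org.andrea.myexample.HibernateOnSpring.entity.Person;
    import org.hibernate.SessionFactory;
    import org.springframework.transaction.annotation.Transactional;

    // Plain-SessionFactory DAO: nothing here links against the removed
    // Hibernate 3 method signatures.
    public class PersonDAOImpl implements PersonDAO {

        private SessionFactory sessionFactory;

        // Setter injection from Beans.xml (assumed)
        public void setSessionFactory(SessionFactory sessionFactory) {
            this.sessionFactory = sessionFactory;
        }

        @Transactional
        public void addPerson(Person p) {
            // getCurrentSession() joins the Spring-managed transaction
            sessionFactory.getCurrentSession().saveOrUpdate(p);
        }

        public Person getById(int id) { return null; } // TODO
        public List<Person> getPersonsList() { return null; } // TODO
        public void delete(int id) { } // TODO
        public void update(Person person) { } // TODO
    }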

    Read the article

  • Changing the action of a form with javascript/jquery

    - by Micah
    I'm having an issue that is driving me crazy. I'm trying to modify the openid-selector to support Facebook. I'm using RPXNow as my provider so it requires the form to be submitted to a different url than the standard. For example, RPXNow requires me to set up my form like this: <form action="https://wikipediamaze.rpxnow.com/openid/start?token_url=..."> This works for every provider except for Facebook and MySpace. Those require the form to be posted to a different url like this: <form action="https://wikipediamaze.rpxnow.com/facebook/start?token_url=..."> and <form action="https://wikipediamaze.rpxnow.com/myspace/start?token_url=..."> The openid-selector has a bunch of buttons on the form, each representing one of the OpenID providers. What I'm trying to do is detect when the Facebook or MySpace button is clicked and change the action on the form before submitting. However it's not working. Here is my code. I've tried several variations, all with the same "not supported" exception: $("#openid_form").attr("action", form_url) document.forms[0].action = form_url Any suggestions? Update Here are more details on the code. I've omitted some for brevity. The only thing I've done is add the facebook section to the "providers_large" object (which successfully adds the logo to the website), and instead of supplying a url identifying the user, I'm creating a property called "form_url" which is what I want to set the action of my form to. If you look at the section titled "Provider image click" you'll see where I'm checking for the presence of the property "form_url" and using jQuery to change the action and submit the form. However when I step through the JavaScript in debug mode it tells me it's an invalid operation. var providers_large = { google: { name: 'Google', url: 'https://www.google.com/accounts/o8/id' }, facebook: { name: 'Facebook', form_url: 'http://wikipediamaze.rpxnow.com/facebook/start?token_url=http://www.wikipediamaze.com/Accounts/Logon' }, }; var providers_small = { myopenid: { name: 'MyOpenID', label: 'Enter your MyOpenID username.', url: 'http://{username}.myopenid.com/' }, livejournal: { name: 'LiveJournal', label: 'Enter your Livejournal username.', url: 'http://{username}.livejournal.com/' }, flickr: { name: 'Flickr', label: 'Enter your Flickr username.', url: 'http://flickr.com/{username}/' }, technorati: { name: 'Technorati', label: 'Enter your Technorati username.', url: 'http://technorati.com/people/technorati/{username}/' }, wordpress: { name: 'Wordpress', label: 'Enter your Wordpress.com username.', url: 'http://{username}.wordpress.com/' }, blogger: { name: 'Blogger', label: 'Your Blogger account', url: 'http://{username}.blogspot.com/' }, verisign: { name: 'Verisign', label: 'Your Verisign username', url: 'http://{username}.pip.verisignlabs.com/' }, vidoop: { name: 'Vidoop', label: 'Your Vidoop username', url: 'http://{username}.myvidoop.com/' }, verisign: { name: 'Verisign', label: 'Your Verisign username', url: 'http://{username}.pip.verisignlabs.com/' }, claimid: { name: 'ClaimID', label: 'Your ClaimID username', url: 'http://claimid.com/{username}' } }; var providers = $.extend({}, providers_large, providers_small); var openid = { cookie_expires: 6*30, // 6 months.
cookie_name: 'openid_provider', cookie_path: '/', img_path: 'images/', input_id: null, provider_url: null, init: function(input_id) { var openid_btns = $('#openid_btns'); this.input_id = input_id; $('#openid_choice').show(); $('#openid_input_area').empty(); // add box for each provider for (id in providers_large) { openid_btns.append(this.getBoxHTML(providers_large[id], 'large', '.gif')); } if (providers_small) { openid_btns.append('<br/>'); for (id in providers_small) { openid_btns.append(this.getBoxHTML(providers_small[id], 'small', '.ico')); } } $('#openid_form').submit(this.submit); var box_id = this.readCookie(); if (box_id) { this.signin(box_id, true); } }, getBoxHTML: function(provider, box_size, image_ext) { var box_id = provider["name"].toLowerCase(); return '<a title="'+provider["name"]+'" href="javascript: openid.signin(\''+ box_id +'\');"' + ' style="background: #FFF url(' + this.img_path + box_id + image_ext+') no-repeat center center" ' + 'class="' + box_id + ' openid_' + box_size + '_btn"></a>'; }, /* Provider image click */ signin: function(box_id, onload) { var provider = providers[box_id]; if (! provider) { return; } this.highlight(box_id); this.setCookie(box_id); // prompt user for input? if (provider['label']) { this.useInputBox(provider); this.provider_url = provider['url']; } else if(provider['form_url']) { $('#openid_form').attr("action", provider['form_url']); $('#openid_form').submit(); } else { this.setOpenIdUrl(provider['url']); if (! onload) { $('#openid_form').submit(); } } }, /* Sign-in button click */ submit: function() { var url = openid.provider_url; if (url) { url = url.replace('{username}', $('#openid_username').val()); openid.setOpenIdUrl(url); } return true; }, setOpenIdUrl: function (url) { var hidden = $('#'+this.input_id); if (hidden.length > 0) { hidden.value = url; } else { $('#openid_form').append('<input type="hidden" id="' + this.input_id + '" name="' + this.input_id + '" value="'+url+'"/>'); } }, highlight: function (box_id) { // remove previous highlight. var highlight = $('#openid_highlight'); if (highlight) { highlight.replaceWith($('#openid_highlight a')[0]); } // add new highlight. $('.'+box_id).wrap('<div id="openid_highlight"></div>'); }, setCookie: function (value) { var date = new Date(); date.setTime(date.getTime()+(this.cookie_expires*24*60*60*1000)); var expires = "; expires="+date.toGMTString(); document.cookie = this.cookie_name+"="+value+expires+"; path=" + this.cookie_path; }, readCookie: function () { var nameEQ = this.cookie_name + "="; var ca = document.cookie.split(';'); for(var i=0;i < ca.length;i++) { var c = ca[i]; while (c.charAt(0)==' ') c = c.substring(1,c.length); if (c.indexOf(nameEQ) == 0) return c.substring(nameEQ.length,c.length); } return null; }, useInputBox: function (provider) { var input_area = $('#openid_input_area'); var html = ''; var id = 'openid_username'; var value = ''; var label = provider['label']; var style = ''; if (label) { html = '<p>' + label + '</p>'; } if (provider['name'] == 'OpenID') { id = this.input_id; value = 'http://'; style = 'background:#FFF url('+this.img_path+'openid-inputicon.gif) no-repeat scroll 0 50%; padding-left:18px;'; } html += '<input id="'+id+'" type="text" style="'+style+'" name="'+id+'" value="'+value+'" />' + '<input id="openid_submit" type="submit" value="Sign-In"/>'; input_area.empty(); input_area.append(html); $('#'+id).focus(); } };

    Read the article

  • CSS list menu; extra padding on rollover of buttons

    - by user1669878
    I have been going crazy trying to figure out why there is extra padding showing up on my navigation buttons when I roll over them. It's only showing up to the left and right of them though. Here's a link to the screenshot of what it looks like: http://i179.photobucket.com/albums/w319/jdauel/Screenshot2012-09-13at25417PM.png I think it has something to do with my CSS but I have no idea anymore. Please help me??? I tried using Firebug to figure it out, to no avail. Here's the code: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <title>Farren's Photography</title> <style type="text/css"> html { height: 100%; width: 100%; } body { margin: 0px; } #container { font-family: Georgia, "Times New Roman", Times, serif; font-size: 1.2em; color: #000; background-color: #06F; text-align: left; padding: 0px; height: 650px; width: 960px; margin-right: auto; margin-left: auto; background-image: url(images/background_image.png); background-repeat: no-repeat; margin-top: 45px; } a:link { color: #FFF; } a:visited { color: #FFF; } a:hover { color: #FFF; } #container #logo { } #container #logo #fp-logo { background-image: url(images/logo.png); height: 137px; width: 408px; text-indent: -9999px; display: block; } #logo { height: 137px; width: 408px; position: relative; padding-top: 35px; padding-right: 0px; padding-bottom: 0px; padding-left: 35px; } #main { background-color: #FFF; min-height: 383px; width: 707px; position: relative; left: 217px; top: 16px; right: 36px; bottom: 113px; } #container #navbar { font-family: Georgia, "Times New Roman", Times, serif; font-size: 14px; color: #FFF; text-align: right; height: 45px; background-color: #CC0000; position: relative; top: 8px; bottom: 0px; left: 0px; right: 0px; } #container #navbar ul li a { text-decoration: none; } #container #navbar ul { list-style-type: none; padding-top: 16px; } #container #navbar ul li { display: inline; background-color: #280803; margin: 0px; height: 0px; width: 0px; position: relative; padding-top: 16px; padding-right: 15px; padding-bottom: 17px; padding-left: 15px; } #container #navbar ul li a:link { text-decoration: none; color: #FFF; } #container #navbar ul li a:visited { text-decoration: none; color: #FFF; } #container #navbar ul li a:hover { text-decoration: none; color: #FFF; background-color: #027e8e; padding-top: 16px; padding-right: 15px; padding-bottom: 17px; padding-left: 15px; margin: 0px; } #footer { font-family: Arial, Helvetica, sans-serif; font-size: x-small; height: 28px; position: relative; top: 8px; color: #FFF; font-style: italic; } </style> </head> <body> <div id="container"> <div id="logo"><a href="http://www.farrensphotography.com" title="Farren's Photography" target="_self" id="fp-logo">Farren's Photography</a></div><!-- end logo --> <div id="main"> <div id="content"> </div><!-- end content --> </div><!-- end main --> <div id="navbar"> <ul> <li><a href="index.html" target="_self">Home</a></li> <li><a href="portfolio.html" target="_self">Portfolio</a></li> <li><a href="mystyle.html" target="_self">My Style</a></li> <li><a href="specials.html" target="_self">Specials</a></li> <li><a href="pricing.html" target="_self">Pricing</a></li> <li><a href="contact.html" target="_self">Contact</a></li> </ul> </div> <!-- end navbar --> <div id="footer"> <div id="copyright">All images copyright© Farrens Photography </div><!-- end
copyright --> <div id="network">Facebook button </div><!-- end network --> </div><!-- end footer --> </div><!-- end container --> </body> </html>

    Read the article

  • Red Gate Coder interviews: Alex Davies

    - by Michael Williamson
    Alex Davies has been a software engineer at Red Gate since graduating from university, and is currently busy working on .NET Demon. We talked about tackling parallel programming with his actors framework, a scientific approach to debugging, and how JavaScript is going to affect the programming languages we use in years to come. So, if we start at the start, how did you get started in programming? When I was seven or eight, I was given a BBC Micro for Christmas. I had asked for a Game Boy, but my dad thought it would be better to give me a proper computer. For a year or so, I only played games on it, but then I found the user guide for writing programs in it. I gradually started doing more stuff on it and found it fun. I liked creating. As I went into senior school I continued to write stuff on there, trying to write games that weren’t very good. I got a real computer when I was fourteen and found ways to write BASIC on it. Visual Basic to start with, and then something more interesting than that. How did you learn to program? Was there someone helping you out? Absolutely not! I learnt out of a book, or by experimenting. I remember the first time I found a loop, I was like “Oh my God! I don’t have to write out the same line over and over and over again any more. It’s amazing!” When did you think this might be something that you actually wanted to do as a career? For a long time, I thought it wasn’t something that you would do as a career, because it was too much fun to be a career. I thought I’d do chemistry at university and some kind of career based on chemical engineering. And then I went to a careers fair at school when I was seventeen or eighteen, and it just didn’t interest me whatsoever. I thought “I could be a programmer, and there’s loads of money there, and I’m good at it, and it’s fun”, but also that I shouldn’t spoil my hobby. Now I don’t really program in my spare time any more, which is a bit of a shame, but I program all the rest of the time, so I can live with it. Do you think you learnt much about programming at university? Yes, definitely! I went into university knowing how to make computers do anything I wanted them to do. However, I didn’t have the language to talk about algorithms, so the algorithms course in my first year was massively important. Learning other language paradigms like functional programming was really good for breadth of understanding. Functional programming influences normal programming through design rather than actually using it all the time. I draw inspiration from it to write imperative programs which I think is actually becoming really fashionable now, but I’ve been doing it for ages. I did it first! There were also some courses on really odd programming languages, a bit of Prolog, a little bit of C. Having a little bit of each of those is something that I would have never done on my own, so it was important. And then there are knowledge-based courses which are about not programming itself but things that have been programmed like TCP. Those are really important for examples for how to approach things. Did you do any internships while you were at university? Yeah, I spent both of my summers at the same company. I thought I could code well before I went there. Looking back at the crap that I produced, it was only surpassed in its crappiness by all of the other code already in that company. I’m so much better at writing nice code now than I used to be back then. Was there just not a culture of looking after your code? 
There was, they just didn’t hire people for their abilities in that area. They hired people for raw IQ. The first indicator of it going wrong was that they didn’t have any computer scientists, which is a bit odd in a programming company. But even beyond that they didn’t have people who learnt architecture from anyone else. Most of them had started straight out of university, so never really had experience or mentors to learn from. There wasn’t the experience to draw from to teach each other. In the second half of my second internship, I was being given tasks like looking at new technologies and teaching people stuff. Interns shouldn’t be teaching people how to do their jobs! All interns are going to have little nuggets of things that you don’t know about, but they shouldn’t consistently be the ones who know the most. It’s not a good environment to learn. I was going to ask how you found working with people who were more experienced than you… When I reached Red Gate, I found some people who were more experienced programmers than me, and that was difficult. I’ve been coding since I was tiny. At university there were people who were cleverer than me, but there weren’t very many who were more experienced programmers than me. During my internship, I didn’t find anyone who I classed as being a noticeably more experienced programmer than me. So, it was a shock to the system to have valid criticisms rather than just formatting criticisms. However, Red Gate’s not so big on the actual code review, at least it wasn’t when I started. We did an entire product release and then somebody looked over all of the UI of that product which I’d written and said what they didn’t like. By that point, it was way too late and I’d disagree with them. Do you think the lack of code reviews was a bad thing? I think if there’s going to be any oversight of new people, then it should be continuous rather than chunky. For me I don’t mind too much, I could go out and get oversight if I wanted it, and in those situations I felt comfortable without it. If I was managing the new person, then maybe I’d be keener on oversight and then the right way to do it is continuously and in very, very small chunks. Have you had any significant projects you’ve worked on outside of a job? When I was a teenager I wrote all sorts of stuff. I used to write games, I derived how to do isometric projections myself once. I didn’t know what the word was so I couldn’t Google for it, so I worked it out myself. It was horrifically complicated. But it sort of tailed off when I started at university, and is now basically zero. If I do side-projects now, they tend to be work-related side projects like my actors framework, NAct, which I started in a down tools week. Could you explain a little more about NAct? It is a little C# framework for writing parallel code more easily. Parallel programming is difficult when you need to write to shared data. Sometimes parallel programming is easy because you don’t need to write to shared data. When you do need to access shared data, you could just have your threads pile in and do their work, but then you would screw up the data because the threads would trample on each other’s toes. You could lock, but locks are really dangerous if you’re using more than one of them. You get interactions like deadlocks, and that’s just nasty. Actors instead allows you to say this piece of data belongs to this thread of execution, and nobody else can read it.
If you want to read it, then ask that thread of execution for a piece of it by sending a message, and it will send the data back by a message. And that avoids deadlocks as long as you follow some obvious rules about not making your actors sit around waiting for other actors to do something. There are lots of ways to write actors, NAct allows you to do it as if it was method calls on other objects, which means you get all the strong type-safety that C# programmers like. Do you think that this is suitable for the majority of parallel programming, or do you think it’s only suitable for specific cases? It’s suitable for most difficult parallel programming. If you’ve just got a hundred web requests which are all independent of each other, then I wouldn’t bother because it’s easier to just spin them up in separate threads and they can proceed independently of each other. But where you’ve got difficult parallel programming, where you’ve got multiple threads accessing multiple bits of data in multiple ways at different times, then actors is at least as good as all other ways, and is, I reckon, easier to think about. When you’re using actors, you presumably still have to write your code in a different way from you would otherwise using single-threaded code. You can’t use actors with any methods that have return types, because you’re not allowed to call into another actor and wait for it. If you want to get a piece of data out of another actor, then you’ve got to use tasks so that you can use “async” and “await” to await asynchronously for it. But other than that, you can still stick things in classes so it’s not too different really. Rather than having thousands of objects with mutable state, you can use component-orientated design, where there are only a few mutable classes which each have a small number of instances. Then there can be thousands of immutable objects. If you tend to do that anyway, then actors isn’t much of a jump. If I’ve already built my system without any parallelism, how hard is it to add actors to exploit all eight cores on my desktop? Usually pretty easy. If you can identify even one boundary where things look like messages and you have components where some objects live on one side and these other objects live on the other side, then you can have a granddaddy object on one side be an actor and it will parallelise as it goes across that boundary. Not too difficult. If we do get 1000-core desktop PCs, do you think actors will scale up? It’s hard. There are always in the order of twenty to fifty actors in my whole program because I tend to write each component as actors, and I tend to have one instance of each component. So this won’t scale to a thousand cores. What you can do is write data structures out of actors. I use dictionaries all over the place, and if you need a dictionary that is going to be accessed concurrently, then you could build one of those out of actors in no time. You can use queuing to marshal requests between different slices of the dictionary which are living on different threads. So it’s like a distributed hash table but all of the chunks of it are on the same machine. That means that each of these thousand processors has cached one small piece of the dictionary. I reckon it wouldn’t be too big a leap to start doing proper parallelism. Do you think it helps if actors get baked into the language, similarly to Erlang? Erlang is excellent in that it has thread-local garbage collection. 
C# doesn’t, so there’s a limit to how well C# actors can possibly scale because there’s a single garbage collected heap shared between all of them. When you do a global garbage collection, you’ve got to stop all of the actors, which is seriously expensive, whereas in Erlang garbage collections happen per-actor, so they’re insanely cheap. However, Erlang deviated from all the sensible language design that people have used recently and has just come up with crazy stuff. You can definitely retrofit thread-local garbage collection to .NET, and then it’s quite well-suited to support actors, even if it’s not baked into the language. Speaking of language design, do you have a favourite programming language? I’ll choose a language which I’ve never written before. I like the idea of Scala. It sounds like C#, only with some of the niggles gone. I enjoy writing static types. It means you don’t have to write tests so much. When you say it doesn’t have some of the niggles? C# doesn’t allow the use of a property as a method group. It doesn’t have Scala case classes, or sum types, where you can do a switch statement and the compiler checks that you’ve checked all the cases, which is really useful in functional-style programming. Pattern-matching, in other words. That’s actually the major niggle. C# is pretty good, and I’m quite happy with C#. And what about going even further with the type system to remove the need for tests to something like Haskell? Or is that a step too far? I’m quite a pragmatist, I don’t think I could deal with trying to write big systems in languages with too few other users, especially when learning how to structure things. I just don’t know anyone who can teach me, and the Internet won’t teach me. That’s the main reason I wouldn’t use it. If I turned up at a company that writes big systems in Haskell, I would have no objection to that, but I wouldn’t instigate it. What about things in C#? For instance, there’s contracts in C#, so you can try to statically verify a bit more about your code. Do you think that’s useful, or just not worthwhile? I’ve not really tried it. My hunch is that it needs to be built into the language and be quite mathematical for it to work in real life, and that doesn’t seem to have ended up true for C# contracts. I don’t think anyone who’s tried them thinks they’re any good. I might be wrong. On a slightly different note, how do you like to debug code? I think I’m quite an odd debugger. I use guesswork extremely rarely, especially if something seems quite difficult to debug. I’ve been bitten spending hours and hours on guesswork and not being scientific about debugging in the past, so now I’m scientific to a fault. What I want is to see the bug happening in the debugger, to step through the bug happening. To watch the program going from a valid state to an invalid state. When there’s a bug and I can’t work out why it’s happening, I try to find some piece of evidence which places the bug in one section of the code. From that experiment, I binary chop on the possible causes of the bug. I suppose that means binary chopping on places in the code, or binary chopping on a stage through a processing cycle. Basically, I’m very stupid about how I debug. I won’t make any guesses, I won’t use any intuition, I will only identify the experiment that’s going to binary chop most effectively and repeat rather than trying to guess anything. I suppose it’s quite top-down. Is most of the time then spent in the debugger?
Absolutely, if at all possible I will never debug using print statements or logs. I don’t really hold much stock in outputting logs. If there’s any bug which can be reproduced locally, I’d rather do it in the debugger than outputting logs. And with SmartAssembly error reporting, there’s not a lot that can’t be either observed in an error report and just fixed, or reproduced locally. And in those other situations, maybe I’ll use logs. But I hate using logs. You stare at the log, trying to guess what’s going on, and that’s exactly what I don’t like doing. You have to just look at it and see does this look right or wrong. We’ve covered how you get to grips with bugs. How do you get to grips with an entire codebase? I watch it in the debugger. I find little bugs and then try to fix them, and mostly do it by watching them in the debugger and gradually getting an understanding of how the code works using my process of binary chopping. I have to do a lot of reading and watching code to choose where my slicing-in-half experiment is going to be. The last time I did it was SmartAssembly. The old code was a complete mess, but at least it did things top to bottom. There wasn’t too much of some of the big abstractions where flow of control goes all over the place, into a base class and back again. Code’s really hard to understand when that happens. So I like to choose a little bug and try to fix it, and choose a bigger bug and try to fix it. Definitely learn by doing. I want to always have an aim so that I get a little achievement after every few hours of debugging. Once I’ve learnt the codebase I might be able to fix all the bugs in an hour, but I’d rather be using them as an aim while I’m learning the codebase. If I was a maintainer of a codebase, what should I do to make it as easy as possible for you to understand? Keep distinct concepts in different places. And name your stuff so that it’s obvious which concepts live there. You shouldn’t have some variable that gets set miles up the top of somewhere, and then is read miles down to choose some later behaviour. I’m talking from a very much SmartAssembly point of view because the old SmartAssembly codebase had tons and tons of these things, where it would read some property of the code and then deal with it later. Just thousands of variables in scope. Loads of things to think about. If you can keep concepts separate, then it aids me in my process of fixing bugs one at a time, because each bug is going to more or less be understandable in the one place where it is. And what about tests? Do you think they help at all? I’ve never had the opportunity to learn a codebase which has had tests, I don’t know what it’s like! What about when you’re actually developing? How useful do you find tests in finding bugs or regressions? Finding regressions, absolutely. Running bits of code that would be quite hard to run otherwise, definitely. It doesn’t happen very often that a test finds a bug in the first place. I don’t really buy nebulous promises like tests being a good way to think about the spec of the code. My thinking goes something like “This code works at the moment, great, ship it! Ah, there’s a way that this code doesn’t work.
Okay, write a test, demonstrate that it doesn’t work, fix it, use the test to demonstrate that it’s now fixed, and keep the test for future regressions.” The most valuable tests are for bugs that have actually happened at some point, because bugs that have actually happened at some point, despite the fact that you think you’ve fixed them, are way more likely to appear again than new bugs are. Does that mean that when you write your code the first time, there are no tests? Often. The chance of there being a bug in a new feature is relatively unaffected by whether I’ve written a test for that new feature because I’m not good enough at writing tests to think of bugs that I would have written into the code. So not writing regression tests for all of your code hasn’t affected you too badly? There are different kinds of features. Some of them just always work, and are just not flaky, they just continue working whatever you throw at them. Maybe because the type-checker is particularly effective around them. Writing tests for those features which just tend to always work is a waste of time. And because it’s a waste of time I’ll tend to wait until a feature has demonstrated its flakiness by having bugs in it before I start trying to test it. You can get a feel for whether it’s going to be flaky code as you’re writing it. I try to write it to make it not flaky, but there are some things that are just inherently flaky. And very occasionally, I’ll think “this is going to be flaky” as I’m writing, and then maybe do a test, but not most of the time. How do you think your programming style has changed over time? I’ve got clearer about what the right way of doing things is. I used to flip-flop a lot between different ideas. Five years ago I came up with some really good ideas and some really terrible ideas. All of them seemed great when I thought of them, but they were quite diverse ideas, whereas now I have a smaller set of reliable ideas that are actually good for structuring code. So my code is probably more similar to itself than it used to be back in the day, when I was trying stuff out. I’ve got more disciplined about encapsulation, I think. There are operational things like I use actors more now than I used to, and that forces me to use immutability more than I used to. The first code that I wrote in Red Gate was the memory profiler UI, and that was an actor, I just didn’t know the name of it at the time. I don’t really use object-orientation. By object-orientation, I mean having n objects of the same type which are mutable. I want a constant number of objects that are mutable, and they should be different types. I stick stuff in dictionaries and then have one thing that owns the dictionary and puts stuff in and out of it. That’s definitely a pattern that I’ve seen recently. I think maybe I’m doing functional programming. Possibly. It’s plausible. If you had to summarise the essence of programming in a pithy sentence, how would you do it? Programming is the form of art that, without losing any of the beauty of architecture or fine art, allows you to produce things that people love and you make money from. So you think it’s an art rather than a science? It’s a little bit of engineering, a smidgeon of maths, but it’s not science. Like architecture, programming is on that boundary between art and engineering. If you want to do it really nicely, it’s mostly art. 
You can get away with doing architecture and programming entirely by having a good engineering mind, but you’re not going to produce anything nice. You’re not going to have joy doing it if you’re an engineering mind. Architects who are just engineering minds are not going to enjoy their job. I suppose engineering is the foundation on which you build the art. Exactly. How do you think programming is going to change over the next ten years? There will be an unfortunate shift towards dynamically-typed languages, because of JavaScript. JavaScript has an unfair advantage. JavaScript’s unfair advantage will cause more people to be exposed to dynamically-typed languages, which means other dynamically-typed languages crop up and the best features go into dynamically-typed languages. Then people conflate the good features with the fact that it’s dynamically-typed, and more investment goes into dynamically-typed languages. They end up better, so people use them. What about the idea of compiling other languages, possibly statically-typed, to JavaScript? It’s a reasonable idea. I would like to do it, but I don’t think enough people in the world are going to do it to make it pick up. The hordes of beginners are the lifeblood of a language community. They are what makes there be good tools and what makes there be vibrant community websites. And any particular thing which is the same as JavaScript only with extra stuff added to it, although it might be technically great, is not going to have the hordes of beginners. JavaScript is always going to be the quickest and easiest way for a beginner to start programming in the browser. And dynamically-typed languages are great for beginners. Compilers are pretty scary and beginners don’t write big code. And having your errors come up in the same place, whether they’re statically checkable errors or not, is quite nice for a beginner. If someone asked me to teach them some programming, I’d teach them JavaScript. If dynamically-typed languages are great for beginners, when do you think the benefits of static typing start to kick in? The value of having a statically typed program is in the tools that rely on the static types to produce a smooth IDE experience rather than actually telling me my compile errors. And only once you’re experienced enough a programmer that having a really smooth IDE experience makes a blind bit of difference, does static typing make a blind bit of difference. So it’s not really about size of codebase. If I go and write up a tiny program, I’m still going to get value out of writing it in C# using ReSharper because I’m experienced with C# and ReSharper enough to be able to write code five times faster if I have that help. Any other visions of the future? Nobody’s going to use actors. Because everyone’s going to be running on single-core VMs connected over network-ready protocols like JSON over HTTP. So, parallelism within one operating system is going to die. But until then, you should use actors. More Red Gater Coder interviews
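    The actor pattern Alex describes is small enough to sketch without a framework. The class below is not NAct (which is C# and generates typed method-call proxies); it is just an illustrative Java reduction of the idea. One thread owns the mutable state, everyone else communicates through a mailbox, and reads come back asynchronously, so there are no locks to deadlock on:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.LinkedBlockingQueue;

    // A minimal actor: count is touched by exactly one thread, so no locks.
    class CounterActor {
        private final BlockingQueue<Runnable> mailbox = new LinkedBlockingQueue<>();
        private int count = 0; // owned exclusively by the actor's thread

        CounterActor() {
            Thread worker = new Thread(() -> {
                try {
                    while (true) mailbox.take().run(); // one message at a time
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            worker.setDaemon(true);
            worker.start();
        }

        void increment() {                 // fire-and-forget message
            mailbox.add(() -> count++);
        }

        CompletableFuture<Integer> get() { // reads are messages too; the reply is asynchronous
            CompletableFuture<Integer> reply = new CompletableFuture<>();
            mailbox.add(() -> reply.complete(count));
            return reply;
        }
    }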

    Read the article

  • Java JTextPane JScrollPane Display Issue

    - by ikurtz
    The following class implements a chat GUI. When it runs okay, the screen looks like this: Fine ChatGUI The problem is that very often, when I enter text of large length, i.e. 50-100 chars, the GUI goes crazy: the chat history box shrinks as shown in this image. Any ideas regarding what is causing this? Thank you. package Sartre.Connect4; import javax.swing.*; import java.net.*; import java.awt.*; import java.awt.event.*; import javax.swing.text.StyledDocument; import javax.swing.text.Style; import javax.swing.text.StyleConstants; import javax.swing.text.BadLocationException; import java.io.BufferedOutputStream; import javax.swing.text.html.HTMLEditorKit; import java.io.FileOutputStream; import java.io.IOException; import java.io.FileNotFoundException; import javax.swing.filechooser.FileNameExtensionFilter; import javax.swing.JFileChooser; /** * Chat form class * @author iAmjad */ public class ChatGUI extends JDialog implements ActionListener { /** * Used to hold chat history data */ private JTextPane textPaneHistory = new JTextPane(); /** * provides scrolling to chat history pane */ private JScrollPane scrollPaneHistory = new JScrollPane(textPaneHistory); /** * used to input local message to chat history */ private JTextPane textPaneHome = new JTextPane(); /** * Provides scrolling to local chat pane */ private JScrollPane scrollPaneHomeText = new JScrollPane(textPaneHome); /** * JLabel acting as a statusbar */ private JLabel statusBar = new JLabel("Ready"); /** * Button to clear chat history pane */ private JButton JBClear = new JButton("Clear"); /** * Button to save chat history pane */ private JButton JBSave = new JButton("Save"); /** * Holds contentPane */ private Container containerPane; /** * Layout GridBagLayout manager */ private GridBagLayout gridBagLayout = new GridBagLayout(); /** * GridBagConstraints */ private GridBagConstraints constraints = new GridBagConstraints(); /** * Constructor for ChatGUI */ public ChatGUI(){ setTitle("Chat"); // set up dialog icon URL url = getClass().getResource("Resources/SartreIcon.jpg"); ImageIcon imageIcon = new ImageIcon(url); Image image = imageIcon.getImage(); this.setIconImage(image); this.setAlwaysOnTop(true); setLocationRelativeTo(this.getParent()); //////////////// End icon and placement ///////////////////////// // Get pane and set layout manager containerPane = getContentPane(); containerPane.setLayout(gridBagLayout); ///////////////////////////////////////////////////////////// //////////////// Begin Chat History ////////////////////////////// textPaneHistory.setToolTipText("Chat History Window"); textPaneHistory.setEditable(false); textPaneHistory.setPreferredSize(new Dimension(350,250)); scrollPaneHistory.setVerticalScrollBarPolicy(ScrollPaneConstants.VERTICAL_SCROLLBAR_ALWAYS); scrollPaneHistory.setHorizontalScrollBarPolicy(ScrollPaneConstants.HORIZONTAL_SCROLLBAR_NEVER); // fill Chat History GridBagConstraints constraints.gridx = 0; constraints.gridy = 0; constraints.gridwidth = 10; constraints.gridheight = 10; constraints.weightx = 100; constraints.weighty = 100; constraints.fill = GridBagConstraints.BOTH; constraints.anchor = GridBagConstraints.CENTER; constraints.insets = new Insets(10,10,10,10); constraints.ipadx = 0; constraints.ipady = 0; gridBagLayout.setConstraints(scrollPaneHistory, constraints); // add to the pane containerPane.add(scrollPaneHistory); /////////////////////////////// End Chat History /////////////////////// ///////////////////////// Begin Home Chat ////////////////////////////// textPaneHome.setToolTipText("Home
Chat Message Window"); textPaneHome.setPreferredSize(new Dimension(200,50)); textPaneHome.addKeyListener(new MyKeyAdapter()); scrollPaneHomeText.setVerticalScrollBarPolicy(ScrollPaneConstants.VERTICAL_SCROLLBAR_ALWAYS); scrollPaneHomeText.setHorizontalScrollBarPolicy(ScrollPaneConstants.HORIZONTAL_SCROLLBAR_NEVER); // fill Chat History GridBagConstraints constraints.gridx = 0; constraints.gridy = 10; constraints.gridwidth = 6; constraints.gridheight = 1; constraints.weightx = 100; constraints.weighty = 100; constraints.fill = GridBagConstraints.BOTH; constraints.anchor = GridBagConstraints.CENTER; constraints.insets = new Insets(10,10,10,10); constraints.ipadx = 0; constraints.ipady = 0; gridBagLayout.setConstraints(scrollPaneHomeText, constraints); // add to the pane containerPane.add(scrollPaneHomeText); ////////////////////////// End Home Chat ///////////////////////// ///////////////////////Begin Clear Chat History //////////////////////// JBClear.setToolTipText("Clear Chat History"); // fill Chat History GridBagConstraints constraints.gridx = 6; constraints.gridy = 10; constraints.gridwidth = 2; constraints.gridheight = 1; constraints.weightx = 100; constraints.weighty = 100; constraints.fill = GridBagConstraints.BOTH; constraints.anchor = GridBagConstraints.CENTER; constraints.insets = new Insets(10,10,10,10); constraints.ipadx = 0; constraints.ipady = 0; gridBagLayout.setConstraints(JBClear, constraints); JBClear.addActionListener(this); // add to the pane containerPane.add(JBClear); ///////////////// End Clear Chat History //////////////////////// /////////////// Begin Save Chat History ////////////////////////// JBSave.setToolTipText("Save Chat History"); constraints.gridx = 8; constraints.gridy = 10; constraints.gridwidth = 2; constraints.gridheight = 1; constraints.weightx = 100; constraints.weighty = 100; constraints.fill = GridBagConstraints.BOTH; constraints.anchor = GridBagConstraints.CENTER; constraints.insets = new Insets(10,10,10,10); constraints.ipadx = 0; constraints.ipady = 0; gridBagLayout.setConstraints(JBSave, constraints); JBSave.addActionListener(this); // add to the pane containerPane.add(JBSave); ///////////////////// End Save Chat History ///////////////////// /////////////////// Begin Status Bar ///////////////////////////// constraints.gridx = 0; constraints.gridy = 11; constraints.gridwidth = 10; constraints.gridheight = 1; constraints.weightx = 100; constraints.weighty = 50; constraints.fill = GridBagConstraints.BOTH; constraints.anchor = GridBagConstraints.CENTER; constraints.insets = new Insets(0,10,5,0); constraints.ipadx = 0; constraints.ipady = 0; gridBagLayout.setConstraints(statusBar, constraints); // add to the pane containerPane.add(statusBar); ////////////// End Status Bar //////////////////////////// // set resizable to false this.setResizable(false); // pack the GUI pack(); } /** * Deals with necessary menu click events * @param event */ public void actionPerformed(ActionEvent event) { Object source = event.getSource(); // Process Clear button event if (source == JBClear){ textPaneHistory.setText(null); statusBar.setText("Chat History Cleared"); } // Process Save button event if (source == JBSave){ // process only if there is data in history pane if (textPaneHistory.getText().length() > 0){ // process location where to save the chat history file JFileChooser chooser = new JFileChooser(); chooser.setMultiSelectionEnabled(false); chooser.setAcceptAllFileFilterUsed(false); FileNameExtensionFilter filter = new FileNameExtensionFilter("HTML 
Documents", "htm", "html"); chooser.setFileFilter(filter); int option = chooser.showSaveDialog(ChatGUI.this); if (option == JFileChooser.APPROVE_OPTION) { // Set up document to be parsed as HTML StyledDocument doc = (StyledDocument)textPaneHistory.getDocument(); HTMLEditorKit kit = new HTMLEditorKit(); BufferedOutputStream out; try { // add final file name and extension String filePath = chooser.getSelectedFile().getAbsoluteFile() + ".html"; out = new BufferedOutputStream(new FileOutputStream(filePath)); // write out the HTML document kit.write(out, doc, doc.getStartPosition().getOffset(), doc.getLength()); } catch (FileNotFoundException e) { JOptionPane.showMessageDialog(ChatGUI.this, "Application will now close. \n A restart may cure the error!\n\n" + e.getMessage(), "Fatal Error", JOptionPane.WARNING_MESSAGE, null); System.exit(2); } catch (IOException e){ JOptionPane.showMessageDialog(ChatGUI.this, "Application will now close. \n A restart may cure the error!\n\n" + e.getMessage(), "Fatal Error", JOptionPane.WARNING_MESSAGE, null); System.exit(3); } catch (BadLocationException e){ JOptionPane.showMessageDialog(ChatGUI.this, "Application will now close. \n A restart may cure the error!\n\n" + e.getMessage(), "Fatal Error", JOptionPane.WARNING_MESSAGE, null); System.exit(4); } statusBar.setText("Chat History Saved"); } } } } /** * Process return key for sending the message */ private class MyKeyAdapter extends KeyAdapter { @Override @SuppressWarnings("static-access") public void keyPressed(KeyEvent ke) { DateTime dateTime = new DateTime(); String nowdateTime = dateTime.getDateTime(); int kc = ke.getKeyCode(); if (kc == ke.VK_ENTER) { try { // Process only if there is data if (textPaneHome.getText().length() > 0){ // Add message origin formatting StyledDocument doc = (StyledDocument)textPaneHistory.getDocument(); Style style = doc.addStyle("HomeStyle", null); StyleConstants.setBold(style, true); String home = "Home [" + nowdateTime + "]: "; doc.insertString(doc.getLength(), home, style); StyleConstants.setBold(style, false); doc.insertString(doc.getLength(), textPaneHome.getText() + "\n", style); // update caret location textPaneHistory.setCaretPosition(doc.getLength()); textPaneHome.setText(null); statusBar.setText("Message Sent"); } } catch (BadLocationException e) { JOptionPane.showMessageDialog(ChatGUI.this, "Application will now close. \n A restart may cure the error!\n\n" + e.getMessage(), "Fatal Error", JOptionPane.WARNING_MESSAGE, null); System.exit(1); } ke.consume(); } } } }

    Read the article

  • Stuck on preserving config file in WiX major upgrade!

    - by Joshua
    ARGH! WiX is driving me crazy. Yes, of course I have seen the many posts, both here on Stack Overflow and elsewhere, about WiX and major upgrades. I inherited this WiX-based installer project and am releasing a new version. The new version must leave exactly ONE file alone if it is already present (the configuration file) and replace everything else. The installer works EXCEPT that, no matter what I have tried so far, the new XML file replaces the old one on every install; setting NeverOverwrite="yes" and toggling OnlyDetect="no" back and forth made no difference. I am simply stuck and humbly request a little guidance. The file that needs to be preserved is called SETTINGS.XML and lives in the All Users ApplicationData directory. Here is (most of) my .wxs file:

    <Package Id='$(var.PackageCode)' Description="Pathways Directory Software" InstallerVersion="301" Compressed="yes" />
    <WixVariable Id="WixUILicenseRtf" Value="License.rtf" />
    <Media Id="1" Cabinet="Pathways.cab" EmbedCab="yes" />

    <Upgrade Id="$(var.UpgradeCode)">
      <UpgradeVersion OnlyDetect="no" Maximum="$(var.ProductVersion)" IncludeMaximum="no" Language="1033" Property="OLDAPPFOUND" />
      <UpgradeVersion Minimum="$(var.ProductVersion)" IncludeMinimum="yes" OnlyDetect="no" Language="1033" Property="NEWAPPFOUND" />
    </Upgrade>

    <!-- program files directory -->
    <Directory Id="ProgramFilesFolder">
      <Directory Id="INSTALLDIR" Name="Pathways"/>
    </Directory>
    <!-- application data directory -->
    <Directory Id="CommonAppDataFolder" Name="CommonAppData">
      <Directory Id="CommonAppDataPathways" Name="Pathways" />
    </Directory>
    <!-- start menu program directory -->
    <Directory Id="ProgramMenuFolder">
      <Directory Id="ProgramsMenuPathwaysFolder" Name="Pathways" />
    </Directory>
    <!-- desktop directory -->
    <Directory Id="DesktopFolder" />
    </Directory>

    <Icon Id="PathwaysIcon" SourceFile="\\Fileserver\Release\Pathways\Latest\Release\Pathways.exe" />

    <!-- components in the reference to the install directory -->
    <DirectoryRef Id="INSTALLDIR">
      <Component Id="Application" Guid="EEE4EB55-A515-4872-A4A5-06D6AB4A06A6">
        <File Id="pathwaysExe" Name="Pathways.exe" DiskId="1" Source="\\Fileserver\Release\Pathways\Latest\Release\Pathways.exe" Vital="yes" KeyPath="yes" Assembly=".net" AssemblyApplication="pathwaysExe" AssemblyManifest="pathwaysExe">
          <!--<netfx:NativeImage Id="ngen_Pathways.exe" Platform="32bit" Priority="2"/> -->
        </File>
        <File Id="pathwaysChm" Name="Pathways.chm" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\Pathways.chm" />
        <File Id="publicKeyXml" ShortName="RSAPUBLI.XML" Name="RSAPublicKey.xml" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\RSAPublicKey.xml" Vital="yes" />
        <File Id="staticListsXml" ShortName="STATICLI.XML" Name="StaticLists.xml" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\StaticLists.xml" Vital="yes" />
        <File Id="axInteropMapPointDll" ShortName="AXMPOINT.DLL" Name="AxInterop.MapPoint.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\AxInterop.MapPoint.dll" Vital="yes" />
        <File Id="interopMapPointDll" ShortName="INMPOINT.DLL" Name="Interop.MapPoint.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\Interop.MapPoint.dll" Vital="yes" />
        <File Id="mapPointDll" ShortName="MAPPOINT.DLL" Name="MapPoint.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\Interop.MapPoint.dll" Vital="yes" />
        <File Id="devExpressData63Dll" ShortName="DAAT63.DLL" Name="DevExpress.Data.v6.3.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\DevExpress.Data.v6.3.dll" Vital="yes" />
        <File Id="devExpressUtils63Dll" ShortName="UTILS63.DLL" Name="DevExpress.Utils.v6.3.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\DevExpress.Utils.v6.3.dll" Vital="yes" />
        <File Id="devExpressXtraBars63Dll" ShortName="BARS63.DLL" Name="DevExpress.XtraBars.v6.3.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\DevExpress.XtraBars.v6.3.dll" Vital="yes" />
        <File Id="devExpressXtraNavBar63Dll" ShortName="NAVBAR63.DLL" Name="DevExpress.XtraNavBar.v6.3.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\DevExpress.XtraNavBar.v6.3.dll" Vital="yes" />
        <File Id="devExpressXtraCharts63Dll" ShortName="CHARTS63.DLL" Name="DevExpress.XtraCharts.v6.3.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\DevExpress.XtraCharts.v6.3.dll" Vital="yes" />
        <File Id="devExpressXtraEditors63Dll" ShortName="EDITOR63.DLL" Name="DevExpress.XtraEditors.v6.3.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\DevExpress.XtraEditors.v6.3.dll" Vital="yes" />
        <File Id="devExpressXtraPrinting63Dll" ShortName="PRINT63.DLL" Name="DevExpress.XtraPrinting.v6.3.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\DevExpress.XtraPrinting.v6.3.dll" Vital="yes" />
        <File Id="devExpressXtraReports63Dll" ShortName="REPORT63.DLL" Name="DevExpress.XtraReports.v6.3.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\DevExpress.XtraReports.v6.3.dll" Vital="yes" />
        <File Id="devExpressXtraRichTextEdit63Dll" ShortName="RICHTE63.DLL" Name="DevExpress.XtraRichTextEdit.v6.3.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\DevExpress.XtraRichTextEdit.v6.3.dll" Vital="yes" />
        <RegistryValue Id="PathwaysInstallDir" Root="HKLM" Key="Software\Tribal Data Resources\Pathways" Name="InstallDir" Action="write" Type="string" Value="[INSTALLDIR]" />
      </Component>
    </DirectoryRef>

    <!-- application data components -->
    <DirectoryRef Id="CommonAppDataPathways">
      <Component Id="CommonAppDataPathwaysFolderComponent" Guid="087C6F14-E87E-4B57-A7FA-C03FC8488E0D">
        <CreateFolder>
          <Permission User="Everyone" GenericAll="yes" />
        </CreateFolder>
        <RemoveFolder Id="CommonAppDataPathways" On="uninstall" />
        <RegistryValue Root="HKCU" Key="Software\TDR\Pathways" Name="installed" Type="integer" Value="1" KeyPath="yes" />
        <File Id="settingsXml" ShortName="SETTINGS.XML" Name="Settings.xml" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\Settings\settings.xml" Vital="yes" />
      </Component>
      <Component Id="Database" Guid="1D8756EF-FD6C-49BC-8400-299492E8C65D">
        <File Id="pathwaysMdf" Name="Pathways.mdf" DiskId="1" Source="\\fileserver\Shared\Databases\Pathways\SystemDBs\Pathways.mdf" />
        <RemoveFile Id="pathwaysLdf" ShortName="Pathways.ldf" Name="Pathways_log.LDF" On="uninstall" />
      </Component>
    </DirectoryRef>

    <!-- shortcut components -->
    <DirectoryRef Id="DesktopFolder">
      <Component Id="DesktopShortcutComponent" Guid="1BF412BA-9C6B-460D-80ED-8388AC66703F">
        <Shortcut Id="DesktopShortcut" Target="[INSTALLDIR]Pathways.exe" Name="Pathways" Description="Pathways Tribal Directory" Icon="PathwaysIcon" Show="normal" WorkingDirectory="INSTALLDIR" />
        <RegistryValue Root="HKCU" Key="Software\TDR\Pathways" Name="installed" Type="integer" Value="1" KeyPath="yes"/>
      </Component>
    </DirectoryRef>
    <DirectoryRef Id="ProgramsMenuPathwaysFolder">
      <Component Id="ProgramsMenuShortcutComponent" Guid="83A18245-4C22-4CDC-94E0-B480F80A407D">
        <Shortcut Id="ProgramsMenuShortcut" Target="[INSTALLDIR]Pathways.exe" Name="Pathways" Icon="PathwaysIcon" Show="normal" WorkingDirectory="INSTALLDIR" />
        <RemoveFolder Id="ProgramsMenuPathwaysFolder" On="uninstall"/>
        <RegistryValue Root="HKCU" Key="Software\TDR\Pathways" Name="installed" Type="integer" Value="1" KeyPath="yes"/>
      </Component>
    </DirectoryRef>

    <Feature Id="App" Title="Pathways Application" Level="1" Description="Pathways software" Display="expand" ConfigurableDirectory="INSTALLDIR" Absent="disallow" AllowAdvertise="no" InstallDefault="local">
      <ComponentRef Id="Application" />
      <ComponentRef Id="CommonAppDataPathwaysFolderComponent" />
      <ComponentRef Id="ProgramsMenuShortcutComponent" />
      <Feature Id="Shortcuts" Title="Desktop Shortcut" Level="1" Absent="allow" AllowAdvertise="no" InstallDefault="local">
        <ComponentRef Id="DesktopShortcutComponent" />
      </Feature>
    </Feature>
    <Feature Id="Data" Title="Database" Level="1" Absent="allow" AllowAdvertise="no" InstallDefault="local">
      <ComponentRef Id="Database" />
    </Feature>

    <!-- <UIRef Id="WixUI_Minimal" /> -->
    <UIRef Id="WixUI_FeatureTree"/>
    <UIRef Id="WixUI_ErrorProgressText"/>
    <UI>
      <Error Id="2000">There is a later version of this program installed.</Error>
    </UI>
    <CustomAction Id="NewerVersionDetected" Error="2000" />
    <InstallExecuteSequence>
      <RemoveExistingProducts After="InstallFinalize"/>
    </InstallExecuteSequence>
    </Product>

    Read the article

  • Suckerfish Menu Not Working. Nested UL not displaying as block

    - by Brett
    I'm going out of my mind with this one. I'm trying to build a Suckerfish CSS drop-down menu. The first level works, but I can't get the nested ul (under the Glossary tab) to display as a block or show my background color. I've been at this for three days and I'm about to go crazy. If anyone can help I'd really appreciate it. You can see it here: www.brettlockhart.com/blast/

    body {
      min-width:640px;
      margin:0px;
      padding:0px 40px 0px 40px;
      background-color:#eee;
    }
    /* Nav One styles */
    nav#navOne ul {
      width:100%;
      min-width:640px;
      height:25px;
      margin:0px auto;
      padding:0px;
      background-image:url(/blast/images/navOneBg.png);
      border-radius:0px 0px 8px 8px;
      text-align:right;
      float:right;
    }
    nav#navOne ul li {
      position:relative;
      display:inline;
      margin: 0px 0px 0px 10px auto;
      padding:3px 0px;
      border-left:1px #0b4c8f solid;
      line-height:23px;
    }
    nav#navOne ul li:last-child {
      margin-right:10px;
    }
    .arrowDown {
      margin: auto;
      font-family:Tahoma, Geneva, sans-serif;
      font-size:.7em;
      color:#FFF;
      padding: 0px 5px 0px 0px;
    }
    nav#navOne ul li a {
      margin: 0px auto;
      padding: 4px 10px 4px 15px;
      font-family:Tahoma, Geneva, sans-serif;
      font-size:.7em;
      color:#FFF;
      border-left:1px #5d9ee0 solid;
      text-decoration:none;
      line-height:23px;
    }
    /* Nav One rollover */
    nav#navOne ul li ul {
      display:none;
      background-image:none;
    }
    nav#navOne ul li:hover ul {
      display:block !important;
    }
    nav#navOne ul li:hover nav#navOne ul li ul li {
      background-color:#69C !important;
      left:0px;
      top:26px;
      z-index:10;
    }
    h1 {
      float:left;
      width:158px;
      text-indent:-9999px;
      background-image:url(/blast/images/logo.png);
      background-repeat:no-repeat;
    }
    #search {
      float:right;
      width:280px;
      margin:0px;
      padding:0px;
    }
    /* Nav Two styles */
    nav#navTwo h3 {
      display:inline;
    }
    nav#navTwo ul {
      width:100%;
      height:50px;
      margin:0 auto;
      padding:0px;
      border:1px solid red;
      clear:both;
    }
    nav#navTwo ul li {
      display:inline;
      border: 1px solid green;
      margin-top:20px;
    }

    <header>
      <nav id="navOne">
        <ul>
          <li><a href="#">Sign In</a></li>
          <li><a href="#">Register</a></li>
          <li><a href="#">Print Page</a></li>
          <li id="glossary"><a href="#">Glossary</a>
            <ul>
              <li><a href="#">Item Placeholder</a></li>
              <li><a href="#">Item Placeholder</a></li>
              <li><a href="#">Item Placeholder</a></li>
              <li><a href="#">Item Placeholder</a></li>
              <li><a href="#">Item Placeholder</a></li>
              <li><a href="#">Item Placeholder</a></li>
            </ul>
          </li>
          <li><a href="#">Text Size: A A A</a></li>
          <li><a href="#">Select Your Location</a> <span class="arrowDown">&#x2C5;</span></li>
        </ul>
      </nav><!--/navOne-->
      <h1>Logo</h1>
      <div id="search">
        <form action="?" method="get">
          <fieldset>
            <input type="text" id="searchField">
            <input type="submit" id="searchSubmit" value="Submit">
            Search:<label for="radioHere">here</label>
            <input type="radio" id="radioHere" name="here" value="here">
            <label for="radioWeb">the web</label>
            <input type="radio" id="radioWeb" name="the web" value="the web">
          </fieldset>
        </form>
      </div><!--/search-->
      <nav id="navTwo">
        <ul>
          <li><h3>Residential:</h3></li>
          <li>TV</li>
          <li>Internet</li>
          <li>Phone</li>
          <li>Pricing</li>
          <li class="navTwoSelected">Music</li>
          <li>Order</li>
          <li>Billing</li>
          <li>Support</li>
          <li>|</li>
          <li>Business</li>
          <li>About Us</li>
        </ul>
      </nav><!--/navTwo-->
      <nav id="navThree">
        <ul>
          <li>Today</li>
          <li>Watch</li>
          <li>Surf</li>
          <li>Play</li>
          <li class="navThreeSelected">Listen</li>
          <li>Learn</li>
          <li>Local</li>
        </ul>
        <h3>Tools:</h3>
        <ul>
          <li>Webmail</li>
          <li>Account</li>
          <li>Billing</li>
          <li>Order Services</li>
        </ul>
      </nav><!--/navThree-->
      <nav id="navFour">
        <h3>You are here:</h3>
        <ul id="breadcrumbs">
          <li>Residential ></li>
          <li>My Place ></li>
          <li>Listen ></li>
          <li class="currentPage">Music</li>
        </ul>
      </nav><!--/navFour-->
    </header>

    Read the article

  • MS Dynamics CRM trapping .NET error before I can handle it

    - by clifgriffin
    This is a fun one. I have written a custom search page that provides faster, more user-friendly searches than the default Contacts view, and also allows searching Leads and Contacts simultaneously. It uses GridViews bound to SqlDataSources that query the filtered views. I'm sure someone will complain that I'm not using the web services for this, but that is just the design decision we made. The GridViews live in UpdatePanels to enable very slick AJAX updates on search. It's all working great and is nearly ready to be deployed, except for one thing: some long-running searches trigger an uncatchable SQL timeout exception:

    [SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.]
       at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
       at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
       at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
       at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
       at System.Data.SqlClient.SqlDataReader.ConsumeMetaData()
       at System.Data.SqlClient.SqlDataReader.get_MetaData()
       at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
       at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
       at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
       at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method)
       at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method)
       at System.Data.SqlClient.SqlCommand.ExecuteDbDataReader(CommandBehavior behavior)
       at System.Data.Common.DbCommand.System.Data.IDbCommand.ExecuteReader(CommandBehavior behavior)
       at System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
       at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
       at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, String srcTable)
       at System.Web.UI.WebControls.SqlDataSourceView.ExecuteSelect(DataSourceSelectArguments arguments)
       at System.Web.UI.DataSourceView.Select(DataSourceSelectArguments arguments, DataSourceViewSelectCallback callback)
       at System.Web.UI.WebControls.DataBoundControl.PerformSelect()
       at System.Web.UI.WebControls.BaseDataBoundControl.DataBind()
       at System.Web.UI.WebControls.GridView.DataBind()
       at System.Web.UI.WebControls.BaseDataBoundControl.EnsureDataBound()
       at System.Web.UI.WebControls.CompositeDataBoundControl.CreateChildControls()
       at System.Web.UI.Control.EnsureChildControls()
       at System.Web.UI.Control.PreRenderRecursiveInternal()
       at System.Web.UI.Control.PreRenderRecursiveInternal()
       at System.Web.UI.Control.PreRenderRecursiveInternal()
       at System.Web.UI.Control.PreRenderRecursiveInternal()
       at System.Web.UI.Control.PreRenderRecursiveInternal()
       at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)

    I found that CRM is doing a Server.Transfer to capture this error, because my UpdatePanels started throwing JavaScript errors when this error would occur; I was only able to get the full error message by using the JavaScript debugger in IE. Having found the error, I thought the solution would be simple: just wrap my data-bind calls in try/catch blocks to capture any errors. Unfortunately, it seems CRM's IIS configuration has the magic ability to capture this error before it ever gets back to my code. Using the debugger I never see it. It never reaches my catch blocks, but it is clearly happening in the SqlDataSource, which is clearly (by the stack trace) being triggered by my GridView bind. Any ideas on this? It's driving me crazy. Code-behind (with some irrelevant functions omitted):

    protected void Page_Load(object sender, EventArgs e)
    {
        // Initialize some stuff
        this.bannerOracle = new OdbcConnection(ConfigurationManager.ConnectionStrings["OracleConnectionString"].ConnectionString);

        // Prospect default
        HideProspects();
        HideProspectAddressColumn();

        // Contacts default
        HideContactAddressColumn();

        // Default error messages
        gvContacts.EmptyDataText = "Sad day. Your search returned no contacts.";
        gvProspects.EmptyDataText = "Sad day. Your search returned no prospects.";

        // New search
        try
        {
            SearchContact(null, -1);
        }
        catch
        {
            gvContacts.EmptyDataText = "Oops! An error occurred. This may have been a timeout. Please try your search again.";
            gvContacts.DataSource = null;
            gvContacts.DataBind();
        }
    }

    protected void txtSearchString_TextChanged(object sender, EventArgs e)
    {
        if (!String.IsNullOrEmpty(txtSearchString.Text))
        {
            try
            {
                SearchContact(txtSearchString.Text, Convert.ToInt16(lstSearchType.SelectedValue));
            }
            catch
            {
                gvContacts.EmptyDataText = "Oops! An error occurred. This may have been a timeout. Please try your search again.";
                gvContacts.DataSource = null;
                gvContacts.DataBind();
            }
            if (chkProspects.Checked == true)
            {
                try
                {
                    SearchProspect(txtSearchString.Text, Convert.ToInt16(lstSearchType.SelectedValue));
                }
                catch
                {
                    gvProspects.EmptyDataText = "Oops! An error occurred. This may have been a timeout. Please try your search again.";
                    gvProspects.DataSource = null;
                    gvProspects.DataBind();
                }
                finally
                {
                    ShowProspects();
                }
            }
            else
            {
                HideProspects();
            }
        }
    }

    protected void SearchContact(string search, int type)
    {
        SqlCRM_Contact.ConnectionString = ConfigurationManager.ConnectionStrings["MSSQLConnectionString"].ConnectionString;
        gvContacts.DataSourceID = "SqlCRM_Contact";
        string strQuery = "";
        string baseQuery = @"SELECT filteredcontact.contactid, filteredcontact.new_libertyid,
            filteredcontact.fullname, 'none' AS line1, filteredcontact.emailaddress1,
            filteredcontact.telephone1, filteredcontact.birthdateutc AS birthdate,
            filteredcontact.gendercodename
            FROM filteredcontact ";
        switch (type)
        {
            case LASTFIRST:
                strQuery = baseQuery + "WHERE fullname LIKE @value AND filteredcontact.statecode = 0";
                SqlCRM_Contact.SelectCommand = strQuery;
                SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%");
                break;
            case LAST:
                strQuery = baseQuery + "WHERE lastname LIKE @value AND filteredcontact.statecode = 0";
                SqlCRM_Contact.SelectCommand = strQuery;
                SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%");
                break;
            case FIRST:
                strQuery = baseQuery + "WHERE firstname LIKE @value AND filteredcontact.statecode = 0";
                SqlCRM_Contact.SelectCommand = strQuery;
                SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%");
                break;
            case LIBERTYID:
                strQuery = baseQuery + "WHERE new_libertyid LIKE @value AND filteredcontact.statecode = 0";
                SqlCRM_Contact.SelectCommand = strQuery;
                SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%");
                break;
            case EMAIL:
                strQuery = baseQuery + "WHERE emailaddress1 LIKE @value AND filteredcontact.statecode = 0";
                SqlCRM_Contact.SelectCommand = strQuery;
                SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%");
                break;
            case TELEPHONE:
                strQuery = baseQuery + "WHERE telephone1 LIKE @value AND filteredcontact.statecode = 0";
                SqlCRM_Contact.SelectCommand = strQuery;
                SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%");
                break;
            case BIRTHDAY:
                strQuery = baseQuery + "WHERE filteredcontact.birthdateutc BETWEEN @dateStart AND @dateEnd AND filteredcontact.statecode = 0";
                try
                {
                    DateTime temp = DateTime.Parse(search);
                    if (temp.Year < 1753 || temp.Year > 9999) { search = string.Empty; }
                    else { search = temp.ToString("yyyy-MM-dd"); }
                }
                catch { search = string.Empty; }
                SqlCRM_Contact.SelectCommand = strQuery;
                SqlCRM_Contact.SelectParameters.Add("dateStart", DbType.String, search.Trim() + " 00:00:00.000");
                SqlCRM_Contact.SelectParameters.Add("dateEnd", DbType.String, search.Trim() + " 23:59:59.999");
                break;
            case SSN:
                // Do something
                break;
            case ADDRESS:
                strQuery = @"SELECT contactid, new_libertyid, fullname, line1, emailaddress1, telephone1, birthdate, gendercodename
                    FROM (SELECT FC.contactid, FC.new_libertyid, FC.fullname, FA.line1, FC.emailaddress1, FC.telephone1,
                                 FC.birthdateutc AS birthdate, FC.gendercodename,
                                 ROW_NUMBER() OVER(PARTITION BY FC.contactid ORDER BY FC.contactid DESC) AS rn
                          FROM filteredcontact FC
                          INNER JOIN FilteredCustomerAddress FA ON FC.contactid = FA.parentid
                          WHERE FA.line1 LIKE @value AND FA.addressnumber <> 1 AND FC.statecode = 0) AS RESULTS
                    WHERE rn = 1";
                SqlCRM_Contact.SelectCommand = strQuery;
                SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%");
                ShowContactAddressColumn();
                break;
            default:
                strQuery = @"SELECT TOP 500 filteredcontact.contactid, filteredcontact.new_libertyid,
                    filteredcontact.fullname, 'none' AS line1, filteredcontact.emailaddress1,
                    filteredcontact.telephone1, filteredcontact.birthdateutc AS birthdate,
                    filteredcontact.gendercodename
                    FROM filteredcontact
                    WHERE filteredcontact.statecode = 0";
                SqlCRM_Contact.SelectCommand = strQuery;
                break;
        }
        if (type != ADDRESS) { HideContactAddressColumn(); }
        gvContacts.PageIndex = 0;
        //try
        //{
        //    SqlCRM_Contact.DataBind();
        //}
        //catch
        //{
        //    SqlCRM_Contact.DataBind();
        //}
        gvContacts.DataBind();
    }

    protected void SearchProspect(string search, int type)
    {
        SqlCRM_Prospect.ConnectionString = ConfigurationManager.ConnectionStrings["MSSQLConnectionString"].ConnectionString;
        gvProspects.DataSourceID = "SqlCRM_Prospect";
        string strQuery = "";
        string baseQuery = @"SELECT filteredlead.leadid, filteredlead.fullname, 'none' AS address1_line1,
            filteredlead.emailaddress1, filteredlead.telephone1,
            filteredlead.lu_dateofbirthutc AS lu_dateofbirth, filteredlead.lu_gendername
            FROM filteredlead ";
        switch (type)
        {
            case LASTFIRST:
                strQuery = baseQuery + "WHERE fullname LIKE @value AND filteredlead.statecode = 0";
                SqlCRM_Prospect.SelectCommand = strQuery;
                SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%");
                break;
            case LAST:
                strQuery = baseQuery + "WHERE lastname LIKE @value AND filteredlead.statecode = 0";
                SqlCRM_Prospect.SelectCommand = strQuery;
                SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%");
                break;
            case FIRST:
                strQuery = baseQuery + "WHERE firstname LIKE @value AND filteredlead.statecode = 0";
                SqlCRM_Prospect.SelectCommand = strQuery;
                SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%");
                break;
            case LIBERTYID:
                strQuery = baseQuery + "WHERE new_libertyid LIKE @value AND filteredlead.statecode = 0";
                SqlCRM_Prospect.SelectCommand = strQuery;
                SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%");
                break;
            case EMAIL:
                strQuery = baseQuery + "WHERE emailaddress1 LIKE @value AND filteredlead.statecode = 0";
                SqlCRM_Prospect.SelectCommand = strQuery;
                SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%");
                break;
            case TELEPHONE:
                strQuery = baseQuery + "WHERE telephone1 LIKE @value AND filteredlead.statecode = 0";
                SqlCRM_Prospect.SelectCommand = strQuery;
                SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%");
                break;
            case BIRTHDAY:
                strQuery = baseQuery + "WHERE filteredlead.lu_dateofbirth BETWEEN @dateStart AND @dateEnd AND filteredlead.statecode = 0";
                try
                {
                    DateTime temp = DateTime.Parse(search);
                    if (temp.Year < 1753 || temp.Year > 9999) { search = string.Empty; }
                    else { search = temp.ToString("yyyy-MM-dd"); }
                }
                catch { search = string.Empty; }
                SqlCRM_Prospect.SelectCommand = strQuery;
                SqlCRM_Prospect.SelectParameters.Add("dateStart", DbType.String, search.Trim() + " 00:00:00.000");
                SqlCRM_Prospect.SelectParameters.Add("dateEnd", DbType.String, search.Trim() + " 23:59:59.999");
                break;
            case SSN:
                // Do nothing
                break;
            case ADDRESS:
                strQuery = @"SELECT filteredlead.leadid, filteredlead.fullname, filteredlead.address1_line1,
                    filteredlead.emailaddress1, filteredlead.telephone1,
                    filteredlead.lu_dateofbirthutc AS lu_dateofbirth, filteredlead.lu_gendername
                    FROM filteredlead
                    WHERE address1_line1 LIKE @value AND filteredlead.statecode = 0";
                SqlCRM_Prospect.SelectCommand = strQuery;
                SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%");
                ShowProspectAddressColumn();
                break;
            default:
                // note: the original default query was missing the comma after 'none' AS address1_line1
                strQuery = @"SELECT TOP 500 filteredlead.leadid, filteredlead.fullname, 'none' AS address1_line1,
                    filteredlead.emailaddress1, filteredlead.telephone1,
                    filteredlead.lu_dateofbirthutc AS lu_dateofbirth, filteredlead.lu_gendername
                    FROM filteredlead
                    WHERE filteredlead.statecode = 0";
                SqlCRM_Prospect.SelectCommand = strQuery;
                break;
        }
        if (type != ADDRESS) { HideProspectAddressColumn(); }
        gvProspects.PageIndex = 0;
        //try
        //{
        //    SqlCRM_Prospect.DataBind();
        //}
        //catch (Exception ex)
        //{
        //    SqlCRM_Prospect.DataBind();
        //}
        gvProspects.DataBind();
    }
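
    One avenue worth trying, sketched here as a hedged suggestion rather than a confirmed fix (the handler names and page class are hypothetical; gvContacts and SqlCRM_Contact are the controls from the code above): SqlDataSource raises a Selected event even when the select throws, and setting ExceptionHandled there swallows the SqlException before CRM's Server.Transfer machinery ever sees it. The Selecting event can also raise the command timeout for the long-running searches.

    using System;
    using System.Web.UI.WebControls;

    public partial class ContactSearch : System.Web.UI.Page
    {
        // Wire up in markup, e.g.:
        // <asp:SqlDataSource ID="SqlCRM_Contact" runat="server"
        //     OnSelecting="SqlCRM_Contact_Selecting" OnSelected="SqlCRM_Contact_Selected" ... />

        protected void SqlCRM_Contact_Selecting(object sender, SqlDataSourceSelectingEventArgs e)
        {
            // Give long-running searches more headroom (seconds) before they time out.
            e.Command.CommandTimeout = 120;
        }

        protected void SqlCRM_Contact_Selected(object sender, SqlDataSourceStatusEventArgs e)
        {
            if (e.Exception != null)
            {
                // Mark the exception handled so it never escapes to CRM's error page;
                // the grid then binds empty and shows its EmptyDataText instead.
                e.ExceptionHandled = true;
                gvContacts.EmptyDataText =
                    "Oops! An error occurred. This may have been a timeout. Please try your search again.";
            }
        }
    }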

    Read the article

  • How to solve "403 Forbidden" on CentOS6 with SELinux Disabled?

    - by André
    I have a machine on Linode that is driving me crazy. Linode's CentOS 6 image does not ship with SELinux enabled. I'm trying to configure Apache to serve my website from "/home/websites/public_html/mysite.com/public". Since SELinux is not enabled, what else could cause the "403 Forbidden" I get when trying to access the web page? Sorry for my English. Best regards,

    Update 1, error_log:

    [Mon Oct 17 14:04:16 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied
    [Mon Oct 17 14:08:07 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied
    [Mon Oct 17 14:10:25 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied
    [Mon Oct 17 14:10:41 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied
    [Mon Oct 17 14:32:35 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied
    [Mon Oct 17 14:34:45 2011] [error] [client 58.218.199.227] (13)Permission denied: access to /proxy-1.php denied
    [Mon Oct 17 15:32:25 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied
    [Mon Oct 17 15:37:26 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied
    [Mon Oct 17 15:37:43 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied
    [Mon Oct 17 15:38:32 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied
    [Mon Oct 17 15:42:56 2011] [crit] [client 127.0.0.1] (13)Permission denied: /home/websites/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable
    [Mon Oct 17 15:43:12 2011] [crit] [client 127.0.0.1] (13)Permission denied: /home/websites/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable
    [Mon Oct 17 15:45:34 2011] [crit] [client 127.0.0.1] (13)Permission denied: /home/websites/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable
    [Mon Oct 17 15:51:25 2011] [crit] [client 127.0.0.1] (13)Permission denied: /home/websites/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable

    Update 2, /home/websites directory:

    drwx------  3 websites websites 4096 Oct 17 14:52 .
    drwxr-xr-x. 3 root     root     4096 Oct 17 13:42 ..
    -rw-------  1 websites websites  372 Oct 17 14:52 .bash_history
    -rw-r--r--  1 websites websites   18 May 30 11:46 .bash_logout
    -rw-r--r--  1 websites websites  176 May 30 11:46 .bash_profile
    -rw-r--r--  1 websites websites  124 May 30 11:46 .bashrc
    drwxrwxr-x  3 websites apache   4096 Oct 17 13:45 public_html

    Update 3, httpd.conf:

    ### Section 1: Global Environment
    ServerTokens OS
    ServerRoot "/etc/httpd"
    PidFile run/httpd.pid
    Timeout 60
    KeepAlive Off
    MaxKeepAliveRequests 100
    KeepAliveTimeout 15

    <IfModule prefork.c>
    StartServers 8
    MinSpareServers 5
    MaxSpareServers 20
    ServerLimit 256
    MaxClients 256
    MaxRequestsPerChild 4000
    </IfModule>

    <IfModule worker.c>
    StartServers 4
    MaxClients 300
    MinSpareThreads 25
    MaxSpareThreads 75
    ThreadsPerChild 25
    MaxRequestsPerChild 0
    </IfModule>

    #Listen 12.34.56.78:80
    Listen 80

    LoadModule auth_basic_module modules/mod_auth_basic.so
    LoadModule auth_digest_module modules/mod_auth_digest.so
    LoadModule authn_file_module modules/mod_authn_file.so
    LoadModule authn_alias_module modules/mod_authn_alias.so
    LoadModule authn_anon_module modules/mod_authn_anon.so
    LoadModule authn_dbm_module modules/mod_authn_dbm.so
    LoadModule authn_default_module modules/mod_authn_default.so
    LoadModule authz_host_module modules/mod_authz_host.so
    LoadModule authz_user_module modules/mod_authz_user.so
    LoadModule authz_owner_module modules/mod_authz_owner.so
    LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
    LoadModule authz_dbm_module modules/mod_authz_dbm.so
    LoadModule authz_default_module modules/mod_authz_default.so
    LoadModule ldap_module modules/mod_ldap.so
    LoadModule authnz_ldap_module modules/mod_authnz_ldap.so
    LoadModule include_module modules/mod_include.so
    LoadModule log_config_module modules/mod_log_config.so
    LoadModule logio_module modules/mod_logio.so
    LoadModule env_module modules/mod_env.so
    LoadModule ext_filter_module modules/mod_ext_filter.so
    LoadModule mime_magic_module modules/mod_mime_magic.so
    LoadModule expires_module modules/mod_expires.so
    LoadModule deflate_module modules/mod_deflate.so
    LoadModule headers_module modules/mod_headers.so
    LoadModule usertrack_module modules/mod_usertrack.so
    LoadModule setenvif_module modules/mod_setenvif.so
    LoadModule mime_module modules/mod_mime.so
    LoadModule dav_module modules/mod_dav.so
    LoadModule status_module modules/mod_status.so
    LoadModule autoindex_module modules/mod_autoindex.so
    LoadModule info_module modules/mod_info.so
    LoadModule dav_fs_module modules/mod_dav_fs.so
    LoadModule vhost_alias_module modules/mod_vhost_alias.so
    LoadModule negotiation_module modules/mod_negotiation.so
    LoadModule dir_module modules/mod_dir.so
    LoadModule actions_module modules/mod_actions.so
    LoadModule speling_module modules/mod_speling.so
    LoadModule userdir_module modules/mod_userdir.so
    LoadModule alias_module modules/mod_alias.so
    LoadModule substitute_module modules/mod_substitute.so
    LoadModule rewrite_module modules/mod_rewrite.so
    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
    LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
    LoadModule proxy_http_module modules/mod_proxy_http.so
    LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
    LoadModule proxy_connect_module modules/mod_proxy_connect.so
    LoadModule cache_module modules/mod_cache.so
    LoadModule suexec_module modules/mod_suexec.so
    LoadModule disk_cache_module modules/mod_disk_cache.so
    LoadModule cgi_module modules/mod_cgi.so
    LoadModule version_module modules/mod_version.so

    Include conf.d/*.conf

    #ExtendedStatus On
    User apache
    Group apache

    ServerAdmin root@localhost
    #ServerName www.example.com:80
    UseCanonicalName Off
    DocumentRoot "/var/www/html"

    # Each directory to which Apache has access can be configured with respect to which services
    # and features are allowed and/or disabled in that directory (and its subdirectories).
    # First, we configure the "default" to be a very restrictive set of features.
    <Directory />
        Options FollowSymLinks
        AllowOverride None
    </Directory>

    # Note that from this point forward you must specifically allow particular features to be
    # enabled - so if something's not working as you might expect, make sure that you have
    # specifically enabled it below.
    # This should be changed to whatever you set DocumentRoot to.
    <Directory "/home/websites/public_html">
        # Possible values for the Options directive are "None", "All", or any combination of:
        #   Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews
        # Note that "MultiViews" must be named *explicitly* --- "Options All" doesn't give it to you.
        # The Options directive is both complicated and important. Please see
        # http://httpd.apache.org/docs/2.2/mod/core.html#options for more information.
        Options Indexes FollowSymLinks
        # AllowOverride controls what directives may be placed in .htaccess files. It can be
        # "All", "None", or any combination of the keywords: Options FileInfo AuthConfig Limit
        AllowOverride None
        # Controls who can get stuff from this server.
        Order allow,deny
        Allow from all
    </Directory>

    # UserDir: The name of the directory that is appended onto a user's home directory if a ~user
    # request is received. The path to the end user account 'public_html' directory must be
    # accessible to the webserver userid. This usually means that ~userid must have permissions of
    # 711, ~userid/public_html must have permissions of 755, and documents contained therein must
    # be world-readable. Otherwise, the client will only receive a "403 Forbidden" message.
    # See also: http://httpd.apache.org/docs/misc/FAQ.html#forbidden
    <IfModule mod_userdir.c>
        # UserDir is disabled by default since it can confirm the presence of a username on the
        # system (depending on home directory permissions).
        UserDir disabled
        # To enable requests to /~user/ to serve the user's public_html directory, remove the
        # "UserDir disabled" line above, and uncomment the following line instead:
        #UserDir public_html
    </IfModule>

    # Control access to UserDir directories. The following is an example for a site where these
    # directories are restricted to read-only.
    #<Directory /home/*/public_html>
    #    AllowOverride FileInfo AuthConfig Limit
    #    Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
    #    <Limit GET POST OPTIONS>
    #        Order allow,deny
    #        Allow from all
    #    </Limit>
    #    <LimitExcept GET POST OPTIONS>
    #        Order deny,allow
    #        Deny from all
    #    </LimitExcept>
    #</Directory>

    # DirectoryIndex: sets the file that Apache will serve if a directory is requested.
    # The index.html.var file (a type-map) is used to deliver content-negotiated documents.
    # The MultiViews Option can be used for the same purpose, but it is much slower.
    DirectoryIndex index.html index.html.var

    # AccessFileName: The name of the file to look for in each directory for additional
    # configuration directives. See also the AllowOverride directive.
    AccessFileName .htaccess

    # The following lines prevent .htaccess and .htpasswd files from being viewed by Web clients.
    <Files ~ "^\.ht">
        Order allow,deny
        Deny from all
        Satisfy All
    </Files>

    # TypesConfig describes where the mime.types file (or equivalent) is to be found.
    TypesConfig /etc/mime.types

    # DefaultType is the default MIME type the server will use for a document if it cannot
    # otherwise determine one, such as from filename extensions. If your server contains mostly
    # text or HTML documents, "text/plain" is a good value. If most of your content is binary,
    # such as applications or images, you may want to use "application/octet-stream" instead to
    # keep browsers from trying to display binary files as though they are text.
    DefaultType text/plain

    # The mod_mime_magic module allows the server to use various hints from the contents of the
    # file itself to determine its type. The MIMEMagicFile directive tells the module where the
    # hint definitions are located.
    <IfModule mod_mime_magic.c>
    #   MIMEMagicFile /usr/share/magic.mime
        MIMEMagicFile conf/magic
    </IfModule>

    # HostnameLookups: Log the names of clients or just their IP addresses, e.g., www.apache.org
    # (on) or 204.62.129.132 (off). The default is off because it'd be overall better for the net
    # if people had to knowingly turn this feature on, since enabling it means that each client
    # request will result in AT LEAST one lookup request to the nameserver.
    HostnameLookups Off

    #EnableMMAP off
    #EnableSendfile off

    # ErrorLog: The location of the error log file. If you do not specify an ErrorLog directive
    # within a <VirtualHost> container, error messages relating to that virtual host will be
    # logged here. If you *do* define an error logfile for a <VirtualHost> container, that host's
    # errors will be logged there and not here.
    ErrorLog logs/error_log
    LogLevel warn

    # The following directives define some format nicknames for use with a CustomLog directive
    # (see below).
    LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
    LogFormat "%h %l %u %t \"%r\" %>s %b" common
    LogFormat "%{Referer}i -> %U" referer
    LogFormat "%{User-agent}i" agent
    # "combinedio" includes actual counts of actual bytes received (%I) and sent (%O); this
    # requires the mod_logio module to be loaded.
    #LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio

    # The location and format of the access logfile (Common Logfile Format). If you do not define
    # any access logfiles within a <VirtualHost> container, they will be logged here.
    # Contrariwise, if you *do* define per-<VirtualHost> access logfiles, transactions will be
    # logged therein and *not* in this file.
    #CustomLog logs/access_log common

    # If you would like to have separate agent and referer logfiles, uncomment the following
    # directives.
    #CustomLog logs/referer_log referer
    #CustomLog logs/agent_log agent

    # For a single logfile with access, agent, and referer information (Combined Logfile Format),
    # use the following directive:
    CustomLog logs/access_log combined

    ServerSignature On

    Alias /icons/ "/var/www/icons/"
    <Directory "/var/www/icons">
        Options Indexes MultiViews FollowSymLinks
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>

    # WebDAV module configuration section.
    <IfModule mod_dav_fs.c>
        # Location of the WebDAV lock database.
        DAVLockDB /var/lib/dav/lockdb
    </IfModule>

    # ScriptAlias: This controls which directories contain server scripts. ScriptAliases are
    # essentially the same as Aliases, except that documents in the realname directory are
    # treated as applications and run by the server when requested rather than as documents sent
    # to the client. The same rules about trailing "/" apply to ScriptAlias directives as to Alias.
    ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"

    # "/var/www/cgi-bin" should be changed to wherever your ScriptAliased CGI directory exists,
    # if you have that configured.
    <Directory "/var/www/cgi-bin">
        AllowOverride None
        Options None
        Order allow,deny
        Allow from all
    </Directory>

    IndexOptions FancyIndexing VersionSort NameWidth=* HTMLTable Charset=UTF-8

    AddIconByEncoding (CMP,/icons/compressed.gif) x-compress x-gzip
    AddIconByType (TXT,/icons/text.gif) text/*
    AddIconByType (IMG,/icons/image2.gif) image/*
    AddIconByType (SND,/icons/sound2.gif) audio/*
    AddIconByType (VID,/icons/movie.gif) video/*
    AddIcon /icons/binary.gif .bin .exe
    AddIcon /icons/binhex.gif .hqx
    AddIcon /icons/tar.gif .tar
    AddIcon /icons/world2.gif .wrl .wrl.gz .vrml .vrm .iv
    AddIcon /icons/compressed.gif .Z .z .tgz .gz .zip
    AddIcon /icons/a.gif .ps .ai .eps
    AddIcon /icons/layout.gif .html .shtml .htm .pdf
    AddIcon /icons/text.gif .txt
    AddIcon /icons/c.gif .c
    AddIcon /icons/p.gif .pl .py
    AddIcon /icons/f.gif .for
    AddIcon /icons/dvi.gif .dvi
    AddIcon /icons/uuencoded.gif .uu
    AddIcon /icons/script.gif .conf .sh .shar .csh .ksh .tcl
    AddIcon /icons/tex.gif .tex
    AddIcon /icons/bomb.gif core
    AddIcon /icons/back.gif ..
    AddIcon /icons/hand.right.gif README
    AddIcon /icons/folder.gif ^^DIRECTORY^^
    AddIcon /icons/blank.gif ^^BLANKICON^^

    # DefaultIcon is which icon to show for files which do not have an icon explicitly set.
    DefaultIcon /icons/unknown.gif

    # AddDescription allows you to place a short description after a file in server-generated
    # indexes. These are only displayed for FancyIndexed directories.
    # Format: AddDescription "description" filename
    #AddDescription "GZIP compressed document" .gz
    #AddDescription "tar archive" .tar
    #AddDescription "GZIP compressed tar archive" .tgz

    # ReadmeName is the name of the README file the server will look for by default, and append
    # to directory listings. HeaderName is the name of a file which should be prepended to
    # directory indexes.
    ReadmeName README.html
    HeaderName HEADER.html

    # IndexIgnore is a set of filenames which directory indexing should ignore and not include in
    # the listing. Shell-style wildcarding is permitted.
    IndexIgnore .??* *~ *# HEADER* README* RCS CVS *,v *,t

    # DefaultLanguage and AddLanguage allows you to specify the language of a document. You can
    # then use content negotiation to give a browser a file in a language the user can understand.
    # Specify a default language. This means that all data going out without a specific language
    # tag (see below) will be marked with this one. You probably do NOT want to set this unless
    # you are sure it is correct for all cases. It is generally better to not mark a page as
    # being a certain language than marking it with the wrong language!
    # DefaultLanguage nl
    # Note 1: The suffix does not have to be the same as the language keyword --- those with
    # documents in Polish (whose net-standard language code is pl) may wish to use "AddLanguage
    # pl .po" to avoid the ambiguity with the common suffix for perl scripts.
    # Note 2: The example entries below illustrate that in some cases the two character
    # 'Language' abbreviation is not identical to the two character 'Country' code for its
    # country, e.g. 'Danmark/dk' versus 'Danish/da'.
    # Note 3: In the case of 'ltz' we violate the RFC by using a three char specifier. There is
    # 'work in progress' to fix this and get the reference data for rfc1766 cleaned up.
    # Catalan (ca) - Croatian (hr) - Czech (cs) - Danish (da) - Dutch (nl)
    # English (en) - Esperanto (eo) - Estonian (et) - French (fr) - German (de)
    # Greek-Modern (el) - Hebrew (he) - Italian (it) - Japanese (ja)
    # Korean (ko) - Luxembourgeois* (ltz) - Norwegian Nynorsk (nn) - Norwegian (no)
    # Polish (pl) - Portugese (pt) - Brazilian Portuguese (pt-BR) - Russian (ru) - Swedish (sv)
    # Simplified Chinese (zh-CN) - Spanish (es) - Traditional Chinese (zh-TW)
    AddLanguage ca .ca
    AddLanguage cs .cz .cs
    AddLanguage da .dk
    AddLanguage de .de
    AddLanguage el .el
    AddLanguage en .en
    AddLanguage eo .eo
    AddLanguage es .es
    AddLanguage et .et
    AddLanguage fr .fr
    AddLanguage he .he
    AddLanguage hr .hr
    AddLanguage it .it
    AddLanguage ja .ja
    AddLanguage ko .ko
    AddLanguage ltz .ltz
    AddLanguage nl .nl
    AddLanguage nn .nn
    AddLanguage no .no
    AddLanguage pl .po
    AddLanguage pt .pt
    AddLanguage pt-BR .pt-br
    AddLanguage ru .ru
    AddLanguage sv .sv
    AddLanguage zh-CN .zh-cn
    AddLanguage zh-TW .zh-tw

    # LanguagePriority allows you to give precedence to some languages in case of a tie during
    # content negotiation. Just list the languages in decreasing order of preference. We have
    # more or less alphabetized them here. You probably want to change this.
    LanguagePriority en ca cs da de el eo es et fr he hr it ja ko ltz nl nn no pl pt pt-BR ru sv zh-CN zh-TW

    # ForceLanguagePriority allows you to serve a result page rather than MULTIPLE CHOICES
    # (Prefer) [in case of a tie] or NOT ACCEPTABLE (Fallback) [in case no accepted languages
    # matched the available variants].
    ForceLanguagePriority Prefer Fallback

    # Specify a default charset for all content served; this enables interpretation of all
    # content as UTF-8 by default. To use the default browser choice (ISO-8859-1), or to allow
    # the META tags in HTML content to override this choice, comment out this directive:
    AddDefaultCharset UTF-8

    # AddType allows you to add to or override the MIME configuration file mime.types for
    # specific file types.
    #AddType application/x-tar .tgz

    # AddEncoding allows you to have certain browsers uncompress information on the fly. Note:
    # Not all browsers support this. Despite the name similarity, the following Add* directives
    # have nothing to do with the FancyIndexing customization directives above.
    #AddEncoding x-compress .Z
    #AddEncoding x-gzip .gz .tgz
    # If the AddEncoding directives above are commented-out, then you probably should define
    # those extensions to indicate media types:
    AddType application/x-compress .Z
    AddType application/x-gzip .gz .tgz

    # MIME-types for downloading Certificates and CRLs
    AddType application/x-x509-ca-cert .crt
    AddType application/x-pkcs7-crl .crl

    # AddHandler allows you to map certain file extensions to "handlers": actions unrelated to
    # filetype. These can be either built into the server or added with the Action directive
    # (see below).
    # To use CGI scripts outside of ScriptAliased directories:
    # (You will also need to add "ExecCGI" to the "Options" directive.)
    #AddHandler cgi-script .cgi
    # For files that include their own HTTP headers:
    #AddHandler send-as-is asis
    # For type maps (negotiated resources): (This is enabled by default to allow the Apache
    # "It Worked" page to be distributed in multiple languages.)
    AddHandler type-map var

    # Filters allow you to process content before it is sent to the client.
    # To parse .shtml files for server-side includes (SSI):
    # (You will also need to add "Includes" to the "Options" directive.)
    AddType text/html .shtml
    AddOutputFilter INCLUDES .shtml

    # Action lets you define media types that will execute a script whenever a matching file is
    # called. This eliminates the need for repeated URL pathnames for oft-used CGI file processors.
    # Format: Action media/type /cgi-script/location
    # Format: Action handler-name /cgi-script/location

    # Customizable error responses come in three flavors:
    # 1) plain text 2) local redirects 3) external redirects
    # Some examples:
    #ErrorDocument 500 "The server made a boo boo."
    #ErrorDocument 404 /missing.html
    #ErrorDocument 404 "/cgi-bin/missing_handler.pl"
    #ErrorDocument 402 http://www.example.com/subscription_info.html

    # Putting this all together, we can internationalize error responses. We use Alias to
    # redirect any /error/HTTP_<error>.html.var response to our collection of by-error message
    # multi-language collections. We use includes to substitute the appropriate text. You can
    # modify the messages' appearance without changing any of the default HTTP_<error>.html.var
    # files by adding the line:
    #   Alias /error/include/ "/your/include/path/"
    # which allows you to create your own set of files by starting with the
    # /var/www/error/include/ files and copying them to /your/include/path/, even on a
    # per-VirtualHost basis.
    Alias /error/ "/var/www/error/"
    <IfModule mod_negotiation.c>
    <IfModule mod_include.c>
        <Directory "/var/www/error">
            AllowOverride None
            Options IncludesNoExec
            AddOutputFilter Includes html
            AddHandler type-map var
            Order allow,deny
            Allow from all
            LanguagePriority en es de fr
            ForceLanguagePriority Prefer Fallback
        </Directory>
    #   ErrorDocument 400 /error/HTTP_BAD_REQUEST.html.var
    #   ErrorDocument 401 /error/HTTP_UNAUTHORIZED.html.var
    #   ErrorDocument 403 /error/HTTP_FORBIDDEN.html.var
    #   ErrorDocument 404 /error/HTTP_NOT_FOUND.html.var
    #   ErrorDocument 405 /error/HTTP_METHOD_NOT_ALLOWED.html.var
    #   ErrorDocument 408 /error/HTTP_REQUEST_TIME_OUT.html.var
    #   ErrorDocument 410 /error/HTTP_GONE.html.var
    #   ErrorDocument 411 /error/HTTP_LENGTH_REQUIRED.html.var
    #   ErrorDocument 412 /error/HTTP_PRECONDITION_FAILED.html.var
    #   ErrorDocument 413 /error/HTTP_REQUEST_ENTITY_TOO_LARGE.html.var
    #   ErrorDocument 414 /error/HTTP_REQUEST_URI_TOO_LARGE.html.var
    #   ErrorDocument 415 /error/HTTP_UNSUPPORTED_MEDIA_TYPE.html.var
    #   ErrorDocument 500 /error/HTTP_INTERNAL_SERVER_ERROR.html.var
    #   ErrorDocument 501 /error/HTTP_NOT_IMPLEMENTED.html.var
    #   ErrorDocument 502 /error/HTTP_BAD_GATEWAY.html.var
    #   ErrorDocument 503 /error/HTTP_SERVICE_UNAVAILABLE.html.var
    #   ErrorDocument 506 /error/HTTP_VARIANT_ALSO_VARIES.html.var
    </IfModule>
    </IfModule>

    # The following directives modify normal HTTP response behavior to handle known problems with
    # browser implementations.
    BrowserMatch "Mozilla/2" nokeepalive
    BrowserMatch "MSIE 4\.0b2;" nokeepalive downgrade-1.0 force-response-1.0
    BrowserMatch "RealPlayer 4\.0" force-response-1.0
    BrowserMatch "Java/1\.0" force-response-1.0
    BrowserMatch "JDK/1\.0" force-response-1.0

    # The following directive disables redirects on non-GET requests for a directory that does
    # not include the trailing slash. This fixes a problem with Microsoft WebFolders which does
    # not appropriately handle redirects for folders with DAV methods. Same deal with Apple's
    # DAV filesystem and Gnome VFS support for DAV.
    BrowserMatch "Microsoft Data Access Internet Publishing Provider" redirect-carefully
    BrowserMatch "MS FrontPage" redirect-carefully
    BrowserMatch "^WebDrive" redirect-carefully
    BrowserMatch "^WebDAVFS/1.[0123]" redirect-carefully
    BrowserMatch "^gnome-vfs/1.0" redirect-carefully
    BrowserMatch "^XML Spy" redirect-carefully
    BrowserMatch "^Dreamweaver-WebDAV-SCM1" redirect-carefully

    # Allow server status reports generated by mod_status, with the URL of
    # http://servername/server-status. Change the ".example.com" to match your domain to enable.
    #<Location /server-status>
    #    SetHandler server-status
    #    Order deny,allow
    #    Deny from all
    #    Allow from .example.com
    #</Location>

    # Allow remote server configuration reports, with the URL of http://servername/server-info
    # (requires that mod_info.c be loaded). Change the ".example.com" to match your domain to
    # enable.
    #<Location /server-info>
    #    SetHandler server-info
    #    Order deny,allow
    #    Deny from all
    #    Allow from .example.com
    #</Location>

    # Proxy Server directives. Uncomment the following lines to enable the proxy server:
    #<IfModule mod_proxy.c>
    #ProxyRequests On
    #<Proxy *>
    #    Order deny,allow
    #    Deny from all
    #    Allow from .example.com
    #</Proxy>
    # Enable/disable the handling of HTTP/1.1 "Via:" headers.
    # ("Full" adds the server version; "Block" removes all outgoing Via: headers)
    # Set to one of: Off | On | Full | Block
    #ProxyVia On
    # To enable a cache of proxied content, uncomment the following lines.
    # See http://httpd.apache.org/docs/2.2/mod/mod_cache.html for more details.
    #<IfModule mod_disk_cache.c>
    #    CacheEnable disk /
    #    CacheRoot "/var/cache/mod_proxy"
    #</IfModule>
    #</IfModule>
    # End of proxy directives.

    ### Section 3: Virtual Hosts
    # VirtualHost: If you want to maintain multiple domains/hostnames on your machine you can
    # setup VirtualHost containers for them. Most configurations use only name-based virtual
    # hosts so the server doesn't need to worry about IP addresses. This is indicated by the
    # asterisks in the directives below. Please see the documentation at
    # <URL:http://httpd.apache.org/docs/2.2/vhosts/> for further details before you try to setup
    # virtual hosts. You may use the command line option '-S' to verify your virtual host
    # configuration.

    # Use name-based virtual hosting.
    NameVirtualHost *:80
    # NOTE: NameVirtualHost cannot be used without a port specifier (e.g. :80) if mod_ssl is
    # being used, due to the nature of the SSL protocol.

    # VirtualHost example: Almost any Apache directive may go into a VirtualHost container. The
    # first VirtualHost section is used for requests without a known server name.
    #<VirtualHost *:80>
    #    ServerAdmin [email protected]
    #    DocumentRoot /www/docs/dummy-host.example.com
    #    ServerName dummy-host.example.com
    #    ErrorLog logs/dummy-host.example.com-error_log
    #    CustomLog logs/dummy-host.example.com-access_log common
    #</VirtualHost>

    # domain: mysite.com
    # public: /home/websites/public_html/mysite.com/
    <VirtualHost *:80>
        # Admin email, Server Name (domain name) and any aliases
        ServerAdmin [email protected]
        ServerName mysite.com
        ServerAlias www.mysite.com
        # Index file and Document Root (where the public files are located)
        DirectoryIndex index.html
        DocumentRoot /home/websites/public_html/mysite.com/public
        # Custom log file locations
        LogLevel warn
        ErrorLog /home/websites/public_html/mysite.com/log/error.log
        CustomLog /home/websites/public_html/mysite.com/log/access.log combined
    </VirtualHost>
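
    One observation from the Update 2 listing: /home/websites is mode drwx------ (700), so the apache user cannot traverse into public_html at all, which matches both the "(13)Permission denied" entries and the pcfg_openfile complaints about /home/websites/.htaccess. A hedged sketch of the usual fix, using the paths from the question (run as root; verify before applying to production):

    # Apache needs execute (search) permission on every directory between /
    # and the DocumentRoot. /home/websites is currently drwx------ (700).
    chmod 711 /home/websites
    # namei -m prints the mode of each component of the path in one shot,
    # which makes any remaining blocker easy to spot:
    namei -m /home/websites/public_html/mysite.com/public
    # then restart Apache and retest
    service httpd restart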

    Read the article

  • SQLite, python, unicode, and non-utf data

    - by Nathan Spears
    I started by trying to store strings in sqlite using python, and got the message:

    sqlite3.ProgrammingError: You must not use 8-bit bytestrings unless you use a text_factory that can interpret 8-bit bytestrings (like text_factory = str). It is highly recommended that you instead just switch your application to Unicode strings.

    OK, I switched to Unicode strings. Then I started getting the message:

    sqlite3.OperationalError: Could not decode to UTF-8 column 'tag_artist' with text 'Sigur Rós'

    when trying to retrieve data from the db. More research, and I started encoding it in utf8, but then 'Sigur Rós' starts looking like 'Sigur RÃ³s'. Note: my console was set to display in 'latin_1', as @John Machin pointed out. What gives? After reading this (describing exactly the same situation I'm in), it seems as if the advice is to ignore the other advice and use 8-bit bytestrings after all.

    I didn't know much about unicode and utf before I started this process. I've learned quite a bit in the last couple of hours, but I'm still ignorant of whether there is a way to correctly convert 'ó' from latin-1 to utf-8 without mangling it. If there isn't, why would sqlite "highly recommend" I switch my application to unicode strings?

    I'm going to update this question with a summary and some example code of everything I've learned in the last 24 hours, so that someone in my shoes can have an easy(er) guide. If the information I post is wrong or misleading in any way, please tell me and I'll update, or one of you senior guys can update.

    Summary of answers

    Let me first state the goal as I understand it. The goal in processing various encodings, if you are trying to convert between them, is to understand what your source encoding is, then convert it to unicode using that source encoding, then convert it to your desired encoding. Unicode is a base and encodings are mappings of subsets of that base. utf_8 has room for every character in unicode, but because they aren't in the same place as, for instance, latin_1, a string encoded in utf_8 and sent to a latin_1 console will not look the way you expect. In python the process of getting to unicode and into another encoding looks like:

    str.decode('source_encoding').encode('desired_encoding')

    or, if the str is already in unicode,

    str.encode('desired_encoding')

    For sqlite I didn't actually want to encode it again; I wanted to decode it and leave it in unicode format. Here are four things you might need to be aware of as you try to work with unicode and encodings in python:

    1. The encoding of the string you want to work with, and the encoding you want to get it to.
    2. The system encoding.
    3. The console encoding.
    4. The encoding of the source file.

    Elaboration:

    (1) When you read a string from a source, it must have some encoding, like latin_1 or utf_8. In my case, I'm getting strings from filenames, so unfortunately I could be getting any kind of encoding. Windows XP uses UCS-2 (a Unicode system) as its native string type, which seems like cheating to me. Fortunately for me, the characters in most filenames are not going to be made up of more than one source encoding type, and I think all of mine were either completely latin_1, completely utf_8, or just plain ascii (which is a subset of both of those). So I just read them and decoded them as if they were still in latin_1 or utf_8. It's possible, though, that you could have latin_1 and utf_8 and whatever other characters mixed together in a filename on Windows.
    Sometimes those characters can show up as boxes, other times they just look mangled, and other times they look correct (accented characters and whatnot). Moving on.

    (2) Python has a default system encoding that gets set when python starts and can't be changed during runtime. See here for details. Dirty summary... well, here's the file I added:

    # sitecustomize.py
    # this file can be anywhere in your Python path,
    # but it usually goes in ${pythondir}/lib/site-packages/
    import sys
    sys.setdefaultencoding('utf_8')

    This system encoding is the one that gets used when you use the unicode("str") function without any other encoding parameters. To say that another way, python tries to decode "str" to unicode based on the default system encoding.

    (3) If you're using IDLE or the command-line python, I think that your console will display according to the default system encoding. I am using pydev with eclipse for some reason, so I had to go into my project settings, edit the launch configuration properties of my test script, go to the Common tab, and change the console from latin-1 to utf-8 so that I could visually confirm what I was doing was working.

    (4) If you want to have some test strings, e.g. test_str = "ó", in your source code, then you will have to tell python what kind of encoding you are using in that file. (FYI: when I mistyped an encoding I had to ctrl-Z because my file became unreadable.) This is easily accomplished by putting a line like so at the top of your source code file:

    # -*- coding: utf_8 -*-

    If you don't have this information, python attempts to parse your code as ascii by default, and so:

    SyntaxError: Non-ASCII character '\xf3' in file _redacted_ on line 81, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details

    Once your program is working correctly, or if you aren't using python's console or any other console to look at output, then you will probably really only care about #1 on the list. System default and console encoding are not that important unless you need to look at output and/or you are using the builtin unicode() function (without any encoding parameters) instead of the string.decode() function. I wrote a demo function I will paste at the bottom of this gigantic mess that I hope correctly demonstrates the items in my list. Here is some of the output when I run the character 'ó' through the demo function, showing how various methods react to the character as input. My system encoding and console output are both set to utf_8 for this run:

    '?' = original char <type 'str'> repr(char)='\xf3'
    '?' = unicode(char) ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data
    'ó' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3'
    '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data

    Now I will change the system and console encoding to latin_1, and I get this output for the same input:

    'ó' = original char <type 'str'> repr(char)='\xf3'
    'ó' = unicode(char) <type 'unicode'> repr(unicode(char))=u'\xf3'
    'ó' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3'
    '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data

    Notice that the 'original' character displays correctly and the builtin unicode() function works now. Now I change my console output back to utf_8:

    '?' = original char <type 'str'> repr(char)='\xf3'
    '?' = unicode(char) <type 'unicode'> repr(unicode(char))=u'\xf3'
    '?' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3'
    '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data

    Here everything still works the same as last time, but the console can't display the output correctly. Etc. The function below also displays more information than this and hopefully will help someone figure out where the gap in their understanding is. I know all this information is in other places and more thoroughly dealt with there, but I hope this is a good kickoff point for someone trying to get coding with python and/or sqlite. Ideas are great, but sometimes source code can save you a day or two of trying to figure out what functions do what.

    Disclaimers: I'm no encoding expert; I put this together to help my own understanding. I kept building on it when I should probably have started passing functions as arguments to avoid so much redundant code, so if I can I'll make it more concise. Also, utf_8 and latin_1 are by no means the only encoding schemes; they are just the two I was playing around with, because I think they handle everything I need. Add your own encoding schemes to the demo function and test your own input. One more thing: there are apparently crazy application developers making life difficult in Windows.

    #!/usr/bin/env python
    # -*- coding: utf_8 -*-

    import os
    import sys

    def encodingDemo(str):
        validStrings = ()
        try:
            print "str =", str, "{0} repr(str) = {1}".format(type(str), repr(str))
            validStrings += ((str, ""),)
        except UnicodeEncodeError as ude:
            print "Couldn't print the str itself because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t",
            print ude
        try:
            x = unicode(str)
            print "unicode(str) = ", x
            validStrings += ((x, " decoded into unicode by the default system encoding"),)
        except UnicodeDecodeError as ude:
            print "ERROR. unicode(str) couldn't decode the string because the system encoding is set to an encoding that doesn't understand some character in the string."
            print "\tThe system encoding is set to {0}. See error:\n\t".format(sys.getdefaultencoding()),
            print ude
        except UnicodeEncodeError as uee:
            print "ERROR. Couldn't print the unicode(str) because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t",
            print uee
        try:
            x = str.decode('latin_1')
            print "str.decode('latin_1') =", x
            validStrings += ((x, " decoded with latin_1 into unicode"),)
            try:
                print "str.decode('latin_1').encode('utf_8') =", str.decode('latin_1').encode('utf_8')
                validStrings += ((x, " decoded with latin_1 into unicode and encoded into utf_8"),)
            except UnicodeDecodeError as ude:
                print "The string was decoded into unicode using the latin_1 encoding, but couldn't be encoded into utf_8. See error:\n\t",
                print ude
        except UnicodeDecodeError as ude:
            print "Something didn't work, probably because the string wasn't latin_1 encoded. See error:\n\t",
            print ude
        except UnicodeEncodeError as uee:
            print "ERROR. Couldn't print the str.decode('latin_1') because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t",
            print uee
        try:
            x = str.decode('utf_8')
            print "str.decode('utf_8') =", x
            validStrings += ((x, " decoded with utf_8 into unicode"),)
            try:
                print "str.decode('utf_8').encode('latin_1') =", str.decode('utf_8').encode('latin_1')
                validStrings += ((x, " decoded with utf_8 into unicode and encoded into latin_1"),)
            except UnicodeDecodeError as ude:
                print "str.decode('utf_8').encode('latin_1') didn't work. The string was decoded into unicode using the utf_8 encoding, but couldn't be encoded into latin_1. See error:\n\t",
                print ude
        except UnicodeDecodeError as ude:
            print "str.decode('utf_8') didn't work, probably because the string wasn't utf_8 encoded. See error:\n\t",
            print ude
        except UnicodeEncodeError as uee:
            print "ERROR. Couldn't print the str.decode('utf_8') because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t", uee

        print
        print "Printing information about each character in the original string."
        for char in str:
            try:
                print "\t'" + char + "' = original char {0} repr(char)={1}".format(type(char), repr(char))
            except UnicodeDecodeError as ude:
                print "\t'?' = original char {0} repr(char)={1} ERROR PRINTING: {2}".format(type(char), repr(char), ude)
            except UnicodeEncodeError as uee:
                print "\t'?' = original char {0} repr(char)={1} ERROR PRINTING: {2}".format(type(char), repr(char), uee)
                print uee
            try:
                x = unicode(char)
                print "\t'" + x + "' = unicode(char) {1} repr(unicode(char))={2}".format(x, type(x), repr(x))
            except UnicodeDecodeError as ude:
                print "\t'?' = unicode(char) ERROR: {0}".format(ude)
            except UnicodeEncodeError as uee:
                print "\t'?' = unicode(char) {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee)
            try:
                x = char.decode('latin_1')
                print "\t'" + x + "' = char.decode('latin_1') {1} repr(char.decode('latin_1'))={2}".format(x, type(x), repr(x))
            except UnicodeDecodeError as ude:
                print "\t'?' = char.decode('latin_1') ERROR: {0}".format(ude)
            except UnicodeEncodeError as uee:
                print "\t'?' = char.decode('latin_1') {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee)
            try:
                x = char.decode('utf_8')
                print "\t'" + x + "' = char.decode('utf_8') {1} repr(char.decode('utf_8'))={2}".format(x, type(x), repr(x))
            except UnicodeDecodeError as ude:
                print "\t'?' = char.decode('utf_8') ERROR: {0}".format(ude)
            except UnicodeEncodeError as uee:
                print "\t'?' = char.decode('utf_8') {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee)
            print

    x = 'ó'
    encodingDemo(x)

    Much thanks for the answers below, and especially to @John Machin for answering so thoroughly.

    Read the article

  • SQLite, python, unicode, and non-utf data

    - by Nathan Spears
    I started by trying to store strings in sqlite using python, and got the message:

        sqlite3.ProgrammingError: You must not use 8-bit bytestrings unless you use a text_factory that can interpret 8-bit bytestrings (like text_factory = str). It is highly recommended that you instead just switch your application to Unicode strings.

    OK, I switched to Unicode strings. Then I started getting the message:

        sqlite3.OperationalError: Could not decode to UTF-8 column 'tag_artist' with text 'Sigur Rós'

    when trying to retrieve data from the db. More research and I started encoding it in utf8, but then 'Sigur Rós' starts looking like 'Sigur RÃ³s'. (Note: my console was set to display in 'latin_1', as @John Machin pointed out.) What gives? After reading this, describing exactly the same situation I'm in, it seems as if the advice is to ignore the other advice and use 8-bit bytestrings after all. I didn't know much about unicode and utf before I started this process. I've learned quite a bit in the last couple of hours, but I'm still ignorant of whether there is a way to correctly convert 'ó' from latin-1 to utf-8 without mangling it. If there isn't, why would sqlite 'highly recommend' I switch my application to unicode strings?

    I'm going to update this question with a summary and some example code of everything I've learned in the last 24 hours, so that someone in my shoes can have an easy(er) guide. If the information I post is wrong or misleading in any way, please tell me and I'll update, or one of you senior guys can update.

    Summary of answers

    Let me first state the goal as I understand it. When processing various encodings, if you are trying to convert between them, you need to understand what your source encoding is, convert the string to unicode using that source encoding, then encode it into your desired encoding. Unicode is a base, and encodings are mappings of subsets of that base. utf_8 has room for every character in unicode, but because the characters aren't in the same places as in, for instance, latin_1, a string encoded in utf_8 and sent to a latin_1 console will not look the way you expect. In python, the process of getting to unicode and into another encoding looks like:

        str.decode('source_encoding').encode('desired_encoding')

    or, if the str is already in unicode:

        str.encode('desired_encoding')

    For sqlite I didn't actually want to encode it again; I wanted to decode it and leave it in unicode format. Here are the four things you might need to be aware of as you try to work with unicode and encodings in python:

    1. The encoding of the string you want to work with, and the encoding you want to get it to.
    2. The system encoding.
    3. The console encoding.
    4. The encoding of the source file.

    Elaboration:

    (1) When you read a string from a source, it must have some encoding, like latin_1 or utf_8. In my case, I'm getting strings from filenames, so unfortunately I could be getting any kind of encoding. Windows XP uses UCS-2 (a Unicode system) as its native string type, which seems like cheating to me. Fortunately for me, the characters in most filenames are not going to be made up of more than one source encoding type, and I think all of mine were either completely latin_1, completely utf_8, or just plain ascii (which is a subset of both of those). So I just read them and decoded them as if they were still in latin_1 or utf_8. It's possible, though, that you could have latin_1 and utf_8 and whatever other characters mixed together in a filename on Windows. Sometimes those characters can show up as boxes, other times they just look mangled, and other times they look correct (accented characters and whatnot). Moving on.
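    As an aside, here is roughly what "decode it and leave it in unicode" looks like end to end with sqlite. This is only a sketch of my understanding so far - Python 2 with the standard sqlite3 module, and the table and column names are made up:

        import sqlite3

        def to_unicode(raw, source_encoding='latin_1'):
            # Decode a byte string using its (known or assumed) source encoding;
            # latin_1 here is just the example source encoding.
            return raw.decode(source_encoding)

        con = sqlite3.connect(':memory:')
        con.execute("CREATE TABLE tracks (tag_artist TEXT)")

        raw = 'Sigur R\xf3s'  # latin_1-encoded bytes for 'Sigur Rós'
        con.execute("INSERT INTO tracks VALUES (?)", (to_unicode(raw),))

        row = con.execute("SELECT tag_artist FROM tracks").fetchone()
        print type(row[0]), repr(row[0])  # <type 'unicode'> u'Sigur R\xf3s'

    sqlite hands text back as unicode objects by default, so once everything going in has been properly decoded, nothing needs to be encoded again on the way out.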
    (2) Python has a default system encoding that gets set when python starts and can't be changed during runtime. See here for details. Dirty summary... well, here's the file I added:

        # sitecustomize.py
        # this file can be anywhere in your Python path,
        # but it usually goes in ${pythondir}/lib/site-packages/
        import sys
        sys.setdefaultencoding('utf_8')

    This system encoding is the one that gets used when you use the unicode("str") function without any other encoding parameters. To say that another way, python tries to decode "str" to unicode based on the default system encoding.

    (3) If you're using IDLE or the command-line python, I think that your console will display according to the default system encoding. I am using pydev with eclipse for some reason, so I had to go into my project settings, edit the launch configuration properties of my test script, go to the Common tab, and change the console from latin-1 to utf-8 so that I could visually confirm that what I was doing was working.

    (4) If you want to have some test strings, e.g. test_str = "ó", in your source code, then you will have to tell python what kind of encoding you are using in that file. (FYI: when I mistyped an encoding, I had to ctrl-Z because my file became unreadable.) This is easily accomplished by putting a line like this at the top of your source code file:

        # -*- coding: utf_8 -*-

    If you don't have this information, python attempts to parse your code as ascii by default, and so:

        SyntaxError: Non-ASCII character '\xf3' in file _redacted_ on line 81, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details

    Once your program is working correctly, or if you aren't using python's console or any other console to look at output, then you will probably really only care about #1 on the list. System default and console encoding are not that important unless you need to look at output and/or you are using the builtin unicode() function (without any encoding parameters) instead of the string.decode() function.

    I wrote a demo function, pasted at the bottom of this gigantic mess, that I hope correctly demonstrates the items in my list. Here is some of the output when I run the character 'ó' through the demo function, showing how various methods react to the character as input. My system encoding and console output are both set to utf_8 for this run:

        '?' = original char <type 'str'> repr(char)='\xf3'
        '?' = unicode(char) ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data
        'ó' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3'
        '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data

    Now I will change the system and console encoding to latin_1, and I get this output for the same input:

        'ó' = original char <type 'str'> repr(char)='\xf3'
        'ó' = unicode(char) <type 'unicode'> repr(unicode(char))=u'\xf3'
        'ó' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3'
        '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data

    Notice that the 'original' character displays correctly and the builtin unicode() function works now. Now I change my console output back to utf_8:

        '?' = original char <type 'str'> repr(char)='\xf3'
        '?' = unicode(char) <type 'unicode'> repr(unicode(char))=u'\xf3'
        '?' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3'
        '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data

    Here everything still works the same as last time, but the console can't display the output correctly. Etc.

    The function below also displays more information than this, and hopefully will help someone figure out where the gap in their understanding is. I know all this information is in other places and is more thoroughly dealt with there, but I hope this is a good kickoff point for someone trying to get going with python and/or sqlite. Ideas are great, but sometimes source code can save you a day or two of trying to figure out what functions do what.

    Disclaimers: I'm no encoding expert; I put this together to help my own understanding. I kept building on it when I should probably have started passing functions as arguments to avoid so much redundant code, so if I can I'll make it more concise. Also, utf_8 and latin_1 are by no means the only encoding schemes; they are just the two I was playing around with, because I think they handle everything I need. Add your own encoding schemes to the demo function and test your own input. One more thing: there are apparently crazy application developers making life difficult in Windows.

        #!/usr/bin/env python
        # -*- coding: utf_8 -*-
        import sys

        def encodingDemo(s):  # parameter renamed from 'str' so the builtin isn't shadowed
            validStrings = ()
            try:
                print "str =", s, "{0} repr(str) = {1}".format(type(s), repr(s))
                validStrings += ((s, ""),)
            except UnicodeEncodeError as uee:
                print "Couldn't print the str itself because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t",
                print uee
            try:
                x = unicode(s)
                print "unicode(str) = ", x
                validStrings += ((x, " decoded into unicode by the default system encoding"),)
            except UnicodeDecodeError as ude:
                print "ERROR. unicode(str) couldn't decode the string because the system encoding is set to an encoding that doesn't understand some character in the string."
                print "\tThe system encoding is set to {0}. See error:\n\t".format(sys.getdefaultencoding()),
                print ude
            except UnicodeEncodeError as uee:
                print "ERROR. Couldn't print the unicode(str) because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t",
                print uee
            try:
                x = s.decode('latin_1')
                print "str.decode('latin_1') =", x
                validStrings += ((x, " decoded with latin_1 into unicode"),)
                try:
                    print "str.decode('latin_1').encode('utf_8') =", s.decode('latin_1').encode('utf_8')
                    validStrings += ((x, " decoded with latin_1 into unicode and encoded into utf_8"),)
                except UnicodeEncodeError as uee:  # an *encode* error is what can occur here
                    print "The string was decoded into unicode using the latin_1 encoding, but couldn't be encoded into utf_8. See error:\n\t",
                    print uee
            except UnicodeDecodeError as ude:
                print "Something didn't work, probably because the string wasn't latin_1 encoded. See error:\n\t",
                print ude
            except UnicodeEncodeError as uee:
                print "ERROR. Couldn't print the str.decode('latin_1') because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t",
                print uee
            try:
                x = s.decode('utf_8')
                print "str.decode('utf_8') =", x
                validStrings += ((x, " decoded with utf_8 into unicode"),)
                try:
                    print "str.decode('utf_8').encode('latin_1') =", s.decode('utf_8').encode('latin_1')
                    validStrings += ((x, " decoded with utf_8 into unicode and encoded into latin_1"),)
                except UnicodeEncodeError as uee:  # an *encode* error is what can occur here
                    print "str.decode('utf_8').encode('latin_1') didn't work. The string was decoded into unicode using the utf_8 encoding, but couldn't be encoded into latin_1. See error:\n\t",
                    print uee
            except UnicodeDecodeError as ude:
                print "str.decode('utf_8') didn't work, probably because the string wasn't utf_8 encoded. See error:\n\t",
                print ude
            except UnicodeEncodeError as uee:
                print "ERROR. Couldn't print the str.decode('utf_8') because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t", uee

            print
            print "Printing information about each character in the original string."
            for char in s:
                try:
                    print "\t'" + char + "' = original char {0} repr(char)={1}".format(type(char), repr(char))
                except UnicodeDecodeError as ude:
                    print "\t'?' = original char {0} repr(char)={1} ERROR PRINTING: {2}".format(type(char), repr(char), ude)
                except UnicodeEncodeError as uee:
                    print "\t'?' = original char {0} repr(char)={1} ERROR PRINTING: {2}".format(type(char), repr(char), uee)
                try:
                    x = unicode(char)
                    print "\t'" + x + "' = unicode(char) {1} repr(unicode(char))={2}".format(x, type(x), repr(x))
                except UnicodeDecodeError as ude:
                    print "\t'?' = unicode(char) ERROR: {0}".format(ude)
                except UnicodeEncodeError as uee:
                    print "\t'?' = unicode(char) {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee)
                try:
                    x = char.decode('latin_1')
                    print "\t'" + x + "' = char.decode('latin_1') {1} repr(char.decode('latin_1'))={2}".format(x, type(x), repr(x))
                except UnicodeDecodeError as ude:
                    print "\t'?' = char.decode('latin_1') ERROR: {0}".format(ude)
                except UnicodeEncodeError as uee:
                    print "\t'?' = char.decode('latin_1') {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee)
                try:
                    x = char.decode('utf_8')
                    print "\t'" + x + "' = char.decode('utf_8') {1} repr(char.decode('utf_8'))={2}".format(x, type(x), repr(x))
                except UnicodeDecodeError as ude:
                    print "\t'?' = char.decode('utf_8') ERROR: {0}".format(ude)
                except UnicodeEncodeError as uee:
                    print "\t'?' = char.decode('utf_8') {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee)
                print

        x = 'ó'
        encodingDemo(x)

    Much thanks for the answers below, and especially to @John Machin for answering so thoroughly.
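    P.S. One more snippet that may save someone time: a tiny fallback decoder in the spirit of point (1) above. This is a sketch of the strategy I described (try utf_8, fall back to latin_1, which maps all 256 byte values and so can never fail), not a real encoding detector - genuinely mixed or exotic encodings will still come out mangled:

        def guess_decode(raw):
            # Try the stricter encoding first; latin_1 always succeeds.
            try:
                return raw.decode('utf_8')
            except UnicodeDecodeError:
                return raw.decode('latin_1')

        print repr(guess_decode('Sigur R\xf3s'))      # latin_1 bytes -> u'Sigur R\xf3s'
        print repr(guess_decode('Sigur R\xc3\xb3s'))  # utf_8 bytes   -> u'Sigur R\xf3s'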

    Read the article

  • Asp.NET custom templated datalist throws argument out of range (index) on button press

    - by MrTortoise
    I have a class BaseTemplate:

        public abstract class BaseTemplate : ITemplate

    This adds the controls and provides abstract methods to implement in the inheriting class. The inheriting class then adds its html according to its data source and manages the data binding. This all works fine - I get the control appearing with properly parsed html.

    The problem is that the base class adds controls into the template that have their own CommandName arguments; the idea is that the class that implements the custom templated DataList will provide the logic for setting the selected and edit indexes. This class also manages the data binding, etc. It sets all of the templates on the DataList in the Init method (which was another cause of this exception).

    The exception gets thrown when I hit one of these buttons - I have tried hooking up both their Click and Command events everywhere in case that was the problem. I have also ensured that their command names do not match any of the system ones. The stack trace does not include any references to my methods or objects, which is why I am so stuck - it is the most unhelpful message I can imagine. The really frustrating thing is that I cannot get a breakpoint to fire; i.e. the problem happens after I click the button, but before any of my code can execute.

    The last time this exception happened was when I had this code in a user control and was assigning the templates to the DataList in the page load. I moved those assignments into Init to fix that problem; however, this is a problem that was there then, and I have no idea what is causing it, let alone how to solve it (and "index out of range" doesn't really help without knowing which index).

    The exception details:

        System.ArgumentOutOfRangeException: Specified argument was out of the range of valid values.
        Parameter name: index

    The stack trace:

        [ArgumentOutOfRangeException: Specified argument was out of the range of valid values.
        Parameter name: index]
        System.Web.UI.ControlCollection.get_Item(Int32 index) +8665582
        System.Web.UI.WebControls.DataList.GetItem(ListItemType itemType, Int32 repeatIndex) +8667655
        System.Web.UI.WebControls.DataList.System.Web.UI.WebControls.IRepeatInfoUser.GetItemStyle(ListItemType itemType, Int32 repeatIndex) +11
        System.Web.UI.WebControls.RepeatInfo.RenderVerticalRepeater(HtmlTextWriter writer, IRepeatInfoUser user, Style controlStyle, WebControl baseControl) +8640873
        System.Web.UI.WebControls.RepeatInfo.RenderRepeater(HtmlTextWriter writer, IRepeatInfoUser user, Style controlStyle, WebControl baseControl) +27
        System.Web.UI.WebControls.DataList.RenderContents(HtmlTextWriter writer) +208
        System.Web.UI.WebControls.BaseDataList.Render(HtmlTextWriter writer) +30
        System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter) +27
        System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter) +99
        System.Web.UI.Control.RenderControl(HtmlTextWriter writer) +25
        System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +134
        System.Web.UI.Control.RenderChildren(HtmlTextWriter writer) +19
        System.Web.UI.HtmlControls.HtmlForm.RenderChildren(HtmlTextWriter writer) +163
        System.Web.UI.HtmlControls.HtmlContainerControl.Render(HtmlTextWriter writer) +32
        System.Web.UI.HtmlControls.HtmlForm.Render(HtmlTextWriter output) +51
        System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter) +27
        System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter) +99
        System.Web.UI.HtmlControls.HtmlForm.RenderControl(HtmlTextWriter writer) +40
        System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +134
        System.Web.UI.Control.RenderChildren(HtmlTextWriter writer) +19
        System.Web.UI.Page.Render(HtmlTextWriter writer) +29
        System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter) +27
        System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter) +99
        System.Web.UI.Control.RenderControl(HtmlTextWriter writer) +25
        System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +1266

    The code. Base class:

        public abstract class BaseTemplate : ITemplate
        {
            ListItemType _templateType;

            public BaseTemplate(ListItemType theTemplateType)
            {
                _templateType = theTemplateType;
            }

            public ListItemType ListItemType
            {
                get { return _templateType; }
            }

            #region ITemplate Members

            public void InstantiateIn(Control container)
            {
                PlaceHolder ph = new PlaceHolder();
                container.Controls.Add(ph);
                Literal l = new Literal();
                switch (_templateType)
                {
                    case ListItemType.Header:
                    {
                        ph.Controls.Add(new LiteralControl(@"<table><tr>"));
                        InstantiateInHeader(ph);
                        ph.Controls.Add(new LiteralControl(@"</tr>"));
                        break;
                    }
                    case ListItemType.Footer:
                    {
                        ph.Controls.Add(new LiteralControl(@"<tr>"));
                        InstantiateInFooter(ph);
                        ph.Controls.Add(new LiteralControl(@"</tr></table>"));
                        break;
                    }
                    case ListItemType.Item:
                    {
                        ph.Controls.Add(new LiteralControl(@"<tr>"));
                        InstantiateInItem(ph);
                        ph.Controls.Add(new LiteralControl(@"<td>"));
                        Button select = new Button();
                        select.ID = "btnSelect";
                        select.CommandName = "SelectRow";
                        select.Text = "Select";
                        ph.Controls.Add(select);
                        ph.Controls.Add(new LiteralControl(@"</td>"));
                        ph.Controls.Add(new LiteralControl(@"</tr>"));
                        ph.DataBinding += new EventHandler(ph_DataBinding);
                        break;
                    }
                    case ListItemType.AlternatingItem:
                    {
                        ph.Controls.Add(new LiteralControl(@"<tr>"));
                        InstantiateInAlternatingItem(ph);
                        ph.Controls.Add(new LiteralControl(@"<td>"));
                        Button select = new Button();
                        select.ID = "btnSelect";
                        select.CommandName = "SelectRow";
                        select.Text = "Select";
                        ph.Controls.Add(select);
                        ph.Controls.Add(new LiteralControl(@"</td>"));
                        ph.Controls.Add(new LiteralControl(@"</tr>"));
                        ph.DataBinding += new EventHandler(ph_DataBinding);
                        break;
                    }
                    case ListItemType.SelectedItem:
                    {
                        ph.Controls.Add(new LiteralControl(@"<tr>"));
                        InstantiateInItem(ph);
                        ph.Controls.Add(new LiteralControl(@"<td>"));
                        Button edit = new Button();
                        edit.ID = "btnEdit";
                        edit.CommandName = "EditRow";
                        edit.Text = "Edit";
                        ph.Controls.Add(edit);
                        Button delete = new Button();
                        delete.ID = "btnDelete";
                        delete.CommandName = "DeleteRow";
                        delete.Text = "Delete";
                        ph.Controls.Add(delete);
                        ph.Controls.Add(new LiteralControl(@"</td>"));
                        ph.Controls.Add(new LiteralControl(@"</tr>"));
                        ph.DataBinding += new EventHandler(ph_DataBinding);
                        break;
                    }
                    case ListItemType.EditItem:
                    {
                        ph.Controls.Add(new LiteralControl(@"<tr>"));
                        InstantiateInEdit(ph);
                        ph.Controls.Add(new LiteralControl(@"<td>"));
                        Button save = new Button();
                        save.ID = "btnSave";
                        save.CommandName = "SaveRow";
                        save.Text = "Save";
                        ph.Controls.Add(save);
                        Button cancel = new Button();
                        cancel.ID = "btnCancel";
                        cancel.CommandName = "CancelRow";
                        cancel.Text = "Cancel";
                        ph.Controls.Add(cancel);
                        ph.Controls.Add(new LiteralControl(@"</td>"));
                        ph.Controls.Add(new LiteralControl(@"</tr>"));
                        ph.DataBinding += new EventHandler(ph_DataBinding);
                        break;
                    }
                    case ListItemType.Separator:
                    {
                        InstantiateInSeperator(ph);
                        break;
                    }
                }
            }

            void ph_DataBinding(object sender, EventArgs e)
            {
                DataBindingOverride(sender, e);
            }

            /// <summary>The controls placed into the PlaceHolder will get wrapped in &lt;table&gt;&lt;tr&gt; ... &lt;/tr&gt;, i.e. you need to provide the column names wrapped in &lt;td&gt;&lt;/td&gt; tags.</summary>
            public abstract void InstantiateInHeader(PlaceHolder ph);

            /// <summary>The controls will have a column added after them, and so require each column to be properly wrapped in &lt;td&gt;&lt;/td&gt; tags. The &lt;tr&gt;&lt;/tr&gt; is handled in the base class.</summary>
            public abstract void InstantiateInItem(PlaceHolder ph);

            /// <summary>The controls will have a column added after them, and so require each column to be properly wrapped in &lt;td&gt;&lt;/td&gt; tags. The &lt;tr&gt;&lt;/tr&gt; is handled in the base class.</summary>
            public abstract void InstantiateInAlternatingItem(PlaceHolder ph);

            /// <summary>The controls will have a column added after them, and so require each column to be properly wrapped in &lt;td&gt;&lt;/td&gt; tags. The &lt;tr&gt;&lt;/tr&gt; is handled in the base class.</summary>
            public abstract void InstantiateInEdit(PlaceHolder ph);

            /// <summary>Any html used in the footer will have &lt;/tr&gt;&lt;/table&gt; appended to the end; &lt;tr&gt; will be appended to the front.</summary>
            public abstract void InstantiateInFooter(PlaceHolder ph);

            /// <summary>Same wrapping rules as InstantiateInItem. Adds Delete and Edit buttons after the table contents.</summary>
            public abstract void InstantiateInSelectedItem(PlaceHolder ph);

            /// <summary>The base class provides no &lt;tr&gt;&lt;/tr&gt; tags.</summary>
            public abstract void InstantiateInSeperator(PlaceHolder ph);

            /// <summary>Use this method to bind the controls to their data.</summary>
            public abstract void DataBindingOverride(object sender, EventArgs e);

            #endregion
        }

    Inheriting class:

        public class NominalGroupTemplate : BaseTemplate
        {
            public NominalGroupTemplate(ListItemType theListItemType)
                : base(theListItemType)
            {
            }

            public override void InstantiateInHeader(PlaceHolder ph)
            {
                ph.Controls.Add(new LiteralControl(@"<td>ID</td><td>Group</td><td>IsPositive</td>"));
            }

            public override void InstantiateInItem(PlaceHolder ph)
            {
                ph.Controls.Add(new LiteralControl(@"<td>"));
                Label lblID = new Label();
                lblID.ID = "lblID";
                ph.Controls.Add(lblID);
                ph.Controls.Add(new LiteralControl(@"</td><td>"));
                Label lblGroup = new Label();
                lblGroup.ID = "lblGroup";
                ph.Controls.Add(lblGroup);
                ph.Controls.Add(new LiteralControl(@"</td><td>"));
                CheckBox chkIsPositive = new CheckBox();
                chkIsPositive.ID = "chkIsPositive";
                chkIsPositive.Enabled = false;
                ph.Controls.Add(chkIsPositive);
                ph.Controls.Add(new LiteralControl(@"</td>"));
            }

            public override void InstantiateInAlternatingItem(PlaceHolder ph)
            {
                InstantiateInItem(ph);
            }

            public override void InstantiateInEdit(PlaceHolder ph)
            {
                ph.Controls.Add(new LiteralControl(@"<td>"));
                Label lblID = new Label();
                lblID.ID = "lblID";
                ph.Controls.Add(lblID);
                ph.Controls.Add(new LiteralControl(@"</td><td>"));
                TextBox txtGroup = new TextBox();
                txtGroup.ID = "txtGroup";
                txtGroup.Visible = true;
                txtGroup.Enabled = true;
                ph.Controls.Add(txtGroup);
                ph.Controls.Add(new LiteralControl(@"</td><td>"));
                CheckBox chkIsPositive = new CheckBox();
                chkIsPositive.ID = "chkIsPositive";
                chkIsPositive.Visible = true;
                chkIsPositive.Enabled = true;
                ph.Controls.Add(chkIsPositive);
                ph.Controls.Add(new LiteralControl(@"</td>"));
            }

            public override void InstantiateInFooter(PlaceHolder ph)
            {
                InstantiateInHeader(ph);
            }

            public override void InstantiateInSelectedItem(PlaceHolder ph)
            {
                ph.Controls.Add(new LiteralControl(@"<td>"));
                Label lblID = new Label();
                lblID.ID = "lblID";
                ph.Controls.Add(lblID);
                ph.Controls.Add(new LiteralControl(@"</td><td>"));
                TextBox txtGroup = new TextBox();
                txtGroup.ID = "txtGroup";
                txtGroup.Visible = true;
                txtGroup.Enabled = false;
                ph.Controls.Add(txtGroup);
                ph.Controls.Add(new LiteralControl(@"</td><td>"));
                CheckBox chkIsPositive = new CheckBox();
                chkIsPositive.ID = "chkIsPositive";
                chkIsPositive.Visible = true;
                chkIsPositive.Enabled = false;
                ph.Controls.Add(chkIsPositive);
                ph.Controls.Add(new LiteralControl(@"</td>"));
            }

            public override void InstantiateInSeperator(PlaceHolder ph)
            {
            }

            public override void DataBindingOverride(object sender, EventArgs e)
            {
                PlaceHolder ph = (PlaceHolder)sender;
                DataListItem li = (DataListItem)ph.NamingContainer;
                int id = Convert.ToInt32(DataBinder.Eval(li.DataItem, "ID"));
                string group = (string)DataBinder.Eval(li.DataItem, "Group");
                bool isPositive = Convert.ToBoolean(DataBinder.Eval(li.DataItem, "IsPositive"));
                switch (this.ListItemType)
                {
                    case ListItemType.Item:
                    case ListItemType.AlternatingItem:
                    {
                        ((Label)ph.FindControl("lblID")).Text = id.ToString();
                        ((Label)ph.FindControl("lblGroup")).Text = group;
                        ((CheckBox)ph.FindControl("chkIsPositive")).Text = isPositive.ToString();
                        break;
                    }
                    case ListItemType.EditItem:
                    case ListItemType.SelectedItem:
                    {
                        ((TextBox)ph.FindControl("lblID")).Text = id.ToString();
                        ((TextBox)ph.FindControl("txtGroup")).Text = group;
                        ((CheckBox)ph.FindControl("chkIsPositive")).Text = isPositive.ToString();
                        break;
                    }
                }
            }
        }

    From here I added the control to a page; the code-behind:

        public partial class NominalGroupbroke : System.Web.UI.UserControl
        {
            public void SetNominalGroupList(IList<BONominalGroup> theNominalGroups)
            {
                XElement data = Serialiser<BONominalGroup>.SerialiseObjectList(theNominalGroups);
                ViewState.Add("nominalGroups", data.ToString());
                dlNominalGroup.DataSource = theNominalGroups;
                dlNominalGroup.DataBind();
            }

            protected void Page_init()
            {
                dlNominalGroup.HeaderTemplate = new NominalGroupTemplate(ListItemType.Header);
                dlNominalGroup.ItemTemplate = new NominalGroupTemplate(ListItemType.Item);
                dlNominalGroup.AlternatingItemTemplate = new NominalGroupTemplate(ListItemType.AlternatingItem);
                dlNominalGroup.SeparatorTemplate = new NominalGroupTemplate(ListItemType.Separator);
                dlNominalGroup.SelectedItemTemplate = new NominalGroupTemplate(ListItemType.SelectedItem);
                dlNominalGroup.EditItemTemplate = new NominalGroupTemplate(ListItemType.EditItem);
                dlNominalGroup.FooterTemplate = new NominalGroupTemplate(ListItemType.Footer);
            }

            protected void Page_Load(object sender, EventArgs e)
            {
                dlNominalGroup.ItemCommand += new DataListCommandEventHandler(dlNominalGroup_ItemCommand);
            }

            void dlNominalGroup_Init(object sender, EventArgs e)
            {
                dlNominalGroup.HeaderTemplate = new NominalGroupTemplate(ListItemType.Header);
                dlNominalGroup.ItemTemplate = new NominalGroupTemplate(ListItemType.Item);
                dlNominalGroup.AlternatingItemTemplate = new NominalGroupTemplate(ListItemType.AlternatingItem);
                dlNominalGroup.SeparatorTemplate = new NominalGroupTemplate(ListItemType.Separator);
                dlNominalGroup.SelectedItemTemplate = new NominalGroupTemplate(ListItemType.SelectedItem);
                dlNominalGroup.EditItemTemplate = new NominalGroupTemplate(ListItemType.EditItem);
                dlNominalGroup.FooterTemplate = new NominalGroupTemplate(ListItemType.Footer);
            }

            void dlNominalGroup_DataBinding(object sender, EventArgs e)
            {
            }

            void deleteNominalGroup(int index)
            {
                XElement data = XElement.Parse(Convert.ToString(ViewState["nominalGroups"]));
                IList<BONominalGroup> list = Serialiser<BONominalGroup>.DeserialiseObjectList(data);
                FENominalGroup.DeleteNominalGroup(list[index].ID);
                list.RemoveAt(index);
                data = Serialiser<BONominalGroup>.SerialiseObjectList(list);
                ViewState["nominalGroups"] = data.ToString();
                dlNominalGroup.DataSource = list;
                dlNominalGroup.DataBind();
            }

            void updateNominalGroup(DataListItem theItem)
            {
                XElement data = XElement.Parse(Convert.ToString(ViewState["nominalGroups"]));
                IList<BONominalGroup> list = Serialiser<BONominalGroup>.DeserialiseObjectList(data);
                BONominalGroup old = list[theItem.ItemIndex];
                BONominalGroup n = new BONominalGroup();
                byte id = Convert.ToByte(((TextBox)theItem.FindControl("lblID")).Text);
                string group = ((TextBox)theItem.FindControl("txtGroup")).Text;
                bool isPositive = Convert.ToBoolean(((CheckBox)theItem.FindControl("chkIsPositive")).Text);
                n.ID = id;
                n.Group = group;
                n.IsPositive = isPositive;
                FENominalGroup.UpdateNominalGroup(old, n);
                list[theItem.ItemIndex] = n;
                data = Serialiser<BONominalGroup>.SerialiseObjectList(list);
                ViewState["nominalGroups"] = data.ToString();
            }

            void dlNominalGroup_ItemCommand(object source, DataListCommandEventArgs e)
            {
                DataList l = (DataList)source;
                switch (e.CommandName)
                {
                    case "SelectRow":
                    {
                        if (l.EditItemIndex == -1)
                        {
                            l.SelectedIndex = e.Item.ItemIndex;
                            l.EditItemIndex = -1;
                        }
                        break;
                    }
                    case "EditRow":
                    {
                        if (l.SelectedIndex == e.Item.ItemIndex)
                        {
                            l.EditItemIndex = e.Item.ItemIndex;
                        }
                        break;
                    }
                    case "DeleteRow":
                    {
                        deleteNominalGroup(e.Item.ItemIndex);
                        l.EditItemIndex = -1;
                        try { l.SelectedIndex = e.Item.ItemIndex; }
                        catch { l.SelectedIndex = -1; }
                        break;
                    }
                    case "CancelRow":
                    {
                        l.SelectedIndex = l.EditItemIndex;
                        l.EditItemIndex = -1;
                        break;
                    }
                    case "SaveRow":
                    {
                        updateNominalGroup(e.Item);
                        try { l.SelectedIndex = e.Item.ItemIndex; }
                        catch { l.SelectedIndex = -1; }
                        l.EditItemIndex = -1;
                        break;
                    }
                }
            }
        }

    Lots of code there, I'm afraid, but it should build. Thanks if anyone manages to spot my silliness.

    The BONominalGroup class (please ignore my crazy GetHashCode override; I'm not proud of it). IAudit can just be an empty interface here and all will be fine. It used to inherit from another class that I have since cleaned out, so the serialization logic may be broken here.

        public class BONominalGroup
        {
            public BONominalGroup() { }

            #region Fields and properties

            private Int16 _ID;
            public Int16 ID { get { return _ID; } set { _ID = value; } }

            private string _group;
            public string Group { get { return _group; } set { _group = value; } }

            private bool _isPositve;
            public bool IsPositive { get { return _isPositve; } set { _isPositve = value; } }

            #endregion

            public override bool Equals(object obj)
            {
                bool retVal = false;
                BONominalGroup ng = obj as BONominalGroup;
                if (ng != null)
                    if (ng._group == this._group && ng._ID == this.ID && ng.IsPositive == this.IsPositive)
                    {
                        retVal = true;
                    }
                return retVal;
            }

            public override int GetHashCode()
            {
                return ToString().GetHashCode();
            }

            public override string ToString()
            {
                return "BONominalGroup{ID:" + this.ID.ToString() + ",Group:" + this.Group.ToString() + ",IsPositive:" + this.IsPositive.ToString() + "," + "}";
            }

            #region IXmlSerializable Members

            public override void ReadXml(XmlReader reader)
            {
                reader.ReadStartElement("BONominalGroup");
                this.ID = Convert.ToByte(reader.ReadElementString("id"));
                this.Group = reader.ReadElementString("group");
                this.IsPositive = Convert.ToBoolean(reader.ReadElementString("isPositive"));
                base.ReadXml(reader);
                reader.ReadEndElement();
            }

            public override void WriteXml(XmlWriter writer)
            {
                writer.WriteElementString("id", this.ID.ToString());
                writer.WriteElementString("group", this.Group);
                writer.WriteElementString("isPositive", this.IsPositive.ToString());
                // writer.WriteStartElement("BOBase");
                // base.WriteXml(writer);
                writer.WriteEndElement();
            }

            #endregion
        }

    Read the article
