Search Results

Search found 1438 results on 58 pages for 'nick sweet'.


  • XNA Notes 006

    - by George Clingerman
    If you used to think the XNA community was small and inactive, hopefully these XNA Notes are opening your eyes. And I honestly feel like I’m still only catching the tail end of everything that’s going on. It’s a large and active community, and you can be so mired down in one part of it that you miss all sorts of cool stuff another part is doing. XNA is many things to a lot of people, and that makes for a lot of really awesome things going on. So here’s what I saw going on this last week!

    Time Critical XNA News:
    - XNA Team - Peer Review now closes for XNA 3.1 games http://blogs.msdn.com/b/xna/archive/2011/02/08/peer-review-pipeline-closed-for-new-xna-gs-3-1-games-or-updates-on-app-hub.aspx http://twitter.com/XNACommunity/statuses/34649816529256448

    The XNA Team:
    - The XNA Team posts about a meet-up with Microsoft for Creators at GDC, March 3rd at the Lobby Bar http://on.fb.me/fZungJ
    - @mklucher is busy playing Bubblegum on WP7, made by a member of the XNA team (although reportedly made in Silverlight? Crazy! ;) ) http://twitter.com/mklucher/statuses/34645662737895426 http://bubblegum.me
    - Shawn Hargreaves posts multiple posts (is this a sign that something new is coming from the XNA team? Usually when Shawn has time to post, something has just wrapped up…): Random Shuffle http://blogs.msdn.com/b/shawnhar/archive/2011/02/09/random-shuffle.aspx and Doing the right thing: resume, rewind or skip ahead http://blogs.msdn.com/b/shawnhar/archive/2011/02/10/doing-the-right-thing-resume-rewind-or-skip-ahead.aspx

    XNA Developers:
    - Andrew Russell was on .NET Rocks recently, talking with Carl and Richard about developing games for Xbox, iPhone and Android http://www.dotnetrocks.com/default.aspx?ShowNum=635
    - Eric W. releases the Fishing Girl source code into the wild http://ericw.ca/blog/posts/fishing-girl-now-open-source/ http://forums.create.msdn.com/forums/p/74642/454512.aspx#454512
    - BinaryTweedDeej reminds the XNA community that Indie City wants you involved http://twitter.com/BinaryTweedDeej/statuses/34596114028044288 http://www.indiecity.com
    - Mike McLaughlin (@mikebmcl) releases his first two XNA articles on the TechNet wiki http://social.technet.microsoft.com/wiki/contents/articles/xna-framework-overview.aspx http://social.technet.microsoft.com/wiki/contents/articles/content-pipeline-overview.aspx
    - John Watte plays around with the Content Pipeline and music visualization, exploring just what can be done http://www.enchantedage.com/xna-content-pipeline-fft-song-analysis http://www.enchantedage.com/fft-in-xna-content-pipeline-for-beat-detection-for-the-win
    - Simon Stevens writes up his talk on Vector Collision Physics http://www.simonpstevens.com/News/VectorCollisionPhysics
    - @domipheus puts together an XNA Task Manager http://www.flickr.com/photos/domipheus/5405603197/
    - MadNinjaSkillz releases his fork of Nick's Easy Storage component on CodePlex http://twitter.com/MadNinjaSkillz/statuses/34739039068229634 http://ezstorage.codeplex.com
    - @ActiveNick was interviewed by Rob Cameron and discusses Windows Phone 7, Bing Maps and XNA http://twitter.com/ActiveNick/statuses/35348548526546944 http://msdn.microsoft.com/en-us/cc537546
    - Radiangames (Luke Schneider) posts about converting his games from XNA to Unity http://radiangames.com/?p=592
    - UberMonkey (@ElementCy) posts about a new project in the works, CubeTest, a Minecraft-style terrain http://www.ubergamermonkey.com/personal-projects/new-project-in-the-works/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Ubergamermonkey+%28UberGamerMonkey%29

    Xbox LIVE Indie Games (XBLIG):
    - VideoGamer Rob reviews Bonded Realities http://videogamerrob.wordpress.com/2011/02/05/xblig-review-bonded-realities/
    - XBLIG Round Up on Gamergeddon http://www.gamergeddon.com/2011/02/06/xbox-indie-game-round-up-february-6th/
    - Are gamers still rating Indie Games after the Xbox Dashboard update? http://www.gamemarx.com/news/2011/02/06/are-gamers-still-rating-indie-games-after-the-xbox-dashboard-update.aspx
    - Joystiq - Xbox Live Indie Gems: Corrupted http://www.joystiq.com/2011/02/04/xbox-live-indie-gems-corrupted/
    - Raymond Matthews of DarkStarMatryx reviews (Almost) Total Mayhem and Aban Hawkins & the 1000 Spikes http://www.darkstarmatryx.com/?p=225 http://www.darkstarmatryx.com/?p=229
    - 8 Bit Horse reviews Aban Hawkins & the 1000 Spikes http://8bithorse.blogspot.com/2011/01/aban-hawkins-1000-spikes-xbl-indie.html
    - 2010 wrap-up for FunInfused Games http://www.krissteele.net/blogdetails.aspx?id=245
    - NeoGaf roundup of January's XBLIGs http://www.neogaf.com/forum/showthread.php?t=420528
    - Armless Octopus interviews Michael Ventnor, creator of Bonded Realities http://www.armlessoctopus.com/2011/02/07/interview-michael-ventnor-of-red-crest-studios/
    - @recharge_media posts about the new city music for Woodvale in Sin Rising http://rechargemedia.com/2011/02/08/new-city-theme-woodvale/
    - @DrMisty posts some footage of YoYoYo in action http://www.mstargames.co.uk/mistryblogmain/54-yoyoyoblogs/184-video-update.html
    - Xona Games - Decimation X3 on Reviews on the Run http://video.citytv.com/video/detail/782443063001.000000/reviews-on-the-run--february-8-2011/g4/
    - @benkane gives an early peek at his action RPG coming to XBLIG http://www.youtube.com/watch?v=bDF_PrvtwU8
    - Rock, Paper, Shotgun talks to Zeboyd Games about bringing Cthulhu Saves the World to PC http://www.rockpapershotgun.com/2011/02/11/summoning-cthulhu-natter-with-zeboyd/
    - Xbox LIVE Indieverse interviews the creator of Bonded Realities http://xbl-indieverse.blogspot.com/2011/02/xbl-indieverse-interview-red-crest.html

    XNA Game Development:
    - Dream-In-Code posts about an upcoming XNA challenge/coding contest http://www.dreamincode.net/forums/blog/1385/entry-3192-xna-challengecontest/
    - Sgt.Conker covers Fishing Girl and the IndieFreaks Game Framework release http://www.sgtconker.com/2011/02/fishing-girl-did-not-sell-a-single-copy/ http://www.sgtconker.com/2011/02/indiefreaks-game-framework-v0-2-0-0/
    - @slyprid releases Transmute v0.40a with lots of new features and fixes http://twitter.com/slyprid/statuses/34125423067533312 http://twitter.com/slyprid/statuses/35326876243337216 http://forgottenstarstudios.com/
    - Jeff Brown writes an XNA 4.0 tutorial on saving/loading on the Xbox 360 http://www.robotfootgames.com/xna-tutorials/92-xna-tutorial-savingloading-on-xbox-360-40
    - XNA for Silverlight Developers: Part 3 – Animation http://www.silverlightshow.net/items/XNA-for-Silverlight-developers-Part-3-Animation-transforms.aspx?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+xna-connection-twitter-specific-stream+%28XNA+Connection%27s+Twitter+specific+stream%29
    - The news from Nokia is definitely something XNA developers will want to keep their eye on http://blogs.forum.nokia.com/blog/nokia-developer-news/2011/02/11/letter-to-developers?sf1066337=1

    Read the article

  • Top 4 Lame Tech Blogging Posts

    - by jkauffman
    From a consumption point of view, tech blogging is a great resource for one-off articles on niche subjects. If you spend any time reading tech blogs, you may find yourself running into several common, useless types of posts tech bloggers slip into. Some of these lame posts may just be natural due to common nerd psychology, and some others are probably due to lame, lemming-like laziness. I’m sure I’ll do my fair share of fitting the mold, but I quickly get bored when I happen upon posts that hit these patterns without any real purpose or personal touches.

    1. The Content Regurgitation Posts

    This is a common pattern fueled by the starving pan-handlers in the web traffic economy. These are posts that are terse opinions or addendums to an existing post. I commonly see these involve huge block quotes from the linked article, which almost always produce over 50% of the post itself. I’ve accidentally gone to these posts when I’m knowingly only interested in the source material. Web links can degrade as well, so if the source link is broken, then, well, I’m pretty steamed. I see this occur with simple opinions on technologies, Stack Overflow solutions, or various tech news like posts from Microsoft. It’s not uncommon to go to the linked article and see the author announce that he “added a blog post” as a response or summary of the topic. This is just rude, but those who do it are probably aware of this. It’s a matter of winning that sweet, juicy web traffic. I doubt this leeching is fooling anybody these days. I would like to rally human dignity and urge people to avoid these types of posts, and just leave a comment on the source material.

    2. The “Sorry I Haven’t Posted In A While” Posts

    This one is far too common. You’ll most likely see this quote somewhere in the body of the offending post: I have been really busy. If the poster is especially guilt-ridden, you’ll see a few volleys of excuses. Here are some common reasons I’ve seen, which I’ll list from least to most painfully awkward:
    - Out of town
    - Vague allusions to personal health problems (these typically include phrases like “sick”, “treatment”, and “all better now!”)
    - “Personal issues” (which I usually read as “divorce”)
    - Graphic or specific personal health problems (maximum awkwardness potential is achieved if you see links to charity fund websites)

    I can’t help but try to over-analyze why this occurs. Personally, I see this as an amalgamation of three plain factors:
    - Life happens
    - Us nerds are duty-driven, and driven to guilt at personal inefficiencies
    - Tech blogs can become personal journals

    I don’t think we can do much about the first two, but on the third I think we could certainly contain our urges. I’m a pretty boring guy and, whether I like it or not, I have an unspoken duty to protect the world from hearing about my unremarkable existence. Nobody cares what kind of sandwich I’m eating. Similarly, if I disappear for a while, it’s unlikely that anybody who happens upon my blog would care why. Rest assured, if I stop posting for a while due to a vasectomy, you will be the first to know.

    3. The “At A Conference”, or “Conference Review” Posts

    I don’t know if I’m like everyone else on this one, but I have never been successfully interested in these posts. It even sounds like a good idea: if I can’t make it to a particular conference (like the KCDC this year), wouldn’t I be interested in a concentrated summary of events? Apparently, no! Within this realm, I’ve never read a post by a blogger that held my interest. What really baffles me is that, for whatever reason, I am genuinely engaged and interested when talking to someone in person regarding the same topic. I have noticed the same phenomenon when hearing about others’ vacations. If someone sends me an email about their vacation, I gloss over it and forget about it quickly. In contrast, if I’m speaking to that individual in person about their vacation, I’m actually interested. I’m unsure why the written medium eradicates the intrigue. I was raised by a roaming pack of friendly wild video games, so that may be a factor.

    4. The “Top X Number of Y’s That Z” Posts

    I’ve seen this one crop up a lot more in the past few years. Here are some fabricated examples:
    - 5 Easy Ways to Improve Your Code
    - Top 7 Good Habits Programmers Learn From Experience
    - The 8 Things to Consider When Giving Estimates
    - Top 4 Lame Tech Blogging Posts

    These are attention-grabbing headlines, and I’d assume they rack up hits. In fact, I enjoy a good number of these. But I’ve been drawn to articles like this just to find an endless list of identically formatted posts in the blog’s archive sidebar. Oftentimes these posts have overlapping topics, too. These types of posts give the impression that the author has thought to prioritize and organize the points as the result of comprehensively considering a particular topic. Did the author really weigh all the possibilities when identifying the “Top 4 Lame Tech Blogging Patterns”? Unfortunately, probably not. What a tool. To reiterate, I still enjoy the format, but I feel it is abused. Nowadays, I’m pretty skeptical when approaching posts in this format. If these trends continue, my brain will filter these blog posts out just as effectively as it ignores the encroaching “do xxx with this one trick” advertisements.

    Conclusion

    To active blog readers, I hope my guide has saved you precious time by helping you identify lame blog posts at a glance. Save time and energy by skipping over the chaff of the internet! And if you author a blog, perhaps my insight will help you avoid the occasional urge to produce these needless filler posts.

    Read the article

  • Rendering ASP.NET MVC Razor Views outside of MVC revisited

    - by Rick Strahl
    Last year I posted a detailed article on how to render Razor Views to string, both inside of ASP.NET MVC and outside of it. In that article I showed several different approaches to capture the rendering output. The first and easiest is to use an existing MVC ControllerContext to render a view – simply passing the controller context is fairly trivial, and I demonstrated a simple ViewRenderer class that simplified the process down to a couple of lines of code. However, if no ControllerContext is available, the process is not quite as straightforward, and I referenced an old, much more complex example that uses my RazorHosting library, which is a custom, self-contained implementation of the Razor templating engine that can be hosted completely outside of ASP.NET. While it works inside of ASP.NET, it’s an awkward solution there, because it requires a bit of setup to run efficiently.

    Well, it turns out that I missed something in the original article, namely that it is possible to create a ControllerContext if you have a controller instance, even if MVC didn’t create that instance.

    Creating a Controller Instance outside of MVC

    The trick to make this work is to create an MVC Controller instance – any Controller instance – and then configure a ControllerContext through that instance. As long as an HttpContext.Current is available, it’s possible to create a fully functional controller context, as Razor can get all the necessary context information from the HttpContextWrapper. The key to making this work is the following method:

        /// <summary>
        /// Creates an instance of an MVC controller from scratch
        /// when no existing ControllerContext is present
        /// </summary>
        /// <typeparam name="T">Type of the controller to create</typeparam>
        /// <returns>Controller Context for T</returns>
        /// <exception cref="InvalidOperationException">thrown if HttpContext not available</exception>
        public static T CreateController<T>(RouteData routeData = null)
            where T : Controller, new()
        {
            // create a disconnected controller instance
            T controller = new T();

            // get context wrapper from HttpContext if available
            HttpContextBase wrapper = null;
            if (HttpContext.Current != null)
                wrapper = new HttpContextWrapper(System.Web.HttpContext.Current);
            else
                throw new InvalidOperationException(
                    "Can't create Controller Context if no active HttpContext instance is available.");

            if (routeData == null)
                routeData = new RouteData();

            // add the controller routing if not existing
            if (!routeData.Values.ContainsKey("controller") &&
                !routeData.Values.ContainsKey("Controller"))
                routeData.Values.Add("controller",
                    controller.GetType().Name
                        .ToLower()
                        .Replace("controller", ""));

            controller.ControllerContext = new ControllerContext(wrapper, routeData, controller);
            return controller;
        }

    This method creates an instance of a Controller class from an existing HttpContext, which means this code should work from anywhere within ASP.NET to create a controller instance that’s ready to be rendered. This means you can use this from within an Application_Error handler, as I needed to, or even from within a Web API controller, as long as it’s running inside of ASP.NET (i.e. not self-hosted). Nice.

    So, using the ViewRenderer class from the previous article, I can now very easily render an MVC view outside of the context of MVC.
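    In its simplest form, the pattern that enables looks something like the sketch below. The HomeController, view path and model here are hypothetical placeholders for illustration; ViewRenderer, CreateController and RenderView are the pieces described in this post:

        // Somewhere inside ASP.NET where HttpContext.Current is available,
        // but no MVC ControllerContext exists (module, handler, service code):
        var context = ViewRenderer.CreateController<HomeController>().ControllerContext;
        var renderer = new ViewRenderer(context);
        string html = renderer.RenderView("~/Views/Email/OrderConfirmation.cshtml", model);
        // 'html' can now be written to the response, mailed, or logged.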
    Here’s what I ended up with in my application’s custom error HttpModule:

        protected override void OnDisplayError(WebErrorHandler errorHandler, ErrorViewModel model)
        {
            var Response = HttpContext.Current.Response;
            Response.ContentType = "text/html";
            Response.StatusCode = errorHandler.OriginalHttpStatusCode;

            var context = ViewRenderer.CreateController<ErrorController>().ControllerContext;
            var renderer = new ViewRenderer(context);
            string html = renderer.RenderView("~/Views/Shared/GenericError.cshtml", model);

            Response.Write(html);
        }

    That’s pretty sweet, because it’s now possible to use ViewRenderer just about anywhere in any ASP.NET application, not only inside of controller code. This also allows the constructor for the ViewRenderer from the last article to work without a controller context parameter, using a generic view as a base for the controller context when one isn’t passed:

        public ViewRenderer(ControllerContext controllerContext = null)
        {
            // Create a known controller from HttpContext if no context is passed
            if (controllerContext == null)
            {
                if (HttpContext.Current != null)
                    controllerContext = CreateController<ErrorController>().ControllerContext;
                else
                    throw new InvalidOperationException(
                        "ViewRenderer must run in the context of an ASP.NET " +
                        "Application and requires HttpContext.Current to be present.");
            }
            Context = controllerContext;
        }

    In this case I use the ErrorController class, which is a generic controller instance that exists in the same assembly as my ViewRenderer class, and that works just fine, since ‘generically’ rendered views tend not to rely on anything from the controller other than the model, which is explicitly passed.

    While these days most of my apps use MVC, I do still have a number of generic pieces in most of these applications where Razor comes in handy. This includes modules like the above, which, when they error, often need to display error output. In other cases I need to generate string template output for emailing or logging data to disk. Being able to simply render an arbitrary View and pass in a model makes this super nice and easy, at least within the context of an ASP.NET application!

    You can check out the updated ViewRenderer class below to render your ‘generic views’ from anywhere within your ASP.NET applications. Hope some of you find this useful.

    Resources:
    - ViewRenderer Class in Westwind.Web.Mvc Library (GitHub)
    - Original ViewRenderer Article
    - Razor Hosting Library (GitHub)
    - Original Razor Hosting Article

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in ASP.NET, MVC

    Read the article

  • ISO Files to USB – The Cheap and Easy Way

    - by RonGarlit
    (DISCLAIMER: Yes, there is lots of more elegant ISO software besides the free Microsoft tool I’m about to show. But free is free, and it has been tested and works for me, even for making advanced bootable USB drives. That is another story. Look up Windows 8 Developer Preview for that one on BING.)

    Those of us who work with new technology all the time accumulate a lot of ISO files and have to burn them to CD/DVDs quite often. But we now have machines without burners in the corporate environment. We personally have netbooks and lightweight, highly mobile laptops that do not have DVD burners. USB ports are all the rage, and now we have USB 3.0, which is way faster than the 2.0 we are used to. Looking at the technology, the space savings and the cost issues alone is reason enough to buy these answers to the DVD.

    So what is special about USB 2.0 and USB 3.0? USB 2.0 has a maximum speed of 480 Mbps... (That is Megabits per SECOND!!) Now look at the storage that we have, with USB thumb drives now up to 64 GB in size, and cell phones and PDAs that have lots of internal storage built in, well above the 16 Gig range. At the maximum USB 2.0 speed of 480 Mbps, a full transfer of data between devices can take a long time. Time is money, right? Ever back up an iPhone? Don’t get me started. So at least the engineers have been planning ahead with USB 3.0, which offers a maximum transfer speed of 4.8 Gbps... (That is Gigabits per SECOND!!) That speed is almost 10 times faster than USB 2.0. We don’t need to do the math on that one, do we? (If you want it anyway, see the quick back-of-the-envelope numbers at the end of this post.) But for now I'm thrilled with USB 2.0 and the fact that I can get these little 4 Gig USB drives for $4.00 each at Staples on sale. Well, that is a no-brainer, don’t you think? But what can you do with them to replace that DVD? Simply and cheaply put………. THIS!

    First let’s get an ISO file, like the Visual Studio 2010 Ultimate DVD ISO from MSDN, to demonstrate with. I develop on several computers, so this is a good choice for me. So we download the ISO file and put it in a folder. Next we go to the Windows 7 USB/DVD Download Tool site and read about the tool. http://www.microsoftstore.com/store/msstore/html/pbPage.Help_Win7_usbdvd_dwnTool And click the link to get the tool and install it. Once it is installed, you go to the Start, Programs menu, Windows 7 USB DVD Download Tool folder, and then click the tool to open it up.

    As you will see, it is a sweet, simple tool that was originally designed to put the Windows 7 ISO – which is designed to be bootable – on a USB or DVD for us geeks to play with. It is now being used by many developers for the Windows 8 Developer Preview, for the same purpose it was built for in the past. But for now we will use it to put a NON-bootable ISO on a USB. Hey, it does the job, and I’m reusing a leftover program. Why buy the fancy one, or a free trial, and clutter up my machine?

    We click the BROWSE button and navigate to the ISO file we want to put on the USB drive. Obviously we are going to click NEXT and continue to select a USB device (you can guess what the DVD button is for). Next we select the USB drive that we have plugged into one of our laptop’s USB ports. Then we click the BEGIN COPYING button, and the first thing the program does is format our USB drive. Then it starts copying files out of the ISO and constructing the USB as if it were a DVD. So now that the files are copying to the drive, I’m going to warn you: we will error out here. This program was designed for bootable ISOs, of which this one is NOT.

    No problem, because what fails is the writing of the bootable data, which isn’t there. No biggie…. Forget that the START OVER button is even there; click the dialog’s CLOSE button and exit the program. Now go to Windows Explorer and navigate to the USB device. You can now access everything, and even add stuff to the drive. But for me, I want to keep this drive for one purpose, and that is to install VS2010 on various machines. So the only thing I’ll add is a folder of notes on Visual Studio that I might want to put on other machines I’m installing VS2010 onto. So that is it. Have a nice day! The Ron
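    As promised above, here is the math the post waves away – a minimal C# sketch (not from the original post) computing theoretical best-case transfer times; real-world throughput is considerably lower:

        // Best-case time to move a full 4 GB stick's worth of data.
        double gigabits = 4.0 * 8;                // 4 gigabytes = 32 gigabits
        double usb2Seconds = gigabits / 0.48;     // 480 Mbps -> ~67 seconds
        double usb3Seconds = gigabits / 4.8;      // 4.8 Gbps -> ~6.7 seconds
        Console.WriteLine($"USB 2.0: {usb2Seconds:F0}s, USB 3.0: {usb3Seconds:F1}s");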

    Read the article

  • Visualising data a different way with Pivot collections

    - by Rob Farley
    Roger’s been doing a great job extending PivotViewer recently, and you can find the list of LobsterPot pivots at http://pivot.lobsterpot.com.au Many months back, the TED Talk that Gary Flake did about Pivot caught my imagination, and I did some research into it. At the time, most of what we did with Pivot was geared towards what we could do for clients, including making Pivot collections based on students at a school, and using it to browse PDF invoices by their various properties. We had actual commercial work based on Pivot collections back then, and it was all kinds of fun. Later, we made some collections for events that were happening, and even got featured in the TechEd Australia keynote. But I’m getting ahead of myself... let me explain the concept. A Pivot collection is an XML file (with the .cxml extension) which lists Items, each linking to an image that’s stored in a Deep Zoom format (this means that it contains tiles like Bing Maps, so that the browser can request only the ones of interest according to the zoom level). A rough sketch of what such a file looks like appears at the end of this post. This collection can be shown in a Silverlight application that uses the PivotViewer control, or in the Pivot Browser that’s available from getpivot.com. Filtering and sorting the items according to their facets (attributes, such as size, age, category, etc.), the PivotViewer rearranges the way that these are shown in a very dynamic way. To quote Gary Flake, this lets us “see patterns which are otherwise hidden”. This browsing mechanism is well suited to a number of different uses, because it’s just that – browsing. It’s not searching; it’s more akin to window-shopping than doing an internet search. When we decided to put something together for conferences such as TechEd Australia 2010 and the PASS Summit 2010, we did some screen-scraping to provide a different view of data that was already available online. Nick Hodge and Michael Kordahi from Microsoft liked the idea a lot, and after a bit of tweaking, we produced one that Michael used in the TechEd Australia keynote to show the variety of talks on offer. It’s interesting to see a pattern in this data: the Office track has the most sessions, but if the Interactive Sessions and Instructor-Led Labs are removed, it drops down to only the sixth most popular track, with Cloud Computing taking over. This is something which just isn’t obvious when you look at an ordinary search tool. You get a much better feel for the data when moving around it like this. The more observant amongst you will have noticed some differences between the collection that Michael is demonstrating in the picture above and the screenshots I’ve shown. That’s because it’s been extended some more. At the SQLBits conference in the UK this year, I had some interesting discussions with the guys from Xpert360, particularly Phil Carter, who I’d met in 2009 at an earlier SQLBits conference. They had got around to producing a Pivot collection based on the SQLBits data, which we had been planning to do but ran out of time. We discussed some of the ways that Pivot could be used, including the ways that my old friend Howard Dierking had extended it for MSDN Magazine. I’m not suggesting I influenced Xpert360 at all, but they certainly inspired us with some of their posts on the matter. So with LobsterPot guys David Gardiner and Roger Noble both having dabbled in Pivot collections (and Dave doing some for clients), I set Roger to work on extending it some more.
    He’s used various events and so on to make an environment that allows us to do quick deployment of new collections, as well as showing the data in a grid view which behaves as if it were simply a third view of the data (the other two being the array of images and the ‘histogram’ view). I see PivotViewer as being a significant step in data visualisation – so much so that I feature it when I deliver talks on Spatial Data Visualisation methods. Any time there is information that can be conveyed through an image, you have to ask yourself how best to show that image, and whether that image is the focal point. For Spatial data, the image is most often a map, and the map becomes the central mode for navigation. I show Pivot with postcode areas, since I can browse the postcodes based on their data, and many of the images are recognisable (to locals of South Australia). Naturally, the images could link through to the map itself, and so on, but generally people think of Spatial data in terms of navigating a map, which doesn’t always gel with the information you’re trying to extract. Roger’s even looking into ways to hook PivotViewer into the Bing Maps API, in a similar way to the Deep Earth project, displaying different levels of map detail according to how ‘zoomed in’ the images are. Some of the work that Dave did with one of the schools was generating the Deep Zoom tiles “on the fly”, based on images stored in a database, and Roger has produced a collection which uses images from flickr and lets you move from one search term to another. Pulling the images down from flickr.com isn’t particularly ideal from a performance aspect, and flickr doesn’t store images in a small enough format to really lend itself to this use, but you might agree that it’s an interesting concept which compares nicely to using maps. I’m looking forward to future versions of the PivotViewer control, and hope they provide many more events that can be used, and even more hooks into it. Naturally, LobsterPot could help provide your business with a PivotViewer experience, but you can probably do a lot of it yourself too. There’s a thorough guide at getpivot.com, which is how we got into it. For some examples of what we’ve done, have a look at http://pivot.lobsterpot.com.au. I’d like to see PivotViewer really catch on as a data visualisation tool.
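    As promised above, here’s a rough sketch of the shape of a .cxml collection file – one facet category and one item. This is reconstructed from memory rather than taken from the post, so treat the exact attribute names and namespace as approximate and check the schema documentation on getpivot.com before relying on them:

        <?xml version="1.0" encoding="utf-8"?>
        <!-- Minimal Pivot collection sketch: one facet category, one item. -->
        <Collection Name="Sessions" SchemaVersion="1.0"
                    xmlns="http://schemas.microsoft.com/collection/metadata/2009">
          <FacetCategories>
            <FacetCategory Name="Track" Type="String" />
          </FacetCategories>
          <!-- ImgBase points at the Deep Zoom collection holding the image tiles. -->
          <Items ImgBase="sessions_files/sessions.dzc">
            <Item Id="0" Img="#0" Name="Example session" Href="http://example.com/sessions/0">
              <Facets>
                <Facet Name="Track">
                  <String Value="Cloud Computing" />
                </Facet>
              </Facets>
            </Item>
          </Items>
        </Collection>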

    Read the article

  • HTML5 and CSS3 Editing in Windows Live Writer

    - by Rick Strahl
    Windows Live Writer is a wonderful tool for editing blog posts and getting them posted to your blog. What makes it nice is that it has a small set of useful features, plus a simple plug-in model that has spawned many useful add-ins. A small tool with a reasonably decent plug-in model to extend it equals a great solution to a simple problem. If you're running Windows, have a blog and aren’t using Live Writer, you’re probably doing it wrong…

    One of Live Writer’s nice features is that it can download your blog’s CSS for preview and edit displays. It lets you edit your content inside of the context of that CSS using the WYSIWYG editor, so your content actually looks very close to what you’ll see on your blog while you’re editing your post. Unfortunately, Live Writer renders the HTML content in the Web Browser control’s default IE 7 rendering mode. Yeah, you read that right: IE 7 is the default for the Web Browser control, and most applications that use it are stuck in this mode unless the application explicitly overrides the default. The Web Browser control does not use the version of Internet Explorer installed on the system (IE 10 on my Win8 machine) but uses IE 7 mode for ‘compatibility’ with old applications. If you are importing your blog’s CSS, that may suck when you’re using rich HTML 5 and CSS 3 formatting.

    Hack the Registry to get Live Writer to render using IE 9 or 10

    In order to get Live Writer (or any other application that uses the Web Browser control, for that matter) to render with a more recent engine, you can apply a registry hack that overrides the Web Browser control engine usage for a specific application. I wrote about this in detail in a previous blog post a couple of years back. Here’s how you can set up Windows Live Writer to render your CSS 3 by making a change in your registry. My setup is on a 64-bit machine, where I configure Live Writer – which is a 32-bit application – to use IE 10 rendering. The keys set are as follows:

    32-bit configuration on a 64-bit machine:
    HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION
    Key: WindowsLiveWriter.exe
    Value: 9000 or 10000 (IE 9 or 10 respectively) (DWORD value)

    On a 32-bit only machine:
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION
    Key: WindowsLiveWriter.exe
    Value: 9000 or 10000 (IE 9 or 10 respectively) (DWORD value)

    Use decimal values of 9000, 10000 or 11000 to specify specific versions of Internet Explorer. This is a minor tweak, but it’s nice to actually see my blog posts now with the proper CSS formatting intact. Notice the rounded borders and shadow on the code blocks, as well as the overflow-x and scrollbars that show up. In this particular case I can see what the code blocks actually look like at a specific resolution – much better than the old plain view, which just chopped things off at the end of the window frame. There are a few other elements that now show properly in the editor as well, including block quotes and note boxes that I occasionally use. It’s minor stuff, but it makes the editing experience better yet and closer to the final thing, so there are fewer republish operations than I previously had. Sweet! Note that this approach of putting an IE version override into the registry works with most applications that use the Web Browser control.
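    Based on the keys listed above, the equivalent .reg file for the 64-bit case would look roughly like this sketch (dword:00002710 is 10000 decimal for IE 10; use dword:00002328 for 9000 / IE 9). The post’s resources link to ready-made .reg files, so prefer those:

        Windows Registry Editor Version 5.00

        ; Make the Web Browser control inside Live Writer render with the IE 10 engine
        [HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION]
        "WindowsLiveWriter.exe"=dword:00002710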
    If you are using the Web Browser control in your own applications, it’s a good idea to switch the browser to a more recent version so you can take advantage of HTML 5 and CSS 3 in your browser-displayed content, by automatically setting this flag in the registry during installation, or as part of the application’s startup routine if no dedicated setup tool is used. At the very least you might set it to 9000 (IE 9), which supports most of the basic CSS 3 features and is a decent baseline that works for most Windows 7 and 8 machines. If running pre-IE 9, the browser will fall back to IE 7 rendering and look bad, but at least more recent browsers will see an improved experience.

    I’m surprised that there aren’t more vendors and third party apps using this feature. You can see in my first screenshot that there are only very few entries in the registry key group on my machine – any other apps using the Web Browser control are stuck with IE 7. Go figure. Certainly Windows Live Writer should be writing this key into the registry automatically as part of installation to support this functionality out of the box, but alas, since it does not, this registry hack lets you get your way anyway…

    Resources:
    - .reg files to register Live Writer browser emulation (set for IE 9)
    - Specifying Internet Explorer Version for Applications
    - SnagIt LiveWriter Plug-in
    - Download Windows Live Writer
    - Download Windows Live Writer with Chocolatey

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in Live Writer, Windows

    Read the article

  • The ugly evolution of running a background operation in the context of an ASP.NET app

    - by Jeff
    If you’re one of the two people who have followed my blog for many years, you know that I’ve been going at POP Forums now for almost 15 years. Publishing it as an open source app has been a big help because it helps me understand how people want to use it, and having it translated to six languages is pretty sweet. Despite this warm and fuzzy group hug, there has been an ugly hack hiding in there for years.

    One of the things we find ourselves wanting to do is hide some kind of regular process inside of an ASP.NET application that runs periodically. The motivation for this has always been that a lot of people simply don’t have a choice, because they’re running the app on shared hosting, or don’t otherwise have access to a box that can run some kind of regular background service. In POP Forums, I “solved” this problem years ago by hiding some static timers in an HttpModule. Truthfully, this works well as long as you don’t run multiple instances of the app, which in the cloud world is always a possibility. With the arrival of WebJobs in Azure, I’m going to solve this problem. This post isn’t about that.

    The other little hacky problem that I “solved” was spawning a background thread to queue emails to subscribed users of the forum. This evolved quite a bit over the years, starting with a long-running page to mail users in real time, when I had only a few hundred. By the time it got into the thousands, or tens of thousands, I needed a better way. What I did was launch a new thread that read all of the user data in, then wrote a queued email to the database (as in, the entire body of the email, every time), with the properly formatted opt-out link. It was super inefficient, but it worked. Then I moved my biggest site using it, CoasterBuzz, to an Azure Website, and it stopped working.

    So let’s start with the first stupid thing I was doing. The new thread was simply created with delegate code inline. As best I can tell, Azure Websites are more aggressive about garbage collection, because that thread didn’t queue even one message. When the calling server response went out of scope, so went the magic background thread. Duh, all I had to do was move the thread to a private static variable in the class (see the sketch at the end of this post). That’s the way I was able to keep stuff running from the HttpModule. (And yes, I know this is still prone to failure, particularly if the app recycles. For as infrequently as it’s used, I have not, however, experienced this.)

    It was still failing, but this time I wasn’t sure why. It would queue a few dozen messages, then die. Running in Azure, I had to turn on the application logging and FTP in to see what was going on. That led me to a helper method I was using as a delegate to build the unsubscribe links. The idea here is that I didn’t want yet another config entry to describe the base URL, appended with the right path that would match the routing table. No, I wanted the app to figure it out for you, so I came up with this little thing:

        public static string FullUrlHelper(this Controller controller, string actionName,
            string controllerName, object routeValues = null)
        {
            var helper = new UrlHelper(controller.Request.RequestContext);
            var requestUrl = controller.Request.Url;
            if (requestUrl == null)
                return String.Empty;
            var url = requestUrl.Scheme + "://";
            url += requestUrl.Host;
            url += (requestUrl.Port != 80 ? ":" + requestUrl.Port : "");
            url += helper.Action(actionName, controllerName, routeValues);
            return url;
        }

    And yes, that should have been done with a string builder. This is useful for sending out the email verification messages, too. As clever as I thought I was with this, I was using a delegate in the admin controller to format these unsubscribe links for tens of thousands of users. I passed that delegate into a service class that did the email work:

        Func<User, string> unsubscribeLinkGenerator =
            user => this.FullUrlHelper("Unsubscribe", AccountController.Name,
                new { id = user.UserID, key = _profileService.GetUnsubscribeHash(user) });
        _mailingListService.MailUsers(subject, body, htmlBody, unsubscribeLinkGenerator);

    Cool, right? Actually, not so much. If you look back at the helper, this delegate will then depend on the controller context to learn the routing and format for the URL. As you might have guessed, those things were turning null after a few dozen formatted links, when the original request to the admin controller went away. That this wasn’t already happening on my dedicated server is surprising, but again, I understand why the Azure environment might be eager to reclaim a thread after servicing the request.

    It’s already inefficient that I’m building the entire email for every user, but going back to check the routing table for the right link every time isn’t a win either. I put together a little hack to look up one generic URL, and use that as the basis for a string format. If you’re wondering why I didn’t just use the curly braces up front, it’s because they get URL-encoded:

        var baseString = this.FullUrlHelper("Unsubscribe", AccountController.Name,
            new { id = "--id--", key = "--key--" });
        baseString = baseString.Replace("--id--", "{0}").Replace("--key--", "{1}");

        Func<User, string> unsubscribeLinkGenerator =
            user => String.Format(baseString, user.UserID, _profileService.GetUnsubscribeHash(user));
        _mailingListService.MailUsers(subject, body, htmlBody, unsubscribeLinkGenerator);

    And wouldn’t you know it, the new solution works just fine. It’s still kind of hacky and inefficient, but it will work until this somehow breaks too.
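    For completeness, here is a minimal sketch of the ‘private static variable’ fix mentioned above. The class and method names are illustrative, not from the POP Forums source – the point is simply that a static field keeps the thread rooted after the request that spawned it completes (though an app-pool recycle can still kill it, as noted above):

        public class MailingListService
        {
            // Rooting the thread in a static field prevents it from being
            // collected when the originating request goes out of scope.
            private static Thread _mailWorker;

            public void MailUsers(string subject, string body, string htmlBody,
                                  Func<User, string> unsubscribeLinkGenerator)
            {
                _mailWorker = new Thread(() =>
                    QueueEmails(subject, body, htmlBody, unsubscribeLinkGenerator));
                _mailWorker.IsBackground = true;
                _mailWorker.Start();
            }

            private void QueueEmails(string subject, string body, string htmlBody,
                                     Func<User, string> linkGenerator)
            {
                // Write one queued email per subscribed user, as described above.
            }
        }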

    Read the article

  • JavaOne pictures and Community Commentary on JCP Awards

    - by heathervc
    We posted some pictures from JCP-related events at JavaOne 2012 on the JCP Facebook page today. The 2012 JCP Program Award winners and some of the nominees responded to the community recognition of their achievements during some of the JCP events last week.

    “Our job on the EC is to balance the need of innovation – so we don’t standardize too early, or too late. We try to find that sweet spot that makes innovation and standardization work together, and not against each other.”
    - Ben Evans, CEO of jClarity and Executive Committee (EC) representative of the London Java Community, 2012 JCP Member/Participant of the Year Winner

    “SouJava has been evangelizing the Java platform, promoting the Java ecosystem in Brazil, and contributing to JSRs for several years. It’s very gratifying to have our work recognized, on behalf of many developers and Java User Groups around the world. This really is the work of a large group of people, represented by the few that can be here tonight.”
    - Michael Santos, representative of SouJava, 2012 JCP Member/Participant of the Year Winner

    "In the last years Credit Suisse has contributed to the development of Java EE specifications through participation in many customer advisory boards, through statements of requirements for extensions to the core Java related products in use, and active participation in JSRs. Winning the JCP Outstanding Spec Lead Award 2012 is very encouraging for our engagement and also demonstrates the level of expertise and commitment to drive the evolution of Java. Victor Grazi is happy and honored to receive this award."
    - Susanne Cech Previtali, Executive Committee (EC) representative of Credit Suisse, accepting the award for the 2012 JCP Outstanding Spec Lead Winner

    "Managing a JSR is difficult. There are so many decisions to be made and so many good and varied opinions, you never really know if you have decided correctly. The key to success is transparency and collaboration. I am truly humbled by receiving this award; there are so many other active JSRs.” Victor added that going forward in the JCP EC, they would like to simplify and open the process of participation – something being addressed in the JCP.Next initiative of the JCP EC. "We would also like to encourage the engagement of universities, professors and students – as an important part of the Java community. While innovation is the lifeblood of our community and industry, without strong standards and compatibility requirements, we all end up in a maze of technology where everything is slightly different and doesn’t quite work with everything else."
    - Victor Grazi, Executive Committee (EC) representative of Credit Suisse, 2012 JCP Outstanding Spec Lead Winner

    “I am very pleased, of course, to accept this award, but the credit really should go to all of those who have participated in the work of the JCP, while pushing for changes in the way it operates. JCP.Next represents three JSRs. The first two are done, but the final step, JSR 358, is the complicated one, and it will bring in the lawyers. Just to give you an idea of what we’re dealing with, it affects licensing, intellectual property, patents, implementations not based on the Reference Implementation (RI), the role of the RI, compatibility policy, possible changes to the Technical Compatibility Kit (TCK), transparency, where individuals fit in, open source, and more.”
    - Patrick Curran, JCP Chair, Spec Lead on the JCP.Next JSRs (JSR 348, JSR 355 and JSR 358), 2012 JCP Most Significant JSR Winner

    “I’m especially glad to see the JCP community recognize JCP.Next for its importance. The governance work it represents is KEY to moving the Java platform forward and the success of the technology.”
    - John Rizzo, Executive Committee (EC) representative of Aplix Corporation, JSR Expert Group Member

    “I am deeply honored to be nominated. I had the privilege to receive two awards on behalf of Expert Groups and Spec Leads two years ago. But this time, I am nominated personally, which values my own contribution to the JCP, and of course, participation in JSRs and the EC work. I’m a fan of Agile principles and values. Working as an Agile Coach and Consultant, I use them for some of the biggest EC Member companies and projects. It fuels my ability to help the JCP become more agile, lean and transparent as part of the JCP.Next effort.”
    - Werner Keil, Individual Executive Committee (EC) Member, a 2012 JCP Member/Participant of the Year Nominee, JSR Expert Group Member

    “The JCP has always been some kind of institution for me,” Markus said. “If in technical doubt, I go there, look for the specifications of the implementation I work with at the moment and verify what I had observed. Since the beginning of my Java journey, more than 12 years back now, I have always had a strong relationship with the JCP. Shaping the future of a technology by joining the JCP – giving feedback and contributing to the road ahead through individual JSRs – that brings you to a whole new level.” Calling himself “the new kid on the block,” he explained that for years he was afraid to join the JCP and contribute. But in reality, “Every single one of the big names I meet from the different Expert Groups is a nice person. People you can actually work with,” he says. “And nobody blames you for things you don't know. As long as you are committed and bring what is worth the most: passion, experiences and the desire to make a difference.”
    - Markus Eisele, a 2012 JCP Member of the Year Nominee, JSR Expert Group Member

    Congratulations again to all of the nominees and winners of the JCP Program Awards. Next year, we will add another award for the group of JUG members (not an entire JUG) that makes the best contribution to the Adopt-a-JSR program. Let us know if you have other suggestions or improvements.

    Read the article

  • Red Gate does Byte Night 2012

    - by red(at)work
    On the 5th of October 2012, a team of nine plucky Red Gaters braved the howling wind and the driving rain to sleep outside. No tents or mattresses were allowed – all we took for protection were sleeping bags, groundsheets, plastic sacks and Colin’s enormous fishing umbrella (a godsend in umbrella-y disguise). Why would we do such a thing? For Byte Night, an annual tech sector sleepout in support of Action for Children, who tackle the causes as well as the consequences of youth homelessness. Byte Night encourages technology professionals to do for one night a year what thousands of young people have to do every night – sleep rough.

    We signed up for Byte Night in the warm, heady midst of the British summer, thinking it couldn’t possibly be all that bad. Even on the night itself – before the rain began to fall, sat in the comfort and warmth of a company canteen, drinking wine, eating chilli and preparing to win the pub quiz – we were excited and optimistic about the night that lay ahead of us. All of that changed as soon as we stepped out into one of the worst rainstorms of the year. Brian, the team’s birthday boy, describes it best:

    Picture the scene: it’s 3 am on a Friday. I’m lying outside, fully clothed in a sleeping bag, wearing a raincoat, trussed up inside a large plastic pocket, on a ground sheet beneath a giant umbrella, wedged so tightly between two of my colleagues that I can’t move my arms. I’m wide awake, staring up at the grey sky beyond the edge of the umbrella; a limp, flickering white glow hints at a moon somewhere behind the drifting clouds. I haven’t slept since we first moved outside at 11 pm. Outside. Did I mention we were outside? I’m hung over. I need the loo. But there is no way on earth that I’m getting out of this sleeping bag. It’s cold. It’s raining. Not just raining, but chucking it down. It’s been doing this non-stop since 10 pm. The rain sounds like a hyperactive drummer on the fishing umbrella, and the noise is loud and relentless. Puddles of water are forming all over the groundsheet, and, despite being ensconced inside the plastic pouch, I am wet. The fishing umbrella is protecting me from the worst of the driving rain, but not all of me is under it, and five hours of rain is no match for it. Everything is wet. My left side has become horribly damp. My trainers, which I placed next to my sleeping bag, are now completely soaked through. Mmm. That’ll be fun in the morning.

    My head is next to Colin’s head on one side, and a multi-pack of McCoy’s cheddar and onion crisps on the other. Don’t ask about the tub of hummus. That’s somewhere down by my ankles, abandoned to the night. Jess, who is lying next to me, rolls over onto her side. A mini waterfall cascades from her rain-pouch onto my face. Bah. I continue to stare into the heavens, willing the dawn to hurry up. Something lands on my face. It’s a mosquito. Great. Midnight, when this still seemed like fun – when we opened some champagne and my colleagues presented me with a caterpillar birthday cake, when everyone was drunk and jolly and full of stoic resolve – feels like a long time ago. Did I mention that today is my birthday? The remains of the caterpillar cake endure the same fate as the hummus, left out in the rain like a metaphor for sadness. It’s getting colder. I can see my breath. Silence has descended on the group, apart from the rustle of plastic. And the rain, obviously. Someone snores, and I envy whoever it is the sweet escape of sleep.
    I try to wriggle a bit further down inside my sleeping bag, but it doesn’t want to be wriggled into. Only 3 hours till dawn. 180 minutes. I begin to count them off, one at a time.

    All nine of us got to go home in the morning, but thousands of children across the UK don’t have that luxury. If you’d like to sponsor the Red Gate Byte Night team, our JustGiving page can be found here.

    (Photo: Chris, before the outside bit actually happened.) More photos from Byte Night Cambridge 2012 can be found here.

    Read the article

  • XNA Notes 002

    - by George Clingerman
    This past week (much like every week in the XNA community) was filled with things happening and people doing cool things (and getting noticed for doing cool things!). You can definitely tell there are some Xbox LIVE Indie game developers starting to make names for themselves. Can’t wait to name-drop them at bars. Me: “Oh, you played Game X? Yeah, I know the guy that made that. Pretty cool guy.” Yeah, I’ll be THAT guy.

    Time Critical XNA News:
    - 30 days left to submit XBLIGs made in XNA Game Studio 3.1 http://blogs.msdn.com/b/xna/archive/2011/01/08/30-days-left-to-submit-xna-gs-3-1-games-to-app-hub.aspx
    - Jeromy Walsh wants you to know his XNA 4.0 Winter Workshop is starting soon, so go get signed up! And the forum is now LIVE on GameDev.net http://gamedevelopedia.com/ http://tinyurl.com/4gg2cfv

    The XNA Team:
    - Per Nick Gravelyn, Aaron Stebner’s blog post is a must-read for icons on Windows Phone http://forums.create.msdn.com/forums/p/72022/439597.aspx#439597 http://blogs.msdn.com/b/astebner/archive/2010/10/01/10070507.aspx
    - Shawn Hargreaves writes about sprite billboards in a 3D world http://blogs.msdn.com/b/shawnhar/archive/2011/01/12/spritebatch-billboards-in-a-3d-world.aspx

    XNA MVPs:
    - Andy “The ZMan” Dunn wants YOU to come to the MVP Summit and run a 5K http://www.indiegameguy.com/blogs/zman/archive/2010/12/26/come-to-the-mvp-summit-and-run-a-5k-yes-you.aspx
    - Jim Perry updates his forum signature just to make it clear that he’s not speaking for Microsoft or giving official information (LOL, thanks Jim, now if only people will take the time to read that...): XNA MVP | Please use the Forum Search and read the Forum FAQs | My posts are not official info http://forums.create.msdn.com/forums/p/70849/439613.aspx#439613

    XNA Developers:
    - Robert Boyd (@werezombie) is working hard at converting his RPG engine, used to make Breath of Death VII and Cthulhu Saves the World, to XNA 4.0. If you haven’t done the upgrade yet yourself, it might be useful to read back through his tweets and recent forum posts to see the problems/solutions he’s encountered. http://forums.create.msdn.com/forums/p/71834/438099.aspx#438099 http://www.twitter.com/werezompire
    - SpynDoctorGames is in the final phase before the release of Your Doodles are Bugged for the PC! It’s going to be interesting to watch as more XNA game developers explore the PC game market for their games. http://twitter.com/SpynDoctorGames/statuses/24503173217521664 http://www.spyn-doctor.de
    - @DrMistry shares some details of his next title, YoYoYo http://www.mstargames.co.uk/mistryblogmain/35-genblog/177-a-new-year-a-new-game-and-maybe-a-new-approach.html
    - Travis Woodward (@RabidLionGames) has a blog post coming this weekend on Farseer and Mario-like platformer movement http://twitter.com/RabidLionGames/statuses/24992762021548032 http://www.rabidlion.com/
    - S4G Interview with Radiangames http://n4g.com/news/679492/s4g-interview-with-radiangames
    - XBLAratings.com interviews Steve Flores (@DragonDivide), developer of Alpha Squad http://www.xblaratings.com/developer-qaa/3621-alpha-squad-developer-interview

    Xbox LIVE Indie Games:
    - If you haven’t been reading the roundups on IndieGames by NaviFairy on GayGamer, you’ve been missing out! http://gaygamer.net/2011/01/xbox_indie_review_roundup_1112.html
    - Armless Octopus posts the Top 20 Games of 2010, Part 1 http://www.armlessoctopus.com/2011/01/10/top-20-xbox-live-indie-games-of-2010-part-1/ and Part 2 http://www.armlessoctopus.com/2011/01/12/top-20-xbox-live-indie-games-of-2010-%E2%80%93-part-2/
    - Xbox LIVE Indie Game Reviews http://www.gamemarx.com/
    - Don’t forget to be following @XboxHornet. That’s a great way to snag free copies of Xbox LIVE Indie Games http://twitter.com/XboxHornet/statuses/24471103808208896 http://www.xboxhornet.com/wordpress/
    - Xbox LIVE Indie Game Review posts the Top 20 Xbox 360 LIVE Indie Games of 2010 http://www.xbox-360-community-games-reviews.com/top-20-best-xbox-360-live-indie-games-of-2010/
    - VVGtv to stream #XBLIG again! Help out if you can. http://vvgtv.com/2011/01/07/vvgtv-to-stream-xblig-again/
    - Indie Gamer Magazine Issue 14 has a look at the Xbox LIVE Winter Indie Game Uprising http://www.indiegamemag.com/issue14/

    XNA Game Development:
    - Andrew Russell announced and asked for help with his development of ExEn: XNA for iPhone, Android and Silverlight http://rockethub.com/projects/752-exen-xna-for-iphone-android-and-silverlight
    - App Hub forums letting you down? Don’t forget about StackOverflow and the game-development-specific gamedev.stackexchange http://stackoverflow.com/questions/tagged/xna http://gamedev.stackexchange.com/questions/tagged/xna
    - Transmute gets an update from Aaron Foley (@slyprid), and you can now add and visually edit parallax layers in your 2D tile game http://twitpic.com/3nudj0 http://twitter.com/slyprid/statuses/23418379574448128 http://forgottenstarstudios.com/Transmute/default.html
    - Webcomics Weekly #75 touches on some feelings I’ve seen people try to express (myself included) when talking about game development and what types of games should be released for XBLIG http://www.pvponline.com/2011/01/05/webcomics-weekly-75-sour-oats/
    - Setting up a new PC for XNA development? Here’s a site that helps you quickly build an installer for all the most common applications developers use http://ninite.com/
    - Fun new thread on the XNA forums asking XBLIG/XNA developers just what their Top 10 favorite video games of all time are http://forums.create.msdn.com/forums/107.aspx
    - Christopher Hill (@Xalterax) stumbled across an entire community that does nothing but create box art. This is a great potential resource for Xbox LIVE Indie Game developers to get some awesome box art for their games. http://forums.create.msdn.com/forums/p/46582/441451.aspx#441451 http://www.vgboxart.com/browse/plat/360/
    - Don’t forget about the XNA Wiki, a fantastic community resource (and roll up those sleeves and contribute already!) http://xnawiki.com/index.php?title=Main_Page

    Read the article

  • Answers to “What source control system do you use?” (and some winners)

    - by jamiet
    About a month ago I posed a question here on my blog, SQL Server devs – what source control system do you use, if any? (answer and maybe win free stuff), in which I asked SQL Server developers to answer the following questions:

    - Are you putting your SQL Server code into a source control system?
    - If so, what source control server software (e.g. TFS, Git, SVN, Mercurial, SourceSafe, Perforce) are you using?
    - What source control client software are you using (e.g. TFS Team Explorer, Tortoise, Red Gate SQL Source Control, Red Gate SQL Connect, Git Bash, etc…)?
    - Why did you make those particular software choices?
    - Any interesting anecdotes to share in regard to your use of source control and SQL Server?

    I had some really great responses (I highly recommend going and reading them). I promised that the five best, most thought-provoking responses (as determined by me) would win one of five pairs of licenses for Red Gate SQL Source Control and Red Gate SQL Connect; here are the five that I chose (note that if you responded but did not leave a means of getting in touch then you weren’t considered for one of the prizes – sorry):

    "In general, I don't think the management overhead and licensing cost associated with TFS is worthwhile if all you're doing is using source control. To get value from TFS, at a minimum you need to be using team build, and possibly other stuff as well, such as the SharePoint integration. If that's all you need, then svn with Tortoise would be my first choice. If you want to add build automation later, you can do this with CruiseControl (is it still called that?), JetBrains, etc. For a long time I thought that Redgate's claims about "bridging the SSMS-VS divide" were a load of hot air, since in my experience anyone who knew what they were doing was using Visual Studio, in particular SSDT and its predecessors. However, on a recent client I was putting in source control for the first time, and I discovered that the "divide" really does exist. That client has ended up using svn with Redgate SQL Source Control, with no build automation, but with scope to add it in the future."
    - Gavin Campbell

    "I think putting the DB under source control is a great idea. I have issues with the earlier versions of SQL Source Control in that it provides little help in versioning the DB. I think the latest version merges SQL Compare and SQL Source Control together, which is how it should have been all along. Sure, I have the DB scripts in SVN, but I can't automate DB builds and changes without more tools. Frankly I'm surprised databases don't have some sort of versioning built into them."
    - Nick Portelli

    "Source control has been immensely useful and saved me from a lot of rework on more than one occasion. I have learned that you have to be extremely careful checking in data. Our system is internal only, so during the system production run once a week, if there is a problem that I can fix easily (for example, a control table points to a file in the wrong environment), I'll do it directly in production so the run can continue as soon as possible, since we have a specified time window. We do full test runs to minimize this, but it has come up once or twice. We use Red Gate source control to "push" from the test environment to the production environment. There have been a couple of occasions where the test environment with the wrong setting was pushed back over the production environment because the change was made only in production. Gotta keep an eye on that."
    - Alan Dykes

    "Goodness, is it manual. And it can be extremely painful at times. Not only are we running thin, we are constrained on the tools we can get ($$ must mean free). Certainly no excuse, and a great opportunity to improve my skills by learning new things. But... getting buy-in on a proven process or methodology is hard, takes time, and diverts us from development. If SQL Source Control is easy to use and proven, oh boy could you get some serious fans around here! Seriously though, as the "accidental DBA" of this shop, any new ideas / easy-to-implement tools can make a world of difference in productivity and, most importantly, accuracy. Manual = bad. :)"
    - John Hennesey (who left his email address)

    "The one thing I would love to know more about is the unique challenges of working with databases as source code - you can store scripts, but are they written as deployment scripts with all the logic about how to apply them to an existing DB? Where is that baseline DB? Where's the data? How does a team share the data and the code? It's a real challenge."
    - Merrill Aldrich

    Congratulations to the five of you. Red Gate will be in touch with you soon about your free licenses. Thank you to all those that responded. And again, go and check out all the responses – those above are only a small proportion of what is a very interesting comment thread. @Jamiet

    Read the article

  • Adding a DLL to the GAC in Windows 7

    - by Jim Giercyk
    I recently created a DLL and I wanted to reference it from a project I was developing in Visual Studio.  In previous versions of Windows, doing so was simply a matter of dropping the DLL file in the C:\Windows\assembly folder.  That would add the DLL to the Global Assembly Cache (GAC) and make it accessible in Visual Studio.  However, as is often the case, Windows 7 is different.  Even if you have Administrator privileges on your machine, you still do not have permission to drop a file in the assembly folder.  Undaunted, I thought about using the old DOS command line utility gacutil.exe.  Microsoft developed the tool as part of the .NET framework, and it is available in the Windows SDK Framework Tools.  If you have never used gacutil.exe before, you can find out everything you ever wanted to know but were afraid to ask here: http://msdn.microsoft.com/en-us/library/ex0ss12c(v=vs.80).aspx .  Unfortunately, if you do not have the Windows SDK loaded on your development machine, you will need to install it to use gacutil, but it is relatively quick and painless, and the framework tools are very useful.  Look here for your latest SDK: http://www.microsoft.com/download/en/search.aspx?q=Windows%20SDK .  After installing the SDK, I tried installing my DLL to the GAC by running gacutil from a DOS command line, only to be met with an access error.  That's odd.  Microsoft is shipping a tool that cannot be executed even with Administrator rights?  Let me stop here and say that I am by no means a Windows security expert, so I actually did contact my system administrators, and they were not sure how to fix the problem….there must be a super administrator access level, but it isn't available to your average developer in my company.  The solution outlined here is working within the boundaries of a normal Windows Administrator.  So, now the hacker in me bubbles to the surface.  What if I were to create a simple BAT file containing the gacutil command?  It's so crazy it just might work!  Ugh!  I was starting to think this would never work, but then I realized that simply executing a batch program did not change my level of access.  Typically in Windows 7, you would select the "Run As Administrator" option to temporarily act as an administrator for the purpose of executing a process.  However, that option is not available for BAT files run from the command line.  SOLUTION: Create a desktop shortcut to execute the BAT file, which in turn will execute the gacutil command…..are you still with me?  I created a shortcut and pointed it to my batch file.  Theoretically, all I need to do now is right-click on the shortcut and select "Run As Administrator" and we're good, right?  Well, kinda.  The name of the DLL is passed in to my BAT file as a parameter.  Therefore, I either have to hard-code the file name in the BAT program (YUCK!!), or I can leave the parameter and drag the DLL file to the shortcut and drop it.  Sweet, drag-and-drop works for me…..but if I use the drag-and-drop method, there is no way for me to right-click and select "Run As Administrator".  That is not a problem…..I simply have to adjust the properties of the shortcut I created and I am in business.  I right-clicked on the shortcut and selected "Properties".  Under the "Shortcut" tab there is an "Advanced" button…..I clicked it, and all I needed to do was check the "Run As Administrator" box.  In summary, what I have done is create a BAT file to execute a command line utility, gacutil.exe.
Then, rather than executing the BAT file from the command line, I created a desktop shortcut to run it and set the shortcut properties to "Run As Administrator".  This effectively means I am executing the command line utility with Administrator privileges.  Pretty sneaky. Now, when I drag the DLL file over to the shortcut, it starts the BAT file and adds the DLL to the assembly cache.  I created another BAT file to remove a DLL from the GAC in case the need should arise; it is the same idea, only calling gacutil with its /u (uninstall) switch.  Give it a try.  I can't imagine why updating the GAC has been made into such a chore in Windows 7.  Hopefully there is a service pack in the works that will give developers the functionality they had in Windows XP, but in the meantime, this workaround is extremely useful.
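If you want to reproduce this, the two BAT files boil down to something like the following sketch. The gacutil path below is a guess - check where your SDK actually put the Framework Tools - and %1 is the argument that drag-and-drop supplies:

  @echo off
  REM add-to-gac.bat -- installs the dropped DLL into the GAC
  REM adjust the path to wherever your SDK installed gacutil.exe
  "C:\Program Files\Microsoft SDKs\Windows\v7.0A\Bin\gacutil.exe" /i %1
  pause

  @echo off
  REM remove-from-gac.bat -- note that /u takes the assembly name, not a file path
  "C:\Program Files\Microsoft SDKs\Windows\v7.0A\Bin\gacutil.exe" /u %1
  pause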

    Read the article

  • AppHarbor - Azure Done Right AKA Heroku for .NET

    - by Robz / Fervent Coder
    Easy and instant deployments and instant scale for .NET? A while back a few of us were looking at RubyGems as the answer to package management for .NET. The gems platform supported the concept of DLLs as packages, although some changes would have needed to happen for it to have long-term use for the entire community. From that we formed a partnership with some folks at Microsoft to make v2 into something that would meet wider adoption across the community, which people now call NuGet. So now we have the concept of package management. What comes next?
Heroku
Instant deployments and instant scaling. Stupid simple API. This is Heroku. It doesn't sound like much, but when you think of how fast you can go from an idea to having someone else tinker with it, you can start to see its power. In literally seconds you can be looking at your Rails application deployed and online. Then when you are ready to scale, you can do that. This is power. Some may call this "cloud computing" or PaaS (Platform as a Service). I first ran into Heroku back in July when I met Nick of RubyGems.org. At the time there was no alternative in the .NET-o-sphere. I don't count Windows Azure, mostly because it is not simple and I don't believe there is a free version. Heroku itself would not lend itself well to .NET due to the nature of platforms and each language's specific needs (solution stack).  So I tucked the idea in the back of my head and moved on.
AppHarbor Enters The Scene
I'm not sure when I first heard about AppHarbor as a possible .NET version of Heroku. It may have been in November, but I didn't actually try it until January. I was instantly hooked. AppHarbor is awesome! It still has a ways to go to be considered Heroku for .NET, but it already has a growing community. I created a video series (at the bottom of this post) that really highlights how fast you can get a product onto the web and really shows the power and simplicity of AppHarbor. Deploying is as simple as a git/hg push to AppHarbor. From there they build your code, run any unit tests you have and deploy it if everything succeeds. The dashboard is a simple and elegant UI for getting things done. The folks at AppHarbor graciously gave me a limited number of invites to hand out. If you are itching to try AppHarbor then navigate to: https://appharbor.com/account/new?inviteCode=ferventcoder  After playing with it, send feedback if you want more features. Go vote up two features I want that will make it more like Heroku. Disclaimer: I am in no way affiliated with AppHarbor and have not received any funds or favors from anyone at AppHarbor. I just think it is awesome and I want others to know about it.
From Zero To Deployed in 15 Minutes (Or Less)
Now I have a challenge for you. I created a video series showing how fast I could go from nothing to a deployed application. It could have been from Zero to Deployed in Less than 5 minutes, but I wanted to show you the tools a little more and give you an opportunity to beat my time. And that's the challenge. Beat my time and show it in a video response. The video series is below (at least one of the videos has to be watched on YouTube). The person with the best time by March 15th @ 11:59PM CST will receive a prize.
Ground rules:
- .NET application with a valid database connection
- Start from zero
- Deployed with AppHarbor or an alternative
- A timer displayed in the video that runs during the entire process
- Video response published on YouTube or an acceptable alternative
- Video(s) must be published by March 15th at 11:59PM CST; either post the link here as a comment or on YouTube as a response (also by 11:59PM CST March 15th)
From Zero To Deployed In 15 Minutes (Or Less) Part 1
From Zero To Deployed In 15 Minutes (Or Less) Part 2
From Zero To Deployed In 15 Minutes (Or Less) Part 3
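If you want a feel for the workflow before racing the clock: once you create an application, AppHarbor hands you a repository URL, and the deploy step really is just a push to it. Roughly like this (the URL here is a made-up placeholder - use the one from your application page):

  git remote add appharbor https://user@appharbor.com/myapp.git
  git push appharbor master

From there the build kicks off, your unit tests run, and the app goes live if everything is green.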

    Read the article

  • Gamification = -10#/3mo

    - by erikanollwebb
    One of the purposes of gamification of anything is to see if you can modify the behavior of the user. In the enterprise, that might mean getting sales people to enter more information into a CRM system, encouraging employees to update their HR records, motivating people to participate in forums and discussions, or getting invoices processed more quickly.  Wikipedia defines behavior modification as "the traditional term for the use of empirically demonstrated behavior change techniques to increase or decrease the frequency of behaviors, such as altering an individual's behaviors and reactions to stimuli through positive and negative reinforcement of adaptive behavior and/or the reduction of behavior through its extinction, punishment and/or satiation."  Gamification is just a way to modify someone's behavior using game mechanics. And the magic question is always whether it works. So I thought I would present my own little experiment from the last few months.  This spring, I upgraded to a Samsung Galaxy 4.  It's a pretty sweet phone in many ways, but one of the little extras I discovered was a built-in app called S Health. S Health is an app that you can use to track calories, weight, and exercise, and it has a built-in pedometer. I looked at it when I got the phone, but assumed you had to turn it on to use it, so I didn't look at it much.  But sometime in July, I realized that in fact, it just ran in the background and was quietly tracking my steps, with a goal of 10,000 per day.  10,000 steps per day is the magic number recommended by the Surgeon General and the American Heart Association.  Dr. Oz pushes it as the goal for daily exercise.  It's about 5 miles of walking. I'm generally not the kind of person who always has my phone with me.  I leave it in my purse and pull it out when I need it.  But then I realized that meant I wasn't getting a good measure of my steps.  I decided to do a little experiment, and carry it with me as much as possible for a week.  That's when I discovered the gamification that changed my life over the last 3 months.  When I hit 10,000 steps, the app jingled out a little "success!" tune and I got a badge.  I was hooked.  I started carrying my phone.  I started making sure I had shoes I could walk in with me.  I started walking at lunch time, because I realized how often I sat at my desk for 8-10 hours every day without moving.  I started pestering my husband to walk with me after work because I hadn't hit my 10,000 yet, leading him at one point to say "I'm not as much a slave to that badge as you are!"  I started looking at parking lots differently.  Can't get a space up close?  No worries, just that many steps toward my 10,000.  I even tried to see if there was a second power-user level at 15,000 or 20,000 (sadly, no).  If I was close at the end of the day, I have done laps around my house until I got my badge.  I have walked around the block one more time to get my badge.  I have mentally chastised myself when I forgot to put my phone in my pocket, because I don't know how many steps I got.  The badge below I got when my boss and I were in New York City and we walked around the block of our hotel just to watch the badge pop up. There are a bunch of tools out on the market now that have similar ideas for helping you to track your exercise and make it social.  There are apps (my favorite is still Zombies, Run!).  You could buy a FitBit or UP by Jawbone.  Interactive Fitness makes the Expresso stationary bike with built-in video games.
All designed to help you be more aware of your activity and keep you engaged and motivated.  And the idea is to help you change your behavior. I know someone who would spend extra time and work hard on the Expresso because he had built up strategies for how to kill the most dragons while he was riding to get more points.  When the machine broke down, he didn't ride a different bike because it just wasn't that interesting. But for me, just the simple jingle and badge have been all I needed.  I admit, I still giggle gleefully when I hear the tune sing out from my pocket. After a few weeks, I noticed I had dropped a few pounds.  Not a lot, just 2-3.  But then I was really hooked.  I started making a point both to eat a little less and hit 10,000 steps as much as I could.  I bemoaned that during the floods in Boulder, I wasn't hitting my 10,000 steps.  And now, a few months later, I'm almost 10 lbs lighter. All for 1 badge a day. So yes, simple gamification can increase motivation and engagement.  And that can lead to changes in behavior.  Now the job is to apply that to the enterprise space in a meaningful and engaging way. 

    Read the article

  • boot issues - long delay, then "gave up waiting for root device"

    - by chazomaticus
    I've had this issue on and off for about two years now. I noticed it on a new (custom built) machine running 10.04 when that first came out, but then it went away until a few months ago. I've gone through a number of hard drive changes but I can't say specifically what if anything I changed hardware-wise to make it stop or start happening. I had assumed upgrading to a modern Ubuntu version would fix the issue, so I installed 12.04 beta on a spare partition last night, but it's still happening. Here's the issue. After grub loads and I select a kernel to boot, the screen goes blank save for a blinking cursor. It sits in this state for many long minutes before it finally gives up and gives me an initramfs shell with the message gave up waiting for root device (and lists the /dev/disk/by-uuid/... path it was waiting for) but no other specific diagnostic information. Now, here's the tricky part. For one, the problem is intermittent - sometimes it progresses from the blinking cursor to the Ubuntu splash boot screen in a few seconds, and once it gets that far it always continues booting fine. The really bizarre thing is that I can "force" it to "find" the root device by repeatedly pressing the space bar and hitting the machine's power button. If I tap those enough, eventually I will notice the hard drive light coming on, at which point it will always continue the boot process after a few seconds. Interestingly, if I wait slightly too long before pressing the power button (30s?), as soon as I press it I get the gave up waiting message and the initramfs shell. I've tried setting up /etc/fstab (and the grub menu.lst or whatever it's called nowadays) to use device names (e.g. /dev/sda1) instead of UUIDs, but I get the same effect just with the device name, not UUID, in the error message. I should also mention that when I boot to Windows 7, there is no issue. It boots slowly all the time just by virtue of being Windows, but it never hangs indefinitely. This would seem to indicate it's a problem in Ubuntu, not the hardware. It's pretty annoying to have to babysit the computer every time it boots. Any ideas? I'm at a loss. Not even sure how to diagnose the issue. Thanks! EDIT: Here's some dmesg output from 10.04. The 15 second gap is where it was doing nothing. I pressed the power button and space bar a few times, and the stuff at 16 seconds happened. Not sure what any of it means. 
[ 1.320250] scsi18 : ahci
[ 1.320294] scsi19 : ahci
[ 1.320320] ata19: SATA max UDMA/133 abar m8192@0xfd4fe000 port 0xfd4fe100 irq 18
[ 1.320323] ata20: SATA max UDMA/133 abar m8192@0xfd4fe000 port 0xfd4fe180 irq 18
[ 1.403886] usb 2-4: new high speed USB device using ehci_hcd and address 4
[ 1.562558] usb 2-4: configuration #1 chosen from 1 choice
[ 16.477824] ata16: SATA link down (SStatus 0 SControl 300)
[ 16.477843] ata19: SATA link down (SStatus 0 SControl 300)
[ 16.477857] ata3: SATA link down (SStatus 0 SControl 300)
[ 16.477895] ata15: SATA link down (SStatus 0 SControl 300)
[ 16.477906] ata20: SATA link down (SStatus 0 SControl 300)
[ 16.477977] ata17: SATA link down (SStatus 0 SControl 300)
[ 16.478003] ata12: SATA link down (SStatus 0 SControl 300)
[ 16.478046] ata13: SATA link down (SStatus 0 SControl 300)
[ 16.478063] ata14: SATA link down (SStatus 0 SControl 300)
[ 16.478108] ata11: SATA link down (SStatus 0 SControl 300)
[ 16.478123] ata18: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
[ 16.478127] ata6: SATA link down (SStatus 0 SControl 300)
[ 16.478157] ata5: SATA link down (SStatus 0 SControl 300)
[ 16.478193] ata18.00: ATAPI: MARVELL VIRTUALL, 1.09, max UDMA/66
After that, it took its sweet time, and I had to keep hitting the space bar to coax it along. Here's some more dmesg output from a little later in the boot process:
[ 17.982291] input: BTC USB Multimedia Keyboard as /devices/pci0000:00/0000:00:13.0/usb5/5-2/5-2:1.0/input/input4
[ 17.982335] generic-usb 0003:046E:5506.0002: input,hidraw1: USB HID v1.10 Keyboard [BTC USB Multimedia Keyboard] on usb-0000:00:13.0-2/input0
[ 18.005211] input: BTC USB Multimedia Keyboard as /devices/pci0000:00/0000:00:13.0/usb5/5-2/5-2:1.1/input/input5
[ 18.005274] generic-usb 0003:046E:5506.0003: input,hiddev96,hidraw2: USB HID v1.10 Device [BTC USB Multimedia Keyboard] on usb-0000:00:13.0-2/input1
[ 22.484906] EXT4-fs (sda6): INFO: recovery required on readonly filesystem
[ 22.484910] EXT4-fs (sda6): write access will be enabled during recovery
[ 22.548542] EXT4-fs (sda6): recovery complete
[ 22.549074] EXT4-fs (sda6): mounted filesystem with ordered data mode
[ 32.516772] Adding 20482832k swap on /dev/sda5. Priority:-1 extents:1 across:20482832k
[ 32.742540] udev: starting version 151
[ 33.002004] Bluetooth: Atheros AR30xx firmware driver ver 1.0
[ 33.008135] parport_pc 00:09: reported by Plug and Play ACPI
[ 33.008186] parport0: PC-style at 0x378, irq 7 [PCSPP,TRISTATE]
[ 33.012076] lp: driver loaded but no devices found
[ 33.037271] ppdev: user-space parallel port driver
[ 33.090256] lp0: using parport0 (interrupt-driven).
Any clues in there?
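One idea I haven't tried yet: the dmesg output shows the SATA links not coming up until around the 16-second mark, so maybe the kernel just needs to be told to wait longer for the root device before giving up. If anyone thinks it's worth a shot, I could add a rootdelay to the kernel command line in /etc/default/grub, something like this (the 90 is a guess):

  # /etc/default/grub -- give slow SATA links extra time before the root device search gives up
  GRUB_CMDLINE_LINUX_DEFAULT="quiet splash rootdelay=90"
  # then apply it with: sudo update-grub

Would that also explain why poking the power button and space bar seems to wake the drives up?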

    Read the article

  • Rules for Naming

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved.
Naming Documents (or is it "Document, Naming"?)
Tis but thy name that is my enemy;
Thou art thyself, though not a Montague.
What's Montague? It is nor hand, nor foot,
Nor arm, nor face, nor any other part
Belonging to a man. O, be some other name!
What's in a name? That which we call a rose
By any other name would smell as sweet;
So Romeo would, were he not Romeo call'd,
Retain that dear perfection which he owes
Without that title. Romeo, doff thy name
And for that name which is no part of thee
Take all myself.
Shakespeare – Romeo and Juliet, Act II, Scene 2
We normally only use the bold portion of the famous Shakespearean quote above, but it is really out of context. As the play unfolds, we learn that a name is all too powerful. Indeed it is because of their names that the doomed lovers die. There might be life and death in a name (BTW, when I wrote this monograph, I was in Hatfield, PA. Remember the Hatfields and the McCoys?) This is a bit extreme, but in the field of Knowledge Management (KM) names are of the utmost importance as well. When I write an article about managing SharePoint sites, how should I name it? "Managing a site" or "Site, managing"? Nine times out of ten I'd opt for the latter. Almost everything we do is "managing", so to make life easier for a person looking for meaningful content, we title our articles starting with the differentiator rather than the common factor. As a rule of thumb, we start the name with the noun rather than the verb. It is not what we do that is the primary key; it is what we do it to. So, answer this – is it a "rule of thumb" or a "thumb rule"? This is tough. A lot of what we do when naming is a judgment call. Both thumb and rule are nouns, albeit concrete and abstract (more about this later), but to most people "thumb rule" is meaningless while "rule of thumb" is an idiom. The difference between knowledge and information is that knowledge is meaningful information placed in context. Thus I elect the "rule of thumb". It is the more meaningful title. Abstract and concrete are relative terms. Many nouns (and verbs) that are abstract to a commoner are concrete to a practitioner of one profession or another, and may even have different concrete meanings in different professional jargons. Think about "running". To an executive it means running a business; to a marathoner its meaning is much more literal. Generally speaking, we store and disseminate knowledge within a practice more than we do in general. Even dictionaries and encyclopedias define terms as they apply to different audiences. The rule of thumb is to put the more concrete first, but within the audience's jargon. Even the title of this monograph is a question. Do I name it "Naming Documents" or "Documents, Naming"? Well, my own rule of thumb ("Here he goes again!?") states that the latter is better because it starts with a noun, but this is a document about naming more than it is about documents. The rules of naming also apply to graphs and charts, Excel spreadsheets, and so on. Thus, I vote for the former.  A better title could have been "Naming Objects", only the word "Object" is a bit too abstract. How about just "Naming" or "Naming, rules of"? You get the drift. One of the ways to resolve all of this is to store the documents in knowledge bases, which may become the subject of a future punditry. Knowledge bases use keywords to describe their content.  Use a metadata store for the keywords to at least attempt some common ground.
Here is another general rule (rule of thumb?!!) – put at least one keyword in the title, and use subtitles. Here is an example: Migrating documents – Screening, cleaning, and organizing our knowledge. The main keyword is "documents"; next is "migrating". Other keywords also appear in the subtitle: "screening", "cleaning", and "organizing". Any questions? Send me an amply named document by email: [email protected]

    Read the article

  • Javascript Noob: How to emulate slideshow on front page by automatically cycling through existing hover effects

    - by Zildjoms
    hey everyone, hope you can help me out. I'm working on this website and I've finished all the hover effects I like - they're exactly how I want them to be: http://s5ent.brinkster.net/beta3.asp - try hovering over the four links and you'll see a very simple fade effect at work, which degrades into a regular CSS hover without JavaScript. What I plan to do is to make the page look like it has a fancy slideshow going on upon loading and while idle, and I want to achieve that by capitalizing on the existing hover styling/behavior of the main page links instead of using another script to create the effect from scratch. To do this I imagine I'll need a script that emulates a hover action on each link at regular time intervals once the page has loaded, starting from left to right (footcare, lawn & equipment, about us, contact us), looping through all 4 links indefinitely (footcare, lawn & equipment, about us, contact us, footcare, lawn & equipment, etc.), but which pauses when any of them has actually been hovered over by a viewer and resumes from wherever the user left off upon mouseout. Hope you get my drift... I also want to achieve this without unnecessarily disrupting the current HTML, so I guess everything will have to be done by scripting as much as possible. I'm very new to JavaScript and jQuery. As you can see at s5ent.brinkster.net/beta3.1-autohover.asp, the following script I made works wrong: it hovers-on all of them at the same time and doesn't hover-out anymore. When you try to actually hover into and out of each link, the link just comes back on:

<script type="text/javascript">
  $(document).ready(function () {
    var speed = 5000;
    var run = setInterval('rotate()', speed);
  });
  function rotate() {
    $('.lilevel1 a').each(function(i) {
      $(this).mouseover();
    });
  }
</script>

It's just gross - aside from the fact that this last bit of script isn't even working in IE. Could you please help me make this thing happen? That'd be really sweet, guys. I know there are tons of geniuses out there who could whip this up in no time - or if you have a better way to go about it, by all means let me know. Thanks guys, hope you're all having a blast.
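edit: here's the direction I'm thinking of now - completely untested, the selector is just copied from my page and the timing is guessed. The idea is to hover one link at a time, un-hover the previous one, and pause the cycle while a real mouse is over any of the links:

<script type="text/javascript">
  $(document).ready(function () {
    var links = $('.lilevel1 a');
    var idx = 0, paused = false;
    // .hover() binds mouseenter/mouseleave, so the triggered mouseover()/mouseout()
    // calls below shouldn't trip the pause flag - only a real visitor does
    links.hover(function () { paused = true; },
                function () { paused = false; });
    setInterval(function () {
      if (paused) return;
      // un-hover the previously highlighted link...
      links.eq((idx + links.length - 1) % links.length).mouseout();
      // ...then hover the current one and advance
      links.eq(idx).mouseover();
      idx = (idx + 1) % links.length;
    }, 5000);
  });
</script>

Does that look like a sane approach, or am I way off?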

    Read the article

  • How much abstraction is too much?

    - by Daniel Bingham
    In an Object Oriented Program: How much abstraction is too much? How much is just right? I have always been a nuts and bolts kind of guy. I understood the concept behind high levels of encapsulation and abstraction, but always felt instinctively that adding too much would just confuse the program. I always tried to shoot for an amount of abstraction that left no empty classes or layers. And where in doubt, instead of adding a new layer to the hierarchy, I would try and fit something into the existing layers. However, recently I've been encountering more highly abstracted systems. Systems where everything that could require a representation later in the hierarchy gets one up front. This leads to a lot of empty layers, which at first seems like bad design. However, on second thought I've come to realize that leaving those empty layers gives you more places to hook into in the future without much refactoring. It leaves you greater ability to add new functionality on top of the old without doing nearly as much work to adjust the old. The two risks of this seem to be that you could get the layers you need wrong. In this case one would wind up still needing to do substantial refactoring to extend the code and would still have a ton of never-used layers. But depending on how much time you spend coming up with the initial abstractions, the chance of screwing it up, and the time that could be saved later if you get it right - it may still be worth it to try. The other risk I can think of is the risk of overdoing it and never needing all the extra layers. But is that really so bad? Are extra class layers really so expensive that it is much of a loss if they are never used? The biggest expense and loss here would be time that is lost up front coming up with the layers. But much of that time still might be saved later when one can work with the abstracted code rather than more low-level code. So when is it too much? At what point do the empty layers and extra "might need" abstractions become overkill? How little is too little? Where's the sweet spot? Are there any dependable rules of thumb you've found in the course of your career that help you judge the amount of abstraction needed?

    Read the article

  • JSF Navigation issue Facelets and Beans

    - by ortho
    Hi, I have an issue with navigation in my simple JSF system. I have MainBean, which has two methods: public String register() and public String login(). I have played with faces-config.xml for several hours and I feel like I'm missing something very important, because I think I have tried all the simple solutions so far :). I have added the MySQL connector (JDBC) to Tomcat's lib folder and I am able to register the table to the MySQL database. It even allows my users to log in to the page. The only problem is that I can't use navigation in any other page than login.xhtml. Seems like navigation is only active on this one. I tried to use * but no joy. I am sure that there is a simple fix for that and someone will come up with the correct solution soon. Let's skip all the MySQL part and try to fix the navigation issue please. faces-config.xml:

<faces-config xmlns="http://java.sun.com/xml/ns/javaee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
        http://java.sun.com/xml/ns/javaee/web-facesconfig_1_2.xsd"
    version="1.2">
  <application>
    <view-handler>com.sun.facelets.FaceletViewHandler</view-handler>
  </application>
  <managed-bean>
    <managed-bean-name>mainBean</managed-bean-name>
    <managed-bean-class>dk.itu.beans.MainBean</managed-bean-class>
    <managed-bean-scope>session</managed-bean-scope>
  </managed-bean>
  <managed-bean>
    <managed-bean-name>registerBean</managed-bean-name>
    <managed-bean-class>dk.itu.beans.RegisterBean</managed-bean-class>
    <managed-bean-scope>session</managed-bean-scope>
  </managed-bean>
  <navigation-rule>
    <from-view-id>/login.xhtml</from-view-id>
    <navigation-case>
      <from-outcome>success</from-outcome>
      <to-view-id>/welcome.xhtml</to-view-id>
    </navigation-case>
    <navigation-case>
      <from-outcome>failure</from-outcome>
      <to-view-id>/login_failed.xhtml</to-view-id>
    </navigation-case>
    <navigation-case>
      <from-outcome>sign</from-outcome>
      <to-view-id>/register.xhtml</to-view-id>
    </navigation-case>
  </navigation-rule>
  <navigation-rule>
    <from-view-id>/login_failed.xhtml</from-view-id>
    <navigation-case>
      <from-outcome>back</from-outcome>
      <to-view-id>/login.xhtml</to-view-id>
    </navigation-case>
  </navigation-rule>
</faces-config>

Here the last navigation-rule, the one from login_failed.xhtml, does not work at all. In login.xhtml (which is the main starting view) and in login_failed.xhtml I tried many options. I used an action="#{mainBean.register}" method that returns the "sign" string, and none of these worked. There is one more file (not specified in the faces-config.xml file, because it did not work either - but going from login.xhtml by button works fine. I tried to manage navigation from login_failed.xhtml first, then I will apply the same rule for registration to come back to the login page when a customer registers his nick name). As for register.xhtml: basically mainBean.register now calls the database and returns a string, but obviously it doesn't navigate to any view (though it does give an entry to the database). I believe it is a simple fix for most experienced web developers and any help will be greatly appreciated. I use Eclipse, Tomcat 6 and Windows Vista if that helps :) Thank you in advance. Kindest regards.

    Read the article

  • WPF ListView Data Binding won't synchronize

    - by Aviatrix
    My ListView will only display the items that are added by the constructor; it won't show the items I add dynamically.

public class PostList : ObservableCollection<PostData>
{
    public PostList() : base()
    {
        Add(new PostData("Isak", "Dinesen"));
        // Add(new PostData("Victor", "Hugo"));
        // Add(new PostData("Jules", "Verne"));
    }

    public void AddToList(string[] data)
    {
        string username2, msg;
        msg = data[0];
        username2 = data[1];
        Add(new PostData(msg, username2));
    }
}

public class PostData
{
    private string Username;
    private string Msg;
    private string Avatar;
    private string LinkAttached;
    private string PicAttached;
    private string VideoAttached;

    public PostData(string msg, string username, string avatar=null, string link=null, string pic=null, string video=null)
    {
        this.Username = username;
        this.Msg = msg;
        this.Avatar = avatar;
        this.LinkAttached = link;
        this.PicAttached = pic;
        this.VideoAttached = video;
    }

    public string pMsg
    {
        get { return Msg; }
        set { Msg = value; }
    }

    public string pUsername
    {
        get { return Username; }
        set { Username = value; }
    }
    .....
    .....
}

Xaml:

<ListView x:Name="PostListView" BorderThickness="0" MinHeight="300" Background="{x:Null}"
          BorderBrush="{x:Null}" Foreground="{x:Null}" VerticalContentAlignment="Top"
          ScrollViewer.HorizontalScrollBarVisibility="Disabled"
          ScrollViewer.VerticalScrollBarVisibility="Disabled"
          IsSynchronizedWithCurrentItem="True" MinWidth="332" SelectedIndex="0"
          SelectionMode="Extended" AlternationCount="1"
          ItemsSource="{Binding Source={StaticResource PostListData}}">
  <ListView.ItemTemplate>
    <DataTemplate>
      <DockPanel x:Name="SinglePost" VerticalAlignment="Top" ScrollViewer.CanContentScroll="True"
                 ClipToBounds="True" Width="333" Height="70"
                 d:LayoutOverrides="VerticalAlignment" d:IsEffectDisabled="True">
        <StackPanel x:Name="AvatarNickHolder" Width="60">
          <Label x:Name="Nick" HorizontalAlignment="Center" Margin="5,0" VerticalAlignment="Top"
                 Height="15" Content="{Binding Path=pUsername}" FontFamily="Arial"
                 FontSize="10.667" Padding="5,0"/>
          <Image x:Name="Avatar" HorizontalAlignment="Center" Margin="5,0,5,5" VerticalAlignment="Top"
                 Width="50" Height="50" IsHitTestVisible="False" Source="1045443356IMG_0972.jpg"
                 Stretch="UniformToFill"/>
        </StackPanel>
        <TextBlock x:Name="userPostText" Margin="0,0,5,0" VerticalAlignment="Center" FontSize="10.667"
                   Text="{Binding Path=pMsg}" TextWrapping="Wrap"/>
      </DockPanel>
    </DataTemplate>
  </ListView.ItemTemplate>
</ListView>
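Update: could the problem be that the PostList I call AddToList on is not the same instance the StaticResource created for the binding? This is what I'm going to try next (untested, and it assumes the resource key matches my XAML; data is the string[] I already have):

// fetch the exact instance the ListView is bound to...
var posts = (PostList)FindResource("PostListData");
// ...and make sure the add happens on the UI thread
Dispatcher.Invoke((Action)(() => posts.AddToList(data)));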

    Read the article

  • Can simple javascript inheritance be simplified even further?

    - by Will
    John Resig (of jQuery fame) provides a concise and elegant way to allow simple JavaScript inheritance. It was so short and sweet, in fact, that it inspired me to try and simplify it even further (see code below). I've modified his original function such that it still passes all his tests and has the potential advantage of:
- readability (50% less code)
- simplicity (you don't have to be a ninja to understand it)
- performance (no extra wrappers around super/base method calls)
- consistency with C#'s base keyword
Because this seems almost too good to be true, I want to make sure my logic doesn't have any fundamental flaws/holes/bugs, or if anyone has additional suggestions to improve or refute the code (perhaps even John Resig could chime in here!). Does anyone see anything wrong with my approach (below) vs. John Resig's original approach?

if (!window.Class) {
  window.Class = function() {};
  window.Class.extend = function(members) {
    var prototype = new this();
    for (var i in members)
      prototype[i] = members[i];
    prototype.base = this.prototype;
    function object() {
      if (object.caller == null && this.initialize)
        this.initialize.apply(this, arguments);
    }
    object.constructor = object;
    object.prototype = prototype;
    object.extend = arguments.callee;
    return object;
  };
}

And the tests (below) are nearly identical to the original ones except for the syntax around base/super method calls (for the reason enumerated above):

var Person = Class.extend({
  initialize: function(isDancing) {
    this.dancing = isDancing;
  },
  dance: function() {
    return this.dancing;
  }
});

var Ninja = Person.extend({
  initialize: function() {
    this.base.initialize(false);
  },
  dance: function() {
    return this.base.dance();
  },
  swingSword: function() {
    return true;
  }
});

var p = new Person(true);
alert("true? " + p.dance()); // => true

var n = new Ninja();
alert("false? " + n.dance()); // => false
alert("true? " + n.swingSword()); // => true
alert("true? " + (p instanceof Person && p instanceof Class &&
                  n instanceof Ninja && n instanceof Person && n instanceof Class));

    Read the article

  • Problem with jQuery.ajax with 'delete' method in ie

    - by Max Williams
    I have a page where the user can edit various content using buttons and selects that trigger ajax calls. In particular, one action causes a url to be called remotely, with some data and a 'put' request, which (as I'm using a RESTful Rails backend) triggers my update action. I also have a delete button which calls the same url but with a 'delete' request. The 'update' ajax call works in all browsers but the 'delete' one doesn't work in IE. I've got a vague memory of encountering something like this before... can anyone shed any light? Here are my ajax calls:

//update action - works in all browsers
jQuery.ajax({
  async:true,
  data:data,
  dataType:'script',
  type:'put',
  url:"/quizzes/"+quizId+"/quiz_questions/"+quizQuestionId,
  success: function(msg){
    initializeQuizQuestions();
    setPublishButtonStatus();
  }
});

//delete action - fails in ie
function deleteQuizQuestion(quizQuestionId, quizId){
  //send ajax call to back end to change the difficulty of the quiz question
  //back end will then refresh the relevant parts of the page (progress bars, flashes, quiz status)
  jQuery.ajax({
    async:true,
    dataType:'script',
    type:'delete',
    url:"/quizzes/"+quizId+"/quiz_questions/"+quizQuestionId,
    success: function(msg){
      alert("success");
      initializeQuizQuestions();
      setSelectStatus(quizQuestionId, true);
      jQuery("tr[id*='quiz_question_"+quizQuestionId+"']").removeClass('selected');
    },
    error: function(msg){
      alert("error:" + msg);
    }
  });
}

I put the alerts in success and error in the delete ajax just to see what happens, and the 'error' part of the ajax call is triggered, but WITH NO CALL BEING MADE TO THE BACK END (I know this by watching my back end server logs). So, it fails before it even makes the call. I can't work out why - the 'msg' I get back from the error block is blank. Any ideas anyone? Is this a known problem? I've tested it in IE6 and IE8 and it doesn't work in either. thanks - max

EDIT - the solution - thanks to Nick Craver for pointing me in the right direction. Rails (and maybe other frameworks?) has a subterfuge for the unsupported put and delete requests: a post request with the parameter "_method" (note the underscore) set to 'put' or 'delete' will be treated as if the actual request type was that string. So, in my case, I made this change - note the 'data' option:

  jQuery.ajax({
    async:true,
    data: {"_method":"delete"},
    dataType:'script',
    type:'post',
    url:"/quizzes/"+quizId+"/quiz_questions/"+quizQuestionId,
    success: function(msg){
      alert("success");
      initializeQuizQuestions();
      setSelectStatus(quizQuestionId, true);
      jQuery("tr[id*='quiz_question_"+quizQuestionId+"']").removeClass('selected');
    },
    error: function(msg){
      alert("error:" + msg);
    }
  });
}

Rails will now treat this as if it were a delete request, preserving the REST system. The reason my PUT example worked was just because in this particular case IE was happy to send a PUT request, but it officially does not support them, so it's best to do this for PUT requests as well as DELETE requests.

    Read the article

  • Prolog: Sentence Parser Problem

    - by Devon
    Hey guys, Been sat here for hours now just staring at this code and have no idea what I'm doing wrong. I know what's happening from tracing the code through (it goes into an infinite loop when it hits verbPhrase). Any tips are more than welcome. Thank you.

% Knowledge-base
det(the).
det(a).
adjective(quick).
adjective(brown).
adjective(orange).
adjective(sweet).
noun(cat).
noun(mat).
noun(fox).
noun(cucumber).
noun(saw).
noun(mother).
noun(father).
noun(family).
noun(depression).
prep(on).
prep(with).
verb(sat).
verb(nibbled).
verb(ran).
verb(looked).
verb(is).
verb(has).

% Sentence Structures
sentence(Phrase) :-
    append(NounPhrase, VerbPhrase, Phrase),
    nounPhrase(NounPhrase),
    verbPhrase(VerbPhrase).
sentence(Phrase) :-
    verbPhrase(Phrase).

nounPhrase([]).
nounPhrase([Head | Tail]) :-
    det(Head),
    nounPhrase2(Tail).
nounPhrase(Phrase) :-
    nounPhrase2(Phrase).
nounPhrase(Phrase) :-
    append(NP, PP, Phrase),
    nounPhrase(NP),
    prepPhrase(PP).

nounPhrase2([]).
nounPhrase2(Word) :-
    noun(Word).
nounPhrase2([Head | Tail]) :-
    adjective(Head),
    nounPhrase2(Tail).

prepPhrase([]).
prepPhrase([Head | Tail]) :-
    prep(Head),
    nounPhrase(Tail).

verbPhrase([]).
verbPhrase(Word) :-
    verb(Word).
verbPhrase([Head | Tail]) :-
    verb(Head),
    nounPhrase(Tail).
verbPhrase(Phrase) :-
    append(VP, PP, Phrase),
    verbPhrase(VP),
    prepPhrase(PP).
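Update: I think I can see it now. In the last verbPhrase clause, append(VP, PP, Phrase) is allowed to choose the split VP = Phrase with PP = [], and since both verbPhrase([]) and prepPhrase([]) succeed, verbPhrase ends up calling itself with the exact same list forever. Forcing both halves of the split to be non-empty looks like it should stop the looping, though I haven't tested it properly yet:

verbPhrase(Phrase) :-
    append(VP, PP, Phrase),
    VP \= [],          % don't allow an empty verb phrase half
    PP \= [],          % don't allow an empty prep phrase half
    verbPhrase(VP),
    prepPhrase(PP).

(The append clause in nounPhrase would presumably need the same guard, since it can loop the same way.)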

    Read the article

  • Locating Rogue Perl Script

    - by Gary Garside
    I've been trying to source the location of a perl script which is causing havoc on a server which I control. I'm also trying to find out exactly how this script was installed on the server - my best guess is through a WordPress exploit. The server is a basic web setup running Ubuntu 9.04, Apache and MySQL. I use IPTables for the firewall, the server runs around 20 sites, and the load never really creeps above 0.7. From what I can see, the script is making outbound connections to other servers (most likely trying to brute force entry). Here is a top dump of one of the processes:

PID   USER     PR NI VIRT  RES  SHR S %CPU %MEM TIME+    COMMAND
22569 www-data 20 0  22784 3216 780 R 100  0.2  47:00.60 perl

The command the process is running is /usr/sbin/sshd. I've tried to find an exact file name but I'm having no luck... I've run lsof -p PID and here is the output:

COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
perl 22569 www-data cwd DIR 8,6 4096 2 /
perl 22569 www-data rtd DIR 8,6 4096 2 /
perl 22569 www-data txt REG 8,6 10336 162220 /usr/bin/perl
perl 22569 www-data mem REG 8,6 26936 170219 /usr/lib/perl/5.10.0/auto/Socket/Socket.so
perl 22569 www-data mem REG 8,6 22808 170214 /usr/lib/perl/5.10.0/auto/IO/IO.so
perl 22569 www-data mem REG 8,6 39112 145112 /lib/libcrypt-2.9.so
perl 22569 www-data mem REG 8,6 1502512 145124 /lib/libc-2.9.so
perl 22569 www-data mem REG 8,6 130151 145113 /lib/libpthread-2.9.so
perl 22569 www-data mem REG 8,6 542928 145122 /lib/libm-2.9.so
perl 22569 www-data mem REG 8,6 14608 145125 /lib/libdl-2.9.so
perl 22569 www-data mem REG 8,6 1503704 162222 /usr/lib/libperl.so.5.10.0
perl 22569 www-data mem REG 8,6 135680 145116 /lib/ld-2.9.so
perl 22569 www-data 0r FIFO 0,6 157216 pipe
perl 22569 www-data 1w FIFO 0,6 197642 pipe
perl 22569 www-data 2w FIFO 0,6 197642 pipe
perl 22569 www-data 3w FIFO 0,6 197642 pipe
perl 22569 www-data 4u IPv4 383991 TCP outsidesoftware.com:56869->server12.34.56.78.live-servers.net:www (ESTABLISHED)

My gut feeling is that outsidesoftware.com is also under attack, or possibly being used as a tunnel. I've managed to find a number of rogue files in /tmp and /var/tmp. Here is a brief excerpt from one of these files:

#!/usr/bin/perl
# this spreader is coded by xdh
# xdh@xxxxxxxxxxx
# only for testing...
my @nickname = ("vn");
my $nick = $nickname[rand scalar @nickname];
my $ircname = $nickname[rand scalar @nickname];
#system("kill -9 `ps ax |grep httpdse |grep -v grep|awk '{print $1;}'`");
my $processo = '/usr/sbin/sshd';

The full file contents can be viewed here: http://pastebin.com/yenFRrGP

I'm trying to achieve a couple of things here... First, I need to stop these processes from running, either by disabling outbound SSH or via IPTables rules etc... These scripts have been running for around 36 hours now and my main concern is to stop them running and respawning by themselves. Second, I need to try and source where and how these scripts were installed. If anybody has any advice on what to look for in access logs or anything else I would be really grateful. Thanks in advance
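EDIT: for what it's worth, the containment plan I've come up with so far is below - does it look sane? The ports are guesses until I work out where the script actually connects (the lsof output above only shows a connection to port 80):

# kill the running process
kill -9 22569
# stop www-data from opening new outbound SSH/IRC connections
iptables -A OUTPUT -m owner --uid-owner www-data -p tcp --dport 22 -j DROP
iptables -A OUTPUT -m owner --uid-owner www-data -p tcp --dport 6667 -j DROP
# look for other recently dropped files and the request that planted them
find /tmp /var/tmp -type f -mtime -7 -ls
grep -iRl "base64_decode" /var/www --include=*.php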

    Read the article

  • Issues configuring Exchange 2010 as well as SSL problems.

    - by Eric Smith
    Possibly-Relevant Background Info: I've recently moved up from icky shared hosting to a glorious, Remote Desktop-administered VPS server running Windows Server 2008 R2. Even though I'm only 21 now and a computer science major, I've tried to play with every Windows Server release since '03, just to learn new things. What usually happens is inevitably I'll do something wrong and pretty much ruin the install. You're dealing with an amateur here :) Through the past few months of working with my new server, I've mastered DNS, IIS, got Team Foundation Server running (yay!), and can install all of the other basics like SQL Server and Active Directory.
The Problem: Now, these last few weeks I've been trying to install Exchange Server 2010 (SP1). To make a long story short, it took me several attempts, and I even had to get my server wiped just so I could start fresh since Exchange decided uninstalling properly was for sissies (cost me $20, bah). Today, at long last, I got Exchange mostly working. There were two main problems left, however, that left me unsatisfied:
1. Exchange installed itself and all of its child sites into Default Web Site. I wanted to access Exchange via mail.domain.com, but instead everything was configured to domain.com. My limited server admin knowledge was not enough to configure IIS or Exchange to move itself over to the website I had set up for it, appropriately titled 'mail.domain.com', which I had bound to a dedicated IP address (I was told this was necessary, but he may have been wrong).
2. I have two SSL certificates: one for my main domain and one for my mail subdomain. For whatever reason, I had issues getting Exchange to use my mail certificate, even though I had assigned the proper roles in the MMC. I did, at one point, get it to work (or mostly work, anyways. Frankly, my memory of today is clouded by intense frustration). Additionally, I was confused about which type of SSL certificate I should be using for Exchange. My SSL provider, GoDaddy, allows me to request a new certificate whenever, so I can use either the certificate request provided by IIS or the more complicated and specific request you can create with Exchange. Which type should I be using, the IIS or Exchange certificate? If I must use the Exchange certificate, will that 1) cause issues when I bind that certificate to my mail.domain.com subdomain or 2) is that an unnecessary step?
The SSL Certificate Strikes Back: When I thought I had the proper SSL certificate assigned for those brief, sweet moments, Google Chrome reported the correct mail.domain.com certificate when browsing https://mail.domain.com. However, Outlook 2010 threw up an error when trying to configure my email account, claiming that the certificate didn't match the domain of "mail.domain.com". Is this an issue that will be resolved by problem #2 or is it a separate one entirely?
Apologies for the massive wall of text, but I wanted to provide as much info as I possibly could. Exchange is the last thing I'd like installed on my server, and naturally it's turning out to be the hardest. Thanks for any info at all. Even a point in a vague direction would be a huge help at this point. Thanks! -Eric
P.S.: The reason I keep ruining my install is that when I attempt to uninstall Exchange, something invariably goes wrong. The last time the uninstaller complained that there was still a mailbox active and it couldn't proceed until I deleted it. ... The only mailbox left was the Administrator account, the built-in one I couldn't delete.
So I attempted to manually uninstall it following several guides online only to now be stuck unable to launch the installer and have to get my system wiped AGAIN for the second time today ($40 down the drain, bah!). I do not understand at all why "uninstall" just can't mean "hey, you, delete everything and go away". There's not even a force uninstall option, only a "recover system" option that just fails to fix anything and makes it so I can't even use the GUI uninstaller. </rant>
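P.P.S.: In case it helps anyone point me in the right direction, the route I'm planning to try next for the certificate is the Exchange Management Shell request rather than the IIS one, roughly like this (domain and file names are placeholders, and I'm not certain I have the parameters exactly right):

$req = New-ExchangeCertificate -GenerateRequest -SubjectName "CN=mail.domain.com" -DomainName "mail.domain.com" -PrivateKeyExportable $true
Set-Content -Path C:\mail_domain_com.req -Value $req
# after the provider issues the certificate:
Import-ExchangeCertificate -FileData ([Byte[]]$(Get-Content -Path C:\mail_domain_com.crt -Encoding Byte -ReadCount 0)) | Enable-ExchangeCertificate -Services "IIS"

My understanding is that generating the request from Exchange is what makes the certificate's names line up with what Outlook expects, but corrections are welcome.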

    Read the article

< Previous Page | 51 52 53 54 55 56 57 58  | Next Page >