Search Results

Search found 14184 results on 568 pages for 'peter small'.


  • Pet Store Loyalty Programs: I'm Not Loyal Yet!

    - by ruth.donohue
    After two years of constantly being asked (aka "pestered") by my now eight-year-old daughter for a dog (or any pet more interactive than a goldfish), I've finally compromised on a hamster, purely by chance. Friends of ours had recently brought home a female hamster, and (surprise, surprise) two weeks later they were looking for homes for 11 baby hamster pups. Since the pups were not yet ready to be weaned from their mother, my daughter and I had several weeks to get ready -- and we spent that extra time visiting a number of local pet stores and purchasing an assortment of hamster books, toys, exercise equipment, food, bedding, and a cage -- not cheap! Now, I'm usually an online shopper (i.e. I love reading user reviews and comparing prices), but for kids, there is absolutely no online substitute for actually walking into a store and physically picking out something you want.

    We have two competing pet shops within close proximity to where we live, and I signed up for their rewards programs to get discounts on select items. I'm sure it takes a while to get my data into the system (after all, I did fill out a form the old-fashioned way), but as it has been more than two weeks for one store and over a week for the other, the window of opportunity is getting smaller, as by now we pretty much have most of what we think we need. Everything I've purchased has been purely hamster or small animal related, so in an ideal world, the stores would have me easily figured out as a hamster owner. Here is what I would be expecting of a loyalty rewards program:

    - Point me to some useful links, either information provided by the company or external websites where I can learn more. Any value-add a business can provide to make my life easier makes me a much more loyal customer. What things can I expect as a new pet owner? Any hamster communities? Any hamster-related events? Any vets that specialize in small animals in the vicinity?
    - Send me an email with other related products I may be interested in. Upsell and cross-sell to me. We've got the basics and a couple of luxuries, but at this point, I'm pretty excited (surprisingly) about the hamster, and my daughter is footing the bill with her birthday and Christmas money. She and I would be more than happy to spend her money!
    - Get this information to me faster. As I mentioned, my window of opportunity is getting smaller, as either my daughter's money will run out on other things or we'll start losing the thrill of buying new hamster toys and treats.

    I realize this is easier said than done, and undoubtedly, the stores are getting value knowing my basic customer information and purchase history. But they could really benefit by delivering a loyalty program that actually earned my loyalty. "Goldeen" needs a new water bottle, yogurt chips, and chew toys, as he doesn't seem to like the ones we bought. So for now, I'll just go to whichever store is the most convenient.

    Oh, and just for fun (not related to this post), here are a couple of videos my daughter really got a kick out of watching: Hamster on a Piano and Tic in a Spin-Dryer

    Read the article

  • ‘Publish…’ Resulting in Directory With No Files

    - by ToStringTheory
    I was pulling my hair out with this one…  Which isn’t good considering I have so little of it left!  I had just upgraded to the Windows Azure 1.7 SDK the day before with no problems, and had used the upgraded ‘Publish…’ dialog to successfully publish a website to my hard disk for hosting on an internal development server.  However, when trying to deploy another project to my file system, it said it was successful, but there were no files in the directory.  The only difference: the first project was an Azure project, the second a standard ASP.NET Web Application.  If you installed the Windows Azure 1.7 SDK, you may want to read this.

    The Problem
    At first it appears that there is no problem: the publish reports success.  However, you may remember that when publishing a web application, the output window generally iterates through each of the directories as it copies the files over.  Sure enough, when looking at the output directory – there are no files, no bin directory, no nothing…

    Troubleshooting
    Since one site published and the other did not, I believed that the failure might have been due to a failed SQL Server 2012 installation that happened between the two publishes.  I rolled back the installation, but that did not work either.  I also checked the Configuration Manager dialog and ensured that the projects were actually selected to build (just checking, even though the output said it built them).  I checked the properties of the solution and the projects, and a selection of files in the project, to make sure they were marked as content…  Nothing seemed to work.

    I then decided to uninstall the Azure 1.7 SDK to see if that was the culprit.  When I opened the Windows 7 ‘Uninstall a Program’ dialog, I noticed that the Azure SDK came with two extra packages that just so happen to be in a Release Candidate state from Microsoft – ‘Microsoft Web Deploy 3.0’ and ‘Microsoft Web Publish – Visual Studio 2010’.  It dawned on me that the publish dialog must not be just for Azure, since it appeared when I tried to deploy the regular web application as well.  Therefore, it must have been an upgrade to the publish mechanism in Visual Studio.  I uninstalled both of the programs, got my old publish dialog back, and was able to successfully publish the solution above as I had done before.

    After celebrating solving the problem, I tried reinstalling the Azure package to see if it would repair the publishing process.  Even though it brought back the updated dialogs, it did not publish any files.  Instead of uninstalling and retreating, I now KNEW what the cause was, and that these were packages not just for Azure.  I now knew a product name to search for.

    The Solution
    Sure enough, with the correct search term in Google – ‘microsoft web publish no files’ – and setting the timeline to one week, I found what I needed: Microsoft Connect – Publish Web Application FAILS! (by Andrew Rits).  I am surprised that I missed something that ended up being so simple…  In the Configuration Manager, I had the settings I had always used for building and debugging the solution.  However, apparently the new Web Publishing package does things a little differently in its configuration for publishing: the configuration there was set to ‘x86’ instead of ‘Any CPU’.  Sure enough, as soon as I switched the configuration to ‘Release – Any CPU’, the deployment built and published all of my files as I expected.

    Conclusion
    It was a small change, but apparently the new ‘Publish Web Application’ dialog defaults to the x86 configuration, thereby not copying any of the project/bin files to the publish target directory.  I spent forever trying things, but this small dropdown eluded me; the dialog was actually working all along, I just didn’t have the correct configuration selected.  I hope this saves you the hours of frustration and hastened hair loss that it caused me.  I also hope that before Microsoft brings this publishing package out of RC status, they change the behavior of that menu to default to the settings of the old publish dialog.  Happy Coding!

    Read the article

  • Getting a 'base' Domain from a Domain

    - by Rick Strahl
    Here's a simple one: how do you reliably get the base domain from a full domain name or URI? Specifically, I've run into this scenario in a few recent applications when creating the Forms Auth cookie in my ASP.NET applications, where I explicitly need to force the domain name to the common base domain. So www.west-wind.com, store.west-wind.com, west-wind.com, and dev.west-wind.com all should return west-wind.com. Here's the code where I need to use this type of logic for issuing an AuthTicket explicitly:

    private void IssueAuthTicket(UserState userState, bool rememberMe)
    {
        FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(
            1, userState.UserId,
            DateTime.Now, DateTime.Now.AddDays(10),
            rememberMe, userState.ToString());

        string ticketString = FormsAuthentication.Encrypt(ticket);
        HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName, ticketString);
        cookie.HttpOnly = true;

        if (rememberMe)
            cookie.Expires = DateTime.Now.AddDays(10);

        // write out a domain cookie
        cookie.Domain = Request.Url.GetBaseDomain();

        HttpContext.Response.Cookies.Add(cookie);
    }

    Unfortunately there's no Uri.GetBaseDomain() method, as I was surprised to find out. So I ended up creating one:

    public static class NetworkUtils
    {
        /// <summary>
        /// Retrieves a base domain name from a full domain name.
        /// For example: www.west-wind.com produces west-wind.com
        /// </summary>
        /// <param name="domainName">Dns Domain name as a string</param>
        /// <returns></returns>
        public static string GetBaseDomain(string domainName)
        {
            var tokens = domainName.Split('.');

            // only split 3 segments like www.west-wind.com
            if (tokens == null || tokens.Length != 3)
                return domainName;

            var tok = new List<string>(tokens);
            var remove = tokens.Length - 2;
            tok.RemoveRange(0, remove);

            return tok[0] + "." + tok[1];
        }

        /// <summary>
        /// Returns the base domain from a domain name
        /// Example: http://www.west-wind.com returns west-wind.com
        /// </summary>
        /// <param name="uri"></param>
        /// <returns></returns>
        public static string GetBaseDomain(this Uri uri)
        {
            if (uri.HostNameType == UriHostNameType.Dns)
                return GetBaseDomain(uri.DnsSafeHost);

            return uri.Host;
        }
    }

    I've had a need for this so frequently that it warranted a couple of helpers. The second helper is an extension method on the Uri class, which is what's used in the first code sample. This is the preferred way to call this, since the Uri class can differentiate between DNS names and IP addresses. If you use the first, string-based version, there's a little more guessing going on as to whether a URL is an IP address.

    There are a couple of small twists in dealing with 'domain names'. When passing only a string, there's a possibility that you're not actually passing a domain name but an IP address, so the code explicitly checks for three domain segments (can there be more than 3?). IPv4 addresses have four segments and IPv6 addresses have none, so they'll fall through. Then there are things like localhost or a NetBIOS machine name, which also come back in URL strings but also shouldn't be handled.
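    As a quick sanity check, here's a minimal usage sketch (the host names are hypothetical examples, and it assumes the NetworkUtils class above is in scope):

    // expected results, based on the logic above
    var uri = new Uri("http://store.west-wind.com/admin/login.aspx");
    Console.WriteLine(uri.GetBaseDomain());                              // west-wind.com
    Console.WriteLine(NetworkUtils.GetBaseDomain("dev.west-wind.com"));  // west-wind.com
    Console.WriteLine(NetworkUtils.GetBaseDomain("127.0.0.1"));          // unchanged: four segments, falls through
    Console.WriteLine(NetworkUtils.GetBaseDomain("localhost"));          // unchanged: one segment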
    Anyway, small thing, but maybe somebody else will find this useful.

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in ASP.NET  Networking

    Read the article

  • Last GUID used up - new ScottGuID unique ID to replace it

    - by Eilon
    You might have heard in recent news that the last ever GUID was used up. The GUID {FFFFFFFF-FFFF-FFFF-FFFF-FFFFFFFFFFFF} was just consumed by a soon-to-be-released project at Microsoft. Immediately after the GUID's creation, word spread around the Microsoft campuses around the globe. Microsoft's approximately 100,000 worldwide employees then started blogging, tweeting, and facebooking about the dubious "achievement." The following screenshot shows GUIDGEN (the Windows tool for creating GUIDs) with the last ever GUID.

    All GUIDs created by projects at Microsoft must be registered in a central repository for record keeping. This allows quick-fix engineers, security engineers, anti-malware developers, and testers to do a quick lookup of an unknown GUID and find out if it belongs to Microsoft. The following screenshot shows the Microsoft GUID Tracker internal application and the last few GUIDs being used up by various Microsoft projects.

    What is perhaps more interesting than the news about the GUID is the project that used that last GUID. The recent announcements regarding the development experience for the Windows Phone 7 Series (WP7S) all involve free editions of Visual Studio 2010. One of the lesser-known developer tools is based on a resurrected project that many of you are probably familiar with, but have never used. The tool is in fact Microsoft Bob 7 Series (MB7S). MB7S is an agent-based approach to mobile phone app development. The UI incorporates both natural language interfaces and motion gesture behaviors, similar to the Windows Phone 7 Series "Metro" interface. If it works, it will help to expand the breadth of mobile app developers.

    After the GUID: The ScottGuID
    It came as no big surprise that eventually the last GUID would be used up. Knowing this, a group of engineers at Microsoft designed, implemented, and tested a replacement to the GUID: the ScottGuID. There are several core principles of the ScottGuID:
    1. The concepts used in ScottGuIDs must be easily understood by a developer who is already familiar with GUIDs
    2. There must exist a compatibility layer between ScottGuIDs and GUIDs
    3. A ScottGuID must be usable in a practical manner in non-computing environments
    4. There must exist ScottGuID APIs for all common platforms: Win32/Win64/WinCE, .NET (incl. Silverlight), Linux, FreeBSD, MacOS (incl. iPhone OS), Symbian, RIM BlackBerry, Google Android, etc.
    5. ScottGuIDs must never run out

    ScottGuID use cases
    One of the more subtle principles of the ScottGuID is principle #3. While technically a GUID could be used in any environment, it was not practical to do so in terms of data entry and error detection. In order for the ScottGuID to be a true universal ID, it must be usable in non-computing environments. Prior to the announcement of the ScottGuID there have been a number of until-now confidential projects. One of the tools that will soon become public is ScottGuIDGen, which is in essence an updated version of GUIDGEN that can create ScottGuIDs. The following screenshot shows a sample ScottGuID. To demonstrate the various applications of the ScottGuID there were test deployments around the globe. The following examples are a small showcase of the applications that have already been prototyped: logging in to Hotmail, paying for gas, signing in to Twitter, and dispensing cat food.

    Conclusion
    I hope that this brief introduction to the ScottGuID shows how technology can continue to move forward, even when it appears there is a point that cannot be passed. With a small number of principles, a team of smart engineers, and a passion for "getting it right", the ScottGuID should last well past our lifetimes. In the coming months expect further announcements regarding additional developer tools, samples, whitepapers, podcasts, and videos. Please leave a comment on this post if you have any questions about the ScottGuID or what you would like to see us do with it. With ScottGuID, the possibilities are nearly endless and we want to stretch their reach as far as possible.

    Read the article

  • Observations From The Corner of a Starbucks

    - by Chris Williams
    I’ve spent the last three days sitting in a Starbucks for 4-8 hours at a time. As a result, I’ve observed a lot of interesting behavior and people (most of whom were uninteresting themselves). One of the things I’ve noticed is that most people don’t sit down. They come in, get their drink, and go. The ones that do sit down stay much longer than it takes to consume their drink. The drink is just an incidental purchase, certainly not the reason they are here. Most of the people who sit also have laptops, probably around 75%. Only a few have kids (with them), but the ones that do have very small kids, toddlers or younger.

    Of all the “campers”, only a small percentage are wearing headphones, presumably because A) external noise doesn’t bother them or B) they aren’t working on anything important. My buddy George falls into category A, but he grew up in a house full of people; silence freaks him out far more than noise. My brother and I, on the other hand, were both only children and don’t handle noisy distractions well. He needs it quiet (like a tomb) and I need music. Go figure… I can listen to Britney Spears mixed with Apoptygma Berzerk and Anthrax and crank out 30 pages, but if your toddler is banging his spoon on the table, you’re getting a dirty look… unless I have music, then all is right with the world. Anyway, enough about me.

    Most of the people who come in as a group are smiling when they enter. Half as many are smiling when they leave. People who come in alone typically aren’t smiling at all. The average age over the last three days seems to be early 30s, with a couple of senior citizens and teenagers at either end of the curve. The teenagers almost never stay; they have better stuff to do on a nice day. The senior citizens are split nearly evenly between campers and in-and-outs. Most of the non-solo campers have one person with a laptop, while the other reads the paper or a book. Some campers bring multiple laptops… but only really look at one of them.

    This Starbucks has a drive-through. The line is almost never more than 2-3 cars long, but apparently a lot of the in-and-out people would rather come in and stand in line behind (up to) 5 people. The music in here sucks. My musical tastes can best be described as eclectic to bad, but I can still get work done (see above). I find the music in this particular Starbucks to be discordant and jarring.

    At this Starbucks, the coffee lingo is apparently something that is meant to occur between employees only. The nice lady at the counter can handle orders in plain English and translate them to Baristaspeak (Baristese?) quite efficiently. If you order in Baristaspeak, however, she will look confused and repeat your order back to you in plain English to confirm you actually meant what you said. Then she will say it in Baristaspeak to the lady making your drink. Nobody in this Starbucks (other than the baristas) makes eye contact… at least not with me. Of course, that may be indicative of a separate issue. ;)

    Read the article

  • Restyling RadDataPager for WPF and Silverlight

      A small but powerful control has joined the great family of Telerik XAML controls with the recent release. RadDataPager is the result of increasing demand from our customers who needed to work with large amounts of server data presented in small portions on the client side.  The primary goal was to make a powerful data paging control that can be used in any scenario requiring getting and presenting portions of data. You can see how well RadDataPager works with paged data in Rossen's announcement. Yet being a XAML control, it also challenged us to make one more visually appealing and highly customizable control. It offers a slick look in all available WPF/Silverlight Telerik themes.

      As you can see, the default design targets mainly business scenarios. At the same time, being an example of a truly lookless XAML control, RadDataPager allows restyling with minimal effort, this way widening the possible range of applications.  Let's see what we can do with a few lines of XAML:

      <Style x:Key="buttonStyle" TargetType="ButtonBase">
          <Setter Property="Template">
              <Setter.Value>
                  <ControlTemplate TargetType="ButtonBase">
                      <Grid Width="40" Background="Black">
                          <Ellipse StrokeThickness="2" VerticalAlignment="Center" Width="15" Height="15"
                                   HorizontalAlignment="Center" Fill="Gray" />
                          <Ellipse Visibility="{Binding IsCurrent, Converter={StaticResource VisibilityConverter}}"
                                   VerticalAlignment="Center" Height="16" Fill="White"
                                   HorizontalAlignment="Center" Width="16"/>
                      </Grid>
                  </ControlTemplate>
              </Setter.Value>
          </Setter>
      </Style>
      ...
      <telerik:RadDataPager NumericButtonStyle="{StaticResource buttonStyle}" ...

      And the result: the iPhone-style thing below the RadCoverFlow is actually our RadDataPager. With a few more lines of XAML we can get almost any imaginable look for the pager control, including the fascinating result demonstrated in the short video at the top.

    Read the article

  • Increase the size of Taskbar Preview Thumbnails in Windows 7

    - by Matthew Guay
    Taskbar thumbnail previews are incredibly useful in Windows 7, but for some users they may be too small.  Here’s a tool to help you make your taskbar thumbnail previews just like you want them.

    A few years ago we featured a tool to increase the size of your thumbnail previews in Windows Vista, but unfortunately that application doesn’t work correctly in Windows 7.  However, there is a new tool for Windows 7 that lets you customize your taskbar thumbnail previews even more.  With it, you can change almost anything about your taskbar thumbnail previews.  The default taskbar thumbnails are nice, but may be too small for users with vision problems or with very high resolution monitors.  Whatever your need, this is a great tool to make the thumbnails look and work just like you want.

    Let’s get started
    Download the Windows 7 Taskbar Thumbnail Customizer (link below), and unzip the files.  Run the Windows 7 Taskbar Thumbnail Customizer when you’re done.  Simply double-click on it; you don’t need to run it as administrator.

    Now, you can change the size, spacing, margin, and delay time of your taskbar thumbnails.  The Delay Time setting is very handy; to speed things up, we set it to 0 so there’s no delay between when you mouse over a taskbar icon and when you see the thumbnail.  Simply drag the slider to the size (or time, in the delay settings) you want, and click Apply settings.  Windows Explorer will automatically restart, and your new taskbar thumbnails will be ready to use.

    Here is the default Windows 7 thumbnail preview of a video playing in Media Player, and here’s the taskbar thumbnail enlarged to 380px.  Now you can really watch a video from your taskbar thumbnail.

    The larger taskbar thumbnails show up a little differently in Internet Explorer.  IE shows a larger preview of your active tab and smaller previews of your other tabs.  Notice also that Aero Peek shows the tab you’re hovering over in Internet Explorer, but the tab name in IE’s toolbar doesn’t change to the one you’re previewing.

    Here we increased the width between the thumbnails while keeping the thumbnails at their default size.  This could be useful if you have trouble selecting the correct preview, and we can imagine it would be a very useful modification on touch screens.

    And, if you ever take your changes too far and want to revert to your default Windows 7 taskbar thumbnail previews, simply run the Customizer again and select Restore Defaults.  Windows Explorer will restart again, and your taskbar thumbnails will be back to their default settings.

    Conclusion
    This tool makes it safe and easy to change the size, spacing, and more of your taskbar thumbnail previews.  And since you can always revert to the default settings, you can experiment without fear of messing up your computer.  If you’d prefer to change the settings manually without using a dedicated application, here’s a list of the registry changes you can make to accomplish this by hand.

    Link: Download the Windows 7 Taskbar Thumbnail Customizer from The Windows Club
    Vista Users: Increase Size of Windows Vista Taskbar Previews
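    If you do want to script the change yourself, here is a minimal sketch in C#. One assumption to flag: the MinThumbSizePx value under the Taskband key is the setting such customizers are commonly reported to tweak; the article itself doesn’t name the value, so treat it as hypothetical and back up your registry first.

    // Hedged sketch: set the taskbar thumbnail size directly in the registry.
    // Assumption: MinThumbSizePx (a DWORD, size in pixels) is the relevant value.
    using System;
    using Microsoft.Win32;

    class ThumbnailSize
    {
        static void Main()
        {
            using (RegistryKey key = Registry.CurrentUser.CreateSubKey(
                @"Software\Microsoft\Windows\CurrentVersion\Explorer\Taskband"))
            {
                key.SetValue("MinThumbSizePx", 380, RegistryValueKind.DWord);
            }
            Console.WriteLine("Done. Restart Explorer for the change to take effect.");
        }
    }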

    Read the article

  • Using the Katana Authentication handlers with NancyFx

    - by cibrax
    Once you write an OWIN middleware service, it can be reused everywhere OWIN is supported. In my last post, I discussed how you could write an authentication handler in Katana for Hawk (HMAC authentication). The good news is that NancyFx can be run as an OWIN handler, so you can use many existing middleware services, including the ones that ship with Katana. Running NancyFx as an OWIN handler is pretty straightforward, and discussed in detail as part of the NancyFx documentation here. After running the steps described there and getting the application working, only a few more steps are required to register the additional middleware services. The example below shows how the Startup class is modified to include Hawk authentication.

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            app.UseHawkAuthentication(new HawkAuthenticationOptions
            {
                Credentials = (id) =>
                {
                    return new HawkCredential
                    {
                        Id = "dh37fgj492je",
                        Key = "werxhqb98rpaxn39848xrunpaw3489ruxnpa98w4rxn",
                        Algorithm = "hmacsha256",
                        User = "steve"
                    };
                }
            });

            app.UseNancy();
        }
    }

    This code registers the Hawk authentication handler on top of the OWIN pipeline, so it will try to authenticate calls before the request messages are passed over to NancyFx. The authentication handlers in Katana set the user principal in the OWIN environment using the key “server.User”. The following code shows how you can get that principal in a NancyFx module:

    public class HomeModule : NancyModule
    {
        public HomeModule()
        {
            Get["/"] = x =>
            {
                var env = (IDictionary<string, object>)Context.Items[NancyOwinHost.RequestEnvironmentKey];

                if (!env.ContainsKey("server.User") || env["server.User"] == null)
                {
                    return HttpStatusCode.Unauthorized;
                }

                var identity = (ClaimsPrincipal)env["server.User"];
                return "Hello " + identity.Identity.Name;
            };
        }
    }

    Thanks to OWIN, you don’t need to know the details of how these cross-cutting concerns are implemented in every possible web application framework.

    Read the article

  • Integrating Windows Form Click Once Application into SharePoint 2007 – Part 1 of 2

    - by Kelly Jones
    Last year, I had the opportunity to build a solution that involved integrating a Windows Forms application into a SharePoint 2007 (WSS version 3.0) site. In this post, I’ll lay out our architectural thinking, and in part two, I’ll describe the technical details.

    Business Case
    Our challenge was this: we needed an easy way for a small group of our users to upload documents in batches.  They also needed to quickly set the metadata values, as well as set security on individual files. The out-of-the-box uploads just didn’t fit.  The single-file upload allows setting the metadata, but our users would be uploading dozens of files.  The multiple-file upload would allow our users to upload batches of files, but it doesn’t allow them to set the metadata during upload.  Also, neither upload method allows the users to set the permissions on the file.

    Our Solution
    We looked into building a web control of some kind, but ruled that out due to security complexities (if I remember correctly).  Another option would have been using a technology like Silverlight (or Flash?), but our team didn’t have the skills necessary to build with these. So, after looking at what was technically possible, and also what skills our team had, we settled on a Windows Forms application.  We also decided to deliver it to the clients via ClickOnce, so we would have the ability to easily update the application in the future.

    Lessons Learned
    After deploying our solution, we’ve learned a few lessons.  First, you’ll need to have the .NET Framework installed on the client computers.  We knew this, but we still ran into issues making sure our users had the proper framework version installed.  Second, we had issues with authentication.  Our issues were due to our testing domain being a separate Active Directory domain from the domain that our end users and their workstations were members of.  (See my earlier post about Clearing Saved Passwords for the fix to our problem.)

    Our third issue was how we dealt with uploading files that were named the same.  Our application would replace the existing file with the new file, which is the way we expected it to work.  However, our users wanted to upload weekly reports named the same as the previous week’s.  We solved this by using folders within the document library to keep the sets of reports separate from previous weeks.

    One last thing to consider before implementing a solution like this is what browsers and platforms your users will be working from.  We only needed to support IE and Windows, which works fine.  However, if you need to support Firefox, there are add-ons that allow ClickOnce to work with Firefox.  This is still a Windows-only solution though.  In order to support Macs, you’d have to focus on either browser techniques (AJAX?) or Silverlight/Flash.

    Summary
    Our users are happy with the ClickOnce app.  It allowed them to move all of their content to our SharePoint site in under a couple of hours, which they were thrilled with.  We’re happy because we can easily deploy updates, our development time was small, and we met all of our business requirements.

    Read the article

  • Add Social Elements to Your Gmail Contacts with Rapportive

    - by Matthew Guay
    Would you like to discover more about your contacts?  Xobni is a great tool for this in Outlook, and thanks to a small plugin for Gmail, you can get similar functionality right from your favorite webmail app.

    Setup Rapportive on Your Gmail
    Browse to the Rapportive site (link below), and click Install to add it to your browser.  Rapportive currently only supports Firefox and Google Chrome; in this test, we installed it on Google Chrome.  Notice that Chrome warns that Rapportive may access your private data from Gmail, though Rapportive says it only uses this data securely on your computer or their servers.

    Next time you log into Gmail, open a message to see the new Rapportive sidebar.  Click Log in to get started.  Choose whether you want to let Rapportive access your data.  Finally, choose whether to stay logged into Rapportive or to log out when you log out of Gmail.

    Using Rapportive
    Now, when you open an email, you should see more information about your contact on the right side of the message, where you usually see Google AdSense ads.  You may see an avatar, a short bio, and links to their social networks.  You can also add notes about a contact, which lets you use Rapportive as a CRM.  You may see more information on some contacts.  Here we see a contact that shows recent tweets and links to several social networks.

    Take Rapportive Further
    You can add more features to Rapportive with Raplets, which are small extensions that add more information or CRM functionality.  To add these, click the Rapportive button on the top of Gmail, and select Add Raplets to Rapportive.  Find a Raplet you want, and click Add This.  A popup will open to give you more information about the Raplet; click the Add button at the bottom if you still want it.  And, if you wish to close Rapportive without logging out of Gmail, click the Rapportive link in Gmail and select Log out.

    Conclusion
    Whether you want to find out more about your contacts or keep track of notes about them, Rapportive is a great way to do this from Gmail.  With tools like this, Gmail gets a bit more powerful and feels more like a desktop application.  If you would like this type of functionality in Outlook, check out our article on how to power up Outlook’s search and contacts with Xobni.

    Add Rapportive to Gmail

    Read the article

  • Clone a Hard Drive Using an Ubuntu Live CD

    - by Trevor Bekolay
    Whether you’re setting up multiple computers or doing a full backup, cloning hard drives is a common maintenance task. Don’t bother burning a new boot CD or paying for new software – you can do it easily with your Ubuntu Live CD.

    Not only can you do this with your Ubuntu Live CD, you can do it right out of the box – no additional software needed! The program we’ll use is called dd, and it’s included with pretty much all Linux distributions. dd is a utility used to do low-level copying – rather than working with files, it works directly on the raw data on a storage device.

    Note: dd gets a bad rap, because like many other Linux utilities, if misused it can be very destructive. If you’re not sure what you’re doing, you can easily wipe out an entire hard drive in an unrecoverable way. Of course, the flip side of that is that dd is extremely powerful and can do very complex tasks with little user effort. If you’re careful and follow these instructions closely, you can clone your hard drive with one command.

    We’re going to take a small hard drive that we’ve been using and copy it to a new hard drive, which hasn’t been formatted yet. To make sure that we’re working with the right drives, we’ll open up a terminal (Applications > Accessories > Terminal) and enter the following command:

    sudo fdisk -l

    We have two small drives: /dev/sda, which has two partitions, and /dev/sdc, which is completely unformatted. We want to copy the data from /dev/sda to /dev/sdc. Note: while you can copy a smaller drive to a larger one, you can’t copy a larger drive to a smaller one with the method described below.

    Now the fun part: using dd. The invocation we’ll use is:

    sudo dd if=/dev/sda of=/dev/sdc

    In this case, we’re telling dd that the input file (“if”) is /dev/sda, and the output file (“of”) is /dev/sdc. If your drives are quite large, this can take some time, but in our case it took just less than a minute. If we run sudo fdisk -l again, we can see that, despite not formatting /dev/sdc at all, it now has the same partitions as /dev/sda. Additionally, if we mount all of the partitions, we can see that all of the data on /dev/sdc is now the same as on /dev/sda. Note: you may have to restart your computer to be able to mount the newly cloned drive.

    And that’s it… if you exercise caution and make sure that you’re using the right drives as the input file and output file, dd isn’t anything to be scared of. Unlike other utilities, dd copies absolutely everything from one drive to another – that means you can even recover files deleted from the original drive in the clone!

    Read the article

  • Cross platform application revolution

    - by anirudha
    Every developer knows that if they make a Windows application, it works only on Windows. That’s a small pity we all accept, and it’s a losing point for Windows applications: it makes developers think small – this app is only for Windows, that one only for Mac. It’s also a big point behind the success of the web, because who buys an operating system just to use an application from another platform? Why would they purchase one when they can’t even try the application first?

    That’s what’s better about the web. IE 6 to IE 8? No problem. Chrome to Chrome 8? Firefox to Firefox 3.6.13, even a beta? No problem – a good website is shown the same in every browser, with maybe some minor differences. Thinking in terms of cross-platform application development is much bigger than making an application that is only for some audience, an audience divided by the OS they use: if they use a Mac they can’t use this, if they use Windows they can’t use that.

    The web is for everyone, from a child to a grandfather, male and female – everyone can use the Internet. No worrying about what you have, whatever OS and whatever browser, even one as silly as IE 6. It’s not like the desktop, where problems keep coming up: “some component is missing, click here to get it”, or you can’t use this software because you have Windows SP1, SP2, or SP3 and you need to install this before that. This stupidity comes mainly with Microsoft software. Last year I found an issue on the Web Platform Installer where it forced the user to install another product first: you needed to install Visual Studio 2008 before installing Visual Studio 2010 Express. Can anyone tell me why a user should get the old 2008 version when they asked for the latest Express version? I never went back to check whether the issue was solved.

    Another thing: you can’t get IE 9 on Windows XP. In that case, don’t think and worry about it, because Firefox and Chrome are much better. The stupidity from Microsoft is too much: they never tell you about Firebug; instead they sometimes discuss the damaged tool in IE they call “developer tools”, because they are Microsoft and they only think about how to market their products. You need to install many things without any reason, such as many SQL Server components even if you use another RDBMS. You can’t say no to them, because you need a tool and the tool requires a useless component called SQL Server. I have never found a website that forced me to install this and that before letting me in.

    That’s another good thing about the web: nothing is required first. You don’t need to install the .NET Framework 4 before enjoying Facebook or Twitter. Maybe you have found that Microsoft’s failed project “Window planet” forces you to get Silverlight before going in; I never heard about it until a friend mentioned it some months ago, and I found nothing better there. What would users do if Facebook forced them to install Silverlight, Adobe Flash, or the .NET Framework 4? If you didn’t install it, Facebook would say bye-bye, ta-ta: never come here before installing the .NET Framework 4. The door opens for you only after installing, not before. It’s the same story as a mother telling her child, “say sorry before you come in the house”, when they have done something wrong.

    The web never forces you to do something first. Sometimes a website even allows you to use another website’s account there – a very fast login for you – because they know the importance of your time.

    Read the article

  • To sample or not to sample...

    - by [email protected]
    Ideally, we would know the exact answer to every question. How many people support presidential candidate A vs. B? How many people suffer from H1N1 in a given state? Does this batch of manufactured widgets have any defective parts? Knowing exact answers is expensive in terms of time and money and, in most cases, is impractical if not impossible. Consider asking every person in a region for their candidate preference, testing every person with flu symptoms for H1N1 (assuming every person reported when they had flu symptoms), or destructively testing widgets to determine if they are "good" (leaving no product to sell).

    Knowing exact answers, fortunately, isn't necessary or even useful in many situations. Understanding the direction of a trend or statistically significant results may be sufficient to answer the underlying question: who is likely to win the election, have we likely reached a critical threshold for flu, or is this batch of widgets good enough to ship? Statistics help us to answer these questions with a certain degree of confidence, but statistics focuses on how we collect data. In data mining, we focus on the use of data, that is, data that has already been collected. In some cases, we may have all the data (all purchases made by all customers); in others, the data may have been collected using sampling (voters, their demographics and candidate choice).

    Building data mining models on all of your data can be expensive in terms of time and hardware resources. Consider a company with 40 million customers. Do we need to mine all 40 million customers to get useful data mining models? The quality of models built on all data may be no better than models built on a relatively small sample. Determining how much is a reasonable amount of data involves experimentation. When starting the model building process on large datasets, it is often more efficient to begin with a small sample, perhaps 1,000 - 10,000 cases (records) depending on the algorithm, source data, and hardware. This allows you to see quickly what issues might arise with the choice of algorithm, algorithm settings, data quality, and the need for further data preparation. Instead of waiting for a model on a large dataset to build, only to find that the results don't meet expectations, once you are satisfied with the results on the initial sample, you can take a larger sample to see if model quality improves, and to get a sense of how the algorithm scales to the particular dataset. If model accuracy or quality continues to improve, consider increasing the sample size.

    Sampling in data mining is also used to produce a held-aside or test dataset for assessing classification and regression model accuracy. Here, we reserve some of the build data (data that includes known target values) to be used for an honest estimate of model error, using data the model has not seen before. This sampling transformation is often called a split, because the build data is split into two randomly selected sets, often with 60% of the records being used for model building and 40% for testing.

    Sampling must be performed with care, as it can adversely affect model quality and usability. Even a truly random sample doesn't guarantee that all values are represented in a given attribute. This is particularly troublesome when the attribute with omitted values is the target. A predictive model that has not seen any examples for a particular target value can never predict that target value!
For other attributes, values may consist of a single value (a constant attribute) or all unique values (an identifier attribute), each of which may be excluded during mining. Values from categorical predictor attributes that didn't appear in the training data are not used when testing or scoring datasets. In subsequent posts, we'll talk about three sampling techniques using Oracle Database: simple random sampling without replacement, stratified sampling, and simple random sampling with replacement.
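    As a concrete sketch of the 60/40 split described earlier, here is a minimal example (the shuffle-then-partition approach, the fixed seed, and the helper name are illustrative assumptions, not code from any particular tool):

    // Minimal sketch: split build data into build (60%) and test (40%) sets.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class Sampler
    {
        // Shuffles a copy of the records, then partitions at the requested fraction.
        // A fixed seed keeps the split reproducible across runs.
        public static (List<T> Build, List<T> Test) Split<T>(
            IList<T> records, double buildFraction = 0.6, int seed = 42)
        {
            var rng = new Random(seed);
            var shuffled = records.OrderBy(_ => rng.Next()).ToList();
            int cut = (int)(shuffled.Count * buildFraction);
            return (shuffled.Take(cut).ToList(), shuffled.Skip(cut).ToList());
        }
    }

    Note that this is simple random sampling without replacement; it doesn't protect against a rare target value landing entirely in one partition, which is exactly the pitfall described above (stratified sampling addresses that).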

    Read the article

  • How Exactly Is One Linux OS “Based On” Another Linux OS?

    - by Jason Fitzpatrick
    When reviewing different flavors of Linux, you’ll frequently come across phrases like “Ubuntu is based on Debian” but what exactly does that mean? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader PLPiper is trying to get a handle on how Linux variants work: I’ve been looking through quite a number of Linux distros recently to get an idea of what’s around, and one phrase that keeps coming up is that “[this OS] is based on [another OS]“. For example: Fedora is based on Red Hat Ubuntu is based on Debian Linux Mint is based on Ubuntu For someone coming from a Mac environment I understand how “OS X is based on Darwin”, however when I look at Linux Distros, I find myself asking “Aren’t they all based on Linux..?” In this context, what exactly does it mean for one Linux OS to be based on another Linux OS? So, what exactly does it mean when we talk about one version of Linux being based off another version? The Answer SuperUser contributor kostix offers a solid overview of the whole system: Linux is a kernel — a (complex) piece of software which works with the hardware and exports a certain Application Programming Interface (API), and binary conventions on how to precisely use it (Application Binary Interface, ABI) available to the “user-space” applications. Debian, RedHat and others are operating systems — complete software environments which consist of the kernel and a set of user-space programs which make the computer useful as they perform sensible tasks (sending/receiving mail, allowing you to browse the Internet, driving a robot etc). Now each such OS, while providing mostly the same software (there are not so many free mail server programs or Internet browsers or desktop environments, for example) differ in approaches to do this and also in their stated goals and release cycles. Quite typically these OSes are called “distributions”. This is, IMO, a somewhat wrong term stemming from the fact you’re technically able to build all the required software by hand and install it on a target machine, so these OSes distribute the packaged software so you either don’t need to build it (Debian, RedHat) or they facilitate such building (Gentoo). They also usually provide an installer which helps to install the OS onto a target machine. Making and supporting an OS is a very complicated task requiring a complex and intricate infrastructure (upload queues, build servers, a bug tracker, and archive servers, mailing list software etc etc etc) and staff. This obviously raises a high barrier for creating a new, from-scratch OS. For instance, Debian provides ca. 37k packages for some five hardware architectures — go figure how much work is put into supporting this stuff. Still, if someone thinks they need to create a new OS for whatever reason, it may be a good idea to use an existing foundation to build on. And this is exactly where OSes based on other OSes come into existence. For instance, Ubuntu builds upon Debian by just importing most packages from it and repackaging only a small subset of them, plus packaging their own, providing their own artwork, default settings, documentation etc. Note that there are variations to this “based on” thing. 
    For instance, Debian fosters the creation of “pure blends” of itself: distributions which use Debian rather directly and just add a bunch of packages and other stuff only useful for rather small groups of users, such as those working in education or medicine or the music industry etc. Another twist is that not all these OSes are based on Linux. For instance, Debian also provides FreeBSD and Hurd kernels. They have quite tiny user groups, but anyway. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • Welcome 2011

    - by WeigeltRo
    Things that happened in 2010

    MIX10 was absolutely fantastic. Read my report of MIX10 to see why.

    The dotnet Cologne 2010, the community conference organized by the .NET user group Köln and my own group Bonn-to-Code.Net, became an even bigger success than I dared to dream of.

    There was a huge discrepancy between the efforts by Microsoft to support .NET user groups in organizing public live streaming events of the PDC keynote (the dotnet Cologne team joined forces with netug Niederrhein to organize the PDCologne) and the actual content of the keynote. The reaction of the audience at our event was “meh”, and even worse, I seriously doubt we’ll ever get that number of people to such an event again (which on top of that suffered from technical difficulties beyond our control).

    What definitely would have deserved the public live streaming event treatment was the Silverlight Firestarter (aka “Silverlight Damage Control”) event. And maybe we would have thought about organizing something if it weren’t for the “burned earth” left by the PDC keynote. Anyway, the stuff shown at the Firestarter keynote was the topic of conversations among colleagues days later (“did you see that? oh yeah, that was seriously cool”).

    Things that I have learned/observed/noticed in 2010

    In the long run, there’s a huge difference between “it works pretty well” and “it just works and I never have to think about it”. I had to get rid of the USB graphics adapter powering my third monitor (read about it in this blog post). Various small issues (desktop icons sometimes moving their positions after a reboot for no apparent reason, at least one game I couldn’t get to run at all, all three monitors sometimes simply refusing to wake up after standby) finally made me buy a PCIe 1x graphics adapter. If you’re interested: the combination of an NVIDIA GTX 460 and a GT 220 has been running in “don’t make me think” mode for a couple of months now.

    PowerPoint 2010 is a seriously cool piece of software. Not only the new hardware-accelerated effects, but also features like built-in background removal and picture processing (which in many cases are simply “good enough” and save a lot of time) or the smart guides.

    Outlook 2010 crashes on me a lot. I haven’t been successful in reproducing these crashes; they just happen every couple of days on different occasions (the only thing in common: I clicked something in the main window – yeah, very helpful observation).

    Visual Studio 2010 reminds me of Visual Studio 2005 before SP1, which is actually not a good thing to say about a piece of software. I think it’s telling that Microsoft’s message regarding the beta of SP1 has been different from earlier service pack betas (promising an upgrade path from the beta to the RTM sounds to me like “please, please use it NOW!”).

    I have a love/hate relationship with ReSharper. I don’t want to develop without it, but at the same time I can’t fail to notice that ReSharper is taking a heavy toll in terms of performance and sometimes stability.

    Things I’m looking forward to in 2011

    Obviously, the dotnet Cologne 2011. We have already been able to score some big-name sponsors (Microsoft, Intel), but we’re still looking for more. And be assured that we’ll make sure our partners get the most out of their contribution, regardless of how big or small.

    MIX11, period.

    Silverlight 5 is going to be great. The only thing I’m a bit nervous about is that I still haven’t read anything official on whether the next C# version’s async/await will be in it. Leaving that out would be really stupid considering the end-of-2011 release of SL5 (moving the next release way into the future).

    Read the article

  • InfiniBand Enabled Diskless PXE Boot

    - by Neeraj Gupta
    When you want to bring up a compute server in your environment and need InfiniBand connectivity, you usually go through various installation steps. This could involve an operating system like Linux, followed by a compatible InfiniBand software distribution, associated dependencies, and configurations. What if you just want to run some InfiniBand diagnostics or troubleshooting tools from a test machine? What if something happened to your primary machine and, while recovering in rescue mode, you also need access to your InfiniBand network? Oftentimes we use open-source, community-supported small Linux distributions, but they don't come with the required InfiniBand support and tools.

    In this weblog, I am going to provide instructions on how to add InfiniBand support to a specific Linux image - Parted Magic. This is a free-to-use, open-source Linux distro often used to recover or rescue machines. The distribution itself will not be changed at all. Yes, you heard it right! I have built an InfiniBand add-on package that will be passed to the default kernel and initrd to get this all working.

    Pre-requisites
    You will need to have a PXE server ready on your Ethernet-based network. The compute server you are trying to PXE boot should have a compatible IB HCA and must be connected to an active IB network.

    Required Downloads
    Download the Parted Magic small distribution for PXE from the Parted Magic website. Download the InfiniBand PXE add-on package (right-click and download from here). Do not extract the contents of this file; you need to use it as is.

    Prepare PXE Server
    Extract the contents of the downloaded pmagic distribution into a temporary directory. Inside the directory structure, you will see a pmagic directory containing two files - bzImage and initrd.img. Copy this directory into your TFTP server's root directory. This is usually /tftpboot, unless you have a different setup. For example:

    cp pmagic_pxe_2012_2_27_x86_64.zip /tmp
    cd /tmp
    unzip pmagic_pxe_2012_2_27_x86_64.zip
    cd pmagic_pxe_2012_2_27_x86_64
    # ls -l
    total 12
    drwxr-xr-x  3 root root 4096 Feb 27 15:48 boot
    drwxr-xr-x  2 root root 4096 Mar 17 22:19 pmagic
    cp -r pmagic /tftpboot

    As I mentioned earlier, we don't change anything in the default pmagic distro. We simply provide the add-on package via the PXE append options. If you are using a menu-based PXE server, then add an entry to your menu. For example, /tftpboot/pxelinux.cfg/default can be appended with the following section:

    LABEL Diskless Boot With InfiniBand Support
    MENU LABEL Diskless Boot With InfiniBand Support
    KERNEL pmagic/bzImage
    APPEND initrd=pmagic/initrd.img,pmagic/ib-pxe-addon.cgz edd=off load_ramdisk=1 prompt_ramdisk=0 rw vga=normal loglevel=9 max_loop=256
    TEXT HELP
    * A Linux Image which can be used to PXE Boot w/ IB tools
    ENDTEXT

    Note: keep the line starting with "APPEND" as a single line. If you use host-specific files in pxelinux.cfg, then you can use that specific file to add the above-mentioned entry.

    Boot Computer over PXE
    Now boot your desired compute machine over PXE. This does not have to be over InfiniBand; just use your standard Ethernet interface and network. If using menus, pick the new entry that you created in the previous section.

    Enable IPoIB
    After a few minutes, you will be booted into the Parted Magic environment. Open a terminal session and see if InfiniBand is enabled. You can use commands like:

    ifconfig -a
    ibstat
    ibv_devices
    ibv_devinfo

    If you are connected to an InfiniBand network with an active Subnet Manager, then your IB interfaces must have come online by now. You can proceed and assign IP addresses to them. This will enable you at the IPoIB layer.

    Example InfiniBand Diagnostic Tools
    I have added several InfiniBand diagnostic tools in this add-on. You can use any from the following list:

    ibstat, ibstatus, ibv_devinfo, ibv_devices
    perfquery, smpquery
    ibnetdiscover, iblinkinfo.pl
    ibhosts, ibswitches, ibnodes

    Wrap Up
    This concludes this weblog. Here we saw how to bring up a computer with IPoIB and InfiniBand diagnostic tools without installing anything on it. It's almost like running diskless!

    Read the article

  • Revisioning the CeBIT 2011

    - by hechtsuppe
Hey guys, I live in CeBIT's hometown, the beautiful city of Hanover, so I have been visiting this exhibition since 2002 and have seen a lot of changes during all this time. But this time, it was the most boring fair I've ever seen. Let's start with the first halls: "The same procedure as every year" - directly behind the entrance are the exhibitors from the Far East (China, Taiwan...). In the past, they've shown a lot of nice toys. But this time they got very serious; the only great gimmick was the motorcycle case for the iPad. I watched the presentation until I reminded myself that I don't own an iPad, and these cases weren't suitable for my BMW motorcycle anyway. So I started looking for the business stuff (I was there for business). I walked deeper into the exhibition area: On the way to the business halls, I came across a hall where I heard a big bass - the gamers' hall, I think, so I made a quick getaway from there. I saw a lot of teenagers with gaming bags on their shoulders and I was really confused. I thought, 'Damn, it is Tuesday, 11:00 am; the trade fair is open to the public on Saturday, so why are they here and not at school?'. German schools seem to be too easy for students. Back when I was a pupil, I visited CeBIT on Saturday! At the business halls: I visited IBM's booth, but there were only guys looking like penguins while I wore a white shirt, so nobody was interested in talking to me. At the coffee bar I met a very nice guy from Bangladesh; I think he was about 25. He told me that it was his first time in Germany and that he had thought Germans were still Nazis. I laughed at him and went to DELL. I was really, really interested in client solutions from DELL because I want to get away from our current client manufacturer. At the DELL booth I was noticed, and a really nice guy told me where to use which client products. But there were too many people to try the notebooks, so the DELL guy asked for my business card. I am still waiting for information, dear 'DELL dude'. I went to the Microsoft booth to inform myself about new IT trends. There was nothing new, only a few presentations about the 'new' Windows Live, Windows Phone 7 and 'the almighty cloud'. But there was a very small presentation corner with the title 'geeks corner'. A guy inside the 'geeks corner' started Visual Studio 2010, and I was really agog for the presentation. But then he started talking about Windows Phone 7 and how to program for it. He began by dragging and dropping a textbox and a button onto the form. He wrote really basic code and explained the functionality of a textbox - then I stood up and left the room. At the end: Before leaving the fairground, I visited a few small booths and the big anti-virus companies. But there was nothing new; I was really disappointed this year. I saw only ten exhibition babes, and for the rest of the week I stood around three hours in traffic jams. But I really love the flair in the whole town during this exhibition - the people on the city railways, who are really confused, and the people in the pubs. Cheers Vince

    Read the article

  • SQL SERVER – Read Only Files and SQL Server Management Studio (SSMS)

    - by pinaldave
Just like for any other developer or DBA, SQL Server Management Studio is my favorite application. At any moment in time I have multiple instances of the application open and am working in them. Recently, I came across a very interesting feature in SSMS related to “Read Only” files. I believe it is a little-known feature as well, so I decided to write a blog about it. First create a read-only SQL file. You can make any file read-only via Right Click >> Properties >> select the Read Only attribute. Now open the same file in SQL Server Management Studio. You will find that beside the file name there is a small ‘lock’ icon. This small icon indicates that the file is read-only. Now let us attempt to edit the read-only file. It will let us edit the file any way we want; however, when we attempt to save it, it gives the following pop-up. The options in the pop-up are self-explanatory, and I liked them. The goal of the read-only attribute is to prevent users from making unintended changes; however, a user should still have complete control over his own file. The user should be aware that the file is read-only, but if he wants to edit it or save it as a new file, those choices should be presented to him, and the pop-up captures precisely that. Now let us check the option related to this feature in SSMS. Go to Menu >> Options >> Environment >> Documents. You will find the third option, which is “Allow editing of read-only files; warn when attempt to save”. In the above scenario it was already checked. Let us uncheck it and repeat the exercise we did earlier. I closed all the earlier windows to avoid confusion. With the option unchecked, when I attempt to even modify the read-only file, it gives me a totally different pop-up screen. It gives me options like “Edit In-Memory”, “Make Writeable”, etc. When you select “Edit In-Memory” it allows you to edit the file, and later you can save it as a new file – just like the earlier scenario which we discussed. If you click on Make Writeable, it will remove the read-only restriction and the file can be edited as you please. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Finding the maximum value/date across columns

    - by AtulThakor
While working on some code recently I discovered a neat little trick to find the maximum value across several columns. The starting point was finding the maximum date across several related tables and storing the maximum value against an aggregated record. Here's the sample setup code: USE TEMPDB IF OBJECT_ID('CUSTOMER') IS NOT NULL BEGIN DROP TABLE CUSTOMER END IF OBJECT_ID('ADDRESS') IS NOT NULL BEGIN DROP TABLE ADDRESS END IF OBJECT_ID('ORDERS') IS NOT NULL BEGIN DROP TABLE ORDERS END SELECT 1 AS CUSTOMERID, 'FREDDY KRUEGER' AS NAME, GETDATE() - 10 AS DATEUPDATED INTO CUSTOMER SELECT 100000 AS ADDRESSID, 1 AS CUSTOMERID, '1428 ELM STREET' AS ADDRESS, GETDATE() - 5 AS DATEUPDATED INTO ADDRESS SELECT 123456 AS ORDERID, 1 AS CUSTOMERID, GETDATE() + 1 AS DATEUPDATED INTO ORDERS Now, the original code used a function to determine the maximum date, which performed poorly. After considering pivoting the data I opted for a CASE statement; this seemed reasonable until I discovered other areas which needed to determine the maximum date across 5 or more tables, which didn't scale well. The final solution involved using the VALUES clause within a subquery, as follows: SELECT C.CUSTOMERID, A.ADDRESSID, (SELECT MAX(DT) FROM (Values(C.DATEUPDATED),(A.DATEUPDATED),(O.DATEUPDATED)) AS VALUE(DT)) FROM CUSTOMER C INNER JOIN ADDRESS A ON C.CUSTOMERID = A.CUSTOMERID INNER JOIN ORDERS O ON O.CUSTOMERID = C.CUSTOMERID As you can see, the solution scales well and can take advantage of many of the aggregate functions!
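For comparison, here is a sketch of the CASE-expression approach mentioned above. This is my own reconstruction against the sample tables, not the author's original code, and it illustrates why that approach scales poorly as the number of columns grows:

SELECT C.CUSTOMERID,
       A.ADDRESSID,
       CASE
           WHEN C.DATEUPDATED >= A.DATEUPDATED AND C.DATEUPDATED >= O.DATEUPDATED THEN C.DATEUPDATED
           WHEN A.DATEUPDATED >= O.DATEUPDATED THEN A.DATEUPDATED
           ELSE O.DATEUPDATED
       END AS MAXDATEUPDATED -- every extra table adds another comparison branch
FROM CUSTOMER C
INNER JOIN ADDRESS A ON C.CUSTOMERID = A.CUSTOMERID
INNER JOIN ORDERS O ON O.CUSTOMERID = C.CUSTOMERID

With three tables the CASE already needs chained comparisons; with five or more it becomes unwieldy, which is exactly what pushed the post towards the VALUES subquery.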

    Read the article

  • Monitor System Resources from the Windows 7 Taskbar

    - by Asian Angel
The problem with most system monitoring apps is that they get covered up by all of your open windows, but you can solve that problem by adding the monitoring apps to the Taskbar. Setting Up & Using SuperbarMonitor All of the individual monitors and the .dll files necessary to run them come in a single zip file for your convenience. Simply unzip the contents, add them to an appropriate “Program Files” folder, and create shortcuts for the monitors that you would like to use on your system. For our example we created shortcuts for all five monitors and set the shortcuts up in their own “Start Menu” folder. You can see what the five monitors (Battery, CPU, Disk, Memory, & Volume) look like when running…they are visual in appearance, without text to clutter up the looks. The monitors use colors (red, green, & yellow) to indicate the amount of resources being used for a particular category. Note: Our system is desktop-based, but the “Battery Monitor” is shown for the purposes of demonstration…thus the red color seen here. Hovering the mouse over the “Battery, CPU, Disk, & Memory Monitors” on our system displayed a small blank thumbnail. Note: The “Battery Monitor” may or may not display more when used on your laptop. Going one step further and hovering the mouse over the thumbnails displayed a small blank window. There really is nothing that you will need to worry about outside of watching the color for each individual monitor. Nice and simple! The one monitor with extra features on the thumbnail was the “Volume Monitor”. You can turn the volume down, up, on, or off from here…definitely useful if you have been wanting to hide the “Volume Icon” in the “System Tray”. You can also pin the monitors to your “Taskbar” if desired. Keep in mind that if you do close any of the monitors they will “temporarily” disappear from the “Taskbar” until the next time they are started. Note: If you want the monitors to start with your system each time, you will need to add the appropriate shortcuts to the “Startup” sub-menu in your “Start Menu”. Conclusion If you have been wanting a nice visual way to monitor your system’s resources then SuperbarMonitor is definitely worth trying out. Links Download SuperbarMonitor

    Read the article

  • AdventureWorks2012 now available for all on SQL Azure

    - by jamiet
Three days ago I tweeted this: Idea. MSFT could host read-only copies of all the [AdventureWorks] DBs up on #sqlazure for the SQL community to use. RT if agree #sqlfamily — Jamie Thomson (@jamiet) March 24, 2012 Evidently I wasn't the only one who thought this was a good idea because, as you can see from the screenshot, that tweet has so far been retweeted more than fifty times. Clearly there is a desire to see the AdventureWorks databases made available for the community to noodle around on, so I am pleased to announce that as of today you can do just that - [AdventureWorks2012] now resides on SQL Azure and is available for anyone, absolutely anyone, to connect to and use* for their own means. *By use I mean "issue some SELECT statements". You don't have permission to issue INSERTs, UPDATEs, DELETEs or EXECUTEs I'm afraid - if you want to do that then you can get the bits and host it yourself. This database is free for you to use but SQL Azure is of course not free, so before I give you the credentials please lend me your ears (well, eyes) for a short while longer. AdventureWorks on Azure is being provided for the SQL Server community to use, and so I am hoping that that same community will rally around to support this effort by making a voluntary donation towards the upkeep which, going on current pricing, is going to be $119.88 per year. If you would like to contribute to keep AdventureWorks on Azure up and running for that full year, please donate via PayPal to [email protected]: Any amount, no matter how small, will help. If those 50+ people that retweeted me beforehand all contributed $2 then that would just about be enough to keep this up for a year. If the community contributes more than we need then there are a number of additional things that could be done: Host additional databases (Northwind anyone??) Host in more datacentres (this first one is in Western Europe) Make a charitable donation That last one, a charitable donation, is something I would really like to do. The SQL Community have proved before that they can make a significant contribution to charitable organisations through purchasing the SQL Server MVP Deep Dives book, and I harbour hopes that AdventureWorks on Azure can continue in that vein. So please, if you think AdventureWorks on Azure is something that is worth supporting, please make a contribution. OK, with the prickly subject of begging for cash out of the way, let me share the details that you need to connect to [AdventureWorks2012] on SQL Azure: Server mhknbn2kdz.database.windows.net  Database AdventureWorks2012 User sqlfamily Password sqlf@m1ly That user sqlfamily has all the permissions required to enable you to query away to your heart's content (a sample query appears at the end of this post). Here is the code that I used to set it up: CREATE USER sqlfamily FOR LOGIN sqlfamily;CREATE ROLE sqlfamilyrole;EXEC sp_addrolemember 'sqlfamilyrole','sqlfamily';GRANT VIEW DEFINITION ON Database::AdventureWorks2012 TO sqlfamilyrole;GRANT VIEW DATABASE STATE ON Database::AdventureWorks2012 TO sqlfamilyrole;GRANT SHOWPLAN TO sqlfamilyrole;EXEC sp_addrolemember 'db_datareader','sqlfamilyrole'; You can connect to the database using SQL Server Management Studio (instructions to do that are provided at Walkthrough: Connecting to SQL Azure via the SSMS) or you can use the web interface at https://mhknbn2kdz.database.windows.net: Lastly, just for a bit of fun, I created a table up there called [dbo].[SqlFamily] into which you can leave a small calling card.
Simply execute the following SQL statement (changing the values of course): INSERT [dbo].[SqlFamily]([Name],[Message],[TwitterHandle],[BlogURI])VALUES ('Your name here','Some Message','your twitter handle (optional)','Blog URI (optional)'); [Id] is an IDENTITY field and there is a default constraint on [DT], hence there is no need to supply values for those. Note that you only have INSERT permissions, not UPDATE or DELETE, so make sure you get it right first time! Any offensive or distasteful remarks will of course be deleted :) Thank you for reading this far, and have fun using AdventureWorks on Azure. I hope it proves to be useful for some of you. @jamiet AdventureWorks on Azure - Provided by the SQL Server community, for the SQL Server community!
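And here is the promised sample: a first read-only query you might run once connected. This is a hypothetical example of my own, assuming the standard AdventureWorks2012 tables Production.Product and Sales.SalesOrderDetail:

-- Top 10 products by total sales value (SELECT-only, as the permissions allow)
SELECT TOP 10 p.Name, SUM(sod.LineTotal) AS TotalSales
FROM Production.Product AS p
INNER JOIN Sales.SalesOrderDetail AS sod
    ON sod.ProductID = p.ProductID
GROUP BY p.Name
ORDER BY TotalSales DESC;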

    Read the article

  • DAC pack up all your troubles

    - by Tony Davis
Visual Studio 2010, or perhaps its apparently-forthcoming sister, "SQL Studio", is being geared up to become the natural way for developers to create databases. Central to this drive is the introduction of 'data-tier application components', or DACs. Applications are developed as normal but when it comes to deployment, instead of supplying the DBA with a bunch of scripts to create the required database objects, the developer creates a single DAC Package ("DAC Pack"); a zipped XML file containing all the database objects needed by the application, along with versioning information, policies for deployment, and so on. It's an intriguing prospect. Developers can work on their development database using their existing tools and source control, and then package up the changes into a single DACPAC for deployment and management. DBAs get an "application level view" of how their instances are being used and the ability to collectively, rather than individually, manage the objects. The DBA needing to manage a large number of relatively small databases can use "DAC snapshots" to get a quick overview of what has changed across all the databases they manage. The reason that DAC packs haven't caused more excitement is that they can only be pushed to SQL Server 2008 R2, and they must be developed or inspected using Visual Studio 2010. Furthermore, what we see right now in VS2010 is more of a 'work-in-progress' or 'vision of the future', with serious shortcomings and restrictions that render it unsuitable for anything but small 'non-critical' departmental databases. The first problem is that DAC packs support a limited set of schema objects (corresponding closely to the features available on 'Azure'). This means that Service Broker queues, CLR objects, and perhaps most critically security (permissions, certificates etc.), are off-limits. Applications that require these objects will need to add them via a post-deployment TSQL script, rather defeating the whole idea. More worrying still is the process for altering a database with a DAC pack. The grand 'collective' philosophy, whereby a single XML file can be used for deploying and managing builds and changes, extends, unfortunately, to database upgrades. Any change to a database object will result in the creation of a new database, copying the data from the old version, nuking the previous one, and then renaming the new one. Simple, eh? The problem is that even something as trivial as adding a comment to a stored procedure in a 5GB database will require the server to find at least twice as much space, as well as sufficient elbow-room in the transaction log for copying the largest table. Of course, you'll need to take the database offline for the full course of the deployment, which is likely to take a long time if there is a lot of data. This upgrade/rename process breaks the log chain, makes any subsequent full restore operation highly complicated, and will also break log shipping. As with any grand vision, the devil is always in the detail. It's hard to fathom why Microsoft hasn't used a SQL Compare-style approach to the upgrade process, altering a database with a change script, and this will surely be adopted in the near future. Something had to be in place for VS2010, but right now DAC packs only make sense for Azure. For this, they're cute, but hardly compelling. Nevertheless, DBAs would do well to get familiar with VS 2010 and DAC packs. Like it or not, they're both coming. Cheers, Tony.

    Read the article

  • AppKata - Enter the next level of programming exercises

    - by Ralf Westphal
Doing CodeKatas is all the rage lately. That's great, since widely accepted exercises are important to further the art. They provide a means of communication across platforms and allow results to be compared, which is part of any deliberate practice. But CodeKatas suffer from their size. They are intentionally small, so they can be done again and again. Repetition helps to build habit and to dig deeper. Over time, ever new nuances of the problem or one's approach become visible. On the other hand, though, their small size limits the methods, techniques, and technologies that can be applied. To improve your TDD skills, doing CodeKatas might be enough. But what about other skills? Developing software in a team, designing larger pieces of software, iteratively releasing software… all this and more is kinda hard to train using the tiny CodeKata problems. That's why I'd like to present here another kind of kata I call Application Kata (or just AppKata). AppKatas are larger programming problems. They require the development of "whole" applications, i.e. not just one class or method, but bunches of classes accessible through a user interface. Also, AppKata problems are always split into iterations. To get the most out of them, just look at the requirements of one iteration at a time. This way you're closer to reality, where requirements evolve in unexpected ways. So if you're looking for more of a challenge for your software development skills, check out these AppKatas – or invent your own. AppKatas are platform independent like CodeKatas. Use whatever programming language and IDE you like. Also use whatever approach to software development you like. Just be sensitive to how easy it is to evolve your code across iterations. Reflect on what went well and what didn't. Compare your solutions with others. Or – for even more challenge – go for the "Coding Carousel" (see below). CSV Viewer An application to view CSV files. Sounds easy, but watch out! Requirements sometimes drastically change if the customer is happy with what you delivered. Iteration 1 Iteration 2 Iteration 3 Iteration 4 Iteration 5 (to come) Questionnaire If you like GUI programming, this AppKata might be for you. It's about an app to let people fill out questionnaires. This problem might also be interesting for you if you're into DDD. Iteration 1 Iteration 2 (to come) Iteration 3 (to come) Iteration 4 (to come) Tic Tac Toe For developers who like game programming. Although Tic Tac Toe is a trivial game, this AppKata poses some interesting infrastructure challenges. The GUI, however, stays simple; leave any 3D ambitions at home ;-) Iteration 1 Iteration 2 (to come) Iteration 3 (to come) Iteration 4 (to come) Iteration 5 (to come) Coding Carousel There are many ways you can do AppKatas. Work on them alone or in a team, pitch several devs against each other in an AppKata contest – or go around in a Coding Carousel. For the Coding Carousel you need at least 3 dev teams (regardless of size). All teams work on the same iteration at the same time. But here's the trick: After each iteration the teams swap their code. Whatever they did for iteration n will be the basis for changes another team has to apply in iteration n+1. The code goes around the teams like in a carousel. I promise you, that's gonna be fun! :-)

    Read the article

  • SQL SERVER – BACKUPIO, BACKUPBUFFER – Wait Type – Day 14 of 28

    - by pinaldave
Backup is the most important task for any database admin. Your data is at risk if you are not performing database backups. Honestly, I have seen many DBAs who know how to take backups but do not know how to restore them. (Sigh!) In this blog post we are going to discuss one of my real experiences with one of my clients – BACKUPIO. When I started to deal with it, I really had no idea how to fix the issue. However, after fixing it at two places, I think I know why this is happening, but at the same time, I am not sure the fix is the best solution. The reality is that the fix is not a solution but a workaround (which is not optimal, but gets your things done). From Books Online: BACKUPIO Occurs when a backup task is waiting for data, or is waiting for a buffer in which to store data. This type is not typical, except when a task is waiting for a tape mount. BACKUPBUFFER Occurs when a backup task is waiting for data, or is waiting for a buffer in which to store data. This type is not typical, except when a task is waiting for a tape mount. BACKUPIO and BACKUPBUFFER Explanation: These wait stats will occur when you are taking a backup on tape or any other extremely slow backup system. Reducing BACKUPIO and BACKUPBUFFER waits: In my recent consultancy, backup on tape was very slow, probably because the tape system was very old. When I explained the reason for this wait type during the consultancy, the owners immediately decided to replace the tape drive with an alternate system. They had a small SAN enclosure on the side that was not being used, which they decided to re-purpose. After a week, I received an email from their DBA saying that the wait stats had reduced drastically. At another location, my client was using a third-party tool (please don't ask me the name of the tool) to take backups. This tool was compressing the backup while taking it. I have had a very good experience with this tool almost all the time, except for this one experience. When I tried to take a backup using the native SQL Server compressed backup, there was a very small value on this wait type and the backup was much faster. However, when I attempted it with the third-party backup tool, this value was very high again and the backup took much more time. The third-party tool had many other features, but the client was not using them. We ended up using the native SQL Server compressed backup and it worked very well (a sketch of the relevant commands appears at the end of this post). If I see this wait type running high in a future consultancy, I will try to understand it in much more detail, and probably I will be able to come to a more solid solution. Read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Books Online for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
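As promised, a minimal sketch for checking these wait types on your own system and for trying the native compressed backup. The database name and backup path below are hypothetical placeholders:

-- Inspect accumulated backup-related waits since the last instance restart
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type IN ('BACKUPIO', 'BACKUPBUFFER');

-- Try a natively compressed backup (SQL Server 2008 and later)
BACKUP DATABASE MyDatabase
TO DISK = 'D:\Backups\MyDatabase.bak'
WITH COMPRESSION, STATS = 10;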

    Read the article

  • BizTalk Testing Series - The xpath Function

    - by Michael Stephenson
Background While the xpath function in a BizTalk orchestration is a very powerful feature, I have often come across the situation where someone has hard-coded an xpath expression in an orchestration. If you have read some of my previous posts about testing, I've tried to get across a general theme of test-driven or test-assisted development approaches, where the underlying principle is that you're building up your solution from small, well-tested units that are put together, and the resulting solution is usually quite robust. You will find more bugs within your unit tests and fewer outside of your team. The thing I don't like about the xpath function's usual usage is when you come across an orchestration which has something like the below snippet in an expression or assign shape: string result = xpath(myMessage,"string(//Order/OrderItem/ProductName)"); My main issue with this is that the xpath statement is hard-coded in the orchestration and you don't really know whether it works until you are running the orchestration. Some of the problems I think you end up with are: You waste time with lengthy debugging of the orchestration when your statement isn't working; you might not know the function isn't working quite as expected because the testable unit around it is big; and you are much more open to regression issues if your schema changes. Approach to Testing The technique I usually follow is to hold the xpath statement as a constant in a helper class, or to format a constant with a helper function to get the actual xpath statement. It is then used by the orchestration as follows: string result = xpath(myMessage, MyHelperClass.ProductNameXPathStatement); Because the xpath statement is now available outside of the orchestration, it becomes testable in its own right. This means: I can test it in its own right; I'm less likely to waste time tracking down problems caused by an error in the statement; and I can reduce the risk of regression issues. I'm now able to implement some testing around my xpath statements, which usually goes something like the following: the test will use a sample xml file; the sample will be validated against the schema; the test will execute the xpath statement and then check the results are as expected. Walk-through BizTalk internally uses the XPathNavigator behind the xpath function to implement the queries, usually via the navigator's Select or Evaluate functions. In the sample (link at bottom) I have a small solution which contains a schema from which I have generated a sample instance. I will then use this instance as the basis for my tests. In the below diagram you can see the helper class in which I've encapsulated my xpath expressions, and some helper functions which will format the expression in the case of a repeating node, where you would want to inject an index into the xpath query. I have then created a test class which has some functions to execute some queries against my sample xml file. An example of this is below. In the test class I have a couple of helper functions which will execute the xpath expressions in a similar way to BizTalk. You could have a proper helper class to do this if you wanted. You can see now in the BizTalk expression editor that I can use these functions alongside the xpath function.
Conclusion I hope you can see that with very little effort you can make your life much easier by testing xpath statements outside of an orchestration, rather than hard-coding them directly into the orchestration. This can also save you lots of pain longer term, because your build should break if your schema changes unexpectedly, causing these xpath tests to fail, whereas tests around the whole orchestration will make it more difficult to troubleshoot and work out the cause of the problem. Sample Link The sample is available from the following link: http://code.msdn.microsoft.com/testbtsxpathfunction Other Tools On the subject of using the xpath function, if you don't already use it, the below tool is very useful for creating your xpath statements (thanks BizBert): http://www.bizbert.com/bizbert/2007/11/30/XPath+The+Hidden+Language+Of+BizTalk.aspx
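As an aside, unrelated to the author's MSTest-based sample: if you just want to sanity-check an xpath expression against a sample document outside of any .NET harness, SQL Server's xml type offers one quick way. The document below is a hypothetical instance of my own:

DECLARE @sample xml = N'<Order><OrderItem><ProductName>Widget</ProductName></OrderItem></Order>';
-- Evaluate the same path the orchestration snippet queries
SELECT @sample.value('(//Order/OrderItem/ProductName)[1]', 'nvarchar(100)') AS ProductName;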

    Read the article

< Previous Page | 153 154 155 156 157 158 159 160 161 162 163 164  | Next Page >