Search Results

  • Streaming Netflix Media with My Wii

    - by Ben Griswold
    Late last year, I wrote about Streaming Media with my Sony Blu-ray Disc Player. I am still digging the Blu-ray player setup, but guess what showed up in the mail yesterday? That’s right! A free Netflix disc which now lets me instantly watch TV episodes and movies via my Wii console. I popped the disc into the console and in less than 2 minutes the brain-numbingly simple activation was complete. (Full disclosure: I already had my Wi-Fi connection configured, but I’m confident that the Netflix installation disc would have helpfully walked me through this additional step if need be.) As it turns out, the Wii Netflix UI offers far more options than what one gets with the Blu-ray setup. Not only can I view my Instant Queue, but there’s a list of recently watched movies, a list of recommended titles by category, the star rating system, movie information and nearly everything you find on the web. I reread Steve Krug’s Don’t Make Me Think: A Common Sense Approach to Web Usability on a flight back from Orlando on Wednesday, so my current view of the world may be a little skewed, but the brilliance of the Netflix Wii user interface is undeniable. It’s not that the Blu-ray navigation is complicated, but the Wii navigation feels familiar and intuitive. How intuitive? Well, you won’t find a single bit of help text on any of the Wii screens – just a simple and obvious point-and-click navigation system. And the UI is really pretty (which is still very important if you ask me) and so easy it becomes fun. Did I mention the media streaming works? Yep, we watched 2 half-hour kid videos yesterday without any streaming issues at all. If you have a Netflix account and a Wii, order your disc and give it a go. It’s good stuff.

    Read the article

  • Kids don’t mark their own homework

    - by jamiet
    During a discussion at work today, in regard to doing some thorough acceptance testing of the system that I currently work on, the topic of who should actually do the testing came up. I remarked that I didn’t think that I, as the developer, should be doing acceptance testing, and a colleague, Russ Taylor, agreed with me and then came out with this little pearler: “Kids don’t mark their own homework.” Maybe it’s a common turn of phrase but I had never heard it before and, to me, it sums up very succinctly my feelings on the matter. I tweeted about it and it got a couple of retweets as well as a slightly different perspective from Bruce Durling, who said: “I'm of the opinion that testers should be in the dev team & the dev *team* should be responsible for quality.” Bruce makes a good point that testers should be considered part of the dev team. I agree wholly with that and don’t think that point of view necessarily conflicts with Russ’s analogy. Yes, developers should absolutely be responsible for testing their own work – but I also think that in the murky world of data integration there is often a need for a 3rd party to validate that work. Improving testing mechanisms for data integration projects is something that is near and dear to my heart, so I would welcome any other thoughts around this. Let me know if you have any in the comments! @Jamiet

    Read the article

  • SQLAuthority News – 18 Seconds of Fame – My PASS Experience

    - by pinaldave
    Happy Holidays to All of YOU! Life is full of little and happy surprises. I think Christmas and Santa are based on it. I just received a very interesting email earlier today; I had no idea about it. Earlier this year, I visited Seattle to attend SQLPASS – read the complete summary over here: SQLAuthority News – SQLPASS Nov 8-11, 2010-Seattle – An Alternative Look at Experience. While I was walking around, someone stopped me and asked if they could talk to me for 15 seconds. I said yes, and they shot a quick video with a mobile phone. The conversation was very quick and I had forgotten about it. Today I received an email from one of my blog readers about it being on YouTube. Honestly, I did not know this was ever going to be on YouTube. I am surprised and thrilled. Watch my 18 seconds of fame video. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology

    Read the article

  • Enabling 32-Bit Applications on IIS7 (also affects 32-bit oledb or odbc drivers) [Solved]

    - by Humprey Cogay, C|EH
    We just bought a new web server. After installing Windows 2008 R2 (a 64-bit OS with IIS7), SQL Server 2008 R2 Standard, and IBM Client Access for V5R3 with its .NET data providers, I tried deploying our new project, which is fully functional on an IIS6-based web server, and encountered this error: The 'IBMDA400.DataSource.1' provider is not registered on the local machine. To remove the doubt that I still lacked some software prerequisites or had version conflicts – since I had encountered some errors while installing IBM Client Access – I created a connection tester, a Windows app that accepts a connection string as a parameter and verifies whether that connection string is valid. After entering the proper connection string I hit the button and the test was successful. So now I had trimmed my suspects down to my web app and IIS7. After Googling around I found this post by Rakki Muthukumar (a Microsoft Developer Support Engineer for ASP.NET and IIS7): http://blogs.msdn.com/b/rakkimk/archive/2007/11/03/iis7-running-32-bit-and-64-bit-asp-net-versions-at-the-same-time-on-different-worker-processes.aspx So I went scouting in IIS7's management console and found this little tweak, Enable 32-Bit Applications, under the application pool my app is a member of. After changing this parameter to TRUE – yahoo (although I'm a Google kind of person) – the web app works.
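    For reference, the same setting can also be flipped programmatically. Below is a minimal sketch using the Microsoft.Web.Administration API (it needs a reference to the Microsoft.Web.Administration assembly and administrative rights); the application pool name is hypothetical.

        // Minimal sketch: enable 32-bit applications for one IIS7 application pool.
        // Assumes a pool named "MyAppPool"; adjust to your own environment.
        using System;
        using Microsoft.Web.Administration;

        class Enable32BitPool
        {
            static void Main()
            {
                using (var serverManager = new ServerManager())
                {
                    // Same switch as "Enable 32-Bit Applications" in the IIS7 console.
                    ApplicationPool pool = serverManager.ApplicationPools["MyAppPool"];
                    pool.Enable32BitAppOnWin64 = true;
                    serverManager.CommitChanges();
                    Console.WriteLine("32-bit applications enabled for pool {0}", pool.Name);
                }
            }
        }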

    Read the article

  • Fun with Database Mail

    - by DavidWimbush
    I just had some fun with a new server I set up. I configured Database Mail but it wouldn't send mail. In the Database Mail log was an error message I'd never seen before: The mail could not be sent to the recipients because of the mail server failure. (Sending Mail using Account 1 (2010-02-15T16:58:54). Exception Message: Could not connect to mail server. (A non-recoverable error occurred during a database lookup). ) I looked around and found this forum thread: http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=139909. I looked into the anti-virus software and firewall but neither of those helped. Then I noticed the last post, where the original poster said "oops, I put an @ in the name of the SMTP server". How I laughed. What a ***! At least my problem isn't because I did something that careless. But then that little voice in my head said "Better check, just to be on the safe side". SMTP Server Name: smtp@domain. Who's the *** now? So, rbarlow, thanks very much for making the same silly mistake first so I could find the answer. (And I'm sorry for thinking you were a ***!)
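    Once the SMTP server name is fixed, it's worth confirming the setup with a test message. Here is a minimal sketch that queues one through msdb.dbo.sp_send_dbmail from C#; the connection string, profile name, and recipient are hypothetical and need to match your own configuration.

        // Minimal sketch: queue a Database Mail test message via sp_send_dbmail.
        using System;
        using System.Data;
        using System.Data.SqlClient;

        class DatabaseMailTest
        {
            static void Main()
            {
                using (var connection = new SqlConnection("Server=MyServer;Database=msdb;Integrated Security=SSPI"))
                using (var command = new SqlCommand("msdb.dbo.sp_send_dbmail", connection))
                {
                    command.CommandType = CommandType.StoredProcedure;
                    command.Parameters.AddWithValue("@profile_name", "Default Profile");
                    command.Parameters.AddWithValue("@recipients", "dba@example.com");
                    command.Parameters.AddWithValue("@subject", "Database Mail test");
                    command.Parameters.AddWithValue("@body", "If this arrives, the SMTP settings are good.");

                    connection.Open();
                    command.ExecuteNonQuery();

                    // The message is queued asynchronously; check msdb.dbo.sysmail_event_log if it never arrives.
                    Console.WriteLine("Test message queued.");
                }
            }
        }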

    Read the article

  • Pluralsight Meet the Author Podcast on HTML5 Canvas Programming

    - by dwahlin
      In the latest installment of Pluralsight’s Meet the Author podcast series, Fritz Onion and I talk about my new course, HTML5 Canvas Fundamentals.  In the interview I describe different canvas technologies covered throughout the course and a sample application at the end of the course that covers how to build a custom business chart from start to finish. Meet the Author:  Dan Wahlin on HTML5 Canvas Fundamentals   Transcript [Fritz] Hi. This is Fritz Onion. I’m here today with Dan Wahlin to talk about his new course HTML5 Canvas Fundamentals. Dan founded the Wahlin Group, which you can find at thewahlingroup.com, which specializes in ASP.NET, jQuery, Silverlight, and SharePoint consulting. He’s a Microsoft Regional Director and has been awarded Microsoft’s MVP for ASP.NET, Connected Systems, and Silverlight. Dan is on the INETA Bureau’s — Speaker’s Bureau, speaks at conferences and user groups around the world, and has written several books on .NET. Thanks for talking to me today, Dan. [Dan] Always good to talk with you, Fritz. [Fritz] So this new course of yours, HTML5 Canvas Fundamentals, I have to say that most of the really snazzy demos I’ve seen with HTML5 have involved Canvas, so I thought it would be a good starting point to chat with you about why we decided to create a course dedicated just to Canvas. If you want to kind of give us that perspective. [Dan] Sure. So, you know, there’s quite a bit of material out there on HTML5 in general, and as people that have done a lot with HTML5 are probably aware, a lot of HTML5 is actually JavaScript centric. You know, a lot of people when they first learn it, think it’s tags, but most of it’s actually JavaScript, and it just so happens that the HTML5 Canvas is one of those things. And so it’s not just, you know, a tag you add and it just magically draws all these things. You mentioned there’s a lot of cool things you can do from games to there’s some really cool multimedia applications out there where they integrate video and audio and all kinds of things into the Canvas, to more business scenarios such as charting and things along those lines. So the reason we made a course specifically on it is, a lot of the material out there touches on it but the Canvas is actually a pretty deep topic. You can do some pretty advanced stuff or easy stuff depending on what your application requirements are, and the API itself, you know, there’s over 30 functions just in the Canvas API and then a whole set of properties that actually go with that as well. So it’s a pretty big topic, and that’s why we created a course specifically tailored towards just the Canvas. [Fritz] Right. And let’s — let me just review the outline briefly here for everyone. So you start off with an introduction to getting started with Canvas, drawing with the HTML5 Canvas, then you talk about manipulating pixels, and you finish up with building a custom data chart. So I really like your example flow here. I think it will appeal to even business developers, right. Even if you’re not into HTML5 for the games or the media capabilities, there’s still something here for everyone I think working with the Canvas. Which leads me to another question, which is, where do you see the Canvas fitting in to kind of your day-to-day developer, people that are working business applications and maybe vanilla websites that aren’t doing kind of cutting edge stuff with interactivity with users? Is there a still a place for the Canvas in those scenarios? [Dan] Yeah, definitely. 
I think a lot of us — and I include myself here — over the last few years, the focus has generally been, especially if you’re, let’s say, a PHP or ASP.NET or Java type of developer, we’re kind of accustomed to working on the server side, and, you know, we kind of relied on Flash or Silverlight or these other plug-ins for the client side stuff when it was kind of fancy, like charts and graphs and things along those lines. With the what I call massive shift of applications, you know, mainly because of mobile, to more of client side, one of the big benefits I think from a maybe corporate standard way of thinking of things, since we do a lot of work with different corporations, is that, number one, rather than having to have the plug-in, which of course isn’t going to work on iPad and some of these other devices out there that are pretty popular, you can now use a built-in technology that all the modern browsers support, and that includes things like Safari on the iPad and iPhone and the Android tablets and things like that with their browsers, and actually render some really sophisticated charts. Whether you do it by scratch or from scratch or, you know, get a third party type of library involved, it’s just JavaScript. So it downloads fast so it’s good from a performance perspective; and when it comes to what you can render, it’s extremely robust. You can do everything from, you know, your basic circles to polygons or polylines to really advanced gradients as well and even provide some interactivity and animations, and that’s some of the stuff I touch upon in the class. In fact, you mentioned the last part of the outline there is building a custom data chart and that’s kind of gears towards more of the, what I’d call enterprise or corporate type developer. [Fritz] Yeah, that makes sense. And it’s, you know, a lot of the demos I’ve seen with HTML5 focus on more the interactivity and kind of game side of things, but the Canvas is such a diverse element within HTML5 that I can see it being applicable pretty much anywhere. So why don’t we talk a little bit about some of the specifics of what you cover? You talk about drawing and then manipulating pixels. You want to kind of give us the different ways of working with the Canvas and what some of those APIs provide for you? [Dan] Sure. So going all the way back to the start of the outline, we actually started off by showing different demonstrations of the Canvas in action, and we show some fun stuff — multimedia apps and games and things like that — and then also some more business scenarios; and then once you see that, hopefully it kinds of piques your interest and you go, oh, wow, this is actually pretty phenomenal what you can do. So then we start you off with, so how to you actually draw things. Now, there are some libraries out there that will draw things like graphs, but if you want to customize those or just build something you have from scratch, you need to know the basics, such as, you know, how do you draw circles and lines and arcs and Bezier curves and all those fancy types of shapes that a given chart may have on it or that a game may have in it for that matter. So we start off by covering what I call the core API functions; how do you, for instance, fill a rectangle or convert that to a square by setting the height and the width; how do you draw arcs or different types of curves and there’s different types supported such as I mentioned Bezier curves or quadratic curves; and then we also talk about how do you integrate text into it. 
You might have some images already that are just regular bitmap type images that you want to integrate, you can do that with a Canvas. And you can even sync video into the Canvas, which actually opens up some pretty interesting possibilities for both business and I think just general multimedia apps. Once you kind of get those core functions down for the basic shapes that you need to be able to draw on any type of Canvas, then we go a little deeper into what are the pixels that are there to manipulate. And that’s one of the important things to understand about the HTML5 Canvas, scalable vector graphics is another thing you can use now in the modern browsers; it’s vector based. Canvas is pixel based. And so we talk about how to do gradients, how can you do transforms, you know, how do you scale things or rotate things, which is extremely useful for charts ’cause you might have text that, you know, flips up on its side for a y-axis or something like that. And you can even do direct pixel manipulation. So it’s really, really powerful. If you want to get down to the RGBA level, you can do that, and I show how to do that in the course, and then kind of wrap that section up with some animation fundamentals. [Fritz] Great. Yeah, that’s really powerful stuff for programmatically rendering data to clients and responding to user inputs. Look forward to seeing what everyone’s going to come up with building this stuff. So great. That’s — that’s HTML5 Canvas Fundamentals with Dan Wahlin. Thanks very much, Dan. [Dan] Thanks again. I appreciate it.

    Read the article

  • What are the reasons why Clojure is hyped and PicoLisp widely ignored?

    - by Thorsten
    I recently discovered the Lisp family of programming languages, and it's definitely one of the more diverse and widespread families in the programming language world. I like Elisp because that most wonderful tool Emacs is an Elisp interpreter. But I was looking for one more Lisp dialect to learn and thought Clojure would be the obvious choice nowadays - until I discovered the well-hidden gem PicoLisp. That must be the most intelligent programming environment I have ever seen, like taking the best ideas from Lisp and Smalltalk and adding performance and practicability - and the beauty of parsimony. There is even an Emacs mode for it. PicoLisp must be the productivity world champion when it comes to building business applications with a database and web client - and that's a very common task. It seems that throwing more and more hardware cores at your PicoLisp application makes it faster and faster, and the database is very performant anyway. However, reactions to PicoLisp in general mailing lists etc. are almost hostile (envy?), and there is absolutely no hype and very little publicity (i.e. not one book published). Are there real, justified reasons for this (except the vast amount of Java libs accessible from Clojure, I know that one)? Or is the mainstream getting it wrong again (see C vs Lisp, Java vs Smalltalk, Windows vs Linux) and will it come to the conclusion 10 years later that the JVM was good as an in-between solution, but a really fast Lisp interpreter on multicore machines is much better and allows much cleaner concepts? PS 1: Please note: I'm not interested in Scheme or any Common Lisp dialect, although they might be fine languages. It's just PicoLisp vs Clojure. PS 2: another thing I like about PicoLisp is its similarity to Elisp in certain aspects (both are descendants of MacLisp?) - it's easier to learn two similar languages. There is so much "dynamic binding bashing" on the web, but two of the most appealing Lisp applications use it.

    Read the article

  • Work Item Visualizer for TFS 2010 - New Extension

    - by MikeParks
    I released another new extension to the Visual Studio Gallery today called Work Item Visualizer for TFS 2010. I've only heard positive things about it so far; hopefully it stays that way :) Basically, it creates a diagram of all work items linked to a work item ID which the user specifies in a search box. This extension was coded using DGML (the same graph rendering language used for the Visual Studio 2010 Architecture Tools). It was pretty cool getting a chance to create something using some of the newest technology out there. Well, I just wanted to throw a blog up to get the word out on it a little more. If you're using Visual Studio 2010 with Team Foundation Server 2010, feel free to check it out! Thanks everyone. Download Link: http://visualstudiogallery.msdn.microsoft.com/en-us/a35b6010-750b-47f6-a7a5-41f0fa7294d2

    What it does:
    · Creates a DGML graph to visualize linked TFS Work Items by entering a Work Item ID in the toolbar search box

    How it benefits you:
    · Allows you to easily analyze the hierarchy of your TFS Work Items
    · Gives you the ability to perform basic risk/impact analysis when creating or editing Work Items
    · Great for meetings in case you need to discuss the entire scope of linked Work Items
    · Easier project planning
    · Eliminates the need to create TFS queries or reports to view a tree of Work Items
    · Easily lets you see the entire tree of work items linked to the one you're working on

    Navigation tips:
    · Use Ctrl + Mouse Wheel Scroll to zoom in and out
    · Use Ctrl + Left Mouse click (and hold) to move the document around
    · Right click on the DGML area for more options (like copying the image or viewing in groups)
    · Clicking on each node highlights that node and the links connected to it
    · Colors in the legend can be changed
    · When work item nodes are deleted, the view is automatically updated
    · Double clicking on a work item node will open up that Work Item's URL

    Try it out on work items that have several links and let us know what you think. A big thanks goes out to everyone working on the http://visualization.codeplex.com/ project for publishing the source code on CodePlex, which really helped me learn how DGML (Directed Graph Markup Language – new to the Visual Studio 2010 Architecture Tools) works! - Mike
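    To give a feel for what such a tool renders, here is a minimal sketch of a DGML document built with System.Xml.Linq; the work item IDs, labels, and file name are made up purely for illustration.

        // Minimal sketch: emit a tiny DGML graph of linked work items.
        using System;
        using System.Xml.Linq;

        class DgmlSketch
        {
            static void Main()
            {
                XNamespace ns = "http://schemas.microsoft.com/vs/2009/dgml";

                var graph = new XElement(ns + "DirectedGraph",
                    new XElement(ns + "Nodes",
                        new XElement(ns + "Node", new XAttribute("Id", "1001"), new XAttribute("Label", "1001: User Story")),
                        new XElement(ns + "Node", new XAttribute("Id", "1002"), new XAttribute("Label", "1002: Task")),
                        new XElement(ns + "Node", new XAttribute("Id", "1003"), new XAttribute("Label", "1003: Bug"))),
                    new XElement(ns + "Links",
                        new XElement(ns + "Link", new XAttribute("Source", "1001"), new XAttribute("Target", "1002")),
                        new XElement(ns + "Link", new XAttribute("Source", "1001"), new XAttribute("Target", "1003"))));

                // Open the resulting file in Visual Studio 2010 to see the rendered graph.
                graph.Save("WorkItems.dgml");
                Console.WriteLine("Wrote WorkItems.dgml");
            }
        }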

    Read the article

  • C#: Adding Functionality to 3rd Party Libraries With Extension Methods

    - by James Michael Hare
    Ever have one of those third party libraries that you love but it's missing that one feature or one piece of syntactical candy that would make it so much more useful?  This, I truly think, is one of the best uses of extension methods.  I began discussing extension methods in my last post (which you find here) where I expounded upon what I thought were some rules of thumb for using extension methods correctly.  As long as you keep in line with those (or similar) rules, they can often be useful for adding that little extra functionality or syntactical simplification for a library that you have little or no control over. Oh sure, you could take an open source project, download the source and add the methods you want, but then every time the library is updated you have to re-add your changes, which can be cumbersome and error prone.  And yes, you could possibly extend a class in a third party library and override features, but that's only if the class is not sealed, static, or constructed via factories. This is the perfect place to use an extension method!  And the best part is, you and your development team don't need to change anything!  Simply add the using for the namespace the extensions are in! So let's consider this example.  I love log4net!  Of all the logging libraries I've played with, it, to me, is one of the most flexible and configurable logging libraries and it performs great.  But this isn't about log4net, well, not directly.  So why would I want to add functionality?  Well, it's missing one thing I really want in the ILog interface: ability to specify logging level at runtime. For example, let's say I declare my ILog instance like so:     using log4net;     public class LoggingTest     {         private static readonly ILog _log = LogManager.GetLogger(typeof(LoggingTest));         ...     }     If you don't know log4net, the details aren't important, just to show that the field _log is the logger I have gotten from log4net. So now that I have that, I can log to it like so:     _log.Debug("This is the lowest level of logging and just for debugging output.");     _log.Info("This is an informational message.  Usual normal operation events.");     _log.Warn("This is a warning, something suspect but not necessarily wrong.");     _log.Error("This is an error, some sort of processing problem has happened.");     _log.Fatal("Fatals usually indicate the program is dying hideously."); And there's many flavors of each of these to log using string formatting, to log exceptions, etc.  But one thing there isn't: the ability to easily choose the logging level at runtime.  Notice, the logging levels above are chosen at compile time.  Of course, you could do some fun stuff with lambdas and wrap it, but that would obscure the simplicity of the interface.  And yes there is a Logger property you can dive down into where you can specify a Level, but the Level properties don't really match the ILog interface exactly and then you have to manually build a LogEvent and... well, it gets messy.  I want something simple and sexy so I can say:     _log.Log(someLevel, "This will be logged at whatever level I choose at runtime!");     Now, some purists out there might say you should always know what level you want to log at, and for the most part I agree with them.  For the most party the ILog interface satisfies 99% of my needs.  
In fact, for most application logging yes you do always know the level you will be logging at, but when writing a utility class, you may not always know what level your user wants. I'll tell you, one of my favorite things is to write reusable components.  If I had my druthers I'd write framework libraries and shared components all day!  And being able to easily log at a runtime-chosen level is a big need for me.  After all, if I want my code to really be re-usable, I shouldn't force a user to deal with the logging level I choose. One of my favorite uses for this is in Interceptors -- I'll describe Interceptors in my next post and some of my favorites -- for now just know that an Interceptor wraps a class and allows you to add functionality to an existing method without changing it's signature.  At the risk of over-simplifying, it's a very generic implementation of the Decorator design pattern. So, say for example that you were writing an Interceptor that would time method calls and emit a log message if the method call execution time took beyond a certain threshold of time.  For instance, maybe if your database calls take more than 5,000 ms, you want to log a warning.  Or if a web method call takes over 1,000 ms, you want to log an informational message.  This would be an excellent use of logging at a generic level. So here was my personal wish-list of requirements for my task: Be able to determine if a runtime-specified logging level is enabled. Be able to log generically at a runtime-specified logging level. Have the same look-and-feel of the existing Debug, Info, Warn, Error, and Fatal calls.    Having the ability to also determine if logging for a level is on at runtime is also important so you don't spend time building a potentially expensive logging message if that level is off.  Consider an Interceptor that may log parameters on entrance to the method.  If you choose to log those parameter at DEBUG level and if DEBUG is not on, you don't want to spend the time serializing those parameters. Now, mine may not be the most elegant solution, but it performs really well since the enum I provide all uses contiguous values -- while it's never guaranteed, contiguous switch values usually get compiled into a jump table in IL which is VERY performant - O(1) - but even if it doesn't, it's still so fast you'd never need to worry about it. So first, I need a way to let users pass in logging levels.  Sure, log4net has a Level class, but it's a class with static members and plus it provides way too many options compared to ILog interface itself -- and wouldn't perform as well in my level-check -- so I define an enum like below.     namespace Shared.Logging.Extensions     {         // enum to specify available logging levels.         public enum LoggingLevel         {             Debug,             Informational,             Warning,             Error,             Fatal         }     } Now, once I have this, writing the extension methods I need is trivial.  Once again, I would typically /// comment fully, but I'm eliminating for blogging brevity:     namespace Shared.Logging.Extensions     {         // the extension methods to add functionality to the ILog interface         public static class LogExtensions         {             // Determines if logging is enabled at a given level.             
public static bool IsLogEnabled(this ILog logger, LoggingLevel level)             {                 switch (level)                 {                     case LoggingLevel.Debug:                         return logger.IsDebugEnabled;                     case LoggingLevel.Informational:                         return logger.IsInfoEnabled;                     case LoggingLevel.Warning:                         return logger.IsWarnEnabled;                     case LoggingLevel.Error:                         return logger.IsErrorEnabled;                     case LoggingLevel.Fatal:                         return logger.IsFatalEnabled;                 }                                 return false;             }             // Logs a simple message - uses same signature except adds LoggingLevel             public static void Log(this ILog logger, LoggingLevel level, object message)             {                 switch (level)                 {                     case LoggingLevel.Debug:                         logger.Debug(message);                         break;                     case LoggingLevel.Informational:                         logger.Info(message);                         break;                     case LoggingLevel.Warning:                         logger.Warn(message);                         break;                     case LoggingLevel.Error:                         logger.Error(message);                         break;                     case LoggingLevel.Fatal:                         logger.Fatal(message);                         break;                 }             }             // Logs a message and exception to the log at specified level.             public static void Log(this ILog logger, LoggingLevel level, object message, Exception exception)             {                 switch (level)                 {                     case LoggingLevel.Debug:                         logger.Debug(message, exception);                         break;                     case LoggingLevel.Informational:                         logger.Info(message, exception);                         break;                     case LoggingLevel.Warning:                         logger.Warn(message, exception);                         break;                     case LoggingLevel.Error:                         logger.Error(message, exception);                         break;                     case LoggingLevel.Fatal:                         logger.Fatal(message, exception);                         break;                 }             }             // Logs a formatted message to the log at the specified level.              
public static void LogFormat(this ILog logger, LoggingLevel level, string format,                                          params object[] args)             {                 switch (level)                 {                     case LoggingLevel.Debug:                         logger.DebugFormat(format, args);                         break;                     case LoggingLevel.Informational:                         logger.InfoFormat(format, args);                         break;                     case LoggingLevel.Warning:                         logger.WarnFormat(format, args);                         break;                     case LoggingLevel.Error:                         logger.ErrorFormat(format, args);                         break;                     case LoggingLevel.Fatal:                         logger.FatalFormat(format, args);                         break;                 }             }         }     } So there it is!  I didn't have to modify the log4net source code, so if a new version comes out, i can just add the new assembly with no changes.  I didn't have to subclass and worry about developers not calling my sub-class instead of the original.  I simply provide the extension methods and it's as if the long lost extension methods were always a part of the ILog interface! Consider a very contrived example using the original interface:     // using the original ILog interface     public class DatabaseUtility     {         private static readonly ILog _log = LogManager.Create(typeof(DatabaseUtility));                 // some theoretical method to time         IDataReader Execute(string statement)         {             var timer = new System.Diagnostics.Stopwatch();                         // do DB magic                                    // this is hard-coded to warn, if want to change at runtime tough luck!             if (timer.ElapsedMilliseconds > 5000 && _log.IsWarnEnabled)             {                 _log.WarnFormat("Statement {0} took too long to execute.", statement);             }             ...         }     }     Now consider this alternate call where the logging level could be perhaps a property of the class          // using the original ILog interface     public class DatabaseUtility     {         private static readonly ILog _log = LogManager.Create(typeof(DatabaseUtility));                 // allow logging level to be specified by user of class instead         public LoggingLevel ThresholdLogLevel { get; set; }                 // some theoretical method to time         IDataReader Execute(string statement)         {             var timer = new System.Diagnostics.Stopwatch();                         // do DB magic                                    // this is hard-coded to warn, if want to change at runtime tough luck!             if (timer.ElapsedMilliseconds > 5000 && _log.IsLogEnabled(ThresholdLogLevel))             {                 _log.LogFormat(ThresholdLogLevel, "Statement {0} took too long to execute.",                     statement);             }             ...         }     } Next time, I'll show one of my favorite uses for these extension methods in an Interceptor.

    Read the article

  • Slides and Samples for My TechEd / Microsoft BI Conference Talks

    - by plitwin
    I posted the slides and samples for the talks I delivered in New Orleans on June 8th at Microsoft TechEd and the Business Intelligence Conference. They can be downloaded from Paul Litwin's Conference Downloads. #1 Creating Report Subscriptions with SQL Server 2008 Reporting Services at 8 AM on Tuesday, Room 241. In this session, learn how to set up standard and data-driven subscriptions using Report Manager. We discuss creating file-share, email, and null subscriptions, and how to deal with potential issues with parameters and security. We also demonstrate a sophisticated Microsoft ASP.NET-based application that creates subscriptions by calling the SSRS Web Services API. #2 ASP.NET MVC for Web Forms Programmers at 3:15 PM Tuesday, Room 391. Are you comfortable creating ASP.NET Web Form applications but even a little curious about what all the fuss is about MVC and test-driven development? In this session, Web Form junkie Paul Litwin takes a critical look at the world of ASP.NET MVC, but not from any expert point of view. Instead, Paul shares his experience as a Web Form developer who decided to take a closer look at this radical new approach to ASP.NET development. Come hear what Paul learned and whether he plans to employ ASP.NET MVC in his future ASP.NET applications.

    Read the article

  • When does a product run out of support?

    That is a question I get regularly from customers. Microsoft has a great site where you can find that information. Unfortunately this site is not easy to find, and a lot of people are not aware of it – a good reason to promote it a little. So if you ever get a question on this topic, go to http://support.microsoft.com/lifecycle/search/Default.aspx. At that site, you can also find the details of the policy.

    Microsoft Support Lifecycle Policy
    The Microsoft Support Lifecycle policy took effect in October 2002, and applies to most products currently available through retail purchase or volume licensing and most future release products. Through the policy, Microsoft will offer a minimum of:
    - 10 years of support (5 years Mainstream Support and 5 years Extended Support) at the supported service pack level for Business and Developer products
    - 5 years of Mainstream Support at the supported service pack level for Consumer/Hardware/Multimedia products
    - 3 years of Mainstream Support for products that are released annually (for example, Money, Encarta, Picture It!, and Streets & Trips)

    Phases of the Support Lifecycle

    Mainstream Support
    Mainstream Support is the first phase of the product support lifecycle. At the supported service pack level, Mainstream Support includes:
    - Incident support (no-charge incident support, paid incident support, support charged on an hourly basis, support for warranty claims)
    - Security update support
    - The ability to request non-security hotfixes
    Please note: Enrollment in a maintenance program may be required to receive these benefits for certain products.

    Extended Support
    The Extended Support phase follows Mainstream Support for Business and Developer products. At the supported service pack level, Extended Support includes:
    - Paid support
    - Security update support at no additional cost
    - Non-security related hotfix support, which requires a separate Extended Hotfix Support Agreement to be purchased (per-fix fees also apply)
    Please note: Microsoft will not accept requests for warranty support, design changes, or new features during the Extended Support phase. Extended Support is not available for Consumer, Hardware, or Multimedia products. Enrollment in a maintenance program may be required to receive these benefits for certain products.

    Self-Help Online Support
    Self-Help Online Support is available throughout a product's lifecycle and for a minimum of 12 months after the product reaches the end of its support. Microsoft online Knowledge Base articles, FAQs, troubleshooting tools, and other resources are provided to help customers resolve common issues.
    Please note: Enrollment in a maintenance program may be required to receive these benefits for certain products.
    (source: http://support.microsoft.com/lifecycle/#tab1)

    Read the article

  • enable email on Godaddy when using Zerigo on Heroku hosted app

    - by joelmaranhao
    A little recap of what I have done, and then my questions Q1, Q2 and Q3.

    1 - I developed a RoR app that I deployed on Heroku: biowatts.heroku.com
    2 - I bought a domain name at GoDaddy: biowattsonline.com
    3 - I am using the Zerigo addon for DNS: heroku addons:add custom_domains and heroku addons:add zerigo_dns:basic
    4 - Added my domains in Heroku (heroku domains:add biowattsonline.com and heroku domains:add www.biowattsonline.com) and subdomains (heroku domains:add calculator.biowattsonline.com)
    Q1: Where do we configure the forward to http://biowattsonline.com/biogas_calculator ?
    5 - Configured GoDaddy by adding the Zerigo domains in the Nameservers section: a.ns.zerigo.net, b.ns.zerigo.net, c.ns.zerigo.net, d.ns.zerigo.net, e.ns.zerigo.net. The GoDaddy DNS section is empty: "Not hosted here". OK, this all works fine: http://biowattsonline.com is correctly found.
    6 - Subdomain forward: I want calculator.biowattsonline.com to be forwarded to biowattsonline.com/biogas_calculator
    Q2: So I created the forward in GoDaddy, but is that correct?
    7 - GoDaddy email
    Q3: I have one free email account with GoDaddy; now that I am using Zerigo I don't know how to configure GoDaddy to make it work again, because it worked with the default values.
    Any ideas on Q1, Q2 and Q3? Thanks, Joel

    Read the article

  • Copyrights concerning code snippets and larger amounts of code

    - by JustcallmeDrago
    I am designing a public code repository. Users will be allowed to post and edit whatever amount of code they want, from code snippets to entire multi-file projects. I have a few major legal concerns about this:

    Not getting sued/shut down - I feel the site would be a much easier target than tracking down an individual user to sue. I have looked around a bit and see that links to legal info in the footer of each page are common. What specific things should I do - and what does a site such as YouTube (on which I see copyrighted material all the time) do - for protection?

    Citing sources and editing sourced code - If a user wants to post code that isn't theirs, what concerns/safeguards should I have? Will a link suffice, and what do I need further to allow the code to be edited (to improve it, for example)? What can happen if a user posts copyrighted code without citing it?

    Large chunks of code - What legal differences should I look out for as the amount grows?

    Not having a mess of licenses for the site - I would like to have a single license (like RosettaCode) that keeps things simple for interaction on the site. I want the code to be postable and editable. I have looked into StackOverflow's Creative Commons license a little and it says that "If you alter, transform, or build upon this work, you may distribute the resulting work only under the same or similar license to this one." And on Rosetta Code: "All software found on Rosetta Code should be considered potentially hazardous. Use at your own risk. Be aware that all code on Rosetta Code is under the GNU Free Documentation License, as are any edits made by contributors. See Rosetta Code:Copyrights for details." What other licenses are like this?

    Commercializing the site - In what ways can I and can't I make money off of a site that contains code like this? All code will be publicly visible. Initial thoughts are having ads or charging for advanced features.

    Read the article

  • Android “open for embedded”? Must-read Ars Technica article

    - by terrencebarr
    A few days ago ars technica published an article “Google’s iron grip on Android: Controlling open source by any means necessary”. If you are considering Android for embedded this article is a must-read to understand the severe ramifications of Google’s tight (and tightening) control on the Android technology and ecosystem. Some quotes from the ars technica article: “Android is open – except for all the good parts“ “Android actually falls into two categories: the open parts from the Android Open Source Project (AOSP) … and the closed source parts, which are all the Google-branded apps” “Android open source apps … turn into abandonware by moving all continuing development to a closed source model.” “Joining the OHA requires a company to sign its life away and promise to not build a device that runs a competing Android fork.” “Google Play Services is a closed source app owned by Google … to turn the “Android App Ecosystem” into the “Google Play Ecosystem” “You’re allowed to contribute to Android and allowed to use it for little hobbies, but in nearly every area, the deck is stacked against anyone trying to use Android without Google’s blessing“ Compare this with a recent Wired article “Oracle Makes Java More Relevant Than Ever”: “Oracle has actually opened up Java even more — getting rid of some of the closed-door machinations that used to be part of the Java standards-making process. Java has been raked over the coals for security problems over the past few years, but Oracle has kept regular updates coming. And it’s working on a major upgrade to Java, due early next year.” Cheers, – Terrence Filed under: Embedded, Mobile & Embedded Tagged: Android, embedded, Java Embedded, Open Source

    Read the article

  • AutoScroll panel working intermittently.

    - by Edward Boyle
    I spent hours last week trying to get AutoScroll to function properly on a derived/inherited panel control I have been writing. I found no answers on my own, so I posted to several forums and moved on to other code while I waited for a reply. Then, out of nowhere, it started working properly. Now, today (about a week later), I notice it is not working again! I go back to those old posts with hopes I will find an answer – no such luck. I Google for about two hours, reading everything I come across. I was just about to write a new custom control from the ground up, perhaps using a little unmanaged code to force things to function properly. All I knew was "options in front of me = delays". Just before I gave up, my head in my hands, Jordan Sirwin's appropriately titled blog post "C#: Windows Panel AutoScroll Bug / Intended Suckyness" saved the day! In order for scroll bars to display, there must be at least one control in the Panel with AutoSize set to true. This is absurd… I'm not sure if this is a bug or intended, but it's stupid. – I feel your pain. How many others have spent hours on this, or worse, just plain given up? I want those hours back, damnit!
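    For anyone hitting the same thing, here is a minimal sketch of the workaround the post describes – keep at least one AutoSize = true control inside the panel so the scroll bars show up. The control names and sizes are hypothetical.

        // Minimal sketch: a panel with AutoScroll plus an auto-sized child control,
        // per the workaround described in the post.
        using System;
        using System.Drawing;
        using System.Windows.Forms;

        public class ScrollDemoForm : Form
        {
            public ScrollDemoForm()
            {
                var panel = new Panel { Dock = DockStyle.Fill, AutoScroll = true };

                // A child control larger than the visible area, so there is something to scroll to.
                var bigBox = new PictureBox
                {
                    BackColor = Color.LightSteelBlue,
                    Location = new Point(0, 0),
                    Size = new Size(1200, 1200)
                };
                panel.Controls.Add(bigBox);

                // The workaround: at least one control in the panel with AutoSize = true.
                var autoSizedLabel = new Label
                {
                    AutoSize = true,
                    Text = "auto-sized anchor control",
                    Location = new Point(10, 10)
                };
                panel.Controls.Add(autoSizedLabel);

                Controls.Add(panel);
                ClientSize = new Size(400, 300);
            }

            [STAThread]
            static void Main()
            {
                Application.EnableVisualStyles();
                Application.Run(new ScrollDemoForm());
            }
        }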

    Read the article

  • What do you do with coder's block?

    - by Garet Claborn
    Lately it has been a bit rough. I basically know all the things I need and all the avenues to get there for work. There's been no real issue of a problem with too high complexity, and performance is good. Still, after three major projects this year, my mind is behaving a little strangely. It's like I'm used to working in O(1+log(N-neatTricks)) but for some reason it processes in O(N^2)! I've experienced a sort of burnout after long deadlines and drudging projects before, but when it turns into a longer experience, I haven't found the usual suspects to be helpful:

    - Take more walks
    - Work on other code
    - Overdesign everything until I feel intensely driven to just make it (sorta works)

    How can a programmer recoup from the specific hole in your head programming leaves after being mentally ransacked by these bloody corporations and their fancy money? Hopefully some of you have some better ideas, because I could really use another round of being looted and pillaged. I've often wondered if there are special puzzles or some kind of activity that would de-stress the tangled balance of left and right braininess programmers often deal with. Do any special techniques, activities, anything seem to help with the developer's mindset especially?

    Read the article

  • Etch a Circuit Board using a Simple Homemade Mixture

    - by ETC
    If you’ve been dabbling in DIY electronics projects but you’re not so excited about keeping strong acids around to etch your circuit boards, this simple DIY recipe uses common household chemicals in lieu of strong acid. Electronics hobbyist Stephen Hobley wanted to see if he could create an etching solution that wasn’t as dangerous and noxious smelling as traditional muriatic acid solutions. By combining regular white vinegar, hydrogen peroxide, and table salt, he created a homemade etching solution from ingredients safe enough to store in your pantry. The only downside to his recipe is that, compared to traditional etching solutions, the process takes a little bit longer, so you’ll have to leave your board in the solution longer. Not a bad trade-off for the ability to skip using any oops-I-burned-my-skin-off acids. Check out the process in the video below. Hit up the link below for more information and an interesting explanation of the chemical process (he talks about not quite understanding it in the video, but two chemists write in and give him the full rundown). DIY Etching Solution [Stephen Hobley via Make]

    Read the article

  • JMX Based Monitoring - Part One

    - by Anthony Shorten
    In all versions of the Oracle Utilities Application Framework there is an ability to use Java Management eXtensions (JMX) to both manage and monitor the various components of the product. This means that sites can use a JSR120 compliant JMX browser or JMX console to view or manage the components of the product with little or no configuration required. In each version we have progressively added JMX capabilities to give IT groups more detailed information. In Oracle Utilities Application Framework V2.1 and above it was possible to use the MBeans provided by the Web Application Server via JMX to monitor the online component of the product as well as manage the configuration. Also, with a few additional Java options it is possible to get a good level of detail about the Java Virtual Machine, including memory and thread usage. In Oracle Utilities Application Framework V2.2 and above, we added support for Java 5 statistics (Java enabled them by default) and database pool statistics, and also added the ability to manage and monitor the batch component of the architecture. Now, in Oracle Utilities Application Framework V4 and above, we added support for Java 6 MXBeans, online management of the cache using JMX, additional JVM information, and performance monitoring using JMX. JMX allows the product to be managed from a common console such as Oracle Enterprise Manager, Tivoli, or HP OpenView (and a lot more). Over the next week or so I will be compiling a set of blog entries discussing what is available (in summary format) using JMX and how to get access to the JMX statistics for your version of the product.

    Read the article

  • You Can't Win on Price

    - by David Dorf
    This year I did the majority of my Christmas shopping from the comfort of my home office. There aren't many things in stores you can't find online these days. I find it easier to search, research, and compare products online rather than walking the mall anyway. But there's a segment of the population that likes to be in the store, touching the products. For those people, smartphones avail them some of the e-commerce features I mentioned right there in the aisles. First it was RedLaser, then TheFind, ShopSavvy and many others. But the one that should be scaring retailers is Amazon's PriceCheck application. It lets you scan the product barcode, take a picture of the product, or speak the product's name. Once the product is identified, it shows the online prices, with Amazon at the top of the list. Within 10 seconds you can order the item and Amazon Prime members get free 2-day shipping too. I don't think fashion and grocery retailers need to worry much, but I have to believe smartphones are helping Amazon win a little more of the brand-name hardgoods market. So what's a retailer to do? Best Buy has begun to put QR Codes on their shelf labels that are easily scanned by smartphones and take the consumer to a Best Buy Web page where they can get extended information about the product. The consumer is getting the additional information they want, and Best Buy avoids the price comparisons. Of course if a consumer chooses to use the Amazon PriceCheck app, then all bets are off. That's when Best Buy has to hope the in-store experience and customer service will save the sale. My point is that the internet makes information available to everyone, and smartphones make it available anywhere. Unless you want your store to be Amazon's local showroom, you need to be price-competitive but differentiate on other aspects of the shopping experience. With the cost of running a physical store, you can't win on price.

    Read the article

  • Davicom Semiconductor, Inc. 21x4x DEC-Tulip not detected by Wireshark but IP operational

    - by deepsix86
    Recently flipped to Ubuntu 11.10 on a Dell 4300 (Intel). Getting IP address and no issues (ping/surf) but Wireshark unable to detect eth0 interface. I see references in forums to blacklist tulip but looks like I am running dmfe. Not sure if the blacklist is required and where to go from here. Maybe Driver update? Got a little lost looking in that area. Some h/w details below (IP/MAC/HOSTNAME removed) Linux xxxxxx 3.0.0-17-generic #30-Ubuntu SMP Thu Mar 8 17:34:21 UTC 2012 i686 i686 i386 GNU/Linux network-admin (HOSTS TAB) does not list eth0, only loopback and bunch of IPv6 interfaces ifconfig eth0 Link encap:Ethernet HWaddr xxxxxxxx inet addr:192.168.x.xx Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: xxxxxxxxxxx 64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:36662 errors:0 dropped:1 overruns:0 frame:0 TX packets:24975 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:42115779 (42.1 MB) TX bytes:3056435 (3.0 MB) Interrupt:18 Base address:0xe800 lspci 02:09.0 Ethernet controller: Davicom Semiconductor, Inc. 21x4x DEC-Tulip compatible 10/100 Ethernet (rev 31) Subsystem: Device 4554:434e Flags: bus master, medium devsel, latency 64, IRQ 18 I/O ports at e800 [size=256] Memory at fe1ffc00 (32-bit, non-prefetchable) [size=256] Expansion ROM at fe200000 [disabled] [size=256K] Capabilities: [50] Power Management version 2 Kernel driver in use: dmfe Kernel modules: dmfe hwinfo --netcard 20: PCI 209.0: 0200 Ethernet controller [Created at pci.318] Unique ID: rBUF.0NgK5ZS9c0D Parent ID: 6NW+.siohrLUzzI4 SysFS ID: /devices/pci0000:00/0000:00:1e.0/0000:02:09.0 SysFS BusID: 0000:02:09.0 Hardware Class: network Model: "Davicom 21x4x DEC-Tulip compatible 10/100 Ethernet" Vendor: pci 0x1282 "Davicom Semiconductor, Inc." Device: pci 0x9102 "21x4x DEC-Tulip compatible 10/100 Ethernet" SubVendor: pci 0x4554 SubDevice: pci 0x434e Revision: 0x31 Driver: "dmfe" Driver Modules: "dmfe" Device File: eth0 I/O Ports: 0xe800-0xe8ff (rw) Memory Range: 0xfe1ffc00-0xfe1ffcff (rw,non-prefetchable) Memory Range: 0xfe200000-0xfe23ffff (ro,non-prefetchable,disabled) IRQ: 18 (61379 events) HW Address: 00:08:a1:01:35:70 Link detected: yes Module Alias: "pci:v00001282d00009102sv00004554sd0000434Ebc02sc00i00" Driver Info #0: Driver Status: dmfe is active Driver Activation Cmd: "modprobe dmfe" Config Status: cfg=new, avail=yes, need=no, active=unknown Attached to: #11 (PCI bridge)

    Read the article

  • Move Files from a Failing PC with an Ubuntu Live CD

    - by Trevor Bekolay
    You’ve loaded the Ubuntu Live CD to salvage files from a failing system, but where do you store the recovered files? We’ll show you how to store them on external drives, drives on the same PC, a Windows home network, and other locations. We’ve shown you how to recover data like a forensics expert, but you can’t store recovered files back on your failed hard drive! There are lots of ways to transfer the files you access from an Ubuntu Live CD to a place that a stable Windows machine can access them. We’ll go through several methods, starting each section from the Ubuntu desktop – if you don’t yet have an Ubuntu Live CD, follow our guide to creating a bootable USB flash drive, and then our instructions for booting into Ubuntu. If your BIOS doesn’t let you boot using a USB flash drive, don’t worry, we’ve got you covered! Use a Healthy Hard Drive If your computer has more than one hard drive, or your hard drive is healthy and you’re in Ubuntu for non-recovery reasons, then accessing your hard drive is easy as pie, even if the hard drive is formatted for Windows. To access a hard drive, it must first be mounted. To mount a healthy hard drive, you just have to select it from the Places menu at the top-left of the screen. You will have to identify your hard drive by its size. Clicking on the appropriate hard drive mounts it, and opens it in a file browser. You can now move files to this hard drive by drag-and-drop or copy-and-paste, both of which are done the same way they’re done in Windows. Once a hard drive, or other external storage device, is mounted, it will show up in the /media directory. To see a list of currently mounted storage devices, navigate to /media by clicking on File System in a File Browser window, and then double-clicking on the media folder. Right now, our media folder contains links to the hard drive, which Ubuntu has assigned a terribly uninformative label, and the PLoP Boot Manager CD that is currently in the CD-ROM drive. Connect a USB Hard Drive or Flash Drive An external USB hard drive gives you the advantage of portability, and is still large enough to store an entire hard disk dump, if need be. Flash drives are also very quick and easy to connect, though they are limited in how much they can store. When you plug a USB hard drive or flash drive in, Ubuntu should automatically detect it and mount it. It may even open it in a File Browser automatically. Since it’s been mounted, you will also see it show up on the desktop, and in the /media folder. Once it’s been mounted, you can access it and store files on it like you would any other folder in Ubuntu. If, for whatever reason, it doesn’t mount automatically, click on Places in the top-left of your screen and select your USB device. If it does not show up in the Places list, then you may need to format your USB drive. To properly remove the USB drive when you’re done moving files, right click on the desktop icon or the folder in /media and select Safely Remove Drive. If you’re not given that option, then Eject or Unmount will effectively do the same thing. Connect to a Windows PC on your Local Network If you have another PC or a laptop connected through the same router (wired or wireless) then you can transfer files over the network relatively quickly. To do this, we will share one or more folders from the machine booted up with the Ubuntu Live CD over the network, letting our Windows PC grab the files contained in that folder. As an example, we’re going to share a folder on the desktop called ToShare. 
    Connect to a Windows PC on your Local Network

    If you have another PC or a laptop connected through the same router (wired or wireless) then you can transfer files over the network relatively quickly. To do this, we will share one or more folders from the machine booted up with the Ubuntu Live CD over the network, letting our Windows PC grab the files contained in that folder. As an example, we’re going to share a folder on the desktop called ToShare. Right-click on the folder you want to share, and click Sharing Options. A Folder Sharing window will pop up. Check the box labeled Share this folder. A window will pop up about the sharing service. Click the Install service button. Some files will be downloaded, and then installed. When they’re done installing, you’ll be appropriately notified. You will be prompted to restart your session. Don’t worry, this won’t actually log you out, so go ahead and press the Restart session button. The Folder Sharing window returns, with Share this folder now checked. Edit the Share name if you’d like, and add checkmarks in the two checkboxes below the text fields. Click Create Share. Nautilus will ask your permission to add some permissions to the folder you want to share. Allow it to Add the permissions automatically. The folder is now shared, as evidenced by the new arrows above the folder’s icon. At this point, you are done with the Ubuntu machine.

    Head to your Windows PC, and open up Windows Explorer. Click on Network in the list on the left, and you should see a machine called UBUNTU in the right pane. Note: this example is shown in Windows 7; the same steps should work for Windows XP and Vista, but we have not tested them. Double-click on UBUNTU, and you will see the folder you shared earlier, as well as any other folders you’ve shared from Ubuntu. Double-click on the folder you want to access, and from there, you can move the files from the machine booted with Ubuntu to your Windows PC.

    Upload to an Online Service

    There are many services online that will allow you to upload files, either temporarily or permanently. As long as you aren’t transferring an entire hard drive, these services should allow you to transfer your important files from the Ubuntu environment to any other machine with Internet access. We recommend compressing the files that you want to move, both to save a little bit of bandwidth, and to save time clicking on files, as uploading a single file will be much less work than a ton of little files. To compress one or more files or folders, select them, and then right-click on one of the members of the group. Click Compress…. Give the compressed file a suitable name, and then select a compression format. We’re using .zip because we can open it anywhere, and the compression rate is acceptable. Click Create and the compressed file will show up in the location selected in the Compress window.
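    If you would rather do the compression from a terminal, a quick sketch looks like this; the folder and archive names are just examples, and if the zip tool happens to be missing from the live session, tar (shown in the comment) works just as well.

        cd ~/Desktop
        zip -r recovered-files.zip ToShare/      # bundle the whole folder into one archive
        # tar -czf recovered-files.tar.gz ToShare/   # tar.gz alternative if zip is not installed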
    Dropbox

    If you have a Dropbox account, then you can easily upload files from the Ubuntu environment to Dropbox. There is no explicit limit on the size of file that can be uploaded to Dropbox, though a free account begins with a total limit of 2 GB of files. Access your account through Firefox, which can be opened by clicking on the Firefox logo to the right of the System menu at the top of the screen. Once into your account, press the Upload button on top of the main file list. Because Flash is not installed in the Live CD environment, you will have to switch to the basic uploader. Click Browse…, find your compressed file, and then click Upload file. Depending on the size of the file, this could take some time. However, once the file has been uploaded, it should show up on any computer connected through Dropbox in a matter of minutes.

    Google Docs

    Google Docs allows the upload of any type of file – making it an ideal place to upload files that we want to access from another computer. While your total allocation of space varies (mine is around 7.5 GB), there is a per-file maximum of 1 GB. Log into Google Docs, and click on the Upload button at the top left of the page. Click Select files to upload and select your compressed file. For safety’s sake, uncheck the checkbox concerning converting files to Google Docs format, and then click Start upload.

    Go Online – Through FTP

    If you have access to an FTP server – perhaps through your web hosting company, or you’ve set up an FTP server on a different machine – you can easily access the FTP server in Ubuntu and transfer files. Just make sure you don’t go over your quota if you have one. You will need to know the address of the FTP server, as well as the login information. Click on Places > Connect to Server… Choose the FTP (with login) Service type, and fill in your information. Adding a bookmark is optional, but recommended. You will be asked for your password. You can choose to remember it until you logout, or indefinitely. You can now browse your FTP server just like any other folder. Drop files into the FTP server and you can retrieve them from any computer with an Internet connection and an FTP client.

    Conclusion

    While at first the Ubuntu Live CD environment may seem claustrophobic, it has a wealth of options for connecting to peripheral devices, local computers, and machines on the Internet – and this article has only scratched the surface. Whatever the storage medium, Ubuntu’s got an interface for it!
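    One footnote on the FTP option above: if the Connect to Server dialog misbehaves, curl can push the archive up from a terminal instead. This is a rough sketch only; the server address, credentials and file name are placeholders, and you may first need to install curl in the live session with sudo apt-get install curl.

        curl -T recovered-files.zip --user myuser:mypassword ftp://ftp.example.com/backups/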

    Read the article

  • I want to hit Apex SQL with a big stick

    - by Michael Stephenson
    <Whinge> Thought I'd just have a little whinge about this product, which caused me a load of grief the other day..... So the background was that my development machine had a completely full hard disk which I needed to sort out.  Upon investigation I found that the msdb database had managed to get very large. This was caused because a long time ago (and I can't even remember why) I tried out Apex SQL.  After a few days I decided to uninstall it and thought nothing more of it.  What I didn't realise was that uninstalling it doesn't actually uninstall it (and it doesn't inform you about this): some assemblies were left on my machine.  Every time SQL Server was running, it started the Apex SQL connection monitor, which then ran in the background and regularly recorded information in the msdb database.  Over time it had recorded enough to fill the disk. The article below explains how to remove it fully, so if you're having this problem then give it a try: http://knowledgebase.apexsql.com/2007/08/how-to-uninstall-apexsqlconnectionmonit_09.htm Once this was sorted out, it's interesting to read the above article, because I just don't think the approach used by the vendor of this software is a very good one.  So for the Apex team I just wanted to pass on a thought: if I want to uninstall your product, you should tell me if stuff is left on the machine, especially if a process will be running which is going to fill my machine with useless data. </Whinge>
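    For anyone hitting the same thing, a rough way to confirm what is actually eating the space in msdb is something like the following, run from a command prompt on the SQL Server box. The first line reports the overall size of msdb, the second breaks it down per table; "." means the local default instance (adjust to your own instance name), and sp_MSforeachtable is undocumented but handy here. A sketch, not a prescription:

        sqlcmd -S . -E -d msdb -Q "EXEC sp_spaceused;"
        sqlcmd -S . -E -d msdb -Q "EXEC sp_MSforeachtable 'EXEC sp_spaceused ''?''';"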

    Read the article

  • SQL SERVER – What is Fill Factor and What is the Best Value for Fill Factor

    - by pinaldave
    Working in the performance tuning area, one has to know about indexes and index maintenance. For any index, the most important property is the fill factor. The fill factor is the value that determines the percentage of space on each leaf-level page to be filled with data. In SQL Server, the smallest unit of storage is the page, which is 8 KB in size. Every page can store one or more rows, based on the size of the row. The default value of the fill factor is 100, which is the same as a value of 0. The default fill factor (100 or 0) allows SQL Server to fill the leaf-level pages of an index with the maximum number of rows that will fit. There will be little or no empty space left on the page when the fill factor is 100. I have written the following two articles about fill factor:

    What is Fill factor? – Index, Fill Factor and Performance – Part 1
    What is the best value for the Fill Factor? – Index, Fill Factor and Performance – Part 2

    I strongly encourage you to read them and provide your feedback. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
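    Purely as an illustration (the database, table and index names here are hypothetical, and "." stands for the local default instance), this is how a fill factor is typically applied when rebuilding an index, leaving roughly 10% of each leaf-level page free for future inserts:

        sqlcmd -S . -E -d MyDatabase -Q "ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD WITH (FILLFACTOR = 90);"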

    Read the article

  • Plex won't enter my home directory or other partitions

    - by RobinJ
    I just installed the Plex media server from the Ubuntu Software Center, and opened the web interface. I wanted to start by adding a collection. When it gave me a file browser, I wanted to go to /home/robin/Videos. /home is as far as I got. It showed robin, with an arrow in front of it, but when I tried to expand the directory tree it was empty. The same happened when trying to access /media/Data. For me it's quite useless like this, as all of my media files are inside those 2 directories. Help would be much appreciated. My first guess seemed to be a correct one; it is, as always, a permissions problem. How do I give Plex access to my home folder without also giving other users access to it? My home folder is encrypted by the way, so that'll probably complicate things a little.

        robin@RobinJ:~$ sudo -u plex bash
        [sudo] password for robin:
        bash: /home/robin/.bashrc: Permission denied
        plex@RobinJ:~$ ls -al
        ls: cannot open directory .: Permission denied
        plex@RobinJ:~$ cd /home
        plex@RobinJ:/home$ cd robin
        bash: cd: robin: Permission denied
        plex@RobinJ:/home$ ls -al robin
        ls: cannot open directory robin: Permission denied
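    Not a definitive answer, but one common way to handle this kind of thing is with POSIX ACLs: give the plex user read access to the media folders and traverse-only access to the home directory, rather than loosening permissions for every local user. Note that an ecryptfs-encrypted home adds a further wrinkle, since the files are only readable while the owner is logged in, so this may still not be enough for a headless server. A rough sketch:

        sudo apt-get install acl                          # setfacl lives in the acl package
        sudo setfacl -m u:plex:x /home/robin              # let plex traverse, but not list, the home dir
        sudo setfacl -R -m u:plex:rX /home/robin/Videos   # read access to the media files themselves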

    Read the article

  • 'rman' cheat-sheet and rlwrap completion

    - by katsumii
    I started using 'rlwrap' some months ago, like one of my colleagues does: bash-like features in sqlplus, rman and other Oracle command line tools (Oracle Luxembourg Core Tech' Blog by Gilles Haro). One can find a specific Oracle extension for databases 9i, 10g and 11g (keyword textfile) over here, which saves you the need to create the .oracle_keywords file yourself. There is an 'rman' keyword file in the link above. I experimented a little and found some missing keywords, which are: MAXCORRUPTION, PRIMARY, NOCFAU, VIRTUAL, COMPRESSION, FOREIGN. With these words added, 'rman' works like this:

        $ rlwrap -f ~/rman $ORACLE_HOME/bin/rman

        Recovery Manager: Release 11.2.0.3.0 - Production on Mon Dec 3 02:56:04 2012
        Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

        RMAN> <-- Hit TAB
        Display all 211 possibilities? (y or n)

    As you can guess, this completion is not context aware. I found these missing words by creating a kind of 'cheat sheet' for rman with the script below. The sheet contains a list of verbs and first operands. I uploaded it here so one can create a coffee cup with a lot of esoteric words printed on it :)

        validWords() {
            # strip RMAN's "expecting one of:" error prefix and turn the keyword list into quoted words
            sed -n 's/^RMAN-01009: syntax error: found "identifier": expecting one of: //p' \
                | sed -r 's/double-quoted-string, single-quoted-string/Some String/;s/, /" "/g;s/""//'
        }

        # feed a bogus command to rman to harvest the list of top-level verbs
        echo "Bogus" | rman | validWords > /tmp/rman.$$

        # for each verb, probe rman again to harvest its first operands
        for i in $(cat /tmp/rman.$$)
        do
            i=$(echo $i | tr -d '"')
            echo "#### $i ####"
            echo "$i Bogus" | rman | validWords
        done

    One can find more keywords in the document here.
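    If you want to add the missing words to your own completion file, a couple of lines along these lines should do it, assuming (as in the rlwrap -f call above) that the keyword file lives at ~/rman:

        printf '%s\n' MAXCORRUPTION PRIMARY NOCFAU VIRTUAL COMPRESSION FOREIGN >> ~/rman
        sort -u -o ~/rman ~/rman    # keep the word list free of duplicates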

    Read the article

< Previous Page | 203 204 205 206 207 208 209 210 211 212 213 214  | Next Page >