Search Results

Search found 1962 results on 79 pages for 'slightly offtopic'.


  • Hard Reset USB in Ubuntu 10.04

    - by Cory
    I have a USB device (a modem) that is really finicky. Sometimes it works fine, but other times it refuses to connect. The only fix I have found once it gets into a bad state is to physically unplug the device and plug it back in. However, I don't always have physical access to the machine it is plugged into, so I'm looking for a way to do this from the command line. This post suggests running: $ sudo modprobe -w -r usb_storage; sudo modprobe usb_storage However, that gives me an "unknown option -w" error. The slightly modified command: $ sudo modprobe -r usb_storage fails with the message FATAL: Module usb_storage is in use. If I try to kill -9 the processes marked [usb-storage] beforehand, they refuse to die (I think because they are tied deeply into the kernel). Does anyone know of a way to do this? NOTE: I cross-posted this on Server Fault as I didn't know which site was more appropriate. I will delete and/or link whichever one is answered first.
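
    One approach that often behaves like a physical re-plug is to unbind and rebind the device at the USB driver level through sysfs, which forces the kernel to re-enumerate it. A minimal sketch (the bus ID 1-4 is a placeholder - find yours with lsusb -t or by listing /sys/bus/usb/drivers/usb/):

        # unbind the device from the USB core, wait, then bind it again
        echo '1-4' | sudo tee /sys/bus/usb/drivers/usb/unbind
        sleep 2
        echo '1-4' | sudo tee /sys/bus/usb/drivers/usb/bind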

    Read the article

  • How to train users converting from PC to Mac/Apple at a small non profit?

    - by Everette Mills
    Background: I am part of a team that provides volunteer tech support to a local non-profit. We are in a position to obtain a grant to update almost all of our computers (many of them 5- to 7-year-old machines running XP), provide laptops for users that need them, etc. We are considering switching our users from PCs (WinXP) to Macs. The technical aspects of switching will not be an issue for the team; we are already planning data conversions, machine setup, server changes, etc. regardless of whether we switch to Macs or much newer PCs. About 1/4 of the staff use or have access to a Mac at home, so these users already understand the basics of the equipment. We have another set of (generally younger) users who are technically savvy and, while slightly inconvenienced and slowed for a few days, should be able to switch over quickly. Finally, several members of the staff are older and have many issues using their computers today. We think that in the long run switching to Macs may provide a better user experience, fewer IT headaches, and more effective use of computers. The question we have is: what resources and training (web pages, books, online training materials, or online courses) do you recommend we provide to users to make the switchover happen smoothly, especially with a focus on providing different levels of training and support to users with different skill levels? If you have done this in your own organization, what steps were successful, and what areas were less successful?

    Read the article

  • Monitoring Between EC2 Regions

    - by ABrown
    I'm working on a small EC2 project that involves a handful of servers in two different regions (US East and EU West). My first task is to implement a Nagios monitoring solution. Monitoring within a region is simple - I just use the private domain names/IPs - but I'm a little unsure of the best way to handle monitoring the second region without setting up a second Nagios install. The environment is fairly static, so I'm not going to script the configuration with the EC2 tools just yet. As I see it, I have two options. Option 1: two Nagios installations (which is overkill for the small number of servers I'm dealing with). Pros: I don't have to alter the security group permissions or pay for cross-region traffic, and there is redundancy in the monitoring solution - the Nagios servers could monitor each other. Cons: two installations to deal with, and I'd need to run another server instance. Option 2: have the single installation monitor both regions. Pros: one installation to deal with. Cons: slightly reduced security - the security group will have to have NRPE (5666) opened for one source IP - and paying for a small amount of bandwidth at the Internet rate for data transfer between the regions. I guess my question is: how have others handled this problem, and what are your recommendations? Thanks!
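
    For reference, option 2 in practice boils down to opening NRPE to just the monitoring host and running the remote checks against the public/elastic IPs; a rough sketch (the group name, IP, and check command are placeholders, and the ec2-authorize syntax assumes the classic EC2 API tools of that era):

        # allow NRPE (TCP 5666) into the EU security group from the US East Nagios host only
        ec2-authorize eu-web-sg -P tcp -p 5666 -s 203.0.113.10/32

        # a cross-region service check is then just a normal NRPE call against the public name
        /usr/lib/nagios/plugins/check_nrpe -H eu-web01.example.com -c check_load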

    Read the article

  • How can I find which logon script is being run?

    - by user2517266
    I'm having an issue with network drives. Suddenly some computers and users aren't getting their mapped network drives from the logon script. I am NOT a domain admin, I don't have permission to log in to the domain controller, and I know very little about Active Directory. The issue seems random: some users one day, different users the next. Some computers run fine and some won't map the drives no matter who logs in. They are a mix of OSes: XP (SP3), Vista, and 7. I was looking at the domain in Windows Explorer and found the batch file(s) that map the drives in several locations; how do I know which one is actually being run? The .bat file is located in \\DOMAIN\NETLOGON\script.bat, \\DOMAIN\SYSVOL\DOMAIN\scripts\script.bat, and \\DOMAIN\SYSVOL\DOMAIN\policies\{GUID} (right? it's a crazy string)\User\Scripts\Logon\script.bat. So, how can I figure out which one is actually being run per computer or user? Because they are all slightly different from each other, and one of them doesn't map properly. Do all the files in NETLOGON get run? There are 15+ files in there. Or is it specified in Group Policy which one(s) get run? EDIT: I am able to access a program called Active Directory Users and Computers, but the logon script field on the properties tab is blank for every user.
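
    For what it's worth, you can see part of this from an ordinary workstation without domain admin rights; a rough sketch (the user name is a placeholder, and on XP the gpresult switches differ slightly):

        rem shows the "Logon script" value set on the user object (that's the NETLOGON-based script)
        net user jsmith /domain | findstr /c:"Logon script"

        rem lists the GPOs applied to the current user on Vista/7; GPO logon scripts are the ones under policies\{GUID}
        gpresult /r /scope user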

    Read the article

  • Git workflow for two tight-knit projects

    - by Pioul
    Two very similar projects: I'm maintaining an online Markdown editor using Git for revision control (and, incidentally, made available on GitHub). From this web app I've created a Chrome app: the code is the same, aside from some Chrome technicalities. I care about open sourcing both projects. Still, since the Chrome app's code is the same as the web app's except for some dull details, I initially chose to (1) not publish the Chrome app on GitHub, and (2) not use Git to manage its code. Instead, I would manually review the web app's commits and then replicate the few changes in the Chrome app. …slightly drifting apart: However, I've now decided to add a feature to the Chrome app only. So, even though both codebases will remain broadly similar, they'll diverge enough to make me reconsider my initial decision not to version control or share the Chrome app's source code. Since I'm now willing to use Git to version control both apps, and I want to share both of them on GitHub, how should I go about it? Should I use two different repositories, or one repo with two long-running branches? What would be the pros and cons of each approach in this context? And what would be the easiest/fastest way to regularly "import" commits from the web app into the Chrome app, since the web app is going to remain the master branch? Is cherry-picking the only solution?
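
    For context, the single-repo, two-long-running-branch setup looks roughly like the sketch below, with the web app staying on master and the Chrome app on its own branch (branch and remote names are only examples):

        # one-time setup: start the Chrome app branch from the current web app code
        git checkout -b chrome-app master
        git push -u origin chrome-app

        # regular sync: bring web app work into the Chrome branch
        git checkout chrome-app
        git merge master              # take everything since the last sync...
        git cherry-pick <commit-sha>  # ...or pick individual commits when a change shouldn't come across wholesale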

    Read the article

  • No "New Folder" button in windows 7

    - by user1125620
    My sibling's laptop is running Windows 7 x64. The torrents folder in Documents doesn't show the New Folder button, and Ctrl+Shift+N doesn't work either. I tried EVERYTHING here: Can't create new folder from anywhere in Windows 7 ...but nothing worked. As with the OP there, running the .reg file brings up an error saying something about not being able to change a registry value while something is using it. I removed one entry at a time from the .reg file until I narrowed down the ones causing the problem, which were in HKEY_CLASSES_ROOT\CLSID. The only differing registry value, however, was in HKEY_CLASSES_ROOT\CLSID\{11dbb47c-a525-400b-9e80-a54615a090c0}\InProcServer32, for which the default value was %SystemRoot%\system32\explorerframe.dll and the value being set was ExplorerFrame.dll. I'm on Windows 7 32-bit and that's the same value I have for the entry, so I doubt that's it. The only thing I think is slightly off is that there is a user group with a strange name that only has read and execute access, and I can't grant it full control. Every time I try, it acts as if it works but doesn't actually change anything. I tried booting into safe mode and changing it, but it did the same thing. It is the folder where uTorrent puts any new downloads, so it's possible uTorrent did something, though that's never happened to me before. Edit: I had renamed the folder to something else to work around the problem, then went to my own computer to try to figure out what was wrong (I personally don't like using the touchpad on laptops). While I was searching, my sibling started watching a movie. I minimized the movie and saw that the same thing had happened to the folder I renamed. The file layout had also changed: it showed the different days and the files modified on those days. So I was able to fix it by clicking Organize > Layout > Menu Bar, then, on the menu bar, View > Arrange By > Folder.
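
    As an aside, the CLSID default value discussed above can be re-checked at any time without importing the whole .reg file, by querying it from a command prompt (reg query's /ve switch reads a key's default value):

        reg query "HKCR\CLSID\{11dbb47c-a525-400b-9e80-a54615a090c0}\InProcServer32" /ve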

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me, it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days now. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as something like  'Doing real-world TDD in .NET , with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make:  It’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and me myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spent my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude. Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider to do professional work without having this add-in installed... The three parts of a software component Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:   First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. In either way, there has to be some sort of requirement, be it explicit or not. 
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. It’s development is entirely driven by the requirements and their executable formulation. This is the delivery, the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how  the product is developed, he is only interested in the fact that it is developed as cost-effective as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article… An example To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here… The requirement As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow and using them automatically produces high componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with: namespace Calculator {     /// <summary>     /// Defines a very simple calculator component for demo purposes.     /// </summary>     public interface ICalculator     {         /// <summary>         /// Gets the result of the last successful operation.         /// </summary>         /// <value>The last result.</value>         /// <remarks>         /// Will be <see langword="null" /> before the first successful operation.         /// </remarks>         double? LastResult { get; }       } // interface ICalculator   } // namespace Calculator So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation than about coding. The documentation must completely describe the behavior of the documented element. I normally use an IoC container or some sort of self-written provider-like model in my architecture. 
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason as that is is very simple to use. The ‘Red’ (pt. 1)   First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test       This is basically the executable formulation of what the interface definition states (part of). Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more.  Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC-container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator - derived type:        Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long, the only thing that’s left to do manually here, is to add the Ioc-specific wiring-declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point, where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the additon.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:   Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location.  This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).     Back to our Calculator again: Two more R# – clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#: Having done that, our production code finally looks like that: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         #region ICalculator           public double? LastResult { get; private set; }           public double Add(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 + operand2).Value;         }           public double Subtract(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 - operand2).Value;         }           #endregion // ICalculator           #region Implementation (Helper)           private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)         {             if (operand1 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand1");             }               if (operand2 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand2");             }         }           #endregion // Implementation (Helper)       } // class Calculator   } // namespace Calculator But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add? Derick immediately answers his own question: So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it. I couldn’t state it more precise. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code. 
So code quality/readability really makes a HUGE difference for me – sometimes it can be even the difference between project success and failure… Conclusions The above described development process emerged over the years, and there were mainly two things that guided its evolution (you might call it eternal principles, personal beliefs, or anything in between): Test-driven development is the normal, natural way of writing software, code-first is exceptional. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…) It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to go into the right direction…). The test code serves ‘only’ to make the production code work. But it’s the number of delivered features which solely counts at the end of the day - no matter how much test code you wrote or how good it is. With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I’d do either way…).  As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code (This might sound heavy, but that is mainly due to the fact that software development standards only begin to evolve. The entire software development profession is very young, historically seen; only at the very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio sounds no longer extraordinary…) Although the above might look like very much unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for ‘saving’ a few 100 bucks -  is just not acceptable and a very bad decision in business terms (though I quite some times have seen and heard that…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows… The here described round-trip will take me about five to ten minutes in my real-world development practice. I guess it’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the so manufactured ‘product’ is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The here described development method might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g. 
if time-to-market is crucial for a software project. So this is a business decision in the end. It’s just that you have to know what you’re doing and what consequences this might have… Some last words First, I’d like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend for reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn’t have done that without this inspiration. I really enjoy that kind of discussions… I agree with him in all respects. But I don’t know (yet?) how to bring his insights into the described production process without slowing things down. The above described method proved to be very “good enough” in my practical experience. But of course, I’m open to suggestions here… My rationale for now is: If the test is initially red during the red-green-refactor cycle, the ‘right reason’ is: it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, ‘red’ certainly must occur for the ‘right reason’: in this phase, ‘red’ MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!

    Read the article

  • Improved appointment rendering in RadScheduler for ASP.NET AJAX, Q1 2010

    Now that Q1 2010 release is out in the wild, we can sit down and discuss some of the changes we decided to make in the new release. One of them is the new appointment rendering of RadScheduler - a potentially breaking change, but a much needed one. If you have problems with your old custom skins, include the old base stylesheet along with your RadScheduler and set EnableEmbeddedBaseStylesheet=false in your RadScheduler. You can find the said base stylesheet attached to this post.   While trying to improve the performance of RadScheduler, I noticed that the number of resources slows down the rendering and overall performance considerably. This had to be expected - the images to support the appointment rounded corners (and the predefined resources) were quite large. However, I didnt take into account that all browsers keep for performance reasons their images uncompressed in memory and with the color depth of the current desktop. A simple calculation later I discovered that the appointment sprite itself is taking 25MB memory when loaded. Add 5 resources to the fray and you have 150MB memory down with a single blow. As it turns out - a sprite image is not a panacea, if it gets too big - dont be afraid to break it in two. The loading time may suffer, but your browser suffers more while rendering a 25MB monster. First I thought of undertaking the aforementioned solution - breaking the appointment sprite in two and thus reducing the two appointment sprites to mere 2MB uncompressed. Then I thought - the rounded corners are small - I can use borders and backgrounds to simulate rounded appointment borders while still keeping the same HTML structure. The gradients can be done with a single 10x50px image plus we have a gain - border colors and backgrounds can be changed on the fly.  I started with five rendering elements at first, then tried with four and finally I settled on only three elements.  Behold the new appointment rendering (quite simple really):       On the left you can see that the first container has only top and bottom borders and a background. In fact, the background isnt even needed since it will be obscured by the elements on top of it. The whole first container is only needed for the four dots that reside in the four corners of the appointment. On top of this container is another one that holds the left and right borders and slightly lighter background to create the illusion of a second lighter border beside the other two. At last on top of all others is placed the text container that also holds the top and bottom borders and the gradient background. On the right you can see the final result - Im quite happy with it and I hope you will be too. After creating the new rendering we took another step further - we decided to use alpha gradients for the resource rendering, thus supporting any color appointments with rounded corners and gradients. You can see some examples below:We plan on adding BorderColor and BackColor properties  to the ResourceStyles definitions for Q1 SP1. However with the new rendering in Q1 2010 we do support BackColor and BorderColor appointment properties - you only need to set AppointmentStyleMode=Default to keep RadScheduler from switching to Simple appointment rendering. Here is one screenshot of RadScheduler with appointments set to different colors: I hope that you will enjoy working with the new appointments in RadScheduler. RadScheduler base stylesheet Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? 
We already have over 80 articles in several categories including Silverlight. Take a look: here.

    Read the article

  • Pigs in Socks?

    - by MightyZot
    My wonderful wife Annie surprised me with a cruise to Cozumel for my fortieth birthday. I love to travel. Every trip is ripe with adventure, crazy things to see and experience. For example, on the way to Mobile Alabama to catch our boat, some dude hauling a mobile home lost a window and we drove through a cloud of busting glass going 80 miles per hour! The night before the cruise, we stayed in the Malaga Inn and I crawled UNDER the hotel to look at an old civil war bunker. WOAH! Then, on the way to and from Cozumel, the boat plowed through two beautiful and slightly violent storms. But, the adventures you have while travelling often pale in comparison to the cult of personalities you meet along the way.  :) We met many cool people during our travels and we made some new friends. Todd and Andrea are in the publishing business (www.myneworleans.com) and teaching, respectively. Erika is a teacher too and Matt has a pig on his foot. This story is about the pig. Without that pig on Matt’s foot, we probably would have hit a buoy and drowned. Alright, so…this pig on Matt’s foot…this is no henna tatt, this is a man’s tattoo. Apparently, getting tattoos on your feet is very painful because there is very little muscle and fat and lots of nifty nerves to tell you that you might be doing something stupid. Pig and rooster tattoos carry special meaning for sailors of old. According to some sources, having a tattoo of a pig or rooster on one foot or the other will keep you from drowning. There are many great musings as to why a pig and a rooster might save your life. The most plausible in my opinion is that pigs and roosters were common livestock tagging along with the crew. Since they were shipped in wooden crates, pigs and roosters were often counted amongst the survivors when ships succumbed to Davy Jones’ Locker. I didn’t spend a whole lot of time researching the pig and the rooster, so consider these musings as you would a grain of salt. And, I was not able to find a lot of what you might consider credible history regarding the tradition. What I did find was a comfort, or solace, in the maritime tradition. Seems like raw traditions like the pig and the rooster are in danger of getting lost in a sea of non-permanence. I mean, what traditions are us old programmers and techies leaving behind for future generations? Makes me wonder what Ward Christensen has tattooed on his left foot.  I guess my choice would have to be a Commodore 64.   (I met Ward, by the way, in an elevator after he received his Dvorak awards in 1992. He was a very non-assuming individual sporting business casual and was very much a “sailor” of an old-school programmer. I can’t remember his exact words, but I think they were essentially that he felt it odd that he was getting an award for just doing his work. I’m sure that Ward doesn’t know this…he couldn’t have set a more positive example for a young 22 year old programmer. Thanks Ward!)

    Read the article

  • Surface Review from Canadian Guy Who Didn&rsquo;t Go To Build

    - by D'Arcy Lussier
    I didn’t go to Build last week, opted to stay home and go trick-or-treating with my daughters instead. I had many friends that did go however, and I was able to catch up with James Chambers last night to hear about the conference and play with his Surface RT and Nokia 920 WP8 devices. I’ve been using Windows 8 for a while now, so I’m not going to comment on OS features – lots of posts out there on that already. Let me instead comment on the hardware itself. Size and Weight The size of the tablet was awesome. The Windows 8 tablet I’m using to reference this against is the one from Build 2011 (Samsung model) we received as well as my iPad. The Surface RT was taller and slightly heavier than the iPad, but smaller and lighter than the Samsung Win 8 tablet. I still don’t prefer the default wide-screen format, but the Surface RT is much more usable even when holding it by the long edge than the Samsung. Build Quality No issues with the build quality, it seemed very solid. But…y’know, people have been going on about how the Surface RT materials are so much better than the plastic feeling models Samsung and others put out. I didn’t really notice *that* much difference in that regard with the Surface RT. Interesting feature I didn’t expect – the Windows button on the device is touch-sensitive, not a mechanical one. I didn’t try video or anything, so I can’t comment on the media experience. The kickstand is a great feature, and the way the Surface RT connects to the combo case/keyboard touchcover is very slick while being incredibly simple. What About That Touch Cover Keyboard? So first, kudos to Microsoft on the touch cover! This thing was insanely responsive (including the trackpad) and really delivered on the thinness I was expecting. With that said, and remember this is with very limited use, I would probably go with the Type Cover instead of the Touch Cover. The difference is buttons. The Touch Cover doesn’t actually have “buttons” on the keyboard – hence why its a “touch” cover. You tap on a key to type it. James tells me after a while you get used to it and you can type very fast. For me, I just prefer the tactile feeling of a button being pressed/depressed. But still – typing on the touch case worked very well. Would I Buy One? So after playing with it, did I cry out in envy and rage that I wasn’t able to get one of these machines? Did I curse my decision to collect Halloween candy with my kids instead of being at Build getting hardware? Well – no. Even with the keyboard, the Surface RT is not a business laptop replacement device. While Office does come included, you can’t install any other applications outside of Windows Store Apps. This might be limiting depending on what other applications you need to have available on your computer. Surface RT is a great personal computing device, as long as you’re not already invested in a competing ecosystem. I’ve heard people make statements that they’re going to replace all the iPads in their homes with Surface tablets. In my home, that’s not feasible – my wife and daughters have amassed quite a collection of games via iTunes. We also buy all our music via iTunes as well, so even with the XBox streaming music service now available we’re still tied quite tightly to iTunes. So who is the Surface RT for? In my mind, if you’re looking for a solid, compact device that provides basic business functionality (read: email) or if you have someone that needs a very simple to use computer for email, web browsing, etc., then Surface RT is a great option. 
For me, I’m waiting on the Samsung Ativ Smart PC Pro and am curious to see what changes the Surface Pro will come with.

    Read the article

  • WIF, ADFS 2 and WCF&ndash;Part 4: Service Client (using Service Metadata)

    - by Your DisplayName here!
    See parts 1, 2 and 3 first. In this part we will finally build a client for our federated service. There are basically two ways to accomplish this. You can use the built-in WCF tooling to generate the client and configuration from the service metadata (aka 'Add Service Reference'); this requires no WIF on the client side. Another approach is to use WIF's WSTrustChannelFactory to talk to the ADFS 2 WS-Trust endpoints manually. This option gives you more flexibility, but means slightly more code to write. You also need WIF on the client, which implies that you need to run on a WIF-supported operating system – this rules out e.g. Windows XP clients. We'll start with the metadata way. You simply create a new client project (e.g. a console app), call 'Add Service Reference', and point the dialog to your service endpoint. VS will then contact your service and read its metadata. Inside there is also a link to the metadata endpoint of ADFS 2, which is contacted next to find out which WS-Trust endpoints are available. The end result is a client-side proxy and a configuration file. Let's first write some code to call the service and then have a closer look at the config file. var proxy = new ServiceClient(); proxy.GetClaims().ForEach(c =>     Console.WriteLine("{0}\n {1}\n  {2} ({3})\n",         c.ClaimType,         c.Value,         c.Issuer,         c.OriginalIssuer)); That's all - the magic is happening in the configuration file. When you inspect app.config, you can see the following general configuration hierarchy: a <client /> element with service endpoint information, and the federation binding and configuration containing ADFS 2 endpoint 1 (with binding and configuration) through ADFS 2 endpoint n (with binding and configuration) - where ADFS 2 endpoints 1…n are the endpoints I talked about in part 1. You will see a number of <issuer /> elements in the binding configuration, where simply the first endpoint from the ADFS 2 metadata becomes the default endpoint and all other endpoints and their configuration are commented out. You now need to find the endpoint you want to use (based on trust version, credential type, and security mode) and replace the default endpoint with it. That's it. When you call the WCF proxy, it inspects the configuration, first contacts the selected ADFS 2 endpoint to request a token, and then uses this token to authenticate against the service. In the next post I will show you the more manual approach using the WIF APIs.
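
    To make the "replace the default endpoint" step a little more concrete, the federation binding in the generated app.config ends up looking roughly like the fragment below (the ADFS address, binding name, and bindingConfiguration are illustrative - the exact shape depends on what 'Add Service Reference' generated for you):

        <ws2007FederationHttpBinding>
          <binding name="ServiceBinding">
            <security mode="TransportWithMessageCredential">
              <message>
                <!-- swap the default issuer for the ADFS 2 endpoint you actually want,
                     e.g. username/password against the trust/13 username endpoint -->
                <issuer address="https://adfs.example.com/adfs/services/trust/13/usernamemixed"
                        binding="ws2007HttpBinding"
                        bindingConfiguration="UsernameMixedBinding" />
              </message>
            </security>
          </binding>
        </ws2007FederationHttpBinding>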

    Read the article

  • SharePoint OCR image files indexing

    Introduction: This article describes how to set up indexing of image files (including TIFF, PDF, JPEG, BMP...) using OCR technology. The indexing described below utilizes Microsoft IFilter technology and as such is not specific to SharePoint; it can be used with any product that uses Microsoft indexing: Microsoft Search, Desktop Search, SQL Server search, and, through plug-ins, Google Desktop Search. I, however, use it with Microsoft Windows SharePoint Services 2003. For those other products, the registration may need to be slightly different. Background: One of the projects I was working on required storage of old documents scanned into PDF files. A separate team of people was then responsible for providing tags for a search engine so those image documents could be found. The whole process was clumsy, labor intensive, and error prone. That was what started me on my exploration path. OCR: The first search I fired was for open source OCR products. Pretty quickly, I narrowed it down to Tesseract (http://code.google.com/p/tesseract-ocr/). Tesseract is an orphaned brainchild of HP, which worked on it from 1985 to 1995. Then it was moved to open source, and now, if I understand it correctly, Google is working on it. With credentials like that, it's no wonder that Tesseract scores one of the highest marks on OCR recognition and accuracy. After downloading and struggling just a bit, I got Tesseract to work. The struggling part was that the home page claims its base input format is a TIFF file. Maybe my TIFFs were bad, but I was able to get it to work only with BMP files. Image file conversion: So now that I have an OCR engine that can convert BMP files into text, how do I get text out of image PDF files? One more search, and I settled on ImageMagick (http://www.imagemagick.org/). This is another wonderful open source utility that can convert almost any file into an image. It worked out of the box, converting any TIFF file into a bitmap, but to convert PDF files it requires Ghostscript (http://mirror.cs.wisc.edu/pub/mirrors/ghost/GPL/gs864/gs864w32.exe). Dealing with text PDFs: With that utility installed, I was cooking - I can convert any file (in particular PDF and TIFF) into a bitmap, and then I can extract the text out of the bitmap. The only consideration was to somehow treat PDF files that already contain text differently - after all, OCR is very computation intensive and somewhat error prone even with perfect image quality and resolution. So another quick search, and I have PDFTOTEXT (ftp://ftp.foolabs.com/pub/xpdf/xpdf-3.02pl4-win32.zip) - thank God for open source! With that, I can pull text out of a PDF in an eye blink. I would get nothing for pure image PDFs, of course, but I already have a solution for that! Batch process: It took another 15 minutes to set up a batch script to automate the process: check the file extension; if the file is a PDF, try to extract text from it; if there is more than a certain amount of text in the file - done! If there is no text, convert the first page into a bitmap and run OCR on the bitmap. For any other file type, convert the file into a bitmap and run OCR on the bitmap. Once you unzip the attached project, check out the bin\OCR.BAT file. It will create a temporary file in the directory where your source file is, with the same name plus a '.txt' extension.
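
    Translated into shell terms, the batch logic described above boils down to something like this sketch (the original is a Windows OCR.BAT; the paths, the 300 dpi render density, and the 50-word "enough text" threshold are arbitrary illustration choices):

        #!/bin/sh
        # usage: ocr.sh <input-file>  -- writes <input-file>.txt next to the source
        in="$1"; out="$in.txt"
        case "$in" in
          *.pdf|*.PDF)
            pdftotext "$in" "$out"                        # try the embedded text layer first
            if [ "$(wc -w < "$out")" -lt 50 ]; then       # too little text: treat as an image-only PDF
              convert -density 300 "$in[0]" /tmp/page.bmp # ImageMagick (via Ghostscript) renders page 1
              tesseract /tmp/page.bmp "${out%.txt}"       # Tesseract writes ${out%.txt}.txt, i.e. $out
            fi
            ;;
          *)
            convert "$in" /tmp/page.bmp                   # any other image type -> BMP
            tesseract /tmp/page.bmp "${out%.txt}"
            ;;
        esac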

    Read the article

  • New security options in UCM Patch Set 3

    - by kyle.hatlestad
    While the Patch Set 3 (PS3) release was mostly focused on bug fixes and such, some new features sneaked in there. One of those new features is to the security options. In 10gR3 and prior versions, UCM had a component called Collaboration Manager which allowed project folders to be created and groups of users assigned as members to collaborate on documents. With this component came access control lists (ACLs) for content and folders. Users could assign specific security rights on each and every document and folder within a project. And it was even possible to enable these ACLs without having the Collaboration Manager component enabled (see technote# 603148.1). When 11g came out, Collaboration Manager was no longer available, but the configuration settings to turn on ACLs were still there. Well, in PS3 they're implemented slightly differently. And there is a new component available which adds an additional dimension for defining security on an object: Roles. So now, instead of selecting individual users or groups of users (defined as an Alias in User Admin), you can select a particular role, and if a user has that role, they are granted that level of access. This can allow for a much more flexible and manageable security model than trying to manage with just user and group access as people come and go in the organization.

    The way that it is enabled is still through configuration entries. First log in as an administrator and go to Administration -> Admin Server. On the Component Manager page, click the 'advanced component manager' link in the description paragraph at the top. In the list of Disabled Components, enable the RoleEntityACL component. Then click the General Configuration link on the left. In the Additional Configuration Variables text area, enter the new configuration values:

        UseEntitySecurity=true
        SpecialAuthGroups=<comma separated list of Security Groups to honor ACLs>

    The SpecialAuthGroups should be a list of Security Groups that honor the ACL fields. If an ACL is applied to a content item with a Security Group outside this list, it will be ignored. Save the settings and restart the instance. Upon restart, three new metadata fields will be created: xClbraUserList, xClbraAliasList, xClbraRoleList. If you are using OracleTextSearch as the search indexer, be sure to run a Fast Rebuild on the collection.

    On the Check In, Search, and Update pages, values are added by simply typing in the value and getting a type-ahead list of possible values. Select the value, click Add and then set the level of access (Read, Write, Delete, or Admin). If all of the fields are blank, then it simply falls back to just Security Group and Account access. For Users and Groups, these values are automatically picked up from the corresponding database tables. In the case of Roles, this is an explicitly defined list of choices that are made available. These values must match the roles that are defined in WebLogic Server or your LDAP/AD repository. To add these values, go to Administration -> Admin Applets -> Configuration Manager. On the Views tab, edit the values for the ExternalRolesView. By default, 'guest' and 'authenticated' are added. Once added through the view, they will be available to select from for the Roles Access List.

    As for how the values are stored in the metadata fields, each entry starts with its identifier: ampersand (&) symbol for users, "at" (@) symbol for groups, and colon (:) for roles. Following that is the entity name, and at the end is the level of access in parentheses, e.g. (RWDA). Each entry is separated by a comma. So if you were populating values through Batch Loader or an external source, the values would be defined this way. Detailed information on Access Control Lists can be found in the Oracle Fusion Middleware System Administrator's Guide for Oracle Content Server.
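    As a purely hypothetical illustration of that format (the user, alias and role names below are made up), values loaded through Batch Loader might look like this:

        xClbraUserList=&jsmith(RWDA),&mjones(R)
        xClbraAliasList=@EngTeam(RW)
        xClbraRoleList=:guest(R),:authenticated(RWD)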

    Read the article

  • I spy a Live Framework portal

    - by jamiet
    Those that have followed my blogs for a while may know that I have a slightly banal interest in Windows Live and, more specifically, the Live Services developer platform; if that doesn't sound interesting to you then stop reading now. My interest mainly stems from the Live Mesh technology that was announced a couple of years ago and the data synchronisation platform API that underpins it; that platform is called the Live Framework, or LiveFX for short. At the Professional Developer's Conference (PDC) 2008 Microsoft made LiveFX available to the public as a Tech Preview and I spent some time learning to use it and built a few test apps on it too. In August 2009 an announcement came that that tech preview was getting shut down: "At the Professional Developer Conference 2008, we gave the developer community access to the technical preview of the Live Framework. The Live Framework is core to our vision of providing you with a consistent programming interface. Now we are working to integrate existing services, controls and the Live Framework into the next release of Windows Live. Your feedback continues to help us build the best possible offerings for Windows Live users, for you and for your customers." Since then news on LiveFX has disappeared, save for a throwaway session at PDC09, and I was hoping that news would appear at this week's MIX conference, but nothing was forthcoming. Instead, today I stumbled upon an unannounced portal for future LiveFX applications on Microsoft's Azure portal at http://live.azure.com. Check it out: I consider this to be very good news. This Azure portal was built after the LiveFX tech preview was decommissioned, so seeing Live Services existing so prominently alongside Microsoft's other cloud efforts like Windows Azure and SQL Azure vindicates my early investment in the platform and gives me hope that we're going to see something get released very very soon. I believe that the potential uses for this platform are extremely compelling and I'm looking forward to trying some out in the near future. I am also expecting LiveFX to have a heavy dependency on the OData protocol that I talked about yesterday in my post OData.org updated - gives clues about future sql azure enhancements, so you can tell where my interest in that stems from. In case you're wondering, the projects that you see listed above (Basic List Sample, JT-proj etc…) are projects that I built on the old Tech Preview platform, so clearly that stuff has not gone for good, which is also good news; not just because it means I'll have access to the code I wrote before, but I also assume it means that LiveFX won't have changed much since its tech preview incarnation. I know there are other LiveFX buffs out there and hopefully this news reaches some of them. If you are one of them then please put a comment below and let me know your thoughts! @Jamiet

    Read the article

  • Google, typography, and cognitive fluency for persuasion

    - by Roger Hart
    Cognitive fluency is - roughly - how easy it is to think about something. Mere Exposure (or familiarity) effects are basically about reacting more favourably to things you see a lot. Which is part of why marketers in generic spaces like insipid mass-market lager will spend quite so much money on getting their logo daubed about the place; or that guy at the bus stop starts to look like a dating prospect after a month or two. Recent thinking suggests that exposure effects likely spin off cognitive fluency. We react favourably to things that are easier to think about.

    I had to give tech support to an older relative recently, and suggested they Google the problem. They were confused. They could not, apparently, Google the problem, because part of it was that their Google toolbar had mysteriously vanished. Once I'd finished trying not to laugh, I started thinking about typography. This is going somewhere, I promise.

    Google is a ubiquitous brand. Heck, it's a verb, and their recent, jaw-droppingly well constructed Paris advert is more or less about that ubiquity. It trades on Google's integration into any information-seeking behaviour. But, as my tech support encounter suggests, people settle into comfortable patterns of thinking about things. They build schemas, and altering them can take work. Maybe the ubiquity even works to cement that.

    Alongside their online effort, Google is running billboard campaigns to advertise Chrome, a free product in a crowded space. They are running these ads in some kind of kooky Calibri / Comic Sans hybrid. Now, at first it seems odd that one of the world's more ubiquitous brands needs to run a big print campaign in public places - surely they have all the fluency they need? Well, not so much. Chrome, after all, is not the same as their core product, so there's some basic awareness work to do, and maybe a whole new batch of exposure effect to try and grab. But why the typeface? It's heavily foregrounded, and the ads are extremely textual. Plus, don't we all know that jovial, off-beat fonts look unprofessional, or something? There's a whole bunch of people who want (often rightly) to ban Comic Sans. I wonder, though. Are Google trying to subtly disrupt cognitive fluency?

    There's an interesting paper (pdf) about - among other things - the effects of typography on the way people answer survey questions. Participants given the slightly harder-to-read question gave more abstract answers. The paper references other work suggesting that, generally speaking, less-fluent question framing elicits more considered answers. The Chrome ad typeface is less fluent for print. Reactions may therefore be more considered, abstract, and disruptive. Is that, in fact, what Google need? They have brand ubiquity, but here they want to change accustomed behaviour, to get people to think about changing their browser. Is this actually a very elegant piece of persuasive information design? If you think about their "what is a browser?" vox pop research video, there's certainly a perceptual barrier they're going to have to tackle somehow.

    Read the article

  • Plymouth and GRUB do not show at all

    - by WarriorIng64
    I am using Ubuntu 11.04 64-bit as my only OS on my desktop computer, which used to only run Ubuntu 10.04 LTS until I had the time to upgrade it with a fresh install. It uses integrated NVIDIA graphics (listed as a GeForce 6150SE nForce 430 by the NVIDIA X Server Settings utility) with the current proprietary driver as provided by the Additional Drivers utility, and has a VGA connection to a 1680x1050 Acer monitor. I used to get the (ugly-looking version of) Plymouth graphical boot screen while under 10.04. It didn't look that great, but I was fine with it. Now, it doesn't show on 11.04 at all during boot (I just get an error message in a moving gray box from the monitor saying "Input Not Supported"), and only rarely it will show on shutdown, all garbled up. I could not get GRUB to show during boot while holding down Shift, either (same error message), but pressing Enter while it should be up starts the system normally. A picture of the error message I was getting: Once fully booted, the system still shows the login screen and desktop just fine. Any information on how to troubleshoot this would be appreciated. If there's any hardware-specific stuff I forgot to include here, let me know the relevant commands to run in a comment below.

    Things that I've tried:

    1. Running plymouth in a framebuffer: no effect
    2. Booting with nomodeset as my grub boot option: no effect
    3. Booting with nomodeset and plymouth in a framebuffer: no effect other than Plymouth showing during shutdown only
    4. Following the Softpedia instructions for fixing Plymouth's resolution: Problem mostly solved, except logo does not show in Plymouth during boot, and both grub and Plymouth are slightly off-center
    5. #4 above, but with nomodeset removed as a grub boot option: same effect as #4
    6. #5 above, but with vt.handoff=7 added as a grub boot option: same effect as #4

    I have added the current contents of /etc/default/grub as requested in the comments:

    # If you change this file, run 'update-grub' afterwards to update
    # /boot/grub/grub.cfg.
    # For full documentation of the options in this file, see:
    #   info -f grub -n 'Simple configuration'
    GRUB_DEFAULT=0
    GRUB_HIDDEN_TIMEOUT=0
    GRUB_HIDDEN_TIMEOUT_QUIET=true
    GRUB_TIMEOUT=10
    GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash video=uvesafb:mode_option=1280x1024-24,mtrr=3,scroll=ywrap"
    GRUB_CMDLINE_LINUX=""
    # Uncomment to enable BadRAM filtering, modify to suit your needs
    # This works with Linux (no patch required) and with any kernel that obtains
    # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
    #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"
    # Uncomment to disable graphical terminal (grub-pc only)
    #GRUB_TERMINAL=console
    # The resolution used on graphical terminal
    # note that you can use only modes which your graphic card supports via VBE
    # you can see them in real GRUB with the command `vbeinfo'
    GRUB_GFXMODE=1280x1024
    # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
    #GRUB_DISABLE_LINUX_UUID=true
    # Uncomment to disable generation of recovery mode menu entries
    #GRUB_DISABLE_RECOVERY="true"
    # Uncomment to get a beep at grub start
    #GRUB_INIT_TUNE="480 440 1"

    CURRENT STATUS: I forgot to uncomment one line as per "things that I've tried" #4, so I took care of that. I can now see GRUB during startup when I hold Shift and a normal-looking Plymouth during shutdown...but Plymouth during boot is now just a solid purple screen.
In each case, it's displayed a little off-center to the left, with a thin black bar running down the right side of the monitor. The error pictured above no longer shows. I'd say this problem is about 2/3 solved now. UPDATE: After Natty started freezing up on me, I decided to dual-boot with Oneiric, which unfortunately shows the same problems. Rather than trying all these workarounds though, I decided to do what I should have done from the start and file a pair of bug reports. LAST UPDATE: Bug 850908 has been confirmed as a legitimate nouveaufb bug. I have overwritten my 11.04 partition with 12.04 LTS, and I can confirm at this time that the issue is present there, as well. I will now flag this question to be closed, yet I hope it was helpful for anyone who experienced similar issues; if you are still having the same problem as me, please go there and mark yourself as affected. Thanks!

    Read the article

  • Xorg does not see my monitor EDID

    - by sean farley
    Below is the output from my Xorg.0.log:

    X.Org X Server 1.11.3
    Release Date: 2011-12-16
    [ 22.311] X Protocol Version 11, Revision 0
    [ 22.311] Build Operating System: Linux 2.6.42-23-generic x86_64 Ubuntu
    [ 22.311] Current Operating System: Linux sean-P55-USB3 3.2.0-34-generic #53-Ubuntu SMP Thu Nov 15 10:48:16 UTC 2012 x86_64
    [ 22.311] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-34-generic root=UUID=0a34603e-aee9-44d1-8982-a5a5a38c3e4d ro quiet splash
    [ 22.311] Build Date: 29 August 2012 12:12:33AM
    [ 22.311] xorg-server 2:1.11.4-0ubuntu10.8 (For technical support please see http://www.ubuntu.com/support)
    [ 22.311] Current version of pixman: 0.24.4
    [ 22.311] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.
    [ 22.311] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
    [ 22.311] (==) Log file: "/var/log/Xorg.0.log", Time: Sat Nov 17 13:20:45 2012
    [ 22.311] (==) Using config file: "/etc/X11/xorg.conf"
    [ 22.311] (==) Using system config directory "/usr/share/X11/xorg.conf.d"
    [ 22.311] (==) No Layout section. Using the first Screen section.
    [ 22.311] (==) No screen section available. Using defaults.
    [ 22.311] (**) |-->Screen "Default Screen Section" (0)
    [ 22.311] (**) |   |-->Monitor "<default monitor>"
    [ 22.311] (==) No device specified for screen "Default Screen Section". Using the first device section listed.
    [ 22.311] (**) |   |-->Device "Default Device"
    [ 22.311] (==) No monitor specified for screen "Default Screen Section". Using a default monitor configuration.
    [ 22.311] (==) Automatically adding devices

    I have searched all over, and followed lots of dead ends and unanswered questions on this issue. I need to get this monitor recognised so I can use the native resolution of 1600x1200. The Nvidia driver in Windows has no problem with this. The monitor is an old Iiyama HM204DT A. Is there a way of configuring Xorg manually to get these working? I have tried xrandr but this will not work. Output:

    sean@sean-P55-USB3:~$ xrandr
    Screen 0: minimum 8 x 8, current 1152 x 864, maximum 16384 x 16384
    DVI-I-0 connected 1152x864+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
       1024x768 60.0 +
       1360x768 60.0 59.8
       1152x864 60.0*
       800x600 72.2 60.3 56.2
       680x384 119.9 119.6
       640x480 59.9
       512x384 120.0
       400x300 144.4
       320x240 120.1
    DVI-I-1 disconnected (normal left inverted right x axis y axis)
    DVI-I-2 disconnected (normal left inverted right x axis y axis)
    HDMI-0 disconnected (normal left inverted right x axis y axis)
    DVI-I-3 disconnected (normal left inverted right x axis y axis)

    Tried Nvidia Xorg.config:

    sean@sean-P55-USB3:~$ sudo nvidia-xconfig
    [sudo] password for sean:
    Using X configuration file: "/etc/X11/xorg.conf".
    VALIDATION ERROR: Data incomplete in file /etc/X11/xorg.conf.
    Device section "Default Device" must have a Driver line.
    Backed up file '/etc/X11/xorg.conf' as '/etc/X11/xorg.conf.nvidia-xconfig-original'
    Backed up file '/etc/X11/xorg.conf' as '/etc/X11/xorg.conf.backup'
    New X configuration file written to '/etc/X11/xorg.conf'

    How do I insert a Driver line? This is a bit of a pain, as I want to use my Vectorworks CAD program in a WinXP Vbox at 1600x1200, but all virtual drives are restricted to the host screen resolution. Do I need to manually create EDID info in Xorg? I am slightly confused about how Xorg and Nvidia relate. Please help.
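    To illustrate the "Driver line" part of the question, a Device section normally names the driver explicitly. The snippet below is only a sketch (the identifiers have to match whatever the Screen section actually references, and the Monitor section is an assumption for a display whose EDID cannot be read):

        Section "Device"
            Identifier "Default Device"
            Driver     "nvidia"
        EndSection

        Section "Monitor"
            Identifier "Default Monitor"
            # If the EDID really cannot be read, a 1600x1200 modeline generated
            # with a tool such as `cvt 1600 1200 60` could be added here,
            # together with Option "PreferredMode".
        EndSection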

    Read the article

  • ASP.NET MVC CRUD Validation

    - by Ricardo Peres
    One thing I didn't cover in my previous post on ASP.NET MVC CRUD with AJAX was how to return model validation information to the client. We want to send any model validation errors to the client in the JSON object that contains the ProductId, RowVersion and Success properties; specifically, if there are any errors, we will add an extra Errors collection property. Here's how:

    [HttpPost]
    [AjaxOnly]
    [Authorize]
    public JsonResult Edit(Product product)
    {
        if (this.ModelState.IsValid == true)
        {
            using (ProductContext ctx = new ProductContext())
            {
                Boolean success = false;

                ctx.Entry(product).State = (product.ProductId == 0) ? EntityState.Added : EntityState.Modified;

                try
                {
                    success = (ctx.SaveChanges() == 1);
                }
                catch (DbUpdateConcurrencyException)
                {
                    ctx.Entry(product).Reload();
                }

                return (this.Json(new { Success = success, ProductId = product.ProductId, RowVersion = Convert.ToBase64String(product.RowVersion) }));
            }
        }
        else
        {
            Dictionary<String, String> errors = new Dictionary<String, String>();

            foreach (KeyValuePair<String, ModelState> keyValue in this.ModelState)
            {
                String key = keyValue.Key;
                ModelState modelState = keyValue.Value;

                foreach (ModelError error in modelState.Errors)
                {
                    errors[key] = error.ErrorMessage;
                }
            }

            return (this.Json(new { Success = false, ProductId = 0, RowVersion = String.Empty, Errors = errors }));
        }
    }

    As for the view, we need to slightly change the onSuccess JavaScript handler on the Single view:

    function onSuccess(ctx)
    {
        if (typeof (ctx.Success) != 'undefined')
        {
            $('input#ProductId').val(ctx.ProductId);
            $('input#RowVersion').val(ctx.RowVersion);

            if (ctx.Success == false)
            {
                var errors = '';

                if (typeof (ctx.Errors) != 'undefined')
                {
                    for (var key in ctx.Errors)
                    {
                        errors += key + ': ' + ctx.Errors[key] + '\n';
                    }

                    window.alert('An error occurred while updating the entity: the model contained the following errors.\n\n' + errors);
                }
                else
                {
                    window.alert('An error occurred while updating the entity: it may have been modified by third parties. Please try again.');
                }
            }
            else
            {
                window.alert('Saved successfully');
            }
        }
        else
        {
            if (window.confirm('Not logged in. Login now?') == true)
            {
                document.location.href = '<%: FormsAuthentication.LoginUrl %>?ReturnURL=' + document.location.pathname;
            }
        }
    }

    The logic is as follows:

    - If the Edit action method is called for a new entity (the ProductId is 0) and it is valid, the entity is saved, and the JSON result contains a Success flag set to true, a ProductId property with the database-generated primary key and a RowVersion with the server-generated ROWVERSION;
    - If the model is not valid, the JSON result will contain the Success flag set to false and the Errors collection populated with all the model validation errors;
    - If the entity already exists in the database (ProductId not 0) and the model is valid, but the stored ROWVERSION is different from the one on the view, the result will set the Success property to false and will return the current (as loaded from the database) value of the ROWVERSION on the RowVersion property.

    In a future post I will talk about the possibilities that exist for performing model validation. Stay tuned!

    Read the article

  • Monitoring the Application alongside SQL Server

    - by Tony Davis
    Sometimes, on Simple-Talk, it takes a while to spot strange and unexpected patterns of user activity, or small bugs. For example, one morning we spotted that an article’s comment count had leapt to 1485, but that only four were displayed. With some rooting around in Google Analytics, and the endlessly annoying Community Server admin-interface, we were able to work out that a few days previously the article had been subject to a spam attack and that the comment count was for some reason including both accepted and unaccepted comments (which in turn uncovered a bug in the SQL). This sort of incident made us a lot keener on monitoring Simple-talk website usage more effectively. However, the metrics we wanted are troublesome, because they are far too specific for Google Analytics to measure, and the SQL Server backend doesn’t keep sufficient information to enable us to plot trends. The latter could provide, for example, the total number of comments made on, or votes cast for, articles, over all time, but not the number that occur by hour over a set time. We lacked a baseline, in other words. We couldn’t alter the database, as it is a bought-in package. We had neither the resources nor inclination to build-in dedicated application monitoring. Possibly, we could investigate a third-party tool to do the job; but then it occurred to us that we were already using a monitoring tool (SQL Monitor) to keep an eye on the database. It stored data, made graphs and sent alerts. Could we get it to monitor some aspects of the application as well? Of course, SQL Monitor’s single purpose is to check and monitor SQL Server, over time, rather than to monitor applications that use SQL Server. However, how different is the business of gathering and plotting SQL Server Wait Stats, from gathering and plotting various aspects of user activity on the site? Not a lot, it turns out. The latest version allows us to write our own custom monitoring scripts, meaning that we could now monitor any metric in the application that returns an integer. It took little time to write a simple SQL Query that collects basic metrics of the total number of subscribers, votes cast, comments made, or views of articles, over time. The SQL Monitor database polls Simple-Talk every second or so in order to get the latest totals, and can then store and plot this information, or even correlate SQL Server usage to application usage. You can see the live data by visiting monitor.red-gate.com. Click the "Analysis" tab, and select one of the "Simple-talk:" entries in the "Show" box and an appropriate data range (e.g. last 30 days). It’s nascent, and we’re still working on it, but it’s already given us more confidence that we’ll spot quickly trends, bugs, or bursts of ‘abnormal’ activity. If there is a sudden rise in comments, we get an alert, and if it’s due to a spam attack, we can moderate or ban the perpetrator very quickly. We’ve often argued that a tool should perform a single job well rather than turn into a Swiss-army knife, but ironically we’ve rather appreciated being able to make best use of what’s there anyway for a slightly different purpose. Is this a good or common practice? What do you think? Cheers, Tony.
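    P.S. As an illustration only (the real Simple-Talk schema is not shown in this post, so the table and column names below are hypothetical), a custom metric of this kind boils down to a query that returns a single integer for SQL Monitor to record:

        -- hypothetical schema: total number of approved comments right now
        SELECT COUNT(*) AS ApprovedComments
        FROM dbo.ArticleComments
        WHERE IsApproved = 1;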

    Read the article

  • P90X or How I Stopped Worrying and Love Exercise

    - by Matt Christian
    Last Wednesday, after many UPS delivery failures, I received P90X in the mail.  P90X is a series of DVDs and a nutrition guide you use to shed pounds and gain muscle.  Odds are you've seen the infomercial on TV at some point if you watch a little tube now and again.  I started last Thursday and am still standing to tell this tale. At its core, P90X is a 12-DVD set of exercise videos.  Each video comprises a different workout routine that typically lasts around an hour (some up to 1 1/2 hours).  Every day you are supposed to do one of the workouts, and they differ from day to day (sometimes you may repeat a shorter 6-minute workout dedicated to abs twice a week).  There are different 'programs' focused on different areas: for weight loss you do the Lean Program, for standard weight loss and muscle gain the Regular Program, and for those hardcore health-nuts, the Insane Program (which consists of two one-hour workouts per day).  Each Program has a different set of workouts per week which you repeat for 3 weeks, followed by a 'Relaxation Week' which is essentially a slightly different order.  After the month of workouts is over, you've finished 1 phase out of 3.  P90X takes 90 days, split into 3 Phases (1 phase per month).  Every phase has a different workout order which is also focused on different areas (Weight Loss, Muscle Gain, etc...)  With the DVDs you also get a small glossy book of about 100 pages detailing the different workouts and the different programs as well as a sample workout to see if you're even ready to start P90X. The second part of P90X, which can also be considered the 'core' (actually the other half of the core), is the nutrition guide that is included.  The Nutrition Guide is a book similar to the one that defines the exercises (about 100 glossy pages), though it details foods you should eat, the amounts, and a number of healthy (and tasty!) recipes.  The guide is split up into 3 phases as well, promoting high protein and low carb/dairy during Phase 1, and levelling off through to Phase 3 where you have a relatively balanced amount of every food group. So after 1 week, where am I?  I've stuck quite close to the nutrition guide (there isn't 'diet food' in here people, it's ACTUALLY food) and done my exercise every day.  I think a lot of the first week is getting into the whole idea and learning the moves performed on the DVD.  Have I lost weight?  No.  Do I feel some definition already starting to poke out?  Absolutely (no pun intended). Tony Horton (the 51-year old hulk that runs the whole thing) is very fun to listen to and work along with, and the 'diet' really isn't too hard to follow unless all you eat is carbs.  I've tried the gym thing and could not get motivated enough to continue going.  P90X is the first time I've ached from a workout BEFORE starting my next workout.  For anyone interested, Google 'P90X' or 'BeachBody' to find out more information about this awesome program!

    Read the article

  • Prepping the Raspberry Pi for Java Excellence (part 1)

    - by HecklerMark
    I've only recently been able to begin working seriously with my first Raspberry Pi, received months ago but hastily shelved in preparation for JavaOne. The Raspberry Pi and other diminutive computing platforms offer a glimpse of the potential of what is often referred to as the embedded space, the "Internet of Things" (IoT), or Machine to Machine (M2M) computing. I have a few different configurations I want to use for multiple Raspberry Pis, but for each of them, I'll need to perform the following common steps to prepare them for their various tasks:

    - Load an OS onto an SD card
    - Get the Pi connected to the network
    - Load a JDK

    I've been very happy to see good friend and JFXtras teammate Gerrit Grunwald document how to do these things on his blog (link to article here - check it out!), but I ran into some issues configuring wi-fi that caused me some needless grief. Not knowing if any of the pitfalls were caused by my slightly-older version of the Pi and not being able to find anything specific online to help me get past it, I kept chipping away at it until I broke through. The purpose of this post is to (hopefully) help someone else recognize the same issues if/when they encounter them and work past them quickly.

    There is a great resource page here that covers several ways to get the OS on an SD card, but here is what I did (on a Mac):

    - Plug SD card into reader on/in Mac
    - Format it (FAT32)
    - Unmount it (diskutil unmountDisk diskn, where n is the disk number representing the SD card)
    - Transfer the disk image for Debian to the SD card (dd if=2012-08-08-wheezy-armel.img of=/dev/diskn bs=1m)
    - Eject the card from the Mac (diskutil eject diskn)

    There are other ways, but this is fairly quick and painless, especially after you do it several times. Yes, I had to do that dance repeatedly (minus formatting) due to the wi-fi issues, as it kept killing the ability of the Pi to boot. You should be able to dramatically reduce the number of OS loads you do, though, if you do a few things with regard to your wi-fi.

    Firstly, I strongly recommend you purchase the Edimax EW-7811Un wi-fi adapter. This adapter/chipset has been proven with the Raspberry Pi, it's tiny, and it's cheap. Avoid unnecessary aggravation and buy this one! Secondly, visit this page for a script and instructions regarding how to configure your new wi-fi adapter with your Pi. Here is the rub, though: there is a missing step. At least for my combination of Pi version, OS version, and uncanny gift of timing and luck there was. :-) Here is the sequence of steps I used to make the magic happen:

    - Plug your newly-minted SD card (with OS) into your Pi and connect a network cable (for internet connectivity)
    - Boot your Pi. On the first boot, do the following things:
      - Opt to have it use all space on the SD card (will require a reboot eventually)
      - Disable overscan
      - Set your timezone
      - Enable the ssh server
      - Update raspi-config
    - Reboot your Pi. This will reconfigure the SD to use all space (see above).
    - After you log in (UID: pi, password: raspberry), upgrade your OS. This was the missing step for me that put a merciful end to the repeated SD card re-imaging and made the wi-fi configuration trivial. To do so, just type sudo apt-get upgrade and give it several minutes to complete. Pour yourself a cup of coffee and congratulate yourself on the time you've just saved.  ;-)

    With the OS upgrade finished, now you can follow Mr. Engman's directions (to the letter, please see link above), download his script, and let it work its magic.
One aside: I plugged the little power-sipping Edimax directly into the Pi and it worked perfectly. No powered hub needed, at least in my configuration. To recap, that OS upgrade (at least at this point, with this combination of OS/drivers/Pi version) is absolutely essential for a smooth experience. Miss that step, and you're in for hours of "fun". Save yourself! I'll pick up next time with more of the Java side of the RasPi configuration, but as they say, you have to cross the moat to get into the castle. Hopefully, this will help you do just that. Until next time! All the best, Mark 

    Read the article

  • Secret Agent Man

    - by Bil Simser
    Just a quick one this morning as we all get started in the week. Something that comes into play (sometimes in a big way) is the user agent string your browser gives off. So, for example, using the User-Agent field in the request header, you can determine what browser the user is running and act accordingly.

    Internet Explorer 9 modified the UA string slightly, so, just in case you're looking for it, here are the user agent strings for IE9 (in various modes):

    - Internet Explorer 9 Mode: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
    - Internet Explorer 8 Mode: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; MS-RTC LM 8; InfoPath.3; .NET4.0C; .NET4.0E; Zune 4.7)
    - Internet Explorer 7 Mode: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; MS-RTC LM 8; InfoPath.3; .NET4.0C; .NET4.0E; Zune 4.7)
    - Internet Explorer 9 (Compatibility Mode): Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; MS-RTC LM 8; InfoPath.3; .NET4.0C; .NET4.0E; Zune 4.7)

    A couple of things to note here:

    - This was from a 64-bit Windows 7 client, so that might account for the WOW64 in the agent string (I don't have a 32-bit client to test from)
    - Various applications and platforms add to the UA string just like they do in previous IE releases. So, for example, you can see I have various .NET versions installed as well as Zune. You can take advantage of this by querying the UA string for compatibilities and present options to the end user accordingly.

    As applications will continue to add to and modify this string, you'll want to query the string for parts, not the entire string. For example, if you want to detect if you're coming from IE running on a Windows Phone 7, just look for "iemobile" in the user agent string.

    Happy hacking!
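    P.S. A small sketch of that "query for parts" idea, assuming server-side ASP.NET code (a controller action or page) where Request is available; the checks themselves are just examples:

        string ua = Request.UserAgent ?? String.Empty;

        // Look for fragments rather than matching the whole string
        bool isIE9 = ua.IndexOf("MSIE 9.0", StringComparison.OrdinalIgnoreCase) >= 0;
        bool isWindowsPhone = ua.IndexOf("iemobile", StringComparison.OrdinalIgnoreCase) >= 0;

        if (isWindowsPhone)
        {
            // e.g. serve a mobile-friendly view here
        }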

    Read the article

  • Entity Framework 4, WCF &amp; Lazy Loading Tip

    - by Dane Morgridge
    If you are doing any work with Entity Framework and custom WCF services, in EFv1 everything works great.  As soon as you jump to EFv4, you may find yourself getting odd errors that you can't seem to catch.  The problem almost always has something to do with the new lazy loading feature in Entity Framework 4.  With Entity Framework 1, you didn't have lazy loading, so this problem didn't surface.  Assume I have a Person entity and an Address entity where there is a one-to-many relationship between Person and Address (Person has many Addresses). In Entity Framework 1 (or in EFv4 with lazy loading turned off), I would have to load the Address data by hand by either using the Include or Load method:

    var people = context.People.Include("Addresses");

    or

    people.Addresses.Load();

    Lazy loading kicks in the first time the Person.Addresses collection is accessed:

    var people = context.People.ToList();

    // only person data is currently in memory

    foreach(var person in people)
    {
        // EF determines that no Address data has been loaded and lazy loads
        int count = person.Addresses.Count();
    }

    Lazy loading has the useful (and sometimes not useful) feature of fetching data when requested.  It can make your life easier or it can make it a big pain.  So what does this have to do with WCF?  One word: Serialization.

    When you need to pass data over the wire with WCF, the data contract is serialized into either XML or binary depending on the binding you are using.  Well, if I am using lazy loading, the Person entity gets serialized and during that process, the Addresses collection is accessed.  When that happens, the Address data is lazy loaded.  Then the Address is serialized, and the Person property is accessed, and then also serialized, and then the Addresses collection is accessed.  Now the second time through, lazy loading doesn't kick in, but you can see the infinite loop caused by this process.  This is a problem with any serialization, but I personally ran into it using WCF.

    The fix for this is to simply turn off lazy loading.  This can be done at each call by using context options:

    context.ContextOptions.LazyLoadingEnabled = false;

    Turning lazy loading off will now allow your classes to be serialized properly.  Note, this is if you are using the standard Entity Framework classes.  If you are using POCO, you will have to do something slightly different.  With POCO, the Entity Framework will create proxy classes by default that allow things like lazy loading to work with POCO.  This proxy basically creates a proxy object that is a full Entity Framework object that sits between the context and the POCO object.  When using POCO with WCF (or any serialization), just turning off lazy loading doesn't cut it.  You have to turn off proxy creation to ensure that your classes will serialize properly:

    context.ContextOptions.ProxyCreationEnabled = false;

    The nice thing is that you can do this on a call-by-call basis.  If you use a new context for each set of operations (which you should), then you can turn either lazy loading or proxy creation on and off as needed.
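    Put together, a WCF service method along these lines is a reasonable sketch of the pattern (the Person/Address entities are from the example above, but the context name and method are hypothetical):

        public List<Person> GetPeopleWithAddresses()
        {
            using (var context = new PersonContext())   // hypothetical ObjectContext name
            {
                // Turn these off before the results are handed to the WCF serializer
                context.ContextOptions.ProxyCreationEnabled = false;
                context.ContextOptions.LazyLoadingEnabled = false;

                // Eager-load the related data instead of relying on lazy loading
                return context.People.Include("Addresses").ToList();
            }
        }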

    Read the article

  • How to inhibit suspend temporarily?

    - by Zorn
    I have searched around a bit for this and can't seem to find anything helpful. I have my PC running Ubuntu 12.10 set up to suspend after 30 minutes of inactivity. I don't want to change that, it works great most of the time. What I do want to do is disable the automatic suspend if a particular application is running. How can I do this? The closest thing I've found so far is to add a shell script in /usr/lib/pm-utils/sleep.d which checks if the application is running and returns 1 to indicate that suspend should be prevented. But it looks like the system then gives up on suspending automatically, instead of trying again after another 30 minutes. (As far as I can tell, if I move the mouse, that restarts the timer again.) It's quite likely the application will finish after a couple of hours, and I'd rather my PC then suspended automatically if I'm not using it at that point. (So I don't want to add a call to pm-suspend when the application finishes.) Is this possible? Any advice would be appreciated. Cheers.

    EDIT: As I noted in one of the comments below, what I actually wanted was to inhibit suspend when my PC was serving files over NFS; I just wanted to focus on the "suspend" part of the question because I already had an idea how to solve the NFS part. Using the 'xdotool' idea given in one of the answers, I have come up with the following script which I run from cron every few minutes. It's not ideal because it stops the screensaver kicking in as well, but it does work. I need to have a look at why 'caffeine' doesn't correctly re-enable suspend later on, then I could probably do better. Anyway, this does seem to work, so I'm including it here in case anyone else is interested.

    #!/bin/bash

    # If the output of this function changes between two successive runs of this
    # script, we inhibit auto-suspend.
    function check_activity()
    {
        /usr/sbin/nfsstat --server --list
    }

    # Prevent the automatic suspend from kicking in.
    function inhibit_suspend()
    {
        # Slightly jiggle the mouse pointer about; we do a small step and
        # reverse step to try to stop this being annoying to anyone using the
        # PC. TODO: This isn't ideal, apart from being a bit hacky it stops
        # the screensaver kicking in as well, when all we want is to stop
        # the PC suspending. Can 'caffeine' help?
        export DISPLAY=:0.0
        xdotool mousemove_relative --sync -- 1 1
        xdotool mousemove_relative --sync -- -1 -1
    }

    LOG="$HOME/log/nfs-suspend-blocker.log"
    ACTIVITYFILE1="$HOME/tmp/nfs-suspend-blocker.current"
    ACTIVITYFILE2="$HOME/tmp/nfs-suspend-blocker.previous"

    echo "Started run at $(date)" >> "$LOG"

    if [ ! -f "$ACTIVITYFILE1" ]; then
        check_activity > "$ACTIVITYFILE1"
        exit 0;
    fi

    /bin/mv "$ACTIVITYFILE1" "$ACTIVITYFILE2"
    check_activity > "$ACTIVITYFILE1"

    if cmp --quiet "$ACTIVITYFILE1" "$ACTIVITYFILE2"; then
        echo "No activity detected since last run" >> "$LOG"
    else
        echo "Activity detected since last run; inhibiting suspend" >> "$LOG"
        inhibit_suspend
    fi

    Read the article

< Previous Page | 46 47 48 49 50 51 52 53 54 55 56 57  | Next Page >