Search Results

Search found 18766 results on 751 pages for 'me again'.

Page 17/751 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • Windows 7 Icons, Buttons, and Tabs corrupted...Professional 32-bit

    - by xhyperx
    The other day, about two or three days ago, I was simply typing in a Microsoft Word document when my screen froze. After a few moments, it went black... I thought it was my video hardware (dual nVidia 9800 GTs). Anyway, I did a hard reboot and chose to Start Normally. The system blue-screened, telling me there was a failure in the Memory Manager. So then I thought maybe a RAM failure or video memory failure. I attempted a reboot again, and this time I was presented with the option to repair Windows... so I went with that. The repair app finished and did an auto reboot. This time I got all the way back to my desktop, where in a matter of about 30 seconds the system blue-screened again and pointed to the Memory Manager as the area of cause. Again I rebooted, the repair thingy came up again, and I allowed it to do its thing, deciding that if the same failure occurred I'd begin pulling hardware to see at what point I could find the possibly defective part. However, this time it rebooted, I got back to the desktop, and there was no crash. All looked well, until I looked at the balloon messages when hovering over the system tray icons. Also, when I open any of my browsers, the tabs have no text, and any window that pops up with regular buttons (OK, Cancel, etc.) looks weird: the buttons are really, really long and have no text. So it seems like the system is once again running smoothly, but something has gotten corrupted... something related to drawing basic Windows user interface objects. Help... all ideas are respected and appreciated. Have a great day everyone!

    Read the article

  • This task is currently locked by a running workflow and cannot be edited. Limitation to both Nintex and SPD workflow

    - by ybbest
    Note: this post is from the Nintex Forum here. These limitations apply to both SharePoint Designer workflows and Nintex Workflow, as Nintex uses the SharePoint workflow engine. The common cause that I experience is that the ‘parent’ workflow generates more than one task at once. This is common, as you can have multiple approvers for a certain approval process. You could also have a workflow running when the task is created; one common scenario is that you would like to set a custom column value in your approval task. For me this is a huge limitation, and as a Nintex lover I really hope Nintex can solve this problem with Microsoft going forward.

    Introduction

    “This task is currently locked by a running workflow and cannot be edited” is a common message that is seen when an error occurs while the SharePoint workflow engine is processing a task item associated with a workflow. When a workflow processes a task normally, the following sequence of events is expected to occur:

    1. The process begins.
    2. The workflow places a ‘lock’ on the task so nothing else can change the values while the workflow is processing.
    3. The workflow processes the task.
    4. The lock is released when the task processing is finished.

    When the message is encountered, it usually indicates that an error occurred between steps 2 and 4. As a result, the lock is never released. Therefore, the ‘task locked’ message is not an error itself, rather a symptom of another error – the ‘task locked’ message does not indicate what went wrong. In most cases, once this message is encountered, the workflow cannot be made to continue and must be terminated and started again. The following is a guide that can help troubleshoot the cause of these messages. Some initial observations to narrow down the potential causes are:

    Is the error consistent or intermittent? When the error is consistent, it will happen every time the workflow is run. When it is intermittent, it may happen regularly, but not every time.

    Does the error occur the first time the user tries to respond to a task, or do they respond, notice the workflow does not continue, and get the error when they respond again? If the message is present when the user first responds to the task, the issue would have occurred when the task was created. Otherwise, it would have occurred when the user attempted to respond to the task.

    Causes

    Modifying the task list. A cause of this error appearing consistently the first time a user tries to respond to a task is a modification to the default task list schema. For example, changing the ‘Assigned to’ field in a task list to be a multiple selection will cause this behaviour.

    Deleting the workflow task then restoring it from the Recycle Bin. If you start a workflow, delete the workflow task and then restore it from the Recycle Bin in SharePoint, the workflow will fail with the ‘task locked’ error. This is confirmed behaviour whether using a SharePoint Designer or a Nintex workflow. You will need to terminate the workflow and start it again.

    Parallel simultaneous responses. A cause of this error appearing inconsistently is multiple users responding to tasks in parallel at the same time. In this scenario, one task will complete correctly and the other will not process. When the user tries again, the ‘task locked’ message will display. Nintex included a workaround for this issue in build 11000. In build 11000 and later, one of the users will receive a message on the task form when they attempt to respond, stating that they need to try again in a few moments.

    Additional processing on the task. A cause of this error appearing both consistently and inconsistently is having an additional system running on the items in the task list. Some examples include: a workflow running on the task list, an event receiver running on the task list, or another automated process querying and updating workflow tasks. Note: this Microsoft help article (http://office.microsoft.com/en-us/sharepointdesigner/HA102376561033.aspx#5) explains creating a workflow that runs on the task list to update a field on the task. Our experience shows that this causes the ‘task locked’ issue when the ‘parent’ workflow generates more than one task at once.

    Isolated system error. If the error is a rare or ‘one off’ event, then an isolated system error may have occurred. For example, if there is a database connectivity issue while the workflow is processing the task response, the task will lock. In this case, the user will respond to a task but the workflow will not continue. When they respond again, the ‘task locked’ message will display. In this case, there will be an error in the SharePoint ULS logs at the time that the user originally responded.

    Temporary delay while the workflow processes. If the workflow is taking a long time to process after a user submits a task, they may notice and try to respond to the task again. They will see the task locked error, but after a number of attempts (or after waiting some time) the task response page eventually indicates the task has been responded to. In this case, nothing actually went wrong, and the error message gives an accurate indication of what is happening – the workflow temporarily locked the task while it was processing. This scenario may occur in a very large workflow, or after the SharePoint application pool has just started.

    Modifying the task via a web service with an invalid URL. If the Nintex Workflow web service is used to respond to or delegate a task, the site context part of the URL must be a valid alternate access mapping URL. For example, if you access the web service via the IP address of the SharePoint server, and the IP address is not a valid AAM, the task can become locked.

    The workflow has become stuck without any apparent errors. This behaviour can occur as a result of a bug in the SharePoint 2010 workflow engine. If you do not have the August 2010 Cumulative Update (or later) for SharePoint, and your workflow uses delays, “Flexi-task”, “State machine”, or “Task Reminder” actions or variables, you could be affected. Check the SharePoint 2010 Updates site here: http://technet.microsoft.com/en-us/sharepoint/ff800847. The October CU is recommended: http://support.microsoft.com/kb/2553031. The fix is described as: “Consider the following scenario. You add a Delay activity to a workflow. Then, you set the duration for the Delay activity. You deploy the workflow in SharePoint Foundation 2010. In this scenario, the workflow is not resumed after the duration of the Delay activity.” If you find this is occurring in your environment, install the October CU, terminate all the affected running workflows and run them afresh.

    Investigative steps

    The first step to isolate the issue is to create a new task list on the site and configure the workflow to use it. Any customizations that were made to the original task list should not be made to the new task list. If the new task list eliminates the issue, then the cause can be attributed to the original task list or a change that was made to it. To change the task list that the workflow uses, open the workflow in Workflow Designer, select Settings -> Startup Options, and configure the task list as required. If none of the scenarios above help, check the SharePoint logs for any messages with a category of ‘Workflow Infrastructure’.

    Conclusion

    The information in this article has been gathered from observations and investigations by Nintex. The source of these issues is the underlying SharePoint workflow engine. This article will be updated if further causes are discovered. From <http://connect.nintex.com/forums/thread/6503.aspx>

    Read the article

  • Merge replication stopping without errors in SQL 2008 R2

    - by Rob Farley
    A non-SQL MVP friend of mine, who also happens to be a client, asked me for some help again last week. I was planning on writing this up even before Rob Volk (@sql_r) listed his T-SQL Tuesday topic for this month. Earlier in the year, I (well, LobsterPot Solutions, although I’d been the person mostly involved) had helped out with a merge replication problem. The Merge Agent on the subscriber was just stopping every time, shortly after it started. With no errors anywhere – not in the Windows Event Log, the SQL Agent logs, not anywhere. We’d managed to get the system working again, but didn’t have a good reason about what had happened, and last week, the problem occurred again. I asked him about writing up the experience in a blog post, largely because of the red herrings that we encountered. It was an interesting experience for me, also because I didn’t end up touching my computer the whole time – just tapping on my phone via Twitter and Live Msgr. You see, the thing with replication is that a useful troubleshooting option is to reinitialise the thing. We’d done that last time, and it had started to work again – eventually. I say eventually, because the link being used between the sites is relatively slow, and it took a long while for the initialisation to finish. Meanwhile, we’d been doing some investigation into what the problem could be, and were suitably pleased when the problem disappeared. So I got a message saying that a replication problem had occurred again. Reinitialising wasn’t going to be an option this time either. In this scenario, the subscriber having the problem happened to be in a different domain to the publisher. The other subscribers (within the domain) were fine, just this one in a different domain had the problem. Part of the problem seemed to be a log file that wasn’t being backed up properly. They’d been trying to back up to a backup device that had a corruption, and the log file was growing. Turned out, this wasn’t related to the problem, but of course, any time you’re troubleshooting and you see something untoward, you wonder. Having got past that problem, my next thought was that perhaps there was a problem with the account being used. But the other subscribers were using the same account, without any problems. The client pointed out that that it was almost exactly six months since the last failure (later shown to be a complete red herring). It sounded like something might’ve expired. Checking through certificates and trusts showed no sign of anything, and besides, there wasn’t a problem running a command-prompt window using the account in question, from the subscriber box. ...except that when he ran the sqlcmd –E –S servername command I recommended, it failed with a Named Pipes error. I’ve seen problems with firewalls rejecting connections via Named Pipes but letting TCP/IP through, so I got him to look into SQL Configuration Manager to see what kind of connection was being preferred... Everything seemed fine. And strangely, he could connect via Management Studio. Turned out, he had a typo in the servername of the sqlcmd command. That particular red herring must’ve been reflected in his cheeks as he told me. During the time, I also pinged a friend of mine to find out who I should ask, and Ted Kruger (@onpnt) ‘s name came up. Ted (and thanks again, Ted – really) reconfirmed some of my thoughts around the idea of an account expiring, and also suggesting bumping up the logging to level 4 (2 is Verbose, 4 is undocumented ridiculousness). 
I’d just told the client to push the logging up to level 2, but the log file wasn’t appearing. Checking permissions showed that the user did have permission on the folder, but still no file was appearing. Then it was noticed that the user had been switched earlier as part of the troubleshooting, and switching it back to the real user caused the log file to appear. Still no errors. A lot more information being pushed out, but still no errors. Ted suggested making sure the FQDNs were okay from both ends, in case the servers were unable to talk to each other. DNS problems can lead to hassles which can stop replication from working. No luck there either – it was all working fine. Another server started to report a problem as well. These two boxes were both SQL 2008 R2 (SP1), while the others, still working, were SQL 2005. Around this time, the client tried an idea that I’d shown him a few years ago – using a Profiler trace to see what was being called on the servers. It turned out that the last call being made on the publisher was sp_MSenumschemachange. A quick interwebs search on that showed a problem that exists in SQL Server 2008 R2, when stored procedures have more than 4000 characters. Running that stored procedure (with the same parameters) manually on SQL 2005 listed three stored procedures, the first of which did indeed have more than 4000 characters. Still no error though, and the problem as listed at http://support.microsoft.com/kb/2539378 describes an error that should occur in the Event log. However, this problem is the type of thing that is fixed by a reinitialisation (because it doesn’t need to send the procedure change across as a transaction). And a look in the change history of the long stored procs (you all keep them, right?), showed that the problem from six months earlier could well have been down to this too. Applying SP2 (with sufficient paranoia about backups and how to get back out again if necessary) fixed the problem. The stored proc changes went through immediately after the service pack was applied, and it’s been running happily since. The funny thing is that I didn’t solve the problem. He had put the Profiler trace on the server, and had done the search that found a forum post pointing at this particular problem. I’d asked Ted too, and although he’d given some useful information, nothing that he’d come up with had actually been the solution either. Sometimes, asking for help is the most useful thing you can do. Often though, you don’t end up getting the help from the person you asked – the sounding board is actually what you need. @rob_farley
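
    For reference, the check described above (finding stored procedures whose definitions exceed 4000 characters on the publication database) can be reproduced with a one-off sqlcmd call; this is only a sketch, and PUBLISHER and PublishedDB are placeholder names, with -E using Windows authentication as in the post.

        # Lists stored procedures longer than 4000 characters, the condition tied to the
        # SQL Server 2008 R2 merge replication issue referenced at support.microsoft.com/kb/2539378.
        sqlcmd -E -S PUBLISHER -d PublishedDB -Q "SELECT o.name, LEN(m.definition) AS def_len FROM sys.sql_modules m JOIN sys.objects o ON o.object_id = m.object_id WHERE o.type = 'P' AND LEN(m.definition) > 4000 ORDER BY def_len DESC"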

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me, it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days now. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as something like  'Doing real-world TDD in .NET , with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make:  It’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and me myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spent my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude. Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider to do professional work without having this add-in installed... The three parts of a software component Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:   First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. In either way, there has to be some sort of requirement, be it explicit or not. 
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. It’s development is entirely driven by the requirements and their executable formulation. This is the delivery, the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how  the product is developed, he is only interested in the fact that it is developed as cost-effective as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article… An example To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here… The requirement As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow and using them automatically produces high componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with: namespace Calculator {     /// <summary>     /// Defines a very simple calculator component for demo purposes.     /// </summary>     public interface ICalculator     {         /// <summary>         /// Gets the result of the last successful operation.         /// </summary>         /// <value>The last result.</value>         /// <remarks>         /// Will be <see langword="null" /> before the first successful operation.         /// </remarks>         double? LastResult { get; }       } // interface ICalculator   } // namespace Calculator So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation than about coding. The documentation must completely describe the behavior of the documented element. I normally use an IoC container or some sort of self-written provider-like model in my architecture. 
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason as that is is very simple to use. The ‘Red’ (pt. 1)   First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test       This is basically the executable formulation of what the interface definition states (part of). Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more.  Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC-container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator - derived type:        Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long, the only thing that’s left to do manually here, is to add the Ioc-specific wiring-declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point, where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the additon.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:   Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location.  This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).     Back to our Calculator again: Two more R# – clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#: Having done that, our production code finally looks like that: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         #region ICalculator           public double? LastResult { get; private set; }           public double Add(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 + operand2).Value;         }           public double Subtract(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 - operand2).Value;         }           #endregion // ICalculator           #region Implementation (Helper)           private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)         {             if (operand1 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand1");             }               if (operand2 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand2");             }         }           #endregion // Implementation (Helper)       } // class Calculator   } // namespace Calculator But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add? Derick immediately answers his own question: So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it. I couldn’t state it more precise. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code. 
So code quality/readability really makes a HUGE difference for me – sometimes it can be even the difference between project success and failure… Conclusions The above described development process emerged over the years, and there were mainly two things that guided its evolution (you might call it eternal principles, personal beliefs, or anything in between): Test-driven development is the normal, natural way of writing software, code-first is exceptional. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…) It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to go into the right direction…). The test code serves ‘only’ to make the production code work. But it’s the number of delivered features which solely counts at the end of the day - no matter how much test code you wrote or how good it is. With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I’d do either way…).  As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code (This might sound heavy, but that is mainly due to the fact that software development standards only begin to evolve. The entire software development profession is very young, historically seen; only at the very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio sounds no longer extraordinary…) Although the above might look like very much unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for ‘saving’ a few 100 bucks -  is just not acceptable and a very bad decision in business terms (though I quite some times have seen and heard that…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows… The here described round-trip will take me about five to ten minutes in my real-world development practice. I guess it’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the so manufactured ‘product’ is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The here described development method might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g. 
if time-to-market is crucial for a software project. So this is a business decision in the end. It’s just that you have to know what you’re doing and what consequences this might have… Some last words First, I’d like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend for reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn’t have done that without this inspiration. I really enjoy that kind of discussions… I agree with him in all respects. But I don’t know (yet?) how to bring his insights into the described production process without slowing things down. The above described method proved to be very “good enough” in my practical experience. But of course, I’m open to suggestions here… My rationale for now is: If the test is initially red during the red-green-refactor cycle, the ‘right reason’ is: it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, ‘red’ certainly must occur for the ‘right reason’: in this phase, ‘red’ MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!

    Read the article

  • No Wi-Fi after system reboot

    - by ILya
    Something strange is happening... I've installed a Wi-Fi card into my Ubuntu Server 11.04 machine. To configure it I do the following: sudo vi /etc/network/interfaces and add:

        iface wlan0 inet dhcp
        wpa-driver wext
        wpa-ssid "Sweet Home"
        wpa-ap-scan 1
        wpa-proto WPA
        wpa-pairwise TKIP
        wpa-group TKIP
        wpa-key-mgmt WPA-PSK
        wpa-psk <A KEY>
        auto wlan0

    then:

        $ sudo /etc/init.d/networking restart
        * Running /etc/init.d/networking restart is deprecated because it may not enable again some interfaces
        * Reconfiguring network interfaces...
        ssh stop/waiting
        ssh start/running, process 1522
        ssh stop/waiting
        ssh start/running, process 1590

    And my machine successfully gets an IP on the wireless adapter. But after a reboot it doesn't get any IP on the wireless network. To fix it I run /etc/init.d/networking restart again and all is fine again - it gets an IP. I understand that I could simply add that to my startup scripts to make it work, but maybe there is a better way to configure it?
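
    One thing worth trying before resorting to a startup script - purely a sketch, not a guaranteed fix - is moving the auto wlan0 line above the iface stanza (and optionally adding allow-hotplug in case the driver comes up late), so that ifupdown brings the interface up on its own at boot:

        # /etc/network/interfaces - sketch only; same options as above, key left as <A KEY>
        auto wlan0
        allow-hotplug wlan0
        iface wlan0 inet dhcp
            wpa-driver wext
            wpa-ssid "Sweet Home"
            wpa-ap-scan 1
            wpa-proto WPA
            wpa-pairwise TKIP
            wpa-group TKIP
            wpa-key-mgmt WPA-PSK
            wpa-psk <A KEY>

    If the interface then comes up by itself after a reboot, no extra startup script is needed.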

    Read the article

  • .NET Rocks VS2010 Road Trip

    - by Blog Author
    .NET Rocks!! is going on the road again in honor of the release of VS2010, and here are the details: Carl and Richard are loading up the DotNetMobile (a 30 foot RV) and driving to your town again to show off the latest and greatest in Visual Studio 2010 and .NET 4.0!  And to make the night even more fun, we’re going to bring a mystery rock star from the Visual Studio world to the event and interview them for a special .NET Rocks Road Trip show series. Along the way we’ll be giving away some great prizes, showing off some awesome technology and having a ton of laughs. And one lucky person at the event will win “Ride Along with Carl and Richard” and get to board the RV and ride with the boys to the next town on the tour (don’t worry, we’ll get you home again!) The details can be found here: http://www.dotnetrocks.com/roadtrip.aspx

    Read the article

  • how to re-use a sprite in cocos2d-x

    - by zinking
    Sometimes it takes time to create the sprite structures in a scene; I might need to set up structures inside a sprite to meet a requirement, so I would like to reuse such structures throughout the game again and again. I tried that - removing the child from its parent, detaching it from the parent, cleaning the parent with the sprite - but when I try to add the sprite to another scene, it just won't get past the assertion that the sprite already has a parent. Did I miss some step? To add an example: I have a sprite A which takes quite a few steps to construct, so I used it in scene A layer A, and then I want to use it in scene A layer B, scene B layer A1, etc. Generally speaking, I don't want to reconstruct the sprite again.

    Read the article

  • Bridging: Losing WLAN network connection with the 4addr option on - Why?

    - by WitchCraft
    Question: For use with my Xen VM, I need to create a virtual network interface (vif) that is bridged to wlan0. If in /etc/network/interfaces I add auto xenbr0 and iface xenbr0 inet dhcp, and then later do brctl addif xenbr0 wlan0, I get this error message: can't add wlan0 to bridge xenbr0: Operation not supported. I found out that Linux won't let you bridge a wireless interface in managed mode at all unless you enable the 4addr option (I needed to recompile iw for that): iw dev wlan0 set 4addr on. Afterwards brctl addif xenbr0 wlan0 works, and brctl show shows xenbr0 as bridged to wlan0. Unfortunately, as soon as I execute iw dev wlan0 set 4addr on, my entire network connection is gone (no connection). As soon as I then execute iw dev wlan0 set 4addr off, I reconnect and it works again. If I re-execute 4addr on, it breaks again; if I execute 4addr off, it works again. Unfortunately, I can't just turn 4addr on, activate the bridge, and then turn it back off (error: device not ready). Does anybody know why I lose my connection?
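
    For clarity, the sequence described above condenses to the following sketch (interface names as in the question; the comments only restate what the question reports happening):

        # Sketch of the steps from the question (wlan0 / xenbr0 as above).
        sudo iw dev wlan0 set 4addr on     # needed before a managed-mode interface can be bridged
        sudo brctl addif xenbr0 wlan0      # now succeeds instead of "Operation not supported"
        # ...at this point the wireless connection drops, as described...
        sudo brctl delif xenbr0 wlan0
        sudo iw dev wlan0 set 4addr off    # connectivity comes back

    One thing worth checking - an assumption on my part, not something stated in the question - is whether the access point accepts 4-address (WDS) frames at all; if it does not, the association can be expected to break as soon as 4addr is enabled.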

    Read the article

  • No BIOS, no video after ubuntu updates

    - by chubbysilk
    Hello, I am running Ubuntu 10.04 on a Dell Precision T3500 desktop machine. Last week, when I ran the regular updates, my video disappeared upon restart. I could not even see the Dell startup screen or enter the BIOS. When I swapped out the video card (for an older one I had around), the system worked again. So Dell sent me a replacement video card. I put that in and everything appeared to be working again. Then I ran updates again, and the same thing happened. The replacement video card now appears to be broken: no startup messages, no BIOS, no video at all. Does anyone know how Ubuntu updates might be ruining the video cards? The card that keeps "breaking" is a FirePro MV2260. Thanks, RC

    Read the article

  • junior / professional / senior categorization

    - by oozoo
    Hey guys, is it just me or is the categorization of developer levels highly subjective? I get the feeling that every company tries to hire experienced developers as juniors because they don't know $technology. For example my own career: I switched technologies a couple of times, while sticking to java as a programming language. For example I first worked for 3 years using JavaSE technologies, the next company I worked for hired me as junior because I didn't have JavaEE experience - while still selling me as professional level to customers (I work in consulting). The next company hired me again as junior because I didn't have SAP experience - they mostly work with SAP Java technologies which is definitely a niche. Still, they are selling all their technology consultants for exactly the same rate while paying them significantly different wages. Now when switching jobs again I feel like this whole thing is going to start all over again because I don't have Spring experience or Oracle knowledge. tl;dr = is my observation totally off base that companies are just using these categorizations as means to keep down wages?

    Read the article

  • "Updating VMware Tools for Linux" spins forever

    - by Nicolas Raoul
    I installed Ubuntu 11.04 inside VMware Player 3.1.4 inside Ubuntu 10.10. When first booting from the ISO image, I was asked if I wanted to download the VMware Tools for Linux and I accepted. It downloaded for a few minutes, and then it was "updating" for an hour. Even shutting down the VM does not stop it. If I try to exit VMware, I am told "Cannot exit while still downloading. Cancel all downloads and try again.", but the dialog's Cancel button is greyed out. There is nothing special in vmware.log, so I ended up killing VMware. A few days later I restarted it; it asked about the VMware Tools again, I accepted again, and had the same problem...

    Read the article

  • Ubuntu 12.10 64bit fresh install, wireless issue!

    - by Dave
    Just installed a fresh Ubuntu 12.10 64-bit on my laptop, ran the update manager, restarted, and suddenly I can't use my wifi anymore. The Ubuntu Software Center automatically installed the additional wifi driver, as you can see in my screenshot. If I mark the option "Do not use the device", apply the changes, and restart Ubuntu, my wifi is back and I can use it. If I run iwconfig, my terminal shows this. Now, if I use Ubuntu for more than 20 minutes surfing the web, my wifi stays connected but I don't receive any signal from it. Any page I try to open simply doesn't open (just a waiting icon). If I disconnect my wifi and connect it again, same issue: it doesn't work. The only way to make it work again is to restart Ubuntu. And the same story happens again after approx. 20-30 minutes. WIFI device details: 03:00.0 Network controller [0280]: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller [14e4:4727] (rev 01) Thanks, Dave
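
    Purely as a sketch, and assuming the "additional driver" shown is Broadcom's proprietary wl module (a common pairing with the BCM4313), reloading the driver and NetworkManager when the connection stalls might spare a full reboot:

        # Assumption: the installed additional driver is the proprietary Broadcom 'wl' module.
        sudo modprobe -r wl                    # unload the Broadcom STA driver
        sudo modprobe wl                       # load it again
        sudo service network-manager restart   # let NetworkManager reconnect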

    Read the article

  • Download mirrors down after fresh installation

    - by user169866
    I don't know what the problem is, but it seems that after every fresh installation of Ubuntu on any device (I have started noticing this problem since Ubuntu 11.04) the connection to the software sources is down. I cannot download any software from either apt-get or the Software Center. Also, I cannot use the Universal Source for packages like chromium, etc. And then suddenly, after a few days, the mirrors are back up again!! Back to normal again! I don't know what the problem is, but it happens every time. Is there any solution to this, or do we just have to wait for a few days till the mirrors are back again? (I have also tried changing the download server; it doesn't help!)

    Read the article

  • How does btrfs RAID work in degraded mode?

    - by turbo
    My idea was that (using loopback devices) it works like this:

        1. Create the RAID array: sudo mkfs.btrfs -m raid1 -d raid1 /dev/loop1 /dev/loop2
        2. Mount it (sudo mount /dev/loop1 /mnt) and mark it: touch goodcondition
        3. Unmount and simulate a disk failure (remove the disk, or delete loopback device loop2 in my case)
        4. Mount degraded (-o degraded) and mark again: touch degraded
        5. Add the bad disk again: sudo btrfs dev add /dev/loop2
        6. Rebalance: sudo btrfs fi ba /mnt

    And RAID 1 should work again. But that's not the case. sudo btrfs fi show:

        Total devices 3 FS bytes used 28.00KB
            devid 3 size 4.00GB used 264.00MB path /dev/loop1
            devid 2 size 4.00GB used 272.00MB path /dev/loop2
            *** Some devices missing

    The file degraded lives on loop1 but not on loop2 when loop2 is mounted in degraded mode. Why is that?
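
    For anyone who wants to replay this, the loopback setup plus the steps above boil down to roughly the following sketch (file names, sizes and loop device numbers are arbitrary):

        # Create two backing files and attach them as loop devices (names/sizes arbitrary).
        truncate -s 4G /tmp/disk1.img /tmp/disk2.img
        sudo losetup /dev/loop1 /tmp/disk1.img
        sudo losetup /dev/loop2 /tmp/disk2.img

        # Steps as described in the question.
        sudo mkfs.btrfs -m raid1 -d raid1 /dev/loop1 /dev/loop2
        sudo mount /dev/loop1 /mnt && sudo touch /mnt/goodcondition
        sudo umount /mnt
        sudo losetup -d /dev/loop2                       # simulate the disk failure
        sudo mount -o degraded /dev/loop1 /mnt && sudo touch /mnt/degraded
        sudo losetup /dev/loop2 /tmp/disk2.img           # bring the "failed" disk back
        sudo btrfs dev add /dev/loop2 /mnt               # may need -f if the old btrfs signature is still present
        sudo btrfs filesystem balance /mnt               # "btrfs fi ba" in the question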

    Read the article

  • Installing Ubuntu Desktop on usb stick

    - by Tobias Gårdner
    I am trying to install Ubuntu Desktop on a USB stick but I do not succeed. The first time I tried, the USB stick contained an installation of Ubuntu Server and I wanted to start over again. However, it complained about partitioning. I removed all the partitions from the stick and tried again, hoping that the installer would help me out with partitioning... But now the USB stick did not show up at all... I created one NTFS partition on the USB stick and tried again, but the only "automated" alternative I get for installing is to overwrite or add Ubuntu to my HDD, which already contains Windows - something that I do not want... Do I need to manually create partitions on the stick in the installer? Which partitions should I create? The USB stick is 8 GB and the machine that I will test it on has 8 GB of memory. Thankful for any support here. Regards, Tobbe G

    Read the article

  • My ubuntu partition was deleted and I can't boot from either a DVD or USB

    - by ropudito
    It's my first time installing Ubuntu. I had Windows 7 and then 8 on my laptop (Acer Aspire 4752z) before. After Ubuntu (12.04) was installed on my laptop, the Windows boot loader didn't recognize my Ubuntu, so I updated grub from a live USB. After grub was updated, Ubuntu was booting perfectly, but my Windows wasn't listed in the grub menu. So I followed someone's instructions to update grub again. And after a reboot, the grub menu didn't show anymore. After searching about this problem on the net and trying to update grub again and again, I decided to delete the Ubuntu partition from the live USB. Shortly after, I booted into Windows with Hiren's boot CD and used mbrfix, but I think it failed. Now I can't get into my BIOS setup or boot from DVD or USB. The only screen I can see after booting is: error: unknown filesystem grub rescue>

    Read the article

  • Installing updates from hard drive [closed]

    - by Ajay
    I am using Oneiric Beta 2; I installed it yesterday. Then I downloaded 350+ MB of updates and installed them. Then, when I tried to auto-mount my drives using Storage Manager, I screwed up, and now the system boots right up to the Ubuntu splash screen and then turns off. Anyway, I am planning to reinstall Ubuntu again, but I do not want to have to download the updates again. I have a copy of all the downloaded update files with me. Can anyone tell me how I can install the updates from the hard drive without downloading them again? Thanks in advance.
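
    One commonly suggested approach is sketched below. It assumes the saved files are the .deb packages that APT had cached under /var/cache/apt/archives on the old install, and that the package versions still match the current repositories; with the cache repopulated, apt-get skips the download step for those packages.

        # Assumption: ~/saved-updates holds the .deb files copied from the old installation's
        # /var/cache/apt/archives directory.
        sudo cp ~/saved-updates/*.deb /var/cache/apt/archives/
        sudo apt-get update
        sudo apt-get upgrade    # uses the cached .debs instead of re-downloading matching versions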

    Read the article

  • gpg passcode error

    - by shantanu
    I just created my gpg key using the developer.ubuntu.com packaging page's documentation and also uploaded the public key to a key server. When I imported the key into Launchpad, it sent me an encrypted message. Now I give a command to decrypt the message, gpg -d launchpad.txt, and it asks for a passphrase. I give the one which I used to create the gpg key, but it shows an error again and again. I repeated the process (gpg key creation) again but am stuck at the same point: the passphrase prompt during decryption. What is the problem?
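
    For reference, the workflow being described amounts to something like the sketch below (KEYID is a placeholder for the key's ID; the passphrase that gpg -d asks for is exactly the one chosen during key creation):

        gpg --gen-key                                            # create the key and choose a passphrase
        gpg --keyserver keyserver.ubuntu.com --send-keys KEYID   # publish the public key (KEYID is a placeholder)
        # After importing the key's fingerprint into Launchpad, it mails back an encrypted message:
        gpg -d launchpad.txt                                     # prompts for the same passphrase chosen above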

    Read the article

  • 13.10 Cursor Disappearing?

    - by ConnorRoberts
    Upgraded from 13.04 to 13.10 on my Ideapad Yoga yesterday, hoping it might have fixes for a couple of issues I've been having. While it seems to have made the touchscreen more usable, it has made the problem with my trackpad even worse :( In 13.04 and below, the cursor occasionally (maybe once or twice a day) would completely stop working, and to get it working again I would run sudo modprobe -r psmouse followed by sudo modprobe psmouse and it would be happy again. Now, every hour or so, my cursor just goes invisible; you can tell it's still there, as you can hover over things and they will respond. Again, running sudo modprobe -r psmouse and sudo modprobe psmouse works, but it's getting a little annoying now that it's so frequent! Does anyone have any suggestions on things to try? :( Thanks in advance!
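
    Until the underlying driver issue is tracked down, the workaround from the post can at least be wrapped in a tiny helper script (nothing more than a convenience sketch):

        #!/bin/sh
        # reset-trackpad.sh - reload the psmouse module, exactly as in the workaround above.
        sudo modprobe -r psmouse
        sudo modprobe psmouse

    Dropping it somewhere on the PATH and binding it to a keyboard shortcut makes the reset a one-keystroke affair when the cursor vanishes.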

    Read the article

  • My ubuntu with unity not loading after last reboot

    - by Abonec
    I have an Asus U36SD, and after the last reboot I can't start up my Ubuntu 11.10. Usually I suspend my notebook by closing the cover, but today I rebooted it and it is not starting up. Booting proceeds normally until the login screen, but if I move the mouse cursor after that, the image immediately switches to a console (without any error; only the normal startup processes loading) and then back to the login screen. I can type my password and the boot continues loading, but after a few moments it again switches back to the dark console and then back to the login screen. I can load recovery mode, but if I try to touch my cursor (with a mouse or the internal notebook touchpad) it again switches back to the console and then to the login screen. If I use only the keyboard, it works fine. Where can I see detailed log information about my problem?

    Read the article

  • Can't single boot Ubuntu 13.04 64 bit

    - by stanleyhunk
    I'm new to Ubuntu. I recently bought an Asus A45VS laptop pre-installed with Windows 8, but I have already uninstalled it and wiped the whole HDD. I plan to install Ubuntu 13.04 64-bit on it. I have tried several times to install and uninstall Ubuntu again and again with a bootable USB, but it still fails to boot. The installation process goes fine, but after rebooting my laptop it just sticks at the purple screen. Then I boot it with the USB again; I tried boot-repair, tried making an EFI partition, still the same. I have searched on the web, and all the results were about dual booting with Windows 7 or Windows 8. I don't wish to dual boot, as I wish to have a single OS, which is Ubuntu, on this laptop. Please help, thanks in advance.

    Read the article

  • How do I get an Xstream mobile phone working as a modem?

    - by callmegus
    I bought a CDMA mobile phone that can be used as a modem (this one: http://smartfren.com/xstream/), and I want to try it on my Ubuntu machine. I plug in the USB and run "lsusb":

        callmegus@Presario-CQ43-304AU:~$ lsusb
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 002: ID 058f:a001 Alcor Micro Corp.
        Bus 003 Device 002: ID 1bbb:0106 T & A Mobile Phones
        Bus 003 Device 003: ID 15d9:0a4c Trust International B.V. USB+PS/2 Optical Mouse
        Bus 004 Device 002: ID 0a5c:21e3 Broadcom Corp.

    I successfully "modprobe" this device and connect to the internet, but it disconnected somehow... and I can't reconnect it. I try "lsusb" again... and something has changed; now it shows:

        Bus 003 Device 002: ID 1bbb:1000 T & A Mobile Phones

    I do "modprobe" again and again, but it still won't connect. Please guide me.

    Read the article

  • Key binding in Compiz no longer works

    - by Dave M G
    I have a key binding set in CompizConfig settings manager that runs this command: sleep .5 && xset -display :0.0 dpms force off I have it attached to Super+~. It's worked fine for years. Now, suddenly, it stopped working. When I open CompizConfig the key binding is blank. I set it again, close CompizConfig, and it doesn't work. So I open CompizConfig again, and the key binding is blank again. It won't save what I set it to. How do I get my key binding and command to work, and stay working.

    Read the article

  • OTN APAC Tour 2012: Bangkok, Thailand - RECAP

    - by Mike Dietrich
    Thanks to everybody who attended the OTN APAC Tour in Bangkok on Monday, Oct 21. It was a pleasure for me to be back in Bangkok again, even though I once more didn't have much time due to my overnight flight to Seoul after the workshop. But thanks for your questions - I will follow up as soon as I get back home. And thanks to Francisco Alvarez, Oracle ACE Director from New Zealand, for inviting me. It was a pleasure presenting together with Francisco, Kamran Nagayev, Oracle ACE Director from Azerbaijan, and Tanakorn Tavornsasnavong from Bangkok. I learned a lot during that day. In case you'd like to download my presentations from this day, please find them via this link. You may access the other slides either on the local OTN page or directly from Francisco's blog and Kamran's blog (and you'll find a lot of excellent and helpful articles there as well). And many thanks to the local OTN group for organizing the entire event so well. Hope to see you soon again!

    Read the article

  • grub rescue: grub not installed

    - by linda8
    I tried to install Ubuntu alongside my Windows 7. It didn't work. After installing, it could not find the root directory and I got some other error messages... I just wanted my computer to be faster again, so instead I decided to reinstall Windows from the recovery disk. It formatted the disk and installed Windows again. The hard disk has an msdos structure now and the Ubuntu partition is empty. When I start up the computer, grub is starting up and I get a grub rescue prompt, but grub is no longer installed. How can I boot directly into Windows? I can boot Ubuntu from a USB stick, but I'm not able to install it anymore. I just want Windows to work again... I'm hoping for advice... Linda

    Read the article
