Search Results

Search found 11703 results on 469 pages for 'dev to production'.

  • SSD suddenly full

    - by Daniel
    Today the hard drive of our server was suddenly full. Disk usage had always stayed around 50% in the weeks and months before (old data is regularly expunged from the server). I deleted 10 GB of files in /tmp, which strangely freed 51 GB. Here is what I did:

        root@***:~# df -h
        Dateisystem    Size  Used Avail Use% Eingehängt auf
        /dev/sda3      139G  137G     0 100% /
        tmpfs          3,9G     0  3,9G   0% /lib/init/rw
        udev           3,9G  116K  3,9G   1% /dev
        tmpfs          3,9G     0  3,9G   0% /dev/shm
        /dev/sda1      985M   25M  910M   3% /boot

        root@***:/var# du -hs *
        3,3M  backups
        438M  cache
        9,4G  lib
        4,0K  local
        12K   lock
        76M   log
        24K   mail
        4,0K  opt
        88K   run
        184K  spool
        10G   tmp
        12K   www

        root@***:/var/tmp# find -type f -print0 | xargs -0 rm
        root@***:/var/tmp# df -h
        Dateisystem    Size  Used Avail Use% Eingehängt auf
        /dev/sda3      139G   81G   51G  62% /
        tmpfs          3,9G     0  3,9G   0% /lib/init/rw
        udev           3,9G  116K  3,9G   1% /dev
        tmpfs          3,9G     0  3,9G   0% /dev/shm
        /dev/sda1      985M   25M  910M   3% /boot

    Any explanation as to why deleting 10 GB in /tmp gave me back 51 GB on the disk? Could this point to an SSD failure? Are there any tools for Debian to test SSD health? I have already checked syslog. The first entry relating to this incident is a MySQL message:

        1:22:02 [ERROR] /usr/sbin/mysqld: Disk is full writing...

    So I have absolutely no idea what caused this.
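    One common explanation for a du/df gap of this size is files that were deleted but are still held open by a running process (MySQL, a log writer, a backup job): df still counts their blocks, du no longer sees them, and the space is only released when the owning process closes the files or is restarted. A small sketch of checking for that, plus a basic SMART health check on Debian (smartmontools is assumed to be available in the standard repositories):

        # Open files with zero remaining links, i.e. deleted but not yet released
        sudo lsof +L1
        # (alternatively: sudo lsof | grep '(deleted)')

        # SMART-based health check for the SSD
        sudo apt-get install smartmontools
        sudo smartctl -a /dev/sda            # attributes, error log, overall health
        sudo smartctl -t short /dev/sda      # start a short self-test
        sudo smartctl -l selftest /dev/sda   # read the self-test result afterwards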

    Read the article

  • Rackspace Ubuntu 12.04 server stuck in initramfs after kernel upgrade

    - by Znarkus
    Can't boot after I did an aptitude full-upgrade and let it update menu.lst (I did a diff first and it looked good). This is what I've done so far in the BusyBox shell:

        mkdir /tmp/xvda1
        mount /dev/xvda1 /tmp/xvda1
        chroot /dev/xvda1
        nano /boot/grub/menu.lst

    This file looks like this:

        title  Ubuntu 12.04.1 LTS, kernel 3.2.0-31-virtual
        root   (hd0,0)
        kernel /boot/vmlinuz-3.2.0-31-virtual root=UUID=/dev/xvda1 ro quiet splash
        initrd /boot/initrd.img-3.2.0-31-virtual

        title  Ubuntu 12.04.1 LTS, kernel 3.2.0-31-virtual (recovery mode)
        root   (hd0,0)
        kernel /boot/vmlinuz-3.2.0-31-virtual root=UUID=/dev/xvda1 ro single
        initrd /boot/initrd.img-3.2.0-31-virtual

        title  Ubuntu 12.04.1 LTS, kernel 3.2.0-24-virtual
        root   (hd0,0)
        kernel /boot/vmlinuz-3.2.0-24-virtual root=UUID=/dev/xvda1 ro quiet splash
        initrd /boot/initrd.img-3.2.0-24-virtual

        title  Ubuntu 12.04.1 LTS, kernel 3.2.0-24-virtual (recovery mode)
        root   (hd0,0)
        kernel /boot/vmlinuz-3.2.0-24-virtual root=UUID=/dev/xvda1 ro single
        initrd /boot/initrd.img-3.2.0-24-virtual

        title  Ubuntu 12.04.1 LTS, kernel 3.2.0-24-generic
        root   (hd0,0)
        kernel /boot/vmlinuz-3.2.0-24-generic root=UUID=/dev/xvda1 ro quiet splash
        initrd /boot/initrd.img-3.2.0-24-generic

        title  Ubuntu 12.04.1 LTS, kernel 3.2.0-24-generic (recovery mode)
        root   (hd0,0)
        kernel /boot/vmlinuz-3.2.0-24-generic root=UUID=/dev/xvda1 ro single
        initrd /boot/initrd.img-3.2.0-24-generic

        title  Chainload into GRUB 2
        root   (hd0,0)
        kernel /boot/grub/core.img

        title  Ubuntu 12.04.1 LTS, memtest86+
        root   (hd0,0)
        kernel /boot/memtest86+.bin

    From what I remember, the upgrade added the UUID= strings. Should I remove them? Or rather, how do I get my system back online again? Thanks.

    Update: It seems I can't even edit the file:

        [ Error writing /boot/grub/menu.lst: Read-only file system ]
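    For reference, the read-only error and the odd chroot target are typical pitfalls when working from the initramfs/BusyBox shell: the root filesystem is mounted read-only, and chroot needs the mount point rather than the device node. A rough sketch of the usual sequence, reusing the paths from the question (note that root=UUID=/dev/xvda1 mixes two syntaxes; each kernel line should end up as either root=/dev/xvda1 or root=UUID=<the UUID reported by blkid>):

        mount -o remount,rw /dev/xvda1 /tmp/xvda1   # make the filesystem writable
        mount --bind /dev  /tmp/xvda1/dev
        mount --bind /proc /tmp/xvda1/proc
        mount --bind /sys  /tmp/xvda1/sys
        chroot /tmp/xvda1 /bin/sh                   # chroot into the mount point, not /dev/xvda1
        blkid /dev/xvda1                            # shows the real UUID, if UUID= syntax is wanted
        nano /boot/grub/menu.lst                    # fix the root= parameter on each kernel line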

    Read the article

  • Unable to resize ec2 ebs root volume

    - by nathanjosiah
    I have followed many of the tutorials, which pretty much all say the same thing, basically:

        1. Stop the instance
        2. Detach the volume
        3. Create a snapshot of the volume
        4. Create a bigger volume from the snapshot
        5. Attach the new volume to the instance
        6. Start the instance back up
        7. Run resize2fs /dev/xxx

    However, step 7 is where the problems start. Running resize2fs always tells me that the filesystem is already xxxxx blocks long and does nothing, even with -f passed. So I continued with the tutorials, which again all say basically the same thing:

        1. Delete all partitions
        2. Recreate them as they were, except with bigger sizes
        3. Reboot the instance and run resize2fs

    (I have tried these steps both from the live instance and by attaching the volume to another instance and running the commands there.) The main problem is that the instance won't start back up again, and the system error log provided in the AWS console doesn't show any errors. (It does, however, stop at the GRUB bootloader, which to me indicates that it doesn't like the partitions; and yes, the boot flag was toggled on the partition, with no effect.) The other thing that happens, regardless of what changes I make to the partitions, is that the instance the volume is attached to says that the partition has an invalid magic number and the super-block is corrupt. However, if I make no changes and reattach the volume, the instance runs without a problem. Can anybody shed some light on what I could be doing wrong?

    Edit: On my new 20 GB volume created from the 6 GB image, df -h says:

        Filesystem      Size  Used Avail Use% Mounted on
        /dev/xvde1      5.8G  877M  4.7G  16% /
        tmpfs           836M     0  836M   0% /dev/shm

    And fdisk -l /dev/xvde says:

        Disk /dev/xvde: 21.5 GB, 21474836480 bytes
        255 heads, 63 sectors/track, 2610 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x7d833f39

            Device Boot      Start         End      Blocks   Id  System
        /dev/xvde1               1         766     6144000   83  Linux
        Partition 1 does not end on cylinder boundary.
        /dev/xvde2             766         784      146432   82  Linux swap / Solaris
        Partition 2 does not end on cylinder boundary.

    Also, sudo resize2fs /dev/xvde1 says:

        resize2fs 1.41.12 (17-May-2010)
        The filesystem is already 1536000 blocks long.  Nothing to do!
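    The "Nothing to do!" message is consistent with the partition still being its original ~6 GB: resize2fs can only grow the filesystem up to the end of /dev/xvde1, and here the swap partition sits immediately after it. A hedged sketch of one way to handle this (growpart comes from the cloud-utils / cloud-guest-utils package; device names follow the question, and the partition editing can also be done with fdisk or parted):

        # 1. Move the obstacle: swap (/dev/xvde2) starts right after the root partition.
        sudo swapoff /dev/xvde2
        #    Delete xvde2 and recreate it at the end of the disk (fdisk/parted),
        #    or drop it entirely and use a swap file later.

        # 2. Grow partition 1 into the freed space, then grow the filesystem.
        sudo growpart /dev/xvde 1
        sudo resize2fs /dev/xvde1
        df -h /                              # should now show roughly 20 GB

        # 3. Re-create and re-enable swap; update /etc/fstab if its device/UUID changed.
        sudo mkswap /dev/xvde2 && sudo swapon /dev/xvde2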

    Read the article

  • VirtualName-based local development host behind corporate proxy (MAMP)

    - by geerlingguy
    I am behind a corporate proxy server/firewall, and this firewall seems not to be too happy with my idea of local development. On my home computer (Mac/Leopard), I have MAMP running, with a rule in /etc/hosts that directs dev.example.com to 127.0.0.1, and a virtual host set up in httpd.conf, which works great for me. However, at work I set up the exact same configuration but am not able to access dev.example.com, likely due to some address/DNS translation going on via the proxy server. Here are the relevant details from Terminal:

        $ ping dev.example.com
        PING dev.example.com (127.0.0.1): 56 data bytes
        64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.025 ms

        $ host dev.example.com
        Host dev.example.com not found: 3(NXDOMAIN)

    I've tried adding dev.example.com to the list of bypass addresses in System Preferences (the 'Bypass proxy settings for these Hosts & Domains' list), but that had no effect. Is there any way I can develop locally using name-based hosts at work? I can access localhost, but can't get to dev.example.com (or any other custom virtual hosts) here at work, which complicates other matters related to the sites I'm working on...
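    A quick way to separate the two failure modes (name resolution vs. the proxy) is to hit Apache directly while forcing the Host header; if that works, /etc/hosts and the MAMP virtual host are fine and the remaining problem is purely the proxy/PAC configuration in the browser. A small sketch, assuming curl is available and the site answers on port 80 (MAMP frequently listens on 8888 instead):

        grep dev.example.com /etc/hosts                                        # expect: 127.0.0.1  dev.example.com
        curl --noproxy '*' -H 'Host: dev.example.com' http://127.0.0.1/
        curl --noproxy '*' -H 'Host: dev.example.com' http://127.0.0.1:8888/   # if MAMP uses its default port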

    Read the article

  • mdadm lvm and ext4 slowness - How can I speed it up?

    - by beatbreaker
    I can't figure out why I'm getting such terrible times out of my mdadm array, and in particular out of the LVM partitions on it. I made the RAID:

        mdadm --create --verbose /dev/md0 --level=5 --chunk=1024 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

        # cat /proc/mdstat
        Personalities : [raid6] [raid5] [raid4]
        md0 : active raid5 sda1[0] sdd1[3] sdc1[2] sdb1[1]
              2930279424 blocks level 5, 1024k chunk, algorithm 2 [4/4] [UUUU]

    I then created the physical volume, volume group, and logical volumes, and formatted the logical volumes to ext4 using the following command, which I got from http://busybox.net/~aldot/mkfs_stride.html:

        mkfs.ext3 -b 4096 -E stride=256,stripe-width=768 /dev/datavg/blah

    Now I'm confused: these LVs ran really quick in mdadm before, but now that I've 'optimized' everything it's slower. E.g., before:

        /dev/datavg/lv_audio:
         Timing buffered disk reads:  598 MB in  3.01 seconds = 198.85 MB/sec

    but now:

        /dev/datavg/audio:
         Timing buffered disk reads:  198 MB in  3.00 seconds = 65.96 MB/sec

    That's pitiful! What has happened here? Did I not follow the instructions correctly? Can I reshape the ext4 partitions back to the defaults? (I used the defaults before and they were fine!)
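    For reference, the stride/stripe-width arithmetic for this particular array checks out, so the mkfs options themselves are unlikely to be the culprit; alignment, readahead, or simply benchmarking a different LV (lv_audio before vs. audio after) are more plausible. A small sketch of the calculation and a few things worth comparing (a sketch only; exact tune2fs option names should be checked against its man page before changing anything):

        # stride       = chunk size / fs block size = 1024 KiB / 4 KiB    = 256
        # stripe-width = stride * data disks        = 256 * (4 - 1 disks) = 768

        sudo dumpe2fs -h /dev/datavg/audio | grep -i 'stride\|stripe'   # values actually on disk
        sudo pvs -o +pe_start                                           # LVM data offset vs. the 1024 KiB chunk
        sudo blockdev --getra /dev/md0                                  # readahead on the raw array ...
        sudo blockdev --getra /dev/datavg/audio                         # ... vs. readahead on the LV
        sudo hdparm -t /dev/md0                                         # raw md speed as a baseline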

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me, it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days now. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as something like  'Doing real-world TDD in .NET , with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make:  It’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and me myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spent my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude. Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider to do professional work without having this add-in installed... The three parts of a software component Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:   First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. In either way, there has to be some sort of requirement, be it explicit or not. 
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. It’s development is entirely driven by the requirements and their executable formulation. This is the delivery, the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how  the product is developed, he is only interested in the fact that it is developed as cost-effective as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article… An example To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here… The requirement As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow and using them automatically produces high componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with: namespace Calculator {     /// <summary>     /// Defines a very simple calculator component for demo purposes.     /// </summary>     public interface ICalculator     {         /// <summary>         /// Gets the result of the last successful operation.         /// </summary>         /// <value>The last result.</value>         /// <remarks>         /// Will be <see langword="null" /> before the first successful operation.         /// </remarks>         double? LastResult { get; }       } // interface ICalculator   } // namespace Calculator So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation than about coding. The documentation must completely describe the behavior of the documented element. I normally use an IoC container or some sort of self-written provider-like model in my architecture. 
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason as that is is very simple to use. The ‘Red’ (pt. 1)   First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test       This is basically the executable formulation of what the interface definition states (part of). Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more.  Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC-container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator - derived type:        Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long, the only thing that’s left to do manually here, is to add the Ioc-specific wiring-declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point, where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the additon.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:   Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location.  This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).     Back to our Calculator again: Two more R# – clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#: Having done that, our production code finally looks like that: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         #region ICalculator           public double? LastResult { get; private set; }           public double Add(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 + operand2).Value;         }           public double Subtract(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 - operand2).Value;         }           #endregion // ICalculator           #region Implementation (Helper)           private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)         {             if (operand1 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand1");             }               if (operand2 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand2");             }         }           #endregion // Implementation (Helper)       } // class Calculator   } // namespace Calculator But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add? Derick immediately answers his own question: So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it. I couldn’t state it more precise. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code. 
So code quality/readability really makes a HUGE difference for me – sometimes it can be even the difference between project success and failure… Conclusions The above described development process emerged over the years, and there were mainly two things that guided its evolution (you might call it eternal principles, personal beliefs, or anything in between): Test-driven development is the normal, natural way of writing software, code-first is exceptional. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…) It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to go into the right direction…). The test code serves ‘only’ to make the production code work. But it’s the number of delivered features which solely counts at the end of the day - no matter how much test code you wrote or how good it is. With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I’d do either way…).  As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code (This might sound heavy, but that is mainly due to the fact that software development standards only begin to evolve. The entire software development profession is very young, historically seen; only at the very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio sounds no longer extraordinary…) Although the above might look like very much unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for ‘saving’ a few 100 bucks -  is just not acceptable and a very bad decision in business terms (though I quite some times have seen and heard that…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows… The here described round-trip will take me about five to ten minutes in my real-world development practice. I guess it’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the so manufactured ‘product’ is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The here described development method might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g. 
if time-to-market is crucial for a software project. So this is a business decision in the end. It’s just that you have to know what you’re doing and what consequences this might have… Some last words First, I’d like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend for reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn’t have done that without this inspiration. I really enjoy that kind of discussions… I agree with him in all respects. But I don’t know (yet?) how to bring his insights into the described production process without slowing things down. The above described method proved to be very “good enough” in my practical experience. But of course, I’m open to suggestions here… My rationale for now is: If the test is initially red during the red-green-refactor cycle, the ‘right reason’ is: it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, ‘red’ certainly must occur for the ‘right reason’: in this phase, ‘red’ MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!

    Read the article

  • Can I install two Ubuntu versions on the same machine?

    - by Abh
    Hello, I have Ubuntu 10.10 32-bit already installed on my machine. I am using MongoDB, and it does not work properly on a 32-bit machine, so I want to install 64-bit Ubuntu 10.10 on another partition (so that I can have both the 32-bit and the 64-bit version). Is it okay to install both 32-bit and 64-bit? I mean, will it give any problems? On which partition should I install the 64-bit version? My partitions are as follows:

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1              37G   11G   25G  30% /
        none                  1.4G  260K  1.4G   1% /dev
        none                  1.4G  776K  1.4G   1% /dev/shm
        none                  1.4G  244K  1.4G   1% /var/run
        none                  1.4G     0  1.4G   0% /var/lock
        /dev/sda6             129G   73G   50G  60% /home
        /dev/sda7             127G   76G   45G  64% /vol

    Waiting for your replies.

    Read the article

  • Why does my Grub have two entries for Windows?

    - by Tomas Lycken
    I have a dual-boot system with Ubuntu 12.04 and Windows 7, using GRUB 2 (with Burg) as the boot loader. For some reason, the Windows installation shows up twice in the boot menu:

        Ubuntu GNU/Linux, with Linux 3.2.0-24-generic
        Ubuntu GNU/Linux, with Linux 3.2.0-24-generic (recovery mode)
        Windows 7 (loader) (on /dev/sda1)
        Windows 7 (loader) (on /dev/sda2)

    If I look at my partition table, /dev/sda2 is C:\ of the Windows installation, and /dev/sda1 is the "System Reserved" partition (which, IIRC, holds Windows' own bootloader). Furthermore, gparted shows /dev/sda2 - but no other partition - with a boot flag. What is going on here? I'd like to have only the entries for Ubuntu and one entry for Windows in my boot menu - how do I remove one of them?
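    Both Windows entries come from os-prober finding a boot loader on the "System Reserved" partition (sda1) as well as on C:\ (sda2), and either one should start the same Windows. A sketch for confirming that and for regenerating the menu after any change; actually hiding one entry usually means filtering what os-prober reports (e.g. with a tool such as Grub Customizer), which is not shown here:

        sudo os-prober                           # should list the loaders on sda1 and sda2
        grep "^menuentry" /boot/grub/grub.cfg    # the entries GRUB currently generates
        sudo update-grub                         # regenerate grub.cfg after any change
        # (with Burg in charge of the menu, the equivalent is: sudo update-burg)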

    Read the article

  • Kernel Panic: Not booting after upgrade from 10.04 to 12.04

    - by Jitesh
    I upgraded from 10.04 to 12.04 LTS. The upgrade went fine, and I even restarted a couple of times. Then the next day, while booting into Ubuntu, after GRUB it gave the error:

        Kernel panic - not syncing: VFS: unable to mount root fs on unknown block (0,0)

    I then booted into a live CD and tried the following commands, based on other posts on this forum:

        sudo fdisk -l

    As the 8 was on /dev/sda1:

        sudo mount /dev/sda1 /mnt
        sudo mount --bind /dev /mnt/dev

    Now I got the message:

        mount: mount point /mnt/dev does not exist

    Then I tried:

        sudo mount --bind /proc /mnt/proc

    Again I got the message:

        mount point /mnt/proc does not exist

    Then I tried:

        sudo chroot /mnt

    and got:

        chroot: failed to run command '/bin/bash': No such file or directory

    Now I have no clue what to do next, and I am unable to boot into Ubuntu. Please help. Jitesh
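    The "/mnt/dev does not exist" and missing /bin/bash errors suggest that /dev/sda1 is not actually the root filesystem (it may be /boot or some other partition), so the chroot has nothing to run. A hedged sketch of the usual live-CD repair sequence once the correct partition has been identified ("X" below is a placeholder for the partition that really holds /):

        sudo fdisk -l                        # find the Linux partitions
        sudo mount /dev/sdaX /mnt            # X = the partition that holds /, not /boot
        ls /mnt                              # sanity check: expect bin, etc, home, usr, var ...
        sudo mount --bind /dev  /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys  /mnt/sys
        sudo chroot /mnt
        update-initramfs -u -k all           # rebuild the initramfs the panic complains about
        update-grub
        exit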

    Read the article

  • Can't remove the libpcap0.8 package

    - by Yogesh
    I am getting error when running apt-get remove root@System:~/Downloads# sudo apt-get remove The following packages have unmet dependencies: libpcap0.8 : Breaks: libpcap0.8:i386 (!= 1.4.0-2) but 1.5.3-2 is installed libpcap0.8:i386 : Breaks: libpcap0.8 (!= 1.5.3-2) but 1.4.0-2 is installed libpcap0.8-dev : Depends: libpcap0.8 (= 1.5.3-2) but 1.4.0-2 is installed E: Unmet dependencies. Try using -f. and when I ran apt-get remove -f this is what happens: root@System:~/Downloads# sudo apt-get remove -f The following extra packages will be installed: libpcap0.8 The following packages will be upgraded: libpcap0.8 1 upgraded, 0 newly installed, 0 to remove and 365 not upgraded. 2 not fully installed or removed. Need to get 0 B/110 kB of archives. After this operation, 13.3 kB of additional disk space will be used. Do you want to continue? [Y/n] y (Reading database ... 163539 files and directories currently installed.) Preparing to unpack .../libpcap0.8_1.5.3-2_amd64.deb ... Unpacking libpcap0.8:amd64 (1.5.3-2) over (1.4.0-2) ... dpkg: error processing archive /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb (--unpack): trying to overwrite shared '/usr/share/man/man7/pcap-filter.7.gz', which is different from other instances of package libpcap0.8:amd64 dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Processing triggers for man-db (2.6.7.1-1) ... Errors were encountered while processing: /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) root@System:~/Downloads# clear root@System:~/Downloads# sudo apt-get remove -f Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: libpcap0.8 The following packages will be upgraded: libpcap0.8 1 upgraded, 0 newly installed, 0 to remove and 365 not upgraded. 2 not fully installed or removed. Need to get 0 B/110 kB of archives. After this operation, 13.3 kB of additional disk space will be used. Do you want to continue? [Y/n] y (Reading database ... 163539 files and directories currently installed.) Preparing to unpack .../libpcap0.8_1.5.3-2_amd64.deb ... Unpacking libpcap0.8:amd64 (1.5.3-2) over (1.4.0-2) ... dpkg: error processing archive /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb (--unpack): trying to overwrite shared '/usr/share/man/man7/pcap-filter.7.gz', which is different from other instances of package libpcap0.8:amd64 dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Processing triggers for man-db (2.6.7.1-1) ... Errors were encountered while processing: /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) root@System:~/Downloads# root@System:~/Downloads# sudo apt-get check Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: libpcap0.8 : Breaks: libpcap0.8:i386 (!= 1.4.0-2) but 1.5.3-2 is installed libpcap0.8:i386 : Breaks: libpcap0.8 (!= 1.5.3-2) but 1.4.0-2 is installed libpcap0.8-dev : Depends: libpcap0.8 (= 1.5.3-2) but 1.4.0-2 is installed E: Unmet dependencies. Try using -f. 
root@System:~/Downloads# apt-cache policy libpcap0.8:amd64 libpcap0.8 libpcap0.8-dev libpcap0.8: Installed: 1.4.0-2 Candidate: 1.5.3-2 Version table: 1.5.3-2 0 500 http://in.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages *** 1.4.0-2 0 100 /var/lib/dpkg/status libpcap0.8: Installed: 1.4.0-2 Candidate: 1.5.3-2 Version table: 1.5.3-2 0 500 http://in.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages *** 1.4.0-2 0 100 /var/lib/dpkg/status libpcap0.8-dev: Installed: 1.5.3-2 Candidate: 1.5.3-2 Version table: *** 1.5.3-2 0 500 http://in.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages 100 /var/lib/dpkg/status root@System:~/Downloads# root@System:~/Downloads# sudo apt-get -f remove libpcap0.8 libpcap0.8-dev libpcap0.8-dev:i386 libpcap0.8:i386 Reading package lists... Done Building dependency tree Reading state information... Done Package 'libpcap0.8-dev:i386' is not installed, so not removed. Did you mean 'libpcap0.8-dev'? You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: libpcap-dev : Depends: libpcap0.8-dev but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). root@System:~/Downloads# sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: libpcap0.8 The following packages will be upgraded: libpcap0.8 1 upgraded, 0 newly installed, 0 to remove and 365 not upgraded. 2 not fully installed or removed. Need to get 0 B/110 kB of archives. After this operation, 13.3 kB of additional disk space will be used. Do you want to continue? [Y/n] y (Reading database ... 163539 files and directories currently installed.) Preparing to unpack .../libpcap0.8_1.5.3-2_amd64.deb ... Unpacking libpcap0.8:amd64 (1.5.3-2) over (1.4.0-2) ... dpkg: error processing archive /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb (--unpack): trying to overwrite shared '/usr/share/man/man7/pcap-filter.7.gz', which is different from other instances of package libpcap0.8:amd64 dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Processing triggers for man-db (2.6.7.1-1) ... Errors were encountered while processing: /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) root@System:~/Downloads#
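    One common way past this particular dpkg error (two instances of libpcap0.8 shipping different copies of the same man page) is to let dpkg overwrite the conflicting file once and then have apt finish the upgrade. Force options should be used with care; this is only a sketch based on the package and path shown in the output above:

        sudo dpkg -i --force-overwrite /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb
        sudo apt-get -f install                 # let apt settle the remaining dependencies
        sudo apt-get install libpcap0.8:i386    # bring the i386 copy up to 1.5.3-2 too, if it is still at 1.4.0-2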

    Read the article

  • Grub bootloader goes fail to boot in dual boot system after installing Ubuntu 12.10

    - by K.K Patel
    I had 12.04 on my dual-boot system. Yesterday I downloaded Ubuntu 12.10, made a bootable USB and chose the Upgrade option in the installer. After the installation, GRUB failed to boot my machine. I tried the following to fix the GRUB bootloader. I fixed the same problem on Ubuntu 12.04 using a live USB, but that solution does not work for Ubuntu 12.10. Here is exactly where it fails. I followed these steps after booting the live USB and opening a terminal:

        1) sudo fdisk -l                   # to see where Linux is installed
        2) sudo mount /dev/sda9 /mnt       # sda9 is my Linux partition
        3) sudo mount /dev/sda9 /mnt/boot
        4) sudo mount --bind /dev /mnt/dev
        5) sudo chroot /mnt

    (No problems with these steps; they complete perfectly.)

        6) grub-install /dev/sda

    When I type this command I get an error:

        source_dir doesn't exist. Please specify --target or --directory

    How can I solve this?
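    The "--target or --directory" complaint typically shows up when grub-install runs against the installed system without a complete chroot, so it cannot find its own support files. A sketch of a fuller chroot, reusing the partition from the question; note that /dev/sda9 should not be mounted a second time on /mnt/boot unless /boot really is a separate partition:

        sudo mount /dev/sda9 /mnt
        sudo mount --bind /dev  /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys  /mnt/sys
        sudo chroot /mnt
        grub-install /dev/sda                # now uses the installed system's own GRUB files
        update-grub
        exit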

    Read the article

  • Bridged virtual interface is not available or visible to ifconfig.

    - by Omniwombat
    Hello all. I'm running Ubuntu 9.04, kernel 2.6.28-18, and vmware-server 2.0.1. I'm attempting to setup a virtual linux machine to use a bridged interface rather than NAT or host-only. Both NAT and host-only work just fine. When running vmware-config.pl, I set /dev/vmnet0 to bridge eth0, /dev/vmnet1 to host-only, and /dev/vmnet8 to NAT. When I run ifconfig -a I see the physical interface (eth0), vmnet1 and vmnet8 both of which are up and have IP addresses assigned to them. I also see other various interfaces that are not relevant here. In the web console, when I ask that the guest machine's network card be bridged, it states that a bridged setup is "Not available" and shows the disabled device icon. Inside the guest machine, I do have an eth0 interface which I can set to anything I like, however it can't see my external network, or the host. I do see errors in my vmware/hostd.log which state: "The network bridge on device vmnet0 is not running. The virtual machine will not be able to communicate with the host or with other machines on your network" which confirms the problem. vmnet-bridge is running, and I see the following in my process table: /usr/bin/vmnet-bridge -d /var/run/vmnet-bridge-0.pid -n 0 -i eth0 I confirm that the /var/run/vmnet-bridge-0.pid file is there and that it points to the correct process. I saw this question relating to Ubuntu 9.04 and bridged interfaces, in which the poster determined that the vsock library was not getting built due to a flaw in the vmware-config.pl script. I applied the patch, reran the script, and confirm that vsock.ko and vsock.o are in my /lib directory structure. vsock does show up in an lsmod. My /etc/vmware directory has /vmnet1 and /vmnet8 subdirectories. They contain configuration utilities for running DHCP and nat type services as expected. There is no vmnet0 subdirectory. My /etc/vmware/netmap.conf file DOES show entries for vmnet0; both the name and the device as I configured it from the script. My /dev directory contains devices vmnet0 through vmnet9. They have major device number 119, and minor device numbers 0 through 9. /proc/net/dev shows statistics for vmnet1 and vmnet8, but not vmnet0. I have a /proc/vmnet directory, but it's empty. When I start or stop the vmware service with /etc/init.d/vmware start, I see the following: Starting VMware services: Virtual machine monitor done Virtual machine communication interface done VM communication interface socket family: done Virtual ethernet done Bridged networking on /dev/vmnet0 done Host-only networking on /dev/vmnet1 (background) done DHCP server on /dev/vmnet1 done Host-only networking on /dev/vmnet8 (background) done DHCP server on /dev/vmnet8 done NAT service on /dev/vmnet8 done VMware Server Authentication Daemon (background) done Shared Memory Available done Starting VMware management services: VMware Server Host Agent (background) done VMware Virtual Infrastructure Web Access Starting VMware autostart virtual machines: Virtual machines done Nothing appears to be wrong there. What n00b thing am I doing such that vmnet0 and only vmnet0 does not show up in the interface list?

    Read the article

  • Error when trying to get movies off my Camera

    - by matt wilkie
    How do I retrieve movies and audio files from my Canon PowerShot S5 IS camera? Shotwell only sees the pictures. Mounting the camera via "Places > Canon Digital Camera" and then double-clicking the resulting desktop icon yields this error:

        could not display "gphoto2://[usb:001,002]/"

    There is nothing in /media.

    Update: output of sudo fdisk -l:

        Disk /dev/sda: 40.0 GB, 40007761920 bytes
        255 heads, 63 sectors/track, 4864 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x3ee33ee2

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1            2364        2612     1999872   82  Linux swap / Solaris
        /dev/sda2   *           1        2364    18979840   83  Linux
        /dev/sda3            2613        4864    18089159+   f  W95 Ext'd (LBA)
        /dev/sda5            2613        4864    18089158+   b  W95 FAT32

        Partition table entries are not in disk order

    Solution: install gphotofs; Nautilus then shows the camera contents much like a standard USB storage device (though note that gphotofs does not yet support adding to or deleting contents).
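    Expanding on the gphotofs solution, the mount/copy/unmount cycle from a terminal might look roughly like this (the mount point and destination directory are arbitrary names):

        sudo apt-get install gphotofs
        mkdir -p ~/camera ~/Videos/powershot-s5
        gphotofs ~/camera                        # camera switched on and not grabbed by Shotwell
        cp -r ~/camera/* ~/Videos/powershot-s5/  # movies and audio files copy like ordinary files
        fusermount -u ~/camera                   # unmount when finished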

    Read the article

  • How to access photos from my Genius G-Shot P7545 camera?

    - by Florin
    I have a Genius G-Shot P7545 camera. In Windows, I just had to plug the camera into a USB port and access it like a USB stick. I tried to do the same in Ubuntu 10.10 with no result. How can I access the photos? With these commands I get:

        lsusb
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 015: ID 0784:1692 Vivitar, Inc.
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

        sudo fdisk -l
        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xcf5acf5a

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1        2550    20478976   83  Linux
        /dev/sda3            2550       19458   135810048    f  W95 Ext'd (LBA)
        /dev/sda5            2550       19458   135809024   83  Linux

        Unable to read /dev/sdb
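    Since lsusb does see the camera (the Vivitar ID on bus 002), it is probably speaking PTP rather than USB mass storage, which is why no readable /dev/sdb partition shows up. A sketch using the gphoto2 command-line client (assuming the gphoto2 package from the Ubuntu repositories):

        sudo apt-get install gphoto2
        gphoto2 --auto-detect                # the camera should appear here if PTP works
        mkdir -p ~/Pictures/g-shot && cd ~/Pictures/g-shot
        gphoto2 --list-files                 # list what is stored on the camera
        gphoto2 --get-all-files              # download everything into the current directory
        # If nothing is detected, check the camera's USB-mode menu for a
        # "mass storage" setting, which would make it appear as /dev/sdb instead.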

    Read the article

  • Switch from encrypted partition to unencrypted (Error: cryptsetup: evms_activate is not available)

    - by Chris Lercher
    I initially installed Ubuntu 11.04 with an encrypted file system (from the alternate install CD: Guided Partitioning, LVM encrypted). Now I wanted to change this setup to have my root file system on an unencrypted partition. I had the following setup before: /dev/mapper/my-root on / type ext4 (rw,noatime,errors=remount-ro,commit=0,commit=0) /dev/sda1 on /boot type ext2 (rw,noatime) I backed up /, reformatted /dev/sda5 (which had contained the encrypted LVM device) to an ext3 partition, and restored / to that partition. I edited /etc/fstab, removed the line /dev/mapper/my-root / ..., and added the line: /dev/sda5 / ext3 noatime,rw,errors=remount-ro,commit=0 0 1 I edited /etc/crypttab, and commented out the single entry. On reboot, I get the grub screen as usual, but then I get the message cryptsetup:evms_activate is not available, waiting for encrypted source device. I tried reinstalling Grub2 using a LiveCD with the ChRoot method, but that didn't make any difference. Why is Ubuntu still searching for an encrypted device?
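    The message comes from the old initramfs, which still contains the cryptsetup hooks and therefore keeps waiting for an encrypted source device that no longer exists. Rebuilding the initramfs (and GRUB's configuration) from a chroot of the new unencrypted root is the usual fix; a sketch using the partition layout described above:

        sudo mount /dev/sda5 /mnt
        sudo mount /dev/sda1 /mnt/boot
        for d in /dev /dev/pts /proc /sys; do sudo mount --bind "$d" "/mnt$d"; done
        sudo chroot /mnt
        update-initramfs -u -k all     # regenerates the initramfs without the cryptroot hook
        update-grub                    # root= should now point at /dev/sda5 (or its UUID)
        exit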

    Read the article

  • Can't see my partitions after grub recovery

    - by dimbo
    I stupidly inserted the windows CD into my dual boot Ubuntu 11.10 / windows xp system. I just wanted to see if I could install windows on my external usb HD, but didn't actually go ahead with the install. It seems like the windows CD messed up my MBR and I had to use boot-repair and the ubuntu 11.10 live CD to gain access to ubuntu again. It seems to boot up a little differently (slower) but works. However, I now cant see any of my partitions in nautilus (there are 3). When I open gparted, it just shows my whole hard drive as unallocated (I know it has a windows partition that works and my ubuntu partition that I am using now). If I insert a usb pen, it is also not visible in nautilus but in gparted shows up as a FAT32 partition (which is correct although I cannot access it). sudo fdisk -l gives the following : demian@dimbo-TP:~$ sudo fdisk -l [sudo] password for demian: omitting empty partition (5) Disk /dev/sda: 320.1 GB, 320072933376 bytes 240 heads, 63 sectors/track, 41345 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x877b877b Device Boot Start End Blocks Id System /dev/sda1 * 63 63842309 31921123+ 7 HPFS/NTFS/exFAT /dev/sda2 63844350 133484084 34819867+ 5 Extended /dev/sda3 127636488 133484084 2923798+ 82 Linux swap / Solaris /dev/sda4 133484085 625137344 245826630 83 Linux /dev/sda5 63844352 123445247 29800448 83 Linux /dev/sda6 123447296 127635455 2094080 82 Linux swap / Solaris Disk /dev/sdb: 8100 MB, 8100249600 bytes 12 heads, 40 sectors/track, 32960 cylinders, total 15820800 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xc3072e18 Device Boot Start End Blocks Id System /dev/sdb1 * 5992 15820799 7907404 c W95 FAT32 (LBA) Here is my grub.conf file. Like I said before, I had to use the 'boot-repair' utility with the live cd to get grub working again. I think that this utility maybe created a new grub for me because the startup is definitely not the same. The screen goes blank for a while, and then the ubuntu loading dots come up for a brief moment (instead of during the whole startup process) before the dektop is displayed. 
    Anyway:

# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#

### BEGIN /etc/grub.d/00_header ###
if [ -s $prefix/grubenv ]; then
  set have_grubenv=true
  load_env
fi
set default="0"
if [ "${prev_saved_entry}" ]; then
  set saved_entry="${prev_saved_entry}"
  save_env saved_entry
  set prev_saved_entry=
  save_env prev_saved_entry
  set boot_once=true
fi

function savedefault {
  if [ -z "${boot_once}" ]; then
    saved_entry="${chosen}"
    save_env saved_entry
  fi
}

function recordfail {
  set recordfail=1
  if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi
}

function load_video {
  insmod vbe
  insmod vga
  insmod video_bochs
  insmod video_cirrus
}

insmod part_msdos
insmod ext2
set root='(hd0,msdos6)'
search --no-floppy --fs-uuid --set=root 5349ff67-b7b7-489f-a881-ae49c1dcd84a
if loadfont /usr/share/grub/unicode.pf2 ; then
  set gfxmode=auto
  load_video
  insmod gfxterm
  insmod part_msdos
  insmod ext2
  set root='(hd0,msdos6)'
  search --no-floppy --fs-uuid --set=root 5349ff67-b7b7-489f-a881-ae49c1dcd84a
  set locale_dir=($root)/boot/grub/locale
  set lang=en_US
  insmod gettext
fi
terminal_output gfxterm
if [ "${recordfail}" = 1 ]; then
  set timeout=10
else
  set timeout=10
fi
### END /etc/grub.d/00_header ###

### BEGIN /etc/grub.d/05_debian_theme ###
set menu_color_normal=white/black
set menu_color_highlight=black/light-gray
if background_color 44,0,30; then
  clear
fi
### END /etc/grub.d/05_debian_theme ###

### BEGIN /etc/grub.d/10_linux ###
if [ ${recordfail} != 1 ]; then
  if [ -e ${prefix}/gfxblacklist.txt ]; then
    if hwmatch ${prefix}/gfxblacklist.txt 3; then
      if [ ${match} = 0 ]; then
        set linux_gfx_mode=keep
      else
        set linux_gfx_mode=text
      fi
    else
      set linux_gfx_mode=text
    fi
  else
    set linux_gfx_mode=keep
  fi
else
  set linux_gfx_mode=text
fi
export linux_gfx_mode
if [ "$linux_gfx_mode" != "text" ]; then load_video; fi
menuentry 'Ubuntu, with Linux 3.0.0-12-generic' --class ubuntu --class gnu-linux --class gnu --class os {
  recordfail
  set gfxpayload=$linux_gfx_mode
  insmod gzio
  insmod part_msdos
  insmod ext2
  set root='(hd0,msdos6)'
  search --no-floppy --fs-uuid --set=root 5349ff67-b7b7-489f-a881-ae49c1dcd84a
  linux /boot/vmlinuz-3.0.0-12-generic root=UUID=5349ff67-b7b7-489f-a881-ae49c1dcd84a ro quiet splash vt.handoff=7
  initrd /boot/initrd.img-3.0.0-12-generic
}
menuentry 'Ubuntu, with Linux 3.0.0-12-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {
  recordfail
  insmod gzio
  insmod part_msdos
  insmod ext2
  set root='(hd0,msdos6)'
  search --no-floppy --fs-uuid --set=root 5349ff67-b7b7-489f-a881-ae49c1dcd84a
  echo 'Loading Linux 3.0.0-12-generic ...'
  linux /boot/vmlinuz-3.0.0-12-generic root=UUID=5349ff67-b7b7-489f-a881-ae49c1dcd84a ro recovery nomodeset
  echo 'Loading initial ramdisk ...'
  initrd /boot/initrd.img-3.0.0-12-generic
}
### END /etc/grub.d/10_linux ###

### BEGIN /etc/grub.d/20_linux_xen ###
### END /etc/grub.d/20_linux_xen ###

### BEGIN /etc/grub.d/20_memtest86+ ###
menuentry "Memory test (memtest86+)" {
  insmod part_msdos
  insmod ext2
  set root='(hd0,msdos6)'
  search --no-floppy --fs-uuid --set=root 5349ff67-b7b7-489f-a881-ae49c1dcd84a
  linux16 /boot/memtest86+.bin
}
menuentry "Memory test (memtest86+, serial console 115200)" {
  insmod part_msdos
  insmod ext2
  set root='(hd0,msdos6)'
  search --no-floppy --fs-uuid --set=root 5349ff67-b7b7-489f-a881-ae49c1dcd84a
  linux16 /boot/memtest86+.bin console=ttyS0,115200n8
}
### END /etc/grub.d/20_memtest86+ ###

### BEGIN /etc/grub.d/30_os-prober ###
menuentry "Microsoft Windows XP Professional (on /dev/sda1)" --class windows --class os {
  insmod part_msdos
  insmod ntfs
  set root='(hd0,msdos1)'
  search --no-floppy --fs-uuid --set=root 72A89361A89322A1
  drivemap -s (hd0) ${root}
  chainloader +1
}
### END /etc/grub.d/30_os-prober ###

### BEGIN /etc/grub.d/40_custom ###
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
### END /etc/grub.d/40_custom ###

### BEGIN /etc/grub.d/41_custom ###
if [ -f $prefix/custom.cfg ]; then
  source $prefix/custom.cfg;
fi
### END /etc/grub.d/41_custom ###

How can I get things back to normal? Thanks, Demian.
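    A minimal sketch of one way to start, assuming the real problem is a damaged partition table (GParted showing the whole disk as unallocated usually means overlapping or inconsistent entries). Run it from the live CD; /dev/sdaX is a placeholder for the actual Ubuntu root partition, so check sudo blkid or lsblk -f first.

sudo parted /dev/sda unit s print      # parted reports overlap errors that gparted hides
sudo apt-get install testdisk          # needs network access in the live session
sudo testdisk /dev/sda                 # Analyse -> Quick Search, review, then write a consistent table

# Once the table is readable again, reinstall GRUB from a chroot:
sudo mount /dev/sdaX /mnt
for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
sudo chroot /mnt update-grub
sudo chroot /mnt grub-install /dev/sda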

    Read the article

  • Ubuntu Server 13.04.3 doesn't boot w/ EFI

    - by user1004816
    I was actually trying to install Debian Wheezy (which failed horribly), then tried Ubuntu Server 13.04 and got exactly the same problems as with Debian: after installing, the system doesn't show any boot selection and tells me "Missing operating system". My setup is pretty simple:

/dev/sdc  - 1 TB HDD (+ 3 other NTFS HDDs)
/dev/sdc1 - EFI, 100 MiB, bootable
/dev/sdc4 - ext4, 65 GiB, Ubuntu/Debian

(sdc2 & sdc3 are NTFS with data; I'm short on SATA ports, so there is no OS-only HDD/SSD.) GRUB seems to be installed on /dev/sdc4; /dev/sdc1 only contains an "EFI" folder. Not sure if that's correct. I used UNetbootin on OS X to make an 8 GB USB drive bootable and used the standard amd64 ISO, running a Perl script which fixes a couple of naming errors (different story). Using this tutorial and actually disabling UEFI and using legacy only didn't work either; the USB drive didn't even bother to boot. I'm pretty clueless here. I'd just like to install and use either Debian or Ubuntu Server!
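    A hedged sketch, assuming the firmware is actually booting in UEFI mode (a "Missing operating system" message often means it fell back to legacy/MBR boot, so the boot mode and boot order in the firmware setup are worth checking first). From a live or rescue session booted in UEFI mode, the EFI build of GRUB can be reinstalled against the layout described above (/dev/sdc4 = root, /dev/sdc1 = EFI System Partition); verify the device names with lsblk -f before copying.

sudo mount /dev/sdc4 /mnt
sudo mkdir -p /mnt/boot/efi
sudo mount /dev/sdc1 /mnt/boot/efi
for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
sudo cp /etc/resolv.conf /mnt/etc/resolv.conf        # so apt works inside the chroot
sudo chroot /mnt apt-get install grub-efi-amd64
sudo chroot /mnt grub-install --target=x86_64-efi --efi-directory=/boot/efi
sudo chroot /mnt update-grub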

    Read the article

  • Subdomain redirects to different server, maintaining original URL

    - by Jason Baker
    I just took over the webmaster position in my organization, and I'm trying to set up a development environment separate from the production environment. The two environments are hosted with different companies, both on shared hosting plans. Right now, production is at domain.com and development is at domain2.com. What I want is to direct my development team to dev.domain.com. More importantly, I don't want the URL to change from dev.domain.com back to domain2.com, but I also want it to keep reflecting page changes. For example, if my dev team navigates from dev.domain.com to a page (called "page"), I'd like the URL to show as dev.domain.com/page. Is this possible, or am I just dreaming?
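    A minimal sketch, assuming you control an Apache server that can answer for dev.domain.com (on a pure shared-hosting plan you may only be able to do the DNS half and add dev.domain.com as an addon domain on the second host instead). The idea: point dev.domain.com at that server in DNS, then reverse-proxy requests to domain2.com so the browser keeps showing dev.domain.com/page as the team navigates. Domain names are the placeholders from the question.

sudo a2enmod proxy proxy_http
sudo tee /etc/apache2/sites-available/dev.domain.com.conf > /dev/null <<'EOF'
<VirtualHost *:80>
    ServerName dev.domain.com
    ProxyPass        / http://domain2.com/
    ProxyPassReverse / http://domain2.com/
</VirtualHost>
EOF
sudo a2ensite dev.domain.com
sudo service apache2 reload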

    Read the article

  • Mounting ntfs windows drive and retrieving its data

    - by Sarmad
    Hi, a friend gave me his laptop, which had problems booting (Windows). He wants the data on the drive to be retrieved and saved. I have Ubuntu Server available to mount his hard drive and check the contents. Now that I have mounted it, I see only boot.sdi, sources and System Volume Information. I have no idea why that is. How can I access and retrieve the contents? The results of sudo fdisk -l are as follows:

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048     3074047     1536000   27  Hidden NTFS WinRE
/dev/sdb2   *     3074048   246945791   121935872    7  HPFS/NTFS/exFAT
/dev/sdb3       246945792   416354295    84704252    7  HPFS/NTFS/exFAT
/dev/sdb4       416354304   488394751    36020224    f  W95 Extended (LBA)
/dev/sdb5       416356352   488394751    36019200    7  HPFS/NTFS/exFAT

Please help me in this regard.
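    A sketch assuming the partition that was mounted is /dev/sdb1 ("Hidden NTFS WinRE" in the table above), i.e. the Windows recovery partition, which would explain seeing only boot.sdi, sources and System Volume Information. The Windows installation and data most likely live on /dev/sdb2 and /dev/sdb3, so mounting those read-only is the safer way to copy files off:

sudo apt-get install ntfs-3g
sudo mkdir -p /mnt/win /mnt/data
sudo mount -t ntfs-3g -o ro /dev/sdb2 /mnt/win      # read-only is safer for recovery
sudo mount -t ntfs-3g -o ro /dev/sdb3 /mnt/data
ls /mnt/win /mnt/data
# then copy what is needed, e.g. (destination path is a placeholder):
rsync -a /mnt/win/Users/ /home/sarmad/rescued-users/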

    Read the article

  • Reconstructing the disk order in RAID 6 with 7 disks

    - by rkotulla
    a little background to this question first: I am running a RAID-6 within a QNAP TS869L external RAID/NAS system. I started with 5 disks of 3 TB each back in the day, and later added another 2 disks of 3 TB to the RAID. The QNAP internals handled the growing and re-syncing etc., and everything seemed to be perfectly fine. About 2 weeks ago, I had one of the disks (disk #5; disk #2 had gone bad in the meantime) fail, and somehow (I have no idea why) disks 1 and 2 also got kicked out of the array. I replaced disk #5, but the RAID didn't start working again. After some calls to QNAP technical support, they re-created the array (using mdadm --create --force --assume-clean ...), but the resulting array couldn't find a filesystem, and I was kindly referred to a data recovery company that I can't afford. After some digging through old log files, resetting the disk to factory default, etc., I found a few errors that were made during this re-create - I wish I still had some of the original metadata, but unfortunately I don't (I definitely learned that lesson). I'm currently at the point where I know the correct chunk size (64K) and metadata version (1.0; the factory default was 0.9, but from what I read 0.9 doesn't handle disks over 2 TB, and mine are 3 TB), and I can now find the ext4 filesystem that should be on the disks. The only variable left to determine is the right disk order! I started using the description found in answer #4 of "Recover RAID 5 data after created new array instead of re-using", but am a little confused about what the order should be for a proper RAID-6. RAID-5 is pretty well documented in a number of places, but RAID-6 much less so. Also, does the layout, i.e. the distribution of parity and data chunks across the disks, change after growing the array from 5 to 7 disks, or does the re-sync re-organize them the same way a native 7-disk RAID-6 would have been laid out?
    Thanks. Some more mdadm output that might be helpful:

mdadm version:

[~] # mdadm --version
mdadm - v2.6.3 - 20th August 2007

mdadm details from one of the disks in the array:

[~] # mdadm --examine /dev/sda3
/dev/sda3:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa
           Name : 0
  Creation Time : Tue Jun 10 10:27:58 2014
     Raid Level : raid6
   Raid Devices : 7
  Used Dev Size : 5857395112 (2793.02 GiB 2998.99 GB)
     Array Size : 29286975360 (13965.12 GiB 14994.93 GB)
      Used Size : 5857395072 (2793.02 GiB 2998.99 GB)
   Super Offset : 5857395368 sectors
          State : clean
    Device UUID : 7c572d8f:20c12727:7e88c888:c2c357af
    Update Time : Tue Jun 10 13:01:06 2014
       Checksum : d275c82d - correct
         Events : 7036
     Chunk Size : 64K
     Array Slot : 0 (0, 1, failed, 3, failed, 5, 6)
    Array State : Uu_u_uu 2 failed

mdadm details for the array in the current disk order (based on my best guess, reconstructed from old log files):

[~] # mdadm --detail /dev/md0
/dev/md0:
        Version : 01.00.03
  Creation Time : Tue Jun 10 10:27:58 2014
     Raid Level : raid6
     Array Size : 14643487680 (13965.12 GiB 14994.93 GB)
  Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB)
   Raid Devices : 7
  Total Devices : 5
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Tue Jun 10 13:01:06 2014
          State : clean, degraded
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0
     Chunk Size : 64K
           Name : 0
           UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa
         Events : 7036

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3
       2       0        0        2      removed
       3       8       51        3      active sync   /dev/sdd3
       4       0        0        4      removed
       5       8       99        5      active sync   /dev/sdg3
       6       8       83        6      active sync   /dev/sdf3

output from /proc/mdstat (md8, md9, and md13 are internally used RAIDs holding swap, etc.; the one I'm after is md0):

[~] # more /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid6 sdf3[6] sdg3[5] sdd3[3] sdb3[1] sda3[0]
      14643487680 blocks super 1.0 level 6, 64k chunk, algorithm 2 [7/5] [UU_U_UU]

md8 : active raid1 sdg2[2](S) sdf2[3](S) sdd2[4](S) sdc2[5](S) sdb2[6](S) sda2[1] sde2[0]
      530048 blocks [2/2] [UU]

md13 : active raid1 sdg4[3] sdf4[4] sde4[5] sdd4[6] sdc4[2] sdb4[1] sda4[0]
      458880 blocks [8/7] [UUUUUUU_]
      bitmap: 21/57 pages [84KB], 4KB chunk

md9 : active raid1 sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sda1[0] sdb1[1]
      530048 blocks [8/7] [UUUUUUU_]
      bitmap: 37/65 pages [148KB], 4KB chunk

unused devices: <none>
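    A heavily hedged sketch of the brute-force approach people typically use for this: try candidate disk orders with --assume-clean and test whether the ext4 superblock becomes readable. Every --create rewrites the RAID superblocks, so run this only against full disk images or dm overlays, never the original disks, and treat a passing check as a candidate, not proof. The chunk size, metadata version, and the "missing" slots 2 and 4 follow the --examine/--detail output above; the two candidate orders are illustrative, not exhaustive.

candidates=(
  "/dev/sda3 /dev/sdb3 missing /dev/sdd3 missing /dev/sdg3 /dev/sdf3"
  "/dev/sda3 /dev/sdb3 missing /dev/sdd3 missing /dev/sdf3 /dev/sdg3"
)
for order in "${candidates[@]}"; do
    mdadm --stop /dev/md0 2>/dev/null
    # $order is deliberately left unquoted so it expands into seven arguments
    mdadm --create /dev/md0 --level=6 --raid-devices=7 --metadata=1.0 \
          --chunk=64 --assume-clean --run $order
    if dumpe2fs -h /dev/md0 >/dev/null 2>&1; then
        echo "Plausible disk order: $order"
    fi
done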

    Read the article

  • Grub2 with BURG: duplicate Windows entries, how do I remove one?

    - by Tomas Lycken
    I have a dual-boot system with Ubuntu 12.04 and Windows 7, using GRUB2 (with BURG) as boot loader. For some reason, the Windows installation shows up twice in the boot menu:

Ubuntu GNU/Linux, with Linux 3.2.0-24-generic
Ubuntu GNU/Linux, with Linux 3.2.0-24-generic (recovery mode)
Windows 7 (loader) (on /dev/sda1)
Windows 7 (loader) (on /dev/sda2)

If I look at my partition table, /dev/sda2 is C:\ of the Windows installation, and /dev/sda1 is the "System Reserved" partition (which, IIRC, is Windows' own bootloader). Furthermore, GParted shows /dev/sda2 - but no other partitions - with a boot flag. What is going on here? I'd like to have only the entries for Ubuntu and one entry for Windows in my boot menu - how do I remove one of them?
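    One hedged way to end up with a single Windows entry: provide it yourself in /etc/grub.d/40_custom and stop os-prober from generating both. The entry below assumes the "System Reserved" partition (/dev/sda1, GRUB name (hd0,msdos1)) is the one that actually boots Windows; test both existing menu entries first to confirm which one works before disabling os-prober.

cat <<'EOF' | sudo tee -a /etc/grub.d/40_custom
menuentry "Windows 7" {
    insmod part_msdos
    insmod ntfs
    set root='(hd0,msdos1)'
    chainloader +1
}
EOF
sudo chmod -x /etc/grub.d/30_os-prober   # keeps os-prober from re-adding the duplicates
sudo update-grub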

    Read the article

  • How to correctly set permissions for all subfolders and files

    - by Saeid87
    I am following the guide to install TinyOS on Ubuntu 12.04. I have done everything up to step 3, but I am not sure whether I did step 3 correctly, because when I do step 4 I get this permission error:

saeid@saeid-Satellite-C660:~$ tos-install-jni
/usr/bin/tos-install-jni: 13: [: =: unexpected operator
Installing 32-bit Java JNI code in /usr/lib/jvm/java-6-openjdk-i386/jre/lib/i386 ...
install: cannot create regular file `/usr/lib/jvm/java-6-openjdk-i386/jre/lib/i386/libgetenv.so': Permission denied

Can you please tell me what the actual commands for step 3 would be? What do I have to fill in for the following lines?

/opt/tinyos-2.x files:
chown -R /opt/tinyos-2.x

Change the permissions on any serial (/dev/ttyS), usb (/dev/tts/usb, /dev/ttyUSB), or parallel (/dev/parport) devices you are going to use:
chmod 666 /dev/

I mean, how would you do those steps on your Ubuntu?
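    A guess at what the stripped placeholders in those guide lines were meant to say: chown takes your own user name, and chmod takes the device node your hardware actually shows up as (check ls -l /dev/ttyUSB* after plugging it in). The device name below is an assumption. The JNI error itself is just a privileges issue, so running that step with sudo is enough.

sudo chown -R saeid /opt/tinyos-2.x     # user name taken from the prompt above
sudo chmod 666 /dev/ttyUSB0             # or /dev/ttyS0, /dev/parport0, etc.
sudo tos-install-jni                    # step 4 needs root to write under /usr/lib/jvm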

    Read the article

  • Ubiquity is not recognizing existing partition while trying to install Ubuntu alongside Windows 7

    - by Bertner
    So I'm using the Ubuntu live CD to install Ubuntu next to Windows 7, but it doesn't recognize the existing partitions. Here is sudo fdisk -l:

Disk /dev/sda: 640.1 GB, 640135028736 bytes
255 heads, 63 sectors/track, 77825 cylinders, total 1250263728 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x0c7a859b

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1              63  1250259631   625129784+   f  W95 Ext'd (LBA)
Partition 1 does not start on physical sector boundary.
/dev/sda2   *       81920     4177919     2048000    b  W95 FAT32
/dev/sda3         4177920   147535871    71678976    7  HPFS/NTFS/exFAT
/dev/sda5       147538608  1147859631   500160512    7  HPFS/NTFS/exFAT

I have one partition with Windows 7, one that the Windows installer created (the system/OS partition), and one for data.
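    A hedged sketch: in the table above, /dev/sda1 is an extended (LBA) partition spanning the whole disk while sda2 and sda3 sit inside it as primaries, and that kind of overlap is exactly what makes Ubiquity show no partitions. FixParts (from the gdisk package) is built to repair such MBR inconsistencies; dump the current table first and review its proposal carefully before writing anything.

sudo sfdisk -d /dev/sda > sda-table-backup.txt   # back up the current partition table
sudo apt-get install gdisk
sudo fixparts /dev/sda                           # review the corrected layout, then 'w' to write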

    Read the article

  • cryptsetup partitions not detected at boot

    - by Luis
    I installed a fresh 12.04 and tried to mimic what I had on 10.04: swap should be encrypted with a random key from /dev/urandom, and there is another partition that will contain /home and some other directories.

# cat /etc/crypttab | grep -v '^#' | grep -v '^$'
cryptswap /dev/sda5 /dev/urandom swap
encriptado /dev/sda6

# grep -e 'cryptswap' -e 'encriptado' /etc/fstab
/dev/mapper/cryptswap   swap         swap  defaults  0  0
/dev/mapper/encriptado  /encriptado  ext4  defaults  0  0

I also ran apt-get install cryptsetup. When I boot, the system says (roughly translated) that the partition is either not found or not ready, and that I should wait, press M for manual recovery or S to skip. What am I missing here?
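    A sketch of what /etc/crypttab usually needs here: the encriptado line above has only two fields, so the boot scripts may not know the key source or that it is a LUKS volume, and the random-key swap generally wants explicit cipher/size options. This assumes /dev/sda6 was formatted with cryptsetup luksFormat; if it is a plain dm-crypt mapping the options differ. The heredoc replaces the two existing lines, so adjust rather than copy blindly.

sudo tee /etc/crypttab > /dev/null <<'EOF'
# <name>      <device>    <keyfile>      <options>
cryptswap     /dev/sda5   /dev/urandom   swap,cipher=aes-xts-plain64,size=256
encriptado    /dev/sda6   none           luks
EOF
sudo update-initramfs -u   # rebuild the initramfs so the cryptsetup hooks are included at boot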

    Read the article

  • Unable to mount Windows (NTFS) filesystem due to hibernation

    - by yotamoo
    Whenever I boot Ubuntu, I get a message that it cannot mount my Windows partition, and I can choose to either wait, skip or manually mount. When I try to enter my Windows partition through Nautilus I get a message saying that this partition is hibernated and that I need to enter the file system and properly close it, something I have done with no problem before, so I don't know why this happens. Here's my partition table; if any more data is needed please let me know.

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048    20000767     9999360   83  Linux
/dev/sda2        20002814   478001151   228999169    5  Extended
/dev/sda3   *   478001152   622532607    72265728    7  HPFS/NTFS/exFAT
/dev/sda4       622532608   625141759     1304576   82  Linux swap / Solaris
/dev/sda5        20002816   478001151   228999168   83  Linux
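    Two hedged options, taking the NTFS partition to be /dev/sda3 per the table above. The clean fix is to boot Windows and shut it down completely (disable hibernation/Fast Startup there, e.g. powercfg /h off); from Ubuntu you can either mount read-only, or discard the hibernation file, which throws away the saved Windows session.

sudo mkdir -p /mnt/windows
sudo mount -t ntfs-3g -o ro /dev/sda3 /mnt/windows                 # safe, read-only access
# or, accepting the loss of the hibernated Windows session:
sudo mount -t ntfs-3g -o remove_hiberfile /dev/sda3 /mnt/windows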

    Read the article

< Previous Page | 62 63 64 65 66 67 68 69 70 71 72 73  | Next Page >