Search Results

Search found 7706 results on 309 pages for 'checked'.

  • What's wrong with my ext4 partition?

    - by bumbling fool
    What is wrong with this picture? Top is the output from "df -h", bottom is gparted. I suspect I'm missing a lot of free space. No problems other than that (yet). Can somebody suggest the best (non-destructive) way to correct this? Output of sudo dumpe2fs -h /dev/sda3 (source: http://pastebin.com/nAvrdT4E):

        Filesystem volume name:   <none>
        Last mounted on:          /
        Filesystem UUID:          9f6eff64-60d7-4eec-81d5-1e8acd818b38
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              1602496
        Block count:              6406144
        Reserved block count:     320306
        Free blocks:              4842284
        Free inodes:              1361222
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      1022
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8176
        Inode blocks per group:   511
        RAID stride:              32692
        Flex block group size:    16
        Filesystem created:       Sun Nov 8 18:18:13 2009
        Last mount time:          Tue Mar 1 01:04:27 2011
        Last write time:          Mon Feb 28 04:27:34 2011
        Mount count:              16
        Maximum mount count:      28
        Last checked:             Thu Feb 24 06:23:39 2011
        Check interval:           15552000 (6 months)
        Next check after:         Tue Aug 23 07:23:39 2011
        Lifetime writes:          227 GB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        First orphan inode:       268015
        Default directory hash:   half_md4
        Directory Hash Seed:      cc101517-e617-482b-a883-a72919419c84
        Journal backup:           inode blocks
        Journal features:         journal_incompat_revoke
        Journal size:             128M
        Journal length:           32768
        Journal sequence:         0x001d3000
        Journal start:            7787

    fdisk and parted output per requests: http://pastebin.com/EGVH7Ken
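
    A useful cross-check here: the filesystem spans Block count x Block size = 6406144 x 4096 bytes, roughly 24.4 GiB. If gparted reports the partition itself as larger than that, the filesystem is simply smaller than its partition, and growing it is non-destructive. A minimal sketch, assuming that diagnosis and run from a live CD since /dev/sda3 is the root filesystem:

        sudo e2fsck -f /dev/sda3    # resize2fs insists on a freshly checked filesystem
        sudo resize2fs /dev/sda3    # with no size argument, grows the fs to fill the partition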

  • How do I install Postgres Graphical Installer?

    - by Tusar Das
    I am a newbie to Ubuntu (more precisely, I have never used Linux except for production deployments). I have now decided to get a grip on the system and installed the latest Ubuntu 11.10 on my 32-bit Intel desktop. On this machine I have an administrator user, 'tusar'. I wanted to install the graphical version of PostgreSQL, so I downloaded postgresql-8.4.9-1-linux.bin from the PostgreSQL site. If I run the file with ./filename (it refuses unless I am root), it opens the graphical interface and completes the installation (it creates the user postgres and asks for a password, and won't complete unless I give a valid one). I checked with /etc/init.d/postgresql-8.4 status that postgres is running. The problem is that I am unable to open the GUI (I installed it in /opt/Postgresql); it says "permission denied" even when I am logged in as 'tusar'. I then tried su postgres, which switched user without asking for a password. Now if I run psql or createdb, I am prompted to install the postgres client package, but these utilities are already installed and I can see them in my /opt/Postgresql/8.4/bin folder. I can run queries through my JDBC utility programs, but I can use neither the terminal clients nor the graphical interface (although I can see that pgAdmin3 is there). Urgent help needed. I used this guide for the installation.
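
    The symptoms suggest two separate issues: the installer, run as root, left /opt/Postgresql unreadable to ordinary users, and the bundled client binaries are not on the PATH, so the shell suggests the Ubuntu packages instead. A hedged sketch of both fixes, reusing the paths from the question:

        sudo chmod -R a+rX /opt/Postgresql            # let non-root users read and traverse the install
        export PATH=/opt/Postgresql/8.4/bin:$PATH     # make psql/createdb resolve to the bundled tools
        psql -U postgres -h localhost                 # connect as the postgres database user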

  • Why can't I mount this partition?

    - by Mahmoud20070
    I have two hard disks (an 80 GB IDE and a 500 GB SATA) and I installed Ubuntu 11.10 on the 80 GB disk, giving it a 20 GB partition. Ubuntu saw every partition on both drives except one: gparted reports /dev/sdb5 as healthy and can see it, but cannot check it. When I mount it from the terminal it fails, even though the partition works perfectly well in Windows, and Ubuntu could mount it before. Please suggest a solution that does not involve formatting or anything that would delete my data, because the data is very important. Earlier, /dev/sdb5 had errors when I checked it; now no program reports any errors, except fdisk -l. All of my hard disks can be mounted except Partition D, /dev/sdb5. What can I do? I have tried to mount the partition with many different programs: Gparted, KDE Partition Manager, ntfs-3g from the terminal, and the mount command. All of them said something to the effect of "fuse: mount failed: Device or resource busy" or "one or more block devices are holding /dev/sdb5". I installed Ubuntu 11.10 again today to see if anything had changed. The partition works fine under Windows.
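
    "Device or resource busy" on an otherwise healthy, unmounted partition usually means another kernel subsystem has claimed the disk, most often a stale dmraid or mdadm mapping. A hedged sketch for identifying (and, if it is a fake-RAID mapping, releasing) the holder:

        sudo fuser -vm /dev/sdb5    # any userspace processes using the device
        cat /proc/mdstat            # has mdadm assembled the disk into an array?
        sudo dmsetup ls             # device-mapper / dmraid mappings over the disk
        sudo dmraid -an             # if a fake-RAID mapping is the holder, deactivate it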

  • Wifi not working on Acer Aspire One D270

    - by Dani
    brand new baby Linux user here, never used Ubuntu or any other Linux OS before, so be gentle and use short words! I installed Ubuntu 12.04 on my new Acer Aspire One D270-F61C/KF netbook (it's a Japanese computer which had Japanese Windows preinstalled, and I decided to take the plunge and try Ubuntu because English Windows costs the earth and stars). Wifi isn't working: I enter my wireless password, it tries to connect for a while, then asks for my password again. And KEEPS ASKING, every few minutes. The wired connection works fine. The wireless card is a Broadcom BCM4313; I have the "additional drivers" checked and installed (I tried unchecking and then reinstalling them in case that would help; no joy, and now my home wifi connection isn't showing up in the list of available connections, argh). I've done a lot of googling and I gather there are a lot of issues with Broadcom cards, but some of the answers are for earlier Ubuntu releases and many of them are a bit confusing for a new user. I gather I need to try installing some new drivers other than the proprietary ones provided, but I'm having trouble figuring out how that's done. Anyone got some simple, step-by-step instructions for me? Please bear in mind: TOTAL N00B. (EDIT): OKAY, got it fixed finally; after suggestions on the Ubuntu forums and messing around with drivers, what finally worked was installing Wicd. Not... using Wicd, for some reason, just installing it fixed it. ...I CHOOSE NOT TO QUESTION IT.
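
    For context: the BCM4313 is served either by the open brcmsmac driver or by Broadcom's proprietary STA driver, and the two conflict when both are loaded. A hedged sketch of reinstalling the proprietary driver from a wired connection (this mirrors what the "Additional Drivers" tool does on 12.04):

        sudo apt-get update
        sudo apt-get install --reinstall bcmwl-kernel-source
        sudo modprobe -r b43 brcmsmac    # unload the open drivers if present
        sudo modprobe wl                 # load the Broadcom STA driver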

  • What can you do to decrease the number of live issues with applications?

    - by User Smith
    First off, I have seen this post, which is slightly similar to my question: What can you do to decrease the number of deployment bugs of a live website? Let me lay out the situation for you. The team of programmers I belong to has metrics associated with our code, and over the last several months the errors in our live system have increased by a large amount. We require that our updates to applications be tested by at least one other programmer prior to going live. I am personally against this as the only gate, because I think applications should also be tested by end users: end users are much better testers than programmers. I am not against programmers testing (obviously programmers need to test code), but they are most of the time too close to the code. The reason I think end users should test in our scenario is that we don't have business analysts, just programmers; I come from a background where BAs took care of all the testing once programmers checked off that it was ready to go live. We do have a staging environment in place that is a clone of the live environment, which we use to catch issues between the development and live environments, and it does catch some bugs. But we don't really do end user testing at all; nobody tests our code except programmers, which I think gets us into this mess (ideally we would have BAs, QA, or professional testers). We don't have a QA team or anything of that nature, and we don't have fully laid-out test cases for our projects. OK, I am just a peon programmer at the bottom of the rung, but I am probably more tired of these issues than the managers complaining about them, so I don't have the standing to tell them they are doing it all wrong; I have tried gentle pushes in the correct direction. Any advice or suggestions on how to alleviate this issue are greatly appreciated. Thanks.

  • Google doesn't seem to update the description or title of my homepage

    - by Dayson
    Before we launched our website, we had set up a "coming soon" page, and Google picked up the title and description from its contents, so the description in the search results said, "Coming soon! Visit epicwhale.org for updates." It's been a few weeks since we launched the real website. We've even created a sitemap and submitted it to Google. In the Google webmaster panel, the pages have been crawled and all of them appear as expected on Google, EXCEPT the homepage, which is still not updated: its title and description in the search results still say "coming soon". The website I am referring to is textmewidget.com, and below are images of the search results. Google: http://i.imgur.com/vAkJg.png I checked on Bing too, but it appears to be fine there. Bing: http://i.imgur.com/Q8O6L.png All other pages seem to be indexed fine on Google, and I don't even have any crawl errors in my reports. So what seems to be the problem? I've already waited for 2 weeks. Thanks in advance!
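
    One thing worth ruling out first is that the live homepage really serves the new markup to crawlers and is not blocked from being recrawled. A hedged sketch of that check (textmewidget.com as in the question):

        curl -s http://textmewidget.com/ | grep -iE '<title>|name="description"'
        curl -s http://textmewidget.com/robots.txt    # make sure / is not disallowed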

  • How to install Wacom Bamboo Pen

    - by casadrya
    I have a new Wacom Bamboo Pen. I'm using Ubuntu 10.10 64-bit. After googling a little bit, I checked that xserver-xorg-input-wacom was installed. I plugged in my tablet. I rebooted my computer. Nothing special happened. I opened Inkscape. The tablet didn't work. I opened Inkscape's Input devices dialog. I didn't understand anything. I tried to blindly click some options in that dialog, but nothing seemed to have any effect. Same with Gimp. After googling some more I found the linuxwacom website with source code, but that didn't seem to work either. So... any help? As requested:

        $ lsusb
        Bus 005 Device 002: ID 056a:00d4 Wacom Co., Ltd
        $ dmesg | tail
        [  492.961267] usb 5-1: new full speed USB device using uhci_hcd and address 3
        [  493.144862] input: Wacom Bamboo 4x5 Pen as /devices/pci0000:00/0000:00:1a.2/usb5/5-1/5-1:1.0/input/input6
        [  493.158854] input: Wacom Bamboo 4x5 Finger as /devices/pci0000:00/0000:00:1a.2/usb5/5-1/5-1:1.1/input/input7
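
    The dmesg lines show the kernel has already created the Pen and Finger input devices, so the remaining question is whether X picked them up. A hedged sketch of that check:

        xinput list | grep -i -e wacom -e bamboo    # is X aware of the tablet?
        xsetwacom --list devices                    # the wacom driver's own view
        # if the devices are listed, enable them in GIMP/Inkscape under
        # Input Devices by switching the Bamboo entries from "Disabled" to "Screen"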

  • How to create or recover Windows Bootloader after deleting Ubuntu boot drive

    - by Kincaid
    I have a computer that dual-boots (or tri-boots) Windows 8 Release Preview, Windows 7, and Ubuntu 12.04. Grub boots between Windows 8 and Ubuntu, which is what I use primarily. Recently I decided to remove Ubuntu, as I hardly used it, but I accidentally deleted the Ubuntu partition before replacing the Grub bootloader. Now, whenever I boot the machine, it drops me at the "grub rescue" prompt; I am unable to boot into either Windows (8 or 7) or Ubuntu (except via USB, of course). I do not have any Windows 7/8 recovery media, so that isn't an option. Please note that after I deleted the Ubuntu partition, I put the PC into hibernate and then turned it on, which means the C:\ [Windows 8] drive cannot be mounted. I don't know if that matters, but it definitely doesn't make things better. I am currently booting Ubuntu via USB in an effort to restore the Windows bootloader. I have looked into using boot-repair to solve the problem using the instructions here, but after attempting to apply the changes, it gave the error: "Please install the [mbr] packages. Then try again." I don't know why I'm getting this error; is there a way to install the 'mbr' packages? I honestly don't know what exactly they are, nor how to install them. Are there any other options I have not yet exhausted, in case there is a better way? I want to set the bootloader to boot into Windows 8, but booting into either Windows 7 or 8 is fine (I can use EasyBCD from there). Is there a simple solution to this? I've checked the BIOS, and I haven't been able to find a way to boot into Windows.
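
    The message refers to the Ubuntu package literally named mbr, which provides install-mbr, a tool that writes a generic master boot record chain-loading the active partition. A hedged sketch from the live USB session (the flags shown are the commonly cited defaults; check man install-mbr before running):

        sudo apt-get install mbr
        sudo install-mbr -i n -p D -t 0 /dev/sda    # generic MBR: no interrupt, default partition, no timeout
        # then mark the Windows partition as bootable/active, e.g. with fdisk or parted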

  • iptables unresolved dependencies

    - by tertle
    I'm trying to set up OpenVPN Access Server on a VPS running Ubuntu 9.10 for a friend so she can play games from her uni campus. The problem is I keep running into this error when trying to start openvpn:

        Service deferred error: IPTablesServiceBase: failed to run iptables-restore [status=1]:
        ['FATAL: Could not load /lib/modules/2.6.18-028stab070.14/modules.dep: No such file or directory',
         'FATAL: Could not load /lib/modules/2.6.18-028stab070.14/modules.dep: No such file or directory',
         'iptables-restore: line 46 failed']: internet/base:1175,internet/base:752,internet/process:45,internet/process:306,internet/_baseprocess:48,internet/process:775,internet/_baseprocess:60,svc/pp:116,svc/svcnotify:26,internet/defer:238,internet/defer:307,internet/defer:323,sagent/ipts:105,sagent/ipts:39,util/error:52,util/error:32
        service failed to start due to unresolved dependencies: set(['user', 'iptables_openvpn'])
        service failed to start due to unresolved dependencies: set(['user', 'iptables_openvpn'])
        service failed to start due to unresolved dependencies: set(['iptables_openvpn'])

    I've already had my provider enable the TUN/TAP device driver, and I checked this using "# cat /dev/net/tun", which returned "File descriptor in bad state", which I believe means it's enabled. After extensive searching, I've been unable to find any solution other than people suggesting to make sure the TUN/TAP device driver is enabled. Any ideas on how to solve my issue? I'm not very experienced with Linux and I feel in over my head here, so any advice is greatly appreciated. --edit-- Just stumbled across this; not sure how I missed it earlier. I believe I need to get "modprobe ipt_mark" and "modprobe ipt_MARK" run on the host node by my provider. Is this correct, and something I should try to get done?
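
    That reading looks right: the 2.6.18-028stab kernel version string is OpenVZ's, and OpenVZ guests cannot load modules themselves, so the netfilter MARK modules have to be loaded on the host node. A hedged sketch of what the provider would run there:

        modprobe ipt_mark
        modprobe ipt_MARK
        lsmod | grep -i ipt_mark    # verify both the match and target modules are loaded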

  • Does it make sense to write a build scripts in C++?

    - by Klaim
    I'm using CMake to generate my projects' IDE files/makefiles, but I still need to call custom "scripts" to manipulate my compiled files or even generate code. In previous projects I've been using Python and it was OK, but now I'm having serious trouble managing a lot of dependencies in two very big projects I'm working on, so I want to minimize the dependencies everywhere. Someone suggested that I use C++ to write my build scripts, instead of adding a language dependency just for that. The projects themselves already use C++, so there are several advantages that I can see: to build the whole project, only a C++ compiler and CMake would be necessary, nothing else (all the other dependencies are C or C++); C++ type safety (when using modern C++) makes everything easier to get "correct"; it's also the language I know best, so I'm more at ease with it, even if I'm able to write some good Python code; potential gain in execution speed (but I don't think it will really be perceptible). However, I think there might be some drawbacks, and I'm not sure of the real impact, as I haven't tried yet: it might take longer to write the code (that said, I'm not sure, because I'm efficient enough in C++ to write something that works quickly, so maybe for this system it wouldn't take so long; compilation time shouldn't be a problem in this case); I must assume that all the text files I'll read as input are in UTF-8, and I'm not sure that can be easily checked at runtime in C++, since the language will not check it for you; libraries in C++ are harder to manage than in scripting languages; and I lack experience and foresight, so maybe I'm missing advantages and drawbacks. So the question is: does it make sense to use C++ for this? Do you have experiences to report, and do you see advantages and disadvantages that might be important?

  • Alienware M17x R3: Possible downclock

    - by Ywen
    I recently installed Kubuntu 11.10 32-bit (I had graphics driver issues and wanted to try the 32-bit version) on my new Alienware M17x with a Core i7-2670QM CPU. The cores are supposed to be clocked at 2.2 GHz; however, the output of "cat /proc/cpuinfo | grep -i hz" gives me the following pair of lines, repeated for all eight logical cores:

        model name : Intel(R) Core(TM) i7-2670QM CPU @ 2.20GHz
        cpu MHz    : 800.000

    If useful: the AC adapter is plugged in (the output is the same when the computer is powered only by the battery), and I have Firefox and Eclipse running. Does /proc/cpuinfo reflect an automatic downclock made to save power when processor load is low, or is this output abnormal? EDIT: OK, I checked, and yes, the output does vary with load; I reach 2.2 GHz when needed. But my underlying problem remains. I was checking my CPU clock because I experienced poor performance when playing 720p video files on Ubuntu with VLC or mplayer while on battery (and I believe VLC by default only uses the CPU, not the GPU, to decode), whereas I haven't had such problems with VLC on Windows (which makes me think it isn't coming from a BIOS option; besides, every option in the BIOS regarding the CPU is turned ON).
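
    As the edit confirms, 800 MHz at idle is just the ondemand cpufreq governor parking the cores. For the playback problem, a hedged sketch of inspecting the governor and temporarily pinning it to full clock while testing on battery:

        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
        sudo apt-get install cpufrequtils
        sudo cpufreq-set -r -g performance    # pin all related cores to full clock for the test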

  • Desktop switcher appears to be broken for quick double-switches

    - by Jon Blackburn
    I'm wondering if anyone else has seen this. I have three virtual desktops aligned in a horizontal row. On the middle desktop I have only a single application window. I have keyboard shortcuts mapped to navigate between the desktops; obviously, I never use the up/down arrows, because I only have one row of workspaces. Here's the problem, which only started to happen after I installed 12.04.1: when I rapidly hit the shortcut twice to go from workspace 1 to workspace 3, the window on workspace 2 gets moved to workspace 1. I have checked using both Unity and Gnome 3, and the behavior is the same under both. If I change back to the default workspace setup (a 2x2 grid of desktops) things seem to settle down (i.e., no wandering windows). Not every type of application window behaves the same way: I couldn't get a Chrome browser to jump from 2 to 1, but both Terminal and Terminator exhibit the behavior. Any thoughts? Better workarounds? Thanks in advance.

  • How to recover from terminal custom setup

    - by linq
    I installed Ubuntu 12.04 and added "Terminal" to the launcher bar. The default size of the terminal is a bit small, so after some googling here is how I made the disastrous change: in the HUD menu I typed "preference" and saw the option Edit > Preferences, as I expected. After I clicked the option (I forget the exact steps), I somehow came to a configuration panel with a checkbox for a "custom" terminal command. I checked it, an input box became enabled, and I typed: gnome-terminal --geometry 160x50 Now the problem: whenever I click the terminal button on the launcher bar, new terminal windows pop up endlessly from everywhere, and I cannot do anything any more except log out or shut down. The weird thing is that afterwards I cannot get Edit > Preferences any more when I type "preference" in the HUD; I tried "Edit" and "preference". To be clear, I can still use the computer as long as I do not click that terminal button, but I really want to use the terminal to run commands. Help is greatly appreciated! Update - OK, figured it out. Go to /home/jibin/.gconf/apps/gnome-terminal/profiles/Default and open the XML file there; I could see the options I had changed in the GUI. Remove them and the terminal runs fine. The Preferences entry in the HUD is back as well; it is actually Profile Preferences, and it is only available while a terminal is open. Continuing my ubuntu-ing...
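
    The endless windows make sense in hindsight: the profile's custom command was set to gnome-terminal itself, so every new terminal immediately spawned another. The same fix can be applied without hand-editing the XML; a hedged sketch using gconftool-2, which manages the gconf keys gnome-terminal reads on 12.04:

        gconftool-2 --type bool --set /apps/gnome-terminal/profiles/Default/use_custom_command false
        gconftool-2 --unset /apps/gnome-terminal/profiles/Default/custom_command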

  • Brother MFC-J470DW scan function "Check Connection"

    - by user292599
    I have a Brother MFC-J470DW printer connected to a Linux desktop (running Ubuntu 14.04) over a wireless router. The printer works fine for printing and copying, but now I want to add the scan function. To set it up, I went to the Brother web page for this printer: http://support.brother.com/g/b/downloadlist.aspx?c=eu_ot&lang=en&prod=mfcj470dw_us_eu_as&os=128 and under Scanner Drivers selected "Scanner driver 64bit (deb package)", "Scan-key-tool 64bit (deb package)", and "Scanner Setting file (deb package)". For each package, I accepted the EULA and selected "open with Ubuntu Software Center". After the USC window popped up, I clicked Install and the red progress line went from left to right. In each case the USC window then showed a green checkmark and the Install button changed to Reinstall (that's how you know it worked). So now I tried it out: hitting the Scan button on the printer, selecting "Scan to file", and hitting OK produces the message "Check Connection". I checked the Brother Linux Information FAQ (scanner) page, and the 14th question seems the same as mine: "When I try to use the scan key on my network connected machine, I receive the error 'Check connection' or I can not select anything except 'scan to FTP'." I explored the solution given for this FAQ, but found from ifconfig that I am already using eth0, the default setting, so presumably that is not the problem. I also found brscan-skey installed in /usr/bin and ran:

        drrm@drrmlinux2:~$ brscan-skey -t
        drrm@drrmlinux2:~$ brscan-skey

    but that didn't help; I still get the "Check connection" message. What can you suggest to fix this problem?
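
    One thing the deb packages do not do on their own is tell Brother's tooling where the scanner lives on the network. The MFC-J470DW belongs to the brscan4 driver family, which ships a registration tool for exactly this. A hedged sketch (the IP is a placeholder for the printer's actual address):

        brsaneconfig4 -a name=MFC-J470DW model=MFC-J470DW ip=192.168.1.10
        brsaneconfig4 -q | grep -i J470    # confirm the device is now registered
        scanimage -L                       # SANE should now list the scanner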

  • Visual Studio 2010 Modeling and Architecture Tools

    - by MikeParks
    Jennifer Marsman (Microsoft Evangelist) and Cameron Skinner (Microsoft Visual Studio Product Unit Manager) recently stopped by our office while passing through Louisville on their tour to give us a presentation on the new Visual Studio 2010 Modeling and Architecture Tools. I checked out these new features when the Visual Studio 2010 Beta versions originally rolled out and have been really impressed with this stuff ever since, so it was pretty cool to learn some new techniques from Cameron himself, since he helped write the actual code behind some of those features. If you've upgraded to Visual Studio 2010 recently, I would highly recommend using the Architecture tools. They're awesome. If you want to make improvements to them, they even have their own SDK, and there are plenty of blogs out there to show you how to use it. I've been waiting to find a tool that works like this, where I can really analyze the code in solutions and projects and see how everything ties together. It's really handy if you're asked to work on a new project and aren't familiar with how it works: just run the tools, analyze the DLLs, learn how everything works, and then you'll be ready to implement new code! It's a great way to learn new systems quickly and easily, and it's all housed within the Visual Studio IDE. I just wanted to write a blog to brag about it a little bit, so I figured I'd throw this up here. It's a must-have tool for developers and architects. Here are some screenshots from when I was using it earlier. Thanks everyone! - Mike

  • Nvidia 740M still not working after Bumblebee installation

    - by Jon
    first of all, I have checked a lot of similar topics, but I still can't get my laptop to use the Nvidia 740M. So, first things first: I have an Asus X550V laptop (i5-3230, 4 GB RAM, Nvidia 740M + Intel HD4000). I installed Ubuntu 13.10 alongside the preinstalled Windows 8, and both systems run without problems. However, I have a problem with the second graphics card (the Nvidia 740M), as Ubuntu doesn't recognize it. I installed Bumblebee following this tutorial: https://wiki.ubuntu.com/Bumblebee#Installation, but I still get a "Cannot access secondary GPU" error when trying to run "optirun Steam" in the terminal. Then I tried to follow the advice for this case: [ERROR]Cannot access secondary GPU - error: [XORG] (EE) No devices detected. - "you need to edit the /etc/bumblebee/xorg.conf.nvidia (or /etc/bumblebee/xorg.conf.nouveau if using the nouveau driver) and specify the correct BusID by following the instructions therein." But with "lspci | grep VGA" I get only info about the Intel 4000 and no Nvidia. When I type plain lspci, I do get a line for the Nvidia 740M, but after I edit the config file I still get the secondary-GPU error. Also, in /etc/bumblebee/xorg.conf.nvidia there wasn't a BusID line or anything similar, so I just added the whole line in the Device section. As I said, I've tried a lot of things to get it working while avoiding this forum (as I didn't want to bother people while solutions were still possible), but alas, I had to bother you. If there is a need for any additional info, just say so, no problem at all. Thank you very much in advance. :)
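
    The 740M not showing under "lspci | grep VGA" is expected: Optimus chips usually appear as a "3D controller" rather than a VGA controller, which is why that grep misses it. A hedged sketch of deriving the BusID Bumblebee wants (the address shown is an example):

        lspci | grep -i nvidia
        # e.g. "01:00.0 3D controller: NVIDIA Corporation GK208M [GeForce GT 740M]"
        # an address of 01:00.0 translates to this line inside the Device
        # section of /etc/bumblebee/xorg.conf.nvidia:
        #     BusID "PCI:01:00:0"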

  • The effects of Agile Programming can alter the five desirable properties of modeling tools and techniques

    The effects of agile programming can alter the five desirable properties of modeling tools and techniques as documented by Pfleeger. The agile methodology does promote human understanding and communication through the use of short, iterative software development life cycles, which force stakeholders to review the project and adjust it for any requirement changes. Because the project and its requirements are evaluated consistently, processes are continually being refined, upgraded, and compared against other alternatives to ensure the best design is delivered to the client. Because of the short, repetitive development cycles, increased time is devoted to process management, since requirements and designs could be constantly changing; this requires additional forecasting, monitoring, and planning for each iteration. Because things can change so rapidly, automated guidance in performing the process must be updated for each iteration, since the environment and the available reusable processes can change, and the original guidance and suggestions for the project also need to be updated to account for these changes. In essence, automated process execution is supported by the agile methodology, because during every iteration all processes must be tested and evaluated to ensure process integrity and compliance with the customer's requirements. I do not think the agile approach diminishes modeling; in fact, I think it increases it, because before the start of every development cycle the models must be checked for accuracy against the changed requirements. So the time saved by spending less on the initial model design is in effect reinvested as the project completes each iteration.

  • How do I prevent my filesystems from being mounted read-only after suspending?

    - by Chas. Owens
    I have Ubuntu 12.04 installed on an SDHC card (only one ext2 partition, no swap). When I suspend using pm-suspend, my root filesystem is mounted read-only. I am currently "fixing" this with the following file, /etc/pm/sleep.d/99_make_disk_rw:

        #!/bin/sh
        mount -o remount,rw /

    But the disk is marked as needing an fsck on reboot. How can I prevent the filesystem from being mounted read-only, or whatever is going wrong here? Update: It looks like it is getting mounted read-only because an error occurred. I have changed the mount options for / in /etc/fstab to noatime,nodiratime,errors=continue and it no longer mounts the SDHC card as read-only after it resumes. So the problem is happening when it suspends, not when it resumes as I had thought. I checked /sys/bus/usb/devices/1-4/power/persist and it is set to 1, so the SDHC card shouldn't appear disconnected to the OS (or, more accurately, it should recover from the disconnection without error). Here seems to be the relevant section of the syslog:

        Sep 10 10:34:23 iubit kernel: [  748.246226] sd 4:0:0:0: [sdb] Media Changed
        Sep 10 10:34:23 iubit kernel: [  748.246234] sd 4:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        Sep 10 10:34:23 iubit kernel: [  748.246243] sd 4:0:0:0: [sdb] Sense Key : Unit Attention [current]
        Sep 10 10:34:23 iubit kernel: [  748.246253] Info fld=0x0
        Sep 10 10:34:23 iubit kernel: [  748.246258] sd 4:0:0:0: [sdb] Add. Sense: Not ready to ready change, medium may have changed
        Sep 10 10:34:23 iubit kernel: [  748.246271] sd 4:0:0:0: [sdb] CDB: Read(10): 28 00 00 5d 3e f0 00 00 08 00
        Sep 10 10:34:23 iubit kernel: [  748.246291] end_request: I/O error, dev sdb, sector 6110960
        Sep 10 10:34:23 iubit kernel: [  748.247027] EXT2-fs (sdb1): error: ext2_fsync: detected IO error when writing metadata buffers
        Sep 10 10:34:23 iubit anacron[6954]: Anacron 2.3 started on 2012-09-10
        Sep 10 10:34:23 iubit anacron[6954]: Normal exit (0 jobs run)
        Sep 10 10:34:24 iubit laptop-mode: Laptop mode
        Sep 10 10:34:24 iubit laptop-mode: enabled, not active
        Sep 10 10:34:24 iubit kernel: [  749.055376] sd 4:0:0:0: [sdb] No Caching mode page present
        Sep 10 10:34:24 iubit kernel: [  749.055387] sd 4:0:0:0: [sdb] Assuming drive cache: write through
        Sep 10 10:34:25 iubit anacron[7555]: Anacron 2.3 started on 2012-09-10
        Sep 10 10:34:25 iubit anacron[7555]: Normal exit (0 jobs run)
        Sep 10 10:34:31 iubit kernel: [  756.090861] EXT2-fs (sdb1): previous I/O error to superblock detected
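
    Since the I/O error fires on the way down (writes still in flight when the card reader suspends), one mitigation is to flush everything before suspend rather than only repairing afterwards. A hedged sketch extending the existing hook; pm-utils passes the phase name as $1:

        #!/bin/sh
        # /etc/pm/sleep.d/99_sdhc - sketch: sync before suspend, remount rw on resume
        case "$1" in
            suspend|hibernate)
                sync                      # flush dirty buffers before the card powers down
                ;;
            resume|thaw)
                mount -o remount,rw /     # recover if the kernel flipped / to read-only
                ;;
        esac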

  • Check parameters annotated with @Nonnull for null?

    - by David Harkness
    We've begun using FindBugs and annotating our parameters with @Nonnull appropriately, and it works great for pointing out bugs early in the cycle. So far we have continued checking these arguments for null using Guava's checkNotNull, but I would prefer to check for null only at the edges: places where a value can come in without having been checked for null, e.g., a SOAP request.

        // service layer accessible from outside
        public Person createPerson(@CheckForNull String name) {
            return new Person(Preconditions.checkNotNull(name));
        }
        ...
        // internal constructor accessed only by the service layer
        public Person(@Nonnull String name) {
            this.name = Preconditions.checkNotNull(name); // remove this check?
        }

    I understand that @Nonnull does not block null values itself. However, given that FindBugs will point out anywhere a value is transferred from an unmarked field to one marked @Nonnull, can't we depend on it to catch these cases (which it does) without having to check these values for null everywhere they get passed around in the system? Am I naive to want to trust the tool and avoid these verbose checks? Bottom line: while it seems safe to remove the second null check above, is it bad practice? This question is perhaps too similar to "Should one check for null if he does not expect null", but I'm asking specifically in relation to the @Nonnull annotation.

  • Ubuntu 12.10 ATI Driver 12.11 fails after compiz and xorg update

    - by Lukasz W.
    I updated my system via the package manager from Unity, and the next restart was just blackness. After following http://linux.hootip.com/amd-catalyst-12-11-beta-fix-and-installation-the-drivers-on-ubuntu-12-11/ I had the Catalyst 12.11 beta driver installed. I checked my /var/log/apt/history.log, and the update I received was of compiz and xorg packages. I tried to get the latest release info, but all I get from their pages is commit info; I can't tell what was in the package update I got served. Does anyone know what was in the latest xorg/compiz release that broke the driver? Which driver should I use now? For completeness, this is how I got the system back to a bootable state (probably lame and inelegant): boot with the GRUB selection "More Ubuntu options" (or something like that); from the secondary screen, select 3.5.0-19 with boot options; when the system prompts for what you'd like to do, select "root" - drop to root shell; there:

        # mount -o remount,rw /
        # mv /etc/X11/xorg.conf /etc/X11/xorg.conf.failed
        # /usr/share/ati/fglrx-uninstall.sh
        # reboot

    This got me back on my feet.

  • Cloned partition not seen correctly by disk utility & gparted

    - by enrico
    Some days ago I cloned my /dev/sda1 partition with Clonezilla in partition-to-partition mode to /dev/sda3. It worked, but now that I've finished setting up the system on /dev/sda3, I want to reinstall /dev/sda1 for other purposes. That partition is NOT mounted, but Ubuntu's Disk Utility thinks it is, while it doesn't see the currently active / partition, /dev/sda3, as mounted. This is the df -TH output:

        Filesystem     Type      Size  Used Avail Use% Mounted on
        /dev/sda3      ext4       32G  6.2G   24G  21% /
        udev           devtmpfs  2.2G   13k  2.2G   1% /dev
        tmpfs          tmpfs     845M  906k  844M   1% /run
        none           tmpfs     5.3M     0  5.3M   0% /run/lock
        none           tmpfs     2.2G  115k  2.2G   1% /run/shm
        /dev/sdb1      fuseblk   321G  147G  174G  46% /Dati

    GParted instead sees /dev/sda1 as NOT mounted (checked with the Information option), but it displays the BOOT flag on this partition, while the partition that was actually booted, /dev/sda3, doesn't have it. If I try to format the /dev/sda1 partition, it gives me this error:

        GParted 0.8.1 --enable-libparted-dmraid
        Libparted 2.3
        Format /dev/sda1 as ext2            00:00:02  ( ERROR )
        calibrate /dev/sda1                 00:00:00  ( SUCCESS )
        path: /dev/sda1
        start: 2048
        end: 62500863
        size: 62498816 (29.80 GiB)
        set partition type on /dev/sda1     00:00:02  ( SUCCESS )
        new partition type: ext2
        create new ext2 file system         00:00:00  ( ERROR )
        mkfs.ext2 -L "" /dev/sda1
        mke2fs 1.41.14 (22-Dec-2010)
        /dev/sda1 is mounted; will not make a filesystem here!

    How is it possible to correct this behaviour? Is it due to some option lacking in the Clonezilla phase? TIA Enrico
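
    A partition-to-partition clone copies the filesystem UUID too, so /dev/sda1 and /dev/sda3 end up indistinguishable to tools that identify mounts by UUID; that is the usual reason an unmounted clone "looks" mounted. A hedged sketch of the check and fix:

        sudo blkid /dev/sda1 /dev/sda3     # if the UUIDs match, that's the problem
        sudo tune2fs -U random /dev/sda1   # give the old partition a fresh UUID
        sudo update-grub                   # refresh anything that references the UUIDs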

  • Unity 3D does not work on Dell system with an AMD Radeon HD 6470M

    - by VeeKay
    I am running 64-bit Ubuntu on a Dell system with a 1 GB graphics card. I log in with the "Ubuntu" session hoping to see Unity 3D, but Unity 2D runs instead; typing echo "$DESKTOP_SESSION" confirms Unity-2D. I've checked the System Info panel: the graphics row shows itself as empty. So I've presumed that the graphics drivers aren't detected, and hence I went to Additional Drivers and installed the fglrx driver that the UI suggested. Even after installing it, the graphics part of the System Info details shows nothing, and Unity 2D still runs in spite of all the effort. Please help! How can I get my Unity 3D back? Hardware info: Video card: AMD Radeon HD 6470M - 1GB (For ICC); RAM: 6GB (1 x 2GB + 1 x 4GB) 2 DIMM DDR3 1333MHz; OS: 64-bit Ubuntu 11.10. Edit: output of /usr/lib/nux/unity_support_test -p:

        X Error of failed request:  BadRequest (invalid request code or no such operation)
          Major opcode of failed request:  155 (GLX)
          Minor opcode of failed request:  19 (X_GLXQueryServerString)
          Serial number of failed request:  21
          Current serial number in output stream:  21
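
    The GLX BadRequest error usually means the fglrx install left the GL stack half-configured, so X has no working GLX extension at all. A commonly suggested recovery, as a hedged sketch, is to purge the driver and let the Mesa/X defaults heal before retrying:

        sudo apt-get remove --purge fglrx fglrx-amdcccle
        sudo rm -f /etc/X11/xorg.conf
        sudo apt-get install --reinstall xserver-xorg-core libgl1-mesa-glx
        # reboot; then, if retrying fglrx from Additional Drivers:
        sudo aticonfig --initial              # write a fresh xorg.conf for fglrx
        /usr/lib/nux/unity_support_test -p    # should now report GLX support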

  • Problem with APTonCD application

    - by Harikrishnan
    I created an ISO image using APTonCD and burned it to a DVD. Now when I try to restore, the program does not detect the DVD in the drive: it shows "Please insert a disc in the drive," and if I click OK it shows "E: Failed to mount the cdrom," even though the DVD is in the drive. I tried sudo lshw -C disk and the output is:

        *-cdrom
            description: DVD-RAM writer
            product: DVDRAM GH22NS50
            vendor: HL-DT-ST
            physical id: 1
            bus info: scsi@1:0.0.0
            logical name: /dev/cdrom
            logical name: /dev/cdrw
            logical name: /dev/dvd
            logical name: /dev/dvdrw
            logical name: /dev/scd0
            logical name: /dev/sr0
            logical name: /media/APTonCD
            logical name: /media/apt
            version: TN02
            capabilities: removable audio cd-r cd-rw dvd dvd-r dvd-ram
            configuration: ansiversion=5 mount.fstype=iso9660 mount.options=ro,relatime,uid=1000,gid=1000,iocharset=utf8,mode=0400,dmode=0500 state=mounted status=ready
          *-medium
              physical id: 0
              logical name: /dev/cdrom
              logical name: /media/APTonCD
              logical name: /media/apt
              configuration: mount.fstype=iso9660 mount.options=ro,relatime,uid=1000,gid=1000,iocharset=utf8,mode=0400,dmode=0500 state=mounted

    Then I checked in the Disk Utility application; there the DVD-ROM is shown as /dev/sr0. My Ubuntu version is 10.10. Please help me to solve the problem.
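
    The lshw output shows the disc is actually mounted already, twice (/media/APTonCD and /media/apt), which is exactly what makes APTonCD's own mount attempt fail. A hedged sketch of working around it by feeding the disc to apt-cdrom directly at a clean mount point (APTonCD's restore uses apt-cdrom underneath):

        sudo umount /media/APTonCD /media/apt   # clear the duplicate automounts
        sudo mkdir -p /media/cdrom
        sudo mount /dev/sr0 /media/cdrom
        sudo apt-cdrom -d /media/cdrom add      # register the repository disc with apt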

  • Cannot restart Ubuntu after replacing Unity with Gnome and purging lightdm

    - by OtagoHarbour
    I am running Ubuntu 11.10. I used the directions here to replace Unity with Gnome. It seemed to work fine until the end of step 2, when I rebooted the computer. When I choose the default boot option, I just get a black screen. When I choose the restore option and then "Restore normal boot" I get the following (among a lot of OKs):

        Starting load fallback graphics devices    [fail]
        Starting GNOME Display Manager             [ok]
        Starting Userspace bootsplash              [ok]
        Stopping GNOME Display Manager             [ok]
        Stopping Userspace bootsplash              [ok]
        Starting web server apache2                [fail]

    It finally gets to "Stopping System V runlevel compatibility [ok]", where it hangs; I left it overnight and just had a blank screen when I checked it in the morning. I can "Drop to root shell prompt" and "Drop to root shell prompt with networking". I did see here that one should be able to fix the problem by choosing remount and then doing apt-get install lightdm. I tried this as root shell both with and without networking and got the message:

        Err http://us.archive.ubuntu.com/ubuntu/ oneiric-updates/main lightdm i386 1.0.6-0ubuntu1.7
          Could not resolve 'us.archive.ubuntu.com'

    What is the best way to resolve this problem? Thanks, Peter.
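
    The resolve error means DNS never came up in the recovery shell, so apt cannot reach the mirror. A hedged sketch from the "root shell prompt with networking" option, bringing the link up by hand first (the interface name is an assumption and may differ):

        dhclient eth0                # obtain an address and DNS
        mount -o remount,rw /
        apt-get update
        apt-get install lightdm     # then: service lightdm start, or reboot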
