Search Results

Search found 9398 results on 376 pages for 'matt dev'.


  • Update Manager unable to update

    - by muguro
    I get this error every time I try updating:

        Requires installation of untrusted packages
        The action would require the installation of packages from not authenticated sources.

    The system shows that it has 466 updates but fails after clicking Update. More details list these packages:

        accountsservice apparmor apport apport-gtk apt apt-transport-https apt-utils aptdaemon aptdaemon-data at-spi2-core bamfdaemon base-files bcmwl-kernel-source bind9-host compiz compiz-core compiz-gnome compiz-plugins-default cron cups cups-bsd cups-client cups-common cups-filters cups-ppdc dbus dbus-x11 dconf-gsettings-backend dconf-service desktop-file-utils dmsetup dnsutils empathy empathy-common eog evince evince-common evolution-data-server evolution-data-server-common firefox firefox-globalmenu firefox-gnome-support firefox-locale-en fontconfig fontconfig-config fonts-liberation fonts-opensymbol foomatic-filters gcalctool gdb ghostscript ghostscript-cups ghostscript-x ginn gir1.2-atspi-2.0 gir1.2-dbusmenu-glib-0.4 gir1.2-dbusmenu-gtk-0.4 gir1.2-gst-plugins-base-0.10 gir1.2-gtk-3.0 gir1.2-gtksource-3.0 gir1.2-gudev-1.0 gir1.2-javascriptcoregtk-3.0 gir1.2-launchpad-integration-3.0 gir1.2-pango-1.0 gir1.2-rb-3.0 gir1.2-totem-1.0 gir1.2-ubuntuoneui-3.0 gir1.2-unity-5.0 gir1.2-webkit-3.0 glib-networking glib-networking-common glib-networking-services gnome-accessibility-themes gnome-control-center gnome-control-center-data gnome-desktop3-data gnome-games-data gnome-icon-theme gnome-media gnome-orca gnome-settings-daemon gnome-sudoku gnomine gnupg google-talkplugin gpgv grub-common grub-pc grub-pc-bin grub2-common gstreamer0.10-alsa gstreamer0.10-plugins-base gstreamer0.10-plugins-base-apps gstreamer0.10-x gvfs gvfs-backends gvfs-bin gvfs-common gvfs-daemons gvfs-fuse gvfs-libs gwibber gwibber-service gwibber-service-facebook gwibber-service-identica gwibber-service-twitter hdparm hplip hplip-data indicator-sound initscripts isc-dhcp-client isc-dhcp-common jockey-common jockey-gtk krb5-locales landscape-client-ui-install language-pack-en language-pack-en-base language-pack-gnome-en language-pack-gnome-en-base launchpad-integration libaccountsservice0 libapt-inst1.4 libapt-pkg4.12 libart-2.0-2 libasound2 libatspi2.0-0 libbamf0 libbamf3-0 libbind9-80 libc-bin libc-dev-bin libc6 libc6-dev libcairo-gobject2 libcairo2
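    This failure pattern usually means apt cannot authenticate the archives, not that the packages themselves are broken. A minimal first diagnostic, as a sketch — the keyserver and the NO_PUBKEY workflow below are the stock Ubuntu ones, and KEY_ID is a placeholder for whatever ID the update run actually prints:

        # refresh package lists and watch the output for NO_PUBKEY / BADSIG warnings
        sudo apt-get update

        # re-fetch the keys Ubuntu ships by default
        sudo apt-key update

        # if a specific key is reported missing, import it explicitly
        sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys KEY_ID

    If third-party PPAs appear in the warnings, temporarily disabling them in Software Sources before retrying the 466 updates is a low-risk way to narrow down the cause.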


  • Canon MG6100 series USB printer receives job but doesn't physically print

    - by Old-linux-fan
    The MP6150 driver installed itself upon plugging in the printer. The printer is recognized (lsusb shows it) but does not mount. If the printer is recognized, the driver must be working (or is it?), but something is blocking the system from mounting the printer. I tried the usual things: power-cycling the printer, restarting Ubuntu, etc. Listed below are the results of lsusb and fstab:

        hans@kontor-linux:~$ lsusb
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 004: ID 04a9:174a Canon, Inc.
        Bus 002 Device 002: ID 1058:1001 Western Digital Technologies, Inc. External Hard Disk [Elements]
        Bus 004 Device 002: ID 046d:c517 Logitech, Inc. LX710 Cordless Desktop Laser

        hans@kontor-linux:~$ sudo cat /etc/fstab
        [sudo] password for hans:
        # /etc/fstab: static file system information.
        #
        # Use 'blkid' to print the universally unique identifier for a
        # device; this may be used with UUID= as a more robust way to name devices
        # that works even if disks are added and removed. See fstab(5).
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        # / was on /dev/sda6 during installation
        UUID=eaf3b38d-1c81-4de9-98d4-3834d674ff6e / ext4 errors=remount-ro 0 1
        # swap was on /dev/sda5 during installation
        UUID=93a667d3-6132-45b5-ad51-1f8a46c5b437 none swap sw 0 0
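    Worth noting: a USB printer is never mounted the way a disk is — printing goes through CUPS, so "doesn't mount" is not itself the fault. A few hedged checks against the CUPS side (standard cups tools on a stock install; queue names come from lpstat):

        lpstat -p -d                          # is the printer known to CUPS, and is it the default?
        lpq                                   # is the job sitting in the queue?
        tail -n 50 /var/log/cups/error_log    # recent CUPS errors while a job is stuck

    If the queue shows the job held or the printer paused, "cupsenable <queue>" and "cupsdisable <queue>" are the usual toggles.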


  • Another guvcview issue? Core dump!

    - by user290498
    guvcview is core dumping with my Sony Handycam plugged in. It works fine with my standard Logitech webcam. I deleted the config file so it could re-create it. Here is the output:

        $ guvcview
        guvcview 1.5.3
        Could not open /home/rayj/.guvcviewrc for read, will try to create it
        write /home/rayj/.guvcviewrc OK
        ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
        ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
        ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
        ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5)
        ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5)
        ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5)
        ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5)
        ALSA lib pcm_dmix.c:957:(snd_pcm_dmix_open) The dmix plugin supports only playback stream
        Cannot connect to server socket err = No such file or directory
        Cannot connect to server socket
        jack server is not running or cannot be started
        video device: /dev/video0
        ERROR opening V4L2 interface for /dev/video1
        Init. stk1160 (location: usb-0000:00:1d.0-1.5)
        { pixelformat = 'UYVY', description = '16 bpp YUY2, 4:2:2, packed' }
        VIDIOC_ENUM_FRAMESIZES - Error enumerating frame sizes: Inappropriate ioctl for device
        Unable to enumerate frame sizes. : Inappropriate ioctl for device
        { pixelformat = 'RGB3', description = 'RGB3' }
        { ?GSPCA? : width = 720, height = 480 }
        fmtind:2 fsizeind: 1
        { pixelformat = 'BGR3', description = 'BGR3' }
        { ?GSPCA? : width = 720, height = 480 }
        fmtind:3 fsizeind: 1
        { pixelformat = 'YU12', description = 'YU12' }
        { ?GSPCA? : width = 720, height = 480 }
        fmtind:4 fsizeind: 1
        { pixelformat = 'YV12', description = 'YV12' }
        { ?GSPCA? : width = 720, height = 480 }
        fmtind:5 fsizeind: 1
        vid:05e1 pid:0408 driver:stk1160
        checking format: 1196444237
        Format unavailable: 1196444237.
        Init v4L2 failed !!
        Init video returned -2
        trying minimum setup ...
        Segmentation fault (core dumped)
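    A backtrace would pin down where it faults after "Format unavailable". A sketch of capturing one with gdb (standard gdb usage; install the gdb package first if it's missing):

        gdb guvcview     # load the program under the debugger
        (gdb) run        # reproduce the crash with the Handycam attached
        (gdb) bt         # print the backtrace once it segfaults

    The top frames of that backtrace are exactly what a bug report against guvcview or the stk1160 driver would want.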


  • what is the best program to capture my screen activity as an AVI or MPEG

    - by raihanchy
    I have already used recordMyDesktop, xvidcap and Kazam. My sound works fine with other audio or video. xvidcap doesn't record sound at all; I have tried many ways. If I run it as 'padsp xvidcap', it also gives an error like "/dev/dsp cannot be found or missing". I have changed it to /dev/snd — still no effect. I can record sound through gnome-sound-recorder: after pressing the record button, I open pavucontrol and, on the Recording tab, choose 'Monitor of Analog Stereo'. But if I run xvidcap, I don't get that option in pavucontrol. Kazam works, but a bit slowly: it records sound at the beginning of the captured video, yet for some unknown reason the sound eventually just goes off, and the video is not as smooth as xvidcap's, though Kazam outputs H.264/MP4. recordMyDesktop also doesn't give sound. Can you please help me with either how to get sound with xvidcap, or how Kazam could record nicely? I am looking for something like Camtasia, as used on Windows. Thanks in advance. Raihan
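    One more option before fighting xvidcap's OSS emulation: capture screen and PulseAudio together with ffmpeg. A sketch with assumptions flagged — 1920x1080 and display :0.0 must match your setup, and the build must include x11grab and pulse support (on releases where the binary is avconv, the same flags apply):

        ffmpeg -f x11grab -r 25 -s 1920x1080 -i :0.0 \
               -f pulse -i default \
               -vcodec libx264 -preset ultrafast -crf 23 \
               -acodec libmp3lame out.mkv

    Picking a monitor source as the pulse input (instead of "default") is the same trick as choosing 'Monitor of Analog Stereo' in pavucontrol; "pactl list short sources" prints the source names to use.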


  • mdadm: breaks boot due to "is not ready yet or not present" error

    - by BarsMonster
    This is so damn frustrating :-| I've spent something like 20 hours on this nice error — and, it seems, so have dozens of people around the Internet — with no clear solution yet. I have a non-system RAID-5 of 5 disks, and it's fine. But during boot the system says "/dev/md0 is not ready yet or not present" and asks me to press 'S'. Very nice for Ubuntu Server — I have to bring over a monitor and keyboard just to get past it. After this the system boots and all is fine: the md0 device works, /proc/mdstat is fine, and when I do mount -a it mounts the array without errors and works fine. As a dumb and shameful workaround I added noauto in /etc/fstab and did the mounting in /etc/rc.local — it works fine then. Any hints on how to make it work properly?

    fstab:

        UUID=3588dfed-47ae-4c32-9855-2d69df713b86 /var/bigfatdisk ext4 noauto,noatime,data=writeback,barrier=0,nobh,commit=5 0 0

    mdadm config (it is autogenerated):

        # mdadm.conf
        #
        # Please refer to mdadm.conf(5) for information about this file.
        #

        # by default, scan all partitions (/proc/partitions) for MD superblocks.
        # alternatively, specify devices to scan, using wildcards if desired.
        DEVICE partitions

        # auto-create devices with Debian standard permissions
        CREATE owner=root group=disk mode=0660 auto=yes

        # automatically tag new arrays as belonging to the local system
        HOMEHOST <system>

        # instruct the monitoring daemon where to send mail alerts
        MAILADDR CENSORED

        # definitions of existing MD arrays
        ARRAY /dev/md/0 metadata=1.2 bitmap=/var/md0_intent UUID=efccbeb6:a0a65cd6:470dcdf3:62781188 name=LBox2:0

        # This file was auto-generated on Mon, 10 Jan 2011 04:06:55 +0200
        # by mkconf 3.1.2-2
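    The "not ready yet" prompt fires when the array has not been assembled by the time the boot process reaches the fstab entry, so the usual fix is making sure the initramfs carries the current array definition. A hedged sketch, with device names as above:

        # compare the live array definition with what mdadm.conf says
        sudo mdadm --detail --scan

        # if they differ, update /etc/mdadm/mdadm.conf accordingly, then
        # rebuild the initramfs so assembly happens early in boot
        sudo update-initramfs -u

    If the array is instead just slow to appear (large intent bitmap, staggered spin-up of five disks), Ubuntu's "nobootwait" fstab option or a rootdelay kernel parameter are the common ways to keep boot from stopping on it.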


  • so, my Cassandra consultant left me and now I'm thinking about reverting to mysql

    - by sathia
    I run a middle-sized community, and some time ago I started to develop social capabilities such as follow, status updates, a wall, etc. For some reason I thought that Cassandra was the right tool for the job, so I looked online for a Cassandra developer and found a very talented one. Unluckily, in the midst of the development the dev left (too many jobs), and so here I am with a very nice class, a very nice demo, but a lot of fears that I won't be able to handle basic things such as compaction, scaling, etc. My biggest fear is going online with all this coolness and then having the site inaccessible for hours or days. The MySQL consultant (very talented too) keeps telling me that I should stick with MySQL, which I know rather well, and in case something goes wrong we can manage; in that case I would take the class made for Cassandra and abstract it for MySQL. My question is this: should I find another dev/consultant and stick with Cassandra, because for social features it is definitely the best tool for the job, or should I listen to the MySQL consultant and revert to MySQL?

    About 15k logins each day
    Average 20 actions per user
    Avg 6 followers per user

    (These are current statistics, but of course I'd like to increase them as much as possible.)


  • Which is more important in a web application code promotion hierarchy? production environment to repo equivalence or unidirectional propagation?

    - by ghbarratt
    Let's say you have a code promotion hierarchy consisting of several environments, the polar ends of which are development (dev) and production (prod). Let's say you also have a web application where important (but not developer-controlled) files are created (and perhaps altered) in the production environment. Let's say that you (or someone above you) decided that the files which are controlled/created/altered/deleted in the production environment need to go into the repository. Which of the following two practices do you find best?

    1. Committing these non-developed file modifications made in the production environment, so that the repository reflects the production environment as closely and as often as possible. (See the sketch below for what this can look like in practice.)

    2. Generally ignoring the non-developed production environment alterations, placing confidence in backups to restore the production environment should it be harmed, and keeping a resolution to avoid pushing developments through the promotion hierarchy in the reverse direction (avoiding pushing from prod to dev), only committing the files found in the production environment if they are absolutely necessary in other environments for development.

    So, 1 or 2, and why?

    PS - I am currently slightly biased toward maintaining production-environment-to-repository equivalence (option 1), but I keep an open mind and would accept an answer supporting either.
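    For concreteness, option 1 is often implemented as a scheduled commit job on the prod host. A minimal sketch, assuming git and entirely hypothetical paths and branch names:

        #!/bin/sh
        # snapshot prod-generated files back into the repository (run from cron)
        cd /var/www/app || exit 1
        git add -A uploads/ config/generated/
        # commit only when something actually changed
        git diff --cached --quiet || git commit -m "prod snapshot $(date -u +%Y-%m-%d)"
        git push origin prod-snapshots

    The dedicated branch makes the direction explicit: prod state flows into its own branch for inspection, never directly back into dev.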


  • LUKS no longer accepts my passphrase

    - by Two Spirit
    I created a 4-drive RAID5 setup using mdadm, upgrading from 2TB drives to the new Hitachi 7200RPM 4TB drives. I can initially open my LUKS partition, but later can no longer access it — even though I have the right passphrases. It was working, and then at some unknown point in time I lose access to LUKS. I've used the same procedure for upgrading from 500G to 1TB to 1.5TB to 2TB. After the first time this happened, a week ago, I thought maybe there was some corruption, so I added a second key as a backup. After the second time the LUKS became inaccessible, none of the keys worked. I put LUKS on it using:

        cryptsetup -c aes -s 256 -y luksFormat /dev/md0

        # cryptsetup luksOpen /dev/md0 md0_crypt
        Enter LUKS passphrase:
        Enter LUKS passphrase:
        Enter LUKS passphrase:
        Command failed: No key available with this passphrase.

    The first time this happened, while I was upgrading to 4TB drives, I thought it was a fluke, and ultimately had to recover from backups. I then used luksAddKey to add a second key as a backup. It happened again, and I tried both passphrases; neither worked. The only thing I'm doing differently this time around is that I've upgraded to 4TB drives, which use GPT instead of fdisk. The last time I had to reboot the box was over 2 years ago. I'm using ubuntu-8.04-server with kernel 2.6.24-29; I upgraded to 2.6.24-31, but that didn't fix the problem.
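    Two hedged checks that help separate "wrong passphrase" from "damaged or relocated header" in a case like this (plain cryptsetup commands; luksHeaderBackup needs a reasonably recent cryptsetup, which may rule out a stock 8.04 build):

        # show which key slots are populated; if both added slots show
        # ENABLED, the header still knows about the keys
        sudo cryptsetup luksDump /dev/md0

        # once it opens again, keep a header backup off the array;
        # restoring the header brings the passphrases back with it
        sudo cryptsetup luksHeaderBackup /dev/md0 \
             --header-backup-file /root/md0-luks-header.img

    With grow/reshape operations in the mix, a LUKS header whose offset shifted along with the md layout produces exactly "No key available with this passphrase", so comparing luksDump output before and after a reshape is worth the minute it takes.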


  • Finishing an iteration early

    - by f1dave
    I'd like some input on this from those working with agile methodologies... A current project is finding that development on our planned user stories finishes some time before the end of the iteration, and that the testing effort and business acceptance are what actually drag us out longer towards the end. This means that the devs in question have spare time, and they're essentially going out to the iteration+1 backlog and starting work on cards there before our current iteration's cards are 'done'. As iteration manager, I want to put a stop to this — I want a more team-orientated approach where the group takes ownership of getting all the cards done, as opposed to "Well, dev's done, so what do I dev next?" The problem I face is convincing the team of this. On one hand, I understand why the devs don't want to test the code they've written (there are unit tests they write, of course, but the manual testing to be done could be influenced by their bias). The team sees working ahead as making our next iterations easier, because a lot of the work is done before we start. I see this as screwing with the whole system of planning versus actuals — but it's difficult to convince the team as to why this matters. What advice can you give? How do we stop devs reaching ahead? What should they be doing instead? How much of a problem is this in the scheme of things, if things are still getting done?


  • IPv6 tunnels - any easy way to turn them on and off?

    - by Rob Hoare
    I've set up a tunnelbroker.net (Hurricane Electric) IPv6 tunnel from my laptop running 12.04. It works fine, and allows me to test the dual-stack configuration on my remote webservers etc. until native IPv6 is available from my ISP. However, there are times when I don't want the tunnel: for example, if I'm accessing something that requires an IPv4 address in my own country rather than the Tunnelbroker tunnel endpoint, or if I'm away from the local IPv4 tunnel endpoint, or if I simply want to test without IPv6. Is there a simple way to disable and then re-enable the IPv6 tunnel, without rebooting? For context, here's what's in my /etc/network/interfaces (NNN replaces numbers):

        auto he-ipv6
        iface he-ipv6 inet6 v4tunnel
            endpoint 216.218.NNN.NNN
            address 2001:470:NNN:NNN::2
            netmask 64
            up ip -6 route add default dev he-ipv6
            down ip -6 route del default dev he-ipv6

    Is there a network manager application (GUI or command line) to selectively enable/disable parts of /etc/network/interfaces, or IPv6 in general? I found that even after commenting that out (and reloading networking) it's tough to get the IPv6 to go away. A "tunnel on/off" button in networking would be great, like using a VPN.
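    Since the tunnel is already a named stanza in /etc/network/interfaces, the stock ifupdown tools can toggle it without a reboot — a sketch, using the interface name from the stanza above:

        sudo ifdown he-ipv6    # tear the tunnel (and its default route) down
        sudo ifup he-ipv6      # bring it back

    Dropping the "auto he-ipv6" line additionally stops it from coming up at boot, leaving ifup as the explicit on switch.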


  • Cannot get temperatures in Dell Studio 1558

    - by Athul Iddya
    I could never get proper temperatures on my Dell Studio 1558; lm-sensors and acpi give wrong readings. The output of sensors is:

        $ sensors
        acpitz-virtual-0
        Adapter: Virtual device
        temp1:        +26.8°C  (crit = +100.0°C)
        temp2:         +0.0°C  (crit = +100.0°C)

    acpi -V gives me:

        $ acpi -V
        Battery 0: Full, 100%
        Battery 0: design capacity 414 mAh, last full capacity 369 mAh = 89%
        Adapter 0: on-line
        Thermal 0: ok, 0.0 degrees C
        Thermal 0: trip point 0 switches to mode critical at temperature 100.0 degrees C
        Thermal 0: trip point 1 switches to mode passive at temperature 95.0 degrees C
        Thermal 0: trip point 2 switches to mode active at temperature 71.0 degrees C
        Thermal 0: trip point 3 switches to mode active at temperature 55.0 degrees C
        Thermal 1: ok, 26.8 degrees C
        Thermal 1: trip point 0 switches to mode critical at temperature 100.0 degrees C
        Thermal 1: trip point 1 switches to mode active at temperature 71.0 degrees C
        Thermal 1: trip point 2 switches to mode active at temperature 55.0 degrees C
        Cooling 0: LCD 0 of 15
        Cooling 1: Processor 0 of 10
        Cooling 2: Processor 0 of 10
        Cooling 3: Processor 0 of 10
        Cooling 4: Processor 0 of 10
        Cooling 5: Fan 0 of 1
        Cooling 6: Fan 0 of 1

    I suspect even hddtemp gives bogus readings, as it's always at 46:

        $ sudo hddtemp /dev/sda
        /dev/sda: ST9500420AS: 46°C

    I have gone through some bug reports; some people used to have the same problem after resuming from suspend, but I always have this problem. I updated to the latest BIOS from Windows a couple of weeks ago; will updating from Ubuntu change anything?

    CORRECTION: hddtemp's readings do change. It's now at 45.
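    When sensors only shows the (often unreliable) acpitz virtual device, it usually means no hardware-monitoring kernel module has been bound yet. A hedged first step — sensors-detect is part of the lm-sensors package and prints the exact modules it wants loaded:

        sudo sensors-detect     # accept the safe defaults at each probe
        # load whatever it recommends; the module below is an example only
        sudo modprobe coretemp
        sensors                 # re-check: per-core CPU readings should appear

    On Core i5 hardware, coretemp is the likely module for per-core temperatures, but the sensors-detect output is authoritative for this particular laptop.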


  • How to deal with a CEO making all technical decisions but with little technical knowledge?

    - by anonymous
    Hi. Question posted anonymously for obvious reasons. I am working in a company with a dev group of 5-6 developers, and I am in a situation which I have a hard time dealing with. Every technical choice (language, framework, database, database schema, configuration scheme, etc.) is decided by the CEO, often without much rationale. It is very hard to modify those choices, and his main argument consists of "I don't like this", even when we propose several alternatives with detailed pros/cons. He will also decide to rewrite our core product from scratch without giving a reason why, and he never participates in dev meetings because he considers they make things slower... I am already looking at alternative job opportunities, but I was wondering if there is anything we (the developers) could do to improve the situation. Two examples which shocked me:

    - He will ask us to implement something akin to configuration management, but he rejects any existing framework because it is not written in the language he likes (even though the implementation language is irrelevant). He also expects us to be able to write such systems in a couple of days, "because it is very simple".

    - He keeps rewriting our core product from scratch on his own because the current codebase is too bad (a codebase whose design was his). We are at our third rewrite in one year, each rewrite worse than the previous one.

    Things I have tried so far: doing elaborate benchmarks on our product (he keeps complaining that our software is too slow, and justifies rewrites as making it faster), implementing solutions with existing products as working proof instead of just making pros/cons charts, etc. But still 90% of those efforts go to the trashbin (never with any rationale beyond him not liking it, again), and I often get reprimanded because I don't do exactly as he wants (he does not realize that what he wants is impossible).


  • How to recover broken dpkg after lucid-bleed ppa-purge?

    - by TryTryAgain
    Did a ppa-purge of lucid-bleed, and dpkg didn't downgrade properly; now it is broken:

        dpkg: PreDepends: tar (>= 1.23) but 1.22-2ubuntu1 is to be installed

    What scares me is that when simulating the removal of dpkg I get:

        Removing this package may render the system unusable.
        Are you sure you want to do that?

    and then the list of packages which depend on it, which would also be removed, is obviously very long. Is it safe for me to remove dpkg just to reinstall it? How would I ensure the packages that were also removed are then reinstalled? Will forcing the version of dpkg help? (FYI: simulating a forced version brings up a much smaller list of applications that would also be removed.) Any other suggestions?

    Additional information based on comments: the ppa-purge log is at http://pastebin.com/1kT8cLvP. If I sudo apt-get install dpkg=1.15.5.6ubuntu4.5, I get:

        The following packages have unmet dependencies:
          libdpkg-perl: Depends: dpkg (= 1.15.8) but 1.15.5.6ubuntu4.5 is to be installed

    which sucks, because that means more would be broken after doing so. But when I force the version through Synaptic I get:

        To be removed: alien, build-essential, cdbs, checkinstall, debhelper, devscripts, dpkg-dev, google-earth-stable, googleearth-package, libdpkg-perl, lintian, lsb, lsb-core, lsb-cxx, lsb-desktop, lsb-graphics, lsb-languages, lsb-multimedia, lsb-printing, lsb-qt4, lsb-security, ubuntu-dev-tools
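    Never remove dpkg outright — unpacking the lucid version by hand, bypassing the broken tool, is the safer path. A heavily hedged sketch (the URL and filename are illustrative only; match them to the lucid pool and your architecture):

        # fetch lucid's dpkg .deb directly from the archive
        wget http://archive.ubuntu.com/ubuntu/pool/main/d/dpkg/dpkg_1.15.5.6ubuntu4.5_i386.deb

        # a .deb is an ar archive; unpack its payload over / without running dpkg
        ar x dpkg_1.15.5.6ubuntu4.5_i386.deb data.tar.gz   # member may be .bz2/.xz
        sudo tar -xzf data.tar.gz -C /

        # with a working lucid dpkg in place, let apt settle the rest
        sudo apt-get install -f

    The point is to break the circular pre-dependency by hand exactly once; the leftover PPA-only packages (libdpkg-perl here) can then be removed or downgraded through apt as usual.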


  • Really impossible to have Gimp 2.8 on Ubuntu 11.10?

    - by ubuntico
    For days I have been trying to find a solution and repositories to install GIMP 2.8 on Ubuntu 11.10, and each time I hit the same error. I tried to update pango via sudo apt-get install pango-graphite:

        [sudo] password for xxx:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        pango-graphite is already the newest version.

    Then I also tried sudo apt-get install libgdk-pixbuf2*:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Note, selecting 'libgdk-pixbuf2.0-0' for regex 'libgdk-pixbuf2*'
        Note, selecting 'libgdk-pixbuf2.0-dev' for regex 'libgdk-pixbuf2*'
        Note, selecting 'libgdk-pixbuf2.0-doc' for regex 'libgdk-pixbuf2*'
        Note, selecting 'libgdk-pixbuf2-ruby' for regex 'libgdk-pixbuf2*'
        Note, selecting 'libgdk-pixbuf2-ruby1.8' for regex 'libgdk-pixbuf2*'
        Note, selecting 'libgdk-pixbuf2-ruby1.8-dbg' for regex 'libgdk-pixbuf2*'
        Note, selecting 'libgdk-pixbuf2.0-common' for regex 'libgdk-pixbuf2*'
        libgdk-pixbuf2-ruby is already the newest version.
        libgdk-pixbuf2-ruby1.8 is already the newest version.
        libgdk-pixbuf2-ruby1.8-dbg is already the newest version.
        libgdk-pixbuf2.0-doc is already the newest version.
        libgdk-pixbuf2.0-0 is already the newest version.
        libgdk-pixbuf2.0-dev is already the newest version.
        libgdk-pixbuf2.0-common is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 31 not upgraded.

    And still the error :(. Please help, as I do not want to upgrade to Ubuntu 12.04 just to have GIMP. Is this really mission impossible?
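    A likely route that skips the manual dependency chasing: the widely used GIMP PPA carried 2.8 builds for several Ubuntu releases. A sketch, with the caveat that whether it still publishes oneiric (11.10) packages needs checking on its Launchpad page first:

        sudo add-apt-repository ppa:otto-kesselgulasch/gimp
        sudo apt-get update
        sudo apt-get install gimp

    If the PPA turns out not to cover oneiric, building 2.8 from source against oneiric's libraries is the remaining path — and at that point the pango/gdk-pixbuf errors above become configure-time messages that name the exact -dev package missing.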


  • 13.10 doesn't boot on Vaio Pro 13

    - by vaioonbuntu
    I just installed Ubuntu 13.10 on my new Vaio Pro 13, with Secure Boot disabled, but using UEFI and not legacy mode. I did an encrypted LVM installation and erased the complete SSD. It booted just fine from USB, but after installation it doesn't boot: the Vaio failed-boot screen appears. I then tried this advice: 13.10 on vaio pro with UEFI. Sadly it fails for me with "/usr/sbin/grub-probe: error: failed to get canonical path of /cow." I then mounted the encrypted partition with Nautilus and tried this: Cannot update grub with parameters on live USB — with /dev/sda2, and then installing GRUB to /dev/sda. That didn't succeed either, warning me that the "GPT partition label contains no BIOS Boot Partition; embedding won't be possible". What do I have to do to fix GRUB and be able to boot my finished install?

    Here's my Boot Repair log: http://paste.ubuntu.com/6386598/

    I would really appreciate any help. I'm so happy to finally be able to ditch my big fat MacBook Pro and use Ubuntu on my new, light Vaio Pro — if only I could fix GRUB.

    best, x
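    Both error messages point the same way: grub-probe ran against the live session's overlay (/cow), and the BIOS-boot warning means grub-install was building the BIOS flavour of GRUB rather than the EFI one. The usual repair is a chroot from the live USB booted in UEFI mode. A sketch with loudly hypothetical device names — the encrypted PV, the LVM mapper names, and the EFI partition (/dev/sda2 here) must all be read from your own layout first:

        sudo cryptsetup luksOpen /dev/sda3 cryptroot   # unlock the encrypted PV
        sudo vgchange -ay                              # activate the LVM volumes
        sudo mount /dev/mapper/ubuntu--vg-root /mnt    # root LV (name varies)
        sudo mount /dev/sda2 /mnt/boot/efi             # the EFI System Partition
        for d in /dev /dev/pts /proc /sys; do sudo mount --bind $d /mnt$d; done
        sudo chroot /mnt grub-install --target=x86_64-efi /dev/sda
        sudo chroot /mnt update-grub

    Run from a real root filesystem instead of /cow, grub-probe gets a canonical path again, and the --target flag keeps grub-install from looking for a BIOS Boot Partition that a pure-UEFI disk doesn't have.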


  • mount ext4 formatted external drive -> the drive's green light won't stop flashing

    - by Gohlool
    I've installed Kubuntu (after 10 years I am trying to play with Linux again) and managed to attach an external 1TB HDD over USB. The drive was formatted with NTFS and everything was working OK. I also changed /etc/fstab; here is my NTFS mount setting:

        /dev/sdb1 /media/samsung ntfs-3g auto,user,uid=1000,gid=1000,fmask=000,utf-8 0 0

    Now I've repartitioned the drive and formatted it with an ext4 filesystem, and changed my fstab line to:

        /dev/sdb1 /media/samsung ext4 defaults,noatime 0 0

    Now, when I plug my drive in (or call sudo mount -a), the external drive's green light starts to flash and won't stop — but the mount works. What is the problem? Is this because of ext4? With NTFS this didn't happen. By the way, after changing the owner of /media/samsung and setting permissions 777, I can also access my drive, e.g. creating new folders etc. (although it's flashing constantly). What is my mistake? And can you please let me know how to set the owner and the permissions for my /media/samsung directory in fstab for ext4, like I did for NTFS? Thanks in advance.
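    On the ownership question: unlike NTFS, ext4 stores owner and permissions in the filesystem itself, so uid=/gid=/fmask= mount options don't exist for it — ownership is set once on the mounted tree instead. A short sketch, with uid/gid 1000 matching the values from the NTFS line above:

        sudo chown -R 1000:1000 /media/samsung   # hand the whole tree to your user
        chmod 755 /media/samsung                 # or looser, to taste

    After that, "defaults,noatime" in fstab is all ext4 needs; the ownership survives remounts because it lives on the disk, not in the mount options.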


  • Dealing with bad/incomplete/unclear specifications?

    - by eagerMoose
    I'm working on a project where our dev team gets the specifications from the business part of the company. Both business management and IT management require estimates and deadline projections, as they should. The good thing is that estimates are mostly made by the actual developers who will implement the required features. The bad thing is that the specifications are usually either too simple (you're left with a lot of question marks over your head because a lot of information seems to be missing) or too complex (to the point that you can't even visualize where everything would "fit" in the app). More often than not, the business part of the specs is either incomplete or unaware of what can and can't be done (given the previously implemented business logic). The dev team is given about a day per new spec to produce an estimate, and we do try to clear up uncertainties, usually by meeting with whoever wrote the spec. Most of the time it turns out that the spec writers haven't really thought everything through, and it's usually only when we start designing and developing that we end up in trouble, as a lot of the spec seems to have holes. How do you deal with this? Are you generous with estimates in advance?


  • what languages are good selling points on a resume? [closed]

    - by Thomas Galvin
    I have a good amount of experience with C# and Java at the moment, but after my education and whatnot I wish to be capable in more than just two high-level, comparatively limited languages, and from what I've seen, languages like C(++) or PHP are in demand at the moment. I've thought about learning the following:

    - C. Very standard, lightweight and available on everything. However, very old and mostly procedural.
    - C++. Standard like C, but I've read in some places that it encourages bad program design and the use of dodgy libraries — though similar things have been said about C too, so I'll take that with a grain of salt.
    - D. Quite new, but it looks promising. Will it be relevant or applicable in the future, though?
    - PHP. With the internet becoming ever more important, I think this might be the one to go with, but the code itself isn't very intuitive.
    - CoffeeScript (or plain JavaScript). With Microsoft's new idea of HTML5+JS for everything under the sun, this doesn't look like a bad choice. However, things do change, and I wish to be primarily a software dev, not a web dev.

    So out of the above list, or any others that you could suggest, what should I begin to focus on? And what is your opinion on staying with C#?


  • Diskless with Ubuntu 12.04

    - by user139462
    I'm trying to set up a new diskless solution with Ubuntu 12.04, without any success so far. I followed this howto: https://help.ubuntu.com/community/DisklessUbuntuHowto. But the initramfs seems unable to mount my NFS share.

    On the server side, my /etc/exports is:

        /srv/nfs4         192.168.0.0/24(fsid=0,rw,no_subtree_check)
        /srv/nfs4/nfsroot 192.168.0.0/24(rw,no_root_squash,no_subtree_check,fsid=1,nohide,insecure,sync)

    I'm able to mount the NFS share on a standard Ubuntu installation without any problem; on any client, both of these work:

        mount 192.168.0.3:/nfsroot /mnt
        mount 192.168.0.3:/srv/nfs4/nfsroot /mnt

    My /tftpboot/pxelinux.cfg/default config file is:

        DEFAULT vmlinuz-3.5.0-25-generic root=/dev/nfs initrd=initrd.img-3.5.0-25-generic nfsroot=192.168.0.3:/nfsroot ip=dhcp rw

    I also tried:

        DEFAULT vmlinuz-3.5.0-25-generic root=/dev/nfs initrd=initrd.img-3.5.0-25-generic nfsroot=192.168.0.3:/srv/nfs4/nfsroot ip=dhcp rw

    What I got in the initramfs:

    With the setting [nfsroot=192.168.0.3:/nfsroot], the diskless client prints "mount call failed - server replied: Permission denied", and the syslog of my NFS server shows:

        rpc.mountd[1266]: refused mount request from 192.168.0.10 for /nfsroot (/): not exported

    With the setting [nfsroot=192.168.0.3:/srv/nfs4/nfsroot], the diskless client prints "mount: the kernel lacks NFS v3 support", and the syslog of my NFS server shows:

        Mar 11 14:03:06 BootFromLan rpc.mountd[1266]: authenticated mount request from 192.168.0.10:834 for /srv/nfs4/nfsroot (/srv/nfs4/nfsroot)
        Mar 11 14:03:06 BootFromLan rpc.mountd[1266]: refused unmount request from 192.168.0.10 for /root (/): not exported
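    The second attempt actually authenticates — "the kernel lacks NFS v3 support" comes from the mount helper inside the client's initramfs, which suggests that initramfs wasn't built for NFS boot. A sketch of the rebuild the DisklessUbuntuHowto relies on, run inside the client image on the server (paths as exported above):

        sudo chroot /srv/nfs4/nfsroot
        # in /etc/initramfs-tools/initramfs.conf set:
        #   MODULES=netboot
        #   BOOT=nfs
        update-initramfs -u
        exit

    Then copy the regenerated initrd.img into /tftpboot so PXE serves the NFS-capable image rather than the disk-oriented one.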


  • How do I debug an overheating problem?

    - by Tab
    Hello guys. I have a problem with my laptop (Dell Inspiron 1564, Core i5, 4GB RAM, ATI Mobility Radeon HD 4300, running Ubuntu 10.10 32-bit). It shuts down abruptly, without even a lag in the application I am working with before the shutdown. I think it's an overheating problem. The laptop is hot all the time when I am running Ubuntu; when I switch back to Windows, even under intense load it won't shut down or show any problem, as long as I keep proper ventilation (when the air openings are blocked, it does the same thing). On Ubuntu I don't usually do things that need much CPU power — mostly surfing the internet, coding web pages and sometimes playing with Python and Ruby. I am not enabling desktop effects, so there is no GPU load except the normal GNOME GUI. As I write this, the processor load in the panel monitor applet is 0%, memory is 11% used by programs and 22% by cache, and I have the CPU frequency monitor for each of the 4 cores set to 1.20 GHz (the lowest possible value; I am not sure if this applet really limits CPU usage). Running sensors in a terminal gave me:

        temp1:        +26.8°C  (crit = +100.0°C)
        temp2:         +0.0°C  (crit = +100.0°C)

    hddtemp /dev/sda at the terminal gave me:

        /dev/sda: WDC WD3200BEVT-75ZCT2: 46°C

    All that seems fine, but the laptop is really hot: I can feel it in the keyboard, the touchpad is painful to touch, and the fan is always spinning. I am also placing two small USB-powered fans under the laptop right now, and the laptop is lifted over the fans so it's well ventilated. When I run Windows it doesn't get that hot, except when there is a really big load on the CPU — and this is keeping me away from using Linux for everyday tasks. I don't care much about speed; I can deal with low speed as long as it's not going to shut down abruptly. So please, can you tell me what the possible causes are, and where I should start?
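    Since the readings look implausible (temp2 pinned at 0.0°C) while the chassis is clearly hot, a useful first step is simply logging what the sensors claim right up to a shutdown — if the log still says 26°C at the moment the machine dies, the sensor mapping itself is the thing to debug. A small sketch:

        # append timestamped readings once a minute; the log survives the crash
        while true; do
            { date; sensors; } >> "$HOME/temp.log"
            sync
            sleep 60
        done

    Running sudo sensors-detect (from lm-sensors) beforehand may also bind a real CPU sensor module, which would make both the applet and the log trustworthy.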


  • Error while installing Komparator4

    - by Lucio
    I downloaded the Komparator source from this page. The INSTALL file in the source says the following: unpack komparator4-xxx.tar.bz2, open a shell inside this directory, and run:

        mkdir build
        cd build
        cmake -DCMAKE_INSTALL_PREFIX=`kde4-config --prefix` ..
        make
        sudo make install

    I unpacked the file, made the directory and entered it, but when I try to cmake (step 3), the terminal prints the following errors, preventing me from running make & install:

        CMake Error at /usr/share/cmake-2.8/Modules/FindKDE4.cmake:98 (MESSAGE):
          ERROR: cmake/modules/FindKDE4Internal.cmake not found in
          /home/lucio/.kde/share/apps;/usr/share/kde4/apps
        Call Stack (most recent call first):
          CMakeLists.txt:2 (find_package)

        CMake Warning (dev) in CMakeLists.txt:
          No cmake_minimum_required command is present. A line of code such as

            cmake_minimum_required(VERSION 2.8)

          should be added at the top of the file. The version specified may be lower
          if you wish to support older CMake versions for this project. For more
          information run "cmake --help-policy CMP0000". This warning is for project
          developers. Use -Wno-dev to suppress it.

        -- Configuring incomplete, errors occurred!

    What do these errors mean, and how can I fix them?
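    FindKDE4Internal.cmake ships with the KDE 4 development files, so the error usually just means the -dev package isn't installed. A likely fix on stock Ubuntu repos (package name as in the 12.x-era archives):

        sudo apt-get install kdelibs5-dev
        cd build && cmake -DCMAKE_INSTALL_PREFIX=`kde4-config --prefix` ..

    The cmake_minimum_required notice, by contrast, is only a warning aimed at the project's developers; it doesn't block the build once the KDE modules are found.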


  • How to Implement Project Type "Copy", "Move", "Rename", and "Delete"

    - by Geertjan
    You've followed the NetBeans Project Type Tutorial and now you'd like to let the user copy, move, rename, and delete the projects conforming to your project type. When they right-click a project, they should see the relevant menu items, and those menu items should provide dialogs for user interaction, followed by event handling code to deal with the current operation. Right now, at the end of the tutorial, the "Copy" and "Delete" menu items are present but disabled, while the "Move" and "Rename" menu items are absent. The NetBeans Project API provides a built-in mechanism out of the box that you can leverage for project-level "Copy", "Move", "Rename", and "Delete" actions. All the functionality is there for you to use; all that you need to do is a bit of enablement and configuration, which is described below.

    To get started, read the following from the NetBeans Project API:

        http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/ActionProvider.html
        http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/CopyOperationImplementation.html
        http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/MoveOrRenameOperationImplementation.html
        http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/DeleteOperationImplementation.html

    Now, let's do some work. For each of the menu items we're interested in, we need to do the following:

    1. Provide enablement and invocation handling in an ActionProvider implementation.
    2. Provide appropriate OperationImplementation classes.
    3. Add the new classes to the Project Lookup.
    4. Make the Actions visible on the Project Node.
    5. Run the application and verify the Actions work as you'd like.

    Here we go:

    1. Create an ActionProvider. Here you specify the Actions that should be supported, the conditions under which they should be enabled, and what should happen when they're invoked, using lots of default code that lets you reuse the functionality provided by the NetBeans Project API:

        class CustomerActionProvider implements ActionProvider {
            @Override
            public String[] getSupportedActions() {
                return new String[]{
                    ActionProvider.COMMAND_RENAME,
                    ActionProvider.COMMAND_MOVE,
                    ActionProvider.COMMAND_COPY,
                    ActionProvider.COMMAND_DELETE
                };
            }
            @Override
            public void invokeAction(String string, Lookup lkp) throws IllegalArgumentException {
                if (string.equalsIgnoreCase(ActionProvider.COMMAND_RENAME)) {
                    DefaultProjectOperations.performDefaultRenameOperation(
                            CustomerProject.this, "");
                }
                if (string.equalsIgnoreCase(ActionProvider.COMMAND_MOVE)) {
                    DefaultProjectOperations.performDefaultMoveOperation(
                            CustomerProject.this);
                }
                if (string.equalsIgnoreCase(ActionProvider.COMMAND_COPY)) {
                    DefaultProjectOperations.performDefaultCopyOperation(
                            CustomerProject.this);
                }
                if (string.equalsIgnoreCase(ActionProvider.COMMAND_DELETE)) {
                    DefaultProjectOperations.performDefaultDeleteOperation(
                            CustomerProject.this);
                }
            }
            @Override
            public boolean isActionEnabled(String command, Lookup lookup) throws IllegalArgumentException {
                if ((command.equals(ActionProvider.COMMAND_RENAME))) {
                    return true;
                } else if ((command.equals(ActionProvider.COMMAND_MOVE))) {
                    return true;
                } else if ((command.equals(ActionProvider.COMMAND_COPY))) {
                    return true;
                } else if ((command.equals(ActionProvider.COMMAND_DELETE))) {
                    return true;
                }
                return false;
            }
        }

    Importantly, to round off this step, add "new CustomerActionProvider()" to the "getLookup" method of the project. If you were to run the application right now, all the Actions we're interested in would be enabled (if they are visible, as described in step 4 below), but when you invoke any of them you'd get an error message, because each of the DefaultProjectOperations above looks in the Lookup of the Project for the presence of an implementation of a class for handling the operation. That's what we're going to do in the next step.

    2. Provide Implementations of Project Operations. For each of our operations, the NetBeans Project API lets you implement classes to handle the operation. The dialogs for interacting with the project are provided by the NetBeans project system, but what happens with the folders and files during the operation can be influenced via the operations. Below are the simplest possible implementations, i.e., here we assume we want nothing special to happen. Each of them needs to be in the Lookup of the Project in order for the operation invocation to succeed.

        private final class CustomerProjectMoveOrRenameOperation implements MoveOrRenameOperationImplementation {
            @Override
            public List<FileObject> getMetadataFiles() {
                return new ArrayList<FileObject>();
            }
            @Override
            public List<FileObject> getDataFiles() {
                return new ArrayList<FileObject>();
            }
            @Override
            public void notifyRenaming() throws IOException {
            }
            @Override
            public void notifyRenamed(String nueName) throws IOException {
            }
            @Override
            public void notifyMoving() throws IOException {
            }
            @Override
            public void notifyMoved(Project original, File originalPath, String nueName) throws IOException {
            }
        }

        private final class CustomerProjectCopyOperation implements CopyOperationImplementation {
            @Override
            public List<FileObject> getMetadataFiles() {
                return new ArrayList<FileObject>();
            }
            @Override
            public List<FileObject> getDataFiles() {
                return new ArrayList<FileObject>();
            }
            @Override
            public void notifyCopying() throws IOException {
            }
            @Override
            public void notifyCopied(Project prjct, File file, String string) throws IOException {
            }
        }

        private final class CustomerProjectDeleteOperation implements DeleteOperationImplementation {
            @Override
            public List<FileObject> getMetadataFiles() {
                return new ArrayList<FileObject>();
            }
            @Override
            public List<FileObject> getDataFiles() {
                return new ArrayList<FileObject>();
            }
            @Override
            public void notifyDeleting() throws IOException {
            }
            @Override
            public void notifyDeleted() throws IOException {
            }
        }

    Also make sure to put the above classes into the Project Lookup.

    3. Check the Lookup of the Project. The "getLookup()" method of the project should now include the classes you created above, as shown in bold below:

        @Override
        public Lookup getLookup() {
            if (lkp == null) {
                lkp = Lookups.fixed(new Object[]{
                    this,
                    new Info(),
                    new CustomerProjectLogicalView(this),
                    new CustomerCustomizerProvider(this),
                    new CustomerActionProvider(),
                    new CustomerProjectMoveOrRenameOperation(),
                    new CustomerProjectCopyOperation(),
                    new CustomerProjectDeleteOperation(),
                    new ReportsSubprojectProvider(this),
                });
            }
            return lkp;
        }

    4. Make Actions Visible on the Project Node. The NetBeans Project API gives you a number of CommonProjectActions, including ones for the actions we're dealing with. Make sure the items in bold below are in the "getActions" method of the project node:

        @Override
        public Action[] getActions(boolean arg0) {
            return new Action[]{
                CommonProjectActions.newFileAction(),
                CommonProjectActions.copyProjectAction(),
                CommonProjectActions.moveProjectAction(),
                CommonProjectActions.renameProjectAction(),
                CommonProjectActions.deleteProjectAction(),
                CommonProjectActions.customizeProjectAction(),
                CommonProjectActions.closeProjectAction()
            };
        }

    5. Run the Application. When you run the application, try out the various actions:

    Copy. When you invoke the Copy action, you'll see a dialog asking for a new project name and location; the copy is performed when the Copy button is clicked. The message shown in red in that dialog might not be relevant to your project type. When you right-click the application and choose Branding, you can find the string in the Resource Bundles tab. However, note that the message will be shown in red no matter what the text is, hence you can really only put something like a warning message there; if you have no text at all, it will also look odd. If the project has subprojects, the copy operation will not automatically copy the subprojects. Take a look here and here for similar, more complex scenarios.

    Move. When you invoke the Move action, a dialog is shown for choosing the project's new location.

    Rename. The Rename Project dialog is shown when you invoke the Rename action. I tried it, and both the display name and the folder on disk are changed.

    Delete. When you invoke the Delete action, a confirmation dialog appears. The checkbox is not checkable in the default scenario, and when the dialog is confirmed, the project is simply closed, i.e., the node hierarchy is removed from the application. However, if you truly want to let the user delete the project on disk, pass the Project to the DeleteOperationImplementation and then add the children of the Project you want to delete to the getDataFiles method:

        private final class CustomerProjectDeleteOperation implements DeleteOperationImplementation {

            private final CustomerProject project;

            private CustomerProjectDeleteOperation(CustomerProject project) {
                this.project = project;
            }

            @Override
            public List<FileObject> getDataFiles() {
                List<FileObject> files = new ArrayList<FileObject>();
                FileObject[] projectChildren = project.getProjectDirectory().getChildren();
                for (FileObject fileObject : projectChildren) {
                    addFile(project.getProjectDirectory(), fileObject.getNameExt(), files);
                }
                return files;
            }

            private void addFile(FileObject projectDirectory, String fileName, List<FileObject> result) {
                FileObject file = projectDirectory.getFileObject(fileName);
                if (file != null) {
                    result.add(file);
                }
            }

            @Override
            public List<FileObject> getMetadataFiles() {
                return new ArrayList<FileObject>();
            }

            @Override
            public void notifyDeleting() throws IOException {
            }

            @Override
            public void notifyDeleted() throws IOException {
            }
        }

    Now the user will be able to check the checkbox, causing the method above to be called in the DeleteOperationImplementation. Hope this answers some questions or at least gets the discussion started. Before asking questions about this topic, please take the steps above and only then attempt to apply them to your own scenario.

    Useful implementations to look at:

        http://kickjava.com/src/org/netbeans/modules/j2ee/clientproject/AppClientProjectOperations.java.htm
        https://kenai.com/projects/nbandroid/sources/mercurial/content/project/src/org/netbeans/modules/android/project/AndroidProjectOperations.java


  • StyleCop Custom Rules

    - by Aligned
    There are several blogs on how to do this (http://scottwhite.blogspot.com/2008/11/creating-custom-stylecop-rules-in-c.html, etc.). I've found a few useful things to point out. Debugging is difficult, but here are the steps (thanks to Tintin's answer):

    1. Delete your custom rules.
    2. Open Visual Studio (for dev) and open your custom rule solution.
    3. Build & deploy the custom rules (a post-build action to copy the rules into the StyleCop folder is handy).
    4. Open Visual Studio (for test).
    5. In VS (dev), use Attach to Process on devenv.exe (the test VS instance) and set breakpoints in the rules you want to debug.
    6. In VS (test), right-click on the project and choose Run StyleCop.
    7. Debug.

    It worked once; now I'm having problems getting it to work again. I also get the message "Cannot evaluate expression because the code of the current method is optimized" when I try to look at properties.

    For looking at the source code of the StyleCop.CSharp.Rules.dll that comes with the install, I used JustDecompile from Telerik. Create one XML file and name it the same as the one .cs file (CodingGuidelineRules.cs and CodingGuidelineRules.xml).

    Deploy:

    1. Build in Visual Studio.
    2. Close Visual Studio (StyleCop is running, so you can't overwrite your DLL without closing).
    3. Copy the DLL from bin to C:\Program Files (x86)\StyleCop 4.7\.
    4. Open the settings file or re-open Visual Studio.


  • Ubuntu 10.04 hung on unattended apt-get upgrade?

    - by hafichuk
    I'm looking into why our Ubuntu web server hung this morning, and I see that there were some package upgrades a few hours prior to the problem. I was able to ssh into the system and get a snapshot from top:

        top - 08:13:54 up 210 days,  8:25,  2 users,  load average: 433.30, 422.40, 375.70
        Tasks: 1192 total, 381 running, 810 sleeping, 0 stopped, 1 zombie
        Cpu(s): 0.5%us, 6.1%sy, 0.0%ni, 93.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:  49549772k total, 48518392k used, 1031380k free, 960152k buffers
        Swap: 11595768k total, 279368k used, 11316400k free, 39355664k cached

    This is a 16-processor system, so I would typically expect a load in the low teens. I tried to restart Apache, which didn't work, and subsequently had to do a hard reboot to get it working again (which it is). One thing I found was that the server did an unattended package upgrade. Is it possible that upgrading PHP or curl (which our web sites use) might have caused Apache to stop responding? Here is the snippet from unattended-upgrades.log from this morning:

        2012-09-18 06:48:30,076 INFO Initial blacklisted packages:
        2012-09-18 06:48:30,076 INFO Starting unattended upgrades script
        2012-09-18 06:48:30,077 INFO Allowed origins are: ["['Ubuntu', 'lucid-security']"]
        2012-09-18 06:49:37,017 INFO Packages that are upgraded: gnupg-curl php5-dev linux-server dhcp3-common linux-libc-dev php5-curl gpgv gnupg linux-headers-server linux-image-server php5 php5-mysql php-pear php5-cli php5-common libapache2-mod-php5 dhcp3-client
        2012-09-18 06:49:37,018 INFO Writing dpkg log to '/var/log/unattended-upgrades/unattended-upgrades-dpkg_2012-09-18_06:49:37.017909.log'
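    Two details in that log narrow things down: libapache2-mod-php5 was upgraded (its maintainer scripts commonly reload Apache), and a new linux-image was installed, which does nothing until a reboot but pairs badly with a hung service. A hedged way to confirm the sequence and rein in the automation:

        # what dpkg actually did, step by step, around 06:49
        less /var/log/unattended-upgrades/unattended-upgrades-dpkg_2012-09-18_06:49:37.017909.log

        # review which origins may be auto-applied, or switch the nightly
        # run off so security updates become a manual, supervised step
        cat /etc/apt/apt.conf.d/50unattended-upgrades
        sudo dpkg-reconfigure -plow unattended-upgrades

    If the dpkg log shows Apache being restarted right at 06:49, a load average of 433 with 93% idle CPU points at processes piling up on a wedged resource (for example a half-reloaded mod_php or opcode cache) rather than raw CPU exhaustion.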


  • How to approach scrum task burn down when tasks have multiple peoples involvement?

    - by AgileMan
    In my company, a single task can never be completed by one individual: there is going to be a separate person to QA and code review each task. What this means is that each individual gives their estimate, per task, of how much time it will take to complete. The problem is: how should I approach burn down? Assume the following aggregated estimate for a task:

        10 hrs - Dev time
         4 hrs - QA
         4 hrs - Code Review
        Task estimate = 18 hrs

    At the end of each day I ask that the task be updated with "how much time is left until it is done". However, each person generally just thinks about their own part of it. Should they mark their own effort remaining and then add the other roles' estimates to that — e.g. if dev has 3 hours left and QA and review haven't started, the task shows 3 + 4 + 4 = 11 hours remaining? How are you doing this?

    UPDATE: To help clarify a few things, at my organization each task within a story requires 3 people:

    1. Someone to develop the task (do unit tests, etc...).
    2. A QA specialist to review the task (they primarily do integration and regression tests).
    3. A tech lead to do code review.

    I don't think there is a wrong way or a right way, but this is our way ... and that won't be changing. We work as a team to complete even the smallest level of a story whenever possible. You cannot actually test whether something works until it is dev complete, and you cannot review the quality of the code before then either ... so the best you can do is split things up into small logical slices, so that the bare minimum functionality can be tested and reviewed as early in the process as possible. My question to those who work this way is how to burn down a "task" under this setup. Unless a task has its own sub-tasks (which JIRA doesn't allow) ... I'm not sure of the best way to track "what's left" on a daily basis.

