Search Results

Search found 2719 results on 109 pages for 'gnu engineer'.

  • How would I force Debian to use the physical sector size on a hard disk?

    - by Confused User
    I just purchased a few new 3TB WD drives. These have physical 4k sectors, but there is some sort of layer which is providing 512B logical sectors (see the partition table below). In order to attempt to get some more speed out of my hard drives, I would like to get rid of this logical layer and actually use the physical 4k sectors. However, I can't figure out how to do this (or even if it's possible) from the man pages of fdisk and parted, or from searching Google. Does anybody know how this could be done? As to why this is relevant, this page demonstrates that merely aligning the sectors properly can already make up to a 25% speed difference for reads, and more than 2500% for writes in some cases! Getting rid of the logical sectors in favor of the physical ones should improve speeds even more. Thanks!

        $ parted /dev/sdc
        GNU Parted 2.3
        Using /dev/sdc
        Welcome to GNU Parted! Type 'help' to view a list of commands.
        (parted) print
        Model: ATA WDC WD30EZRX-00M (scsi)
        Disk /dev/sdc: 3001GB
        Sector size (logical/physical): 512B/4096B
        Partition Table: gpt

        Number  Start   End     Size    File system  Name  Flags
         1      1049kB  3001GB  3001GB  zfs
         9      3001GB  3001GB  8389kB

    P.S. I don't care about the data on the drives, I was just playing with different file systems. Also, this is my first time posting here, so please let me know if my posts should be formatted differently, etc.
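
    For reference, a minimal sketch of creating a 4k-aligned GPT partition with parted - the device name /dev/sdc is taken from the question, and mklabel destroys the existing partition table, so treat this as illustrative only:

        # destroys the current partition table on /dev/sdc
        parted /dev/sdc mklabel gpt
        # a 1MiB start is a multiple of 4096B, so the partition is 4k-aligned
        parted -a optimal /dev/sdc mkpart primary 1MiB 100%
        # verify the alignment of partition 1
        parted /dev/sdc align-check optimal 1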

  • recursive grep started at / hangs

    - by Martin
    I have used the following grep search pattern on multiple platforms:

        grep -r -I -D skip 'string_to_match' /

    For example on FreeBSD 8.0, FreeBSD 6.4 and Debian 6.0 (squeeze). The command does a recursive search starting from the root directory, assumes that binary files do not contain 'string_to_match', and skips devices, sockets and named pipes. FreeBSD 8.0 and FreeBSD 6.4 use GNU grep version 2.5.1 and Debian 6.0 uses GNU grep version 2.6.3. On FreeBSD 6.4, the last information printed to stderr was "grep: /dev/cuad0: Device busy". After this grep just idles, as according to "top -m io -o total" the I/O usage of grep is nonexistent. The same behavior holds under FreeBSD 8.0, but there the last information sent to stderr is "grep: /tmp/.wine-0: Permission denied" on my installation. In the case of Debian, the last output to stderr is "grep: /proc/sysrq-trigger: Input/output error". If I check the I/O usage of the grep process under Debian, it is the following:

        root@Debian:~# iotop -bp 22439
        Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
          TID  PRIO  USER  DISK READ  DISK WRITE  SWAPIN   IO      COMMAND
        22439  be/4  root  0.00 B/s   0.00 B/s    0.00 %  0.00 %  grep -r -I -D skip 10.10.10.99 /
        [the same zero-I/O sample repeats twice more]
        ^Croot@Debian:~#

    What might cause this? Is there a way to view which file grep is currently processing in case lsof is not present? I'm able to use lsof under Debian, and it looks like the problematic file name there is "0xc6b2c230 file struct, ty=0, op=0xc0d34120". I'm not sure what this is. I'm not able to use lsof or fstat under FreeBSD. PS: I know I could use the find utility, but this is not the question.
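
    On Linux, one way to see which file grep is processing without lsof is the /proc filesystem - a sketch using the PID from the question (the fd number 3 is an assumption; FreeBSD users would reach for procstat -f where available):

        ls -l /proc/22439/fd          # symlinks show every file the process has open
        cat /proc/22439/fdinfo/3      # the 'pos:' field shows the offset within fd 3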

  • What can cause kernel out_of_memory error?

    - by nbolton
    I'm running Debian GNU/Linux 5.0 and I'm experiencing intermittent out_of_memory errors coming from the kernel. The server stops responding to all but pings, and I have to reboot the server.

        # uname -a
        Linux xxx 2.6.18-164.9.1.el5xen #1 SMP Tue Dec 15 21:31:37 EST 2009 x86_64 GNU/Linux

    This seems to be the important bit from /var/log/messages:

        Dec 28 20:16:25 slarti kernel: Call Trace:
        Dec 28 20:16:25 slarti kernel: [<ffffffff802bedff>] out_of_memory+0x8b/0x203
        Dec 28 20:16:25 slarti kernel: [<ffffffff8020f825>] __alloc_pages+0x245/0x2ce
        Dec 28 20:16:25 slarti kernel: [<ffffffff8021377f>] __do_page_cache_readahead+0xc6/0x1ab
        Dec 28 20:16:25 slarti kernel: [<ffffffff80214015>] filemap_nopage+0x14c/0x360
        Dec 28 20:16:25 slarti kernel: [<ffffffff80208ebc>] __handle_mm_fault+0x443/0x1337
        Dec 28 20:16:25 slarti kernel: [<ffffffff8026766a>] do_page_fault+0xf7b/0x12e0
        Dec 28 20:16:25 slarti kernel: [<ffffffff8026ef17>] monotonic_clock+0x35/0x7b
        Dec 28 20:16:25 slarti kernel: [<ffffffff80262da3>] thread_return+0x6c/0x113
        Dec 28 20:16:25 slarti kernel: [<ffffffff8021afef>] remove_vma+0x4c/0x53
        Dec 28 20:16:25 slarti kernel: [<ffffffff80264901>] _spin_lock_irqsave+0x9/0x14
        Dec 28 20:16:25 slarti kernel: [<ffffffff8026082b>] error_exit+0x0/0x6e

    Full snippet here: http://pastebin.com/a7eWf7VZ

    I thought that perhaps the server was actually running out of memory (it has 1GB physical memory), but my Cacti memory graph looks OK to me. Strangely, though, the load graph goes through the roof shortly before the kernel crashes. What logs can I look at for more info?

    Update: Maybe noteworthy - the CPU percentage and network traffic graphs were both normal at the time of the crash. The only abnormality was the average load graph.
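
    A few hedged places to dig for more detail after an OOM event, beyond /var/log/messages:

        dmesg | grep -B 10 -i 'out of memory'   # the OOM killer also prints a per-process memory table
        grep -i oom /var/log/messages*          # earlier incidents
        sar -r                                  # historical memory usage, if sysstat is installed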

  • /usr/bin/install hangs, apparently due to SELinux

    - by Cooper
    I'm trying to use the GNU coreutils install utility, however it is hanging:

        /usr/bin/install -v test_file test_dir/
        `test_file' -> `test_dir/test_file'

    I see the same behavior whether I run as a normal user, or root/sudo. I ran an strace -f, and this is the end of the output:

        ...
        read(4, "<username>\t-d\tsystem_u:object_r:ho"..., 4096) = 2197 <0.000012>
        brk(0x6e3b1000) = 0x6e3b1000 <0.000009>
        mmap(NULL, 29138944, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2abd831ae000 <0.000014>
        munmap(0x2abd815dd000, 29138944) = 0 <0.003466>

    The read() is reading from /etc/selinux/targeted/contexts/files/file_contexts.homedirs, apparently successfully. It appears that the process is hanging right after the munmap, but continues to eat 100% CPU. My two questions are:

    1) Any good way to see what is going on with the process? I'm currently too lazy to compile a debug version of install I can run gdb on - but a strong suggestion in an answer here may motivate me to do so if needed.

    2) Any idea what the SELinux issue could be? I'm not too familiar with SELinux.

    Additional info of possible relevance:

        # ls -Z
        drwxr-xr-x  my_user 7001 user_u:object_r:user_home_t  test_dir
        -rw-r--r--  my_user 7001 user_u:object_r:user_home_t  test_file
        # id
        ... context=user_u:system_r:unconfined_t
        # uname -a
        Linux hostname 2.6.18-238.1.1.el5 #1 SMP Tue Jan 4 13:32:19 EST 2011 x86_64 x86_64 x86_64 GNU/Linux

    I am suspicious that SELinux + Quest Authentication Services (QAS) is causing the issue. QAS is generally well behaved, but it did cause /etc/selinux/targeted/contexts/files/file_contexts.homedirs to get quite large (~18k users, @23 lines per user).

    Update: install -v -Z user_u:object_r:user_home_t file dir/ seems to work. Can anyone suggest why, given that SELinux is in permissive mode (see comments)?
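
    For question 1, a hedged alternative to rebuilding coreutils with debug symbols is to attach gdb to the spinning process and grab a backtrace:

        gdb -p "$(pgrep -x install)" -batch -ex 'bt'   # assumes gdb is installed
        cat /proc/$(pgrep -x install)/wchan; echo      # kernel-side wait channel, no gdb needed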

  • Want to install GDB on a Fedora 7 machine

    - by RBA
    Hi, I want to install GDB, the GNU debugger, for debugging C programs on my Fedora machine. I downloaded the gzipped source from the GNU website, but it gives an error during the MAKE command, even though I am following all the steps described in the README file and in tutorials on the internet. Please guide. I have also tried yum install gdb and sudo apt-get install gdb. yum was not found on my system, so I installed it, but now it is giving some unusual error about a missing file - so no success with this. sudo apt-get is working, but it is also giving the following errors:

        [oracle@localhost Programs]$ sudo apt-get install gdb
        Password:
        Sorry, try again.
        Password:
        Sorry, try again.
        Password:
        oracle is not in the sudoers file. This incident will be reported.
        [oracle@localhost Programs]$ sudo apt-get install gdb

    I am in real need of this gdb tool. How should I go about it? Please share your experiences with this. Thanks!
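
    Note that apt-get is Debian/Ubuntu tooling; on Fedora the native route is yum, run as root since the oracle user is not in sudoers. A sketch:

        su -                  # become root directly instead of using sudo
        yum install gdb
        # or, building from the GNU tarball (version number assumed):
        tar xzf gdb-6.8.tar.gz && cd gdb-6.8
        ./configure && make && make install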

  • Failing to load rootfs: Ubuntu 10 + grub2 + rootfs ext4 w/ RAID1

    - by James
    I am having problems booting a new Ubuntu 10 (server) install. My primary HD (/dev/sda) is laid out as follows:

        Device     Boot  Start   End     Blocks       Id  System
        /dev/sda1  *     1       18      144553+      83  Linux                  <-- /BOOT
        /dev/sda2        19      182401  1464991447+  5   Extended
        /dev/sda5        19      2207    17583111     fd  Linux raid autodetect
        /dev/sda6        2208    11934   78132096     fd  Linux raid autodetect  <-- / (ROOTFS)
        /dev/sda7        11935   182401  1369276146   fd  Linux raid autodetect

    The rootfs is part of a RAID1 (software) array (currently degraded):

        # cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md2 : active raid1 sda6[1]
              78132032 blocks [2/1] [_U]

    The UUIDs for the partitions are as follows:

        # blkid /dev/sda1
        /dev/sda1: UUID="b25dd301-41b9-4f4d-9b0a-0e31713dd74c" TYPE="ext2"
        # blkid /dev/sda6
        /dev/sda6: UUID="af7b9ede-fa53-c0c1-74be-31ec752c5cd5" TYPE="linux_raid_member"
        # blkid /dev/md2
        /dev/md2: UUID="a0602d42-6855-482f-870c-6f6ecdcdae3f" TYPE="ext4"

    Finally, I have my grub2 menuentry set up as follows:

        ### BEGIN /etc/grub.d/10_linux ###
        menuentry 'Ubuntu, with Linux 2.6.32-25-server' --class ubuntu --class gnu-linux --class gnu --class os {
            insmod ext2
            insmod raid
            insmod mdraid
            set root='(hd0,1)'
            search --no-floppy --fs-uuid --set b25dd301-41b9-4f4d-9b0a-0e31713dd74c
            linux /vmlinuz-2.6.32-25-server root=UUID=a0602d42-6855-482f-870c-6f6ecdcdae3f ro nosplash noplymouth
            initrd /initrd.img-2.6.32-25-server
        }

    When I attempt to boot, grub loads OK, however I eventually get the following error message:

        Gave up waiting for root device.
        ALERT! /dev/disk/by-uuid/a0602d42-6855-482f-870c-6f6ecdcdae3f does not exist. Dropping to a shell!

    If from the grub bootloader I open a grub command line, I can ls (hd0,) and it lists the correct partitions with the UUIDs as shown above - sda6 shows 'a0602d42-6855-482f-870c-6f6ecdcdae3f' (the RAID UUID). If I ls (md2)/ it properly lists all the files on the RAID1 filesystem (ext4), so it doesn't appear to be an issue accessing the raid device. Does anyone have any suggestions as to what the problem might be? I can't figure this one out.
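
    A common cause of "Gave up waiting for root device" with a RAID root is an initramfs that never assembles the array; a hedged sequence to check and repair this on Ubuntu:

        # inside the busybox shell the boot drops to:
        mdadm --assemble --scan            # does /dev/md2 appear afterwards?
        # from the installed system (or a chroot from a live CD):
        cat /etc/mdadm/mdadm.conf          # the array should be listed here
        update-initramfs -u                # rebuild so the md config is included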

  • Faking a Linux environment without chroot

    - by Pascal
    For a university project I want to test a C++11 program on a 32-core machine. Unfortunately the machine has Ubuntu 12.04 with GCC 4.6 installed (we need GCC 4.7 because of some C++11 threading features). In such an environment I would normally run a chroot with a custom Linux (say a debootstrap with Ubuntu 12.10). Since we don't get root access on the machine, we can't use chroot. So far I have prepared a run-time environment using debootstrap for our code and compiled it in the debootstrap environment, then copied it onto the server (using rsync). In order to run our C++ code I set the LD_LIBRARY_PATH:

        export LD_LIBRARY_PATH=~/debootstrap/usr/lib/:~/debootstrap/lib64/:~/debootstrap/usr/lib/x86_64-linux-gnu/:~/debootstrap/lib/x86_64-linux-gnu/:$LD_LIBRARY_PATH

    and so far our code seems to run. I'm however stuck with our Python code, where it doesn't seem to be sufficient to set the paths manually:

        export PYTHONPATH=~/debootstrap/usr/lib/python2.7/dist-packages:~/debootstrap/usr/lib/python2.7:~/debootstrap/usr/lib/python2.7/plat-linux2:~/debootstrap/usr/lib/python2.7/lib-tk:~/debootstrap/usr/lib/python2.7/lib-dynload:~/debootstrap/usr/local/lib/python2.7/dist-packages:~/debootstrap/usr/lib/pymodules/python2.7:~/debootstrap/usr/lib/python2.7/dist-packages/PIL:~/debootstrap/usr/lib/python2.7/dist-packages/gtk-2.0:~/debootstrap/usr/lib/python2.7

    Executing our script results in:

        ImportError: No module named _path

    Is there an easier way to accomplish a "fake" chroot than just overriding and creating environment variables? Note that I need Python since we created a custom C++/Python module in order to run our tests. Maybe I should create two questions from this.
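
    If copying one extra static binary onto the server is acceptable, proot gives a user-space chroot without root rights - a sketch, assuming ~/debootstrap holds the guest tree and run_tests.py is a placeholder for the entry script:

        # -R binds common host paths (/home, /proc, ...) into the guest root
        ./proot -R ~/debootstrap /usr/bin/python2.7 run_tests.py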

  • Growing a Linux software RAID5 array

    - by chrismetcalf
    On my home file server, I've got a 1.5TB software RAID5 array, built from four 500 GB Western Digital drives. I've got a fifth drive that I usually run as a hot spare (but have out of the array at the moment); if I can, I'd like to add that to the array and grow it to 2TB, since I'm running out of space. I Googled for guidance, but there seem to be a lot of differing opinions out there (many of them probably now out-of-date) as to whether or not that is possible and/or smart. What's the right way to go about this, or should I start looking into building a new array with more space?

    Version details:

        %> cat /etc/issue
        Debian GNU/Linux 5.0 \n \l
        %> uname -a
        Linux magrathea 2.6.26-1-686-bigmem #1 SMP Sat Jan 10 19:13:22 UTC 2009 i686 GNU/Linux
        %> /sbin/mdadm --version
        mdadm - v2.6.7.2 - 14th November 2008
        %> cat /proc/mdstat
        Personalities : [raid1] [raid6] [raid5] [raid4]
        md1 : active raid1 hdc1[0] hdd1[1]
              293033536 blocks [2/2] [UU]
        md0 : active raid5 sde1[3] sda1[0] sdc1[2] sdb1[1]
              1465151808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
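
    The usual mdadm add-then-grow sequence looks like this - a sketch with an assumed device name for the fifth drive; back up first and expect a long reshape:

        mdadm --add /dev/md0 /dev/sdf1            # device name assumed
        mdadm --grow /dev/md0 --raid-devices=5
        cat /proc/mdstat                          # watch the reshape progress
        resize2fs /dev/md0                        # afterwards, if the filesystem is ext3/ext4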

  • Debian Wheezy 7.5 64bit xfce4 install error (no desktop environment installed)

    - by GeoMind
    I burned a CD with an ISO image from debian.org - the debian-7.5.0-amd64-CD-1.iso from the Debian Wheezy 7.5 stable 64-bit folder. There was an error at the "Select and install software" step: it said "Retrieving file 770 from 800" and then the installation failed. I continued the install, and when I booted the computer, Ctrl + Alt + F7 did not work as I expected. It starts at tty1, and after logging in I edited the config file, because apt had a lot of errors and said "E: Unable to correct problems, you have held broken packages" or "Couldn't find the package".

    FILE: /etc/apt/sources.list

        # deb cdrom:[Debian GNU/Linux 7.5.0 _Wheezy_ - Official amd64 CD Binary-1 20140426-13:37]/ wheezy main
        #deb cdrom:[Debian GNU/Linux 7.5.0 _Wheezy_ - Official amd64 CD Binary-1 20140426-13:37]/ wheezy main

        deb http://security.debian.org/ wheezy/updates main contrib non-free
        deb-src http://security.debian.org/ wheezy/updates main contrib non-free

        deb http://ftp.us.debian.org/debian/ squeeze main contrib non-free
        deb-src http://ftp.us.debian.org/debian/ squeeze main contrib non-free

    After that I tried to install xfce4 as a desktop environment, following the guide found at Linux Panda. But it printed errors at the terminal. What should I do? How can I fix this problem?
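
    A hedged recovery sketch, assuming the 'squeeze' lines in sources.list were meant to read 'wheezy' (mixing two releases like this is a classic source of held/broken packages):

        # in /etc/apt/sources.list, change 'squeeze' to 'wheezy', then:
        apt-get update
        apt-get install xfce4     # the xfce4 metapackage pulls in the desktop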

  • What does this example bash startup script do?

    - by Dimitri
    I am trying to set up GNU Octave on my computer (Mac OS X 10.7.4). I am a newbie at using Terminal and I need help to understand what the following script actually does:

        if [ -f ~/.bashrc ]; then
            . ~/.bashrc
        fi
        PATH=$PATH:/usr/local/bin
        BASH_ENV=~/.bashrc
        export BASH_ENV PATH
        export GNUTERM=aqua
        alias octave="/Applications/Octave.app/Contents/Resources/bin/octave"
        alias gnuplot="/Applications/Gnuplot.app/Contents/Resources/bin/gnuplot"

    (taken from here: http://wikibox.stanford.edu/me112/index.php/Main/OctaveMatlabNotes)

    So this script begins with a simple conditional if statement. I don't understand the conditional expression - what are -f and .bashrc? What does the statement . ~/.bashrc actually do? Then 2 variables are defined, PATH and BASH_ENV. Why are they exported? Why is GNUTERM=aqua exported even though it's not defined anywhere? All I need is a script that would allow me to run Octave by simply typing octave in the terminal. I don't need an alias for gnuplot. Thanks
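
    If all that's wanted is the octave command, a minimal ~/.bash_profile would be just the two Octave-related lines (paths taken from the script above):

        export GNUTERM=aqua      # tells gnuplot, Octave's plotting backend, to draw with the Aqua terminal
        alias octave="/Applications/Octave.app/Contents/Resources/bin/octave"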

  • rpmbuild gives seg fault

    - by Deepti Jain
    I am trying to build an rpm using the rpmbuild tool. I have source code which builds binaries of around 30 GB. This software, for which I am making the rpm, has dozens of executables. When I copy only the binaries of a single executable (e.g. init), my rpm builds successfully. But when I dump the entire build into the rpm, rpmbuild does everything but gives a seg fault at the end. Here is my spec file:

        # This is a sample spec file for wget
        %define _topdir /root/mywget
        %define name source
        %define release 1
        %define version 1.12
        %define _builddir /root/mywget/BUILD/glenlivet
        %define _buildrootdir /root/mywget/BUILDROOT
        %define _buildroot /root/mywget/BUILDROOT
        %define _sourcedir /root/mywget/SOURCES

        BuildRoot: %{_buildroot}
        Summary: GNU source
        License: GPL
        Name: %{name}
        Version: %{version}
        Release: %{release}
        Source: %{name}-%{version}.tar.gz
        Prefix: /usr
        Group: Development/Tools

        %description
        The GNU sample program downloads files from the Internet using the command-line.

        %prep
        %setup -q -n glenlivet

        %build
        cd %{_builddir}
        make all

        %install
        rm -rf %{_buildrootdir}
        mkdir -p %{_buildrootdir}/bin
        cp -p -r %{_builddir}/build/obj-x64/* %{_buildrootdir}/bin/

        %files
        %defattr(-,root,root)
        /bin/*

    If I only copy some of the binaries (let's say one utility and its dependent binaries) it works fine. But when I try to copy the entire build, I get a seg fault. I get the seg fault after rpmbuild has executed the %prep, %build and %install sections. rpmbuild also processes my source file:

        Processing files: source-1.12-1
        Finding Provides:
        Finding Requires:
        Finding Supplements:
        Provides: ......
        Requires: ......
        Checking for unpackaged file(s): /usr/lib/rpm/check-files /root/mywget/BUILDROOT
        Checking for unpackaged file(s): /usr/lib/rpm/check-files /root/mywget/BUILDROOT
        Segmentation fault

    Any clue as to what is going wrong, or where rpmbuild fails? Thanks in advance
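
    A hedged way to get more out of the crash than a bare "Segmentation fault": let rpmbuild dump core and inspect the backtrace (the spec path is assumed from the %_topdir above):

        ulimit -c unlimited
        rpmbuild -bb -v /root/mywget/SPECS/source.spec
        gdb "$(which rpmbuild)" core.*       # 'bt' at the gdb prompt shows where it died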

  • User not in the sudoers file. This incident will be reported

    - by Sergiy Byelozyorov
    I need to install a package. For that I need root access. However, the system says that I am not in the sudoers file - and when I try to edit that file, it complains in the same way! How am I supposed to add myself to the sudoers file if I don't have the right to edit it? I installed this system myself and am its only administrator. What can I do?

    Edit: I have tried visudo already. It requires me to be in sudoers in the first place.

        amarzaya@linux-debian-gnu:/$ sudo /usr/sbin/visudo
        We trust you have received the usual lecture from the local System Administrator.
        It usually boils down to these three things:
            #1) Respect the privacy of others.
            #2) Think before you type.
            #3) With great power comes great responsibility.
        [sudo] password for amarzaya:
        amarzaya is not in the sudoers file. This incident will be reported.
        amarzaya@linux-debian-gnu:/$
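
    Since sudo itself is locked out, the usual Debian route is the real root account - a sketch, assuming you know the root password set during installation:

        su -                        # log in as root directly
        adduser amarzaya sudo       # works on releases where the sudo group is enabled in /etc/sudoers
        # or grant an explicit rule instead:
        visudo                      # add: amarzaya ALL=(ALL) ALL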

  • My yum repository is able to search packages, but not able to install them in RHEL?

    - by mandy
    I set up yum from DVD. The following is the contents of my .repo file:

        [dvd]
        name=Red Hat Enterprise Linux Installation DVD
        baseurl=file:///media/dvd
        enabled=0

    I'm able to search packages. However, during installation I'm getting the error below:

        [root@localhost dvd]# yum install libstdc++.x86_64
        Loaded plugins: rhnplugin, security
        This system is not registered with RHN.
        RHN support will be disabled.
        Setting up Install Process
        Nothing to do

    My yum search output:

        [root@localhost dvd]# yum search gcc
        Loaded plugins: rhnplugin, security
        This system is not registered with RHN.
        RHN support will be disabled.
        ====================== Matched: gcc ======================
        compat-libgcc-296.i386 : Compatibility 2.96-RH libgcc library
        compat-libstdc++-296.i386 : Compatibility 2.96-RH standard C++ libraries
        compat-libstdc++-33.i386 : Compatibility standard C++ libraries
        compat-libstdc++-33.x86_64 : Compatibility standard C++ libraries
        cpp.x86_64 : The C Preprocessor.
        libgcc.i386 : GCC version 4.1 shared support library
        libgcc.x86_64 : GCC version 4.1 shared support library
        libgcj.i386 : Java runtime library for gcc
        libgcj.x86_64 : Java runtime library for gcc
        libstdc++.i386 : GNU Standard C++ Library
        libstdc++.x86_64 : GNU Standard C++ Library
        libtermcap.i386 : A basic system library for accessing the termcap database.
        libtermcap.x86_64 : A basic system library for accessing the termcap database.

    Please guide me on this; I want to install gcc on my RHEL.
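
    One hedged observation from the .repo shown: the repository is disabled (enabled=0). Setting enabled=1, or enabling it for a single run, would be the first thing to try:

        yum --enablerepo=dvd install gcc libstdc++.x86_64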

  • Cloning to a smaller hard drive with DDRescue

    - by krebshack
    I am currently working with a 700 GB Seagate hard drive that's beginning to fail; I'll call this "SDB" from now on. I'd like to clone it while I'm still able to. However, the only hard drive that I have available is a 500 GB WD hard drive; I'll call this "SDC" from now on. The partition scheme on SDB is as follows: 9.77 GB is allocated to a recovery partition and the remaining 688.87 GB is allocated to a Windows partition. Both are formatted using NTFS. There is no partition scheme on SDC. I know how to clone one hard drive to another using DDRescue, but I've only done it using hard drives that are the same size. For your reference, I'll normally use the command:

        ddrescue -v -r 3 /dev/sdb /dev/sdc example.log

    I'd like to know if it's possible to do this with DDRescue. I've read the manual from GNU (http://www.gnu.org/software/ddrescue/manual/ddrescue_manual.html) and I haven't seen anything indicating that it is possible; I'm just looking for some confirmation of that impression. If it's not possible, it would be helpful if any of y'all were able to suggest some workarounds. But please don't feel obligated to do that - I don't want to bog down my one thread with too many questions.
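
    ddrescue copies blocks verbatim and knows nothing about filesystems, so a whole-disk 700 GB to 500 GB copy cannot work; one hedged alternative is to shrink the NTFS partition below the target size first (risky on a failing drive) and then rescue partition by partition - partition numbers assumed:

        ntfsresize --size 450G /dev/sdb2               # shrink the Windows partition first
        ddrescue -v -r 3 /dev/sdb1 /dev/sdc1 sdb1.log  # then copy each partition separately
        ddrescue -v -r 3 /dev/sdb2 /dev/sdc2 sdb2.log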

  • What are the possible problems when wget returns code 500 but the same request works in normal browsers?

    - by markus
    What should I be looking for when wget returns 500 but the same URL works fine in my web browser? I don't see any access_log entries that seem to be related to the error.

        DEBUG output created by Wget 1.14 on linux-gnu.
        <SSL negotiation info stripped out>
        ---request begin---
        GET /survey/de/tools/clear-caches/password/<some-token> HTTP/1.1
        User-Agent: Wget/1.14 (linux-gnu)
        Accept: */*
        Host: testing.thesurveylab.net
        Connection: Keep-Alive
        ---request end---
        HTTP request sent, awaiting response...
        ---response begin---
        HTTP/1.0 500 Internal Server Error
        Date: Wed, 12 Dec 2012 14:53:07 GMT
        Server: Apache/2.2.3 (CentOS)
        Set-Cookie: blueprint2-staging=8jnbmkqapl30hjkgo0u6956pd1; path=/
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Strict-Transport-Security: max-age=8640000;includeSubdomains
        X-UA-Compatible: IE=Edge,chrome=1
        Content-Length: 5
        Connection: close
        Content-Type: text/html; charset=UTF-8
        ---response end---
        500 Internal Server Error
        Stored cookie testing.thesurveylab.net -1 (ANY) / <session> <insecure> [expiry none] blueprint2-staging 8jnbmkqapl30hjkgo0u6956pd1
        Closed 3/SSL 0x0000000001f33430
        2012-12-12 15:53:07 ERROR 500: Internal Server Error.
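
    A hedged way to find out whether the server is keying on request headers: replay the request with browser-like headers and bisect from there, e.g.:

        wget -d --user-agent='Mozilla/5.0' \
             --header='Accept-Language: en' \
             'https://testing.thesurveylab.net/survey/de/tools/clear-caches/password/<some-token>'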

  • less maximum buffer size?

    - by Tyzoid
    I was messing around with my system and found a novel way to use up memory, but it seems that the less command only holds a limited amount of data before stopping/killing the command. To test, run (careful - this uses lots of system memory very fast!):

        $ cat /dev/zero | less

    From my testing, it looks like the command is killed after less reaches 2.5 gigabytes of memory, but I can't find anything in the man page that suggests it would limit itself in such a way. In addition, I couldn't find any documentation on the subject via Google. Any light you can shed on this quite surprising discovery would be great!

    System information: quad core Intel i7, 8 GB RAM.

        $ uname -a
        Linux Tyler-Work 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        $ less --version
        less 458 (GNU regular expressions)
        Copyright (C) 1984-2012 Mark Nudelman
        less comes with NO WARRANTY, to the extent permitted by law.
        For information about the terms of redistribution,
        see the file named README in the less distribution.
        Homepage: http://www.greenwoodsoftware.com/less
        $ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 14.04 LTS
        Release:        14.04
        Codename:       trusty
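
    Two hedged checks for where a ~2.5 GB ceiling might come from:

        ulimit -a       # is there a 'virtual memory' or 'data seg size' limit on the shell?
        dmesg | tail    # did the kernel OOM killer, rather than less itself, end the command?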

  • apt unable to install particular version of package (but gives no error)

    - by Arc2009
    I'd like to install a particular version of libstdc++6 with the following command:

        # apt-get install libstdc++6=4.9.0-8 -V
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
           libstdc++6 (4.8.2-16)
        0 upgraded, 0 newly installed, 0 to remove and 216 not upgraded.

    It gives no error, but apt keeps the version that was already installed, and it also refers to this package as "extra". There are no apt preferences set in /etc/apt/preferences.d, and the desired version is definitely available through our local mirror (if I run "apt-get download libstdc++6=4.9.0-8" it will download exactly the desired version).

    System info:

        # cat /etc/issue.net
        Debian GNU/Linux jessie/sid
        # uname -a
        Linux www27 3.13-1-amd64 #1 SMP Debian 3.13.7-1 (2014-03-25) x86_64 GNU/Linux
        # dpkg -l | egrep -i "apt|dpkg"
        ii  apt   0.9.16.1  amd64  commandline package manager
        ii  dpkg  1.17.6    amd64  Debian package management system

    Any suggestions?
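
    A hedged next step: ask apt why it keeps the installed version, and name the matching base package explicitly (libstdc++6 is built from the gcc-4.9 source package, so its gcc-4.9-base of the same version is usually needed):

        apt-cache policy libstdc++6        # shows installed vs candidate and any pin priorities
        apt-get install libstdc++6=4.9.0-8 gcc-4.9-base=4.9.0-8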

  • Managing Cisco programmatically; Telnet vs SNMP?

    - by MikeHerrera
    I was recently approached by a network-engineer co-worker who would like to offload his minor network admin duties to a junior-level helpdesk tech. The specific location in need of management acts as an ISP for tenants on its single-site property, so there are a lot of small adjustments being made on a daily basis. I am thinking it would be helpful to write him a WinForms app to manage the 32 Cisco devices on site. I'd like to initially provide functionality to modify access control lists, port VLAN assignments, and bandwidth limitations per VLAN, adding more to the list as it's deemed valuable. My initial thought was to emulate a telnet session with the network device, utilizing my network engineer's familiarity with the command-line/IOS interaction; minimal time would be required for me to learn Cisco IOS conventions myself. While searching for solutions, though, it appears that most people favor SNMP - that, or their specific circumstances pushed them in the direction of SNMP. I wanted to know if I've overlooked an obvious benefit of SNMP. Should I be using SNMP? Why or why not?
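
    For a feel of the SNMP route, this is roughly what a write looks like with the net-snmp command-line tools - the community string is a placeholder, and the OID is the standard ifAdminStatus example rather than the VLAN MIBs you would ultimately need:

        # administratively shut the interface with ifIndex 3 (1 = up, 2 = down)
        snmpset -v2c -c private 10.0.0.1 IF-MIB::ifAdminStatus.3 i 2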

  • Book/topic recommendations for a programmer returning to programming.

    - by Jason Tan
    I used to be a developer in Java, PHP, Perl and C/C++ (the C++ bit badly - the others not too badly, I hope). This was back in the Java 1.3/1.4 days. We used raw JDBC, Swing, servlets, JSP and ant (sometimes even make). Eclipse was new. Then I joined a deployment team and became a deployment engineer, and after the deployment engineer work became a full-time sysadmin. You get the idea - my experience is a generation or two old in programming terms, maybe older. I'm interested in getting back into Java and perhaps Ruby development, but feel I will be waaaaay behind the technological 8-ball. Can you folks suggest some books (or sites) that would be worth reading to catch up with the last 5-10 years of the development world - i.e., what should I read to try and catch up with where development is now? I see lots of stuff on the web, but what are people in the fabled "real world" using? (Are lots of people building SOA-based apps? Are they using the XP methodology?) The sorts of things I'm interested in finding out about/catching up on are:

    • Methodologies
    • Design patterns
    • APIs/Frameworks/Technologies
    • Other stuff you deem current/interesting/relevant.

    So if you have any thoughts, or can recommend any books (especially new classics - you know, today's equivalent of K&R C or "The Mythical Man-Month"), please share them. Thanks for any thoughts you might share.

  • Down Tools Week Cometh: Kissing Goodbye to CVs/Resumes and Cover Letters

    - by Bart Read
    I haven't blogged about what I'm doing in my (not so new) temporary role as Red Gate's technical recruiter, mostly because it's been routine, business as usual stuff, and because I've been trying to understand the role by doing it. I think now though the time has come to get a little more radical, so I'm going to tell you why I want to largely eliminate CVs/resumes and cover letters from the application process for some of our technical roles, and why I think that might be a good thing for candidates (and for us). I have a terrible confession to make, or at least it's a terrible confession for a recruiter: I don't really like CV sifting, or reading cover letters, and, unless I've misread the mood around here, neither does anybody else. It's dull, it's time-consuming, and it's somewhat soul destroying because, when all is said and done, you're being paid to be incredibly judgemental about people based on relatively little information. I feel like I've dirtied myself by saying that - I mean, after all, it's a core part of my job - but it sucks, it really does. (And, of course, the truth is I'm still a software engineer at heart, and I'm always looking for ways to do things better.) On the flip side, I've never met anyone who likes writing their CV. It takes hours and hours of faffing around and massaging it into shape, and the whole process is beset by a gnawing anxiety, frustration, and insecurity. All you really want is a chance to demonstrate your skills - not just talk about them - and how do you do that in a CV or cover letter? Often the best candidates will include samples of their work (a portfolio, screenshots, links to websites, product downloads, etc.), but sometimes this isn't possible, or may not be appropriate, or you just don't think you're allowed because of what your school/university careers service has told you (more commonly an issue with grads, obviously). And what are we actually trying to find out about people with all of this? I think the common criteria are actually pretty basic: Smart Gets things done (thanks for these two Joel) Not an a55hole* (sorry, have to get around Simple Talk's swear filter - and thanks to Professor Robert I. Sutton for this one) *Of course, everyone has off days, and I don't honestly think we're too worried about somebody being a bit grumpy every now and again. We can do a bit better than this in the context of the roles I'm talking about: we can be more specific about what "gets things done" means, at least in part. For software engineers and interns, the non-exhaustive meaning of "gets things done" is: Excellent coder For test engineers, the non-exhaustive meaning of "gets things done" is: Good at finding problems in software Competent coder Team player, etc., to me, are covered by "not an a55hole". I don't expect people to be the life and soul of the party, or a wild extrovert - that's not what team player means, and it's not what "not an a55hole" means. Some of our best technical staff are quiet, introverted types, but they're still pleasant to work with. My problem is that I don't think the initial sift really helps us find out whether people are smart and get things done with any great efficacy. It's better than nothing, for sure, but it's not as good as it could be. It's also contentious, and potentially unfair/inequitable - if you want to get an idea of what I mean by this, check out the background information section at the bottom. 
    Before I go any further, let's look at the Red Gate recruitment process for technical staff* as it stands now:

    1. (LOTS of) People apply for jobs.
    2. All these applications go through a brutal process of manual sifting, which eliminates between 75 and 90% of them, depending upon the role, and the time of year**.
    3. Depending upon the role, those who pass the sift will be sent an assessment or telescreened. For the purposes of this blog post I'm only interested in those that are sent some sort of programming assessment, or bug hunt. This means software engineers, test engineers, and software interns, which are the roles for which I receive the most applications. The telescreen tends to be reserved for project or product managers.
    4. Those that pass the assessment are invited in for first interview. This interview is mostly about assessing their technical skills***, although we're obviously on the look out for cultural fit red flags as well.
    5. If the first interview goes well we'll invite candidates back for a second interview. This is where team/cultural fit is really scoped out. We also use this interview to dive more deeply into certain areas of their skillset, and explore any concerns that may have come out of the first interview (these obviously won't have been serious or obvious enough to cause a rejection at that point, but are things we do need to look into before we'd consider making an offer).
    6. We might subsequently invite them in for lunch before we make them an offer. This tends to happen when we're recruiting somebody for a specific team and we'd like them to meet all the people they'll be working with directly. It's not an interview per se, but can prove pivotal if they don't gel with the team.
    7. Anyone who's made it this far will receive an offer from us.

    *We have a slightly quirky definition of "technical staff" as it relates to the technical recruiter role here. It includes software engineers, test engineers, software interns, user experience specialists, technical authors, project managers, product managers, and development managers, but does not include product support or information systems roles.

    **For example, the quality of graduate applicants overall noticeably drops as the academic year wears on, which is not to say that by now there aren't still stars in there, just that they're fewer and further between.

    ***Some organisations prefer to assess for team fit first, but I think assessing technical skills is a more effective initial filter - if they're the nicest person in the world, but can't cut a line of code they're not going to work out.

    Now, as I suggested in the title, Red Gate's Down Tools Week is upon us once again - next week in fact - and I had proposed as a project that we refactor and automate the first stage of marking our programming assessments. Marking assessments, and in fact organising the marking of them, is a somewhat time-consuming process, and we receive many assessment solutions that just don't make the cut, for whatever reason. Whilst I don't think it's possible to fully automate marking, I do think it ought to be possible to run a suite of automated tests over each candidate's solution to see whether or not it behaves correctly and, if it does, move on to a manual stage where we examine the code for structure, decomposition, style, readability, maintainability, etc. Obviously it's possible to use tools to generate potentially helpful metrics for some of these indices as well.
    This would obviously reduce the marking workload, and would provide candidates with quicker feedback about whether they've been successful - though I do wonder if waiting a tactful interval before sending a (nicely written) rejection might be wise. I duly scrawled out a picture of my ideal process. The problem is, as soon as I'd roughed it out, I realised that fundamentally it wasn't an ideal process at all, which explained the gnawing feeling of cognitive dissonance I'd been wrestling with all week whilst I'd been trying to find time to do this.

    Here's what I mean. Automated assessment marking, and the associated infrastructure around that, makes it much easier for us to deal with large numbers of assessments. This means we can be much more permissive about who we send assessments out to or, in other words, we can give more candidates the opportunity to really demonstrate their skills to us. And this leads to a question: why not give everyone the opportunity to demonstrate their skills, to show that they're smart and can get things done? (Two or three of us even discussed this in the down tools week hustings earlier this week.) And isn't this a lot simpler than the alternative we'd been considering? (FYI, this was automated CV/cover letter sifting by some form of textual analysis to ideally eliminate the worst 50% or so of applications based on an analysis of the 20,000 or so historical applications we've received since 2007 - definitely not the basic keyword analysis beloved of recruitment agencies, since this would eliminate hardly anyone who was awful, but definitely would eliminate stellar Oxbridge candidates - #fail - or some nightmarishly complex Google-like system where we profile all our current employees, only to realise that we're never going to get representative results because we don't have a statistically significant sample size in any given role - also #fail.)

    No, I think the new way is better. We let people self-select. We make them the masters (or mistresses) of their own destiny. We give applicants the power - we put their fate in their hands - by giving them the chance to demonstrate their skills, which is what they really want anyway, instead of requiring that they spend hours and hours creating a CV and cover letter that I'm going to evaluate for suitability, and make a value judgement about, in approximately 1 minute (give or take). It doesn't matter what university you attended, it doesn't matter if you had a bad year when you took your A-levels - here's your chance to shine, so take it and run with it. (As a side benefit, we cut the number of applications we have to sift by something like two thirds.) WIN!

    OK, yeah, sounds good, but will it actually work? That's an excellent question. My gut feeling is yes, and I'll justify why below (and hopefully have gone some way towards doing that above as well), but what I'm proposing here is really that we run an experiment for a period of time - probably a couple of months or so - and measure the outcomes we see:

    • How many people apply? (Wouldn't be surprised or alarmed to see this cut by a factor of ten.)
    • How many of them submit a good assessment? (More/less than at present?)
    • How much overhead is there for us in dealing with these assessments compared to now?
    • What are the success and failure rates at each interview stage compared to now?
    • How many people are we hiring at the end of it compared to now?
    I think it'll work because I hypothesize that, amongst other things:

    • It self-selects for people who really want to work at Red Gate which, at the moment, is something I have to try and assess based on their CV and cover letter - but if you're not that bothered about working here, why would you complete the assessment?
    • Candidates who would submit a shoddy application probably won't feel motivated to do the assessment.
    • Candidates who would demonstrate good attention to detail in their CV/cover letter will demonstrate good attention to detail in the assessment.
    • In general, only the better candidates will complete and submit the assessment.
    • Marking assessments is much less work so we'll be able to deal with any increase that we see (hopefully we will see one).

    There are obviously other questions as well:

    • Is plagiarism going to be a problem?
    • Is there any way we can detect/discourage potential plagiarism?
    • How do we assess candidates' education and experience?
    • What about their ability to communicate in writing?
    • Do we still want them to submit a CV afterwards if they pass assessment?
    • Do we want to offer them the opportunity to tell us a bit about why they'd like the job when they submit their assessment?
    • How does this affect our relationship with recruitment agencies we might use to hire for these roles?

    So, what's the objective for next week's Down Tools Week? Pretty simple really - we want to implement this process for the Graduate Software Engineer and Software Engineer positions that you can find on our website. I will be joined by a crack team of our best developers (Kevin Boyle, and new Red-Gater, Sam Blackburn), and recruiting hostess with the mostest Laura McQuillen, and hopefully a couple of others as well - if I can successfully twist more arms before Monday.* Hopefully by next Friday our experiment will be up and running, and we may have changed the way Red Gate recruits software engineers for good! Stay tuned and we'll let you know how it goes!

    *I'm going to play dirty by offering them beer and chocolate during meetings.

    Some background information: how agonising over the initial CV/cover letter sift helped lead us to bin it off entirely

    The other day I was agonising about the new university/good degree grade versus poor A-level results issue, and decided to canvass for other opinions to see if there was something I could do that was fairer than my current approach, which is almost always to reject. This generated quite an involved discussion on our Yammer site. I'm sure you can glean a pretty good impression of my own educational prejudices from that discussion as well, although I'm very open to changing my opinion - hopefully you've already figured that out from reading the rest of this post. Hopefully you can also trace a logical path from agonising about sifting to, "Uh, hang on, why on earth are we doing this anyway?!?"

    Technorati Tags: recruitment,hr,developers,testers,red gate,cv,resume,cover letter,assessment,sea change

  • EMC ESRS stops working when it is VMotioned

    - by makerofthings7
    EMC is on site and told me that the ESRS SAN monitoring solution will cease to function if its host is VMotioned. In case anyone doesn't know, ESRS is a dial-home solution that works over IP. An EMC SecureID is required to add or modify the list of devices that are monitored, and the ESRS software is installed on the customer premises. Question: if ESRS truly fails to work, as the EMC engineer stated and as our customer experience suggests, what is it within VMware that is exposed to the virtualized host and allows this behavior to happen?

  • Outlook: Displaying email sender's job title in message list

    - by RexE
    Is there a way to display the sender's job title in the Outlook email list pane? I would like to see something like:

        From      | Title     | Subject      | Received
        Joe Smith | President | Re: Proposal | 5:34
        Bob Chen  | Engineer  | Fw: Request  | 5:30

    I am using Outlook 2010. All my mail comes through an Exchange 2010 server.

  • What steps can you take to ensure sane build environments when compiling software?

    - by Chris Adams
    Hi guys, I've been stuck with a compilation problem when building a standardised virtual machine on CentOS 5.4, and I'm in the dark here as to a) why this error is occurring, and b) how to fix it; in the hope that someone else stumbles across this problem too, I'm hoping someone can help me find the solution. I'm getting a "configure: error: newly created file is older than distributed files!" error when trying to compile Ruby Enterprise, as below, and the solutions offered on the forums (checking the time, and touching the files to update the time associated with them) don't seem to be helping. What steps can I take to work out the cause of this problem?

        [vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ sudo ./installer
        Welcome to the Ruby Enterprise Edition installer
        This installer will help you install Ruby Enterprise Edition 1.8.7-2009.10.
        Don't worry, none of your system files will be touched if you don't want them
        to, so there is no risk that things will screw up.
        You can expect this from the installation process:
          1. Ruby Enterprise Edition will be compiled and optimized for speed for this
             system.
          2. Ruby on Rails will be installed for Ruby Enterprise Edition.
          3. You will learn how to tell Phusion Passenger to use Ruby Enterprise
             Edition instead of regular Ruby.
        Press Enter to continue, or Ctrl-C to abort.
        Checking for required software...
        * C compiler... found at /usr/bin/gcc
        * C++ compiler... found at /usr/bin/g++
        * The 'make' tool... found at /usr/bin/make
        * Zlib development headers... found
        * OpenSSL development headers... found
        * GNU Readline development headers... found
        --------------------------------------------
        Target directory
        Where would you like to install Ruby Enterprise Edition to?
        (All Ruby Enterprise Edition files will be put inside that directory.)
        [/opt/ruby-enterprise] :
        --------------------------------------------
        Compiling and optimizing the memory allocator for Ruby Enterprise Edition
        In the mean time, feel free to grab a cup of coffee.
        ./configure --prefix=/opt/ruby-enterprise --disable-dependency-tracking
        checking build system type... i686-pc-linux-gnu
        checking host system type... i686-pc-linux-gnu
        checking for a BSD-compatible install... /usr/bin/install -c
        checking whether build environment is sane... configure: error: newly created file is older than distributed files!
        Check your system clock

    This is a virtual machine running on VirtualBox, and the time of the host and the virtual machine are identical and up to date. I've also tried running this after updating the time with an ntp client, to no avail. I tried this after reading this post here of someone having a similar problem:

        [vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ date
        Tue Apr 27 08:09:05 BST 2010

    The other approach I've tried is to touch the top-level files in the build folder, as suggested here, but this hasn't worked either (and, to be honest, I'm not sure why it would have):

        [vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ sudo touch ruby-enterprise-1.8.7-2009.10/*

    I'm not sure what I can do next here - the problem seems to be the bash configure script that returns this error ("error: newly created file is older than distributed files!") at line 2214:

        { echo "$as_me:$LINENO: checking whether build environment is sane" >&5
        echo $ECHO_N "checking whether build environment is sane... $ECHO_C" >&6; }
        # Just in case
        sleep 1
        echo timestamp > conftest.file
        # Do `set' in a subshell so we don't clobber the current shell's
        # arguments.  Must try -L first in case configure is actually a
        # symlink; some systems play weird games with the mod time of symlinks
        # (eg FreeBSD returns the mod time of the symlink's containing
        # directory).
        if (
           set X `ls -Lt $srcdir/configure conftest.file 2> /dev/null`
           if test "$*" = "X"; then
              # -L didn't work.
              set X `ls -t $srcdir/configure conftest.file`
           fi
           rm -f conftest.file
           if test "$*" != "X $srcdir/configure conftest.file" \
              && test "$*" != "X conftest.file $srcdir/configure"; then

              # If neither matched, then we have a broken ls.  This can happen
              # if, for instance, CONFIG_SHELL is bash and it inherits a
              # broken ls alias from the environment.  This has actually
              # happened.  Such a system could not be considered "sane".
              { { echo "$as_me:$LINENO: error: ls -t appears to fail.  Make sure there is not a broken alias in your environment" >&5
        echo "$as_me: error: ls -t appears to fail.  Make sure there is not a broken alias in your environment" >&2;}
           { (exit 1); exit 1; }; }
           fi

           ### PROBLEM LINE ####
           # this line is the problem line - sometimes this is returned true, sometimes it isn't,
           # and I can't see a pattern that determines when this test will pass or not.
           test "$2" = conftest.file
           )
        then
           # Ok.
           :
        else
           { { echo "$as_me:$LINENO: error: newly created file is older than distributed files!
        Check your system clock" >&5
        echo "$as_me: error: newly created file is older than distributed files!
        Check your system clock" >&2;}
           { (exit 1); exit 1; }; }
        fi

    The thing that makes this really frustrating is that this script works sometimes - when the VM has been running for an hour or so it works, but not at boot. There's nothing I see in the crontab that suggests any hourly tasks are run that might change the state of the system enough to make a difference to this script working. I'm totally at a loss when it comes to debugging beyond here. What's the best approach to take here? Thanks
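&6; }">

    Given that the check compares ls -t timestamps of a file created "now" against the distributed configure, a hedged workaround is to force the guest clock into agreement with real time before configuring - a guest clock running behind the tarball's timestamps would trigger exactly this error (assumes ntpdate is installed):

        sudo ntpdate -u pool.ntp.org     # step the VM clock
        sudo hwclock --systohc           # keep the hardware clock consistent too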

  • I need Microsoft SQL Clustering/Replication/Scaling Best Practice Resources

    - by efk
    I'm trying to plan for the future scalability of our Microsoft SQL 2000/2005/2008 infrastructure. I'm having a hard time finding good information on how best to engineer such services, how best to keep them available, and how to scale them as load increases. Can someone point me in the right direction? Books, online resources, videos - anything would be helpful.

  • How to set up "vi" shell environment as default

    - by Ency
    It could be a silly question, but I cannot find the answer anywhere. I'd like to use vi as the default editing mode in my shell instead of emacs (you know, set -o vi does the trick), but I do not want to put it into the bash startup scripts. Why? Because I work as a verification engineer and I use several user accounts, which are also quite often reinstalled. Changing the default profile is not an answer either, because some of the software creates its own home directories (independent of the default profile).
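
    Two hedged options that avoid per-account startup files: set the readline default system-wide (needs root, but survives account re-creation), or request vi mode per invocation:

        # system-wide readline default, affects every readline-using program:
        echo 'set editing-mode vi' | sudo tee -a /etc/inputrc
        # one-off, touching no files at all:
        bash -o vi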
