Search Results

Search found 1608 results on 65 pages for 'distro recommendation'.

Page 61/65

  • Ubuntu 8.04 LTS + rdiff-backup: Should I install from source instead of using apt repositories?

    - by egarcia
    I'm trying to use rdiff-backup in order to make backup copies of some folders inside an Ubuntu 8.04LTS server. I'm attempting to do the backup on another server with a more modern Ubuntu distro (9.10). I'll call this one the "client". rdiff-backup needs to be installed on both the client and the server. It is available on the apt repositories on both machines, so I installed it using sudo apt-get install rdiff-backup. The problem is that the version installed on the server is older than the one on the client (1.1.15 vs 1.2.8). Thus I get errors when I try do make them work together. So I need both versions to be the same. What is the standard procedure in these cases? Should I attempt to upgrade the version on the server, or downgrade the version on the client? And how whould I do that? In case it is useful, I'd like to point out that the rdiff-backup apt-package has some dependencies - librsync1 & python-support Attaching the errors I got in case they help: rdiff-backup egarcia@test::/var/rails/ohwr/backup /home/kikito/backup/files Warning: Local version 1.2.8 does not match remote version 1.1.15. Exception ' Warning Security Violation! Bad request for function: rpath.make_file_dict with arguments: ['/var/rails/ohwr/backup'] ' raised of class '<class 'rdiff_backup.Security.Violation'>': File "/usr/lib/pymodules/python2.6/rdiff_backup/Main.py", line 304, in error_check_Main try: Main(arglist) File "/usr/lib/pymodules/python2.6/rdiff_backup/Main.py", line 321, in Main rps = map(SetConnections.cmdpair2rp, cmdpairs) File "/usr/lib/pymodules/python2.6/rdiff_backup/SetConnections.py", line 78, in cmdpair2rp return rpath.RPath(conn, filename).normalize() File "/usr/lib/pymodules/python2.6/rdiff_backup/rpath.py", line 884, in __init__ else: self.setdata() File "/usr/lib/pymodules/python2.6/rdiff_backup/rpath.py", line 908, in setdata self.data = self.conn.rpath.make_file_dict(self.path) File "/usr/lib/pymodules/python2.6/rdiff_backup/connection.py", line 450, in __call__ return apply(self.connection.reval, (self.name,) + args) File "/usr/lib/pymodules/python2.6/rdiff_backup/connection.py", line 370, in reval if isinstance(result, Exception): raise result Traceback (most recent call last): File "/usr/bin/rdiff-backup", line 30, in <module> rdiff_backup.Main.error_check_Main(sys.argv[1:]) File "/usr/lib/pymodules/python2.6/rdiff_backup/Main.py", line 304, in error_check_Main try: Main(arglist) File "/usr/lib/pymodules/python2.6/rdiff_backup/Main.py", line 321, in Main rps = map(SetConnections.cmdpair2rp, cmdpairs) File "/usr/lib/pymodules/python2.6/rdiff_backup/SetConnections.py", line 78, in cmdpair2rp return rpath.RPath(conn, filename).normalize() File "/usr/lib/pymodules/python2.6/rdiff_backup/rpath.py", line 884, in __init__ else: self.setdata() File "/usr/lib/pymodules/python2.6/rdiff_backup/rpath.py", line 908, in setdata self.data = self.conn.rpath.make_file_dict(self.path) File "/usr/lib/pymodules/python2.6/rdiff_backup/connection.py", line 450, in __call__ return apply(self.connection.reval, (self.name,) + args) File "/usr/lib/pymodules/python2.6/rdiff_backup/connection.py", line 370, in reval if isinstance(result, Exception): raise result rdiff_backup.Security.Violation: Warning Security Violation! Bad request for function: rpath.make_file_dict with arguments: ['/var/rails/ohwr/backup']
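
    A minimal sketch of one way to get matching versions: install the same rdiff-backup release from source on the 8.04 server (and, if you prefer, pin the client the same way). The tarball URL, version and build-dependency package names below are illustrative assumptions, not verified against the 8.04 archives:

        # on the Ubuntu 8.04 server
        sudo apt-get remove rdiff-backup                 # drop the 1.1.15 package first
        sudo apt-get install python-dev librsync-dev     # build dependencies (names assumed)
        wget http://download.savannah.gnu.org/releases/rdiff-backup/rdiff-backup-1.2.8.tar.gz
        tar xzf rdiff-backup-1.2.8.tar.gz && cd rdiff-backup-1.2.8
        sudo python setup.py install                     # lands in /usr/local, shadowing the apt copy
        rdiff-backup --version                           # should now report 1.2.8 on both ends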

    Read the article

  • Trouble with Debian Lenny and Sphinx

    - by Ando
    I've very basic understanding of linux systems, but I've a server which was setup a while ago to host some web apps. Recently I decided to test out and implement Sphinx but unfortunately I cant get the install to work. I'm running a Debian Lenny distro and when I try to install sphinx it says - checking MySQL include files... configure: error: missing include files. ****************************************************************************** ERROR: cannot find MySQL include files. Check that you do have MySQL include files installed. The package name is typically 'mysql-devel'. If include files are installed on your system, but you are still getting this message, you should do one of the following: 1) either specify includes location explicitly, using --with-mysql-includes; 2) or specify MySQL installation root location explicitly, using --with-mysql; 3) or make sure that the path to 'mysql_config' program is listed in your PATH environment variable. To disable MySQL support, use --without-mysql option. ****************************************************************************** I do have mysql 5.1 installed but I can't find the include files, AND one more thing.. I read around the net that I probably need libmysqlclient15-dev but when I try to install that using apt-get i receive the following error. The following packages were automatically installed and are no longer required: libxcb-aux0 libts-0.0-0 libxcb-atom1 ttf-dejavu-extra hunspell-en-us g++-4.3 libmysql++3 libnspr4-0d libdirectfb-1.0-0 libxcb-event1 libasound2 libstdc++6-4.3-dev libhunspell-1.2-0 ttf-dejavu libmozjs2d conkeror-spawn-process-helper libnss3-1d Use 'apt-get autoremove' to remove them. The following NEW packages will be installed: libmysqlclient15-dev 0 upgraded, 1 newly installed, 0 to remove and 276 not upgraded. Need to get 7590 kB of archives. After this operation, 26.3 MB of additional disk space will be used. WARNING: The following packages cannot be authenticated! libmysqlclient15-dev Install these packages without verification [y/N]? Y Err http://ftp.us.debian.org/debian/ lenny/main libmysqlclient15-dev amd64 5.0.51a-24+lenny5 404 Not Found [IP: 35.9.37.225 80] Err http://security.debian.org/ lenny/updates/main libmysqlclient15-dev amd64 5.0.51a-24+lenny5 404 Not Found [IP: 149.20.20.6 80] Failed to fetch http://security.debian.org/pool/updates/main/m/mysql-dfsg-5.0/libmysqlclient15-dev_5.0.51a-24+lenny5_amd64.deb 404 Not Found [IP: 149.20.20.6 80] E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing? Can you help me out by suggesting how to install the required packages and run the Sphinx.
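
    Lenny's packages have been moved off the primary mirrors (hence the 404s), so one hedged approach is to point apt at archive.debian.org, install the MySQL development headers, and then configure Sphinx against them. The sources.list lines and the Sphinx source directory below are assumptions:

        # /etc/apt/sources.list -- replace the dead lenny entries with the archive:
        #   deb http://archive.debian.org/debian lenny main
        #   deb http://archive.debian.org/debian-security lenny/updates main
        sudo apt-get -o Acquire::Check-Valid-Until=false update
        sudo apt-get install libmysqlclient15-dev     # provides the missing MySQL include files
        cd sphinx-0.9.9                               # whichever Sphinx source tree you unpacked
        ./configure --with-mysql                      # or --with-mysql-includes=/usr/include/mysql
        make && sudo make install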

    Read the article

  • How to change key mappings in Cygwin's Vim

    - by Boldewyn
    I'm using Vim under Debian, Win Vista and WinXP (the latter two with Cygwin). To handle tabs more easily, I mapped <C-Left> and <C-Right> to :tab(prev|next). This mapping works like a charm on the Debian machine. On the Windows machines, however, pressing <C-Left> deletes 5 lines, as far as I can tell, and meddles with cursor position, while <C-Right> does this, too, and additionally enters Insert mode. Question: To put it in a nutshell, how can I find out, why Vim behaves as it does? Is there a way to backtrace the active commands and keystrokes? Could there be a plugin the culprit? (I didn't install one, perhaps a default include by the Cygwin distro...) If so, how can I find it? Edit 1: OK, it seems, that I got a first trace: The terminal sends for <C-Left> '^[[1;5D', and for right '^[[1;5C' (evaluated with the <C-V><C-Left> trick). If vim interprets this literally and discards the first characters, it explains the strange behaviour. Any ideas, how I could change this key mapping? Additional Diagnosis: This behaviour occurs regardless of any existing ~/.vimrc file (is therefore not related to my above mentioned mapings) and is not inherited of some /etc/vim/vimrc, since this doesn't exist in the default Cygwin installation. :verbose map doesn't yield any new insights. Either nothing or my mentioned mappings appear, based on the existence of the .vimrc file :help <C-Left> suggests, that the default would be a simple cursor movement, which is apparently not the case. Vim's version under Cygwin: VIM - Vi IMproved 7.2 (2008 Aug 9, compiled Feb 11 2010 17:36:58) Included patches: 1-264 Compiled by http://cygwin.com/ Huge version without GUI. Features included (+) or not (-): +arabic +autocmd -balloon_eval -browse ++builtin_terms +byte_offset +cindent -clientserver -clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments +cryptv +cscope +cursorshape +dialog_con +diff +digraphs -dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi +file_in_path +find_in_path +float +folding -footer +fork() -gettext -hangul_input +iconv +insert_expand +jumplist +keymap +langmap +libcall +linebreak +lispindent +listcmds +localmap +menu +mksession +modify_fname +mouse -mouseshape +mouse_dec -mouse_gpm -mouse_jsbterm +mouse_netterm -mouse_sysmouse +mouse_xterm +multi_byte +multi_lang -mzscheme -netbeans_intg -osfiletype +path_extra -perl +postscript +printer +profile -python +quickfix +reltime +rightleft -ruby +scrollbind +signs +smartindent -sniff +statusline -sun_workshop +syntax +tag_binary +tag_old_static -tag_any_white -tcl +terminfo +termresponse +textobjects +title -toolbar +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo +vreplace +wildignore +wildmenu +windows +writebackup -X11 -xfontset -xim -xsmp -xterm_clipboard -xterm_save system vimrc file: "$VIM/vimrc" user vimrc file: "$HOME/.vimrc" user exrc file: "$HOME/.exrc" fall-back for $VIM: "/usr/share/vim" Compilation: gcc -c -I. -Iproto -DHAVE_CONFIG_H -g -O2 -D_FORTIFY_SOURCE=1 Linking: gcc -L/usr/local/lib -o vim.exe -lm -lncurses -liconv
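
    Given the sequences the terminal actually sends (^[[1;5D and ^[[1;5C), one workaround (a sketch, not verified on this exact Cygwin build) is to map those raw sequences back onto <C-Left>/<C-Right> in ~/.vimrc so the existing :tabprev/:tabnext mappings fire:

        # append the mappings to ~/.vimrc; Vim then translates the raw escape
        # sequences into <C-Left>/<C-Right> before your own mappings run
        {
          echo 'map  <Esc>[1;5D <C-Left>'
          echo 'map! <Esc>[1;5D <C-Left>'
          echo 'map  <Esc>[1;5C <C-Right>'
          echo 'map! <Esc>[1;5C <C-Right>'
        } >> ~/.vimrc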

    Read the article

  • How to find out Vim's currently mapped commands

    - by Boldewyn
    I'm using Vim under Debian, Win Vista and WinXP (the latter two with Cygwin). To handle tabs more easily, I mapped <C-Left> and <C-Right> to :tab(prev|next). This mapping works like a charm on the Debian machine. On the Windows machines, however, pressing <C-Left> deletes 5 lines, as far as I can tell, and meddles with cursor position, while <C-Right> does this, too, and additionally enters Insert mode. Question: To put it in a nutshell, how can I find out, why Vim behaves as it does? Is there a way to backtrace the active commands and keystrokes? Could there be a plugin the culprit? (I didn't install one, perhaps a default include by the Cygwin distro...) If so, how can I find it? Additional Diagnosis: This behaviour occurs regardless of any existing ~/.vimrc file (is therefore not related to my above mentioned mapings) and is not inherited of some /etc/vim/vimrc, since this doesn't exist in the default Cygwin installation. :verbose map doesn't yield any new insights. Either nothing or my mentioned mappings appear, based on the existence of the .vimrc file :help <C-Left> suggests, that the default would be a simple cursor movement, which is apparently not the case. Vim's version under Cygwin: VIM - Vi IMproved 7.2 (2008 Aug 9, compiled Feb 11 2010 17:36:58) Included patches: 1-264 Compiled by http://cygwin.com/ Huge version without GUI. Features included (+) or not (-): +arabic +autocmd -balloon_eval -browse ++builtin_terms +byte_offset +cindent -clientserver -clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments +cryptv +cscope +cursorshape +dialog_con +diff +digraphs -dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi +file_in_path +find_in_path +float +folding -footer +fork() -gettext -hangul_input +iconv +insert_expand +jumplist +keymap +langmap +libcall +linebreak +lispindent +listcmds +localmap +menu +mksession +modify_fname +mouse -mouseshape +mouse_dec -mouse_gpm -mouse_jsbterm +mouse_netterm -mouse_sysmouse +mouse_xterm +multi_byte +multi_lang -mzscheme -netbeans_intg -osfiletype +path_extra -perl +postscript +printer +profile -python +quickfix +reltime +rightleft -ruby +scrollbind +signs +smartindent -sniff +statusline -sun_workshop +syntax +tag_binary +tag_old_static -tag_any_white -tcl +terminfo +termresponse +textobjects +title -toolbar +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo +vreplace +wildignore +wildmenu +windows +writebackup -X11 -xfontset -xim -xsmp -xterm_clipboard -xterm_save system vimrc file: "$VIM/vimrc" user vimrc file: "$HOME/.vimrc" user exrc file: "$HOME/.exrc" fall-back for $VIM: "/usr/share/vim" Compilation: gcc -c -I. -Iproto -DHAVE_CONFIG_H -g -O2 -D_FORTIFY_SOURCE=1 Linking: gcc -L/usr/local/lib -o vim.exe -lm -lncurses -liconv
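
    To see where a mapping (or stray behaviour) comes from, a hedged diagnostic is to start Vim with no configuration at all and compare:

        vim -u NONE -U NONE -N      # no user/system vimrc, 'nocompatible' still set
        # inside that session:
        #   :verbose map <C-Left>   shows each mapping and the script that last set it
        #   :scriptnames            lists every file Vim sourced, in order
        # if the Ctrl-Left behaviour changes under -u NONE, one of the listed
        # distro-supplied files is the likely culprit.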

    Read the article

  • No Telnet login prompt when used over SSH tunnel

    - by SCO
    Hi there ! I have a device, let's call it d1, runnning a lightweight Linux. This device is NATed by my internet box/router, hence not reachable from the Internet. That device runs a telnet daemon on it, and only has root as user (no pwd). Its ip address is 192.168.0.126 on the private network. From the private network (let's say 192.168.0.x), I can do: telnet 192.168.0.126 Where 192.168.0.126 is the IP address in the private network. This works correctly. However, to allow administration, I'd need to access that device from outside of that private network. Hence, I created an SSH tunnel like this on d1 : ssh -R 4455:localhost:23 ussh@s1 s1 is a server somewhere in the private network (but this is for testing purposes only, it will endup somewhere in the Internet), running a standard Linux distro and on which I created a user called 'ussh'. s1 IP address is 192.168.0.48. When I 'telnet' with the following, let's say from c1, 192.168.0.19 : telnet -l root s1 4455 I get : Trying 192.168.0.48... Connected to 192.168.0.48. Escape character is '^]'. Connection closed by foreign host . The connection is closed after roughly 30 seconds, and I didn't log. I tried without the -l switch, without any success. I tried to 'telnet' with IP addresses instead of names to avoid reverse DNS issues (although I added to d1 /etc/hosts a line refering to s1 IP/name, just in case), no success. I tried on another port than 4455, no success. I gathered Wireshark logs from s1. I can see : s1 sends SSH data to c1, c1 ACK s1 performs an AAAA DNS request for c1, gets only the Authoritave nameservers. s1 performs an A DNS request, then gets c1's IP address s1 sends a SYN packet to c1, c1 replies with a RST/ACK s1 sends a SYN to c1, C1 RST/ACK (?) After 0.8 seconds, c1 sends a SYN to s1, s1 SYN/ACK and then c1 ACK s1 sends SSH content to d1, d1 sends an ACK back to s1 s1 retries AAAA and A DNS requests After 5 seconds, s1 retries a SYN to c1, once again it is RST/ACKed by c1. This is repeated 3 more times. The last five packets : d1 sends SSH content to s1, s1 sends ACK and FIN/ACK to c1, c1 replies with FIN/ACK, s1 sends ACK to c1. The connection seems to be closed by the telnet daemon after 22 seconds. AFAIK, there is no way to decode the SSH stream, so I'm really stuck here ... Any ideas ? Thank you !
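
    Two hedged things to check (hostnames as above): sshd binds remote forwards to s1's loopback only by default, so test from s1 itself first and enable GatewayPorts if other hosts such as c1 must reach port 4455; the roughly 30-second silence before the close also looks like the telnet daemon timing out on a reverse-DNS or ident lookup for the tunnelled client.

        # on d1: ask sshd on s1 to bind the forward on all interfaces, not just 127.0.0.1
        ssh -N -R 0.0.0.0:4455:localhost:23 ussh@s1
        # on s1, /etc/ssh/sshd_config must allow it (then restart sshd):
        #   GatewayPorts clientspecified
        # sanity checks, run on s1:
        telnet 127.0.0.1 4455        # isolates the tunnel itself from the GatewayPorts question
        netstat -tlnp | grep 4455    # shows which address the forwarded port is bound to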

    Read the article

  • iptables management tools for a large-scale environment

    - by womble
    The environment I'm operating in is a large-scale web hosting operation (several hundred servers under management, almost-all-public addressing, etc -- so anything that talks about managing ADSL links is unlikely to work well), and we're looking for something that will be comfortable managing both the core ruleset (around 12,000 entries in iptables at current count) plus the host-based rulesets we manage for customers. Our core router ruleset changes a few times a day, and the host-based rulesets would change maybe 50 times a month (across all the servers, so maybe one change per five servers per month). We're currently using filtergen (which is balls in general, and super-balls at our scale of operation), and I've used shorewall in the past at other jobs (which would be preferable to filtergen, but I figure there's got to be something out there that's better than that). The "musts" we've come up with for any replacement system are: Must generate a ruleset fairly quickly (a filtergen run on our ruleset takes 15-20 minutes; this is just insane) -- this is related to the next point: Must generate an iptables-restore style file and load that in one hit, not call iptables for every rule insert Must not take down the firewall for an extended period while the ruleset reloads (again, this is a consequence of the above point) Must support IPv6 (we aren't deploying anything new that isn't IPv6 compatible) Must be DFSG-free Must use plain-text configuration files (as we run everything through revision control, and using standard Unix text-manipulation tools are our SOP) Must support both RedHat and Debian (packaged preferred, but at the very least mustn't be overtly hostile to either distro's standards) Must support the ability to run arbitrary iptables commands to support features that aren't part of the system's "native language" Anything that doesn't meet all these criteria will not be considered. The following are our "nice to haves": Should support config file "fragments" (that is, you can drop a pile of files in a directory and say to the firewall "include everything in this directory in the ruleset"; we use configuration management extensively and would like to use this feature to provide service-specific rules automatically) Should support raw tables Should allow you to specify particular ICMP in both incoming packets and REJECT rules Should gracefully support hostnames that resolve to more than one IP address (we've been caught by this one a few times with filtergen; it's a rather royal pain in the butt) The more optional/weird iptables features that the tool supports (either natively or via existing or easily-writable plugins) the better. We use strange features of iptables now and then, and the more of those that "just work", the better for everyone.
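
    Whichever tool wins, the "generate one file, load it atomically" requirement comes down to iptables-restore; a minimal sketch of the fragment-assembly idea (directory and file names are illustrative):

        #!/bin/sh
        # Assemble per-service rule fragments into one ruleset and load it in a single pass.
        set -e
        out=$(mktemp)
        {
            echo '*filter'
            cat /etc/firewall/rules.d/*.rules   # fragments dropped in by configuration management
            echo 'COMMIT'
        } > "$out"
        iptables-restore < "$out"               # atomic: no window with a half-loaded ruleset
        # repeat with ip6tables-restore for the IPv6 ruleset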

    Read the article

  • Operating systems on SD cards

    - by HisDudeness
    I have been toying with the idea of putting my operating systems on SD cards rather than on my hard drive. Let me explain what led me to consider this possibly ill-advised setup. I am on a laptop (so I have a native SD-card reader) currently running a cross-distro setup: several Linux systems, each in its own dedicated ext4 logical partition inside one large extended partition, all managed by a single GRUB. To date, this laptop has never had a Windows system anywhere near it. I was thinking of moving the OS part of my setup onto a Secure Digital card so that my 500 GB hard drive is reserved for documents, music, videos and so on. I could then pull the SD card out and boot my system on another computer, and likewise boot other systems on my machine just by plugging in a different card, without having to keep any of them permanently installed. Also, in the unlikely case that I wanted to boot Windows 8 in the near future: I have read that it can cause serious boot incompatibilities for other systems by requiring a digital signature before they will start. With Windows on a removable card I could simply swap it for the Linux one when I no longer needed it, leaving the other systems free to boot. Now, my questions. First: I know that, unlike traditional rotating disk drives, flash-based storage has a limited lifespan in terms of write cycles per cell. Is that an obstacle to this kind of usage? Some Ultrabooks use SSDs now; is it the same issue, or do Solid State Drives and Secure Digital cards differ in that respect? Could storing system files at fixed locations (which defeats wear levelling) that are constantly re-read and updated wear a card out quickly? Second question: can all motherboards and BIOSes boot from SD cards the way they boot from USB pen drives (assuming the card reader is USB-attached)? Or can bootloaders like GRUB not be installed on SD cards at all? If they can't, would installing GRUB to the MBR with a boot entry pointing at the SD card work? Are there any other problems with installing operating systems on a Secure Digital card?
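
    On the bootloader question, a rough sketch of putting GRUB 2 on the card itself (device names are examples; USB-attached readers typically show up as /dev/sdX, native readers as /dev/mmcblk0, and BIOS support for booting the latter varies by machine):

        # with the SD card's root partition mounted at /mnt/sd
        sudo grub-install --boot-directory=/mnt/sd/boot /dev/mmcblk0
        # if the BIOS cannot boot the card directly, a small GRUB left on the internal
        # disk's MBR can chainload or 'configfile' the copy on the card instead.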

    Read the article

  • mongodb: Can't create new thread on FreeBSD?

    - by user197739
    We experienced some strange thing in our mongodb gridfs platform. The platform actually is a bi Xeon E5 (bi quad core) with 128GB of memory, running on freebsd 9 with a zfs pool dedicated for mongodb. [root@mongofile1 ~]# uname -sr FreeBSD 9.1-RELEASE our /boot/loader.conf vfs.zfs.arc_min="2048M" vfs.zfs.arc_max="7680M" vm.kmem_size_max="16G" vm.kmem_size="12G" vfs.zfs.prefetch_disable="1" kern.ipc.nmbclusters="32768" /etc/sysctl.conf net.inet.tcp.msl=15000 net.inet.tcp.keepidle=300000 kern.ipc.nmbclusters=32768 kern.ipc.maxsockbuf=2097152 kern.ipc.somaxconn=8192 kern.maxfiles=65536 kern.maxfilesperproc=32768 net.inet.tcp.delayed_ack=0 net.inet.tcp.sendspace=65535 net.inet.udp.recvspace=65535 net.inet.udp.maxdgram=57344 net.local.stream.recvspace=65535 net.local.stream.sendspace=65535 we follow the recommendation for the ulimit : [root@mongofile1 ~]# su - mongodb $ ulimit -a cpu time (seconds, -t) unlimited file size (512-blocks, -f) unlimited data seg size (kbytes, -d) 33554432 stack size (kbytes, -s) 524288 core file size (512-blocks, -c) unlimited max memory size (kbytes, -m) unlimited locked memory (kbytes, -l) unlimited max user processes (-u) 5547 open files (-n) 32768 virtual mem size (kbytes, -v) unlimited swap limit (kbytes, -w) unlimited sbsize (bytes, -b) unlimited pseudo-terminals (-p) unlimited This server have a twin (same config exactly) for ReplSet in other data center and we have a virtualized arbiter. Some time, almost 3 days, the process of mongodb exit. The problem begin with: Fri Nov 8 11:27:31.741 [conn774697] end connection 192.168.10.162:47963 (23 connections now open) Fri Nov 8 11:27:31.770 [initandlisten] can't create new thread, closing connection Fri Nov 8 11:27:31.771 [rsHealthPoll] replSet member mongofile2:27017 is now in state DOWN Fri Nov 8 11:27:31.774 [initandlisten] connection accepted from 192.168.10.162:47968 #774702 (20 connections now open) Fri Nov 8 11:27:31.774 [initandlisten] connection accepted from 192.168.10.161:28522 #774703 (21 connections now open) Fri Nov 8 11:27:31.774 [initandlisten] connection accepted from 192.168.10.164:15406 #774704 (22 connections now open) Fri Nov 8 11:27:31.774 [initandlisten] connection accepted from 192.168.10.163:25750 #774705 (23 connections now open) Fri Nov 8 11:27:31.810 [initandlisten] connection accepted from 192.168.10.182:20779 #774706 (24 connections now open) Fri Nov 8 11:27:31.855 [initandlisten] connection accepted from 192.168.10.161:28524 #774707 (25 connections now open) Fri Nov 8 11:27:31.869 [initandlisten] connection accepted from 192.168.10.182:20786 #774708 (26 connections now open) and after many "can create new thread" [root@mongofile1 /usr/mongodb]# tail -n 15000 mongod.log.old |grep "create new thread"|wc 5020 55220 421680 and finish by a magnificent Fri Nov 8 11:30:22.333 [rsMgr] replSet warning caught unexpected exception in electSelf() pure virtual method called Fri Nov 8 11:30:22.333 Got signal: 6 (Abort trap: 6). Fri Nov 8 11:30:22.337 Backtrace: 0x599efc 0x8035cb516 0x599efc <_ZN5mongo10abruptQuitEi+988> at /usr/local/bin/mongod 0x8035cb516 <_pthread_sigmask+918> at /lib/libthr.so.3 Extract of mongodb from top 78126 mongodb 77 20 0 1253G 1449M sbwait 0 0:20 0.00% mongod If I restart the process when it crash, the problem is fixed for almost 3 days. Has anyone seen this before, or know of a fix?
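
    The "can't create new thread" messages with only about 25 connections open point at a per-process thread or process ceiling rather than memory pressure; a hedged place to look on FreeBSD 9 (the value shown is an example, not a tuned recommendation):

        sysctl kern.threads.max_threads_per_proc   # per-process thread cap (commonly around 1500)
        sysctl kern.maxproc kern.maxprocperuid      # process limits the mongodb user can hit
        # raise temporarily to test, persist in /etc/sysctl.conf if it helps:
        sysctl kern.threads.max_threads_per_proc=30000
        # the shell limit "max user processes (-u) 5547" comes from login.conf (maxproc),
        # so the mongodb user's login class may need a higher value as well.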

    Read the article

  • Broadcom HT1100 SATA controller not working properly with 1TB drives

    - by Jeff C
    I've been using RHEL distro's for several years and always managed to find the answers until now. I know this is more of a hardware issue, but I've been working on this for over a week and trust Linux and the IT community to help more then HP. I have CentOS 6.3 installed on an HP ProLiant DL145 G3 server with the BroadCom HT1100 IO controller and ServerWorks SATA Controller MMIO BIOS v3.0.0015.6 Firmware. This controller does not support large drives fully. Here's what I've tried and the results; Stock setup - Freezes on the ServerWorks POST screen. Can't even enter CMOS without disconnecting the drives. If I simply disconnect the SATA cables before it gets to the ServerWorks screen and reconnect afterwards I can boot from a CD, USB, PXE fine. However fiddling with cables at ever boot isn't practical. If I enter the BIOS config I can set it to not try booting the drives but leave the controller enabled. This lets me boot normally but the drives are not visible in the OS (live CDs or USB installed). I used method #2 to install and update CentOS. I have the /boot partition on a USB drive (everything else is on the SATA drives in software RAID1) hoping that would work around the issue but I get this Kernel panic - not syncing:Attempted to kill init! Pid: 1, comm: init Not tainted 2.6.32-279.9.1.el6.x86_6 #1 Call Trace: [<ffffffff814fd6ba>] ? panic+0xa0/0x168 [<ffffffff81070c22>] ? do_exit+0x862/0x870 [<ffffffff8117cdb5>] ? fput+0x25/0x30 [<ffffffff81070c88>] ? do_group_exit+0x58/0xd0 [<ffffffff81070d17>] ? sys_exit_group+0x17/0x20 [<ffffffff8100b0f2>] ? system_call_fastpath+0x16/0x1b panic occured, switching back to text console I'm sure it should be possible to talk to the drives without the BIOS boot check since the BIOS doesn't see them in method #2 either, their disconnected when it checks, but Linux sees them fine. If anyone could help figure out how I would greatly appreciate it! The other possible option I've come across is a complex firmware update. Tyan has a few boards on their website with the HT1100 and a ServerWorks v3.0.0015.7 update which says "adds support for TB drives" in the release notes. If someone could help me get the Tyan SATA firmware into the HP ROM file so I could just reflash that would also be very much appreciated. Thanks for any help you guys can offer!
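
    Before chasing firmware, it may be worth confirming what the kernel (as opposed to the ServerWorks BIOS) sees once the box is up from the USB /boot, since the panic looks like root-on-RAID failing to assemble; a hedged checklist:

        dmesg | grep -i -e sata -e raid           # did the controller driver attach and find both data drives?
        cat /proc/mdstat                          # is the software RAID1 assembling at all?
        fdisk -l 2>/dev/null | grep 'Disk /dev'   # do the 1TB drives report their full size?
        dracut -f                                 # rebuild the initramfs so it carries the controller and raid modules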

    Read the article

  • Why is domU faster than dom0 on IO?

    - by Paco
    I have installed Debian 7 on a physical machine. This is the configuration of the machine: 3 hard drives using RAID 5 (stripe element size: 1M, read policy: Adaptive Read Ahead, write policy: Write Through); /boot 200 MB ext2; / 15 GB ext3; swap 10 GB; LVM on the rest (~500 GB). I installed postgresql and created a big database (over 1 GB). I have an SQL request that takes a lot of time to run (a SELECT statement, so it only reads data from the database). This request takes approximately 5.5 seconds to run. Then I installed Xen and created a domU with another Debian distro. On this OS I also installed postgresql, with the same database. The same SQL request takes only 2.5 seconds to run. I checked the kernel on both dom0 and domU: uname -a returns "Linux debian 3.2.0-4-amd64 #1 SMP Debian 3.2.41-2+deb7u2 x86_64 GNU/Linux" on both systems. I checked the kernel parameters, which are approximately the same. For those that are relevant, I changed their values to make them match on both systems using sysctl. I saw no changes (the requests still take the same amount of time). After this, I checked the file systems. I used ext3 on domU. Still no changes. I installed hdparm and ran hdparm -Tt on both systems, on all my partitions, and I get similar results. Now I am stuck; I don't know what is different, and what could be the cause of such a big difference. Additional info: Debian runs on a Dell PowerEdge 2950 server; postgresql: 9.1.9 (both dom0 and domU); xen-linux-system: 3.2.0; xen-hypervisor: 4.1. Thanks. EDIT: As Krzysztof Ksiezyk suggested, it might be due to some file caching system. I ran the dd command to test both the read and write speed. Here is domU: root@test1:~# dd if=/dev/zero of=/root/dd count=5MB bs=1MB ^C2020+0 records in 2020+0 records out 2020000000 bytes (2.0 GB) copied, 18.8289 s, 107 MB/s root@test1:~# dd if=/root/dd of=/dev/null count=5MB bs=1MB 2020+0 records in 2020+0 records out 2020000000 bytes (2.0 GB) copied, 15.0549 s, 134 MB/s And here is dom0: root@debian:~# dd if=/dev/zero of=/root/dd count=5MB bs=1MB ^C1693+0 records in 1693+0 records out 1693000000 bytes (1.7 GB) copied, 8.87281 s, 191 MB/s root@debian:~# dd if=/root/dd of=/dev/null count=5MB bs=1MB 1693+0 records in 1693+0 records out 1693000000 bytes (1.7 GB) copied, 0.501509 s, 3.4 GB/s What can be the cause of this caching behaviour? And how can we "fix" it? Can we apply it to dom0? EDIT 2: I switched my virtual disk type. To do so I followed this article. I did a dd if=/dev/vg0/test1-disk of=/mnt/test1-disk.img bs=16M and then, in /etc/xen/test1.cfg, changed the disk parameter to use file: instead of phy:; it should have removed the file caching, but I still get the same numbers (domU being much faster for Postgres).
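
    Those dd figures (3.4 GB/s reads on dom0) are mostly the Linux page cache answering rather than the disks; a sketch of a fairer comparison that drops or bypasses the cache on both dom0 and domU:

        # drop cached pages, then force writes to disk and reads around the cache
        sync && echo 3 > /proc/sys/vm/drop_caches
        dd if=/dev/zero of=/root/dd bs=1M count=2000 conv=fdatasync   # write flushed before dd reports
        sync && echo 3 > /proc/sys/vm/drop_caches
        dd if=/root/dd of=/dev/null bs=1M iflag=direct                # read with O_DIRECT, no page cache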

    Read the article

  • Server-to-server replication, CPU spikes, and 32K/corrupt docs

    - by nick wall
    Summary: if database contains a doc with 32K issue or corrupt, on server to server replication it causes marked increase in CPU in nserver.exe task, which effectively causes our server(s) to slow right down. We have a 5 server cluster (1 "hub" and 4 HTTP servers accessed via reverse proxy and SSO for load balancing and redundancy). All are physically located next to each other on network, they don't have dedicated network\ ports for cluster or replication. I realise IBM recommendation is dedicated port for cluster. Cluster queues are in tolerance and under heavy application user load, i.e. the maximum number of documents are being created, edited, deleted, the replication times between servers are negligible. Normally, all is well. Of the servers in the cluster, 1 is considered the "hub", and imitates a PUSH-PULL replication with it's cluster mates every 60mins, so that the replication load is taken by the hub and not cluster mates. The problem we have: every now and then we get a slow replication time from the hub to a cluster mate, sometimes up to 30mins. This maxes out the nserver.exe task on the "cluster mate" which causes it to respond to http requests very slowly. In the past, we have found that if a corrupt document is in the DB, it can have this affect, but on those occasions, the server log will show the corrupt doc noteId, we run fixup, all well. But we are not now seeing any record of corrupt docs. What we have noticed is if a doc with the 32K issue is present, the same thing can happen. Our only solution in that case is to run a : fixup mydb.nsf -V, which shows it is purging a 32K doc. Luckily we run a reverse proxy, so we can shut HTTP servers down without users noticing, but users do notice when a server has the problem! Has anyone else seen this occur? I have set up DDM event handlers for many of the replication events. I have set the replication time out limit to 5 mins (the max we usually see under full user load is 0.1min), to prevent it rep'ing for 30mins as before. This ia a temporary work around. Does anyone know of a DDM event to trap the 32K issue? we could at least then send alert. Regarding 32K issue: this prob needs another thread, but we are finding this relatively hard to find the source of the issue as the 32K event is fairly rare. Our app is fairly complex, interacting with various other external web services, with 2 way data transfer. But if we do encounter a 32K doc, we can't look at field properties, so we can't work out which field has issue which would give us a clue as to which process is culprit. As above, we run a fixup -V. Any help\ comments on this would be gratefully received.

    Read the article

  • VLAN issues between Linux kernels 2.6 / 3.3 in an ESX / Cisco environment

    - by David Griffith
    I shall attempt to explain an issue I have encountered - I have a VM running on esx 4.1 with an interface connected to VLAN800 via an access port on a cisco 3750. It runs linux - kernel 2.6.24, and has about 5 to 10 Mbit of chatter on 10.10.0.0/16 and various multicast addresses to look after. I needed to isolate certain devices from certain other devices on the network, with all of them having to talk to that one VM. No, the address space can't be separated, nor can the networks be easily vlan'd apart. The software on the VM listens to one interface only. Private vlans appear to be the way to go. So as a test, I built a bridge on the VM that globs together the vlans as needed. All good, everything works as expected. But occasionally (sigh) there's some latency that trips up a couple of profinet devices on the network because, you know, you're not really supposed to trunk real-time protocols around the place willy-nilly. I shift it to our test/backup server - works nicely, but I don't want it to be running on the test server as we muck around with that a lot. So I says to myself, "I'll put it on a new VM for testing and tweaking." I download a small linux distro with kernel 3.3, and install as a new VM with a the vlans as separate interfaces for testing. I power up the testing VM - ok. I bring up all the separate interfaces - ok. I can ping the production VM, see all sorts of traffic going past with tshark, etc. I build a bridge and put the primary vlan on it - the production VM running 2.6 immediately loses its multicast traffic - Unicast is fine. (?) I shut down the bridge - still no multicast traffic (!?) I power-cycle the production VM(!?!?) - multicast traffic returns. I trunk everything into the testing VM and create vlan interfaces under linux instead - same result, as soon as I start the bridge.... no multicast on the production VM. Ok, so I take a break and leave things alone. I decide to play with a couple of ubiquiti bullet radios - I'm testing various firmware as a side project. I flash a radio with Open-wrt-12.09. I enable a trunk on a port on a cisco on our network so I can muck around with multiple vlans and SSIDs I power up the radio and connect - ok. I create a vlan interface from the trunk.... the same vlan as the production VM wayyyyy over there, three cisco routers away. Ok. I bridge the vlan interface to the wifi interface and immediately get a phone call. The production VM has (suprise!) lost its multicast traffic. Again, nothing comes back until I power-cycle the VM. What the hell is going on?
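
    Bringing a new bridge up is exactly the kind of event that disturbs IGMP snooping (the new bridge can win the querier election, or silently stop forwarding groups it has not yet learned), so a hedged first check on the Linux side and, for comparison, on the 3750 (bridge name assumed to be br0):

        # on the VM/host that owns the test bridge
        cat /sys/class/net/br0/bridge/multicast_snooping
        echo 0 > /sys/class/net/br0/bridge/multicast_snooping   # stop the bridge filtering IGMP itself
        ip maddr show dev br0                                    # which multicast groups are joined here
        # on the Cisco side (IOS):
        #   show ip igmp snooping querier vlan 800
        #   show ip igmp snooping groups vlan 800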

    Read the article

  • Hardware for multipurpose home server

    - by Michael Dmitry Azarkevich
    Hi guys, I'm looking to set up a multipurpose home server and hoped you could help me with the hardware selection. First, the services it will provide: hosting a MySQL database (for training and testing purposes), an FTP server, a personal mail server, and a home media server. With this in mind I've done some research and found some viable solutions: a standard PC with the appropriate software (either second-hand or new), a non-solid-state mini-ITX system, or a solid-state, fanless mini-ITX system. I've also noted the pros and cons of each. A standard second-hand PC with old hardware would be the cheapest option, but it could have insufficient processing power, too little RAM, and generally faulty hardware, plus high power consumption, heat generation and noise levels. A standard new PC would have top-notch hardware and will stay that way for quite some time, so it's a good investment; but again the main problems are power consumption, heat generation and noise. A non-solid-state mini-ITX system would have the advantages of lower power consumption, lower cost (as far as I can see) and long-lasting hardware, but it will generate noise and heat, made worse by the small enclosure. A solid-state, fanless mini-ITX system would have all the advantages of a non-solid-state mini-ITX but with minimal noise and heat; the main disadvantage is the read/write limitations of flash memory. All in all I'm leaning towards a non-solid-state mini-ITX because of those read/write issues. So, after this overview of what I do know, my questions are: Are all these services even providable from a single server? To the best of my understanding they are, but I might be wrong. Are any of these solutions viable? If yes, which one is best for my purposes? If not, what would you suggest? Also, on a more software-oriented note: OS-wise, I'm planning to run Linux. I'm currently considering four options I've been recommended: CentOS, Gentoo, DSL (Damn Small Linux) and LFS (Linux From Scratch). Any thoughts on this? Any other distro you would recommend? Regarding FTP services, I've heard good things about FileZilla. Does anyone have experience with it? Do you recommend it, or something else? Regarding the mail service, I know nothing about this except that it exists; any software you recommend for the task? Home media, same as mail: any recommended software? Thank you very much.
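
    On the software side, all four services coexist happily on one box; as a rough sketch on a Debian-family distro (package choices are suggestions, and note that FileZilla Server itself is Windows-only, so vsftpd stands in for FTP here):

        sudo apt-get install mysql-server             # the training/testing database
        sudo apt-get install vsftpd                   # FTP server
        sudo apt-get install postfix dovecot-imapd    # SMTP plus IMAP for the personal mail server
        sudo apt-get install minidlna                 # lightweight DLNA/UPnP home media server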

    Read the article

  • Installing Lubuntu 14.04.1 fails, upowerd appears to hang

    - by Rantanplan
    On the live-CD session, I tried installing Lubuntu double clicking on the install button on the desktop. Here, the CD starts running but then stops running and nothing happens. Next, I rebooted and tried installing Lubuntu directly from the boot menu screen using forcepae again. After a while, I receive the following error message: The installer encountered an unrecoverable error. A desktop session will now be run so that you may investigate the problem or try installing again. Hitting Enter brings me to the desktop. For what errors should I search? And how? Thanks for some hints! On Lubuntu 12.04: uname -a Linux humboldt 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:45:51 UTC 2014 i686 i686 i386 GNU/Linux lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 12.04.5 LTS Release: 12.04 Codename: precise upowerd appears to hang: Aug 25 10:53:28 lubuntu kernel: [ 367.920272] INFO: task upowerd:3002 blocked for more than 120 seconds. Aug 25 10:53:28 lubuntu kernel: [ 367.920288] Tainted: G S C 3.13.0-32-generic #57-Ubuntu Aug 25 10:53:28 lubuntu kernel: [ 367.920294] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Aug 25 10:53:28 lubuntu kernel: [ 367.920300] upowerd D e21f9da0 0 3002 1 0x00000000 Aug 25 10:53:28 lubuntu kernel: [ 367.920314] e21f9dfc 00000086 f5ef7094 e21f9da0 c1050272 c1a8d540 c1920a00 00000000 Aug 25 10:53:28 lubuntu kernel: [ 367.920333] c1a8d540 c1920a00 d9e44da0 f5ef6540 c1129061 00000002 000001c1 0001c37b Aug 25 10:53:28 lubuntu kernel: [ 367.920351] 00000000 00000002 00000000 e2276240 00000000 00000040 c12b0ec5 c19975a8 Aug 25 10:53:28 lubuntu kernel: [ 367.920368] Call Trace: Aug 25 10:53:28 lubuntu kernel: [ 367.920389] [<c1050272>] ? kmap_atomic_prot+0x42/0x100 Aug 25 10:53:28 lubuntu kernel: [ 367.920404] [<c1129061>] ? get_page_from_freelist+0x2a1/0x600 Aug 25 10:53:28 lubuntu kernel: [ 367.920417] [<c12b0ec5>] ? process_measurement+0x65/0x240 Aug 25 10:53:28 lubuntu kernel: [ 367.920432] [<c1654c73>] schedule_preempt_disabled+0x23/0x60 Aug 25 10:53:28 lubuntu kernel: [ 367.920443] [<c16565bd>] __mutex_lock_slowpath+0x10d/0x171 Aug 25 10:53:28 lubuntu kernel: [ 367.920454] [<c1655aec>] mutex_lock+0x1c/0x28 Aug 25 10:53:28 lubuntu kernel: [ 367.920478] [<f857223a>] acpi_smbus_transaction+0x48/0x210 [sbshc] Aug 25 10:53:28 lubuntu kernel: [ 367.920489] [<c11858e1>] ? do_last+0x1b1/0xf60 Aug 25 10:53:28 lubuntu kernel: [ 367.920504] [<f857242f>] acpi_smbus_read+0x2d/0x33 [sbshc] Aug 25 10:53:28 lubuntu kernel: [ 367.920520] [<f881e0f1>] acpi_battery_get_state+0x74/0x8b [sbs] Aug 25 10:53:28 lubuntu kernel: [ 367.920535] [<f881e8a9>] acpi_sbs_battery_get_property+0x2a/0x233 [sbs] Aug 25 10:53:28 lubuntu kernel: [ 367.920549] [<c14fa61f>] power_supply_show_property+0x3f/0x240 Aug 25 10:53:28 lubuntu kernel: [ 367.920561] [<c114664f>] ? handle_mm_fault+0x64f/0x8d0 Aug 25 10:53:28 lubuntu kernel: [ 367.920573] [<c14fa5e0>] ? power_supply_store_property+0x60/0x60 Aug 25 10:53:28 lubuntu kernel: [ 367.920586] [<c1407d20>] ? dev_uevent_name+0x30/0x30 Aug 25 10:53:28 lubuntu kernel: [ 367.920597] [<c1407d38>] dev_attr_show+0x18/0x40 Aug 25 10:53:28 lubuntu kernel: [ 367.920608] [<c11dad15>] sysfs_seq_show+0xe5/0x1c0 Aug 25 10:53:28 lubuntu kernel: [ 367.920621] [<c119846e>] seq_read+0xce/0x370 Aug 25 10:53:28 lubuntu kernel: [ 367.920633] [<c11983a0>] ? 
seq_hlist_next_percpu+0x90/0x90 Aug 25 10:53:28 lubuntu kernel: [ 367.920644] [<c1179238>] vfs_read+0x78/0x140 Aug 25 10:53:28 lubuntu kernel: [ 367.920654] [<c11799a9>] SyS_read+0x49/0x90 Aug 25 10:53:28 lubuntu kernel: [ 367.920667] [<c165efcd>] sysenter_do_call+0x12/0x28 /var/log/installer/debug shows upower related error: Ubiquity 2.18.8 Gtk-Message: Failed to load module "overlay-scrollbar" Gtk-Message: Failed to load module "overlay-scrollbar" ERROR:dbus.proxies:Introspect error on :1.23:/org/freedesktop/UPower: dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. Exception in GTK frontend (invoking crash handler): Traceback (most recent call last): File "/usr/lib/ubiquity/bin/ubiquity", line 636, in <module> main(oem_config) File "/usr/lib/ubiquity/bin/ubiquity", line 622, in main install(query=options.query) File "/usr/lib/ubiquity/bin/ubiquity", line 260, in install wizard = ui.Wizard(distro) File "/usr/lib/ubiquity/ubiquity/frontend/gtk_ui.py", line 290, in __init__ mod.ui = mod.ui_class(mod.controller) File "/usr/lib/ubiquity/plugins/ubi-prepare.py", line 93, in __init__ upower.setup_power_watch(self.prepare_power_source) File "/usr/lib/ubiquity/ubiquity/upower.py", line 21, in setup_power_watch power_state_changed() File "/usr/lib/ubiquity/ubiquity/upower.py", line 18, in power_state_changed not misc.get_prop(upower, UPOWER_PATH, 'OnBattery')) File "/usr/lib/ubiquity/ubiquity/misc.py", line 809, in get_prop return obj.Get(iface, prop, dbus_interface=dbus.PROPERTIES_IFACE) File "/usr/lib/python3/dist-packages/dbus/proxies.py", line 70, in __call__ return self._proxy_method(*args, **keywords) File "/usr/lib/python3/dist-packages/dbus/proxies.py", line 145, in __call__ **keywords) File "/usr/lib/python3/dist-packages/dbus/connection.py", line 651, in call_blocking message, timeout)
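
    The kernel trace shows upowerd blocked inside the ACPI smart-battery path (the sbs/sbshc modules), and the installer dies on exactly that D-Bus query, so a hedged experiment from the live session before re-running the installer:

        sudo modprobe -r sbs sbshc    # unload the ACPI smart-battery modules named in the hung trace
        upower -d                     # should now return promptly instead of blocking
        # the same property the installer asks for, queried over D-Bus directly:
        dbus-send --system --print-reply --dest=org.freedesktop.UPower \
            /org/freedesktop/UPower org.freedesktop.DBus.Properties.Get \
            string:org.freedesktop.UPower string:OnBattery
        # then start the installer again from the desktop icon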

    Read the article

  • SQLAuthority News – A Successful Community TechDays at Ahmedabad – December 11, 2010

    - by pinaldave
    We recently had one of the best community events in Ahmedabad. We were fortunate that we had SQL Experts from around the world to have presented at this event. This gathering was very special because besides Jacob Sebastian and myself, we had two other speakers traveling all the way from Florida (Rushabh Mehta) and Bangalore (Vinod Kumar).There were a total of nearly 170 attendees and the event was blast. Here are the details of the event. Pinal Dave Presenting at Community Tech Days On the day of the event, it seemed to be the coldest day in Ahmedabad but I was glad to see hundreds of people waiting for the doors to be opened some hours before. We started the day with hot coffee and cookies. Yes, food first; and it was right after my keynote. I could clearly see that the coffee did some magic right away; the hall was almost full after the coffee break. Jacob Sebastian Presenting at Community Tech Days Jacob Sebastian, an SQL Server MVP and a close friend of mine, had an unusual job of surprising everybody with an innovative topic accompanied with lots of question-and-answer portions. That’s definitely one thing to love Jacob, that is, the novelty of the subject. His presentation was entitled “Best Database Practices for the .Net”; it really created magic on the crowd. Pinal Dave Presenting at Community Tech Days Next to Jacob Sebastian, I presented “Best Database Practices for the SharePoint”. It was really fun to present Database with the perspective of the database itself. The main highlight of my presentation was when I talked about how one can speed up the database performance by 40% for SharePoint in just 40 seconds. It was fun because the most important thing was to convince people to use the recommendation as soon as they walk out of the session. It was really amusing and the response of the participants was remarkable. Pinal Dave Presenting at Community Tech Days My session was followed by the most-awaited session of the day: that of Rushabh Mehta. He is an international BI expert who traveled all the way from Florida to present “Self Service BI” session. This session was funny and truly interesting. In fact, no one knew BI could be this much entertaining and fascinating. Rushabh has an appealing style of presenting the session; he instantly got very much interaction from the audience. Rushabh Mehta Presenting at Community Tech Days We had a networking lunch break in-between, when we talked about many various topics. It is always interesting to get in touch with the Community and feel a part of it. I had a wonderful time during the break. Vinod Kumar Presenting at Community Tech Days After lunch was apparently the most difficult session for the presenter as during this time, many people started to fall sleep and get dizzy. This spot was requested by Microsoft SQL Server Evangelist Vinod Kumar himself. During our discussion he suggested that if he gets this slot he would make sure people are up and more interactive than during the morning session. Just like always, this session was one of the best sessions ever. Vinod is true to his word as he presented the subject of “Time Management for Developer”. This session was the biggest hit in the event because the subject was instilled in the mind of every participant. Vinod Kumar Presenting at Community Tech Days Vinod’s session was followed by his own small session. Due to “insistent public demand”, he presented an interesting subject, “Tricks and Tips of SQL Server“. 
In 20 minutes he has done another awesome job and all attendees wanted more of the tricks. Just as usual he promised to do that next time for us. Vinod’s session was succeeded by Prabhjot Singh Bakshi’s session. He presented an appealing Silverlight concept. Just the same, he did a great job and people cheered him. Prabhjot Presenting at Community Tech Days We had a special invited speaker, Dhananjay Kumar, traveling all the way from Pune. He always supports our cause to help the Community in empowering participants. He presented the topic about Win7 Mobile and SharePoint integration. This was something many did not even expect to be possible. Kudos to Dhananjay for doing a great job. Dhananjay Kumar Presenting at Community Tech Days All in all, this event was one of the best in the Community Tech Days series in Ahmedabad. We were fortunate that legends from the all over the world were present here to present to the Community. I’d say never underestimate the power of the Community and its influence over the direction of the technology. Vinod Kumar Presenting trophy to Pinal Dave Vinod Kumar Presenting trophy to Pinal Dave This event was a very special gathering to me personally because of your support to the vibrant Community. The following awards were won for last year’s performance: Ahmedabad SQL Server User Group (President: Jacob Sebastian; Leader: Pinal Dave) – Best Tier 2 User Group Best Development Community Individual Contributor – Pinal Dave Speakers I was very glad to receive the award for our entire Community. Attendees at Community Tech Days I want to say thanks to Rushabh Mehta, Vinod Kumar and Dhananjay Kumar for visiting the city and presenting various technology topics in Community Tech Days. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

    Read the article

  • Ubuntu 14.04 LTS: HPLIP loses USB connection to HP LaserJet

    - by Gareth
    This is my first post, so please let me know if i have inadvertanly broken any rules. Problem There seems to be a problem with HPLIP and USB connections in ubuntu 14.04LTS. After upgrading i managed to get the printing to work but today it has broken. Initial Issue (Solved) After upgrading to unbutntu 14.04 LTS my printer lHP LaserJet 1018 stopped printing (code=12) Looking through the Forumsthere are several issues with printitng and HPLIP so I was able to troubleshoot this. The steps I took were : Reran HPdoctor Ran hp-check Un-installed and installed the latest version of HPLIP (3.14.4) Checked the USB connections lsusb and lsusb-v Re-ran hpcheck Removed the printer from HPLIP Re-ran hpcheck Manually configued HPLIP to the printer hp-setup-g <xxx:yyy> And this worked HPLIP was able to see the printer in the USB , test page printed and was happily working for a few weeks. Current Issue Printer Not working However today my wife complains the printer is not working and checking see that although HPLIP has the same error code and did not seem to be able to see the printer although running lsusb could see the printer. Initially thought this may be due to usb given a new bus/device after being turned on and off and went to repeat the steps above at the moment still seeing an error in that the HPLIP is complaining that it cannot see the device **error: Device not found. Please make sure your printer is properly connected and powered-on.** current Observations lsusb output ## Bus 002 Device 007: ID 03f0:4117 Hewlett-Packard LaserJet 1018 sudo hp-check output *> "duan@duan-Lenovo-B550:~$ sudo hp-check [sudo] password for duan: Saving output in log file: /home/duan/hp-check.log HP Linux Imaging and Printing System (ver. 3.14.4) Dependency/Version Check Utility ver. 15.1 Copyright (c) 2001-13 Hewlett-Packard Development Company, LP This software comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to distribute it under certain conditions. See COPYING file for more details. Note: hp-check can be run in three modes: 1. Compile-time check mode (-c or --compile): Use this mode before compiling the HPLIP supplied tarball (.tar.gz or .run) to determine if the proper dependencies are installed to successfully compile HPLIP. Run-time check mode (-r or --run): Use this mode to determine if a distro supplied package (.deb, .rpm, etc) or an already built HPLIP supplied tarball has the proper dependencies installed to successfully run. Both compile- and run-time check mode (-b or --both) (Default): This mode will check both of the above cases (both compile- and run-time dependencies). Full Output output of hp-setup -g 002:007 window box "device not found please make sure your printer is properly connected and powered on" duan@duan-Lenovo-B550:~$ sudo hp-setup -g 002:007 [sudo] password for duan: > HP Linux Imaging and Printing System (ver. 3.14.4) Printer/Fax Setup > Utility ver. 9.0 > > Copyright (c) 2001-13 Hewlett-Packard Development Company, LP This > software comes with ABSOLUTELY NO WARRANTY. This is free software, and > you are welcome to distribute it under certain conditions. See COPYING > file for more details. 
> > hp-setup[18461]: debug: param=002:007 hp-setup[18461]: debug: > selected_device_name=None Fontconfig error: > "/etc/fonts/conf.d/65-khmer.conf", line 14: out of memory Fontconfig > error: "/etc/fonts/conf.d/65-khmer.conf", line 23: out of memory > Fontconfig error: "/etc/fonts/conf.d/65-khmer.conf", line 32: out of > memory hp-setup[18461]: debug: Sys.argv=['/usr/bin/hp-setup', '-g', > '002:007'] printer_name=None param=002:007 jd_port=1 device_uri=None > remove=False Searching for device... hp-setup[18461]: debug: Trying > USB with bus=002 dev=007... hp-setup[18461]: debug: Not found. > hp-setup[18461]: debug: Trying serial number 002:007 hp-setup[18461]: > debug: Probing bus: usb hp-setup[18461]: debug: Probing bus: par > error: Device not found. Please make sure your printer is properly > connected and powered-on. hp-setup[18461]: debug: Starting GUI loop. .. USB lead Works with the Windows 7 laptop Printer Works with windows 7 laptop Questions Is this a Bug with HPLIP or an issue with laptop/printer? Supplementary question if it is a bug what information is needed and where should it be sent ? Any suggestions on how to get the printer to work correctly with Ubuntu 14.04LTS/HPLIP 13.4.3 so that it stays working ?
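
    A few checks that usually narrow down whether HPLIP has lost the device or the USB/CUPS layer underneath it has (a sketch; run with the printer powered on):

        hp-check -r                     # run-time dependency check only
        lsusb | grep -i hewlett         # does the kernel still enumerate 03f0:4117?
        lpinfo -v | grep -i usb         # does CUPS itself see a usb:// URI for the LaserJet?
        sudo hp-setup -i                # interactive (non-GUI) setup lets HPLIP probe USB itself
        tail -f /var/log/syslog | grep -i -e usb -e hp   # watch for disconnect/reconnect events while printing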

    Read the article

  • My Interview with Microsoft

    - by Victor Hurdugaci
    This post is for those who want to apply or have already applied (but not finished the interview) for a Microsoft Job. The recruitment process is quite similar for everyone and consists of a few steps. Application E-Mail Interview Phone Interview On Site Interview I will tell you my story and how I went through the four phases. 1. Application My blog's title (Ex Nihilo Nihil Fit) means "Nothing Comes Out of Nothing". You can't get a job at Microsoft by not doing anything - this is true for anything else. The first step you need to complete is the application process. For this, many options are available. You can... ... apply online on Microsoft's Careers website as I did ... send your CV to different e-mail addresses (there are some dedicated e-mails for different positions) ... apply through some 3rd party organization (job shop, campus recruitment, job agency, etc) On MS Careers you just have to post your CV and choose the job you want. That's all! No recommendation letter, no cover letter, no nothing. Of course, not every CV passes the selection process. Here are some tips for improving your resume (worked for me): Don't write it just before applying! Write a draft version, wait a few days and then review it. This way you will find a lot of mistakes and stupid things you wrote initially. If you review it immediately after writing, your mind will not be criticism oriented and will just ignore mistakes. Repeat the write-wait-review process as many times as necessary, until you find that the review revealed no mistakes. After you did the final review and the CV is bullet-proof, ask others to review it. They will definitely find inconsistencies and mistakes and this will make you feel stupid. This is good because will open your eyes will make you go into an 'I want to improve' mode. You'll try to correct everything. After you come up with a modified version go again through steps 1 and 2. Repeat this as many times as necessary. [Special thanks to Lucian Sasu, Nadia Comanici, Andrei Ciobanu, Monica Balan and Lavinia Tanase for reviewing my CV!] Make it short and give only relevant facts. Initially, I come up with a 5 pages CV because I wrote every single technology with which I worked. There were a lot irrelevant things, I wrote Windows Workflow Foundation just because I played with it for a few days. I added extensive descriptions for every project, made a personal details section (name, birth date, address, etc) of 1/2 page. Others suggested to cut everything that was not necessary. You don't need to give extensive descriptions, just add a few words. For example, I wrote "VS Image Visualizer - Visual Studio 2008 debug visualizer for images" and added a link to the project's page - you submit formatted andcan embed links. Add something that makes it different. I don't know if this makes a difference, but I added some lines to separate items just like in the picture below. Definitely Microsoft gets thousands of CVs per day. You need something special. Don't lie! Tell exactly what you did and what is the proficiency level of your skills. For example, don't write "Advanced" for UML if you don't know the difference between composition and aggregation. Be realistic and don't under/over estimate yourself. Use the spell chick. Make sure everything is written in correct English and there are no grammar/spelling mistakes. Noddy likes a WC with grammar mi takes. You mght fail just because of that. Once you completed your CV, choose the job that suits best your needs, apply and wait... 
The waiting is a problem because all these big companies like Microsoft, Google, Mozilla, Apple, etc. will contact you only if they find something interesting in your application. If you're not suitable, then no rejection is sent. I applied for an Intern Software Development Engineer position at Microsoft Redmond. I cannot apply for a full time position because I want to finish the master program on time, in the next summer - an internship is just what I need. 2. E-Mail Interview January 20, 2010. Two months since I submitted the CV. I wasn't hoping anymore that MS will contact me, when I got an e-mail titled: "Victor Hurdugaci ES DK" from Holly Peterson saying: Read more >>

    Read the article

  • Adopting DBVCS

    - by Wes McClure
    Identify early adopters Pick a small project with a small(ish) team.  This can be a legacy application or a green-field application. Strive to find a team of early adopters that will be eager to try something new. Get the team on board! Research Research the tool(s) that you want to use.  Some tools provide all of the features you would need while some only provide a slice of the pie.  DBVCS requires the ability to manage a set of change scripts that update a database from one version to the next.  Ideally a tool can track database versions and automatically apply updates.  The change script generation process can be manual, but having diff tools available to automatically generate it can really reduce the overhead to adoption.  Finally, an automated tool to generate a script file per database object is an added bonus as your version control system can quickly identify what was changed in a commit (add/del/modify), just like with code changes. Don’t settle on just one tool, identify several.  Then work with the team to evaluate the tools.  Have the team do some tests of the following scenarios with each tool: Baseline an existing database: can the migration tool work with legacy databases?  Caution: most migration platforms do not support baselines or have poor support, especially the fad of fluent APIs. Add/drop tables Add/drop procedures/functions/views Alter tables (rename columns, add columns, remove columns) Massage data – migrations sometimes involve changing data types that cannot be implicitly casted and require you to decide how the data is explicitly cast to the new type.  This is a requirement for a migrations platform.  Think about a case where you might want to combine fields, or move a field from one table to another, you wouldn’t want to lose the data. Run the tool via the command line.  If you cannot automate the tool in Continuous Integration what is the point? Create a copy of a database on demand. Backup/restore databases locally. Let the team give feedback and decide together, what tool they would like to try out. My recommendation at this point would be to include TSqlMigrations and RoundHouse as SQL based migration platforms.  In general I would recommend staying away from the fluent platforms as they often lack baseline capabilities and add overhead to learn a new API when SQL is already a very well known DSL.  Code migrations often get messy with procedures/views/functions as these have to be created with SQL and aren’t cross platform anyways.  IMO stick to SQL based migrations. Reconciling Production If your project is a legacy application, you will need to reconcile the current state of production with your development databases.  Find changes in production and bring them down to development, even if they are old and need to be removed.  Once complete, produce a baseline of either dev or prod as they are now in sync.  Commit this to your VCS of choice. Add whatever schema changes tracking mechanism your tool requires to your development database.  This often requires adding a table to track the schema version of that database.  Your tool should support doing this for you.  You can add this table to production when you do your next release. Script out any changes currently in dev.  Remove production artifacts that you brought down during reconciliation.  Add change scripts for any outstanding changes in dev since the last production release.  Commit these to your repository.   
Say No to Shared Dev DBs
Simply put, you wouldn't dream of sharing a code checkout, why would you share a development database? If you have a shared dev database, back it up, distribute the backups and take the shared version offline (including the dev db server once all projects are using DB VCS). Doing DB VCS with a shared database is bound to cause problems as people won't be able to easily script out their own changes from those that others are working on.

First prod release
Copy prod to your beta/testing environment. Add the schema changes table (or mechanism) and do a test run of your changes. If successful you can schedule this to be run on production.

Evaluation
After your first release, evaluate the pain points of the process. Try to find tools or modifications to existing tools to help fix them. Don't leave stones unturned, iteratively evolve your tools and practices to make the process as seamless as possible. This is why I suggest open source alternatives. Nothing is set in stone, a good example was adding transactional support to TSqlMigrations. We ran into situations where an update would break a database, so I added a feature to do transactional updates and rollback on errors! Another good example is generating change scripts. We have been manually making these for months now. I found an open source project called Open DB Diff and integrated this with TSqlMigrations. These were things we just accepted at the time when we began adopting our tool set. Once we became comfortable with the base functionality, it was time to start automating more of the process. Just like anything else with development, never be afraid to try to find tools to make your job easier!

Enjoy -Wes

    Read the article

  • 2010 Collaboration Summit Impressions

    - by Elena Zannoni
It's a bit late, but there you have it anyway. April 14 to 16 I attended the Linux Foundation Collaboration Summit in SFO. I was running two tracks, one on tracing and one on tools. You can see the tracks and the slides here: http://events.linuxfoundation.org/events/collaboration-summit/slides

I was pretty busy both days, Thursday with a whole day tracing track, Friday with a half day toolchain track. The sessions were well attended, the rooms were full, with people spilling into the hallways. Some new things were presented, like Kernelshark, by Steve Rostedt, a GUI (yes, believe it or not, a GUI) written in GTK. It is very nice, showing a timeline for traced kernel events, and you can zoom in and filter at will. It works on the latest kernels, and it requires some new things/fixes in GTK. I don't recall exactly what version of GTK though. Dominique Toupin from Ericsson presented something about user requirements for tracing. Mostly though about who's who in the embedded world, and Eclipse. Masami and Mathieu presented an update on their work. See their slides.

The interesting thing to me was of course the new version of uprobes w/o underlying utrace presented by Jim Keniston. At the end of the session we had a discussion about the future of utrace. Roland wasn't there, but Tom Tromey (also from RedHat) collected the feedback. Basically we are at a standstill now that utrace has been rejected yet again. There wasn't much advice that anybody could give, except, jokingly, we decided that the only way in is to make it a part of perf events. There needs to be another refactoring, but most of all, this "killer app" that would be enabled because of utrace hasn't materialized yet. We think that having a good debugging story on Linux is enough of a killer app, for instance allowing multiple tracers, and not relying on SIGCHLD etc. I think this wasn't completely clear to the kernel community. Trying to achieve debugging via a gdb stub inside the kernel that interfaces to utrace and is controlled via the gdb remote protocol also lost its appeal (thankfully, since the gdb remote protocol is archaic). Somebody would have to be creative in how to submit utrace. It doesn't have to be called utrace (it was really a random choice, for lack of a letter that was not already used in front of the word "trace"). So basically, I think the ideas behind utrace are sound, and the necessity of a new interface is acknowledged. But I believe the integration/submission process with the kernel folks has to restart from scratch, with a clean slate. We'll see. There are many conferences and meetings coming up in the near future where things can be discussed further.

On the second day, Friday, we had the tools talks. It was interesting to observe the more "kernel" oriented people's behavior towards the gcc etc. community. The first talk was by Mark Mitchell, about GCC and its new plugin architecture. After that, Paolo talked about the new C++1x standard, which will be finalized in 2011. Many features are already implemented in the libstdc++ library and gcc and usable today. We had a few minutes (really, the half day track was quite short) where Bradley Kuhn from the Software Freedom Law Center explained the GPLv3 exception for gcc (due to the new gcc plugin architecture and the availability of the intermediate results from the compilation, which is a new thing). I will not try to explain, but basically you cannot take the result of the preprocessing and then use that in your own proprietary compiler.
After, we had a talk by Ian Taylor about the new Gold linker. One good thing in that area is that they are trying to make gold the new default linker (for instance Fedora will use gold as the distro linker). However gold is very different from binutils' old linker. It doesn't use a linker script, for instance. The kernel has been linked with gold many times as an exercise (the ground work was done by Kris Van Hees), but this needs to be constantly tested/monitored because the kernel linker script is very complex, and uses esoteric features (Wenji is now monitoring that each kernel RC can be built with gold). It was positive that people are now aware of gold and the need for it to be ported to more architectures. It seems that the porting is very easy, with little arch dependent code. Finally Tom Tromey presented about gdb and the archer project. Archer is a development branch of gdb mostly done by RedHat, where they are focusing on better c++ printing, c++ expression parsing, and plugins. The archer work is merged regularly in the gdb mainline. In general it was a good conference. I did miss most of the first day, because that's when I flew in. But I caught a couple of talks. Nothing earth shattering, except for Google giving each person registered a free Android phone. Yey.

    Read the article

  • To ORM or Not to ORM. That is the question…

    - by Patrick Liekhus
UPDATE: Thanks for the feedback and comments. I have adjusted my table below with your recommendations. I had missed a point or two.
I wanted to do a series on creating an entire project using the EDMX XAF code generation and the SpecFlow BDD Easy Test tools discussed in my earlier posts, but I thought it would be appropriate to start with a simple comparison and the reasoning behind why I chose to use these tools.
Let's start by defining the term ORM, or Object-Relational Mapping. According to Wikipedia it is defined as the following: Object-relational mapping (ORM, O/RM, and O/R mapping) in computer software is a programming technique for converting data between incompatible type systems in object-oriented programming languages. This creates, in effect, a "virtual object database" that can be used from within the programming language.
Why should you care? Basically it allows you to map your business objects in code to the persistence layer behind them. And better yet, why would you want to do this? Let me outline it in the following points:
- Development speed. No more repetitive work mapping query results to object members. Once the map is created the code is rendered for you.
- Persistence portability. The ORM knows how to map SQL-specific syntax for the persistence engine you choose. It does not matter if it is SQL Server, Oracle or another database of your choosing.
- Standard/boilerplate code is simplified. The basic CRUD operations are consistent and can use database metadata for basic operations.
So how does this help? Well, let's compare some of the ORM tools that I have used and/or researched. I have been interested in ORM for some time now. My ORM of choice for a long time was NHibernate and I still believe it has a strong case in some business situations. However, you have to take business considerations into account and the law of diminishing returns. Because of these two factors, my recent activity and experience has been around DevExpress eXpress Persistent Objects (XPO). The primary reason for this is that they have the DevExpress eXpress Application Framework (XAF) that sits on top of XPO. With this added value, the data model can be created (either database first or code first) and the Web and Windows clients can be created from these maps. While out of the box they provide some simple list and detail screens, you can very easily extend and modify these to your liking. DevExpress has done a tremendous job of providing enough framework while also staying out of the way when you need to extend it. This sounds worse than it really is. What I mean by this is that if you choose to follow DevExpress coding style and recommendations, the hooks and extension points provided allow you to do some pretty heavy lifting while also not worrying about the basics.
I have put together a list of the top features that I have used to compare the limited list of ORMs that I have exposure with. Again, the biggest selling point in my opinion is that XPO is just as solid as any of the other ORMs, but with the added layer of XAF they become unstoppable. And then couple that with the EDMX modeling tools and code generation, and it becomes a no-brainer.
Designer Features | Entity Framework | NHibernate | Fluent w/ NHibernate | Telerik OpenAccess | DevExpress XPO | DevExpress XPO/XAF plus Liekhus Tools
Uses XML to map relationships | - | Yes | - | - | - | -
Visual class designer interface | Yes | - | - | - | - | Yes
Management integrated w/ Visual Studio | Yes | - | - | Yes | - | Yes
Supports schema first approach | Yes | - | - | Yes | - | Yes
Supports model first approach | Yes | - | - | Yes | Yes | Yes
Supports code first approach | Yes | Yes | Yes | Yes | Yes | Yes
Attribute driven coding style | Yes | - | Yes | - | Yes | Yes

I have a very small team and limited resources with a lot of responsibilities. In order to keep up with our customers, we must rely on tools like these. We use the EDMX tool so that we can create a visual representation of the applications with our customers. Second, we rely on the code generation so that we can focus on the business problems at hand and not on whether a field is mapped correctly. This keeps us from requiring as many junior level developers on our team. I have also worked on multiple teams where they believed in writing their own "framework". In my experience and opinion this is not the route to take unless you have a team dedicated to supporting just the framework. Each time that I have worked on custom frameworks, the framework eventually becomes old, outdated and full of "performance" enhancements specific to one or two requirements. With an ORM, there are a lot of people smarter than me working on the bigger issue of persistence and performance. Again, my recommendation would be to use an available framework and get to work on your business domain problems. If your coding is not making money for you, why are you working on it? Do you really need to be writing query-to-object-member code again and again? Thanks
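To make the comparison above a little more tangible for readers who have not used an ORM before, here is a generic sketch of what a declarative, attribute-driven mapping and metadata-driven CRUD look like. The tools in the table are all .NET products, so this deliberately uses SQLAlchemy in Python purely to illustrate the concept; it is not the API of XPO, Entity Framework or any other product listed above.

    from sqlalchemy import create_engine, Column, Integer, String
    from sqlalchemy.orm import declarative_base, Session

    Base = declarative_base()

    class Customer(Base):
        """The class is the business object; the attributes declare the persistence map."""
        __tablename__ = "customers"
        id = Column(Integer, primary_key=True)
        name = Column(String(100), nullable=False)
        city = Column(String(50))

    # "Persistence portability": swap the connection string and the same mapping
    # targets a different database engine without touching the class.
    engine = create_engine("sqlite:///:memory:")
    Base.metadata.create_all(engine)

    # The basic CRUD operations come for free - no hand-written column-to-member code.
    with Session(engine) as session:
        session.add(Customer(name="Acme Ltd", city="Redmond"))
        session.commit()
        print([c.name for c in session.query(Customer).all()])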

    Read the article

  • Upgrading VSIX extensions from VS2012 to VS2013

    - by Tarun Arora [Microsoft MVP]
Originally posted on: http://geekswithblogs.net/TarunArora/archive/2013/06/27/upgrading-vsix-extensions-from-vs2012-to-vs2013.aspx
As consumers of your Visual Studio extensions start to move over to VS 2013, you will have to upgrade the Visual Studio extensions you build for Visual Studio 2012 to Visual Studio 2013 and republish to the Visual Studio extension gallery. Failing that, it will not be possible for your consumers to install and use your extensions on Visual Studio 2013.

Objective
In this blog post, I'll show you how simple it is to upgrade your Visual Studio 2012 extension to Visual Studio 2013. There aren't any reported breaking changes between the VS 2012 SDK and the VS 2013 SDK; the upgrade usually involves rebuilding the extension against the VS 2013 SDK and updating the VSIX manifest file.

Walkthrough
1. Download the Visual Studio 2013 SDK - You will need the Visual Studio 2013 SDK in order to open up the Visual Studio extension project in Visual Studio 2013. The SDK can be downloaded from here. Install the SDK before you proceed.
2. Once the VS 2013 SDK has been installed, open up your package project. For the purposes of this blog post, I'll open up the Avanade Extension – Software Inventory in Visual Studio 2013. You will notice that Visual Studio doesn't load the project but lets you know that the project needs to be migrated.
3. Right click the project and choose the option 'Reload Project' from the context menu.
4. Choosing the Reload Project option brings up an upgrade window, telling you that the upgrade is a one-way upgrade, i.e. the project will be changed to work with Visual Studio 2013 and you will not be able to open the project up in Visual Studio 2012. My recommendation would be to create a Visual Studio 2013 branch and upgrade the project in that branch only, so if you need to go back to the Visual Studio 2012 project at some point, you have a handy reference in a separate branch.
5. Upon clicking OK, the project is updated. The following changes are made at the time of upgrade:
   - The runtime version is updated in the Resources.Designer.cs file
   - The minimum version of Visual Studio in the package project file is changed from 11.0 to 12.0
6. Reference VS 2013 dlls rather than VS 2012 dlls. So reference Microsoft.TeamFoundation.Client.dll and Microsoft.TeamFoundation.Controls.dll from C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\ReferenceAssemblies\v2.0 and C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\ReferenceAssemblies\v4.5. If you have any other API references, then change the references to point to VS 2013 instead of VS 2012.
7. Rebuild your solution to ensure there are no breaking changes. Success!
8. Update the VSIX manifest file (the file source.extension.vsixmanifest contains the metadata for your VSIX); a rough scripted sketch of this change follows at the end of this walkthrough.
   - Update the Install Targets from 11.0 to 12.0. This basically enforces that the extension can be installed on the Visual Studio 2013 version of Visual Studio.
   - Update the Dependencies from Visual Studio MPF 11.0 to Visual Studio MPF 12.0.
9. Rebuild the solution, open up the bin folder for the package project and look for the *.vsix file [Microsoft Visual Studio Extension].
   - This is basically the installer for your extension.
   - Double click the installer to launch the installation wizard. Voila! The package installation wizard opens up and gives you the option to install the extension for Visual Studio 2013.
   - Click Install to continue.
   - Note – If you run into the exception "23/06/2013 10:42:18 - Install Error : Microsoft.VisualStudio.ExtensionManager.InstallByMsiException: The InstalledByMSI element in extension Avanade Extensions cannot be 'true' when installing an extension through the Extensions and Updates Installer.  The element can only be 'true' when an MSI lays down the extension manifest file." ensure you have the option "This VSIX is installed by Windows Installer" unchecked in the Install Targets tab.
10. Verify that the extension has installed correctly.
   - Open Extension Manager and verify that the installed extension shows up in the extension manager's list of installed VSIX.
11. First look at the updated extension.
   - The links have now been moved to the context menu, so to see the navigation links, you'll have to right click on the icon and select the option from the context menu.

Note – The Avanade Extension used in the demo has been developed by Utkarsh and Tarun. The Software Inventory Extension for Visual Studio 2012…  allows you to see the list of software installed on the hosted build server right from within Visual Studio; the extension also allows you to export this list to Excel. More details on how this has been implemented can be found here.

I hope you found this useful. In case you have any questions or feedback, feel free to reach out on the Visual Studio extensibility MSDN forums or via the Microsoft Visual Studio feedback forum. Thank you for taking the time out to read this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!
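As referenced in step 8, the manifest retargeting can also be pictured as a small script. This is only an illustrative sketch: the post makes the change by hand in the manifest designer, and the element and attribute names below follow the VSIX 2.0 manifest schema as I recall it (InstallationTarget version range, MPF Dependency Id), so verify them against your own source.extension.vsixmanifest before relying on anything like this.

    import xml.etree.ElementTree as ET

    NS = "http://schemas.microsoft.com/developer/vsx-schema/2011"  # VSIX 2.0 manifest namespace (assumed)
    ET.register_namespace("", NS)

    def retarget_to_vs2013(manifest_path: str) -> None:
        tree = ET.parse(manifest_path)
        root = tree.getroot()
        # Step 8a: move the install target from the VS 2012 (11.0) band to VS 2013 (12.0).
        for target in root.iter(f"{{{NS}}}InstallationTarget"):
            target.set("Version", "[12.0,13.0)")  # i.e. 12.0 and anything below 13.0
        # Step 8b: the MPF dependency follows the same 11.0 -> 12.0 rename.
        for dep in root.iter(f"{{{NS}}}Dependency"):
            if dep.get("Id") == "Microsoft.VisualStudio.MPF.11.0":
                dep.set("Id", "Microsoft.VisualStudio.MPF.12.0")
        tree.write(manifest_path, encoding="utf-8", xml_declaration=True)

    if __name__ == "__main__":
        retarget_to_vs2013("source.extension.vsixmanifest")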

    Read the article

  • Geocaching - World wide treasure hunt

I'm not quite sure how I came across this topic but actually I find it absolutely interesting, challenging and most of all great fun for family and friends. The interesting part is for sure that you can follow other people's treasures and their preferred locations where a cache might be hidden. Of course, it won't be easy to find a cache after all. Sometimes there are even 'mystery caches' which have either riddles, further instructions or little brain games for you in order to find the actual cache - that's the challenge. And last but not least, those caches are hidden outdoors. A great experience to explore nature either on your own, with your family (especially with children), or as a treasure hunting pack with a couple of friends.

What is geocaching?
It's a high-tech outdoor treasure hunting game that's a great way to explore the world with friends, family or on your own. Participants use GPS-enabled devices to locate hidden containers called geocaches. There are over one million geocaches hidden around the world today, waiting for you to find them. Visit Geocaching.com to search for geocaches near you. (Source: Referral Email of geocaching.com) Check out Geocaching 101 for further details and information. They also provide a video channel on YouTube.

Which equipment do I need?
Any GPS-enabled device is sufficient to go on the hunt. I'm going to start our geocaching experience equipped with my Samsung Galaxy Tab. Additionally, I installed a geocaching.com client called c:geo that hopefully assists me soon. Combined with a map app like Google Maps and a nice compass app you should be fully equipped and ready to go. I guess that even a car navigation system is perfect for that task. Later on, with more experience and demand for technology (or precision), it might be interesting to opt for a dedicated GPS device, like a Garmin or any other brand on the market.

What is a geocache and what does it contain?
In its simplest form, a cache always contains a logbook or logsheet for you to log your find. Larger caches may contain a logbook and any number of items. These items turn the adventure into a true treasure hunt. You never know what the cache owner or visitors to the cache may have left for you to enjoy. Remember, if you take something, leave something of equal or greater value in return. It is recommended that items in a cache be individually packaged in a clear, zipped plastic bag to protect them from the elements.

Finding your first geocache
Well, first you have to be interested enough to pick up the challenge. Then you have to check out the Geocache directory on geocaching.com. They have recommendations for beginner's caches but you are free to choose any. Actually, we have a Mystery Cache very close to our base, and I guess that we are going for that one on our first trip. Anyway, there is a very informative guide on the website which should answer all your questions about starting your new outdoor adventure. For sure, it's going to be rewarding.

Team up with friends and family
Especially as a beginner there might be misunderstandings in handling the GPS coordinates, the compass, or the map, and even finding the container at the documented position isn't easy in the first place. Luckily, there are logbook reports online from other hunters, and most of the time there are even 'spoiler' images available. But also bear in mind that a geocache might have been removed or lost due to careless people or whatever other reasons.
Don't be disappointed in case you can't find anything... it may simply not be there anymore. A general recommendation in this case would be to replace the missing container with a new one, and give feedback to the original owner about the state of that particular location. After all, it's about fun and active participation in a world-wide community.

Geocaches in Mauritius?
Yes, there are currently about 45 geocaches spread all over the island, and even a single one in Rodrigues - that's gonna be a tough one. Hopefully, we will get increasing numbers, as Geocaching.com allows, or better, even encourages you to hide new containers at your locations of choice. I think this is going to be real fun for us during the upcoming weeks and months, especially when we are travelling to other countries and transferring so-called trackable items between geocaches. My first impression is that Geocaching.com is very mature, open and community-oriented. There are literally hundreds of thousands of geocache 'hunters' all over the world. And usually finding a container remote from your home is very rewarding. I'll keep you updated on these matters during the months to come...

    Read the article

  • Recover that Photo, Picture or File You Deleted Accidentally

    - by The Geek
Have you ever accidentally deleted a photo on your camera, computer, USB drive, or anywhere else? What you might not know is that you can usually restore those pictures—even from your camera's memory stick. Windows tries to prevent you from making a big mistake by providing the Recycle Bin, where deleted files hang around for a while—but unfortunately it doesn't work for external USB drives, USB flash drives, memory sticks, or mapped drives. The great news is that this technique also works if you accidentally deleted the photo… from the camera itself. That's what happened to me, and prompted writing this article.

Restore that File or Photo using Recuva
The first piece of software that you'll want to try is called Recuva, and it's extremely easy to use—just make sure when you are installing it, that you don't accidentally install that stupid Yahoo! toolbar that nobody wants. Now that you've installed the software, and avoided an awful toolbar installation, launch the Recuva wizard and let's start through the process of recovering those pictures you shouldn't have deleted. The first step on the wizard page will let you tell Recuva to only search for a specific type of file, which can save a lot of time while searching, and make it easier to find what you are looking for. Next you'll need to specify where the file was, which will obviously be up to wherever you deleted it from. Since I deleted mine from my camera's SD card, that's where I'm looking for it. The next page will ask you whether you want to do a Deep Scan. My recommendation is to not select this for the first scan, because usually the quick scan can find it. You can always go back and run a deep scan a second time. And now, you'll see all of the pictures deleted from your drive, memory stick, SD card, or wherever you searched. Looks like what happened in Vegas didn't stay in Vegas after all… If there are a really large number of results, and you know exactly when the file was created or modified, you can switch to the advanced view, where you can sort by the last modified time. This can help speed up the process quite a bit, so you don't have to look through quite as many files. At this point, you can right-click on any filename, and choose to Recover it, and then save the files elsewhere on your drive. Awesome!

Restore that File or Photo using DiskDigger
If you don't have any luck with Recuva, you can always try out DiskDigger, another excellent piece of software. I've tested both of these applications very thoroughly, and found that neither of them will always find the same files, so it's best to have both of them in your toolkit. Note that DiskDigger doesn't require installation, making it a really great tool to throw on your PC repair Flash drive. Start off by choosing the drive you want to recover from… Now you can choose whether to do a deep scan, or a really deep scan. Just like with Recuva, you'll probably want to select the first one first. I've also had much better luck with the regular scan, rather than the "dig deeper" one. If you do choose the "dig deeper" one, you'll be able to select exactly which types of files you are looking for, though again, you should use the regular scan first. Once you've come up with the results, you can click on the items on the left-hand side, and see a preview on the right. You can select one or more files, and choose to restore them. It's pretty simple!

Download DiskDigger from dmitrybrant.com
Download Recuva from piriform.com
Good luck recovering your deleted files!
And keep in mind, DiskDigger is a totally free donationware software from a single, helpful guy… so if his software helps you recover a photo you never thought you'd see again, you might want to think about throwing him a dollar or two.

    Read the article

  • Selling Visual Studio ALM

    - by Tarun Arora
Introduction
As a consultant I have been selling Application Lifecycle Management services using Visual Studio and Team Foundation Server. I've been contacted various times by friends working in organizations telling me that ALM processes in their company were benchmarked when dinosaurs walked the earth. Most of these individuals already know the great features Microsoft ALM tools offer and are keen to start a conversation with the CIO but don't exactly know where to start. It is very important how you engage in your first conversation. If you start the conversation with 'There is this great tooling from Microsoft which offers amazing features to boost developer productivity, … ' from experience I can tell you the reply from your CIO would be 'I already know! Our existing landscape has a combination of bleeding edge open source and cutting edge licensed tools which already cover these features quite well; moreover, Microsoft products have a high licensing cost associated with them.' You will always find it harder to sell by feature. The trick is to highlight the gap in the existing processes & tools and then highlight the impact of these gaps on the overall development process; by then you will have captured enough attention to show off how the ALM tooling offered by Microsoft not only fills those gaps but offers great value adds to take their development practices to the next level.

Rangers ALM Assessment Guide
Image 1 – Welcome! First look at the Rangers ALM assessment guide
Most organizations already have some processes in place to cover aspects of ALM. How do you go about proving that there isn't enough cover in place? This is where the Visual Studio ALM Rangers ALM Assessment guide can help. The ALM assessment guide is really a tool that helps you gather information about development practices and processes within a customer's environment. Several questionnaires are used to identify the current state of individual development lifecycle areas and decide on a desired state for those processes. It also presents guidance and roll-up summaries to help with recommendations moving forward. The ALM Rangers assessment guide can be downloaded from here.
Image 2 – ALM Assessment guide divided into different functions of SDLC
The assessment guide is divided into the different functions of the Software Development Lifecycle (listed below); this gives you the ability to assess how mature the company is in the different areas of the SDLC:
- Architecture & Design
- Requirement Engineering & UX
- Development
- Software Configuration Management
- Governance
- Deployment & Operations
- Testing & Quality Assurance
- Project Planning & Management
Each section has a set of questions; fill in the assessment by selecting "Never/Sometimes/Always" from the Answer column in the question sheets. Each answer has a weightage towards the overall score. Each question has a link next to it; clicking the link takes you to the Reference sheet which gives you more details about the question along with a reason for "why you need to ask this question?", "other ways to phrase the question" and "what to expect as an answer from the customer". The trick is to engage the customer in a discussion. You need to probe a lot, listen to the customer and have a discussion with several team members, preferably without management, to ensure that you receive candid feedback.
This reminds me of a funny incident when, during an ALM review, a customer told me that they have a sophisticated semi-automated application deployment process; further discussions revealed that deployment actually involved 72 manual configuration steps per production node. Such observations can be recorded in the Issue Brainstorming worksheet for further consideration later. It is also worth mentioning the different levels of ALM maturity to the customer. By default the desired state of ALM maturity is set to Standard, but it is possible to set a desired state by area. You should strive for Advanced or Dynamic, and it always helps to explain the classification and its advantages.
Image 3 – ALM levels by description
The ALM assessment guide helps you arrive at a quantitative measure of the company's ALM maturity. The resultant graph, plotted on a spider's web, shows you the company's current state of ALM maturity and the desired state of ALM maturity. Further, since the results are classified by area, you can immediately spot the areas where the customer needs immediate help.
Image 4 – The spider's web!
The red cross icons are areas shouting out for immediate attention; the yellow exclamation icons are areas that need improvement. These icons are calculated from the difference between the current state of ALM maturity and the desired state of ALM maturity.
Image 5 – Results by area

Conclusion
To conclude, the Rangers ALM assessment guide gives you the ability to:
- Measure the customer's current ALM maturity level
- Understand the ALM maturity level the customer desires to achieve
- Capture a healthy list of issues the customer wants to brainstorm further

Now what's next…?
Download and get started with the Rangers ALM Assessment Guide. If you have successfully captured the above listed three pieces of information, you are in a great state to make recommendations on the identified areas, highlighting the benefits that Visual Studio ALM tools would offer. In the next post I will be covering how to take the ALM assessment results as the base to actually convert your recommendation into a sale. Remember to subscribe to http://feeds.feedburner.com/TarunArora. I would love to hear your feedback! If you have any recommendations on things that I should consider or any questions or feedback, feel free to leave a comment.
*** A special thanks goes out to fellow Rangers Willy, Ethem and Philip for reviewing the blog post and providing valuable feedback. ***
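The weighted-questionnaire mechanics are simple enough to sketch. Everything below is invented purely for illustration - the actual guide is an Excel workbook with its own weightings, levels and area names - but it shows the idea: Never/Sometimes/Always answers roll up into an area score, and the gap between that score and the desired maturity level is what drives the red and yellow icons.

    ANSWER_WEIGHT = {"Never": 0, "Sometimes": 1, "Always": 2}

    def area_score(answers):
        """Average the answer weights into a 0-2 maturity score for one SDLC area."""
        return sum(ANSWER_WEIGHT[a] for a in answers) / len(answers)

    def flag(current, desired, warn_gap=0.5):
        """Red cross for a large gap to the desired state, yellow exclamation for a smaller one."""
        gap = desired - current
        if gap >= 2 * warn_gap:
            return "needs immediate attention"
        return "needs improvement" if gap >= warn_gap else "on track"

    # Hypothetical answers for two areas, scored against a desired 'Standard' level of 1.5.
    areas = {
        "Software Configuration Management": ["Always", "Sometimes", "Never"],
        "Deployment & Operations": ["Never", "Never", "Sometimes"],
    }
    desired = 1.5
    for name, answers in areas.items():
        score = area_score(answers)
        print(f"{name}: {score:.2f} -> {flag(score, desired)}")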

    Read the article

  • Vacations on Rodrigues 2014

And now for something completely different compared to the usual technical or community related articles here on this blog. Yes, this time I'm writing some lines on my (and my family's) activities during our long weekend stay on Rodrigues. So please bear with me, it's perhaps a bit more personal... Grab a soda, some popcorn and a cosy place to continue to read.

Special promotions during school holidays
Originally, our children started to ask more frequently about going on the plane again. Obviously, after their aunty from Germany was around during May, they were really eager to travel again. So we decided that it might be a great opportunity to book some vacations during their school holidays. And just in time the local hotels and hotel groups started to advertise their special promotions for citizens and residents. After collecting multiple brochures over several days, we got attracted by various hotel packages on Rodrigues - most interestingly, the expenses for the stay and flight ticket were less compared to other resorts here on the main island. As we had been to Rodrigues already back in 2008, we followed up on this idea and got in touch with a couple of travel agencies. Well, I have to report that you should be really careful about the promotions from some of them. We had a very negative experience with Shamal Travel Agency in Quatre Bornes regarding their adverts and the actual price levels and age definition for children. Please stay away from them if you are interested in transparent cost and services. Anyway, after some arrangements with two other close families we managed to confirm our stay at the Cotton Bay Hotel in Rodrigues. Given the fact that we had already stayed there, the hotel has been renovated recently, and it is under new management, all looked very promising and relaxed for our vacation.

Counting the days...
As we had already booked in July, our children were counting down the days. And it got more interesting as soon as they were finally on school holidays. Well, the day arrived and waking them up at 2:30 hrs wasn't a problem after all. Quite the opposite: it was fascinating for us parents to watch them waiting for the transport and later on during the airport transfer. Despite the early hours both didn't fall asleep and it was all so exciting. We are taking the plane!

Well organised by the Cotton Bay Hotel
Honestly, it was a breeze and a smooth ride during our stay at the hotel. From the airport transfer, the cleanliness of our bungalow, the organisation of our day trips, to the SPA - all very well done and enjoyable. The children had great fun, and although it was a bit too windy to plunge into the pool they had a lot of fun with other activities on the beach and at the Kid's Club. Oh, and we had our private petting zoo with cows, sheep and goats just close to the terrace. Some of us went to check out the SPA facilities and I have to admit that the services regarding Hammam and Sauna are better than at some other hotels in Mauritius. I don't know after how many months or years I was once again enjoying a very hot sauna. A little drawback, but nothing to worry about: there is no cold water or at least ice cubes to cool down the body, but hey, there was a nice breeze coming over the hills.
Some day trips to mention
Based on a friend's recommendation we walked to a "restaurant" called Chez Solange & Robert. Hahaha, the term restaurant is stretched widely in this case, as we enjoyed a great BBQ with fresh lobster, whole fish, and pieces of chicken breast in an open cottage - just some wooden structure covered with dried palm leaves on the roof. Pure island feeling! The other day we went to the Giant Tortoise & Cave Reserve Francois Leguat to observe the giant Aldabra tortoises and to visit the Grande Caverne, the biggest limestone cave on the island. Compared to our last visit this was a novelty, after checking out the Caverne Partate back then. The formations of stalactites and stalagmites are very impressive and imaginative. Our guide had lots of funny terms, and despite the low light conditions the kids had a great time wandering around on the narrow wooden paths and stairs. And last but not least, we decided to check out the Tyrodrig zip lines... Everyone was allowed to join the trip through the air, and our little ones stayed close to our field guides, but finally went on their own on the very last traversal. Puuuh, it was astonishing to glide over the valley, and for sure something to repeat next time.

Impressions of our vacation on Rodrigues 2014

Next stay has been discussed already
Oh yes, Rodrigues baby! We are going to come again! Tentative dates have been discussed already and now it's up to us to earn enough for our next holiday on that wonderful remote piece of paradise. Perhaps a little bit longer than this time. We'll see...

    Read the article

< Previous Page | 57 58 59 60 61 62 63 64 65  | Next Page >