Search Results

Search found 4745 results on 190 pages for 'spare parts'.


  • How to properly remove a disk from a PERC 6/i RAID controller?

    - by Stefano Borini
    I have a Dell T710 that came with a PERC 6/i RAID controller. The current RAID setup has 2x500 GB hard drives (with the OS) and 6x1000 GB hard drives (in RAID-6, currently empty). I would like to take one 1000 GB disk physically out to keep as an immediate spare in case of a crash, and configure the remaining 5x1000 GB as a single RAID-6 VD. This is all nice and clean and worked, until I realized that the display on the machine reports the lack of the 8th disk as an error. It's marked as an error, but it appears to be only a warning, since the machine is fully functional. My question is: what is the best way to keep one disk as a spare out of the array? Should I remove the disk from the cradle and insert the empty cradle into the array? Or should I just silence the error on the display in some way (how?). I know that what I am doing sounds pretty strange, but this is academia, and ordering a spare disk could take weeks. Better to have one ready in my drawer for any emergency.

    Read the article

  • Removing a device in "removed" state from Linux software RAID array

    - by Sahasranaman MS
    My workstation has two disks (/dev/sd[ab]), both with similar partitioning. /dev/sdb failed, and cat /proc/mdstat stopped showing the second sdb partition. I ran mdadm --fail and mdadm --remove for all partitions from the failed disk on the arrays that use them, although all such commands failed with:

        mdadm: set device faulty failed for /dev/sdb2: No such device
        mdadm: hot remove failed for /dev/sdb2: No such device or address

    Then I hot-swapped the failed disk, partitioned the new disk, and added the partitions to the respective arrays. All arrays got rebuilt properly except one: in /dev/md2, the failed disk doesn't seem to have been removed from the array properly. Because of this, the new partition keeps getting added as a spare to the array, and its status remains degraded. Here's what mdadm --detail /dev/md2 shows:

        [root@ldmohanr ~]# mdadm --detail /dev/md2
        /dev/md2:
                Version : 1.1
          Creation Time : Tue Dec 27 22:55:14 2011
             Raid Level : raid1
             Array Size : 52427708 (50.00 GiB 53.69 GB)
          Used Dev Size : 52427708 (50.00 GiB 53.69 GB)
           Raid Devices : 2
          Total Devices : 2
            Persistence : Superblock is persistent
          Intent Bitmap : Internal
            Update Time : Fri Nov 23 14:59:56 2012
                  State : active, degraded
         Active Devices : 1
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 1
                   Name : ldmohanr.net:2 (local to host ldmohanr.net)
                   UUID : 4483f95d:e485207a:b43c9af2:c37c6df1
                 Events : 5912611

            Number   Major   Minor   RaidDevice State
               0       8        2        0      active sync   /dev/sda2
               1       0        0        1      removed
               2       8       18        -      spare   /dev/sdb2

    To remove a disk, mdadm needs a device filename, which was /dev/sdb2 originally, but that no longer refers to device number 1. I need help with removing device number 1 with 'removed' status and making /dev/sdb2 active.
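    A hedged sketch of one likely way out (my suggestion, not from the thread): mdadm accepts the keywords "failed" and "detached" in place of a device name, which clears slots whose device node has vanished; once the stale slot is gone, the spare should start rebuilding on its own, or can be removed and added back explicitly.

        # Assumes a reasonably recent mdadm (>= 2.6):
        mdadm /dev/md2 --remove detached   # drop members whose device node is gone
        mdadm /dev/md2 --remove failed     # drop any members still marked faulty
        # If the spare is still not promoted automatically afterwards:
        mdadm /dev/md2 --remove /dev/sdb2 && mdadm /dev/md2 --add /dev/sdb2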

    Read the article

  • NDC 2010: Eric Evans - What I learned since the book

    This was one of the most rewarding sessions for me. Eric Evans explained what he has picked up and learned since he wrote the book: which parts he realized were more important than he initially thought, and which parts had been missing. A missing building block: Domain Events [...]

    Read the article

  • Rewrote GNU GPL v2 code in another language: can I change a license?

    - by Anton Gogolev
    I rewrote some parts of Mercurial (which is licensed under GNU GPL v2) in C#. Naturally, I looked a lot at the original Python code, and some parts are direct translations from Python to C#. Is it possible to have "my code" licensed under different terms, or even to make it part of a closed-source commercial application? If not, can I re-license "my code" under the LGPL, open-source it, and then use this open-sourced C# library in my closed-source commercial application?

    Read the article

  • Nvidia drivers don't work with mainline kernel

    - by dutchie
    I want to try some of the new features in the btrfs filesystem, and to do that I need to use a newer kernel than is included in Ubuntu 12.04. To do that, I have installed linux-headers-3.4.0-030400_3.4.0-030400.201205210521_all.deb, linux-headers-3.4.0-030400-generic_3.4.0-030400.201205210521_amd64.deb, and linux-image-3.4.0-030400-generic_3.4.0-030400.201205210521_amd64.deb from the mainline kernel download here. However, on rebooting into the 3.4 kernel, my desktop is stuck at a very low resolution and I cannot increase it to the full one. This did happen when I first installed, but a simple install of the nvidia-current package got everything working nicely with my GTX570 card. There appear to be some DKMS errors from when I installed the kernel, and they indicated I should look at /var/lib/dkms/nvidia-current/295.40/build/make.log:

        josh@sirius:~/Downloads$ sudo dpkg -i linux-*.deb
        Selecting previously unselected package linux-headers-3.4.0-030400.
        (Reading database ... 309400 files and directories currently installed.)
        Unpacking linux-headers-3.4.0-030400 (from linux-headers-3.4.0-030400_3.4.0-030400.201205210521_all.deb) ...
        Selecting previously unselected package linux-headers-3.4.0-030400-generic.
        Unpacking linux-headers-3.4.0-030400-generic (from linux-headers-3.4.0-030400-generic_3.4.0-030400.201205210521_amd64.deb) ...
        Selecting previously unselected package linux-image-3.4.0-030400-generic.
        Unpacking linux-image-3.4.0-030400-generic (from linux-image-3.4.0-030400-generic_3.4.0-030400.201205210521_amd64.deb) ...
        Done.
        Setting up linux-headers-3.4.0-030400 (3.4.0-030400.201205210521) ...
        Setting up linux-headers-3.4.0-030400-generic (3.4.0-030400.201205210521) ...
        Examining /etc/kernel/header_postinst.d.
        run-parts: executing /etc/kernel/header_postinst.d/dkms 3.4.0-030400-generic /boot/vmlinuz-3.4.0-030400-generic
        ERROR (dkms apport): kernel package linux-headers-3.4.0-030400-generic is not supported
        Error! Bad return status for module build on kernel: 3.4.0-030400-generic (x86_64)
        Consult /var/lib/dkms/nvidia-current/295.40/build/make.log for more information.
        Setting up linux-image-3.4.0-030400-generic (3.4.0-030400.201205210521) ...
        Running depmod.
        update-initramfs: deferring update (hook will be called later)
        Examining /etc/kernel/postinst.d.
        run-parts: executing /etc/kernel/postinst.d/dkms 3.4.0-030400-generic /boot/vmlinuz-3.4.0-030400-generic
        ERROR (dkms apport): kernel package linux-headers-3.4.0-030400-generic is not supported
        Error! Bad return status for module build on kernel: 3.4.0-030400-generic (x86_64)
        Consult /var/lib/dkms/nvidia-current/295.40/build/make.log for more information.
        run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.4.0-030400-generic /boot/vmlinuz-3.4.0-030400-generic
        update-initramfs: Generating /boot/initrd.img-3.4.0-030400-generic
        run-parts: executing /etc/kernel/postinst.d/pm-utils 3.4.0-030400-generic /boot/vmlinuz-3.4.0-030400-generic
        run-parts: executing /etc/kernel/postinst.d/update-notifier 3.4.0-030400-generic /boot/vmlinuz-3.4.0-030400-generic
        run-parts: executing /etc/kernel/postinst.d/zz-update-grub 3.4.0-030400-generic /boot/vmlinuz-3.4.0-030400-generic
        Generating grub.cfg ...
        Found linux image: /boot/vmlinuz-3.4.0-030400-generic
        Found initrd image: /boot/initrd.img-3.4.0-030400-generic
        Found linux image: /boot/vmlinuz-3.2.0-24-generic
        Found initrd image: /boot/initrd.img-3.2.0-24-generic
        Found memtest86+ image: /memtest86+.bin
        Found Ubuntu 12.04 LTS (12.04) on /dev/sda1
        Found Windows 7 (loader) on /dev/sda2
        Found Windows 7 (loader) on /dev/sda3
        done

    /var/lib/dkms/nvidia-current/295.40/build/make.log:

        DKMS make.log for nvidia-current-295.40 for kernel 3.4.0-030400-generic (x86_64)
        Thu Jun 7 00:58:39 BST 2012
        NVIDIA: calling KBUILD...
        test -e include/generated/autoconf.h -a -e include/config/auto.conf || ( \
        echo; \
        echo " ERROR: Kernel configuration is invalid."; \
        echo " include/generated/autoconf.h or include/config/auto.conf are missing.";\
        echo " Run 'make oldconfig && make prepare' on kernel src to fix it."; \
        echo; \
        /bin/false)
        mkdir -p /var/lib/dkms/nvidia-current/295.40/build/.tmp_versions ; rm -f /var/lib/dkms/nvidia-current/295.40/build/.tmp_versions/*
        make -f scripts/Makefile.build obj=/var/lib/dkms/nvidia-current/295.40/build
        cc -Wp,-MD,/var/lib/dkms/nvidia-current/295.40/build/.nv.o.d -nostdinc -isystem /usr/lib/gcc/x86_64-linux-gnu/4.6/include -I/usr/src/linux-headers-3.4.0-030400-generic/arch/x86/include -Iarch/x86/include/generated -Iinclude -include /usr/src/linux-headers-3.4.0-030400-generic/include/linux/kconfig.h -D__KERNEL__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -Werror-implicit-function-declaration -Wno-format-security -fno-delete-null-pointer-checks -O2 -m64 -mtune=generic -mno-red-zone -mcmodel=kernel -funit-at-a-time -maccumulate-outgoing-args -fstack-protector -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -DCONFIG_AS_CFI_SECTIONS=1 -DCONFIG_AS_FXSAVEQ=1 -pipe -Wno-sign-compare -fno-asynchronous-unwind-tables -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx -Wframe-larger-than=1024 -Wno-unused-but-set-variable -fno-omit-frame-pointer -fno-optimize-sibling-calls -pg -Wdeclaration-after-statement -Wno-pointer-sign -fno-strict-overflow -fconserve-stack -DCC_HAVE_ASM_GOTO -I/var/lib/dkms/nvidia-current/295.40/build -Wall -MD -Wsign-compare -Wno-cast-qual -Wno-error -D__KERNEL__ -DMODULE -DNVRM -DNV_VERSION_STRING=\"295.40\" -Wno-unused-function -Wuninitialized -mno-red-zone -mcmodel=kernel -UDEBUG -U_DEBUG -DNDEBUG -DMODULE -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(nv)" -D"KBUILD_MODNAME=KBUILD_STR(nvidia)" -c -o /var/lib/dkms/nvidia-current/295.40/build/.tmp_nv.o /var/lib/dkms/nvidia-current/295.40/build/nv.c
        In file included from include/linux/kernel.h:19:0,
                         from include/linux/sched.h:55,
                         from include/linux/utsname.h:35,
                         from /var/lib/dkms/nvidia-current/295.40/build/nv-linux.h:38,
                         from /var/lib/dkms/nvidia-current/295.40/build/nv.c:13:
        include/linux/bitops.h: In function ‘hweight_long’:
        include/linux/bitops.h:66:41: warning: signed and unsigned type in conditional expression [-Wsign-compare]
        In file included from /usr/src/linux-headers-3.4.0-030400-generic/arch/x86/include/asm/uaccess.h:577:0,
                         from include/linux/poll.h:14,
                         from /var/lib/dkms/nvidia-current/295.40/build/nv-linux.h:97,
                         from /var/lib/dkms/nvidia-current/295.40/build/nv.c:13:
        /usr/src/linux-headers-3.4.0-030400-generic/arch/x86/include/asm/uaccess_64.h: In function ‘copy_from_user’:
        /usr/src/linux-headers-3.4.0-030400-generic/arch/x86/include/asm/uaccess_64.h:53:6: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
        In file included from /var/lib/dkms/nvidia-current/295.40/build/nv.c:13:0:
        /var/lib/dkms/nvidia-current/295.40/build/nv-linux.h: At top level:
        /var/lib/dkms/nvidia-current/295.40/build/nv-linux.h:114:75: fatal error: asm/system.h: No such file or directory
        compilation terminated.
        make[3]: *** [/var/lib/dkms/nvidia-current/295.40/build/nv.o] Error 1
        make[2]: *** [_module_/var/lib/dkms/nvidia-current/295.40/build] Error 2
        NVIDIA: left KBUILD.
        nvidia.ko failed to build!
        make[1]: *** [module] Error 1
        make: *** [module] Error 2
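    For context (my reading of the log, not from the original post): the line "fatal error: asm/system.h: No such file or directory" is the giveaway. That header was removed in kernel 3.4, and driver 295.40 still includes it, so the module cannot build against a 3.4 kernel no matter what. A hedged sketch of the usual way out, assuming a newer nvidia-current build is packaged for your system (the exact version numbers below are an assumption):

        # Newer driver releases reportedly dropped the asm/system.h include:
        apt-cache policy nvidia-current
        sudo apt-get update && sudo apt-get install nvidia-current
        # Rebuild the DKMS modules for the mainline kernel:
        sudo dkms autoinstall -k 3.4.0-030400-generic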

    Read the article

  • Conflict between Change Control and ASL Mapping

    - by Jie Chen
    Yesterday I received a strange report that on Agile 9.3.1.2, adding a Supplier to an Item's Suppliers tab always removes all the data from the Item.PageTwo.MultiList01 field, which is assigned to a User Group list. The detailed problem description is as follows. In JavaClient, the MultiList01 attribute on the Parts class's PageTwo tab is enabled and assigned the User Group list. In WebClient, the user created a new Part, assigned MultiList01 two User Groups ("Global User Group Test1" and "Personal Group_Test1"), then went to the Suppliers tab and added three Suppliers. Switching back to the Part's TitleBlock, we see that MultiList01 has lost the User Group data. To confirm whether MultiList01 really loses the data or saves other, wrong data, I checked the database and found something strange: MultiList01 holds the value ",7976911,7976907,7976959,", which is exactly the IDs of those three Suppliers. So I suspected that the Supplier attribute on the Suppliers tab must be mapped to MultiList01. However, when I checked Supplier in JavaClient, its "ASL mapped to" was blank. More interestingly, the database clearly shows that the Supplier attribute (Base ID = 2000004219) is mapped to 2090, which is the Base ID of PageTwo.MultiList01. At this point we can conclude that Supplier data really is mapped to MultiList01, even though we assigned MultiList01 to a User Group list and Supplier does not set "ASL mapped to". Some other function must be overriding the "ASL mapped to" visibility in JavaClient with higher priority: the "Change Controlled" function. As soon as we disable Change Controlled for MultiList01, we see "ASL mapped to" with the value "MultiList01". If an attribute is Change Controlled, Supplier data should not be mappable to it in theory, because Suppliers can be modified dynamically by users, not through Changes. That this happens in Agile 9.3.1.2 could be a code defect. We can imagine the scenario the customer went through: he set up Parts.PageTwo.MultiList01 assigned to the Supplier list, then set "ASL mapped to" on Parts.Suppliers.Supplier to "MultiList01". Later the company's business changed, so he made MultiList01 Change Controlled and re-assigned it to the User Group list, forgetting to remove "ASL mapped to" before modifying MultiList01. The solution depends on the real business need. If Supplier still needs to be mapped to a Parts.PageTwo attribute, modify "ASL mapped to" to point to another attribute that is assigned the Suppliers list. If the "ASL mapped to" function is not needed, delete the mapping at the database level (it cannot be done from the JavaClient UI):

        delete propertytable where id in (select p.id from propertytable p, nodetable n where p.parentid = n.id and n.inherit = 2000004219 and propertyid = 794)

    Read the article

  • Optimizing Robots Text File

    We can block spiders from crawling restricted parts of our website. Restricted parts of our website means those links that we don't want indexed in search engines or attracting unwanted visitors. For example:
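    The teaser cuts off before the example, but a minimal illustration of the idea looks like this (the paths here are hypothetical, not taken from the article):

        User-agent: *
        Disallow: /admin/
        Disallow: /checkout/
        Disallow: /search/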

    Read the article

  • HP SmartArray P400: How to repair failed logical drive?

    - by TegtmeierDE
    I have an HP server with a SmartArray P400 controller (incl. 256 MB cache/battery backup) with a logical drive whose replaced failed physical drive does not rebuild. This is how it looked when I detected the error:

        ~# /usr/sbin/hpacucli ctrl slot=0 show config
        Smart Array P400 in Slot 0 (Embedded) (sn: XXXX)
           array A (SATA, Unused Space: 0 MB)
              logicaldrive 1 (698.6 GB, RAID 1, OK)
              physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SATA, 750 GB, OK)
              physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SATA, 750 GB, OK)
           array B (SATA, Unused Space: 0 MB)
              logicaldrive 2 (2.7 TB, RAID 5, Failed)
              physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SATA, 750 GB, OK)
              physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SATA, 750 GB, OK)
              physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SATA, 750 GB, OK)
              physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SATA, 750 GB, Failed)
              physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SATA, 750 GB, OK)
           unassigned
              physicaldrive 2I:1:8 (port 2I:box 1:bay 8, SATA, 750 GB, OK)
        ~#

    I thought that I had drive 2I:1:8 configured as a spare for array A and array B, but it seems this was not the case :-(. I noticed the problem due to I/O errors on the host, even though only 1 physical drive of the RAID 5 had failed. Does someone know why this could happen? The logical drive should go into "Degraded" mode but still be fully accessible from the host OS!? I first tried to add the unassigned drive 2I:1:8 as a spare to logicaldrive 2, but this was not possible:

        ~# /usr/sbin/hpacucli ctrl slot=0 array B add spares=2I:1:8
        Error: This operation is not supported with the current configuration. Use the "show" command on devices to show additional details about the configuration.
        ~#

    Interestingly, it is possible to add the unassigned drive to the first array without problems. I thought maybe the controller put the array into "Failed" state due to the missing spare and protects failed arrays from modification. So I tried to re-enable the logical drive (to add the spare afterwards):

        ~# /usr/sbin/hpacucli ctrl slot=0 ld 2 modify reenable
        Warning: Any previously existing data on the logical drive may not be valid or recoverable. Continue? (y/n) y
        Error: This operation is not supported with the current configuration. Use the "show" command on devices to show additional details about the configuration.
        ~#

    But as you can see, re-enabling the logical drive was not possible. Now I have replaced the failed drive by hot-swapping it with the unassigned drive. The status now looks like this:

        ~# /usr/sbin/hpacucli ctrl slot=0 show config
        Smart Array P400 in Slot 0 (Embedded) (sn: XXXX)
           array A (SATA, Unused Space: 0 MB)
              logicaldrive 1 (698.6 GB, RAID 1, OK)
              physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SATA, 750 GB, OK)
              physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SATA, 750 GB, OK)
           array B (SATA, Unused Space: 0 MB)
              logicaldrive 2 (2.7 TB, RAID 5, Failed)
              physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SATA, 750 GB, OK)
              physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SATA, 750 GB, OK)
              physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SATA, 750 GB, OK)
              physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SATA, 750 GB, OK)
              physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SATA, 750 GB, OK)
        ~#

    The logical drive is still not accessible. Why is it not rebuilding? What can I do? FYI, this is the configuration of my controller:

        ~# /usr/sbin/hpacucli ctrl slot=0 show
        Smart Array P400 in Slot 0 (Embedded)
           Bus Interface: PCI
           Slot: 0
           Serial Number: XXXX
           Cache Serial Number: XXXX
           RAID 6 (ADG) Status: Enabled
           Controller Status: OK
           Chassis Slot:
           Hardware Revision: Rev E
           Firmware Version: 5.22
           Rebuild Priority: Medium
           Expand Priority: Medium
           Surface Scan Delay: 15 secs
           Surface Analysis Inconsistency Notification: Disabled
           Raid1 Write Buffering: Disabled
           Post Prompt Timeout: 0 secs
           Cache Board Present: True
           Cache Status: OK
           Accelerator Ratio: 25% Read / 75% Write
           Drive Write Cache: Disabled
           Total Cache Size: 256 MB
           No-Battery Write Cache: Disabled
           Cache Backup Power Source: Batteries
           Battery/Capacitor Count: 1
           Battery/Capacitor Status: OK
           SATA NCQ Supported: True
        ~#

    Thanks for your help in advance.
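    A hedged sketch of things to try (assumptions on my part, not verified against this firmware): a RAID 5 logical drive marked Failed rather than Interim Recovery usually means the controller believes a second member dropped out at some point, which would also explain the host I/O errors. Some hpacucli builds accept a forced re-enable that skips the confirmation prompt, after which a rebuild can start:

        # "forced" is an assumption; not every hpacucli version supports it:
        hpacucli ctrl slot=0 ld 2 modify reenable forced
        # More detail on why the controller refuses the operation:
        hpacucli ctrl slot=0 show config detail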

    Read the article

  • How to stop Cron from sending messages about errors

    - by Beck
    I have these strange mails coming from cron:

        Return-Path: <[email protected]>
        Delivered-To: [email protected]
        Received: by domain.com (Postfix, from userid 0)
            id 6F944264D0; Mon, 10 Jan 2011 10:35:01 +0000 (UTC)
        From: [email protected] (Cron Daemon)
        To: [email protected]
        Subject: Cron <root@domain> lynx -dump http://www.domain.com/cron/realqueue
        Content-Type: text/plain; charset=ANSI_X3.4-1968
        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin>
        X-Cron-Env: <HOME=/root>
        X-Cron-Env: <LOGNAME=root>
        Message-Id: <[email protected]>
        Date: Mon, 10 Jan 2011 10:35:01 +0000 (UTC)

        /bin/sh: lynx: not found

    I have these cron settings in the crontab file:

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        */5 * * * * lynx -dump http://www.domain.com/cron/realqueue
        17 * * * *   root  cd / && run-parts --report /etc/cron.hourly
        25 6 * * *   root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6 * * 7   root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6 1 * *   root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )

    Lynx is installed on my Ubuntu as well. Of course, domain.com stands in for my real domain. Thanks ;)
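    A hedged note (my reading, not from the thread): cron mails whatever a job prints, and this job fails because /bin/sh can't find lynx when the line actually runs. Two common fixes, sketched under the assumption that lynx lives at /usr/bin/lynx: call it by absolute path and discard the output so there is nothing to mail, or disable mail for the whole crontab.

        # Absolute path, output discarded (so no mail is generated):
        */5 * * * * /usr/bin/lynx -dump http://www.domain.com/cron/realqueue >/dev/null 2>&1
        # Or, near the top of the crontab, silence mail for every job:
        MAILTO=""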

    Read the article

  • Loading dependencies for custom puppet functions

    - by Ben Smith
    I have written a custom Puppet function, which is working fine, that depends on the cloudservers gem (a Rackspace client library). This is fine if I have pre-installed the gem on a server before running Puppet, but it totally breaks if I have not installed the gem, as the function seems to be run during the 'compilation' sweep, well before my package definition is realised. Here's what my .pp looks like, with get_hosts being the function that requires the cloudservers gem:

        package { "rubygems":
            ensure   => installed,
            provider => "gem";
        }

        package { "cloudservers":
            ensure   => installed,
            provider => "gem",
            require  => Package["rubygems"];
        }

        class hosts::us {
            $hosts = get_hosts("us")
            hostentry { $hosts: }
        }

        define hostentry() {
            $parts   = split($name, ',')
            $address = $parts[0]
            $ip      = $parts[1]
            $aliases = $parts[2]
            host { $address:
                ip           => $ip,
                host_aliases => $aliases,
            }
        }

    Is there a way to stop the function being run so early, or at least to have its run depend on the library being installed? Alternatively, is there a way I can add dependencies somewhere in the functions folder that will be available to the function?
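    One approach, sketched below (my suggestion, not from the thread): parser functions run on the Puppet master at compile time, so a top-level require fails before any package resource is ever applied. Deferring the require into the function body means the gem is only needed when the function is actually called, and a rescue clause turns a missing gem into a readable compile error. The file layout and names are assumptions modelled on the post's get_hosts.

        # Hypothetical sketch of lib/puppet/parser/functions/get_hosts.rb
        module Puppet::Parser::Functions
          newfunction(:get_hosts, :type => :rvalue) do |args|
            begin
              require 'rubygems'
              require 'cloudservers'   # moved here from the top of the file
            rescue LoadError
              raise Puppet::ParseError,
                    "get_hosts: the cloudservers gem is not installed on the master"
            end
            # ... query the Rackspace API and return "address,ip,aliases" strings ...
          end
        end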

    Read the article

  • New to building computers, worried about temps

    - by dave
    I'm new to building my own computers and I was wondering about maximum temperatures. I understand that room temperature can affect a computer's temperature, but how relevant is it? I understand that if my room is at 20°C, none of my computer's parts could be cooler than that. But if my room is 27°C instead of 20°C, would that make my computer's parts heat up more or faster? The new computer I built myself for gaming has: an i7 2600K, 16 GB DDR3-1600 RAM, an HD6970 2 GB, a 240 GB SSD (I bought a NAS with 3 2 TB drives in RAID 5 for my home network), and an 850 W modular PSU. I also have my old HP computer: an i3 2120, 8 GB RAM, an HD6770, and a 1 TB HDD. There are also 3 laptops in my household, but I am not worried about their temps; they heat up my legs, but they are never under stress. For size and money reasons I used an old case, and it only has one of its sides left on it. Is this bad for the computer, and will the extra dust cause problems? Should I leave it this way, or face the missus' wrath and buy a case? If so, is there any particular case I should get? I don't care about looks; I just want a card reader and USB slots, and for it to run as cool as or cooler than now (my case has 1 fan). Also, what are the max temps for my new and old computer parts? Is 40°C under load OK for my CPU? What about 70°C for my GPU, or should I worry? What are normal and safe temps for my components? I have looked around, but there seem to be lots of different answers. I know that 100°C is bad, but I want my parts to last as long as possible, and this site always seems to give good replies without arguing or flaming.

    Read the article

  • Weekly cron executing 2 times

    - by yes123
    Hi guys, I have placed a .sh file that runs a PHP script weekly. This script should run only once, but every Sunday it runs at both 1:30 am and 6:50 am. Any way to fix this?

    Add 1: /etc/cron.weekly/cronweek:

        #!/bin/bash
        /usr/bin/php -f /home/my/path/to/script/cronweek.php

    Add 2: crontab file:

        # /etc/crontab: system-wide crontab
        # Unlike any other crontab you don't have to run the `crontab'
        # command to install the new version when you edit this file
        # and files in /etc/cron.d. These files also have username fields,
        # that none of the other crontabs do.
        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        # m h dom mon dow user  command
        17 * * * *   root  cd / && run-parts --report /etc/cron.hourly
        25 6 * * *   root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6 * * 7   root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6 1 * *   root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
        #
        */1 * * * *  root  /usr/local/rtm/bin/rtm 35 > /dev/null 2> /dev/null
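    A hedged diagnostic (my suggestion, not from the thread): the 47 6 * * 7 line above accounts for one Sunday run, so a second run usually means the same job is registered somewhere else as well, most often via anacron (which executes /etc/cron.weekly on its own schedule) or a stray copy in another crontab. Two quick checks:

        # Does anacron also drive the weekly jobs on this machine?
        cat /etc/anacrontab
        # Is the script registered anywhere else?
        grep -r cronweek /etc/cron* /var/spool/cron 2>/dev/null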

    Read the article

  • Benefits of PerformancePoint Services Using SharePoint Server 2010

    - by Wayne
    What is PerformancePoint Services?

    Most of the time, the metrics that make up your key performance indicators are not simple values from a data source. In SharePoint Server 2007 PerformancePoint Services, you could create two kinds of KPI metrics: simple single-value metrics from any supported data source, or complex multiple-value metrics from a single Analysis Services data source using MDX. Now things are even easier with PerformancePoint Services in SharePoint 2010. Let us check what it is.

    PerformancePoint Services in SharePoint Server 2010 is a performance management service that you can use to monitor and analyze your business. By providing flexible, easy-to-use tools for building dashboards, scorecards, reports, and key performance indicators (KPIs), PerformancePoint Services can help everyone across an organization make informed business decisions that align with companywide objectives and strategy. Scorecards, dashboards, and KPIs help drive accountability. Integrated analytics help employees move quickly from monitoring information to analyzing it and, when appropriate, sharing it throughout the organization. Prior to the addition of PerformancePoint Services to SharePoint Server, Microsoft Office PerformancePoint Server 2007 functioned as a standalone server. Now PerformancePoint functionality is available as an integrated part of the SharePoint Server Enterprise license, as is the case with Excel Services in Microsoft SharePoint Server 2010. The popular features of earlier versions of PerformancePoint Services are preserved, along with numerous enhancements and additional functionality.

    New PerformancePoint Services features

    PerformancePoint Services can now utilize SharePoint Server scalability, collaboration, backup and recovery, and disaster recovery capabilities. Dashboards and dashboard items are stored and secured within SharePoint lists and libraries, providing you with a single security and repository framework.

    New features and enhancements of SharePoint 2010 PerformancePoint Services:

    • With PerformancePoint Services functioning as a service in SharePoint Server, dashboards and dashboard items are stored and secured within SharePoint lists and libraries, providing you with a single security and repository framework. The new architecture also takes advantage of SharePoint Server scalability, collaboration, backup and recovery, and disaster recovery capabilities. You also can include and link PerformancePoint Services Web Parts with other SharePoint Server Web Parts on the same page. The new architecture also streamlines security models that simplify access to report data.

    • The Decomposition Tree is a new visualization report type available in PerformancePoint Services. You can use it to quickly and visually break down higher-level data values from a multi-dimensional data set to understand the driving forces behind those values. The Decomposition Tree is available in scorecards and analytic reports and ultimately in dashboards.

    • You can access more detailed business information with improved scorecards. Scorecards have been enhanced to make it easy for you to drill down and quickly access more detailed information. PerformancePoint scorecards also offer more flexible layout options, dynamic hierarchies, and calculated KPI features. Using this enhanced functionality, you can now create custom metrics that use multiple data sources. You can also sort, filter, and view variances between actual and target values to help you identify concerns or risks.

    • Better Time Intelligence filtering capabilities that you can use to create and use dynamic time filters that are always up to date. Other improved filters improve the ability for dashboard users to quickly focus in on information that is most relevant.

    • Ability to include and link PerformancePoint Services Web Parts together with other PerformancePoint Services Web Parts on the same page.

    • Easier authoring and publishing of dashboard items by using Dashboard Designer.

    • SQL Server Analysis Services 2008 support.

    • Increased support for accessibility compliance in individual reports and scorecards.

    • The KPI Details report is a new report type that displays contextually relevant information about KPIs, metrics, rows, columns, and cells within a scorecard. The KPI Details report works as a Web Part that links to a scorecard or individual KPI to show relevant metadata to the end user in SharePoint Server. This Web Part can be added to PerformancePoint dashboards or any SharePoint Server page.

    • Create analytic reports to better understand the underlying business forces behind the results. Analytic reports have been enhanced to support value filtering, new chart types, and server-based conditional formatting.

    To conclude, PerformancePoint Services, by becoming tightly integrated with SharePoint Server 2010, takes advantage of many enterprise-level SharePoint Server 2010 features. Unfortunately, SharePoint Foundation 2010 doesn't include this feature. There are still many choices in the SharePoint family of products, including SharePoint Server 2010, SharePoint Foundation, SharePoint Server 2007, and associated free SharePoint web parts and templates.

    Read the article

  • SharePoint MOSS - Serve HTTP content on an HTTPS page without Mixed Content Warning?

    - by kcb263
    Our "portal-like" SharePoint site is served using HTTPS/SSL. So a user goes to https://web.company.com and sees content and different Web Parts. So far, no problem. The desire now is to have new Web Parts added that either frame HTTP content (such as Weather Bug) or HTTP RSS feeds. The issue that arises is that by doing this, results in a "Mixed Content" warning in the browser. Has anybody successfully been able to implement such a scenario, or one similar to it? The options we have looked at, unsuccessfully, have been: using Apache Reverse Proxy Server mirror an external site Custom Web Parts

    Read the article

  • Launching mysql server: same permissions for root and for user

    - by toinbis
    Hi folks, I have been directed here from Stack Overflow, so I am reposting the question and adding my my.cnf at the end of the post. So far, in my 10+ years of experience with Linux, all the permission problems I've ever encountered have been successfully solved with chmod -R 777 /path/where/the/problem/has/occured (every lie has a grain of truth in it :) This time the trick doesn't work, so I'm turning to you for help. I'm compiling the MySQL server from scratch with zc.buildout (www . buildout . org). I launch it by executing /home/toinbis/.../parts/mysql/bin/mysqld_safe, and this works. The thing is that I'll be launching this from within a supervisor (supervisord . org) script, and when used on the deployment server it will need to be launched with root permissions (so that the nginx server, launched with the same script, has access to port 80). The problem is that sudo /home/toinbis/.../parts/mysql/bin/mysqld_safe fails, generating the error posted below in the MySQL error log (apache and nginx work as expected). http://lists.mysql.com/mysql/216045 suggests that "there are two errors: A missing table and a file system that mysqld doesn't have access to". The MySQL datadir and all the MySQL server binary files have 777 permissions, and the table mysql.plugin does exist and has 777 permissions (so why can't it open the mysql.plugin table?); "sudo touch mysql_datadir/tmp/file" does create a file (so why can't it create/write to file /home/toinbis/.../runtime/mysql_datadir/tmp/ib4e9Huz?). chgrp -R mysql mysql_datadir and adding the "root, toinbis, mysql" users to the mysql group (cat /etc/group | grep mysql outputs mysql:x:124:root,toinbis,mysql) has no effect: when I launch it as a casual user it starts; as root, it fails. Does the MySQL server, even when started as root, try to operate as some other user, let's say 'mysql'? But even in that case, adding the mysql user to the mysql group and making all the mysql_datadir files belong to the mysql group should make things work smoothly. I do know that it might be a better idea simply to launch nginx as root and mysql as a plain user, but this error irritated me enough to devote the energy not only to "make things work", but to make things work exactly as I intended initially, so as to have a proof of concept that it's possible. And this is the generated error:

        091213 20:02:55 mysqld_safe Starting mysqld daemon with databases from /home/toinbis/.../runtime/mysql_datadir
        /home/toinbis/.../parts/mysql/libexec/mysqld: Table 'plugin' is read only
        091213 20:02:55 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
        /home/toinbis/.../parts/mysql/libexec/mysqld: Can't create/write to file '/home/toinbis/.../runtime/mysql_datadir/tmp/ib4e9Huz' (Errcode: 13)
        091213 20:02:55 InnoDB: Error: unable to create temporary file; errno: 13
        091213 20:02:55 [ERROR] Plugin 'InnoDB' init function returned error.
        091213 20:02:55 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        091213 20:02:55 [ERROR] Can't start server : Bind on unix socket: Permission denied
        091213 20:02:55 [ERROR] Do you already have another mysqld server running on socket: /home/toinbis/.../runtime/var/pids/mysql.sock ?
        091213 20:02:55 [ERROR] Aborting
        091213 20:02:55 [Note] /home/toinbis/.../parts/mysql/libexec/mysqld: Shutdown complete
        091213 20:02:55 mysqld_safe mysqld from pid file /home/toinbis/.../runtime/var/pids/mysql.pid ended

    My my.cnf (the basedir and datadir, including tmpdir, have chmod -R 777 permissions):

        [client]
        socket = /home/toinbis/.../runtime/var/pids/mysql.sock
        port = 8002

        [mysqld_safe]
        socket = /home/toinbis/.../runtime/var/pids/mysql.sock
        nice = 0

        [mysqld]
        #
        # * Basic Settings
        #
        socket = /home/toinbis/.../runtime/var/pids/mysql.sock
        port = 8002
        pid-file = /home/toinbis/.../runtime/var/pids/mysql.pid
        basedir = /home/toinbis/.../parts/mysql
        datadir = /home/toinbis/.../runtime/mysql_datadir
        tmpdir = /home/toinbis/.../runtime/mysql_datadir/tmp
        skip-external-locking
        bind-address = 127.0.0.1
        log-error = /home/toinbis/.../runtime/logs/mysql_errorlog
        #
        # * Fine Tuning
        #
        key_buffer = 16M
        max_allowed_packet = 32M
        thread_stack = 128K
        thread_cache_size = 8
        myisam-recover = BACKUP
        #max_connections = 100
        #table_cache = 64
        #thread_concurrency = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit = 1M
        query_cache_size = 16M
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        #log = /home/toinbis/.../runtime/logs/mysql_logs/mysql.log
        #
        # Error logging goes to syslog. This is a Debian improvement :)
        #
        # Here you can see queries with especially long duration
        #log_slow_queries = /home/toinbis/.../runtime/logs/mysql_logs/mysql-slow.log
        #long_query_time = 2
        #log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        #server-id = 1
        #log_bin = /home/toinbis/.../runtime/mysql_datadir/mysql-bin.log
        #binlog_format = ROW
        #read_only = 0
        #expire_logs_days = 10
        #max_binlog_size = 100M
        #sync_binlog = 1
        #binlog_do_db = include_database_name
        #binlog_ignore_db = include_database_name
        #
        # * InnoDB
        #
        innodb_data_file_path = ibdata1:10M:autoextend
        innodb_buffer_pool_size = 64M
        innodb_log_file_size = 16M
        innodb_log_buffer_size = 8M
        innodb_flush_log_at_trx_commit = 1
        innodb_file_per_table
        innodb_locks_unsafe_for_binlog = 1

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 32M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completion

        [isamchk]
        key_buffer = 16M

    Any ideas much appreciated! Regards, to

    P.S. Sorry for the messy hyperlinks; it's my first post and the anti-spam feature of SF doesn't allow me to post them properly :)
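    Two hedged guesses worth checking (assumptions on my part, not from the original post): first, when started via sudo, mysqld_safe runs mysqld as root unless told otherwise, and mysqld deliberately behaves differently as root, so asking it to drop privileges may change the outcome; second, on Ubuntu an AppArmor profile can confine mysqld to its stock directories, which produces exactly this kind of errno 13 regardless of 777 file modes:

        # Ask mysqld_safe to drop privileges explicitly (user is an assumption):
        sudo /home/toinbis/.../parts/mysql/bin/mysqld_safe --user=mysql
        # Check whether an AppArmor profile is confining mysqld:
        sudo aa-status | grep -i mysql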

    Read the article

  • Install-SPSolution : This solution contains no resources scoped for a Web application and cannot be deployed to a particular Web application

    - by Josh
    I have a PowerShell script that deploys about 12 web parts. They have all been created through Visual Studio 2010 and are being deployed to SharePoint 2010. I am getting the following error when running Install-SPSolution for one of my web parts:

        Install-SPSolution : This solution contains no resources scoped for a Web application and cannot be deployed to a particular Web application.

    Can someone help me debug this? Every other Install-SPSolution command uses -AllWebApplications, and I do not want to specify the web application directly using -URL. Here is the command that is breaking (it is the same command used to successfully deploy all 11 other web parts):

        Install-SPSolution –Identity PortalSelector.wsp -AllWebApplications -GACDeployment
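    A hedged note (my reading of the error, not from the thread): SharePoint raises this when the WSP contains no web-application-scoped artifacts (no safe-control entries or web.config changes), in which case it insists on a global deployment rather than a per-web-application one. Assuming that is true of PortalSelector.wsp, dropping the web-application flag would be the fix:

        # Sketch: deploy the solution globally instead.
        Install-SPSolution -Identity PortalSelector.wsp -GACDeployment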

    Read the article

  • Android “open for embedded”? Must-read Ars Technica article

    - by terrencebarr
    A few days ago Ars Technica published an article, "Google's iron grip on Android: Controlling open source by any means necessary". If you are considering Android for embedded use, this article is a must-read to understand the severe ramifications of Google's tight (and tightening) control over the Android technology and ecosystem. Some quotes from the Ars Technica article:

    "Android is open – except for all the good parts"
    "Android actually falls into two categories: the open parts from the Android Open Source Project (AOSP) … and the closed source parts, which are all the Google-branded apps"
    "Android open source apps … turn into abandonware by moving all continuing development to a closed source model."
    "Joining the OHA requires a company to sign its life away and promise to not build a device that runs a competing Android fork."
    "Google Play Services is a closed source app owned by Google … to turn the "Android App Ecosystem" into the "Google Play Ecosystem""
    "You're allowed to contribute to Android and allowed to use it for little hobbies, but in nearly every area, the deck is stacked against anyone trying to use Android without Google's blessing"

    Compare this with a recent Wired article, "Oracle Makes Java More Relevant Than Ever": "Oracle has actually opened up Java even more — getting rid of some of the closed-door machinations that used to be part of the Java standards-making process. Java has been raked over the coals for security problems over the past few years, but Oracle has kept regular updates coming. And it's working on a major upgrade to Java, due early next year."

    Cheers,
    – Terrence

    Filed under: Embedded, Mobile & Embedded Tagged: Android, embedded, Java Embedded, Open Source

    Read the article

  • Emacs: methods for debugging python

    - by Tom Willis
    I use emacs for all my code-editing needs. Typically I will use M-x compile to run my test runner, which I would say gets me about 70% of what I need to keep the code on track; lately, however, I've been wondering how it might be possible to use M-x pdb on occasions where it would be nice to hit a breakpoint and inspect things. In my googling I've found some things that suggest this is useful/possible. However, I have not managed to get it working in a way that I fully understand. I don't know if it's the combination of buildout + App Engine that might be making it more difficult, but when I try to do something like:

        M-x pdb
        Run pdb (like this): /Users/twillis/projects/hydrant/bin/python /Users/twillis/bin/pdb /Users/twillis/projects/hydrant/bin/devappserver /Users/twillis/projects/hydrant/parts/hydrant-app/

    where .../bin/python is the interpreter buildout makes with the path set for all the eggs, and ~/bin/pdb is a simple script to call into pdb.main using the current python interpreter:

        HellooKitty:hydrant twillis$ cat ~/bin/pdb
        #! /usr/bin/env python
        if __name__ == "__main__":
            import sys
            sys.version_info
            import pdb
            pdb.main()
        HellooKitty:hydrant twillis$

    .../bin/devappserver is the dev_appserver script that the buildout recipe makes for the GAE project, and .../parts/hydrant-app is the path to the app.yaml. I am first presented with a prompt:

        Current directory is /Users/twillis/bin/

    After C-c C-f, nothing happens, but:

        HellooKitty:hydrant twillis$ ps aux | grep pdb
        twillis 469 100.0 1.6 168488 67188 s002 Rs+ 1:03PM 0:52.19 /usr/local/bin/python2.5 /Users/twillis/projects/hydrant/bin/python /Users/twillis/bin/pdb /Users/twillis/projects/hydrant/bin/devappserver /Users/twillis/projects/hydrant/parts/hydrant-app/
        twillis 477 0.0 0.0 2435120 420 s000 R+ 1:05PM 0:00.00 grep pdb
        HellooKitty:hydrant twillis$

    so something is happening. C-x [space] will report that a breakpoint has been set, but I can't manage to get things going. It feels like I am missing something obvious here. Am I? So, is interactive debugging in emacs worthwhile? Is interactively debugging a Google App Engine app possible? Any suggestions on how I might get this working?
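    One simplification worth trying (an assumption on my part, not from the thread): pdb ships as a runnable module, so the wrapper script can be dropped entirely and the buildout interpreter invoked with -m pdb, which keeps the same interpreter and egg paths while giving gud a more conventional command line:

        M-x pdb
        Run pdb (like this): /Users/twillis/projects/hydrant/bin/python -m pdb /Users/twillis/projects/hydrant/bin/devappserver /Users/twillis/projects/hydrant/parts/hydrant-app/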

    Read the article

  • Latest News on Service, Field Service and Depot Repair Products

    - by LuciaC
    Service and Depot Repair Customer Advisory Boards (CAB)

    In November 2012 the Service and Depot Repair CABs held a combined meeting at Oracle HQ in Redwood Shores, California, to discuss all the latest news in the Oracle Service, Field Service and Depot Repair products. Over four days, attendees shared their experiences with implementing and using these EBS CRM products and heard details of recent enhancements and future product plans direct from Development. You can access all the Oracle presentations via Doc ID 1511768.1. Here are just some of the highlights:

        Field Service: Next Generation Dispatch Center; Endeca Integration; Case Study: Oracle Sun Field Service implementation
        Mobile Field Service: New capabilities for technician-facing applications
        Service: Integration with Oracle Projects; New Teleservice enhancements
        Spares Management: Supplier Warranty; External Repair Execution
        Oracle Knowledge (Inquira): Introduction for Service Organizations

    If you weren't at the CAB, take a look at these presentations for great information about what's new and what's coming up in these products.

    12.1.3++ Features for Field Service, Mobile Field Service, Spares Management, FSTP & Advanced Scheduler

    In June 2012 the R12.1.3++ patches were released for Field Service, Mobile Field Service, FSTP and Advanced Scheduler. These patches contain new and updated functionality for these CRM Service suite modules. New functionality includes:

        Field Service/FSTP/MFS:
        - Support for Transfer Parts across subinventories in different organizations
        - Validation to ensure Installed Item matches Returned Item
        - MFS Wireless - Support for Special Address Creation
        - MFS Wireless - Enhanced Debrief Flow

        Advanced Scheduler:
        - Scheduler UI - Display of Spares Sourcing Information
        - Auto Commit (Release) Tasks by Territory
        - Dispatch Center UI - Display Spare Parts Arrival Information

        Spares Management:
        - Enhancements to the Task Reassignment Process
        - Enhancements to the Parts Requirements UI
        - Supply Chain Enhancements to allow filtering of ship methods from source location by distance

    Check the following notes for more details and relevant patch numbers:

        Doc ID 1463333.1 - Oracle Field Service Release Notes, Release 12.1.3++
        Doc ID 1452470.1 - Field Service Technician Portal 12.1.3++ New Features
        Doc ID 1463066.1 - Oracle Advanced Scheduler Release Notes, Release 12.1.3++
        Doc ID 1463335.1 - Oracle Spares Management Release Notes, Release 12.1.3++
        Doc ID 1463243.1 - Oracle Mobile Field Service Release Notes, Release 12.1.3++

    Read the article

  • rkhunter: what is the right way to handle warnings?

    - by zuba
    I googled a bit and checked out the first two links it found:

        http://www.skullbox.net/rkhunter.php
        http://www.techerator.com/2011/07/how-to-detect-rootkits-in-linux-with-rkhunter/

    They don't mention what I should do in case of warnings such as these:

        Warning: The command '/bin/which' has been replaced by a script: /bin/which: POSIX shell script text executable
        Warning: The command '/usr/sbin/adduser' has been replaced by a script: /usr/sbin/adduser: a /usr/bin/perl script text executable
        Warning: The command '/usr/bin/ldd' has been replaced by a script: /usr/bin/ldd: Bourne-Again shell script text executable
        Warning: The file properties have changed:
            File: /usr/bin/lynx
            Current hash: 95e81c36428c9d955e8915a7b551b1ffed2c3f28
            Stored hash : a46af7e4154a96d926a0f32790181eabf02c60a4

    Q1: Are there more extensive HowTos that explain how to deal with the different kinds of warnings?

    And the second question: were my actions sufficient to resolve these warnings?

    a) Find the package which contains the suspicious file, e.g. debianutils for the file /bin/which:

        ~ > dpkg -S /bin/which
        debianutils: /bin/which

    b) Check the debianutils package checksums:

        ~ > debsums debianutils
        /bin/run-parts OK
        /bin/tempfile OK
        /bin/which OK
        /sbin/installkernel OK
        /usr/bin/savelog OK
        /usr/sbin/add-shell OK
        /usr/sbin/remove-shell OK
        /usr/share/man/man1/which.1.gz OK
        /usr/share/man/man1/tempfile.1.gz OK
        /usr/share/man/man8/savelog.8.gz OK
        /usr/share/man/man8/add-shell.8.gz OK
        /usr/share/man/man8/remove-shell.8.gz OK
        /usr/share/man/man8/run-parts.8.gz OK
        /usr/share/man/man8/installkernel.8.gz OK
        /usr/share/man/fr/man1/which.1.gz OK
        /usr/share/man/fr/man1/tempfile.1.gz OK
        /usr/share/man/fr/man8/remove-shell.8.gz OK
        /usr/share/man/fr/man8/run-parts.8.gz OK
        /usr/share/man/fr/man8/savelog.8.gz OK
        /usr/share/man/fr/man8/add-shell.8.gz OK
        /usr/share/man/fr/man8/installkernel.8.gz OK
        /usr/share/doc/debianutils/copyright OK
        /usr/share/doc/debianutils/changelog.gz OK
        /usr/share/doc/debianutils/README.shells.gz OK
        /usr/share/debianutils/shells OK

    c) Relax about /bin/which, as I see "/bin/which OK".

    d) Whitelist the file /bin/which in /etc/rkhunter.conf as SCRIPTWHITELIST="/bin/which".

    e) For warnings like the one for the file /usr/bin/lynx, update the checksum with rkhunter --propupd /usr/bin/lynx.cur

    Q2: Am I resolving such warnings the right way?

    Read the article

  • Effective way to check if an Entity/Player enters a region/trigger

    - by Chris
    I was wondering how multiplayer games detect whether you have entered a special region. Let's assume there is a map so big that simply checking every region would become a huge performance issue. I've seen Bukkit (a modding API for Minecraft servers) fire an event on every single move. I don't think larger games do the same, because even if you have only a few coordinates you are interested in, you have to loop through several trigger zones to see if the player is inside one - for every player. This seems like an extremely CPU-intensive operation to me, even though I've never developed something like that. Is there a special algorithm that larger games use to accomplish this? The only thing I can imagine is to split the world into multiple parts and to register the event not on the movement itself but on all the parts covered by your area, then only check the areas registered in the player's current part (see the sketch below). And another thing I would like to know: how could you detect that someone must have entered a trigger even though you never saw him directly inside it, because his client only sent you a move packet shortly before entering and shortly after leaving the trigger area? Drawing a line and calculating all colliding parts seems rather CPU-intensive if you have to perform it every time.
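    For what it's worth, here is a minimal sketch of that idea in Java (Bukkit's language): a uniform grid that registers each axis-aligned region in every cell its bounds touch, so each move packet only tests the handful of regions sharing the player's cell. All names are hypothetical, and for simplicity it assumes non-negative integer coordinates; walking the same cells along the line between two consecutive packets would cover the fast-mover case.

        import java.util.*;

        // Hypothetical sketch of a uniform trigger grid; not from any engine.
        class Region {
            final int minX, minY, maxX, maxY;
            Region(int minX, int minY, int maxX, int maxY) {
                this.minX = minX; this.minY = minY; this.maxX = maxX; this.maxY = maxY;
            }
            boolean contains(int x, int y) {
                return x >= minX && x <= maxX && y >= minY && y <= maxY;
            }
        }

        class TriggerGrid {
            private final int cellSize;
            private final Map<Long, List<Region>> cells = new HashMap<>();

            TriggerGrid(int cellSize) { this.cellSize = cellSize; }

            // Pack a cell coordinate pair into a single map key.
            private long key(int cx, int cy) {
                return ((long) cx << 32) ^ (cy & 0xffffffffL);
            }

            // Register the region in every cell its bounding box touches
            // (assumes non-negative coordinates to keep the division simple).
            void register(Region r) {
                for (int cx = r.minX / cellSize; cx <= r.maxX / cellSize; cx++)
                    for (int cy = r.minY / cellSize; cy <= r.maxY / cellSize; cy++)
                        cells.computeIfAbsent(key(cx, cy), k -> new ArrayList<>()).add(r);
            }

            // On each move packet, only regions in the player's cell are tested,
            // instead of every trigger on the map.
            List<Region> triggersAt(int x, int y) {
                List<Region> hits = new ArrayList<>();
                for (Region r : cells.getOrDefault(key(x / cellSize, y / cellSize),
                                                   Collections.emptyList()))
                    if (r.contains(x, y)) hits.add(r);
                return hits;
            }
        }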

    Read the article

  • Can't update kernel to 2.6.35.27

    - by Uri Herrera
    When I try to update I get the message below; I'm guessing I'm missing something here?

        Filesystem     Type      Size  Used Avail Use% Mounted on
        /dev/sdb6      ext4       43G  7.7G   33G  20% /
        none           devtmpfs  1.6G  349k  1.6G   1% /dev
        none           tmpfs     1.6G  5.9M  1.6G   1% /dev/shm
        none           tmpfs     1.6G  218k  1.6G   1% /var/run
        none           tmpfs     1.6G     0  1.6G   0% /var/lock
        /dev/sdb2      fuseblk   258G  198G   60G  77% /media/Backup
        /dev/sda1      fuseblk   321G  175G  146G  55% /media/Media
        /dev/sdb1      ext4       96M   84M  6.7M  93% /boot
        /dev/sdb7      ext4      175G   81G   86G  49% /home

    Here's the output:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be REMOVED:
          linux-image-2.6.35-22-generic
        0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
        5 not fully installed or removed.
        After this operation, 107MB disk space will be freed.
        Do you want to continue [Y/n]? y
        (Reading database ... 282211 files and directories currently installed.)
        Removing linux-image-2.6.35-22-generic ...
        Examining /etc/kernel/postrm.d .
        run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.35-22-generic /boot/vmlinuz-2.6.35-22-generic
        run-parts: executing /etc/kernel/postrm.d/zz-update-grub 2.6.35-22-generic /boot/vmlinuz-2.6.35-22-generic
        /etc/default/grub: 23: Syntax error: newline unexpected
        run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 2
        Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-2.6.35-22-generic.postrm line 328.
        dpkg: error processing linux-image-2.6.35-22-generic (--remove):
         subprocess installed post-removal script returned error exit status 1
        Errors were encountered while processing:
         linux-image-2.6.35-22-generic
        E: Sub-process /usr/bin/dpkg returned an error code (1)
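    A hedged reading (my diagnosis, not from the thread): the package removal itself is fine; it is the zz-update-grub hook that dies because /etc/default/grub has a shell syntax error at line 23 ("newline unexpected", typically an unclosed quote). Fixing that line and re-running the interrupted step should unblock dpkg:

        sudo nano /etc/default/grub   # inspect line 23 for an unclosed quote
        sudo update-grub              # confirm the file now parses cleanly
        sudo apt-get -f install       # finish the interrupted removal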

    Read the article

  • Internet of Things (IoT) Thanksgiving Special: Turkey Tweeter (Part 1)

    - by hinkmond
    It's time for the Internet of Things (IoT) Thanksgiving Special. This time we are going to work on a special do-it-yourself project: an Internet of Things temperature probe that connects your Turkey Day turkey to the Internet, by writing a Thanksgiving Day Java Embedded app for your Raspberry Pi which will send out tweets as the bird cooks in your oven. If you're vegetarian, don't worry: you can follow along and just run a simulation of the Turkey Tweeter, or better yet, try a tofu version of the Turkey Tweeter. Here is the parts list:

        1 Vernier Go!Temp USB Temperature Probe
        1 Uncooked Turkey
        1 Raspberry Pi (not Pumpkin Pie)
        1 Roll thermal reflective tape

    You can buy the Vernier Go!Temp USB Temperature Probe for $39 from here: http://www.vernier.com/products/sensors/temperature-sensors/go-temp/. And you can get the thermal reflective tape from any auto parts store. (Don't tell them what you need it for. Say it's for rebuilding the V-8 engine in your Dodge Hemi. Avoids the need for a long explanation and sounds cooler...) The uncooked turkey can be found in your neighborhood grocery store. But if you're making a vegetarian Tofurkey, you're on your own... The Java Embedded app will be the same, though (Java is vegan). So grab all your parts and come back here for the next part of this project...

    Hinkmond

    Read the article

  • Error while removing the new kernel 2.6.37

    - by Tarek
    Hi! I tried to install the new kernel, but something went wrong and I'm trying to remove it now. The error message is:

        mhd@Tarek-Laptop:~$ sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be REMOVED:
          linux-image-2.6.37-020637-generic
        0 upgraded, 0 newly installed, 1 to remove and 9 not upgraded.
        1 not fully installed or removed.
        After this operation, 111MB disk space will be freed.
        Do you want to continue [Y/n]? y
        (Reading database ... 188780 files and directories currently installed.)
        Removing linux-image-2.6.37-020637-generic ...
        Examining /etc/kernel/postrm.d .
        run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.37-020637-generic /boot/vmlinuz-2.6.37-020637-generic
        run-parts: executing /etc/kernel/postrm.d/zz-update-grub 2.6.37-020637-generic /boot/vmlinuz-2.6.37-020637-generic
        /etc/default/grub: 33: Syntax error: EOF in backquote substitution
        run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 2
        Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-2.6.37-020637-generic.postrm line 328.
        dpkg: error processing linux-image-2.6.37-020637-generic (--remove):
         subprocess installed post-removal script returned error exit status 1
        Errors were encountered while processing:
         linux-image-2.6.37-020637-generic
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    The same unresolved error is described in this bug.

    Read the article
