Search Results

Search found 7991 results on 320 pages for 'live wallpapers'.

  • Mounting ddrescue image after recovery (in over my head)

    - by BorgDomination
    I'm having problems mounting the recovery image. I've tried to mount the image multiple ways. quark@DS9 ~ $ sudo mount -t ext4 /media/jump1/1recover/sdb1.img /mnt mount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so quark@DS9 ~ $ sudo mount -r -o loop /media/jump1/1recover/sdb1.img recover mount: you must specify the filesystem type quark@DS9 ~ $ sudo mount /media/jump1/1recover/sdb1.img mnt mount: you must specify the filesystem type It doesn't even give me detailed information on the file I just made, nautilus says it's 160gb. quark@DS9 ~ $ file /media/jump1/1recover/sdb1.img /media/jump1/1recover/sdb1.img: data quark@DS9 ~ $ mmls /media/jump1/1recover/sdb1.img Cannot determine partition type I'm not sure what I'm doing wrong or if I started this process incorrectly from the beginning. I've outlined what I've done so far below. I'm clueless, I'd appreciate if someone had some input for me. What I have done from the beginning My laptop has two hard drives. One has the dual boot Win7 / Linux Mint system files. Secondary one contained my /home folder. The laptop was jarred and the /home disk was broken. I tried a LiveCD recovery, it failed. Wouldn't even load a Live session with the disk installed. So I turned to ddrescue. quark@DS9 ~ $ sudo fdisk -l Disk /dev/sda: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0009fc18 Device Boot Start End Blocks Id System /dev/sda1 * 2048 112642047 56320000 7 HPFS/NTFS/exFAT /dev/sda2 138033152 312580095 87273472 83 Linux /dev/sda3 112644094 138033151 12694529 5 Extended /dev/sda5 112644096 132173823 9764864 83 Linux /dev/sda6 132175872 138033151 2928640 82 Linux swap / Solaris Partition table entries are not in disk order Disk /dev/sdb: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0002a8ea Device Boot Start End Blocks Id System /dev/sdb1 * 63 312576704 156288321 83 Linux Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xed6d054b Device Boot Start End Blocks Id System /dev/sdc1 63 1953520064 976760001 7 HPFS/NTFS/exFAT sda - 160g internal, holds all system files and all computer functions. sdb - 160g internal, BROKEN, contains about 140g of data I'd like to recover. sdc - 1T external, contains recovery image. Only place that has space to do all this. From this site, https://apps.education.ucsb.edu/wiki/Ddrescue I used this script to create an image of the broken hard drive. I changed the destination to the external USB drive. 
#!/bin/sh prt=sdb1 src=/dev/$prt dst=/media/jump1/1recover/$prt.img log=$dst.log sudo time ddrescue --no-split $src $dst $log sudo time ddrescue --direct --max-retries=3 $src $dst $log sudo time ddrescue --direct --retrim --max-retries=3 $src $dst $log Everything looked like it came off without a hitch: quark@DS9 ~ $ sudo bash recover1 Press Ctrl-C to interrupt Initial status (read from logfile) rescued: 0 B, errsize: 0 B, errors: 0 Current status rescued: 160039 MB, errsize: 4096 B, current rate: 35588 B/s ipos: 3584 B, errors: 1, average rate: 22859 kB/s opos: 3584 B, time from last successful read: 0 s Finished 12.78user 1060.42system 1:56:41elapsed 15%CPU (0avgtext+0avgdata 4944maxresident)k 312580958inputs+0outputs (1major+601minor)pagefaults 0swaps Press Ctrl-C to interrupt Initial status (read from logfile) rescued: 160039 MB, errsize: 4096 B, errors: 1 Current status rescued: 160039 MB, errsize: 1024 B, current rate: 0 B/s ipos: 1536 B, errors: 1, average rate: 13 B/s opos: 1536 B, time from last successful read: 1.3 m Finished 0.00user 0.00system 3:43.95elapsed 0%CPU (0avgtext+0avgdata 4944maxresident)k 238inputs+0outputs (3major+374minor)pagefaults 0swaps Press Ctrl-C to interrupt Initial status (read from logfile) rescued: 160039 MB, errsize: 1024 B, errors: 1 Current status rescued: 160039 MB, errsize: 1024 B, current rate: 0 B/s ipos: 1536 B, errors: 1, average rate: 0 B/s opos: 1536 B, time from last successful read: 3.7 m Finished 0.00user 0.00system 3:43.56elapsed 0%CPU (0avgtext+0avgdata 4944maxresident)k 8inputs+0outputs (0major+376minor)pagefaults 0swaps It looks like, from where I'm standing it worked perfectly. Here's the log: # Rescue Logfile. Created by GNU ddrescue version 1.14 # Command line: ddrescue --direct --retrim --max-retries=3 /dev/sdb1 /media/jump1/1recover/sdb1.img /media/jump1/1recover/sdb1.img.log # current_pos current_status 0x00000600 + # pos size status 0x00000000 0x00000400 + 0x00000400 0x00000400 - 0x00000800 0x254314FC00 + I'm not sure how to proceed. Does this mean all of my data is lost???????? Appreciate ANY input!
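    One detail in that log is worth acting on: the single unreadable 1 KiB block sits at offset 0x400, which is exactly where an ext3/ext4 filesystem keeps its primary superblock. That alone would explain why mount, file and mmls all fail to recognise the image even though the remaining ~160 GB rescued cleanly. A possible next step - a sketch only, assuming the image really is a plain ext3/ext4 partition image - is to repair it from a backup superblock and then retry a read-only loop mount:

        # show where the backup superblocks would live; -n only simulates, it writes nothing
        sudo mke2fs -n /media/jump1/1recover/sdb1.img
        # repair the image using a backup superblock (32768 is the usual first backup for 4 KiB blocks)
        sudo e2fsck -b 32768 -B 4096 /media/jump1/1recover/sdb1.img
        # then retry the loop mount, read-only to be safe
        sudo mount -o loop,ro /media/jump1/1recover/sdb1.img /mnt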

    Read the article

  • CSS3 - "connecting" 2 classes animation [closed]

    - by Nave Tseva
    I have this CSS +HTML code: <!DOCTYPE HTML> <html> <head> <meta http-equiv="content-type" content="text/html; charset=UTF-8" /> <title>What</title> <style type="text/css"> #page { width: 900px; padding: 0px; margin: 0 auto; direction: rtl; position: relative; } #box1 { position: relative; width: 500px; border: 1px solid black; box-shadow: -3px 8px 34px #808080; border-radius: 20px; box-shadow: -8px 5px 5px #888888; right: 300px; top: 250px; height: 150px; -webkit-transition: all 1s; font-size: large; color: Black; padding: 10px; background: #D0D0D0; opacity: 0; } @-webkit-keyframes myFirst { 0% { right: 300px; top: 150px; background: #D0D0D0; opacity: 0; } 100% { background: #909090; ; right: 300px; top: 200px; opacity: 1; } } #littlebox1 { top: 200px; position: absolute; display: inline-block; } .littlebox1-sentence { font-size: large; padding-bottom: 15px; padding-top: 15px; padding-left: 25px; padding-right: 10px; background: #D0D0D0; border-radius: 10px; -webkit-transition: background .25s ease-in-out; } #littlebox1:hover ~ #box1 { -webkit-transition: all 0s; background: #909090;; right: 300px; top: 200px; -webkit-animation: myFirst 1s; -webkit-animation-fill-mode: initial; opacity: 1; } .littlebox1-sentence:hover { background: #909090; } .littlebox1-sentence:hover + .triangle { border-right: 50px solid #909090; } .triangle { position: relative; width: 0; height: 0; border-right: 50px solid #D0D0D0; border-top: 24px solid transparent; border-bottom: 24px solid transparent; right: 160px; -webkit-transition: border-right .25s ease-in-out; } .triangle:hover { border-right:50px solid #909090; } </style> <script src="//ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script> <script> $(function() { $('.littlebox1-sentence').hover(function() { $(this).css('background', '#909090'); $('.triangle').css('border-right', '50px solid #909090'); }); </script> <script> $(function() { $('.triangle').hover(function() { $(this).css('border-right', '50px solid #909090'); $('.littlebox1-sentence').css('background', '#909090'); }); </script> </head> <body dir="rtl"> <div id="page"> <div id="littlebox1" class="littlebox1-sentence">put your mouse here</div><div id="littlebox1" class="triangle"> </div> <div id="box1"> </div> </div> </body> </html> Live example you will find here: http://jsfiddle.net/FLe4g/12/ The problem here that something here wrong in the second jquery code. I want that every time that I put the mouse on the box, or on the triangke they both will change ther color together. when I put the mouse on the box it works fine, but when I put the mouse on the triangle it don't work. Any suggestions how to fix this code?
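    Both jQuery blocks above are missing the closing "});" of their $(function() { ... }) wrappers, so the second script never runs at all. A minimal sketch of one way to wire it up (assuming jQuery 1.8 as loaded in the page) is a single hover handler, with enter and leave callbacks, attached to both elements:

        // run once the DOM is ready
        $(function() {
            $('.littlebox1-sentence, .triangle').hover(
                function() {   // mouse enters either element: highlight both
                    $('.littlebox1-sentence').css('background', '#909090');
                    $('.triangle').css('border-right', '50px solid #909090');
                },
                function() {   // mouse leaves: restore the original colours
                    $('.littlebox1-sentence').css('background', '#D0D0D0');
                    $('.triangle').css('border-right', '50px solid #D0D0D0');
                }
            );
        });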

    Read the article

  • What to leave when you're leaving

    - by BuckWoody
    There's already a post on this topic - sort of. I read this entry, where the author did a good job on a few steps, but I found that a few other tips might be useful, so if you want to check that one out and then this post, you might be able to put together your own plan for when you leave your job.  I once took over the system administrator (of which the Oracle and SQL Server servers were a part) at a mid-sized firm. The outgoing administrator had about a two- week-long scheduled overlap with me, but was angry at the company and told me "hey, I know this is going to be hard on you, but I want them to know how important I was. I'm not telling you where anything is or what the passwords are. Good luck!" He then quit that day. It took me about three days to find all of the servers and crack the passwords. Yes, the company tried to take legal action against the guy and all that, but he moved back to his home country and so largely got away with it. Obviously, this isn't the way to leave a job. Many of us have changed jobs in the past, and most of us try to be very professional about the transition to a new team, regardless of the feelings about a particular company. I've been treated badly at a firm, but that is no reason to leave a mess for someone else. So here's what you should put into place at a minimum before you go. Most of this is common sense - which of course isn't very common these days - and another good rule is just to ask yourself "what would I want to know"? The article I referenced at the top of this post focuses on a lot of documentation of the systems. I think that's fine, but in actuality, I really don't need that. Even with this kind of documentation, I still perform a full audit on the systems, so in the end I create my own system documentation. There are actually only four big items I need to know to get started with the systems: 1. Where is everything/everybody?The first thing I need to know is where all of the systems are. I mean not only the street address, but the closet or room, the rack number, the IU number in the rack, the SAN luns, all that. A picture here is worth a thousand words, which is why I really like Visio. It combines nice graphics, full text and all that. But use whatever you have to tell someone the physical locations of the boxes. Also, tell them the physical location of the folks in charge of those boxes (in case you aren't) or who share that responsibility. And by "where" in this case, I mean names and phones.  2. What do they do?For both the servers and the people, tell them what they do. If it's a database server, detail what each database does and what application goes to that, and who "owns" that application. In my mind, this is one of hte most important things a Data Professional needs to know. In the case of the other administrtors or co-owners, document each person's responsibilities.   3. What are the credentials?Logging on/in and gaining access to the buildings are things that the new Data Professional will need to do to successfully complete their job. This means service accounts, certificates, all of that. The first thing they should do, of course, is change the passwords on all that, but the first thing they need is the ability to do that!  4. What is out of the ordinary?This is the most tricky, and perhaps the next most important thing to know. Did you have to use a "special" driver for that video card on server X? 
Is the person who co-owns an application with you mentally unstable (like me), or do they have special needs, like "don't talk to Buck before he's had coffee. Nothing will make any sense"? Do you have service pack requirements for a specific setup? Write all that down. Anything that took you a day or longer to make work is probably a candidate here. This is my short list - anything you care to add?

    Read the article

  • Grub won't boot windows after update from 11.10 to 12.04

    - by Holger
    thanks for your time and reading this, here's the deal: i upgraded from 11.10 to 12.04 and everything worked out until i rebooted, i had 11.10 sucessfully running as a dual boot with windows vista. when i rebooted, my GRUB was shot to hell, what ever option i selected it said partion not found or something similar... booting into a live version on a thumb drive and running bootrepair from there fixed the issue... but only for ubuntu, when i try to boot into windows it only goes back to GRUB. i'm not at home, and heres a list of what i have here with me... 1 4gb thumb drive, empty 1 8gb thumb drive, windows vista installer bootable 1 old laptop, the one i try to save, optical drive is not existent 2 Mbps internet connection can you help me get back into my windows without having to reinstall windows? or at least show me a way how to use my illustrator through a virtual machine or something? here's my grub cfg # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga insmod video_bochs insmod video_cirrus } insmod part_msdos insmod ext2 set root='(hd0,msdos2)' search --no-floppy --fs-uuid --set=root 1063e402-b14f-45e5-92b6-d20a2e3a717e if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=auto load_video insmod gfxterm insmod part_msdos insmod ext2 set root='(hd0,msdos2)' search --no-floppy --fs-uuid --set=root 1063e402-b14f-45e5-92b6-d20a2e3a717e set locale_dir=($root)/boot/grub/locale set lang=de_DE insmod gettext fi terminal_output gfxterm if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### function gfxmode { set gfxpayload="${1}" if [ "${1}" = "keep" ]; then set vt_handoff=vt.handoff=7 else set vt_handoff= fi } if [ "${recordfail}" != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "${linux_gfx_mode}" != "text" ]; then load_video; fi menuentry 'Ubuntu, mit Linux 3.2.0-24-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(hd0,msdos2)' search --no-floppy --fs-uuid --set=root 1063e402-b14f-45e5-92b6-d20a2e3a717e linux /boot/vmlinuz-3.2.0-24-generic root=UUID=1063e402-b14f-45e5-92b6-d20a2e3a717e ro quiet splash $vt_handoff initrd /boot/initrd.img-3.2.0-24-generic } menuentry 'Ubuntu, mit Linux 3.2.0-24-generic (Wiederherstellungsmodus)' --class ubuntu --class gnu-linux --class gnu --class os { 
recordfail insmod gzio insmod part_msdos insmod ext2 set root='(hd0,msdos2)' search --no-floppy --fs-uuid --set=root 1063e402-b14f-45e5-92b6-d20a2e3a717e echo 'Linux 3.2.0-24-generic wird geladen …' linux /boot/vmlinuz-3.2.0-24-generic root=UUID=1063e402-b14f-45e5-92b6-d20a2e3a717e ro recovery nomodeset echo 'Initiale Ramdisk wird geladen …' initrd /boot/initrd.img-3.2.0-24-generic } submenu "Previous Linux versions" { menuentry 'Ubuntu, mit Linux 3.0.0-19-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(hd0,msdos2)' search --no-floppy --fs-uuid --set=root 1063e402-b14f-45e5-92b6-d20a2e3a717e linux /boot/vmlinuz-3.0.0-19-generic root=UUID=1063e402-b14f-45e5-92b6-d20a2e3a717e ro quiet splash $vt_handoff initrd /boot/initrd.img-3.0.0-19-generic } menuentry 'Ubuntu, mit Linux 3.0.0-19-generic (Wiederherstellungsmodus)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(hd0,msdos2)' search --no-floppy --fs-uuid --set=root 1063e402-b14f-45e5-92b6-d20a2e3a717e echo 'Linux 3.0.0-19-generic wird geladen …' linux /boot/vmlinuz-3.0.0-19-generic root=UUID=1063e402-b14f-45e5-92b6-d20a2e3a717e ro recovery nomodeset echo 'Initiale Ramdisk wird geladen …' initrd /boot/initrd.img-3.0.0-19-generic } } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='(hd0,msdos2)' search --no-floppy --fs-uuid --set=root 1063e402-b14f-45e5-92b6-d20a2e3a717e linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='(hd0,msdos2)' search --no-floppy --fs-uuid --set=root 1063e402-b14f-45e5-92b6-d20a2e3a717e linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry "Windows Vista (loader) (on /dev/sda1)" --class windows --class os { insmod part_msdos insmod ntfs set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set=root 2C9E66B39E6674EC chainloader +1 } ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ###
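    The menu above does contain a Vista chainloader entry for /dev/sda1, so GRUB knows about Windows; the question is why selecting it drops back to the menu. A hedged suggestion, run from the working Ubuntu install rather than the live USB, is to re-probe and sanity-check the Vista partition before considering a reinstall:

        sudo os-prober              # should report the Vista loader on /dev/sda1
        sudo update-grub            # regenerates /boot/grub/grub.cfg from the scripts shown above
        sudo fdisk -l /dev/sda      # confirm sda1 is still NTFS and carries the boot flag
        sudo blkid /dev/sda1        # the UUID should match the 2C9E66B39E6674EC used in the menu entry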

    Read the article

  • Windows 8/Surface Launch Event Summary

    - by Tim Murphy
    Today was a big day for Microsoft, with two separate launch events: the first for Windows 8 and all of its hardware partners, the second specifically to introduce the Microsoft Windows 8 Surface tablet.  Below are some of the take-aways I got from the webcasts.
    Windows 8 Launch
    The three general areas that Microsoft focused on were the release of the OS itself, the public unveiling of the Windows Store, and the new devices available from its hardware partners. The release of the OS focused on the fact that it will be available at midnight tonight for both new PCs and for upgrades.  I can't say that this interested me that much since it was already known to most people.  I think what they did show well was how easy the OS really is to use. The Windows Store is also not a new feature to those of us who have been running the pre-release versions of Windows 8 or have owned Windows Phone 7 for the past 2 years.  What was interesting is that the Windows Store launches with more apps available than any other platform's store had at its respective launch.  I think this says a lot about how Microsoft focuses on the ability of developers to create software and make it available.  They of course were sure to emphasize that the Windows Store has better monetary terms for developers than its competitors. They also showed off the fact that Xbox Music streaming is available to all Windows 8 users for free.  Couple this with the Bing suite of apps that give you news, weather, sports and finance right out of the box, and I think most people will find the environment a joy to use. I think the hardware demo, while quick and furious, really showed where Windows shines: CHOICE!  They made a statement that over 1000 devices have been certified for Windows 8.  They showed tablets, laptops, desktops, all-in-ones and convertibles.  Since these devices have industry-standard connectors, they give a much wider variety of accessories and devices that you can use with them. Steve Ballmer then came on stage and tried to see how many times he could use the word "magical".  He focused on how the Windows 8 OS is designed to integrate with SkyDrive, Skype and Outlook.com.  He also reinforced that they think Windows 8 is the best choice for the enterprise when it comes to protecting data and integrating across devices, including Windows Phone 8. With that we were left to wait for the second event of the day.
    Surface Launch
    The second event of the day started with kids with magnets.  Ok, they were adults, but who doesn't like playing with magnets?  Steven Sinofsky detached and reattached the Surface keyboard repeatedly, clearly enjoying himself.  It turns out that there are 4 magnets in the cover, 2 for alignment and 2 as connectors. They then went on to give us the details on the display.  The 10.6" display is optically bonded to the case and is optimized to reduce glare.  I think this came through very well in the demonstrations. The properties of the case were also a great selling point.  The VaporMg allowed them to drop the device on stage, on purpose, and continue working.  Of course they had to bring out the skateboards made from Surface devices. "It just has to feel right" was the reason they gave for many of their design decisions, from the weight and size of the device to the way the kickstand and camera work together.  While this gave you the feeling that the whole process was trial and error, you could tell that a lot of science went into the specs.
This included making sure that the magnets were strong enough to hold the cover on, yet still let a 3-year-old remove the cover without effort. I am glad that they also decided a USB port would be part of the spec, since it gives so many options.  They made the point that this allows Surface to leverage over 420 million existing devices.  That works for me. The last feature that I really thought was important was the microSD port.  Being stuck with the onboard memory has been an aggravation of mine with many of the devices in the market today. I think they did a good job of really getting the audience to understand why you want this platform and this particular device.  They used personal examples, like creating a video of a birthday party and being in it, or the fact that the device was being used to live-blog the event and control the lights and presentation.  They showed very well that it was not only fun but very capable of getting real work done.  Handing out tablets to the crowd didn't hurt either.  In the end I really wanted a Surface even though I really have no need for one on a daily basis.  Great job Microsoft! del.icio.us Tags: Windows 8, Win8, Windows 8 Launch

    Read the article

  • Tweaking Hudson memory usage

    - by rovarghe
    Hudson 3.1 has some performance optimizations that greatly reduces its memory footprint. Prior to this Hudson used to always hold the entire data model (all jobs and all builds) in memory which affected scalability. Some installations configured heap sizes in excess of 1GB to counteract this. Hudson 3.1.x maintains an MRU cache and only loads jobs and builds as they are required. Because of the inability to change existing APIs and be backward compatible with plugins, there were limits to how far we could go with this approach. Memory optimizations almost always come with a related cost, in this case its additional I/O that has to be performed to load data on request. On a small site that has frequent traffic, this is usually not noticeable since the MRU cache will usually hold on to all the data. A large site with infrequent traffic might experience some delays when the first request hits the server after a long gap. If you have a large heap and are able to allocate more memory, the cache settings can be adjusted to take advantage of this and even go back to pre-3.1 behavior. All the cache settings can be passed as options to the JVM container (Tomcat or the default Jetty container) using the -D option. There are two caches, independant of each other, one for Jobs and the other for Builds. For the jobs cache: hudson.jobs.cache.evict_in_seconds ( default=60 ) Seconds from last access (could be because of a servlet request or a background cron thread) a job should be purged from the cache. Set this to 0 to never purge based on time. hudson.jobs.cache.initial_capacity ( default=1024 ) Initial number of jobs the cache can accomodate. Setting this to the number of jobs you typically display on your Hudson landing page or home page will speed up consecutive access to that page. If the default is too large you may consider downsizing and using that memory for the Builds cache instead. hudson.jobs.cache.max_entries ( default=1024) Maximum number of jobs in the cache. The default is large enough for most installations, but if you find I/O activity when always accessing the hudson home page you might consider increasing this, but first verify if the I/O is caused by frequent eviction (see above), rather than by the cache not being large enough. For the builds cache: The builds cache is used to store Build objects as they are read from storage. Typically this happens when a user drills down into the details of a particular Job from the hudson hom epage. The cache is shared among builds for different jobs since in most installations all jobs are not accessed with the same frequency, so a per-job builds cache would be a waste of memory. hudson.job.builds.cache.evict_in_seconds ( default=60 ) Same as the equivalent Job cache, applied to Build. hudson.job.builds.cache.initial_capacity" ( default=512 ) Same as equivalent Job cache setting. Note the smaller initial size. If your site stores a large number of builds and has frequent access to more builds you might consider bumping this up. hudson.job.builds.cache.max_entries ( default=10240 ) The default max is large enough for most installations, the builds cache has bigger sized objects, so be careful about increasing the upper limit on this. See section on monitoring below. 
Sample usage: java -jar hudson-war-3.1.2-SNAPSHOT.war -Dhudson.jobs.cache.evict_in_seconds=300 \ -Dhudson.job.builds.cache.evict_in_seconds=300 Monitoring cache usage The 'jmap' tool that comes with the JDK can be used to monitor cache performance in an indirect way by looking at the number of Job and Build objects in each cache. Find the PID of the hudson instance and run $ jmap -histo:live <pid | grep 'hudson.model.*Lazy.*Key$' Here's a sample output: num #instances #bytes class name 523: 28 896 hudson.model.RunMap$LazyRunValue$Key 1200: 3 96 hudson.model.LazyTopLevelItem$Key These are the keys to the Jobs (LazyTopLevelItem$Key) and Builds (RunMap$LazyRunValue$Key) in the caches, so counting the number of keys is a good indicator of the number of items in the cache at any given moment. The size in bytes can be ignored, they are just the size of the keys, not the actual sizes of the objects they hold. Those sizes can only be obtained with a profiler. With the output above we can conclude that there are 3 jobs and 28 builds in memory. The 28 builds can all be from 1 job or all 3 jobs. Over time on an idle system, these should get evicted and memory cache should be empty. In practice, because of background cron threads and triggers, jobs rarely fall down to zero. Access of a job or a build by a cron thread resets the eviction timer.
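    For the Tomcat case mentioned above, the same -D properties go into the container's JVM arguments instead of the java -jar command line. A sketch only; the exact file and variable name vary by distribution:

        # e.g. in $CATALINA_HOME/bin/setenv.sh
        export CATALINA_OPTS="$CATALINA_OPTS \
          -Dhudson.jobs.cache.evict_in_seconds=300 \
          -Dhudson.job.builds.cache.evict_in_seconds=300 \
          -Dhudson.job.builds.cache.max_entries=20480"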

    Read the article

  • client problems - misaligned expectations & not following SDLC protocols

    - by louism
    hi guys, im having some serious problems with a client on a project - i could use some advice please the short version i have been working with this client now for almost 6 months without any problems (a classified website project in the range of 500 hours) over the last few days things have drastically deteriorated to the point where ive had to place the project on-hold whilst i work-out what to do (this has pissed the client off even more) to be simplistic, the root cause of the issue is this: the client doesnt read the specs i make for him, i code the feature, he than wants to change things, i tell him its not to the agreed spec and that that change will have to be postponed and possibly charged for, he gets upset and rants saying 'hes paid for the feature' and im not keeping to the agreement (<- misalignment of expectations) i think the root cause of the root cause is my clients failure to take my SDLC protocols seriously. i have a bug tracking system in place which he practically refuses to use (he still emails me bugs), he doesnt seem to care to much for the protocols i use for dealing with scope creep and change control the whole situation came to a head recently where he 'cracked it' (an aussie term for being fed-up). the more terms like 'postponed for post-launch implementation', 'costed feature addition', and 'not to agreed spec' i kept using, the worse it got finally, he began to bully me - basically insisting i shut-up and do the work im being paid for. i wrote a long-winded email explaining how wrong he was on all these different points, and explaining what all the SDLC protocols do to protect the success of the project. than i deleted that email and wrote a new one in the new email, i suggested as a solution i write up a list of grievances we both had. we than review the list and compromise on different points: he gets some things he wants, i get some things i want. sometimes youve got to give ground to get ground his response to this suggestion was flat-out refusal, and a restatement that i should just get on with the work ive been paid to do so there you have the very subjective short version. if you have the time and inclination, the long version may be a little less bias as it has the email communiques between me and my client the long version (with background) the long version works by me showing you the email communiques which lead to the situation coming to a head. so here it is, judge for yourself where the trouble started... 1. client asked me why something was missing from a feature i just uploaded, my response was to show him what was in the spec: it basically said the item he was looking for was never going to be included 2. [clients response...] Memo Louis, We are following your own title fields and keeping a consistent layout. Why the big fuss about not adding "Part". It simply replaces "model" and is consistent with your current title fields. 3. [my response...] hi [client], the 'part' field appeared to me as a redundancy / mistake. 
i requested clarification but never received any in a timely manner (about 2 weeks ago) the specification for this feature also indicated it wasnt going to be included: RE: "Why the big fuss about not adding "Part" " it may not appear so, but it would actually be a lot of work for me to now add a 'Part' field it could take me up to 15-20 minutes to properly explain why its such a big undertaking to do this, but i would prefer to use that time instead to work on completing your v1.1 features as a simplistic explanation - it connects to the change in paradigm from a 'generic classified ad' model to a 'specific attributes for specific categories' model basically, i am saying it is a big fuss, but i understand that it doesnt look that way - after all, it is just one ity-bitty field :) if you require a fuller explanation, please let me know and i will commit the time needed to write that out also, if you recall when we first started on the project, i said that with the effort/time required for features, you would likely not know off the top of your head. you may think something is really complex, but in reality its quite simple, you might think something is easy - but it could actually be a massive trauma to code (which is the case here with the 'Part' field). if you also recalled, i said the best course of action is to just ask, and i would let you know on a case-by-case basis 4. [email from me to client...] hi [client], the online catalogue page is now up live (see my email from a few days ago for information on how it works) note: the window of opportunity for input/revisions on what data the catalogue stores has now closed (as i have put the code up live now) RE: the UI/layout of the online catalogue page you may still do visual/ui tweaks to the page at the moment (this window for input/revisions will close in a couple of days time) 5. [email from client to me...] *(note: i had put up the feature & asked the client to review it, never heard back from them for a few days)* Memo Louis, Here you go again. CLOSED without a word of input from the customer. I don't think so. I will reply tomorrow regarding the content and functionality we require from this feature. 5. [from me to client...] hi [client]: RE: from my understanding, you are saying that the mini-sale yard control would change itself based on the fact someone was viewing for parts & accessories <- is that correct? this change is outside the scope of the v1.1 mini-spec and therefore will need to wait 'til post launch for costing/implementation 6. [email from client to me...] Memo Louis, Following your v1.1 mini-spec and all your time paid in full for the work selected. We need to make the situation clear. There will be no further items held for post-launch. Do not expect us to pay for any further items other than those we have agreed upon. You have undertaken to complete the Parts and accessories feature as follows. Obviously, as part of this process the "mini search" will be effected, and will require "adaption to make sense". 7. [email from me to client...] hi [client], RE: "There will be no further items held for post-launch. Do not expect us to pay for any further items other than those we have agreed upon." a few points to consider: 1) the specification for the 'parts & accessories' feature was as follows: (i.e. [what] "...we have agreed upon.") 2) you have received the 'parts & accessories' feature free of charge (you have paid $0 for it). 
ive spent two days coding that feature as a gesture of good will i would request that you please consider these two facts carefully and sincerely 8. [email from client to me...] Memo Louis, I don't see how you are giving us anything for free. From your original fee proposal you have deleted more than 30 hours of included features. Your title "shelved features". Further you have charged us twice by adding back into the site, at an addition cost, some of those "shelved features" features. See v1.1 mini-spec. Did include in your original fee proposal a change request budget but then charge without discussion items included in v1.1 mini-spec. Included a further Features test plan for a regression test, a fee of 10 hours that would not have been required if the "shelved features" were not left out of the agreed fee proposal. I have made every attempt to satisfy your your uneven business sense by offering you everything your heart desired, in the v1.1 mini-spec, to be left once again with your attitude of "its too hard, lets leave it for post launch". I am no longer accepting anything less than what we have contracted you to do. That is clearly defined in v1.1 mini-spec, and you are paid in advance for delivering those items as an acceptable function. a few notes about the above email... i had to cull features from the original spec because it didnt fit into the budget. i explained this to the client at the start of the project (he wanted more features than he had budget hours to do them all) nothing has been charged for twice, i didnt charge the client for culled features. im charging him to now do those culled features the draft version of the project schedule included a change request budget of 10 hours, but i had to remove that to meet the budget (the client may not have been aware of this to be fair to them) what the client refers to as my attitude of 'too hard/leave it for post-launch', i called a change request protocol and a method for keeping scope creep under control 9. [email from me to client...] hi [client], RE: "...all your grievances..." i had originally written out a long email response; it was fantastic, it had all these great points of how 'you were wrong' and 'i was right', you would of loved it (and by 'loved it', i mean it would of just infuriated you more) so, i decided to deleted it start over, for two reasons: 1) a long email is being disrespectful of your time (youre a busy businessman with things to do) 2) whos wrong or right gets us no closer to fixing the problems we are experiencing what i propose is this... i prepare a bullet point list of your grievances and my grievances (yes, im unhappy too about how things are going - and it has little to do with money) i submit this list to you for you to add to as necessary we then both take a good hard look at this list, and we decide which areas we are willing to give ground on as an example, the list may look something like this: "louis, you keep taking away features you said you would do" [your grievance 2] [your grievance 3] [your grievance ...] "[client], i feel you dont properly read the specs i prepare for you..." [my grievance 2] [my grievance 3] [my grievance ...] if you are willing to give this a try, let me know will it work? who knows. but if it doesnt, we can always go back to arguing some more :) obviously, this will only work if you are willing to give it a genuine try, and you can accept that you may have to 'give some ground to get some ground' what do you think? 10. [email from client to me ...] 
Memo Louis, Instead of wasting your time listing grievances, I would prefer you complete the items in v1.1 mini-spec, to a satisfactory conclusion. We almost had the website ready for launch until you brought the v1.1 mini-spec into the frame. Obviously I expected you could complete the v1.1 mini-spec in a two-week time frame as you indicated and give the site a more profession presentation. Most of the problems have been caused by you not following our instructions, but deciding to do what you feel like at the time. And then arguing with us how the missing information is not necessary. For instance "Parts and Accessories". Why on earth would you leave out the parts heading, when it ties-in with the fields you have already developed. It replaces "model" and is just as important in the context of information that appears in the "Details" panel. We are at a stage where the the v1.1 mini-spec needs to be completed without further time wasting and the site is complete (subject to all features working). We are on standby at this end to do just that. Let me know when you are back, working on the site and we will process and complete each v1.1 mini-spec, item by item, until the job is complete. 11. [last email from me to client...] hi [client], based on this reply, and your demonstrated unwillingness to compromise/give any ground on issues at hand, i have decided to place your project on-hold for the moment i will be considering further options on how to over-come our challenges over the next few days i will contact you by monday 17/may to discuss any new options i have come up with, and if i believe it is appropriate to restart work on your project at that point or not told you it was long... what do you think?

    Read the article

  • Openlayers - Redraw() layer. Add / Remove layer.

    - by Ozaki
    TLDR: I have an Openlayers map with a layer called 'track' I want to remove track and add track back in. I have an image 'imageFeature' on a layer that rotates on load to the direction being set. I want it to update this rotation that is set in 'styleMap' on a layer called 'tracking'. I set the var 'stylemap' to apply the external image & rotation. The 'imageFeature' is added to the layer at the coords specified. 'imageFeature' is removed. 'imageFeature' is added again in its new location. Rotation is not applied.. As the 'styleMap' applies to the layer I think that I have to remove the layer and add it again rather than just the 'imageFeature' Layer: var tracking = new OpenLayers.Layer.GML("Tracking", "coordinates.json", { format: OpenLayers.Format.GeoJSON, styleMap: styleMap }); styleMap: var styleMap = new OpenLayers.StyleMap({ fillOpacity: 1, pointRadius: 10, rotation: heading, }); Now wrapped in a timed function the imageFeature: map.layers[3].addFeatures(new OpenLayers.Feature.Vector( new OpenLayers.Geometry.Point(longitude, latitude), {rotation: heading, type: parseInt(Math.random() * 3)} )); Type refers to a lookup of 1 of 3 images.: styleMap.addUniqueValueRules("default", "type", lookup); var lookup = { 0: {externalGraphic: "Image1.png", rotation: heading}, 1: {externalGraphic: "Image2.png", rotation: heading}, 2: {externalGraphic: "Image3.png", rotation: heading} } I have tried the 'redraw()' function: but it returns "tracking is undefined" or "map.layers[2]" is undefined. tracking.redraw(true); map.layers[2].redraw(true); Heading is a variable: from a JSON feed. var heading = 13.542; But so far can't get anything to work it will only rotate the image onload. The image will move in coordinates as it should though. So what am I doing wrong with the redraw function or how can I get this image to rotate live? Thanks in advance -Ozaki Add: I managed to get map.layers[2].redraw(true); to sucessfully redraw layer 2. But it still does not update the rotation. I am thinking because the stylemap is updating. But it runs through the style map every n sec, but no updates to rotation and the variable for heading is updating correctly if i put a watch on it in firebug.
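    A hedged sketch of an alternative (OpenLayers 2.x): rather than baking the current value of the heading variable into the StyleMap when it is created, let the style read the rotation from each feature's attributes, then update the attribute and redraw just that feature on every tick. The "img" attribute below is hypothetical, standing in for the Image1/2/3.png lookup.

        // style re-evaluated per feature at draw time
        var styleMap = new OpenLayers.StyleMap(new OpenLayers.Style({
            fillOpacity: 1,
            pointRadius: 10,
            externalGraphic: "${img}",    // assumed attribute holding the graphic URL
            rotation: "${heading}"        // read from feature.attributes.heading on every draw
        }));

        // inside the timed update, assuming a single tracked feature on layer 3
        var feature = map.layers[3].features[0];
        feature.move(new OpenLayers.LonLat(longitude, latitude));   // reposition the marker
        feature.attributes.heading = heading;                       // new heading from the JSON feed
        map.layers[3].drawFeature(feature);                         // re-renders it with the new rotation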

    Read the article

  • WPF CommandParameter is NULL first time CanExecute is called

    - by Jonas Follesø
    I have run into an issue with WPF and Commands that are bound to a Button inside the DataTemplate of an ItemsControl. The scenario is quite straight forward. The ItemsControl is bound to a list of objects, and I want to be able to remove each object in the list by clicking a Button. The Button executes a Command, and the Command takes care of the deletion. The CommandParameter is bound to the Object I want to delete. That way I know what the user clicked. A user should only be able to delete their "own" objects - so I need to do some checks in the "CanExecute" call of the Command to verify that the user has the right permissions. The problem is that the parameter passed to CanExecute is NULL the first time it's called - so I can't run the logic to enable/disable the command. However, if I make it allways enabled, and then click the button to execute the command, the CommandParameter is passed in correctly. So that means that the binding against the CommandParameter is working. The XAML for the ItemsControl and the DataTemplate looks like this: <ItemsControl x:Name="commentsList" ItemsSource="{Binding Path=SharedDataItemPM.Comments}" Width="Auto" Height="Auto"> <ItemsControl.ItemTemplate> <DataTemplate> <StackPanel Orientation="Horizontal"> <Button Content="Delete" FontSize="10" Command="{Binding Path=DataContext.DeleteCommentCommand, ElementName=commentsList}" CommandParameter="{Binding}" /> </StackPanel> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> So as you can see I have a list of Comments objects. I want the CommandParameter of the DeleteCommentCommand to be bound to the Command object. So I guess my question is: have anyone experienced this problem before? CanExecute gets called on my Command, but the parameter is always NULL the first time - why is that? Update: I was able to narrow the problem down a little. I added an empty Debug ValueConverter so that I could output a message when the CommandParameter is data bound. Turns out the problem is that the CanExecute method is executed before the CommandParameter is bound to the button. I have tried to set the CommandParameter before the Command (like suggested) - but it still doesn't work. Any tips on how to control it. Update2: Is there any way to detect when the binding is "done", so that I can force re-evaluation of the command? Also - is it a problem that I have multiple Buttons (one for each item in the ItemsControl) that bind to the same instance of a Command-object? Update3: I have uploaded a reproduction of the bug to my SkyDrive: http://cid-1a08c11c407c0d8e.skydrive.live.com/self.aspx/Code%20samples/CommandParameterBinding.zip
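    On Update2 specifically ("is there any way to detect when the binding is done"): a hedged sketch, not a confirmed fix, is to ask WPF to re-query every command once the current binding pass has settled, for example from the view's Loaded handler, and to let CanExecute simply return true while the parameter is still null. Sharing one command instance across all the item buttons is normal and should not by itself cause this.

        // e.g. in the Loaded event of the view that hosts the ItemsControl
        Dispatcher.BeginInvoke(
            new Action(CommandManager.InvalidateRequerySuggested),  // forces CanExecute to run again
            DispatcherPriority.Loaded);                             // after data binding has completed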

    Read the article

  • Configuring multiple distinct WCF binding configurations causes an exception to be thrown

    - by Sandor Drieënhuizen
    I have a set of IIS7-hosted net.tcp WCF services that serve my ASP.NET MVC web application. The web application is accessed over the internet. WCF Services (IIS7) <--> ASP.NET MVC Application <--> Client Browser The services are username authenticated, the account that a client (of my web application) uses to logon ends up as the current principal on the host. I want one of the services to be authenticated differently, because it serves the view model for my logon view. When it's called, the client is obviously not logged on yet. I figure Windows authentication serves best or perhaps just certificate based security (which in fact I should use for the authenticated services as well) if the services are hosted on a machine that is not in the same domain as the web application. That's not the point here though. Using multiple TCP bindings is what's giving me trouble. I tried setting it up like this: <bindings> <netTcpBinding> <binding> <security mode="TransportWithMessageCredential"> <message clientCredentialType="UserName"/> </security> </binding> <binding name="public"> <security mode="Transport"> <message clientCredentialType="Windows"/> </security> </binding> </netTcpBinding> </bindings> The thing is that both bindings don't seem to want live together in my host. When I remove either of them, all's fine but together they produce the following exception on the client: The requested upgrade is not supported by 'net.tcp://localhost:8081/Service2.svc'. This could be due to mismatched bindings (for example security enabled on the client and not on the server). In the server trace log, I find the following exception: Protocol Type application/negotiate was sent to a service that does not support that type of upgrade. Am I looking into the right direction or is there a better way to solve this?
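    A hedged sketch of one thing worth checking: in the snippet above the first binding has no name, so it acts as the default for every netTcpBinding endpoint and nothing ever selects "public"; also, in Transport mode the credential type belongs on the <transport> element rather than <message>. Naming both configurations and pointing each endpoint (and the matching client endpoint) at the right one makes the intent explicit - the service and contract names below are placeholders, not the real ones:

        <bindings>
          <netTcpBinding>
            <binding name="authenticated">
              <security mode="TransportWithMessageCredential">
                <message clientCredentialType="UserName"/>
              </security>
            </binding>
            <binding name="public">
              <security mode="Transport">
                <transport clientCredentialType="Windows"/>
              </security>
            </binding>
          </netTcpBinding>
        </bindings>
        <services>
          <service name="MyApp.LogonService">
            <endpoint address="" binding="netTcpBinding" bindingConfiguration="public"
                      contract="MyApp.ILogonService"/>
          </service>
          <service name="MyApp.SecureService">
            <endpoint address="" binding="netTcpBinding" bindingConfiguration="authenticated"
                      contract="MyApp.ISecureService"/>
          </service>
        </services>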

    Read the article

  • Drawing a TextBox in an extended Glass Frame (C# w/o WPF)

    - by Lazlo
    I am trying to draw a TextBox on the extended glass frame of my form. I won't describe this technique, it's well-known. Here's an example for those who haven't heard of it: http://www.danielmoth.com/Blog/Vista-Glass-In-C.aspx The thing is, it is complex to draw over this glass frame. Since black is considered to be the 0-alpha color, anything black disappears. There are apparently ways of countering this problem: drawing complex GDI+ shapes are not affected by this alpha-ness. For example, this code can be used to draw a Label on glass (note: GraphicsPath is used instead of DrawString in order to get around the horrible ClearType problem): public class GlassLabel : Control { public GlassLabel() { this.BackColor = Color.Black; } protected override void OnPaint(PaintEventArgs e) { GraphicsPath font = new GraphicsPath(); font.AddString( this.Text, this.Font.FontFamily, (int)this.Font.Style, this.Font.Size, Point.Empty, StringFormat.GenericDefault); e.Graphics.SmoothingMode = SmoothingMode.HighQuality; e.Graphics.FillPath(new SolidBrush(this.ForeColor), font); } } Similarly, such an approach can be used to create a container on the glass area. Note the use of the polygons instead of the rectangle - when using the rectangle, its black parts are considered as alpha. public class GlassPanel : Panel { public GlassPanel() { this.BackColor = Color.Black; } protected override void OnPaint(PaintEventArgs e) { Point[] area = new Point[] { new Point(0, 1), new Point(1, 0), new Point(this.Width - 2, 0), new Point(this.Width - 1, 1), new Point(this.Width -1, this.Height - 2), new Point(this.Width -2, this.Height-1), new Point(1, this.Height -1), new Point(0, this.Height - 2) }; Point[] inArea = new Point[] { new Point(1, 1), new Point(this.Width - 1, 1), new Point(this.Width - 1, this.Height - 1), new Point(this.Width - 1, this.Height - 1), new Point(1, this.Height - 1) }; e.Graphics.FillPolygon(new SolidBrush(Color.FromArgb(240, 240, 240)), inArea); e.Graphics.DrawPolygon(new Pen(Color.FromArgb(55, 0, 0, 0)), area); base.OnPaint(e); } } Now my problem is: How can I draw a TextBox? After lots of Googling, I came up with the following solutions: Subclassing the TextBox's OnPaint method. This is possible, although I could not get it to work properly. It should involve painting some magic things I don't know how to do yet. Making my own custom TextBox, perhaps on a TextBoxBase. If anyone has good, valid and working examples, and thinks this could be a good overall solution, please tell me. Using BufferedPaintSetAlpha. (http://msdn.microsoft.com/en-us/library/ms649805.aspx). The downsides of this method may be that the corners of the textbox might look odd, but I can live with that. If anyone knows how to implement that method properly from a Graphics object, please tell me. I personally don't, but this seems the best solution so far. Thanks!

    Read the article

  • Very slow compile times on Visual Studio

    - by johnc
    We are getting very slow compile times, which can take upwards of 20+ minutes on dual-core 2GHz, 2GB RAM machines. A lot of this is due to the size of our solution, which has grown to 70+ projects, as well as VSS, which is a bottleneck in itself when you have a lot of files. (Swapping out VSS is not an option unfortunately, so I don't want this to descend into a VSS bash.) We are looking at combining projects (not nice, as we like the separation of concerns, but it is a good opportunity to refactor away some dead wood). We are also looking at having multiple solutions to achieve greater separation of concerns and quicker compile times for each element of the application. This I can see will become a DLL hell as we try to keep things in sync. I am interested to know how other teams have dealt with this scaling issue: what do you do when your code base reaches a critical mass where you are wasting half the day watching the status bar deliver compile messages?
    UPDATE: Apologies, I neglected to mention this is a C# solution. Thanks for all the C++ suggestions, but it's been a few years since I've had to worry about headers. At a distance I'd say I miss C++, but I'm not sure I want to go back.
    EDIT: Nice suggestions that have helped so far (not saying there aren't other nice suggestions below, just what has helped):
    - New 3GHz laptop - the power of lost utilization works wonders when whinging to management
    - Disable antivirus during compile
    - 'Disconnecting' from VSS (actually the network) during compile - I may get us to remove VS-VSS integration altogether and stick to using the VSS UI
    Still not rip-snorting through a compile, but every bit helps. Orion did mention in a comment that generics may have a play also. From my tests there does appear to be a minimal performance hit, but not high enough to be sure - compile times can be inconsistent due to disk activity. Due to time limitations, my tests didn't include as many generics, or as much code, as would appear in the live system, so that may accumulate. I wouldn't avoid using generics where they are supposed to be used just for compile-time performance.
    WORKAROUND: We are testing the practice of building new areas of the application in new solutions, importing in the latest DLLs as required, then integrating them into the larger solution when we are happy with them. We may also do the same to existing code by creating temporary solutions that just encapsulate the areas we need to work on, and throwing them away after reintegrating the code. We need to weigh up the time it will take to reintegrate this code against the time we gain by not having Rip Van Winkle-like experiences with rapid recompiling during development.

    Read the article

  • *UPDATED* help with django and accented characters?

    - by Asinox
    Hi guys, i have a problem with my accented characters, Django admin save my data without encoding to something like "&aacute;" Example: if im trying a word like " Canción ", i would like to save in this way: Canci&oacute;n, and not Canción. im usign Sociable app: {% load sociable_tags %} {% get_sociable Facebook TwitThis Google MySpace del.icio.us YahooBuzz Live as sociable_links with url=object.get_absolute_url title=object.titulo %} {% for link in sociable_links %} <a href="{{ link.link }}"><img alt="{{ link.site }}" title="{{ link.site }}" src="{{ link.image }}" /></a> {% endfor %} But im getting error if my object.titulo (title of the article) have a accented word. aught KeyError while rendering: u'\xfa' Any idea ? i had in my SETTING: DEFAULT_CHARSET = 'utf-8' i had in my mysql database: utf8_general_ci COMPLETED ERROR: Traceback: File "C:\wamp\bin\Python26\lib\site-packages\django\core\handlers\base.py" in get_response 100. response = callback(request, *callback_args, **callback_kwargs) File "C:\wamp\bin\Python26\lib\site-packages\django\views\generic\date_based.py" in object_detail 366. response = HttpResponse(t.render(c), mimetype=mimetype) File "C:\wamp\bin\Python26\lib\site-packages\django\template\__init__.py" in render 173. return self._render(context) File "C:\wamp\bin\Python26\lib\site-packages\django\template\__init__.py" in _render 167. return self.nodelist.render(context) File "C:\wamp\bin\Python26\lib\site-packages\django\template\__init__.py" in render 796. bits.append(self.render_node(node, context)) File "C:\wamp\bin\Python26\lib\site-packages\django\template\debug.py" in render_node 72. result = node.render(context) File "C:\wamp\bin\Python26\lib\site-packages\django\template\loader_tags.py" in render 125. return compiled_parent._render(context) File "C:\wamp\bin\Python26\lib\site-packages\django\template\__init__.py" in _render 167. return self.nodelist.render(context) File "C:\wamp\bin\Python26\lib\site-packages\django\template\__init__.py" in render 796. bits.append(self.render_node(node, context)) File "C:\wamp\bin\Python26\lib\site-packages\django\template\debug.py" in render_node 72. result = node.render(context) File "C:\wamp\bin\Python26\lib\site-packages\django\template\loader_tags.py" in render 62. result = block.nodelist.render(context) File "C:\wamp\bin\Python26\lib\site-packages\django\template\__init__.py" in render 796. bits.append(self.render_node(node, context)) File "C:\wamp\bin\Python26\lib\site-packages\django\template\debug.py" in render_node 72. result = node.render(context) File "C:\wamp\bin\Python26\lib\site-packages\sociable\templatetags\sociable_tags.py" in render 37. 'link': sociable.genlink(site, **self.values), File "C:\wamp\bin\Python26\lib\site-packages\sociable\sociable.py" in genlink 20. values['title'] = quote_plus(kwargs['title']) File "C:\wamp\bin\Python26\lib\urllib.py" in quote_plus 1228. s = quote(s, safe + ' ') File "C:\wamp\bin\Python26\lib\urllib.py" in quote 1222. res = map(safe_map.__getitem__, s) Exception Type: TemplateSyntaxError at /noticia/2010/jun/10/matan-domingo-paquete-en-la-avenida-san-vicente-de-paul/ Exception Value: Caught KeyError while rendering: u'\xfa' thanks, sorry with my English
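    The traceback bottoms out in urllib.quote_plus, which on Python 2 cannot handle a unicode string containing non-ASCII characters such as u'\xfa'; the admin and the database are most likely storing the title correctly. A hedged sketch of the usual fix, written as a hypothetical helper rather than a patch to the sociable app itself, is to encode to UTF-8 before quoting (django.utils.http.urlquote_plus does essentially this for you):

        from urllib import quote_plus

        def safe_quote_plus(value):
            """URL-quote a title that may be a unicode string with accents."""
            if isinstance(value, unicode):
                value = value.encode('utf-8')   # u'Canci\xf3n' -> 'Canci\xc3\xb3n'
            return quote_plus(value)

        # e.g. safe_quote_plus(object.titulo) instead of quote_plus(kwargs['title'])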

    Read the article

  • Why do many software projects fail today?

    - by TomTom
    As long as there have been software projects, the world has wondered why they fail so often. I would like to know if there is a list or something equivalent that shows how many software projects fail today. It would be nice if there were a comparison over the last 20-30 years. You can also add your top reason why a software project fails. Mine is "Requirements are poor or not even existing", which also includes "No (real) customer / user involved".

    EDIT: It is nearly impossible to clearly define the term "fail". Let's say that fail means: the project was more than 10% over budget and time. In my opinion the 10% +/- is a good range for an offer / tender.

    EDIT: Until now (Feb 11) it seems that most posters agree that a failure of the project is basically a failure of the project management (whatever "fail" means). But it also comes through that most developers are not happy with this situation. Perhaps because it is not the manager who gets penalized when a project is not successful, but the supposedly lazy, incompetent developer teams? Reading the posts I can also hear that there is a big gap between the developer side and the management side. The expectations (and perhaps also the requirements) seem to be so different that a project cannot be successful in the end (over time / budget; users are not happy; not all first-priority features implemented; too many bugs because developers were forced to implement in too-short timeframes...). I'm asking myself: how can we improve this? Or do we even have the possibility to improve it? Everybody seems to be unsatisfied with the way it goes now. Can we close the gap between these two worlds? Should we (the developers) go on strike and fight for "high quality requirements" and "realistic / iteration-based time schedules"?

    EDIT: Ralph Westphal and Stefan Lieser have founded a new community called Clean Code Developer. The aim of the group is to bring more professionalism into software engineering. Independently of whether a developer has a degree or tons of years of experience, you can be part of this movement. Clean Code Developers live principles like SOLID every day. A professional developer is the biggest reviewer of his own work, and he has an internal value system which helps him to improve and become better. Check it out at: Clean Code Developer

    EDIT: Our company is currently doing a thing called "Application Development and Maintenance Benchmarking". This is a service offered by IBM to get feedback from someone external on your software engineering process quality, etc. When we get the results, I will tell you more about it.

    Read the article

  • Django: What's an awesome plugin to maintain images in the admin?

    - by meder
    I have an articles Entry model with an excerpt and a description field. If a user wants to post an image, I have a separate ImageField, which gets the default standard file browser. I've tried using django-filebrowser, but I don't like the fact that it requires django-grappelli, nor do I necessarily want a Flash upload utility. Can anyone recommend a tool that lets me manage image uploads and basically replaces the file browser provided by Django with an image-picking browser? In the future I'd probably want it to handle image resizing and to specify default image sizes for certain article types.

    Edit: I'm trying out adminfiles now, but I'm having issues installing it. I grabbed it and added it to my Python path, added it to INSTALLED_APPS, created the database tables for it, and uploaded an image. I followed the instructions to modify my model to specify adminfiles_fields and registered it, but it's not being applied in my admin. Here's my admin.py for articles:

        from django.contrib import admin
        from django import forms
        from articles.models import Category, Entry
        from tinymce.widgets import TinyMCE
        from adminfiles.admin import FilePickerAdmin

        class EntryForm(forms.ModelForm):
            class Media:
                js = ['/media/tinymce/tiny_mce.js', '/media/tinymce/load.js']  # , '/media/admin/filebrowser/js/TinyMCEAdmin.js'
            class Meta:
                model = Entry

        class CategoryAdmin(admin.ModelAdmin):
            prepopulated_fields = {'slug': ['title']}

        class EntryAdmin(FilePickerAdmin):
            adminfiles_fields = ('excerpt',)
            prepopulated_fields = {'slug': ['title']}
            form = EntryForm

        admin.site.register(Category, CategoryAdmin)
        admin.site.register(Entry, EntryAdmin)

    Here's my Entry model:

        class Entry(models.Model):
            LIVE_STATUS = 1
            DRAFT_STATUS = 2
            HIDDEN_STATUS = 3
            STATUS_CHOICES = (
                (LIVE_STATUS, 'Live'),
                (DRAFT_STATUS, 'Draft'),
                (HIDDEN_STATUS, 'Hidden'),
            )
            status = models.IntegerField(choices=STATUS_CHOICES, default=LIVE_STATUS)
            tags = TagField()
            categories = models.ManyToManyField(Category)
            title = models.CharField(max_length=250)
            excerpt = models.TextField(blank=True)
            excerpt_html = models.TextField(editable=False, blank=True)
            body_html = models.TextField(editable=False, blank=True)
            article_image = models.ImageField(blank=True, upload_to='upload')
            body = models.TextField()
            enable_comments = models.BooleanField(default=True)
            pub_date = models.DateTimeField(default=datetime.datetime.now)
            slug = models.SlugField(unique_for_date='pub_date')
            author = models.ForeignKey(User)
            featured = models.BooleanField(default=False)

            def save(self, force_insert=False, force_update=False):
                self.body_html = markdown(self.body)
                if self.excerpt:
                    self.excerpt_html = markdown(self.excerpt)
                super(Entry, self).save(force_insert, force_update)

            class Meta:
                ordering = ['-pub_date']
                verbose_name_plural = "Entries"

            def __unicode__(self):
                return self.title

    Edit #2: To clarify, I did move the media files to my media path, and they are indeed rendering the image area. I can upload fine, and the <<<image>>> tag is inserted into my editable MarkItUp w/ Markdown area, but it isn't rendering in the MarkItUp preview - perhaps I just need to apply |upload_tags to that preview. I'll try adding it to the template which posts the article as well.

    Read the article

  • getting autodiscover URL from Exchange email address

    - by Anthony
    I'm starting with an address on an Exchange 2007 server: [email protected]. I attempted to send an Autodiscover request, as documented on MSDN, using the generic autodiscover addresses documented in the TechNet white paper. So, using curl from PHP, I sent the following request:

        <Autodiscover xmlns="http://schemas.microsoft.com/exchange/autodiscover/outlook/requestschema/2006">
          <Request>
            <EMailAddress>[email protected]</EMailAddress>
            <AcceptableResponseSchema>
              http://schemas.microsoft.com/exchange/autodiscover/outlook/responseschema/2006a
            </AcceptableResponseSchema>
          </Request>
        </Autodiscover>

    to the following URL:

        https://domain.exchangeserver.org/autodiscover/autodiscover.xml

    but got no response, just an eventual timeout. I also tried:

        https://autodiscover.domain.exchangeserver.org/autodiscover/autodiscover.xml

    with the same result. Now, since my larger goal is to use Autodiscover with Exchange Web Services, and since all of the EWS URLs start with the same sub-domain as the Outlook Web Access address, I thought I'd give that a try:

        OWA: https://wmail.domain.exchangeserver.org

    So I tried:

        https://wmail.domain.exchangeserver.org/autodiscover/autodiscover.xml

    and sure enough, I got back the expected response. However, I only knew the OWA sub-domain because it's the server I have access to and that I'm using to test everything. I would not know it for sure, or be able to guess it, if this were a live app and the user were entering their own Exchange email. I know that whatever generic autodiscover settings are needed must be turned on, because I can enter [email protected] into Apple Mail on Snow Leopard and it finds everything without trouble. So the question is: should https://domain.exchangeserver.org/autodiscover/autodiscover.xml have worked, and I just missed a step when trying to connect to it? Or is there some trick (maybe involving pinging the email address?) that Apple Mail and other clients use to resolve the address to the OWA subdomain before sending the autodiscover request? Thanks to anyone who knows or can take a wild guess.
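    One piece that is easy to miss is the DNS fallback: besides the two well-known URL forms, clients can look up an SRV record (_autodiscover._tcp.<domain>) that points at the host actually serving Autodiscover - often the same box as OWA - which is likely how Apple Mail finds it without knowing the sub-domain. Here is a rough sketch of that candidate resolution in Python; the domain names are placeholders, the requests/dnspython calls are just one way to express the idea rather than what any mail client actually does, and it deliberately ignores the authentication and redirect handling a real client would need:

        # Sketch: probe the standard Autodiscover URLs, then fall back to the SRV record.
        # Assumes the third-party `requests` and `dnspython` packages.
        import requests
        import dns.resolver

        EMAIL = 'user@domain.exchangeserver.org'   # placeholder address
        DOMAIN = EMAIL.split('@', 1)[1]

        REQUEST_XML = (
            '<Autodiscover xmlns="http://schemas.microsoft.com/exchange/'
            'autodiscover/outlook/requestschema/2006">'
            '<Request><EMailAddress>%s</EMailAddress>'
            '<AcceptableResponseSchema>http://schemas.microsoft.com/exchange/'
            'autodiscover/outlook/responseschema/2006a</AcceptableResponseSchema>'
            '</Request></Autodiscover>' % EMAIL
        )

        candidates = [
            'https://%s/autodiscover/autodiscover.xml' % DOMAIN,
            'https://autodiscover.%s/autodiscover/autodiscover.xml' % DOMAIN,
        ]

        # SRV fallback: the record names the real Autodiscover host (e.g. the OWA host).
        try:
            for srv in dns.resolver.resolve('_autodiscover._tcp.' + DOMAIN, 'SRV'):
                host = str(srv.target).rstrip('.')
                candidates.append('https://%s/autodiscover/autodiscover.xml' % host)
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            pass

        for url in candidates:
            try:
                resp = requests.post(url, data=REQUEST_XML,
                                     headers={'Content-Type': 'text/xml'},
                                     timeout=10)
            except requests.RequestException:
                continue
            if resp.status_code == 200:
                print('Autodiscover answered at %s' % url)
                break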

    Read the article

  • How to assess a job offer with stock options? [closed]

    - by seas
    I cannot help expressing my curiosity about this issue. First of all, Russia, the country I live in, is as far as I understand virgin territory on this matter: compensation here is purely salary-driven and stock options essentially don't exist, so I am not familiar with them at all. I roughly understand the mechanism of options, and as far as I can tell, the biggest unknown in the whole story is how much a single share will be worth. How much will I get from all those N x 1000 options?

    I see two ways one can get money out of the business: (1) the business goes IPO, or (2) the business is sold as a whole to another owner. A third way seems rather questionable as a source of money: (3) the business carries on "forever" without an IPO (a generation would die before it goes public). I would also be interested in some explanation of situation (3). Situation (1) is clear - the market decides everything; you either wait for a stock price you are satisfied with or sell everything now. But the question is really about (2) - the business being sold to another business.

    I am considering the following model: I am a well-paid specialist at company X. Somebody at company Y makes me an offer. Y is a startup; they cannot offer me much money and cannot outbid my salary, but they are growing fast and hope to be bought soon. Instead of money, they offer me N x 1000 options. My problem is how to assess this offer against my current, stable and well-paid job. Is there any typical per-share value when one company is sold to another? Is there any typical share price at which companies prefer to go to IPO? Are there any safeguards against diluting the value of the stock before the company is sold or goes public (I mean, hiring too many people with option packages too fast will decrease the value of each package)? Are there any good articles with explanations? Is any of this written into law?
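    Just to make the arithmetic concrete, the usual back-of-the-envelope valuation in the acquisition case (2) looks something like the sketch below. Every number in it is invented purely for illustration, and real offers complicate this with vesting schedules, strike prices set by later funding rounds, further dilution, and liquidation preferences that can change the answer dramatically:

        # Back-of-the-envelope option value in an acquisition; all numbers are made up.
        options_granted = 4 * 1000          # the "N x 1000" options in the offer
        strike_price = 0.50                 # what you pay per share to exercise
        fully_diluted_shares = 10000000.0   # all shares, counting everyone's options

        acquisition_price = 20000000.0      # hypothetical price the buyer pays
        price_per_share = acquisition_price / fully_diluted_shares   # 2.00 here

        payout = options_granted * max(price_per_share - strike_price, 0.0)
        ownership = options_granted / fully_diluted_shares

        print('price per share: %.2f' % price_per_share)   # 2.00
        print('gross payout: %.2f' % payout)                # 6000.00
        print('ownership: %.4f%%' % (ownership * 100))      # 0.0400%

    In other words, without knowing the strike price and the fully diluted share count, "N x 1000 options" by itself says almost nothing about the money involved.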

    Read the article

  • Revisiting .NET, but what should I focus on?

    - by Wayne M
    After about a two-year hiatus, I'm brushing up on my .NET skills to find a .NET job (my previous two positions involved very little development, or development using legacy technologies, so apart from a few very minor apps I have not touched .NET in close to two years). I'm aware of things like ASP.NET MVC, and I have previously read up on things like NHibernate and DI/IoC, although I have yet to use them apart from very trivial "Hello World" type applications. I have a subscription to Rob Conery's Tekpub website and occasionally watch those videos when I have free time.

    My concern is this: I don't live in a very technical area. I would be surprised if any but the most tech-savvy companies have heard of, let alone use, ASP.NET MVC, NHibernate (or even LINQ/EF), or know about IoC. I would be willing to bet a large sum of money that 95% of the possible jobs I could obtain will use the following:

    - Visual SourceSafe, if any VCS at all
    - ASP.NET 2.0 WebForms (3.5 if lucky)
    - Raw ADO.NET on top of a very thin implementation of the Gateway pattern
    - Stored procedures in the database for most CRUD operations
    - Gratuitous use of code-behind, with a Service layer if I'm lucky

    If I were extremely lucky, I might find a shop that has heard of ORMs and either uses one or has written its own data abstraction. Also if I were lucky, the company would be using Model-View-Presenter.

    In light of this I'm not sure what I should focus on learning. Personally, I would prefer to be using the latest stuff - ASP.NET MVC, NHibernate, jQuery, WCF, etc. Reality says I should go back to the basics, since it looks like most potential opportunities aren't going to be anywhere near the cutting edge, or anywhere close to it. And, as much as I would like to find a position and start showing the other developers the benefits, in my past experience this has usually resulted in my being fired for "not being a team player" and doing things the bad old way.

    So, I am curious how you would approach a situation like this. What should I focus on, in order to (a) reacquaint myself with .NET, and (b) prepare myself to obtain a .NET job that is more than likely going to use techniques that I and most other knowledgeable developers will scoff at?

    Read the article

  • phpmailer with hotmail?

    - by nikola
    I'm trying to send emails from my server with a PHP script. I used to send them with the native PHP mail function and everything worked fine. Here's the code I used:

        $to = $sMail;
        $subject = $sSubject;
        $message = $sMessage;
        $headers = 'From: [email protected]' . "\r\n";
        $headers .= 'Reply-To: [email protected]' . "\r\n";
        $headers .= 'MIME-Version: 1.0' . "\r\n";
        $headers .= 'Content-type: text/html; charset=UTF-8' . "\r\n";
        $bRes = mail($to, $subject, $message, $headers);

    I then switched to PHPMailer and was no longer able to send mail to Hotmail accounts (all the other providers still work). The Hotmail server reports the error: "550 SC-001 Mail rejected by Windows Live Hotmail for policy reasons." This is the code I use with PHPMailer:

        $mail = new PHPMailer();
        $mail->IsHTML(true);
        $mail->CharSet = 'UTF-8';
        $mail->From = '[email protected]';
        $mail->FromName = 'domain.com';
        $mail->Subject = $sSubject;
        $mail->Body = $sMessage;
        $mail->AltBody = strip_tags($sMessage);
        $mail->AddAddress($sMail);
        $mail->Send();
        $mail->ClearAddresses();
        $mail->ClearAttachments();

    Since sending works with the native function, I'm sure my server is able to send mail to Hotmail. There must be a property to set when using PHPMailer, but I can't seem to find the right one. Does anyone know something about this? Thank you very much!

    Read the article

  • Delphi - restore actual row in DBGrid

    - by durumdara
    Hi! Delphi 6 Professional. Formerly we used DBISAM and DBISAMTable, which handle RecNo and work well with modifications (delete, edit, etc.). We have now replaced it with ElevateDB, which doesn't handle RecNo, and we often use queries rather than tables. A query must be reopened to see modifications, and if we reopen the query, we need to reposition to the last record. Locate isn't enough, because the grid then shows the record in a different row. This is very disturbing: after a modification the record moves to another row, it is hard to follow, and users hate it. We found this code:

        function TBaseDBGrid.GetActRow: integer;
        begin
          Result := -1 + Row;
        end;

        procedure TBasepDBGrid.SetActRow(aRow: integer);
        var
          bm: TBookMark;
        begin
          if IsDataSourceValid(DataSource) then
            with DataSource.DataSet do
            begin
              bm := GetBookmark;
              DisableControls;
              try
                MoveBy(-aRow);
                MoveBy(aRow);
                //GotoBookmark(bm);
              finally
                FreeBookmark(bm);
                EnableControls;
              end;
            end;
        end;

    The original example uses MoveBy. This works well with queries, because we cannot see that the query was reopened in the background; the visual control does not change the row position. But when we have an EDBTable, or a live/sensitive query, MoveBy is dangerous to use, because if somebody deletes or appends a row we can land on the wrong record. I then tried to use the bookmark (see the commented-out line), but that technique doesn't work either, because it shows the record in a different row position.

    So the question: how do I restore both the row position and the record in a DBGrid? Or what kind of DBGrid can relocate to the same record and row after the underlying DataSet is refreshed? I'm looking for a user-friendly solution; I understand the users, because I have tried to use this jump-around DBGrid myself and it is very unpleasant - your eyes wear out trying to find the original record after an update. :-(

    Thanks for any help, links or info: dd

    Read the article

  • Flash AS3 blur or liquify part of an image with mouse

    - by hamlet
    Hi, I am a complete beginner in Flash. I want to load an image, show a cursor over the image, and on mouse down blur the part of the image under the cursor (e.g. so you can blur your face in the image and then save the new image). I can currently erase parts of the image with a white line, but I would like to blur them instead.

        // LIVE JPEG ENCODER 0.3
        // from bytearray.org
        import asfiles.encoding.JPEGEncoder;
        import flash.external.ExternalInterface;

        ExternalInterface.addCallback("flash_saveImage", inflash_saveImage);

        var loader:Loader = new Loader();
        loader.contentLoaderInfo.addEventListener(Event.COMPLETE, handleComplete);
        loader.load(new URLRequest(loaderInfo.parameters._filename));
        //loader.load(new URLRequest("b.jpg"));

        var container_mc:MovieClip = new MovieClip; // create movieclip

        function handleComplete(e:Event):void {
            addChild(container_mc);
            var bitmapData:BitmapData = Bitmap(e.target.content).bitmapData;
            var matrix:Matrix = new Matrix();
            container_mc.graphics.clear();
            container_mc.graphics.beginBitmapFill(bitmapData, matrix, false);
            //container_mc.graphics.beginFill(0xFFFFFF,0);
            container_mc.graphics.drawRect(0, 0, bitmapData.width, bitmapData.height);
            container_mc.graphics.endFill();
            swapChildren(container_mc, pencil);
            container_mc.addEventListener(MouseEvent.MOUSE_DOWN, startDrawing);
            container_mc.addEventListener(MouseEvent.MOUSE_UP, stopDrawing);
            container_mc.addEventListener(MouseEvent.MOUSE_MOVE, makeLine);
        }

        stage.addEventListener(MouseEvent.MOUSE_MOVE, moveCursor);
        Mouse.hide();

        function moveCursor(event:MouseEvent):void {
            pencil.x = event.stageX;
            pencil.y = event.stageY;
        }

        function startDrawing(event:MouseEvent):void {
            container_mc.graphics.lineStyle(20, 0xFFFFFF, 1);
            container_mc.graphics.moveTo(mouseX, mouseY);
            container_mc.addEventListener(MouseEvent.MOUSE_MOVE, makeLine);
        }

        function stopDrawing(event:MouseEvent):void {
            container_mc.removeEventListener(MouseEvent.MOUSE_MOVE, makeLine);
        }

        function makeLine(event:MouseEvent):void {
            container_mc.graphics.lineTo(mouseX, mouseY);
        }

        function inflash_saveImage():void {
            var myURLLoader:URLLoader = new URLLoader();
            var myBitmapSource:BitmapData = new BitmapData(container_mc.width, container_mc.height);
            // render the player as a bitmapdata
            myBitmapSource.draw(container_mc);
            // create the encoder with the appropriate quality
            var myEncoder:JPEGEncoder = new JPEGEncoder(80);
            // generate a JPG binary stream to have a preview
            var myCapStream:ByteArray = myEncoder.encode(myBitmapSource);
            var header:URLRequestHeader = new URLRequestHeader("Content-type", "application/octet-stream");
            var myRequest:URLRequest = new URLRequest("save.php");
            myRequest.requestHeaders.push(header);
            myRequest.method = URLRequestMethod.POST;
            myRequest.data = myCapStream;
            myURLLoader.load(myRequest);
        }

    Thanks, hamlet

    Read the article

  • Stretching width correctly to 100% of an inline-block element in IE6 and IE7

    - by Simon Lieschke
    I have the following markup, where I am attempting to get the right hand side of the second table to align with the right hand side of the heading above it. This works in IE8, Firefox and Chrome, but in IE6/7 the table is incorrectly stretched to fill the width of the page. I'm using the Trip Switch hasLayout trigger to apply inline-block in IE6/7. Does anyone know how (or even if) I can get the table only to fill the natural width of the wrapper element displayed with inline-block in IE6/7? You can see the code running live at http://jsbin.com/uyuva.

        <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
        <html>
        <head>
          <title>Test</title>
          <style>
            .wrapper {
              display: inline-block;
              border: 1px solid green;
            }
            /* display: inline-block triggers the wrapper element to have layout for IE 6/7.
               The trip switch then provides the inline component of the display behaviour.
               See http://www.brunildo.org/test/InlineBlockLayout.html for more details. */
            .wrapper {
              *display: inline;
            }
            table {
              border: 1px solid red;
            }
          </style>
        </head>
        <body>
          <h1>No width on table:</h1>
          <div class="wrapper">
            <h2>The right hand side of the table doesn't stretch to the end of this heading</h2>
            <table><tr><td>foo</td></tr></table>
          </div>
          text
          <h1>Width on table:</h1>
          <div class="wrapper">
            <h2>The right hand side of the table should stretch to the end of this heading</h2>
            <table style="width: 100%"><tr><td>foo</td></tr></table>
          </div>
          text
        </body>
        </html>

    Read the article

  • Doubt about adopting CI (Hudson) into an existing automated Build Process (phing, svn)

    - by maraspin
    OUR CURRENT BUILD PROCESS: We're a small team of developers (2 to 4 people depending on the project) who currently use Phing to deploy code to a staging environment before going live. We keep our code in an SVN repo, where trunk holds current active development; at certain times we make branches that we test and then (if successful) tag and export to the staging environment. If everything goes well there too, we finally deploy them to the production servers. The actions are highly automated, but always triggered by human intervention.

    THE DOUBT: We'd now like to introduce Continuous Integration (with Hudson) into the process; unfortunately we have a few doubts about syncing the activities, since we're afraid CI could interfere with our build process and cause problems. Considering that an automated CI cycle executes its actions at a certain frequency, we see only two possible cases for integration, each with its own problems.

    Case A: each CI cycle produces a new branch with its own name; we use that name to export the code manually (through Phing, as happens now) from SVN to the staging environment. The problem I see here is that (unless specific countermeasures are taken) the number of branches can grow out of control (suppose we commit often, so that we have a fresh new build/branch every N minutes).

    Case B: each CI cycle creates a new branch named, for instance, 'current', which is tagged with a unique name only when we manually decide to export it to staging; the 'current' branch is in any case deleted as soon as the next CI cycle starts. The problem we see here is that a new cycle could kick in while someone is tagging/exporting the 'current' branch to staging, thus creating an inconsistent build (but maybe I'm just being too pessimistic here, since I confess I don't know whether SVN offers some built-in protection against this).

    With all this being said, I was wondering if anyone with similar experiences could be so kind as to give us some hints on the subject, since none of the approaches depicted above looks completely satisfying to us. Is there something important we have left out of the overall picture? Thanks for your attention and, in advance, for your help!

    Read the article

  • JQuery Tooltip Plugin from Jorn

    - by Jeff Ancel
    I am thinking someone may have run across this one, but I'm not sure. At a high level, I am trying to roll over an input [type=text] and display a tooltip (with the contained value) using the plugin available at http://bassitance.de. I have to use titles and classes for validation on the specific elements, so I put an empty div around the input [type=text] to hold its value for the rollover.

    Issue: it won't hold the value of two text boxes at once. Once I put a value in the box on the right, the tooltip on the left goes away, and the same thing happens if I switch it around. I can't keep a tooltip on more than one element at a time. Here is the code (note: you will have to download the plugins referenced in the source, as I am not sure where the live versions are, if there are any).

        <link rel="stylesheet" href="/scripts/jquery-tooltip/jquery.tooltip.css" />
        <script type="text/javascript" src="/scripts/jquery-1.3.2.min.js"></script>
        <script type="text/javascript" src="/scripts/jquery-tooltip/jquery.tooltip.min.js"></script>
        <script type="text/javascript">
        $(function(){
            $("input").change(function(){
                var newTitle = $(this).val();
                $(this).parent().attr("title", newTitle);
                // re-init tool tip
                reload();
            });
            // Init tooltip
            reload();
        });

        reload = function(){
            $("div").tooltip();
        }
        </script>
        <body>
        <table border="1px solid black">
            <tr>
                <td title="hello">
                    <div>
                        <input type="text" value=""/>
                    </div>
                </td>
                <td>
                    <div>
                        <input type="text" value=""/>
                    </div>
                </td>
            </tr>
        </table>
        <div id="debug"></div>
        </body>
        </html>

    Read the article
