Search Results

Search found 23968 results on 959 pages for 'tail call'.


  • GRUB doesn't recognize partitions on one harddisk

    - by knizz
    I have a dualboot computer with Windows Vista (on hd0) and Ubuntu 9.10. The bootloader is GRUB and the windows bootloader lets me decide between Vista and Ubuntu-Installation (broken WuBi). But now (i don't know why that changed) I can't use start the windows-bootloader anymore. I tried "ls" on the grub-prompt and it gave me a list like: (hd0) (hd1) (hd1,0) (hd1,1) (hd1,2) ... (fd0) It recognizes all partitions of hd1 (the ubuntu-harddisk) but not of hd0(the win-disk). .. WHY? Here is the result of the "boot info script" for the technical details: Boot Info Script 0.55 dated February 15th, 2010 ============================= Boot Info Summary: ============================== => Grub 2 is installed in the MBR of /dev/sda and looks for (UUID=a7c510e3-2399-437b-ab92-fa609e48d63f)/boot/grub. => No boot loader is installed in the MBR of /dev/sdb sda1: _________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7 Boot sector info: No errors found in the Boot Parameter Block. Operating System: Windows Vista Boot files/dirs: /bootmgr /Boot/BCD /Windows/System32/winload.exe /wubildr.mbr /wubildr sda2: _________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7 Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files/dirs: sdb1: _________________________________________________________________________ File system: Boot sector type: Unknown Boot sector info: Mounting failed: mount: unbekannter Dateisystemtyp „“ sdb2: _________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7 Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files/dirs: sdb3: _________________________________________________________________________ File system: Bios Boot Partition Boot sector type: - Boot sector info: sdb4: _________________________________________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Ubuntu 9.10 Boot files/dirs: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img sdb5: _________________________________________________________________________ File system: swap Boot sector type: - Boot sector info: =========================== Drive/Partition Info: ============================= Drive: sda ___________________ _____________________________________________________ Platte /dev/sda: 640.1 GByte, 640135028736 Byte 255 Köpfe, 63 Sektoren/Spuren, 77825 Zylinder, zusammen 1250263728 Sektoren Einheiten = Sektoren von 1 × 512 = 512 Bytes Disk identifier: 0x52554d66 Partition Boot Start End Size Id System /dev/sda1 * 2,048 307,202,047 307,200,000 7 HPFS/NTFS /dev/sda2 307,202,048 1,250,258,943 943,056,896 7 HPFS/NTFS Drive: sdb ___________________ _____________________________________________________ Platte /dev/sdb: 640.1 GByte, 640135028736 Byte 255 Köpfe, 63 Sektoren/Spuren, 77825 Zylinder, zusammen 1250263728 Sektoren Einheiten = Sektoren von 1 × 512 = 512 Bytes Disk identifier: 0x00000000 Partition Boot Start End Size Id System /dev/sdb1 1 1,250,263,727 1,250,263,727 ee GPT GUID Partition Table detected. 
Partition Start End Size System /dev/sdb1 34 262,177 262,144 Microsoft Windows /dev/sdb2 262,178 1,131,253,933 1,130,991,756 Linux or Data /dev/sdb3 1,131,253,934 1,131,255,887 1,954 Bios Boot Partition /dev/sdb4 1,131,255,888 1,245,312,528 114,056,641 Linux or Data /dev/sdb5 1,245,312,529 1,250,263,694 4,951,166 Linux Swap blkid -c /dev/null: ____________________________________________________________ Device UUID TYPE LABEL /dev/sda1 AE1440441440122F ntfs /dev/sda2 3AE66E4DE66E0A09 ntfs data /dev/sdb2 5419D16119DAA4DE ntfs LaufwerkD /dev/sdb4 a7c510e3-2399-437b-ab92-fa609e48d63f ext4 /dev/sdb5 60a0143a-e01b-450a-bbd1-f22059e47b65 swap ============================ "mount | grep ^/dev output: =========================== Device Mount_Point Type Options /dev/sdb4 / ext4 (rw,errors=remount-ro) =========================== sdb4/boot/grub/grub.cfg: =========================== # # DO NOT EDIT THIS FILE # # It is automatically generated by /usr/sbin/grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s /boot/grub/grubenv ]; then have_grubenv=true load_env fi set default="0" if [ ${prev_saved_entry} ]; then saved_entry=${prev_saved_entry} save_env saved_entry prev_saved_entry= save_env prev_saved_entry fi insmod ext2 set root=(hd1,4) search --no-floppy --fs-uuid --set a7c510e3-2399-437b-ab92-fa609e48d63f if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=640x480 insmod gfxterm insmod vbe if terminal_output gfxterm ; then true ; else # For backward compatibility with versions of terminal.mod that don't # understand terminal_output terminal gfxterm fi fi if [ ${recordfail} = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/white ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### menuentry "Ubuntu, Linux 2.6.31-20-generic" { recordfail=1 if [ -n ${have_grubenv} ]; then save_env recordfail; fi set quiet=1 insmod ext2 set root=(hd1,4) search --no-floppy --fs-uuid --set a7c510e3-2399-437b-ab92-fa609e48d63f linux /boot/vmlinuz-2.6.31-20-generic root=UUID=a7c510e3-2399-437b-ab92-fa609e48d63f ro quiet splash initrd /boot/initrd.img-2.6.31-20-generic } menuentry "Ubuntu, Linux 2.6.31-20-generic (recovery mode)" { recordfail=1 if [ -n ${have_grubenv} ]; then save_env recordfail; fi insmod ext2 set root=(hd1,4) search --no-floppy --fs-uuid --set a7c510e3-2399-437b-ab92-fa609e48d63f linux /boot/vmlinuz-2.6.31-20-generic root=UUID=a7c510e3-2399-437b-ab92-fa609e48d63f ro single initrd /boot/initrd.img-2.6.31-20-generic } menuentry "Ubuntu, Linux 2.6.31-14-generic" { recordfail=1 if [ -n ${have_grubenv} ]; then save_env recordfail; fi set quiet=1 insmod ext2 set root=(hd1,4) search --no-floppy --fs-uuid --set a7c510e3-2399-437b-ab92-fa609e48d63f linux /boot/vmlinuz-2.6.31-14-generic root=UUID=a7c510e3-2399-437b-ab92-fa609e48d63f ro quiet splash initrd /boot/initrd.img-2.6.31-14-generic } menuentry "Ubuntu, Linux 2.6.31-14-generic (recovery mode)" { recordfail=1 if [ -n ${have_grubenv} ]; then save_env recordfail; fi insmod ext2 set root=(hd1,4) search --no-floppy --fs-uuid --set a7c510e3-2399-437b-ab92-fa609e48d63f linux /boot/vmlinuz-2.6.31-14-generic root=UUID=a7c510e3-2399-437b-ab92-fa609e48d63f ro single initrd /boot/initrd.img-2.6.31-14-generic } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { linux16 
/boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry "Windows Vista (loader) (on /dev/sda1)" { insmod ntfs set root=(hd0,1) search --no-floppy --fs-uuid --set ae1440441440122f chainloader +1 } ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### =============================== sdb4/etc/fstab: =============================== # /etc/fstab: static file system information. # # Use 'blkid -o value -s UUID' to print the universally unique identifier # for a device; this may be used with UUID= as a more robust way to name # devices that works even if disks are added and removed. See fstab(5). # # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc defaults 0 0 # / was on /dev/sdb4 during installation UUID=a7c510e3-2399-437b-ab92-fa609e48d63f / ext4 errors=remount-ro 0 1 # swap was on /dev/sdb5 during installation UUID=60a0143a-e01b-450a-bbd1-f22059e47b65 none swap sw 0 0 /dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8 0 0 /dev/fd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0 =================== sdb4: Location of files loaded by Grub: =================== 583.8GB: boot/grub/core.img 583.8GB: boot/grub/grub.cfg 579.7GB: boot/initrd.img-2.6.31-14-generic 580.0GB: boot/initrd.img-2.6.31-20-generic 579.7GB: boot/vmlinuz-2.6.31-14-generic 579.8GB: boot/vmlinuz-2.6.31-20-generic 580.0GB: initrd.img 579.7GB: initrd.img.old 579.8GB: vmlinuz 579.7GB: vmlinuz.old =========================== Unknown MBRs/Boot Sectors/etc ======================= Unknown BootLoader on sdb1 00000000 54 34 dc 3b 8b ff 6c fa 3e 59 3d 24 25 af 5f 9b |T4.;..l.>Y=$%._.| 00000010 72 f8 36 3d 56 30 22 fd c6 08 5e 39 7f dc 29 48 |r.6=V0"...^9..)H| 00000020 48 e5 24 52 77 b0 fc 64 b6 ce 48 c3 07 ce b5 81 |H.$Rw..d..H.....| 00000030 06 68 60 4f 6e fb 83 92 df 3a 54 b9 df 21 2a cd |.h`On....:T..!*.| 00000040 1e 2f e2 49 fe cf 81 2d 52 17 1a 4e 66 b4 f3 f0 |./.I...-R..Nf...| 00000050 41 25 e3 96 26 28 fe 19 61 72 75 f8 40 a3 b7 ef |A%..&(..aru.@...| 00000060 5f 79 dc cb 28 44 44 7c 9b 9a 7b 6c 4b 4b 60 0f |_y..(DD|..{lKK`.| 00000070 a9 97 87 bc 85 9f db bb d2 1a 88 9f aa 49 18 d5 |.............I..| 00000080 92 2d db 7e fe f7 8d 7a 18 c0 33 c5 bd 7a 46 07 |.-.~...z..3..zF.| 00000090 c8 27 13 66 94 49 62 9f bc 99 56 55 25 fb 94 a9 |.'.f.Ib...VU%...| 000000a0 3f b2 a7 0a 87 d0 a4 4e 51 f1 09 02 c4 29 eb ff |?......NQ....)..| 000000b0 26 3b 51 3e 5a 0c db ee a6 57 a7 c3 ba a1 74 90 |&;Q>Z....W....t.| 000000c0 ee 70 08 18 cc b8 d0 22 ce 96 c7 cb 68 40 98 20 |.p....."....h@. 
| 000000d0 49 3d 07 ec df d1 8d cf 19 bc 42 90 70 24 01 b4 |I=........B.p$..| 000000e0 28 cf c6 50 d3 95 5a 1b 18 15 33 c7 b2 a8 95 92 |(..P..Z...3.....| 000000f0 bb 93 fe 18 2b 81 c1 6b 9c 30 f1 65 50 6a 80 3d |....+..k.0.ePj.=| 00000100 74 37 a8 59 a6 51 8a 63 b6 d8 16 9f a9 47 2a 7c |t7.Y.Q.c.....G*|| 00000110 04 a7 fe 69 47 02 bf e9 b7 1b 7a ea 60 5c 3c 53 |...iG.....z.`\<S| 00000120 5b 10 78 dc 4d d2 a8 22 30 45 37 fb 56 06 9f 06 |[.x.M.."0E7.V...| 00000130 aa df cf 87 3a 3e cf 72 f2 e5 a6 c6 aa e2 7c 1c |....:>.r......|.| 00000140 64 c2 fc 80 ce 02 fc 7f 0f c6 60 81 bf cd 3b 5a |d.........`...;Z| 00000150 37 a5 38 1b 0c 1b 39 2e d6 f6 3d a2 36 e5 87 c3 |7.8...9...=.6...| 00000160 17 b5 fd ee 33 c7 ce a3 d9 c2 57 dc ee 85 48 9d |....3.....W...H.| 00000170 33 60 02 cd c5 83 44 44 ea b6 07 25 0a 4b a6 6e |3`....DD...%.K.n| 00000180 fc 51 42 cd 84 0b 65 b6 19 a1 e5 b2 eb 14 0c fa |.QB...e.........| 00000190 24 77 f5 44 6e 5d 39 dd b6 8e cc f8 30 fe 21 46 |$w.Dn]9.....0.!F| 000001a0 9c ff 95 c6 c7 b5 0a df 54 ca d2 ac bc 64 d0 97 |........T....d..| 000001b0 94 54 d9 29 0f 91 60 20 c3 e4 53 c2 b0 e4 40 72 |.T.)..` ..S...@r| 000001c0 7e 25 bc 81 06 ad 05 46 14 a7 e6 71 6b 5c db 9c |~%.....F...qk\..| 000001d0 0a 5e 76 23 ae 06 01 36 98 21 65 2c 90 e7 4b 1a |.^v#...6.!e,..K.| 000001e0 2a 2d 80 a5 48 db 9e 14 e0 9f e9 aa 00 e3 77 32 |*-..H.........w2| 000001f0 0f fd 94 db 55 a6 64 46 be ae ca de da ee 89 68 |....U.dF.......h| 00000200 =======Devices which don't seem to have a corresponding hard drive============== sdc sdd sde
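    According to the boot info summary, GRUB 2 lives in the MBR of /dev/sda (the Windows disk) while its files live on sdb4, so one thing worth checking from the GRUB prompt is whether the MBR partition-table module is available at all; without it, an MBR-partitioned disk shows up as a bare (hd0) with no partitions, which matches the ls output described. A hedged sequence to try at the grub> prompt (on the 1.97 builds that Ubuntu 9.10 shipped, the module may be called pc rather than part_msdos):
        grub> insmod part_msdos        # or: insmod pc, on older GRUB 2 builds
        grub> ls                       # (hd0,1) should now appear if this was the problem
        grub> set root=(hd0,1)
        grub> chainloader +1
        grub> boot
    If the partitions still do not appear, the next suspects are BIOS drive ordering (hd0 at the prompt may not be the disk Linux calls sda) and damage to sda's partition table outside the boot sector.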

    Read the article

  • Apache Caching and Expires configuration

    - by mcondiff
    I'm looking for the best possible caching/expires configuration for my specific situation. I realize that some sites have advocated turning ETags off:
        Header unset ETag
        FileETag None
    I know that I should use either Expires or Cache-Control, and in addition either Last-Modified or ETags (per the YSlow docs). I inherited a client's server that uses the following in .htaccess:
        <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf|xml|txt|html|htm)$">
            Header set Cache-Control "max-age=172800, public, must-revalidate"
        </FilesMatch>
    With this server I am not going to be able to rely on staff to rename images, CSS and JS in web applications, so I do not want to set the expiry far in the future without knowing (with good certainty) that most or all browsers will check whether content has changed. What I do not want is someone calling me to say the website is broken because they replaced an image and it's not showing up. But I do want to take as much advantage of caching and expires as I can while still ensuring that nearly all browsers will check with the server to see whether components have changed. I have access to both the .htaccess and Apache .conf files, and it is a single server; the content is not deployed on multiple servers. What would be the best .htaccess or .conf configuration to achieve my goals for this client's server? Thanks for your help.
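    One middle-ground sketch, assuming mod_headers is enabled and treating the one-day/one-hour lifetimes as illustrative values rather than anything the original post recommends: keep max-age short enough that browsers revalidate regularly, and let Apache's default Last-Modified handling turn those revalidations into cheap 304 responses.
        # .htaccess sketch: short-lived caching with regular revalidation
        <IfModule mod_headers.c>
            # static assets: cache for a day, then revalidate
            <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
                Header set Cache-Control "max-age=86400, public"
            </FilesMatch>
            # markup and text: revalidate after an hour
            <FilesMatch "\.(xml|txt|html|htm)$">
                Header set Cache-Control "max-age=3600, public, must-revalidate"
            </FilesMatch>
        </IfModule>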

    Read the article

  • error 20014 with mod_proxy

    - by punkish
    I have a strange situation. I need to call a program in cgi-bin from within a Perl script. When I try to do that with exec($program), I get:
        (20014)Internal error: proxy: error reading status line from remote server
        proxy: Error reading from remote server returned by ...
    The long story: I am calling mapserv (http://mapserver.org) as a CGI program from OpenLayers (http://openlayers.org). Ordinarily, my web site is served by Perl Dancer, but the mapserver calls are made directly to http://server/cgi-bin/mapserv from JavaScript. The Dancer web site is served by Starman behind an Apache2 proxy front-end. This is how it looks:
        [browser] -> http://server/app -> [apache2] -> proxy port 5000 -> Starman
                                              |
                                              +-> http://server/cgi-bin/mapserv -> [apache2] -> cgi-bin -> mapserv
    This is what I am trying to accomplish:
        [browser] -> http://server/app -> [apache2] -> proxy port 5000 -> Starman
                                              |
                     mapserv <-- cgi-bin <-- [apache2] <--+
    I saw this question re: the 20014 error, but that suggested solution didn't help. Any other hints?
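    A detail worth noting: exec() never returns; it replaces the running process, so calling it inside a Dancer route kills the Starman worker mid-request, and the Apache front-end then reports exactly this "error reading status line" failure. A minimal sketch of running the CGI binary as a child process instead (the environment variables and the mapserv path are assumptions; mapserv reads its CGI parameters from QUERY_STRING):
        # inside the Dancer route, run mapserv as a child and capture its output
        local $ENV{REQUEST_METHOD} = 'GET';
        local $ENV{QUERY_STRING}   = $query;          # hypothetical query string
        my $output = qx{/usr/lib/cgi-bin/mapserv};    # path is an assumption
        die "mapserv failed: $?" if $? != 0;          # $? holds the child's exit status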

    Read the article

  • Can't install NPM after installing Node on EC2 Linux instance?

    - by frequent
    This is my first attempt at getting a Node server set up on an Amazon EC2 Linux instance, and I think I made it quite far. The first problem I ran into was that the connection timed out while building Node, so I needed three attempts until I got this:
        LINK(target) /home/ec2-user/node/out/Release/node: Finished
        touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_header.stamp
        touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_provider.stamp
        touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_ustack.stamp
        touch /home/ec2-user/node/out/Release/obj.target/node_etw.stamp
        make[1]: Leaving directory `/home/ec2-user/node/out'
        ln -fs out/Release/node node
    which tells me "Node is done", although I'm not sure it is also working as it should. Following this, this and this tutorial, I'm now stuck at installing npm. I think I first cloned into the wrong folder, which always gave me Error 127, but even if I do this:
        cd ~
        git clone git://github.com/isaacs/npm.git
        cd npm
        sudo -s PATH=/usr/local/bin:$PATH make install
    I'm still getting this:
        # after cloning
        make[1]: Entering directory `/root/npm'
        node cli.js install
        bash: node: command not found
        make[1]: *** [node_modules/.bin/ronn] Error 127
        make[1]: Leaving directory `/root/npm'
        make: *** [man/man3/start.3] Error 2
    Question: since I'm pretty much a newbie at everything I'm trying here, can someone please tell me what I'm doing wrong and how to get npm to install? Also, in case I cloned into the wrong folder, is there a way to remove the "false clone", or is nothing written to disk until I call make install so I don't need to worry? Thanks for helping out!
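    The "node: command not found" line suggests that the root shell created by sudo -s simply cannot see the freshly built node binary. A sketch of one way out, assuming the build tree really is in /home/ec2-user/node as the log above shows (the paths come from the question; everything else is an assumption):
        # install node system-wide so it lands in /usr/local/bin for every user
        cd /home/ec2-user/node && sudo make install
        # then build npm against that node
        cd /home/ec2-user/npm && sudo make install
    As for the stray clone: git writes the repository to disk at clone time, so the leftover copy in /root/npm is just a directory and can be removed with rm -rf /root/npm once nothing needs it.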

    Read the article

  • XDEBUG/PHP doesn't dump profile even when set up properly?

    - by John D.
    I installed xdebug from source, but also tried my package manager (separately) and they both are loaded correctly (verified by restarting Apache and seeing the xdebug copyright info in phpinfo()) but they do not dump profiling information. Out of the 40 different attempts of configuration it logged once or twice but I lost what I did, I tried with first only loading the module in php.ini with no settings, but it didn't log to /tmp/. I tried many different settings but my current is now: xdebug.profiler_enable = Off xdebug.profiler_enable_trigger = 1 xdebug.profiler_output_dir = "/tmp/" xdebug.profiler_output_name = "profiler.%t" Of course I call my script through 127.0.0.1/test.php?XDEBUG_PROFILE, which is for enable_trigger. Do you know why it would not dump profiler information? nobody (Arch Linux) can write to /tmp/ as it has before, so I'm sure it is not a permissions error. Apache's error_log does not tell me anything about xdebug either, as it has loaded correctly. It just does not "work"! EDIT: I made a subfolder "xdebug_profiles" in /tmp/ and chown'ed it to nobody, and now it works flawlessly. I'm not sure why it couldn't write before, I guess it's just a caveat with nobody on Arch. I answered my own question , not enough points to answer it or comment, so consider this answered.
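    For anyone hitting the same wall, a compact version of the fix described in the edit (the group name is an assumption; on some systems the nobody user's group is nogroup):
        sudo mkdir -p /tmp/xdebug_profiles
        sudo chown nobody:nobody /tmp/xdebug_profiles
    with the matching php.ini settings:
        xdebug.profiler_enable = 0
        xdebug.profiler_enable_trigger = 1
        xdebug.profiler_output_dir = /tmp/xdebug_profiles
        xdebug.profiler_output_name = profiler.%t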

    Read the article

  • Making OpenSSL work on PHP Windows 2008 server with FastCGI

    - by KacieHouser
    I have been researching all day. Here is what I have done: In C:/PHP/php.ini and C:/PHP/php-cgi-fcgi.ini I have made the extension_dir = "C:/PHP/ext" I uncommented extension=php_openssl.dll I went to http://windows.php.net/download/ and got the thread safe version with the PHP 5.4 (5.4.8) version of DLL's In C:/PHP/ext I replaced the php_openssl.dll with the one I downloaded In System32 and SysWOW64 I added the following DLL's ssleay.dll libeay.dll I restarted the IIS server in the Server Manager under Web Server and stopped and started the World Wide Web Publishing Service That didn't work, so I tried same thing with the unthreaded versions. I still get: Fatal error: Call to undefined function ftp_ssl_connect() in C:\inetpub\wwwroot\REMOVED_dev\save_data.php on line 5 Here are related things from phpinfo(): System Windows NT DEV-WEB1 6.1 build 7601 (Windows Server 2008 R2 Standard Edition Service Pack 1) i586 Compiler MSVC9 (Visual C++ 2008) Architecture x86 Configure Command cscript /nologo configure.js "--enable-snapshot-build" "--enable-debug-pack" "--disable-zts" "--disable-isapi" "--disable-nsapi" "--without-mssql" "--without-pdo-mssql" "--without-pi3web" "--with-pdo-oci=C:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8=C:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8-11g=C:\php-sdk\oracle\instantclient11\sdk,shared" "--with-enchant=shared" "--enable-object-out-dir=../obj/" "--enable-com-dotnet" "--with-mcrypt=static" "--disable-static-analyze" "--with-pgo" Server API CGI/FastCGI Configuration File (php.ini) Path C:\Windows Loaded Configuration File C:\PHP\php-cgi-fcgi.ini Scan this dir for additional .ini files (none) Additional .ini files parsed (none) Registered PHP Streams php, file, glob, data, http, ftp, zip, compress.zlib, compress.bzip2, https, ftps, sqlsrv, phar Registered Stream Socket Transports tcp, udp, ssl, sslv3, sslv2, tls FTP support enabled Protocols dict, file, ftp, ftps, gopher, http, https, imap, imaps, ldap, pop3, pop3s, rtsp, scp, sftp, smtp, smtps, telnet, tftp openssl OpenSSL support enabled OpenSSL Library Version OpenSSL 0.9.8t 18 Jan 2012 OpenSSL Header Version OpenSSL 0.9.8x 10 May 2012 What am I missing here?
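    One thing worth checking before swapping more DLLs: ftp_ssl_connect() is defined by the FTP extension, and it only exists when that extension was built with SSL support, so loading php_openssl.dll by itself may not be enough on a given Windows build. A small probe script (nothing here is specific to this server; it just reports what the FastCGI SAPI actually sees):
        <?php
        var_dump(extension_loaded('openssl'));         // did php_openssl.dll load?
        var_dump(function_exists('ftp_ssl_connect'));  // the function the fatal error names
        echo php_ini_loaded_file(), PHP_EOL;           // which ini this SAPI really reads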

    Read the article

  • Windows Server 2008 Task Scheduler: Task Started (Task=100) but task did not complete (Task=102) when the result code is 2

    - by MacGyver
    Can someone give me a use case for setting up a Windows Server 2008 Task Scheduler task (we'll call this "test") that completes (action completed is task=201) with an error (result code=2)? This is event trigger code for another task (called "notification" that sends out an email based on the event history of the "test" task. I've got use cases for tasks that opens a program successfully and when a program fails to find the program. I'm just trying to think of how I can test a scenario when it finds the program, but something fails with warnings or errors. /* Failed - task started but had errors (result code of 2) */ <QueryList> <Query Id="0" Path="Microsoft-Windows-TaskScheduler/Operational"> <Select Path="Microsoft-Windows-TaskScheduler/Operational"> *[ System [ Provider[@Name='Microsoft-Windows-TaskScheduler'] and (Level=0 or Level=1 or Level=2 or Level=3 or Level=4 or Level=5) and (Task = 201) ] ] and *[ EventData [ Data [ @Name='TaskName' ]='\Tasks\test' ] ] and *[ EventData [ Data [ @Name='ResultCode' ]='2' ] ] </Select> </Query> </QueryList>
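    One easily reproducible use case: point the "test" task's action at a command that exits with status 2, since the ResultCode recorded with the action-completed event is the process exit code. A sketch using schtasks (the start time is illustrative):
        schtasks /Create /TN "\Tasks\test" /SC ONCE /ST 23:59 /TR "cmd.exe /c exit 2"
        schtasks /Run /TN "\Tasks\test"
    The task history should then show the task starting (100) and the action completing (201) with ResultCode 2, which is what the notification query above matches on.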

    Read the article

  • How can I install Satchmo?

    - by Jonathan Hayward
    I am trying to install Satchmo 0.9 on an Ubuntu 9.10 32-bit guest off of the instructions at http://bitbucket.org/chris1610/satchmo/downloads/Satchmo.pdf. I run into difficulties at 2.1.2: pip install -r http://bitbucket.org/chris1610/satchmo/raw/tip/scripts/requirements.txt pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9#egg=satchmo The first command fails because a compile error for how it's trying to build PIL. So I ran an "aptitude install python-imaging", locally copy the first line's requirements.text, and remove the line that's unsuccessfully trying to build PIL. The first line completes without reported error, as does the second. The next step tells me to change directory to the /path/to/new/store, and run: python clonesatchmo.py A little bit of trouble here; I am told that clonesatchmo.py will be in /bin by now, and it isn't there, but I put some Satchmo stuff under /usr/local, create a symlink in /bin, and run: python /bin/clonesatchmo.py This gives: jonathan@ubuntu:~/store$ python /bin/clonesatchmo.py Creating the Satchmo Application Traceback (most recent call last): File "/bin/clonesatchmo.py", line 108, in <module> create_satchmo_site(opts.site_name) File "/bin/clonesatchmo.py", line 47, in create_satchmo_site import satchmo_skeleton ImportError: No module named satchmo_skeleton A find after apparently checking out the repository reveals that there is no file with a name like satchmo*skeleton* on my system. I thought that bash might be prone to take part of the second pip invocation's URL as the beginning of a comment; I tried both: pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9\#egg=satchmo pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9#egg=satchmo Neither way of doing it seems to take care of the import error mentioned above. How can I get a Satchmo installation under Ubuntu, or at least enough of a Satchmo installation that I am able to start with a skeleton of a store and then flesh it out the way I want? Thanks, Jonathan
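    The ImportError suggests the editable install never actually put the Satchmo code on the Python path, so a quick sanity check is worth more than re-running the clone script. A sketch (the src/ location is only where pip -e checkouts usually land, so treat it as an assumption):
        pip freeze | grep -i satchmo              # did the -e install register at all?
        python -c "import satchmo_skeleton"       # the module the traceback says is missing
        ls src/                                   # pip -e typically checks sources out here
    If the import fails, re-running the second pip command from inside the store's virtualenv (and fixing whatever it reports) is more likely to be the real fix; copying files under /usr/local and symlinking clonesatchmo.py into /bin only hides the path problem.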

    Read the article

  • hMail server - sending copy of an e-mail changing the sender

    - by Beggycev
    Dear All please help me with following request. I am using hMail server in a company(test.com) and have several hundred of guest e-mail accounts ([email protected]). I need to accomplish this: When any of the guest e-mails receives a message(either from internal or external sender) this e-mail(or its copy) is sent to another address "[email protected]" which is the same for all of these guest e-mails. But I need the sender to be identified as the [email protected] not as the original sender which happens when I use forwarding. I tried to prepare a simple VBS script using the OnAcceptMessage event to accomplish this. and it works on my testing machine without internet connectivity but not in the production environment. To be specific, if I send an e-mail to [email protected] in my test env it is delivered to the [email protected] with [email protected] being a sender. But in the production env the e-mail stays in the guest mailbox with the original sender. I am interested in any solution, using a rule in hMail or script, anything is welcome. Thank you for any help! The script: Sub OnAcceptMessage(oClient, oMessage) 'creating application object in order to perform operations as hMail server administrator Dim obApp Set obApp = CreateObject("hMailServer.Application") Dim adminLogin Dim adminPassword 'Enter actual values for administrator account and password 'CHANGE HERE: adminLogin = "Admin_login" adminPassword = "password" Call obApp.Authenticate(adminLogin, adminPassword) Dim addrStart 'Take first 5 characters of recipients address addrStart = Mid(oMessage.To, 1, 5) 'if the recipient's address start with "guest" if addrStart = "guest" then Dim recipient Dim recipientAddress 'enter name of the recipient and respective e-mail address() 'CHANGE HERE: recipient = "FINAL" recipientAddress = "[email protected]" 'change the sender and sender e-mail address to the guest oMessage.FromAddress = oMessage.To oMessage.From = oMessage.To & "<" & oMessage.To & ">" 'delete recipients and enter a new one - the actual mps and its e-mail from the variables set above oMessage.ClearRecipients() oMessage.AddRecipient recipient, recipientAddress 'save the e-mail oMessage.save end if End Sub
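    One difference between the test box and production that is easy to rule out: OnAcceptMessage inspects oMessage.To, which is the To: header rather than the envelope recipient, and for mail arriving from outside it may contain a display name (for example "Guest One <[email protected]>"), in which case Mid(oMessage.To, 1, 5) never equals "guest". A throwaway diagnostic, assuming the hMailServer scripting host exposes EventLog.Write as its documentation describes:
        Sub OnAcceptMessage(oClient, oMessage)
            ' temporary: log what production actually sees for each message
            EventLog.Write("OnAcceptMessage To=[" & oMessage.To & "]")
        End Sub
    The logged value should make it obvious whether the "guest" prefix test, or the event itself, is what differs between the two environments.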

    Read the article

  • Occasional "Could not load file or assembly" in asmx WebService on IIS and DFS

    - by jesse-mcdowell
    We have a handful of ASMX web services hosted on two identical Windows Server 2003 boxes. The virtual directory for the web services lives on a DFS share, and both servers point to the same share. We have a load balancer between the internet and the two web servers. At a seemingly random interval (right now about twice per week), when a user tries to access a method on the web service, IIS returns the error "Could not load file or assembly" for one of the assemblies used in the method call, and continues reporting it each time the method is called until the app pool is recycled. We haven't found any distinguishable pattern to the problem. This is what I know:
        - the missing assembly varies (but it's always a home-brew assembly)
        - the Web Service method that fails varies
        - there is no noticeable pattern to the times or intervals at which the problem appears
        - there are no admin users accessing the servers when the problem appears
        - the failing method will work correctly on one server and fail on the other, even though both point to the same bin folder
        - the problem can always be corrected by recycling the app pool and making no other changes
    I have enabled the Assembly Binder Log and know that the binder is looking in the correct location for the file. Our assemblies are compiled for .NET 3.5.

    Read the article

  • SSH session closing whilst virtualenv session stays open (I think)

    - by ing0
    I've been developing some sites using Flask recently (running on Debian within a virtualenv), and when I am testing I can run the app on a port, let's say port 5000. I run the script like so:
        . env/bin/activate    <- go into the virtual environment
        python file.py        <- run the Python script
    and I am given this message:
        Running on http://0.0.0.0:5000/
    This all works great and I can access my site on that port fine. However, my rubbish ISP always does this thing where it resets something around 1am every morning. I have no idea what this is; everything runs as normal, but I always get disconnected from any open SSH sessions. That leaves the app running, and all I can do is call lsof -i, which shows me the process, but if I kill it and rerun it, things get weird. The "Running on http://0.0.0.0:5000" message still appears, but I cannot connect to it anymore. I've tried changing the port number, and it seems the only thing that works is trying again later or on another day. I'm assuming that something on my server resets in between these times, and I would like to think it is the virtualenv session timing out, but I cannot find out how to do this manually; does anyone know?
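    The symptom (the server keeps running but new connections fail after the SSH drop) is usually easier to avoid than to diagnose: detach the dev server from the SSH session entirely so the nightly reset cannot take it down with the terminal. A minimal sketch using the commands already in the question:
        . env/bin/activate
        nohup python file.py > flask.log 2>&1 &     # survives the SSH session closing
    or keep an interactive session around with screen or tmux (screen -S flask, run the app, detach with Ctrl-A d, and screen -r flask to reattach later). If an old process is still bound to the port, kill it first; lsof -i :5000 narrows the listing to that port.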

    Read the article

  • MediaWiki Math Extension Out of Sync

    - by kfmfe04
    I am trying to run MediaWiki 1.19.2 on Ubuntu Linux (3.0.0-26-generic #42-Ubuntu AMD 64). It installs fine via tar.gz, however, I run into problems when trying to install the Math extension (via git clone from its repository). I get the following error in the Apache logs: PHP Fatal error: Call to undefined method FSFileBackend::getRootStoragePath() in /var/www/abc.com/html/mediawiki-1.19.2/extensions/Math/Math.body.php on line 361, referer: http://intra.abc.com/wiki/index.php?title=Quality&action=submit so apparently, the latest Math extension is out-of-sync with MediaWiki 1.19.2; the latest Math extension is apparently using post-1.19.2 methods. I tried looking through the git logs for some indication (a tag, perhaps?) of which revision of Math worked with 1.19.2, but no luck. Anyone have any suggestions? Thanks in advance. Edit: Just my luck - after posting this message, I found this link. I think this version should do the job. Consider this question closed/answered.
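    The self-found fix amounts to checking out a Math revision that matches the 1.19 core, rather than the master branch that git clone gives you by default. A sketch, assuming the extension repository carries the usual per-release branches:
        cd extensions/Math
        git checkout REL1_19      # branch name follows the MediaWiki convention; verify with: git branch -r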

    Read the article

  • Birt on Tomcat unable to find JARs

    - by LostInTheWoods
    First, my setup: BiRT Runtime: 3.7.2. Ubuntu 10.04 Tomcat 6 Sun Java 1.6.0 I have a jar file I want to deploy onto the Tomcat server so it is usable by the runtime, so I placed the jar file in /var/lib/tomcat6/webapps/birt/WEB-INF/lib. As I understand it this is the default location for JAR files that are going to be used by a BiRT report. But the jar file is not accessible by the report that is trying to call it. In the BiRT logs I see: Error evaluating Javascript expression. Script engine error: ReferenceError: "DynDSinfo" is not defined. (/report/data-sources/oda-data-source[@id="54"]/method[@name="beforeOpen"]#20) Script source: /report/data-sources/oda-data-source[@id="54"]/method[@name="beforeOpen"], line: 0, text: __bm_beforeOpen() org.eclipse.birt.data.engine.core.DataException: Fail to execute script in function __bm_beforeOpen(). Source: "DynDSinfo" is the class I am trying to reference.. and now for the kicker... this works fine on Tomcat6 on Windows 7. The same files in the same places. So is there some additional configuration or some environmental variable that needs to be set, or something different on the Linux (Ubuntu) platform? All help or ideas gratefully received, Stephen
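    Since the identical files work under Tomcat on Windows 7, the difference is likely environmental rather than a BiRT configuration issue; the first thing worth ruling out on Ubuntu is whether the tomcat6 user can actually read the jar. A quick check (the jar name is a placeholder):
        ls -l /var/lib/tomcat6/webapps/birt/WEB-INF/lib/dyndsinfo.jar
        sudo -u tomcat6 test -r /var/lib/tomcat6/webapps/birt/WEB-INF/lib/dyndsinfo.jar && echo readable
        # if not readable: sudo chmod 644 <jar>, and check the parent directories too
    Restarting Tomcat after dropping a jar into WEB-INF/lib is also required; classes there are only picked up when the webapp reloads.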

    Read the article

  • Menu tab completion for recent history in zsh

    - by dat5h
    I am interested in a potential zle widget for zsh. Is there a way to build a widget that mimics the kill-completion selectable menu? Essentially I want to be able to press , tab in vi-command-mode, or maybe !-tab-completion at the shell and get a list of recent history (or related history compared what is already entered at the commandline) that allows me to scroll through it and possibly select a relevant function to call or compare similar calls. Looking through the manual I stumbled onto a similar widget that I have mapped like so: # tab completion history menu (vicmd) autoload -z history-beginning-search-menu zle -N history-beginning-search-menu-space-end history-beginning-search-menu bindkey -M vicmd "\t" history-beginning-search-menu-space-end # emacs binding could be "\e\t"? (I wouldn't know) Therefore, if I enter vicmd and hit tab when I enter something like "grep", then I get a list of all grep calls in history. It also asks me for the list-number and it will perform the numbered item in history. If I enter a space and then try this, it lists ALL of my history history. This is fairly close to what I want, but there are some problems. For example, 1) it prints the entire list of relevant history and does not check the number of lines of the screen so it could easily blow up the space on the terminal; 2) when I type in numbers for selecting an item in history it does not show me the numbers I type, so I may make a mistake and have to start over again; 3) I would love to be able to hook in appearance tweaks. I was wondering if there exists more updated version of this widget or if there is any way to look at the source for kill-completion or history-beginning-search-menu to see if I could think of a way to do it.
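    If the full-screen behaviour of history-beginning-search-menu turns out to be a dead end, the non-menu widgets that ship with zsh cover much of the underlying goal (cycle through only the history entries that start with what is already typed) without the display problems; this is offered as an alternative rather than a fix for the menu widget itself:
        # in vi command mode, walk matching history with k/j instead of a menu
        bindkey -M vicmd 'k' history-beginning-search-backward
        bindkey -M vicmd 'j' history-beginning-search-forward
    A genuinely interactive, screen-aware menu would mean building a widget on top of the completion system (menu selection via zsh/complist), which is a larger project than a pair of bindkey lines.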

    Read the article

  • Issues with creating USB bootable Mountain Lion

    - by Sidd
    I am trying to set up a triple boot Windows 8, Mountain Lion, and Ubuntu. I am stuck though. I have got Windows 8 on a partition, and I am trying to get Mountain Lion on there at this point. I installed a VMware with a Snow Leopard 10.6.2 image on the Windows 8 platform. I used the disk utility in this program in order to get Mountain Lion on there. This is what i did specifically: I got the installesd.dmg. I 'mounted' that file or whatever you call it, and out came something along the lines of "Install Mountain Lion OS x" (something like that - it was like a submenu under the installesd.dmg in the disk utility). I got my PNY 8 gb Attache Flash Drive and went to the Erase tab of disk utility. I erased it using the Mac OS Extended (Journaled) setting and called it "Mac". I went to the Restore tab, dragged "Mac" into destination, and dragged "Install Mountain Lion OS x" to the source. Everything seemed to go well, but it didn't. When trying to boot from the flash drive (and yes, I set the BIOS correctly), it skipped it, and loaded Windows 8 normally as if nothing was plugged in. When I try looking at the flash drive in windows 8, it comes up as a 200 mb capacity drive labeled "EFI" with nothing in it (remember, it was 8gb in the beginning). I downloaded Plop Boot Manager, but it did not recognize a USB being plugged in. Does anyone know how I could fix this?

    Read the article

  • atl90.dll version 9.0.30729.4148 is missing in WinSxS folder

    - by mkva
    I have the following problem: when starting Visual Studio 2008, it says "Cannot find one or more components. Please reinstall the application." and stops. With the help of Sysinternals ProcessMonitor, I found out that Visual Studio could not load the atl90.dll 9.0.30729.4148 from the WinSxS folder. I tried to manually copy the older atl90.dll 9.0.30729.1 with the result that Visual Studio works again. Now I call this a dirty workaround, and not a solution. Plus I still don't know the reason why the atl90.dll disappeared in the first place. So my questions: - Does anyone know of a reason why this might have happened? - Does anyone know a real solution to the problem, e.g. a Microsoft download that includes the atl90.dll in the correct version 9.0.30729.4148 that installs into WinSxS? Some details: - WinXp SP3 - missing DLL: C:\WINNT\WinSxS\x86_Microsoft.VC90.ATL_1fc8b3b9a1e18e3b_9.0.30729.4148_x-ww_353599c2\atl90.dll - workaround DLL: C:\WINNT\WinSxS\x86_Microsoft.VC90.ATL_1fc8b3b9a1e18e3b_9.0.30729.1_x-ww_d01483b2\atl90.dll - manifests in WinSxS seem to be alright, but unfortunately all point to the missing version 9.0.30729.4148 Thanks, Markus
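    Two suggestions, offered as such rather than a confirmed fix: 9.0.30729.4148 is the ATL version shipped by the ATL security update to the Visual C++ 2008 SP1 redistributable, so reinstalling that updated redistributable (or repairing Visual Studio 2008 SP1 together with its ATL security update) should recreate the missing WinSxS directory. Before doing so it is worth listing what is actually present:
        dir C:\WINNT\WinSxS\x86_Microsoft.VC90.ATL*
    If only the ...30729.1 folder shows up, the 4148 assembly was removed rather than merely unregistered, which would be consistent with an uninstaller or cleanup tool having pruned WinSxS.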

    Read the article

  • Managing Many External Hosts Using EC2 and Route 53

    - by futureal
    Looking for a "best practice" answer to managing externally-addressable hosts using the combination of Amazon EC2 and Amazon Route 53, without using Elastic IPs for each host. In my scenario I will have 30+ hosts that need to be accessible from outside EC2, so directly using internal DNS will not work. In the past, I have addressed hosts by assigning an elastic IP to that host (let's say, 55.55.55.55) and then creating an associated A record. For example, let's say I want to create "ec2-corp01.mydomain.com" I might do: ec2-corp01.mydomain.com. A 55.55.55.55 300 Then on that EC2 instance, I would assign the Elastic IP of 55.55.55.55, and everything works fine. Of course, to make this work, I need to have one Elastic IP per instance, which is something I'd like to avoid if possible; I'd like the infrastructure to be more dynamic. So my thought is to try something like: Create a script that queries the internal EC2 tools to determine an instance's private hostname On instance boot, call that script to determine its hostname, and then using the command-line Route 53 interface to find and update that hostname to its current internal hostname Since the host will have a relatively low TTL (let's say 300 as above, or 5 minutes) it should take effect pretty quickly Is this a good idea? Is there a better or more widely accepted way to handle it? If it IS a good idea, what type of record should I be creating? A CNAME that points to the internal host, like ec2-55-55-55-55.compute-1.amazonaws.com? Is an A record better or worse? Thanks!

    Read the article

  • gpfs: adding a new nsd server to a cluster

    - by alessandra
    I have a gpfs cluster composed by 10 linux nodes, managed by a primary server A, which also act as nsd server for a first stack of disks. I attached a new jbod to one of the nodes (call it node B), which I would like to become a nsd server for this new stack of disks, but still be included in the cluster so that the disks are available to all the nodes. Node B is connected to the cluster via ethernet. How can I make the new nsd seen by all the nodes of the cluster? I can create the new nsd but when trying to create the filesystem on node B it the command mmcrfs times out. It looks like the nodes of the cluster cannot understand the filesystem location even if I specify them attached to server B in the description file. Would it be better to remove node B from the cluster, create a cluster on its own with its attached filesystem and connect it remotely with the previous cluster? Or a clustered NFS solution would apply better? Can you please give me any suggestion?

    Read the article

  • Looking for a help desk ticketing system..

    - by Dan
    Hi guys, I'm looking for a good help desk ticketing solution. It must perform the following actions for it to be useful:
        1. It needs to have a single point of contact via email, e.g. [email protected]
        2. If we receive a telephone call (or an email outside of the system) we need to be able to create a ticket as if it had been added via the single point of contact; this needs to be done with ease in order to save time.
        3. Certain people within our organisation deal with certain customers, so if the email or manually entered support call as mentioned in point 2 is recognised as belonging to a particular person in our organisation, it needs to be sent to them or put in their queue for them to work on.
        4. If a person is out of the office or sick, any tickets sent to them must be forwarded to somebody else or put into a separate pool of tickets that anybody can access. Perhaps have an agent that sorts through tickets in the pool and assigns them to anybody who is available, preferably the person with the fewest tickets in their queue.
        5. Once a customer emails and the system logs it, they immediately get a response with a ticket number and maybe details of who is dealing with the problem.
        6. Any correspondence relating to a particular ticket is automatically grouped into a single thread and not turned into a load of separate tickets, i.e. the system scans incoming email subjects for ticket numbers and associates a message with an existing ticket if that number exists.
    Any help is much appreciated. Thanks. P.S. I have taken a look at OTRS but I'm not feeling it, so unless someone can convince me I guess I'm after an alternative.

    Read the article

  • Run script before shutdown/restart

    - by dtbarne
    I'd like to run a PHP script when an instance is told to shut down, but of course before it actually finishes shutting down. My particular script just pushes some log files from the local partition to another server. I've got the gist of how this process works, but I need some clarification. Here is how I understand it; please correct me if I'm wrong:
        1. Create an executable script in /etc/init.d (let's call it /etc/init.d/push-logs).
        2. Create a symlink to /etc/init.d/push-logs from /etc/rc0.d (shutdown) and /etc/rc6.d (reboot). The name should be KXXpush-logs.
    Here are my questions:
        1. Am I understanding this correctly?
        2. For #2 above, it sounds like the lower the XX the better; is there a number that is too low? Does it matter if it shares a number with another script?
        3. Does the script in /etc/init.d/push-logs HAVE to follow the standard init.d template (supporting start/stop, etc.)? This doesn't really apply to my use case. If possible I just want the script to be the following:
            #!/bin/sh
            #
            # Run PHP file prior to shutdown
            #
            /usr/bin/php /path/to/php_file.php
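    To answer the template question in passing and sketch the wiring (the K10 priority is an arbitrary choice): the K?? links under /etc/rc0.d and /etc/rc6.d are invoked by the rc machinery with the single argument "stop", so the script does not need the full init.d template, but it should at least tolerate being called that way.
        # register the shutdown/reboot hooks
        sudo chmod +x /etc/init.d/push-logs
        sudo ln -s ../init.d/push-logs /etc/rc0.d/K10push-logs
        sudo ln -s ../init.d/push-logs /etc/rc6.d/K10push-logs
    and a version of the script that only reacts to "stop":
        #!/bin/sh
        # push logs to the remote server before the machine goes down
        case "$1" in
          stop)
            /usr/bin/php /path/to/php_file.php
            ;;
          *)
            : # nothing to do for start/status/etc.
            ;;
        esac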

    Read the article

  • How To Remove Bottleneck with Squid Caching Proxy

    - by Volomike
    I'm more of a LAMP web developer trying to help the sysop. When I joined a project, I inherited some old PHP spaghetti code. Part of that code goes out to a third-party website (let's call it thirdparty.com) and pulls down content with an HTTP GET request. Unfortunately, the way the code is designed, it needs to do this several times a minute. When we looked at the bottlenecks on the server with 'netstat -a', we saw connections to thirdparty.com constantly open, even though this content only needs to be gathered once a day. What I need to know is whether the Squid caching proxy server is the solution we need. I'm guessing it might let us have something pretend to be thirdparty.com on the network: if the web server needs to query thirdparty.com, it hits Squid instead, and Squid can then determine whether to supply content from cache or go to thirdparty.com for fresh content. Is this the solution we need? And second, is it easy to configure Squid to cache only thirdparty.com requests?
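    Squid can do this as an ordinary forward proxy: the PHP code keeps requesting thirdparty.com but is pointed at Squid, and Squid serves from cache whenever its freshness rules allow. A compressed sketch of the two halves (the lifetimes and proxy port are illustrative, and forcing a 24-hour lifetime deliberately ignores whatever caching policy thirdparty.com publishes):
        # /etc/squid/squid.conf (excerpt)
        http_port 3128
        acl thirdparty dstdomain .thirdparty.com
        cache allow thirdparty
        cache deny all                                        # cache nothing except thirdparty.com
        refresh_pattern thirdparty\.com 1440 100% 1440 override-expire
    and on the PHP side, route the existing HTTP GET through the proxy, for example with cURL:
        curl_setopt($ch, CURLOPT_PROXY, 'http://127.0.0.1:3128');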

    Read the article

  • Dismount USB External Drive using powershell

    - by JC
    Hello, I am attempting to dismount an external USB drive using PowerShell and I cannot successfully do this. This is the script I use:
        # get the Win32_Volume object representing the volume I wish to eject
        $drive = Get-WmiObject Win32_Volume -filter "DriveLetter = 'F:'"
        # call Dismount on that object, thereby ejecting the drive
        $drive.Dismount($Force, $Permanent)
    I then check My Computer to see whether the drive is unmounted, but it is not. The boolean parameters $Force and $Permanent have been tried in different permutations to no avail. The exit code returned by the Dismount call changes when the parameters are toggled:
        (0,0) = exit code 0
        (0,1) = exit code 2
        (1,0) = exit code 0
        (1,1) = exit code 2
    The documentation for exit code 2 indicates that existing mount points are the reason it cannot dismount, although I am trying to dismount the only mount point that exists, so I am unsure what this exit code is trying to tell me. Having already trawled the web for people experiencing similar problems, I have only found one additional command to try:
        # executed after the .Dismount() command
        $drive.Put()
    This additional command does not help. I am running out of things to try, so any assistance anyone can give me would be greatly appreciated. Thanks.
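    One thing that stands out in the script as posted: $Force and $Permanent are never assigned, so Dismount() is being called with null arguments. A sketch with explicit booleans and the WMI return value surfaced (a 0 return does not by itself prove the drive letter disappeared, so checking afterwards is part of the sketch):
        $drive = Get-WmiObject Win32_Volume -Filter "DriveLetter = 'F:'"
        $result = $drive.Dismount($true, $false)    # Force = true, Permanent = false
        $result.ReturnValue                         # 0 = success per the Win32_Volume docs
        Get-WmiObject Win32_Volume -Filter "DriveLetter = 'F:'"   # should return nothing if it worked
    If the goal is actually "safely remove" (eject the USB device) rather than just removing the mount point, Dismount on the volume may not be enough on its own; that operation targets the volume, not the underlying device.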

    Read the article

  • How to configuration keepalived on Amazon EC2?

    - by oeegee
    I read an article, "Keepalived over GRE tunnel for failover on VPS environment" (http://blog.killtheradio.net/how-tos/keepalived-haproxy-and-failover-on-the-cloud-or-any-vps-without-multicast/), but I don't know how to configure this, or even what to call this architecture. I only know how to set up a Master/Backup configuration in keepalived, and I want to understand how keepalived works here. I want to design this:
        XMPP Server (EC2)
                |
        -------------------------------------------------
        keepalived Master (EC2)  -  keepalived Backup (EC2)
        HAProxy #1                  HAProxy #2
        -------------------------------------------------
                |
        Cassandra #1  Cassandra #2  Cassandra #3  Cassandra #4
    Thanks! What I really want to know is how to run keepalived with the unicast patch module, since ELB is expensive. This is the first overall design:
        [Flow] ELB -- XMPP Server -- ELB -- Cassandra
        ELB
         |
        XMPP #1  XMPP #2  XMPP #3  XMPP #4
         |
        ELB
         |
        Cassandra #1  Cassandra #2  Cassandra #3  Cassandra #4
    and the changed first design:
        [Flow] ELB -- XMPP Server -- HAProxy Master (Cassandra farm) -- Cassandra
        ELB
         |
        XMPP #1  XMPP #2  XMPP #3  XMPP #4
         |
        -------------------------------------------------
        keepalived Master (EC2)  -  keepalived Backup (EC2)
        HAProxy #1                  HAProxy #2
        -------------------------------------------------
         |
        Cassandra #1  Cassandra #2  Cassandra #3  Cassandra #4
    This is the second design; is it OK?
        [Flow] ELB -- HAProxy (XMPP farm) -- XMPP Server -- HAProxy (Cassandra farm) -- Cassandra
        ELB
         |
        HAProxy #1  HAProxy #2  HAProxy #3  HAProxy #4
        XMPP #1     XMPP #2     XMPP #3     XMPP #4
         |
        Cassandra #1  Cassandra #2  Cassandra #3  Cassandra #4
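    On the keepalived side, the unicast support the question refers to boils down to telling each VRRP instance its own address and its peer's address explicitly, so no multicast is needed; on EC2 the "virtual IP" part additionally has to be moved via the EC2 API (reassigning an Elastic IP or secondary private IP from a notify script), because gratuitous ARP does not work there. A skeleton for one HAProxy node (all addresses and the script path are placeholders; the backup node mirrors it with state BACKUP and a lower priority):
        vrrp_instance VI_1 {
            state MASTER
            interface eth0
            virtual_router_id 51
            priority 101
            unicast_src_ip 10.0.0.10        # this node's private IP (placeholder)
            unicast_peer {
                10.0.0.11                   # the other HAProxy node (placeholder)
            }
            notify_master "/usr/local/bin/takeover.sh"   # hypothetical: reassign the EIP/secondary IP here
        }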

    Read the article

  • ISPConfig dovecot status=bounced (user unknown)

    - by Ivan Dokov
    Before you point me to Google or a Server Fault search: I want to say that I've searched a lot and tried some "fixes", and they didn't help. I have ISPConfig 3 installed on an Ubuntu 12.04 LTS server. The server hosts several domains; let's call the main domain example.com, and I also have demo.com. There are several email addresses on each domain. The status of mail sent between these addresses is:
        [email protected] - [email protected] (Success)
        [email protected] - [email protected] (Failure)
    The failure comes with this error:
        postfix/pipe[31311]: 8E72ED058D: to=<[email protected]>, relay=dovecot, delay=0.1, delays=0.03/0/0/0.07, dsn=5.1.1, status=bounced (user unknown)
    I saw the fix of removing example.com from mydestination = localhost, localhost.localdomain in /etc/postfix/main.cf, but it didn't help. An important detail is that the example.com MX records are Google's: we are using Google Apps for this domain in order to use Gmail's servers. I think the problem is that the mail server is not looking up the MX records of the domain; it knows the domain is set up on this server, so it looks for the destination mailbox locally instead of on Google's servers. I've been stuck on this for several days. Thanks for your help in advance!
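    The guess at the end is the usual explanation: Postfix delivers locally whenever a domain appears in one of its local address classes, regardless of MX records. A quick way to see which class is claiming example.com (ISPConfig manages these through its virtual_* maps, so the fix itself is normally "remove the mail domain in ISPConfig" rather than hand-editing main.cf):
        postconf mydestination virtual_mailbox_domains virtual_alias_domains relay_domains
        # if example.com shows up via a hash/mysql map, query that map directly, e.g.:
        postmap -q example.com mysql:/etc/postfix/mysql-virtual_domains.cf   # ISPConfig's usual path; verify locally
    Once example.com is no longer listed as a local or virtual domain, Postfix falls back to an MX lookup and the mail goes to Google.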

    Read the article
