Search Results

Search found 17278 results on 692 pages for 'directory conventions'.


  • VirtualHost entries get overwritten when the Apache httpd.conf is rebuilt

    - by Amitabh
    Background: We have been trying to get a wildcard SSL working on multiple sub domains on a single dedicated address.. We have two sub domains next.my-personal-website.com and blog.my-personal-website.com Part of our strategy has been to edit the httpd.conf and add the NameVirtualHost xx.xx.144.72:443 directive and the virtualhost entries for port 443 for the subdomains there. This works good if we just edit the httpd.conf, add the entries, save it and restart the apache. The problem: But if we add a new sub domain from cpanel or we run the # /usr/local/cpanel/bin/apache_conf_distiller --update # /scripts/rebuildhttpdconf the virtualhost entries that we added manually are no more there in the newly generated httpd.conf file. Only the virtualhost entry for the main domain for port 443 that was there before we made edits to the httpd.conf is there(assuming we are not discussing virtualhost entries for port 80). I understand we need to put the new virtualhost entries in some include files as mentioned here in the cpanel documentation. But am not sure where to. So the question would be where do I put the NameVirtualHost xx.xx.144.72:443 directive and the two virtualhost directive for port 443, so that they are not overwritten when httpd.conf is rebuilt/regenerated later. Virtualhost entries: The two virtualhost entries for the subdomains are: <VirtualHost xx.xx.144.72:443> ServerName next.my-personal-website.com ServerAlias www.next.my-personal-website.com DocumentRoot /home/myguardi/public_html/next.my-personal-website.com ServerAdmin [email protected] UseCanonicalName On CustomLog /usr/local/apache/domlogs/next.my-personal-website.com combined CustomLog /usr/local/apache/domlogs/next.my-personal-website.com-bytes_log "%{%s}t %I .\n%{%s}t %O ." ## User myguardi # Needed for Cpanel::ApacheConf <IfModule mod_suphp.c> suPHP_UserGroup myguardi myguardi </IfModule> <IfModule !mod_disable_suexec.c> SuexecUserGroup myguardi myguardi </IfModule> ScriptAlias /cgi-bin/ /home/myguardi/public_html/next.my-personal-website.com/cgi-bin/ SSLEngine on SSLCertificateFile /etc/ssl/certs/my-personal-website.com.crt SSLCertificateKeyFile /etc/ssl/private/my-personal-website.com.key SSLCACertificateFile /etc/ssl/certs/my-personal-website.com.cabundle CustomLog /usr/local/apache/domlogs/next.my-personal-website.com-ssl_log combined SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown <Directory "/home/myguardi/public_html/cgi-bin"> SSLOptions +StdEnvVars </Directory> and <VirtualHost xx.xx.144.72:443> ServerName blog.my-personal-website.com ServerAlias www.blog.my-personal-website.com DocumentRoot /home/myguardi/public_html/blog.my-personal-website.com ServerAdmin [email protected] UseCanonicalName On CustomLog /usr/local/apache/domlogs/blog.my-personal-website.com combined CustomLog /usr/local/apache/domlogs/blog.my-personal-website.com-bytes_log "%{%s}t %I .\n%{%s}t %O ." 
## User myguardi # Needed for Cpanel::ApacheConf <IfModule mod_suphp.c> suPHP_UserGroup myguardi myguardi </IfModule> <IfModule !mod_disable_suexec.c> SuexecUserGroup myguardi myguardi </IfModule> ScriptAlias /cgi-bin/ /home/myguardi/public_html/blog.my-personal-website.com/cgi-bin/ SSLEngine on SSLCertificateFile /etc/ssl/certs/my-personal-website.com.crt SSLCertificateKeyFile /etc/ssl/private/my-personal-website.com.key SSLCACertificateFile /etc/ssl/certs/my-personal-website.com.cabundle CustomLog /usr/local/apache/domlogs/blog.my-personal-website.com-ssl_log combined SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown <Directory "/home/myguardi/public_html/cgi-bin"> SSLOptions +StdEnvVars </Directory> and the automatically generated virtualhost entry for the main domain for port 443 is <VirtualHost xx.xx.144.72:443> ServerName my-personal-website.com ServerAlias www.my-personal-website.com DocumentRoot /home/myguardi/public_html ServerAdmin [email protected] UseCanonicalName Off CustomLog /usr/local/apache/domlogs/my-personal-website.com combined CustomLog /usr/local/apache/domlogs/my-personal-website.com-bytes_log "%{%s}t %I .\n%{%s}t %O ." ## User myguardi # Needed for Cpanel::ApacheConf <IfModule mod_suphp.c> suPHP_UserGroup myguardi myguardi </IfModule> <IfModule !mod_disable_suexec.c> SuexecUserGroup myguardi myguardi </IfModule> ScriptAlias /cgi-bin/ /home/myguardi/public_html/cgi-bin/ SSLEngine on SSLCertificateFile /etc/ssl/certs/my-personal-website.com.crt SSLCertificateKeyFile /etc/ssl/private/my-personal-website.com.key SSLCACertificateFile /etc/ssl/certs/my-personal-website.com.cabundle CustomLog /usr/local/apache/domlogs/my-personal-website.com-ssl_log combined SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown <Directory "/home/myguardi/public_html/cgi-bin"> SSLOptions +StdEnvVars </Directory> # To customize this VirtualHost use an include file at the following location # Include "/usr/local/apache/conf/userdata/ssl/2/myguardi/my-personal-website.com/*.conf" I really appreciate if somebody can tell me how to proceed on this. Thank you. Update: Include directives present are: `Include "/usr/local/apache/conf/includes/pre_main_global.conf" Include "/usr/local/apache/conf/includes/pre_main_2.conf" Include "/usr/local/apache/conf/php.conf" Include "/usr/local/apache/conf/includes/errordocument.conf" Include "/usr/local/apache/conf/modsec2.conf" Include "/usr/local/apache/conf/includes/pre_virtualhost_global.conf" Include "/usr/local/apache/conf/includes/pre_virtualhost_2.conf" ` These are the entries that are generated before any virtualhost entry is defined. Towards the end of the httpd.conf file , the following two entries are added Include "/usr/local/apache/conf/includes/post_virtualhost_global.conf" Include "/usr/local/apache/conf/includes/post_virtualhost_2.conf" The older httpd.conf file before we added the virtualhost entries for sub domains for port 443 can be viewed here
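    One approach, based on the Include list quoted above, is to move the manually added directives into an include file that the distiller and rebuildhttpdconf leave alone, for example the global pre-virtualhost include (a sketch; the per-domain userdata path shown in the generated comment would work for per-site files too):

        # /usr/local/apache/conf/includes/pre_virtualhost_global.conf
        NameVirtualHost xx.xx.144.72:443

        # paste the two <VirtualHost xx.xx.144.72:443> blocks for
        # next.my-personal-website.com and blog.my-personal-website.com here, unchanged

    After saving, restart Apache; because the generated httpd.conf re-includes that file on every rebuild, the entries should no longer be lost when a sub domain is added or the distiller is run.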


  • Installing 12.04 within 11.04

    - by user288752
    I recently installed 11.04 from an installation disk (overwriting Windows in the process). I know 11.04 is no longer supported, but I had no problems subsequently upgrading it to 12.04 (via 11.10) a couple of months ago on another device. This time, though, things are different. I can't upgrade through Update Manager because Ubuntu then tells me I have no internet connection, which is obviously incorrect. I have tried to circumvent the problem by downloading the 12.04 ISO from ubuntu.com directly, but now I'm troubled by something else. The download is successful, but after mounting the ISO I can't interact with it. When I try to run the Wubi executable it gives me the following message: Archive: /home/lars/.cache/.fr-7g75Fe/wubi.exe [/home/lars/.cache/.fr-7g75Fe/wubi.exe] End-of-central-directory signature not found. Either this file is not a zipfile, or it constitutes one disk of a multi-part archive. In the latter case the central directory and zipfile comment will be found on the last disk(s) of this archive. zipinfo: cannot find zipfile directory in one of /home/lars/.cache/.fr-7g75Fe/wubi.exe or /home/lars/.cache/.fr-7g75Fe/wubi.exe.zip, and cannot find /home/lars/.cache/.fr-7g75Fe/wubi.exe.ZIP, period. What am I doing wrong here?
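    For what it's worth, wubi.exe is the Windows-side installer, so running it from a mounted ISO inside an existing Ubuntu install isn't the intended path, and the zip error can also simply mean the download is damaged. A sketch of two things to try (the ISO file name is assumed):

        # verify the download against the hash published on ubuntu.com
        md5sum ubuntu-12.04-desktop-i386.iso

        # command-line upgrader (same mechanism Update Manager uses); it offers
        # the next supported step first, i.e. 11.10, then 12.04
        sudo do-release-upgrade

    If do-release-upgrade also complains about connectivity, the underlying network problem needs fixing first.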


  • How can I tell whether an interrupted rm -r removed any files?

    - by Jake Petroules
    I installed sshfs on a Linux box and then mounted my Mac home directory. In the middle of troubleshooting a configuration issue, I did an ls -l on the mount directory (as a normal user), receiving: total 0 d????????? ? ? ? ? ? sl I then ran sudo rm -r on that directory but pressed Ctrl+C to terminate it immediately, before it looks like the command did anything. I don't notice any files missing, but I want to be sure - is there a way I can somehow inspect a filesystem log on my Mac to see whether any files were actually removed?
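    There is no general-purpose deletion log to inspect on the Mac side, but if the home directory is covered by a Time Machine backup taken before the rm, one rough cross-check (a sketch; requires OS X 10.7 or later) is to diff the latest snapshot against the live filesystem:

        # lists paths that differ between the newest Time Machine snapshot and the disk;
        # anything present only in the snapshot is a candidate for having been removed
        tmutil compare

    Failing that, comparing an ls -laR of the directory against any older listing or backup you have is about as good as it gets.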


  • Error when running "make install" for PHP WebDAV

    - by kron
    Hi, I'm having issues install PHP WebDAV onto Fedora8 - after downloading and running make install I get the following errors: [root@ip-18-192-114-35 dav]# make install /bin/sh /tmp/dav/libtool --mode=compile gcc -I. -I/tmp/dav -DPHP_ATOM_INC -I/tmp/dav/include -I/tmp/dav/main -I/tmp/dav -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /tmp/dav/dav.c -o dav.lo gcc -I. -I/tmp/dav -DPHP_ATOM_INC -I/tmp/dav/include -I/tmp/dav/main -I/tmp/dav -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /tmp/dav/dav.c -fPIC -DPIC -o .libs/dav.o /tmp/dav/dav.c:21:23: error: ne_socket.h: No such file or directory /tmp/dav/dav.c:22:24: error: ne_session.h: No such file or directory /tmp/dav/dav.c:23:22: error: ne_utils.h: No such file or directory /tmp/dav/dav.c:24:21: error: ne_auth.h: No such file or directory /tmp/dav/dav.c:25:22: error: ne_basic.h: No such file or directory /tmp/dav/dav.c:26:20: error: ne_207.h: No such file or directory /tmp/dav/dav.c:35: error: expected specifier-qualifier-list before 'ne_session' /tmp/dav/dav.c: In function 'dav_destructor_dav_session': /tmp/dav/dav.c:152: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c:153: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c:155: error: 'DavSession' has no member named 'base_uri_path' /tmp/dav/dav.c:156: error: 'DavSession' has no member named 'user_name' /tmp/dav/dav.c:157: error: 'DavSession' has no member named 'user_password' /tmp/dav/dav.c:158: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c: In function 'cb_dav_auth': /tmp/dav/dav.c:194: error: 'DavSession' has no member named 'user_name' /tmp/dav/dav.c:194: error: 'NE_ABUFSIZ' undeclared (first use in this function) /tmp/dav/dav.c:194: error: (Each undeclared identifier is reported only once /tmp/dav/dav.c:194: error: for each function it appears in.) 
/tmp/dav/dav.c:195: error: 'DavSession' has no member named 'user_password' /tmp/dav/dav.c: In function 'zif_webdav_connect': /tmp/dav/dav.c:212: error: 'ne_session' undeclared (first use in this function) /tmp/dav/dav.c:212: error: 'sess' undeclared (first use in this function) /tmp/dav/dav.c:213: error: 'ne_uri' undeclared (first use in this function) /tmp/dav/dav.c:213: error: expected ';' before 'uri' /tmp/dav/dav.c:215: error: 'uri' undeclared (first use in this function) /tmp/dav/dav.c:259: error: 'DavSession' has no member named 'base_uri_path' /tmp/dav/dav.c:260: error: 'DavSession' has no member named 'base_uri_path_len' /tmp/dav/dav.c:262: error: 'DavSession' has no member named 'user_name' /tmp/dav/dav.c:264: error: 'DavSession' has no member named 'user_name' /tmp/dav/dav.c:267: error: 'DavSession' has no member named 'user_password' /tmp/dav/dav.c:269: error: 'DavSession' has no member named 'user_password' /tmp/dav/dav.c:271: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c: In function 'get_full_uri': /tmp/dav/dav.c:304: error: 'DavSession' has no member named 'base_uri_path_len' /tmp/dav/dav.c:307: error: 'DavSession' has no member named 'base_uri_path_len' /tmp/dav/dav.c:313: error: 'DavSession' has no member named 'base_uri_path' /tmp/dav/dav.c:313: error: 'DavSession' has no member named 'base_uri_path_len' /tmp/dav/dav.c:314: error: 'DavSession' has no member named 'base_uri_path_len' /tmp/dav/dav.c: In function 'zif_webdav_get': /tmp/dav/dav.c:329: error: 'ne_session' undeclared (first use in this function) /tmp/dav/dav.c:329: error: 'sess' undeclared (first use in this function) /tmp/dav/dav.c:330: error: 'ne_request' undeclared (first use in this function) /tmp/dav/dav.c:330: error: 'req' undeclared (first use in this function) /tmp/dav/dav.c:348: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c:354: error: 'ne_accept_2xx' undeclared (first use in this function) /tmp/dav/dav.c:359: error: 'NE_OK' undeclared (first use in this function) /tmp/dav/dav.c:359: error: invalid type argument of '->' /tmp/dav/dav.c: In function 'zif_webdav_put': /tmp/dav/dav.c:377: error: 'ne_session' undeclared (first use in this function) /tmp/dav/dav.c:377: error: 'sess' undeclared (first use in this function) /tmp/dav/dav.c:378: error: 'ne_request' undeclared (first use in this function) /tmp/dav/dav.c:378: error: 'req' undeclared (first use in this function) /tmp/dav/dav.c:396: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c:405: error: 'NE_OK' undeclared (first use in this function) /tmp/dav/dav.c:405: error: invalid type argument of '->' /tmp/dav/dav.c: In function 'zif_webdav_delete': /tmp/dav/dav.c:422: error: 'ne_session' undeclared (first use in this function) /tmp/dav/dav.c:422: error: 'sess' undeclared (first use in this function) /tmp/dav/dav.c:423: error: 'ne_request' undeclared (first use in this function) /tmp/dav/dav.c:423: error: 'req' undeclared (first use in this function) /tmp/dav/dav.c:441: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c:448: error: 'NE_OK' undeclared (first use in this function) /tmp/dav/dav.c:448: error: invalid type argument of '->' /tmp/dav/dav.c: In function 'zif_webdav_mkcol': /tmp/dav/dav.c:465: error: 'ne_session' undeclared (first use in this function) /tmp/dav/dav.c:465: error: 'sess' undeclared (first use in this function) /tmp/dav/dav.c:466: error: 'ne_request' undeclared (first use in this function) /tmp/dav/dav.c:466: error: 'req' undeclared (first use in this function) 
/tmp/dav/dav.c:484: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c:491: error: 'NE_OK' undeclared (first use in this function) /tmp/dav/dav.c:491: error: invalid type argument of '->' /tmp/dav/dav.c: In function 'zif_webdav_copy': /tmp/dav/dav.c:510: error: 'ne_session' undeclared (first use in this function) /tmp/dav/dav.c:510: error: 'sess' undeclared (first use in this function) /tmp/dav/dav.c:511: error: 'ne_request' undeclared (first use in this function) /tmp/dav/dav.c:511: error: 'req' undeclared (first use in this function) /tmp/dav/dav.c:539: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c:550: error: 'NE_DEPTH_INFINITE' undeclared (first use in this function) /tmp/dav/dav.c:550: error: 'NE_DEPTH_ZERO' undeclared (first use in this function) /tmp/dav/dav.c:554: error: 'NE_OK' undeclared (first use in this function) /tmp/dav/dav.c:554: error: invalid type argument of '->' /tmp/dav/dav.c: In function 'zif_webdav_move': /tmp/dav/dav.c:573: error: 'ne_session' undeclared (first use in this function) /tmp/dav/dav.c:573: error: 'sess' undeclared (first use in this function) /tmp/dav/dav.c:574: error: 'ne_request' undeclared (first use in this function) /tmp/dav/dav.c:574: error: 'req' undeclared (first use in this function) /tmp/dav/dav.c:598: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c:611: error: 'NE_OK' undeclared (first use in this function) /tmp/dav/dav.c:611: error: invalid type argument of '->' make: *** [dav.lo] Error 1 Any help would be much appreciated. Thanks!
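    All of those compile errors cascade from the first six: the ne_*.h headers belong to the neon WebDAV client library that the extension builds against, so once they are missing, every ne_session/DavSession reference fails too. A sketch of the usual fix on Fedora (package names assumed) is to install the neon development headers and rebuild from a clean tree:

        yum install neon neon-devel
        cd /tmp/dav
        make clean
        phpize && ./configure && make && make install

    The exact configure invocation depends on how the extension ships, but the missing-header errors should disappear once neon-devel is installed.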


  • How does the trashcan utility work on a Tru64 Unix server? Or is there any other utility?

    - by RBA
    Hi, I used this mktrashcan command: mktrashcan deleteMe1 trashcan/ I then deleted all the contents inside the deleteMe1 directory (rm -rf *). What happened is that only the two text files directly inside deleteMe1 (deleteMe2.txt, deleteMe3.txt) were moved into the trashcan folder; the rest of the directories, and the files inside those directories, were not found! Isn't there some way to make whatever is deleted move to the trashcan directory exactly as it was? Or is there any other utility that can perform the same task in a more advanced way? The directory tree was created with: mkdir deleteMe1 mkdir deleteMe1/deleteMe2 mkdir deleteMe1/deleteMe3 touch ./deleteMe1/deleteMe2/deleteMe4.txt touch ./deleteMe1/deleteMe2/deleteMe5.txt touch ./deleteMe1/deleteMe3/deleteMe6.txt touch ./deleteMe1/deleteMe3/deleteMe7.txt touch ./deleteMe1/deleteMe2.txt touch ./deleteMe1/deleteMe3.txt Thanks.
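    Your output suggests the AdvFS trashcan only catches files removed from directories it is attached to directly, not from their subdirectories. If that is the case, one workaround (a sketch, keeping the argument order that worked in your command) is to attach the trashcan to every directory in the tree:

        # attach the trashcan to deleteMe1 and to each directory below it
        find deleteMe1 -type d -exec mktrashcan {} trashcan/ \;

        # confirm the attachment on one of the subdirectories
        shtrashcan deleteMe1/deleteMe2

    As far as I recall, the trashcan and the directories it serves also need to live in the same fileset.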


  • Ubuntu 12.04 + Raid0 + Windows 7 not loading

    - by Douglas
    please someone help me.... (Sorry for my english) Hi, I have a Pc with 2 Hd (1Tb each) on Raid0. I had a Windows 7 64bits working for several months. When I installed the Windows I let a 100Gb partition empty to install Ubuntu someday. I was using Linux on a Virtualbox, but this week I tried to install Ubuntu 12.04 in this 100Gb partition. I used the Ubuntu alternate cd, because the 'normal' cd was giving me trouble with the Raid0. The grub installation always reported a error. After a lot of work I found that I nedded to install grub on partition /dev/mapper/isw_chjbfeec_DougRaid1 (see Bootinfo below). The Windows installation created a 100Mb boot partition, so I needed to install grub in this partition. Now I have the Ubuntu working 100% ok. The problem is, the Windows is not booting! The windows option is present on the grub menu, but when I choose the windows option there is a black screen and after that the grub is reloaded. My Bootinfo is: Boot Info Script 0.61 [1 April 2012] ============================= Boot Info Summary: =============================== => Grub2 (v1.99) is installed in the MBR of /dev/sda and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks in partition 1 for /boot/grub. => Grub2 (v1.99) is installed in the MBR of /dev/mapper/isw_chjbfeec_DougRaid and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks in partition 1 for /boot/grub. sda1: __________________________________________________________________________ File system: Boot sector type: Unknown Boot sector info: Mounting failed: mount: unknown filesystem type '' sda2: __________________________________________________________________________ File system: Boot sector type: Unknown Boot sector info: Mounting failed: mount: unknown filesystem type '' mount: unknown filesystem type '' sda3: __________________________________________________________________________ File system: Extended Partition Boot sector type: Unknown Boot sector info: isw_chjbfeec_DougRaid1: ________________________________________________________ File system: ntfs Boot sector type: Grub2 (v1.99) Boot sector info: Grub2 (v1.99) is installed in the boot sector of isw_chjbfeec_DougRaid1 and looks at sector 3841862992 of the same hard drive for core.img. core.img is at this location and looks for (,msdos5)/boot/grub on this drive. No errors found in the Boot Parameter Block. Operating System: Boot files: /grldr /bootmgr /Boot/BCD /grldr isw_chjbfeec_DougRaid2: ________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. 
Operating System: Windows 7 Boot files: /Windows/System32/winload.exe isw_chjbfeec_DougRaid3: ________________________________________________________ File system: Extended Partition Boot sector type: - Boot sector info: isw_chjbfeec_DougRaid5: ________________________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Ubuntu 12.04 LTS Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img isw_chjbfeec_DougRaid6: ________________________________________________________ File system: swap Boot sector type: - Boot sector info: ============================ Drive/Partition Info: ============================= Drive: sda _____________________________________________________________________ Partition Boot Start Sector End Sector # of Sectors Id System /dev/sda1 * 2,048 206,847 204,800 7 NTFS / exFAT / HPFS /dev/sda2 206,848 3,686,402,047 3,686,195,200 7 NTFS / exFAT / HPFS /dev/sda3 3,686,402,558 3,907,039,743 220,637,186 5 Extended Invalid MBR Signature found. EBR refers to a location outside the hard drive. /dev/sda2 ends after the last sector of /dev/sda /dev/sda3 ends after the last sector of /dev/sda Drive: isw_chjbfeec_DougRaid _____________________________________________________________________ Disk /dev/mapper/isw_chjbfeec_DougRaid: 2000.4 GB, 2000404348928 bytes 255 heads, 63 sectors/track, 243201 cylinders, total 3907039744 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/mapper/isw_chjbfeec_DougRaid1 * 2,048 206,847 204,800 7 NTFS / exFAT / HPFS /dev/mapper/isw_chjbfeec_DougRaid2 206,848 3,686,402,047 3,686,195,200 7 NTFS / exFAT / HPFS /dev/mapper/isw_chjbfeec_DougRaid3 3,686,402,558 3,907,039,743 220,637,186 5 Extended /dev/mapper/isw_chjbfeec_DougRaid5 3,686,402,560 3,881,876,479 195,473,920 83 Linux /dev/mapper/isw_chjbfeec_DougRaid6 3,881,876,992 3,907,039,743 25,162,752 82 Linux swap / Solaris "blkid" output: ________________________________________________________________ Device UUID TYPE LABEL /dev/mapper/isw_chjbfeec_DougRaid1 C89C73D19C73B910 ntfs Reservado pelo Sistema /dev/mapper/isw_chjbfeec_DougRaid2 6830883A3088116C ntfs /dev/mapper/isw_chjbfeec_DougRaid5 bbab868a-ea53-4be3-ba7d-2737fe6cb24c ext4 /dev/mapper/isw_chjbfeec_DougRaid6 7a830a3c-88fb-4cba-80dc-f32e08abfd5b swap /dev/sda isw_raid_member /dev/sdb isw_raid_member /dev/sr0 iso9660 Windows7x86x64SK ========================= "ls -R /dev/mapper/" output: ========================= /dev/mapper: control isw_chjbfeec_DougRaid isw_chjbfeec_DougRaid1 isw_chjbfeec_DougRaid2 isw_chjbfeec_DougRaid3 isw_chjbfeec_DougRaid5 isw_chjbfeec_DougRaid6 ================================ Mount points: ================================= Device Mount_Point Type Options /dev/mapper/isw_chjbfeec_DougRaid5 / ext4 (rw,errors=remount-ro) /dev/sr0 /media/Windows7x86x64SK iso9660 (ro,nosuid,nodev,uid=1000,gid=1000,iocharset=utf8,mode=0400,dmode=0500,uhelper=udisks) ================= isw_chjbfeec_DougRaid1/grldr embedded menu: ================== -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- ================== isw_chjbfeec_DougRaid5/boot/grub/grub.cfg: ================== -------------------------------------------------------------------------------- # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from 
/etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga insmod video_bochs insmod video_cirrus } insmod part_msdos insmod ext2 set root='(/dev/mapper/isw_chjbfeec_DougRaid3,msdos1)' search --no-floppy --fs-uuid --set=root bbab868a-ea53-4be3-ba7d-2737fe6cb24c if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=auto load_video insmod gfxterm insmod part_msdos insmod ext2 set root='(/dev/mapper/isw_chjbfeec_DougRaid3,msdos1)' search --no-floppy --fs-uuid --set=root bbab868a-ea53-4be3-ba7d-2737fe6cb24c set locale_dir=($root)/boot/grub/locale set lang=en_US insmod gettext fi terminal_output gfxterm if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### function gfxmode { set gfxpayload="$1" if [ "$1" = "keep" ]; then set vt_handoff=vt.handoff=7 else set vt_handoff= fi } if [ ${recordfail} != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "$linux_gfx_mode" != "text" ]; then load_video; fi menuentry 'Ubuntu, with Linux 3.2.0-24-generic-pae' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(/dev/mapper/isw_chjbfeec_DougRaid3,msdos1)' search --no-floppy --fs-uuid --set=root bbab868a-ea53-4be3-ba7d-2737fe6cb24c linux /boot/vmlinuz-3.2.0-24-generic-pae root=UUID=bbab868a-ea53-4be3-ba7d-2737fe6cb24c ro quiet splash $vt_handoff initrd /boot/initrd.img-3.2.0-24-generic-pae } menuentry 'Ubuntu, with Linux 3.2.0-24-generic-pae (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(/dev/mapper/isw_chjbfeec_DougRaid3,msdos1)' search --no-floppy --fs-uuid --set=root bbab868a-ea53-4be3-ba7d-2737fe6cb24c echo 'Loading Linux 3.2.0-24-generic-pae ...' linux /boot/vmlinuz-3.2.0-24-generic-pae root=UUID=bbab868a-ea53-4be3-ba7d-2737fe6cb24c ro recovery nomodeset echo 'Loading initial ramdisk ...' 
initrd /boot/initrd.img-3.2.0-24-generic-pae } submenu "Previous Linux versions" { menuentry 'Ubuntu, with Linux 3.2.0-23-generic-pae' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(/dev/mapper/isw_chjbfeec_DougRaid3,msdos1)' search --no-floppy --fs-uuid --set=root bbab868a-ea53-4be3-ba7d-2737fe6cb24c linux /boot/vmlinuz-3.2.0-23-generic-pae root=UUID=bbab868a-ea53-4be3-ba7d-2737fe6cb24c ro quiet splash $vt_handoff initrd /boot/initrd.img-3.2.0-23-generic-pae } menuentry 'Ubuntu, with Linux 3.2.0-23-generic-pae (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(/dev/mapper/isw_chjbfeec_DougRaid3,msdos1)' search --no-floppy --fs-uuid --set=root bbab868a-ea53-4be3-ba7d-2737fe6cb24c echo 'Loading Linux 3.2.0-23-generic-pae ...' linux /boot/vmlinuz-3.2.0-23-generic-pae root=UUID=bbab868a-ea53-4be3-ba7d-2737fe6cb24c ro recovery nomodeset echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.2.0-23-generic-pae } } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='(/dev/mapper/isw_chjbfeec_DougRaid3,msdos1)' search --no-floppy --fs-uuid --set=root bbab868a-ea53-4be3-ba7d-2737fe6cb24c linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='(/dev/mapper/isw_chjbfeec_DougRaid3,msdos1)' search --no-floppy --fs-uuid --set=root bbab868a-ea53-4be3-ba7d-2737fe6cb24c linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober_proxy ### menuentry "Windows 7 (loader) (on /dev/mapper/isw_chjbfeec_DougRaid1)" --class windows --class os { insmod part_msdos insmod ntfs set root='(sda,msdos1)' search --no-floppy --fs-uuid --set=root C89C73D19C73B910 chainloader +1 } ### END /etc/grub.d/30_os-prober_proxy ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### -------------------------------------------------------------------------------- ====================== isw_chjbfeec_DougRaid5/etc/fstab: ======================= -------------------------------------------------------------------------------- # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). 
# # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 /dev/mapper/isw_chjbfeec_DougRaid5 / ext4 errors=remount-ro 0 1 /dev/mapper/isw_chjbfeec_DougRaid6 none swap sw 0 0 -------------------------------------------------------------------------------- ========== isw_chjbfeec_DougRaid5: Location of files loaded by Grub: =========== GiB - GB File Fragment(s) = boot/grub/core.img 1 = boot/grub/grub.cfg 1 = boot/initrd.img-3.2.0-23-generic-pae 2 = boot/initrd.img-3.2.0-24-generic-pae 2 = boot/vmlinuz-3.2.0-23-generic-pae 1 = boot/vmlinuz-3.2.0-24-generic-pae 1 = initrd.img 2 = initrd.img.old 2 = vmlinuz 1 = vmlinuz.old 1 ======================== Unknown MBRs/Boot Sectors/etc: ======================== Unknown BootLoader on sda1 Unknown BootLoader on sda2 Unknown BootLoader on sda3 =============================== StdErr Messages: =============================== xz: (stdin): Compressed data is corrupt xz: (stdin): Compressed data is corrupt hexdump: /dev/sda1: No such file or directory hexdump: /dev/sda1: No such file or directory hexdump: /dev/sda2: No such file or directory hexdump: /dev/sda2: No such file or directory hexdump: /dev/sda3: No such file or directory hexdump: /dev/sda3: No such file or directory xz: (stdin): Compressed data is corrupt awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in How we can see the Windows part at grub is: menuentry "Windows 7 (loader) (on /dev/mapper/isw_chjbfeec_DougRaid1)" --class windows --class os { insmod part_msdos insmod ntfs set root='(sda,msdos1)' search --no-floppy --fs-uuid --set=root C89C73D19C73B910 chainloader +1 } I tried a lot of combinations at the line: set root='(sda,msdos1)' , but no success I tried to change uuid to the /dev/mapper/isw_chjbfeec_DougRaid2 uuid, but the grub reports a error. I dont know what to do now. I really need to boot my windows partition. Someone knows what to do? Thanks........
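    Since Windows lives on the fakeraid array rather than on a plain /dev/sda, the os-prober entry's set root='(sda,msdos1)' looks like the weak point. One thing to try (a sketch, not a guaranteed fix) is a custom entry that omits set root entirely and lets the UUID search locate the boot partition on the mapped array; append it to /etc/grub.d/40_custom and run sudo update-grub:

        menuentry "Windows 7 (chainload by UUID)" --class windows --class os {
            insmod part_msdos
            insmod ntfs
            search --no-floppy --fs-uuid --set=root C89C73D19C73B910
            chainloader +1
        }

    If it still falls back to the menu, press 'c' at the GRUB screen and run ls to see how GRUB names the array's partitions, then add an explicit set root= line using that name.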


  • Wired connection not working in Ubuntu 12.04 on a Lenovo G580 laptop

    - by shravankumar
    I found solution in http://www.zyxware.com/articles/2680/solved-wired-connection-eth0-not-detected-in-ubuntu-12-04 I downloaded compact-wireless-2012-07-03-p.tar.bz2 Here the steps i followed along with output 1. shravankumar@shravankumar-Lenovo-G580:~/Desktop/compat-wireless-2012-07-03-p$ scripts/driver-select alx Output: Processing new driver-select request... Backup exists: Makefile.bk Backup exists: Makefile.bk Backup exists: drivers/net/ethernet/broadcom/Makefile.bk Backup exists: drivers/net/ethernet/atheros/Makefile.bk Backup exists: Makefile.bk Backup exists: Makefile.bk Backup exists: drivers/net/ethernet/broadcom/Makefile.bk 2.shravankumar@shravankumar-Lenovo-G580:~/Desktop/compat-wireless-2012-07-03-p$ make output: make -C /lib/modules/3.2.0-23-generic/build M=/home/shravankumar/Desktop/compat-wireless-2012-07-03-p modules make[1]: Entering directory `/usr/src/linux-headers-3.2.0-23-generic' scripts/Makefile.build:44: /home/shravankumar/Desktop/compat-wireless-2012-07-03-p/drivers/net/ethernet/atheros/alx/Makefile: No such file or directory make[4]: *** No rule to make target `/home/shravankumar/Desktop/compat-wireless-2012-07-03-p/drivers/net/ethernet/atheros/alx/Makefile'. Stop. make[3]: *** [/home/shravankumar/Desktop/compat-wireless-2012-07-03-p/drivers/net/ethernet/atheros/alx] Error 2 make[2]: *** [/home/shravankumar/Desktop/compat-wireless-2012-07-03-p/drivers/net/ethernet/atheros] Error 2 make[1]: *** [_module_/home/shravankumar/Desktop/compat-wireless-2012-07-03-p] Error 2 make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-23-generic' make: *** [modules] Error 2 3. hravankumar@shravankumar-Lenovo-G580:~/Desktop/compat-wireless-2012-07-03-p$ make install output: FATAL: Could not open /lib/modules/3.2.0-23-generic/modules.dep.temp for writing: Permission denied make: *** [uninstall] Error 1 4. shravankumar@shravankumar-Lenovo-G580:~/Desktop/compat-wireless-2012-07-03-p$ modeprobe alx output: No command 'modeprobe' found, did you mean: Command 'modprobe' from package 'module-init-tools' (main) modeprobe: command not found I am new to Ubuntu ,Please help me. Thanks in advance
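    The make error says the alx Makefile is missing, which usually means the driver-select step did not complete cleanly (the repeated "Backup exists" lines suggest it had been run before on the same tree), and the later steps failed simply for lack of root plus the modeprobe typo. A sketch of a clean retry, starting from a fresh extraction (file name as downloaded):

        tar xjf compat-wireless-2012-07-03-p.tar.bz2
        cd compat-wireless-2012-07-03-p
        scripts/driver-select alx
        make
        sudo make install
        sudo modprobe alx

    If modprobe loads the module, the wired interface should then show up in ifconfig -a and Network Manager.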


  • Modeling RBAC actors using LDAP (Core X.5xx)

    - by Tetsujin no Oni
    Mirrored from stackoverflow... When implementing an RBAC model using an LDAP store (I'm using Apache Directory 1.0.2 as a testbed), some of the actors are obviously mappable to specific objectClasses: Resources - I don't see a clear mapping for this one; applicationEntity seems only tangentially intended for this purpose. Permissions - a Permission can be viewed as a single-purpose Role; obviously I'm not thinking of an LDAP permission, as those govern access to LDAP objects and attributes rather than an RBAC permission to a Resource. Roles - map fairly directly to groupOfNames or groupOfUniqueNames, right? Users - person. In the past I've seen models where a Resource isn't dealt with in the directory in any fashion, and Permissions and Roles were mapped to Active Directory groups. Is there a better way to represent these actors? How about a document discussing good mappings and the intent of the schema?
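    As a rough sketch of the group-based mapping (DN suffix and attribute values invented for illustration), a Role can be a groupOfNames whose members are person entries, and a Permission can be modelled the same way with Roles as its members and the protected Resource recorded as an attribute:

        # Role
        dn: cn=InvoiceApprover,ou=roles,dc=example,dc=com
        objectClass: top
        objectClass: groupOfNames
        cn: InvoiceApprover
        member: uid=jsmith,ou=people,dc=example,dc=com

        # Permission, pointing at its Resource via description (or a custom attribute)
        dn: cn=approve-invoice,ou=permissions,dc=example,dc=com
        objectClass: top
        objectClass: groupOfNames
        cn: approve-invoice
        description: resource=InvoiceService;action=approve
        member: cn=InvoiceApprover,ou=roles,dc=example,dc=com

    Whether Resources deserve entries of their own or just attribute values like this is exactly the judgement call the question raises.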


  • bash one-liner loop over directories throws errors

    - by cori
    I'm trying to build a bash one-liner to loop over the directories within the current directory and tar the content into unique tars, using the directory name as the tar file name. I've got the basics working (finding the directory names, and tarring them up with those names) but my loop tosses some error messages and I can't understand where it's getting the commands its trying to run. Here's the mostly-working one-liner: for f in `ls -d */`; do `tar -czvvf ${f%/}.tar.gz $f`;done The "strange" output is: -bash: drwxrwxr-x: command not found -bash: drwxr-xr-x: command not found -bash: drwxr-xr-x: command not found -bash: drwxrwxr-x: command not found What portion of the command that I'm running do I not understand and that's generating that output?
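    The stray messages come from the outer backticks: they run the tar command and then try to execute its output as a command, and with -v that output includes the permission-listing lines (drwxrwxr-x ...), which bash then reports as unknown commands. Dropping the backticks, and the ls while at it, gives a sketch like:

        for f in */; do tar -czvf "${f%/}.tar.gz" "$f"; done

    Quoting "$f" also keeps directory names containing spaces from breaking the loop.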


  • How do I force .htaccess authorization to occur over ssl?

    - by kenja
    I'm trying to force a particular directory to require only allowed IPs and a valid username/password through basic authorization. To ensure that the username/password are sent in encrypted form, I want the directory to also force SSL use. Here is what I have in my .htaccess file:

        # Force HTTPS-Connection
        RewriteEngine On
        RewriteCond %{SERVER_PORT} !^443$
        RewriteRule (.*) https://www.mywebsite.com%{REQUEST_URI} [R,L]

        ## password begin ##
        AuthName "Restricted Access"
        AuthUserFile /var/www/admin/.htpasswd
        AuthType Basic
        Require valid-user
        Order deny,allow
        Deny from all
        Allow from 79.1.231.151 62.123.134.83
        Satisfy All

    Unfortunately, when I access that directory using the http protocol, it asks for the password before it redirects the page to the secure version. This means the password is sent unencrypted. What am I doing wrong? Is there a way to do this?
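    A per-directory RewriteRule runs later in the request cycle than the authentication check, which is why the password prompt appears before the redirect. One commonly used .htaccess pattern (a sketch; the ErrorDocument URL is a placeholder for this directory's HTTPS address) is to refuse plain-HTTP requests outright with mod_ssl and bounce the resulting 403 to HTTPS, so credentials are only ever requested on the encrypted connection:

        SSLRequireSSL
        ErrorDocument 403 https://www.mywebsite.com/protected/

        AuthName "Restricted Access"
        AuthUserFile /var/www/admin/.htpasswd
        AuthType Basic
        Require valid-user
        Order deny,allow
        Deny from all
        Allow from 79.1.231.151 62.123.134.83
        Satisfy All

    With this in place the RewriteRule block is no longer needed for this directory; note that any other 403 here (for example a disallowed IP) will also be redirected to that URL.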


  • How to back up in Fedora 13?

    - by Ramy
    I just bought a 1.5 TB HDD and a disk enclosure. I connected this disk to my laptop via the provided USB cable. I then used the following command: rsync -r -t -v --progress --delete -c -l -z / /media/C4E41A11E41A0678/Moonface_BKP/ I ran this for a long (long, long) time, when I noticed that what had already been backed up to the HDD was itself starting to be backed up again. In other words, when I ran the command, it created a /media directory and a C4... directory below that on the destination and kept recursively backing up this directory (since, I suppose, I was backing up the external hard drive itself, too). So... what's the proper way to use rsync?
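    The runaway copy happens because the destination (/media/...) lives inside the source (/), so rsync eventually starts copying its own output. A sketch of two safer invocations, either keeping rsync on the root filesystem or explicitly excluding the mount point:

        # -x (--one-file-system) stays on /, so /media, /proc, /sys etc. are not descended into
        rsync -aHvx --progress --delete / /media/C4E41A11E41A0678/Moonface_BKP/

        # or keep the original options and just exclude the backup disk's mount point
        rsync -rtvlcz --progress --delete --exclude=/media / /media/C4E41A11E41A0678/Moonface_BKP/

    The first form is generally what you want for a whole-system backup, since pseudo-filesystems like /proc should not be copied either.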


  • symlink for dbus headers

    - by DarenW
    Source code for something that won't compile has the line #include but in real life that header file is in /usr/include/dbus-1.0/ A similar situation exists for the dbus-c++ package. Why doesn't Ubuntu provide a symlink /usr/include/dbus pointing to the dbus-1.0 directory? Is this a bug in the dbus package? If it is intended, what is the purpose? Is it a proper fix to add a symlink myself? (Changing the source is not practical - there are many files, and they need to match what other people have.) Update: OK, I totally misunderstood the situation, though it still comes down to a problem I think should be solved by a symlink. The dbus directory referred to in the #include statement is a deeper-level directory under /usr/include/dbus-1.0/. The real problem is that the file dbus-arch-deps.h appears to be missing, but is actually stored in the weird location /usr/lib/x86_64-linux-gnu/dbus-1.0/include/dbus/ - so now, why doesn't Ubuntu provide a symlink to this in /usr/include/dbus-1.0/dbus, or actually store it there?
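    The split is intentional: dbus-arch-deps.h is architecture-specific, so Debian/Ubuntu install it under the multiarch library directory while the portable headers stay in /usr/include/dbus-1.0, and consumers are expected to ask pkg-config for both paths rather than rely on a /usr/include/dbus symlink. A sketch:

        # print the include flags the dbus package expects builds to use
        pkg-config --cflags dbus-1
        # -I/usr/include/dbus-1.0 -I/usr/lib/x86_64-linux-gnu/dbus-1.0/include  (on 64-bit Ubuntu)

        # e.g. in a Makefile
        CFLAGS += $(shell pkg-config --cflags dbus-1)
        LDLIBS += $(shell pkg-config --libs dbus-1)

    A hand-made symlink works until the next package update; teaching the build to use pkg-config (the dbus-c++ development package ships its own .pc file as well) is the durable fix.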


  • rsync problems and security concerns

    - by MB.
    Hi I am attempting to use rsync to copy files between two linux servers. both on 10.04.4 I have set up the ssh and a script running under a cron job. this is the message i get back from the cron job. To: mark@ubuntu Subject: Cron ~/rsync.sh Content-Type: text/plain; charset=ANSI_X3.4-1968 X-Cron-Env: X-Cron-Env: X-Cron-Env: X-Cron-Env: Message-Id: <20120708183802.E0D54FC2C0@ubuntu Date: Sun, 8 Jul 2012 14:38:01 -0400 (EDT) rsync: link_stat "/home/mark/#342#200#223rsh=ssh" failed: No such file or directory (2) rsync: opendir "/Library/WebServer/Documents/.cache" failed: Permission denied (13) rsync: recv_generator: mkdir "/Library/Library" failed: Permission denied (13) * Skipping any contents from this failed directory * rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1060) [sender=3.0.7] Q.1 can anyone tell me why I get this message -- rsync: link_stat "/home/mark/#342#200#223rsh=ssh" failed: No such file or directory (2) the script is: #!/bin/bash SOURCEPATH='/Library' DESTPATH='/Library' DESTHOST='192.168.1.15' DESTUSER='mark' LOGFILE='rsync.log' echo $'\n\n' >> $LOGFILE rsync -av –rsh=ssh $SOURCEPATH $DESTUSER@$DESTHOST:$DESTPATH 2>&1 >> $LOGFILE echo “Completed at: `/bin/date`” >> $LOGFILE Q2. I know I have several problems with the permissions all of the files I am copying usually require me to use sudo to manipulate them. My question is then is there a way i can run this job without giving my user root access or using root in the login ?? Thanks for the help .
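    For Q1, the odd path is the clue: #342#200#223 is the octal escape for an en dash (U+2013), so the script contains "–rsh=ssh" with a single typographic dash, most likely pasted from a web page, instead of "--rsh=ssh", and rsync therefore treats it as a file name to transfer. The curly quotes around "Completed at" are the same class of problem. A cleaned-up sketch with plain ASCII characters (and the redirection reordered so stderr lands in the log too):

        #!/bin/bash
        SOURCEPATH='/Library'
        DESTPATH='/Library'
        DESTHOST='192.168.1.15'
        DESTUSER='mark'
        LOGFILE='rsync.log'

        echo $'\n\n' >> "$LOGFILE"
        rsync -av --rsh=ssh "$SOURCEPATH" "$DESTUSER@$DESTHOST:$DESTPATH" >> "$LOGFILE" 2>&1
        echo "Completed at: $(/bin/date)" >> "$LOGFILE"

    For Q2, the permission-denied lines are a separate issue: the cleaner options are to fix the group permissions on those paths or to narrow the copy to directories your user can read, rather than running the whole job as root just to silence them.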


  • Running make for Nginx throws a “multiple target patterns” error

    - by Justin Meltzer
    When I run make inside my nginx source directory I get the output: make -f objs/Makefile make[1]: Entering directory `/home/ec2-user/nginx/nginx-1.2.4' objs/Makefile:110: *** multiple target patterns. Stop. make[1]: Leaving directory `/home/ec2-user/nginx/nginx-1.2.4' make: *** [build] Error 2 I am on an Amazon Linux AMI. The steps I took from the beginning were: wget /path/to/nginx/tarball tar xvf nginx-1.2.4.tar.gz cd nginx-1.2.4 ./configure --prefix=/nginx --a-bunch-of-other-options Then I ran make. (I had installed make by running sudo yum install make.) Please let me know if there's any other information I should be providing.
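    "Multiple target patterns" from make almost always means a stray colon (or an unescaped space) ended up in a path inside the generated objs/Makefile, typically via a mistyped ./configure option or a path copied with odd characters. A sketch of how to narrow it down and rebuild (the configure flags here are placeholders for the ones actually used):

        # look at the line the error points to and check it for a second ':' or a space
        sed -n '105,115p' objs/Makefile

        # then regenerate the makefiles cleanly, quoting every option and avoiding
        # spaces or colons in --prefix and any module paths
        ./configure --prefix=/nginx --with-http_ssl_module
        make

    If the offending line references a path you recognise, fixing that one configure argument is usually enough.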


  • Windows server 2008 R2 IIS7 file permissions

    - by StealthRT
    Hey all, I am trying to figure out why I cannot access an index.php file within the wwwroot/mollify/backend directory. It keeps coming up with this: Server Error 403 - Forbidden: Access is denied. You do not have permission to view this directory or page using the credentials that you supplied. I've given Full Control permissions on the wwwroot directory to every account I could think of (IUSR, Guest, GUESTS, IIS_IUSRS, Users, Administrators, NETWORK, NETWORK SERVICE, SYSTEM, CREATOR OWNER & Everyone). I also added index.php to the "Default Document" list under my website settings in IIS 7 Manager. What else am I missing? Thanks! David
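    Since the NTFS permissions are already wide open, the 403 is more likely IIS refusing the request before PHP ever runs: either index.php is not actually taking effect as a default document for that application, or there is no *.php handler mapping there, so IIS falls back to (disabled) directory browsing. A sketch using appcmd (site and application paths assumed from the description):

        rem make index.php a default document for the mollify backend application
        %windir%\system32\inetsrv\appcmd set config "Default Web Site/mollify/backend" /section:system.webServer/defaultDocument /+"files.[value='index.php']"

        rem confirm a *.php (FastCGI) handler mapping is visible at that path
        %windir%\system32\inetsrv\appcmd list config "Default Web Site/mollify/backend" /section:system.webServer/handlers

    Requesting the file explicitly (/mollify/backend/index.php) is a quick way to tell the two cases apart: a 403 on the explicit URL points at the handler mapping rather than the default document.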


  • Properly Hosting Multiple Sites on VDS

    - by Aristotle
    I'm going to be moving about 7-10 websites (5-8 with MySQL databases) onto our new Virtual Private Server. I'm curious, though, what the best way to host many sites on a single server is. Do I create a directory for each site immediately within my root directory, and then point the domain names for each site to http://123.123.123.123/siteDirectory - or is there a more appropriate way to do this? I'm very interested in maintaining control over how many concurrent connections each site can have at any given time - would I be able to do that at the directory level, or am I required to limit the concurrent connections for the VPS itself?
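    Pointing each domain at http://IP/siteDirectory works, but the more conventional layout is one name-based virtual host per site, each with its own document root; that also gives a natural place to apply per-site connection or bandwidth limits (e.g. with modules such as mod_qos or mod_bw) instead of a single VPS-wide cap. A minimal sketch, assuming Apache (domains and paths are placeholders):

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName site1.example.com
            DocumentRoot /var/www/site1/public_html
        </VirtualHost>

        <VirtualHost *:80>
            ServerName site2.example.com
            DocumentRoot /var/www/site2/public_html
        </VirtualHost>

    Each domain's DNS A record then points at the same VPS address, and the web server picks the vhost from the Host header.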


  • Apache 2 Symbolic link not allowed or link target not accessible

    - by djechelon
    While the title of this question matches an already-asked question, in my case I have already set Options +FollowSymLinks. The setup is the following: my hosting layout includes an htdocs/ directory that is the default document root for HTTP sites, and an htdocs-secure directory for HTTPS. They are meant for sites that need a different HTTPS version. When both share the same files I create a link from htdocs-secure to htdocs with ln -s htdocs htdocs-secure - but here comes the problem! The log still says Symbolic link not allowed or link target not accessible: /path/to/htdocs-secure Vhost fragment:

        Header always set Strict-Transport-Security "max-age=500"
        DocumentRoot /path/to/htdocs-secure
        <Directory "/path/to/htdocs-secure">
            allow from all
            Options +FollowSymLinks
        </Directory>

    I think it's a correct setup. The HTTP version of the site is accessible, so it doesn't look like a permission problem. How do I fix this? [Added] Other info: I use MPM-ITK and I set AssignUserId to the owner/group of both directories.
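    Since both roots are meant to serve identical files, one way to sidestep the symlink check entirely (a sketch, keeping the MPM-ITK setup in mind) is to drop the link and point the HTTPS vhost's DocumentRoot straight at the real directory:

        DocumentRoot /path/to/htdocs
        <Directory "/path/to/htdocs">
            Options +FollowSymLinks
            allow from all
        </Directory>

    If the link has to stay, make sure the AssignUserId user can traverse every component of the link target's path, since with MPM-ITK the access check runs as that user rather than as the main Apache user.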


  • Virtualmin - Added Virtual Server - Stopped access to Rails app?

    - by Dan
    Hi, sorry if this sounds pretty simple; I'm new to Virtualmin and to running servers in general. I recently purchased a VPS and installed Virtualmin with no problems. I then installed mod_rails and uploaded my first Rails app, which I got working by adding the following to my Apache httpd.conf file:

        <VirtualHost *:80>
            ServerName testing.mydomain.com
            DocumentRoot /home/myapp/public
            <Directory /home/myapp/public>
                Allow from All
                AllowOverride all
                Options -MultiViews
            </Directory>
            RailsBaseURI /
        </VirtualHost>

    I then tried adding a virtual server through Virtualmin, using mydomain.com. Now, the site this created (plus several sub-servers) is working as expected. However, my original Rails app is no longer accessible: its URL now sends me to the parent application (i.e. mydomain.com). The Rails app is not located within the parent's application directory - would this be a problem? Can anyone help? Any advice appreciated. Thanks.
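    When Virtualmin starts managing virtual hosts it writes its own NameVirtualHost and <VirtualHost> blocks, and a hand-added vhost can stop matching if its address form differs (for example *:80 versus the explicit IP:80); unmatched names such as testing.mydomain.com then fall through to the first listed vhost, which is the behaviour described. A sketch of the kind of entry to end up with, ideally created as a Virtualmin sub-server for testing.mydomain.com and then edited to these directives:

        # keep the address form identical to the vhosts Virtualmin generates (*:80 vs IP:80)
        <VirtualHost *:80>
            ServerName testing.mydomain.com
            DocumentRoot /home/myapp/public
            <Directory /home/myapp/public>
                Allow from All
                AllowOverride all
                Options -MultiViews
            </Directory>
            RailsBaseURI /
        </VirtualHost>

    Letting Virtualmin own the entry also keeps it from being shadowed or rewritten the next time a server is added.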


  • Ubuntu 12.04 server and TFTP access violation on put command

    - by SMYERS
    I installed tftp as per this document: http://icesquare.com/wordpress/solvedtftp-error-code-2-access-violation/ I followed it to the letter 3 times, and every time I put a file I get: root@CiscoCFG:~# tftp localhost tftp put test Error code 2: Access violation tftp root@CiscoCFG:~# tftp localhost tftp put test Error code 2: Access violation If I touch the file name, chmod 777 the file, and then do a put, it works perfectly fine. My config is as follows:

        service tftp
        {
            protocol    = udp
            port        = 69
            socket_type = dgram
            wait        = yes
            user        = nobody
            server      = /usr/sbin/in.tftpd
            server_args = -s /svr/tftp
            disable     = no
        }

    The directory /svr/tftp permissions are 777: drwxrwxrwx 3 nobody nobody 4096 Nov 14 10:32 svr This thing should have full permissions, as would anyone who wanted to write or read from that directory. I see nothing in the logs; I'm really stumped on this. If the file is already in the directory I can read it all day long. I just can't make NEW files - I cannot put them, but I can do gets; I can only put to an existing file with permissions at 777. Thanks
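    The symptom (existing files can be overwritten, new files cannot be created) matches the default behaviour of HPA's in.tftpd, which refuses to create files that do not already exist unless it is started with -c. A sketch of the change (the ownership line is only needed if the tree is not already writable by the nobody user):

        server_args = -c -s /svr/tftp

        chown -R nobody:nogroup /svr/tftp
        service xinetd restart

    After restarting xinetd, a put of a brand-new file should succeed without the touch/chmod workaround.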


  • how to limit disk space per user in a PHP web application & CentOS

    - by solid
    We have a web application written in PHP and we want all our users to be able to upload images up to, e.g., 50 MB in total. We will create a directory structure so that every user has their own folder, like app/user1/images app/user2/images ... Now, every time a user uploads an image, we need to check whether this is still allowed, but we don't want 1000 users continuously scanning our hard drive counting file sizes in their directories. So writing a script that counts all file sizes in a user directory is not an option, I guess? Is there an easier way to calculate used-up space per user and limit our app accordingly?
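    A common pattern is to keep a per-user byte counter in the database and update it whenever a file is added or removed, so the upload path never has to scan the directory at all; an occasional background job (e.g. a nightly du) can reconcile the counter if it drifts. A rough PHP/PDO sketch - table and column names are invented:

        <?php
        // schema assumed: user_storage(user_id INT PRIMARY KEY, used_bytes BIGINT)
        const QUOTA_BYTES = 52428800; // 50 MB

        function canUpload(PDO $db, int $userId, int $fileSize): bool {
            $stmt = $db->prepare('SELECT used_bytes FROM user_storage WHERE user_id = ?');
            $stmt->execute([$userId]);
            $used = (int) $stmt->fetchColumn();
            return ($used + $fileSize) <= QUOTA_BYTES;
        }

        function recordUpload(PDO $db, int $userId, int $fileSize): void {
            $stmt = $db->prepare('UPDATE user_storage SET used_bytes = used_bytes + ? WHERE user_id = ?');
            $stmt->execute([$fileSize, $userId]);
        }

    Filesystem quotas would also work, but only if each application user maps to a real system user, which is rarely the case for a PHP app.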


  • Setting user's group and umask has no effect

    - by Andrew Vit
    I'm trying to allow my "deploy" user to have access to files created by www-data: I added "deploy" to the www-data group. I set umask to 002. When I run the following commands, I'm not seeing the result I expect: deploy@ubuntu-lucid-32-generic:/var/www$ groups www-data adm dialout cdrom plugdev lpadmin sambashare admin deploy sysadmin deploy@ubuntu-lucid-32-generic:/var/www$ newgrp www-data deploy@ubuntu-lucid-32-generic:/var/www$ umask 0002 deploy@ubuntu-lucid-32-generic:/var/www$ mkdir test deploy@ubuntu-lucid-32-generic:/var/www$ ls -la test total 0 drwxr-xr-x 1 deploy deploy 68 Nov 7 20:37 . drwxr-xr-x 1 deploy deploy 476 Nov 7 20:37 .. I see that: The folder doesn't belong to the www-data group. The folder permissions don't have group-write (775). Note that the /var/www directory is owned by the deploy user: drwxr-xr-x 1 deploy deploy 510 Nov 7 20:45 . How can I give www-data selective access to directories? Or, how to share the /var/www directory with my deploy user: I don't care who owns it, as long as I can write to it, and so can www-data. (Ideally I would set up a directory with SGID access for www-data.)
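    Rather than relying on newgrp and a per-session umask (newgrp starts a new shell, and every session has to get both right), a more robust setup for a shared /var/www is to give the tree the www-data group, mark the directories setgid so new files inherit that group, and grant group write. A sketch:

        sudo chgrp -R www-data /var/www
        sudo chmod -R g+w /var/www
        sudo find /var/www -type d -exec chmod g+s {} \;

    With the setgid bit in place, files created there by either deploy or www-data come out group-owned by www-data regardless of the creating user's primary group; only the umask (for the group-write bit) still has to be set in whatever process creates the files.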


  • What do these "Cron Daemon" email errors mean?

    - by Meltemi
    Anyone know what this means? Getting one of these every minute in one user's inbox: From: Cron Daemon <[email protected]> Subject: Cron <joe@mail> /tmp/.d/update >/dev/null 2>&1 To: [email protected] Received: from murder ([unix socket]) by mail.domain.com (Cyrus v2.2.12-OS X 10.3) with LMTPA; Tue, 04 May 2010 10:35:00 -0700 shell-init: could not get current directory: getcwd: cannot access parent directories: Permission denied job-working-directory: could not get current directory: getcwd: cannot access parent directories: Permission denied
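    The getcwd messages mean the shell that cron spawns cannot read the directory it starts in (normally the job owner's home directory, or a directory that has since been removed or had its permissions tightened); the job itself may still be running. It is also worth confirming that a job in /tmp/.d firing every minute is actually something you or the user set up, since that location is a little unusual. A sketch of a workaround that gives the job a readable working directory:

        # crontab -e -u joe
        * * * * * cd / && /tmp/.d/update >/dev/null 2>&1

    If the errors persist, check the permissions on the user's home directory and on each of its parent directories.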


  • ADF Seeded Customizations in JDeveloper 11.1.2.1

    - by Dmitry Nefedkin
    For the ADF training I needed a demo application that shows ADF seeded customizations functionality. I'm using the latest JDeveloper 11.1.2.1, so I decided to download the "Customizing and Personalizing an ADF Application" completed tutorial application available here. I downloaded and unzipped CustomizeApp.zip and opened CustomizeApp.jws in JDeveloper 11.1.2.1 using the Customization Role. The result was the following: MDS-00036 "Cannot instantiate the class oracle.model.mycompany.SiteCC". I thought: "OK, that's because the SiteCC class is not accessible to the JDeveloper classloader; I should jar it and put it into <JDEVELOPER_HOME>\jdev\lib\patches like I did in JDeveloper 11.1.1.5 and earlier." No way: in JDeveloper 11.1.2 we do not have this patches directory at all! It seems that is because of the new architecture of the JDeveloper plugins based on OSGi. I looked through the tutorial and did not find any step related to jar-ing the SiteCC class and moving it to a specific directory. So, JDeveloper 11.1.2 is smart enough to find my customization class and add it to the classpath without any specific actions from my side. But why am I getting this "cannot instantiate the class" error? I checked the full path to my CustomizeApp.jws - c:\temp\ADF personalizations\CustomizeApp\CustomizeApp.jws - and noticed the space in the name of the directory. Was it the root cause of the issue? Yes! I renamed the ADF personalizations folder to pers, opened c:\temp\pers\CustomizeApp\CustomizeApp.jws, and got the expected behaviour. So, be aware of spaces in paths when working with JDeveloper…


  • Trouble getting FTP login to work in IIS6

    - by Frank Rosario
    Hello all, I'm trying to set up an FTP site in IIS6 for one of my clients to pick up files from us. I've created the FTP site and set it not to isolate users (not necessary, as the FTP will be read-only with authentication). Here's the problem. The FTP site is to be password protected, so I turned off anonymous access on it. I then created an ftpuser account on the machine and gave it read and browse-directory permissions on the FTP root directory. However, when I test the ftpuser login, I get a 530 "ftpuser cannot login" error. Yet if I browse to the same directory over HTTP (anonymous access turned off there as well) and enter the ftpuser login info, I can download files and browse directories successfully. Why does ftpuser work over HTTP but not FTP? Shouldn't I be able to log in over FTP with the ftpuser credentials I just created? Thanks in advance, - Frank
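    With a local account that clearly works for HTTP authentication, a 530 from IIS 6 FTP is often an account-rights or name-format issue rather than NTFS permissions: the account generally needs the "Allow log on locally" user right (Local Security Policy, User Rights Assignment), and on some setups the login has to be given with the machine prefix. A sketch of a quick test (server and machine names are placeholders):

        C:\> ftp myserver
        User (myserver:(none)): MYSERVER\ftpuser
        Password: ********

    If that still fails, the FTP site's log and the security event log usually record the specific logon failure reason.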


  • How to avoid tilde ~ in Bash prompt?

    - by Jirka
    Hello! I have set my prompt in bash in such a way that I can use it directly in an scp command. My current PS1 string: PS1="\h:\w\n$" And the prompt looks like this: lnx-hladky:/tmp/plugtmp $ What I don't like at all is the fact that the $HOME directory is displayed as a tilde. Can this be avoided? It's causing problems when switching between different users. Example: lnx-hladky:~/DOC $ The documentation says: \w : the current working directory, with $HOME abbreviated with a tilde \W : the basename of the current working directory, with $HOME abbreviated with a tilde Is there any possibility of avoiding $HOME being abbreviated with a tilde? I have found one way around it, but I feel like it's overcomplicated: PROMPT_COMMAND='echo -ne "\e[4;35m$(date +%T)\e[24m$(whoami)@$(hostname):$(pwd)\e[m\n"' PS1=$ Can anyone propose a better solution? I have a feeling it's not quite OK to run so many commands (date, whoami, hostname, pwd) just to get a prompt. Thanks a lot! Jirka
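    The tilde abbreviation is built into the \w and \W escapes themselves, but the prompt string also undergoes parameter expansion (as long as the promptvars shell option is on, which it is by default), so $PWD can be used instead of \w without any PROMPT_COMMAND:

        PS1='\h:$PWD\n$'

    The single quotes matter: they keep $PWD from being expanded once at assignment time, so it is re-evaluated every time the prompt is drawn.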

