Search Results

Search found 23480 results on 940 pages for 'directory structure'.

  • Default Save Directory for gnome-screenshot?

    - by trent
    Are there any configuration options for specifying the default save location for gnome-screenshot, or is it hard-coded into the source? It used to be ~/Desktop, which seems to have changed to ~/Pictures (in 12.04). The only possible solution I've seen is about setting the default file name (it now includes time stamp information instead of simply Screenshot#), but that doesn't really address the location. Also, this post suggested that the last save location is remembered the next time you take a screenshot, but in my experience that doesn't seem to be the case. And in any case, the corresponding entry in gconf-editor doesn't accurately reflect the last location either, so it is more than likely left over from an older version of gnome-screenshot.
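
    A possible lever here, assuming 12.04's gnome-screenshot (GNOME 3.4) ships its GSettings schema: the save directory may be exposed as a key rather than compiled in. A quick, hedged check from a terminal (the schema and key names are assumptions, so verify with the first command):

        # list whatever keys this build actually exposes
        gsettings list-recursively org.gnome.gnome-screenshot
        # if an auto-save-directory key exists, point it somewhere else
        gsettings set org.gnome.gnome-screenshot auto-save-directory "file:///home/$USER/Desktop"

    If the first command reports no such schema, the location is likely hard-coded after all, and the gconf-editor entry really is a leftover.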

    Read the article

  • Adding a website link to the Member Directory in DotNetNuke 6.2

    - by Chris Hammond
    In case you missed it, DotNetNuke 6.2 was released today; check out Will Morgenweck's blog post for more details on the release. With some of its new features, DotNetNuke 6.2 makes it easier to customize the listing of members on your site, and also the profile display for users on the website. I started implementing DotNetNuke 6.2 on one of my racing websites last night (yeah, I upgraded before the release happened, a benefit of working for the corp). In doing so I configured the profile...(read more)

    Read the article

  • Error when running binary with root setuid under encrypted home directory

    - by carestad
    I'm using a VPN script for Juniper's Secure Access protocol from here, which executes a binary located at ~/.juniper_networks/network_connect/ncsvc with the following permissions:

        -rws--s--x 1 root root 1225424 okt. 25 13:54 ncsvc

    But when I run it, I get the following error:

        ncsvc> Failed to setuid to root. Error 1: Operation not permitted

    Moving/copying the ~/.juniper_networks folder to e.g. /opt/juniper (with the same ownership and permissions), I don't get the error. In the Ubuntu Forums thread someone pointed out that it's probably because I have encrypted my /home, and thus a "problem" with ecryptfs. How can I fix this?
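
    For context: ecryptfs typically mounts the private home directory with the nosuid option, which is exactly what makes a setuid-root binary fail there. A hedged workaround, consistent with the observation above that /opt/juniper works, is to relocate the directory outside the encrypted home and symlink it back (paths taken from the question):

        # confirm nosuid is actually in effect on the home mount first
        mount | grep "$HOME"
        sudo mv ~/.juniper_networks /opt/juniper_networks
        ln -s /opt/juniper_networks ~/.juniper_networks

    The software keeps finding its files at the old path, but the binary now executes from a filesystem that permits setuid.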

    Read the article

  • Unable to use "Manage Content and Structure" after removing Project Server from the SharePoint farm

    - by Brian
    We're no longer using Office Project Server, and I've removed it from the farm in which it was installed. However, now that it's been removed, I am unable to access the "Manage Content and Structure" link on some of our SharePoint sites. I get an error indicating that SharePoint failed to find the XML file at location '12\Template\Features\PWSCommitments\feature.xml'. Does anyone have an idea how to fix this?
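
    One commonly suggested approach for orphaned feature references like this, assuming the feature name matches the folder in the error message, is to force-uninstall it with stsadm on the server. A sketch, not a guaranteed fix; try it on a non-production farm first:

        stsadm -o uninstallfeature -name PWSCommitments -force

    This removes the farm's registration of the missing feature so site pages stop trying to load its feature.xml.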

    Read the article

  • mount another drive to the same directory

    - by Ken Autotron
    I recently purchased a server that was advertised as 2TB (two 1TB drives) in size, but in use it reports only one drive's worth of space. I would like to be able to use both as if they were one drive. Here are the specs. sudo lshw -C disk:

        *-disk
           description: ATA Disk
           product: TOSHIBA DT01ACA1
           vendor: Toshiba
           physical id: 0.0.0
           bus info: scsi@1:0.0.0
           logical name: /dev/sda
           version: MS2O
           serial: 13EJ81XPS
           size: 931GiB (1TB)
           capabilities: partitioned partitioned:dos
           configuration: ansiversion=5 signature=0005b3dd
        *-disk
           description: ATA Disk
           product: TOSHIBA DT01ACA1
           vendor: Toshiba
           physical id: 0.0.0
           bus info: scsi@4:0.0.0
           logical name: /dev/sdb
           version: MS2O
           serial: 13OX3TKPS
           size: 931GiB (1TB)
           capabilities: partitioned partitioned:dos
           configuration: ansiversion=5 signature=00030e86

    and fdisk -l:

        Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00030e86

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *        4096    41947135    20971520   fd  Linux raid autodetect
        /dev/sdb2        41947136  1952468991   955260928   fd  Linux raid autodetect
        /dev/sdb3      1952468992  1953519615      525312   82  Linux swap / Solaris

        Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x0005b3dd

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        4096    41947135    20971520   fd  Linux raid autodetect
        /dev/sda2        41947136  1952468991   955260928   fd  Linux raid autodetect
        /dev/sda3      1952468992  1953519615      525312   82  Linux swap / Solaris

        Disk /dev/md2: 978.2 GB, 978187124736 bytes
        2 heads, 4 sectors/track, 238815216 cylinders, total 1910521728 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00000000

        Disk /dev/md2 doesn't contain a valid partition table

        Disk /dev/md1: 21.5 GB, 21474770944 bytes
        2 heads, 4 sectors/track, 5242864 cylinders, total 41942912 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00000000

        Disk /dev/md1 doesn't contain a valid partition table

    Is it possible to mount both drives to say /Home/ so I would have 2TB of usable space?
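
    A note on the output above: both disks carry "Linux raid autodetect" partitions and md devices (/dev/md1, /dev/md2) already exist, with md2 at roughly the size of a single data partition. That strongly suggests the two drives are mirrored (RAID1), which is why only ~1TB is usable; the space isn't missing, it's redundancy. A hedged first step is to confirm before changing anything:

        # show which arrays exist and their RAID level
        cat /proc/mdstat
        sudo mdadm --detail /dev/md2

    If it is RAID1, getting 2TB at one mount point means rebuilding the array as RAID0 (or spanning both disks with LVM) and giving up the mirror, which on a rented server is a reinstall-level operation rather than a simple mount.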

    Read the article

  • bash dirtrim produces strange results with ~/foo/bar/var directory

    - by queueoverflow
    In some of my projects I keep a var or a lib folder for runtime output and external libraries. To keep my prompt short, I have export PROMPT_DIRTRIM=3 in my .bashrc. This works very well for most paths, but as soon as there is a /var in the path, it goes nuts. For ~/Projects/someproject/var/gfx the prompt shows:

        ~/.../gfxr/gfxr/gfxr/gfxr/gfxr/gfx

    Interestingly, it works with /opt/lampp/lib. Is there some way to get around this? Update: I've posted my .bashrc and my .bash_functions.
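
    To narrow down whether PROMPT_DIRTRIM itself or something sourced from those files is at fault, a hedged debugging sketch is to reproduce in a bare shell with no rc files (the project path is the hypothetical one from the question):

        # start a shell that reads neither .bashrc nor .bash_functions
        env -i HOME="$HOME" TERM="$TERM" bash --noprofile --norc
        # then, inside that shell:
        export PROMPT_DIRTRIM=3
        PS1='\w \$ '
        cd ~/Projects/someproject/var/gfx

    If the bare shell renders ~/.../someproject/var/gfx correctly, the culprit is almost certainly a PROMPT_COMMAND or PS1 manipulation in .bashrc/.bash_functions rather than bash's trimming itself.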

    Read the article

  • How to completely remove ldap and remove the directory tree

    - by rugbert
    So I followed this guide: https://help.ubuntu.com/11.04/serverguide/C/openldap-server.html to install and configure LDAP, but then I discovered both phpLDAPadmin and Luma and have decided to rebuild my tree from scratch using one of those tools. However, I'm not sure how to completely remove LDAP now. I can remove it using apt-get, but if I reinstall it and log in using phpLDAPadmin, it seems that it's still finding the old authentication data and gives me a credential error.
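
    A common way to get a genuinely clean slate, assuming the default Ubuntu package layout from that guide, is to purge (not just remove) the packages and then delete the database and generated config by hand:

        sudo apt-get purge slapd ldap-utils
        sudo rm -rf /var/lib/ldap /etc/ldap/slapd.d

    purge removes the configuration that a plain apt-get remove leaves behind, and those two directories hold the directory database and the cn=config tree; leftovers in either are what usually make a reinstall "remember" old credentials.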

    Read the article

  • Google webmaster showing duplicate meta descriptions for search directory

    - by Mike Flynn
    What is the best way to get rid of this error in Google Webmaster Tools? Do I really need to add "- Page 2" at the end of the description?

        Description: Kansas basketball tournaments posted by organizations and teams for youth, AAU, and NCAA certified e
        Pages:
            /youth-basketball-tournaments/kansas
            /youth-basketball-tournaments/kansas?page=2
            /youth-basketball-tournaments/kansas?page=3
            /youth-basketball-tournaments/kansas?page=9
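
    Rather than hand-editing a description per page, the era-appropriate fix was to declare the pages as a paginated series so Google consolidates them. A hedged sketch for the <head> of page 2, with URLs copied from the question:

        <link rel="prev" href="/youth-basketball-tournaments/kansas">
        <link rel="next" href="/youth-basketball-tournaments/kansas?page=3">

    Appending something like "- Page 2" to each description also silences the warning, but rel="prev"/"next" addresses the duplication itself.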

    Read the article

  • Help With Hard links And Symlinks Moving Directory And Files

    - by Julio
    This is what I would like to do. I have a symlink /var that points to /tmpfs/var.1. On startup, a script called cache_tmpfs is run from /etc/rc.local; it copies the contents of /var.backup/ into /tmpfs/var.1/:

        cp -dpRxf /var.backup/* /tmpfs/var.1/

    Now the problem is that the kernel has the log file /var/log/messages open there. Is it possible to remove the current /var symlink and recreate a new one (pointing to /var.backup instead of /tmpfs/var.1) without issues, given that files stay referenced by the system once they are opened?

        rm /var && ln -s /var.backup /var

    Thanks...
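
    A hedged sketch of the swap: the symlink can be replaced atomically, but a process keeps writing to whatever file it already has open via its file descriptor, regardless of where the path now points, so the logger has to be told to reopen its files (the daemon name below is an assumption; use whichever syslog you actually run):

        # -n treats the existing /var symlink as a file instead of descending into it
        ln -sfn /var.backup /var
        # make syslog close and reopen /var/log/messages at the new target
        kill -HUP $(pidof syslogd)

    Until the HUP, messages keep landing in the old /tmpfs/var.1/log/messages; nothing breaks, they just go to the old inode.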

    Read the article

  • Remove folder structure from archive and fix error

    - by Michael
    I am trying to write a script to back up each of my Plesk vhosts to an individual file, and I am having two problems: I would like to remove the folder structure from the archive (the tar is three folders deep), and I am getting this error: tar: Removing leading `/' from member names

    The code:

        FILES=/var/www/vhosts/*
        FNAME=""
        for f in $FILES
        do
          FNAME=`basename $f`
          tar cfv "/root/backup/ftp/$FNAME.tar" $f
        done

    Sample output:

        tar: Removing leading `/' from member names
        /var/www/vhosts/mydomain.com/
        /var/www/vhosts/mydomain.com/conf
        /var/www/vhosts/mydomain.com/etc/
        /var/www/vhosts/mydomain.com/etc/group
        /var/www/vhosts/mydomain.com/etc/termcap
        /var/www/vhosts/mydomain.com/etc/passwd
        /var/www/vhosts/mydomain.com/usr/
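
    For the first problem, a hedged rework: tar's -C flag changes directory before archiving, so archiving "." from inside each vhost stores paths relative to the domain folder, and as a side effect there is no leading "/" left for tar to strip, so the message disappears too:

        for f in /var/www/vhosts/*; do
          fname=$(basename "$f")
          tar cf "/root/backup/ftp/$fname.tar" -C "$f" .
        done

    The second issue, by the way, is a warning rather than an error; tar strips the leading slash precisely so the archive can be extracted safely anywhere.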

    Read the article

  • Plesk Folder Structure Doesn't Allow Creating Folders

    - by user39110
    Hi, I use the Kohana framework, which has three folders: application, system, and public. I uploaded the public folder to httpdocs, but the application and system folders should stay outside httpdocs. I would upload them one level above httpdocs, but the Plesk structure doesn't allow that. What should I do? (I am the owner of the VPS.)
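
    If moving the folders above httpdocs really is blocked, a hedged alternative is to keep application and system in a folder inside the vhost that Plesk does not serve (the private/ name below is an assumption; create whatever non-web-served directory your Plesk layout allows) and point Kohana's front controller at them, since in Kohana's index.php the locations are plain PHP variables:

        // in httpdocs/index.php
        $application = '../private/application';
        $system      = '../private/system';

    Only httpdocs is served over HTTP, so the framework folders stay unreachable from the web even though they live inside the same vhost.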

    Read the article

  • How to set JS source directory in apache2?

    - by highBandWidth
    I am trying to run a very basic webserver for development/debugging. The static HTML is delivered correctly, but the JavaScript libraries are not reaching the browser. The page HTML says something like:

        <html>
        <head>
        <script type='text/javascript' src="/lib/json.js"></script>
        ...

    Now, I have set up a mapping for /lib/ in my httpd.conf:

        ScriptAlias /lib/ "/SomeFolder/lib/"

    When I do this, the files can't be fetched, because this is what I see in my Apache error log:

        ... [error] [client ::1] client denied by server configuration: /SomeFolder/lib/json.js, referer: http://localhost/SomeSite

    It seems that Apache is not allowing access to the folder, so I add this to httpd.conf:

        <Directory "/SomeFolder/lib/">
            Allow from all
        </Directory>

    After this, browsing the page still does not run the JS; instead I see the following error in my Apache error log:

        [error] [client ::1] (13)Permission denied: exec of '/SomeFolder/lib/json.js' failed, referer: http://localhost/SomeSite

    So now it seems that Apache is trying to run the JS files on the server like a CGI script. But I have not made that folder a cgi-bin folder. The only lines where SomeFolder is mentioned by name are these lines in httpd.conf:

        ScriptAlias /lib/ "/SomeFolder/lib/"
        <Directory "/SomeFolder/lib/">
            Allow from all
        </Directory>
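
    The giveaway is "exec of ... failed": ScriptAlias doesn't just map a URL, it marks the target directory as CGI, so Apache tries to execute every file in it. A hedged fix is a plain Alias, which serves files statically (Apache 2.2-style access directives, matching the log format above):

        Alias /lib/ "/SomeFolder/lib/"
        <Directory "/SomeFolder/lib/">
            Order allow,deny
            Allow from all
        </Directory>

    After changing it, restart Apache and force-reload the page so the browser doesn't replay a cached failure.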

    Read the article

  • Transform data to a new structure

    - by rAyt
    Hi, I've got an Access database from one of our clients and want to import its data into a new SQL Server 2008 database structure I designed. It's similar to the Access database (including all the rows and so on), but I normalized the entire database. Is there any tool (Microsoft tools preferred) to map the old database to my new design? Thanks

    Read the article

  • Incorrect directory permissions with OpenSSH on Cygwin on Windows Server 2008 SP2

    - by Davy Brion
    I ran into a weird directory permission problem when logged in to a Win2008SP2 (not R2) server through SSH. When I open a local Cygwin shell on the server, I can do this:

        myUser@myServer ~ $ cd /cygdrive/c/Windows/System32/inetsrv/
        myUser@myServer /cygdrive/c/Windows/System32/inetsrv $ cd config
        myUser@myServer /cygdrive/c/Windows/System32/inetsrv/config $

    I have no issues accessing the config directory when using a local Cygwin shell. myUser has all the necessary permissions to access the directory as well; in fact, myUser is a local administrator on the machine. Listing the permissions of the config folder through the local Cygwin shell shows the following output:

        4 drwx------+ 1 SYSTEM SYSTEM 0 Aug 2 09:38 config

    But when I log into the server with an SSH client (in this case PuTTY), I run into the following problem:

        myUser@myServer ~ $ cd /cygdrive/c/Windows/System32/inetsrv/
        myUser@myServer /cygdrive/c/Windows/System32/inetsrv $ cd config
        -bash: cd: config: Permission denied

    It also doesn't list the proper permissions through SSH:

        0 drwxr-x--- 1 ???????? ???????? 0 Aug 2 09:38 config

    When I look at the running processes on the server with Task Manager (over a Remote Desktop connection), it shows that all bash.exe processes are running under the myUser account, so I don't understand why I can't access that particular directory through SSH but have no problems accessing it in a local Cygwin shell. I'm using OpenSSH 5.9p1-1. I'm not sure what the Cygwin version is... I used the latest setup.exe (version 2.738) of Cygwin, but I can't seem to find any other Cygwin-related version number. I doubt that it's related to SSH/Cygwin, though, because when I connect from the Win2008SP2 server to my local Win7 machine through SSH (using the same OpenSSH/Cygwin versions) I can access the /cygdrive/c/Windows/System32/inetsrv/config folder without issues. Does anyone have an idea what the issue could be?
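
    One hedged explanation that fits these symptoms: with a password login, sshd can create a full Windows token for the user, but in some setups (notably public-key authentication) the Cygwin sshd only gets a restricted token without the user's full group memberships, so an ACL-protected folder like inetsrv\config becomes unreadable even for an administrator. Cygwin's documented workaround is to store the account password so services can build a proper token:

        # run in a Cygwin shell on the server as myUser; prompts for the Windows password
        passwd -R

    Whether this applies depends on how you authenticate; if you already log in over SSH with a password, this particular cause is ruled out.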

    Read the article

  • Setup for mounting kerberized nfs home directory - gssd not finding valid kerberos ticket

    - by janm
    Our home directories are exported via kerberized NFS, so a user needs a valid Kerberos ticket to be able to mount their home. This setup works fine with our existing clients and server. Now we want to add an 11.10 client, and thus set up LDAP and Kerberos together with pam_mount. The LDAP authentication works and users can log in via SSH, but their homes cannot be mounted. When pam_mount is configured to mount as root, gssd does not find a valid Kerberos ticket and the mount fails:

        Nov 22 17:34:26 zelda rpc.gssd[929]: handle_gssd_upcall: 'mech=krb5 uid=0 enctypes=18,17,16,23,3,1,2 '
        Nov 22 17:34:26 zelda rpc.gssd[929]: handling krb5 upcall (/var/lib/nfs/rpc_pipefs/nfs/clnt2)
        Nov 22 17:34:26 zelda rpc.gssd[929]: process_krb5_upcall: service is '<null>'
        Nov 22 17:34:26 zelda rpc.gssd[929]: getting credentials for client with uid 0 for server purple.physcip.uni-stuttgart.de
        Nov 22 17:34:26 zelda rpc.gssd[929]: CC file '/tmp/krb5cc_65678_Ku2226' being considered, with preferred realm 'PURPLE.PHYSCIP.UNI-STUTTGART.DE'
        Nov 22 17:34:26 zelda rpc.gssd[929]: CC file '/tmp/krb5cc_65678_Ku2226' owned by 65678, not 0
        Nov 22 17:34:26 zelda rpc.gssd[929]: WARNING: Failed to create krb5 context for user with uid 0 for server purple.physcip.uni-stuttgart.de
        Nov 22 17:34:26 zelda rpc.gssd[929]: doing error downfall

    When pam_mount is instead configured with the noroot=1 option, it cannot mount the volume at all:

        Nov 22 17:33:58 zelda sshd[2226]: pam_krb5(sshd:auth): user phy65678 authenticated as [email protected]
        Nov 22 17:33:58 zelda sshd[2226]: Accepted password for phy65678 from 129.69.74.20 port 51875 ssh2
        Nov 22 17:33:58 zelda sshd[2226]: pam_unix(sshd:session): session opened for user phy65678 by (uid=0)
        Nov 22 17:33:58 zelda sshd[2226]: pam_mount(mount.c:69): Messages from underlying mount program:
        Nov 22 17:33:58 zelda sshd[2226]: pam_mount(mount.c:73): mount: only root can do that
        Nov 22 17:33:58 zelda sshd[2226]: pam_mount(pam_mount.c:521): mount of /Volumes/home/phy65678 failed

    So how can we allow users of a specific group to perform NFS mounts? If that does not work, can we make pam_mount use root but pass the correct uid?
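
    A hedged alternative to per-user mounting altogether: mount the whole export once at boot with the machine credential and let each user's own ticket authorize access to their files. This is the usual pattern for krb5 NFSv4 homes and sidesteps pam_mount's root-vs-user problem entirely. It assumes a host/ or nfs/ principal for the client in /etc/krb5.keytab and NFSv4 on the server; the server and export names below are placeholders:

        # /etc/fstab on the client
        nfsserver.example.com:/export/home  /home  nfs4  sec=krb5,_netdev  0  0

    With this in place, rpc.gssd uses the keytab for the mount itself, so the uid-0 "no credentials" failure in the first log never arises.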

    Read the article

  • Google Font API & Google Font Directory

    - by joelvarty
    There is a CSS element out there that looks like this:

        @font-face {
          font-family: '';
          src: url('…');
        }

    I’ve only used this tag in a bunch of old apps and sites that were built exclusively for IE back in the day. Well, it’s part of CSS 3, and Google is going to make it easy to find and share fonts. http://googlecode.blogspot.com/2010/05/introducing-google-font-api-google-font.html more later - joel
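
    For reference, the API the post announces wraps exactly that rule: you link a generated stylesheet and the service emits the @font-face block for you. A minimal hedged example, with the family name purely as an illustration:

        <link rel="stylesheet" href="http://fonts.googleapis.com/css?family=Inconsolata">
        <style> body { font-family: 'Inconsolata', sans-serif; } </style>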

    Read the article

  • win 7: copy all files from a complex folder structure to one folder

    - by Jason
    I have a complex folder structure like:

        h:\folder1\folder2\folder3
        h:\folder1\folder2\a
        h:\folder1\b
        h:\folder1\folder3
        h:\folder1\folder4\d

    only with probably hundreds of folders and a depth of around 4 (I'm guessing). Anyway, I want to run a command that will move all the files from every subfolder into the top-level folder, so something like moving h:\folder1\*\*.* to h:\folder1. Is there a tool I can use to do this? Does Win 7 have a command that will do this? Thanks, Jason
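
    Windows 7 can do this natively from a Command Prompt: for /r walks every subfolder recursively, so the sketch below moves each file it finds up to the top level. Hedged caveats: run it from h:\folder1, double the % signs if you put it in a .bat file, files already at the top level just produce a harmless complaint, and name collisions between subfolders will clash, so test on a copy first:

        cd /d h:\folder1
        for /r %d in (*) do @move "%d" "h:\folder1"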

    Read the article

  • Merge home directory after fresh installation with existing (Gentoo) home

    - by jhwist
    I reinstalled my desktop machine with Ubuntu 10.10, coming from Gentoo where I used Xfce. My home is usually NFS-mounted from a server. During the install I let the installer set up my user, but of course my NFS home wasn't mounted then; I have a regular /home/user now. If I mv /home /home.old and mount my NFS home on /home instead, I cannot log in because GNOME complains about some config files (sorry, no exact error message, as there is no way to copy & paste it). Which of my /home.old/user directories do I have to copy over to my NFS home so that GNOME is happy again?
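
    A hedged starting set: Ubuntu 10.10's GNOME keeps its state in a handful of dot-directories, so copying those from the freshly created local home into the NFS home usually quiets the complaints (the exact list is an assumption, and .ICEauthority ownership is a classic extra culprit):

        for d in .gconf .gconfd .gnome2 .gnome2_private .config .local .dbus; do
          cp -a /home.old/user/"$d" /home/user/ 2>/dev/null
        done
        chown user:user /home/user/.ICEauthority 2>/dev/null

    If login still fails, ~/.xsession-errors in the NFS home normally names the exact file GNOME is unhappy about.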

    Read the article

  • eAccelerator Issue - Cache Directory Empty.

    - by Tom
    Hi all, hoping someone can give me a hand with this. I've recently installed eAccelerator 0.9.6.1 on a CentOS LAMP server, and had it working fine using /tmp/accelerator as the cache directory. php.ini setup:

        zend_extension="/usr/local/lib/php/extensions/no-debug-non-zts-20060613/eaccelerator.so"
        eaccelerator.shm_size="200"
        eaccelerator.cache_dir="/var/cache/eaccelerator"
        eaccelerator.enable="1"
        eaccelerator.optimizer="1"
        eaccelerator.check_mtime="1"
        eaccelerator.debug="0"
        eaccelerator.filter=""
        eaccelerator.shm_max="0"
        eaccelerator.shm_ttl="3600"
        eaccelerator.shm_prune_period="180"
        eaccelerator.shm_only="1"
        eaccelerator.compress="1"
        eaccelerator.compress_level="9"

    php -v output:

        PHP 5.2.12 (cli) (built: Feb 3 2010 00:34:28)
        Copyright (c) 1997-2009 The PHP Group
        Zend Engine v2.2.0, Copyright (c) 1998-2009 Zend Technologies
            with eAccelerator v0.9.6.1, Copyright (c) 2004-2010 eAccelerator, by eAccelerator
            with the ionCube PHP Loader v3.3.20, Copyright (c) 2002-2010, by ionCube Ltd.

    I had to remove the cache directory while testing something. I remade it, re-set permissions, and found that eAccelerator was no longer creating cache files within the folder. I thought it might be down to ownership rights on the folder, so I chown'd it apache:apache, which made no difference. I recreated the directory in /var/cache instead, edited php.ini to point to the new cache dir location, chmod'd, chown'd etc., and still eAccelerator is not creating any cache files in the directory (it stays empty). Could someone suggest what I might be doing incorrectly here? I've read through numerous pages trying to troubleshoot the issue, to no avail. Any help appreciated.
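
    One line in the pasted php.ini stands out: eaccelerator.shm_only="1" tells eAccelerator to cache compiled scripts in shared memory only and never write them to disk, which would leave cache_dir empty by design no matter what its permissions are. A hedged first thing to try:

        ; in php.ini
        eaccelerator.shm_only="0"

    Then restart Apache and check the directory again; if files appear, the earlier working /tmp setup presumably predated the shm_only change.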

    Read the article

  • VMWare Workstation 8 can't find the headers directory

    - by BackSlash
    I'm having an issue with VMware Workstation 8. I installed it, but when I run it, it shows a window asking for the location of the kernel headers (screenshots in the original post). When I press "Browse" and select the linux-headers-3.8.0-31-generic folder, it still says that it can't find the C headers for that kernel. Why? P.S. I already tried sudo apt-get install linux-headers-3.8.0-31-generic, and the terminal says the package is already up to date.
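
    The module builder normally finds headers through the running kernel's build symlink rather than a manually browsed path, so a hedged check is to make sure that link exists for the kernel actually running (the header package only helps if it matches uname -r):

        uname -r
        ls -l /lib/modules/$(uname -r)/build
        sudo apt-get install linux-headers-$(uname -r) build-essential

    If the build link is present and Workstation still refuses it, re-running the module build from a terminal with sudo vmware-modconfig --console --install-all often gives a more specific error than the GUI.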

    Read the article

  • How to Submit to an Article Directory

    Article submission involves submitting your article to article directories. Most article directories allow users to include a link to their site in the resource box. There are also several directories that permit link inclusion within the article body.

    Read the article

  • Symbolic link all files in directory to show in another directory?

    - by Thomas Clayson
    What I want is for all files that are FTP'd into /home/ftp to also show up in /srv/ftp. /srv/ftp is password-protected and has files in it which I don't want to be accessible from the public FTP, so I wish that all files uploaded to /home/ftp were automatically symbolically linked (or otherwise visible) in /srv/ftp. Does this make sense? e.g.

        ls /srv/ftp:
        file.sh another.txt something_else.i386

    Then a user FTPs and drops a file in /home/ftp (or uses SSH, or whatever):

        ls /home/ftp:
        user_file.mk

        ls /srv/ftp:
        file.sh another.txt something_else.i386 user_file.mk

    I hope this makes sense. I have been told that this can probably be achieved using ln to create symbolic links, but I don't want to have to SSH in and create the links every time I (or someone else) put files over FTP. Thanks! :)
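
    If a true merge isn't strictly required, a hedged low-maintenance approximation is a bind mount: expose /home/ftp as a subdirectory of /srv/ftp, so uploads appear there instantly with no per-file linking (the incoming/ name is an invention):

        sudo mkdir -p /srv/ftp/incoming
        sudo mount --bind /home/ftp /srv/ftp/incoming
        # persist across reboots:
        echo '/home/ftp  /srv/ftp/incoming  none  bind  0  0' | sudo tee -a /etc/fstab

    Merging both directories into a single flat listing, as in the example above, would instead need something like a union/overlay filesystem or an inotify script that creates the symlinks for you.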

    Read the article

  • Use Entitlements To Secure LDAP-enabled Applications With Oracle Virtual Directory and Oracle Entitlements Server

    - by mark.wilcox
    I stumbled on an interesting article that shows how the author used OVD to expose OES security to protect a portal that only understood LDAP group-based authorization. This is great because it shows how you can use OES today to build central policies that can be used without needing to rewrite all of your applications, in particular if you just want to leverage rule-based groups. Posted via email from Virtual Identity Dialogue

    Read the article

  • Installing MoinMoin -wiki to my user directory on a server with no root access

    - by deiga
    Hello all, I've been trying to install the MoinMoin wiki on this webserver, where I have no root access. The server doesn't support WSGI, but it does support CGI/FastCGI/etc. I've scoured Google for a simple guide on how to accomplish this, but the only guides I found were from around 2004, and the others assumed root access. Can anyone link a good tutorial for my question, or just help me out here? Your help is appreciated :) P.S. Sorry if this is the wrong Stack Exchange site.
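
    For the era in question, MoinMoin's CGI install could go entirely under $HOME. A hedged sketch of the usual no-root sequence (all paths are assumptions based on a typical shared-hosting layout):

        # unpack the MoinMoin release tarball, then install under your home directory
        python setup.py install --prefix=$HOME/moin --record=install.log
        # copy a wiki instance and the CGI driver into place
        cp -r $HOME/moin/share/moin $HOME/mywiki
        cp $HOME/mywiki/server/moin.cgi ~/public_html/cgi-bin/

    moin.cgi then needs its sys.path lines edited to point at the --prefix site-packages and at the directory holding wikiconfig.py; the script shipped with those lines commented at the top for exactly this situation.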

    Read the article
