Search Results

Search found 26517 results on 1061 pages for 'large directory'.


  • remote desktop client with panning of large desktops?

    - by mikewse
    When using the standard Windows remote desktop client in full screen to connect to a large remote desktop, the remote desktop is usually resized down to match the local display. Are there any RDP clients that, in full screen, instead leave the remote desktop at its original size and pan across the larger area when the mouse reaches the edges of the local display? Thanks Mike

    Read the article

  • bash dirtrim produces strange results with ~/foo/bar/var directory

    - by queueoverflow
    In some of my projects, I keep a var or a lib folder for runtime output and external libraries. To keep my prompt short, I have export PROMPT_DIRTRIM=3 in my .bashrc. This works very well for most paths, but as soon as there is a /var in the path, it goes nuts: for ~/Projects/someproject/var/gfx the prompt shows ~/.../gfxr/gfxr/gfxr/gfxr/gfxr/gfx. Interestingly, it works fine with /opt/lampp/lib. Is there some way to get around this? Update: my .bashrc, my .bash_functions
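
    If PROMPT_DIRTRIM keeps misbehaving, one workaround is to do the trimming yourself in PROMPT_COMMAND instead. A minimal sketch, assuming a stock bash and a simple prompt (the function name is made up, and this discards any colors or extras in the existing PS1):

        # keep only the last three path components in the prompt
        _short_pwd() {
          local p=${PWD/#$HOME/\~}
          local IFS='/'
          local -a parts
          read -r -a parts <<< "$p"
          local n=${#parts[@]}
          if (( n > 3 )); then
            p=".../${parts[n-3]}/${parts[n-2]}/${parts[n-1]}"
          fi
          PS1="$p"' \$ '
        }
        PROMPT_COMMAND=_short_pwd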

    Read the article

  • mount another drive to the same directory

    - by Ken Autotron
    I recently purchased a server that was advertised as 2TB (2 x 1TB drives) in size. When I use it, it reports only one drive's worth of space, and I would like to be able to use both as if they were one drive. Here are the specs...

    sudo lshw -C disk

        *-disk
             description: ATA Disk
             product: TOSHIBA DT01ACA1
             vendor: Toshiba
             physical id: 0.0.0
             bus info: scsi@1:0.0.0
             logical name: /dev/sda
             version: MS2O
             serial: 13EJ81XPS
             size: 931GiB (1TB)
             capabilities: partitioned partitioned:dos
             configuration: ansiversion=5 signature=0005b3dd
        *-disk
             description: ATA Disk
             product: TOSHIBA DT01ACA1
             vendor: Toshiba
             physical id: 0.0.0
             bus info: scsi@4:0.0.0
             logical name: /dev/sdb
             version: MS2O
             serial: 13OX3TKPS
             size: 931GiB (1TB)
             capabilities: partitioned partitioned:dos
             configuration: ansiversion=5 signature=00030e86

    and fdisk -l

        Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00030e86

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *        4096    41947135    20971520   fd  Linux raid autodetect
        /dev/sdb2        41947136  1952468991   955260928   fd  Linux raid autodetect
        /dev/sdb3      1952468992  1953519615      525312   82  Linux swap / Solaris

        Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x0005b3dd

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        4096    41947135    20971520   fd  Linux raid autodetect
        /dev/sda2        41947136  1952468991   955260928   fd  Linux raid autodetect
        /dev/sda3      1952468992  1953519615      525312   82  Linux swap / Solaris

        Disk /dev/md2: 978.2 GB, 978187124736 bytes
        2 heads, 4 sectors/track, 238815216 cylinders, total 1910521728 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00000000

        Disk /dev/md2 doesn't contain a valid partition table

        Disk /dev/md1: 21.5 GB, 21474770944 bytes
        2 heads, 4 sectors/track, 5242864 cylinders, total 41942912 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00000000

        Disk /dev/md1 doesn't contain a valid partition table

    Is it possible to mount both drives to say /Home/ so I would have 2TB of usable space?
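
    Judging from the fdisk output, the two disks appear to be joined in software RAID1 mirrors (/dev/md1 and /dev/md2), which is why only about 1TB of usable space shows up. Getting the full 2TB means giving up the mirroring. A rough sketch of one way to do it with LVM, assuming everything is backed up and the existing md2 array can be destroyed (device names are taken from the question; this wipes the data partitions):

        sudo mdadm --stop /dev/md2                        # stop the mirrored data array
        sudo mdadm --zero-superblock /dev/sda2 /dev/sdb2  # remove the RAID metadata
        sudo pvcreate /dev/sda2 /dev/sdb2                 # turn both partitions into LVM physical volumes
        sudo vgcreate vg0 /dev/sda2 /dev/sdb2             # one volume group spanning both disks
        sudo lvcreate -l 100%FREE -n home vg0             # a single ~1.9TB logical volume
        sudo mkfs.ext4 /dev/vg0/home
        sudo mount /dev/vg0/home /home

    The trade-off is that a failure of either disk then loses the whole volume, which is exactly what the mirror was protecting against.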

    Read the article

  • Tool to automatically combine many large pivot tables in many large Excel sheets?

    - by Sim
    Hi all, we have several Excel files that contain large pivot tables with the same structure. For example, File A has a pivot table (fields A, B, C) and File B has a pivot table (fields A, B, C). We want to combine them into one pivot table (A, B, C). Which ways can we do this? The manual way: open a new empty sheet, copy and paste the data there, and create the pivot again. The automatic way: is there some tool that does this? Thanks

    Read the article

  • Amazon EC2 High-Memory Extra Large Instance

    - by Simpanoz
    I am new to MongoDB and EC2. Suppose I use the following single MongoDB server: High-Memory Extra Large Instance, 17.1 GiB memory, 6.5 ECU (2 virtual cores with 3.25 EC2 Compute Units each), 420 GB of local instance storage, 64-bit platform. As a layman, quantifying I/O as data in MB/sec: how many I/O transactions can the MongoDB server handle comfortably without being maxed out? Assume the default settings of an EC2 server with Ubuntu and the MongoDB version available in the AWS marketplace. Thanks in advance
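
    There is no fixed answer without measuring the instance itself; a quick way to put a rough MB/sec number on the local instance storage is a raw sequential-write test. A minimal sketch, assuming the ephemeral storage is mounted at /mnt as is typical for these instances (the test file name and size are arbitrary):

        # write 1 GiB directly to disk, bypassing the page cache, and report throughput
        dd if=/dev/zero of=/mnt/ddtest bs=1M count=1024 oflag=direct
        rm /mnt/ddtest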

    Read the article

  • Windows crashes when I try to compress large .MOV file from iPhone

    - by Andrew
    I have a 40-minute, 6-gigabyte .MOV file that I transferred from my iPhone to my PC. I need to upload the video to YouTube, but the file is too large. When I try to compress the video, my computer shuts off when the process reaches 3 or 4 percent. This has happened in Windows Live Movie Maker, Riva FLV Encoder, and VLC. Any ideas on how I can get this file down to a reasonable size so I can upload it online?
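
    A machine that powers off mid-encode often points at overheating rather than the software, so that is worth checking first. As far as the compression itself goes, a single ffmpeg command can do the job. A sketch, assuming ffmpeg with libx264 is installed; the file names are placeholders:

        # re-encode the video as H.264 at a constrained quality, keeping the audio track as-is
        ffmpeg -i IMG_0001.MOV -c:v libx264 -crf 26 -preset medium -c:a copy compressed.mp4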

    Read the article

  • How to completely remove ldap and remove the directory tree

    - by rugbert
    So I followed this guide: https://help.ubuntu.com/11.04/serverguide/C/openldap-server.html to install and configure LDAP, but then I discovered both phpLDAPadmin and Luma and have decided to rebuild my tree from scratch using one of those tools. However, I'm not sure how to completely remove LDAP now. I can remove it using apt-get, but if I reinstall it and log in using phpLDAPadmin, it seems that it is still using the old authentication data and gives me a credential error.
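
    apt-get remove leaves configuration and the LDAP database behind, which would explain the old credentials surviving a reinstall. A sketch of a more thorough removal on Ubuntu; the paths are the distribution defaults, so double-check them before deleting:

        sudo apt-get purge slapd ldap-utils   # purge removes configuration, not just binaries
        sudo rm -rf /var/lib/ldap             # the directory database itself
        sudo rm -rf /etc/ldap/slapd.d         # the cn=config tree left over from the old install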

    Read the article

  • HTG Explains: The Linux Directory Structure Explained

    - by Chris Hoffman
    If you’re coming from Windows, the Linux file system structure can seem particularly alien. The C:\ drive and drive letters are gone, replaced by a / and cryptic-sounding directories, most of which have three letter names. The Filesystem Hierarchy Standard (FHS) defines the structure of file systems on Linux and other UNIX-like operating systems. However, Linux file systems also contain some directories that aren’t yet defined by the standard.

    Read the article

  • Google webmaster showing duplicate meta descriptions for search directory

    - by Mike Flynn
    What is the best way to get rid of this error in Google Webmaster Tools? Do I really need to add "- Page 2" at the end of the description?

    Page description: Kansas basketball tournaments posted by organizations and teams for youth, AAU, and NCAA certified e

    Pages:
        /youth-basketball-tournaments/kansas
        /youth-basketball-tournaments/kansas?page=2
        /youth-basketball-tournaments/kansas?page=3
        /youth-basketball-tournaments/kansas?page=9

    Read the article

  • Import images from camera in KDE with particular directory structure

    - by Sergey
    I have been using f-spot for a few years to manage my photo archive, which is about 50K images at the moment. With the development of f-spot slowing down in recent years and me switching to KDE, I'm looking at using DigiKam, which seems to be very nice and packed with features beyond my wildest hopes :) One thing I'm missing, though, is the way f-spot imported images: it created subdirectories based on the image's shooting date:

        $HOME/Photos/2011/11/12/IMG_1234.jpg
        $HOME/Photos/2011/11/13/IMG_1235.jpg
        $HOME/Photos/2011/11/13/IMG_1236.jpg

    I don't seem to be able to find a way to make DigiKam behave like this: although it has settings to change the image filename according to a mask which may include the shooting date, I see no way to tell it to create sub-directories. Is there a way to make DigiKam behave like this? Or, alternatively, what is a good program to import images from a camera and save them on disk in subdirectories according to their shooting date?
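
    As an alternative import path, exiftool can sort files into date-based directories on its own. A sketch, assuming exiftool is installed; the source path is a placeholder:

        # move images into $HOME/Photos/YYYY/MM/DD/ based on the EXIF shooting date
        exiftool -r '-Directory<DateTimeOriginal' -d "$HOME/Photos/%Y/%m/%d" /media/camera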

    Read the article

  • Help With Hard links And Symlinks Moving Directory And Files

    - by Julio
    This is what I would like to do. I have a symlink /var pointing to /tmpfs/var.1:

        /var -> /tmpfs/var.1

    On startup I run a script called cache_tmpfs from /etc/rc.local; this script copies the contents of /var.backup/* to /tmpfs/var.1/:

        cp -dpRxf /var.backup/* /tmpfs/var.1/

    Now the problem is that the kernel has the messages log file open at /var/log/messages. Is it possible to remove the current /var symlink and recreate a new one (pointing to /var.backup instead of /tmpfs/var.1) without issues, given that files, once opened by the system, stay held open like hard links?

        rm /var && ln -s /var.backup /var

    Thanks...
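
    Processes that already have /var/log/messages open keep their handle on the old file no matter where the symlink points afterwards, because an open file descriptor references the inode directly. To avoid a window in which /var does not exist at all, the link can be replaced atomically instead of rm-then-ln. A sketch; the temporary name is arbitrary:

        ln -s /var.backup /var.new   # create the new link under a temporary name
        mv -T /var.new /var          # rename(2) over the old link: atomic on Linux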

    Read the article

  • How to set JS source directory in apache2?

    - by highBandWidth
    I am trying to run a very basic webserver for development/debugging. The static HTML seems to be delivered correctly, but the JavaScript libraries are not being delivered to the browser. The page HTML says something like:

        <html>
        <head>
        <script type='text/javascript' src="/lib/json.js"></script>
        ...

    Now, I have set up a link for /lib/ in my httpd.conf as:

        ScriptAlias /lib/ "/SomeFolder/lib/"

    When I do this, it can't fetch the files, because this is what I see in my Apache error log:

        ... [error] [client ::1] client denied by server configuration: /SomeFolder/lib/json.js, referer: http://localhost/SomeSite

    It seems that Apache is not allowing access to the folder, so I add this to httpd.conf:

        <Directory "/SomeFolder/lib/">
            Allow from all
        </Directory>

    After this, browsing the page still does not run the JS; instead I see the following error in my Apache error log:

        [error] [client ::1] (13)Permission denied: exec of '/SomeFolder/lib/json.js' failed, referer: http://localhost/SomeSite

    So now it seems that Apache is trying to run the JS files on the server like a CGI script or something, but I have not made that folder a cgi-bin folder. The only lines where SomeFolder is mentioned by name are these lines in httpd.conf:

        ScriptAlias /lib/ "/SomeFolder/lib/"
        <Directory "/SomeFolder/lib/">
            Allow from all
        </Directory>
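
    The exec error is exactly what ScriptAlias is for: it tells Apache to execute everything under the mapped path as a CGI program. For static files, the plain Alias directive is what's needed. A sketch in the Apache 2.2-style access syntax the question already uses:

        Alias /lib/ "/SomeFolder/lib/"
        <Directory "/SomeFolder/lib/">
            Order allow,deny
            Allow from all
        </Directory>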

    Read the article

  • Google Font API & Google Font Directory

    - by joelvarty
    There is a CSS element out there that looks like this:

        @font-face {
          font-family: '';
          src: url('…');
        }

    I’ve only used this tag in a bunch of old apps and sites that were built exclusively for IE back in the day. Well, it’s part of CSS 3, and Google is going to make it easy to find and share fonts. http://googlecode.blogspot.com/2010/05/introducing-google-font-api-google-font.html

    more later - joel

    Read the article

  • Incorrect directory permissions with OpenSSH on Cygwin on Windows Server 2008 SP2

    - by Davy Brion
    I ran into a weird directory permission problem when logged in to a Win2008SP2 (not R2) server through SSH. When I open a local Cygwin shell on the server, I can do this:

        myUser@myServer ~ $ cd /cygdrive/c/Windows/System32/inetsrv/
        myUser@myServer /cygdrive/c/Windows/System32/inetsrv $ cd config
        myUser@myServer /cygdrive/c/Windows/System32/inetsrv/config $

    I have no issues accessing the config directory when using a local Cygwin shell. myUser has all the necessary permissions to access the directory as well; in fact, myUser is a local administrator on the machine. Listing the permissions of the config folder through the local Cygwin shell shows the following output:

        4 drwx------+ 1 SYSTEM SYSTEM 0 Aug 2 09:38 config

    But when I log into the server with an SSH client (in this case PuTTY), I run into the following problem:

        myUser@myServer ~ $ cd /cygdrive/c/Windows/System32/inetsrv/
        myUser@myServer /cygdrive/c/Windows/System32/inetsrv $ cd config
        -bash: cd: config: Permission denied

    It also doesn't list the proper permissions through SSH:

        0 drwxr-x--- 1 ???????? ???????? 0 Aug 2 09:38 config

    When I look at the running processes on the server with Task Manager (over a remote desktop connection), it shows that all bash.exe processes are running under the myUser account, so I don't understand why I can't access that particular directory through SSH but have no problems accessing it in a local Cygwin shell. I'm using OpenSSH 5.9p1-1. I'm not sure what the Cygwin version is... I used the latest setup.exe (version 2.738) of Cygwin, but I can't seem to find any other Cygwin-related version number. I doubt that it's related to SSH/Cygwin, though, because when I connect from the Win2008SP2 server to my local Win7 machine through SSH (using the same OpenSSH/Cygwin versions) I can access the /cygdrive/c/Windows/System32/inetsrv/config folder without issues. Does anyone have an idea what the issue could be?
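
    One thing worth ruling out (an assumption based on common Cygwin sshd behavior, not something confirmed in the question): when sshd authenticates without a password, for example via public key, it cannot obtain the user's full Windows access token, so group memberships and ACL rights can differ from a console logon even though the process shows the right user name. Comparing the real Windows ACL of the folder from both kinds of session can make the difference visible:

        # show the full Windows ACL that the + in the ls output is hiding
        icacls 'C:\Windows\System32\inetsrv\config'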

    Read the article

  • Setup for mounting kerberized nfs home directory - gssd not finding valid kerberos ticket

    - by janm
    Our home directories are exported via kerberized NFS, so the user needs a valid Kerberos ticket to be able to mount their home. This setup works fine with our existing clients and server. Now we want to add an 11.10 client, and have thus set up LDAP and Kerberos together with pam_mount. The LDAP authentication works and users can log in via SSH, but their homes cannot be mounted. When pam_mount is configured to mount as root, gssd does not find a valid Kerberos ticket and the mount fails:

        Nov 22 17:34:26 zelda rpc.gssd[929]: handle_gssd_upcall: 'mech=krb5 uid=0 enctypes=18,17,16,23,3,1,2 '
        Nov 22 17:34:26 zelda rpc.gssd[929]: handling krb5 upcall (/var/lib/nfs/rpc_pipefs/nfs/clnt2)
        Nov 22 17:34:26 zelda rpc.gssd[929]: process_krb5_upcall: service is '<null>'
        Nov 22 17:34:26 zelda rpc.gssd[929]: getting credentials for client with uid 0 for server purple.physcip.uni-stuttgart.de
        Nov 22 17:34:26 zelda rpc.gssd[929]: CC file '/tmp/krb5cc_65678_Ku2226' being considered, with preferred realm 'PURPLE.PHYSCIP.UNI-STUTTGART.DE'
        Nov 22 17:34:26 zelda rpc.gssd[929]: CC file '/tmp/krb5cc_65678_Ku2226' owned by 65678, not 0
        Nov 22 17:34:26 zelda rpc.gssd[929]: WARNING: Failed to create krb5 context for user with uid 0 for server purple.physcip.uni-stuttgart.de
        Nov 22 17:34:26 zelda rpc.gssd[929]: doing error downfall

    When pam_mount is instead configured with the noroot=1 option, it cannot mount the volume at all:

        Nov 22 17:33:58 zelda sshd[2226]: pam_krb5(sshd:auth): user phy65678 authenticated as [email protected]
        Nov 22 17:33:58 zelda sshd[2226]: Accepted password for phy65678 from 129.69.74.20 port 51875 ssh2
        Nov 22 17:33:58 zelda sshd[2226]: pam_unix(sshd:session): session opened for user phy65678 by (uid=0)
        Nov 22 17:33:58 zelda sshd[2226]: pam_mount(mount.c:69): Messages from underlying mount program:
        Nov 22 17:33:58 zelda sshd[2226]: pam_mount(mount.c:73): mount: only root can do that
        Nov 22 17:33:58 zelda sshd[2226]: pam_mount(pam_mount.c:521): mount of /Volumes/home/phy65678 failed

    So how can we allow users of a specific group to perform NFS mounts? And if that does not work, can we make pam_mount use root but pass the correct uid?
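
    For what it's worth, the failure can be reproduced outside pam_mount by mounting manually, which separates the Kerberos side from the PAM side. A sketch; the export path and mount point are placeholders:

        kinit phy65678    # obtain a user ticket first
        sudo mount -t nfs4 -o sec=krb5 purple.physcip.uni-stuttgart.de:/home/phy65678 /mnt

    If the manual mount works, the problem is confined to how pam_mount invokes mount.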

    Read the article

  • Apache consuming large memory with Apache Mobile Filter

    - by VarunGupta
    I installed and configured the Apache Mobile Filter module for Apache to redirect users to the mobile version of our website, depending on their user agents. I configured the module to use WURFL. But as soon as I start Apache, it hogs a large amount of memory, without any incoming web requests:

        Resident memory: 300 to 400 MB
        Virtual memory: 300 to 650 MB

    Without this module, Apache was consuming much less memory (4 to 10 MB). What could be the reason here?
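
    A plausible explanation (an assumption; the question does not confirm it) is that the WURFL device database is loaded into every Apache child process, so the cost is multiplied by the number of children. Per-process memory makes that easy to see; the process name is apache2 on Debian/Ubuntu, httpd on Red Hat systems:

        # list Apache processes by resident memory, largest first
        ps -C apache2 -o pid,rss,vsz,cmd --sort=-rss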

    Read the article

  • Type dependencies vs directory structure

    - by paul
    Something I've been wondering about recently is how to organize types in directories/namespaces w.r.t. their dependencies. One method I've seen, which I believe is the recommendation for both Haskell and .NET (though I can't find the references for this at the moment), is:

        Type
        Type/ATypeThatUsesType
        Type/AnotherTypeThatUsesType

    My natural inclination, however, is to do the opposite:

        Type
        Type/ATypeUponWhichTypeDepends
        Type/AnotherTypeUponWhichTypeDepends

    Questions: Is my inclination bass-ackwards? Are there any major benefits/pitfalls of one over the other? Is it just something that depends on the application, e.g. whether you're building a library vs doing normal coding?

    Read the article

  • eAccelerator Issue - Cache Directory Empty.

    - by Tom
    Hi all, hoping someone can give me a hand with this. I've recently installed eAccelerator 0.9.6.1 on a CentOS LAMP server and had it working fine, using /tmp/accelerator as the cache directory. php.ini setup:

        zend_extension="/usr/local/lib/php/extensions/no-debug-non-zts-20060613/eaccelerator.so"
        eaccelerator.shm_size="200"
        eaccelerator.cache_dir="/var/cache/eaccelerator"
        eaccelerator.enable="1"
        eaccelerator.optimizer="1"
        eaccelerator.check_mtime="1"
        eaccelerator.debug="0"
        eaccelerator.filter=""
        eaccelerator.shm_max="0"
        eaccelerator.shm_ttl="3600"
        eaccelerator.shm_prune_period="180"
        eaccelerator.shm_only="1"
        eaccelerator.compress="1"
        eaccelerator.compress_level="9"

    php -v output:

        PHP 5.2.12 (cli) (built: Feb 3 2010 00:34:28)
        Copyright (c) 1997-2009 The PHP Group
        Zend Engine v2.2.0, Copyright (c) 1998-2009 Zend Technologies
            with eAccelerator v0.9.6.1, Copyright (c) 2004-2010 eAccelerator, by eAccelerator
            with the ionCube PHP Loader v3.3.20, Copyright (c) 2002-2010, by ionCube Ltd.

    I had to remove the cache directory as I was testing something. I remade it and re-set the permissions, but found that eAccelerator was no longer creating cache files within the folder. I thought it might be down to ownership rights on the folder, so I chown'd it apache.apache, but this made no difference. I recreated the directory in /var/cache instead, edited php.ini to point to the new cache dir location, chmod'd, chown'd, etc., and still eAccelerator is not creating any of the cache files in the directory (it stays empty). Could someone suggest what I might be doing incorrectly here? I've read through numerous pages trying to troubleshoot the issue, to no avail. Any help appreciated.
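
    One line in the quoted php.ini may be the whole story (an observation based on eAccelerator's documented settings, not a confirmed diagnosis): shm_only="1" tells eAccelerator to cache compiled scripts in shared memory only, in which case an empty disk cache directory is expected behavior. A sketch of the relevant change:

        eaccelerator.shm_only="0"                        ; allow caching to disk again
        eaccelerator.cache_dir="/var/cache/eaccelerator" ; must exist and be writable by the Apache user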

    Read the article

  • Merge home directory after fresh installation with existing (Gentoo) home

    - by jhwist
    I reinstalled my desktop machine with Ubuntu 10.10, coming from Gentoo, where I used Xfce. My home is usually NFS-mounted from a server. During the install I let the installer set up my user, but of course my NFS home wasn't mounted then, so I have a regular /home/user now. If I mv /home /home.old and mount my NFS home to /home instead, I cannot log in because GNOME complains about some config files (sorry, no exact error message, as there is no way to copy and paste it). Which of my /home.old/user directories do I have to copy over to my NFS home so that GNOME is happy again?
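
    A sketch of the usual suspects for GNOME 2-era session state (an assumption; the exact list isn't confirmed anywhere in the question, so copy conservatively and test logging in after each step):

        # copy the freshly generated GNOME settings from the installer-created home
        cp -a /home.old/user/.gconf /home.old/user/.gconfd ~/
        cp -a /home.old/user/.gnome2 /home.old/user/.gnome2_private ~/
        cp -a /home.old/user/.config /home.old/user/.local ~/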

    Read the article

  • VMWare Workstation 8 can't find the headers directory

    - by BackSlash
    I'm having an issue with VMware Workstation 8. I installed it, but when I run it, it shows a window asking for the kernel headers. When I press "Browse" and select the linux-headers-3.8.0-31-generic folder, it still says that it can't find the C headers for that kernel. Why? P.S. I already tried sudo apt-get install linux-headers-3.8.0-31-generic, and the terminal says that the package is already the newest version.
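
    VMware needs the full build environment as well as headers matching the running kernel, not just the header package on its own. A sketch of a typical sequence; the package names assume Ubuntu:

        sudo apt-get install build-essential linux-headers-$(uname -r)
        sudo vmware-modconfig --console --install-all   # rebuild the VMware kernel modules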

    Read the article

  • How to Submit to an Article Directory

    Article submission involves submitting your article to the article directories. Most article directories allow users to include a link to their site in the resource box, and several directories also permit link inclusion within the article body.

    Read the article

  • Symbolic link all files in directory to show in another directory?

    - by Thomas Clayson
    What I want is to be able to display all files that are FTP'd into /home/ftp in /srv/ftp. /srv/ftp is password-protected and has files in it which I don't want to be accessible from the public FTP. So, as such, I wish that all files uploaded to /home/ftp were automatically symbolically linked (or otherwise) to /srv/ftp. Does this make sense? e.g.

        ls /srv/ftp:
        file.sh another.txt something_else.i386

    then a user FTPs in and drops a file in /home/ftp (or SSH, or whatever):

        ls /home/ftp:
        user_file.mk

        ls /srv/ftp:
        file.sh another.txt something_else.i386 user_file.mk

    I hope this makes sense. I have been told that this can probably be achieved using ln to create symbolic links, but I don't want to have to SSH in and create the links every time I (or someone else) put files over FTP. Thanks! :)
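
    Instead of per-file symlinks, a bind mount can expose the whole upload directory inside the protected tree, with no per-upload maintenance. A sketch; the subdirectory name is arbitrary:

        sudo mkdir -p /srv/ftp/uploads
        sudo mount --bind /home/ftp /srv/ftp/uploads

        # to make it permanent, an /etc/fstab line such as:
        # /home/ftp  /srv/ftp/uploads  none  bind  0  0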

    Read the article

  • Use Entitlements To Secure LDAP-enabled Applications With Oracle Virtual Directory and Oracle Entitlements Server

    - by mark.wilcox
    I stumbled on an interesting article that shows how the author used OVD to expose OES security to protect a portal that only understood LDAP group-based authorization. This is great because it shows how you can use OES today to build central policies that can be applied without needing to rewrite all of your applications, in particular if you just want to leverage rule-based groups. Posted via email from Virtual Identity Dialogue

    Read the article

  • Installing the MoinMoin wiki in my user directory on a server with no root access

    - by deiga
    Hello all, I've been trying to install the MoinMoin wiki on this webserver, where I have no root access. The server doesn't support WSGI, but it does support CGI/FastCGI/etc. I've scoured Google for a simple guide on how to accomplish this, but the only guides I found were from around 2004, and the others always assumed root access. Can anyone link a good tutorial for this, or just help me out here? Your help is appreciated :) P.S. Sorry if this is the wrong Stack site.
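
    MoinMoin's setup.py supports installing into an arbitrary prefix, which works without root; from there the bundled CGI script can be copied into a cgi-bin directory. A rough sketch for a 1.9.x tarball; the version and all paths are placeholders to adapt to the host's layout:

        tar xzf moin-1.9.x.tar.gz && cd moin-1.9.x
        python setup.py install --prefix=$HOME/moin --record=install.log
        cp $HOME/moin/share/moin/server/moin.cgi ~/public_html/cgi-bin/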

    Read the article
