Search Results

Search found 4296 results on 172 pages for 'git clone'.

Page 127/172 | < Previous Page | 123 124 125 126 127 128 129 130 131 132 133 134  | Next Page >

  • How to setup linux permissions for the WWW folder?

    - by Xeoncross
    Updated Summary The /var/www directory is owned by root:root, which means that no one can use it and it's entirely useless. Since we all want a web server that actually works (and no one should be logging in as "root"), we need to fix this. Only two entities need access:
    1. PHP/Perl/Ruby/Python - they all need access to the folders and files since they create many of them (i.e. /uploads/). These scripting languages should be running under nginx or apache (or even some other thing like FastCGI for PHP).
    2. The developers
    How do they get access? I know that someone, somewhere has done this before. With however-many billions of websites out there you would think that there would be more information on this topic. I know that 777 is full read/write/execute permission for owner/group/other, so that doesn't seem to be needed, as it leaves random users full permissions. What permissions need to be used on /var/www so that:
    1. source control like git or svn
    2. users in a group like "websites" (or even added to "www-data")
    3. servers like apache or lighttpd
    4. and PHP/Perl/Ruby
    can all read, create, and run files (and directories) there? If I'm correct, Ruby and PHP scripts are not "executed" directly - they are passed to an interpreter. So there is no need for execute permission on files in /var/www...? Therefore, it seems like the correct permission would be chmod -R 1660, which would make all files shareable by these four entities, keep all files from being executable by mistake, block everyone else from the directory entirely, and set the permission mode to "sticky" for all future files. Is this correct?

    Update: I just realized that files and directories might need different permissions - I was talking about files above, so I'm not sure what the directory permissions would need to be.

    Update 2: The folder structure of /var/www changes drastically, as the four entities above are always adding (and sometimes removing) folders and subfolders many levels deep. They also create and remove files that the other 3 entities might need read/write access to. Therefore, the permissions need to do the four things above for both files and directories. Since none of them should need execute permission (see the question about Ruby/PHP above), I would assume that rw-rw-r-- permission would be all that is needed and completely safe, since these four entities are run by trusted personnel (see #2) and all other users on the system only have read access.

    Update 3: This is for personal development machines and private company servers. No random "web customers" like on a shared host.

    Update 4: This article by Slicehost seems to be the best at explaining what is needed to set up permissions for your www folder. However, I'm not sure what user or group apache/nginx with PHP or svn/git run as, and how to change them.

    Update 5: I have (I think) finally found a way to get this all to work (answer below). However, I don't know if this is the correct and SECURE way to do this. Therefore I have started a bounty. The person that has the best method of securing and managing the www directory wins.
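
    For what it's worth, a minimal sketch of one commonly suggested arrangement - the group name www-data, the example user devuser, and the 2775/664 modes are assumptions, not the definitive answer to the bounty:

        # add each developer to the web server's group (group name is an assumption)
        sudo usermod -a -G www-data devuser
        # root owns the tree, the group gets read/write
        sudo chown -R root:www-data /var/www
        # directories: rwxrwxr-x plus setgid, so new files inherit the www-data group
        sudo find /var/www -type d -exec chmod 2775 {} \;
        # files: rw-rw-r--, no execute bit anywhere
        sudo find /var/www -type f -exec chmod 664 {} \;

    Note that directories do need the execute bit (it is what allows them to be entered), which is one reason files and directories end up with different modes.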

    Read the article

  • vim-powerline colors are out of whack in urxvt

    - by komidore64
    I have attached two images showing what my vim-powerline looks like. As you can see, something has happened to the colors and I cannot figure out how to fix it. I'm running Fedora 17 on a clean install with i3 (default config) and urxvt. Here is my bashrc:

        # .bashrc
        if [[ "$(uname)" != "Darwin" ]]; then # non mac os x
            # source global bashrc
            if [[ -f "/etc/bashrc" ]]; then
                . /etc/bashrc
            fi
            export TERM='xterm-256color' # probably shouldn't do this
        fi

        # bash prompt with colors
        # [ <user>@<hostname> <working directory> {current git branch (if you're in a repo)} ]
        # ==>
        PS1="\[\e[1;33m\][ \u\[\e[1;37m\]@\[\e[1;32m\]\h\[\e[1;33m\] \W\$(git branch 2> /dev/null | grep -e '\* ' | sed 's/^..\(.*\)/ {\[\e[1;36m\]\1\[\e[1;33m\]}/') ]\[\e[0m\]\n==> "

        # execute only in Mac OS X
        if [[ "$(uname)" == 'Darwin' ]]; then
            # if OS X has a $HOME/bin folder, then add it to PATH
            if [[ -d "$HOME/bin" ]]; then
                export PATH="$PATH:$HOME/bin"
            fi
            alias ls='ls -G' # ls with colors
        fi

        alias ll='ls -lah'     # long listing of all files with human readable file sizes
        alias tree='tree -C'   # turns on coloring for tree command
        alias mkdir='mkdir -p' # create parent directories as needed
        alias vim='vim -p'     # if more than one file, open files in tabs

        export EDITOR='vim'

        # super-secret work stuff
        if [[ -f "$HOME/.workbashrc" ]]; then
            . $HOME/.workbashrc
        fi

        # Add RVM to PATH for scripting
        if [[ -d "$HOME/.rvm/bin" ]]; then # if installed
            PATH=$PATH:$HOME/.rvm/bin
        fi

    and my Xdefaults:

        ! URxvt config
        ! colors
        URxvt.background: #101010
        URxvt.foreground: #ededed
        URxvt.cursorColor: #666666
        URxvt.color0: #2E3436
        URxvt.color8: #555753
        URxvt.color1: #993C3C
        URxvt.color9: #BF4141
        URxvt.color2: #3C993C
        URxvt.color10: #41BF41
        URxvt.color3: #99993C
        URxvt.color11: #BFBF41
        URxvt.color4: #3C6199
        URxvt.color12: #4174FB
        URxvt.color5: #993C99
        URxvt.color13: #BF41BF
        URxvt.color6: #3C9999
        URxvt.color14: #41BFBF
        URxvt.color7: #D3D7CF
        URxvt.color15: #E3E3E3

        ! options
        URxvt*loginShell: true
        URxvt*font: xft:DejaVu Sans Mono for Powerline:antialias=true:size=12
        URxvt*saveLines: 8192
        URxvt*scrollstyle: plain
        URxvt*scrollBar_right: true
        URxvt*scrollTtyOutput: true
        URxvt*scrollTtyKeypress: true
        URxvt*urlLauncher: google-chrome

    and finally my vimrc:

        set nocompatible
        set dir=~/.vim/ " set one place for vim swap files

        " vundler for vim plugins ----
        filetype off
        set rtp+=~/.vim/bundle/vundle
        call vundle#rc()
        Bundle 'gmarik/vundle'
        Bundle 'tpope/vim-surround'
        Bundle 'greyblake/vim-preview'
        Bundle 'Lokaltog/vim-powerline'
        Bundle 'tpope/vim-endwise'
        Bundle 'kien/ctrlp.vim'
        " ----------------------------

        syntax enable
        filetype plugin indent on

        " Powerline ------------------
        set noshowmode
        set laststatus=2
        let g:Powerline_symbols = 'fancy' " show fancy symbols (requires patched font)
        set encoding=utf-8
        " ----------------------------

        " ctrlp ----------------------
        let g:ctrlp_open_multiple_files = 'tj' " open multiple files in additional tabs
        let g:ctrlp_show_hidden = 1 " include dotfiles and dotdirs in ctrlp indexing
        let g:ctrlp_prompt_mappings = {
            \ 'AcceptSelection("e")': ['<c-t>'],
            \ 'AcceptSelection("t")': ['<cr>', '<2-LeftMouse>'],
            \ } " remap <cr> to open file in a new tab
        " ----------------------------

        set showcmd
        set tabpagemax=100
        set hlsearch
        set incsearch
        set nowrapscan
        set ignorecase
        set smartcase
        set ruler
        set tabstop=4
        set shiftwidth=4
        set expandtab
        set wildmode=list:longest

        autocmd BufWritePre * :%s/\s\+$//e "remove trailing whitespace

        " :REV to "revert" file to state of the most recent save
        command REV earlier 1f

        " disable netrw --------------
        let g:loaded_netrw = 1
        let g:loaded_netrwPlugin = 1
        " ----------------------------

    Any guidance as to fixing the statusline would be fantastic. I've found a github issue outlining almost the exact same problem, but the solution was never posted. Thank you.
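
    A hedged guess at the culprit, based on the config above: TERM is being forced to xterm-256color from .bashrc, which can leave urxvt and the terminfo database disagreeing about colors. One common fix is to let urxvt advertise its own 256-color TERM and drop the export:

        # assumption: a 256-color terminfo entry for rxvt-unicode is installed
        # (on Fedora this is typically the rxvt-unicode-256color package)
        echo 'URxvt.termName: rxvt-unicode-256color' >> ~/.Xdefaults
        xrdb -merge ~/.Xdefaults
        # then remove the  export TERM='xterm-256color'  line from ~/.bashrc
        # and restart urxvt so vim-powerline sees the TERM the terminal really is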

    Read the article

  • Windows Explorer is blank

    - by Scott Mitchell
    I am using Windows 7 Ultimate x64. Once a week or so, when I boot up in the morning and launch Windows Explorer, it shows up blank, as the following screenshot shows. Clicking on My Computer doesn't load anything. Interestingly, I can go to the Address bar at the top and type in a folder name. This brings up that folder's files and subfolders, but as I drill around, the tree of folders on the left only shows the immediate folder and not its siblings. There's no plus icon to expand the folder, etc. My usual "solution" is to reboot, which typically brings everything back to normal, but this is a frustrating remedy. Any idea what's going on and how to fix it? Some Googling turned up this discussion, but the remedy was to uninstall a particular piece of software that I don't have installed (Virtual CloneDrive). Thanks

    Read the article

  • Mercurial hgwebdir configuration URL

    - by Jonathan Sternberg
    I'm setting up an hgwebdir configuration for the first time with Mercurial on apache2. I can see the three repositories I've set up on the first page, and I've figured out how to modify their names so they don't resemble the directory path. But when I click through to one of the repositories, the URL becomes http://localhost/hg/hgweb.cgi/path/to/repos. I would like the URL to be http://localhost/hg/name instead, as that is easier to remember for people who want to clone the repository. Is there any way to do that with hgwebdir?
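
    A hedged sketch of the approach the Mercurial documentation describes for Apache: a ScriptAliasMatch that maps everything under /hg straight to the CGI script, so hgweb.cgi never shows up in the URL. The filesystem path is an assumption:

        # in the Apache vhost/conf.d file that currently defines the /hg alias
        # (assumption: hgweb.cgi lives at /var/hg/hgweb.cgi and already executes fine);
        # this replaces the existing ScriptAlias for /hg
        ScriptAliasMatch ^/hg(.*) /var/hg/hgweb.cgi$1

    After reloading Apache, the repositories should be reachable as http://localhost/hg/name.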

    Read the article

  • Migration with SysPrep, ImageX and

    - by Jack Smith
    I know that you can use SysPrep and ImageX to create a prepared image that can be used on several systems, but the question is: how well does it work in a corporate environment when moving machines off old hardware onto new hard drives and new hardware? EDIT: The system runs accounting software and databases. So SysPrep would remove all license keys and other information, which would cause problems, right? Would something else be a better option, even though there are heavy costs involved? Currently, when I clone/copy the drive, Windows will black screen on me. So I need something with support for dissimilar hardware?
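
    For reference, a hedged sketch of the generalize step usually meant here - run on the reference machine just before capturing with ImageX; the unattend file is optional and its path is an assumption:

        rem run from an elevated command prompt on the reference install
        c:\windows\system32\sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:c:\unattend.xml

    SysPrep's /generalize resets Windows activation and machine-specific state, but it does not touch data inside third-party applications; hardware-locked licenses in the accounting software could still break on new hardware, which is worth checking with the vendor.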

    Read the article

  • multi boot: xp + xp + xubuntu, how to?

    - by Jassano
    My laptop (with a single hard drive) currently has an XP + Xubuntu dual boot. I want to make that a triple boot: XP + XP + Xubuntu. Please don't ask why, take it as given. How can I accomplish this triple boot? I tried using GParted to add a partition (worked!), used dd to clone the XP install to the new partition (worked!), and edited grub (my bootloader) to list a third entry pointing to the correct device (worked!). But regardless of which of the two XP entries in grub I select, I still get booted into one and the same XP. The files for the other XP show up under D: so I know they're there alright. I have edited the boot.ini on the new partition, so everything looks to be in order. What do I need to do to change that and make both XP instances bootable in this scenario?
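
    If the bootloader is grub legacy, a hedged sketch of what the two XP entries usually look like - the (hd0,N) partition numbers are assumptions and need to match your real layout:

        # /boot/grub/menu.lst (grub legacy)
        title Windows XP (original)
        rootnoverify (hd0,1)
        chainloader +1

        title Windows XP (clone)
        rootnoverify (hd0,2)
        chainloader +1

    If both entries still land in the same install, the usual suspects are the two entries chainloading the same partition, or the clone's boot.ini still carrying the original's rdisk()/partition() values so its ntldr boots the original \WINDOWS.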

    Read the article

  • Fragile XenServer won't create new VDIs anymore

    - by thoiz_vd
    I'm getting increasingly frustrated with XenServer. Currently I'm using 5.6 FP1 and it seems to be very fragile. Canceling any VDI-related action almost guarantees trouble. This time I tried to create a new VM using a snapshot for a template, with the fast disk cloning option disabled. It would take much longer than I was able and willing to keep my XenCenter open, so I canceled it. That took ages too, so I had to decide to "Quit Anyway." Since then, I seem unable to create any new VDI. New attempts at cloning halt with "The attempt to clone the VDI failed," and creating new VMs based on built-in templates hangs at "Provisioning." I'm in need of some advice on how to solve this. What I did so far is run xe vdi-list, which returned nothing that looked odd to me, but I'm no expert. I assume that an incomplete VDI is blocking my Storage Repository somehow; however, how to deal with that remains unclear.
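
    A few hedged xe commands that are often used to hunt down a half-finished clone from the pool master's console (the UUIDs are placeholders to fill in from the list output):

        # find tasks that are still pending or cancelling, and cancel the stuck one
        xe task-list
        xe task-cancel uuid=<task-uuid>
        # look for orphaned, half-created disks on the SR, then make the SR rescan
        xe vdi-list sr-uuid=<sr-uuid> params=uuid,name-label,managed
        xe sr-scan uuid=<sr-uuid>
        # last resort: restart the toolstack on the host (running VMs are not touched)
        xe-toolstack-restart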

    Read the article

  • Stunnel delaying boot

    - by Onitlikesonic
    My stunnel implementation works fine when the network is plugged in, but it takes an awfully long time, which delays the whole boot process, when there is no network connected to the machine. As extra information:
    - I'm using "delay=yes"
    - I'm using an FQDN (e.g. stunnel.mydomain.com) for the connections
    - I'm using Ubuntu, but this also happened with CentOS 5 previously
    How can this be avoided, or a timeout specified?

    edit: doing an strace as suggested by symcbean shows the following (including the last part where it hangs):

        [...]
        --- SIGCHLD (Child exited) @ 0 (0) ---
        rt_sigreturn(0x11) = 0
        close(3) = 0
        wait4(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 6039
        clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7ff9ce0c79d0) = 6046
        wait4(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 6046
        --- SIGCHLD (Child exited) @ 0 (0) ---
        rt_sigreturn(0x11) = 6046
        write(1, "[Started: /etc/stunnel/stunnel.c"..., 37) = 37
        write(1, "stunnel.\n", 9) = 9
        exit_group(0) = ?
        [...]

    stunnel hangs on this line:

        wait4(-1,

    and when I plug in the network cable it continues to show:

        [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 6046

    Read the article

  • Partimage and autocheck problem when restoring Windows XP from image

    - by Xolstice
    I'm trying to create an image of Windows XP and clone it to several partitions on the same hard drive using Partimage. I seem to be running into a problem when I restore the image onto another partition - when I boot into the OS from the partition I just restored, it brings up this message during the boot sequence: "autochk program not found - skipping autocheck", and then the OS reboots the PC and the whole process repeats itself in an infinite loop. After doing some Google searching, it was suggested that this loop is caused by the partition being hidden or the mountmgr.sys file being missing. I checked my configuration and verified that this was not the case. I'm just wondering: Has anyone else experienced this, and is there a solution for it? Is this what happens when you try to restore the image to a different partition on the same hard disk, or is Partimage itself the problem? Should I be trying out a different partition-cloning tool?
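
    One thing worth ruling out, since the copy lives on a different partition of the same disk: the restored system still booting with a boot.ini whose ARC path points at the source partition. A hedged example for a copy restored to the third primary partition (the partition number is an assumption to adjust):

        [boot loader]
        timeout=30
        default=multi(0)disk(0)rdisk(0)partition(3)\WINDOWS
        [operating systems]
        multi(0)disk(0)rdisk(0)partition(3)\WINDOWS="Windows XP (restored copy)" /fastdetect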

    Read the article

  • DIR 601 No wireless internet

    - by ashley
    I have an orange globe on my D-Link router. I signed into 192.168.0.1 and went to Manual Internet Connection Setup as I was told to do. When I clicked on that and tried to clone my PC's MAC address, it said "invalid MAC address".

        Host Name : DIR-601
        Use Unicasting : (compatibility for some DHCP Servers)
        Primary DNS Address : 0.0.0.0
        Secondary DNS Address : 0.0.0.0
        MTU : (bytes) MTU default = 1500
        MAC Address : F8:1E:DF:EA:38:E6

    How do I get a valid MAC address so I can save the settings and move on to the next steps I was told to do in order to get wireless internet again?

    Read the article

  • Can't start the Windows Phone Emulator

    - by Louis
    When I try to run/debug an app in the emulator I get this error: And in Visual Studio's Error List console it simply says: 0x80131500. I haven't worked on this project for about a week, but it was working then. I checked the BIOS and everything is enabled (as it was last week). I don't think this is related, but yesterday I did upgrade my system SSD and used the Samsung Data Migration Tool to clone my drive. I've tried repairing the Windows Phone SDK 8.0, but that didn't help. Are there any other things I can try? I really don't think it's related to the SSD. Hyper-V related services: I can't start any of these either.
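
    Since the Hyper-V services will not start and the Windows Phone 8 emulator depends on Hyper-V, a hedged first check from an elevated command prompt - a cloned boot configuration sometimes loses the hypervisor launch setting:

        rem check whether the hypervisor is set to launch at boot
        bcdedit /enum | findstr /i hypervisorlaunchtype
        rem if it is missing or "Off", turn it back on and reboot
        bcdedit /set hypervisorlaunchtype auto
        rem after the reboot, try the Hyper-V Virtual Machine Management service again
        net start vmms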

    Read the article

  • Raid HDD Boot up

    - by user234695
    My server is a PowerEdge 1950 running Server 2008 32-bit, using 3 physical SAS HDDs as a virtual disk configured for RAID 5, partitioned into a C drive for the OS and a D drive for data. I am planning to format and install Server 2008 R2 64-bit, so I inserted a new physical disk, configured it as RAID 5, and cloned the C drive and D drive to it. Now I need to test that the new hard disk is able to boot Windows and work as expected. How do I test this? I am not able to boot Windows by choosing the new hard disk; the BIOS shows only the existing HDD, not the new one. Also, if I remove the three old hard disks and leave only the new hard disk, will I be able to boot the device? And if I do this, do my existing RAID 5 configuration and the data on the hard disks still remain?

    Read the article

  • change ip address verizon fios [closed]

    - by John Smith
    One well-documented way to change a dynamically assigned IP address is to log into the router configuration settings, change the MAC address, disconnect the router and modem, and then turn it all back on. This actually worked with a basic modem connected by ethernet directly to a laptop, spoofing the laptop's MAC address, with a cable internet provider. Now this question is specific to fiber optic internet providers who bundle TV and internet (Verizon FiOS, Comcast Xfinity). A Verizon FiOS installation comes with an Actiontec router, and there is a built-in way to clone the MAC address in the router configuration settings. What will happen if the MAC address changes from what Verizon installed it as? Will they get angry and disconnect service? Will services other than internet (TV or phone) stop working?

    Read the article

  • How to restrict user to a particular folder in CentOS 6?

    - by Chris Demetriad
    I will need to create users so developers can log in and clone/pull/push changes/repositories from a GitHub-like platform. I've managed to add a user (using the root account) to this CentOS machine; I now have this line in /etc/passwd:

        chris:x:32008:32010::/home/chris/public_html:/bin/bash

    ..and this in /etc/shadow:

        chris:$1$ruUeLtTu$onAY2hdu1J.UmHajEIlmR.:15385:0:99999:7:::

    I am able to SSH into the server, I have permission to create a folder, and I guess that should be enough. But I am able to see other files and folders outside public_html. How can I actually restrict the user to a particular directory so he can't "cd out" of his folder?

    Update:

        root@echo [~]# ls -ld /home/moove
        drwx--x--x 21 moove moove 4096 Mar 22 16:16 /home/moove/
        root@echo [~]# ls -ld /home/moove/public_html
        drwxr-x--- 11 moove nobody 4096 Mar 27 11:29 /home/moove/public_html/
        root@echo [~]# ls -ld /home/moove/public_html/dev
        drwxr-x--- 12 moove nobody 4096 Mar 27 14:47 /home/moove/public_html/dev/
        root@echo [~]# ls -ld /home/moove/public_html/dev/arsenal
        drwxr-xr-x 3 arsenal moove 4096 Mar 27 14:53 /home/moove/public_html/dev/arsenal/
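
    A hedged sketch of the usual OpenSSH answer (needs OpenSSH 4.9 or newer, which CentOS 6 has): a Match block with ChrootDirectory. Note that the chroot target and every directory above it must be owned by root and not writable by anyone else, and that a chrooted user only gets an interactive shell if bash and its libraries are copied into the chroot. The username and path come from the question; the rest is an assumption:

        # append to /etc/ssh/sshd_config, then reload sshd
        Match User chris
            ChrootDirectory /home/chris
            AllowTcpForwarding no
            X11Forwarding no
            ForceCommand internal-sftp

    With ForceCommand internal-sftp the user can only transfer files; dropping that line (and populating the chroot with a shell, git and their libraries) is what it takes to allow git over SSH inside the jail.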

    Read the article

  • Rip authentication from LDAP to local

    - by oxinabox
    We are taking a small portion of our network offline and running a separate network using that portion. (By small portion I mean 2 servers, which will be connected to 30-odd boxes that aren't usually part of our network and don't need to authenticate.) I intend to create a VM on one of the servers to provide general user services: an IRC server, remote shell, etc. And I would like the users to be able to use their usual server login details. The problem is that the LDAP server that normally checks those details is not one of those servers. So I need to be able to somehow take their details off LDAP and put them on the server that is coming. One suggestion I had was to set up an LDAP server on the VM locally and clone the LDAP database onto it (using something called slapcat) - is this the best way? Or can I change the LDAP data into local authentication data?
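
    For the clone-the-directory route, a hedged sketch of the usual slapcat/slapadd round trip (file names and host names are placeholders; slapadd has to run on the new VM while its slapd is stopped):

        # on the existing LDAP server: dump the whole directory to an LDIF file
        slapcat -l directory-backup.ldif
        # copy it to the new VM, then load it there with slapd stopped
        scp directory-backup.ldif newvm:/tmp/
        slapadd -l /tmp/directory-backup.ldif
        # make sure the rebuilt database files belong to the user slapd runs as
        # (often "ldap" with the data under /var/lib/ldap, but this varies by distro),
        # then start slapd again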

    Read the article

  • Get cryptic error when trying to create a snapshot of any of my VMs

    - by Zolt
    I'm using Oracle VM VirtualBox. I have 6 VMs that I've imported. When I click on an individual VM and then click the camera icon (or press Ctrl+Shift+S) to take a snapshot, the snapshot process fails and VirtualBox gives the following error:

        Failed to create a snapshot of the virtual machine vmName.
        Details: Result Code: VBOX_E_IPRT_ERROR (0x80BB0005)

    This happens not just for one of my VMs but for all of them, if I try to take a snapshot of each one separately, one at a time. My computer is a Windows 7 machine. I have 200 GB free on my hard drive, and I see no reason why the error should occur. I can import VMs, run them, and clone them without any problems. Can anyone tell me what to do to fix the issue?
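
    Taking the same snapshot from the command line often produces a more useful message than the GUI dialog; a hedged sketch with placeholder names:

        # list the VMs VirtualBox knows about, then retry the snapshot from the CLI
        VBoxManage list vms
        VBoxManage snapshot "vmName" take "test-snap" --description "CLI test"
        # if it still fails, the tail of VBox.log in that VM's Logs folder usually
        # names the real culprit (permissions, a missing parent disk, and so on)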

    Read the article

  • TrueCrypt & upgrading your hard drive?

    - by Danielb
    I currently use TrueCrypt to encrypt the hard drive in a Win7 laptop (everything in a single partition). I am looking to upgrade the hard drive to a model with significantly more storage capacity. I've had a look through the documentation but I couldn't see anything about this particular scenario. I assume I need to do something like the following:
    1. Remove the encryption from the existing drive.
    2. Clone the existing drive image onto the new hard drive.
    3. Physically install the new drive into the laptop.
    4. Resize the single partition to use all the space in the new drive.
    5. Encrypt all of the new bigger drive with TrueCrypt.
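
    As a rough illustration of step 2 only (the tooling is an assumption, not something the TrueCrypt documentation prescribes): with both drives attached and the machine booted from a Linux live CD, a raw copy could look like this - double-check which device is which before running anything of the sort:

        lsblk                                      # confirm the device names first
        # ASSUMED names: /dev/sda = old, already-decrypted drive, /dev/sdb = new drive
        dd if=/dev/sda of=/dev/sdb bs=4M conv=noerror,sync
        sync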

    Read the article

  • Time Machine for Windows

    - by Kevin L.
    A simple Google search for "Time Machine for Windows" results in a flurry of different little apps. But instead of relying on forum anecdotes and advertisements, I call on the much wiser Super User beta community for some depth on this one. Having Time Machine running on Leopard is like a warm, fuzzy blanket of comfort that I never got with RAID, rsync, or SyncToy on Windows. I'm not asking the community what the "best" backup software for Windows is, but instead: Is there any true Time Machine clone for Windows, one that includes as many of the following as possible:
    1. Completely transparent, "set-it-and-forget-it" backup
    2. Incremental backups (changes only) for every hour for a day, every day for a month, and every week until the backup disk is full
    3. Ability to rebuild from this backup disk in case of main drive meltdown (the backup doesn't have to be bootable; neither are Time Machine disks)
    4. Extremely easy to use UI (target user == wife). Bonus points for a beautiful UI

    Read the article

  • Resize Ubuntu Linux system to smaller disk inside VMware ESXi

    - by mlambie
    I have several Ubuntu Linux virtual machines running on VMware ESXi hosts that have all been allocated disks much larger than their required capacity. As space is now becoming an issue on our SAN, I'd like to investigate downsizing the allocated disk space on these machines. All systems will be completely backed up (imaged) before I begin making changes, and I will always retain a pristine backup in case the partition resizing does not work. Is there an easier way than the following procedure, or is there a better solution entirely?
    1. Shut down and assign a second disk to the virtual machine
    2. Boot using the SystemRescueCD
    3. Use GParted to resize the original (source) partition, making it smaller
    4. Clone the new, smaller partition to the second disk
    5. Shut down and remove the initial disk from the virtual machine
    6. Reboot and force fsck to check the filesystem
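
    For reference, the filesystem half of step 3 can also be done by hand from the rescue CD; a hedged sketch assuming an ext3/ext4 filesystem on /dev/sda1 and a 20G target size (both assumptions):

        # from SystemRescueCD, with the filesystem unmounted
        e2fsck -f /dev/sda1        # resize2fs refuses to shrink without a clean check
        resize2fs /dev/sda1 20G    # shrink the filesystem itself to 20 GiB
        # the partition still has to be shrunk afterwards (GParted or parted), and it
        # must never end up smaller than the filesystem inside it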

    Read the article

  • Windows 8 cloned drive in 2nd computer

    - by Mark
    I did the Windows 8 Pro upgrade on a machine with a 64 GB SSD. Finding 64 GB not enough, I ordered a 128 GB SSD (Samsung 830), planning to use Clonezilla to clone the Windows 8 OS to it. I might try using the 64 GB SSD (with the Windows 8 upgrade on it) as a boot drive in a backup machine. I understand that I need to do some registry work to make it happy about the SSD 'transplant', but I am worried about having to register the same activation key on 2 computers. Am I at high risk of getting 'deactivated'? Note that the backup machine is only used when the primary computer is off.

    Read the article

  • NodeJS Supervisord Hashlib

    - by enedebe
    I have a problem with my NodeJS app. The problem is including the hashlib library. I've followed the install instructions more than 10 times: get a clone of the repo, do make and make install. NodeJS is installed in the default path, and that's the tricky point: when I launch node app.js it works perfectly. The problem starts when I configure supervisord to run it as the same user, with the same config file that works on my other systems, and NodeJS can't find hashlib:

        module.js:337
            throw new Error("Cannot find module '" + request + "'");
                  ^
        Error: Cannot find module 'hashlib'

    I'm going crazy, what can I do?! Why does node work great when I launch it from the console, but not under supervisord? Thanks!
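
    The usual suspect is the environment: supervisord starts node with a minimal PATH and no NODE_PATH, so modules that a login shell can see are invisible to it. A hedged supervisord stanza - the program name, paths and module directory are all assumptions to adapt:

        ; in supervisord.conf, or a file under /etc/supervisor/conf.d/
        [program:nodeapp]
        command=/usr/local/bin/node /home/enedebe/app.js
        directory=/home/enedebe
        user=enedebe
        environment=NODE_PATH="/usr/local/lib/node_modules",HOME="/home/enedebe"
        autorestart=true

    After editing, "supervisorctl reread" followed by "supervisorctl update" picks the change up.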

    Read the article

  • Clonezilla, NLite and PCs with different hardware specs

    - by r2b2
    Hi, I used Clonezilla to create images from a master computer and restore them to other workstations with the same specs. But the problem now is that there are new computers whose hardware specs are different from the ones I maintain (mostly the video card is different). Is it possible to create a customized Windows XP installer using NLite and integrate all potential video card and motherboard drivers? If I then used this NLite ISO to install to a master computer, which I will later clone and restore to other workstations, will Windows XP still pick up the correct driver set? During XP installation, does the installer transfer all the drivers to the computer's hard disk? Thanks! Ryan

    Read the article

  • Nagios NRPE: unable to read output [closed]

    - by Bill S
    Oracle Linux; Icinga; Nagios plugins. I did all the easy steps: the command runs fine standalone under my normal login, and I looked at /var/log/messages to see if there were any clues there. Trying to run the plugin under the nrpe login - I can't log in, I don't know the password; does this password matter? Can I reset it? Clone the ID? Is there any way to have the shell being executed log all commands and output somewhere? I am trying to run this shell script plugin, "nqcmd OBIEE plugin for Nagios", from this URL: http://www.rittmanmead.com/2012/09/advanced-monitoring-of-obiee-with-nagios/ I went through the script and made sure that everything obvious was set to 755. Any help would be appreciated.
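
    Two hedged checks that usually reveal what is hiding behind "unable to read output": running the plugin exactly as the nrpe user (no password needed when switching from root), and exercising the full NRPE round trip. The paths, host name and command name are assumptions:

        # as root on the Oracle Linux box: run the plugin as the nrpe user would
        su -s /bin/bash nrpe -c '/usr/lib64/nagios/plugins/check_obiee.sh; echo "exit=$?"'
        # from the Icinga host: test the whole NRPE round trip
        /usr/lib64/nagios/plugins/check_nrpe -H oraclebox -c check_obiee

    If the first command works but NRPE still returns nothing, the plugin is usually relying on environment variables (ORACLE_HOME, the PATH to nqcmd, and so on) that are set in an interactive profile but not for the nrpe daemon.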

    Read the article

  • bash: per-command history. How does it work?

    - by romainl
    OK. I have an old G5 running Leopard and a Dell running Ubuntu 10.04 at home, and a MacPro also running Leopard at work. I use Terminal.app/bash a lot. On my home G5 it exhibits a nice feature: using ↑ to navigate history, I get the last command starting with the few letters that I've typed. This is what I mean (| represents the caret):

        $ ssh user@server
        $ vim /some/file/just/to/populate/history
        $ ss|

    So, I've typed the first two letters of "ssh"; hitting ↑ results in this:

        $ ssh user@server

    instead of this, which is the behaviour I get everywhere else:

        $ vim /some/file/just/to/populate/history

    If I keep on hitting ↑ or ↓, I can navigate through the history of ssh like this:

        $ ssh otheruser@otherserver
        $ ssh user@server
        $ ssh yetanotheruser@yetanotherserver

    It works the same for any command like cat, vim or whatever. That's really cool. Except that I have no idea how to mimic this behaviour on my other machines. Here is my .profile:

        export PATH=/Developer/SDKs/flex_sdk_3.4/bin:/opt/local/bin:/opt/local/sbin:/usr/local/bin:/sw/bin:/sw/sbin:/bin:/sbin:/bin:/sbin:/usr/bin:/usr/sbin:$HOME/Applications/bin:/usr/X11R6/bin
        export MANPATH=/usr/local/share/man:/usr/local/man:opt/local/man:sw/share/man
        export INFO=/usr/local/share/info
        export PERL5LIB=/opt/local/lib/perl5
        export PYTHONPATH=/opt/local/bin/python2.7
        export EDITOR=/opt/local/bin/vim
        export VISUAL=/opt/local/bin/vim
        export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home
        export TERM=xterm-color
        export GREP_OPTIONS='--color=auto' GREP_COLOR='1;32'
        export CLICOLOR=1
        export LS_COLORS='no=00:fi=00:di=01;34:ln=target:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:*.tar=00;31:*.tgz=00;31:*.arj=00;31:*.taz=00;31:*.lzh=00;31:*.zip=00;31:*.z=00;31:*.Z=00;31:*.gz=00;31:*.bz2=00;31:*.deb=00;31:*.rpm=00;31:*.TAR=00;31:*.TGZ=00;31:*.ARJ=00;31:*.TAZ=00;31:*.LZH=00;31:*.ZIP=00;31:*.Z=00;31:*.Z=00;31:*.GZ=00;31:*.BZ2=00;31:*.DEB=00;31:*.RPM=00;31:*.jpg=00;35:*.png=00;35:*.gif=00;35:*.bmp=00;35:*.ppm=00;35:*.tga=00;35:*.xbm=00;35:*.xpm=00;35:*.tif=00;35:*.png=00;35:*.fli=00;35:*.gl=00;35:*.dl=00;35:*.psd=00;35:*.JPG=00;35:*.PNG=00;35:*.GIF=00;35:*.BMP=00;35:*.PPM=00;35:*.TGA=00;35:*.XBM=00;35:*.XPM=00;35:*.TIF=00;35:*.PNG=00;35:*.FLI=00;35:*.GL=00;35:*.DL=00;35:*.PSD=00;35:*.mpg=00;36:*.avi=00;36:*.mov=00;36:*.flv=00;36:*.divx=00;36:*.qt=00;36:*.mp4=00;36:*.m4v=00;36:*.MPG=00;36:*.AVI=00;36:*.MOV=00;36:*.FLV=00;36:*.DIVX=00;36:*.QT=00;36:*.MP4=00;36:*.M4V=00;36:*.txt=00;32:*.rtf=00;32:*.doc=00;32:*.odf=00;32:*.rtfd=00;32:*.html=00;32:*.css=00;32:*.js=00;32:*.php=00;32:*.xhtml=00;32:*.TXT=00;32:*.RTF=00;32:*.DOC=00;32:*.ODF=00;32:*.RTFD=00;32:*.HTML=00;32:*.CSS=00;32:*.JS=00;32:*.PHP=00;32:*.XHTML=00;32:'
        export LC_ALL=C
        export LANG=C

        stty cs8 -istrip -parenb
        bind 'set convert-meta off'
        bind 'set meta-flag on'
        bind 'set output-meta on'

        alias ip='curl http://www.whatismyip.org | pbcopy'
        alias ls='ls -FhLlGp'
        alias la='ls -AFhLlGp'
        alias couleurs='$HOME/Applications/bin/colors2.sh'
        alias td='$HOME/Applications/bin/todo.sh'
        alias scale='$HOME/Applications/bin/scale.sh'
        alias stree='$HOME/Applications/bin/tree'
        alias envoi='$HOME/Applications/bin/envoi.sh'
        alias unfoo='$HOME/Applications/bin/unfoo'
        alias up='cd ..'
        alias size='du -sh'
        alias lsvn='svn list -vR'
        alias jsc='/System/Library/Frameworks/JavaScriptCore.framework/Versions/A/Resources/jsc'
        alias asl='sudo rm -f /private/var/log/asl/*.asl'
        alias trace='tail -f $HOME/Library/Preferences/Macromedia/Flash\ Player/Logs/flashlog.txt'
        alias redis='redis-server /opt/local/etc/redis.conf'

        source /Users/johncoltrane/Applications/bin/git-completion.sh
        export GIT_PS1_SHOWUNTRACKEDFILES=1
        export GIT_PS1_SHOWUPSTREAM="verbose git"
        export GIT_PS1_SHOWDIRTYSTATE=1
        export PS1='\n\[\033[32m\]\w\[\033[0m\] $(__git_ps1 "[%s]")\n\[\033[1;31m\]\[\033[31m\]\u\[\033[0m\] $ \[\033[0m\]'

        mkcd () {
            mkdir -p "$*"
            cd "$*"
        }

        function cdl {
            cd $1
            la
        }

        n() {
            $EDITOR ~/Dropbox/nv/"$*".txt
        }

        nls () {
            ls -c ~/Dropbox/nv/ | grep "$*"
        }

        copy(){
            curl -s -F 'sprunge=<-' http://sprunge.us | pbcopy
        }

        if [ -f /opt/local/etc/profile.d/cdargs-bash.sh ]; then
            source /opt/local/etc/profile.d/cdargs-bash.sh
        fi

        if [ -f /opt/local/etc/bash_completion ]; then
            . /opt/local/etc/bash_completion
        fi

    Any idea?
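
    What the G5 is almost certainly doing is binding the arrow keys to readline's history-search functions instead of the default previous-history/next-history; a hedged way to get the same behaviour on the other machines (the escape sequences are the common xterm/urxvt ones):

        # either put these two lines in ~/.inputrc ...
        #   "\e[A": history-search-backward
        #   "\e[B": history-search-forward
        # ... or do the same thing from ~/.bashrc / ~/.profile:
        bind '"\e[A": history-search-backward'
        bind '"\e[B": history-search-forward'

    The G5 most likely already has these bindings in its /etc/inputrc or ~/.inputrc, which would explain why only that machine behaves this way.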

    Read the article
