Search Results

Search found 24505 results on 981 pages for 'bash script'.

  • Linux: Schedule command to run once after reboot (RunOnce equivalent)

    - by Christopher Parker
    I'd like to schedule a command to run after reboot on a Linux box. I know how to make a command run consistently after every reboot with a @reboot crontab entry; however, I only want the command to run once. After it runs, it should be removed from the queue of commands to run. I'm essentially looking for a Linux equivalent of RunOnce in the Windows world. In case it matters:

        $ uname -a
        Linux devbox 2.6.27.19-5-default #1 SMP 2009-02-28 04:40:21 +0100 x86_64 x86_64 x86_64 GNU/Linux
        $ bash --version
        GNU bash, version 3.2.48(1)-release (x86_64-suse-linux-gnu)
        Copyright (C) 2007 Free Software Foundation, Inc.
        $ cat /etc/SuSE-release
        SUSE Linux Enterprise Server 11 (x86_64)
        VERSION = 11
        PATCHLEVEL = 0

    Is there an easy, scriptable way to do this?
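
    One way to script this (a minimal sketch, not from the question; the script and tag names are hypothetical): install a @reboot crontab entry that removes itself after firing, which makes it behave like RunOnce.

        #!/bin/bash
        # runonce.sh -- queue a command to run once after the next reboot.
        # Assumes a cron implementation with @reboot support (Vixie cron has it).
        cmd="$1"
        tag="runonce_$(date +%s)_$$"   # unique marker so the entry can find itself
        # Append a @reboot line that runs the command and then rewrites the
        # crontab without any line containing the marker -- i.e. without itself.
        # (Commands containing % would need escaping for cron.)
        ( crontab -l 2>/dev/null
          printf '@reboot %s; crontab -l | grep -v %s | crontab -\n' "$cmd" "$tag"
        ) | crontab -

    Usage would be something like ./runonce.sh '/usr/local/bin/do-migration'; the entry fires on the next boot and deletes itself.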

  • Synchronization of volume snapshots when doing whole system backups

    - by intuited
    Is there a way to guarantee consistency across volumes when doing backups from LVM snapshots? Consider this scenario: Some system upgrade is in progress. It will write some files to the /usr volume, and once completed, will record success in the /var volume. As the upgrade is just about complete, I run a backup script that creates snapshots of the /usr and /var volumes, along with the rest of the system's volumes, and proceeds to create backups from those snapshots. Just before the upgrade's last write/flush on the /usr volume completes, the backup script takes its snapshot of /usr. That write completes, and the upgrade operation's success is quickly recorded in the nebulous depths of /var. The backup script takes a snapshot of /var. The backup script creates backups from the snapshots it has, er, snapshotted. So the result of all of this tomfoolery is that the resulting /usr backup contains a file which is missing a few bits, and the /var backup contains metadata indicating that that file is complete and approved for use. Without delving into the details of which operating systems' system upgrade systems would be unfazed by such trifles, is there a way to avoid such problems? At the least this seems like it could cause some application to fail unexpectedly after restoration of such a backup.
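
    One approach worth sketching (not from the question; assumes a util-linux recent enough to ship fsfreeze, and a volume group named vg0): quiesce both filesystems first, take both snapshots, then thaw. Writes block for the short freeze window, so the upgrade's file write and its success record land on the same side of both snapshots.

        #!/bin/bash
        # Freeze both filesystems so no write can straddle the two snapshots.
        fsfreeze -f /usr
        fsfreeze -f /var
        # Snapshot both volumes while everything is quiescent.
        lvcreate -s -n usr_snap -L 1G /dev/vg0/usr
        lvcreate -s -n var_snap -L 1G /dev/vg0/var
        # Thaw in reverse order; the freeze window is typically well under a second.
        fsfreeze -u /var
        fsfreeze -u /usr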

  • IIS7 - Virtual Directories' Parent Paths behaving differently than previous versions

    - by MisterZimbu
    I'm doing a migration of a web server from IIS 5 to IIS 7, and I'm noticing that virtual directories behave differently between the two. I have a site located at c:\inetpub\SiteName. This site contains a virtual directory "bob" that points at c:\virtualdirs\bob. There's a script in the bob folder (script.asp) that contains just:

        <!--#include virtual="../index.asp"-->

    When I run the script by going to http://SiteName/bob/script.asp, the two versions resolve the parent path differently:

    - IIS5 references the parent path of the site and includes c:\inetpub\SiteName\index.asp.
    - IIS7 references the parent folder of the virtual directory and looks for c:\virtualdirs\index.asp (which doesn't exist).

    Doing a Response.Write of a Server.MapPath confirms this. Is there a way to get IIS7 to behave like IIS5 in this regard? Unfortunately, moving index.asp and its logic into the virtualdirs folder isn't an option, as the virtual directory will be shared across many sites (each with a different index.asp). Thanks.
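
    A tiny diagnostic in the spirit of what the author describes (a sketch; it assumes parent paths are enabled in the site's ASP settings) makes the difference visible on each server:

        <%
        ' Prints the physical path this server resolves for the include's target.
        Response.Write Server.MapPath("../index.asp")
        %>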

  • Mapped network drive on logout

    - by Robuust
    I'm using a script to keep a mapped network connection alive, but of course the mapped connection is gone when I log out. The point is that I'm running this on Windows Server 2008 R2, where I log in to the administrator account over Remote Desktop. The session should remain logged in and keep the mapped connection, since this script is what keeps the MS Office 365 SharePoint session from logging out. Is there a way to keep the mapped network location (L:) available after logout, so the script can run and maintain the connection?

        # Create an IE object and navigate to my SharePoint site.
        $ie = New-Object -ComObject InternetExplorer.Application
        $ie.navigate('https://xxx.sharepoint.com/')
        # Don't need the object anymore, so let's close it to free up some memory.
        $ie.Quit()
        # Just in case there was a problem with the web client service,
        # I am going to stop and start it. You could potentially remove this
        # part if you want. I like it just because it takes out a step of
        # troubleshooting if I'm having problems.
        Stop-Service WebClient
        Start-Service WebClient
        # We are going to set the $Drive variable here. This just tells the
        # command what drive letter to map; you can change this to whatever
        # you want (if you change it to a drive that is already mapped it
        # will overwrite it, so be careful).
        $Drive = "L:"
        # You can change the drive destination to whatever you want;
        # it has to be a document library or folder, of course.
        $DrvDest = "https://xxx.sharepoint.com/files/"
        # Here is where we create the object to map the network drive and
        # then map the network drive.
        $net = New-Object -ComObject WScript.Network
        $net.mapnetworkdrive($Drive, $DrvDest)
        # That is the end of the script. Now schedule this with Task
        # Scheduler to run every so often and you should be set.
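
    One hedged workaround (not from the question; the task name and script path are hypothetical): drive mappings live and die with the logon session, so rather than keeping an interactive session alive, register the script as a scheduled task with stored credentials ("run whether user is logged on or not"); each run then maps L: afresh in its own session.

        schtasks /Create /TN "KeepSharePointAlive" /SC MINUTE /MO 15 ^
            /RU administrator /RP * /TR "powershell -File C:\scripts\keepalive.ps1"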

  • Launching Installer via PowerShell and WinRM and Nothing Happens

    - by Nick DeMayo
    I'm currently working on a PowerShell script to run some Microsoft hotfix installers remotely on several Windows Server 2008 R2 servers that I manage. Basically, the script copies all the appropriate files up to the server, and then runs the installer via Invoke-Command, like so:

        function InstallCU {
            Write-Host "Installing June 2013 CU..."
            Invoke-Command -ComputerName $ServerName -ScriptBlock {
                Start-Process "c:\aaa\prjcusp2\ubersrvprj2010-kb2817530-fullfile-x64-glb.exe" -ArgumentList "/passive"
            }
        }

    If I run the Start-Process command locally on the server, the installer runs properly. However, when trying to run it remotely, nothing happens (actually, I can see the installer start up in Task Manager, but it closes a couple of seconds later and doesn't run). I've attempted giving Invoke-Command a -Credential, I've turned off UAC on the server, and I've ensured that my WinRM settings (running 'winrm quickconfig' and setting TrustedHosts to *) are correct. I've also tried having the Invoke-Command script run a local PowerShell script to run the installer, and changing the argument from '/passive' to '/quiet' (in case it can't remotely launch something that has a UI), but again, no dice. Is there anything else I can try, or am I just not going to be able to do this?
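
    One detail that matches these symptoms (a guess, not from the question): when the script block returns, the remote session is torn down, and child processes it spawned can be killed with it. Waiting on the installer keeps the session open until it finishes, and -PassThru surfaces the exit code:

        # Sketch of the workaround: block until the installer exits.
        Invoke-Command -ComputerName $ServerName -ScriptBlock {
            $p = Start-Process "c:\aaa\prjcusp2\ubersrvprj2010-kb2817530-fullfile-x64-glb.exe" `
                -ArgumentList "/passive" -Wait -PassThru
            $p.ExitCode
        }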

  • chroot'ing SSH home directories: shell problem

    - by Hamza
    Hi folks, I am trying to chroot my SSH users to their home directories, and it seems to work... in a strange way. Here is what I have in my sshd_config:

        Match group restricthome
        ChrootDirectory %h

    The permissions on the user directories look like this:

        drwxr-xr-x 2 root root 1024 May 11 13:45 [user]/

    And I can see that the user logs in successfully (with no error messages after this):

        May 11 13:49:23 box sshd[5695]: Accepted password for [user] from x.x.x.x port 2358 ssh2

    But after entering the password, the PuTTY window closes. This is a wild guess, but could it be because the user's shell is set to /bin/bash and it can't execute because of the chroot? If so, could you give me pointers on how to fix it? Would simply copying the bash binary into the user's home directory and modifying the shell work? How would I deal with the dependencies? ldd shows quite a few of those :) Comments/suggestions will be appreciated. Thanks.
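
    If the shell really is the missing piece, a rough sketch of populating the chroot (illustrative, not a hardened recipe; paths vary by distribution) would be:

        #!/bin/bash
        # Copy bash plus every shared library it links against into the chroot.
        chroot_dir=/home/user        # hypothetical user home / chroot root
        mkdir -p "$chroot_dir/bin"
        cp /bin/bash "$chroot_dir/bin/"
        # ldd prints the resolved library paths; mirror each one into the chroot.
        for lib in $(ldd /bin/bash | grep -o '/[^ ]*'); do
            mkdir -p "$chroot_dir$(dirname "$lib")"
            cp "$lib" "$chroot_dir$lib"
        done

    A chrooted login also wants /dev/null and a few friends for a comfortable session; many setups sidestep all of this by giving such users OpenSSH's in-process internal-sftp subsystem instead, which needs no binaries inside the chroot at all.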

  • How to install software packages on a shared Red Hat Linux host account without root access or rpm?

    - by jeff
    I have a shared RHEL 4 host account where I do not have root privileges. I would like to install Git and bash-completion in a way that they can be upgraded easily. To date, I've just been installing from source, passing $HOME as a prefix to autoconf. Obviously this isn't ideal, as I need to hunt down the files associated with the version I'm upgrading away from and delete them. I've tried using rpm, but I just get

        -bash: rpm: command not found

    back, so it's not available. I also looked into checkinstall, but it looks like that requires rpm, dpkg, or Slackware's package manager to be available. Is there anything out there that can be used like a package manager without requiring root access or an existing package manager?
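
    One pattern that fits this (a suggestion, not from the question; version numbers are illustrative): GNU Stow, which is just Perl and itself installs from source into $HOME. Each version gets its own prefix, and upgrades become a matter of swapping symlink farms:

        # Install each version under its own stow directory.
        ./configure --prefix="$HOME/stow/git-1.7.0"
        make && make install
        # Let stow symlink bin/, share/, etc. into $HOME...
        cd "$HOME/stow" && stow -t "$HOME" git-1.7.0
        # ...and cleanly remove an old version's links when upgrading.
        stow -D -t "$HOME" git-1.6.5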

  • GNU screen: how to dynamically update the title of a window?

    - by Fabio
    I googled a lot, but I can't find the answer I'm looking for... I'm trying to improve the appearance of GNU screen using the screenrc file. I've tuned the colors, status line, caption, and the list of loaded windows. The only thing I'm not able to achieve is getting the caption to show the currently executing command, as in this picture (note the vim caption in the right pane). What I currently have is this, and what I would like is captions (and, if possible, also the hardstatus line) showing |0 less| 1 man instead of the current |0 bash| 1 bash. How do I do this? Thanks in advance.
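
    The usual recipe (a sketch of what screen's manual documents; adjust the search string to your own prompt's trailing characters): tell screen where the shell prompt ends with shelltitle, and embed screen's null title escape in the prompt so each typed command becomes the window title while it runs.

        # In ~/.screenrc -- "$ " is the prompt-end search string, "bash" the idle title.
        shelltitle "$ |bash"

        # In ~/.bashrc -- prefix the prompt with ESC k ESC \ (wrapped in \[ \]
        # so bash doesn't count the escape toward the prompt width).
        PS1='\[\033k\033\\\]'"$PS1"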

  • How to install mpgtx from source code

    - by Ahmet vardar
    I am new to Linux servers. I have an mpgtx folder in my root directory; how can I install it? The README says to run:

        ./configure && make

    When I type this I get a permission denied error. Thanks. EDIT: Here are the steps I took:

        root@server [/]# cd /mpgtx
        root@server [/mpgtx]# ./configure
        -bash: ./configure: Permission denied
        root@server [/mpgtx]# make
        -----------------------------------------------------------------------------
        Hello ! I'm afraid I'm a dummy Makefile.
        My goal in life is to politely ask you to run the configure script to actual-
        ly generate a real Makefile.
        Would you be kind enough to type "./configure --help" to see the options that
        will suit your needs ? Please note that typing "./configure" without option
        will generate a Makefile that will suit most people needs.
        I wish you a good day. Please don't drive to fast.
        -----------------------------------------------------------------------------
        root@server [/mpgtx]# ./configure
        -bash: ./configure: Permission denied
        root@server [/mpgtx]#
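
    A likely explanation (a guess from the symptoms): the configure script lost its execute bit when the source was unpacked, or the filesystem is mounted noexec. Either restore the bit or run the script through sh explicitly:

        chmod +x configure && ./configure && make
        # or, without touching permissions:
        sh ./configure && make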

  • Permissions Required for SharePoint Backups

    - by Wyatt Barnett
    We are in the process of rolling out an extranet for some of our partners using WSS 3.0 as the platform. We already use it internally for a variety of things, and we are using the following PowerShell script to back up the server:

        param(
            $url = "http://localhost",
            $backupFolder = "c:\"
        )
        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
        $site = new-Object Microsoft.SharePoint.SPSite($url)
        $names = $site.WebApplication.Sites.Names
        foreach ($name in $names) {
            $n2 = ""
            if ($name.Length -eq 0) { $n2 = "ROOT" } else { $n2 = $name }
            $tmp = $n2.Replace("/", "_") + ".sbk"
            $saveas = ""
            if ($backupFolder.Length -eq 0) {
                $saveas = $tmp
            } else {
                $saveas = join-path -path $backupFolder -childPath $tmp
            }
            $site.WebApplication.Sites.Backup($name, $saveas, "true")
            write-host "$n2 backed up to $saveas."
        }

    This script works perfectly on the current installation, running as our domain backup user. On the new box, it fails when run as the backup user, claiming "The web application located at http://extranet/ could not be found." That URL does, in fact, work, so I'm fairly certain it isn't anything that dumb and rather is some permissions issue, especially because the script works perfectly when executed from my own security context. I have tried making the backup user a farm owner, as well as adding him to the various site collection admin groups on the extranet. The one major difference between the extranet and the intranet server is that the extranet has an alternate access mapping (for https://xnet.example.com) and also uses forms authentication for that mapping. Anyhow, what permissions (or other voodoo) do I need to set up to get this script to work properly?

  • Why are all Linux commands broken after installing Perl?

    - by user115079
    I installed Perl using the following command:

        curl -L http://xrl.us/installperlnix | bash

    After that, I ran the following command to create a soft link:

        ln -sf /usr/local/bin/perl /usr/bin/perl

    Now I'm trying to run commands like dir, mkdir, ll, rm, and vi, but nothing seems to be working for me. Also, when I try to log into my shell I get the following message at startup:

        Last login: Wed Apr 4 21:50:12 2012 from x.y.z.ip
        -bash: perl: command not found

    Please help. Here are the system details:

        $ cat /proc/version
        Linux version 2.6.18-274.18.1.el5.028stab098.1 (root@rhel5-build-x64) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-46)) #1 SMP Sat Feb 11 15:30:41 MSK 2012
        $ cat /etc/issue
        CentOS 5.7 32 bit
        Kernel \r on an \m

    I don't know whether Perl was already installed or not, and now I can't check.
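
    First aid worth trying (a guess at the cause, not a diagnosis): if the installer edited a shell startup file and clobbered PATH, the usual commands are still on disk and reachable by absolute path. Restore a sane PATH for the session, then look for the offending lines:

        export PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
        # Inspect what the installer may have appended to the startup files.
        /bin/grep -n PATH ~/.bashrc ~/.bash_profile /etc/profile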

  • For ps aux, what are the Ss, Sl, and Ssl process types? (UNIX)

    - by JiminyCricket
    When doing a "ps aux" command, I get some processes listed as Ss, Ssl, and Sl. What do these mean?

        root 24653 0.0 0.0  2256  8 ? Ss Apr12 0:00 /bin/bash -c /usr/bin/python /var/python/report_watchman.py
        root 24654 0.0 0.0 74412 88 ? Sl Apr12 0:01 /usr/bin/python /var/python/report_watchman.py
        root 21976 0.0 0.0  2256  8 ? Ss Apr14 0:00 /bin/bash -c /usr/bin/python /var/python/report_watchman.py
        root 21977 0.0 0.0 73628 88 ? Sl Apr14 0:01 /usr/bin/python /var/python/report_watchman.py
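
    For reference, ps(1)'s PROCESS STATE CODES section decodes these (a quick paraphrase of the man page):

        # S   interruptible sleep (waiting for an event to complete)
        # s   is a session leader
        # l   is multi-threaded (using CLONE_THREAD, as NPTL pthreads do)
        #
        # So "Ss" = a sleeping session leader, "Sl" = sleeping and multi-threaded,
        # and "Ssl" = a sleeping, multi-threaded session leader.
        man ps    # see PROCESS STATE CODES for the full table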

  • Suppress EXT3-fs warning on mount

    - by STM
    I am familiar with output suppression on Unix machines, i.e.:

        cat /file/that/doesnt/exist > /dev/null 2>&1

    However, I can't seem to suppress the output of mount when an ext3 filesystem is mounted for the nth time and it recommends an fsck. As it happens, fscks are run regularly by another machine, so these warning messages needlessly interrupt the flow of output from my pretty bash script. These are the errors:

        # mount -t ext3 /dev/sda1 /mnt > /dev/null 2>&1
        kjournald starting. Commit interval 5 seconds
        EXT3-fs warning: maximal mount count reached, running e2fsck is recommended
        EXT3 FS 2.4-0.9.19, 19 August 2002 on sd(8,1), internal journal
        EXT3-fs: mounted filesystem with ordered data mode.

    Can anyone shed some light on this? I'm clearly redirecting both fds, but somehow output is still getting through. This is GNU Bash v2.05a.
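
    The likely reason redirection doesn't help (an inference, not from the question): these lines are kernel printk messages written to the console by the ext3 driver, not anything mount prints on stdout/stderr. Lowering the console log level around the mount silences them:

        dmesg -n 1      # console now only receives panic-level messages
        mount -t ext3 /dev/sda1 /mnt > /dev/null 2>&1
        dmesg -n 7      # restore a chattier console level afterwards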

  • Configure Supervisor to manage init.d services

    - by Eduard Luca
    I installed uwsgi and created a bash script that lets me start/stop uwsgi with service uwsgi [start|stop]. This bash script is located in /etc/init.d/uwsgi. Now, I want to (politely) ask Supervisor to use that script to manage the uwsgi process. All the tutorials indicate that this is not the way to do it; however, I do want to be able to do both service uwsgi stop and supervisorctl stop uwsgi (not sure if I nailed the syntax of the latter), even though I am aware that the first one will not in fact stop my service, because Supervisor will restart it (that's exactly what I need). Note that I'm using uwsgi in emperor mode, if that matters in any way.
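
    For contrast, the configuration the tutorials point toward (a sketch; the binary and vassal paths are hypothetical): Supervisor wants a foreground child it can own, so it is pointed at the uwsgi binary rather than at an init script that forks and exits.

        ; fragment for /etc/supervisor/conf.d/uwsgi.conf
        [program:uwsgi]
        command=/usr/local/bin/uwsgi --emperor /etc/uwsgi/vassals
        autostart=true
        autorestart=true
        stopsignal=QUIT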

  • Estimating compressed file size using a list parameter

    - by Sai
    I am currently compressing a list of files from a directory using a command of the following form:

        tar -cvjf test_1.tar.gz -T test_1.lst --no-recursion

    The above command will compress only those files mentioned in the list. I am doing this because the list is generated such that it fits on a DVD. However, compression shrinks the data below the estimated size, and abundant space is left on the DVD. This is something like a knapsack problem: I would like to estimate the compressed size and add some more files to the list. I found that it is possible to estimate the compressed size using the following command:

        tar -cjf - Folder/ | wc -c

    This command does not take a list parameter. Is there a way to estimate the compressed file size? I am also looking into options like Perl scripts, etc.

    Edit: I think I should provide more information, since I have been doing a lot of web searching. I came across a Perl script (link) that sort of emulates the knapsack algorithm. The current problem with that script is that it splits the files in their original, uncompressed state. When I compress the files after splitting them, there is room left for more files, which I consider inefficient. There are two ways I could resolve the inefficiency: (a) compress the individual files into a directory using a script; the compressed files would provide a best estimate, and I could generate a list from the folder of compressed files and apply it to the uncompressed ones; or (b) check whether the compressed file's size is less than the required size and, if so, keep adding files until I meet the requirement. However, adding new files to an existing compressed archive is an optimization problem by itself.
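
    One small observation (it follows from tar's -T option being independent of the output target): -T works just as well when the archive is written to stdout, so the compressed size of exactly the listed files can be measured without creating a file:

        tar -cjf - -T test_1.lst --no-recursion | wc -c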

  • Correct password for SSH key rejected when SSH'd into machine

    - by user20342
    When I am logged into my machine directly, I can do all git operations, and when prompted for a password, the password is accepted. When I ssh into the same box and run git operations on the same repos, the password is rejected. The relevant section of .ssh/config looks like this:

        # Generic settings
        Host *
            ServerAliveInterval 600
            ControlPath /tmp/ssh-%r@%h:%p
            ControlMaster auto
            KeepAlive yes
            IdentityFile ~/.ssh/id_rsa.pub

    The transaction looks like this when I ssh into my box:

        {12-12-03 9:41}hbrown-wks2:~/workspace/spt/project@master??? hbrown% git pull
        Enter passphrase for key '/home/hbrown/.ssh/id_rsa.pub':
        Enter passphrase for key '/home/hbrown/.ssh/id_rsa.pub':
        Enter passphrase for key '/home/hbrown/.ssh/id_rsa.pub':
        Permission denied (publickey).
        fatal: Could not read from remote repository.
        Please make sure you have the correct access rights and the repository exists.

    Using bash does not appear to make a difference (i.e. ssh-agent /bin/bash). This is a recent development, but I can't cite the change that caused it.
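
    Two things worth checking (guesses, not a confirmed diagnosis): when logged in directly, a desktop keychain or agent may be supplying the key, which would mask a config problem that only surfaces over plain ssh; and IdentityFile conventionally points at the private key, not the .pub file. Verbose mode shows which identities ssh actually offers (the host name below is a placeholder):

        # In ~/.ssh/config, the usual form is:
        #   IdentityFile ~/.ssh/id_rsa
        ssh -v git@your-git-host 2>&1 | grep -i identity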

  • I am not seeing named sessions in GNU screen

    - by dorelal
    I am trying to learn GNU screen. I am using a Mac (Snow Leopard) and running version 4.00.03 of screen. I start a new screen session with the following command:

        screen -S foo

    However, if I then press Ctrl-a " I see the list of windows, and every entry is just a number followed by "bash". Because they all say 'bash', I can't figure out which window has what. Am I missing something?
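
    A point of terminology that may be the missing piece (an inference from the question): -S names the session, while Ctrl-a " lists windows, and windows carry their own titles. Window titles can be set explicitly:

        # Inside a running session, this opens a new window titled "editor".
        screen -t editor vim notes.txt
        # Or, in an existing window: press Ctrl-a A and type a title.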

  • CDN rerouting on 404 (file not yet in sync with original storage)

    - by Alan Ristic
    Here is the problem. I've set up my app (on EC2) to store uploaded images directly on Amazon S3. I'd like to be able to serve static files (CDN-style) from my 'home' server, so I wrote a script that syncs from S3. But there is a window of (at least) one minute before things are in sync. I see two solutions to the problem of pics not yet being available on the 'home' server:

    1. Write a script on EC2 (where the app resides) to fetch from the DB those pics that have a status of "not-yet-synced", which is the default state when a user uploads a picture. The script then pings each picture and, if it gets an OK response, updates the DB from "not-yet-synced" to "synced".

    2. The preferred solution: let Apache (in this case) redirect a request for an image to S3 when it sees a 404 (i.e., it doesn't find the requested image). This way I wouldn't need the script from solution 1.

    So what approach do you suggest I take to solve this redundancy problem? What is the practice in production environments? To clarify further: I'd like to serve images from the 'home' server first, and if that fails, serve them from S3. Thanks, Alan
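
    Option 2 is a stock mod_rewrite pattern; a sketch (the bucket name and path are placeholders, and the leading slash depends on vhost vs .htaccess placement):

        # If the requested file doesn't exist locally, fetch it from S3 instead.
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^images/(.*)$ https://my-bucket.s3.amazonaws.com/images/$1 [R=302,L]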

  • Unable to force Debian to do unattended install... libc6 wants interactive confirm

    - by JD Long
    I'm trying to create a script that forces a Debian Lenny install to install the latest version of CRAN R. During the install, it appears libc6 is upgraded, and the install wants interactive confirmation that it's OK to restart three services (mysql, exim4, cron). This process HAS to be unattended, as it runs on Amazon's Elastic MapReduce (EMR) machines, but I'm running out of options. Here are a few things I've tried. This previous question appears to be exactly what I'm looking for, so I set up my install script as follows:

        # set my CRAN repos... yes, I know there's a new convention for where to put these.
        echo "deb http://cran.r-project.org/bin/linux/debian lenny-cran/" | sudo tee -a /etc/apt/sources.list
        echo "deb-src http://cran.r-project.org/bin/linux/debian lenny-cran/" | sudo tee -a /etc/apt/sources.list
        # set the dpkg.cfg options per the previous SuperUser question
        echo "force-confold" | sudo tee -a /etc/dpkg/dpkg.cfg
        echo "force-confdef" | sudo tee -a /etc/dpkg/dpkg.cfg
        export DEBIAN_FRONTEND=noninteractive
        # add key to keyring so it doesn't complain
        gpg --keyserver pgp.mit.edu --recv-key 381BA480
        gpg -a --export 381BA480 > jranke_cran.asc
        sudo apt-key add jranke_cran.asc
        sudo apt-get update
        # install the latest R
        sudo apt-get install --yes --force-yes r-base

    But this script still hangs at the interactive prompt asking for confirmation to restart the services. OK, so I tried stopping the services first with the following script:

        sudo /etc/init.d/mysql stop
        sudo /etc/init.d/exim4 stop
        sudo /etc/init.d/cron stop
        sudo apt-get install --yes --force-yes libc6

    This does not work either; the interactive screen comes back, but this time with only cron listed as the service that must be restarted. So is there a way to make libc6 just restart these services with no user input? Or is there a way to stop cron so it does not cause an interactive prompt? Maybe a creative option I've never thought of? Keep in mind that this system is brought up, some Hadoop code is run, and then it's torn down, so I can put up with side effects and bad behavior that we might not want on a production desktop machine or web server.
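
    One mechanism that would produce exactly this symptom (a guess worth ruling out): sudo strips most of the caller's environment by default, so the export DEBIAN_FRONTEND=noninteractive line may never reach the apt-get running under sudo. Passing it on the sudo command line sidesteps that:

        sudo DEBIAN_FRONTEND=noninteractive apt-get install --yes --force-yes r-base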

  • Setting up a Network Bridge on Linux VM (Windows 7 Host)

    - by GrandAdmiral
    I would like to use NetEm to simulate a low-bandwidth environment while testing an Internet-connected device. My plan is to set up a bridge in a Linux VM (Linux Mint 13) on a Windows 7 host; then I can use NetEm in the Linux VM to limit the bandwidth to an external device. Unfortunately, I'm having trouble setting up the bridge. I went with the following script:

        ifconfig eth0 0.0.0.0 promisc up
        ifconfig eth1 0.0.0.0 promisc up

    Then create the bridge and bring it up:

        brctl addbr br0
        brctl setfd br0 0
        brctl addif br0 eth0
        brctl addif br0 eth1
        dhclient br0
        ifconfig br0 up

    When I run that script, I see the following warning:

        Rather than invoking init scripts through /etc/init.d, use the service(8)
        utility, e.g. service smbd reload
        Since the script you are attempting to invoke has been converted to an
        Upstart job, you may also use the reload(8) utility, e.g. reload smbd

    The device connecting to the bridge is able to obtain an IP address, but it can only ping the IP address of the bridge (both are 10.2.32.xx). Then, after a few minutes, other parts of our network go down. I'm not sure why, but once I kill the bridge, the network is fine. Is it possible to set up a network bridge in a Linux VM? Do I need to do something else with the dhclient br0 part of the script? By the way, I'm using VirtualBox. The wired connection is eth0 and the wireless connection is eth1; the wired connection goes to the device and the wireless connection goes to the network. Both adapters are set up as bridged adapters with promiscuous mode set to "allow all".
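
    Two hedged observations (not from the question): bringing the bridge up before requesting a lease avoids one common failure mode, and bridging through a wireless client interface often fails outright, because most access points drop frames whose source MAC is not the associated client's. A reordered sketch:

        brctl addbr br0
        brctl addif br0 eth0
        brctl addif br0 eth1
        brctl setfd br0 0
        ifconfig br0 up     # bridge up first...
        dhclient br0        # ...then ask for a lease on it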

  • Directory Not Found Error

    - by noobguy
    I am trying to verify Tails, and some difficulties seem to have arisen when I got to the command-prompt portion of the verification. Below is the transcript:

        noob@noob-System-Product-Name:~$ cd [/media/noob/UUI]
        bash: cd: [/media/noob/UUI]: No such file or directory
        noob@noob-System-Product-Name:~$ gpg --keyid-format long --import tails-signing.key
        gpg: can't open `tails-signing.key': No such file or directory
        gpg: Total number processed: 0

    The same thing happens when I try from the download directory:

        noob@noob-System-Product-Name:~$ cd [/home/noob/Downloads]
        bash: cd: [/home/noob/Downloads]: No such file or directory
        noob@noob-System-Product-Name:~$ gpg --keyid-format long --import tails-signing.key
        gpg: can't open `tails-signing.key': No such file or directory
        gpg: Total number processed: 0

    Any suggestions would be greatly appreciated.
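
    The transcript points at one concrete fix (an observation, not from the question): square brackets in documentation mark a placeholder, but the shell wants the bare path, and gpg must then be run from the directory that actually holds the key file:

        cd /home/noob/Downloads
        gpg --keyid-format long --import tails-signing.key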

  • How can I make an encrypted email message into a .p7m file?

    - by Blacklight Shining
    This is a bit complicated, so I'll explain what I'm really trying to do here: I have a Debian server, and I want to automatically email myself certain logs every week. I'm going to use cron and a bash script to copy the logs into a tarball shortly after midnight every Monday. A bash script on my home computer will then download the tarball from the server, along with a file to be used as the body of the email, and call an AppleScript to make a new email message. This is where I'm stuck—I can't find a way to encrypt and sign the email using AppleScript and Apple's mail client. I've noticed that if I put a delay in before sending the message, Mail will automatically set it to be encrypted and signed (as it normally does when I compose a message myself). However, there's no way to be sure of this when the script runs—if something goes wrong there, the script will just blindly send the email unencrypted. My solution there would be to somehow manually create a .p7m file with the tarball and message and attach it to the email the AppleScript creates. Then, when I receive it, Mail will treat it just like any other encrypted message with an attachment (right?) If there's a better way to do this, please let me know. ^^ (Ideally, everything would be done from the server, but there doesn't seem to be a way to send mail automatically without storing a password in plaintext.) (The server is running Debian squeeze; my home computer is a Mac running OS X Lion.)
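
    For the manual .p7m route, OpenSSL's smime subcommand can do the signing and encrypting outside of Mail entirely (a sketch; certificate and file names are placeholders, and the message plus tarball would need to be assembled into one MIME body first if both go in the same envelope):

        # Sign with my cert/key, then encrypt to my own cert (I'm mailing myself),
        # yielding S/MIME output a mail client should treat as a normal
        # encrypted-and-signed message.
        openssl smime -sign -in message.txt -signer me.crt -inkey me.key \
          | openssl smime -encrypt -aes256 -out bundle.p7m me.crt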

  • Fixing a typo in machine name

    - by justSteve
    When I installed Windows, I had a typo in the machine name, which I corrected from the system's 'Computer Name/Domain Changes' dialog; the workstation is a member of a workgroup, not a domain. From everything I can see, the renamed machine name is correct. Shift gears: I'm importing SQL logins from my remote server to this, my development workstation, and have used the script presented here, a script that generates a CREATE statement for each login found. While preparing to run that script's output (from the remote box), I needed to change the domain name from the remote one to my local machine name, so I ran the same script locally to see what SQL thinks my domain name is. SQL has the original machine name, the one with the typo, and the scripts toss errors if I try to create logins with that identifier:

        CREATE LOGIN [Setve\Admin] FROM WINDOWS WITH DEFAULT_DATABASE = [master]

    But it works correctly if I use the updated machine name:

        CREATE LOGIN [Steve\Admin] FROM WINDOWS WITH DEFAULT_DATABASE = [master]

    So the problem is: do I have a problem I need to solve? Somewhere, deep in the guts of SQL Server, it has a record of a domain name that does not exist. Should I find and fix that discrepancy? thx
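
    A way to see exactly what SQL Server still remembers (a sketch; the names come from the question, and dropping logins is destructive, so inspect before changing anything):

        -- Logins created before the rename keep the old machine name in their names.
        SELECT name, sid, create_date
        FROM sys.server_principals
        WHERE name LIKE 'Setve\%';

        -- The classic post-rename check: @@SERVERNAME lags the OS until told otherwise.
        SELECT @@SERVERNAME AS sql_name, SERVERPROPERTY('MachineName') AS os_name;
        -- If they differ, the documented fix (takes effect after a service restart) is:
        EXEC sp_dropserver 'Setve';
        EXEC sp_addserver 'Steve', 'local';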

  • Mechanism behind user forwarding in ScriptAliasMatch

    - by jolivier
    I am following this tutorial to set up gitolite, and at some point the following ScriptAliasMatch is used:

        ScriptAliasMatch \
            "(?x)^/(.*/(HEAD | \
                        info/refs | \
                        objects/(info/[^/]+ | \
                                 [0-9a-f]{2}/[0-9a-f]{38} | \
                                 pack/pack-[0-9a-f]{40}\.(pack|idx)) | \
                        git-(upload|receive)-pack))$" \
            /var/www/bin/gitolite-suexec-wrapper.sh/$1

    And the target script starts with:

        USER=$1

    So I am guessing this is used to forward the user name from Apache to the suexec script (which indeed requires it), but I cannot see how this is done. The ScriptAliasMatch documentation makes me think that the /$1 will be replaced by the first matching group of the regexp before it. For me, that group captures everything from (?x)^/(.* to ))$, so there is nothing about a user here. My underlying problem is that USER is empty in my script, so I get no authorizations in gitolite. I give my username to Apache via basic authentication:

        <Location />
            # Crowd auth
            AuthType Basic
            AuthName "Git repositories"
            ...
            Require valid-user
        </Location>

    defined just under the previous ScriptAliasMatch. So I am really wondering how this is supposed to work and what part of the mechanism I missed, such that I don't retrieve the user in my script.
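
    A hedged sketch of the mechanism as the CGI specification defines it (an inference; verify against your actual wrapper): the /$1 suffix does substitute the first capture group, but it lands in PATH_INFO rather than in the positional arguments, and the authenticated name from basic auth reaches CGI programs through the REMOTE_USER environment variable. A wrapper can read both directly:

        #!/bin/bash
        # Illustrative only -- not gitolite's actual wrapper.
        USER="${REMOTE_USER}"        # set by Apache once basic auth succeeds
        REPO_PATH="${PATH_INFO}"     # the /$1 portion of the matched URL
        echo "user=$USER path=$REPO_PATH" >&2    # debug line -> Apache error log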
