Search Results

Search found 29574 results on 1183 pages for 'directory services'.

  • What is the IIS equivalent of Apache's global Alias, instead of adding a Virtual Directory to every single site one by one?

    - by Sk8erPeter
    In Apache, there's a way I can make phpMyAdmin available globally to all the VirtualHosts I set up. It looks like this:

        <IfModule mod_alias.c>
            Alias /phpmyadmin "c:/AppServ/www/phpMyAdmin"
        </IfModule>

    This way I reach phpMyAdmin by appending /phpmyadmin to any of my domain names, and I see phpMyAdmin's initial page. (So, for example, it works for all my domains: http://example_1.com/phpmyadmin, http://example_2.com/phpmyadmin, and http://example_3.com/phpmyadmin all work.) In IIS, there's an "Add Virtual Directory..." option when right-clicking a given site. There I can set up, e.g., phpMyAdmin's path to be reached by appending /phpmyadmin to that one domain (e.g. http://example_1.com/phpmyadmin), but isn't there a "global" setting similar to Apache's Alias? Or do I have to add a virtual directory to every site one by one? I'm just curious; it's not hard work to do, but I'm interested in whether another method exists. Thanks in advance!
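
    One scriptable approach, as a rough sketch (assuming the stock appcmd.exe tool of IIS 7+; the loop and paths are illustrative, not a confirmed global-alias equivalent):

        @echo off
        rem Sketch: add a /phpmyadmin virtual directory to every IIS site.
        rem Run from an elevated command prompt; paths are assumptions.
        for /f "usebackq delims=" %%S in (`%windir%\system32\inetsrv\appcmd list site /text:name`) do (
            %windir%\system32\inetsrv\appcmd add vdir /app.name:"%%S/" ^
                /path:/phpmyadmin /physicalPath:"C:\AppServ\www\phpMyAdmin"
        )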

  • Apache Redirect is redirecting all HTTP instead of just one subdomain

    - by David Kaczynski
    All HTTP requests, such as http://example.com, are getting redirected to https://redmine.example.com, but I only want http://redmine.example.com to be redirected. I have the following in my 000-default configuration:

        <VirtualHost *:80>
            ServerName redmine.example.com
            DocumentRoot /usr/share/redmine/public
            Redirect permanent / https://redmine.example.com
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            . . .
        </VirtualHost>

    Here is my default-ssl configuration:

        <VirtualHost *:443>
            ServerName redmine.example.com
            DocumentRoot /usr/share/redmine/public
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
            SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
            BrowserMatch "MSIE [2-6]" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown
            <Directory /usr/share/redmine/public>
                Options FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            LogLevel info
            ErrorLog /var/log/apache2/redmine-error.log
            CustomLog /var/log/apache2/redmine-access.log combined
        </VirtualHost>

        <VirtualHost *:443>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            . . .
        </VirtualHost>

    Is there anything here that is causing all HTTP requests to be redirected to https://redmine.example.com?
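
    A likely reading (not confirmed in the post): with name-based virtual hosting, the first *:80 vhost acts as the default for any request that matches no ServerName, and here that first vhost is the redirecting Redmine one. A minimal sketch of the usual fix, assuming Apache 2.2-style configuration as above:

        # Enable name-based matching on port 80 (Apache 2.2 syntax):
        NameVirtualHost *:80

        # List the general-purpose site first so it becomes the default vhost:
        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /var/www
        </VirtualHost>

        # The redirect vhost now only answers for redmine.example.com:
        <VirtualHost *:80>
            ServerName redmine.example.com
            Redirect permanent / https://redmine.example.com/
        </VirtualHost>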

  • Linux - File was deleted and then reappeared when folder was zipped

    - by davee9
    Hello, I am using BackTrack 4 Final, a Linux distro that is Ubuntu based. I had a directory that contained around 5 files. I deleted one of the files, which sent it to the trash. I then zipped the directory up (now containing 4 files) using this command:

        zip -r directory.zip directory/

    When I then unzipped directory.zip, the file I deleted was in there again. I couldn't believe this, so I zipped up the directory again, and the file reappeared again, but this time it could not be opened because the operating system said it didn't exist or something. I don't remember the exact error, and I cannot make this happen again. Would anyone happen to know why a file that was deleted from a directory would reappear in that directory after it was zipped up? Thank you.

  • PDFtk Password Protection Help

    - by Dave W.
    I am using Ubuntu 11.10 and am looking for a solution to password protect a bunch of PDF files in a directory in batch. I came across PDFtk, and it looks like it might do what I need, but I've reviewed the command-line PDFtk examples and can't figure out whether there is a way to do it in batch without having to individually specify the output file name for every file. I'm hoping a command-line guru can take a look at the PDFtk syntax and tell me if there is some trick / command that will allow me to password protect a directory of PDF files (e.g., *.pdf) and overwrite the existing files using the same name, or consistently rename the individual output files without having to specify each output name individually. Here's a link to the PDFtk command-line examples page: http://www.pdflabs.com/tools/pdftk-the-pdf-toolkit/ Thanks for your help.

    I think I've answered my own question. Here's a bash script that appears to do the trick. I'd welcome help evaluating why the code I've commented out doesn't work:

        #!/bin/bash
        # Created by Dave, 2012-02-23
        # This script uses PDFtk to password protect every PDF file
        # in the directory specified. The script creates a directory named
        # "protected_[DATE]" to hold the password protected version of the files.
        #
        # I'm using the "user_pw" parameter, which means no one will be able
        # to open or view the file without the password.
        #
        # PDFtk must be installed for this script to work.
        #
        # Usage: ./protect_with_pdftk.bsh [FILE(S)]
        # [FILE(S)] can use wildcard expansion (e.g., *.pdf)

        # This part isn't working.... ignore. The goal is to avoid errors if the
        # directory to be created already exists by only attempting to create
        # it if it doesn't exist.
        #
        #TARGET_DIR="protected_$(date +%F)"
        #if [ -d "$TARGET_DIR" ]
        #then
        #    echo "$TARGET_DIR directory exists!"
        #else
        #    echo "$TARGET_DIR directory does not exist!"
        #fi

        mkdir protected_$(date +%F)

        for i in *pdf ; do
            pdftk "$i" output "./protected_$(date +%F)/$i" user_pw [PASSWORD]
        done

        echo "Complete. Output is in the directory: ./protected_$(date +%F)"
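
    As a side note, a minimal sketch of how the commented-out existence check could be made to work (a suggestion, not part of the original post): mkdir -p succeeds silently when the directory already exists, which removes the need for the if/else entirely.

        #!/bin/bash
        # Sketch only: protect every PDF in the current directory, writing the
        # copies to protected_YYYY-MM-DD/ under the same names.
        # PASSWORD is a placeholder to replace.
        TARGET_DIR="protected_$(date +%F)"
        mkdir -p "$TARGET_DIR"   # no error if the directory already exists

        for i in *.pdf ; do
            pdftk "$i" output "$TARGET_DIR/$i" user_pw "PASSWORD"
        done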

  • “Apparently, you signed a software services agreement without fully understanding it.”

    - by Dave Ballantyne
    I am not a lawyer. Let me say that again: I am not a lawyer. Today's Dilbert has prompted me to post about my recent experience with SQL Server licensing. I'm in the technical realm and rarely have much to do with purchasing and licensing. I say "I need"; budget realities will dictate whether I actually get. However, I do keep my ear to the ground, and due to my community involvement I know, or at least have an understanding of, some licensing restrictions.

    Due to a misunderstanding, Microsoft Licensing stated that we needed licences for our standby servers. I knew that was not the case, and a quick tweet confirmed it. So after composing an email stating exactly what the machines in question were used for, i.e. log shipped to and used in a disaster recovery scenario only, and posting several Technet articles to back this up, we saved two Enterprise Edition licences, a not inconsiderable cost. However, during this discussion I was made aware of another 'legalese' document that could completely override the referenced articles, and anything I knew, or thought I knew, about SQL Server licensing. Personally, I had no knowledge of this. The "Product Use Rights" agreement would appear to be the volume-licensing equivalent of the "End User License Agreement" click-throughs we all know and ignore. Here is a direct quote from Microsoft Licensing, when asked for clarification:

        "Thanks for your email. Just to give some background on the Product Use Rights (PUR), licenses acquired through volume licensing are bound by the most recent PUR at the time of license acquisition. The link for the current PUR and PUR archive is http://www.microsoft.com/licensing/about-licensing/product-licensing.aspx. Further to this, products acquired through boxed product or pre-installed on hardware (OEM) are bound by the End User License Agreement (EULA). The PUR will explain limitations, license requirements and rulings on areas like multiplexing, virtualization, processor licensing, etc. When an article will appear on a Microsoft site or blog describing the licensing of a product, it will be using the PUR as a base. Due to the writing style or language used by the person writing areas of the website or technical blogs, the PUR is what you should use as a rule and not any of the other media. The PUR is updated quarterly and will reference every product available at that time working on the latest version unless otherwise stated. The crux of this is that the PUR is written after extensive discussions between the different branches of Microsoft (legal, technical, etc) and the wording is then approved. This is not always the case for some pages explaining licensing as they are merely intended to advise and not subject to the intense scrutiny as the PUR."

    So, exactly what does that mean? My take: this is a living document, "updated quarterly", though presumably that could be done on a whim and a fancy. It could state that you are only licensed if, during install, you stand in a corner juggling, and that photographic evidence is required. A plainly ridiculous demand, but what else could it override, and what new requirements could it state, that change your existing understanding of the product or your legal usage of it? As I say, I'm not a lawyer, but are you checking the PUR prior to purchase?

  • Why doesn't SSHFS let me look into a mounted directory?

    - by Jan
    I use SSHFS to mount a directory on a remote server. There is a user xxx on both the client and the server. UID and GID are identical on both boxes. I use

        sshfs -o kernel_cache -o auto_cache -o reconnect -o compression=no \
            -o cache_timeout=600 -o ServerAliveInterval=15 \
            [email protected]:/mnt/content /home/xxx/path_to/content

    to mount the directory on the remote server. When I log in as xxx on the client I have no problems: I can cd into /home/xxx/path_to/content. But when I log in on the client as another user zzz and run

        $ ls -l /home/xxx/path_to

    I get this:

        d????????? ? ? ? ? ? content

    and on

        $ ls -l /home/xxx/path_to/content

    I get:

        ls: cannot access content: Permission denied

    When I do

        $ ls -l /mnt

    on the remote server, I get:

        drwxr-xr-x 6 xxx xxx 4096 2011-07-25 12:51 content

    What am I doing wrong? The permissions seem correct to me. Am I wrong?
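
    The usual explanation (an inference; the post does not confirm it) is that FUSE mounts are visible only to the mounting user unless allow_other is set. A minimal sketch:

        # One-time: allow non-root users to pass allow_other to FUSE.
        echo 'user_allow_other' | sudo tee -a /etc/fuse.conf

        # Remount with the option so other local users, like zzz, can
        # traverse the mount point:
        sshfs -o allow_other -o reconnect \
            [email protected]:/mnt/content /home/xxx/path_to/content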

  • Does the app need to run from the /opt directory in the Ubuntu App Showdown?

    - by costales
    I read this question, but now I'm even more confused. On developer.ubuntu.com/showdown I read: "(2) run out of /opt" (I understand "run out" to mean not in /opt; am I wrong?). In https://wiki.ubuntu.com/AppReviewBoard/Review/Guidelines#Packaging I read: "our package should install most files in /opt/extras.ubuntu.com/". It's not clear to me whether the app must be inside /opt or outside /opt. So, can I use /usr? Will the app be rejected if it's in /usr? Thanks! :)

  • How to make a startup application open a folder (inode/directory) after booting?

    - by santosamaru
    I think it would be awesome if, after login, a folder that is not located on the same partition as / could open itself, like an application such as Skype does at startup. Can we make that happen? It would also help people who keep music and other files outside the /home folder. Like me: I have to click through to other partitions to listen to songs, watch movies, and so on. What I want is just a single click, or none at all: when I log in, the partition/folder is already open, so I can simply press Play in Rhythmbox, and click next to watch the next episode of a series. In the attached screenshot, the partition/hard disk I need opened at startup is the "almacén" hard disk.

    Out of context: why doesn't Fn + F6 lock the touchpad on my laptop? I'm using GNOME Classic on Ubuntu 12.04.
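
    A minimal sketch of one way to do this (the mount path and file manager are assumptions): desktop sessions launch anything placed in ~/.config/autostart, so a .desktop entry that runs the file manager on the mount point opens the folder right after login.

        # Save as ~/.config/autostart/open-almacen.desktop (path assumed):
        [Desktop Entry]
        Type=Application
        Name=Open almacen partition
        Exec=nautilus /media/almacen
        X-GNOME-Autostart-enabled=true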

  • How can I redirect all files in a directory that don't conform to a certain filename structure?

    - by user18842
    I have a website where a previous developer had updated several webpages. The issue is that the developer made each new webpage with a new filename and deleted the old filenames. I've worked with .htaccess redirects for a few months now and have some understanding of the usage; however, I am stumped with this task. The old pages were named like so:

        www.domain.tld/subdir/file.html

    The new pages are named:

        www.domain.tld/subdir/file-new-name.html

    The first word of each new filename is the exact name of the old file, and all new filenames have the same last two words:

        www.domain.tld/subdir/file1-new-name.html
        www.domain.tld/subdir/file2-new-name.html
        www.domain.tld/subdir/file3-new-name.html
        etc.

    We also need to be able to access the URL:

        www.domain.tld/subdir/

    The new files have been indexed by Google (the old URLs cause 404s and need to be redirected to the new ones so that Google will be friendly), and the client wants to keep the new filenames as they are more descriptive. I've attempted to redirect it in many different ways without success, but I'll show the one that stumps me the most:

        RewriteBase /
        RewriteCond %{THE_REQUEST} !^subdir/.*\-new\-name\.html
        RewriteCond %{THE_REQUEST} !^subdir/$
        RewriteRule ^subdir/(.*)\.html$ http://www.domain.tld/subdir/$1\-new\-name\.html [R=301,NC]

    When visiting www.domain.tld/subdir/file1.html in the browser, this causes a 403 Forbidden error with a URL like so:

        www.domain.tld/subdir/file1-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name.html

    I'm certain it's probably something simple that I'm overlooking; can someone please help me get a proper redirect? Thanks so much in advance!

    EDIT: I've also got all the old filenames saved in a separate document in case I need them, set up like the following example:

        (file(1|2|3|4|5)|page(1|2|3|4|5)|a(l(l|lowed|ter)|ccept)
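
    A sketch of one way out of the loop (a reading of the symptoms, not a verified fix): %{THE_REQUEST} holds the whole request line, e.g. "GET /subdir/file1.html HTTP/1.1", so conditions anchored at ^subdir/ can never match and the guards never fire. Testing %{REQUEST_URI} sidesteps this:

        RewriteEngine On
        RewriteBase /
        # Skip URLs that already carry the new suffix:
        RewriteCond %{REQUEST_URI} !-new-name\.html$
        RewriteRule ^subdir/(.+)\.html$ /subdir/$1-new-name.html [R=301,NC,L]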

  • Counting product releases if you work on the backend/online services?

    - by stackoverflowuser2010
    I am trying to update my resume, and I would like to count the number of "product releases" I was directly involved in at a company; it would seem to serve as a performance metric. The problem is that I was working on the backend of a very large distributed system, along the lines of Hadoop or another huge database. We had regular six-month major releases and other minor releases. My manager kept saying that we "shipped" these releases, but "shipping" a product to me sounds like releasing a single piece of software, the way Microsoft would ship Office 11 or something. Any ideas on how to count "product releases" as a backend service engineer, or on any other type of performance metric?

  • How to Integrate Backbone.js with RESTful Web Services in 5 Minutes!

    - by Geertjan
    In NetBeans IDE 7.3, a Backbone.js file can be generated from a Java RESTful web service. The Backbone.js file contains complete CRUD functionality, and your HTML5 application can immediately be deployed to make use of those features. Coupled with the NetBeans IDE two-way editing support for HTML5, via interaction with WebKit in Chrome, Backbone.js users have a completely new and powerful tool for coding their HTML5 applications. The above is illustrated in a brand-new YouTube video. This makes NetBeans IDE 7.3 well suited as a learning tool for new Backbone.js users, as well as a productivity tool for those who are already comfortable with Backbone.js.

  • How to work around the home directory changing to /root when using sudo?

    - by Nathanel Titane
    Hello everybody! With Natty coming out soon, I've been at work updating my deployment and self-configuration script to make my desktop on 11.04 run and look the way I want it to. One bummer is that dbus seems to have changed and does not permit, in the same manner Lucid and Maverick did, the authentication of the current user by terminal call using grep and cat. Ideally, to run the script, I would sudo -s and then launch it as

        # chmod +x install && ./install

    Instead of returning my user name, it now returns root, applies changes to the root profile, and aborts whenever paths do not correspond. Here is my script header:

        #!/bin/bash
        ON_USER=$(echo ~ | awk -F'/' '{ print $1 $2 $3 }' | sed 's/home//g')
        export $(grep -v "^#" ~/.dbus/session-bus/`cat /var/lib/dbus/machine-id`-0)
        if sudo -u $ON_USER test -z "$DBUS_SESSION_BUS_ADDRESS" ; then
            eval `sudo -u $ON_USER dbus-launch --sh-syntax --exit-with-session`
        fi
        RELEASE=$(lsb_release -cs)

    How could I make it return the actual user now that Natty is coming? Thanks for the help.
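
    One possible workaround (a suggestion, not from the post): when a script runs under sudo, the invoking user's name is preserved in $SUDO_USER, which avoids parsing ~ altogether. A minimal sketch:

        #!/bin/bash
        # Fall back to $USER when the script is not run via sudo:
        ON_USER="${SUDO_USER:-$USER}"
        # Resolve that user's real home directory from the passwd database:
        USER_HOME=$(getent passwd "$ON_USER" | cut -d: -f6)
        echo "Configuring for $ON_USER (home: $USER_HOME)"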

  • How can I password protect the site & still let cgi-bin work?

    - by jaaaaaaax
    This is taken from the sites-available directory. It's a virtual host setting for Apache. Accessing myiphere/cgi-bin/ throws a 403. The directory setting for /var/www2/ is

        drwxrwxrwx 8 www-data www-data

    and the configuration is:

        NameVirtualHost myiphere
        <VirtualHost myiphere>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www2/
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www2/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog /var/log/apache2/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog /var/log/apache2/access.log combined
            ServerSignature On

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
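
    A sketch of how password protection could be layered onto the CGI directory while keeping execution enabled (the htpasswd path and user are assumptions; the 403 itself may simply be missing permissions on /usr/lib/cgi-bin or a disabled cgi module):

        <Directory "/usr/lib/cgi-bin">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            AuthType Basic
            AuthName "Restricted scripts"
            # Create the file with: htpasswd -c /etc/apache2/.htpasswd someuser
            AuthUserFile /etc/apache2/.htpasswd
            Require valid-user
        </Directory>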

  • How to Access User Directory shared by Apache on OS X Mountain Lion?

    - by schluchc
    When trying to access the local user web page at localhost/~username, I get a "403 Forbidden". The system web page in /Library/WebServer/Documents is accessible at localhost/, though, so I assume Apache is working fine. I know that this problem has been discussed several times, also on Super User. I implemented and checked all I could find, but I still couldn't solve the problem and would be glad if someone had a suggestion for this particular case:

    sudo apachectl -t returns Syntax OK. I have a username.conf file in /etc/apache2/users/:

        <Directory "/Users/username/Sites/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride AuthConfig Limit
            Order allow,deny
            Allow from all
        </Directory>

    as proposed here [SuperUser] and in several other tutorials. The permissions of the username.conf file are -rw-r--r-- root wheel, as they should be. The httpd.conf is unchanged and therefore contains the line Include /private/etc/apache2/extra/httpd-userdir.conf. That file in turn contains:

        UserDir Sites
        Include /private/etc/apache2/users/*.conf
        <IfModule bonjour_module>
            RegisterUserSite customized-users
        </IfModule>

    So the httpd*.conf files should be OK. The permissions of /Users/username/Sites are drwxr-xr-x 10 username staff, and -rw-r--r--@ 1 username staff for the index.html. In the error log I simply get:

        [Sun Nov 25 22:14:32 2012] [error] [client 127.0.0.1] (13)Permission denied: access to /~username/ denied

    And yes, after each change I did sudo apachectl restart. Any help on how to solve the problem or how to further analyze it would be highly appreciated!
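
    One frequent culprit in this exact setup (an assumption; the post does not show the home directory's own permissions) is that the Apache user lacks search permission on a directory along the path, typically /Users/username itself. A quick check and a sketch of a fix:

        # The web server needs execute (search) permission on every
        # directory leading to ~/Sites:
        ls -ld /Users /Users/username /Users/username/Sites
        chmod o+x /Users/username
        sudo apachectl restart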

  • How to enable home directory encryption for a particular user?

    - by Ivan
    I prefer to have a dedicated "administrator" user for technical purposes, and that was the one I set up during installation. I also chose not to encrypt that user's home folder. Now, having added a user account for my actual day-to-day work, I want my home folder (but not the "administrator" one) to be encrypted. How do I turn this on? If that is not possible, how do I enable encryption for all users' home directories on an already-installed system? I've found questions and answers about how to disable it, but I'm not sure how to enable it.
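
    A minimal sketch of the usual route on Ubuntu (user names are assumptions; the target user must be fully logged out during migration):

        sudo apt-get install ecryptfs-utils
        # Encrypt the existing home of user "work", run from another account:
        sudo ecryptfs-migrate-home -u work
        # Or create a new user with an encrypted home from the start:
        sudo adduser --encrypt-home newuser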

  • A little code every week to go further with Windows 7: the Windows services session

    As the year draws to a close, the Developpez.com community has teamed up with Microsoft France to relay a series of questions and answers on Windows 7 development. Starting today, we will pose a question every Monday about a feature specific to Windows 7 application development. The correct answer to each week's question will then be revealed the following week, along with a practical example. Are you ready to take up the challenge? Do you think you know the possibilities the Windows 7 APIs offer? That is what we are going to find out starting today; we await your answers! The answer of the week: In which session do the ...

  • Delete multiple files on Linux with spaces in file names

    - by raido
    I have a directory on my Linux box with over 10000 files that I have to delete. Running

        sudo rm -rf /var/tmp/*

    gives the error message:

        sudo: unable to execute /bin/rm: Argument list too long

    The solution to this is to run

        sudo find /var/tmp | xargs sudo rm

    This only works for files with no spaces in the file name. However, some of the files have names with spaces in them, and they are not deleted. For example, if a file is named 'A File With Spaces in the Name.dat', running the command gives me errors like this:

        rm: cannot remove `/var/tmp/A': No such file or directory
        rm: cannot remove `File': No such file or directory
        rm: cannot remove `With': No such file or directory
        rm: cannot remove `Spaces': No such file or directory
        rm: cannot remove `in': No such file or directory
        rm: cannot remove `the': No such file or directory
        rm: cannot remove `Name.dat': No such file or directory

    How do I pass the complete file path to xargs sudo rm without breaking up the file name?
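
    The standard fix (well-established find/xargs behavior) is to NUL-terminate the file names so whitespace never splits them:

        # NUL-separated names survive the pipe intact:
        sudo find /var/tmp -mindepth 1 -print0 | xargs -0 sudo rm -rf

        # Or let find do the deleting itself:
        sudo find /var/tmp -mindepth 1 -delete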

  • Architectures réparties en Java (Distributed Architectures in Java): RMI, CORBA, JMS, sockets, SOAP, web services, by Annick Fron, reviewed by Benwit

    In Java, there are several different ways to develop distributed applications. The book "Architectures Réparties en Java", which I review here, gives an overview of them. On this occasion, I would like to ask you: which ones do you use in your company? Quote: This book is aimed at software engineers, developers, architects and project managers. It is also aimed at students in ...

  • A little code every week to go further with Windows 7: native web services

    As the year draws to a close, the Developpez.com community has teamed up with Microsoft France to relay a series of questions and answers on Windows 7 development. Starting today, we will pose a question every Monday about a feature specific to Windows 7 application development. The correct answer to each week's question will then be revealed the following week, along with a practical example. Are you ready to take up the challenge? Do you think you know the possibilities the Windows 7 APIs offer? That is what we are going to find out starting today; we await your answers! The answer of the week: Is it possible to develop ...

  • How Helpful is Onsite Optimization Exactly?

    Search Engine Optimization, or SEO, is one of the most widely used online services today. One of the main reasons SEO services have become so sought after is that they are a much cheaper alternative to other optimization services such as PPC. With the number of websites introduced to the internet growing daily, it is safe to say that there are no more safe havens or exclusive niches where a few select companies can operate without worries.

  • Is there a standard for machine-readable descriptions of RESTful services?

    - by ecmendenhall
    I've interacted with a few RESTful APIs that provided excellent documentation for humans and descriptive URIs, but none of them seem to return machine-readable descriptions of themselves. It's not too tough to write methods of my own that assemble the right paths, and many language-specific API libraries are already just wrappers around RESTful requests. But the next level of abstraction seems really useful: a library that could read in an API's own machine-readable documentation and generate the wrappers automatically, perhaps with a call to some standard URI like base_url + '/documentation'. Are there any standards for machine-readable API documentation? Am I doing REST wrong? I am a relatively new programmer, but this seems like a good idea.
