Search Results

Search found 29574 results on 1183 pages for 'directory services'.

  • MySQL (local) owner and permissions

    - by Steve Nelson
    I asked this question on the MySQL forums and got no answer. I asked on StackOverflow and received a recommendation to try ServerFault, so here I am. I recently successfully installed the 64-bit version of mysql-5.5.8 on a MacBook Pro in the /usr/local directory. To accommodate a completely unrelated piece of software (RVM, actually), I chowned my /usr/local directory to $USER, which made MySQL very unhappy. It complained specifically about the /usr/local/mysql/data directory, so I chowned that directory to _mysql:wheel. Everything appears to work again, but it made me wonder whether I would have been better off changing the owner of the whole /usr/local/mysql directory, not just the data subdirectory. Since I neglected to note which owner the default installation runs under before rashly changing the owner of /usr/local, could someone tell me what owner and permissions /usr/local/mysql has by default, if you don't inadvertently screw it up? :-/ In terms of permissions I'm guessing rwxr-xr-x would be appropriate (that's what the data directory currently has, and it appears to be working fine), but reinforcement for that hunch would be appreciated. Thanks for any help. Steve
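    For reference, a sketch of restoring what is commonly cited as the default layout for the OS X tarball install; the exact owner can differ between packages, so treat these names and modes as assumptions to verify rather than gospel:

        # Assumed defaults: the tree owned by root:wheel, mode 755, except the
        # data directory, which the _mysql user must own so the server can write
        sudo chown -R root:wheel /usr/local/mysql
        sudo chown -R _mysql:wheel /usr/local/mysql/data
        sudo chmod 755 /usr/local/mysql /usr/local/mysql/data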

  • Does a successful exit of rsync -acvvv s d guarantee identical directory trees?

    - by user259774
    I have two volumes, one xfs and one ntfs; the ntfs one was empty, and the xfs one had 10 subitems. I needed to sync them. I initially copied a few of the subitems by dragging them over in a GUI file manager. Several of the direct descendants I had dragged apparently finished; one I stopped before it was done, and the rest I cancelled while the copy still appeared to be gathering information about the files. Then I ran rsync -acvvv xmp/ nmp/, where xmp and nmp are the volumes' respective mountpoints, and it exited with a 0 status. find xmp -printf x | wc -c and find nmp -printf x | wc -c both return 372926. My question is: am I guaranteed that the two drives' contents are identical?
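    Matching file counts alone don't prove matching contents; a checksum-driven dry run is a stronger check. A sketch using the mountpoints from the question (note that NTFS cannot store POSIX owners and modes, so some metadata differences may be reported even when the file data is identical):

        # -c compares checksums, -n is a dry run, -i itemizes what would still
        # be transferred; empty output means the file contents already match
        rsync -acni xmp/ nmp/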

  • Spamassassin command to tag & move mail with an X-Spam-Score of 10+ to a new directory?

    - by ane
    I have a maildir with tens of thousands of messages in it, about 70% of which are spam. I would like to:
    - run /usr/local/bin/spamassassin against it, tagging each message if its score is 10 or greater
    - have a tcsh shell or perl one-liner grep all mails with a spam score over 10 and move those mails to /tmp/spam
    What commands can I run to accomplish this? Pseudocode:

        /usr/local/bin/spamassassin ./Maildir/cur/* -tagscore10
        grep "X-Spam-Score: [10-100]" ./Maildir/cur/* | mv %1 /tmp/spam
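    A sketch of one possible answer, written for a POSIX shell rather than tcsh, and assuming SpamAssassin's default X-Spam-Level header (one asterisk per point of score) is present, since a numeric X-Spam-Score header only exists if the local configuration adds it:

        # Tag each message in place: spamassassin reads a message on stdin and
        # writes the tagged version to stdout
        for f in ./Maildir/cur/*; do
            /usr/local/bin/spamassassin < "$f" > "$f.tmp" && mv "$f.tmp" "$f"
        done
        # Move everything scoring 10+, i.e. ten or more asterisks in X-Spam-Level
        mkdir -p /tmp/spam
        find ./Maildir/cur -type f -exec grep -l '^X-Spam-Level: \*\{10,\}' {} + |
            while read -r f; do mv "$f" /tmp/spam/; done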

  • Is it ok to share private key file between multiple computers/services?

    - by Behrang
    So we all know how to use public/private key pairs with SSH, etc. But what's the best way to use and reuse them? Should I keep them in a safe place forever? I mean, I needed a pair of keys for accessing GitHub. I created a pair from scratch and used it for some time to access GitHub. Then I formatted my HDD and lost the pair. Big deal: I created a new pair and configured GitHub to use it. Or is a key pair something I shouldn't want to lose? I also needed a pair of keys to access our company's systems. Our admin asked me for my public key, so I generated a new pair and gave it to him. Is it generally better to create a new pair for each system I access, or is it better to have one pair and reuse it across systems? Similarly, is it better to create two different pairs and use one to access our company's systems from home and the other to access them from work, or is it better to just have one pair and use it from both places?
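    For what it's worth, per-system keys keep a compromise contained to one service; a sketch of a dedicated GitHub key (the file name is illustrative):

        ssh-keygen -t rsa -f ~/.ssh/id_rsa_github -C "github"
        # Then point ssh at it per host in ~/.ssh/config:
        #   Host github.com
        #       IdentityFile ~/.ssh/id_rsa_github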

  • Change ownership of a directory and all its contents from root to a new user

    - by Andrew Fashion
    I created a website under /var/www/html/ entirely as root: all images, files, .htaccess, directories, etc. were uploaded and configured as root. I want to give the site its own username/password so it isn't owned by root. I haven't created the user account yet either, and I also want to set up FTP for that account. There are about 30GB of images in the folder as well. How can I go about changing all of this? I am running CentOS 5.5 64-bit. Thank you!
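    A hedged sketch for CentOS 5, assuming a new account called sitename (choose your own) and vsftpd from the base repository for the FTP part:

        useradd -d /var/www/html sitename          # account whose home is the site
        passwd sitename                            # set its password
        chown -R sitename:sitename /var/www/html   # reassign the whole tree
        yum install vsftpd                         # simple FTP daemon
        # in /etc/vsftpd/vsftpd.conf set: local_enable=YES, chroot_local_user=YES
        service vsftpd start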

  • Binding services to localhost and using SSH tunnels - can requests be forged?

    - by Martin
    Given a typical webserver with Apache2, common PHP scripts, and a DNS server, would it be sufficient from a security perspective to bind administration interfaces like phpMyAdmin to localhost and access them via SSH tunnels? Or could somebody who knew, e.g., that phpMyAdmin (or any other commonly available script) is listening on a certain port on localhost easily forge requests that would be executed if no other authentication were present? In other words: could somebody from somewhere on the internet easily forge a request so that the webserver would accept it, thinking it originated from 127.0.0.1, if the server is listening on 127.0.0.1 only? If there were a risk, could it somehow be dealt with at a lower level than the application, e.g. by using iptables? The idea being that if someone found a weakness in a PHP script or Apache, the network would still block the request because it did not arrive via an SSH tunnel.
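    For context, the tunnel side looks like the first line below; the second is a belt-and-braces iptables rule for the spoofing worry (the Linux kernel normally discards packets claiming a 127.0.0.0/8 source on a real interface as "martians" anyway, so this is defence in depth, not a required fix):

        # forward local port 8080 to phpMyAdmin listening on the server's loopback
        ssh -L 8080:127.0.0.1:80 admin@webserver
        # drop any loopback-sourced packet that did not arrive on lo
        iptables -A INPUT ! -i lo -s 127.0.0.0/8 -j DROP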

  • Mail directories of Exim and Courier, and FTP accounts?

    - by shadow_of__soul
    I'm doing a server migration from a cPanel CentOS 5 box to a plain CentOS box. I have already migrated files, SSL, and user/system permissions, and installed the basic services: proftpd, Apache, Dovecot, Exim, DNS, etc. What I can't find is where the email accounts and the FTP accounts (the ones like [email protected]) live. For email, I want to migrate both the accounts AND the messages currently stored on the server. Regards.
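    Some hedged pointers, since cPanel's layout varies by version; on boxes I've seen, mail and FTP account data live under paths like these (verify before copying anything):

        # /home/<user>/mail/<domain>/<account>/   per-address maildirs
        # /home/<user>/etc/<domain>/passwd        mail account definitions
        # /etc/proftpd/<user>                     per-account FTP users
        rsync -av /home/someuser/mail/ newbox:/home/someuser/mail/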

  • How do I allow a (local) user to start/stop services with a scheduled task?

    - by Mulmoth
    Hi, on a Windows 2008 R2 server I have two small .cmd scripts to start/stop a certain service. They look like this:

        net start MyService

    and

        net stop MyService

    I want to execute these scripts via a scheduled task, and I thought it would be best to create a local user for this job. The user is not a member of the Administrators group. But the scripts fail with exit code 2. When I log on as this local user and try to execute the scripts on the command line, I see a message like (maybe not exactly translated from German to English): "Error code 5: Access denied". It doesn't matter whether I start the command line as Administrator or not. How can this local user gain the rights to do the job?
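    Two hedged workarounds; the simpler one sidesteps the permission problem by letting the task run under the built-in SYSTEM account instead of the limited user (task name, script path, and schedule below are placeholders):

        rem create the task to run as SYSTEM (no stored password needed)
        schtasks /Create /TN "StartMyService" /TR "C:\scripts\start.cmd" /SC DAILY /ST 02:00 /RU SYSTEM
        rem alternative: grant the user start/stop rights in the service's
        rem security descriptor (dump it first, then edit with sc sdset)
        sc sdshow MyService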

  • Apache file appearing in directory list, but giving 404 when attempting access?

    - by aayush
    Please forgive my lack of knowledge; this is more of a learning project than anything else. I have a Linux box, and it works pretty much fine. When I go to example.com/css it says there's one file in there, bootstrap.min.css. When I go to example.com/css/bootstrap.min.css, it gives me a 404 error. I have only one htaccess file, to remove the index.php from the URL; I renamed it to htaccess (instead of .htaccess, so Apache won't find it) and restarted the server, yet no help. I also tried to chmod the css file 755, but no help. Contents of the htaccess file:

        RewriteEngine On
        RewriteBase /
        # Allow any files or directories that exist to be displayed directly
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?/$1 [L,QSA]

    Please help; I am very confused about this. I tried to google excessively but came up with nothing. Edit: the solution turned out to be renaming the htaccess file to something entirely different and restarting. Is there any way I can still implement the losing of the .php?
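    The rename works, but to keep the .php-stripping rules a hedged alternative is to restore the file as .htaccess and make sure Apache will honour it (Debian-style paths and tools assumed):

        # mod_rewrite directives in .htaccess need AllowOverride FileInfo; in the
        # <Directory> block for the document root set:  AllowOverride FileInfo
        a2enmod rewrite                      # ensure mod_rewrite is loaded at all
        apache2ctl configtest && service apache2 restart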

  • Remote Desktop Services Licensing - Does server have to have a RDS role?

    - by transistor1
    I recently set up a "micro"-size Windows 2008 Datacenter server on Amazon AWS. My small group needs several concurrent RDS users to be able to access the machine. Without installing the "Remote Desktop Server" role, it allows 2 concurrent connections. I read on MS's website that in order to set up multiple users we needed to install the RDS role. I did so, but now the application we are trying to share runs much slower than before. Prior to the role installation it took about 5 seconds to open; now it takes a few minutes, without any other users logged on except me. My assumption is that the RDS role may be too much for this micro instance to handle, and currently changing to another instance size is not an option (it may become possible later if we receive enough funding). This leads me to the following questions:
    1) Is it sensible to assume that the RDS role is what is slowing things down, or are there other things I could look at to speed it up? We are talking about a machine with ~600MB of memory.
    2) If I revert to the pre-RDS state, is there any legitimate way (in terms of purchasing RDS licenses) to get more than 2 concurrent desktops? I did read this, and am not questioning that the answerer is knowledgeable, but someone else may have other experience. I am also making it clear that we want to do this legitimately.
    Thanks in advance for any assistance! EDIT: if it helps in answering the question, the application in question is a Lotus Approach database. Also, I am asking this from a technical perspective, not a legal one: I want to know whether it is possible to install valid licenses without the RDS role.

  • Virtual host doesn't read .htaccess

    - by Charlie
    I just created this virtual host:

        <VirtualHost myvirtualhost:80>
            ServerAdmin webmaster@myvirtualhost
            ServerName myvirtualhost
            DocumentRoot /home/myname/sites/public_html
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/myname/sites/public_html/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    It works, but it can't read the .htaccess file in public_html, which contains:

        DirectoryIndex otherindex.php

    I tried changing all the AllowOverride directives to All, but then I get a 500 error. How can I fix this? Thanks.
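    A guess at the fix, offered as an assumption rather than a diagnosis: .htaccess is ignored because both relevant Directory blocks say AllowOverride None, and a 500 after switching to All usually means some directive in the file (or a missing module) isn't permitted; the error log will name it. A minimal sketch:

        # In the <Directory /home/myname/sites/public_html/> block only, change
        #     AllowOverride None
        # to
        #     AllowOverride Indexes        (enough to permit DirectoryIndex)
        apache2ctl configtest                    # catch syntax errors first
        /etc/init.d/apache2 reload
        tail -n 20 /var/log/apache2/error.log    # the 500's cause is logged here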

  • Safe to use high port numbers? (re: obscuring web services)

    - by sofakng
    I have a small home network and I'm trying to balance the need for security against convenience. The safest way to secure internal web servers is to connect only via VPN, but this seems overkill just to protect a DVR's remote web interface (for example). As a compromise, would it be better to use very high port numbers (e.g. five digits, up to 65535)? I've read that port scanners typically only scan the first 10,000 ports, so using very high port numbers would be a bit more secure. Is this true? Are there better ways to protect web servers (i.e. web GUIs for applications)?
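    One caveat on the "first 10,000 ports" claim: that is a default, not a limit. nmap, for instance, scans its ~1,000 most common ports by default but covers everything on request:

        nmap -p- target.example.com    # -p- scans all TCP ports, 1-65535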

  • What is the correct way to distinguish the service to which a RADIUS request belongs?

    - by John
    If I have one generic RADIUS server and a number of clients (a VoIP server, a VPN server, a WiFi access point), how do I distinguish the senders (NASes) of access and accounting requests? I didn't find anything helpful by reading all the related RFCs; only NAS-IP-Address or NAS-Identifier is required, but these attributes usually just carry the IP address of the NAS, and that address cannot be used as a robust identifier, because it's easy to fall into the case where two different NASes are running on the same server, and these attributes cannot be customized. Any ideas?
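    One practical approach, under the assumption that the server is FreeRADIUS: most NAS implementations do let you set NAS-Identifier to an arbitrary string, and the server policy can then branch on it regardless of source IP. A sketch in FreeRADIUS unlang (the identifier values are placeholders that must match what each NAS is configured to send):

        # in the authorize section of the site configuration
        if (NAS-Identifier == "voip-pbx") {
            # VoIP-specific policy here
        }
        elsif (NAS-Identifier == "office-wifi") {
            # WiFi-specific policy here
        }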

  • Proftpd: How to set the default root to a user's home directory without jailing the user?

    - by sacamano
    Hi there. I've installed proftpd on my Debian box, but I'm having some trouble with the configuration. In my proftpd.conf I've added:

        DefaultRoot ~ !ftp_special

    This works fine in that all users except members of ftp_special are unable to navigate outside of their home folders. However, I want users who are members of ftp_special to land in a special home folder when logging on to the FTP server, while still being able to navigate the entire server. Right now, when a member of ftp_special logs on, his entry point is the root (/). Thanks in advance.
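    A sketch of one way to get that behaviour: since ftp_special members are not jailed, their landing directory is simply their account's home directory, so pointing the home at the special folder (leaving DefaultRoot as it is) may be all that's needed; the path and username below are placeholders:

        usermod -d /srv/ftp-special someuser   # member of ftp_special lands here,
                                               # but can still cd anywhere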

  • How to change the build directory of a Hudson job?

    - by mark
    Dear ladies and sirs. My C: drive is full. I wish to move the builds folder of a job to another location. I can cheat with the help of the JUNCTION utility to redirect the original builds folder, but I am interested to know whether there is a proper Hudson way to do it. Thanks.
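    Hedged, from memory of Hudson's main configuration: HUDSON_HOME/config.xml contains a buildsDir pattern that relocates build records for all jobs at once, which may be the "Hudson way"; failing that, the junction cheat per job looks like this on Windows (paths are placeholders):

        robocopy C:\Hudson\jobs\myjob\builds D:\builds\myjob /E /MOVE
        mklink /J C:\Hudson\jobs\myjob\builds D:\builds\myjob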

  • Best Linux Distro for web services (Nginx & node.js) on laptop: Compaq 6710b?

    - by tomByrer
    I haven't used Linux in 5+ years, aside from downloading occasional system recovery CDs off DistroWatch, so I don't know the current landscape. Related postings on this forum are several years old & may not relate to my hardware (Compaq 6710b laptop, Core2Duo Centrino). Requirements:
    - uses the Compaq 6710b laptop's WiFi out of the box
    - enough frequently updated pre-made packages for web hosting & development (Nginx & node.js are my biggest concerns; everyone has Apache & PHP, & I'm not crazy about building from source)
    - preferably easy enough to use, but with outside help available (so a small user-base distro is only OK if the community is active & a major distro's packages are compatible)
    - configuration easy to transfer to outside web hosts
    - you have actually installed/used the recommended distro (you don't have to be an expert)
    TIA!

  • Am I obliged to use IPv6 tunnel services if I want to use IPv6?

    - by Zagorax
    I was looking into configuring Slackware to use IPv6, but all the instructions I found talk about using an IPv6 tunnel that encapsulates IPv6 requests in IPv4 packets and sends them to an external router, which extracts the IPv6 request and sends back a reply (or at least this is what I understood). Is that necessary? Isn't there a way to configure a pure IPv6 system? If yes, could you please point me to a guide that clearly explains how to enable IPv6 without this trick? I would like to configure my Slackware desktop first, and then do the same with my CentOS server. EDIT: maybe I gave you too little information, sorry; here is some more, following the posted guide.

        ~$ test -f /proc/net/if_inet6 && echo "Running kernel is IPv6 ready"
        Running kernel is IPv6 ready

    So it seems IPv6 is enabled in my kernel. Some other output from ifconfig and route, and the /etc/resolv.conf contents (with OpenDNS):

        ~$ /sbin/ifconfig wlan0 | grep inet6
        inet6 addr: fe80::21f:3bff:fe60:cc5b/64 Scope:Link

        ~$ /sbin/route -A inet6 | grep wlan0
        fe80::/64 :: U 256 0 0 wlan0
        ff00::/8  :: U 256 0 0 wlan0

        ~$ cat /etc/resolv.conf
        inet6 nameserver 2620:0:ccc::2
        nameserver 208.67.222.222
        nameserver 208.67.220.220

    But still, with ping6 I can only ping localhost (::1); everything else is unreachable. Normal ping works fine. That is why I was asking whether I am obliged to use a tunnel.
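    A sketch of how to tell whether native IPv6 is even on offer (rdisc6 comes from the ndisc6 package; a solicitation with no answer means the router advertises no IPv6, so a tunnel or ISP support is needed for anything beyond link-local):

        ip -6 addr show dev wlan0      # only an fe80:: address = link-local only
        sudo rdisc6 wlan0              # solicit router advertisements on the link
        ping6 -c1 -I wlan0 ff02::1     # link-local ping works without any tunnel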
