Search Results

Search found 13437 results on 538 pages for 'trusted root certificates'.


  • Invalid user names when creating an LDAP account

    - by h1d
    I'm trying to set up a system where a visitor can enter any user name in a form to create a new user. The account ends up in an LDAP directory, and I plan to map it to a UNIX account as well (on Ubuntu Linux) by making the system look up accounts in LDAP. That part is fine, but many user names should be avoided, the most obvious being 'root' along with all the names taken by daemons and other system accounts. How do you tackle this problem? Do you build a list of disallowed user names by checking /etc/passwd? I was thinking that if, internally, the user names were prefixed with 'ldap_' or something, it would avoid naming conflicts, but that seems awkward when the LDAP entry is 'joe' while the system account looks like 'ldap_joe'. I'm not even sure how that could be achieved.
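
    One way to sanity-check a proposed name before creating the LDAP entry is to combine a lookup of existing accounts with a hand-maintained denylist. A minimal sketch (the denylist path and script name are illustrative, not part of any standard):

      #!/bin/sh
      # check_name.sh -- reject names that already exist or are reserved.
      # Usage: ./check_name.sh joe
      name="$1"
      denylist="/etc/username-denylist"   # hypothetical file: root, daemon, www-data, ...

      # getent consults /etc/passwd *and* LDAP (via nsswitch), so this
      # also catches names that only exist in the directory.
      if getent passwd "$name" >/dev/null; then
          echo "refused: $name already exists" >&2; exit 1
      fi
      if grep -qxF "$name" "$denylist"; then
          echo "refused: $name is reserved" >&2; exit 1
      fi
      echo "ok: $name is free"

    Also worth knowing: Debian/Ubuntu reserve UIDs below 1000 for system accounts, so giving LDAP users UIDs of 10000 and up prevents a new 'joe' from ever colliding with a daemon's UID even if the name check slips.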

    Read the article

  • How to allow Hudson build URL through Nginx auth_basic?

    - by rodreegez
    Hi, I have Hudson running and made available to the world via nginx. I have protected Hudson with nginx's auth_basic and that works great. The trouble is, I want to allow unauthenticated requests to the build URL, i.e. /job/<job_name>/build. Currently I have this in my nginx conf:

      upstream hudson {
          server 127.0.0.1:8888;
      }

      server {
          server_name ci.myurl.com;
          root /var/lib/hudson;

          location / {
              proxy_pass http://hudson/;
              auth_basic "Super secret stuff";
              auth_basic_user_file /var/opt/hudson/htpasswd;
          }

          location ~ \/build {
              auth_basic off;
          }
      }

    I can't get that second location to allow unauthenticated requests. I have tried various combinations:

      location ~ /job/(.*)/build { }
      location ^~ \/build { }
      location ~ \/job\/(.*)\/build { }

    etc... Maddening! Can anyone point me in the right direction? Thanks, Ad.
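
    One detail that commonly bites here: an nginx location does not inherit proxy_pass from a sibling location, so a bare "auth_basic off" block matches the request but then serves files from the root directory instead of proxying to Hudson. A sketch of the idea (untested, using the names from the question; note that proxy_pass inside a regex location must not carry a URI part):

      location / {
          proxy_pass http://hudson/;
          auth_basic "Super secret stuff";
          auth_basic_user_file /var/opt/hudson/htpasswd;
      }

      # Regex locations take precedence over the prefix match above,
      # but proxy_pass must be repeated; it is not inherited.
      location ~ ^/job/[^/]+/build$ {
          auth_basic off;
          proxy_pass http://hudson;
      }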

    Read the article

  • Possible causes for Domain server being unavailable?

    - by serversurfer
    One of our servers was compromised after a user with administrative privileges accidentally loaded a virus from a USB drive on a desktop connected to the domain. The two most obvious symptoms are: the server no longer responds to login attempts, and the root directory of the drive containing user data has been filled with randomly named empty folders. (Initially there were around a million folders; I've been slowly deleting them.) I've run several virus scans from different vendors and am fairly confident the virus has been removed, but the damage is done. I'm hoping the two symptoms are related and that once the directories are gone the server will start responding again. The drive is very slow to respond. I'm deleting about 20k folders at a time; any more than that and Windows Explorer becomes unresponsive. In the event that I finish cleaning up the drive and things don't return to normal, what else can I check?
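
    Since rd without /S refuses to remove a non-empty directory, a cmd.exe loop can sweep away only the empty junk folders and leave real data untouched, without Explorer's overhead. A sketch, assuming the data drive is D:\data (adjust the path; double the % signs if you put this in a batch file):

      for /f "delims=" %d in ('dir /ad /b D:\data') do @rd "D:\data\%d"

    Beyond that, with a confirmed compromise of a domain-joined server, check the event logs for newly installed services and scheduled tasks, audit local and domain admin group membership, and treat any credentials used on the box as burned.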

    Read the article

  • rsync --link-dest behaviour when run as sudo

    - by fotNelton
    In order to create regular backups, I'm using rsync together with --link-dest so as to create hard links for unchanged files. For example:

      rsync -ax \
          --partial --delete --delete-excluded --inplace \
          --exclude-from=/tmp/temp_excludes \
          --link-dest=/Volumes/Backup/current \
          /Users /Volumes/Backup/2012-06-25

    This works very well as long as I start the process from my normal user account. But as soon as I start the process using sudo it behaves erratically, meaning that rsync copies all the unchanged files instead of hard-linking them. Since sudo modifies the environment, I've also tried sudo -E in conjunction with making sure that my sudoers file has the corresponding option set. That didn't work either. So, the question is, how can I run rsync using sudo? Whereas the above example only shows a backup of the Users directory, I also need to back up some system files that I can only access as root.
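
    One plausible explanation: with -a, a root rsync also compares and preserves file ownership, while the earlier non-root runs could not, so every file whose owner differs from its copy under --link-dest fails the "unchanged" test and gets copied instead of linked. An itemized dry run will show exactly which attribute breaks the match (a diagnostic sketch, not a fix; the target path is illustrative):

      sudo rsync -axni \
          --link-dest=/Volumes/Backup/current \
          /Users /Volumes/Backup/test-run

    In the -i output, an 'o' or 'g' flag on otherwise unchanged files points at owner or group mismatches; if that is what shows up, taking all backups as root from here on should make subsequent runs hard-link again.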

    Read the article

  • Installing maven on Ubuntu by manual download

    - by WebDevHobo
    To install Maven, I downloaded the latest version from the website and then followed the installation steps at http://maven.apache.org/download.html#Installation. The last step, the version check, does not work. It says that 'mvn' is currently not installed and that I should type sudo apt-get install maven2. If I call the mvn binary by its full path, it does work:

      root@ubuntu:~# /usr/local/apache-maven/apache-maven-2.2.1/bin/mvn --version
      Apache Maven 2.2.1 (r801777; 2009-08-06 12:16:01-0700)
      Java version: 1.6.0_21
      Java home: /usr/java/jdk1.6.0_21/jre
      Default locale: en_US, platform encoding: UTF-8
      OS name: "linux" version: "2.6.32-25-generic" arch: "i386" Family: "unix"

    So, what am I doing wrong here? Or what would an apt-get install do extra that I might have forgotten?
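
    The "mvn: command not found" message just means the current shell has no PATH entry for Maven's bin directory. A sketch, using the path from the question:

      export M2_HOME=/usr/local/apache-maven/apache-maven-2.2.1
      export PATH="$M2_HOME/bin:$PATH"

      # make it permanent for future shells:
      echo 'export PATH=/usr/local/apache-maven/apache-maven-2.2.1/bin:$PATH' >> ~/.profile

      # or sidestep PATH editing entirely with a symlink:
      sudo ln -s /usr/local/apache-maven/apache-maven-2.2.1/bin/mvn /usr/local/bin/mvn

    This is essentially the "extra" that apt-get would do for you: place mvn somewhere already on the PATH and pull in a JDK dependency.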

    Read the article

  • Do best practices say to restrict the usage of /var to sudoers?

    - by NewAlexandria
    I wrote a package and would like to use /var to persist some data. The data I'm storing could even be thought of as an addition to /var/db. The pattern I observe is that files in /var/db and its surrounds are owned by root. The primary (intended) use of the package is filtering cron jobs, meaning you would need permissions to edit the crontab anyway. Should I presume a sudo install of the package? Should I have the package gracefully degrade to a /usr subdir, and if so, which one? If I 'opinionate' that any non-sudo install requires a configrc (with paths), where should the package look for that config file (presuming a shared-host environment)? Incidentally, this package is a ruby gem, and you can find it here.
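
    One pattern that sidesteps the question at install time is to probe for a writable system location and fall back to a per-user one. A sketch in shell (directory names are illustrative; on Linux, /var/lib is the more conventional home for package state than the BSD-flavoured /var/db):

      if [ -w /var/lib ]; then
          DATA_DIR=/var/lib/mypkg
      else
          DATA_DIR="${XDG_DATA_HOME:-$HOME/.local/share}/mypkg"
      fi
      mkdir -p "$DATA_DIR"

    Since editing a crontab already implies the user can write their own files, a per-user fallback loses nothing for the non-sudo case; a configrc would then conventionally be looked up at ~/.mypkgrc or under $XDG_CONFIG_HOME.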

    Read the article

  • Upgrading Fedora on Amazon to 12 but getting libssl.so.* & libcrypto.so.* are missing

    - by bateman_ap
    I am upgrading to Fedora 12 on an Amazon EC2 instance using the guide at http://www.ioncannon.net/system-administration/894/fedora-12-bootable-root-ebs-on-ec2/ I managed to upgrade a 64-bit instance OK; however, I am facing some problems with a standard one. On the final part of the install from 11 to 12 I get this error:

      Error: Missing Dependency: libcrypto.so.8 is needed by package httpd-tools-2.2.1.5-1.fc11.1.i586 (installed)
      Error: Missing Dependency: libssl.so.8 is needed by package httpd-tools-2.2.1.5-1.fc11.1.i586 (installed)

    This is referenced in the comments of the link above, but all it says is:

      Q: Apache failed, or libssl.so.* & libcrypto.so.* are missing
      A: These versions are missing the symlinks they require. Easy fix, go symlink them to the newest versions in /lib

    However, I'm afraid I don't know how to do this. If it helps, I ran locate libssl.so and got:

      /lib/libssl.so.0.9.8b
      /lib/libssl.so.6
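
    Following the comment's suggestion means creating symlinks so the soname each package asks for (libssl.so.8, libcrypto.so.8) resolves to a real library. A sketch; the versioned file names on your instance may differ from these, so list them first rather than copying the paths blindly:

      ls -l /lib/libssl.so.* /lib/libcrypto.so.*

      sudo ln -s /lib/libssl.so.0.9.8b    /lib/libssl.so.8
      sudo ln -s /lib/libcrypto.so.0.9.8b /lib/libcrypto.so.8

    This only papers over the dependency check so the upgrade can proceed; remove the links once the Fedora 12 openssl package provides the real libssl.so.8 and libcrypto.so.8 files.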

    Read the article

  • Keep ASP.NET site and content separate

    - by Nelson
    I have an ASP.NET site in folder x. Currently lots of other static content gets added to folder x and mixed in with it, making it one big mess. I would like to keep the ASP.NET site and the content separate somehow. I know you can create virtual directories in IIS, but there are lots of content folders, and even some content in the root. The content people are not technical and really need an easy way to add files. I would stick everything in a subfolder (they don't touch anything outside it, I don't touch their folder), but that would change their URLs (www.example.com/something to www.example.com/content/something). I almost need a way to "merge" two folders and have them act as one. I'm guessing that is impossible, since there could be file conflicts, etc. Any other ways I can achieve this?
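
    If the server is IIS 7 or later, one low-effort middle ground is to script a virtual directory for every top-level content folder, so the content team keeps its URLs while living in its own tree. A sketch with appcmd (site name and paths are illustrative; on IIS 6 the same loop could drive iisvdir.vbs instead):

      for /d %d in (D:\content\*) do %windir%\system32\inetsrv\appcmd add vdir /app.name:"Default Web Site/" /path:"/%~nd" /physicalPath:"%d"

    Re-running the loop after new folders appear keeps the overlay current; conflicts only arise if a content folder shares a name with a folder of the ASP.NET site itself.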

    Read the article

  • Squid throws error: The requested URL could not be retrieved

    - by Supratik
    Hi, sometimes I get the following error:

      The requested URL could not be retrieved
      While trying to retrieve the URL: http://groups.google.com/
      The following error was encountered:
      Unable to determine IP address from host name for groups.google.com
      The dnsserver returned:
      Refused: The name server refuses to perform the specified operation.
      This means that: The cache was not able to resolve the hostname presented in the URL. Check if the address is correct.
      Your cache administrator is root.

    What could be the reason for this error? Regards, Supratik
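
    "Refused" is the resolver's status, not Squid's: the name server Squid asked declined to perform a recursive lookup for it. A way to confirm from the Squid host (the resolver IP is illustrative; use the ones listed in /etc/resolv.conf):

      dig groups.google.com @10.0.0.1

    If that intermittently returns status REFUSED, the fix lives on the resolver side, or you can point Squid at resolvers that allow recursion from this host, e.g. in squid.conf:

      dns_nameservers 8.8.8.8 8.8.4.4

    followed by squid -k reconfigure.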

    Read the article

  • How to setup apache multi-site with multi-domain on ec2

    - by Esh
    Say I have two document roots, domain1/ and domain2/. I know how to access those two roots from my own computer if they are hosted on the same machine. My question is: if I want to do the same thing on my EC2 server, how should I configure my elastic IPs to reach those two roots? I understand that by default the elastic IP only associates with the root named for localhost (127.0.0.1). Could anyone give me a detailed answer? An example would help, thanks!
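
    An elastic IP attaches to the instance, not to a folder; Apache then picks the document root by the Host header the browser sends. So: point both domains' DNS A records at the one elastic IP and declare a name-based virtual host per root. A sketch (Apache 2.2-era syntax, paths illustrative):

      NameVirtualHost *:80

      <VirtualHost *:80>
          ServerName www.domain1.com
          DocumentRoot /var/www/domain1
      </VirtualHost>

      <VirtualHost *:80>
          ServerName www.domain2.com
          DocumentRoot /var/www/domain2
      </VirtualHost>

    Requests that match neither ServerName (e.g. hitting the bare IP) fall through to the first block listed.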

    Read the article

  • Performance & Security Factors of Symbolic Links

    - by Stoosh
    I am thinking about rolling out a very stripped-down version of release management for some PHP apps I have running. Essentially the plan is to store each release in /home/release/1.x etc. (exported from a tag in SVN), then symlink the live folder to the current release and point the document root in the Apache config at it. I don't have a problem setting all this up (I've actually got it working at the moment); however, I'm a developer with only basic knowledge of the server-admin side of things. Is there anything I need to be aware of, from a security or performance perspective, when using this method of release management? Thanks
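
    The main operational wrinkle is making the switch atomic, so a request never lands between "symlink removed" and "symlink recreated". Building the link under a temporary name and renaming it into place avoids that window (GNU coreutils; paths are illustrative):

      ln -s /home/release/1.3 /home/release/live.tmp
      mv -T /home/release/live.tmp /home/release/live

    Point the Apache DocumentRoot at /home/release/live once, and releases stop touching the Apache config at all. Two things to keep an eye on: the vhost needs Options FollowSymLinks for the docroot, and PHP opcode/realpath caches can briefly keep serving the previous target after a switch.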

    Read the article

  • Running php and java in parallel on the same server

    - by manni
    I have a Java server from Rackspace, and I am already running a Java application on it. Now I want to run a PHP application on the same server. What should I do? When I asked the Rackspace people, they said Apache is already installed on the server, so I can run PHP on it. I have tried installing PHP on the server and then copied my PHP files into /var/www/xxx, but when I hit the URL I get a "page not found" error. They have given me the server's SSH root username and password. Thanks in advance.
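
    In outline: install the PHP module for Apache, restart it, and make sure the files sit under the DocumentRoot Apache actually uses, not just any /var/www path. A sketch, guessing a RHEL-style box since Rackspace is involved; adjust the package manager to the actual distro:

      sudo yum install php
      sudo service httpd restart

      # where does this Apache really serve files from?
      grep -R "DocumentRoot" /etc/httpd/conf /etc/httpd/conf.d

      # quick end-to-end test once the docroot is known:
      echo '<?php phpinfo();' | sudo tee /var/www/html/info.php

    A "page not found" for /var/www/xxx usually just means the DocumentRoot is elsewhere, or that the running Java app, not Apache, owns the port you are hitting.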

    Read the article

  • How do you interpret `strace` on an apache process returning `restart_syscall`?

    - by indiehacker
    We restart an Apache server every day because RAM usage reaches its limit. Though lowering MaxClients in the Apache configuration has value (see this Server Fault answer), I don't think it is a solution to the unknown root problem. Can you make sense of the data below? Here is an extract of what top (sorted by memory with M) returns:

      20839 www-data 20 0 1008m 359m 22m S 4 4.8 1:52.61 apache2
      20844 www-data 20 0 1008m 358m 22m S 1 4.8 1:51.85 apache2
      20842 www-data 20 0 1008m 356m 22m S 1 4.8 1:54.60 apache2
      20845 www-data 20 0  944m 353m 22m S 0 4.7 1:51.80 apache2

    Investigating a single process with sudo strace -p 20839 returns only this one line, which is cryptic to me:

      restart_syscall(<... resuming interrupted call ...> <unfinished ...>

    Any insights? Thanks.
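
    That line is not an error: restart_syscall(<... resuming interrupted call ...>) means the process was already parked in a slow, blocking call (typically accept or poll, for an idle Apache worker) when strace attached, and is simply resuming it. To see real activity, follow all threads and children with timestamps, or collect a syscall profile (illustrative invocations):

      sudo strace -f -tt -p 20839

      sudo strace -c -f -p 20839    # Ctrl-C after ~30s prints a summary table

    A genuinely busy worker shows a stream of read()/write()/accept() cycles; one that only ever resumes the interrupted call is just waiting, which would point the RAM investigation at how big each worker process is rather than at what it is doing.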

    Read the article

  • How to mount encrypted volume at login (Ubuntu 12.04, pam_mount)

    - by Nick Lothian
    I'm trying to get pam_mount working on Ubuntu 12.04. I have /dev/sda1 (an encrypted partition) with /dev/dm-1 (ext4-formatted) inside it. Should ~/.pam_mount.conf.xml be trying to mount /dev/sda1 or /dev/dm-1? If I use the line

      <volume fstype="ext4" path="/dev/dm-1" mountpoint="~/slowstore" options="rw" />

    then it nearly works. It prompts for the password (OK, I'd like pam_mount to do that for me, but still...), then I get:

      pam_mount(rdconf2.c:126): checking sanity of luserconf volume record (/dev/dm-1)
      pam_mount(rdconf2.c:132): user-defined volume (/dev/dm-1), volume not owned by user

    If I do sudo chown nick:disk /dev/dm-1 and re-login, the encrypted partition mounts correctly (ignoring the fact that I have to re-enter the password). However, if I log out completely, the ownership of /dev/dm-1 gets reset to root:disk. What am I doing wrong?
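
    Two things seem to be in play: per-user (luserconf) volume records are sanity-checked for device ownership, which a root-owned /dev/dm-1 will always fail after a fresh boot, and mounting the inner device leaves the LUKS unlock outside pam_mount, hence the extra password prompt. One way to address both at once, sketched from pam_mount's documented crypt support (untested): describe the outer partition in the system-wide config, where the ownership check on luserconf records does not apply:

      <!-- in /etc/security/pam_mount.conf.xml -->
      <volume user="nick" fstype="crypt" path="/dev/sda1" mountpoint="~/slowstore" />

    With fstype="crypt", pam_mount opens the LUKS container itself and, if the LUKS passphrase matches the login password, login can unlock and mount the volume in one step.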

    Read the article

  • Apache routing vhosts to /var/www

    - by FHannes
    One user at my site has reported that he reaches the content at /var/www when browsing to any of the vhosts on my server. As far as I'm aware, my Apache server does not contain a document root that references this folder. On top of that, this user seems to be the only one experiencing the issue. According to his ISP, the issue isn't caused by them, yet on his mobile connection he can access the site fine. When browsing to my server's IP, he also receives the correct content from the default vhost. What are the possible causes of this issue, and how can I get it to stop? I've explored pretty much every option I could think of.
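
    Since everyone else gets the right content, the prime suspects are on the client's path to you: a stale DNS cache, an old hosts-file entry, or a proxy at his ISP reaching the server with a Host header that matches no vhost and therefore falls through to the default. Two checks run from the affected machine separate those cases (the IP is a placeholder for your server's real address):

      nslookup www.example.com

      curl -v -H "Host: www.example.com" http://203.0.113.10/

    If nslookup returns something other than your IP, it is resolution on his side; if the curl succeeds while normal browsing fails, a proxy or cache between him and you is rewriting or dropping the Host header.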

    Read the article

  • Unable to log in to CentOS

    - by Rendl
    I set up a multi-node cluster using CentOS under VMware yesterday. Today when I reboot the nodes I get the below error on startup:

      There is a problem with the configuration server. (/usr/libexec/gconf-sanity-check-2 exited with status 256)

    I am unable to log in as root or any other user, as the screen is frozen. The solution suggested online is to change the permissions on some tmp files; my problem is that I cannot reach a terminal because I cannot log in. Also, on reboot I do not have any recovery options in CentOS; I only see the GRUB command line. I am new to Linux and Hadoop. Please help ASAP.
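
    That gconf-sanity-check-2 failure at login is classically caused by wrong permissions on /tmp, and it can be fixed from a root shell obtained at boot, no login screen needed. A sketch, assuming you can reach the GRUB menu (press a key during the GRUB countdown, then 'a' to append to the kernel arguments):

      # append to the kernel line, then boot:
      single

      # in the root shell that single-user mode drops you into:
      chmod 1777 /tmp /var/tmp
      reboot

    The 1777 mode (world-writable with the sticky bit) is what /tmp must have for per-user session files, gconf's included, to be created at login.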

    Read the article

  • Nginx - Address already in use

    - by user2426362
    If I run service nginx restart I get this error:

      root@user /etc/nginx/sites-enabled # service nginx restart
      Restarting nginx: nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
      nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
      nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
      nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
      nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
      nginx: [emerg] still could not bind()
      nginx.

    How do I fix it? I also have an Apache configuration running on port 80.
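
    "Address already in use" on 0.0.0.0:80 means exactly what it says: Apache already holds the port, and two servers cannot both bind it. First confirm who owns it, then either stop one of them or move one to a different port (commands for a Debian-style box, matching the service syntax in the question):

      sudo netstat -tlnp | grep :80

      sudo service apache2 stop      # option 1: drop Apache
      # option 2: edit the nginx server block to "listen 8080;" and
      sudo service nginx restart

    A common third option is to keep nginx on 80 and proxy_pass the Apache sites to a high port, so both stacks stay reachable through one address.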

    Read the article

  • mysqld stopped working... can't restart... need help?

    - by grant tailor
    I was just checking some things and noticed mysqld is not running in the Parallels Power Panel control panel, yet my websites on the server, which all use MySQL databases, were working fine, which is really strange. So I tried to restart mysqld, got errors, and now all my websites are offline with a database connection error. I logged in as root, ran /etc/init.d/mysqld start, and got this error:

      ERROR! Manager of pid-file quit without updating file

    What do I do next? Please help!
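
    "Manager of pid-file quit without updating file" is the init script saying mysqld died right after starting; the actual reason will be in the MySQL error log. A triage sketch (the log path varies by distro and /etc/my.cnf settings):

      sudo tail -n 50 /var/log/mysqld.log

      df -h                          # a full partition is a classic cause
      sudo ls -ld /var/lib/mysql     # datadir should be owned mysql:mysql

    The last lines of the error log almost always name the problem directly (full disk, corrupted InnoDB files, bad permissions after a panel update), which beats guessing at restarts.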

    Read the article

  • Concerns about migrating Magento

    - by Pankaj Upadhyay
    We have a Magento 1.5.0.1 store running at a hosting provider. Now we need to migrate it from that server to a new hosting provider. I talked with a technical guy from the new hosting provider, who told me to do the following: go into the cPanel Backup Wizard, make a FULL BACKUP and download the zip file, then upload that zip file to their server in my root folder; then tell them, and they will do the restore. My concern: will everything work as expected? What about the connection strings and the database? Will the database be created automatically and work the same? Also, somewhere I read that version 1.5.0.1 used an older database format that might not work on newer MySQL versions; can this have an impact too? Should I proceed in the same manner, or do I need to take care of some additional things to ensure smooth running?
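
    A cPanel full backup includes the MySQL databases and database users, so a like-for-like restore normally leaves app/etc/local.xml (where Magento keeps its connection settings) working unchanged; nothing is "automatically created" beyond what the restore re-imports. The one thing that usually needs a manual touch is the pair of base URLs, if the domain changes with the move. A hedged sketch (database name, user, and URL are placeholders):

      mysql -u store_user -p store_db -e "
        UPDATE core_config_data
           SET value = 'http://www.example.com/'
         WHERE path IN ('web/unsecure/base_url','web/secure/base_url');"

      rm -rf var/cache/*    # from the Magento root, after the URL change

    On the MySQL-version worry: confirm the new host's MySQL version with the provider rather than assuming an incompatibility; 1.5.0.1 runs on the standard MySQL 5.x line of its era.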

    Read the article

  • ConfigurationErrorsException when serving images via UNC on IIS6

    - by Mark Richman
    I have a virtual directory in my web app which connects to a Samba share via UNC. I can browse the files via Windows Explorer without issue, but my web app throws a yellow screen with the following message:

      Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately.

      Parser Error Message: An error occurred loading a configuration file: Could not find file '\\cluster\cms\qa-images\120400\web.config'.

    What makes no sense to me is why it's looking for a web.config in that location. I know it's not an authentication issue, because the virtual directory can serve images from its root (i.e. \\cluster\cms\qa-images\test.jpg serves as http://myserver/upload/test.jpg just fine).

    Read the article

  • Fresh install of nginx causes browser to download index.html instead of opening it

    - by 010110110101
    When I view http://localhost:90 in Chrome, the file is downloaded instead of displayed. This question has been asked many times on SO, but about index.php files; my problem is a plain HTML file, not a PHP file, and that hasn't been asked yet. I was hoping the solution would be similar, but I haven't been able to figure it out. Here's my example.com.conf:

      server {
          server_name localhost;
          listen 90;
          root /var/www/example.com/html
          index index.html
          location / {
              try_file $uri $uri/ =404;
          }
      }

    My index.html file contains only two words and no markup: Hello World. I think it's the MIME types, but the mime.types file has an entry for html in it. This is a fresh nginx install, and nginx -t reports "test is successful".
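
    Two separate problems are visible in that snippet. First, if nginx cannot map .html to text/html it serves the fallback default_type, often application/octet-stream, which browsers download; that mapping only happens where mime.types is actually included. Second, as pasted the config would not even parse: root and index are missing semicolons, and the directive is try_files, not try_file. A repaired sketch, assuming the stock mime.types location:

      server {
          listen 90;
          server_name localhost;

          root  /var/www/example.com/html;
          index index.html;

          # normally inherited from the http {} block in nginx.conf;
          # shown here to make the MIME point explicit
          include      /etc/nginx/mime.types;
          default_type application/octet-stream;

          location / {
              try_files $uri $uri/ =404;
          }
      }

    If nginx -t passed on the pasted version, the file being tested was probably not the file being served; nginx -T (capital T) prints the full configuration nginx actually loads.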

    Read the article

  • Giving SSH access to a user, and security issues.

    - by Kris Sauquillo
    Okay, so I have a VPS, and I made an account for a friend so he can host his own domains (using the reseller features in DirectAdmin). He's asking for SSH access, and I know this is probably a bad idea. Does he then have access to my whole server, such as executing commands and accessing the other domains I host? I logged into SSH using his account details and it let me navigate around all of the root folders/files; his account is under /home/AccountName/. Is there any way to restrict his access to his own folder only, and to restrict the commands he can use?
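
    Plain shell access on a shared box does let him read most of the filesystem (navigation is limited by file permissions, not by the home directory), so the usual containment is done in sshd itself. A sketch for /etc/ssh/sshd_config that confines the account to file transfer inside its home; a full interactive shell in a chroot needs a populated jail and considerably more work:

      Match User friendaccount
          ChrootDirectory %h
          ForceCommand internal-sftp
          AllowTcpForwarding no

    Caveats: sshd refuses the chroot unless %h and every parent directory are root-owned and not group- or world-writable, and this gives SFTP rather than a shell; restricting commands in a real shell means a restricted shell (rbash) or a jail, both of which are easy to get wrong.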

    Read the article

  • SSH disconnects active session after 20 minutes

    - by Paramaeleon
    I've just set up a new Linux box (openSUSE 12.3 on VMware). Now I've noticed that my SSH sessions are disconnected after exactly 20 minutes, even when the session is active. (PuTTY: "Network error: Software caused connection abort") I have already set PuTTY to send keepalives every 64 seconds. In sshd_config I set

      ClientAliveInterval 50
      ClientAliveCountMax 2

    and did a daemon reload. That didn't help. About two minutes after the link breaks down, sshd reports to /var/log/messages:

      sshd[…]: Timeout, client not responding.
      sshd[…]: pam_unix(sshd:session): session closed for user root

    I don't encounter this behaviour when connecting to other virtual machines, so I guess the problem isn't in the network. Any help is appreciated.
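
    A hard cut at exactly 20 minutes despite application-level keepalives usually points at a stateful middlebox (VMware NAT, a firewall, a home router) expiring the TCP session rather than at sshd itself. A way to watch the server's side of the story is a second sshd in debug mode on a spare port; connect PuTTY to it and see what is logged at minute 20 (the port number is arbitrary):

      sudo /usr/sbin/sshd -d -p 2222

    "Timeout, client not responding" two minutes after the drop is consistent with that picture: the ClientAlive probes (50s x 2) go unanswered because the path is already dead. If the debug instance shows nothing initiated server-side, chase the VMware network mode and any firewall session timeouts next.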

    Read the article

  • Why does the Mobile Safari Browser on iOS not allow file uploads? [migrated]

    - by Kirinriki
    As is already known, it's not possible for iOS users to select image files to upload from Safari on the iPhone, because the browse button that would open the "select file" dialog is disabled. It works fine on Android, but not on iPhone. What is the particular reason for that? I have heard that the browse button is disabled because there is no file browser on the iPhone; someone else said that Safari simply blocks file-system (root) access. Is there any reliable source which explains the issue? (I need it for my thesis.)

    Read the article

  • nginx rewrite base url

    - by ptn777
    I would like the root URL http://www.example.com to redirect to http://www.example.com/something/else. This is because a weird WP plugin always sets a cookie on the base URL, which prevents me from caching it. I tried this directive:

      location / { rewrite ^ /something/else break; }

    but 1) there is no redirect, and 2) pages start firing more than 1,000 requests at my server. With this one:

      location / { rewrite ^ http://www.example.com/something/else break; }

    Chrome reports a redirect loop. What's the correct regexp to use?
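
    The loop comes from location / matching every URI, including /something/else itself, so each redirect re-enters the same rule. An exact-match location applies to "/" alone and cannot recurse, and return expresses a redirect more directly than rewrite (a sketch):

      location = / {
          return 301 /something/else;
      }

    Note this only fires for the bare root URL; if the WP plugin sets its cookie on other paths too, caching still needs the cookie stripped or ignored at the proxy layer.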

    Read the article
