Search Results

Search found 12017 results on 481 pages for 'no root'.

Page 374/481 | < Previous Page | 370 371 372 373 374 375 376 377 378 379 380 381  | Next Page >

  • Unable to start tomcat6: Java error (Exception in thread "main")

    - by Nyxynyx
    After installing tomcat6 on CentOS 6.3, I am unable to start the tomcat6 server:

        root@host [/var/log/tomcat6]# service tomcat6 start
        Starting tomcat6:                                          [  OK  ]

    Although it says OK, I can't access http://mydomain.com:8080. catalina.out contains:

        Exception in thread "main" java.lang.NullPointerException
           at java.lang.VMClassLoader.defineClass(libgcj.so.10)
           at java.lang.ClassLoader.defineClass(libgcj.so.10)
           at java.security.SecureClassLoader.defineClass(libgcj.so.10)
           at java.net.URLClassLoader.findClass(libgcj.so.10)
           at gnu.gcj.runtime.SystemClassLoader.findClass(libgcj.so.10)
           at java.lang.ClassLoader.loadClass(libgcj.so.10)
           at java.lang.ClassLoader.loadClass(libgcj.so.10)
           at gnu.java.lang.MainThread.run(libgcj.so.10)

    Tomcat6 was installed using yum:

        yum -y install java tomcat6 tomcat6-webapps tomcat6-admin-webapps

    When I tried to find the version with tomcat6 version:

        Exception in thread "main" java.lang.NoClassDefFoundError: org.apache.catalina.util.ServerInfo
           at gnu.java.lang.MainThread.run(libgcj.so.10)
        Caused by: java.lang.ClassNotFoundException: org.apache.catalina.util.ServerInfo not found in
           gnu.gcj.runtime.SystemClassLoader{urls=[], parent=gnu.gcj.runtime.ExtensionClassLoader{urls=[], parent=null}}
           at java.net.URLClassLoader.findClass(libgcj.so.10)
           at gnu.gcj.runtime.SystemClassLoader.findClass(libgcj.so.10)
           at java.lang.ClassLoader.loadClass(libgcj.so.10)
           at java.lang.ClassLoader.loadClass(libgcj.so.10)
           at gnu.java.lang.MainThread.run(libgcj.so.10)

    Any idea what I should do? Thanks!
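
    Every frame in those stack traces points at libgcj, so the box is almost certainly running Tomcat under GCJ rather than a full JDK. A hedged sketch of how to confirm and switch (the OpenJDK package name is the stock CentOS 6 one; adjust if your mirrors differ):

        # Which JVM is active? GCJ identifies itself as "gij".
        java -version

        # Install OpenJDK and point the "java" alternative at it.
        yum -y install java-1.6.0-openjdk
        alternatives --config java

        # Restart Tomcat against the new JVM.
        service tomcat6 restart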

    Read the article

  • What is the uninstall procedure for software installed via "make install" on CentOS 6.2?

    - by gkdsp
    I installed OCILIB on my CentOS 6.2 server some time ago, and now I want to install a newer version. The vendor requires an uninstall first, but doesn't provide instructions. I'm guessing that's because it's trivial for people with a Linux background. http://orclib.sourceforge.net/doc/html/group_g_install.html

    I installed the software like this:

        # step 1
        ./configure --with-oracle-headers-path=/usr/include/oracle/11.2/client64 \
                    --with-oracle-lib-path=/usr/lib/oracle/11.2/client64/lib
        # step 2
        make
        # step 3
        su root
        # step 4
        make install
        # step 5
        gcc -g -DOCI_IMPORT_LINKAGE -DOCI_CHARSET_ANSI \
            -L/usr/lib/oracle/11.2/client64/lib -lclntsh \
            -L/usr/local/lib -locilib conn.c -o conn

    How would I go about uninstalling this? I tried following http://www.cyberciti.biz/faq/delete-uninstall-software-linux-commands/ but nothing was found on my disk using rpm -qa *oci* or yum list *oci*. Maybe since it wasn't installed with yum or rpm, I shouldn't expect either of those to find it. Are there generic instructions for uninstalling software built from source, or do the instructions really depend on the specific software? Any help much appreciated.
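
    Autotools-based packages like this often ship an uninstall target, and even when they don't, the install target can report what it copied where. A hedged sketch, assuming the originally configured source tree is still on disk:

        cd /path/to/ocilib-src    # the tree you originally ran ./configure in
        make uninstall            # works if the Makefile provides the target

        # If there is no uninstall target, dry-run the install to list
        # exactly which files it would copy, then remove those by hand:
        make -n install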

    Read the article

  • Change XRDP keyboard layout to en-gb Ubuntu 12.04

    - by Earl Sven
    Does anybody know how to change the keyboard layout to en-gb in an XRDP session on Ubuntu 12.04? I am using mstsc.exe to connect to an XRDP server hosting an Xvnc session, but I cannot work out how to apply the UK keyboard layout.

    A bit of googling has yielded instructions which let me change the keymap; however, with the keymap file I downloaded I lose the ability to use the arrow keys, Home/End, etc. Comparing that file with the standard one, there are substantially more differences than I would expect given how similar the layouts are.

    I only have RDP access to the box, so I don't seem to be able to actually generate a new layout per the instructions above; maybe that's a local-console thing? Also, I can't change either the RDP client used or the RDP server, as they are my only access to the system. I do have root privileges on the OS, however. Any thoughts?

    Edit: I have found http://xrdp.sourceforge.net/documents/keymap/newkeymap.html on the XRDP SourceForge page, which describes the keymap file format. It indicates the values in the keymap files are Unicode (0x64 etc.), however the files already on my system seem to use a different format (0:0 or 65307:27 etc.). Does anybody know what the difference is?
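
    The page linked in the edit describes generating a fresh keymap with the xrdp-genkeymap tool, which dumps the layout of whatever X display it runs against - so running it inside the Xvnc session itself may avoid the need for a local console. A hedged sketch for en-gb (assumes xrdp's keymaps live in /etc/xrdp; 0809 is the RDP layout code for UK English):

        # inside the X session: switch the layout to GB, then dump it
        setxkbmap gb
        xrdp-genkeymap /etc/xrdp/km-0809.ini

        sudo service xrdp restart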

    Read the article

  • OpenSSL response 404 issue on CentOS 6

    - by dsp_099
    I followed this tutorial (though it's for 5.2, I figured I'd be alright). The changes I had to make that seemed to have worked: rename ca.csr to ca.cslr (that's the one the command generated) and list it in ssl.conf as ca.cslr instead of ca.csr. I have the following in httpd.conf:

        <VirtualHost *:80>
            DocumentRoot /etc/test
            ServerName site.com
        </VirtualHost>

        <VirtualHost *:433>
            SSLEngine on
            SSLCertificateFile /etc/pki/tls/certs/ca.crt
            SSLCertificateKeyFile /etc/pki/tls/private/ca.key
            <Directory /etc/test>
                AllowOverride All
            </Directory>
            DocumentRoot /etc/test
            ServerName cryptokings.com
        </VirtualHost>

    /test contains a folder inside of it, accessible via http://site.com/test/foo; however, attempting to access it via https://site.com/test/foo results in a warning that the certificate is untrusted (self-signed, no biggie) and a 404 error. Chrome's complaints about the certificate are the following:

        The identity of this website has not been verified.
        • Server's certificate does not match the URL.
        • Server's certificate is not trusted.

    I think those warnings are a side-effect of a self-signed certificate - or is the first one something that needs to be addressed? I seem to be able to fetch the root page via https just fine, though; it shows a standard CentOS setup page. (That said, I haven't added a VirtualHost entry for it, so I suppose that makes sense.)

    I think I've made a mistake somewhere during the setup, as I'm not too familiar with the process. During setup I was prompted for a type of password that would be required when Apache restarts, but running service httpd restart does not seem to prompt me for one. Any help would be appreciated.
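
    For reference, a minimal SSL vhost sits on the standard HTTPS port - note 443, not 433. With a typo there, the block never matches, and https requests fall through to whatever default SSL vhost Apache has (typically the stock CentOS page, which would explain the 404 for /test/foo). A hedged sketch using the paths from the question:

        <VirtualHost *:443>
            SSLEngine on
            SSLCertificateFile /etc/pki/tls/certs/ca.crt
            SSLCertificateKeyFile /etc/pki/tls/private/ca.key
            DocumentRoot /etc/test
            ServerName site.com
        </VirtualHost>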

    Read the article

  • CopSSH SFTP -- limit users' access to their home directory only

    - by bradvido
    Let me preface this by saying I've read and followed the instructions in the FAQ many times: http://www.itefix.no/i2/node/37. It does not do what the title claims: it allows every user access to every other user's home directory, as well as access to all subfolders below the CopSSH installation path. I'm only using this for SFTP access, and I need my users sandboxed into their home directory only. If you know a fool-proof way to lock users down so they can see only their home directory and its subfolders, stop reading now and reply with the solution.

    The details - here is exactly what I tried as I followed the FAQ. My CopSSH installation directory is C:\Program Files\CopSSH.

        :: Create a user group to hold all my SFTP users
        net localgroup sftp_users /ADD

        :: For that group, deny access at the top level and all levels below
        cacls c:\ /c /e /t /d sftp_users

        :: Allow my user group access to the CopSSH installation directory and its subdirectories
        cacls "C:\Program Files\CopSSH" /c /e /t /r sftp_users

    For each SFTP user, I create a new Windows user account, then:

        :: Add my user to the group I've created
        net localgroup sftp_users sftp_user_1 /add

    Then I open the Activate User wizard for CopSSH, choosing the user and "/bin/sftponly", and leave these options checked: "Remove copssh home directory if it exists", "Create keys for public key authentication", and "Create link to user's real home directory".

    This works; however, every user has access to every other user's home directory as well as the CopSSH root directory. So I tried denying access for all users to the user home directory:

        :: Deny access for users to the user home directory
        cacls "C:\Program Files\CopSSH\home" /c /e /t /d sftp_users

    Then I tried adding permissions on a user-by-user basis for each user's home\username folder. However, these permissions were not allowed by Windows, because the deny rule I created at the home directory was being inherited and overriding my allow rule.

    The next step for me would be to remove the deny rule at the home directory and, for each user folder, add a deny rule for every user it doesn't belong to plus an allow rule for the one user it does belong to. However, as my user list gets long, this will become very cumbersome. Thanks for the help!
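
    One way around the inheritance fight is to break inheritance on each user's folder and grant only that user, instead of stacking deny rules. A hedged sketch with icacls (available from Server 2003 SP2 / Vista onward; the folder and account names below are examples):

        :: Stop the parent's ACEs from flowing down to this folder
        icacls "C:\Program Files\CopSSH\home\sftp_user_1" /inheritance:r

        :: Grant the owning user modify rights over the whole tree, plus admin access
        icacls "C:\Program Files\CopSSH\home\sftp_user_1" /grant "sftp_user_1:(OI)(CI)M"
        icacls "C:\Program Files\CopSSH\home\sftp_user_1" /grant "Administrators:(OI)(CI)F"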

    Read the article

  • Different nmap results from inside and outside the server

    - by aasasas
    Hello, I have scanned my server from outside and from inside - why are the results different?

    From the server itself:

        [root@xxx ~]# nmap -sV -p 0-65535 localhost

        Starting Nmap 5.51 ( http://nmap.org ) at 2011-02-16 07:59 MSK
        Nmap scan report for localhost (127.0.0.1)
        Host is up (0.000015s latency).
        rDNS record for 127.0.0.1: localhost.localdomain
        Not shown: 65534 closed ports
        PORT   STATE SERVICE VERSION
        22/tcp open  ssh     OpenSSH 4.3 (protocol 2.0)
        80/tcp open  http    Apache httpd 2.2.3 ((CentOS))

        Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
        Nmap done: 1 IP address (1 host up) scanned in 7.99 seconds

    And from outside:

        sh-3.2# nmap -sV -p 0-65535 xxx.com

        Starting Nmap 5.51 ( http://nmap.org ) at 2011-02-16 00:01 EST
        Warning: Unable to open interface vmnet1 -- skipping it.
        Warning: Unable to open interface vmnet8 -- skipping it.
        Stats: 0:07:49 elapsed; 0 hosts completed (1 up), 1 undergoing SYN Stealth Scan
        SYN Stealth Scan Timing: About 36.92% done; ETC: 00:22 (0:13:21 remaining)
        Stats: 0:22:05 elapsed; 0 hosts completed (1 up), 1 undergoing Service Scan
        Service scan Timing: About 75.00% done; ETC: 00:23 (0:00:02 remaining)
        Nmap scan report for xxx.com (x.x.x.x)
        Host is up (0.22s latency).
        Not shown: 65528 closed ports
        PORT     STATE SERVICE    VERSION
        21/tcp   open  tcpwrapped
        22/tcp   open  ssh        OpenSSH 4.3 (protocol 2.0)
        25/tcp   open  tcpwrapped
        80/tcp   open  http       Apache httpd 2.2.3 ((CentOS))
        110/tcp  open  tcpwrapped
        143/tcp  open  tcpwrapped
        443/tcp  open  tcpwrapped
        8080/tcp open  http-proxy?

    Read the article

  • Fedora 9 not recognizing hard drive

    - by Andrew Jones
    I am installing Fedora 9 on a PC (specifications at the bottom) and have had a lot of trouble with it recognizing the hard drive. To get the Fedora installer to recognize it in the first place I had to pass "ata_generic.all_generic_ide=1 pci=nomsi" to the kernel, after which it installed OK. However, now when I boot the installed OS, I get a "could not find filesystem '/dev/root'" error. I tried passing the same arguments to the kernel at boot as I did when installing, but to no avail. I have tried using the default LVM layout and defining manual ones, but it made no difference.

    There is no option in the BIOS to enable AHCI or anything like that; in fact the BIOS is very limited in most respects. I can get into the system by using the installation CD in rescue mode (with those extra kernel parameters), but I'm not sure what to do once in there... Unfortunately, using a more recent version of Fedora, or even another Linux distribution altogether, isn't an option because of outside constraints - which is annoying, since I know for a fact Ubuntu works fine on this setup. I have not been using Linux that long, so treat me like an idiot - I am one. Any help would be greatly appreciated, thanks!

    System spec:
      Intel Atom Z530 CPU @ 1.6 GHz
      Intel US15W chipset
      1 GB DDR2
      160 GB SATA harddisk (Samsung HM16HI)
      1000 Mbit/s Ethernet port
      Phoenix BIOS
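
    Since the installer only booted with those parameters, the installed system probably needs them baked into its own boot entry as well. A hedged sketch of what that looks like from the rescue environment (Fedora 9 uses GRUB legacy; the kernel version below is an example):

        chroot /mnt/sysimage
        vi /boot/grub/grub.conf

        # append the parameters to the kernel line, e.g.:
        kernel /vmlinuz-2.6.25-14.fc9.i686 ro root=/dev/VolGroup00/LogVol00 \
            ata_generic.all_generic_ide=1 pci=nomsi

    If the error persists even with the parameters in place, the initrd may also need rebuilding from the chroot (mkinitrd on Fedora 9) so the disk driver is available before the root filesystem mounts.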

    Read the article

  • Apache2 with lighttpd as proxy

    - by andrzejp
    Hi, I am using Apache2 as my web server, and I would like lighttpd to assist it as a proxy for static content. Unfortunately I cannot get lighttpd and Apache2 set up correctly. (OS: Debian)

    Important parts of lighttpd.conf:

        server.modules = (
            "mod_access",
            "mod_alias",
            "mod_accesslog",
            "mod_proxy",
            "mod_status",
        )
        server.document-root = "/www/"
        server.port          = 82
        server.bind          = "localhost"

        $HTTP["remoteip"] =~ "127.0.0.1" {
            alias.url += (
                "/doc/"    => "/usr/share/doc/",
                "/images/" => "/usr/share/images/"
            )
            $HTTP["url"] =~ "^/doc/|^/images/" {
                dir-listing.activate = "enable"
            }
        }

    I would like to use lighttpd on only one site, operating as a virtual directory on Apache2. Configuration of that virtual host:

        ProxyRequests Off
        ProxyPreserveHost On
        ProxyPass /images http://0.0.0.0:82/
        ProxyPass /imagehosting http://0.0.0.0:82/
        ProxyPass /pictures http://0.0.0.0:82/
        ProxyPassReverse / http://0.0.0.0:82/
        ServerName MY_VALUES
        ServerAlias www.MY_VALUES
        UseCanonicalName Off
        DocumentRoot /www/MYAPP/forum
        <Directory "/www/MYAPP/forum">
            DirectoryIndex index.htm index.php
            AllowOverride None
        ...

    As you can see (or not ;)), my site physically lives at /www/MYAPP/forum, and I would like lighttpd to handle the folders /www/MYAPP/forum/images, /www/MYAPP/forum/imagehosting and /www/MYAPP/forum/pictures, and leave the rest (the PHP scripts) to Apache. After starting, lighttpd and Apache2 both run, but no images from those locations show up. What is wrong?
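
    One thing that stands out: ProxyPass /images http://0.0.0.0:82/ rewrites /images/foo.png into /foo.png on the backend, and with lighttpd's document root at /www/ that resolves to /www/foo.png - while the file actually lives under /www/MYAPP/forum/images/. A hedged sketch that keeps the paths aligned (loopback address used instead of 0.0.0.0, since lighttpd binds to localhost):

        ProxyPass        /images       http://127.0.0.1:82/MYAPP/forum/images
        ProxyPass        /imagehosting http://127.0.0.1:82/MYAPP/forum/imagehosting
        ProxyPass        /pictures     http://127.0.0.1:82/MYAPP/forum/pictures
        ProxyPassReverse /             http://127.0.0.1:82/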

    Read the article

  • Linux: Force fsck of a read-only mounted filesystem?

    - by Timothy Miller
    I'm developing for a headless embedded appliance running CentOS 6.2. The user can connect a keyboard, but not a monitor, and a serial console would require opening the case, something we don't want the user to have to do. This all pretty much rules out booting from a recovery USB drive, unless all it does is blindly re-image the hard drive.

    I would like to provide some recovery facilities, and I have written a tool that comes up on /dev/tty1 in place of getty to provide these functions. One such function is fsck. I have found out how to remount the root and other file systems read-only. Now that they are read-only, it should be safe to fsck them and then reboot. Unfortunately, fsck complains that the filesystems are mounted and refuses to do anything.

    How can I force fsck to run on a read-only mounted partition? Based on my research, this is going to be something obscure. "-f" just means to force repair of a clean (but unmounted) partition; I need to repair a clean or unclean mounted partition. From what I read, this is something "only experts" should do, but no one has bothered to explain how the experts do it. I'm hoping someone can reveal this to me.

    BTW, I've noticed that e2fsck 1.42.4 on Gentoo will let you fsck a mounted partition, even one mounted read-write, but it seems only to do so if fsck is run from a terminal, so it can ask the user whether they're sure they want to do something so dangerous. I'm not sure if the CentOS version does the same thing, but it appears that fsck CAN repair a mounted partition, yet it flatly refuses to when not run from a terminal.

    One last-resort option is for me to compile my own hacked fsck, but I'm afraid I'll mess it up in some unexpected way. Thanks!

    Note: Originally posted here.
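
    If the online check stays a dead end, one workaround on CentOS 6 is to schedule the check for the next boot instead: rc.sysinit looks for a flag file named /forcefsck and, when present, runs fsck on the fstab filesystems before they are mounted read-write. A hedged sketch of how the recovery tool could use it:

        # Ask the init scripts to fsck everything on the way up, then reboot.
        touch /forcefsck
        reboot

    This sidesteps the mounted-filesystem refusal entirely, since the check happens before the filesystems are in use.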

    Read the article

  • Launch synergy client on boot in Mac OS X

    - by Herms
    I have a Mac as a secondary machine at work. Currently I use Synergy on my main machine to share its keyboard and mouse with the Mac. I created a launch agent for my user to start the Synergy client when I log in, and that's working. However, this means I still have to pull out the Mac's keyboard and mouse in order to log in. I tried making a launch daemon so that it would start on boot, but I get the following errors in the console:

        LaunchSynergy[52] Tue Jul 14 12:41:44 testmacpro.local synergyc[52] <Warning>: 3891612: (CGSLookupServerRootPort) Untrusted apps are not allowed to connect to or launch Window Server before login.
        LaunchSynergy[52] Tue Jul 14 12:41:44 testmacpro.local synergyc[52] <Error>: kCGErrorRangeCheck : On-demand launch of the Window Server is allowed for root user only.
        LaunchSynergy[52] Tue Jul 14 12:41:44 testmacpro.local synergyc[52] <Error>: kCGErrorRangeCheck : Set a breakpoint at CGErrorBreakpoint() to catch errors as they are returned
        LaunchSynergy[52] _RegisterApplication(), FAILED TO establish the default connection to the WindowServer, _CGSDefaultConnection() is NULL.

    Is there a way to get this to work? It looks like the Mac's security doesn't want to allow anything to take control of the window server while at the login screen. I can understand that, but I'd like a way to override it, as it would make my life a lot easier.
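
    launchd does have a session type for the login screen: an agent installed in /Library/LaunchAgents with LimitLoadToSessionType set to LoginWindow runs in the loginwindow context rather than as a pre-login daemon, which avoids the WindowServer restriction above. A hedged plist sketch (the synergyc path and server name are examples; adjust to your install):

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
          "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>com.example.synergyc-loginwindow</string>
            <key>ProgramArguments</key>
            <array>
                <string>/usr/local/bin/synergyc</string>
                <string>--no-daemon</string>
                <string>mainmachine.local</string>
            </array>
            <key>LimitLoadToSessionType</key>
            <string>LoginWindow</string>
            <key>RunAtLoad</key>
            <true/>
            <key>KeepAlive</key>
            <true/>
        </dict>
        </plist>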

    Read the article

  • Remote server security: handling compiler tools

    - by Gonzolas
    Hello! I was wondering whether to remove compiler tools (gcc, make, ...) from a remote production server, mainly for security purposes.

    Background: the server runs a web application on Linux. Consider Apache jailed; otherwise, only OpenSSHd faces the public network. Of course there is no compiler stuff within the jail, so this is about the actual OS outside of any jails.

    Here's my personal PRO/CON list (regarding removal) so far:

    PRO: I had been reading suggestions to remove compiler tools in order to inhibit custom building of trojans etc. from within the host, should an attacker attain unprivileged user permissions.

    CON: I can't live without Perl/Python, and a trojan (or whatever) could be written in a scripting language like that anyway, so why bother removing gcc et al. at all? There is also a need to build new Linux kernels, as well as some security tools, from source directly on the server, because the server runs in 64-bit mode and (to my understanding) I can't (cross-)compile locally/elsewhere due to lack of another 64-bit hardware system.

    OK, so here are my questions for you:

    (a) Is my PRO/CON assessment correct?
    (b) Do you know of other PROs/CONs to removing all compiler tools? Do they weigh in more?
    (c) Which binaries should I consider dangerous if the given PRO statement holds? Only gcc, or also make, or what else? Should I remove the entire software packages they come with?
    (d) Is it OK to just move those binaries to a root-only accessible directory when they are not needed? Or is there a gain in security if I "scp them in" every time?

    Thank you!

    Read the article

  • .htaccess error "not allowed here" for all instructions

    - by andres descalzo
    I am using Debian Lenny and Apache 2. I changed the default site configuration to use:

        AllowOverride AuthConfig

    But I always get the error message "not allowed here" when putting any instructions in the .htaccess file.

    EDIT - the default site file:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost

            DocumentRoot /var/www/
            <Directory />
                Options FollowSymLinks
                Order allow,deny
                Allow from all
                AllowOverride All
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks Includes
                #AllowOverride All
                #AllowOverride Indexes AuthConfig Limit FileInfo
                AllowOverride AuthConfig
                Order allow,deny
                Allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog /var/log/apache2/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog /var/log/apache2/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    .htaccess:

        #Options +FollowSymlinks

        # Prevent directory listing
        Options -Indexes

        # Prevent direct access to files
        <FilesMatch "\.(tpl|ini)">
            Order deny,allow
            Deny from all
        </FilesMatch>

        # SEO URL settings
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)\?*$ index.php?_route_=$1 [L,QSA]

    PHP info (apache2handler):

        Apache Version = Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny10 with Suhosin-Patch
        Apache API Version = 20051115
        Server Administrator = webmaster@localhost
        Hostname:Port = hw-linux.homework:80
        User/Group = www-data(33)/33
        Max Requests = Per Child: 0 - Keep Alive: on - Max Per Connection: 100
        Timeouts = Connection: 300 - Keep-Alive: 15
        Virtual Server = Yes
        Server Root = /etc/apache2
        Loaded Modules = core mod_log_config mod_logio prefork http_core mod_so mod_alias mod_auth_basic mod_authn_file mod_authz_default mod_authz_groupfile mod_authz_host mod_authz_user mod_autoindex mod_cgi mod_deflate mod_dir mod_env mod_mime mod_negotiation mod_php5 mod_rewrite mod_setenvif mod_status
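
    Worth noting: the directives in that .htaccess need more override classes than AuthConfig alone. Options -Indexes needs the Options class, the Order/Deny lines inside FilesMatch need Limit, and the mod_rewrite directives need FileInfo - any directive outside the allowed classes triggers exactly this "not allowed here" error. A hedged sketch of the <Directory /var/www/> line that would cover everything shown:

        AllowOverride AuthConfig FileInfo Limit Options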

    Read the article

  • supervisord launches with wrong setuid

    - by friendzis
    I am trying to test a pilot system with nginx connecting to a uWSGI-served application controlled by supervisord, running on Ubuntu Server. The application is written in Python with Flask, in a virtualenv, although I'm not sure if that is relevant. To test the system I have created a simple hello world with Flask. I want nginx and uWSGI both to run as the www-data user.

    If I launch uWSGI "manually" from a root shell, I can see the uWSGI processes running as the appropriate user (www-data). However, if I let supervisor launch the application, something strange happens - the uWSGI processes run under my user (friendzis). Consequently, the socket file gets created under the wrong user and nginx cannot communicate with my application. Note: the Linux server runs as a Hyper-V VM, under Windows Server 2008.

    Relevant configuration - uwsgi:

        [uwsgi]
        socket = /var/www/sockets/cowsay.sock
        chmod-socket = 666
        abstract-socket = false
        master = true
        workers = 2
        uid = www-data
        gid = www-data
        chdir = /var/www/cowsay/cowsay
        pp = /var/www/cowsay/cowsay
        pyhome = /var/www/cowsay
        module = cowsay
        callable = app

    supervisor:

        [program:cowsay]
        command = /var/www/cowsay/bin/uwsgi -s /var/www/sockets/cowsay.sock -w cowsay:app
        directory = /var/www/cowsay/cowsay
        user = www-data
        autostart = true
        autorestart = true
        stdout_logfile = /var/www/cowsay/log/supervisor.log
        redirect_stderr = true
        stopsignal = QUIT

    I'm sure I'm missing some minor detail, but I'm unable to spot it. Would appreciate any suggestions.
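
    Worth noting: the supervisor command line never loads the [uwsgi] ini section, so the uid/gid settings there are not applied, and supervisord can only honor user = www-data if supervisord itself runs as root. A hedged sketch of a command that carries the privilege switch itself (flags as documented by uWSGI; paths from the question):

        [program:cowsay]
        command = /var/www/cowsay/bin/uwsgi --uid www-data --gid www-data -s /var/www/sockets/cowsay.sock -w cowsay:app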

    Read the article

  • Apache2, Tomcat6, and proxy redirects

    - by Randal Hale
    So here is my question - go easy and slow. I'm a GIS consultant and general hack with Linux. I inherited this volunteer job essentially because I knew more than the rest of the team - or the rest of the team isn't as stubborn as I am... With that said, a number of people have been mucking around in the server before I got involved, so I've been cleaning up a lot of things. The domain names have been changed to protect the innocent.

    I have a server running Apache2 (port 80) and Tomcat6 (8080) on Ubuntu Server 10.04. There is a virtual host on Apache2 called "Runner" (the domain is runner.org). I have mod_proxy loaded. I am trying to redirect everyone that visits runner.org to http://some.ip.address:8080/openrunner-webapp/

    So far I've gotten runner.org assigned to the Apache2 server. Someone set up a redirect in the httpd.conf file, but I believe it needs to go into the virtual host. I tried setting the redirect in the virtual host as:

        ProxyPass / http://localhost:8080/openrunner-webapp

    All that does is show me the root of the Apache webserver. Anyway, I'm stuck.
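
    For reference, a minimal sketch of what that vhost could look like with both directions of the proxy wired up - ProxyPassReverse keeps redirects issued by Tomcat pointing at runner.org, and trailing slashes matter to mod_proxy:

        <VirtualHost *:80>
            ServerName runner.org
            ProxyRequests Off
            ProxyPreserveHost On
            ProxyPass        / http://localhost:8080/openrunner-webapp/
            ProxyPassReverse / http://localhost:8080/openrunner-webapp/
        </VirtualHost>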

    Read the article

  • Sharing disk volumes across OpenVZ guests to reduce Package Management Overhead

    - by andyortlieb
    Is it feasible to create a single "master" OpenVZ guest that would only be used for package management, and use something like mount --bind on several other OpenVZ guests to sort of trick them into using the environment installed by the master guest? The point of this would be that users can maintain their own containers, yet stay in sync with the master development environment, so they'll always have the latest & greatest requirements without worrying too much about system administration. If they need to install their own packages, they could put them in /opt or /usr/local (or set a path to their home directory).

    To rephrase, I would like several (developers', for example) OpenVZ guests whose /bin, /usr (and so on...) actually refer to the same disk location as that of a master OpenVZ guest, which can be started up to install and update common packages for the environment shared by this whole group of guests. For what it's worth, we're running Debian 6.

    Edit: I have tried mounting (bind, and read-only) /bin, /lib, /sbin and /usr in this fashion, and it refuses to start the containers, stating that files are already mounted or otherwise in use:

        Starting container ...
        vzquota : (error) Quota on syscall for id 1102: Device or resource busy
        vzquota : (error) Possible reasons:
        vzquota : (error)       - Container's root is already mounted
        vzquota : (error)       - there are opened files inside Container's private area
        vzquota : (error)       - your current working directory is inside Container's
        vzquota : (error)         private area
        vzquota : (error) Currently used file(s):
        /var/lib/vz/private/1102/sbin
        /var/lib/vz/private/1102/usr
        /var/lib/vz/private/1102/lib
        /var/lib/vz/private/1102/bin
        vzquota on failed [3]

    If I unmount these four volumes and start the guest, and then mount them after the guest has started, the guest never sees them mounted.
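
    OpenVZ has a hook for exactly this timing problem: a per-container mount script, /etc/vz/conf/<VEID>.mount, runs after the container's root is mounted but before it starts, so binds made there land inside the container rather than fighting vzquota on the private area. A hedged sketch for container 1102 (the master guest's private-area path is an assumption):

        #!/bin/bash
        # /etc/vz/conf/1102.mount - bind the master guest's trees into this CT
        . /etc/vz/vz.conf
        . "$VE_CONFFILE"
        MASTER=/var/lib/vz/private/1101    # assumed VEID of the "master" guest

        for d in bin lib sbin usr; do
            # bind read-only onto the container's root, not its private area
            mount -n --bind -o ro "$MASTER/$d" "$VE_ROOT/$d"
        done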

    Read the article

  • mrepo and grouplist/groupinstall: mrepo not working as expected with groups

    - by user52874
    All, I'm trying to set up mrepo so we can have internal repositories. After quite the slog, things seem to be working as expected EXCEPT for groups. From man createrepo:

        EXAMPLES
            Here is an example of a repository with a groups file. Note that the
            groups file should be in the same directory as the rpm packages
            (i.e. /path/to/rpms/comps.xml).

            createrepo -g comps.xml /path/to/rpms

    So here's what I'm doing:

        wget -c http://ftp.scientificlinux.org/linux/scientific/6/x86_64/os/repodata/comps-sl6-x86_64.xml
        cp comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/Packages/comps-sl6-x86_64.xml
        createrepo -g comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/Packages/

    Lots of output, no apparent errors or warnings. BUT, from a client:

        yum grouplist
        Loaded plugins: refresh-packagekit
        Setting up Group Process
        Error: No group data available for configured repositories

    Here's /etc/mrepo.conf:

        ### Configuration file for mrepo

        ### The [main] section allows to override mrepo's default settings
        ### The mrepo-example.conf gives an overview of all the possible settings
        [main]
        srcdir = /var/mrepo
        wwwdir = /var/www/mrepo
        confdir = /etc/mrepo.conf.d
        arch = x86_64

        mailto = root@localhost
        smtp-server = localhost

        pxelinux = /usr/lib/syslinux/pxelinux.0
        tftpdir = /tftpboot

        #rhnlogin = username:password

        ### Any other section is considered a definition for a distribution
        ### You can put distribution sections in /etc/mrepo.conf.d
        ### Examples can be found in the documentation.

    Here's /etc/mrepo.conf.d/sl6.mrepo:

        ### Scientific Linux 6
        [SL6]
        name = Scientific Linux 6
        release = 6
        arch = x86_64
        metadata = repomd repoview

        os = rsync://rsync.scientificlinux.org/scientific/$release/$arch/os/
        updates = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/
        security = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/security/
        fastbugs = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/fastbugs/
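
    One thing to check: yum only sees group data if the comps file is referenced from the repodata/ directory that the client's baseurl actually points at. Here createrepo ran against the Packages/ source tree under srcdir, while clients are served (and mrepo regenerates metadata for) the tree under wwwdir. A hedged sketch of regenerating the served tree instead (the exact wwwdir layout is an assumption based on the settings above):

        createrepo -g comps-sl6-x86_64.xml /var/www/mrepo/SL6-x86_64/os/

    Bear in mind that the next mrepo metadata run may rewrite that repodata, so the -g step would need to be repeated (or hooked in) after each sync.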

    Read the article

  • Authenticating Active Directory Users to Mac OS X Mavericks Server L2TP VPN Service

    - by dean
    We have a Windows Server 2012 Active Directory infrastructure that consists of two domain controllers. Bound to the Active Directory domain is a Mac OS X Mavericks Server 10.9.3. That server runs Profile Manager and the VPN service. My Active Directory users are able to authenticate to Profile Manager, but not to the VPN. I have found several threads on other forums of users reporting similar issues; here is just one of many references: https://discussions.apple.com/thread/5174619

    It appears as though the issue is related to a CHAP authentication failure. Can anyone suggest what troubleshooting steps I might take next? Is there a way to liberalize the authentication mechanism to include MSCHAP? Here is an excerpt of the transaction from the logs. Please note the domain has been changed to example.com.

        Jun  6 15:25:03 profile-manager.example.com vpnd[10317]: Incoming call... Address given to client = 192.168.55.217
        Jun  6 15:25:03 profile-manager.example.com pppd[10677]: publish_entry SCDSet() failed: Success!
        Jun  6 15:25:03 --- last message repeated 2 times ---
        Jun  6 15:25:03 profile-manager.example.com pppd[10677]: pppd 2.4.2 (Apple version 727.90.1) started by root, uid 0
        Jun  6 15:25:03 profile-manager.example.com pppd[10677]: L2TP incoming call in progress from '108.46.112.181'...
        Jun  6 15:25:03 profile-manager.example.com racoon[257]: pfkey DELETE received: ESP 192.168.55.12[4500]->108.46.112.181[4500] spi=25137226(0x17f904a)
        Jun  6 15:25:04 profile-manager.example.com pppd[10677]: L2TP connection established.
        Jun  6 15:25:04 profile-manager kernel[0]: ppp0: is now delegating en0 (type 0x6, family 2, sub-family 0)
        Jun  6 15:25:04 profile-manager.example.com pppd[10677]: Connect: ppp0 <--> socket[34:18]
        Jun  6 15:25:04 profile-manager.example.com pppd[10677]: CHAP peer authentication failed for alex
        Jun  6 15:25:04 profile-manager.example.com pppd[10677]: Connection terminated.
        Jun  6 15:25:04 profile-manager.example.com pppd[10677]: L2TP disconnecting...
        Jun  6 15:25:04 profile-manager.example.com pppd[10677]: L2TP disconnected
        Jun  6 15:25:04 profile-manager.example.com vpnd[10317]: --> Client with address = 192.168.55.217 has hung up

    Read the article

  • Package issue with Ubuntu 10.10 and Passenger requirements

    - by user368937
    I'm trying to get Passenger working with Ubuntu 10.10 and I'm running into a problem. It seems that the Passenger installer is not recognizing the virtual package. I'm getting this error:

        passenger-install-apache2-module
        ...
        * OpenSSL support for Ruby... not found
        ...

    And then it says to run this:

        * To install OpenSSL support for Ruby:
          Please run apt-get install libopenssl-ruby as root.

    When I run the above command, it refers to the libruby package:

        sudo apt-get install libopenssl-ruby
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Note, selecting 'libruby' instead of 'libopenssl-ruby'
        libruby is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 43 not upgraded.

    When I look at the details for libruby, it says it provides libopenssl-ruby:

        Provides: libbigdecimal-ruby, libcurses-ruby, libdbm-ruby, libdl-ruby, libdrb-ruby, liberb-ruby, libgdbm-ruby, libiconv-ruby, libopenssl-ruby, libpty-ruby, libracc-runtime-ruby, libreadline-ruby, librexml-ruby, libsdbm-ruby, libstrscan-ruby, libsyslog-ruby, libtest-unit-ruby, libwebrick-ruby, libxmlrpc-ruby, libyaml-ruby, libzlib-ruby

    And when I rerun the Passenger installer, it gives the same "OpenSSL support for Ruby... not found" error. Let me know if you need more info. How do I fix this?
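
    A quick way to see whether Ruby's OpenSSL bindings are actually loadable, independent of what the package database claims - the installer is effectively attempting this require under the hood:

        ruby -ropenssl -e 'puts OpenSSL::OPENSSL_VERSION'

    If that raises a LoadError, the gap is real and not just a packaging alias; if it prints a version string, the Passenger installer is probably testing a different Ruby interpreter than the one first in your PATH.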

    Read the article

  • PDFs and Networked Printers

    - by Bart Silverstrim
    Weird issue. We have users printing to networked Windows-shared printers (print server: Win2003 SP2). Some users have been reporting recently that they can't print PDF documents to particular printers (two example printers are an HP 2430 with the PCL 6 driver and an HP 4250 with the PCL 6 driver).

    At first, we found that on many of these systems the "Everyone" object was added to the permissions for the root of the C: volume but had no permissions checked. We added modify privileges to it (these are Deep Freeze systems, so modifications we don't make as administrators won't persist) and they seemed to be able to print. Perhaps Acrobat Reader was writing a temp file for printing somewhere users didn't have permission, we surmised; we made the change and moved on.

    Yesterday the user called in saying it's still not working. Looked at it: bring up a PDF, click Print, and the Reader app says you have to install a printer. Look at the Printers folder (Windows XP workstation), and it has printers installed. Print a test page, return to AcroReader, and it will print fine to that printer. The whole time, web pages, MS Office documents, etc. print without issue to the same printers.

    Has anyone seen this issue with Acrobat Reader 9 and certain network printer drivers or shares involving HP printers? I'd post this to SuperUser, but it seems to be associated with a networked printer issue, it seems to affect subsets of users but may be more widespread (our users assume we just know about it, so they don't report it), and I've not found rhyme or reason as to why it's affecting just PDF printing and particular printers. The print spoolers are all running on the workstations and print server without errors being logged so far, but I'm going through the logs now to see if I can find anything out of place.

    Read the article

  • OSX: Mimic Ubuntu IP Masquerading via iptables with ipfw

    - by Dogbert
    Good day, I am attempting to replicate, on my MacBook (10.6, Snow Leopard), a setup I have working between a router and an Ubuntu PC.

    First, I have a router that has a USB port. When I plug it into my Ubuntu PC, it creates an RNDIS connection, allowing me to connect to the router over the USB cable via an IP connection. When plugged in via USB, it gets assigned the IP address 172.16.84.1, and a new adapter appears when I type ifconfig. I can then SSH into the device via ssh [email protected]. When I log in to the device, I flush the routes, then create the default route:

        admin@localhost> route -f
        admin@localhost> route add default 172.16.84.2

    Now, on my Ubuntu machine, I use iptables to enable IP masquerading:

        root@Valhalla> sudo iptables -t nat -A POSTROUTING -s 172.16.84.2 -j MASQUERADE

    Once this is all done, the router has internet access over the USB connection to my PC.

    I am trying to replicate this exact setup on my MacBook, but iptables does not exist for OS X - not even a MacPorts version. I have scoured other questions on StackOverflow that cover the ipfw command, which apparently works as a drop-in replacement for iptables. However, the syntax is significantly different, and I'm pretty much lost. Does anyone with ipfw experience have suggestions on how I could accomplish this and create a NAT connection via IP masquerading, like I could with my Ubuntu PC? Thank you for your assistance.
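
    On Snow Leopard the masquerading role is split between natd, which performs the address translation, and an ipfw divert rule that feeds it packets. A hedged sketch, assuming en0 is the interface with internet access (the USB/RNDIS adapter would be whichever enX the router creates):

        # enable forwarding between interfaces
        sudo sysctl -w net.inet.ip.forwarding=1

        # start the NAT daemon on the outbound interface
        sudo natd -interface en0

        # divert traffic crossing en0 through natd
        sudo ipfw add divert natd all from any to any via en0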

    Read the article

  • "ImportError: No module named flask" - Trouble with nginx + uWSGI + Flask in a virtualenv setup

    - by vjk2005
    I got nginx + uWSGI running on localhost inside a virtualenv with a simple hello world program, but I get this error when I replace the hello world with a simple Flask app:

        File "./wsgi_configuration_module.py", line 1, in <module>
            from flask import Flask
        ImportError: No module named flask
        unable to load app mountpoint

    Here's the Flask app (wsgi_configuration_module.py):

        from flask import Flask

        application = Flask(__name__)

        @application.route("/")
        def hello():
            return "hello world"

        if __name__ == "__main__":
            application.run()

    uWSGI config (app_conf.xml):

        <uwsgi>
            <socket>127.0.0.1:9001</socket>
            <chdir>/srv/www/labs/application</chdir>
            <pythonpath>/srv/www</pythonpath>
            <module>wsgi_configuration_module</module>
            <callable>application</callable>
            <no-site>true</no-site>
        </uwsgi>

    nginx config:

        server {
            listen 80;
            server_name localhost;

            access_log /srv/www/labs/logs/access.log;
            error_log /srv/www/labs/logs/error.log;

            location / {
                include uwsgi_params;
                uwsgi_pass 127.0.0.1:9001;
            }

            location /static {
                root /srv/www/labs/public_html/static/;
                index index.html index.htm;
            }
        }

    The virtualenvs are stored in ~/virtual_env, with Python 2.7 + nginx + uWSGI + Flask installed in a virtualenv called "basic". Things I've tried to solve this: setting the --home (-H) option to my virtualenv folder ~/virtual_env while running uWSGI. Other info: I have the same setup working outside of a virtualenv; things go wrong only when I try to replicate the setup inside one. Where have I gone wrong?
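
    Two details in that setup work against a virtualenv: --home must point at the virtualenv itself (~/virtual_env/basic, not the ~/virtual_env parent folder), and <no-site>true</no-site> stops Python from processing site-packages - which is exactly where the venv's Flask lives. A hedged sketch of the adjusted XML (/home/youruser is a placeholder for the expanded home directory; the "basic" path comes from the description above):

        <uwsgi>
            <socket>127.0.0.1:9001</socket>
            <home>/home/youruser/virtual_env/basic</home>
            <chdir>/srv/www/labs/application</chdir>
            <module>wsgi_configuration_module</module>
            <callable>application</callable>
        </uwsgi>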

    Read the article

  • Two instances of Windows Vista on boot up after failed clean install

    - by Dwayne
    I tried to install a clean version of Vista but failed. I ended up with Windows and Windows.old folders on my C: drive and a dual-boot option on boot up. I gave up, booted into the old version, and tried to rename Windows.old to Windows; I was asked if I wanted to merge the two folders and answered yes. All seemed OK until I booted up this morning and was given the choice of two versions of Vista: the first is the one that failed to install correctly, and the second is the old version. How can I get rid of the failed installation?

    I got rid of the bad boot entry via MSCONFIG. Here is my current situation:

      several hard drives installed
      C: as my boot drive
      a much larger drive (H:) for storing most of my files

    I found a subfolder named windows inside my C:\Windows folder. Upon inspection I determined it to be older than the C:\Windows folder itself, and therefore it must be the older, working version of the boot. I renamed the C:\Windows folder to C:\windows.bad and moved the windows subfolder to the C: root directory. I also copied it to the H: drive. Now MSCONFIG reports that the copy that is booting is the H: copy. How can I change it back to the C: copy, and can I delete the C:\windows.bad file set?
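
    Vista's boot choices live in the BCD store, and MSCONFIG is only a front end to it; bcdedit from an elevated command prompt shows and edits the entries directly. A hedged sketch of inspecting the current loader entry and pointing it back at the C: copy (identifiers will differ per system):

        bcdedit /enum osloader

        :: point the current loader entry at the C: copy again
        bcdedit /set {current} device partition=C:
        bcdedit /set {current} osdevice partition=C: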

    Read the article

  • Fixing corrupt files or corrupt file table on a USB drive?

    - by Kelsey
    I was doing a virus scan on an external USB drive while copying data over to it. While AVG was scanning, my system locked up - I think due to the USB drive running out of space - and it required a reboot. Since then, none of the data on the external drive is accessible. I can see all the files in the root and the directories, but I cannot browse into any of them; Windows 7 gives an error stating they are corrupt.

    I think the file table, or whatever it uses to store the index of what exists on the drive, has been corrupted, since it still shows the drive as being almost full, yet everything I run a properties check on says it is 0 bytes. Does anyone know how to 'unlock' or recover this data? Is there a way to rebuild the file table somehow? Luckily I can recover this data from other sources as a last resort, but I would like to fix this if possible. Any help would be appreciated. Thanks.
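
    The built-in first step for a damaged file table is chkdsk, which can repair both NTFS's MFT and FAT's allocation tables. Read-only recovery tools are safer when the data is irreplaceable, but since a fallback copy exists, a hedged sketch (assuming the drive shows up as E:):

        :: check and repair the filesystem structures on the USB drive
        chkdsk E: /f

    /f fixes filesystem errors; adding /r additionally scans for bad sectors, at the cost of a much longer run.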

    Read the article

  • Wireless USB keyboard and mouse can wake system, but then receiver is inactive

    - by BlueMonkMN
    I have a Microsoft-brand USB device that acts as a receiver for a wireless Microsoft keyboard and a wireless mouse. When it's operating normally, there are LEDs on the device indicating Caps Lock, Num Lock and Function Lock, of which the latter two are usually lit. It is plugged into a Dell Inspiron 531 with Windows 7 32-bit, running on an AMD Athlon 64 X2 Dual Core 5000+ processor. When the computer goes to sleep (the power indicator on the main box is flashing), I can wake it by moving the mouse. So far all is good.

    However, something changed in, I think, the past couple of weeks (I suspect due to a Microsoft driver update problem). Before the change, after waking the computer, everything would operate normally as far as I could tell. Now, after waking the computer, the receiver has no lights on, and the keyboard and mouse are completely unresponsive (which is odd, considering the mouse woke up the computer). There is a button on the receiver that's supposed to reset the wireless connection and flash the lights while doing so, but it has no effect in this state. It's like the receiver doesn't have power - but how would the system know I moved the mouse, unless the power was on until it woke up?

    I have checked the BIOS/CMOS settings (or whatever you call them) and did not see anything related to USB in the power management section. I have checked Windows 7 Device Manager and ensured that all the USB Root Hub devices have the setting for allowing the USB power to be turned off unchecked. Like I said, this was working before, and the only thing I can think of that's changed is applying Windows Updates.

    Read the article

  • MySQL is not connecting after data directory change

    - by user123827
    I've changed the data directory in /etc/my.cnf:

        datadir=/data/mysql
        socket=/data/mysql/mysql.sock

    I also moved the mysql folder from /var/lib/mysql/ to /data/mysql. Now when I connect to mysql I get the following error:

        [root@youradstats-copy mysql]# mysql
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)

    Also, when I check /var/log/mysqld.log, I see the following messages:

        InnoDB: Setting log file /data/mysql/ib_logfile0 size to 512 MB
        InnoDB: Database physically writes the file full: wait...
        InnoDB: Progress in MB: 100 200 300 400 500
        120704  7:43:31  InnoDB: Log file /data/mysql/ib_logfile1 did not exist: new to be created
        InnoDB: Setting log file /data/mysql/ib_logfile1 size to 512 MB
        InnoDB: Database physically writes the file full: wait...
        InnoDB: Progress in MB: 100 200 300 400 500
        InnoDB: Cannot initialize created log files because
        InnoDB: data files are corrupt, or new data files were
        InnoDB: created when the database was started previous
        InnoDB: time but the database was not shut down
        InnoDB: normally after that.
        120704  7:43:36 [ERROR] Plugin 'InnoDB' init function returned error.
        120704  7:43:36 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.

    I shut down mysql properly before making these changes and then started it properly afterwards, but I don't know why I'm getting these messages. Please help me solve this issue, as I have changed the socket path in my.cnf but the client is still pointing to the old path...
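
    The socket line in my.cnf only takes effect for the section it sits in: if it is under [mysqld], the server creates the socket at the new path while the mysql command-line client still defaults to /var/lib/mysql/mysql.sock. A hedged sketch of covering both sides in /etc/my.cnf:

        [mysqld]
        datadir=/data/mysql
        socket=/data/mysql/mysql.sock

        [client]
        socket=/data/mysql/mysql.sock

    The InnoDB complaint looks like a separate symptom worth checking: brand-new 512 MB ib_logfile0/1 were created in the new directory, which suggests the original InnoDB log files did not make the move along with the rest of /var/lib/mysql.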

    Read the article
