Search Results

Search found 5419 results on 217 pages for 'warning'.

Page 113 of 217

  • Unable to configure Ruby with readline

    - by Liam Berg
    1) ./configure --prefix=$HOME/.packages --with-readline-dir=$HOME/.packages
    2) configure: WARNING: unrecognized options: --with-readline-dir

    I am trying to set up the most up-to-date version of Ruby on my web host (I do not have sudo access). Line 1 is the configure command I used for Ruby, and line 2 is the first line printed after executing configure. I've googled this issue and found other people with the same problem, but there aren't any real solutions. There are no warnings or errors when configuring/compiling readline-6.1. I am pretty stumped; any help/insight would be greatly appreciated. Thanks ahead of time.
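
    A sketch of what generally works for source builds without root, assuming readline-6.1 was installed under $HOME/.packages (paths illustrative): Ruby's top-level configure accepts --with-opt-dir, which adds that prefix to the include/library search paths for all bundled extensions, so the readline extension can find it even though the top-level autoconf script flags --with-readline-dir as unrecognized.

        # build readline into the private prefix first
        cd readline-6.1
        ./configure --prefix=$HOME/.packages && make && make install

        # then point Ruby's configure at that prefix
        cd ../ruby-1.9.1        # version illustrative
        ./configure --prefix=$HOME/.packages --with-opt-dir=$HOME/.packages
        make && make install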

  • Computer can't boot

    - by zETO
    I have a one-year-old PC, and in the last few weeks, when I press the power button, the PC doesn't power on (as if I hadn't pressed the button at all). I have to unplug and replug the power cord a few times to make it work. At first this happened once in ten boots; now it happens much more frequently, to the point that when I press the power button it never starts unless I do the cord-switching thing first. Very rarely, the PC also shuts down for no reason and with no warning, even when idle. The important fact to note here is that the green light on the motherboard is always on, even when the PC doesn't power on. What should I do? Is it a power supply failure or a motherboard failure? My power supply is a high-end Corsair model, the AX850, and my motherboard is a high-end ASUS.

  • CentOS 6.3 vsftpd unable to upload files to Apache web server

    - by user148648
    I am new to CentOS; I worked with Sun Solaris and uploaded files to an Apache web server before. I created an end-user account and managed to FTP to the server from the command prompt, but I get the error message '226 Transfer done (but failed to open directory)'. The content of my vsftpd.conf is below:

        # Example config file /etc/vsftpd/vsftpd.conf
        #
        # The default compiled in settings are fairly paranoid. This sample file
        # loosens things up a bit, to make the ftp daemon more usable.
        # Please see vsftpd.conf.5 for all compiled in defaults.
        #
        # READ THIS: This example file is NOT an exhaustive list of vsftpd options.
        # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's
        # capabilities.
        #
        # Allow anonymous FTP? (Beware - allowed by default if you comment this out).
        anonymous_enable=YES
        # ** may need to comment it back
        #
        # Uncomment this to allow local users to log in.
        local_enable=YES
        #
        # Uncomment this to enable any form of FTP write command.
        write_enable=YES
        #
        # Default umask for local users is 077. You may wish to change this to 022,
        # if your users expect that (022 is used by most other ftpd's)
        #local_umask=022
        local_umask=077
        #
        # Uncomment this to allow the anonymous FTP user to upload files. This only
        # has an effect if the above global write enable is activated. Also, you will
        # obviously need to create a directory writable by the FTP user.
        anon_upload_enable=YES
        # *** maybe to comment it back!!!
        #
        # Uncomment this if you want the anonymous FTP user to be able to create
        # new directories.
        anon_mkdir_write_enable=YES
        # ** may need to comment it back!!!
        #
        # Activate directory messages - messages given to remote users when they
        # go into a certain directory.
        dirmessage_enable=YES
        #
        # The target log file can be vsftpd_log_file or xferlog_file.
        # This depends on setting xferlog_std_format parameter
        xferlog_enable=YES
        #
        # Make sure PORT transfer connections originate from port 20 (ftp-data).
        connect_from_port_20=YES
        #
        # If you want, you can arrange for uploaded anonymous files to be owned by
        # a different user. Note! Using "root" for uploaded files is not
        # recommended!
        #chown_uploads=YES
        #chown_username=whoever
        #
        # The name of log file when xferlog_enable=YES and xferlog_std_format=YES
        # WARNING - changing this filename affects /etc/logrotate.d/vsftpd.log
        xferlog_file=/var/log/xferlog
        #
        # Switches between logging into vsftpd_log_file and xferlog_file files.
        # NO writes to vsftpd_log_file, YES to xferlog_file
        xferlog_std_format=YES
        #
        # You may change the default value for timing out an idle session.
        #idle_session_timeout=600
        #
        # You may change the default value for timing out a data connection.
        #data_connection_timeout=120
        #
        # It is recommended that you define on your system a unique user which the
        # ftp server can use as a totally isolated and unprivileged user.
        #nopriv_user=ftpsecure
        #
        # Enable this and the server will recognise asynchronous ABOR requests. Not
        # recommended for security (the code is non-trivial). Not enabling it,
        # however, may confuse older FTP clients.
        #async_abor_enable=YES
        #
        # By default the server will pretend to allow ASCII mode but in fact ignore
        # the request. Turn on the below options to have the server actually do ASCII
        # mangling on files when in ASCII mode.
        # Beware that on some FTP servers, ASCII support allows a denial of service
        # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd
        # predicted this attack and has always been safe, reporting the size of the
        # raw file.
        # ASCII mangling is a horrible feature of the protocol.
        ascii_upload_enable=YES
        ascii_download_enable=YES
        #
        # You may fully customise the login banner string:
        ftpd_banner=Warning, only for authorize login.
        #
        # You may specify a file of disallowed anonymous e-mail addresses. Apparently
        # useful for combatting certain DoS attacks.
        #deny_email_enable=YES
        # (default follows)
        #banned_email_file=/etc/vsftpd/banned_emails
        #
        # You may specify an explicit list of local users to chroot() to their home
        # directory. If chroot_local_user is YES, then this list becomes a list of
        # users to NOT chroot().
        chroot_local_user=YES
        chroot_list_enable=YES
        # (default follows)
        #chroot_list_file=/etc/vsftpd/chroot_list
        local_root=/var/www
        #
        # You may activate the "-R" option to the builtin ls. This is disabled by
        # default to avoid remote users being able to cause excessive I/O on large
        # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume
        # the presence of the "-R" option, so there is a strong case for enabling it.
        ls_recurse_enable=YES
        #
        # When "listen" directive is enabled, vsftpd runs in standalone mode and
        # listens on IPv4 sockets. This directive cannot be used in conjunction
        # with the listen_ipv6 directive.
        listen=YES
        #
        # This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6
        # sockets, you must run two copies of vsftpd with two configuration files.
        # Make sure, that one of the listen options is commented !!
        #listen_ipv6=YES

        pam_service_name=vsftpd
        userlist_enable=YES
        tcp_wrappers=YES
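
    On a stock CentOS box, a '226 Transfer done (but failed to open directory)' on listing often comes from SELinux rather than vsftpd itself. A hedged first check (boolean names are from CentOS 6 and may differ on other releases), plus a sanity check on the tree being served:

        # list the FTP-related SELinux booleans and their current state
        getsebool -a | grep ftp

        # let vsftpd read/write user-accessible content (persists across reboots)
        setsebool -P allow_ftpd_full_access on

        # confirm the chroot target is readable by the FTP login
        ls -ld /var/www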

  • Migrating from Apache2 to Lighttpd creating errors in PHP/mySQL?

    - by Jean-Philippe Murray
    Ok, I've been using basic Ubuntu LAMP setups for years now, and I wanted to give lighttpd a try. My LAMP setup runs in a virtual machine with scripts running just fine. So I created a new virtual machine, starting with a fresh install of Ubuntu, and did my setup. On this new VM, lighttpd + PHP works just fine (or at least it seems to...). The problem occurs when I take the scripts from my LAMP setup and upload them to the new VM. I'm getting:

        Warning: mysql_real_escape_string(): Access denied for user 'www-data'@'localhost' (using password: NO)

    My lighttpd setup is configured as php-cgi, but my Apache2 setup is not. Could this be the source of the problem? I would have thought that scripts are independent of the server configuration, so I doubt it. Also, I know that my DB connection details are good (as I can log in via phpMyAdmin perfectly). I'm in the dark here, any pointers? Thanks,
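
    The "(using password: NO)" part is the clue: mysql_real_escape_string() was called before any mysql_connect(), so PHP tried to open an implicit connection using the defaults from php.ini, and those defaults evidently differ between the two machines. Since php-cgi under lighttpd may load a different php.ini than Apache's mod_php did, comparing the two is a quick hedged check (binary names are assumptions):

        # which php.ini does the CGI binary actually read?
        php-cgi -i | grep "Loaded Configuration File"

        # what implicit-connection defaults does it see?
        php-cgi -i | grep -E "mysql.default_(host|user|password)"

    The robust fix is in the scripts themselves: always establish the connection (and pass the link resource) before calling mysql_real_escape_string().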

  • Install APC on Red Hat

    - by zackaryka
    I am trying to install APC on Red Hat, so I ran:

        pecl install apc

    I said yes to:

        Use apxs to set compile flags (if using APC with Apache)? [yes]:

    and I get this:

        checking for re2c... no
        configure: WARNING: You will need re2c 0.9.11 or later if you want to regenerate PHP parsers.

    and:

        checking whether apc needs to get compiler flags from apxs...
        Sorry, I was not able to successfully run APXS. Possible reasons:
        1. Perl is not installed;
        2. Apache was not compiled with DSO support (--enable-module=so);
        3. 'apxs' is not in your path. Try to use --with-apxs=/path/to/apxs
        The output of apxs follows:
        /tmp/tmpJQuZdD/APC-3.0.16/configure: line 3846: apxs: command not found
        configure: error: Aborting
        ERROR: `/tmp/tmpJQuZdD/APC-3.0.16/configure --with-apxs' failed

    What could be the problem? Thanks
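
    apxs is installed by Apache's development package, so the usual fix on Red Hat-style systems is simply to pull that in before retrying (package names assume a stock yum setup):

        # apxs ships in httpd-devel; php-devel provides phpize for pecl builds
        yum install httpd-devel php-devel

        # retry once apxs is on the PATH
        pecl install apc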

  • Maven Sonar problem

    - by senzacionale
    I want to use Sonar for analysis, but I can't get any data at localhost:9000. This is my pom.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <project xmlns="http://maven.apache.org/POM/4.0.0"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
          <modelVersion>4.0.0</modelVersion>
          <artifactId>KIS</artifactId>
          <groupId>KIS</groupId>
          <version>1.0</version>
          <build>
            <plugins>
              <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-antrun-plugin</artifactId>
                <version>1.4</version>
                <executions>
                  <execution>
                    <id>compile</id>
                    <phase>compile</phase>
                    <configuration>
                      <tasks>
                        <property name="compile_classpath" refid="maven.compile.classpath"/>
                        <property name="runtime_classpath" refid="maven.runtime.classpath"/>
                        <property name="test_classpath" refid="maven.test.classpath"/>
                        <property name="plugin_classpath" refid="maven.plugin.classpath"/>
                        <ant antfile="${basedir}/build.xml">
                          <target name="maven-compile"/>
                        </ant>
                      </tasks>
                    </configuration>
                    <goals>
                      <goal>run</goal>
                    </goals>
                  </execution>
                </executions>
              </plugin>
            </plugins>
          </build>
        </project>

    Output when running Sonar (note that the jar file is empty):

        [INFO] Executed tasks
        [INFO] [resources:testResources {execution: default-testResources}]
        [WARNING] Using platform encoding (Cp1250 actually) to copy filtered resources, i.e. build is platform dependent!
        [INFO] skip non existing resourceDirectory J:\ostalo_6i\KIS deploy\ANT\src\test\resources
        [INFO] [compiler:testCompile {execution: default-testCompile}]
        [INFO] No sources to compile
        [INFO] [surefire:test {execution: default-test}]
        [INFO] No tests to run.
        [INFO] [jar:jar {execution: default-jar}]
        [WARNING] JAR will be empty - no content was marked for inclusion!
        [INFO] Building jar: J:\ostalo_6i\KIS deploy\ANT\target\KIS-1.0.jar
        [INFO] [install:install {execution: default-install}]
        [INFO] Installing J:\ostalo_6i\KIS deploy\ANT\target\KIS-1.0.jar to C:\Documents and Settings\MitjaG\.m2\repository\KIS\KIS\1.0\KIS-1.0.jar
        [INFO] ------------------------------------------------------------------------
        [INFO] Building Unnamed - KIS:KIS:jar:1.0
        [INFO]   task-segment: [sonar:sonar] (aggregator-style)
        [INFO] ------------------------------------------------------------------------
        [INFO] [sonar:sonar {execution: default-cli}]
        [INFO] Sonar host: http://localhost:9000
        [INFO] Sonar version: 2.1.2
        [INFO] [sonar-core:internal {execution: default-internal}]
        [INFO] Database dialect class org.sonar.api.database.dialect.Oracle
        [INFO] ------------- Analyzing Unnamed - KIS:KIS:jar:1.0
        [INFO] Selected quality profile : KIS, language=java
        [INFO] Configure maven plugins...
        [INFO] Sensor SquidSensor...
        [INFO] Sensor SquidSensor done: 16 ms
        [INFO] Sensor JavaSourceImporter...
        [INFO] Sensor JavaSourceImporter done: 0 ms
        [INFO] Sensor AsynchronousMeasuresSensor...
        [INFO] Sensor AsynchronousMeasuresSensor done: 15 ms
        [INFO] Sensor SurefireSensor...
        [INFO] parsing J:\ostalo_6i\KIS deploy\ANT\target\surefire-reports
        [INFO] Sensor SurefireSensor done: 47 ms
        [INFO] Sensor ProfileSensor...
        [INFO] Sensor ProfileSensor done: 16 ms
        [INFO] Sensor ProjectLinksSensor...
        [INFO] Sensor ProjectLinksSensor done: 0 ms
        [INFO] Sensor VersionEventsSensor...
        [INFO] Sensor VersionEventsSensor done: 31 ms
        [INFO] Sensor CpdSensor...
        [INFO] Sensor CpdSensor done: 0 ms
        [INFO] Sensor Maven dependencies...
        [INFO] Sensor Maven dependencies done: 16 ms
        [INFO] Execute decorators...
        [INFO] ANALYSIS SUCCESSFUL, you can browse http://localhost:9000
        [INFO] Database optimization...
        [INFO] Database optimization done: 172 ms
        [INFO] ------------------------------------------------------------------------
        [INFO] BUILD SUCCESSFUL
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 6 minutes 16 seconds
        [INFO] Finished at: Fri Jun 11 08:28:26 CEST 2010
        [INFO] Final Memory: 24M/43M
        [INFO] ------------------------------------------------------------------------

    Any idea why? I successfully compile the Java project with the Maven Ant plugin.
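
    The Maven log already shows why Sonar has nothing to display: Maven compiles no sources ("No sources to compile") and packages an empty jar ("JAR will be empty"), because the Ant-built code does not live in Maven's default src/main/java. The Sonar sensors only analyze what Maven sees, so pointing Maven at the real source tree is the first step. A hedged sketch, assuming the Java sources sit in a src directory next to build.xml (adjust the path to whatever your Ant target compiles):

        <build>
          <!-- hypothetical path: wherever build.xml takes its sources from -->
          <sourceDirectory>${basedir}/src</sourceDirectory>
          <plugins>
            <!-- existing maven-antrun-plugin configuration stays here -->
          </plugins>
        </build>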

  • How to Set Linux Bonding Interface to Gigabit

    - by Kyle Brandt
    I have enabled Linux active-backup mode bonding. Each interface is a gigabit interface, but the bond interface seems to end up at 100 Megabit:

        bonding: bond0: Warning: failed to get speed and duplex from eth1, assumed to be 100Mb/sec and Full.
        ...
        bnx2: eth0 NIC Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
        ...
        bonding: bond0: backup interface eth1 is now up

    ethtool apparently can't provide info on the bond:

        sudo ethtool bond0
        Settings for bond0:
        No data available

    So does this mean I am operating at 100 or 1000 Megabit (my guess is 1000)? If it is only 100, what options in the ifcfg scripts or the modprobe bonding options do I need to set to make it 1000?
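
    The speed that matters is the one each slave negotiated; the bond device itself reports nothing useful to ethtool here. A couple of checks, plus a hedged tweak (RHEL-style syntax, values illustrative): the "failed to get speed" warning typically goes away once MII monitoring is enabled, which also lets the bonding driver read the slave's real speed.

        # negotiated speed of each slave NIC
        sudo ethtool eth0 | grep Speed
        sudo ethtool eth1 | grep Speed

        # the bonding driver's own view of its slaves
        cat /proc/net/bonding/bond0

        # /etc/sysconfig/network-scripts/ifcfg-bond0 (sketch)
        # BONDING_OPTS="mode=active-backup miimon=100"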

  • Event ID 8021 The browser was unable to retrieve a list of servers from the browser master

    - by Ash
    We have a LAN where workstations randomly lose network connectivity for brief moments. The workstations can also take a long time to log in to the domain. During our troubleshooting we found an error logged on a few Windows 7 workstations:

        Warning BROWSER 8021
        The browser was unable to retrieve a list of servers from the browser master \\random-pc on the network \Device\NetBT_Tcpip_{BBABCDE9-D8A0-4399-93F2-492FE0848B12}. The data is the error code.

    What do these errors mean? Which computers should have the Computer Browser service enabled: workstations and/or servers? The environment is a mix of Windows 7 and Windows XP workstations on a Windows Server SBS 2011 SP1 domain.
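
    A common rule of thumb is to let only one or two stable machines (typically the server) win browser elections. A hedged way to take the Windows 7 workstations out of the contest is to stop and disable the Computer Browser service on them:

        REM run from an elevated command prompt on each workstation
        sc config browser start= disabled
        net stop browser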

  • Compiling zip component for PHP 5.2.11 in MAMP PRO

    - by Zlatoroh
    Hello. I installed MAMP PRO on my MacBook Pro (10.6) some time ago. Now I would like to use the zip functions in PHP. I found that I must add zip.so to my extension folder, and I edited php.ini. On my computer I have two different versions of PHP: one in the MAMP folder and one in /usr/lib, which was pre-installed on my system. Now I wish to compile the zip library for the MAMP version. I got the zip sources for my version of PHP, then in the terminal ran:

        /Applications/MAMP/bin/php5/bin/phpize
        ./configure
        make

    (using phpize from MAMP so that it uses MAMP's PHP version), and then I moved the compiled zip.so to extensions/no-debug-non-zts-20060613. When MAMP is launched it returns this error:

        [11-Apr-2010 16:33:27] PHP Warning: PHP Startup: zip: Unable to initialize module
        Module compiled with module API=20090626, debug=0, thread-safety=0
        PHP compiled with module API=20060613, debug=0, thread-safety=0
        These options need to match in Unknown on line 0

    Can somebody explain to me how to do this the right way?
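
    The two API numbers in the warning say the extension was built against PHP 5.3 headers (20090626) while MAMP's PHP is 5.2 (20060613), so the zip sources that got compiled probably came from the wrong PHP release. A sketch of a matching build, assuming the PHP 5.2.11 source tree has been downloaded and MAMP's layout is stock (paths are assumptions):

        # build ext/zip from the same PHP version MAMP runs, with MAMP's own tools
        cd php-5.2.11/ext/zip
        /Applications/MAMP/bin/php5/bin/phpize
        ./configure --with-php-config=/Applications/MAMP/bin/php5/bin/php-config
        make

        # install into the extension dir the error message names
        cp modules/zip.so /Applications/MAMP/bin/php5/lib/php/extensions/no-debug-non-zts-20060613/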

  • Hosting 2 different SSL domains on the same IP [closed]

    - by Jim
    Possible Duplicate: Multiple SSL domains on the same IP address and same port?

    I have two different domains, domain1.com and domain2.net. Both require SSL certificates. I've had the certificates signed by a CA and now need to set them up in Apache. I have a single IP address and found a post here (SSL site not using the correct IP in Apache and Ubuntu) that apparently works for Ubuntu, but I am on CentOS and have not had the same luck trying that configuration. When I try it, Apache dies, telling me that the port is already in use:

        (98)Address already in use: make_sock: could not bind to address 0.0.0.0:443

    I had SSL working for a single domain, but now that I've added the second, I get a browser warning about the cert not belonging to the domain. How do I get both domains to work using virtual host containers?
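
    The bind error suggests Listen 443 is declared twice (for example once in ssl.conf and once in a vhost include), which is a separate problem from serving two certificates. With the duplicate removed, a minimal sketch of name-based SSL vhosts on one IP (requires SNI, i.e. a reasonably recent Apache 2.2 and OpenSSL; certificate paths are illustrative):

        # declare the port once, then let vhosts match by name
        Listen 443
        NameVirtualHost *:443

        <VirtualHost *:443>
            ServerName domain1.com
            SSLEngine on
            SSLCertificateFile /etc/pki/tls/certs/domain1.com.crt
            SSLCertificateKeyFile /etc/pki/tls/private/domain1.com.key
        </VirtualHost>

        <VirtualHost *:443>
            ServerName domain2.net
            SSLEngine on
            SSLCertificateFile /etc/pki/tls/certs/domain2.net.crt
            SSLCertificateKeyFile /etc/pki/tls/private/domain2.net.key
        </VirtualHost>

    Old clients without SNI support will still be handed the first vhost's certificate; the only alternatives in that case are extra IPs or a SAN/wildcard certificate.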

  • Solaris non-global zone X server

    - by ankimal
    I am not even sure if this is possible, but how can I start an X server in a non-global zone? Here is what happens if I run startx from within my zone (I created the xorg.conf by running /usr/X11/bin/xorgconfig):

        root@foo:/usr/X11/bin# startx
        xauth: creating new authority file /root/.serverauth.20957

        X.Org X Server 1.5.3
        Release Date: 5 November 2008
        X Protocol Version 11, Revision 0
        Build Operating System: SunOS 5.11 snv_108 i86pc
        Current Operating System: SunOS dsol101 5.11 snv_111b i86pc
        Build Date: 07 May 2009 04:44:56PM
        Solaris ABI: 64-bit
        SUNWxorg-server package version: 6.9.0.5.11.11100,REV=0.2009.05.07
        SUNWxorg-mesa package version: 6.9.0.5.11.11100,REV=0.2009.04.02
        Before reporting problems, check http://sunsolve.sun.com/ to make sure that you have the latest version.
        Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
        (==) Log file: "/var/log/Xorg.0.log", Time: Tue Nov 10 19:17:53 2009
        (==) Using config file: "/etc/X11/xorg.conf"

        Fatal server error:
        xf86OpenConsole: Cannot open /dev/fb (No such file or directory)
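
    A non-global zone has no framebuffer device (/dev/fb), so a hardware X server cannot start inside it; the usual workarounds are to run the X server in the global zone (or over VNC) and run only the X clients in the zone. A hedged sketch using a virtual framebuffer instead, assuming Xvfb and xterm live under /usr/X11/bin on this build:

        # start an X server that renders into memory instead of /dev/fb
        /usr/X11/bin/Xvfb :1 -screen 0 1024x768x24 &

        # point clients inside the zone at it
        DISPLAY=:1 /usr/X11/bin/xterm &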

  • Recovery of Pinnacle Studio Project Files

    - by seanieb
    My external hard drive had some sort of issue a few months ago, but I was able to recover my files using a data-recovery program. However, my Pinnacle Studio project files are not recovered as they were before: they come back as directories/folders containing subdirectories and files. I have tried several different recovery programs, and they all recover the projects as directories. Each project contains one file called README.TXT:

        * WARNING
        This directory contains the descriptive data of the project, split into
        various subdirectories and files for better access.
        DO NOT EDIT, ADD, CHANGE OR MODIFY ANY OF IT'S CONTENTS!

    This gives me hope that I could somehow just turn the directory back into a .stu Pinnacle Studio project file. How would I go about doing this? Or is there another way to solve this problem?

  • HTTP Requests Fail for WordPress sites after upgrading from PHP 5.2 to PHP 5.3

    - by anjanesh
    I've upgraded from Ubuntu 9.10 to 10.04 beta 1, and PHP got upgraded from 5.2 to 5.3. Now none of my WordPress and Magento sites are working. I tried retrieving the URL headers from the command line, but the HTTP request fails. Using get_headers in a small script:

        PHP Warning: get_headers(http://local.vhosts1.com): failed to open stream: HTTP request failed! in get_headers.php on line 12

    But the HTTP request fails only for WordPress- and Magento-based sites, not custom-written ones. Could this have something to do with an .htaccess directive?
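
    Since only PHP changed, the stream-wrapper settings get_headers() relies on are worth comparing between the old 5.2 and new 5.3 configs; a hedged starting point (nothing WordPress-specific is assumed):

        # is the HTTP wrapper enabled under the new PHP 5.3 config?
        php -i | grep allow_url_fopen

        # can PHP itself resolve the virtual host name?
        php -r 'var_dump(gethostbyname("local.vhosts1.com"));'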

  • DNS server redirect users on first visit

    - by Sihan Zheng
    I am looking for a DNS-level solution that redirects a user to a specific IP on the first visit, then directs them to the correct IP on subsequent visits. The idea is that, for example, when a user visits "malicioussite.com", the first time they resolve that DNS name it resolves to the IP of an internal web server showing them a warning; on subsequent resolutions they get the actual IP, so they can visit the site. How can this be achieved? I am really flexible on what I can use, as long as it works at the DNS level.

  • On booting, my laptop warns me that hard disk failure is imminent. Can it be repaired, or does it need to be replaced?

    - by nrb
    When I boot my laptop (a Lenovo G550), it gives an error message before starting up:

        SMART Failure Predicted on Hard Disk 0: Hitachi HTS543225L9A300-(PM)
        Warning: Immediately back up your data and replace your hard disk drive. A failure may be imminent.
        Press F1 to continue...

    Once it has started, it runs smoothly. I also verified the disk's status by running some internal HDD tests with the Hard Disk Sentinel software. It shows no bad sectors, no virus, no damage, normal temperature; it suggests everything is OK except the drive's health. Now my question is: what exactly is wrong with it? If the problem is with the platters (dust) or the read/write heads, should I have it repaired, or replace it? My system has been reminding me every half hour to resolve this problem for the last 10 days. Help me.
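
    A SMART predictive failure comes from counters kept by the drive's own firmware (typically reallocated or pending sectors), and those counters cannot be reset by cleaning or repair, so the safe reading is: keep backups current and replace the drive. To see which attribute tripped, smartmontools can dump the table (the device name is an assumption):

        # overall verdict, then the raw attribute table
        sudo smartctl -H /dev/sda
        sudo smartctl -A /dev/sda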

  • Plesk file permissions - Apache/PHP conflicting with user accounts.

    - by hfidgen
    Hiya, I'm building a Drupal site which performs various automatic disk operations as the apache user (uid=40). The problem is that the site was set up on a subdomain belonging to user ID 10001 (i.e. my main FTP account), so the filesystem belongs to that user ID. So I keep getting errors like this:

        warning: move_uploaded_file() [function.move-uploaded-file]: SAFE MODE Restriction in effect. The script whose uid is 10001 is not allowed to access /var/www/vhosts/domain.com/httpdocs/sites/default/files/images/user owned by uid 48 in /var/www/vhosts/domain.com/httpdocs/includes/file.inc on line 579.

    I've tried changing the Apache group in httpd.conf to apache:psacln, psacln being the default group for all web users, but that hasn't helped. The situation now is:

        ..../files/images      = 777 and chown = ftplogin:psacln
        ..../files/images/user = 775 and chown = apache:psacln
        ..../files/tmp         = 777 and chown = ftplogin:psacln

    So apparently uid 40 and 10001 both have permission to write to any of the three directories involved, but still can't. Am I missing something here? Can anyone help? Thanks!
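
    Safe mode compares the UID of the running script with the UID owning the file it touches, so 777 mode bits never help while ownership stays mixed; the usual ways out are to give the upload tree a single owner or to disable safe mode for this vhost (it is deprecated anyway). A hedged sketch, with Plesk-style paths assumed:

        # option 1: make the upload tree belong to the user PHP runs as
        chown -R apache:psacln /var/www/vhosts/domain.com/httpdocs/sites/default/files

        # option 2: per-vhost override in /var/www/vhosts/domain.com/conf/vhost.conf
        # <Directory /var/www/vhosts/domain.com/httpdocs>
        #     php_admin_flag safe_mode off
        # </Directory>

    After editing vhost.conf, Plesk needs to regenerate and reload the web server configuration for the change to take effect.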

  • TFS and SharePoint integration

    - by pho3nix
    I am using SharePoint 2007 and TFS 2010. I installed everything, including the reports and SharePoint integration, but when I try to create a Team Project Collection it always returns this warning:

        Server was unable to process request. ---> TF250029: No user was found within the context of a Web site. Verify that the site does not allow anonymous access.

    And in the SharePoint Web Applications section of the TFS administration console, when I check my SharePoint portal /sites, it returns an error saying the site is blocked by a firewall or the server extensions are not installed, but everything looks fine to me. Can anyone help me, please?

  • Why does httpd handle requests for wrong hostnames in SSL mode?

    - by Manuel
    I have an SSL-enabled virtual host for my sites at example.com:10443:

        Listen 10443
        <VirtualHost _default_:10443>
            ServerName example.com:10443
            ServerAdmin [email protected]
            ErrorLog "/var/log/httpd/error_log"
            TransferLog "/var/log/httpd/access_log"
            SSLEngine on
            SSLProtocol all -SSLv2
            SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5
            SSLCertificateFile "/etc/ssl/private/example.com.crt"
            SSLCertificateKeyFile "/etc/ssl/private/example.com.key"
            SSLCertificateChainFile "/etc/ssl/private/sub.class1.server.ca.pem"
            SSLCACertificateFile "/etc/ssl/private/StartCom.pem"
        </VirtualHost>

    Browsing to https://example.com:10443/ works as expected. However, browsing to https://subdomain.example.com:10443/ (with DNS set up) also shows me the same pages (after an SSL certificate warning). I would have expected the ServerName example.com:10443 directive to reject all connection attempts with other server names. How can I tell the virtual host not to serve requests for URLs other than the top-level one?
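
    ServerName never rejects anything on its own; a _default_ vhost is precisely the one that answers requests no other vhost on the port matches, which is why the subdomain lands there. A hedged sketch of the usual fix on Apache 2.2 is to switch to name-based vhosts and add a catch-all that refuses unknown names (the catch-all name and the 404 choice are illustrative):

        NameVirtualHost *:10443

        # first vhost on the port is the default: it swallows unknown host names
        <VirtualHost *:10443>
            ServerName catchall.invalid
            SSLEngine on
            SSLCertificateFile "/etc/ssl/private/example.com.crt"
            SSLCertificateKeyFile "/etc/ssl/private/example.com.key"
            Redirect 404 /
        </VirtualHost>

        <VirtualHost *:10443>
            ServerName example.com:10443
            # ... existing SSL directives from above ...
        </VirtualHost>

    Clients will still see the example.com certificate before the 404 (one IP, one handshake), but the wrong host names no longer reach the real site.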

  • Alerting system for Munin

    - by akirk
    I am very satisfied with Munin as a monitoring tool (I only have a simple two-server setup), but its alerting system is very annoying: you can only configure it to send an e-mail on every check that generates a warning or an error. It seems like the only option is to use Nagios, but in Debian that has Apache as a dependency, and I already use nginx on my monitoring machine. All I want is the ability to silence/acknowledge an alarm while I am working on a fix, so that I don't get bombarded with e-mails. Nagios seems like an oversized solution for that anyway. Is there a simple solution for this, or am I the only one who feels like he needs such a tool?

  • Puppet master REST API returns 403 when running under Passenger, but works when the master runs from the command line

    - by Anadi Misra
    I am using the standard auth.conf provided by the Puppet install for the puppet master, which is running through Passenger under Nginx. However, for most of the catalog, file, and certificate requests I get a 403 response.

        ### Authenticated paths - these apply only when the client
        ### has a valid certificate and is thus authenticated

        # allow nodes to retrieve their own catalog
        path ~ ^/catalog/([^/]+)$
        method find
        allow $1

        # allow nodes to retrieve their own node definition
        path ~ ^/node/([^/]+)$
        method find
        allow $1

        # allow all nodes to access the certificates services
        path ~ ^/certificate_revocation_list/ca
        method find
        allow *

        # allow all nodes to store their reports
        path /report
        method save
        allow *

        # unconditionally allow access to all file services
        # which means in practice that fileserver.conf will
        # still be used
        path /file
        allow *

        ### Unauthenticated ACL, for clients for which the current master doesn't
        ### have a valid certificate; we allow authenticated users, too, because
        ### there isn't a great harm in letting that request through.

        # allow access to the master CA
        path /certificate/ca
        auth any
        method find
        allow *

        path /certificate/
        auth any
        method find
        allow *

        path /certificate_request
        auth any
        method find, save
        allow *

        path /facts
        auth any
        method find, search
        allow *

        # this one is not stricly necessary, but it has the merit
        # of showing the default policy, which is deny everything else
        path /
        auth any

    The puppet master, however, does not seem to be following this, as I get this error on the client:

        [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com
        [sudo] password for amisr1:
        Starting Puppet client version 3.0.1
        Warning: Unable to fetch my node definition, but the agent run will continue:
        Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110
        Info: Retrieving plugin
        Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate': Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110
        Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
        Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
        Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110
        Using cached catalog
        Error: Could not retrieve catalog; skipping run
        Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110

    and the server logs show:

        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby"

    The fileserver.conf file is as follows (and, going by what the Puppet site says, it is better to regulate access in auth.conf and then let the file server serve everything):

        [files]
        path /apps/puppet/files
        allow *

        [private]
        path /apps/puppet/private/%H
        allow *

        [modules]
        allow *

    I am using server and client version 3. Nginx has been compiled using the following options:

        nginx version: nginx/1.3.9
        built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)
        TLS SNI support enabled
        configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/

    and this is the standard Nginx puppet master conf:

        server {
            ssl on;
            listen 8140 ssl;
            server_name _;
            passenger_enabled on;
            passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
            passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;
            passenger_min_instances 5;
            access_log logs/puppet_access.log;
            error_log logs/puppet_error.log;
            root /apps/nginx/html/rack/public;
            ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem;
            ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem;
            ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
            ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
            ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
            ssl_prefer_server_ciphers on;
            ssl_verify_client optional;
            ssl_verify_depth 1;
            ssl_session_cache shared:SSL:128m;
            ssl_session_timeout 5m;
        }

    Puppet is picking up the correct settings from the files mentioned, because the config print command points to /etc/puppet:

        [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf
        async_storeconfigs = false
        authconfig = /etc/puppet/namespaceauth.conf
        autosign = /etc/puppet/autosign.conf
        catalog_cache_terminus = store_configs
        confdir = /etc/puppet
        config = /etc/puppet/puppet.conf
        config_file_name = puppet.conf
        config_version = ""
        configprint = all
        configtimeout = 120
        dblocation = /var/lib/puppet/state/clientconfigs.sqlite3
        deviceconfig = /etc/puppet/device.conf
        fileserverconfig = /etc/puppet/fileserver.conf
        genconfig = false
        hiera_config = /etc/puppet/hiera.yaml
        localconfig = /var/lib/puppet/state/localconfig
        name = config
        rest_authconfig = /etc/puppet/auth.conf
        storeconfigs = true
        storeconfigs_backend = puppetdb
        tagmap = /etc/puppet/tagmail.conf
        thin_storeconfigs = false

    I checked the firewall rules on this VM; ports 80, 443, 8140, and 3000 are allowed. Do I still have to tweak anything in auth.conf to get this to work?

  • iChat can't authenticate to Lion Server 10.7.2 [migrated]

    - by glenstorey
    I've enabled the iChat and iCal Server services through our local 10.7.2 server, which has DNS set up correctly. I can add the server account via a client's System Preferences (under Other - Mac OS X Server) and it authenticates with my short name correctly. However, when I load iChat, I get an error message (the account in question being [email protected]). The password and username are correct. Console throws this error:

        22/11/11 3:03:31.135 PM imagent: [Warning] XMPPConnection: Error: Error Domain=XMPPErrorDomain Code=105 "The operation couldn't be completed. (XMPPErrorDomain error 105.)" UserInfo=0x7f81bbe2a3e0 {XMPPErrorText=service requested for unknown domain}

    DNS is set up correctly (it's working for Profile Manager, Software Update Server, and web services), but I can't get iChat to work correctly. How can I get clients to authenticate? FYI: it's probably worth noting that I get the exact same error messages when I use [email protected] instead of [email protected]. I also posted this question on Apple Discussions here.
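
    The "service requested for unknown domain" text suggests the Jabber service does not list the host name the client is asking for among the XMPP domains it serves. On Lion Server the configured settings can be inspected from the command line and compared against what the client requests; a hedged check, since the exact key names vary between releases:

        # dump the iChat/Jabber service settings and look for the host/domain entries
        sudo serveradmin settings jabber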

  • Error in installing MediaWiki on Ubuntu, Postgres 8.3

    - by Masi
    How can I resolve the error message on the last line?

        ....
        # Installing MediaWiki with php file extensions
        # Environment checked. You can install MediaWiki.
        # Generating configuration file...
        # Database type: PostgreSQL
        # Loading class: DatabasePostgres
        # Attempting to connect to database "wikidb" as "wikiuser"... error: No database connection
        # Checking the version of Postgres... Warning: pg_version(): supplied argument is not a valid PostgreSQL link resource in /var/www/wiki/includes/db/DatabasePostgres.php on line 1078
        FAILED. Required version is 8.1. You have 7.3 or earlier

    I am using Postgres 8.3, which makes the error message strange. The file LocalSettings.php was not created in the config directory, so I cannot continue the installation without solving this problem.
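
    The real failure is "No database connection"; the "7.3 or earlier" line is a red herring that follows from calling pg_version() on a dead link. A hedged first step is making sure the role and database the installer asks for exist and are reachable (names taken from the messages above):

        # as the postgres superuser
        createuser -S -D -R -P wikiuser
        createdb -O wikiuser wikidb

        # confirm the same credentials work over TCP, as PHP will use them
        psql -U wikiuser -d wikidb -h 127.0.0.1 -c "SELECT version();"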

  • Redirected to piratenpartji.nl. What can I do?

    - by Luke
    A few hours ago I found a link to Kickass Torrents, which is blocked in my country (Italy), and went for it. The link worked just fine, but I wasn't able to save anything, so I gave up and continued browsing normally. I then noticed that every time I try to access certain pages, for instance google.com (but not google.it), I get a warning from Chrome that I'm being redirected through piratenpartji.nl. Since I found a similar topic here on Super User, I tried what was proposed in the solution, namely shutting down AdBlock and trying again, and also trying Incognito mode. Nevertheless, no result. I performed a scan with both Avira and Spybot S&D, but except for a couple of cookies of other origin nothing came up. What do you suggest I do? Thanks in advance; feel free to ask for any info that might be necessary. Luke
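
    Redirecting only some domains through a third-party host is the classic signature of a tampered hosts file or hijacked DNS settings rather than a browser problem; a few hedged checks on Windows:

        REM look for unexpected entries for google.com and friends
        type C:\Windows\System32\drivers\etc\hosts

        REM confirm which DNS servers the adapters are really using
        ipconfig /all | findstr /i "DNS"

        REM then clear the resolver cache and retest
        ipconfig /flushdns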

  • How to resolve these errors and install ClamAV for Perl under Ubuntu/Debian?

    - by Alex R
    After a successful apt-get install clamav I then did:

        perl -MCPAN -e shell
        install File::Scan::ClamAV

    and got:

        CPAN.pm: Going to build J/JA/JAMTUR/File-Scan-ClamAV-1.91.tar.gz
        Cannot find clamd in /root/bin (or a number of other places) - are you sure clamav is installed?
        Warning: No success on command[/usr/bin/perl Makefile.PL INSTALLDIRS=site]
        JAMTUR/File-Scan-ClamAV-1.91.tar.gz
        /usr/bin/perl Makefile.PL INSTALLDIRS=site -- NOT OK
        Running make test
        Make had some problems, won't test
        Running make install
        Make had some problems, won't install
        Failed during this command:
        JAMTUR/File-Scan-ClamAV-1.91.tar.gz : writemakefile NO '/usr/bin/perl Makefile.PL INSTALLDIRS=site' returned status 512

    What did I do wrong?
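
    The Makefile.PL aborts because it cannot find the clamd binary, which Debian-style systems package separately from the base scanner; a hedged fix:

        # clamd lives in its own package on Debian/Ubuntu
        apt-get install clamav-daemon

        # make sure the daemon is running before building the module
        /etc/init.d/clamav-daemon start

        # then retry the CPAN build
        perl -MCPAN -e 'install File::Scan::ClamAV'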
