Search Results

Search found 21348 results on 854 pages for 'active directory lds'.

  • Postfix MySql Dovecot - SMTP Authentication Failure

    - by borncamp
    Hello, I have a Postfix setup with Dovecot and MySQL. The server is running Debian Squeeze. The MySQL server is a slave that has data pushed to it from a primary (Postfix) mail server (running a different OS). The emails are stored on a replicated GlusterFS volume. I am able to check email using Thunderbird over IMAP; however, SMTP requests fail. After turning on query logs for the MySQL server, I noticed that no query is executed to retrieve the user information when an SMTP client tries to authenticate. I'd like to know what I'm doing wrong or what the next troubleshooting steps are. I'm about to pull my hair out. Below is some log and configuration data that I thought would be relevant. Your help is much appreciated.

    The file /var/log/mail.log shows:

        Oct 11 14:54:16 mailbox2 postfix/smtpd[25017]: connect from unknown[192.168.0.44]
        Oct 11 14:54:19 mailbox2 postfix/smtpd[25017]: warning: unknown[192.168.0.44]: SASL PLAIN authentication failed:
        Oct 11 14:54:25 mailbox2 postfix/smtpd[25017]: warning: unknown[192.168.0.44]: SASL LOGIN authentication failed: VXNlcm5hbWU6
        Oct 11 14:55:48 mailbox2 postfix/smtpd[25017]: warning: unknown[192.168.0.44]: SASL PLAIN authentication failed: VXNlcm5hbWU6
        Oct 11 14:55:54 mailbox2 postfix/smtpd[25017]: warning: unknown[192.168.0.44]: SASL LOGIN authentication failed: VXNlcm5hbWU6
        Oct 11 14:55:57 mailbox2 postfix/smtpd[25017]: disconnect from unknown[192.168.0.44]

    This is my dovecot.conf file:

        log_timestamp = "%Y-%m-%d %H:%M:%S "
        mail_location = maildir:/var/mail/virtual/%d/%n/
        auth_mechanisms = plain login
        disable_plaintext_auth = no
        namespace {
          inbox = yes
          location =
          prefix = INBOX.
          separator = .
          type = private
        }
        passdb {
          args = /etc/dovecot/dovecot-mysql.conf
          driver = sql
        }
        protocols = imap pop3
        service auth {
          unix_listener /var/spool/postfix/private/auth {
            group = postfix
            mode = 0660
            user = postfix
          }
          unix_listener auth-master {
            mode = 0600
            user = postfix
          }
          user = root
        }
        ssl_cert = </etc/ssl/certs/dovecot.pem
        ssl_key = </etc/ssl/private/dovecot.pem
        userdb {
          args = /etc/dovecot/dovecot-mysql.conf
          driver = sql
        }
        protocol lda {
          auth_socket_path = /var/run/dovecot/auth-master
          mail_plugins = sieve
          postmaster_address = [email protected]
        }
        protocol pop3 {
          pop3_uidl_format = %08Xu%08Xv
        }

    Here is my dovecot-mysql.conf file:

        connect = host=127.0.0.1 dbname=postfix user=postfix password=ffjM2MYAqQtAzRHX
        driver = mysql
        default_pass_scheme = MD5-CRYPT
        password_query = SELECT username AS user,password FROM mailbox WHERE username = '%u' AND active='1'
        user_query = SELECT CONCAT('/var/mail/virtual/', maildir) AS home, 1001 AS uid, 109 AS gid, CONCAT('*:messages=10000:bytes=',quota) as quota_rule, 'Trash:ignore' AS quota_rule2 FROM mailbox WHERE username = '%u' AND active='1'

    Here is my output from 'postconf -n':

        append_dot_mydomain = no
        biff = no
        bounce_template_file = /etc/postfix/bounce.cf
        broken_sasl_auth_clients = yes
        config_directory = /etc/postfix
        delay_warning_time = 0h
        dovecot_destination_recipient_limit = 1
        inet_interfaces = all
        local_recipient_maps = $virtual_mailbox_maps
        local_transport = virtual
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        maximal_queue_lifetime = 1d
        message_size_limit = 25600000
        mydestination = mailbox2.cws.net, debian.local.cws.net, localhost.local.cws.net, localhost
        myhostname = mailbox2.cws.net
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 172.18.0.119 63.164.138.3
        myorigin = /etc/mailname
        proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $transport_maps $mynetworks $virtual_mailbox_limit_maps
        readme_directory = no
        recipient_delimiter = +
        relay_domains =
        relayhost =
        smtp_connect_timeout = 10
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
        smtpd_client_message_rate_limit = 50
        smtpd_client_recipient_rate_limit = 500
        smtpd_client_restrictions = permit_sasl_authenticated, permit_mynetworks
        smtpd_delay_reject = yes
        smtpd_discard_ehlo_keyword_address_maps = hash:/etc/postfix/discard_ehlo
        smtpd_helo_required = yes
        smtpd_helo_restrictions = permit_mynetworks, reject_invalid_helo_hostname, permit
        smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_authenticated_header = yes
        smtpd_sasl_path = private/auth
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_tls_security_options = $smtpd_sasl_security_options
        smtpd_sasl_type = dovecot
        smtpd_sender_restrictions = permit_mynetworks, reject_non_fqdn_sender, reject_unknown_sender_domain, permit
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        transport_maps = hash:/etc/postfix/transport
        virtual_alias_maps = proxy:mysql:/etc/postfix/sql/mysql_virtual_alias_maps.cf, proxy:mysql:/etc/postfix/sql/mysql_virtual_alias_domain_maps.cf, proxy:mysql:/etc/postfix/sql/mysql_virtual_alias_domain_catchall_maps.cf
        virtual_gid_maps = static:1001
        virtual_mailbox_base = /var/mail/virtual/
        virtual_mailbox_domains = proxy:mysql:/etc/postfix/sql/mysql_virtual_domains_maps.cf
        virtual_mailbox_maps = proxy:mysql:/etc/postfix/sql/mysql_virtual_mailbox_maps.cf, proxy:mysql:/etc/postfix/sql/mysql_virtual_alias_domain_mailbox_maps.cf
        virtual_transport = dovecot
        virtual_uid_maps = static:1001
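
    Since Thunderbird's IMAP logins work through the same dovecot-mysql.conf, one way to narrow this down is to exercise the auth lookup directly, bypassing Postfix entirely. A minimal sketch, assuming a Dovecot 2.x-style install with doveadm available and using a placeholder mailbox address:

        # Ask Dovecot's auth service to look up and verify a user directly;
        # if this triggers a query in the MySQL log, the problem is on the
        # Postfix/SASL side rather than in Dovecot's SQL config.
        # (On newer Dovecot the spelling is "doveadm auth test".)
        doveadm auth [email protected]

        # Run the password_query by hand to confirm the SQL itself
        # (substitute a real mailbox address for the placeholder):
        mysql -h 127.0.0.1 -u postfix -p postfix -e \
          "SELECT username AS user, password FROM mailbox WHERE username='[email protected]' AND active='1'"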

  • IIS 6 getting "Page Not Found" after applying SSL

    - by Dominic Zukiewicz
    I am setting up SSL certificates on a development environment using IIS 6 on W2k3. I have a directory called login with a single page, login.asp, which I would like viewable only over SSL. Before installing or applying SSL permissions, the page is viewable through a browser: I can browse the page, it redirects, etc., and all is good. However, Basic Authentication is Base64-encoded, so I want to secure the traffic from this page only. I have created a dummy certificate with makecert, installed it, and added it to IIS. IIS is happy that it is trusted. I have set the login directory and its child files to "Require SSL channel". When I refresh my browser on login/login.asp I get a "404: Page Not Found" in IE 8. So there are two issues here:

    1. The page is now unviewable when using HTTPS.
    2. Users must manually type the HTTPS (a minor inconvenience for now).

    If I turn off "Require SSL channel" in IIS, it works again. What part of the process am I missing? I have followed several tutorials on installing SSL certificates, but still come across this barrier.
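
    One thing worth checking in a situation like this is whether the site actually has an HTTPS binding on port 443; installing a certificate does not create one by itself, and without it every https:// request dies before reaching the page. A sketch using the IIS 6 admin scripts, assuming the default AdminScripts location and site ID 1:

        cd /d %SystemDrive%\Inetpub\AdminScripts
        rem Show the current secure binding, if any:
        cscript adsutil.vbs get w3svc/1/SecureBindings
        rem Bind HTTPS on port 443 for all addresses:
        cscript adsutil.vbs set w3svc/1/SecureBindings ":443:"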

  • Apache and linux file permissions

    - by morpheous
    I recently moved a Symfony 1.3.2 website (a PHP web framework) from a Windows machine to Linux (Ubuntu 9.10). Ever since then, I have had all kinds of problems involving file permissions (even though the app ran without any of these problems on Windows). I ran symfony fix-perms, which applied a 777 mask to the web directory (presumably including its subfolders) - as an aside, I think that is a potential security hole, and I have been meaning to come in here to ask how to set permissions correctly. Currently, when attempting to save a file from my website, I am getting the following error:

        PHP Warning: imagejpeg() [function.imagejpeg]: Unable to open '/home/morpheous/work/webdev/frameworks/symfony/sites/project1/web/uploads/../images/thumbnail/959cd604cf6115014a3703bef5a50486a5520642.jpg' for writing: Permission denied in /home/morpheous/work/webdev/frameworks/symfony/sites/project1/apps/frontend/lib

    Here are the permissions on the folders:

        web
        drwxr-xr-x 16 morpheous morpheous  4096 2010-02-24 21:01 web

        web/uploads/../images
        drwxr-xr-x 13 morpheous morpheous 12288 2010-04-09 15:25 images

        web/uploads/../images/thumbnail
        drwxr-xr-x  3 morpheous morpheous  4096 2010-02-24 20:44 thumbnail

    Can someone kindly tell me how to set the permissions so that my website (presumably running as the Apache daemon) can write the files to the directory required above?
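
    A minimal sketch of one common fix, assuming Apache runs as the www-data user (the Ubuntu default) and using the paths from the error message: give that user's group write access to just the writable trees instead of a blanket 777.

        cd /home/morpheous/work/webdev/frameworks/symfony/sites/project1/web
        sudo chgrp -R www-data uploads images
        # g+rwX: group read/write, and execute (traverse) only on directories
        sudo chmod -R g+rwX uploads images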

  • Should be simple: existing laptop with local user and outlook 2007 migrate on same computer to domain user with outlook 2007 emails intact

    - by bifpowell
    I have a Dell laptop with Windows 7 64-bit, and for the last year it's been just a machine with an account like machine\john. There are files in folders and stuff in c:\users\john, and john uses Outlook 2007 as a POP3 client and has identifiable local appdata PST files. Now I installed a server and want everything to be domain-centric, so I added this laptop to the domain with admin credentials and then logged in as a domain user, domain\john.smith. Now I want to duplicate machine\john (Outlook emails mostly) to domain\john.smith. In the past I used the Files and Settings Transfer Wizard and was done. I tried that here and it crunched away for a while and made the file, but the restore had no effect - it ran for a while and had a progress bar, but it's like nothing happened at all afterwards. I've rebooted the machine, logged in as domain administrator as the first user to log on after the restart, and tried:

        xcopy c:\users\john c:\users\john.smith /V /C /F /H /K /Y /E

    ...and it copies some of it, but when it gets to c:\users\john.smith\appdata\local\application data it chokes with "Access denied, unable to create directory". I also tried logging in as domain\john.smith and copying the entire directory that the PSTs are in from machine\john, and a lot of the mail was there when I launched Outlook after replacing the PSTs, but not all of it. I got errors about files in use when doing this method, which I figure must be why not all the old emails are in the inbox. There must be some extremely simple way to do what must be a very common requirement. Any guidance appreciated.
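
    For what it's worth, "Application Data" under a Windows 7 profile is a hidden NTFS junction kept only for backward compatibility, and copy tools that follow junctions loop or fail on it. A sketch of the same copy with robocopy, which can skip junction points (run from an elevated prompt while neither profile is logged on, and with Outlook closed so the PSTs aren't in use):

        robocopy C:\Users\john C:\Users\john.smith /E /XJ /COPY:DAT /R:1 /W:1
        rem /E   copy subdirectories, including empty ones
        rem /XJ  skip junction points such as "Application Data"
        rem /R:1 /W:1  fail fast on locked files instead of retrying forever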

  • Installing Ruby 1.9.3 OSX 10.7.4 breaks after altering PATH

    - by R V
    I was having trouble installing Ruby 1.9.3-p194 over Ruby 1.8.7 on my Mac OS X 10.7.4. I was trying to fix my Homebrew after running "brew doctor" and got the message:

        /usr/bin occurs before /usr/local/bin
        This means that system-provided programs will be used instead of
        those provided by Homebrew. The following tools exist at both paths:
        c++-4.2 cpp-4.2 erb g++-4.2 gcc-4.2 gcov-4.2 gem
        i686-apple-darwin11-cpp-4.2.1 i686-apple-darwin11-g++-4.2.1
        i686-apple-darwin11-gcc-4.2.1 irb rake rdoc ri ruby testrb

    I fixed it by entering the following, which I found in another Stack Overflow answer:

        export PATH="/usr/local/bin:/usr/local/sbin:~/bin$PATH"

    Lo and behold! When I type that, Ruby updates to 1.9.3-p194, and Ruby files seem to compile and run just fine. However, afterward my navigation around the terminal is messed up severely. For instance, I can't do the command "open example_file.html" and have the file pop up in Chrome; instead I get the error:

        -bash: open: command not found

    Also, when I change directory I get an error: inputting "cd desktop" yields the output "-bash: dirname: command not found", but the directory does then change... strange. When I exit a terminal window all this resets: I'm back to Ruby 1.8.7, have to use the PATH command again to update to 1.9.3, and command-line navigation gets broken again. Any guidance on how to remedy this so I can use 1.9.3-p194 and also have normal terminal navigation would be greatly appreciated.
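
    A note on the symptom itself: that export line is missing a colon between ~/bin and $PATH (and ~ does not expand inside double quotes), so the old path - including /usr/bin, where open and dirname live - gets glued onto "~/bin" and lost. A sketch of the corrected line, made permanent by appending it to ~/.bash_profile:

        export PATH="/usr/local/bin:/usr/local/sbin:$HOME/bin:$PATH"
        # persist it so new terminal windows pick it up too:
        echo 'export PATH="/usr/local/bin:/usr/local/sbin:$HOME/bin:$PATH"' >> ~/.bash_profile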

  • Shared printer hosted by Windows 7 to XP peers [closed]

    - by Alistair Knock
    Possible duplicate: Add Network Printer drivers in Windows 7/Server 2008 R2?

    A Canon PIXMA iP4600 installed all by itself when connected to a Windows 7 64-bit machine. I then updated it with the add-on Canon provides to give additional functionality. I then wish to share it with a box running Windows XP 32-bit. The XP box can see the printer but can't find a driver from the 7 machine, which is fine. I ask the 7 machine to get the x86 drivers, but it can't. I install the 32-bit XP drivers on the XP machine, but unlike previous Canon drivers (which unzip to give an .inf file), they assume a local printer and partially give up. I find the .inf file in the Program Files\CanonBJ\... directory. Neither the XP machine, nor the 7 machine when the CanonBJ directory is shared to it, is happy with the .inf file. I attempt to install the 32-bit drivers on the 64-bit 7 machine, which, understandably, fails. Where do we go from here? (I apologise for posting in the first person; I'm not sure why this was)

  • How do I make rsync also check ctime?

    - by Benoît
    rsync detects file modification by comparing size and mtime. However, if for any reason the mtime is unchanged, rsync won't detect the change, although it's possible to spot it by looking at the ctime. Of course, I can tell rsync to compare the whole file contents, but that's very, very expensive. Is there a way to make rsync smarter, for example by checking that mtime+size are the same AND that ctime isn't newer than mtime (on both source and destination)? Or should I open a feature request? Here's an example.

    Create 2 files with the same content and atime/mtime:

        benoit@debian:~$ mkdir d1 && cd d1
        benoit@debian:~/d1$ echo Hello > a
        benoit@debian:~/d1$ cp -a a b

    Rsync them to another (non-existing) directory:

        benoit@debian:~/d1$ cd ..
        benoit@debian:~$ rsync -av d1/ d2
        sending incremental file list
        created directory d2
        ./
        a
        b

        sent 164 bytes  received 53 bytes  434.00 bytes/sec
        total size is 12  speedup is 0.06

    OK, everything is synced:

        benoit@debian:~$ grep . d*/*
        d1/a:Hello
        d1/b:Hello
        d2/a:Hello
        d2/b:Hello

    Update file 'b' with the same size, then reset its atime/mtime:

        benoit@debian:~$ echo World > d1/b
        benoit@debian:~$ touch -r d1/a d1/b

    Attempt to rsync again:

        benoit@debian:~$ rsync -av d1/ d2
        sending incremental file list

        sent 63 bytes  received 12 bytes  150.00 bytes/sec
        total size is 12  speedup is 0.16

    Nope, rsync missed the change:

        benoit@debian:~$ grep . d*/*
        d1/a:Hello
        d1/b:World
        d2/a:Hello
        d2/b:Hello

    Telling rsync to compare the file content:

        benoit@debian:~$ rsync -acv d1/ d2
        sending incremental file list
        b

        sent 144 bytes  received 31 bytes  350.00 bytes/sec
        total size is 12  speedup is 0.07

    gives the correct result:

        benoit@debian:~$ grep . d*/*
        d1/a:Hello
        d1/b:World
        d2/a:Hello
        d2/b:World
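
    rsync has no ctime test of its own (ctime cannot be set on the destination, so there is nothing to compare it against). A workaround sketch: run the cheap pass first, then restrict the expensive checksum pass to files whose ctime changed recently, so only a small subset pays for full-content comparison. The one-day window here is an assumption to adjust:

        # cheap mtime/size pass
        rsync -av d1/ d2
        # checksum pass limited to files whose ctime is newer than one day
        (cd d1 && find . -type f -ctime -1) > /tmp/recent
        rsync -avc --files-from=/tmp/recent d1/ d2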

  • Apache2 VirtualHost on Debian not working

    - by milo5b
    I am having some problems with my Apache2 configuration. I have already tried to look for documentation on the web (Apache's site, Debian's site, here on Server Fault, etc.), but nothing really helps. I have tried different configurations, but my current configuration is the following (/etc/apache2/sites-available/default):

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName mysite.dev
            ServerAlias mysite.dev
            DocumentRoot /var/www/mysite.dev/httpdocs/
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName livesite.com
            ServerAlias www.livesite.com
            DocumentRoot /var/www/livesite.com/httpdocs/
            <Directory /var/www/livesite.com/httpdocs/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    mysite.dev is just an entry in the hosts file on my client machine, while livesite.com is an actual DNS record which resolves to the same IP as the one set in the hosts file for mysite.dev. The problem is that when I type mysite.dev in my browser, it automatically goes to livesite.com. I tried having separate /etc/apache2/sites-enabled/ files (/etc/apache2/sites-enabled/mysite.dev, /etc/apache2/sites-enabled/livesite.com) - and of course the related sites-available files - but achieved the same results. I have tried to have a peek at error.log and access.log but there's nothing I can see. My httpd.conf contains:

        AccessFileName .htaccess

    And I have no /etc/apache2/conf.d/virtual.conf file. Any help would be greatly appreciated - if I did not provide enough info please let me know and I will do my best to provide all necessary info. Thanks
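
    A sketch of two quick checks on the server, assuming the Apache 2.2 that Debian shipped at the time: name-based vhosts on 2.2 only work when a matching NameVirtualHost directive is in effect (Debian usually sets it in ports.conf), and apache2ctl can show which vhost is acting as the default catch-all:

        # list parsed vhosts; the first *:80 entry is the default server,
        # which is what answers when name matching is not enabled
        apache2ctl -S
        grep -r NameVirtualHost /etc/apache2/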

  • How to make a working TFTP server on CentOS 6.2

    - by Dima
    I'm trying to set up a TFTP server on CentOS 6.2. The /etc/xinetd.d/tftp configuration file is the following:

        service tftp
        {
            disable     = no
            socket_type = dgram
            protocol    = udp
            wait        = yes
            user        = root
            server      = /usr/sbin/in.tftpd
            server_args = -s /tftpboot -vvv
            per_source  = 11
            cps         = 100 2
            flags       = IPv4
        }

    SELinux and the firewall are disabled. The /etc/hosts.allow and /etc/hosts.deny files are empty. When I try to get a file from the TFTP server, the file transfer always fails and I see the following errors in /var/log/messages:

        Jul 11 03:16:53 localhost xinetd[4155]: xinetd Version 2.3.14 started with libwrap loadavg labeled-networking options compiled in.
        Jul 11 03:16:53 localhost xinetd[4155]: Started working: 1 available service
        Jul 11 03:17:00 localhost xinetd[4155]: START: tftp pid=4157 from=192.168.10.3
        Jul 11 03:17:00 localhost in.tftpd[4158]: RRQ from 192.168.10.3 filename 1
        Jul 11 03:17:00 localhost in.tftpd[4158]: sending NAK (0, Permission denied) to 192.168.10.3
        Jul 11 03:17:01 localhost in.tftpd[4159]: RRQ from 192.168.10.3 filename 1
        Jul 11 03:17:01 localhost in.tftpd[4159]: sending NAK (0, Permission denied) to 192.168.10.3
        Jul 11 03:17:03 localhost in.tftpd[4160]: RRQ from 192.168.10.3 filename 1

    The tftpboot directory permissions are (output of the ls -l command):

        drw-rw-rw-. 3 root root 4096 Jul 11 03:32 tftpboot

    I also see that the tftpboot directory is shown (by ls -l) with a green background, unlike other files/directories. (Why? As far as I know, the green background is for the sticky bit only.) What did I do wrong? How can I make the TFTP server work?
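
    The listing above shows mode drw-rw-rw- (666): the directory has no execute (search) bit, and a directory that cannot be entered produces exactly the "Permission denied" NAKs in the log. (The green background in common LS_COLORS schemes marks an other-writable directory, not only sticky ones.) A sketch of the usual fix:

        chmod 755 /tftpboot       # directories need +x to be traversed
        chmod 644 /tftpboot/*     # world-readable files for tftp GETs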

  • Can I recover a rm -rf-ed Mercurial repository?

    - by WishCow
    I made the mistake of wiping out my entire project directory with a quick "rm -rf project". Of course, the .hg directory went with it. I had about 15-20 changesets that I had not pushed to anyone, and I would really, really like to get those back. The system is an Ubuntu machine, and the partition where the delete happened is ext3; the project consists mostly of PHP files. I know about the guideline to not write to the disk in question. My first idea was to use the tool named scalpel to get the PHP files back, diff them with the current version from the repo, and somehow carve the changes out. While it succeeded, it did not recover the file names (or there is a switch I'm missing), so I'm left with a few thousand sequentially named .php files; combing through them is not an option. Can a kind soul please save me and suggest a way to: a) get the repo back, or b) get the files back, with filenames? For those wondering how I did such a stupid thing: I was working on a file in Vim which I wanted to remove from the repository:

        :!hg rm %

    This complained that the file is in a subrepository, so I specified the following:

        :!hg rm % -R engine

    which complained that the file has modifications, use -f to force. And this is when, somehow, I made up the following command:

        :!rm -rf % -R engine

    Somehow, seeing "force" makes me do a rm -rf by reflex.
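
    One avenue worth trying before content carvers: ext3-aware undelete tools read the filesystem journal and can sometimes restore paths and names, which scalpel by design cannot. A sketch, assuming extundelete is installed and with /dev/sdXN as a placeholder for the affected partition (it should be unmounted, or the whole disk worked on from a live CD, before recovery):

        sudo umount /dev/sdXN
        # restore a whole directory tree by its path relative to the fs root
        sudo extundelete /dev/sdXN --restore-directory home/wishcow/project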

  • Issues configuring Exchange 2010 as well as SSL problems.

    - by Eric Smith
    Possibly relevant background info: I've recently moved up from icky shared hosting to a glorious, Remote Desktop-administrated VPS server running Windows Server 2008 R2. Even though I'm only 21 now and a computer science major, I've tried to play with every Windows Server release since '03, just to learn new things. What usually happens is that inevitably I'll do something wrong and pretty much ruin the install. You're dealing with an amateur here :) Through the past few months of working with my new server, I've mastered DNS and IIS, got Team Foundation Server running (yay!), and can install all of the other basics like SQL Server and Active Directory.

    The problem: These last few weeks I've been trying to install Exchange Server 2010 (SP1). To make a long story short, it took me several attempts, and I even had to get my server wiped just so I could start fresh, since Exchange decided uninstalling properly was for sissies (cost me $20, bah). Today, at long last, I got Exchange mostly working. There were two main problems left, however, that left me unsatisfied:

    1. Exchange installed itself and all of its child sites into Default Web Site. I wanted to access Exchange via mail.domain.com, but instead everything was configured at domain.com. My limited server admin knowledge was not enough to configure IIS or Exchange to move itself over to the website I had set up for it, appropriately titled 'mail.domain.com', which I had bound to a dedicated IP address (I was told this was necessary, but that may have been wrong).

    2. I have two SSL certificates: one for my main domain and one for my mail subdomain. For whatever reason, I had issues getting Exchange to use my mail certificate, even though I had assigned the proper roles in the MMC. I did, at one point, get it to work (or mostly work, anyway; frankly, my memory of today is clouded by intense frustration). Additionally, I was confused about which type of SSL certificate I should be using for Exchange. My SSL provider, GoDaddy, allows me to request a new certificate whenever, so I can use either the certificate request provided by IIS or the more complicated and specific request you can create with Exchange. Which type should I be using, the IIS or the Exchange certificate? If I must use the Exchange certificate, will that 1) cause issues when I bind that certificate to my mail.domain.com subdomain, or 2) is that an unnecessary step?

    The SSL certificate strikes back: When I thought I had the proper SSL certificate assigned, for those brief, sweet moments, Google Chrome reported the correct mail.domain.com certificate when browsing https://mail.domain.com. However, Outlook 2010 threw up an error when trying to configure my email account, claiming that the certificate didn't match the domain of "mail.domain.com". Is this an issue that will be resolved by problem #2, or is it a separate one entirely?

    Apologies for the massive wall of text, but I wanted to provide as much info as I possibly could. Exchange is the last thing I'd like installed on my server, and naturally it's turning out to be the hardest. Thanks for any info at all. Even a point in a vague direction would be a huge help at this point. Thanks! -Eric

    P.S.: The reason I keep ruining my install is that when I attempt to uninstall Exchange, something invariably goes wrong. The last time, the uninstaller complained that there was still a mailbox active and it couldn't proceed until I deleted it... The only mailbox left was the Administrator account, the built-in one I couldn't delete. So I attempted to manually uninstall it following several guides online, only to now be stuck unable to launch the installer, and I have to get my system wiped AGAIN for the second time today ($40 down the drain, bah!). I do not understand at all why "uninstall" just can't mean "hey, you, delete everything and go away". There's not even a force uninstall option, only a "recover system" option that just fails to fix anything and makes it so I can't even use the GUI uninstaller. </rant>
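
    On the certificate-request question: Exchange 2010 expects to manage its own certificates, and the request generated from the Exchange Management Shell is the one that ends up with the right key usage and can cover several names at once. A sketch of the EMS flow, with placeholder names and paths; GoDaddy signs the generated .req file, and the last line assigns the result to the IIS and SMTP services:

        # generate the request (names/paths are placeholders):
        $req = New-ExchangeCertificate -GenerateRequest `
            -SubjectName "CN=mail.domain.com" `
            -DomainName mail.domain.com,autodiscover.domain.com `
            -PrivateKeyExportable $true
        Set-Content -Path C:\mail_domain_com.req -Value $req

        # after GoDaddy returns the signed certificate:
        Import-ExchangeCertificate -FileData `
            ([Byte[]]$(Get-Content -Path C:\mail_domain_com.crt -Encoding Byte -ReadCount 0)) |
            Enable-ExchangeCertificate -Services "IIS,SMTP"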

  • YUM error. Is this a cert error?

    - by Julia Roberts
    I get the following in the logs:

        Nov 13 13:38:57 host abrt: detected unhandled Python exception in '/usr/bin/yum'
        Nov 13 13:38:57 host abrtd: New client connected
        Nov 13 13:38:57 host abrt-server[3508]: Saved Python crash dump of pid 3151 to /var/spool/abrt/pyhook-2012-11-13-13:38:57-3151
        Nov 13 13:38:57 host abrtd: Directory 'pyhook-2012-11-13-13:38:57-3151' creation detected
        Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta
        Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-legacy-former
        Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-legacy-release
        Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-legacy-rhx
        Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
        Nov 13 13:38:57 host abrtd: Package 'yum' isn't signed with proper key
        Nov 13 13:38:57 host abrtd: 'post-create' on '/var/spool/abrt/pyhook-2012-11-13-13:38:57-3151' exited with 1
        Nov 13 13:38:57 host abrtd: Corrupted or bad directory /var/spool/abrt/pyhook-2012-11-13-13:38:57-3151, deleting

    There is also nothing in the crash dump file. Ideas? Running yum update gives:

        Loaded plugins: fastestmirror, rhnplugin, security
        An error has occurred: Internal Server Error
        See /var/log/up2date for more information

    Is yum broken?
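
    A sketch of a few checks that separate the two symptoms (paths come from the log above): the "Can't load public GPG key" lines are abrtd complaining about key files that may simply be absent, while the "Internal Server Error" comes from the RHN plugin and should be detailed in /var/log/up2date:

        ls -l /etc/pki/rpm-gpg/
        # re-import the release key if the file exists:
        rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
        tail -50 /var/log/up2date
        # bypass the RHN plugin to see whether yum itself works:
        yum --disableplugin=rhnplugin check-update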

  • Can't make nodejs mingw32: pkg-config can't find gnutls

    - by valya
    I'm trying to compile nodejs using MSYS, mingw32 on Windows 7 64-bit:

        Valentin Golev@VALYASNOTEBOOK /home/Valentin_Golev/nodejs
        $ ./configure
        Checking for program CL   : ok C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\x86_amd64\CL.exe
        Checking for program CL   : ok C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\CL.exe
        Checking for program CL   : ok C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\amd64\CL.exe
        Checking for program CL   : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\CL.exe
        Checking for program CL   : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\CL.exe
        Checking for program CL   : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\x86_amd64\CL.exe
        Checking for program CL   : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\CL.exe
        Checking for program CL   : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\amd64\CL.exe
        Checking for program CL   : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\amd64\CL.exe
        Checking for program LINK : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\amd64\LINK.exe
        Checking for program LIB  : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\amd64\LIB.exe
        Checking for program MT   : ok C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin\x64\MT.exe
        Checking for program RC   : ok C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin\x64\RC.exe
        Checking for msvc : ok
        Checking for msvc : ok
        Checking for library dl : not found
        Checking for library execinfo : not found
        Checking for gnutls >= 2.5.0 : fail
        --- libeio ---
        Checking for library pthread : not found
        Checking for function pthread_create : not found
        error: the configuration failed (see 'C:\\msys\\1.0\\home\\Valentin_Golev\\nodejs\\build\\config.log')

    I have gnutls built and installed! I checked config.log, and there was a command:

        pkg-config --errors-to-stdout --print-errors --atleast-version=2.5.0 gnutls

    I typed it into the console:

        Valentin Golev@VALYASNOTEBOOK /home/Valentin_Golev/nodejs
        $ pkg-config --errors-to-stdout --print-errors --atleast-version=2.5.0 gnutls
        Package gnutls was not found in the pkg-config search path.
        Perhaps you should add the directory containing `gnutls.pc'
        to the PKG_CONFIG_PATH environment variable
        No package 'gnutls' found

    But:

        Valentin Golev@VALYASNOTEBOOK ~
        $ $PKG_CONFIG_PATH
        sh: c:/msys/1.0/local/lib/pkgconfig: is a directory

        Valentin Golev@VALYASNOTEBOOK ~
        $ cd $PKG_CONFIG_PATH

        Valentin Golev@VALYASNOTEBOOK /local/lib/pkgconfig
        $ ls
        gnutls-extra.pc gnutls.pc

    What am I doing wrong?
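
    One thing the transcript doesn't show is whether PKG_CONFIG_PATH is actually exported: a plain shell variable is visible to cd in the current shell but not to child processes like pkg-config. A sketch of how to check and fix, using the path from the session above:

        # if this prints nothing, the variable was never exported
        sh -c 'echo $PKG_CONFIG_PATH'
        export PKG_CONFIG_PATH=/local/lib/pkgconfig
        pkg-config --modversion gnutls   # should now print the version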

  • Unable to compile netmap on Fedora 32 bit

    - by John Elf
    This is the error every time I try to install netmap. Can someone let me know how to install the same on e1000e or ixgbe? I have the kernel headers and source installed.

        [root@localhost e1000]# make KSRC=/usr/src/kernels/2.6.35.6-45.fc14.i686/
        make -C /usr/src/kernels/2.6.35.6-45.fc14.i686/ M=/media/sf_Shared/netmap-linux/net/e1000 modules
        make[1]: Entering directory `/usr/src/kernels/2.6.35.6-45.fc14.i686'
          CC [M]  /media/sf_Shared/netmap-linux/net/e1000/e1000_main.o
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c: In function 'e1000_setup_tx_resources':
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c:1485:2: error: implicit declaration of function 'vzalloc'
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c:1485:20: warning: assignment makes pointer from integer without a cast
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c: In function 'e1000_setup_rx_resources':
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c:1680:20: warning: assignment makes pointer from integer without a cast
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c: In function 'e1000_tx_csum':
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c:2780:2: error: implicit declaration of function 'skb_checksum_start_offset'
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c: In function 'e1000_rx_checksum':
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c:3689:2: error: implicit declaration of function 'skb_checksum_none_assert'
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c: In function 'e1000_restore_vlan':
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c:4617:23: error: 'VLAN_N_VID' undeclared (first use in this function)
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c:4617:23: note: each undeclared identifier is reported only once for each function it appears in
        make[2]: *** [/media/sf_Shared/netmap-linux/net/e1000/e1000_main.o] Error 1
        make[1]: *** [_module_/media/sf_Shared/netmap-linux/net/e1000] Error 2
        make[1]: Leaving directory `/usr/src/kernels/2.6.35.6-45.fc14.i686'
        make: *** [all] Error 2
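
    The failing symbols postdate the 2.6.35 Fedora 14 kernel the module is being built against (vzalloc() and skb_checksum_none_assert() arrived around 2.6.37, VLAN_N_VID around 2.6.38), so the driver source is simply newer than the kernel headers. Short of building against a newer kernel, a compatibility shim is one option; a sketch of the idea for the vzalloc case (an assumption, not part of netmap):

        /* compat shim for kernels < 2.6.37, placed before e1000_main.c's
         * first use of vzalloc(); needs <linux/vmalloc.h> and <linux/string.h> */
        #include <linux/version.h>
        #if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 37)
        static inline void *vzalloc(unsigned long size)
        {
                void *p = vmalloc(size);   /* allocate virtually contiguous memory */
                if (p)
                        memset(p, 0, size); /* and zero it, as vzalloc() would */
                return p;
        }
        #endif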

  • Need Corrected htaccess File

    - by Vince Kronlein
    I'm attempting to use a WordPress plugin called WP Fast Cache, which creates static HTML files from all your posts, pages and categories. It creates the following directory structure inside wp-content:

        wp_fast_cache/
          example.com/
            pagename/
              index.html
            categoryname/
              postname/
                index.html

    basically just a nested directory structure with a final index.html for each item. But the htaccess edits it makes are crazy:

        #start_wp_fast_cache - do not remove this comment
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} ^(GET)
        RewriteCond /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}x__query__x%{QUERY_STRING}index.html -f
        RewriteCond %{HTTP_USER_AGENT} !(iPhone|Windows\sCE|BlackBerry|NetFront|Opera\sMini|Palm\sOS|Blazer|Elaine|^WAP.*$|Plucker|AvantGo|Nokia)
        RewriteCond %{HTTP_COOKIE} !(wordpress_logged_in) [NC]
        RewriteRule ^(.*)$ /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}x__query__x%{QUERY_STRING}index.html [L]
        RewriteCond %{REQUEST_METHOD} ^(GET)
        RewriteCond %{QUERY_STRING} ^$
        RewriteCond /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html -f
        RewriteCond %{HTTP_USER_AGENT} !(iPhone|Windows\sCE|BlackBerry|NetFront|Opera\sMini|Palm\sOS|Blazer|Elaine|^WAP.*$|Plucker|AvantGo|Nokia)
        RewriteCond %{HTTP_COOKIE} !(wordpress_logged_in) [NC]
        RewriteRule ^(.*)$ /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html [L]
        </IfModule>
        #end_wp_fast_cache

    No matter how I try to work this out, I get a 404 Not Found - and not the WordPress 404, but a janky Apache 404. I need to find the correct syntax to route all requests that don't exist (i.e. files or directories) to:

        wp-content/wp_fast_cache/hostname/request_uri/

    So for example:

        Page:     example.com/about-us/                  => wp-content/wp_fast_cache/example.com/about-us/index.html
        Post:     example.com/my-category/my-awesome-post/ => wp-content/wp_fast_cache/example.com/my-category/my-awesome-post/index.html
        Category: example.com/news/                      => wp-content/wp_fast_cache/example.com/news/index.html

    Any help is appreciated.
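
    A likely culprit is that the RewriteRule substitutions use filesystem paths: in a per-directory (.htaccess) context Apache treats the substitution as a URL, so /home/user/... gets re-mapped under the document root and 404s. A sketch of a corrected block under that assumption, testing for the cache file on disk but rewriting to its URL path:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} ^GET$
        RewriteCond %{QUERY_STRING} ^$
        RewriteCond %{HTTP_COOKIE} !wordpress_logged_in [NC]
        # file test against the filesystem, rewrite target as a URL path
        RewriteCond %{DOCUMENT_ROOT}/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html -f
        RewriteRule .* /wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html [L]
        </IfModule>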

  • Password Protect XML-RPC

    - by Terence Eden
    I have a service running on a server which I want to access via XML-RPC. I've installed all the necessary bits. Within /etc/apache2/httpd.conf I have the single line:

        SCGIMount /RPC2 127.0.0.1:5000

    I can run xmlrpc commands from my server - and from any server which connects to /RPC2. What I want to do is password-protect the directory to stop unauthorised usage. Within /etc/apache2/httpd.conf I've added:

        <Location /RPC2>
            AuthName "Private"
            AuthType Basic
            AuthBasicProvider file
            AuthUserFile /home/me/myhtpasswd
            Require user me
        </Location>

    Trying to access /RPC2 brings up the "Authorization Required" box and it accepts my username and password. However, xmlrpc now doesn't work! If I run "xmlrpc localhost some_command" on my server, I get the error:

        Failed. Call failed. HTTP response code is 401, not 200. (XML-RPC fault code -504)

    Is there any way I can password-protect my /RPC2 directory and still have xmlrpc commands work?
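
    The 401 just means the client isn't sending the credentials the browser now supplies. A sketch of two ways to pass HTTP Basic auth with the call (call.xml is a placeholder request body; whether the xmlrpc-c client honours credentials embedded in the URL depends on its transport, so curl is the safer check):

        # same call via curl, with explicit Basic auth
        curl -u me:mypassword -H 'Content-Type: text/xml' --data @call.xml http://localhost/RPC2

        # credentials embedded in the URL, if the xmlrpc client's transport supports it
        xmlrpc http://me:mypassword@localhost/RPC2 some_command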

  • Mac creating files w/ wrong perms on samba share

    - by geoffjentry
    In my group, which is very heterogeneous in terms of machines, we use a Samba share to collaborate on files and such. In all but one case, it works as expected (or at least close enough). The one exception is my boss' laptop, a Snow Leopard MacBook Air. On his desktop (also Snow Leopard), if he creates a file it ends up server-side with perms of 774, but when he creates it with the Air, the perms are 644. The key problem is the lack of group write permission on the laptop-created files. What's really confusing is that everything I've looked at on the two machines is identical - same version of OS X, same version of Samba (3.0.25b-apple), same settings for the same software, etc. I can't imagine why one machine would be different than the other, but it is. To try to be complete with the description, here is the relevant portion of my smb.conf file:

        comment = my Share
        path = /path/to/share
        public = no
        writeable = yes
        printable = no
        force group = myshare
        directory mask = 0770
        create mask = 0770
        force create mode = 0770
        force directory mode = 0770

    EDIT: I looked at three more Macs and all of them worked as expected, which leaves this one laptop the true oddball. That wasn't as good a test, though, as they were all running Leopard.
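
    Given that force create mode = 0770 should override whatever mode the client requests, one sanity check is to confirm the laptop is really landing in this share section and speaking SMB at all (a client silently falling back to another share definition, or mounting over AFP, would bypass these settings). A sketch, run on the server while both Macs have files open:

        # brief list of connected clients and the share each one is using
        sudo smbstatus -b
        # dump the effective config for this share after defaults/includes
        testparm -s /etc/samba/smb.conf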

  • How can I disable Kerberos authentication for only the root of my site?

    - by petRUShka
    I have Kerberos-based authentication and I want to disable it for only the root URL, http://mysite.com/, while it continues to work fine on any other page like http://mysite.com/page1. I have this in my .htaccess:

        AuthType Kerberos
        AuthName "Domain login"
        KrbAuthRealms DOMAIN.COM
        KrbMethodK5Passwd on
        Krb5KeyTab /etc/httpd/httpd.keytab
        require valid-user

    As a workaround it would be possible to turn it off in the virtual host config instead of .htaccess; unfortunately, I don't know how to do that either. Part of my vhost.conf:

        <Directory /home/user/www/current/public/>
            Options -MultiViews +FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    UPDATE: I'm using Apache/2.2.3 (Linux/SUSE). I tried this version of .htaccess:

        SetEnvIf Request_URI ^/$ rootdir=1
        Allow from env=rootdir
        Satisfy Any
        AuthType Kerberos
        AuthName "Domain login"
        KrbAuthRealms DOMAIN.COM
        KrbMethodK5Passwd on
        Krb5KeyTab /etc/httpd/httpd.keytab
        require valid-user

    Unfortunately, that config turned the Kerberos AuthType off for all URLs. I tried placing the first 3 lines (SetEnvIf through Satisfy Any) after the main block, but it didn't help me.
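
    A sketch of the vhost-level workaround, assuming Apache 2.2 semantics: Satisfy Any lets either a valid login or the Allow rule pass, and a <LocationMatch> anchored to ^/$ touches only the site root, while Location sections merge after the .htaccess-level auth:

        <LocationMatch "^/$">
            Order allow,deny
            Allow from all
            Satisfy Any
        </LocationMatch>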

  • Libraries merged folder views

    - by Stigma
    So I pretty much love the Windows 7 Libraries feature, and saw one use for it that I thought would be perfect, but I can't seem to manage it: basically, a merged view of different folder structures.

    Suppose I make a new generic library and add three locations to it: C:\Test\, D:\Test\ and D:\temp\Test\. These may look somewhat okay as long as there are no duplicates in those folders. (It wants to group them based on the included directory, which one can work around by looking on Google - I don't have the precise trick on hand, I am afraid.) But when you get collisions and, say, two of those directories have a Sub directory in them, stuff becomes unusable (assuming the Arrange by: Folder view). You'll have multiple folders listed named Sub, which is pretty useless when looking for data.

    I want folders to get 'merged', which ought to be possible somehow since it can create merged views based on artist, album, etc. in other views. So all subdirectories that are duplicated (recursively checking for duplicates inside those, and so on) ought to be merged as far as the view is concerned. If files collide, I don't really care what happens - hide one, show both, filter out duplicates, whatever. (Although an option would be nice...)

    Anyhow, does anyone know how to get such a 'merged folder structure' functionality for Libraries? It would be really useful for me.

  • Apache2 shared server: default webpage

    - by Eamorr
    Greetings, I have an Apache2 server with 4 domain names pointing to my server's single IP address. When I type in www.site1.com it serves pages from /home/eamorr/site1/index.php, and the same goes for www.site2.com, www.site3.com and www.site4.com. However, when I type a domain into the address bar of a browser without the www, it always redirects to site1.com:

        site1.com -> site1.com
        site2.com -> site1.com
        site3.com -> site1.com
        site4.com -> site1.com

    How do I configure Apache to do the following?

        site1.com -> site1.com
        site2.com -> site2.com
        site3.com -> site3.com
        site4.com -> site4.com

    Here is my default config:

        ServerAdmin [email protected]
        ServerName www.site1.com
        DocumentRoot /home/eamorr/sites/site1.com/www
        DirectoryIndex index.php index.html
        <Directory /home/eamorr/sites/site1.com/www>
            Options Indexes FollowSymLinks MultiViews
            Options -Indexes
            AllowOverride all
            Order allow,deny
            allow from all
            php_value session.cookie_domain ".site1.com"
            # Added by EOH for redirection
            RewriteEngine on
            RewriteRule ^([^/.]+)/?$ driver.php?uname=$1 [L]
        </Directory>
        ErrorLog /var/log/apache2/error.log
        # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
        LogLevel warn
        CustomLog /var/log/apache2/access.log combined

    I'd like to look at the domain name and then redirect to www.sitex.com. Is there an Apache rule to do this? I hope someone can help; my sysadmin/Apache2 config skills aren't the best. Many thanks in advance,
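
    A sketch of the usual fix: each vhost declares the bare domain as a ServerAlias (otherwise a request for site2.com matches no vhost by name and falls through to the first vhost defined, which is why everything lands on site1.com), plus an optional rewrite to canonicalise onto the www form. Repeated per site:

        <VirtualHost *:80>
            ServerName www.site2.com
            ServerAlias site2.com
            DocumentRoot /home/eamorr/sites/site2.com/www
            # redirect bare-domain requests to www.site2.com
            RewriteEngine on
            RewriteCond %{HTTP_HOST} ^site2\.com$ [NC]
            RewriteRule ^(.*)$ http://www.site2.com$1 [R=301,L]
        </VirtualHost>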

  • Help me understand Ubuntu user/group permissions.

    - by Bartek
    I'm beginning to deal with more than one user on my system (it's a VPS serving some sites) and I need to make sure I understand how group permissions work. Here's my setup: I have an account named "admin"; it's basically the primary account that is used for serving most of the sites that I control myself. Now, I added a second account named "ville", as one of my users wants to be able to administer that site. I can do this the easy way and just chown their domains folder under the ville user and voilà, they have permission to do whatever they need, and so forth. However, let's say I also want to give the admin user access to the files (modifying and all). How can I put both users into the same group and give them both permission? I've tried doing:

        sudo usermod -a -G admin ville

    to add ville to the admin group, but ville still cannot edit files owned by admin. Permissions on the primary directory for the ville user are read/write for both owner and group, and the current ownership of the files is admin:admin, but ville still can't write to the directory. So, what should I be doing here to get this right and secure at the same time? Thank you.
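
    A sketch of one workable arrangement, assuming a dedicated shared group is acceptable (note that group membership changes only take effect at the user's next login, which alone can explain "still can't write" right after a usermod):

        sudo groupadd webteam
        sudo usermod -a -G webteam admin
        sudo usermod -a -G webteam ville
        sudo chgrp -R webteam /home/ville/domains
        # g+rwX: group read/write, traverse bit on directories only
        sudo chmod -R g+rwX /home/ville/domains
        # setgid on directories so new files inherit the webteam group
        sudo find /home/ville/domains -type d -exec chmod g+s {} \;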

  • puma init.d for centos 6 fails with runuser: user /var/log/puma.log does not exist

    - by Rubytastic
    Trying to get an init.d/puma script to work on CentOS 6. It throws the error:

        runuser: user /var/log/puma.log does not exist

    I run this from the /srv/books/current folder but it fails. I tried to debug the values but don't quite get what is missing and why it throws this error.

        #! /bin/sh
        # puma - this script starts and stops the puma daemon
        #
        # chkconfig:   - 85 15
        # description: Puma
        # processname: puma
        # config:      /etc/puma.conf
        # pidfile:     /srv/books/current/tmp/pids/puma.pid
        # Author: Darío Javier Cravero <[email protected]>
        #
        # Do NOT "set -e"
        # Original script https://github.com/puma/puma/blob/master/tools/jungle/puma
        # It was modified here by Stanislaw Pankevich <[email protected]>
        # to run on CentOS 5.5 boxes.
        # Script works perfectly on CentOS 5: script uses its native daemon().
        # Puma is being stopped/restarted by sending signals, control app is not used.

        # Source function library.
        . /etc/rc.d/init.d/functions

        # Source networking configuration.
        . /etc/sysconfig/network

        # Check that networking is up.
        [ "$NETWORKING" = "no" ] && exit 0

        # PATH should only include /usr/* if it runs after the mountnfs.sh script
        PATH=/usr/local/bin:/usr/local/sbin/:/sbin:/usr/sbin:/bin:/usr/bin
        DESC="Puma rack web server"
        NAME=puma
        DAEMON=$NAME
        SCRIPTNAME=/etc/init.d/$NAME
        CONFIG=/etc/puma.conf
        JUNGLE=`cat $CONFIG`
        RUNPUMA=/usr/local/bin/run-puma

        # Skipping the following non-CentOS string
        # Load the VERBOSE setting and other rcS variables
        # . /lib/init/vars.sh

        # CentOS does not have these functions natively
        log_daemon_msg() { echo "$@"; }
        log_end_msg() { [ $1 -eq 0 ] && RES=OK; logger ${RES:=FAIL}; }

        # Define LSB log_* functions.
        # Depend on lsb-base (>= 3.0-6) to ensure that this file is present.
        . /lib/lsb/init-functions

        #
        # Function that performs a clean up of puma.* files
        #
        cleanup() {
            echo "Cleaning up puma temporary files..."
            echo $1;
            PIDFILE=$1/tmp/puma/puma.pid
            STATEFILE=$1/tmp/puma/puma.state
            SOCKFILE=$1/tmp/puma/puma.sock
            rm -f $PIDFILE $STATEFILE $SOCKFILE
        }

        #
        # Function that starts the jungle
        #
        do_start() {
            log_daemon_msg "=> Running the jungle..."
            for i in $JUNGLE; do
                dir=`echo $i | cut -d , -f 1`
                user=`echo $i | cut -d , -f 2`
                config_file=`echo $i | cut -d , -f 3`
                if [ "$config_file" = "" ]; then
                    config_file="$dir/puma/config.rb"
                fi
                log_file=`echo $i | cut -d , -f 4`
                if [ "$log_file" = "" ]; then
                    log_file="$dir/puma/puma.log"
                fi
                do_start_one $dir $user $config_file $log_file
            done
        }

        do_start_one() {
            PIDFILE=$1/puma/puma.pid
            if [ -e $PIDFILE ]; then
                PID=`cat $PIDFILE`
                # If the puma isn't running, run it, otherwise restart it.
                if [ "`ps -A -o pid= | grep -c $PID`" -eq 0 ]; then
                    do_start_one_do $1 $2 $3 $4
                else
                    do_restart_one $1
                fi
            else
                do_start_one_do $1 $2 $3 $4
            fi
        }

        do_start_one_do() {
            log_daemon_msg "--> Woke up puma $1"
            log_daemon_msg "user $2"
            log_daemon_msg "log to $4"
            cleanup $1;
            daemon --user $2 $RUNPUMA $1 $3 $4
        }

        #
        # Function that stops the jungle
        #
        do_stop() {
            log_daemon_msg "=> Putting all the beasts to bed..."
            for i in $JUNGLE; do
                dir=`echo $i | cut -d , -f 1`
                do_stop_one $dir
            done
        }

        #
        # Function that stops the daemon/service
        #
        do_stop_one() {
            log_daemon_msg "--> Stopping $1"
            PIDFILE=$1/tmp/puma/puma.pid
            STATEFILE=$1/tmp/puma/puma.state
            echo $PIDFILE
            if [ -e $PIDFILE ]; then
                PID=`cat $PIDFILE`
                echo "Pid:"
                echo $PID
                if [ "`ps -A -o pid= | grep -c $PID`" -eq 0 ]; then
                    log_daemon_msg "---> Puma $1 isn't running."
                else
                    log_daemon_msg "---> About to kill PID `cat $PIDFILE`"
                    # pumactl --state $STATEFILE stop
                    # Many daemons don't delete their pidfiles when they exit.
                    kill -9 $PID
                fi
                cleanup $1
            else
                log_daemon_msg "---> No puma here..."
            fi
            return 0
        }

        #
        # Function that restarts the jungle
        #
        do_restart() {
            for i in $JUNGLE; do
                dir=`echo $i | cut -d , -f 1`
                do_restart_one $dir
            done
        }

        #
        # Function that sends a SIGUSR2 to the daemon/service
        #
        do_restart_one() {
            PIDFILE=$1/tmp/puma/puma.pid
            i=`grep $1 $CONFIG`
            dir=`echo $i | cut -d , -f 1`
            if [ -e $PIDFILE ]; then
                log_daemon_msg "--> About to restart puma $1"
                # pumactl --state $dir/tmp/puma/state restart
                kill -s USR2 `cat $PIDFILE`
                # TODO Check if process exist
            else
                log_daemon_msg "--> Your puma was never playing... Let's get it out there first"
                user=`echo $i | cut -d , -f 2`
                config_file=`echo $i | cut -d , -f 3`
                if [ "$config_file" = "" ]; then
                    config_file="$dir/config/puma.rb"
                fi
                log_file=`echo $i | cut -d , -f 4`
                if [ "$log_file" = "" ]; then
                    log_file="$dir/log/puma.log"
                fi
                do_start_one $dir $user $config_file $log_file
            fi
            return 0
        }

        #
        # Function that statuses the jungle
        #
        do_status() {
            for i in $JUNGLE; do
                dir=`echo $i | cut -d , -f 1`
                do_status_one $dir
            done
        }

        #
        # Function that sends a SIGUSR2 to the daemon/service
        #
        do_status_one() {
            PIDFILE=$1/tmp/puma/pid
            i=`grep $1 $CONFIG`
            dir=`echo $i | cut -d , -f 1`
            if [ -e $PIDFILE ]; then
                log_daemon_msg "--> About to status puma $1"
                pumactl --state $dir/tmp/puma/state stats
                # kill -s USR2 `cat $PIDFILE`
                # TODO Check if process exist
            else
                log_daemon_msg "--> $1 isn't there :(..."
            fi
            return 0
        }

        do_add() {
            str=""
            # App's directory
            if [ -d "$1" ]; then
                if [ "`grep -c "^$1" $CONFIG`" -eq 0 ]; then
                    str=$1
                else
                    echo "The app is already being managed. Remove it if you want to update its config."
                    exit 1
                fi
            else
                echo "The directory $1 doesn't exist."
                exit 1
            fi
            # User to run it as
            if [ "`grep -c "^$2:" /etc/passwd`" -eq 0 ]; then
                echo "The user $2 doesn't exist."
                exit 1
            else
                str="$str,$2"
            fi
            # Config file
            if [ "$3" != "" ]; then
                if [ -e $3 ]; then
                    str="$str,$3"
                else
                    echo "The config file $3 doesn't exist."
                    exit 1
                fi
            fi
            # Log file
            if [ "$4" != "" ]; then
                str="$str,$4"
            fi
            # Add it to the jungle
            echo $str >> $CONFIG
            log_daemon_msg "Added a Puma to the jungle: $str. You still have to start it though."
        }

        do_remove() {
            if [ "`grep -c "^$1" $CONFIG`" -eq 0 ]; then
                echo "There's no app $1 to remove."
            else
                # Stop it first.
                do_stop_one $1
                # Remove it from the config.
                sed -i "\\:^$1:d" $CONFIG
                log_daemon_msg "Removed a Puma from the jungle: $1."
            fi
        }

        case "$1" in
          start)
            [ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC" "$NAME"
            if [ "$#" -eq 1 ]; then
                do_start
            else
                i=`grep $2 $CONFIG`
                dir=`echo $i | cut -d , -f 1`
                user=`echo $i | cut -d , -f 2`
                config_file=`echo $i | cut -d , -f 3`
                if [ "$config_file" = "" ]; then
                    config_file="$dir/config/puma.rb"
                fi
                log_file=`echo $i | cut -d , -f 4`
                if [ "$log_file" = "" ]; then
                    log_file="$dir/log/puma.log"
                fi
                do_start_one $dir $user $config_file $log_file
            fi
            case "$?" in
                0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
                2)   [ "$VERBOSE" != no ] && log_end_msg 1 ;;
            esac
            ;;
          stop)
            [ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC" "$NAME"
            if [ "$#" -eq 1 ]; then
                do_stop
            else
                i=`grep $2 $CONFIG`
                dir=`echo $i | cut -d , -f 1`
                do_stop_one $dir
            fi
            case "$?" in
                0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
                2)   [ "$VERBOSE" != no ] && log_end_msg 1 ;;
            esac
            ;;
          status)
            # TODO Implement.
            log_daemon_msg "Status $DESC" "$NAME"
            if [ "$#" -eq 1 ]; then
                do_status
            else
                i=`grep $2 $CONFIG`
                dir=`echo $i | cut -d , -f 1`
                do_status_one $dir
            fi
            case "$?" in
                0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
                2)   [ "$VERBOSE" != no ] && log_end_msg 1 ;;
            esac
            ;;
          restart)
            log_daemon_msg "Restarting $DESC" "$NAME"
            if [ "$#" -eq 1 ]; then
                do_restart
            else
                i=`grep $2 $CONFIG`
                dir=`echo $i | cut -d , -f 1`
                do_restart_one $dir
            fi
            case "$?" in
                0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
                2)   [ "$VERBOSE" != no ] && log_end_msg 1 ;;
            esac
            ;;
          add)
            if [ "$#" -lt 3 ]; then
                echo "Please, specifiy the app's directory and the user that will run it at least."
                echo "  Usage: $SCRIPTNAME add /path/to/app user /path/to/app/config/puma.rb /path/to/app/config/log/puma.log"
                echo "  config and log are optionals."
                exit 1
            else
                do_add $2 $3 $4 $5
            fi
            case "$?" in
                0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
                2)   [ "$VERBOSE" != no ] && log_end_msg 1 ;;
            esac
            ;;
          remove)
            if [ "$#" -lt 2 ]; then
                echo "Please, specifiy the app's directory to remove."
                exit 1
            else
                do_remove $2
            fi
            case "$?" in
                0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
                2)   [ "$VERBOSE" != no ] && log_end_msg 1 ;;
            esac
            ;;
          *)
            echo "Usage:" >&2
            echo "  Run the jungle: $SCRIPTNAME {start|stop|status|restart}" >&2
            echo "  Add a Puma:     $SCRIPTNAME add /path/to/app user /path/to/app/config/puma.rb /path/to/app/config/log/puma.log"
            echo "                  config and log are optionals."
            echo "  Remove a Puma:  $SCRIPTNAME remove /path/to/app"
            echo "  On a Puma:      $SCRIPTNAME {start|stop|status|restart} PUMA-NAME" >&2
            exit 3
            ;;
        esac

        :
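
    For what it's worth about the error itself: the script splits each /etc/puma.conf line on commas (dir,user,config,log) and then calls "daemon --user $2". So if a line has the wrong number of fields - or the argument given to start matches more than one config line via grep - $2 can end up being the log path, and runuser then complains that user /var/log/puma.log does not exist. A sketch of a line of the expected shape (values are placeholders):

        # /etc/puma.conf: one app per line, comma-separated, no spaces
        /srv/books/current,deploy,/srv/books/current/config/puma.rb,/var/log/puma.log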

  • Munin graphing by CGI

    - by Vaughn Hawk
    I have Munin working just fine, but any time I try to do CGI graphing, it just stops graphing... no errors in the log, nothing. I've followed the instructions here: http://munin-monitoring.org/wiki/CgiHowto - and it should be working. Here's my munin.conf setup, at least the parts that matter:

        dbdir   /var/lib/munin
        htmldir /var/www/munin
        logdir  /var/log/munin
        rundir  /var/run/munin
        tmpldir /etc/munin/templates

        graph_strategy cgi
        cgiurl /usr/lib/cgi-bin
        cgiurl_graph /cgi-bin/munin-cgi-graph

    And then the host info, yada yada. graph_strategy cgi and cgiurl are commented out in munin.conf - that's because if I uncomment them, graphing stops working. Again, I get no errors in the logs, just blank images where the graphs used to be. Comment out cgi? As soon as munin html runs again, everything is back to normal. I'm running the latest version of munin and munin-node. I've tried FastCGI and regular CGI - permissions for all of the directories involved are munin:www-data - and my httpd.conf file looks like this:

        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory /usr/lib/cgi-bin/>
            AllowOverride None
            SetHandler fastcgi-script
            Options ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
        </Directory>
        <Location /cgi-bin/munin-cgi-graph>
            SetHandler fastcgi-script
        </Location>

    Does anyone have any ideas? Without this working, at least from what I understand, Munin just graphs stuff even if no one is looking at the graphs - you add 100 servers to graph, and this starts to become a problem. Hope someone has run into this and can help me out. Thanks!
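
    Since the images come back blank with nothing in Munin's own logs, one way to surface the real error is to run the grapher by hand as the web server user; failures (usually permissions on the rrd/log/spool dirs for www-data) then print to the terminal. A sketch, with the host and plugin names as placeholders:

        sudo -u www-data env PATH_INFO=/localdomain/host.localdomain/cpu-day.png \
            /usr/lib/cgi-bin/munin-cgi-graph | head -c 200
        # success: CGI headers followed by binary PNG data
        # failure: a Perl error or permission message instead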

  • How to setup Python with Lighttpd and FastCGI (like PHP)

    - by johndir
    Running lighttpd on Linux, I would like to be able to execute Python scripts just the way I execute PHP scripts. The goal is to be able to execute arbitrary script files stored in the WWW directory, e.g. http://www.example.com/*.py. I would not like to spawn a new Python instance (interpreter) for every request (as is done in regular CGI, if I'm not mistaken), which is why I'm using FastCGI. Following lighttpd's documentation, the following is the FastCGI part of my config file. The problem is that it always runs the /usr/local/bin/python-fcgi script for every *.py file, regardless of the content of that file:

        http://www.example.com/script.py
        => output: "python-fcgi: test" (regardless of the content of script.py)

    I'm not interested in using any framework; I simply want to execute individual [web] scripts. How can I make it act like PHP, executing any script in the WWW directory by requesting its path?

    /etc/lighttpd/conf.d/fastcgi.conf:

        server.modules += ( "mod_fastcgi" )
        index-file.names += ( "index.php" )

        fastcgi.server = (
            ".php" => (
                "localhost" => (
                    "bin-path" => "/usr/bin/php-cgi",
                    "socket" => "/var/run/lighttpd/php-fastcgi.sock",
                    "max-procs" => 4, # default value
                    "bin-environment" => (
                        "PHP_FCGI_CHILDREN" => "1", # default value
                    ),
                    "broken-scriptfilename" => "enable"
                )
            ),
            ".py" => (
                "python-fcgi" => (
                    "socket" => "/var/run/lighttpd/fastcgi.python.socket",
                    "bin-path" => "/usr/local/bin/python-fcgi",
                    "check-local" => "disable",
                    "max-procs" => 1,
                )
            )
        )

    /usr/local/bin/python-fcgi:

        #!/usr/bin/python2
        def myapp(environ, start_response):
            start_response('200 OK', [('Content-Type', 'text/plain')])
            return ['python-fcgi: test\n']

        if __name__ == '__main__':
            from flup.server.fcgi import WSGIServer
            WSGIServer(myapp).run()
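
    That behaviour is expected: the handler is one persistent WSGI app, and nothing in it looks at which .py file was requested. A sketch of a dispatcher that does (an assumption, not a mechanism lighttpd provides itself): it reads the script path lighttpd passes in the FastCGI environment and executes that file in-process, capturing its stdout, PHP-style. Python 2 is assumed to match the script above:

        #!/usr/bin/python2
        import os, sys, StringIO
        from flup.server.fcgi import WSGIServer

        def dispatch(environ, start_response):
            # lighttpd passes the on-disk path of the requested script here
            script = environ.get('SCRIPT_FILENAME', '')
            if not os.path.isfile(script):
                start_response('404 Not Found', [('Content-Type', 'text/plain')])
                return ['no such script\n']
            buf, old_stdout = StringIO.StringIO(), sys.stdout
            sys.stdout = buf
            try:
                # run the requested file; whatever it prints becomes the body
                execfile(script, {'__name__': '__main__', 'environ': environ})
            finally:
                sys.stdout = old_stdout
            start_response('200 OK', [('Content-Type', 'text/html')])
            return [buf.getvalue()]

        if __name__ == '__main__':
            WSGIServer(dispatch).run()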

  • OpenVPN on Tomato and Vista - can't see my network

    - by Ian
    I followed the instructions here (http://todayguesswhat.blogspot.ca/2011/03/quick-simple-vpn-setup-guide-using.html) to set up a TCP connection to OpenVPN on my Tomato router. I used TCP because the place I usually surf at seems to have the other ports blocked. My Vista laptop is able to connect to the router, but I don't appear to be getting an IP address. I'm able to access my router's admin page, but I can't see the network at home, and when I browse to WhatsMyIP I see my home IP. Here are the results of "route print -4" when I'm just connected to the library, and when I've fired up the VPN connection as well.

    Library only:

        ===========================================================================
        Interface List
         22 ...00 ff c4 a0 e7 5c ...... TAP-Win32 Adapter V9
         15 ...00 23 4e 20 b3 64 ...... Atheros AR9281 Wireless Network Adapter
         10 ...00 23 8b 39 ec 71 ...... Marvell Yukon 88E8040T PCI-E Fast Ethernet Controller
          1 ........................... Software Loopback Interface 1
         11 ...00 00 00 00 00 00 00 e0  isatap.{834A8A0A-5E2C-47D0-9673-7965DE8B5470}
         14 ...02 00 54 55 4e 01 ...... Teredo Tunneling Pseudo-Interface
         17 ...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter #3
         20 ...00 00 00 00 00 00 00 e0  6TO4 Adapter
         18 ...00 00 00 00 00 00 00 e0  6TO4 Adapter
         19 ...00 00 00 00 00 00 00 e0  6TO4 Adapter
         23 ...00 00 00 00 00 00 00 e0  isatap.{C4A0E75C-765E-4F7D-A55C-77945779816A}
         34 ...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter #5
        ===========================================================================

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0        10.1.29.1      10.1.29.117     25
                10.1.29.0    255.255.255.0         On-link       10.1.29.117    281
              10.1.29.117  255.255.255.255         On-link       10.1.29.117    281
              10.1.29.255  255.255.255.255         On-link       10.1.29.117    281
                127.0.0.0        255.0.0.0         On-link         127.0.0.1    306
                127.0.0.1  255.255.255.255         On-link         127.0.0.1    306
          127.255.255.255  255.255.255.255         On-link         127.0.0.1    306
                224.0.0.0        240.0.0.0         On-link         127.0.0.1    306
                224.0.0.0        240.0.0.0         On-link       10.1.29.117    281
          255.255.255.255  255.255.255.255         On-link         127.0.0.1    306
          255.255.255.255  255.255.255.255         On-link       10.1.29.117    281
        ===========================================================================

    Library and TCP OpenVPN:

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0        10.1.29.1      10.1.29.117     25
                  0.0.0.0          0.0.0.0      192.168.1.1    192.168.1.116     30
                  0.0.0.0        128.0.0.0      192.168.1.1    192.168.1.116     30
                10.1.29.0    255.255.255.0         On-link       10.1.29.117    281
              10.1.29.117  255.255.255.255         On-link       10.1.29.117    281
              10.1.29.255  255.255.255.255         On-link       10.1.29.117    281
            24.212.205.68  255.255.255.255        10.1.29.1      10.1.29.117     25
                127.0.0.0        255.0.0.0         On-link         127.0.0.1    306
                127.0.0.1  255.255.255.255         On-link         127.0.0.1    306
          127.255.255.255  255.255.255.255         On-link         127.0.0.1    306
                128.0.0.0        128.0.0.0      192.168.1.1    192.168.1.116     30
              192.168.1.0    255.255.255.0         On-link     192.168.1.116    286
            192.168.1.116  255.255.255.255         On-link     192.168.1.116    286
            192.168.1.255  255.255.255.255         On-link     192.168.1.116    286
                224.0.0.0        240.0.0.0         On-link         127.0.0.1    306
                224.0.0.0        240.0.0.0         On-link     192.168.1.116    286
                224.0.0.0        240.0.0.0         On-link       10.1.29.117    281
          255.255.255.255  255.255.255.255         On-link         127.0.0.1    306
          255.255.255.255  255.255.255.255         On-link     192.168.1.116    286
          255.255.255.255  255.255.255.255         On-link       10.1.29.117    281
        ===========================================================================

    Thanks for any advice. I looked at one of the answers, but I'm not sure it applied to me: it said that 10.*.*.* was the VPN connection, but I appear to have 10.*.*.* addresses when I connect just to the library.
