Search Results

Search found 18756 results on 751 pages for 'generate images'.

  • Vagrant - Failed login, ssh not set up

    - by motleydev
    This question is twofold, because somewhere in my attempts to solve the problem I created a new one.

    First: I was trying to vagrant up using a Vagrantfile based on the standard hashicorp/precise32 box. Everything worked up until

        default: SSH auth method: private key

    where it would eventually time out. Enabling the GUI in the Vagrantfile showed that the machine never actually logs in. I can use the standard user/pass and log in from that point, but the vagrant up process still remains at that prior status.

    Here's where my understanding might be a little dim. I've tried setting the auth method from the insecure_pass_key to my root ~/.ssh/id_rsa or whichever one I wanted to use. I'm not entirely sure where to put a copy of the public key or my authorized keys file. I've got a .vagrant.d folder in my user dir (I'm on OS X) which seems to contain the box images. I've got a .vagrant folder in the directory with the Vagrantfile which seems to contain the specific machine I am building off of. I've tried poring over the docs and forums, but I seem to be missing a key concept here.

    And now: after a host of tips/tricks such as rolling back my VirtualBox install and uninstalling/reinstalling Vagrant and VirtualBox several times, when I try to run vagrant ssh-config with a Vagrantfile based on the same hashicorp/precise32, it says that the box is not enabled for SSH. Specifically, this error:

        The provider for this Vagrant-managed machine is reporting that it
        is not yet ready for SSH. Depending on your provider this can carry
        different meanings. Make sure your machine is created and running and
        try again. Additionally, check the output of `vagrant status` to verify
        that the machine is in the state that you expect. If you continue to
        get this error message, please view the documentation for the provider
        you're using.

    So now I am slightly up a creek. Any help would be appreciated, if only to clarify a concept.

    Some pertinent info:

    - I'm on OS X Mavericks.
    - Because I'm running a dual-HD system with system files on one HD and user files on another, my permissions are a little wonky and VBoxManage will only let me run commands via sudo - not sure if it's pertinent, but maybe.
    - I have no idea what I'm doing. That part is perhaps more important.
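
    For reference, a minimal diagnostic sketch (default paths and port are assumptions, not taken from the question): inspect the SSH settings Vagrant computes and try the key by hand.

        cd path/to/project     # the directory containing the Vagrantfile (assumed)
        vagrant status
        vagrant ssh-config     # prints the HostName, Port and IdentityFile vagrant will use
        # try the default insecure key directly; 2222 is the usual forwarded port
        ssh -i ~/.vagrant.d/insecure_private_key -p 2222 vagrant@127.0.0.1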

  • xen-create-image does not create an initrd or initramfs image and domU does not start with the system image

    - by user219372
    I have Fedora 19 as Dom0. To create an image I run:

        # xen-create-image --hostname=debian-wheezy --memory=512Mb --dhcp --size=20Gb --swap=512Mb --dir=/xen --arch=amd64 --dist=wheezy

    After generation finished, I start the VM and see:

        # xl create /etc/xen/debian-wheezy.cfg
        Parsing config from /etc/xen/debian-wheezy.cfg
        libxl: error: libxl_dom.c:409:libxl__build_pv: xc_dom_ramdisk_file failed: No such file or directory
        libxl: error: libxl_create.c:919:domcreate_rebuild_done: cannot (re-)build domain: -3

    In /etc/xen/debian-wheezy.cfg I have:

        #
        #  Kernel + memory size
        #
        kernel = '/boot/vmlinuz-3.11.2-201.fc19.x86_64'
        ramdisk = '/boot/initrd.img-3.11.2-201.fc19.x86_64'

    and ls -1 /boot/*201* shows:

        /boot/config-3.11.2-201.fc19.x86_64
        /boot/initramfs-3.11.2-201.fc19.x86_64.img
        /boot/System.map-3.11.2-201.fc19.x86_64
        /boot/vmlinuz-3.11.2-201.fc19.x86_64

    If I then fix the ramdisk directive in the .cfg file to /boot/initramfs-3.11.2-201.fc19.x86_64.img, the VM will start but the OS inside will not boot. In the tail of xl console I get:

        [  OK  ] Reached target Basic System.
        dracut-initqueue[130]: Warning: Could not boot.
        dracut-initqueue[130]: Warning: /dev/disk/by-uuid/085883ad-73ca-45cc-8bc5-e6249f869b26 does not exist
        dracut-initqueue[130]: Warning: /dev/fedora/root does not exist
        dracut-initqueue[130]: Warning: /dev/fedora/swap does not exist
        dracut-initqueue[130]: Warning: /dev/mapper/fedora-root does not exist
        dracut-initqueue[130]: Warning: /dev/mapper/fedora-swap does not exist
        dracut-initqueue[130]: Warning: /dev/xvda2 does not exist
        Starting Dracut Emergency Shell...
        Warning: /dev/disk/by-uuid/085883ad-73ca-45cc-8bc5-e6249f869b26 does not exist
        Warning: /dev/fedora/root does not exist
        Warning: /dev/fedora/swap does not exist
        Warning: /dev/mapper/fedora-root does not exist
        Warning: /dev/mapper/fedora-swap does not exist
        Warning: /dev/xvda2 does not exist
        Generating "/run/initramfs/sosreport.txt"
        Entering emergency mode. Exit the shell to continue.
        Type "journalctl" to view system logs.
        You might want to save "/run/initramfs/sosreport.txt" to a USB stick or /boot after mounting them and attach it to a bug report.
        dracut:/#

    The .img files in /xen/domains/debian-wheezy exist and are listed in the disk section of debian-wheezy.cfg. So what should I do?

    Update: I've found that xl does not mount the images. In debian-wheezy.cfg I have this:

        root = '/dev/xvda2 ro'
        disk = [
            'file:/xen/domains/debian-wheezy/disk.img,xvda2,w',
            'file:/xen/domains/debian-wheeze/swap.img,xvda1,w',
        ]

    And there are no /dev/xvda*, /dev/sda* or /dev/hda* files in the VM.
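
    A hedged diagnostic sketch (loop device and mount point assumed): the dracut warnings above are a Fedora Dom0 initramfs looking for Fedora volumes inside a Debian guest, so it can help to check what the guest image itself contains before pointing kernel/ramdisk at it.

        losetup -f --show /xen/domains/debian-wheezy/disk.img   # prints e.g. /dev/loop0
        mount /dev/loop0 /mnt
        ls /mnt/boot      # does the guest carry its own kernel/initrd to boot from (e.g. via pygrub)?
        umount /mnt && losetup -d /dev/loop0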

  • Checking the configuration of two systems to determine changes

    - by None
    We are standing up a replicant data center at work and need to ensure that the new data center is configured (nearly) identically to the original. The new data center will be differently addressed and named than the original, and will have differing user accounts, but all the COTS, patches, and configurations should be the same.

    We would normally ghost the original servers and install those images onto the new machines; however, we have a few problematic pieces of COTS that require we install them outside of an image, due to how they capture the setup of the network during their installation and maintain it within their configuration information (in some cases storing it in various databases). We have tried multiple times, and this piece of COTS cannot be captured within a ghost image unless the destination machine will have an identical network setup (all the same IPs, hostnames, user accounts, etc. across the entire network) as the original. In truth, it is the setup of these special COTS that I want to audit the most, because they are difficult to install and configure in the first place.

    In light of the fact that we can't simply ghost, I'm trying to find a reasonable manner to audit the new data center and check to see if it is set up like the original (some sort of system-wide configuration audit or integrity check). I'm considering using something like Tripwire for Servers to capture the configuration on the source machines and then run an audit on the destination machines. I understand that it will still show some differences due to the minor config changes, but I'm hoping that it will eliminate the majority of the work.

    Here are some of the constraints I'm working under:

    - The data center is comprised of multiple Windows and Linux machines of differing versions (about 20 total).
    - I absolutely cannot ghost or snap any other type of image of these machines... at least not in their final configuration.
    - I want to audit the final configuration to ensure all of the COTS, patches, configurations, etc. are installed and set up properly (as compared to the original data center).
    - I would rather not install any additional tools on these machines... I'd much rather run it from a standalone machine or off a DVD.
    - Price of tools is important but not an impossible burden; however, getting a solution soon is important (I can't take the time to roll my own tools to do this).
    - For the COTS that stores the network information, I don't know all of the places it stores the network information... so it would be unlikely I could find a way in the near future to adjust its setup after the installation has occurred.

    Anyone have any thoughts or alternate approaches? Can anyone recommend tools that would be usable for system-wide configuration audits?
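
    As a point of comparison, a hedged sketch of a low-tech fingerprint pass for the Linux hosts (output paths assumed; it can run over ssh or from a live DVD so nothing is installed):

        rpm -qa | sort > /tmp/$(hostname)-packages.txt
        find /etc -type f -exec md5sum {} + | sort -k2 > /tmp/$(hostname)-etc.txt
        # copy both files off-host, then compare source vs. destination pairs:
        diff dc1-host1-packages.txt dc2-host1-packages.txt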

  • Creating a USB stick for installing CentOS 6.x using DVD1 and DVD2 ISO files

    - by user250563
    First, we create 2 partitions on the USB stick, which is, let's say, 16GB. The first partition is let's say only 1GB, and the second partition is the rest of what is available. After we "w" write the changes, the USB now has 2 partitions: one is 1GB, one is more than 14GB. So we have sdb1 and sdb2 now.

    Now we need to turn these partitions into filesystems. Some say I should run these commands after those procedures:

        mkfs.vfat -F 32 /dev/sdb1
        mkfs.ext3 /dev/sdb2

    but some web pages recommend using:

        mkfs.vfat -n BOOT /dev/sdb1
        mkfs.ext2 -m 0 -b 4096 -L DATA /dev/sdb2

    Which is it?

    So let's say the DVDs are called:

        CentOS-6.4-x86_64-bin-DVD1.iso
        CentOS-6.4-x86_64-bin-DVD2.iso

    So we make a directory and then mount it:

        mkdir -p /mnt/dvd1
        mount -o loop CentOS-6.4-x86_64-bin-DVD1.iso /mnt/dvd1

    And I suppose we don't make a directory for dvd2 and we don't have to mount it?

    At this point I do not know what should be done, but I think this step might be next: we make the USB bootable by finding the file named mbr.bin and then writing it there via these commands:

        dd conv=notrunc bs=440 count=1 if=/usr/lib/syslinux/mbr.bin of=/dev/sdb
        parted /dev/sdb set 1 boot on

    In other words, we are dd-ing it to 'sdb', not 'sdb1' or 'sdb2', and then we use parted to set the boot flag on for sdb. So far everything looks good?

    Here is the confusing part: how exactly do I move these ISO files to the USB drive? EVERYTHING BELOW IS A GUESS.

    So at this point I should copy the folder /mnt/dvd1/isolinux to the USB's sdb1 or sdb2? Rename it to syslinux? And then inside this syslinux folder there will be a file called... isolinux.cfg? Which should be renamed to syslinux.cfg? And then copy the contents of /mnt/dvd1/images/* to the USB's sdb2? But I think I am also supposed to copy both CentOS-6.4-x86_64-bin-DVD1.iso and CentOS-6.4-x86_64-bin-DVD2.iso somewhere onto this USB's sdb2 partition, correct? Almost like a drag-and-drop kind of a thing? Or do they go into any folders?

    CentOS' own web site has some instructions, but those instructions do not work: http://wiki.centos.org/HowTos/InstallFromUSBkey

    I once got this working, but things got ruined; I have to do it again and this time take notes.
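
    For what it's worth, a hedged sketch of one commonly described layout (mount points and the BOOT/DATA split are assumptions; this formalizes the guess above rather than confirming it):

        mkdir -p /mnt/usb /mnt/usbdata
        mount /dev/sdb1 /mnt/usb
        mount /dev/sdb2 /mnt/usbdata
        cp -r /mnt/dvd1/isolinux /mnt/usb/syslinux
        mv /mnt/usb/syslinux/isolinux.cfg /mnt/usb/syslinux/syslinux.cfg
        cp -r /mnt/dvd1/images /mnt/usbdata/
        cp CentOS-6.4-x86_64-bin-DVD1.iso CentOS-6.4-x86_64-bin-DVD2.iso /mnt/usbdata/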

  • recommendations for an efficient offsite remote backup solution for VMs

    - by senorsmile
    I am looking for recommendations for backing up my current 6 VMs (soon to grow to up to 20). Currently I am running a two-node Proxmox cluster (which is a Debian base using KVM for virtualization, with a custom web front end to administer). I have two nearly identical boxes with AMD Phenom II X4s and Asus motherboards. Each has four 500GB SATA2 HDDs: one for the OS and other data for the Proxmox install, and three using mdadm+drbd+lvm to share 1.5TB of storage between the two machines. I mount LVM images to KVM for all of the virtual machines, and I currently have the ability to do live transfer from one machine to the other, typically within seconds (it takes about 2 minutes on the largest VM, running Win2008 with MS SQL Server).

    I am using Proxmox's built-in vzdump utility to take snapshots of the VMs and store those on an external hard drive on the network. I then have the JungleDisk service (using Rackspace) sync the vzdump folder for remote offsite backup.

    This is all fine and dandy, but it's not very scalable. For one, the backups themselves can take up to a few hours every night. With JungleDisk's block-level incremental transfers, the sync only transfers a small portion of the data offsite, but that still takes at least half an hour.

    The much better solution would of course be something that allows me to instantly take the difference between two points in time (say what was written from 6am to 7am), zip it, then send that difference file to the backup server, which would instantly transfer it to the remote storage on Rackspace. I have looked a little into ZFS and its ability to do send/receive. That, coupled with piping the data through bzip or something, would seem perfect. However, it seems that implementing a Nexenta server with ZFS would essentially require at least one or two more dedicated storage servers to serve iSCSI block volumes (via zvols???) to the Proxmox servers. I would prefer to keep the setup as minimal as possible (i.e. NOT having separate storage servers) if at all possible.

    I have also briefly read about Zumastor. It looks like it could also do what I want, but it appears to have halted development in 2008.

    So: ZFS, Zumastor, or other?
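
    For the record, a hedged sketch of the ZFS send/receive pattern being described (pool, dataset and host names are made up):

        zfs snapshot tank/vms@0600
        # ... an hour of writes later ...
        zfs snapshot tank/vms@0700
        # ship only the 06:00-07:00 delta, compressed in flight
        zfs send -i tank/vms@0600 tank/vms@0700 | bzip2 | \
            ssh backuphost 'bunzip2 | zfs receive -F backuppool/vms'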

  • Postfix: Relay access denied

    - by Joseph Silvashy
    When I telnet to my server that's running Postfix and try to send an email:

        MAIL FROM:<[email protected]>
        #=> 250 2.1.0 Ok
        RCPT TO:<[email protected]>
        #=> 554 5.7.1 <[email protected]>: Relay access denied

    I couldn't really find the answer on the site or by looking at other users' questions/answers, and I'm not sure where to start. Ideas?

    Update: So basically, looking at the docs (http://www.postfix.org/SMTPD_ACCESS_README.html, section "Getting selective with SMTP access restriction lists"), I don't seem to have any of those directives in /etc/postfix/main.cf, like

        smtpd_client_restrictions = permit_mynetworks, reject

    or any of the other ones, so I'm quite confused. But really I'm going to have a Rails app connect to the server and send the emails, so I'm not sure how to handle it. Here is what my config file looks like:

        # See /usr/share/postfix/main.cf.dist for a commented, more complete version

        # Debian specific:  Specifying a file name will cause the first
        # line of that file to be used as the name.  The Debian default
        # is /etc/mailname.
        #myorigin = /etc/mailname

        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no

        # appending .domain is the MUA's job.
        append_dot_mydomain = no

        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h

        readme_directory = no

        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.

        myhostname = rerecipe-utils
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = $myhostname, localhost.$mydomain, localhost, mail.rerecipe.com, rerecipe.com
        relayhost =
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        inet_protocols = all
        mynetworks = 127.0.0.0/8 204.232.207.0/24 10.177.64.0/19 [::1]/128 [fe80::%eth0]/64 [fe80::%eth1]/64

    Something to note is that relayhost is blank; this is the default configuration file that was created when I installed Postfix. When testing the connection with openssl I get this:

        ~% openssl s_client -connect mail.myhostname.com:25 -starttls smtp
        CONNECTED(00000003)
        depth=0 /CN=myhostname
        verify error:num=18:self signed certificate
        verify return:1
        depth=0 /CN=myhostname
        verify return:1
        ---
        Certificate chain
         0 s:/CN=myhostname
           i:/CN=myhostname
        ---
        Server certificate
        -----BEGIN CERTIFICATE-----
        MIIBqTCCARICCQDDxVr+420qvjANBgkqhkiG9w0BAQUFADAZMRcwFQYDVQQDEw5y
        ZXJlY2lwZS11dGlsczAeFw0xMDEwMTMwNjU1MTVaFw0yMDEwMTAwNjU1MTVaMBkx
        FzAVBgNVBAMTDnJlcmVjaXBlLXV0aWxzMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCB
        iQKBgQDODh2w4A1k0qiPNPhkrPj8sfkxpKPTk28AuZhgOEBYBLeHacTKNH0jXxPv
        P3TyhINijvvdDPzyuPJoTTliR2EHR/nL4DLhr5FzhV+PB4PsIFUER7arx+1sMjz6
        5l/Ubu1ppMzW9U0IFNbaPm2AiiGBQRCQN8L0bLUjzVzwoSRMOQIDAQABMA0GCSqG
        SIb3DQEBBQUAA4GBALi2vvk9TGKJubXYJbU0PKmVmsfzFK35yLqr0keiDBhK2Leg
        274sWxEH3ds8mUaRftuFlXb7RYAGNlVyTuMTY3CEcnqIsH7F2McCUTpjMzu/o1mZ
        O/B21CelKetBd1u79Gkrv2vWyN7Csft6uTx5NIGG2+pGi3r0gX2r0Hbu2K94
        -----END CERTIFICATE-----
        subject=/CN=myhostname
        issuer=/CN=myhostname
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 1203 bytes and written 360 bytes
        ---
        New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
        Server public key is 1024 bit
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol  : TLSv1
            Cipher    : DHE-RSA-AES256-SHA
            Session-ID: 1AA4B8BFAAA85DA9ED4755194C50311670E57C35B8C51F9C2749936DA11918E4
            Session-ID-ctx:
            Master-Key: 9B432F1DE9F3580DCC6208C76F96631DC5A4BC517BDBADD5F514414DCF34AC526C30687B96C5C4742E9583555A118232
            Key-Arg   : None
            Start Time: 1292985376
            Timeout   : 300 (sec)
            Verify return code: 18 (self signed certificate)
        ---
        250 DSN

    Oddly enough, when I try to send an email from the machine itself it does work:

        echo test | mail -s "test subject" [email protected]
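
    A hedged sketch of the usual direction for this kind of setup (the values are examples, not taken from the question): relaying is typically opened either by adding the app host's address to mynetworks or by requiring SASL authentication.

        postconf -e 'smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination'
        postconf -e 'mynetworks = 127.0.0.0/8 10.177.64.0/19'   # plus the Rails app host, if it lives elsewhere
        postfix reload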

  • Fix/Bypass "Cannot connect to the real website-blocked" error in Google Chrome with OpenDNS blocking

    - by George H
    I have a large problem with Chrome in my organisation. I use DNS to manage web site blocking, for sites which are not appropriate and are potentially a risk to the organisation.

    I only want to use Chrome over the network, as Internet Explorer has compatibility problems with some sites that we use (we cannot change this, or use different sites), so using Internet Explorer is not a solution. I do not want to install a different browser, for multiple reasons - mainly the difficulty of rewriting the customised add-ons that we use.

    However, recently I have had lots of problems with Chrome SSL errors. I cannot use my custom OpenDNS block pages, which include the contact form used to request an unblocking. Chrome often blocks OpenDNS for sites that request HTTPS (a good example is Facebook; some sites like https://internetbadguys.com - the OpenDNS example - too). This means that Chrome refuses to load the blocking page explaining that the site is blocked. Instead users often call IT support, but they want a solution, as they are sick of getting lots of SSL errors.

    I have tried looking into ways of turning this off:

    - Typing "proceed". That didn't work.
    - Typing "proceed", pressing enter. Didn't work.
    - I cannot find phishing and anti-malware any more in Chrome, going by the internet guides.
    - Not using HTTPS. However, there is an automatic redirect to HTTPS on most sites, so the error keeps coming up.
    - Checking my clocks. They were correct.

    Does anyone have an idea of how to disable, bypass or work around this "feature"?

    EDIT: This is an example of what I am talking about - I found that on Google Images. I do not block Google.
    EDIT 2: My clocks are correct. I cannot stop using OpenDNS either.
    EDIT 3: My question is: how do I stop Chrome from refusing to load pages that are blocked by OpenDNS, where the server has explicitly requested HTTPS?
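
    One frequently suggested workaround, as a hedged sketch only (the flag is real, but it disables certificate checking for the whole session, which may well be unacceptable in a managed environment; the install path is the usual Windows default, assumed here):

        "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --ignore-certificate-errors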

  • IIS/ASP.NET performance incident - Perfmon Current Anonymous Users going through roof but Requests/sec low

    - by Laurence
    Setup: ASP.NET 4.0 website on IIS 6.0 on Win 2003 64-bit, 8 CPUs, 16GB memory, separate SQL 2005 DB server.

    We had a serious slowdown today with an otherwise fairly well-performing ASP.NET site. For a period of a couple of hours, all page requests were taking a very long time to be served - e.g. 30-60s compared to the usual 2s. The w3wp.exe's CPU and memory usage on the webserver was not much higher than normal. The application pool was not in the middle of recycling (and it hadn't recycled for several hours). Bottlenecks in the database were ruled out - no blocks occurring, and query results were being returned quickly.

    I couldn't make any sense of it and set up the following Perfmon counters:

    - Current Anonymous Users (for the site in question)
    - Get Requests/sec (ditto)
    - Requests/sec for the ASP.NET application running the site

    Get Requests/sec was averaging 100-150. Requests/sec for ASP.NET was averaging 5-10. However, Current Anonymous Users was around 200. And then, as I was watching, Current Anonymous Users began to climb steeply, going up to about 500 within a few minutes. All this time Get Requests/sec and Requests/sec for ASP.NET were, if anything, going down.

    I did a whole load of things (in a panic!) to try to get the site working, like shutting it down, recycling the app pool, and adding another worker process to the pool. I also extended the expiration time for content (in IIS under HTTP Headers) in an attempt to lower the number of requests for static files (there are a lot of images on the site). The site is now back to normal, and the counters are fairly steady, reading (with a Current Connections counter added):

    - Current Anonymous Users: average 30
    - Get Requests/sec: average 100
    - Requests/sec for ASP.NET: 5
    - Current Connections: average 300

    I have also observed an inverse relationship between Get Requests/sec and Current Anonymous Users. Usually both are fairly steady, but there will be short periods when Get Requests/sec goes down dramatically and Current Anonymous Users goes up in a perfect mirror image. Then they flip back to their usual levels.

    So, my questions are:

    - Thinking of the original performance issue - if w3wp.exe CPU and memory usage were normal and there was no DB bottleneck, what could explain page requests taking 20 times longer to be served than usual?
    - What other counters should I be looking at if this happens again?
    - What explains the inverse relationship between Get Requests/sec and Current Anonymous Users?
    - What could explain Current Anonymous Users going from 200 to 500 within a few minutes?

    Many thanks for any insight into this.
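
    For next time, a hedged logging sketch (the site instance name "MySite" is assumed; counter paths as they appear in Perfmon on IIS 6):

        typeperf "\Web Service(MySite)\Current Anonymous Users" "\Web Service(MySite)\Get Requests/sec" "\ASP.NET Applications(__Total__)\Requests/Sec" -si 15 -o c:\perflogs\mysite.csv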

  • .htaccess template, suggestions needed

    - by purpler
        # Defaults
        AddDefaultCharset UTF-8
        DefaultLanguage en-US
        FileETag None
        Header unset ETag
        ServerSignature Off
        SetEnv TZ Europe/Belgrade

        # Rewrites
        Options +FollowSymLinks
        RewriteEngine On
        RewriteBase /

        # Redirect to WWW
        RewriteCond %{HTTP_HOST} ^serpentineseo.com
        RewriteRule (.*) http://www.serpentineseo.com/$1 [R=301,L]

        # Redirect index to root
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*index\.html\ HTTP/
        RewriteRule ^(.*)index\.html$ /$1 [R=301,L]

        # Cache media files:
        ExpiresActive On
        ExpiresDefault A0

        # Month
        <filesMatch "\.(gif|jpg|jpeg|png|ico|swf|js)$">
        Header set Cache-Control "max-age=2592000, public"
        </filesMatch>

        # Week
        <FilesMatch "\.(css|pdf)$">
        Header set Cache-Control "max-age=604800"
        </FilesMatch>

        # 10 Min
        <FilesMatch "\.(html|htm|txt)$">
        Header set Cache-Control "max-age=600"
        </FilesMatch>

        # Do not cache
        <FilesMatch "\.(pl|php|cgi|spl|scgi|fcgi)$">
        Header unset Cache-Control
        </FilesMatch>

        # Compress output
        <IfModule mod_deflate.c>
        <FilesMatch "\.(html|js|css)$">
        SetOutputFilter DEFLATE
        </FilesMatch>
        </IfModule>

        # Error Documents
        ErrorDocument 206 /error/206.html
        ErrorDocument 401 /error/401.html
        ErrorDocument 403 /error/403.html
        ErrorDocument 404 /error/404.html
        ErrorDocument 500 /error/500.html

        # Prevent hotlinking
        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER} !^http://(www\.)?serpentineseo.com/.*$ [NC]
        RewriteRule \.(gif|jpg|png)$ http://www.serpentineseo.com/images/angryman.png [R,L]

        # Prevent offline browsers
        RewriteCond %{HTTP_USER_AGENT} ^BlackWidow [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Bot\ mailto:[email protected] [OR]
        RewriteCond %{HTTP_USER_AGENT} ^ChinaClaw [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Custo [OR]
        RewriteCond %{HTTP_USER_AGENT} ^DISCo [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Download\ Demon [OR]
        RewriteCond %{HTTP_USER_AGENT} ^eCatch [OR]
        RewriteCond %{HTTP_USER_AGENT} ^EirGrabber [OR]
        RewriteCond %{HTTP_USER_AGENT} ^EmailSiphon [OR]
        RewriteCond %{HTTP_USER_AGENT} ^EmailWolf [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Express\ WebPictures [OR]
        RewriteCond %{HTTP_USER_AGENT} ^ExtractorPro [OR]
        RewriteCond %{HTTP_USER_AGENT} ^EyeNetIE [OR]
        RewriteCond %{HTTP_USER_AGENT} ^FlashGet [OR]
        RewriteCond %{HTTP_USER_AGENT} ^GetRight [OR]
        RewriteCond %{HTTP_USER_AGENT} ^GetWeb! [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Go!Zilla [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Go-Ahead-Got-It [OR]
        RewriteCond %{HTTP_USER_AGENT} ^GrabNet [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Grafula [OR]
        RewriteCond %{HTTP_USER_AGENT} ^HMView [OR]
        RewriteCond %{HTTP_USER_AGENT} HTTrack [NC,OR]
        RewriteCond %{HTTP_USER_AGENT} ^Image\ Stripper [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Image\ Sucker [OR]
        RewriteCond %{HTTP_USER_AGENT} Indy\ Library [NC,OR]
        RewriteCond %{HTTP_USER_AGENT} ^InterGET [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Internet\ Ninja [OR]
        RewriteCond %{HTTP_USER_AGENT} ^JetCar [OR]
        RewriteCond %{HTTP_USER_AGENT} ^JOC\ Web\ Spider [OR]
        RewriteCond %{HTTP_USER_AGENT} ^larbin [OR]
        RewriteCond %{HTTP_USER_AGENT} ^LeechFTP [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Mass\ Downloader [OR]
        RewriteCond %{HTTP_USER_AGENT} ^MIDown\ tool [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Mister\ PiX [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Navroad [OR]
        RewriteCond %{HTTP_USER_AGENT} ^NearSite [OR]
        RewriteCond %{HTTP_USER_AGENT} ^NetAnts [OR]
        RewriteCond %{HTTP_USER_AGENT} ^NetSpider [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Net\ Vampire [OR]
        RewriteCond %{HTTP_USER_AGENT} ^NetZIP [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Octopus [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Offline\ Explorer [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Offline\ Navigator [OR]
        RewriteCond %{HTTP_USER_AGENT} ^PageGrabber [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Papa\ Foto [OR]
        RewriteCond %{HTTP_USER_AGENT} ^pavuk [OR]
        RewriteCond %{HTTP_USER_AGENT} ^pcBrowser [OR]
        RewriteCond %{HTTP_USER_AGENT} ^RealDownload [OR]
        RewriteCond %{HTTP_USER_AGENT} ^ReGet [OR]
        RewriteCond %{HTTP_USER_AGENT} ^SiteSnagger [OR]
        RewriteCond %{HTTP_USER_AGENT} ^SmartDownload [OR]
        RewriteCond %{HTTP_USER_AGENT} ^SuperBot [OR]
        RewriteCond %{HTTP_USER_AGENT} ^SuperHTTP [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Surfbot [OR]
        RewriteCond %{HTTP_USER_AGENT} ^tAkeOut [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Teleport\ Pro [OR]
        RewriteCond %{HTTP_USER_AGENT} ^VoidEYE [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Web\ Image\ Collector [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Web\ Sucker [OR]
        RewriteCond %{HTTP_USER_AGENT} ^WebAuto [OR]
        RewriteCond %{HTTP_USER_AGENT} ^WebCopier [OR]
        RewriteCond %{HTTP_USER_AGENT} ^WebFetch [OR]
        RewriteCond %{HTTP_USER_AGENT} ^WebGo\ IS [OR]
        RewriteCond %{HTTP_USER_AGENT} ^WebLeacher [OR]
        RewriteCond %{HTTP_USER_AGENT} ^WebReaper [OR]
        RewriteCond %{HTTP_USER_AGENT} ^WebSauger [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Website\ eXtractor [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Website\ Quester [OR]
        RewriteCond %{HTTP_USER_AGENT} ^WebStripper [OR]
        RewriteCond %{HTTP_USER_AGENT} ^WebWhacker [OR]
        RewriteCond %{HTTP_USER_AGENT} ^WebZIP [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Wget [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Widow [OR]
        RewriteCond %{HTTP_USER_AGENT} ^WWWOFFLE [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Xaldon\ WebSpider [OR]
        RewriteCond %{HTTP_USER_AGENT} ^Zeus
        RewriteRule ^.*$ http://www.google.com [R,L]

        # Protect against DOS attacks by limiting file upload size
        LimitRequestBody 10240000

        # Deny access to sensitive files
        <FilesMatch "\.(htaccess|psd|log)$">
        Order Allow,Deny
        Deny from all
        </FilesMatch>
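
    As a quick, hedged smoke test after deploying a file like this (the exact file names are hypothetical):

        curl -sI http://serpentineseo.com/ | head -n 1                        # expect a 301 to www
        curl -sI -A "Wget" http://www.serpentineseo.com/a.gif | head -n 1     # blocked agent: expect a redirect
        curl -sI http://www.serpentineseo.com/.htaccess | head -n 1           # expect 403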

    Read the article

  • Core i7 c1e and speedstepping - BSOD on shutdown

    - by DeaconDesperado
    I'm having an interesting problem with my recent Core i7 digital audio workstation build that I am curious to see if others have encountered.

    First, here are the specs on the machine:

    - ASUS P6TD Deluxe Intel X58 Socket LGA1366 MB
    - Intel Core i7-950 3.06GHz 8M LGA1366 CPU
    - CORSAIR DOMINATOR 6GB (3 x 2GB) 240-Pin DDR3 SDRAM DDR3 1600
    - Western Digital Caviar Black WD5001AALS 500GB
    - plus a couple of ASUS optical drives and a 750W Corsair PSU, running Windows 7 x64

    All this is connected to the nefarious Digi 002 FireWire audio interface for use with Pro Tools. I mostly followed the specs posted by many other i7 users in the Digidesign community, who pooled their collective knowledge in this thread. After completing my build, I fell victim to the "UD5 squeal" described in that forum thread. So, taking the advice posted, I disabled C1E advanced halt state and Intel SpeedStep (I would likely have done this anyway to maintain a stable clock; power consumption isn't really a relevant concern on this machine). I enabled XMP to set the RAM timings properly as well.

    What I am experiencing is a BSOD upon shutdown, but only immediately after Windows fully exits and ends all processes. The error is a MACHINE_CHECK_EXCEPTION 0x000000. The funny thing is that it is extremely intermittent, and only occurs if the shutdown immediately followed a period of relative idleness. It does not generate a minidump - I suspect because Windows monitoring has terminated by the time the error occurs. No damage is evident, and one can simply turn the machine off manually and the system will act as though a proper shutdown had occurred. If anything it is an annoyance; I just want to be certain it is not affecting my long-term stability.

    I have read that the i7-950 does not like DRAM voltages past 1.65, but that they are acceptable if they are within .5 of the BCLK setting. I have tried disabling XMP and setting all timings to auto, and the problem still manifests in an identical way. It is suspect that the CPU idleness preceding shutdown is the determining factor, as both C1E and SpeedStep are settings intended to modify handling of this state.

    Any suggestions or prior experiences would be greatly appreciated.

    EDIT: The behavior very closely resembles what's described in this thread: http://www.tomshardware.com/forum/12003-63-shut-problem-windows. The benign nature of it is identical, but I can't seem to download the hotfix cited there.

  • Windows Server 2008 Migration - Did I miss something?

    - by DevNULL
    I'm running into a few complications in my migration process. My main role has been as a Linux / Sun administrator for 15 years, so the Windows Server 2008 environment is a bit new to me, but understandable. Here's our situation and reason for migrating...

    We have a group of developers that develop VERY low-level software in Visual C with some inline assembler. All the workstations were separate from each other, which caused consistency problems with development libraries, versions, etc. Our goal was to throw them all onto a Windows domain where we can control workstation installations, hotfixes (which can cause enormous problems), software versions, etc. All development workstations are running Windows XP x32 (SP3) and x64 (SP2).

    I'm running into user permission problems, and I was wondering whether I missed one, two or a handful of things during my deployment. Here is what I have currently done:

    - Installed and activated Windows Server 2008
    - Added roles for DNS and Active Directory
    - Configured DNS with WINS for NetBIOS name usage
    - Added developers to AD and mapped their shared folders to their profile
    - Added roles for IIS7 and configured the developers' SVN
    - Installed MySQL Enterprise Edition for development usage

    Not having a firm understanding of Group Policy, I haven't delved deeply into that realm yet. Problems I'm encountering:

    1. When I configure any XP workstation to log on to our domain, once a user uses their new AD login, everything goes well, except that they have very restrictive permissions. (E.g.: if a user opens any existing file, they don't have write access, except in their documents folder.) Since these guys are working on low system-level events, they need to read/write all files; all I'm looking to restrict is software installations. Do I have to configure and define a group policy for the domain users?
    2. Am I correct to assume that I can use WSUS to maintain the domain's hotfixes and updates pushed to the workstations?
    3. I need to map a centralized shared development drive upon the user's login. This is open to EVERYONE. Right now I have the users' folders mapped upon login through their AD profile. But how do I map a share if I've already defined one within their profile in AD? (A sketch of one approach follows below.)
    4. Can I use volume mirroring to mirror/sync two drives on two separate servers, or should I just script an rsync or MS SyncToy? The drives simply store nightly system images.

    Any responses would be gratefully received.
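
    On the drive-mapping point, a hedged sketch of the classic logon-script approach (server and share names are hypothetical); a script like this can run at logon alongside the AD profile's home-folder mapping:

        rem map the common development share for every user at logon
        net use S: \\fileserver\devshare /persistent:no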

  • IE and Google Chrome timeout on an IIS6 hosted SSL page that Firefox handles well

    - by Thomas
    Ok, here's the scenario: up until a few weeks ago, none of us noticed anything wrong with the corporate website. People were using it without complaint. Then a client complained that a specific page on the site was timing out for him, and only when he committed a POST action on a form filled with data. I checked it out, and it timed out for me, too. But it only timed out in Google Chrome and IE, not in Firefox.

    Additionally, the same page, on the same server, but served from a different domain name (one not under the protection of SSL, either) does not time out under any browser. To clarify: https://www.mysite.com/changes.php times out on POST, but the same page over http works fine. That distinction (SSL vs. non-SSL) seems to be important, as nothing else has changed. Our certificate is valid, and Firefox detects no errors thrown by the page. I've looked at the request and response headers from the page, and they all follow the correct formats.

    Then, after wandering through the site, I noticed a few other things. Both IE and Chrome will frequently time out on any page that is PHP-based; they never time out on static images or HTML files. I've looked at the site from a variety of different servers, my home and work workstations, and my netbook. Because of that, I've discounted a viral infection, as I highly doubt a virus is going to hit every one of the machines to which I have access in exactly the same manner.

    My setup is:

    - Server: Win2k3, IIS6, PHP 5.2.9-1.
    - Clients: IE7, IE8, Chrome (regular and dev channel): frequent timeouts on PHP pages.
    - Firefox 2, Firefox 3: no timeouts. Firebug shows no errors or even lengthy periods serving the pages.

    I've spent 2 days searching for any tech knowledge that I can find, and my search parameters are all too general - everyone has problems loading SSL pages in IE and Chrome, for a wide variety of reasons. The infrequent nature of the timeouts and the fact that there are no errors being reported anywhere is starting to drive me insane. Does anyone have any insight on a problem like this?
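
    To take the browsers out of the equation, a hedged sketch (the URL is from the question; the form field is made up):

        curl -k -s -o /dev/null -w 'http_code=%{http_code} time_total=%{time_total}s\n' \
            -d 'field1=value1' https://www.mysite.com/changes.php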

  • CakePHP in a subdirectory using nginx (Rewrite rules?)

    - by lhnz
    I managed to get this to work a while back, but on returning to the CakePHP project I had started, it seems that whatever changes I've made to nginx recently (or perhaps a recent update) have broken my rewrite rules. Currently I have:

        worker_processes  1;

        events {
            worker_connections  1024;
        }

        http {
            include       mime.types;
            default_type  application/octet-stream;
            sendfile      on;
            keepalive_timeout  65;

            server {
                listen       80;
                server_name  localhost;

                location / {
                    root   html;
                    index  index.php index.html index.htm;
                }

                location /basic_cake/ {
                    index index.php;
                    if (-f $request_filename) {
                        break;
                    }
                    if (!-f $request_filename) {
                        rewrite ^/basic_cake/(.+)$ /basic_cake/index.php?url=$1 last;
                        break;
                    }
                }

                location /cake_test/ {
                    index index.php;
                    if (-f $request_filename) {
                        break;
                    }
                    if (!-f $request_filename) {
                        rewrite ^/cake_test/(.+)$ /cake_test/index.php?url=$1 last;
                        break;
                    }
                }

                # redirect server error pages to the static page /50x.html
                error_page   500 502 503 504  /50x.html;
                location = /50x.html {
                    root   html;
                }

                # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
                location ~ \.php$ {
                    root           html;
                    fastcgi_pass   127.0.0.1:9000;
                    fastcgi_index  index.php;
                    fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
                    include        fastcgi_params;
                }
            }

            server {
                listen       8081;
                server_name  localhost;
                root /srv/http/html/xsp;

                location / {
                    index index.html index.htm index.aspx default.aspx;
                }

                location ~ \.(aspx|asmx|ashx|asax|ascx|soap|rem|axd|cs|config|dll)$ {
                    fastcgi_pass   127.0.0.1:9001;
                    fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
                    include        fastcgi_params;
                }
            }
        }

    The problem that I have is that the CSS and images will not load from the webroot. Instead, if I visit http://localhost/basic_cake/css/cake.generic.css, I get a page which tells me:

        CakePHP: the rapid development php framework
        Missing Controller
        Error: CssController could not be found.
        Error: Create the class CssController below in file: app/controllers/css_controller.php

            var $name = 'Css';
        }
        ?>

        Notice: If you want to customize this error message, create app/views/errors/missing_controller.ctp
        CakePHP: the rapid development php framework

    (The opening lines of the suggested class appear to have been swallowed by the HTML rendering.) Does anybody have any ideas on how to fix this?
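
    A hedged check that usually narrows this down (the nginx prefix /usr/share/nginx is an assumption): `-f $request_filename` is evaluated against the location's root, so it helps to confirm whether the CSS file actually exists at the path the rewrite tests, given that CakePHP 1.x keeps static assets under app/webroot.

        ls /usr/share/nginx/html/basic_cake/css/cake.generic.css          # what the rewrite tests
        ls /usr/share/nginx/html/basic_cake/app/webroot/css/cake.generic.css   # where Cake keeps it
        curl -sI http://localhost/basic_cake/css/cake.generic.css | head -n 1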

  • "Error 1067: The process terminated unexpectedly" when trying to install MySQL on Win7 x64.

    - by Gravitas
    Hi, I've run into a brick wall trying to install MySQL v5.5 on my machine. My PC is Windows 7 x64, Enterprise edition. MySQL installs fine, but when I run the "MySQL Instance Configuration Wizard", it pauses forever on the step "Start Service" (I can let it run for 30 minutes with no response). If I go into Services, I see that the "MySQL" service hasn't started, and if I try to start it, it says:

        Windows could not start MySQL Service on Local Computer.
        Error 1067: The process terminated unexpectedly.

    I've tried the following:

    - Turning off the firewall.
    - Uninstalling all antivirus software.
    - Installing / reinstalling the 32-bit version of MySQL.
    - Installing / reinstalling the 64-bit version of MySQL.
    - Uninstalling, deleting the contents of "C:\program files\MySQL" and "C:\program files (x86)\MySQL", reinstalling.
    - Checking to see that there is no rogue service named MySQL???? (from a previous install).
    - Checking that port 3306 is not used by an alternate program.
    - Changing the default port that MySQL uses.
    - Checking for "my.ini" and "my.ini.cnf" in "C:\windows" (nothing there, but that can cause a problem).
    - Running both the MySQL installer and the configuration wizard in "Administrator mode".
    - Turning off UAC.
    - Installing with defaults, not changing anything.
    - Rebooting my machine (about 6 reboots so far).
    - Opening up port 3306 in the firewall (both TCP and UDP, inbound and outbound).
    - Swearing at the klutz of a programmer who designed MySQL so you can't even install it (as if that would help!)

    My machine is working 100% in every other way. InfiniDB (a MySQL-compatible database) installs 100%, as does Visual Studio 2010, Microsoft SQL Server, etc, etc. Your advice on how to work around this? p.s. Here is the screen it got stuck on for 15 minutes until I killed the process.

    Update 2010-12-20: Tried MySQL v5.1; it didn't work either. It's amazing - if you type "mysqld /?" or "mysqld -help", it doesn't give you any help. And if you try to restart the service manually, it doesn't display any error messages. Could it be any more unhelpful?

    Update 2010-12-21: Installed MySQL 6.0 alpha, and it worked. However, I'd rather not use an alpha release, given that the "stable" release is anything but :(

    Update 2010-12-21: Found http://dev.mysql.com/doc/refman/5.1/en/windows-troubleshooting.html, dealing with troubleshooting under Windows. Discovered that you can generate an error log if the service doesn't start - see here: http://dev.mysql.com/doc/refman/5.1/en/error-log.html
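
    Following that last link, a hedged sketch of surfacing the actual startup error (the install path is the usual default, assumed here):

        "C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqld.exe" --console
        rem errors print to the console instead of the service dying silently;
        rem alternatively, --log-error=C:\mysql-error.log writes them to a file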

  • Generating/managing config files for hosted application

    - by mfinni
    I asked a question about config management and haven't seen a reply. It's possible my question was too vague, so let's get down to brass tacks. Here's the process we follow when onboarding a new customer instance into our hosted application: how would you manage this?

    I'm leaning towards a Perl script to populate templates to generate shell scripts, config files, XML config files, etc. Looking briefly at CFEngine and Chef, it seems like they're not going to reduce the amount of work, because I'd still have to manually specify all of the changes/edits within the tool. That doesn't seem to be much of a gain over touching the config files directly.

    1. We add a stanza to the main config file for the core (3rd-party) application. This stanza has values that define:
       - the instance (customer) name
       - the TCP listener port for this instance (not one currently used)
       - the DB2 database name (a serial numeric identifier; these already exist, prestaged for us by the DBAs)
       - three sub-config files, by name - they need to be created from 3 templates and be named after the instance
    2. The sub-config files define:
       - the filepath for the DB2 volumes
       - the filepath for the storage of objects
       - the filepath for just one of the DB2 volumes (yes, redundant to the first item)
    3. We run some application commands and start the instance.
    4. We do some LDAP thingies (make an OU for the instance, etc.).
    5. We add a stanza to the config file for our security listener, which acts as a passthrough to LDAP:
       - instance name
       - LDAP OU
       - TCP port for the instance
       - DB2 database name
    6. We restart the security listener (off-hours), change the main config file from item 1, and stop and restart the instance. It is now authenticating via LDAP.
    7. We add the stop and start commands for this instance to the HA failover scripts.
    8. We import an XML config file into the instance that defines things for the actual application for the customer - user names, groups, permissions, and business rules. The XML is supplied by the implementation team.
    9. Now we configure the dataloading application. We add a stanza to the existing top-level config file that points to a new customer-level config file. The new customer-level config file includes:
       - the instance (customer) name
       - the DB2 database name
       - an arbitrary number of sub-config files, by name
    10. Each of those sub-config files defines:
        - filepaths to the directories for ingestion, feedback, backup, and failure
        - those filepaths have a common path to a customer-specific folder, and then one folder for each sub-config file
        - each of those filepaths needs to be created
    11. We add this customer instance to our monitoring scripts, which confirm the proper processes are running and can be logged into. Of course, those monitoring config files include the instance name, the TCP port, the DB2 database name, etc.
    12. There's also a reporting application that needs to be configured for the new instance.

    You get the idea. There's also XML that is loaded into WAS by the middleware team. We give them the values to plug into the XML - they could very easily hand us the template, and we could give them back the completed XML.
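
    In that spirit, a hedged sketch of the template-populating approach (placeholder tokens, file names and paths are all hypothetical):

        #!/bin/sh
        # fill the three sub-config templates for a new instance
        INSTANCE=acme; PORT=5107; DB=DB00042
        for t in volumes objects dbvol; do
            sed -e "s/@INSTANCE@/$INSTANCE/g" \
                -e "s/@PORT@/$PORT/g" \
                -e "s/@DB@/$DB/g" \
                templates/$t.tmpl > conf/$INSTANCE-$t.cfg
        done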

  • Setting font size of Closed Captions on iPhone using ffmpeg or mencoder

    - by forthrin
    Does anyone know how to either:

    - make ffmpeg set the subtitle font size in the output video file, or
    - make mencoder produce an iPhone-compatible video file (with subtitles)?

    I finally found out how to get Closed Captions video on the iPhone, with mkv and srt files as source material. The secret was using the mov_text subtitle codec in ffmpeg (and turning on Closed Captions in the iPhone settings, of course):

        ffmpeg -y -i in.mkv -i in.srt -map 0:0 -map 0:1 -map 1:0 -vcodec copy -acodec aac -ab 256k -scodec mov_text -strict -2 -metadata title="Title" -metadata:s:s:0 language=eng out.mp4

    However, the font size appears very small on the iPhone, and I can't find out how to set it with ffmpeg (the iPhone has no option for this). I found out that mencoder has a -subfont-text-scale option, but I don't have a lot of experience with that program. The following, my best attempt so far, produces an output file which is not playable on the iPhone:

        sudo port install mplayer +mencoder_extras +osd
        mencoder in.mkv -sub in.srt -o out.mp4 -ovc copy -oac faac -faacopts br=256:mpeg=4:object=2 -channels 2 -srate 48000 -subfont-text-scale 10 -of lavf -lavfopts format=mp4

    PS! As requested, here is the output from mencoder:

        192 audio & 400 video codecs
        success: format: 0  data: 0x0 - 0xb64b9d2f
        libavformat version 54.6.101 (internal)
        libavformat file format detected.
        [matroska,webm @ 0x1015c9a50]Unknown entry 0x80
        [lavf] stream 0: video (h264), -vid 0
        [lavf] stream 1: audio (ac3), -aid 0, -alang eng
        VIDEO:  [H264]  1280x544  0bpp  49.894 fps  0.0 kbps ( 0.0 kbyte/s)
        [V] filefmt:44  fourcc:0x34363248  size:1280x544  fps:49.894  ftime:=0.0200
        ==========================================================================
        Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders
        libavcodec version 54.23.100 (internal)
        AUDIO: 48000 Hz, 2 ch, s16le, 448.0 kbit/29.17% (ratio: 56000->192000)
        Selected audio codec: [ffac3] afm: ffmpeg (FFmpeg AC-3)
        ==========================================================================
        ** MUXER_LAVF *****************************************************************
        REMEMBER: MEncoder's libavformat muxing is presently broken and can generate
        INCORRECT files in the presence of B-frames. Moreover, due to bugs MPlayer
        will play these INCORRECT files as if nothing were wrong!
        *******************************************************************************
        OK, exit.
        videocodec: framecopy (1280x544 0bpp fourcc=34363248)
        VIDEO CODEC ID: 28
        AUDIO CODEC ID: 15002, TAG: 0
        Writing header...
        [mp4 @ 0x1015c9a50]Codec for stream 0 does not use global headers but container format requires global headers
        [mp4 @ 0x1015c9a50]Codec for stream 1 does not use global headers but container format requires global headers

    Then the following repeats itself for every frame:

        Pos:   0.0s      1f ( 2%)  0.00fps Trem:   0min   0mb  A-V:0.000 [0:0]
        [mp4 @ 0x1015c9a50]malformated aac bitstream, use -absf aac_adtstoasc
        Error while writing frame.

    I recognize -absf aac_adtstoasc as an ffmpeg option (does mencoder spawn ffmpeg?), but I don't know how to pass this option on (my hunch is this is not even the origin of the problem).
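
    One hedged alternative sketch, staying with ffmpeg (requires a build with libass; the style values are examples): burn the subtitles into the picture with an explicit size instead of muxing soft mov_text, at the cost of re-encoding the video.

        ffmpeg -i in.mkv -vf "subtitles=in.srt:force_style='FontSize=28'" \
            -c:v libx264 -preset fast -c:a aac -b:a 256k -strict -2 out.mp4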

  • Why is the server performance so poor? What can be done to improve the speed of the server?

    - by fslsyed
    Very slow processing using Windows Server 2008 R2 Standard with Service Pack 1.

    Situation: read a text file, using the text data to populate a series of MS SQL tables. The converted data is used to generate monthly PDF invoice files; the PDF files are saved directly to the hard drive. The application is multi-threaded, with one thread used for the text conversion and three threads for PDF invoice generation. The text conversion occurs concurrently with the invoice generation.

    Application software:

    - C# using Microsoft Visual Studio 2010 Ultimate
    - Crystal Report Writer 2011 with runtime 13_0_3, 64-bit version
    - Targeted platform is x64; also tested as x86 and Any CPU, with similar results
    - Microsoft .NET Framework 4.0
    - Microsoft SQL 2008

    Issue: the software is running very slowly. The conversion of the text file is approximately six hundred fifty records per second, and generation of the PDF files is approximately twelve invoices per minute. The text file to be converted is six hundred meg, with seven thousand invoices to be generated.

    The software was installed on three different machines from the same distribution files. The same text file was converted on each machine. The user executing the application was an administrator on each machine. The only variances were the machine and operating system. The configurations are as follows:

    Server:
    - Operating System: Windows Server 2008 R2 Standard 64-bit (6.1, Build 7601) SP1
    - System Manufacturer: IBM
    - System Model: System x3550 M3-[7944AC1]-
    - BIOS: Default System BIOS
    - Processor: Intel Xeon CPU E5620 @ 2.4GHz (16 CPUs)
    - Memory: 16384MB

    Notebook:
    - Operating System: Windows 7 Home Premium 64-bit (6.1, Build 7601)
    - System Manufacturer: Hewlett-Packard
    - System Model: HP Pavilion dv7 Notebook PC
    - BIOS: Default System BIOS
    - Processor: AMD Phenom II N640 Dual-Core Processor 2.9GHz (2 CPUs)
    - Memory: 6144MB

    Desktop:
    - Operating System: Windows 7 Professional 64-bit (6.1, Build 7601) SP1
    - System Manufacturer: Dell Inc.
    - System Model: OptiPlex 960
    - BIOS: Phoenix ROM BIOS PLUS Version 1.10 A11
    - Processor: Intel Core 2 Quad CPU Q9650 @ 3.00GHz (4 CPUs)
    - Memory: 16384MB

    Processing results per machine (the application was executed seven times, with the averages displayed below):

        Machine       Text Records Converted    Invoices Generated
                      Per Minute                Per Minute
        Server (1)    650                       12
        Notebook      980                       17
        Desktop       2,100                     45

        (1) The server is dedicated to execution of this application; no additional applications are being executed.

    Question: why is the server performance so poor? What can be done to improve the speed of the server?

  • Windows Service Limit Crashes Services on Startup

    - by Paul Williams
    We have developed a custom Windows service in C# as part of a large enterprise application. Our QA department tests multiple versions of this service. The QA lab has several (over 20) copies of this service installed on one Windows 2003 test box. Each copy is in its own folder and has a unique service name, though each executable file is named the same (OurWindowsService.exe, for example). Each service uses the same Windows credentials (a domain user). The purpose of this service is to handle MSMQ messages; the queued messages do all sorts of important stuff.

    For some reason, they can run only 5 of these services at a time. When we start a 6th, the service crashes on startup. For example, I can start #1, #2, #3, #4, and #5. When I start #6, it crashes. However, if I stop #1 and start #6, #6 runs fine, and now #1 fails to start.

    When the services crash, the following error appears in the Windows event log:

        Faulting application OurWindowsService.exe, version 5.40.1.1, faulting module kernel32.dll, version 5.2.3790.4480, fault address 0x0000bef7.

    I was able to use WinDbg to generate a postmortem dump file. The dump file revealed that the crash occurs trying to delay-load SHLWAPI.dll:

        0:000> kb100
        ChildEBP RetAddr  Args to Child
        0012ece4 79037966 c06d007e 00000000 00000001 KERNEL32!RaiseException+0x53
        0012ed4c 790099ba 00000008 0012ed08 7c82860c mscoree!__delayLoadHelper2+0x139
        0012ed98 790075b1 001550c8 0012edac 0012fb34 mscoree!_tailMerge_SHLWAPI_dll+0xd
        0012edb0 79007623 001550c8 0012edf8 0012edf4 mscoree!XMLGetVersionWithSupported+0x22
        0012ee00 790069a4 aa06f1b0 00000000 000001fe mscoree!RuntimeRequest::GetRuntimeVersion+0x56
        0012f478 790077aa 00000001 7903fb4c 0012fb34 mscoree!RuntimeRequest::ComputeVersionString+0x5bd
        0012f89c 79007802 00000001 0012f8b4 7903fb4c mscoree!RuntimeRequest::FindVersionedRuntime+0x11c
        0012f8b8 79007b19 00000001 00000000 aa06fa6c mscoree!RuntimeRequest::RequestRuntimeDll+0x2c
        0012ffa4 79007c02 00000001 0012ffbc 00000000 mscoree!GetInstallation+0x72
        0012ffc0 77e6f23b 00000000 00000000 7ffdf000 mscoree!_CorExeMain+0x12
        0012fff0 00000000 79007bf0 00000000 78746341 KERNEL32!BaseProcessStart+0x23

    I believe the error code handed to KERNEL32!RaiseException, c06d007e, means "module not found", but I'm not certain.

    Does this sound familiar to anyone? Are we hitting some limit on the number of service instances or on some file name? Does MSMQ dislike more than 5 listening services?

  • PSU failing or Mainboard failing?

    - by Andrei Rinea
    I am having some trouble lately powering on my desktop workstation. While starting up the PC after it has been off for hours (usually at least 8 hours), it randomly fails to do so. What happens is this:

    - I press the power button; nothing happens.
    - I can hear a moderate buzzing noise at the back of the PC (near the PSU), but I can't say for sure that it's not from the mainboard.
    - If I insist, pressing the power button a few times over 1-2 minutes, it'll start.

    Another route would be that instead of step 3, I unplug the power cable from the PSU and wait for 30 seconds. Then I press the power button and keep it pressed for 30-60 seconds (I've had some success with a similar approach on notebooks). Then I plug the cable back into the PSU, press the power button only once, and it starts normally.

    Also, while running normally, I keep hearing some low buzzing which seems to be fan-RPM-related (i.e. when processing images or doing CPU-intensive work). What should I look into?

    UPDATE: It's getting worse. It took more than 10 retries today, and almost 20 minutes, to start the computer. I tried the paperclip trick and the PSU behaves perfectly. I managed to start the computer like so: I pressed the on-button a few times and then left the PC in a pre-startup state (the fans were working, the buzzing noise was strong) and went to eat. I figured I wouldn't set the house on fire that fast, and not without smelling it. Back after 10-15 min, the computer had booted up! I discussed it with a fellow at Intel, and he told me the capacitors on the mainboard are probably a bit shot. If they are shot, he said, it should start up warm perfectly. So I did restart it, warm, a few times (5 sec cooldown, and then 40 sec cooldown), and it started up perfectly. I can either replace the capacitors on the mainboard (doesn't sound worth it) or replace the mainboard (this one sucks too :)).

    FINAL INFO: It was the PSU after all. Although it was powering the IDEs and SATAs, its mainboard power module was failing. I bought another mainboard just to find out that this wasn't the cause; now I'll have to return it somehow. The spare PSU is now in the computer and doing well. Although larger (500W), it's like a plane taking off... I need a better one.

    Read the article

  • nginx can't see MySQL

    - by user135235
    I have a fully working Joomla 2.5.6 install driven by a local MySQL server, but I'd like to test nginx to see if it's a faster web-serving experience than Apache.

    - PHP 5.4.6 (PHP54w)
    - CentOS 6.2
    - Joomla 2.5.6
    - PHP54w-fpm.i386 (FastCGI process manager)
    - php -m shows: mysql & mysqli modules loaded

    Nginx seems to have installed fine via yum, and it can process a PHP info file via FastCGI perfectly OK (http://37.128.190.241/php.php), but when I stop Apache, start nginx instead and visit my site I get: "Database connection error (1): The MySQL adapter 'mysqli' is not available." I've tried adjusting my Joomla configuration.php to use mysql instead of mysqli, but I get the same basic error, only this time "Database connection error (1): The MySQL adapter 'mysql' is not available", of course! Can anyone think what the problem might be, please? I did try explicitly setting extension = mysqli.so and extension = mysql.so in my php.ini to try and force the issue (despite php -m showing they were both successfully loaded anyway); no difference.

    I have a pretty standard nginx default.conf:

    server {
        listen 80;
        server_name www.MYDOMAIN.com;
        server_name_in_redirect off;
        access_log /var/log/nginx/localhost.access_log main;
        error_log /var/log/nginx/localhost.error_log info;
        root /var/www/html/MYROOT_DIR;
        index index.php index.html index.htm default.html default.htm;

        # Support Clean (aka Search Engine Friendly) URLs
        location / {
            try_files $uri $uri/ /index.php?q=$uri&$args;
        }

        # deny running scripts inside writable directories
        location ~* /(images|cache|media|logs|tmp)/.*\.(php|pl|py|jsp|asp|sh|cgi)$ {
            return 403;
            error_page 403 /403_error.html;
        }

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include /etc/nginx/fastcgi.conf;
        }

        # caching of files
        location ~* \.(ico|pdf|flv)$ {
            expires 1y;
        }

        location ~* \.(js|css|png|jpg|jpeg|gif|swf|xml|txt)$ {
            expires 14d;
        }
    }

    Snip of output from phpinfo under nginx:

    Server API: FPM/FastCGI
    Virtual Directory Support: disabled
    Configuration File (php.ini) Path: /etc
    Loaded Configuration File: /etc/php.ini
    Scan this dir for additional .ini files: /etc/php.d
    Additional .ini files parsed: /etc/php.d/curl.ini, /etc/php.d/fileinfo.ini, /etc/php.d/json.ini, /etc/php.d/phar.ini, /etc/php.d/zip.ini

    Snip of output from phpinfo under Apache:

    Server API: Apache 2.0 Handler
    Virtual Directory Support: disabled
    Configuration File (php.ini) Path: /etc
    Loaded Configuration File: /etc/php.ini
    Scan this dir for additional .ini files: /etc/php.d
    Additional .ini files parsed: /etc/php.d/curl.ini, /etc/php.d/fileinfo.ini, /etc/php.d/json.ini, /etc/php.d/mysql.ini, /etc/php.d/mysqli.ini, /etc/php.d/pdo.ini, /etc/php.d/pdo_mysql.ini, /etc/php.d/pdo_sqlite.ini, /etc/php.d/phar.ini, /etc/php.d/sqlite3.ini, /etc/php.d/zip.ini

    It seems that with Apache, PHP is loading substantially more additional .ini files, including ones relating to MySQL (mysql.ini, mysqli.ini, pdo_mysql.ini), than under nginx. Any ideas how I get nginx to also load these additional .ini files? Thanks in advance, Steve
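    Worth noting: php -m runs the CLI SAPI, which proves nothing about what the FPM SAPI loaded. FPM only scans /etc/php.d when it starts, so .ini files installed after the last php-fpm start would produce exactly this mismatch. A quick check to try (assuming php54w-mysql is the Webtatic package that ships mysql.ini and mysqli.ini on this box):

    # Confirm the package shipping the mysql/mysqli .ini files is installed
    yum list installed | grep php54w-mysql

    # Install it if missing, then restart FPM so the newly scanned
    # /etc/php.d files are actually picked up
    yum install php54w-mysql
    service php-fpm restart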

    Read the article

  • How can I fix problems with interlaced video jerking/flicking when played back on DVD players? (Mixin

    - by Simon P Stevens
    I'm trying to make a DVD, and the final DVD jerks when played on standalone DVD players. It seems to play fine on PCs. I think the problem may be to do with interlacing settings when rendering the final output, but I'll outline the whole editing process I have followed in case I've made a mistake somewhere else. Most of the footage comes from a Sony handycam (one of those mini-DVD ones), so it isn't great quality. It was set to "high quality" (haha) and 16:9 aspect ratio when it was recorded.

    I copy the files directly from the mini DVDs onto the hard drive and import them into Cinelerra. In Cinelerra I set the format to 25fps, 720x576, RGBA-8bit, 16:9, interlaced bottom fields first. When I've finished the editing, I add a "Fields to frames" effect (set to bottom first) to each video track. I render the audio and video separately:

    Audio: AC3, 128kbps
    Video: YUV4MPEG stream, with these video pipe settings:

    ffmpeg -f yuv4mpegpipe -i - -y -target dvd -flags +ilme+ildct mpeg2video %

    Cinelerra often crashes during the rendering, so I set it to generate a new video file at each label and combine them using cat when I've got a successful render of each one. Once I've combined them, I use mencoder to re-index them:

    mencoder -forceidx -oac copy -ovc copy merged.m2v -o mergedReIndexed.m2v

    I combine the audio and video files using ffmpeg:

    ffmpeg -i AudioFile.ac3 -i VideoFile.m2v -target dvd -flags +ilme+ildct FinalMovie.mpg

    Then I build the menus with spumux, create the DVD file system with dvdauthor, and finally write it to a DVD-R like this:

    nice -n -20 growisofs -dvd-compat -speed=2 -Z /dev/dvd -dvd-video -V VIDEO ./ && eject /dev/dvd

    Originally the DVD flickered badly, so, as suggested in a guide, I added the "Fields to frames" effect in Cinelerra. Now it doesn't "flicker", but it has become "jerky" when there is lots of motion, particularly when the camera is moving, so the whole background moves. This is what I've tried so far:

    - Removed "mpeg2video" from the Cinelerra video render pipe.
    - Removed +ilme from the render pipe.
    - Removed +ildct from the render pipe.
    - Removed +ilme from the render audio/video rejoin command.
    - Removed +ildct from the render audio/video rejoin command.
    - Added -alt to the render pipe.
    - Added -alt to the render audio/video rejoin command.
    - Tried with and without the "Frames to fields" effect in Cinelerra.

    ...and various combinations of the above. I've also tried this: change the Cinelerra fps to 50, use "Fields to frames" (instead of "Frames to fields"), render to an intermediate QTforlinux JPEG video stream, re-import that back into Cinelerra, add a "Frames to fields" effect, and then render that output as normal (@25fps); I still have the same problem.

    Has anyone experienced this "jerking" playback before? Can anyone give any suggestions on how to fix it? (Like I say, it plays back fine on a PC, but not on any of the standalone players I've tried.)
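    One pattern worth noting: jerky motion that only shows up on standalone players is the classic symptom of a field-order mismatch, i.e. bottom-field-first footage being encoded or flagged as top-field-first (PCs deinterlace on playback, which hides it). A variant of the render pipe to experiment with, assuming the footage really is bottom-field-first: keep the video interlaced instead of converting fields to frames, and flag the field order explicitly. This is only a sketch; exact option support depends on the ffmpeg build.

    # -top 0 marks the stream bottom-field-first, and +ildct+ilme keep
    # interlaced DCT and interlaced motion estimation for PAL DVD output
    ffmpeg -f yuv4mpegpipe -i - -y -target pal-dvd -top 0 -flags +ildct+ilme out.m2v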

    Read the article

  • Webserver Responses Hanging

    - by drscroogemcduck
    From some networks, requesting certain images on our webserver is very flaky. I've looked at tcpdumps on both sides: the server sends back part of the file and the client ACKs the TCP packet, but the server never receives the ACK.

    The server's view:

    41 19.941136 212.169.34.114 209.20.73.85 TCP 52456 > http [SYN] Seq=0 Win=8192 Len=0 MSS=1460 WS=2
    42 19.941136 209.20.73.85 212.169.34.114 TCP http > 52456 [SYN, ACK] Seq=0 Ack=1 Win=5440 Len=0 MSS=1360
    46 20.041142 212.169.34.114 209.20.73.85 TCP 52456 > http [ACK] Seq=1 Ack=1 Win=65280 Len=0
    47 20.045142 212.169.34.114 209.20.73.85 HTTP GET /map/map/s+74-WBkWk0aR28Yy-YjXA== HTTP/1.1
    48 20.045142 209.20.73.85 212.169.34.114 TCP http > 52456 [ACK] Seq=1 Ack=522 Win=6432 Len=0
    49 20.045142 209.20.73.85 212.169.34.114 TCP [TCP segment of a reassembled PDU]

    (Frame 49 is part of the content of the image: 2720 bytes. I assume it is reassembled in tcpdump and is fragmented over the wire.)

    ** The server never receives the ACK sent in frame 282 and will eventually resend the TCP segment. **

    The client's view:

    274 26.161773 10.0.16.67 209.20.73.85 TCP 52456 > http [SYN] Seq=0 Win=8192 Len=0 MSS=1460 WS=2
    276 26.262867 209.20.73.85 10.0.16.67 TCP http > 52456 [SYN, ACK] Seq=0 Ack=1 Win=5440 Len=0 MSS=1360
    277 26.263255 10.0.16.67 209.20.73.85 TCP 52456 > http [ACK] Seq=1 Ack=1 Win=65280 Len=0
    278 26.265193 10.0.16.67 209.20.73.85 HTTP GET /map/map/s+74-WBkWk0aR28Yy-YjXA== HTTP/1.1
    279 26.365562 209.20.73.85 10.0.16.67 TCP http > 52456 [ACK] Seq=1 Ack=522 Win=6432 Len=0
    280 26.368002 209.20.73.85 10.0.16.67 TCP [TCP segment of a reassembled PDU]
    282 26.571380 10.0.16.67 209.20.73.85 TCP 52456 > http [ACK] Seq=522 Ack=1361 Win=65280 Len=0

    (Frame 280 is part of the content of the image: only 1400 bytes.)

    The network we are having trouble with is NATed. Is there any kind of explanation for this weirdness?
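    The asymmetry in the captures is suggestive: the server emits a 2720-byte segment (likely two MSS-sized packets coalesced by capture-time offload) while the client only ever sees 1400 bytes, and the client's ACK then vanishes somewhere on the return path through the NAT. A useful next step is capturing on the NAT gateway itself, on both the inside and outside interfaces, to see whether the ACK reaches the gateway and whether it leaves it; that splits the path in half. A sketch, assuming a Linux gateway (interface names are placeholders):

    # Inside interface: does the client's ACK arrive at the gateway?
    tcpdump -ni eth1 'host 209.20.73.85 and tcp port 80'

    # Outside interface: does the (rewritten) ACK leave toward the server?
    tcpdump -ni eth0 'host 209.20.73.85 and tcp port 80'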

    Read the article

  • Use an audio/video file from a Linux laptop via USB to be played by Magic Sing ET-23H

    - by AisIceEyes
    I am one of the technical directors of a regular karaoke contest event. Due to a tight budget, we are using what one of the sponsors provides for the contest: a Magic Sing ET-23H. The video output of the Magic Sing ET-23H is broadcast on two big screens shown to the audience and event attendees. When a contestant provides a karaoke video, the video file is on a readable USB flash drive attached to the USB input of the Magic Sing ET-23H.

    What really bugs me is that the interface of the Magic Sing ET-23H is also broadcast on the big-screen video feeds: the interface for choosing the video file is seen on both screens by the audience. I will post a picture of the USB input of the Magic Sing ET-23H in the comments (if my less-than-10 reputation allows me).

    I always bring my laptop, an Acer AS5742-7653, to the event. I use it for tallying the judges' scores and for playing audio files from contestants who did not provide a karaoke video. I personally use different Linux distros, but I almost always use my Ubuntu Studio 12.04.3 64-bit partition during the event.

    My question is this: is there a way I can share a temporary video/audio file directly from the laptop to the Magic Sing ET-23H so that it can broadcast both the video and audio, just like Windows' AviSynth AVS files, VirtualDub's temporary AVI files, or ffplay (from ffmpeg)? I have researched the matter somewhat and found links on SuperUser.com, though I can only provide the links in the comments if my reputation of less than 10 allows. I have a hunch it is possible, but I have not fully understood the device being used at the event, the Magic Sing ET-23H, or whether there are other ways for it to take video and audio input besides its USB port.

    Any help with my current predicament is highly appreciated. Thank you.

    PS: Since I need at least 10 reputation to post more than 2 links and also post images, I will try to post the image & links in the comments (if my below-10 reputation allows me).

    Read the article

  • Postfix + SASLAUTHD + MySQL authentication problems

    - by Or W
    I've been trying to sort this out for the past 6 hours or so; this is the error message I'm facing (running CentOS x64):

    /var/log/maillog:

    Jun 22 20:42:49 ptroa postfix/smtpd[10130]: warning: SASL authentication failure: Password verification failed
    Jun 22 20:42:49 ptroa postfix/smtpd[10130]: warning: bzq-79-177-192-133.red.bezeqint.net[79.177.192.133]: SASL PLAIN authentication failed: authentication failure
    Jun 22 20:42:49 ptroa postfix/smtpd[10130]: warning: bzq-79-177-192-133.red.bezeqint.net[79.177.192.133]: SASL LOGIN authentication failed: authentication failure

    /var/log/messages:

    Jun 22 20:15:38 ptroa saslauthd[9401]: do_auth : auth failure: [user=myuser] [service=smtp] [realm=domain.com] [mech=pam] [reason=PAM auth error]

    I have dovecot installed as well, and I'm able to receive emails via the MySQL authentication. The problem is when I'm trying to use SMTP to send out emails. Some config files:

    /etc/postfix/main.cf:

    # See /usr/share/postfix/main.cf.dist for a commented, more complete version
    # Debian specific: Specifying a file name will cause the first
    # line of that file to be used as the name. The Debian default
    # is /etc/mailname.
    myorigin = /etc/mailname
    smtpd_banner = Server Message
    biff = no

    # appending .domain is the MUA's job.
    append_dot_mydomain = no

    # Uncomment the next line to generate "delayed mail" warnings
    #delay_warning_time = 4h

    readme_directory = /usr/share/doc/postfix

    # TLS parameters
    smtpd_tls_cert_file = /etc/postfix/smtpd.cert
    smtpd_tls_key_file = /etc/postfix/smtpd.key
    smtpd_use_tls = yes
    smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
    smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

    # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
    # information on enabling SSL in the smtp client.

    myhostname = domain.com
    alias_maps = hash:/etc/aliases
    alias_database = hash:/etc/aliases
    myorigin = /etc/mailname
    mydestination =
    relayhost =
    mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
    mailbox_size_limit = 0
    recipient_delimiter = +
    inet_interfaces = all
    html_directory = /usr/share/doc/postfix/html
    message_size_limit = 30720000
    virtual_alias_domains =
    virtual_alias_maps = proxy:mysql:/etc/postfix/mysql-virtual_forwardings.cf, mysql:/etc/postfix/mysql-virtual_email2email.cf
    virtual_mailbox_domains = proxy:mysql:/etc/postfix/mysql-virtual_domains.cf
    virtual_mailbox_maps = proxy:mysql:/etc/postfix/mysql-virtual_mailboxes.cf
    virtual_mailbox_base = /home/vmail
    virtual_uid_maps = static:5000
    virtual_gid_maps = static:5000
    smtpd_sasl_auth_enable = yes
    broken_sasl_auth_clients = yes
    smtpd_sasl_authenticated_header = yes
    smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
    virtual_create_maildirsize = yes
    virtual_maildir_extended = yes
    proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_cano$
    virtual_transport = dovecot
    dovecot_destination_recipient_limit = 1

    /etc/default/saslauthd:

    START=yes
    DESC="SASL Authentication Daemon"
    NAME="saslauthd"
    MECHANISMS="pam"
    MECH_OPTIONS=""
    THREADS=5
    OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd -r"

    /etc/pam.d/smtp:

    #%PAM-1.0
    #auth include password-auth
    #account include password-auth
    auth required pam_mysql.so user=mail_admin passwd=password host=127.0.0.1 db=mail table=users usercolumn=email passwdcolumn=password crypt=1 verbose=1
    account sufficient pam_mysql.so user=mail_admin passwd=password host=127.0.0.1 db=mail table=users usercolumn=email passwdcolumn=password crypt=1 verbose=1
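    One detail worth checking: /var/log/messages shows [user=myuser] [realm=domain.com], so the login name is being split at the @ sign; the -r in the OPTIONS line is supposed to re-append the realm before authentication, and if the name that actually reaches pam_mysql doesn't match what the email column stores, the lookup will fail. Testing saslauthd directly, bypassing Postfix, isolates which layer is broken. A sketch (credentials are placeholders; the socket path follows the OPTIONS line above):

    # Test the full address first, then the split user/realm form
    # that the log shows saslauthd actually used
    testsaslauthd -u myuser@domain.com -p mypassword -s smtp \
        -f /var/spool/postfix/var/run/saslauthd/mux
    testsaslauthd -u myuser -r domain.com -p mypassword -s smtp \
        -f /var/spool/postfix/var/run/saslauthd/mux

    # If both fail, the PAM/MySQL layer is the problem; check that crypt=1
    # in /etc/pam.d/smtp matches how passwords are stored in the users table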

    Read the article

< Previous Page | 674 675 676 677 678 679 680 681 682 683 684 685  | Next Page >