Search Results

Search found 13563 results on 543 pages for 'email headers'.


  • Apache Tomcat load balancing and clustering on Ubuntu

    - by user740010
    i am facing a problem in clustering the tomcat with apache as a loadbalancer using mod_jk on ubuntu. i have install apache2 on my ubuntu 11.04 and i have downloaded tomcat7 created two copies and kept them at two different location. 1st one is at /home/net4u/vishal/test/tomcatA 2nd one is at /home/net4u/vishal/test1/tomcatB i have made following changes to server.xml file in /conf folder 1. <Server port="8205" shutdown="SHUTDOWN"> 2. <Connector port="8280" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> 3.<Connector port="8209" protocol="AJP/1.3" redirectPort="8443" /> <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcatB"> 4. <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> similarly i have modified other tomcat i.e tomcatA server.xml content of the server.xml is as follow: -- <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 --> <Connector port="8280" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8109" protocol="AJP/1.3" redirectPort="8443" /> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcatB"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- uncomment for clustering--> <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> <!-- Use the LockOutRealm to prevent attempts to guess user passwords via a brute-force attack --> <Realm className="org.apache.catalina.realm.LockOutRealm"> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. 
--> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> </Realm> <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true"> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html Note: The pattern used is equivalent to using pattern="common" --> <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="%h %l %u %t &quot;%r&quot; %s %b" resolveHosts="false"/> </Host> </Engine> i have install libapache2-mod-jk step 1. i have Created jk.load file in /etc/apache2/mods-enabled/jk.load content is as follows: LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so Create /etc/apache2/mods-enabled/jk.conf: JkWorkersFile /etc/apache2/workers.properties JkLogFile /var/log/apache2/jk.log JkMount /ecommerce/* worker1 JkMount /images/* worker1 JkMount /content/* worker1 step 2. Created workers.properties file in /etc/apache2/workers.properties content is as follows: workers.tomcat_home=/home/vishal/Desktop/test/tomcatA workers.java_home=/usr/lib/jvm/default-java ps=/ worker.list=tomcatA,tomcatB,loadbalancer   worker.tomcatA.port=8109 worker.tomcatA.host=localhost worker.tomcatA.type=ajp13 worker.tomcatA.lbfactor=1   worker.tomcatB.port=8209 worker.tomcatB.host=localhost worker.tomcatB.type=ajp13 worker.tomcatB.lbfactor=1 worker.loadbalancer.type=lb worker.loadbalancer.balanced_workers=tomcatA,tomcatB worker.loadbalancer.sticky_session=1 i tried the same thing on the windows machine it is working.
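
    One thing that stands out in the setup above: jk.conf mounts /ecommerce, /images and /content to "worker1", but worker.list only defines tomcatA, tomcatB and loadbalancer, so Apache has no such worker to route to. Below is a minimal sketch of a matching jk.conf / workers.properties pair for this layout; the AJP ports (8109/8209) and mount paths are taken from the question, and the directive names are stock mod_jk.

        # /etc/apache2/mods-enabled/jk.conf -- mount onto the balancer worker, not an undefined one
        JkWorkersFile /etc/apache2/workers.properties
        JkLogFile     /var/log/apache2/jk.log
        JkMount /ecommerce/* loadbalancer
        JkMount /images/*    loadbalancer
        JkMount /content/*   loadbalancer

        # /etc/apache2/workers.properties -- one AJP worker per Tomcat plus the lb worker itself
        worker.list=loadbalancer

        worker.tomcatA.type=ajp13
        worker.tomcatA.host=localhost
        worker.tomcatA.port=8109
        worker.tomcatA.lbfactor=1

        worker.tomcatB.type=ajp13
        worker.tomcatB.host=localhost
        worker.tomcatB.port=8209
        worker.tomcatB.lbfactor=1

        worker.loadbalancer.type=lb
        worker.loadbalancer.balance_workers=tomcatA,tomcatB
        worker.loadbalancer.sticky_session=1

    For sticky sessions to work, the jvmRoute in each instance's server.xml has to match its worker name exactly (tomcatA and tomcatB respectively); it is worth double-checking that the copy under /home/net4u/vishal/test/tomcatA was not left with jvmRoute="tomcatB", since the pasted server.xml still shows that value.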

    Read the article

  • How might I stop BACKSCATTER using Qmail?

    - by alecb
    New to Server Fault, so please pardon me if the details run long. The machine is a Linux box acting as a virtual host for domain hosting; it runs CentOS and Parallels Plesk 9.x. Regardless of everything below, the spam keeps flowing in at 1-3 messages per second. An explanation of the problem: "The xinetd service listens for SMTP connections and forwards them to qmail-smtpd. The qmail service only processes the queue; it does not control messages coming into the queue, which is why stopping it has no effect. If you stop xinetd AND qmail, then kill any open qmail-smtpd processes, all mail flow comes to a stop (sometimes). The problem is that qmail-smtpd is not smart enough to check for valid mailboxes on the local host before accepting the mail, so it accepts bad mail with a forged reply-to address, which then gets processed in the queue by qmail. Qmail cannot deliver it locally and bounces it to the forged reply-to address." We believe the fix is to patch qmail-smtpd so that it checks for the existence of a local mailbox BEFORE accepting the message, but when we try to compile the chkuser patch we run into failures caused by the Plesk control panel. Is anyone aware of something we could do differently or better? Things that have NOT worked so far: turning off any and all mail processes (to check whether an individual account has been compromised; this has been verified as NOT the case); turning off the mail AND HTTP server processes (in case of a compromised formmail); running Exim in lieu of qmail (an easy, quick install, but xinetd forces Exim to close and restarts qmail on its own); turning on SPF protection via the Plesk GUI (does not help); turning on greylisting via the Plesk GUI (does not help); and disabling bounce notifications via the command line. Things that MIGHT work but have complications: using Postfix instead of qmail (I have no knowledge of Postfix and don't want to bother with it unless someone knows it handles backscatter WELL before I invest the time), and, as mentioned above, compiling the chkuser patch, which we believe will stop this problem along with qmail (because of Plesk in the mix, the compile fails every time, and Parallels Plesk support is unresponsive unless I cough up money). If I don't clear the spam out of the outgoing mail queue nightly, it clogs up with millions of spams and brings down outgoing email service. Any and all help is welcome and appreciated!
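
    Since the outgoing queue already has to be cleared by hand every night, one stop-gap is to automate the report-and-purge step until chkuser (or a similar valid-recipient patch) can be built. A rough sketch only, assuming the qmHandle queue tool is installed and that the init script names below match the Plesk qmail setup; the threshold is arbitrary:

        #!/bin/sh
        # nightly-queue-purge.sh -- report the queue size and purge it above a threshold.
        THRESHOLD=10000

        # qmail-qstat prints "messages in queue: N"; grab N
        COUNT=$(/var/qmail/bin/qmail-qstat | awk '/^messages in queue:/ {print $4; exit}')
        echo "$(date): ${COUNT} messages in queue"

        if [ "${COUNT:-0}" -gt "${THRESHOLD}" ]; then
            /etc/init.d/xinetd stop      # stop accepting new SMTP connections
            /etc/init.d/qmail stop       # stop queue processing before touching the queue
            qmHandle -D                  # delete every message still queued
            /etc/init.d/qmail start
            /etc/init.d/xinetd start
        fi

    This only treats the symptom; the real cure is still rejecting mail for nonexistent recipients at SMTP time, which is exactly what chkuser does.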

    Read the article

  • Write STDOUT & STDERR to a logfile, also write STDERR to screen

    - by Stefan Lasiewski
    I would like to run several commands and capture all of their output to a logfile. I also want to print any errors to the screen (or optionally mail the output to someone). Here's an example. The following line runs three commands and writes all output (STDOUT and STDERR) into a single logfile: { command1 && command2 && command3 ; } > logfile.log 2>&1 Here is what I want to do with the output of these commands: STDERR and STDOUT for all commands go to a logfile in case I need it later (I usually won't look in there unless there are problems), and STDERR is also printed to the screen (or optionally piped to /bin/mail) so that any error stands out and doesn't get ignored. It would be nice if the return codes were still usable, so that I could do some error handling. Maybe I want to send email if there was an error, like this: { command1 && command2 && command3 ; } > logfile.log 2>&1 || mailx -s "There was an error" [email protected] The problem I run into is that STDERR loses its context during I/O redirection: a '2>&1' converts STDERR into STDOUT, so I can no longer see the errors separately if I just do 2> error.log. Here are a couple of juicier examples. Let's pretend I am running some familiar build commands, and I don't want the entire build to stop just because of one error, so I use the '--keep-going' flag: { ./configure && make --keep-going && make install ; } > build.log 2>&1 Or here's a simple (and perhaps sloppy) build-and-deploy script, which keeps going in the event of an error: { ./configure && make --keep-going && make install && rsync -av --keep-going /foo devhost:/foo ; } > build-and-deploy.log 2>&1 I think what I want involves some sort of Bash I/O redirection, but I can't figure it out.
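
    For what it's worth, the combination described above can be had with a process substitution: STDOUT goes to the logfile, and STDERR goes through tee, which appends it to the same logfile and echoes it back to the terminal. A sketch, assuming bash (process substitution is not plain POSIX sh), with the mail address as a placeholder:

        #!/bin/bash
        # STDOUT -> build.log; STDERR -> build.log AND the terminal; $? still reflects the commands.
        { ./configure && make --keep-going && make install ; } \
            > build.log 2> >(tee -a build.log >&2)

        if [ $? -ne 0 ]; then
            mailx -s "There was an error" someone@example.com < build.log
        fi

    One caveat: because STDOUT and STDERR now reach build.log through two different file descriptors, lines from the two streams are not guaranteed to interleave in exactly the order they were printed.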

    Read the article

  • FFMPEG Segfault Solutions

    - by Brentley_11
    I'm trying to convert a bunch of movies into h.264 mp4's using FFMPEG. These movies are sourced from various portable camcorders such as the Flip Mino HD and the Kodak ZI8. One issue I'm having with video from the ZI8 is it seems to be causing FFMPEG to segfault. Here is my command: ffmpeg -i 'XmasSailor720p60fps.MOV' -threads 2 -acodec libfaac -ab 96kb -vcodec libx264 -vpre hq -b 500kb -s 484x272 XmasSailor.mp4 Here is the output: FFmpeg version SVN-r20668, Copyright (c) 2000-2009 Fabrice Bellard, et al. built on Dec 2 2009 18:37:34 with gcc 4.2.4 (Ubuntu 4.2.4-1ubuntu4) configuration: --enable-libfaac --enable-libfaad --enable-libmp3lame --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared libavutil 50. 5. 1 / 50. 5. 1 libavcodec 52.42. 0 / 52.42. 0 libavformat 52.39. 2 / 52.39. 2 libavdevice 52. 2. 0 / 52. 2. 0 libswscale 0. 7. 2 / 0. 7. 2 libpostproc 51. 2. 0 / 51. 2. 0 Seems stream 0 codec frame rate differs from container frame rate: 59.94 (60000/1001) -> 29.97 (30000/1001) Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'XmasSailor720p60fps.MOV': Duration: 00:00:05.37, start: 0.000000, bitrate: 12021 kb/s Stream #0.0(eng): Video: h264, yuv420p, 1280x720 [PAR 1:1 DAR 16:9], 11994 kb/s, 29.97 tbr, 90k tbn, 59.94 tbc Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 128 kb/s Metadata major_brand : qt minor_version : 0 compatible_brands: qt comment : KODAK Zi8 Pocket Video Camera comment-eng : KODAK Zi8 Pocket Video Camera [libx264 @ 0x99e1020]using SAR=1/1 [libx264 @ 0x99e1020]using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.1 Cache64 [libx264 @ 0x99e1020]profile High, level 2.1 Output #0, mp4, to 'XmasSailor.mp4': Stream #0.0(eng): Video: libx264, yuv420p, 484x272 [PAR 1:1 DAR 121:68], q=10-51, 500 kb/s, 30k tbn, 29.97 tbc Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 96 kb/s Metadata comment : Encoded with the Statusfirm Video Transcoder Stream mapping: Stream #0.0 -> #0.0 Stream #0.1 -> #0.1 Press [q] to stop encoding [h264 @ 0x99de950]B picture before any references, skipping [h264 @ 0x99de950]decode_slice_header error [h264 @ 0x99de950]no frame! Error while decoding stream #0.0 [h264 @ 0x99de950]B picture before any references, skipping [h264 @ 0x99de950]decode_slice_header error [h264 @ 0x99de950]no frame! 
Error while decoding stream #0.0 frame= 20 fps= 0 q=13797729.0 size= 0kB time=0.66 bitrate= 0.6kbits/s frame= 39 fps= 37 q=13797729.0 size= 0kB time=1.30 bitrate= 0.3kbits/s frame= 48 fps= 30 q=33.0 size= 11kB time=0.10 bitrate= 903.0kbits/s frame= 58 fps= 27 q=31.0 size= 22kB time=0.43 bitrate= 421.0kbits/s frame= 67 fps= 25 q=29.0 size= 41kB time=0.73 bitrate= 462.6kbits/s frame= 75 fps= 23 q=29.0 size= 59kB time=1.00 bitrate= 486.7kbits/s frame= 83 fps= 22 q=29.0 size= 81kB time=1.27 bitrate= 521.9kbits/s frame= 90 fps= 21 q=29.0 size= 97kB time=1.50 bitrate= 530.1kbits/s frame= 98 fps= 20 q=29.0 size= 114kB time=1.77 bitrate= 526.9kbits/s frame= 106 fps= 20 q=29.0 size= 134kB time=2.04 bitrate= 537.7kbits/s frame= 114 fps= 19 q=29.0 size= 150kB time=2.30 bitrate= 533.7kbits/s frame= 122 fps= 19 q=29.0 size= 172kB time=2.57 bitrate= 547.8kbits/s frame= 130 fps= 19 q=29.0 size= 193kB time=2.84 bitrate= 557.5kbits/s frame= 136 fps= 18 q=29.0 size= 211kB time=3.04 bitrate= 570.0kbits/s frame= 144 fps= 18 q=29.0 size= 242kB time=3.30 bitrate= 599.5kbits/s frame= 152 fps= 17 q=30.0 size= 261kB time=3.57 bitrate= 598.6kbits/s frame= 157 fps= 15 q=-1.0 Lsize= 368kB time=5.21 bitrate= 579.3kbits/s video:302kB audio:61kB global headers:0kB muxing overhead 1.416371% [libx264 @ 0x99e1020]frame I:1 Avg QP:27.22 size: 8720 [libx264 @ 0x99e1020]frame P:48 Avg QP:25.15 size: 3759 [libx264 @ 0x99e1020]frame B:108 Avg QP:30.10 size: 1105 [libx264 @ 0x99e1020]consecutive B-frames: 0.6% 11.5% 28.8% 59.0% [libx264 @ 0x99e1020]mb I I16..4: 28.5% 47.6% 23.9% [libx264 @ 0x99e1020]mb P I16..4: 0.8% 1.3% 0.5% P16..4: 50.6% 17.7% 13.1% 0.0% 0.0% skip:15.9% [libx264 @ 0x99e1020]mb B I16..4: 0.2% 0.3% 0.1% B16..8: 44.0% 1.2% 2.6% direct: 5.1% skip:46.5% L0:45.5% L1:51.0% BI: 3.5% [libx264 @ 0x99e1020]final ratefactor: 23.51 [libx264 @ 0x99e1020]8x8 transform intra:49.9% inter:67.9% [libx264 @ 0x99e1020]direct mvs spatial:98.1% temporal:1.9% [libx264 @ 0x99e1020]coded y,uvDC,uvAC intra: 54.7% 76.1% 41.4% inter: 17.1% 24.4% 7.8% [libx264 @ 0x99e1020]i16 v,h,dc,p: 18% 52% 5% 25% [libx264 @ 0x99e1020]i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 12% 22% 9% 7% 10% 10% 9% 8% 13% [libx264 @ 0x99e1020]i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 13% 18% 8% 8% 10% 13% 10% 9% 12% [libx264 @ 0x99e1020]Weighted P-Frames: Y:10.4% [libx264 @ 0x99e1020]ref P L0: 60.2% 15.3% 11.0% 7.6% 5.2% 0.7% [libx264 @ 0x99e1020]ref B L0: 72.6% 15.6% 11.8% [libx264 @ 0x99e1020]kb/s:471.17 Segmentation fault I'm wondering if anyone else has ran into similar issues. I wasn't able to find anything helpful via Google. Another question I have is if anyone knows of a company that offers paid support for FFMPEG. Thank you for your time.
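
    Not a fix for the crash itself, but a workaround that is sometimes suggested for clips whose container and codec frame rates disagree (as the "Seems stream 0 codec frame rate differs from container frame rate" warning above indicates): remux the clip without re-encoding first, then transcode the remuxed file. A sketch only, reusing the options from the command in the question:

        # 1) Rewrite the container without touching the streams
        ffmpeg -i XmasSailor720p60fps.MOV -vcodec copy -acodec copy remuxed.mp4

        # 2) Transcode the remuxed file with the original settings
        ffmpeg -i remuxed.mp4 -threads 2 -acodec libfaac -ab 96kb \
               -vcodec libx264 -vpre hq -b 500kb -s 484x272 XmasSailor.mp4

    If the decode errors persist, building a current FFmpeg (and x264) from source is usually the first thing upstream will ask for before looking at a segfault in a December 2009 SVN build.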

    Read the article

  • Apache on Win32: Slow Transfers of single, static files in HTTP, fast in HTTPS

    - by Michael Lackner
    I have a weird problem with Apache 2.2.15 on Windows 2000 Server SP4. Basically, I am trying to serve larger static files, images, videos etc. The download seems to be capped at around 550kB/s even over 100Mbit LAN. I tried other protocols (FTP/FTPS/FTP+ES/SCP/SMB), and they are all in the multi-megabyte range. The strangest thing is that, when using Apache with HTTPS instead of HTTP, it serves very fast, around 2.7MByte/s! I also tried the AnalogX SimpleWWW server just to test the plain HTTP speed of it, and it gave me a healthy 3.3Mbyte/s. I am at a total loss here. I searched the web, and tried to change the following Apache configuration directives in httpd.conf, one at a time, mostly to no avail at all: SendBufferSize 1048576 #(tried multiples of that too, up to 100Mbytes) EnableSendfile Off #(minor performance boost) EnableMMAP Off Win32DisableAcceptEx HostnameLookups Off #(default) I also tried to tune the following registry parameters, setting their values to 4194304 in decimal (they are REG_DWORD), and rebooting afterwards: HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultReceiveWindow HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultSendWindow Additionally, I tried to install mod_bw, which sets the event timer precision to 1ms, and allows for bandwidth throttling. According to some people it boosts static file serving performance when set to unlimited bandwidth for everybody. Unfortunately, it did nothing for me. So: AnalogX HTTP: 3300kB/s Gene6 FTPD, plain: 3500kB/s Gene6 FTPD, Implicit and Explicit SSL, AES256 Cipher: 1800-2000kB/s freeSSHD: 1100kB/s SMB shared folder: about 3000kB/s Apache HTTP, plain: 550kB/s Apache HTTPS: 2700kB/s Clients that were used in the bandwidth testing: Internet Explorer 8 (HTTP, HTTPS) Firefox 8 (HTTP, HTTPS) Chrome 13 (HTTP, HTTPS) Opera 11.60 (HTTP, HTTPS) wget under CygWin (HTTP, HTTPS) FileZilla (FTP, FTPS, FTP+ES, SFTP) Windows Explorer (SMB) Generally, transfer speeds are not too high, but that's because the server machine is an old quad Pentium Pro 200MHz machine with 2GB RAM. However, I would like Apache to serve at at least 2Mbyte/s instead of 550kB/s, and that already works with HTTPS easily, so I fail to see why plain HTTP is so crippled. I am using a Kerio Winroute Firewall, but no Throttling and no special filters peeking into HTTP traffic, just the plain Firewall functionality for blocking/allowing connections. The Apache error.log (Loglevel info) shows no warnings, no errors. Also nothing strange to be seen in access.log. I have already stripped down my httpd.conf to the bare minimum just to make sure nothing is interfering, but that didn't help either. If you have any idea, help would be greatly appreciated, since I am totally out of ideas! Thanks! Edit: I have now tried a newer Apache 2.2.21 to see if it makes any difference. However, the behaviour is exactly the same. Edit 2: KM01 has requested a sniff on the HTTP headers, so here comes the LiveHTTPHeaders output (an extension to Firefox). The Output is generated on downloading a single file called "elephantsdream_source.264", which is an H.264/AVC elementary video stream under an Open Source license. I have taken the freedom to edit the URL, removing folders and changing the actual servers domain name to www.mydomain.com. 
Here it is: LiveHTTPHeaders, Plain HTTP: http://www.mydomain.com/elephantsdream_source.264 GET /elephantsdream_source.264 HTTP/1.1 Host: www.mydomain.com User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Connection: keep-alive HTTP/1.1 200 OK Date: Wed, 21 Dec 2011 20:55:16 GMT Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17 Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT Etag: "c000000013fa5-29cf10e9-493b311889d3c" Accept-Ranges: bytes Content-Length: 701436137 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: text/plain LiveHTTPHeaders, HTTPS: https://www.mydomain.com/elephantsdream_source.264 GET /elephantsdream_source.264 HTTP/1.1 Host: www.mydomain.com User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Connection: keep-alive HTTP/1.1 200 OK Date: Wed, 21 Dec 2011 20:56:57 GMT Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17 Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT Etag: "c000000013fa5-29cf10e9-493b311889d3c" Accept-Ranges: bytes Content-Length: 701436137 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: text/plain
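
    For reference, here are the directives mentioned above gathered into one block as they would appear in httpd.conf; this is only a sketch of the combination worth benchmarking together, not a known fix — all of them are stock Apache 2.2 directives:

        # httpd.conf -- experiments for slow static transfers on Win32 (Apache 2.2.x)
        EnableSendfile  Off        # bypass TransmitFile() and do normal reads
        EnableMMAP      Off        # no memory-mapped reads of large files
        SendBufferSize  1048576    # TCP send buffer in bytes; 0 means use the OS default
        Win32DisableAcceptEx       # fall back to accept() instead of AcceptEx()
        HostnameLookups Off

    Given that HTTPS, FTP and SMB are all several times faster, it may also be worth ruling out anything on the path that inspects plain port-80 traffic (a Kerio content filter or an antivirus web shield), since encrypted traffic slips past that kind of inspection.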

    Read the article

  • Windows DHCP Server - get notification when a non-AD joined device gets an IP address

    - by TheCleaner
    SCENARIO To simplify this down to it's easiest example: I have a Windows 2008 R2 standard DC with the DHCP server role. It hands out IPs via various IPv4 scopes, no problem there. WHAT I'D LIKE I would like a way to create a notification/eventlog entry/similar whenever a device gets a DHCP address lease and that device IS NOT a domain joined computer in Active Directory. It doesn't matter to me whether it is custom Powershell, etc. Bottom line = I'd like a way to know when non-domain devices are on the network without using 802.1X at the moment. I know this won't account for static IP devices. I do have monitoring software that will scan the network and find devices, but it isn't quite this granular in detail. RESEARCH DONE/OPTIONS CONSIDERED I don't see any such possibilities with the built in logging. Yes, I'm aware of 802.1X and have the ability to implement it long-term at this location but we are some time away from a project like that, and while that would solve network authentication issues, this is still helpful to me outside of 802.1X goals. I've looked around for some script bits, etc. that might prove useful but the things I'm finding lead me to believe that my google-fu is failing me at the moment. I believe the below logic is sound (assuming there isn't some existing solution): Device receives DHCP address Event log entry is recorded (event ID 10 in the DHCP audit log should work (since a new lease is what I'd be most interested in, not renewals): http://technet.microsoft.com/en-us/library/dd759178.aspx) At this point a script of some kind would probably have to take over for the remaining "STEPS" below. Somehow query this DHCP log for these event ID 10's (I would love push, but I'm guessing pull is the only recourse here) Parse the query for the name of the device being assigned the new lease Query AD for the device's name IF not found in AD, send a notification email If anyone has any ideas on how to properly do this, I'd really appreciate it. I'm not looking for a "gimme the codez" but would love to know if there are alternatives to the above list or if I'm not thinking clear and another method exists for gathering this information. If you have code snippets/PS commands you'd like to share to help accomplish this, all the better.
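
    A sketch of the pull approach outlined in the steps above, in PowerShell. The audit-log location and its comma-separated field order (ID, date, time, description, IP address, host name, MAC) are the DHCP server defaults; the mail settings are placeholders, and the ActiveDirectory module is assumed to be available on the box running the script:

        # Parse today's DHCP audit log for new leases (event ID 10) and mail a notice
        # for any host name that has no matching AD computer account.
        Import-Module ActiveDirectory

        $day  = (Get-Date).DayOfWeek.ToString().Substring(0,3)
        $log  = "$env:windir\System32\dhcp\DhcpSrvLog-$day.log"

        # Event ID 10 = a new lease was handed out
        $rows = Get-Content $log | Where-Object { $_ -match '^10,' }

        foreach ($row in $rows) {
            $fields = $row -split ','
            $ip     = $fields[4]
            $name   = ($fields[5] -split '\.')[0]     # drop any DNS suffix

            if ($name -and -not (Get-ADComputer -Filter "Name -eq '$name'")) {
                Send-MailMessage -To 'admins@example.com' -From 'dhcp@example.com' `
                    -SmtpServer 'smtp.example.com' `
                    -Subject "Non-domain device leased $ip" `
                    -Body "DHCP handed $ip to '$name', which has no matching AD computer account."
            }
        }

    Run as a scheduled task, this covers the "pull" part of the list; it will not catch devices that register no host name at all, which would need a MAC-based check instead.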

    Read the article

  • Need help configuring my Tomcat server

    - by gablin
    I just reinstalled my entire server, and now I can't seem to get my JSP-based website to work on Tomcat anymore. I use the same server.xml file, which worked perfectly before the reinstallation, but no longer. Here's the content of the server.xml file which worked before: <!--APR library loader. Documentation at /docs/apr.html --> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html --> <Listener className="org.apache.catalina.core.JasperListener" /> <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html --> <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" /> <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" /> <!-- Global JNDI resources Documentation at /docs/jndi-resources-howto.html --> <GlobalNamingResources> <!-- Editable user database that can also be used by UserDatabaseRealm to authenticate users --> <Resource name="UserDatabase" auth="Container" type="org.apache.catalina.UserDatabase" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" pathname="conf/tomcat-users.xml" /> </GlobalNamingResources> <!-- A "Service" is a collection of one or more "Connectors" that share a single "Container" Note: A "Service" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. Documentation at /docs/config/service.html --> <Service name="Catalina"> <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 --> <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" /> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). 
Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Standalone" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine name="Catalina" defaultHost="localhost"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> --> <!-- The request dumper valve dumps useful debugging information about the request and response data received and sent by Tomcat. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve"/> --> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. --> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> <!-- Define the default virtual host Note: XML Schema validation will not work with Xerces 2.2. --> <!-- <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> --> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/> --> <!-- </Host> --> <Host name="www.rebootradio.nu"> <Alias>rebootradio.nu</Alias> <Context path="" docBase="D:/services/http/rebootradio.nu" debug="1" reloadable="true"/> </Host> </Engine> </Service> </Server> The JSP site doesn't use any WAR files or anything like that; there's just a default.jsp in the specified folder D:/services/http/rebootradio.nu which loads the site. As I said, this configuration worked before, but now with the latest verion of XAMPP and Tomcat it doesn't work anymore. All I get is a 404 message saying The requested resource () is not available.
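
    One thing to try on a current Tomcat: move the context definition out of server.xml into a per-host context file, which is the deployment style newer releases expect. A sketch, assuming the same host name and docBase as above; the separate appBase directory is an assumption and only needs to exist (it can stay empty):

        <!-- conf/Catalina/www.rebootradio.nu/ROOT.xml -->
        <!-- Tomcat deploys this automatically as the root context ("") of that Host -->
        <Context docBase="D:/services/http/rebootradio.nu" reloadable="true"/>

        <!-- and keep the Host entry in server.xml minimal -->
        <Host name="www.rebootradio.nu" appBase="webapps-rebootradio"
              unpackWARs="true" autoDeploy="true">
            <Alias>rebootradio.nu</Alias>
        </Host>

    If the 404 persists, check the welcome file: the stock welcome-file-list only covers index.html, index.htm and index.jsp, so a site whose front page is default.jsp needs either a rename or its own welcome-file entry in web.xml.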

    Read the article

  • High load on X3220 Quad Core Linux Apache server

    - by John Templar
    I'm seriously in need of help. My sites are now nearly impossible to use because of massive loads on my server. I'm already a month late on my mortgage and this really isn't helping my situation. I've been working on fixing this intermittent load problem for months (never this bad). I'm suspecting some kind of attack since I'm under DDOS attack a lot! I've been trying to figure out what is causing the load but I'm afraid I just don't have the experience or knowledge to understand all the data I've been looking at. I don't even know where to begin or how to test for the large array of attacks out there. Here's some data you might find useful... Server: Xeon X3220 Quad Core 2.4 GHz - Linux, FreeBSD 500 GB HD and 8 Gig of Ram. Runs Centos release 5.7 Server Version: Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 mod_qos/9.74 Warning: All sites are softcore adult sites - mostly fantasy art like elves and amazons. 1) Sites may run fine for weeks or just days at less than 10 load then start jumping to 40-80 load - no idea why. Same sites, same mods, same amount of traffic - just WHAM! 2) I get an email almost every day that says: "Large Number of Failed Login Attempts from IP (different each time)". My webhost (who almost never helps me) told me it was a udp flood or something. 3) I've changed the port for MySQL from the default. If I ever put it back to the default - I get Loads of over 100 from what must be a constant mysql port flood. 4) I've reconfigured MYSQL. Link: http://www.deadlyamazons.com/logs/mycnf.txt 5) I have 3 Joomla Jomsocial networks. I've spent a couple weeks turning all the mods/plugins off, waiting a day and then turning them back on the next day or later if there isn't any change (there hasn't been). For example, on Thursday I'll turn off videos, on Friday I'll turn off chat.. etc and nothing changes the load appreciably. 6) Joomla info: All SEF turned off - sh404sef completely disabled and removed. Components: Joomla 1.5.22, Jomsocial 2.0.5, Kunena 1/31/2011, HWDMediashare 11/22/2010 and JBolo Chat 2.7.3, Comet Chat or Envolve Chat. Page Compression is on, Cache is on 15 mins. Please click on this forum to see links to all my reports: http://forum.joomla.org/viewtopic.php?f=433&t=706035&p=2777500#p2777500 Any help would be highly appreciated.
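
    Before pulling more Joomla extensions apart, it is worth confirming whether the spikes coincide with a connection flood or with the box running out of some resource. A sketch of the usual first-look commands — standard tools only; the access-log path varies by control panel:

        # Top remote IPs with open connections to the box
        netstat -ntu | awk 'NR>2 {print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head -20

        # CPU-bound, I/O-bound, or swapping?
        uptime; vmstat 1 5

        # Which processes are actually eating the CPU right now?
        top -b -n 1 | head -30

        # Top requesters in the Apache access log (path varies by install)
        awk '{print $1}' /usr/local/apache/logs/access_log | sort | uniq -c | sort -rn | head -20

    If one or two addresses dominate the netstat output during a spike, that points back at the DDoS/flood theory; if MySQL tops the CPU list instead, slow queries from the Jomsocial components are the more likely culprit.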

    Read the article

  • FFmpeg creates empty (black) frames

    - by resamsel
    I have a set of images from a timelapse shot (172 JPG files) that I want to convert into a movie. I tried several parameters with FFmpeg, but all I get is a video with black frames (though it has the expected length). ffmpeg -f image2 -vcodec mjpeg -y -i img_%03d.jpg timelapse2.mpg The command above creates this video: http://sdm-net.org/data/timelapse2.mpg What I'm expecting is something like this (created with Time Lapse Assembler.app): https://vimeo.com/39038362 - This is my fallback option, but I'd really like to create timelapse movies from a script. I'm on OSX Lion (10.7.3) with FFmpeg version (0.10) installed via Homebrew. I also tried to find a proper version of mencoder for OSX, but this doesn't seem to be an easy task. Also, ImageMagick's convert doesn't seem to work nicely, it creates really bad output and it seems there's not much I can do about it... Edit: With libx264 and an mp4 container: ffmpeg -f image2 -y -i img_%03d.jpg -vcodec libx264 timelapse4.mp4 Output: ffmpeg version 0.10 Copyright (c) 2000-2012 the FFmpeg developers built on Mar 26 2012 13:47:02 with clang 3.0 (tags/Apple/clang-211.12) configuration: --prefix=/usr/local/Cellar/ffmpeg/0.10 --enable-shared --enable-gpl --enable-version3 --enable-nonfree --enable-hardcoded-tables --enable-libfreetype --cc=/usr/bin/clang --enable-libx264 --enable-libfaac --enable-libmp3lame --enable-librtmp --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libxvid --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libass --disable-ffplay libavutil 51. 34.101 / 51. 34.101 libavcodec 53. 60.100 / 53. 60.100 libavformat 53. 31.100 / 53. 31.100 libavdevice 53. 4.100 / 53. 4.100 libavfilter 2. 60.100 / 2. 60.100 libswscale 2. 1.100 / 2. 1.100 libswresample 0. 6.100 / 0. 6.100 libpostproc 52. 0.100 / 52. 0.100 Input #0, image2, from 'img_%03d.jpg': Duration: 00:00:06.88, start: 0.000000, bitrate: N/A Stream #0:0: Video: mjpeg, yuvj420p, 3888x2592 [SAR 72:72 DAR 3:2], 25 fps, 25 tbr, 25 tbn, 25 tbc [buffer @ 0x7f8ec9415f20] w:3888 h:2592 pixfmt:yuvj420p tb:1/1000000 sar:72/72 sws_param: [libx264 @ 0x7f8ec981d800] using SAR=1/1 [libx264 @ 0x7f8ec981d800] frame MB size (243x162) > level limit (36864) [libx264 @ 0x7f8ec981d800] MB rate (984150) > level limit (983040) [libx264 @ 0x7f8ec981d800] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2 AVX [libx264 @ 0x7f8ec981d800] profile High, level 5.1 [libx264 @ 0x7f8ec981d800] 264 - core 120 - H.264/MPEG-4 AVC codec - Copyleft 2003-2011 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00 Output #0, mp4, to 'timelapse4.mp4': Metadata: encoder : Lavf53.31.100 Stream #0:0: Video: h264 (![0][0][0] / 0x0021), yuvj420p, 3888x2592 [SAR 72:72 DAR 3:2], q=-1--1, 25 tbn, 25 tbc Stream mapping: Stream #0:0 -> #0:0 (mjpeg -> libx264) Press [q] to stop, [?] 
for help frame= 172 fps= 18 q=-1.0 Lsize= 259kB time=00:00:06.80 bitrate= 312.3kbits/s video:256kB audio:0kB global headers:0kB muxing overhead 1.089647% [libx264 @ 0x7f8ec981d800] frame I:1 Avg QP: 9.60 size:212820 [libx264 @ 0x7f8ec981d800] frame P:43 Avg QP:30.50 size: 291 [libx264 @ 0x7f8ec981d800] frame B:128 Avg QP:31.00 size: 285 [libx264 @ 0x7f8ec981d800] consecutive B-frames: 0.6% 0.0% 1.7% 97.7% [libx264 @ 0x7f8ec981d800] mb I I16..4: 22.5% 77.2% 0.3% [libx264 @ 0x7f8ec981d800] mb P I16..4: 0.0% 0.0% 0.0% P16..4: 0.0% 0.0% 0.0% 0.0% 0.0% skip:100.0% [libx264 @ 0x7f8ec981d800] mb B I16..4: 0.0% 0.0% 0.0% B16..8: 0.0% 0.0% 0.0% direct: 0.0% skip:100.0% L0: 1.2% L1:98.8% BI: 0.0% [libx264 @ 0x7f8ec981d800] 8x8 transform intra:77.2% inter:100.0% [libx264 @ 0x7f8ec981d800] coded y,uvDC,uvAC intra: 41.2% 23.4% 0.6% inter: 0.0% 0.0% 0.0% [libx264 @ 0x7f8ec981d800] i16 v,h,dc,p: 40% 25% 35% 1% [libx264 @ 0x7f8ec981d800] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 36% 32% 30% 1% 0% 0% 0% 0% 0% [libx264 @ 0x7f8ec981d800] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 51% 40% 6% 1% 1% 0% 1% 0% 1% [libx264 @ 0x7f8ec981d800] i8c dc,h,v,p: 60% 21% 19% 0% [libx264 @ 0x7f8ec981d800] Weighted P-Frames: Y:0.0% UV:0.0% [libx264 @ 0x7f8ec981d800] ref P L0: 92.3% 0.0% 0.0% 7.7% [libx264 @ 0x7f8ec981d800] ref B L0: 50.0% 0.0% 50.0% [libx264 @ 0x7f8ec981d800] ref B L1: 99.4% 0.6% [libx264 @ 0x7f8ec981d800] kb/s:304.49 Output timelapse4.mp4 (beacause of spam protection I can only post two links with my reputation): http sdm-net.org/data/timelapse4.mp4
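
    Two things stand out in the log above: the 3888x2592 JPEGs blow past the H.264 level limits ("frame MB size ... > level limit"), and neither command tells FFmpeg how fast to read the image sequence. A sketch that sets an input frame rate, scales the frames down, and forces a broadly compatible pixel format — the 24 fps and 1920x1280 values are arbitrary choices, not requirements:

        ffmpeg -f image2 -r 24 -i img_%03d.jpg \
               -vcodec libx264 -s 1920x1280 -pix_fmt yuv420p \
               timelapse.mp4

    Forcing yuv420p instead of the JPEGs' yuvj420p is a common fix for players that render the output as black, and the smaller frame size keeps the encode inside the level limits the log complains about.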

    Read the article

  • Preventing 'Reply-All' to Exchange Distribution Groups

    - by Larold
    This is another question in a short series regarding a challenging Exchange project my co-workers have been asked to implement. (I'm helping even though I'm primarily a Unix guy, because I volunteered to learn PowerShell and implement as much of the project in code as I could.) Background: we have been asked to create many distribution groups, say 500+. These groups will contain two types of members (apologies if I get these terms wrong): internal AD users, and external users for whom I create Mail Contact entries. We have been asked to make it so that a "Reply All" is not possible to any message sent to these groups. I don't believe that is 100% enforceable, for the following reasons. My question is: is my reasoning sound? If not, please feel free to educate me on whether and how this can properly be implemented. Thanks! My reasoning on why it's impossible to prevent 100% of potential reply-all actions: (1) An internal AD user could put the DL in their To: field and click the '+' to expand the group. The group contains two external mail contacts. The message is sent to everyone, including those external contacts. External user #1 decides to reply-all, and his mail goes to, at the very least, external user #2, which wouldn't even involve our Exchange mail relays. (2) An internal AD user could place the DL in their Outlook To: field, click the '+' button to expand the DL, and fire off an email to everyone that was in the group, with the individual addresses listed in the To: field. Because the message now goes to multiple recipients in the To: field, the addresses have been "exposed", anyone is free to reply-all, and those replies go to everyone in the To: field. Even if we set a Reply-To: on all of these DLs, external mail clients are not obligated to honor it, or to force their users to. Are my two points above valid? (I admit they are somewhat similar.) Am I correct to tell our leadership, "It is not possible to prevent 100% of the cases where someone will want to Reply-All to these groups UNLESS we train the users sending email to these groups to use the Bcc: field at all times"? I am dying for any insight, or for the parts of the equation I'm not seeing clearly. Thank you!
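
    For what it's worth, while nothing makes Reply-All technically impossible, the delivery restrictions on each group can at least stop an outside reply from fanning back out through Exchange. A sketch in the Exchange Management Shell — the group name and sender are placeholders, and this only covers mail that actually comes back through your own servers:

        # Only authenticated (internal) senders may submit to the group, so an external
        # contact's Reply-All addressed to the group bounces at Exchange.
        Set-DistributionGroup -Identity "DL-Example-Group" -RequireSenderAuthenticationEnabled $true

        # Or restrict submission to a named set of approved senders.
        Set-DistributionGroup -Identity "DL-Example-Group" -AcceptMessagesOnlyFrom "Announcements Mailbox"

    Neither setting touches the external-to-external case in point 1, which never crosses your relays, so the Bcc: guidance for senders still stands.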

    Read the article

  • How does this main domain have a CNAME record?

    - by TRiG
    I was under the impression that only subdomains could have CNAME records: main domains need to define all their own records. However, apt-get.com seems to have only a CNAME record. How can this work? $ dig apt-get.com ; <<>> DiG 9.8.1-P1 <<>> apt-get.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 45743 ;; flags: qr rd ra; QUERY: 1, ANSWER: 9, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;apt-get.com. IN A ;; ANSWER SECTION: apt-get.com. 86336 IN CNAME thie5ku9.dsgeneration.com. thie5ku9.dsgeneration.com. 60 IN A 208.73.211.242 thie5ku9.dsgeneration.com. 60 IN A 208.73.211.246 thie5ku9.dsgeneration.com. 60 IN A 208.73.211.166 thie5ku9.dsgeneration.com. 60 IN A 208.73.211.232 thie5ku9.dsgeneration.com. 60 IN A 208.73.211.161 thie5ku9.dsgeneration.com. 60 IN A 208.73.210.233 thie5ku9.dsgeneration.com. 60 IN A 208.73.211.186 thie5ku9.dsgeneration.com. 60 IN A 208.73.211.188 ;; Query time: 59 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Tue Jun 10 15:05:48 2014 ;; MSG SIZE rcvd: 193 $ dig apt-get.com ns ; <<>> DiG 9.8.1-P1 <<>> apt-get.com ns ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 43831 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;apt-get.com. IN NS ;; Query time: 26 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Tue Jun 10 15:12:37 2014 ;; MSG SIZE rcvd: 29 $ dig apt-get.com ns @b.gtld-servers.net ; <<>> DiG 9.8.1-P1 <<>> apt-get.com ns @b.gtld-servers.net ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 38228 ;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 2, ADDITIONAL: 2 ;; WARNING: recursion requested but not available ;; QUESTION SECTION: ;apt-get.com. IN NS ;; AUTHORITY SECTION: apt-get.com. 172800 IN NS ns1.domainrecover.com. apt-get.com. 172800 IN NS ns2.domainrecover.com. ;; ADDITIONAL SECTION: ns1.domainrecover.com. 172800 IN A 66.45.232.66 ns2.domainrecover.com. 172800 IN A 65.23.159.179 ;; Query time: 70 msec ;; SERVER: 192.33.14.30#53(192.33.14.30) ;; WHEN: Tue Jun 10 15:07:05 2014 ;; MSG SIZE rcvd: 111 The domain does resolve. 
I get the following headers: GET / HTTP/1.1 User-Agent: Testing_Sniffer/4.15 Host: apt-get.com Accept: */* HTTP/1.0 200 (OK) Cache-Control: private, no-cache, must-revalidate Connection: Keep-Alive Pragma: no-cache Server: Oversee Turing v1.0.0 Content-Length: 1347 Content-Type: text/html Expires: Mon, 26 Jul 1997 05:00:00 GMT Keep-Alive: timeout=3, max=96 P3P: policyref="http://www.dsparking.com/w3c/p3p.xml", CP="NOI DSP COR ADMa OUR NOR STA" Set-Cookie: parkinglot=1; domain=.apt-get.com; path=/; expires=Wed, 11-Jun-2014 14:10:37 GMT <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Frameset//EN" "http://www.w3.org/TR/html4/frameset.dtd"> <!-- turing_cluster_prod --> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>apt-get.com</title> <meta name="keywords" content="apt-get.com" /> <meta name="description" content="apt-get.com" /> <meta name="robots" content="index, follow" /> <meta name="revisit-after" content="10" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <script type="text/javascript"> document.cookie = "jsc=1"; </script> </head> <frameset rows="100%,*" frameborder="no" border="0" framespacing="0"> <frame src="http://apt-get.com?epl=5PfLSSqWrYDAt-gbwMDK_rA3b1UJCYVTJHfxTzr9FTDQV84b6vAgVhU3FTeCRQNiuRNv79Ni0V3mkEVNRhpqo2gpMjp5iOIR1w2_EISPENaqzoXohVXl2QI3ryXlRCB4FaIIaxynnWXWY6QBgBgNiIZ6agD1NBoNGg0ajXpUCXUAIJDer78AAOB_AwAAQIDbCwAAe_NWlVlTJllBMTZoWkKPAAAA8A" name="apt-get.com"> </frameset> <noframes> <body><a href="http://apt-get.com?epl=5PfLSSqWrYDAt-gbwMDK_rA3b1UJCYVTJHfxTzr9FTDQV84b6vAgVhU3FTeCRQNiuRNv79Ni0V3mkEVNRhpqo2gpMjp5iOIR1w2_EISPENaqzoXohVXl2QI3ryXlRCB4FaIIaxynnWXWY6QBgBgNiIZ6agD1NBoNGg0ajXpUCXUAIJDer78AAOB_AwAAQIDbCwAAe_NWlVlTJllBMTZoWkKPAAAA8A">Click here to go to apt-get.com</a>.</body> </noframes> </html>
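
    Tracing the delegation and querying the listed name server directly shows whether the apex CNAME really is what the authoritative side hands out, rather than a resolver artifact:

        # Follow the delegation from the root servers down
        dig +trace apt-get.com

        # Ask the delegated name server directly
        dig @ns1.domainrecover.com apt-get.com ANY

    A CNAME at a zone apex violates RFC 1034 — the apex must also carry SOA and NS records, and a CNAME may not coexist with other data — but nothing physically prevents an authoritative server from serving one anyway; most resolvers simply chase it, which is why the parking page above still resolves and loads.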

    Read the article

  • Getting a boot error when starting the computer

    - by Rob Avery IV
    I was in the middle of watching a movie on Netflix when suddenly everything started crashing. First explorer.exe closed down, then Google Chrome. I had multiple things running in the background (Steam, Raptr, etc.), and individually each of those apps closed down as well. When they did, a small dialog box popped up for each of them, one at a time, saying that it was missing a file, couldn't run anymore, or something similar, along with some jumbled-up "code" of numbers and letters that I couldn't read. Ever since then, every time I turn the computer on it runs for a few seconds and gives this error: "Reboot and select proper boot device or insert boot media in selected boot device and press a key_". No matter how many times I reboot it, it always gives me the same error. A day after this happened I was able to start the computer, but before it booted it told me that I hadn't shut down properly and asked how I wanted to run the OS (Run Windows in Safe Mode, Run Windows Normally, etc.). Once I logged in, everything went SUPER slow and crashed almost instantly. The only thing I opened was Microsoft Security Essentials, and I only got in about two clicks before it was "Not Responding". After that the whole computer froze and I had to restart it. Now it's back to saying what it originally said: "Reboot and select proper boot device or insert boot media in selected boot device and press a key_". I built this PC back in February 2012. Here are the specs: OS: Windows 7 Ultimate; CPU: AMD 8-core; GPU: Nvidia GeForce GTX 560 Ti; RAM: 16GB; Hard drive: Hitachi Deskstar 750GB. I'm usually very good about taking care of my PC. I don't download anything that isn't from a trusted site or source, I don't open spam email, and I don't visit harmful websites like porn or movie-streaming sites. I'm very careful with what I do on this PC and don't do many different things with it. I use it pretty often, especially for video games and for doing homework in Eclipse. It's also worth noting that I don't have Norton or any other antivirus software installed; I have Microsoft Security Essentials installed but never ran a scan. Thanks!

    Read the article

  • Postfix bouncing when the destination is my domain

    - by ZeC
    I am using provider mail hosting to send emails. On my Webserver I also have Postfix running and configured. Here is my main.cf smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no append_dot_mydomain = yes readme_directory = no smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache myhostname = 2-5-8.bih.net.ba alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = bhcom.info, 2-5-8.bih.net.ba, localhost.bih.net.ba, localhost relayhost = mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_command = mailbox_size_limit = 10485760 recipient_delimiter = + inet_interfaces = 80.65.85.114 When I try sending email to my hosted domain name, every message gets bounced with this error: Nov 4 20:38:34 2-5-8 postfix/pickup[802]: 1492A3E0C6C: uid=0 from=<[email protected]> Nov 4 20:38:34 2-5-8 postfix/cleanup[988]: 1492A3E0C6C: message-id=<[email protected]> Nov 4 20:38:34 2-5-8 postfix/qmgr[803]: 1492A3E0C6C: from=<[email protected]>, size=348, nrcpt=1 (queue active) Nov 4 20:38:34 2-5-8 postfix/local[990]: 1492A3E0C6C: to=<[email protected]>, relay=local, delay=0.12, delays=0.08/0.01/0/0.04, dsn=5.1.1, status=bounced (unknown user: "info") Nov 4 20:38:34 2-5-8 postfix/cleanup[988]: 28ED53E0C6D: message-id=<[email protected]> Nov 4 20:38:34 2-5-8 postfix/qmgr[803]: 28ED53E0C6D: from=<>, size=2056, nrcpt=1 (queue active) Nov 4 20:38:34 2-5-8 postfix/bounce[991]: 1492A3E0C6C: sender non-delivery notification: 28ED53E0C6D Nov 4 20:38:34 2-5-8 postfix/qmgr[803]: 1492A3E0C6C: removed Nov 4 20:38:34 2-5-8 postfix/local[990]: 28ED53E0C6D: to=<[email protected]>, relay=local, delay=0.06, delays=0.03/0/0/0.02, dsn=5.1.1, status=bounced (unknown user: "razvoj") Nov 4 20:38:34 2-5-8 postfix/qmgr[803]: 28ED53E0C6D: removed However, when I try to @gmail.com, it sends message without problems, and here is log. What might be the issue? Nov 4 20:41:23 2-5-8 postfix/pickup[802]: B2EC63E0C6C: uid=0 from=<[email protected]> Nov 4 20:41:23 2-5-8 postfix/cleanup[1022]: B2EC63E0C6C: message-id=<[email protected]> Nov 4 20:41:23 2-5-8 postfix/qmgr[803]: B2EC63E0C6C: from=<[email protected]>, size=350, nrcpt=1 (queue active) Nov 4 20:41:23 2-5-8 postfix/smtp[1024]: connect to gmail-smtp-in.l.google.com[2a00:1450:4001:c02::1a]:25: Network is unreachable Nov 4 20:41:24 2-5-8 postfix/smtp[1024]: B2EC63E0C6C: to=<[email protected]>, relay=gmail-smtp-in.l.google.com[173.194.70.26]:25, delay=0.97, delays=0.08/0.01/0.27/0.62, dsn=2.0.0, status=sent (250 2.0.0 OK 1352058066 f7si2180442eeo.46) Nov 4 20:41:24 2-5-8 postfix/qmgr[803]: B2EC63E0C6C: removed
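
    The bounces happen because bhcom.info appears in mydestination, so Postfix treats it as a local domain and looks for Unix users named info and razvoj instead of handing the mail to the provider that hosts the mailboxes. Assuming the mailboxes really do live at the provider, a sketch of the change:

        # /etc/postfix/main.cf -- stop claiming the hosted domain as local;
        # mail for bhcom.info will then follow the domain's MX records to the provider
        mydestination = 2-5-8.bih.net.ba, localhost.bih.net.ba, localhost

        # then apply it:
        #   postfix reload

    After the reload, a test message to [email protected] should show relay=<the provider's MX> in the maillog instead of relay=local.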

    Read the article

  • Postfix issues sending mail to addresses under a domain hosted on the server

    - by iamthewit
    I recently installed virtualmin on my nice shiny new rackspace cloud. Everything went seemlessly but I've been having some issues getting emails to send properly. The problem seems to be that the server can not send mail to email addresses where the domain is owned by my server. For example, on my server I run multiple virtual domains, lets call this one test.com. When I run the mail command from shell (mail [email protected]) I get the following back from my maillog: Oct 6 14:55:18 test postfix/pickup[8737]: DC1131612CC: uid=0 from= Oct 6 14:55:18 test postfix/cleanup[8769]: DC1131612CC: [email protected] Oct 6 14:55:18 test postfix/qmgr[8738]: DC1131612CC: [email protected], size=353, nrcpt=1 (queue active) Oct 6 14:55:18 test postfix/error[8771]: DC1131612CC: [email protected], relay=none, delay=0, delays=0/0/0/0, dsn=5.0.0, status=bounced (User unknown in virtual alias table) Oct 6 14:55:18 test postfix/cleanup[8769]: DD07D1612D1: [email protected] Oct 6 14:55:18 test postfix/bounce[8772]: DC1131612CC: sender non-delivery notification: DD07D1612D1 Oct 6 14:55:18 test postfix/qmgr[8738]: DD07D1612D1: from=<, size=2268, nrcpt=1 (queue active) Oct 6 14:55:18 test postfix/qmgr[8738]: DC1131612CC: removed Oct 6 14:55:18 test postfix/local[8773]: DD07D1612D1: [email protected], relay=local, delay=0.03, delays=0/0/0/0.03, dsn=2.0.0, status=sent (delivered to command: /usr/bin/procmail-wrapper -o -a $DOMAIN -d $LOGNAME) Oct 6 14:55:18 test postfix/qmgr[8738]: DD07D1612D1: removed when I run mail [email protected] the message is sent and received perfectly fine. I'm a bit of a noob when it comes to servers, but I pick things up fairly quickly, so please excuse any incorrect terminology and my general noobiness. Any help would be greatly appreciated, I've been googling for quite a while but I haven't found a solution yet, I'll add a copy of my main.cf file in a response below cheers guys here is the reformatted postconf, do you want the reformatted main.cf file too, or is this enough? alias_database = hash:/etc/postfix/aliases alias_maps = hash:/etc/postfix/aliases broken_sasl_auth_clients = yes command_directory = /usr/sbin config_directory = /etc/postfix daemon_directory = /usr/libexec/postfix debug_peer_level = 2 home_mailbox = Maildir/ html_directory = no mailbox_command = /usr/bin/procmail-wrapper -o -a $DOMAIN -d $LOGNAME mailq_path = /usr/bin/mailq.postfix manpage_directory = /usr/share/man myhostname = server.test.com newaliases_path = /usr/bin/newaliases.postfix readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES sample_directory = /usr/share/doc/postfix-2.3.3/samples sender_bcc_maps = hash:/etc/postfix/bcc sendmail_path = /usr/sbin/sendmail.postfix setgid_group = postdrop smtpd_recipient_restrictions = permit_mynetworks permit_sasl_authenticated reject_unauth_destination smtpd_sasl_auth_enable = yes smtpd_sasl_security_options = noanonymous unknown_local_recipient_reject_code = 550 virtual_alias_maps = hash:/etc/postfix/virtual
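
    The "User unknown in virtual alias table" status means the recipient address has no entry in /etc/postfix/virtual, which virtual_alias_maps points at. A sketch of what the missing pieces look like — the addresses and mailbox user are placeholders, since the real ones are redacted above, and under Virtualmin it is usually cleaner to add the alias through the panel so this file is regenerated for you:

        # /etc/postfix/virtual -- map the public address to the local mailbox user
        someone@test.com    someone-test

        # rebuild the hash map and pick up the change
        #   postmap /etc/postfix/virtual
        #   postfix reload

    Once the lookup succeeds, the maillog line for that address should switch from status=bounced (User unknown in virtual alias table) to a normal local or procmail delivery.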

    Read the article

  • Nginx + PHP-FPM on CentOS 6.5 gives me 502 Bad Gateway (FPM error: unable to read what child say: Bad file descriptor)

    - by Latheesan Kanes
    I am setting up a standard LEMP stack. My current setup is giving me the following error: 502 Bad Gateway This is what is currently installed on my server: Here's the configurations I've created/updated so far, can some one take a look at the following and see where the error might be? I've already checked my logs, there's nothing in there (http://i.imgur.com/iRq3ksb.png). And I saw the following in /var/log/php-fpm/error.log file. sidenote: both the nginx and php-fpm has been configured to run under a local account called www-data and the following folders exits on the server nginx.conf global nginx configuration user www-data; worker_processes 6; worker_rlimit_nofile 100000; error_log /var/log/nginx/error.log crit; pid /var/run/nginx.pid; events { worker_connections 2048; use epoll; multi_accept on; } http { include /etc/nginx/mime.types; default_type application/octet-stream; # cache informations about FDs, frequently accessed files can boost performance open_file_cache max=200000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on; # to boost IO on HDD we can disable access logs access_log off; # copies data between one FD and other from within the kernel # faster then read() + write() sendfile on; # send headers in one peace, its better then sending them one by one tcp_nopush on; # don't buffer data sent, good for small data bursts in real time tcp_nodelay on; # server will close connection after this time keepalive_timeout 60; # number of requests client can make over keep-alive -- for testing keepalive_requests 100000; # allow the server to close connection on non responding client, this will free up memory reset_timedout_connection on; # request timed out -- default 60 client_body_timeout 60; # if client stop responding, free up memory -- default 60 send_timeout 60; # reduce the data that needs to be sent over network gzip on; gzip_min_length 10240; gzip_proxied expired no-cache no-store private auth; gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml; gzip_disable "MSIE [1-6]\."; # Load vHosts include /etc/nginx/conf.d/*.conf; } conf.d/www.domain.com.conf my vhost entry ## Nginx php-fpm Upstream upstream wwwdomaincom { server unix:/var/run/php-fcgi-www-data.sock; } ## Global Config client_max_body_size 10M; server_names_hash_bucket_size 64; ## Web Server Config server { ## Server Info listen 80; server_name domain.com *.domain.com; root /home/www-data/public_html; index index.html index.php; ## Error log error_log /home/www-data/logs/nginx-errors.log; ## DocumentRoot setup location / { try_files $uri $uri/ @handler; expires 30d; } ## These locations would be hidden by .htaccess normally #location /app/ { deny all; } ## Disable .htaccess and other hidden files location /. 
{ return 404; } ## Magento uses a common front handler location @handler { rewrite / /index.php; } ## Forward paths like /js/index.php/x.js to relevant handler location ~ .php/ { rewrite ^(.*.php)/ $1 last; } ## Execute PHP scripts location ~ \.php$ { try_files $uri =404; expires off; fastcgi_read_timeout 900; fastcgi_pass wwwdomaincom; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } ## GZip Compression gzip on; gzip_comp_level 8; gzip_min_length 1000; gzip_proxied any; gzip_types text/plain application/xml text/css text/js application/x-javascript; } /etc/php-fpm.d/www-data.conf my php-fpm pool config ## Nginx php-fpm Upstream upstream wwwdomaincom { server unix:/var/run/php-fcgi-www-data.sock; } ## Global Config client_max_body_size 10M; server_names_hash_bucket_size 64; ## Web Server Config server { ## Server Info listen 80; server_name domain.com *.domain.com; root /home/www-data/public_html; index index.html index.php; ## Error log error_log /home/www-data/logs/nginx-errors.log; ## DocumentRoot setup location / { try_files $uri $uri/ @handler; expires 30d; } ## These locations would be hidden by .htaccess normally #location /app/ { deny all; } ## Disable .htaccess and other hidden files location /. { return 404; } ## Magento uses a common front handler location @handler { rewrite / /index.php; } ## Forward paths like /js/index.php/x.js to relevant handler location ~ .php/ { rewrite ^(.*.php)/ $1 last; } ## Execute PHP scripts location ~ \.php$ { try_files $uri =404; expires off; fastcgi_read_timeout 900; fastcgi_pass wwwdomaincom; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } ## GZip Compression gzip on; gzip_comp_level 8; gzip_min_length 1000; gzip_proxied any; gzip_types text/plain application/xml text/css text/js application/x-javascript; } I've got a file in /home/www-data/public_html/index.php with the code <?php phpinfo(); ?> (file uploaded as user www-data).
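    One thing that stands out above is that the file pasted as /etc/php-fpm.d/www-data.conf is a second copy of the nginx vhost rather than a PHP-FPM pool; a pool file is INI-style and, for a unix-socket upstream like the one defined here, has to create the very socket nginx points at. A minimal sketch of what such a pool might look like, assuming the socket path and the www-data account from the post (the pm.* values are only examples):

        ; /etc/php-fpm.d/www-data.conf
        [www-data]
        user  = www-data
        group = www-data

        ; create the same unix socket the nginx upstream "wwwdomaincom" points at
        listen       = /var/run/php-fcgi-www-data.sock
        listen.owner = www-data
        listen.group = www-data
        listen.mode  = 0660

        pm = dynamic
        pm.max_children      = 10
        pm.start_servers     = 2
        pm.min_spare_servers = 2
        pm.max_spare_servers = 4

    After saving it, a php-fpm restart plus a check that the socket file appears and is readable by the nginx worker user usually clears the 502.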

    Read the article

  • One user sometimes gets an unknown certificate error opening Outlook

    - by Chris
    Let me clarify a little. This isn't an unknown certificate error as such; it's unknown only insofar as I can't figure out where the certificate comes from. This happens on a Win 7 Enterprise machine connecting to Exchange 2010 with Outlook 2010. The error he gets is that the root is not trusted because it's a self-signed cert. Take a look at this screenshot, because even if I had generated this myself I wouldn't have put "SomeOrganizationalUnit" or "SomeCity" or "SomeState", etc. (Red block covers our domain name.) I'm a little concerned this is a symptom of a security breach. Exchange 2010 has three certificates installed, but none of them is this certificate. They all have different expiration dates (one is expired) and different metadata. edit: There are two scenarios in which I see the certificate warning, and one of them I can reliably repeat. When the user leaves his computer on overnight, Outlook pops the Security Warning window. I don't know what time this happens. Using Outlook Anywhere, if I connect to Exchange externally via a cellular USB modem the Security Warning window will appear every time I close and reopen Outlook. Whether I say Yes or No makes no difference to whether I can connect to Exchange and send/receive email. In other words, I can always connect to Exchange. I've checked my two Exchange servers and my Cisco router for a certificate that matches this one and I can't find it. edit 2: Here is a screenshot of the Security Alert window. (I've been calling it Security Warning... My mistake.) edit 3: I stopped seeing this error several weeks ago. I can't tie it to any single event (I just sort of realized the warning had stopped showing up), but I think I found the source of the certificate. Last week I found out that the certificate on our website DomainA.com was invalid. I knew that our web admin had installed a valid certificate, so when I looked into the problem I found I was being presented with the invalid certificate this posting is about. The Exchange server's domain is mail.DomainA.com, so I can only guess that Outlook was passing this invalid certificate through as it did some kind of check on DomainA.com. This issue is still a mystery, because the certificate warning stopped appearing several weeks ago whereas the invalid certificate issue on the website was only fixed last week. It ended up being a problem with the website control panel. The valid certificate was installed but was not being served for some reason; the self-signed cert was being served instead.
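    One way to rule the Exchange servers in or out as the source is to list every certificate they can serve and compare thumbprints against the one shown in the Security Alert window. A hedged sketch from the Exchange Management Shell (standard Exchange 2010 cmdlet and property names):

        # Run on each CAS/Mailbox server; compare Thumbprint/Subject with the alert
        Get-ExchangeCertificate | Format-List Thumbprint, Subject, Issuer, Services, NotAfter, IsSelfSigned

    Checking what is actually handed out on port 443 of mail.DomainA.com from the affected workstation (for example with openssl s_client, or by browsing to the OWA URL and viewing the certificate) would show whether the self-signed certificate comes from the web site rather than from Exchange itself, which matches the theory in edit 3.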

    Read the article

  • Screen Casting using ffmpeg (too fast)

    - by rowman
    I can use ffmpeg to make screen casts: ffmpeg -f x11grab -s 1280x800 -i :0.0 -c:v libx264 -framerate 30 -r 30 -crf 18 out.mkv However the output comes out to be too fast paced. It also happens with GTK RecordMyDesktop if I enable the encode on the fly. So, the questions is how to get a normal video pace. Also in order to capture the sound with ffmpeg what option should be used? FFmpeg Output: ffmpeg -f x11grab -s 1280x800 -r 30 -i :0.0 -c:v libx264 -framerate 30 -r 30 -crf 18 out.mkv ffmpeg version N-35162-g87244c8 Copyright (c) 2000-2012 the FFmpeg developers built on Oct 7 2012 15:56:19 with gcc 4.6 (Ubuntu/Linaro 4.6.3-1ubuntu5) configuration: --enable-gpl --enable-libfaac --enable-libfdk-aac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librtmp --enable-libtheora --enable-libvorbis --enable-libvpx --enable-x11grab --enable-libx264 --enable-nonfree --enable-version3 libavutil 51. 73.102 / 51. 73.102 libavcodec 54. 64.100 / 54. 64.100 libavformat 54. 29.105 / 54. 29.105 libavdevice 54. 3.100 / 54. 3.100 libavfilter 3. 19.102 / 3. 19.102 libswscale 2. 1.101 / 2. 1.101 libswresample 0. 16.100 / 0. 16.100 libpostproc 52. 1.100 / 52. 1.100 [x11grab @ 0xab896a0] device: :0.0 -> display: :0.0 x: 0 y: 0 width: 1280 height: 800 [x11grab @ 0xab896a0] shared memory extension found [x11grab @ 0xab896a0] Estimating duration from bitrate, this may be inaccurate Input #0, x11grab, from ':0.0': Duration: N/A, start: 1350136942.608988, bitrate: 983040 kb/s Stream #0:0: Video: rawvideo (BGR[0] / 0x524742), bgr0, 1280x800, 983040 kb/s, 30 tbr, 1000k tbn, 30 tbc [libx264 @ 0xab87320] using cpu capabilities: MMX2 SSE2Fast SSSE3 Cache64 SlowCTZ SlowAtom [libx264 @ 0xab87320] profile High 4:4:4 Predictive, level 3.2, 4:4:4 8-bit [libx264 @ 0xab87320] 264 - core 128 r2 198a7ea - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=4 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=18.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00 Output #0, matroska, to 'out.mkv': Metadata: encoder : Lavf54.29.105 Stream #0:0: Video: h264, yuv444p, 1280x800, q=-1--1, 1k tbn, 30 tbc Stream mapping: Stream #0:0 -> #0:0 (rawvideo -> libx264) Press [q] to stop, [?] 
for help frame= 10 fps=0.0 q=0.0 size= 1kB time=00:00:00.00 bitrate= 0.0kbits/sframe= 19 fps= 17 q=0.0 size= 1kB time=00:00:00.00 bitrate= 0.0kbits/sframe= 28 fps= 17 q=0.0 size= 1kB time=00:00:00.00 bitrate= 0.0kbits/sframe= 37 fps= 17 q=0.0 size= 1kB time=00:00:00.00 bitrate= 0.0kbits/sframe= 45 fps= 16 q=0.0 size= 1kB time=00:00:00.00 bitrate= 0.0kbits/sframe= 47 fps= 14 q=0.0 size= 1kB time=00:00:00.00 bitrate= 0.0kbits/sframe= 52 fps= 13 q=24.0 size= 257kB time=00:00:00.00 bitrate=2101632.0kbiframe= 55 fps= 12 q=24.0 size= 257kB time=00:00:00.10 bitrate=20808.2kbitsframe= 59 fps= 11 q=24.0 size= 289kB time=00:00:00.23 bitrate=10145.0kbitsframe= 64 fps= 11 q=24.0 size= 289kB time=00:00:00.40 bitrate=5894.7kbits/frame= 70 fps= 11 q=24.0 size= 289kB time=00:00:00.60 bitrate=3933.1kbits/frame= 72 fps= 10 q=24.0 size= 289kB time=00:00:00.66 bitrate=3549.2kbits/frame= 77 fps=9.8 q=24.0 size= 289kB time=00:00:00.83 bitrate=2837.7kbits/frame= 80 fps=9.6 q=24.0 size= 289kB time=00:00:00.93 bitrate=2533.5kbits/frame= 85 fps=9.3 q=24.0 size= 289kB time=00:00:01.10 bitrate=2146.9kbits/frame= 89 fps=9.3 q=24.0 size= 289kB time=00:00:01.23 bitrate=1917.1kbits/frame= 92 fps=9.1 q=24.0 size= 289kB time=00:00:01.33 bitrate=1773.3kbits/frame= 96 fps=9.0 q=24.0 size= 289kB time=00:00:01.46 bitrate=1612.4kbits/frame= 99 fps=8.8 q=24.0 size= 321kB time=00:00:01.56 bitrate=1676.8kbits/frame= 104 fps=8.7 q=24.0 size= 321kB time=00:00:01.73 bitrate=1515.2kbits/frame= 109 fps=5.3 q=24.0 Lsize= 1093kB time=00:00:03.56 bitrate=2511.5kbits/s video:1092kB audio:0kB subtitle:0 global headers:0kB muxing overhead 0.120198% [libx264 @ 0xab87320] frame I:3 Avg QP:18.93 size:142610 [libx264 @ 0xab87320] frame P:43 Avg QP:20.79 size: 15751 [libx264 @ 0xab87320] frame B:63 Avg QP:23.75 size: 195 [libx264 @ 0xab87320] consecutive B-frames: 21.1% 1.8% 11.0% 66.1% [libx264 @ 0xab87320] mb I I16..4: 50.0% 21.1% 28.9% [libx264 @ 0xab87320] mb P I16..4: 6.1% 0.9% 3.2% P16..4: 5.5% 1.2% 0.6% 0.0% 0.0% skip:82.5% [libx264 @ 0xab87320] mb B I16..4: 0.4% 0.1% 0.0% B16..8: 2.9% 0.1% 0.0% direct: 0.0% skip:96.5% L0:40.7% L1:57.0% BI: 2.3% [libx264 @ 0xab87320] 8x8 transform intra:14.5% inter:46.1% [libx264 @ 0xab87320] coded y,u,v intra: 33.5% 24.1% 25.4% inter: 0.9% 0.4% 0.4% [libx264 @ 0xab87320] i16 v,h,dc,p: 70% 26% 1% 3% [libx264 @ 0xab87320] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 11% 21% 30% 5% 7% 5% 7% 4% 10% [libx264 @ 0xab87320] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 32% 35% 12% 2% 4% 3% 4% 3% 5% [libx264 @ 0xab87320] Weighted P-Frames: Y:0.0% UV:0.0% [libx264 @ 0xab87320] ref P L0: 57.0% 5.6% 26.8% 10.6% [libx264 @ 0xab87320] ref B L0: 69.4% 22.6% 8.0% [libx264 @ 0xab87320] ref B L1: 93.7% 6.3% [libx264 @ 0xab87320] kb/s:2460.40
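    The usual cause of the sped-up output is that -r/-framerate is only applied as an output option, so x11grab captures at its default rate and the encoder then stamps those frames at 30 fps. Putting the frame rate in front of -i makes it an input option, and a second -f/-i pair adds sound. A hedged sketch; the pulse input assumes this ffmpeg build has PulseAudio support (the configure line above does not show it), otherwise -f alsa -i default is the usual substitute:

        # -framerate before -i applies to the x11grab input itself
        ffmpeg -f x11grab -framerate 30 -s 1280x800 -i :0.0 \
               -f pulse -i default \
               -c:v libx264 -crf 18 -preset veryfast \
               -c:a libvorbis \
               out.mkv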

    Read the article

  • Nginx & Apache Cannot get try_files to work with permalinks

    - by tcherokee
    I have been working on this for the past two weeks not and for some reason I cannot seem to get nginx's try_files to work with my wordpress permalinks. I am hoping someone will be able to tell me where I am going wrong and also hopefully tell me if I made any major errors with my configurations as well (I am an nginx newbie... but learning :) ). Here are my Configuration files nginx.conf user www-data; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## # Defines the cache log format, cache log location # and the main access log location. log_format cache '***$time_local ' '$upstream_cache_status ' 'Cache-Control: $upstream_http_cache_control ' 'Expires: $upstream_http_expires ' '$host ' '"$request" ($status) ' '"$http_user_agent" ' ; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } mydomain.com.conf server { listen 123.456.78.901:80; # IP goes here. server_name www.mydomain.com mydomain.com; #root /var/www/mydomain.com/prod; index index.php; ## mydomain.com -> www.mydomain.com (301 - Permanent) if ($host !~* ^(www|dev)) { rewrite ^/(.*)$ $scheme://www.$host/$1 permanent; } # Add trailing slash to */wp-admin requests. rewrite /wp-admin$ $scheme://$host$uri/ permanent; # All media (including uploaded) is under wp-content/ so # instead of caching the response from apache, we're just # going to use nginx to serve directly from there. location ~* ^/(wp-content|wp-includes)/(.*)\.(jpg|png|gif|jpeg|css|js|m$ root /var/www/mydomain.com/prod; } # Don't cache these pages. location ~* ^/(wp-admin|wp-login.php) { proxy_pass http://backend; } location / { if ($http_cookie ~* "wordpress_logged_in_[^=]*=([^%]+)%7C") { set $do_not_cache 1; } proxy_cache_key "$scheme://$host$request_uri $do_not_cache"; proxy_cache main; proxy_pass http://backend; proxy_cache_valid 30m; # 200, 301 and 302 will be cached. # Fallback to stale cache on certain errors. # 503 is deliberately missing, if we're down for maintenance # we want the page to display. #try_files $uri $uri/ /index.php?q=$uri$args; #try_files $uri =404; proxy_cache_use_stale error timeout invalid_header http_500 http_502 http_504 http_404; } # Cache purge URL - works in tandem with WP plugin. # location ~ /purge(/.*) { # proxy_cache_purge main "$scheme://$host$1"; # } # No access to .htaccess files. location ~ /\.ht { deny all; } } # End server gzip.conf # Gzip Configuration. 
gzip on; gzip_disable msie6; gzip_static on; gzip_comp_level 4; gzip_proxied any; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; proxy.conf # Set proxy headers for the passthrough proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_max_temp_file_size 0; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; add_header X-Cache-Status $upstream_cache_status; backend.conf upstream backend { # Defines backends. # Extracting here makes it easier to load balance # in the future. Needs to be specific IP as Plesk # doesn't have Apache listening on localhost. ip_hash; server 127.0.0.1:8001; # IP goes here. } cache.conf # Proxy cache and temp configuration. proxy_cache_path /var/www/nginx_cache levels=1:2 keys_zone=main:10m max_size=1g inactive=30m; proxy_temp_path /var/www/nginx_temp; proxy_cache_key "$scheme://$host$request_uri"; proxy_redirect off; # Cache different return codes for different lengths of time # We cached normal pages for 10 minutes proxy_cache_valid 200 302 10m; proxy_cache_valid 404 1m; The two commented out try_files in location \ of the mydomain config files are the ones I tried. This error I found in the error log can be found below. ...rewrite or internal redirection cycle while internally redirecting to "/index.php" Thanks in advance
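    Both commented-out try_files lines fall back to a URI that nginx then tries to serve from a server block whose root directive is commented out, which is the classic way to end up with the "rewrite or internal redirection cycle ... /index.php" error. For a proxying setup like this one, a common pattern (sketched below, not drop-in: the WordPress login-cookie logic and the $do_not_cache cache key from the existing location / would still need to move with the proxy lines) is to let try_files serve real files from disk and hand everything else to a named location that proxies to Apache:

        location / {
            # root is required here, otherwise $uri can never match a file on disk
            root /var/www/mydomain.com/prod;
            try_files $uri $uri/ @proxy;
        }

        location @proxy {
            # same caching proxy settings as the original location /
            proxy_cache_key "$scheme://$host$request_uri";
            proxy_cache main;
            proxy_cache_valid 30m;
            proxy_cache_use_stale error timeout invalid_header http_500 http_502 http_504 http_404;
            proxy_pass http://backend;
        }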

    Read the article

  • ntpdate works, but ntpd can't synchronize

    - by dafydd
    This is in RHEL 5.5. First, ntpdate to the remote host works: $ ntpdate XXX.YYY.4.21 24 Oct 16:01:17 ntpdate[5276]: adjust time server XXX.YYY.4.21 offset 0.027291 sec Second, here are the server lines in my /etc/ntp.conf. All restrict lines have been commented out for troubleshooting. server 127.127.1.0 server XXX.YYY.4.21 I execute service ntpd start and check with ntpq: $ ntpq ntpq> peer remote refid st t when poll reach delay offset jitter ============================================================================== *LOCAL(0) .LOCL. 5 l 36 64 377 0.000 0.000 0.001 timeserver.doma .LOCL. 1 u 39 128 377 0.489 51.261 58.975 ntpq> opeer remote local st t when poll reach delay offset disp ============================================================================== *LOCAL(0) 127.0.0.1 5 l 40 64 377 0.000 0.000 0.001 timeserver.doma XXX.YYY.22.169 1 u 43 128 377 0.489 51.261 58.975 XXX.YYY.22.169 is the address of the host I'm working on. A reverse lookup on the IP address in my ntp.conf file validates that the ntpq output is correctly naming the remote server. However, as you can see, it appears to just roll over to my .LOCL. time server. Also, ntptrace just returns the local time server, and ntptrace XXX.YYY.4.21 times out. $ ntptrace localhost.localdomain: stratum 6, offset 0.000000, synch distance 0.948181 $ ntptrace XXX.YYY.4.21 XXX.YYY.4.21: timed out, nothing received ***Request timed out This looks like my ntp daemon is just querying itself. I am thinking about the possibility that the router-I-don't-control between my test network timeserver and the corporate network timeserver is blocking on source port. (I think ntpdate sends on port 123, which gets it around that filter and is why I can't use it while ntpd is running.) I have email in to the network folks to check that. Finally, telnet XXX.YYY.4.21 123 never times out or completes a connection. The questions: What am I missing, here? What else can I check to try to figure out where this connection is failing? Would strace ntptrace XXX.YYY.4.21 show me the source port ntptrace is sending from? I can deconstruct most strace calls, but I can't figure out the location of that datum. If I can't directly examine the gateway router between my test network and the timeserver, how might I build evidence that it's responsible for these disconnections? Alternately, how might I rule it out?
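    Two checks that can be run from the RHEL box itself, without touching the router, are watching whether replies ever come back while ntpd is polling and confirming which source port ntpd actually uses (which answers the strace question more directly). A sketch, assuming tcpdump and net-tools are installed and eth0 is the outbound interface:

        # Watch queries leave and (hopefully) replies return while ntpd runs;
        # each line shows the source port the query went out on
        tcpdump -ni eth0 udp port 123 and host XXX.YYY.4.21

        # ntpd binds its sockets up front; this lists them without needing strace
        netstat -anup | grep ntpd

        # Once packets flow, iburst in /etc/ntp.conf speeds up the first sync:
        #   server XXX.YYY.4.21 iburst

    Comparing a capture taken during a successful ntpdate run with one taken while ntpd is polling should show whether the difference is the source port or something else on the path, which is exactly the evidence needed for the router question.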

    Read the article

  • Sharing files between multiple sites using only desktop software

    - by perlyking
    Our organisation has three sites: a head office, where the master copies of company files are stored, plus two branch offices using only workstations and a NAS or two. Currently we're talking about <10GB. At the main office, we have no admin access to the file server, as this is entirely controlled by the larger institution where we are located. For the same reason, we have no VPN remote access to this network. Instead, we simply have access to a network share over a Novell LAN. Question: how can we share files between offices in a way that minimises latency, i.e. that gives us a mirror of the main network share at each site? (There is little likelihood of concurrent editing, and we can live with the odd file conflict now and again.) Up to now branch office staff have had to use GotoMyPC-type solutions to remotely access files held at the main office. Or email. I was hoping to use Google Drive on a dedicated workstation at each office to sync the contents of the network share (head office) or NAS (branch offices) via the cloud, but at my last attempt (29 Jun '12), the Google Drive installer would not allow me to designate the remote network share as the "target" folder. (I chose Google Drive over Dropbox et al. as we already use Gmail for corporate mail.) The next idea was to use a designated workstation at head office to mirror the network share to a local drive, then use Google Drive to push that to the cloud. This seems a step too far. Nor do I have any good ideas about how to achieve this network/local mirroring, as we can't, for example, install the rsync daemon on the server. I do not want to use Google Drive locally on each workstation, as this will inconvenience users and, more importantly, move files off the backed-up, well-maintained (UPS, RAID, etc.) network share at head office. Our budget is only in the £100s. Should we perhaps just ditch the head office server and use something like JungleDisk? At least this presents the user with what appears to be a mapped drive.
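    For the mirroring half of the head-office idea, the stock Windows tools are enough if a designated workstation can see the Novell share: robocopy mirrors the share to a local folder on a schedule, and the sync client (Google Drive or otherwise) only ever watches that local folder. A sketch with made-up paths; note that /MIR makes the local copy strictly one-way, so edits made at the branch offices would need a separate arrangement:

        rem \\novellserver\companyfiles and C:\SyncedFiles are placeholder paths
        robocopy \\novellserver\companyfiles C:\SyncedFiles /MIR /Z /R:2 /W:5 /LOG:C:\mirror.log

        rem Repeat it automatically, e.g. every 15 minutes:
        rem   schtasks /Create /SC MINUTE /MO 15 /TN "MirrorShare" /TR "C:\mirror.cmd"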

    Read the article

  • New-ManagedContentSettings - not working properly under Exchange 2010

    - by mfinni
    I have a client that is divesting a business unit into a new AD forest, Exchange org, etc. We're using Quest tools to migrate users and mailboxes. However, I have to build the new infrastructure to match the old one. In the old one, we're using Managed Folder Mailbox Policies to limit (or allow) retention. They started with Exchange 2007 and never upgraded to Retention Policies; oh well. So, in the old environment, when you use a 2007 server to define a new Managed Content Setting, you can pick "Email" from the dropdown for MessageClass. This is a display name; the actual MessageClass values are thus: MessageClass : IPM.Note;IPM.Note.AS/400 Move Notification Form v1.0;IPM.Note.Delayed;IPM.Note.Exchange.ActiveSync.Report;IPM.Note.JournalReport.Msg;IPM.Note.JournalReport.Tnef;IPM.Note.Microsoft.Missed.Voice;IPM.Note.Rules.OofTemplate.Microsoft;IPM.Note.Rules.ReplyTemplate.Microsoft;IPM.Note.Secure.Sign;IPM.Note.SMIME;IPM.Note.SMIME.MultipartSigned;IPM.Note.StorageQuotaWarning;IPM.Note.StorageQuotaWarning.Warning;IPM.Notification.Meeting.Forward;IPM.Outlook.Recall;IPM.Recall.Report.Success;IPM.Schedule.Meeting.*;REPORT.IPM.Note.NDR If I take that and try to mangle it into a new cmdlet for Ex2010 in my new environment here's what I get New-ManagedContentSettings -Name "Delete Messages older then 90 days" -FolderName "Entire Mailbox" -RetentionEnabled $True -AgeLimitForRetention 90 -TriggerForRetention WhenDelivered -RetentionAction DeleteAndAllowRecovery -MessageClass "IPM.Note","IPM.Note.AS/400MoveNotificationFormv1.0","IPM.Note.Delayed","IPM.Note.Exchange.ActiveSync.Report","IPM.Note.JournalReport.Msg","IPM.Note.JournalReport.Tnef","IPM.Note.Microsoft.Missed.Voice","IPM.Note.Rules.OofTemplate.Microsoft","IPM.Note.Rules.ReplyTemplate.Microsoft","IPM.Note.Secure.Sign","IPM.Note.SMIME","IPM.Note.SMIME.MultipartSigned","IPM.Note.StorageQuotaWarning","IPM.Note.StorageQuotaWarning.Warning","IPM.Notification.Meeting.Forward","IPM.Outlook.Recall","IPM.Recall.Report.Success","IPM.Schedule.Meeting.*","REPORT.IPM.Note.NDR" -whatif Invoke-Command : Cannot bind parameter 'MessageClass' to the target. Exception setting "MessageClass": "The length of t he property is too long. The maximum length is 255 and the length of the value provided is 518." At C:\Users\MFinnigan.sa\AppData\Roaming\Microsoft\Exchange\RemotePowerShell\pfexcas02.fve.ad.5ssl.com\pfexcas02.fve.ad .5ssl.com.psm1:28204 char:29 + $scriptCmd = { & <<<< $script:InvokeCommand ` + CategoryInfo : WriteError: (:) [New-ManagedContentSettings], ParameterBindingException + FullyQualifiedErrorId : ParameterBindingFailed,Microsoft.Exchange.Management.SystemConfigurationTasks.NewManaged ContentSettings So, the config object can store all that mess, but I can't fit it in through the cmdlet to create the object. Lovely. Any ideas?
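    The binder error is about the 255-character cap on the joined MessageClass value, so one workaround (a sketch only, under the assumption that a managed folder will accept several content settings with disjoint class lists, which is worth confirming in a lab first) is to split the 2007 list into batches that each stay under the cap and create one content setting per batch:

        $classes = @(
            "IPM.Note", "IPM.Note.AS/400 Move Notification Form v1.0", "IPM.Note.Delayed",
            "IPM.Note.Exchange.ActiveSync.Report", "IPM.Note.JournalReport.Msg",
            "IPM.Note.JournalReport.Tnef", "IPM.Note.Microsoft.Missed.Voice",
            "IPM.Note.Rules.OofTemplate.Microsoft", "IPM.Note.Rules.ReplyTemplate.Microsoft",
            "IPM.Note.Secure.Sign", "IPM.Note.SMIME", "IPM.Note.SMIME.MultipartSigned",
            "IPM.Note.StorageQuotaWarning", "IPM.Note.StorageQuotaWarning.Warning",
            "IPM.Notification.Meeting.Forward", "IPM.Outlook.Recall", "IPM.Recall.Report.Success",
            "IPM.Schedule.Meeting.*", "REPORT.IPM.Note.NDR"
        )

        # Group the classes so each semicolon-joined batch stays under 255 characters
        $batches = @(); $current = @()
        foreach ($c in $classes) {
            if ((($current + $c) -join ";").Length -gt 255) { $batches += ,$current; $current = @() }
            $current += $c
        }
        if ($current.Count -gt 0) { $batches += ,$current }

        $i = 1
        foreach ($b in $batches) {
            New-ManagedContentSettings -Name "Delete messages older than 90 days ($i of $($batches.Count))" `
                -FolderName "Entire Mailbox" -RetentionEnabled $true -AgeLimitForRetention 90 `
                -TriggerForRetention WhenDelivered -RetentionAction DeleteAndAllowRecovery `
                -MessageClass $b -WhatIf
            $i++
        }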

    Read the article

  • Apache VirtualHost Blockhole (Eats All Requests on All Ports on an IP)

    - by Synetech inc.
    I’m exhausted. I just spent the last two hours chasing a goose that I have been after on-and-off for the past year. Here is the goal, put as succinctly as possible. Step 1: HOSTS File: 127.0.0.5 NastyAdServer.com 127.0.0.5 xssServer.com 127.0.0.5 SQLInjector.com 127.0.0.5 PornAds.com 127.0.0.5 OtherBadSites.com … Step 2: Apache httpd.conf <VirtualHost 127.0.0.5:80> ServerName adkiller DocumentRoot adkiller RewriteEngine On RewriteRule (\.(gif|jpg|png|jpeg)$) /p.png [L] RewriteRule (.*) /ad.htm [L] </VirtualHost> So basically what happens is that the HOSTS file redirects designated domains to the localhost, but to a specific loopback IP address. Apache listens for any requests on this address and serves either a transparent pixel graphic or an empty HTML file. Thus, any page or graphic on any of the bad sites is replaced with nothing (in other words, an ad/malware/porn/etc. blocker). This works great as is (and has been for me for years now). The problem is that these bad things are no longer limited to just HTTP traffic. For example: <script src="http://NastyAdServer.com:99"> or <iframe src="https://PornAds.com/ad.html"> or a Trojan using ftp://spammaster.com/[email protected];[email protected];[email protected] or an app “phoning home” with private info in a crafted ICMP packet by pinging CardStealer.ru:99. Handling HTTPS is a relatively minor bump. I can create a separate VirtualHost just like the one above, replacing port 80 with 443 and adding in SSL directives. This leaves the other ports to be dealt with. I tried using * for the port, but then I get overlap errors. I tried redirecting all requests to the HTTPS server and vice versa, but neither worked; either the SSL requests wouldn’t redirect correctly, or else the HTTP requests gave the You’re speaking plain HTTP to an SSL-enabled server port… error. Further, I cannot figure out a way to test whether other ports are being successfully redirected (I could try using a browser, but what about FTP, ICMP, etc.?). I realize that I could just use a port-blocker (e.g. ProtoWall, PeerBlock, etc.), but there are two issues with that. First, I am blocking domains with this method, not IP addresses, so to use a port-blocker I would have to get each and every domain’s IP and update them frequently. Second, using this method, I can have Apache keep logs of all the ad/malware/spam/etc. requests for future analysis (my current AdKiller logs are already 466 MB). I appreciate any help in successfully setting up an Apache VirtualHost blackhole. Thanks.
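    For the HTTP side, Apache can absorb the odd ports too, as long as it is told to Listen on each of them; nothing will ever make Apache answer FTP or ICMP, so that traffic either needs an OS-level firewall rule or simply gets a refused connection. A sketch built from the existing vhost, using port 99 as an example extra port and a hypothetical self-signed pair for 443 (the certificate paths are placeholders, and any Listen line for a port Apache already listens on should be skipped):

        # Extra ad-server port seen in the wild; repeat for each port you care about
        Listen 127.0.0.5:99
        <VirtualHost 127.0.0.5:99>
            ServerName adkiller
            DocumentRoot adkiller
            RewriteEngine On
            RewriteRule (\.(gif|jpg|png|jpeg)$) /p.png [L]
            RewriteRule (.*) /ad.htm [L]
        </VirtualHost>

        # HTTPS blackhole; browsers will still warn about the self-signed cert,
        # but the request terminates here instead of reaching the ad server
        Listen 127.0.0.5:443
        <VirtualHost 127.0.0.5:443>
            ServerName adkiller
            DocumentRoot adkiller
            SSLEngine on
            SSLCertificateFile    conf/ssl/adkiller.crt
            SSLCertificateKeyFile conf/ssl/adkiller.key
            RewriteEngine On
            RewriteRule (\.(gif|jpg|png|jpeg)$) /p.png [L]
            RewriteRule (.*) /ad.htm [L]
        </VirtualHost>

    Testing ports other than 80/443 does not need a browser: telnet 127.0.0.5 99 (or any raw TCP client) followed by a hand-typed GET / HTTP/1.0 shows whether the blackhole vhost answered, and the access log records the hit.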

    Read the article

  • Repair corrupt hard disk on Mac without install CD

    - by Sarah
    The hard disk of my late 2009 MacBook Pro appears to have become corrupted. I am traveling and do not have my install CD (and won't for several weeks, nor will I be anywhere near an Apple store). The hard disk is not the original, which failed in June 2011. It's some Hitachi replacement installed by IT. History: I was typing an email this afternoon, my computer suddenly started making soft clicking sounds and then froze. I was not moving around. I rebooted, which took a while. I heard more clicking sounds and the computer froze at least once again. It's now kind of working, with mdworker sucking up one CPU. There are no awkward hard drive sounds when I run Chrome or play music. However, when I launched Stickies, I found no trace of my saved Stickies. I ran a live disk verification from within Disk Utility, and it reported Problem: As reported, I don't have access to an installation disc and am nowhere near an area where I can get one for at least two weeks. I have the option of asking someone to go to some trouble and expense to get one for me, but I'm not sure it's worth it: I've read that I can use fsck from single-user mode to repair the disk. Should I just try this? Is it risky? I'm concerned that the clicky sound portends imminent (mechanical) hard drive failure, so it's not worth doing a silly repair. This hard disk is backed up, but I definitely won't be able to access the backup while traveling. I'd like to maximize the probability that I can keep using my computer (and all its current files) while traveling. Update I bit the bullet and ran fsck -fy from single-user mode. It only needed one pass (modification) to reach the "okay" stage. However, rebooting took nearly 5 min and involved several rounds of scratchy sounds and a few bad clicks. I'm now back to kind of using my computer (the same files are missing as before). When I ran live disk verification from Disk Utility this time, however, it reported that the volume appears to be OK. Am I right to infer from the scratchy sounds, however, that my hard drive is still rapidly on its way out? Is there anything else I can do to increase its functionality over the next few weeks?
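    Given the clicking, one check worth doing before relying on the drive for several more weeks is its SMART verdict; if the drive itself reports failing, no amount of fsck will make it trustworthy. A short sketch of the commands (all stock OS X tools; fsck from single-user mode, the diskutil checks from a normal boot):

        # SMART verdict straight from the drive: "Verified" is OK, "Failing" means replace it
        diskutil info disk0 | grep -i "SMART"

        # Filesystem check, the same thing run from single-user mode earlier; safe to repeat
        /sbin/fsck -fy

        # Verify the volume catalog from a normal boot without unmounting
        diskutil verifyVolume /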

    Read the article

  • Exception Errno::EPIPE in Passenger RequestHandler (Broken pipe)

    - by Millisami
    Hi, Upgraded to Rails 2.3.2 and Passenger 2.2.4 on Ubuntu hardy slice at slicehost with Apache2 I'm getting this same above discussed error in my Apache error.log of system /var/logs/apache2/ [ pid=4249 file=ext/apache2/Hooks.cpp:638 time=2009-07-04 11:47:32.752 ]: No data received from the backend application (process 4383) within 45000 msec. Either the backend application is frozen, or your TimeOut value of 45 seconds is too low. Please check whether your application is frozen, or increase the value of the TimeOut configuration directive. *** Exception Errno::EPIPE in Passenger RequestHandler (Broken pipe) (process 4391): from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/rack/request_handler.rb:93:in `write' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/rack/request_handler.rb:93:in `process_request' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/abstract_request_handler.rb:206:in `main_loop' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/railz/application_spawner.rb:376:in `start_request_handler' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/railz/application_spawner.rb:334:in `handle_spawn_application' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/utils.rb:182:in `safe_fork' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/railz/application_spawner.rb:332:in `handle_spawn_application' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/abstract_server.rb:351:in `__send__' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/abstract_server.rb:351:in `main_loop' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/abstract_server.rb:195:in `start_synchronously' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/abstract_server.rb:162:in `start' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/railz/application_spawner.rb:213:in `start' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/spawn_manager.rb:261:in `spawn_rails_application' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/abstract_server_collection.rb:126:in `lookup_or_add' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/spawn_manager.rb:255:in `spawn_rails_application' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/abstract_server_collection.rb:80:in `synchronize' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/abstract_server_collection.rb:79:in `synchronize' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/spawn_manager.rb:254:in `spawn_rails_application' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/spawn_manager.rb:153:in `spawn_application' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/spawn_manager.rb:286:in `handle_spawn_application' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/abstract_server.rb:351:in `__send__' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/abstract_server.rb:351:in `main_loop' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/lib/ phusion_passenger/abstract_server.rb:195:in `start_synchronously' from /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/bin/passenger-spawn- server:61 *** Exception Errno::EPIPE in Passenger RequestHandler (Broken pipe) (process 4383): and these too. 
    pid=4362 file=ext/apache2/Hooks.cpp:638 time=2009-07-04 11:55:19.251 ]: No data received from the backend application (process 4383) within 45000 msec. Either the backend application is frozen, or your TimeOut value of 45 seconds is too low. Please check whether your application is frozen, or increase the value of the TimeOut configuration directive. [ pid=4298 file=ext/apache2/Hooks.cpp:638 time=2009-07-04 11:55:19.255 ]: No data received from the backend application (process 4252) within 45000 msec. Either the backend application is frozen, or your TimeOut value of 45 seconds is too low. Please check whether your application is frozen, or increase the value of the TimeOut configuration directive. [Sat Jul 04 11:55:19 2009] [error] [client 86.96.226.13] Premature end of script headers: 41, referer: http://domain.com/ [ pid=4373 file=ext/apache2/Hooks.cpp:638 time=2009-07-04 11:55:19.559 ]: It's driving me mad; in the browser the page sometimes loads, and when refreshed, an Application Error 500 shows up on a frequent basis. Any directions?
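    The broken pipe is usually the downstream symptom: Apache gives up after the 45-second window, closes the connection, and the Rails process then fails writing its response. The error text itself points at the knobs to turn; a hedged sketch of the relevant Apache/Passenger 2.2 directives (real directive names, example values only), placed in the global config or the vhost:

        # Apache's own request timeout -- the "TimeOut value of 45 seconds" in the error
        # (Apache directive names are case-insensitive)
        TimeOut 300

        # Passenger 2.2: keep Rails processes alive so requests don't wait on a cold spawn
        PassengerMaxPoolSize  6
        PassengerPoolIdleTime 600
        RailsSpawnMethod      smart

    If raising the timeout only delays the error, the application itself is hanging (a slow external call, a stuck database connection), and that is where to look next.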

    Read the article

< Previous Page | 507 508 509 510 511 512 513 514 515 516 517 518  | Next Page >