Search Results

Search found 22829 results on 914 pages for 'nautilus script'.


  • sum up value from textfile - bash

    - by user3493435
    I am trying to sum up the values in a text file, try.txt:

        firstNumber,1
        secondNumber,2

    I tried with this script:

        #!/bin/bash
        while IFS=, read -r -a array; do
          printf "%s %s\n" "${array[0]} ${array[1]}"
          for n in "${array[1]}"; do
            ((total += n))
            echo "total =" $total
          done
        done < try.txt

    and I ended up with this output:

        firstNumber 1
        total = 1
        secondNumber 2
        total = 3

    Expected output:

        firstNumber 1
        secondNumber 2
        total = 3

    Thanks in advance
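    A minimal fix, assuming the goal is a single grand total: keep the running sum inside the loop, but print it once after the whole file has been read (the inner for loop is unnecessary):

        #!/bin/bash
        # hedged sketch: same try.txt format as above
        total=0
        while IFS=, read -r name value; do
          printf "%s %s\n" "$name" "$value"
          (( total += value ))   # accumulate, but do not print yet
        done < try.txt
        echo "total = $total"    # print once, after the loop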

    Read the article

  • Creating a string variable name from the value of another string

    - by NeonGlow
    In my bash script I have two variables, CONFIG_OPTION and CONFIG_VALUE, which contain the strings VENDOR_NAME and Default_Vendor respectively. I need to create a variable whose name is the value of $CONFIG_OPTION, i.e. VENDOR_NAME, and assign the value in CONFIG_VALUE to that newly created variable. How can I do this? I tried:

        $CONFIG_OPTION=$CONFIG_VALUE

    But I am getting an error on this line: './Build.bash: line 137: VENDOR_NAME="Default_Vendor": command not found'. Thanks.
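    One possible approach, sketched here for bash 3.1 or later: printf -v assigns through a variable whose name is computed at runtime, which avoids eval:

        # hedged sketch: creates a variable named by the value of CONFIG_OPTION
        CONFIG_OPTION="VENDOR_NAME"
        CONFIG_VALUE="Default_Vendor"
        printf -v "$CONFIG_OPTION" '%s' "$CONFIG_VALUE"
        echo "$VENDOR_NAME"   # prints Default_Vendor

    declare "$CONFIG_OPTION=$CONFIG_VALUE" works similarly inside functions, with the caveat that it makes the variable local there.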

    Read the article

  • get all form selected labels

    - by erfaan
    I need a jQuery script that shows or appends all selected labels of checkboxes, radio buttons, and select options from a form into a view on the page, like an advanced search form: after submitting, show the selected options at the top of the result list.

    Read the article

  • How can I limit the number of registrants to an event?

    - by user356900
    I've set up a basic HTML/PHP submission form where people can register for our event, but I need a way to replace the submission-form page with one that reads something like "We have reached our registration limit" once we reach a certain number of submitted forms. Our database is MySQL (if that makes a difference). I've looked around on the web, but people either say to count the entries by hand, or the ones that do have an automated system use a CMS like Drupal or Joomla. Is it possible to set up an automated script that will do this?
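    One hedged way to automate this without a CMS: a cron job that counts rows and swaps in a "closed" page once the limit is hit. The database, table, credential, and file names below are placeholders, not the asker's actual schema:

        #!/bin/bash
        # hypothetical cron job; adjust credentials, table, and paths
        LIMIT=200
        COUNT=$(mysql -N -u formuser -p"$DB_PASS" eventdb \
                -e 'SELECT COUNT(*) FROM registrations;')
        if [ "$COUNT" -ge "$LIMIT" ]; then
            # replace the live form page with the "registration closed" page
            cp /var/www/registration_closed.html /var/www/register.html
        fi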

    Read the article

  • Javascript .removeChild() only deletes even nodes?

    - by user1476297
    First posting. I am trying to dynamically add child DIVs under a DIV with ID="prnt". Adding nodes works fine, no problem. Strangely enough, though, when it comes to deleting nodes it only deletes the even-numbered nodes, including 0. Why is this? It could be something stupid, but it seems more like a bug. I could be very wrong. Please help. Thank you in advance.

        <script type="text/javascript">
        function displayNodes() {
            var prnt = document.getElementById("prnt");
            var chlds = prnt.childNodes;
            var cont = document.getElementById("content");
            for (i = 0; i < chlds.length; i++) {
                if (chlds[i].nodeType == 1) {
                    cont.innerHTML += "<br />";
                    cont.innerHTML += "Node # " + (i+1);
                    cont.innerHTML += "<br />";
                    cont.innerHTML += chlds[i].nodeName;
                    cont.innerHTML += "<br />";
                }
            }
        }
        function deleteENodes() {
            var prnt = document.getElementById("prnt");
            var chlds = prnt.childNodes;
            for (i = 0; i < chlds.length; i++) {
                if (!(chlds[i].nodeType == 3)) {
                    prnt.removeChild(chlds[i]);
                }
            }
        }
        function AddENodes() {
            var prnt = document.getElementById("prnt");
            //Only even nodes are deletable PROBLEM
            for (i = 0; i < 10; i++) {
                var newDIV = document.createElement('div');
                newDIV.setAttribute("id", "c" + (i));
                var text = document.createTextNode("New Inserted Child " + (i));
                newDIV.appendChild(text);
                prnt.appendChild(newDIV);
            }
        }
        </script>
        <title>Checking Div Nodes</title>
        </head>
        <body>
        <div id="prnt">
            Parent 1
        </div>
        <br /><br /><br />
        <button type="button" onclick="displayNodes()">Show Node Info</button>
        <button type="button" onclick="deleteENodes()">Remove All Element Nodes Under Parent 1</button>
        <button type="button" onclick="AddENodes()">Add 5 New DIV Nodes</button>
        <div id="content"></div>
        </body>

    Read the article

  • jquery code completion and netbeans... not working, although done like in the manual

    - by blincv
    Hey, what I read on several help pages was that getting jQuery code completion to work was just a matter of:

    - getting targeted browsers right (the only one I chose was Firefox 3.x or later)
    - putting the jQuery file into the project (it is now visible in "source files")
    - adding the "script type" etc. line (tried the filename with and without a leading /)

    Did this... but still no code completion. Any ideas? I'm on Windows 7 and NetBeans 7, 32-bit system. What is wrong? :( http://www.loaditup.de/files/615138.png

    Read the article

  • usleep() php5 uses 40% of idle CPU

    - by Marcin
    Hi guys, I have a weird question. I have a CLI PHP script which uses usleep() (sometimes 1 sec, sometimes 2 sec, sometimes 100 ms, it depends) when some waiting is required, but what I have noticed is that while in usleep() it seems to use about 40% of idle CPU:

        Cpu(s): 5.3%us, 21.3%sy, 0.0%ni, 57.2%id, 0.0%wa, 0.0%hi, 0.0%si, 16.1%st

    Any ideas? Cheers
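    Worth noting: the top line shows 16.1%st, i.e. CPU stolen by the hypervisor, so the "missing" idle time may not be the PHP process at all. A quick hedged check of per-process usage (assuming the sysstat package is installed; the script name is a placeholder):

        # sample the sleeping PHP process once a second for 10 seconds;
        # a process blocked in usleep() should show near-zero %CPU here
        pidstat -p "$(pgrep -f my-cli-script.php)" 1 10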

    Read the article

  • Lower CPU % used by FFMPEG Process via PHP (FLV Video Conversion)

    - by BoRo
    Hi, okay, so I currently execute an ffmpeg command via PHP to run a video conversion. The problem I'm having is that during the conversion, the ffmpeg process(es) use up so much CPU (near 100%) that it slows down the responsiveness of my web server. Is there a way (crontab or script) I can limit the ffmpeg processes to a certain CPU percentage? Thanks,
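    Two common approaches, sketched under the assumption that the cpulimit package is available (file names are placeholders):

        # start the encode at the lowest scheduling priority, so it only
        # consumes CPU the web server isn't using
        nice -n 19 ffmpeg -i input.avi output.flv

        # or cap an already-running ffmpeg near 50% of one core
        cpulimit -p "$(pgrep -n ffmpeg)" -l 50

    nice is usually enough: the kernel will still give ffmpeg 100% of an otherwise-idle CPU, but Apache wins whenever it needs cycles.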

    Read the article

  • How to ignore %% from being evaluated?

    - by murxx
    Hi, one of my variables has the value %val% - that is exactly the name! So:

        set variable=%val%

    What happens now is that when running the script the variable is set to nothing, because %val% is being evaluated! But this is not what I want... How can I tell DOS to ignore the %-sign here? Can anybody out there help me with my question? Thanks a lot...

    Read the article

  • Using R to Analyze G1GC Log Files

    - by user12620111
    Using R to Analyze G1GC Log Files

    Introduction

    Working in Oracle Platform Integration gives an engineer opportunities to work on a wide array of technologies. My team's goal is to make Oracle applications run best on the Solaris/SPARC platform. When looking for bottlenecks in a modern application, one needs to be aware of not only how the CPUs and operating system are executing, but also the network, storage, and in some cases, the Java Virtual Machine.

    I was recently presented with about 1.5 GB of Java Garbage First Garbage Collector log file data. If you're not familiar with the subject, you might want to review Garbage First Garbage Collector Tuning by Monica Beckwith. The customer had been running Java HotSpot 1.6.0_31 to host a web application server. I was told that the Solaris/SPARC server was running a Java process launched using a command line that included the following flags:

        -d64 -Xms9g -Xmx9g -XX:+UseG1GC -XX:MaxGCPauseMillis=200
        -XX:InitiatingHeapOccupancyPercent=80 -XX:PermSize=256m -XX:MaxPermSize=256m
        -XX:+PrintGC -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+PrintGCDateStamps
        -XX:+PrintFlagsFinal -XX:+DisableExplicitGC -XX:+UnlockExperimentalVMOptions
        -XX:ParallelGCThreads=8

    Several sources on the internet indicate that if I were to print out the 1.5 GB of log files, it would require enough paper to fill the bed of a pickup truck. Of course, it would be fruitless to try to scan the log files by hand. Tools will be required to summarize the contents of the log files.

    Others have encountered large Java garbage collection log files. There are existing tools to analyze them: IBM's GC toolkit, the chewiebug GCViewer, gchisto, and HPjmeter. Instead of using one of those tools, I decided to parse the log files with standard Unix tools, and analyze the data with R.

    Data Cleansing

    The log files arrived in two different formats. I guess that the difference is that one set of log files was generated using a more verbose option, maybe -XX:+PrintHeapAtGC, and the other set was generated without that option.

    Format 1

    In the log files with the less verbose format, a single trace, i.e. the report of a single garbage collection event, looks like this:

        {Heap before GC invocations=12280 (full 61):
         garbage-first heap  total 9437184K, used 7499918K [0xfffffffd00000000, 0xffffffff40000000, 0xffffffff40000000)
          region size 4096K, 1 young (4096K), 0 survivors (0K)
         compacting perm gen  total 262144K, used 144077K [0xffffffff40000000, 0xffffffff50000000, 0xffffffff50000000)
           the space 262144K, 54% used [0xffffffff40000000, 0xffffffff48cb3758, 0xffffffff48cb3800, 0xffffffff50000000)
        No shared spaces configured.
        2014-05-14T07:24:00.988-0700: 60586.353: [GC pause (young) 7324M->7320M(9216M), 0.1567265 secs]
        Heap after GC invocations=12281 (full 61):
         garbage-first heap  total 9437184K, used 7496533K [0xfffffffd00000000, 0xffffffff40000000, 0xffffffff40000000)
          region size 4096K, 0 young (0K), 0 survivors (0K)
         compacting perm gen  total 262144K, used 144077K [0xffffffff40000000, 0xffffffff50000000, 0xffffffff50000000)
           the space 262144K, 54% used [0xffffffff40000000, 0xffffffff48cb3758, 0xffffffff48cb3800, 0xffffffff50000000)
        No shared spaces configured.
        }

    A simple grep can be used to extract a summary:

        $ grep "\[GC pause (young" g1gc.log
        2014-05-13T13:24:35.091-0700: 3.109: [GC pause (young) 20M->5029K(9216M), 0.0146328 secs]
        2014-05-13T13:24:35.440-0700: 3.459: [GC pause (young) 9125K->6077K(9216M), 0.0086723 secs]
        2014-05-13T13:24:37.581-0700: 5.599: [GC pause (young) 25M->8470K(9216M), 0.0203820 secs]
        2014-05-13T13:24:42.686-0700: 10.704: [GC pause (young) 44M->15M(9216M), 0.0288848 secs]
        2014-05-13T13:24:48.941-0700: 16.958: [GC pause (young) 51M->20M(9216M), 0.0491244 secs]
        2014-05-13T13:24:56.049-0700: 24.066: [GC pause (young) 92M->26M(9216M), 0.0525368 secs]
        2014-05-13T13:25:34.368-0700: 62.383: [GC pause (young) 602M->68M(9216M), 0.1721173 secs]

    But that format wasn't easily read into R, so I needed to be a bit more tricky. I used the following Unix commands to create a summary file that was easy for R to read:

        $ echo "SecondsSinceLaunch BeforeSize AfterSize TotalSize RealTime"
        $ grep "\[GC pause (young" g1gc.log | grep -v mark | sed -e 's/[A-SU-z\(\),]/ /g' -e 's/->/ /' -e 's/: / /g' | more
        SecondsSinceLaunch BeforeSize AfterSize TotalSize RealTime
        2014-05-13T13:24:35.091-0700 3.109 20 5029 9216 0.0146328
        2014-05-13T13:24:35.440-0700 3.459 9125 6077 9216 0.0086723
        2014-05-13T13:24:37.581-0700 5.599 25 8470 9216 0.0203820
        2014-05-13T13:24:42.686-0700 10.704 44 15 9216 0.0288848
        2014-05-13T13:24:48.941-0700 16.958 51 20 9216 0.0491244
        2014-05-13T13:24:56.049-0700 24.066 92 26 9216 0.0525368
        2014-05-13T13:25:34.368-0700 62.383 602 68 9216 0.1721173

    Format 2

    In the log files with the more verbose format, a single trace, i.e. the report of a single garbage collection event, was more complicated than Format 1. Here is a text file with an example of a single G1GC trace in the second format. As you can see, it is quite complicated. It is nice that there is so much information available, but the level of detail can be overwhelming. I wrote this awk script (download) to summarize each trace on a single line:

        #!/usr/bin/env awk -f
        BEGIN {
            printf("SecondsSinceLaunch IncrementalCount FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize\n")
        }
        ######################
        # Save count data from lines that are at the start of each G1GC trace.
        # Each trace starts out like this:
        #   {Heap before GC invocations=14 (full 0):
        #    garbage-first heap  total 9437184K, used 325496K [0xfffffffd00000000, 0xffffffff40000000, 0xffffffff40000000)
        ######################
        /{Heap.*full/ {
            gsub("\\)", "");
            nf = split($0, a, "=");
            split(a[2], b, " ");
            getline;
            if (match($0, "first")) {
                G1GC = 1;
                IncrementalCount = b[1];
                FullCount = substr(b[3], 1, length(b[3]) - 1);
            } else {
                G1GC = 0;
            }
        }
        ######################
        # Pull out time stamps that are in lines with this format:
        #   2014-05-12T14:02:06.025-0700: 94.312: [GC pause (young), 0.08870154 secs]
        ######################
        /GC pause/ {
            DateTime = $1;
            SecondsSinceLaunch = substr($2, 1, length($2) - 1);
        }
        ######################
        # Heap sizes are in lines that look like this:
        #   [ 4842M->4838M(9216M)]
        ######################
        /\[ .*]$/ {
            gsub("\\[", ""); gsub("\\]", "");
            gsub("->", " ");
            gsub("\\(", " "); gsub("\\)", " ");
            split($0, a, " ");
            if (split(a[1], b, "M") > 1) { BeforeSize = b[1] * 1024; }
            if (split(a[1], b, "K") > 1) { BeforeSize = b[1]; }
            if (split(a[2], b, "M") > 1) { AfterSize = b[1] * 1024; }
            if (split(a[2], b, "K") > 1) { AfterSize = b[1]; }
            if (split(a[3], b, "M") > 1) { TotalSize = b[1] * 1024; }
            if (split(a[3], b, "K") > 1) { TotalSize = b[1]; }
        }
        ######################
        # Emit an output line when you find input that looks like this:
        #   [Times: user=1.41 sys=0.08, real=0.24 secs]
        ######################
        /\[Times/ {
            if (G1GC == 1) {
                gsub(",", "");
                split($2, a, "="); UserTime = a[2];
                split($3, a, "="); SysTime = a[2];
                split($4, a, "="); RealTime = a[2];
                print DateTime, SecondsSinceLaunch, IncrementalCount, FullCount, UserTime, SysTime, RealTime, BeforeSize, AfterSize, TotalSize;
                G1GC = 0;
            }
        }

    The resulting summary is about 25X smaller than the original file, but still difficult for a human to digest:

        SecondsSinceLaunch IncrementalCount FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize
        ...
        2014-05-12T18:36:34.669-0700: 3985.744 561 0 0.57 0.06 0.16 1724416 1720320 9437184
        2014-05-12T18:36:34.839-0700: 3985.914 562 0 0.51 0.06 0.19 1724416 1720320 9437184
        2014-05-12T18:36:35.069-0700: 3986.144 563 0 0.60 0.04 0.27 1724416 1721344 9437184
        2014-05-12T18:36:35.354-0700: 3986.429 564 0 0.33 0.04 0.09 1725440 1722368 9437184
        2014-05-12T18:36:35.545-0700: 3986.620 565 0 0.58 0.04 0.17 1726464 1722368 9437184
        2014-05-12T18:36:35.726-0700: 3986.801 566 0 0.43 0.05 0.12 1726464 1722368 9437184
        2014-05-12T18:36:35.856-0700: 3986.930 567 0 0.30 0.04 0.07 1726464 1723392 9437184
        2014-05-12T18:36:35.947-0700: 3987.023 568 0 0.61 0.04 0.26 1727488 1723392 9437184
        2014-05-12T18:36:36.228-0700: 3987.302 569 0 0.46 0.04 0.16 1731584 1724416 9437184

    Reading the Data into R

    Once the GC log data had been cleansed, either by processing the first format with the shell pipeline, or by processing the second format with the awk script, it was easy to read the data into R:

        g1gc.df = read.csv("summary.txt", row.names = NULL, stringsAsFactors=FALSE, sep="")
        str(g1gc.df)
        ## 'data.frame': 8307 obs. of 10 variables:
        ## $ row.names : chr "2014-05-12T14:00:32.868-0700:" "2014-05-12T14:00:33.179-0700:" "2014-05-12T14:00:33.677-0700:" "2014-05-12T14:00:35.538-0700:" ...
        ## $ SecondsSinceLaunch: num 1.16 1.47 1.97 3.83 6.1 ...
        ## $ IncrementalCount : int 0 1 2 3 4 5 6 7 8 9 ...
        ## $ FullCount : int 0 0 0 0 0 0 0 0 0 0 ...
        ## $ UserTime : num 0.11 0.05 0.04 0.21 0.08 0.26 0.31 0.33 0.34 0.56 ...
        ## $ SysTime : num 0.04 0.01 0.01 0.05 0.01 0.06 0.07 0.06 0.07 0.09 ...
        ## $ RealTime : num 0.02 0.02 0.01 0.04 0.02 0.04 0.05 0.04 0.04 0.06 ...
        ## $ BeforeSize : int 8192 5496 5768 22528 24576 43008 34816 53248 55296 93184 ...
        ## $ AfterSize : int 1400 1672 2557 4907 7072 14336 16384 18432 19456 21504 ...
        ## $ TotalSize : int 9437184 9437184 9437184 9437184 9437184 9437184 9437184 9437184 9437184 9437184 ...
        head(g1gc.df)
        ## row.names SecondsSinceLaunch IncrementalCount
        ## 1 2014-05-12T14:00:32.868-0700: 1.161 0
        ## 2 2014-05-12T14:00:33.179-0700: 1.472 1
        ## 3 2014-05-12T14:00:33.677-0700: 1.969 2
        ## 4 2014-05-12T14:00:35.538-0700: 3.830 3
        ## 5 2014-05-12T14:00:37.811-0700: 6.103 4
        ## 6 2014-05-12T14:00:41.428-0700: 9.720 5
        ## FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize
        ## 1 0 0.11 0.04 0.02 8192 1400 9437184
        ## 2 0 0.05 0.01 0.02 5496 1672 9437184
        ## 3 0 0.04 0.01 0.01 5768 2557 9437184
        ## 4 0 0.21 0.05 0.04 22528 4907 9437184
        ## 5 0 0.08 0.01 0.02 24576 7072 9437184
        ## 6 0 0.26 0.06 0.04 43008 14336 9437184

    Basic Statistics

    Once the data has been read into R, simple statistics are very easy to generate. All of the numbers from high school statistics are available via simple commands. For example, generate a summary of every column:

        summary(g1gc.df)
        ## row.names SecondsSinceLaunch IncrementalCount FullCount
        ## Length:8307 Min. : 1 Min. : 0 Min. : 0.0
        ## Class :character 1st Qu.: 9977 1st Qu.:2048 1st Qu.: 0.0
        ## Mode :character Median :12855 Median :4136 Median : 12.0
        ## Mean :12527 Mean :4156 Mean : 31.6
        ## 3rd Qu.:15758 3rd Qu.:6262 3rd Qu.: 61.0
        ## Max. :55484 Max. :8391 Max. :113.0
        ## UserTime SysTime RealTime BeforeSize
        ## Min. :0.040 Min. :0.0000 Min. : 0.0 Min. : 5476
        ## 1st Qu.:0.470 1st Qu.:0.0300 1st Qu.: 0.1 1st Qu.:5137920
        ## Median :0.620 Median :0.0300 Median : 0.1 Median :6574080
        ## Mean :0.751 Mean :0.0355 Mean : 0.3 Mean :5841855
        ## 3rd Qu.:0.920 3rd Qu.:0.0400 3rd Qu.: 0.2 3rd Qu.:7084032
        ## Max. :3.370 Max. :1.5600 Max. :488.1 Max. :8696832
        ## AfterSize TotalSize
        ## Min. : 1380 Min. :9437184
        ## 1st Qu.:5002752 1st Qu.:9437184
        ## Median :6559744 Median :9437184
        ## Mean :5785454 Mean :9437184
        ## 3rd Qu.:7054336 3rd Qu.:9437184
        ## Max. :8482816 Max. :9437184

    Q: What is the total amount of User CPU time spent in garbage collection?

        sum(g1gc.df$UserTime)
        ## [1] 6236

    As you can see, less than two hours of CPU time was spent in garbage collection. Is that too much? To find the percentage of time spent in garbage collection, divide the number above by total_elapsed_time*CPU_count. In this case, there are a lot of CPUs and it turns out that the overall amount of CPU time spent in garbage collection isn't a problem when viewed in isolation.

    When calculating rates, i.e. events per unit time, you need to ask yourself if the rate is homogeneous across the time period in the log file. Does the log file include spikes of high activity that should be separately analyzed? Averaging in data from nights and weekends with data from business hours may alias problems. If you have a reason to suspect that the garbage collection rates include peaks and valleys that need independent analysis, see the "Time Series" section, below.

    Q: How much garbage is collected on each pass?

    The amount of heap space that is recovered per GC pass is surprisingly low:

    - At least one collection didn't recover any data. ("Min.=0")
    - 25% of the passes recovered 3MB or less. ("1st Qu.=3072")
    - Half of the GC passes recovered 4MB or less. ("Median=4096")
    - The average amount recovered was 56MB. ("Mean=56390")
    - 75% of the passes recovered 36MB or less. ("3rd Qu.=36860")
    - At least one pass recovered 2GB. ("Max.=2121000")

        g1gc.df$Delta = g1gc.df$BeforeSize - g1gc.df$AfterSize
        summary(g1gc.df$Delta)
        ## Min. 1st Qu. Median Mean 3rd Qu. Max.
        ## 0 3070 4100 56400 36900 2120000

    Q: What is the maximum User CPU time for a single collection?

    The worst garbage collection ("Max.") is many standard deviations away from the mean. The data appears to be right skewed.

        summary(g1gc.df$UserTime)
        ## Min. 1st Qu. Median Mean 3rd Qu. Max.
        ## 0.040 0.470 0.620 0.751 0.920 3.370
        sd(g1gc.df$UserTime)
        ## [1] 0.3966

    Basic Graphics

    Once the data is in R, it is trivial to plot it with formats including dot plots, line charts, bar charts (simple, stacked, grouped), pie charts, boxplots, scatter plots, histograms, and kernel density plots.

    Histogram of User CPU Time per Collection

    I don't think that this graph requires any explanation.

        hist(g1gc.df$UserTime,
             main="User CPU Time per Collection",
             xlab="Seconds", ylab="Frequency")

    Box plot to identify outliers

    When the initial data is viewed with a box plot, you can see the one crazy outlier in the real time per GC. Save this data point for future analysis and drop the outlier so that it's not throwing off our statistics. Now the box plot shows many outliers, which will be examined later, using time series analysis. Notice that the scale of the x-axis changes drastically once the crazy outlier is removed.

        par(mfrow=c(2,1))
        boxplot(g1gc.df$UserTime, g1gc.df$SysTime, g1gc.df$RealTime,
                main="Box Plot of Time per GC\n(dominated by a crazy outlier)",
                names=c("usr","sys","elapsed"),
                xlab="Seconds per GC", ylab="Time (Seconds)",
                horizontal = TRUE, outcol="red")
        crazy.outlier.df = g1gc.df[g1gc.df$RealTime > 400,]
        g1gc.df = g1gc.df[g1gc.df$RealTime < 400,]
        boxplot(g1gc.df$UserTime, g1gc.df$SysTime, g1gc.df$RealTime,
                main="Box Plot of Time per GC\n(crazy outlier excluded)",
                names=c("usr","sys","elapsed"),
                xlab="Seconds per GC", ylab="Time (Seconds)",
                horizontal = TRUE, outcol="red")
        box(which = "outer", lty = "solid")

    Here is the crazy outlier for future analysis:

        crazy.outlier.df
        ## row.names SecondsSinceLaunch IncrementalCount
        ## 8233 2014-05-12T23:15:43.903-0700: 20741 8316
        ## FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize
        ## 8233 112 0.55 0.42 488.1 8381440 8235008 9437184
        ## Delta
        ## 8233 146432

    R Time Series Data

    To analyze the garbage collection as a time series, I'll use Z's Ordered Observations (zoo): "zoo is the creator for an S3 class of indexed totally ordered observations which includes irregular time series."

        require(zoo)
        ## Loading required package: zoo
        ##
        ## Attaching package: 'zoo'
        ##
        ## The following objects are masked from 'package:base':
        ##
        ## as.Date, as.Date.numeric
        head(g1gc.df[,1])
        ## [1] "2014-05-12T14:00:32.868-0700:" "2014-05-12T14:00:33.179-0700:"
        ## [3] "2014-05-12T14:00:33.677-0700:" "2014-05-12T14:00:35.538-0700:"
        ## [5] "2014-05-12T14:00:37.811-0700:" "2014-05-12T14:00:41.428-0700:"
        options("digits.secs"=3)
        times = as.POSIXct(g1gc.df[,1], format="%Y-%m-%dT%H:%M:%OS%z:")
        g1gc.z = zoo(g1gc.df[,-c(1)], order.by=times)
        head(g1gc.z)
        ## SecondsSinceLaunch IncrementalCount FullCount
        ## 2014-05-12 17:00:32.868 1.161 0 0
        ## 2014-05-12 17:00:33.178 1.472 1 0
        ## 2014-05-12 17:00:33.677 1.969 2 0
        ## 2014-05-12 17:00:35.538 3.830 3 0
        ## 2014-05-12 17:00:37.811 6.103 4 0
        ## 2014-05-12 17:00:41.427 9.720 5 0
        ## UserTime SysTime RealTime BeforeSize AfterSize
        ## 2014-05-12 17:00:32.868 0.11 0.04 0.02 8192 1400
        ## 2014-05-12 17:00:33.178 0.05 0.01 0.02 5496 1672
        ## 2014-05-12 17:00:33.677 0.04 0.01 0.01 5768 2557
        ## 2014-05-12 17:00:35.538 0.21 0.05 0.04 22528 4907
        ## 2014-05-12 17:00:37.811 0.08 0.01 0.02 24576 7072
        ## 2014-05-12 17:00:41.427 0.26 0.06 0.04 43008 14336
        ## TotalSize Delta
        ## 2014-05-12 17:00:32.868 9437184 6792
        ## 2014-05-12 17:00:33.178 9437184 3824
        ## 2014-05-12 17:00:33.677 9437184 3211
        ## 2014-05-12 17:00:35.538 9437184 17621
        ## 2014-05-12 17:00:37.811 9437184 17504
        ## 2014-05-12 17:00:41.427 9437184 28672

    Example of Two Benchmark Runs in One Log File

    The data in the following graph is from a different log file, not the one of primary interest to this article. I'm including this image because it is an example of idle periods followed by busy periods. It would be uninteresting to average the rate of garbage collection over the entire log file period. More interesting would be the rate of garbage collection in the two busy periods. Are they the same or different? Your production data may be similar, for example, bursts when employees return from lunch and idle times on weekend evenings, etc. Once the data is in an R time series, you can analyze isolated time windows.

    Clipping the Time Series data

    Flashing back to our test case... Viewing the data as a time series is interesting. You can see that the work-intensive time period is between 9:00 PM and 3:00 AM. Let's clip the data to the interesting period:

        par(mfrow=c(2,1))
        plot(g1gc.z$UserTime, type="h",
             main="User Time per GC\nTime: Complete Log File",
             xlab="Time of Day", ylab="CPU Seconds per GC", col="#1b9e77")
        clipped.g1gc.z = window(g1gc.z,
                                start=as.POSIXct("2014-05-12 21:00:00"),
                                end=as.POSIXct("2014-05-13 03:00:00"))
        plot(clipped.g1gc.z$UserTime, type="h",
             main="User Time per GC\nTime: Limited to Benchmark Execution",
             xlab="Time of Day", ylab="CPU Seconds per GC", col="#1b9e77")
        box(which = "outer", lty = "solid")

    Cumulative Incremental and Full GC count

    Here is the cumulative incremental and full GC count. When the line is very steep, it indicates that the GCs are repeating very quickly. Notice that the scale on the Y axis is different for full vs. incremental.

        plot(clipped.g1gc.z[,c(2:3)],
             main="Cumulative Incremental and Full GC count",
             xlab="Time of Day", col="#1b9e77")

    GC Analysis of Benchmark Execution using Time Series data

    In the following series of 3 graphs:

    - The "After Size" graph shows the amount of heap space in use after each garbage collection. Many Java objects are still referenced, i.e. alive, during each garbage collection. This may indicate that the application has a memory leak, or may indicate that the application has a very large memory footprint. Typically, an application's memory footprint plateaus in the early stage of execution. One would expect this graph to have a flat top. The steep decline in the heap space may indicate that the application crashed after 2:00.
    - The second graph shows that the outliers in real execution time, discussed above, occur near 2:00, when the Java heap seems to be quite full.
    - The third graph shows that Full GCs are infrequent during the first few hours of execution. The rate of Full GCs (the slope of the cumulative Full GC line) changes near midnight.

        plot(clipped.g1gc.z[,c("AfterSize","RealTime","FullCount")],
             xlab="Time of Day", col=c("#1b9e77","red","#1b9e77"))

    GC Analysis of heap recovered

    Each GC trace includes the amount of heap space in use before and after the individual GC event. During garbage collection, unreferenced objects are identified, the space holding the unreferenced objects is freed, and thus the difference in before and after usage indicates how much space has been freed. The following box plot and bar chart both demonstrate the same point: the amount of heap space freed per garbage collection is surprisingly low.

        par(mfrow=c(2,1))
        boxplot(as.vector(clipped.g1gc.z$Delta),
                main="Amount of Heap Recovered per GC Pass",
                xlab="Size in KB", horizontal = TRUE, col="red")
        hist(as.vector(clipped.g1gc.z$Delta),
             main="Amount of Heap Recovered per GC Pass",
             xlab="Size in KB", breaks=100, col="red")
        box(which = "outer", lty = "solid")

    This graph is the most interesting. The dark blue area shows how much heap is occupied by referenced Java objects. This represents memory that holds live data. The red fringe at the top shows how much data was recovered after each garbage collection.

        barplot(clipped.g1gc.z[,c("AfterSize","Delta")],
                col=c("#7570b3","#e7298a"), xlab="Time of Day", border=NA)
        legend("topleft", c("Live Objects","Heap Recovered on GC"),
               fill=c("#7570b3","#e7298a"))
        box(which = "outer", lty = "solid")

    When I discuss the data in the log files with the customer, I will ask for an explanation for the large amount of referenced data resident in the Java heap. There are two possibilities:

    - There is a memory leak, and the amount of space required to hold referenced objects will continue to grow, limited only by the maximum heap size. After the maximum heap size is reached, the JVM will throw an "Out of Memory" exception every time the application tries to allocate a new object. If this is the case, the application needs to be debugged to identify why old objects are referenced when they are no longer needed.
    - The application has a legitimate requirement to keep a large amount of data in memory. The customer may want to further increase the maximum heap size. Another possible solution would be to partition the application across multiple cluster nodes, where each node has responsibility for managing a unique subset of the data.

    Conclusion

    In conclusion, R is a very powerful tool for the analysis of Java garbage collection log files. The primary difficulty is data cleansing so that information can be read into an R data frame. Once the data has been read into R, a rich set of tools may be used for thorough evaluation.

    Read the article

  • postfix with mailman

    - by Thufir
    What should happen is that [email protected] should be delivered to that user's inbox on localhost, user@localhost. Thunderbird works fine at reading user@localhost. I'm just using a small portion of postfix-dovecot with Ubuntu mailman. How can I get postfix to recognize the FQDN and deliver such mail to a localhost inbox?

        root@dur:~# tail /var/log/mail.err; tail /var/log/mailman/subscribe; postconf -n
        Aug 27 18:59:16 dur dovecot: lda(root): Error: chdir(/root) failed: Permission denied
        Aug 27 18:59:16 dur dovecot: lda(root): Error: user root: Initialization failed: Initializing mail storage from mail_location setting failed: stat(/root/Maildir) failed: Permission denied (euid=65534(nobody) egid=65534(nogroup) missing +x perm: /root, dir owned by 0:0 mode=0700)
        Aug 27 18:59:16 dur dovecot: lda(root): Fatal: Invalid user settings. Refer to server log for more information.
        Aug 27 20:09:16 dur postfix/trivial-rewrite[15896]: error: open database /etc/postfix/transport.db: No such file or directory
        Aug 27 21:19:17 dur postfix/trivial-rewrite[16569]: error: open database /etc/postfix/transport.db: No such file or directory
        Aug 27 22:27:00 dur postfix[17042]: fatal: usage: postfix [-c config_dir] [-Dv] command
        Aug 27 22:29:19 dur postfix/trivial-rewrite[17062]: error: open database /etc/postfix/transport.db: No such file or directory
        Aug 27 22:59:07 dur postfix/postfix-script[17459]: error: unknown command: 'restart'
        Aug 27 22:59:07 dur postfix/postfix-script[17460]: fatal: usage: postfix start (or stop, reload, abort, flush, check, status, set-permissions, upgrade-configuration)
        Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: error: open database /etc/postfix/transport.db: No such file or directory
        Aug 27 21:39:03 2012 (16734) cola: pending "[email protected]" <[email protected]> 127.0.0.1
        Aug 27 21:40:37 2012 (16749) cola: pending "[email protected]" <[email protected]> 127.0.0.1
        Aug 27 22:45:31 2012 (17288) gmane.mail.mailman.user.1: pending [email protected] 127.0.0.1
        Aug 27 22:45:46 2012 (17293) gmane.mail.mailman.user.1: pending [email protected] 127.0.0.1
        Aug 27 23:02:01 2012 (17588) test3: pending [email protected] 127.0.0.1
        Aug 27 23:05:41 2012 (17652) test4: pending [email protected] 127.0.0.1
        Aug 27 23:56:20 2012 (17985) test5: pending [email protected] 127.0.0.1
        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases
        append_dot_mydomain = no
        biff = no
        broken_sasl_auth_clients = yes
        config_directory = /etc/postfix
        default_transport = smtp
        home_mailbox = Maildir/
        inet_interfaces = loopback-only
        mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/conf.d/01-mail-stack-delivery.conf -m "${EXTENSION}"
        mailbox_size_limit = 0
        mailman_destination_recipient_limit = 1
        mydestination = dur, dur.bounceme.net, localhost.bounceme.net, localhost
        myhostname = dur.bounceme.net
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        readme_directory = no
        recipient_delimiter = +
        relay_domains = lists.dur.bounceme.net
        relay_transport = relay
        relayhost =
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtp_use_tls = yes
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_authenticated_header = yes
        smtpd_sasl_local_domain = $myhostname
        smtpd_sasl_path = private/dovecot-auth
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_type = dovecot
        smtpd_tls_auth_only = yes
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-mail.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-mail.key
        smtpd_tls_mandatory_ciphers = medium
        smtpd_tls_mandatory_protocols = SSLv3, TLSv1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        tls_random_source = dev:/dev/urandom
        transport_maps = hash:/etc/postfix/transport

    There's definitely a transport problem:

        root@dur:~# grep transport /var/log/mail.log | tail
        Aug 27 22:29:19 dur postfix/trivial-rewrite[17062]: warning: hash:/etc/postfix/transport lookup error for "[email protected]"
        Aug 27 22:29:19 dur postfix/trivial-rewrite[17062]: warning: transport_maps lookup failure
        Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: error: open database /etc/postfix/transport.db: No such file or directory
        Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory
        Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport lookup error for "*"
        Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory
        Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport lookup error for "*"
        Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory
        Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport lookup error for "[email protected]"
        Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: transport_maps lookup failure

    EDIT: Trying to add the transport file:

        root@dur:~# touch /etc/postfix/transport
        root@dur:~# ll /etc/postfix/transport
        -rw-r--r-- 1 root root 0 Aug 28 00:16 /etc/postfix/transport
        root@dur:~# cd /etc/postfix/
        root@dur:/etc/postfix# postmap transport
        root@dur:/etc/postfix# cat transport
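    For what it's worth, an empty transport map can't route anything: the file has to be populated, hashed with postmap, and postfix reloaded before transport_maps takes effect. A hedged sketch; the "mailman:" transport name assumes the stock Ubuntu postfix-to-mailman integration defined in master.cf:

        # route the list domain (mirrors relay_domains above) to the mailman transport
        echo "lists.dur.bounceme.net    mailman:" >> /etc/postfix/transport
        postmap /etc/postfix/transport
        postfix reload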

    Read the article

  • Rails requires Rubygems but I have the gems

    - by fogonthedowns
    Update: I notice that 'which ruby' and 'whereis ruby' report different locations:

        $ which ruby
        /opt/local/bin/ruby
        $ whereis ruby
        /usr/bin/ruby

    I recently upgraded Ruby to ruby 1.8.7 (2009-06-12 patchlevel 174) [i686-darwin10] and I think I broke Rails. When I attempt to load Rails, I get an odd message. Please help!

        $ ruby script/server
        Rails requires RubyGems >= 1.3.2. Please install RubyGems and try again: http://rubygems.rubyforge.org
        $ rails -v
        Rails 3.0.0.beta
        $ gem -v
        1.3.6
        $ which gem
        /usr/bin/gem
        $ whereis gem
        /usr/bin/gem
        $ which rails
        /usr/bin/rails
        $ whereis rails
        /usr/bin/rails
        $ /usr/bin/gem -v
        1.3.6
        $ /usr/bin/rails -v
        Rails 3.0.0.beta
        $ ruby script/console
        Rails requires RubyGems >= 1.3.2. Please install RubyGems and try again: http://rubygems.rubyforge.org
        $ gem list rails

        *** LOCAL GEMS ***

        rails (3.0.0.beta, 2.3.5, 2.2.2, 1.2.6)

        $ gem list

        *** LOCAL GEMS ***

        abstract (1.0.0) actionmailer (3.0.0.beta, 2.3.5, 2.2.2, 1.3.6) actionpack (3.0.0.beta, 2.3.5, 2.2.2, 1.13.6) actionwebservice (1.2.6) activemerchant (1.4.1) activemodel (3.0.0.beta) activerecord (3.0.0.beta, 2.3.5, 2.2.2, 1.15.6) activerecord-tableless (0.1.0) activeresource (3.0.0.beta, 2.3.5, 2.2.2) activesupport (3.0.0.beta, 2.3.5, 2.2.2, 1.4.4) acts_as_ferret (0.4.3) arel (0.2.pre) authlogic (2.1.3) builder (2.1.2) bundler (0.9.3) calendar_date_select (1.15) capistrano (2.5.2) cgi_multipart_eof_fix (2.5.0) chronic (0.2.3) columnize (0.3.1) compass (0.8.17) daemons (1.0.10) dnssd (0.6.0) erubis (2.6.5) fastercsv (1.5.0) fastthread (1.0.1) fcgi (0.8.7) ferret (0.11.6) flay (1.4.0) flog (2.4.0) gbarcode (0.98.16) gem_plugin (0.2.3) git (1.2.5) haml (2.2.15) haml-edge (2.3.100) highline (1.5.0) hoe (2.4.0) hpricot (0.6.164) i18n (0.3.3) javan-whenever (0.3.7) jeweler (1.4.0) jscruggs-metric_fu (1.1.5) json_pure (1.2.0) libxml-ruby (1.1.2) linecache (0.43) mail (2.1.2) mechanize (0.9.3) memcache-client (1.7.8) mime-types (1.16) mislav-will_paginate (2.3.11) mocha (0.9.7) mojombo-chronic (0.3.0) mongrel (1.1.5) needle (1.3.0) net-scp (1.0.1) net-sftp (2.0.1, 1.1.1) net-ssh (2.0.4, 1.1.4) net-ssh-gateway (1.0.0) nifty-generators (0.3.0) nokogiri (1.4.0) openrain-action_mailer_tls (1.1.3) passenger (2.2.5) polyglot (0.2.9) prawn (0.6.3) prawn-core (0.6.3) prawn-format (0.2.3) prawn-layout (0.3.2) prawn-security (0.1.1) rack (1.1.0, 1.0.1) rack-mount (0.4.5) rack-test (0.5.3) rails (3.0.0.beta, 2.3.5, 2.2.2, 1.2.6) railties (3.0.0.beta) rake (0.8.7, 0.8.3) rake-compiler (0.6.0) RedCloth (4.1.1) reek (1.2.6) relevance-rcov (0.9.2.1) rmagick (2.12.2) roodi (2.1.0) rsl-stringex (1.0.3) rspec (1.2.9) rspec-rails (1.2.9) ruby-debug (0.10.3) ruby-debug-base (0.10.3) ruby-openid (2.1.2) ruby-yadis (0.3.4) ruby2ruby (1.2.4) ruby_parser (2.0.4) rubyforge (2.0.3) rubygems-update (1.3.6, 1.3.5) rubynode (0.1.5) searchlogic (2.3.9) sexp_processor (3.0.3) spree (0.9.4) sqlite3-ruby (1.2.5, 1.2.4) termios (0.9.4) test-unit (2.0.5) text-format (1.0.0) text-hyphen (1.0.0) thor (0.13.0) tlsmail (0.0.1) topfunky-gruff (0.3.5) treetop (1.4.3) tzinfo (0.3.16) xmpp4r (0.4)
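    The transcript suggests two parallel Ruby stacks: 'ruby script/server' runs the first ruby in PATH (/opt/local/bin/ruby, a MacPorts build whose bundled RubyGems is presumably older than 1.3.2), while gem and rails resolve to /usr/bin. A hedged way to confirm and work around it:

        # show which RubyGems each interpreter actually loads
        /opt/local/bin/ruby -rrubygems -e 'puts Gem::RubyGemsVersion'
        /usr/bin/ruby -rrubygems -e 'puts Gem::RubyGemsVersion'

        # one possible fix for the current shell: prefer the system ruby
        export PATH=/usr/bin:$PATH
        ruby script/server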

    Read the article

  • Failed upgrade of PHP on Ubuntu 12.04, error: Sub-process /usr/bin/dpkg returned an error code (1)

    - by DanielAttard
    I just tried to upgrade my version of PHP on Ubuntu 12.04 and now I have messed it up. First I did this:

        sudo add-apt-repository ppa:ondrej/php5-oldstable

    Then I did this:

        sudo apt-get update

    Then finally I did this:

        sudo apt-get install php5

    And now I am getting an error message about "Sub-process /usr/bin/dpkg returned an error code (1)". What have I done wrong? How can I fix this problem? Thanks. Here are the errors received:

        Do you want to continue [Y/n]? Y
        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
        Setting up libapache2-mod-php5 (5.4.28-1+deb.sury.org~precise+1) ...
        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
        dpkg: error processing libapache2-mod-php5 (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Setting up php5-cli (5.4.28-1+deb.sury.org~precise+1) ...
        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
        dpkg: error processing php5-cli (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of php5-curl:
         php5-curl depends on phpapi-20100525+lfs; however:
          Package phpapi-20100525+lfs is not installed.
          Package libapache2-mod-php5 which provides phpapi-20100525+lfs is not configured yet.
          Package php5-cli which provides phpapi-20100525+lfs is not configured yet.
        dpkg: error processing php5-curl (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of php5-gd:
         php5-gd depends on phpapi-20100525+lfs; however:
          Package phpapi-20100525+lfs is not installed.
          Package libapache2-mod-php5 which provides phpapi-20100525+lfs is not configured yet.
          Package php5-cli which provides phpapi-20100525+lfs is not configured yet.
        dpkg: error processing php5-gd (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of php5-mcrypt:
         php5-mcrypt depends on phpapi-20100525+lfs; however:
          Package phpapi-20100525+lfs is not installed.
          Package libapache2-mod-php5 which provides phpapi-20100525+lfs is not configured yet.
          Package php5-cli which provides phpapi-20100525+lfs is not configured yet.
        dpkg: error processing php5-mcrypt (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of php5-mysql:
         php5-mysql depends on phpapi-20100525+lfs; however:
          Package phpapi-20100525+lfs is not installed.
          Package libapache2-mod-php5 which provides phpapi-20100525+lfs is not configured yet.
          Package php5-cli which provides phpapi-20100525+lfs is not configured yet.
        dpkg: error processing php5-mysql (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of php5:
         php5 depends on libapache2-mod-php5 (>= 5.4.28-1+deb.sury.org~precise+1) | libapache2-mod-php5filter (>= 5.4.28-1+deb.sury.org~precise+1) | php5-cgi (>= 5.4.28-1+deb.sury.org~precise+1) | php5-fpm (>= 5.4.28-1+deb.sury.org~precise+1); however:
          Package libapache2-mod-php5 is not configured yet.
          Package libapache2-mod-php5filter is not installed.
          Package php5-cgi is not installed.
          Package php5-fpm is not installed.
        dpkg: error processing php5 (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         libapache2-mod-php5 php5-cli php5-curl php5-gd php5-mcrypt php5-mysql php5
        E: Sub-process /usr/bin/dpkg returned an error code (1)
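    The repeated "config.dat is locked by another process" lines suggest another package tool (or a stuck debconf frontend) was holding the debconf database while the postinst scripts ran; everything after that is follow-on breakage. A hedged recovery sequence:

        sudo fuser -v /var/cache/debconf/config.dat   # see what holds the lock
        # once nothing holds it (or after killing a stale holder):
        sudo dpkg --configure -a                      # re-run the failed postinst scripts
        sudo apt-get install -f                       # let apt resolve the leftovers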

    Read the article

  • Persistent static routes fail on MacOS 10.6.5 startup!

    - by verbalicious
    I'm unable to get static routes to persist a reboot on Mac OS 10.6.5. I've tried all of the methods prescribed in Google search results and previous posts on this site. I've tried manually creating a launchd daemon, and used RouteSplit's launchd daemon, to no avail. It's clear that the interface is not ready when these methods attempt to apply the route. The workstation in question is getting its IP from DHCP and probably hasn't gotten its DHCP lease when the command runs. We're able to apply the route by hand when logged in, but not through startup methods. Is there another way to apply this route by sneaking the command into something later, but before the login window appears to the user?

    Here is some relevant log info from system.log. You can see the "route: writing to routing socket: Network is unreachable" errors where my launchd script fires off. I've tried adding extra "sleep" and "ipconfig waitall" statements later in the script, but this doesn't fly.

        Dec 15 19:30:41 localhost com.apple.launchd[1]: *** launchd[1] has started up. ***
        Dec 15 19:30:45 localhost mDNSResponder[18]: mDNSResponder mDNSResponder-258.13 (Oct 8 2010 17:10:30) starting
        Dec 15 19:30:47 localhost configd[15]: bootp_session_transmit: bpf_write(en1) failed: Network is down (50)
        Dec 15 19:30:47 localhost configd[15]: DHCP en1: INIT transmit failed
        Dec 15 19:30:47 localhost configd[15]: network configuration changed.
        Dec 15 19:30:47 Administrators-MacBook-Pro configd[15]: setting hostname to "Administrators-MacBook-Pro.local"
        Dec 15 19:30:47 Administrators-MacBook-Pro blued[16]: Apple Bluetooth daemon started
        Dec 15 19:30:52 Administrators-MacBook-Pro syslog[67]: routes.sh: Starting RouteSplit
        Dec 15 19:30:53 Administrators-MacBook-Pro com.apple.usbmuxd[41]: usbmuxd-207 built for iTunesTenOne on Oct 19 2010 at 13:50:35, running 64 bit
        Dec 15 19:30:54 Administrators-MacBook-Pro /System/Library/CoreServices/loginwindow.app/Contents/MacOS/loginwindow[50]: Login Window Application Started
        Dec 15 19:30:55 Administrators-MacBook-Pro bootlog[61]: BOOT_TIME: 1292459441 0
        Dec 15 19:30:55 Administrators-MacBook-Pro syslog[86]: routes.sh: static route 192.168.0.0/23 192.168.2.2
        Dec 15 19:30:55 Administrators-MacBook-Pro net.routes.static[65]: route: writing to routing socket: Network is unreachable
        Dec 15 19:30:55 Administrators-MacBook-Pro net.routes.static[65]: add net 192.168.0.0: gateway 192.168.2.2: Network is unreachable
        Dec 15 19:30:57 Administrators-MacBook-Pro org.apache.httpd[38]: httpd: Could not reliably determine the server's fully qualified domain name, using Administrators-MacBook-Pro.local for ServerName
        Dec 15 19:30:58 Administrators-MacBook-Pro loginwindow[50]: Login Window Started Security Agent
        Dec 15 19:30:58 Administrators-MacBook-Pro WindowServer[89]: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged.
        Dec 15 19:30:58 Administrators-MacBook-Pro com.apple.WindowServer[89]: Wed Dec 15 19:30:58 Administrators-MacBook-Pro.local WindowServer[89] <Error>: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged.
        Dec 15 19:31:18 Administrators-MacBook-Pro configd[15]: network configuration changed.
        Dec 15 19:31:19 administrators-macbook-pro configd[15]: setting hostname to "administrators-macbook-pro.local"
        Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[121]: /usr/libexec/ntpd-wrapper: scutil key State:/Network/Global/DNS not present after 30 seconds
        Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: sntp options: a=2 v=1 e=0.100 E=5.000 P=2147483647.000
        Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: d=15 c=5 x=0 op=1 l=/var/run/sntp.pid f= time.apple.com
        Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: sntp: getaddrinfo(hostname, ntp) failed with nodename nor servname provided, or not known
        Dec 15 19:31:27 administrators-macbook-pro configd[15]: network configuration changed.
        Dec 15 19:31:27 Administrators-MacBook-Pro configd[15]: setting hostname to "Administrators-MacBook-Pro.local"
        Dec 15 19:31:27 Administrators-MacBook-Pro ntpd[37]: Cannot find existing interface for address 17.151.16.20
        Dec 15 19:31:27 Administrators-MacBook-Pro ntpd_initres[125]: ntpd indicates no data available!
        Dec 15 19:31:31 Administrators-MacBook-Pro sshd[128]: USER_PROCESS: 133 ttys000
        Dec 15 19:31:37 Administrators-MacBook-Pro sudo[138]: administrator : TTY=ttys000 ; PWD=/Users/administrator ; USER=root ; COMMAND=/usr/bin/less /var/log/system.log

    You can see the following line in /var/log/kernel.log that shows the en0 interface coming up:

        Dec 15 19:30:51 Administrators-MacBook-Pro kernel[0]: Ethernet [AppleBCM5701Ethernet]: Link up on en0, 1-Gigabit, Full-duplex, No flow-control, Debug [796d,0f01,0de1,0300,c1e1,3800]
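    One hedged workaround for the race: have the launchd-invoked script wait until DHCP has produced a default route before adding the static route (the network and gateway below mirror the log entries above):

        #!/bin/sh
        # loop until the system has a default route, then add the static route
        while ! route -n get default >/dev/null 2>&1; do
            sleep 2
        done
        route -n add -net 192.168.0.0/23 192.168.2.2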

    Read the article

  • PHP crashing (seg-fault) under mod_fcgi, apache

    - by Andras Gyomrey
    I've been programming a site using:

        Zend Framework 1.11.5 (complete MVC)
        PHP 5.3.6
        Apache 2.2.19
        CentOS 5.6 i686 virtuozzo on vps
        cPanel WHM 11.30.1 (build 4)
        Mysql 5.1.56-log
        Mysqli API 5.1.56

    The issue started here: http://stackoverflow.com/questions/6769515/php-programming-seg-fault. In brief, PHP is giving me random segmentation faults:

        [Wed Jul 20 17:45:34 2011] [error] mod_fcgid: process /usr/local/cpanel/cgi-sys/php5(11562) exit(communication error), get unexpected signal 11
        [Wed Jul 20 17:45:34 2011] [warn] [client 190.78.208.30] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Wed Jul 20 17:45:34 2011] [error] [client 190.78.208.30] Premature end of script headers: index.php

    About extensions: when I compile PHP with the "--enable-debug" flag, I have to disable this line:

        zend_extension="/usr/local/IonCube/ioncube_loader_lin_5.3.so"

    Otherwise, the server doesn't accept requests and I get a "The connection with the server was reset". It is possible that I have to disable eaccelerator too, for the same reason. I still don't get why apache sometimes manages to run with it and sometimes not:

        extension="eaccelerator.so"

    Anyway, after I get httpd running, seg-faults can occur randomly. If I don't compile PHP with the "--enable-debug" flag, I can get a PHP crash DETERMINISTICALLY:

        <?php
        class Admin_DbController extends Controller_BaseController
        {
            public function updateSqlDefinitionsAction()
            {
                $db = Zend_Registry::get('db');
                $row = $db->fetchRow("SHOW CREATE TABLE 222AFI");
            }
        }
        ?>

    BUT if I compile PHP with the "--enable-debug" flag, it's really hard to get this error. I must add some complexity to make it crash. I have to be making many parallel requests for a few seconds to get a crash:

        <?php
        class Admin_DbController extends Controller_BaseController
        {
            public function updateSqlDefinitionsAction()
            {
                $db = Zend_Registry::get('db');
                $tableList = $db->listTables();
                foreach ($tableList as $tableName) {
                    $row = $db->fetchRow("SHOW CREATE TABLE " . $db->quoteIdentifier($tableName));
                    file_put_contents(
                        DB_DEFINITIONS_PATH . '/' . $tableName . '.sql',
                        $row['Create Table'] . ';'
                    );
                }
            }
        }
        ?>

    Please notice this is the same script, but creating DDL for all tables in the database rather than for one. It seems that when PHP is heavily loaded (with extensions, and me making many parallel requests) is when it crashes.

    About starting httpd with "-X": I've tried. The thing is, it is already hard to make PHP crash with --enable-debug, and with the "-X" option (which only enables one child process) I can't make parallel requests. So I haven't been able to create a proper debug backtrace: https://bugs.php.net/bugs-generating-backtrace.php

    My concrete question is: what do I do to get a coredump?

        root@GWT4 [~]# httpd -V
        Server version: Apache/2.2.19 (Unix)
        Server built: Jul 20 2011 19:18:58
        Cpanel::Easy::Apache v3.4.2 rev9999
        Server's Module Magic Number: 20051115:28
        Server loaded: APR 1.4.5, APR-Util 1.3.12
        Compiled using: APR 1.4.5, APR-Util 1.3.12
        Architecture: 32-bit
        Server MPM: Prefork
          threaded: no
          forked: yes (variable process count)
        Server compiled with....
        -D APACHE_MPM_DIR="server/mpm/prefork"
        -D APR_HAS_SENDFILE
        -D APR_HAS_MMAP
        -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
        -D APR_USE_SYSVSEM_SERIALIZE
        -D APR_USE_PTHREAD_SERIALIZE
        -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
        -D APR_HAS_OTHER_CHILD
        -D AP_HAVE_RELIABLE_PIPED_LOGS
        -D DYNAMIC_MODULE_LIMIT=128
        -D HTTPD_ROOT="/usr/local/apache"
        -D SUEXEC_BIN="/usr/local/apache/bin/suexec"
        -D DEFAULT_PIDLOG="logs/httpd.pid"
        -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
        -D DEFAULT_LOCKFILE="logs/accept.lock"
        -D DEFAULT_ERRORLOG="logs/error_log"
        -D AP_TYPES_CONFIG_FILE="conf/mime.types"
        -D SERVER_CONFIG_FILE="conf/httpd.conf"
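    A hedged recipe for actually capturing a core from the crashing FastCGI child (CoreDumpDirectory is a standard Apache directive; the core_pattern path is just an example):

        # allow core files and give them a predictable location
        ulimit -c unlimited
        echo '/tmp/core.%p' > /proc/sys/kernel/core_pattern
        # in httpd.conf, also set:  CoreDumpDirectory /tmp
        # restart apache from this same shell, reproduce the crash, then
        # inspect the dump with the debug-built php binary:
        gdb /usr/local/cpanel/cgi-sys/php5 /tmp/core.<pid>

    One caveat worth stating as an assumption: some OpenVZ containers restrict core_pattern, in which case the host node has to allow it.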

    Read the article

  • Can someone help me install MySQL server please? This is bugging me....

    - by Alex
    $ sudo aptitude install mysql-server Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done The following NEW packages will be installed: libhtml-template-perl{a} mysql-server mysql-server-5.0{a} mysql-server-core-5.0{a} 0 packages upgraded, 4 newly installed, 0 to remove and 12 not upgraded. Need to get 0B/27.7MB of archives. After unpacking 91.1MB will be used. Do you want to continue? [Y/n/?] y Writing extended state information... Done Preconfiguring packages ... Selecting previously deselected package mysql-server-core-5.0. (Reading database ... 17022 files and directories currently installed.) Unpacking mysql-server-core-5.0 (from .../mysql-server-core-5.0_5.1.30really5.0.75-0ubuntu10.3_amd64.deb) ... Selecting previously deselected package mysql-server-5.0. Unpacking mysql-server-5.0 (from .../mysql-server-5.0_5.1.30really5.0.75-0ubuntu10.3_amd64.deb) ... Selecting previously deselected package libhtml-template-perl. Unpacking libhtml-template-perl (from .../libhtml-template-perl_2.9-1_all.deb) ... Selecting previously deselected package mysql-server. Unpacking mysql-server (from .../mysql-server_5.1.30really5.0.75-0ubuntu10.3_all.deb) ... Setting up mysql-server-core-5.0 (5.1.30really5.0.75-0ubuntu10.3) ... Setting up mysql-server-5.0 (5.1.30really5.0.75-0ubuntu10.3) ... * Stopping MySQL database server mysqld [ OK ] /var/lib/dpkg/info/mysql-server-5.0.postinst: line 144: /etc/mysql/conf.d/old_passwords.cnf: No such file or directory dpkg: error processing mysql-server-5.0 (--configure): subprocess post-installation script returned error exit status 1 Setting up libhtml-template-perl (2.9-1) ... dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.0; however: Package mysql-server-5.0 is not configured yet. dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: mysql-server-5.0 mysql-server E: Sub-process /usr/bin/dpkg returned an error code (1) A package failed to install. Trying to recover: Setting up mysql-server-5.0 (5.1.30really5.0.75-0ubuntu10.3) ... * Stopping MySQL database server mysqld [ OK ] /var/lib/dpkg/info/mysql-server-5.0.postinst: line 144: /etc/mysql/conf.d/old_passwords.cnf: No such file or directory dpkg: error processing mysql-server-5.0 (--configure): subprocess post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.0; however: Package mysql-server-5.0 is not configured yet. dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: mysql-server-5.0 mysql-server Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done Writing extended state information... Done Before I installed it, I ran this: sudo aptitude purge mysql-server mysql-server-5.0 This has happened before. I remember before, I did something with dpkg which fixed it.
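    Editor's note: the postinst is dying on a missing /etc/mysql/conf.d/old_passwords.cnf, most likely removed by the earlier purge. A minimal recovery sketch, on the assumption that an empty placeholder file satisfies the maintainer script:

        sudo mkdir -p /etc/mysql/conf.d
        sudo touch /etc/mysql/conf.d/old_passwords.cnf   # recreate the file postinst expects
        sudo dpkg --configure -a                         # re-run the failed configure steps
        sudo aptitude install mysql-server               # should now complete cleanly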

    Read the article

  • iptables not allowing MySQL connections to aliased IPs?

    - by Curtis
    I have a fairly simple iptables firewall on a server that provides MySQL services, but iptables seems to be giving me very inconsistent results. The default policy on the script is as follows: iptables -P INPUT DROP I can then make MySQL public with the following rule: iptables -A INPUT -p tcp --dport 3306 -j ACCEPT With this rule in place, I can connect to MySQL from any source IP to any destination IP on the server without a problem. However, when I try to restrict access to just three IPs by replacing the above line with the following, I run into trouble (xxx=masked octet): iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.XXX.XXX.184 -j ACCEPT iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.XXX.XXX.196 -j ACCEPT iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.XXX.XXX.251 -j ACCEPT Once the above rules are in place, the following happens: I can connect to the MySQL server from the .184, .196 and .251 hosts just fine as long as I am connecting to the MySQL server using its default IP address or an IP alias in the same subnet as the default IP address. I am unable to connect to MySQL using IP aliases that are assigned to the server from a different subnet than the server's default IP when I'm coming from the .184 or .196 hosts, but .251 works just fine. From the .184 or .196 hosts, a telnet attempt just hangs... # telnet 209.xxx.xxx.22 3306 Trying 209.xxx.xxx.22... If I remove the .251 line (making .196 the last rule added), the .196 host still cannot connect to MySQL using IP aliases (so it's not the order of the rules that is causing the inconsistent behavior). I know, this particular test was silly as it shouldn't matter what order these three rules are added in, but I figured someone might ask. If I switch back to the "public" rule, all hosts can connect to the MySQL server using either the default or aliased IPs (in either subnet): iptables -A INPUT -p tcp --dport 3306 -j ACCEPT The server is running in a CentOS 5.4 OpenVZ/Proxmox container (2.6.32-4-pve). And, just in case you prefer to see the problem rules in the context of the iptables script, here it is (xxx=masked octet): # Flush old rules, old custom tables /sbin/iptables --flush /sbin/iptables --delete-chain # Set default policies for all three default chains /sbin/iptables -P INPUT DROP /sbin/iptables -P FORWARD DROP /sbin/iptables -P OUTPUT ACCEPT # Enable free use of loopback interfaces /sbin/iptables -A INPUT -i lo -j ACCEPT /sbin/iptables -A OUTPUT -o lo -j ACCEPT # All TCP sessions should begin with SYN /sbin/iptables -A INPUT -p tcp !
--syn -m state --state NEW -j DROP # Accept inbound TCP packets (Do this *before* adding the 'blocked' chain) /sbin/iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT # Allow the server's own IP to connect to itself /sbin/iptables -A INPUT -i eth0 -s 208.xxx.xxx.178 -j ACCEPT # Add the 'blocked' chain *after* we've accepted established/related connections # so we remain efficient and only evaluate new/inbound connections /sbin/iptables -N BLOCKED /sbin/iptables -A INPUT -j BLOCKED # Accept inbound ICMP messages /sbin/iptables -A INPUT -p ICMP --icmp-type 8 -j ACCEPT /sbin/iptables -A INPUT -p ICMP --icmp-type 11 -j ACCEPT # ssh (private) /sbin/iptables -A INPUT -p tcp --dport 22 -m state --state NEW -s xxx.xxx.xxx.xxx -j ACCEPT # ftp (private) /sbin/iptables -A INPUT -p tcp --dport 21 -m state --state NEW -s xxx.xxx.xxx.xxx -j ACCEPT # www (public) /sbin/iptables -A INPUT -p tcp --dport 80 -j ACCEPT /sbin/iptables -A INPUT -p tcp --dport 443 -j ACCEPT # smtp (public) /sbin/iptables -A INPUT -p tcp --dport 25 -j ACCEPT /sbin/iptables -A INPUT -p tcp --dport 2525 -j ACCEPT # pop (public) /sbin/iptables -A INPUT -p tcp --dport 110 -j ACCEPT # mysql (private) /sbin/iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.xxx.xxx.184 -j ACCEPT /sbin/iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.xxx.xxx.196 -j ACCEPT /sbin/iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.xxx.xxx.251 -j ACCEPT Any ideas? Thanks in advance. :-)
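    Editor's note: a diagnostic sketch, not a confirmed fix — log what the INPUT chain is about to drop, so you can tell whether packets for the other-subnet aliases are reaching the container at all:

        # Append after the existing 3306 ACCEPT rules, just before the default DROP:
        /sbin/iptables -A INPUT -p tcp --dport 3306 -j LOG --log-prefix "MYSQL-DROP: " --log-level 4
        # Then retry from .184/.196 and watch:
        tail -f /var/log/messages | grep MYSQL-DROP
        # If the logged DST is the other-subnet alias, the packets arrive and the
        # DROP is local; if nothing is logged, suspect venet/alias routing in the
        # OpenVZ container rather than iptables.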

    Read the article

  • Linux pptp client stops working after several hours

    - by Aron Rotteveel
    Here's the situation: Setup: 1 Windows Server 2008 machine acting as a Domain Controller and RRAS server 1 CentOS machine in a datacentre located elsewhere PPTP client running on CentOS machine, connected to the DC via When I connect to the DC, everything is working fine. I have set up a static IP for the dialup connection in my RRAS server so that the CentOS machine is automatically assigned the IP 192.168.1.240. Inside the VPN, it is now possible to access this machine on the local IP address. Perfect. However, after several hours, it simply seems to stop working (i.e. I cannot ping to or from this machine on the local network). The strange thing is, however: The DC shows the VPN client as still being connected The CentOS machine shows the network interface as being up There are no entries in my /var/log/messages that indicate a problem Output from ifconfig: ppp0 Link encap:Point-to-Point Protocol inet addr:192.168.1.240 P-t-P:192.168.1.160 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1396 Metric:1 RX packets:43 errors:0 dropped:0 overruns:0 frame:0 TX packets:58 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:3 RX bytes:4511 (4.4 KiB) TX bytes:15071 (14.7 KiB) Output from route -n: 192.168.1.160 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0 192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 ppp0 I have the following in my ip-up.local: route add -net 192.168.1.0 netmask 255.255.255.0 dev ppp0 The situation can be easily fixed by issuing a killall pppd and re-connecting. However, I obviously do not want to do this every X hours or so. I have tried running pppd with both the debug and the kdebug flags but cannot find the cause of this problem. Currently, my ppp0 network interface seems to be running and the last log lines mentioning it are: Feb 19 14:10:40 graviton pppd[10934]: local IP address 192.168.1.240 Feb 19 14:10:40 graviton pppd[10934]: remote IP address 192.168.1.160 Feb 19 14:10:40 graviton pppd[10934]: Script /etc/ppp/ip-up started (pid 10952) Feb 19 14:10:40 graviton pppd[10934]: Script /etc/ppp/ip-up finished (pid 10952), status = 0x0 Feb 19 14:11:27 graviton pptp[10935]: anon log[decaps_gre:pptp_gre.c:414]: buffering packet 190 (expecting 189, lost or reordered) Feb 19 14:11:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Request received. Feb 19 14:11:37 graviton pptp[10942]: anon log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 6 'Echo-Reply' Feb 19 14:12:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Request received. Feb 19 14:12:37 graviton pptp[10942]: anon log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 6 'Echo-Reply' Feb 19 14:12:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received. Feb 19 14:13:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received. Feb 19 14:14:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received. Feb 19 14:15:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received. Feb 19 14:16:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received. Feb 19 14:19:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received. Feb 19 14:19:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:679]: no more Echo Reply/Request packets will be reported. I have enabled the persist option. The network interface is still running, but it is still impossible to send data through the VPN. Any help is appreciated.
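    Editor's note: until the root cause is found, the kill-and-redial workaround can be automated with a small cron-driven watchdog. A sketch — the peer name is a placeholder, and 192.168.1.160 is the tunnel's remote end taken from the ifconfig output above:

        #!/bin/bash
        # /usr/local/sbin/pptp-watchdog.sh — run e.g. every 5 minutes from /etc/crontab:
        #   */5 * * * * root /usr/local/sbin/pptp-watchdog.sh
        PEER="mydc"    # placeholder: the tunnel's name under /etc/ppp/peers/
        if ! ping -c 3 -W 5 -I ppp0 192.168.1.160 >/dev/null 2>&1; then
            logger "pptp-watchdog: tunnel unresponsive, restarting pppd"
            killall pppd
            sleep 5
            /usr/sbin/pppd call "$PEER"
        fi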

    Read the article

  • Redmine with Apache 2 + Passenger nightmare --- site is up and available, but Redmine doesn't execute

    - by CptSupermrkt
    I was determined to figure this out myself, but I've been at it for a total of more than 10 hours, and I just can't figure this out. First, let me detail my environment (which I cannot change): Server version: Apache/2.2.15 (Unix) Ruby version: ruby 1.9.3p448 Rails version: Rails 4.0.1 Passenger version: Phusion Passenger version 4.0.5 Redmine version: 2.3.3 I have followed the Redmine instructions all the way through the test webserver to check that installation was successful with this command: ruby script/rails server webrick -e production The roadblock which I cannot overcome is getting Apache and Passenger to interpret and properly serve Redmine. I have searched pretty much every possible link within the first 10 pages or so of Google results. Everywhere I go I come across conflicting/contradicting/outdated information. We have a "weird" setup with Apache (which I inherited and cannot change). Redmine needs to be served through SSL, but Apache already has another website it's serving through SSL called Twiki. By "weird", what I mean is that our file structure is entirely different from all the tutorials out there on this version of Apache which have directories like "available-sites" and such. Here are the abbreviated versions of some of our config files: /etc/httpd/conf/httpd.conf (the global configuration file --- note that NO VirtualHost is defined here): ServerRoot "/etc/httpd" ... LoadModule passenger_module /usr/local/pkg/ruby/1.9.3-p448/lib/ruby/gems/1.9.1/gems/passenger-4.0.5/libout/apache2/mod_passenger.so PassengerRoot /usr/local/pkg/ruby/1.9.3-p448/lib/ruby/gems/1.9.1/gems/passenger-4.0.5 PassengerDefaultRuby /usr/local/pkg/ruby/1.9.3-p448/bin/ruby Include conf.d/*.conf ... User apache Group apache ... DocumentRoot "/var/www/html" So just to clarify, the above httpd.conf file does NOT have a VirtualHost section. /etc/httpd/conf.d/ssl.conf (defines the VirtualHost for ssl): Listen 443 <VirtualHost _default_:443> SSLEngine on ... SSLCertificateFile /etc/pki/tls/certs/localhost.crt </VirtualHost> /etc/httpd/conf.d/twiki.conf (this works just fine --- note this does NOT define a VirtualHost): ScriptAlias /twiki/bin/ "/var/www/twiki/bin/" Alias /twiki/ "/var/www/twiki/" <Directory "/var/www/twiki/bin"> AllowOverride None Order Deny,Allow Deny from all AuthType Basic AuthName "our team" AuthBasicProvider ldap ...a lot of ldap and authorization stuff Options ExecCGI FollowSymLinks SetHandler cgi-script </Directory> /etc/httpd/conf.d/redmine.conf: Alias /redmine/ "/var/www/redmine/public/" <Directory "/var/www/redmine/public"> Options Indexes ExecCGI FollowSymLinks Order allow,deny Allow from all AllowOverride all </Directory> The amazing thing is that this doesn't completely NOT work: I can successfully open up https://someserver/redmine/ with SSL and the https://someserver/twiki/ site remains unaffected. This tells me that it IS possible to have two separate sites up with one SSL configuration, so I don't think that's the problem. The problem is that it opens up to the file index. I can navigate around my Redmine file structure, but no code ever gets executed. For example, there is a file included with Redmine called dispatch.fcgi in the public folder. https://someserver/redmine/dispatch.fcgi opens, but just as plain text code in the browser. As I understand it, in the case of using Passenger, CGI and FastCGI stuff is irrelevant/unused.
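    Editor's note: with only an Alias, Apache serves public/ as static files and Passenger never runs the app. The usual Passenger 4.x sub-URI deployment adds PassengerBaseURI and PassengerAppRoot; a sketch, assuming the /var/www/redmine paths from the question:

        # Replace /etc/httpd/conf.d/redmine.conf with something like:
        cat > /etc/httpd/conf.d/redmine.conf <<'EOF'
        Alias /redmine /var/www/redmine/public
        <Location /redmine>
            PassengerBaseURI /redmine
            PassengerAppRoot /var/www/redmine
        </Location>
        <Directory "/var/www/redmine/public">
            Options -MultiViews
            Order allow,deny
            Allow from all
        </Directory>
        EOF
        apachectl configtest && apachectl graceful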

    Read the article

  • Persistent static routes fail on MacOS 10.6.5 startup!

    - by verbalicious
    I'm unable to get static routes to persist a reboot on Mac OS 10.6.5. I've tried all of the methods prescribed in Google search results, and previous posts on this site. I've tried manually creating a launchd daemon, and used RouteSplit's launchd daemon to no avail. It's clear that the interface is not ready when these methods attempt to apply the route. This workstation in question is getting its IP from DHCP and probably hasn't gotten its DHCP lease when the command runs. We're able to apply the route by hand when logged in, but not through startup methods. Is there another way to apply this route by sneaking the command into something later, but before the login window appears to the user? Here is some relevant log info from system.log. You can see the "route: writing to routing socket: Network is unreachable" errors where my launchd script fires off. I've tried adding extra "sleep" and "ipconfig waitall" statements later in the script but this doesn't fly. Dec 15 19:30:41 localhost com.apple.launchd[1]: *** launchd[1] has started up. *** Dec 15 19:30:45 localhost mDNSResponder[18]: mDNSResponder mDNSResponder-258.13 (Oct 8 2010 17:10:30) starting Dec 15 19:30:47 localhost configd[15]: bootp_session_transmit: bpf_write(en1) failed: Network is down (50) Dec 15 19:30:47 localhost configd[15]: DHCP en1: INIT transmit failed Dec 15 19:30:47 localhost configd[15]: network configuration changed. Dec 15 19:30:47 Administrators-MacBook-Pro configd[15]: setting hostname to "Administrators-MacBook-Pro.local" Dec 15 19:30:47 Administrators-MacBook-Pro blued[16]: Apple Bluetooth daemon started Dec 15 19:30:52 Administrators-MacBook-Pro syslog[67]: routes.sh: Starting RouteSplit Dec 15 19:30:53 Administrators-MacBook-Pro com.apple.usbmuxd[41]: usbmuxd-207 built for iTunesTenOne on Oct 19 2010 at 13:50:35, running 64 bit Dec 15 19:30:54 Administrators-MacBook-Pro /System/Library/CoreServices/loginwindow.app/Contents/MacOS/loginwindow[50]: Login Window Application Started Dec 15 19:30:55 Administrators-MacBook-Pro bootlog[61]: BOOT_TIME: 1292459441 0 Dec 15 19:30:55 Administrators-MacBook-Pro syslog[86]: routes.sh: static route 192.168.0.0/23 192.168.2.2 Dec 15 19:30:55 Administrators-MacBook-Pro net.routes.static[65]: route: writing to routing socket: Network is unreachable Dec 15 19:30:55 Administrators-MacBook-Pro net.routes.static[65]: add net 192.168.0.0: gateway 192.168.2.2: Network is unreachable Dec 15 19:30:57 Administrators-MacBook-Pro org.apache.httpd[38]: httpd: Could not reliably determine the server's fully qualified domain name, using Administrators-MacBook-Pro.local for ServerName Dec 15 19:30:58 Administrators-MacBook-Pro loginwindow[50]: Login Window Started Security Agent Dec 15 19:30:58 Administrators-MacBook-Pro WindowServer[89]: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged. Dec 15 19:30:58 Administrators-MacBook-Pro com.apple.WindowServer[89]: Wed Dec 15 19:30:58 Administrators-MacBook-Pro.local WindowServer[89] <Error>: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged. Dec 15 19:31:18 Administrators-MacBook-Pro configd[15]: network configuration changed. 
    Dec 15 19:31:19 administrators-macbook-pro configd[15]: setting hostname to "administrators-macbook-pro.local" Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[121]: /usr/libexec/ntpd-wrapper: scutil key State:/Network/Global/DNS not present after 30 seconds Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: sntp options: a=2 v=1 e=0.100 E=5.000 P=2147483647.000 Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: d=15 c=5 x=0 op=1 l=/var/run/sntp.pid f= time.apple.com Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: sntp: getaddrinfo(hostname, ntp) failed with nodename nor servname provided, or not known Dec 15 19:31:27 administrators-macbook-pro configd[15]: network configuration changed. Dec 15 19:31:27 Administrators-MacBook-Pro configd[15]: setting hostname to "Administrators-MacBook-Pro.local" Dec 15 19:31:27 Administrators-MacBook-Pro ntpd[37]: Cannot find existing interface for address 17.151.16.20 Dec 15 19:31:27 Administrators-MacBook-Pro ntpd_initres[125]: ntpd indicates no data available! Dec 15 19:31:31 Administrators-MacBook-Pro sshd[128]: USER_PROCESS: 133 ttys000 Dec 15 19:31:37 Administrators-MacBook-Pro sudo[138]: administrator : TTY=ttys000 ; PWD=/Users/administrator ; USER=root ; COMMAND=/usr/bin/less /var/log/system.log You can see the following line in /var/log/kernel.log that shows the en0 interface coming up: Dec 15 19:30:51 Administrators-MacBook-Pro kernel[0]: Ethernet [AppleBCM5701Ethernet]: Link up on en0, 1-Gigabit, Full-duplex, No flow-control, Debug [796d,0f01,0de1,0300,c1e1,3800]
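    Editor's note: since the launchd job fires before DHCP finishes, one workaround is to have the launchd-run script wait until a default route exists before adding the static route. A sketch, using the route values from the log above (192.168.0.0/23 = netmask 255.255.254.0):

        #!/bin/bash
        # Wait up to ~60 seconds for DHCP to install a default route, then add ours.
        for i in $(seq 1 30); do
            route -n get default >/dev/null 2>&1 && break
            sleep 2
        done
        route -n add -net 192.168.0.0 -netmask 255.255.254.0 192.168.2.2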

    Read the article

  • How do I troubleshoot a "Bad Request" in Apache2?

    - by Nick
    I have a PHP application that loads for all URLs except the home page. Visiting "https://my.site.com/" produces a "Bad Request" error message. Any other URL, for example, "https://my.site.com/SomePage/" works just fine. It's only the home page that does not work. All pages use mod_rewrite and get routed through a single dispatch script, Director.php. Accessing Director.php directly also produces the "Bad Request" error. BUT- ALL of the other requests go through Director, and they all work just fine, (excluding the home page), so it can't be an issue with the Director.php script? OR can it? I'm not seeing anything in the Apache2 error log, and I'm not seeing any PHP errors in the PHP Error log. I've tried changing the first line of Director.php to read: echo 'test'; exit(); But I still get a "Bad Request". This is the rewrite log for a request to the home page: 123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da48b28/initial] (2) init rewrite engine with requested uri / 123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da48b28/initial] (3) applying pattern '^/([a-zA-Z0-9\-\_]+)/$' to uri '/' 123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da48b28/initial] (3) applying pattern '^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$' to uri '/' 123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da48b28/initial] (1) pass through / 123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da5a298/subreq] (2) init rewrite engine with requested uri /Director.php 123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da5a298/subreq] (2) rewrite '/Director.php' - '-[L,NC]' 123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da5a298/subreq] (3) applying pattern '^/([a-zA-Z0-9\-\_]+)/$' to uri '-[L,NC]' 123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da5a298/subreq] (3) applying pattern '^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$' to uri '-[L,NC]' 123.123.123.123 - - [18/Feb/2011:05:38:49 +0000] [my.site.com/sid#7f273d77cb80][rid#7f273da5a298/subreq] (2) local path result: -[L,NC] Apache2 Access Log my.site.com:443 123.123.123.123 - - [18/Feb/2011:05:44:19 +0000] "GET / HTTP/1.1" 400 3223 "-" "Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.8) Gecko/20100723 Ubuntu/10.04 (lucid) Firefox/3.6.8" Any ideas? I don't know what else to try? UPDATE: Here's my vhost conf: RewriteEngine On RewriteLog "/LiveWebs/mysite.com/rewrite.log" RewriteLogLevel 5 # Dont rewite Crons folder ReWriteRule ^/Crons/ - [L,NC] ReWriteRule ^/phpmyadmin - [L,NC] ReWriteRule .php$ -[L,NC] # this is the problem!! RewriteCond %{REQUEST_URI} !^/images/ [NC] RewriteRule ^/([a-zA-Z0-9\-\_]+)/$ /Director.php?rt=$1 [L,QSA] RewriteCond %{REQUEST_URI} !^/images/ [NC] RewriteRule ^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$ /Director.php?rt=$1&action=$2 [L,QSA] The problem is the line "ReWriteRule .php$ -[L,NC]". When I comment it out, the home page loads. The question is, how do I make URLS that actually end in .php go straight through (without breaking the home page)?
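    Editor's note: your own rewrite log points at the answer — the subrequest line rewrite '/Director.php' -> '-[L,NC]' shows mod_rewrite treating -[L,NC] as the substitution string, because there is no space between the dash and the flags; the literal path "-[L,NC]" is what produces the Bad Request. Separating the dash from the flags (and escaping the dot) should let .php URLs pass through without breaking the home page:

        # '-' alone means "no substitution"; the space before the bracket makes
        # [L,NC] parse as flags instead of as part of the target path.
        RewriteRule \.php$ - [L,NC]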

    Read the article
