Search Results

Search found 19217 results on 769 pages for 'log parser'.


  • Cocoa - simple collada parser

    - by madsonic
    I've been researching a bit on the net but didn't find any straightforward answer. I am looking for a simple way to load vertex and U/V data from a COLLADA .dae file. I don't need textures or normals for my application (for now), but I want it as lightweight and fast as possible. Any hints or existing solutions?

    Read the article

  • A MySQL schema parser in Java?

    - by Richard
    Hi all, does anyone know whether there is a Java library for parsing a MySQL schema? In code I want to be able to determine the tables and fields specified in a schema. Or am I going to have to write my own? Thanks, Richard.
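
    Not an answer about a specific Java library, but a rough sketch of the kind of extraction involved, written in TypeScript for illustration. It assumes a plain schema dump, the parseSchema name and regexes are made up for this example, and it ignores comments and most edge cases a real SQL parser would handle:

      // Sketch: pull table names and column names out of CREATE TABLE statements.
      function parseSchema(sql: string): Map<string, string[]> {
        const tables = new Map<string, string[]>();
        const tableRe = /CREATE\s+TABLE\s+`?(\w+)`?\s*\(([\s\S]*?)\)\s*(?:ENGINE|;)/gi;
        let m: RegExpExecArray | null;
        while ((m = tableRe.exec(sql)) !== null) {
          const [, name, body] = m;
          const columns = body
            .split(/,(?![^()]*\))/)              // split on commas that are not inside parentheses
            .map(part => part.trim())
            .filter(part => !/^(PRIMARY|UNIQUE|KEY|CONSTRAINT|FOREIGN|INDEX)\b/i.test(part))
            .map(part => part.split(/\s+/)[0].replace(/`/g, ""));
          tables.set(name, columns);
        }
        return tables;
      }

      // Example:
      // parseSchema("CREATE TABLE users (id INT, name VARCHAR(50), PRIMARY KEY (id));")
      // -> Map { "users" => ["id", "name"] }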

    Read the article

  • identifier token keyword antlr parser

    - by batman_for
    How do I handle the case where the token 'for' is used in two different situations in the language being parsed, i.e. both as a statement keyword and as a plain "parameter", as in the following example: echo for print example for i in {0..10..2} do echo "Welcome $i times" done Output: for print example Welcome 0 times Welcome 2 times Welcome 4 times Welcome 6 times Welcome 8 times Welcome 10 times Thanks.

    Read the article

  • how to check an ANTLR token is only used once or less in the parser

    - by Simon Kenyon Shepard
    In ANTLR, if I have a rule, for example: someRule : TOKENA TOKENB; it would accept: "tokena tokenb". If I would like TOKENA to be optional, I can say: someRule : TOKENA* TOKENB; then I can have "tokena tokenb" or "tokenb" or "tokena tokena tokenb", but this also means it can be repeated more than once. Is there any way I can say this token can appear once or not at all, but not more than once? So it would accept "tokena tokenb" or "tokenb" BUT NOT "tokena tokena tokenb"? Many thanks

    Read the article

  • small jquery parser

    - by Johua
    The input is a string (var str) that looks something like this: <<hello there//abcd//1234>>. I want to know how best to extract the information so that I end up with: var a = "hello there"; var b = "abcd"; var c = 1234; Case a: select everything after << until the first //. Case b: select everything after the first // until the second //. Case c: select everything after the second // until >>. Does anybody know a simple solution? THX
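
    A minimal sketch of one way to do this in plain JavaScript/TypeScript (a single regular expression rather than jQuery; it assumes the input always has the <<...>> wrapper and exactly two // separators, and the parseTriple name is made up for illustration):

      // Sketch: extract the three fields from a string like "<<hello there//abcd//1234>>".
      function parseTriple(str: string): { a: string; b: string; c: number } | null {
        const m = str.match(/^<<(.*?)\/\/(.*?)\/\/(.*?)>>$/);
        if (m === null) return null;            // input did not have the expected shape
        return { a: m[1], b: m[2], c: Number(m[3]) };
      }

      const parsed = parseTriple("<<hello there//abcd//1234>>");
      // parsed -> { a: "hello there", b: "abcd", c: 1234 }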

    Read the article

  • Looking for a C# code parser

    - by Blindy
    I'm looking for a set of classes (preferably in the .net framework) that will parse C# code and return a list of functions with parameters, classes with their methods, properties etc. Ideally it would provide all that's needed to build my own intellisense. I have a feeling something like this should be in the .net framework, given all the reflection stuff they offer, but if not then an open source alternative is good enough. What I'm trying to build is basically something like Snippet Compiler, but with a twist. I'm trying to figure out how to get the code dom first. I tried googling for this but I'm not sure what the correct term for this is so I came up empty.

    Read the article

  • I need a small parser (jquery) that knows if an input is URL:PORT format

    - by Johua
    You have a user input string from a textbox, for example: var strInput = $("#txtBox").val(); strInput now holds some string. I need to know if it is of this format: IP:PORT. Basically a user can input something like this: http://192.168.300.22:20000 1) First part (protocol): http:// always needs to be replaced by https:// 2) Second part (everything until the ":" sign): 192.168.200.22 (or www.google.com) 3) Third part (port): everything after ":" (example: 9999, 100000) Step I: TAKE THE INPUT var strInput = $("#txtBox").val(); Step II: PARSE THE INPUT Step III: results: var strProtocol = "https://"; var strIP = parsedIP; var strPORT = parsedPORT; So I need to know how to get the values (parsedIP and parsedPORT).
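
    A minimal sketch of step II in plain JavaScript/TypeScript (assuming the input always looks like protocol://host:port with no path; the parseEndpoint name is made up for illustration):

      // Sketch: split "http://192.168.300.22:20000" into protocol, host and port,
      // forcing the protocol to https:// as described in step 1.
      function parseEndpoint(strInput: string) {
        const m = strInput.match(/^(\w+):\/\/([^/:]+):(\d+)\/?$/);
        if (m === null) return null;            // not in the expected protocol://host:port form
        return { strProtocol: "https://", strIP: m[2], strPORT: m[3] };
      }

      // parseEndpoint("http://192.168.300.22:20000")
      // -> { strProtocol: "https://", strIP: "192.168.300.22", strPORT: "20000" }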

    Read the article

  • Move SQL Server transaction log to another disk

    - by Jim Lahman
    When restoring a database backup, SQL Server by default places the database files in the master database file directory. In this example, that location is L:\MSSQL10.CHTL\MSSQL\DATA, as shown by issuing sp_helpfile; hence the restored files for the database CHTL_L2_DB are in the same directory. Per SQL Server best practices, the log file should be on its own disk drive so that the database and log file can operate in a sequential manner and perform optimally. The steps to move the log file are as follows: record the location of the database files and the transaction log files; note the future destination of the transaction log file; get exclusive access to the database; detach the database; move the log file to the new location; re-attach the database; verify the new location of the transaction log.
    Record the location of the database files. To view the current location of the database files, use the system stored procedure sp_helpfile:
      use chtl_l2_db
      go
      sp_helpfile
      go
    Note the future destination of the transaction log file. The transaction log file will be moved to K:\MSSQLLog.
    Get exclusive access to the database. Alter the database access to single_user. If users are still connected to the database, remove them by using the with rollback immediate option. Note: if a query window is connected to the database when it is placed into single_user mode, you will be presented with a reconnection dialog box.
      alter database chtl_l2_db
      set single_user with rollback immediate
      go
    Detach the database. Detach the database so that Windows Explorer can be used to move the transaction log file:
      use master
      go
      sp_detach_db 'chtl_l2_db'
      go
    Re-attach the database. After copying the transaction log file to its new location, re-attach the database and then verify the new file locations with sp_helpfile:
      use master
      go
      sp_attach_db 'chtl_l2_db',
      'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB.MDF',
      'K:\MSSQLLog\CHTL_L2_DB_4.LDF',
      'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB_1.NDF',
      'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB_2.NDF',
      'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB_3.NDF'
      GO

    Read the article

  • Ubuntu Dependency Problem in Activity log Manager

    - by Incredible
    incredible@incredible-Inspiron-N5010:~$ sudo apt-get -f install [sudo] password for incredible: Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: activity-log-manager The following packages will be upgraded: activity-log-manager 1 upgraded, 0 newly installed, 0 to remove and 287 not upgraded. 1 not fully installed or removed. Need to get 0 B/60.3 kB of archives. After this operation, 29.7 kB disk space will be freed. Do you want to continue [Y/n]? y dpkg: dependency problems prevent configuration of activity-log-manager: activity-log-manager depends on activity-log-manager-common (= 0.9.4-0ubuntu3); however: Version of activity-log-manager-common on system is 0.9.4-0ubuntu3.1. activity-log-manager-control-center (0.9.4-0ubuntu3.1) breaks activity-log-manager (<< 0.9.4-0ubuntu3.1) and is installed. Version of activity-log-manager to be configured is 0.9.4-0ubuntu3. dpkg: error processing activity-log-manager (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: activity-log-manager E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Web log analyser with daily statistics per URL

    - by Mat
    Are there any good web server log analysis tools that can provide me with daily statistics on individual URLs? I guess I'm looking for something that can drill down into particular URLs and particular days rather than just a monthly summary report. The following don't seem to meet my needs, as they don't offer drilling down to get more detailed info: AWStats, Analog, Webalizer. (I'm running an nginx frontend into Apache, with nginx outputting 'combined' format logfiles, if it makes any difference.)

    Read the article

  • Binary Log Format in MySQL

    - by amritansu
    The reference manual for MySQL 5.6 states that "Some changes, however, still use the statement-based format. Examples include all DDL (data definition language) statements such as CREATE TABLE, ALTER TABLE, or DROP TABLE." Does this statement mean that even if we have ROW format for binary logs, all DDL statements will be logged in the binary log as statement-based? How does this affect replication? Kindly help me to understand this.

    Read the article

  • log shipping of biztalk database on SQL server 2008 standard edition

    - by Manjot
    Hi, I want to do log shipping for BizTalk databases on SQL Server 2008 Standard Edition (server A) to another SQL Server 2008 Standard Edition (server B). I was told that for BizTalk, log shipping is not like standard log shipping. I was able to find 2 links: http://msdn.microsoft.com/en-us/library/cc296836%28v=BTS.10%29.aspx http://msdn.microsoft.com/en-us/library/cc296741%28v=BTS.10%29.aspx but they are not talking about SQL 2008 servers. Can anyone please help with this? Thanks in advance

    Read the article

  • Could not start the event log service on Local Computer

    - by wcpro
    I'm getting a strange error on my Windows Server 2003 R2 Enterprise Edition (Service Pack 2) server: "Could not start the Event Log service on Local Computer. Error 1075: The dependency service does not exist or has been marked for deletion." Does anyone have an idea as to what could be causing this, or how I can remedy it?

    Read the article

  • OAS log files filling up hard drive

    - by Andrew Hampton
    We've had issues with log files for Oracle Application Server filling up the hard drive on our server. The files are in the /network/admin folder and are named server.log_XXXXX.trc and client.log_XXXXX.trc where XXXXX are 5 digits. The files are typically anywhere from 1-2MB in size but can be up to 100MB and thousands of them are created at a rate of about 5-10 per minute. Does anyone know how to disable these logs? Thanks!

    Read the article

  • Windows Event Log wrong Source column value

    - by O.O
    In the Event Viewer in Windows 7 there is a Source column that is set by my Windows Service application. The value is set to TOS, and usually when a log entry is associated with my application it has TOS as the Source column value. However, when the service fails to start (or some other kind of error occurs) I get a Source of one of the following values: Application Error, Service Control Manager, .NET Runtime. I don't understand why the value is not always TOS. Also, is it possible to force it to use TOS every time?

    Read the article

  • SQL Server 2000 -- Log Shipping reliability?

    - by Chris J
    I've been asked to look into log shipping for SQL Server 2000 (yes, 2000): something in my memory tells me that I looked at this years ago and there were question marks over its reliability. I'm trying to google stuff, but given the age of 2000 now, I haven't pulled up anything to confirm this -- most seem to say they're using it without problem -- so I just want to confirm whether I'm being delusional, or whether there were problems that no longer exist on a fully patched SP4 box. Cheers!

    Read the article

  • VNC - Is there any way to turn off logging/log files

    - by Ke
    Hi, I've looked everywhere for a solution to this. Is there any way to turn off this logging in VNC? VNC seems to be logging some large updates I'm doing in mysql and taking up my whole hard drive space. The only way to get rid of the log file is to reboot, which I would prefer not to have to do if possible. Cheers

    Read the article

  • How do I show a log analysis in Splunk?

    - by Vinod K
    I have made my Ubuntu server a centralized log server. I have Splunk installed in the /opt directory of the Ubuntu server. One of the other machines is sending logs to this Ubuntu server. In the Splunk interface I have added UDP port 514 under network ports, and have also added /var/log as a "file and directory" input. The client has also been configured properly. How do I show an analysis of the logs?

    Read the article

  • Using R to Analyze G1GC Log Files

    - by user12620111
    Introduction: Working in Oracle Platform Integration gives an engineer opportunities to work on a wide array of technologies. My team’s goal is to make Oracle applications run best on the Solaris/SPARC platform. When looking for bottlenecks in a modern application, one needs to be aware of not only how the CPUs and operating system are executing, but also network, storage, and in some cases, the Java Virtual Machine. I was recently presented with about 1.5 GB of Java Garbage First Garbage Collector log file data. If you’re not familiar with the subject, you might want to review Garbage First Garbage Collector Tuning by Monica Beckwith. The customer had been running Java HotSpot 1.6.0_31 to host a web application server. I was told that the Solaris/SPARC server was running a Java process launched using a command line that included the following flags: -d64 -Xms9g -Xmx9g -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:InitiatingHeapOccupancyPercent=80 -XX:PermSize=256m -XX:MaxPermSize=256m -XX:+PrintGC -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+PrintGCDateStamps -XX:+PrintFlagsFinal -XX:+DisableExplicitGC -XX:+UnlockExperimentalVMOptions -XX:ParallelGCThreads=8 Several sources on the internet indicate that if I were to print out the 1.5 GB of log files, it would require enough paper to fill the bed of a pickup truck. Of course, it would be fruitless to try to scan the log files by hand. Tools will be required to summarize the contents of the log files. Others have encountered large Java garbage collection log files. There are existing tools to analyze the log files: IBM’s GC toolkit, the chewiebug GCViewer, gchisto, HPjmeter. Instead of using one of the other tools listed, I decided to parse the log files with standard Unix tools, and analyze the data with R.
    Data Cleansing: The log files arrived in two different formats. I guess that the difference is that one set of log files was generated using a more verbose option, maybe -XX:+PrintHeapAtGC, and the other set of log files was generated without that option.
    Format 1: In some of the log files, the log files with the less verbose format, a single trace, i.e. the report of a single garbage collection event, looks like this: {Heap before GC invocations=12280 (full 61): garbage-first heap total 9437184K, used 7499918K [0xfffffffd00000000, 0xffffffff40000000, 0xffffffff40000000) region size 4096K, 1 young (4096K), 0 survivors (0K) compacting perm gen total 262144K, used 144077K [0xffffffff40000000, 0xffffffff50000000, 0xffffffff50000000) the space 262144K, 54% used [0xffffffff40000000, 0xffffffff48cb3758, 0xffffffff48cb3800, 0xffffffff50000000) No shared spaces configured. 2014-05-14T07:24:00.988-0700: 60586.353: [GC pause (young) 7324M->7320M(9216M), 0.1567265 secs] Heap after GC invocations=12281 (full 61): garbage-first heap total 9437184K, used 7496533K [0xfffffffd00000000, 0xffffffff40000000, 0xffffffff40000000) region size 4096K, 0 young (0K), 0 survivors (0K) compacting perm gen total 262144K, used 144077K [0xffffffff40000000, 0xffffffff50000000, 0xffffffff50000000) the space 262144K, 54% used [0xffffffff40000000, 0xffffffff48cb3758, 0xffffffff48cb3800, 0xffffffff50000000) No shared spaces configured. 
} A simple grep can be used to extract a summary: $ grep "\[ GC pause (young" g1gc.log 2014-05-13T13:24:35.091-0700: 3.109: [GC pause (young) 20M->5029K(9216M), 0.0146328 secs] 2014-05-13T13:24:35.440-0700: 3.459: [GC pause (young) 9125K->6077K(9216M), 0.0086723 secs] 2014-05-13T13:24:37.581-0700: 5.599: [GC pause (young) 25M->8470K(9216M), 0.0203820 secs] 2014-05-13T13:24:42.686-0700: 10.704: [GC pause (young) 44M->15M(9216M), 0.0288848 secs] 2014-05-13T13:24:48.941-0700: 16.958: [GC pause (young) 51M->20M(9216M), 0.0491244 secs] 2014-05-13T13:24:56.049-0700: 24.066: [GC pause (young) 92M->26M(9216M), 0.0525368 secs] 2014-05-13T13:25:34.368-0700: 62.383: [GC pause (young) 602M->68M(9216M), 0.1721173 secs] But that format wasn't easily read into R, so I needed to be a bit more tricky. I used the following Unix command to create a summary file that was easy for R to read. $ echo "SecondsSinceLaunch BeforeSize AfterSize TotalSize RealTime" $ grep "\[GC pause (young" g1gc.log | grep -v mark | sed -e 's/[A-SU-z\(\),]/ /g' -e 's/->/ /' -e 's/: / /g' | more SecondsSinceLaunch BeforeSize AfterSize TotalSize RealTime 2014-05-13T13:24:35.091-0700 3.109 20 5029 9216 0.0146328 2014-05-13T13:24:35.440-0700 3.459 9125 6077 9216 0.0086723 2014-05-13T13:24:37.581-0700 5.599 25 8470 9216 0.0203820 2014-05-13T13:24:42.686-0700 10.704 44 15 9216 0.0288848 2014-05-13T13:24:48.941-0700 16.958 51 20 9216 0.0491244 2014-05-13T13:24:56.049-0700 24.066 92 26 9216 0.0525368 2014-05-13T13:25:34.368-0700 62.383 602 68 9216 0.1721173 Format 2 In some of the log files, the log files with the more verbose format, a single trace, i.e. the report of a singe garbage collection event, was more complicated than Format 1. Here is a text file with an example of a single G1GC trace in the second format. As you can see, it is quite complicated. It is nice that there is so much information available, but the level of detail can be overwhelming. I wrote this awk script (download) to summarize each trace on a single line. #!/usr/bin/env awk -f BEGIN { printf("SecondsSinceLaunch IncrementalCount FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize\n") } ###################### # Save count data from lines that are at the start of each G1GC trace. 
# Each trace starts out like this: # {Heap before GC invocations=14 (full 0): # garbage-first heap total 9437184K, used 325496K [0xfffffffd00000000, 0xffffffff40000000, 0xffffffff40000000) ###################### /{Heap.*full/{ gsub ( "\\)" , "" ); nf=split($0,a,"="); split(a[2],b," "); getline; if ( match($0, "first") ) { G1GC=1; IncrementalCount=b[1]; FullCount=substr( b[3], 1, length(b[3])-1 ); } else { G1GC=0; } } ###################### # Pull out time stamps that are in lines with this format: # 2014-05-12T14:02:06.025-0700: 94.312: [GC pause (young), 0.08870154 secs] ###################### /GC pause/ { DateTime=$1; SecondsSinceLaunch=substr($2, 1, length($2)-1); } ###################### # Heap sizes are in lines that look like this: # [ 4842M->4838M(9216M)] ###################### /\[ .*]$/ { gsub ( "\\[" , "" ); gsub ( "\ \]" , "" ); gsub ( "->" , " " ); gsub ( "\\( " , " " ); gsub ( "\ \)" , " " ); split($0,a," "); if ( split(a[1],b,"M") > 1 ) {BeforeSize=b[1]*1024;} if ( split(a[1],b,"K") > 1 ) {BeforeSize=b[1];} if ( split(a[2],b,"M") > 1 ) {AfterSize=b[1]*1024;} if ( split(a[2],b,"K") > 1 ) {AfterSize=b[1];} if ( split(a[3],b,"M") > 1 ) {TotalSize=b[1]*1024;} if ( split(a[3],b,"K") > 1 ) {TotalSize=b[1];} } ###################### # Emit an output line when you find input that looks like this: # [Times: user=1.41 sys=0.08, real=0.24 secs] ###################### /\[Times/ { if (G1GC==1) { gsub ( "," , "" ); split($2,a,"="); UserTime=a[2]; split($3,a,"="); SysTime=a[2]; split($4,a,"="); RealTime=a[2]; print DateTime,SecondsSinceLaunch,IncrementalCount,FullCount,UserTime,SysTime,RealTime,BeforeSize,AfterSize,TotalSize; G1GC=0; } } The resulting summary is about 25X smaller that the original file, but still difficult for a human to digest. SecondsSinceLaunch IncrementalCount FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize ... 2014-05-12T18:36:34.669-0700: 3985.744 561 0 0.57 0.06 0.16 1724416 1720320 9437184 2014-05-12T18:36:34.839-0700: 3985.914 562 0 0.51 0.06 0.19 1724416 1720320 9437184 2014-05-12T18:36:35.069-0700: 3986.144 563 0 0.60 0.04 0.27 1724416 1721344 9437184 2014-05-12T18:36:35.354-0700: 3986.429 564 0 0.33 0.04 0.09 1725440 1722368 9437184 2014-05-12T18:36:35.545-0700: 3986.620 565 0 0.58 0.04 0.17 1726464 1722368 9437184 2014-05-12T18:36:35.726-0700: 3986.801 566 0 0.43 0.05 0.12 1726464 1722368 9437184 2014-05-12T18:36:35.856-0700: 3986.930 567 0 0.30 0.04 0.07 1726464 1723392 9437184 2014-05-12T18:36:35.947-0700: 3987.023 568 0 0.61 0.04 0.26 1727488 1723392 9437184 2014-05-12T18:36:36.228-0700: 3987.302 569 0 0.46 0.04 0.16 1731584 1724416 9437184 Reading the Data into R Once the GC log data had been cleansed, either by processing the first format with the shell script, or by processing the second format with the awk script, it was easy to read the data into R. g1gc.df = read.csv("summary.txt", row.names = NULL, stringsAsFactors=FALSE,sep="") str(g1gc.df) ## 'data.frame': 8307 obs. of 10 variables: ## $ row.names : chr "2014-05-12T14:00:32.868-0700:" "2014-05-12T14:00:33.179-0700:" "2014-05-12T14:00:33.677-0700:" "2014-05-12T14:00:35.538-0700:" ... ## $ SecondsSinceLaunch: num 1.16 1.47 1.97 3.83 6.1 ... ## $ IncrementalCount : int 0 1 2 3 4 5 6 7 8 9 ... ## $ FullCount : int 0 0 0 0 0 0 0 0 0 0 ... ## $ UserTime : num 0.11 0.05 0.04 0.21 0.08 0.26 0.31 0.33 0.34 0.56 ... ## $ SysTime : num 0.04 0.01 0.01 0.05 0.01 0.06 0.07 0.06 0.07 0.09 ... ## $ RealTime : num 0.02 0.02 0.01 0.04 0.02 0.04 0.05 0.04 0.04 0.06 ... 
## $ BeforeSize : int 8192 5496 5768 22528 24576 43008 34816 53248 55296 93184 ... ## $ AfterSize : int 1400 1672 2557 4907 7072 14336 16384 18432 19456 21504 ... ## $ TotalSize : int 9437184 9437184 9437184 9437184 9437184 9437184 9437184 9437184 9437184 9437184 ... head(g1gc.df) ## row.names SecondsSinceLaunch IncrementalCount ## 1 2014-05-12T14:00:32.868-0700: 1.161 0 ## 2 2014-05-12T14:00:33.179-0700: 1.472 1 ## 3 2014-05-12T14:00:33.677-0700: 1.969 2 ## 4 2014-05-12T14:00:35.538-0700: 3.830 3 ## 5 2014-05-12T14:00:37.811-0700: 6.103 4 ## 6 2014-05-12T14:00:41.428-0700: 9.720 5 ## FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize ## 1 0 0.11 0.04 0.02 8192 1400 9437184 ## 2 0 0.05 0.01 0.02 5496 1672 9437184 ## 3 0 0.04 0.01 0.01 5768 2557 9437184 ## 4 0 0.21 0.05 0.04 22528 4907 9437184 ## 5 0 0.08 0.01 0.02 24576 7072 9437184 ## 6 0 0.26 0.06 0.04 43008 14336 9437184 Basic Statistics Once the data has been read into R, simple statistics are very easy to generate. All of the numbers from high school statistics are available via simple commands. For example, generate a summary of every column: summary(g1gc.df) ## row.names SecondsSinceLaunch IncrementalCount FullCount ## Length:8307 Min. : 1 Min. : 0 Min. : 0.0 ## Class :character 1st Qu.: 9977 1st Qu.:2048 1st Qu.: 0.0 ## Mode :character Median :12855 Median :4136 Median : 12.0 ## Mean :12527 Mean :4156 Mean : 31.6 ## 3rd Qu.:15758 3rd Qu.:6262 3rd Qu.: 61.0 ## Max. :55484 Max. :8391 Max. :113.0 ## UserTime SysTime RealTime BeforeSize ## Min. :0.040 Min. :0.0000 Min. : 0.0 Min. : 5476 ## 1st Qu.:0.470 1st Qu.:0.0300 1st Qu.: 0.1 1st Qu.:5137920 ## Median :0.620 Median :0.0300 Median : 0.1 Median :6574080 ## Mean :0.751 Mean :0.0355 Mean : 0.3 Mean :5841855 ## 3rd Qu.:0.920 3rd Qu.:0.0400 3rd Qu.: 0.2 3rd Qu.:7084032 ## Max. :3.370 Max. :1.5600 Max. :488.1 Max. :8696832 ## AfterSize TotalSize ## Min. : 1380 Min. :9437184 ## 1st Qu.:5002752 1st Qu.:9437184 ## Median :6559744 Median :9437184 ## Mean :5785454 Mean :9437184 ## 3rd Qu.:7054336 3rd Qu.:9437184 ## Max. :8482816 Max. :9437184 Q: What is the total amount of User CPU time spent in garbage collection? sum(g1gc.df$UserTime) ## [1] 6236 As you can see, less than two hours of CPU time was spent in garbage collection. Is that too much? To find the percentage of time spent in garbage collection, divide the number above by total_elapsed_time*CPU_count. In this case, there are a lot of CPU’s and it turns out the the overall amount of CPU time spent in garbage collection isn’t a problem when viewed in isolation. When calculating rates, i.e. events per unit time, you need to ask yourself if the rate is homogenous across the time period in the log file. Does the log file include spikes of high activity that should be separately analyzed? Averaging in data from nights and weekends with data from business hours may alias problems. If you have a reason to suspect that the garbage collection rates include peaks and valleys that need independent analysis, see the “Time Series” section, below. Q: How much garbage is collected on each pass? The amount of heap space that is recovered per GC pass is surprisingly low: At least one collection didn’t recover any data. (“Min.=0”) 25% of the passes recovered 3MB or less. (“1st Qu.=3072”) Half of the GC passes recovered 4MB or less. (“Median=4096”) The average amount recovered was 56MB. (“Mean=56390”) 75% of the passes recovered 36MB or less. (“3rd Qu.=36860”) At least one pass recovered 2GB. 
(“Max.=2121000”) g1gc.df$Delta = g1gc.df$BeforeSize - g1gc.df$AfterSize summary(g1gc.df$Delta) ## Min. 1st Qu. Median Mean 3rd Qu. Max. ## 0 3070 4100 56400 36900 2120000 Q: What is the maximum User CPU time for a single collection? The worst garbage collection (“Max.”) is many standard deviations away from the mean. The data appears to be right skewed. summary(g1gc.df$UserTime) ## Min. 1st Qu. Median Mean 3rd Qu. Max. ## 0.040 0.470 0.620 0.751 0.920 3.370 sd(g1gc.df$UserTime) ## [1] 0.3966 Basic Graphics Once the data is in R, it is trivial to plot the data with formats including dot plots, line charts, bar charts (simple, stacked, grouped), pie charts, boxplots, scatter plots histograms, and kernel density plots. Histogram of User CPU Time per Collection I don't think that this graph requires any explanation. hist(g1gc.df$UserTime, main="User CPU Time per Collection", xlab="Seconds", ylab="Frequency") Box plot to identify outliers When the initial data is viewed with a box plot, you can see the one crazy outlier in the real time per GC. Save this data point for future analysis and drop the outlier so that it’s not throwing off our statistics. Now the box plot shows many outliers, which will be examined later, using times series analysis. Notice that the scale of the x-axis changes drastically once the crazy outlier is removed. par(mfrow=c(2,1)) boxplot(g1gc.df$UserTime,g1gc.df$SysTime,g1gc.df$RealTime, main="Box Plot of Time per GC\n(dominated by a crazy outlier)", names=c("usr","sys","elapsed"), xlab="Seconds per GC", ylab="Time (Seconds)", horizontal = TRUE, outcol="red") crazy.outlier.df=g1gc.df[g1gc.df$RealTime > 400,] g1gc.df=g1gc.df[g1gc.df$RealTime < 400,] boxplot(g1gc.df$UserTime,g1gc.df$SysTime,g1gc.df$RealTime, main="Box Plot of Time per GC\n(crazy outlier excluded)", names=c("usr","sys","elapsed"), xlab="Seconds per GC", ylab="Time (Seconds)", horizontal = TRUE, outcol="red") box(which = "outer", lty = "solid") Here is the crazy outlier for future analysis: crazy.outlier.df ## row.names SecondsSinceLaunch IncrementalCount ## 8233 2014-05-12T23:15:43.903-0700: 20741 8316 ## FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize ## 8233 112 0.55 0.42 488.1 8381440 8235008 9437184 ## Delta ## 8233 146432 R Time Series Data To analyze the garbage collection as a time series, I’ll use Z’s Ordered Observations (zoo). 
“zoo is the creator for an S3 class of indexed totally ordered observations which includes irregular time series.” require(zoo) ## Loading required package: zoo ## ## Attaching package: 'zoo' ## ## The following objects are masked from 'package:base': ## ## as.Date, as.Date.numeric head(g1gc.df[,1]) ## [1] "2014-05-12T14:00:32.868-0700:" "2014-05-12T14:00:33.179-0700:" ## [3] "2014-05-12T14:00:33.677-0700:" "2014-05-12T14:00:35.538-0700:" ## [5] "2014-05-12T14:00:37.811-0700:" "2014-05-12T14:00:41.428-0700:" options("digits.secs"=3) times=as.POSIXct( g1gc.df[,1], format="%Y-%m-%dT%H:%M:%OS%z:") g1gc.z = zoo(g1gc.df[,-c(1)], order.by=times) head(g1gc.z) ## SecondsSinceLaunch IncrementalCount FullCount ## 2014-05-12 17:00:32.868 1.161 0 0 ## 2014-05-12 17:00:33.178 1.472 1 0 ## 2014-05-12 17:00:33.677 1.969 2 0 ## 2014-05-12 17:00:35.538 3.830 3 0 ## 2014-05-12 17:00:37.811 6.103 4 0 ## 2014-05-12 17:00:41.427 9.720 5 0 ## UserTime SysTime RealTime BeforeSize AfterSize ## 2014-05-12 17:00:32.868 0.11 0.04 0.02 8192 1400 ## 2014-05-12 17:00:33.178 0.05 0.01 0.02 5496 1672 ## 2014-05-12 17:00:33.677 0.04 0.01 0.01 5768 2557 ## 2014-05-12 17:00:35.538 0.21 0.05 0.04 22528 4907 ## 2014-05-12 17:00:37.811 0.08 0.01 0.02 24576 7072 ## 2014-05-12 17:00:41.427 0.26 0.06 0.04 43008 14336 ## TotalSize Delta ## 2014-05-12 17:00:32.868 9437184 6792 ## 2014-05-12 17:00:33.178 9437184 3824 ## 2014-05-12 17:00:33.677 9437184 3211 ## 2014-05-12 17:00:35.538 9437184 17621 ## 2014-05-12 17:00:37.811 9437184 17504 ## 2014-05-12 17:00:41.427 9437184 28672 Example of Two Benchmark Runs in One Log File The data in the following graph is from a different log file, not the one of primary interest to this article. I’m including this image because it is an example of idle periods followed by busy periods. It would be uninteresting to average the rate of garbage collection over the entire log file period. More interesting would be the rate of garbage collect in the two busy periods. Are they the same or different? Your production data may be similar, for example, bursts when employees return from lunch and idle times on weekend evenings, etc. Once the data is in an R Time Series, you can analyze isolated time windows. Clipping the Time Series data Flashing back to our test case… Viewing the data as a time series is interesting. You can see that the work intensive time period is between 9:00 PM and 3:00 AM. Lets clip the data to the interesting period:     par(mfrow=c(2,1)) plot(g1gc.z$UserTime, type="h", main="User Time per GC\nTime: Complete Log File", xlab="Time of Day", ylab="CPU Seconds per GC", col="#1b9e77") clipped.g1gc.z=window(g1gc.z, start=as.POSIXct("2014-05-12 21:00:00"), end=as.POSIXct("2014-05-13 03:00:00")) plot(clipped.g1gc.z$UserTime, type="h", main="User Time per GC\nTime: Limited to Benchmark Execution", xlab="Time of Day", ylab="CPU Seconds per GC", col="#1b9e77") box(which = "outer", lty = "solid") Cumulative Incremental and Full GC count Here is the cumulative incremental and full GC count. When the line is very steep, it indicates that the GCs are repeating very quickly. Notice that the scale on the Y axis is different for full vs. incremental. plot(clipped.g1gc.z[,c(2:3)], main="Cumulative Incremental and Full GC count", xlab="Time of Day", col="#1b9e77") GC Analysis of Benchmark Execution using Time Series data In the following series of 3 graphs: The “After Size” show the amount of heap space in use after each garbage collection. Many Java objects are still referenced, i.e. 
alive, during each garbage collection. This may indicate that the application has a memory leak, or may indicate that the application has a very large memory footprint. Typically, an application's memory footprint plateaus in the early stage of execution. One would expect this graph to have a flat top. The steep decline in the heap space may indicate that the application crashed after 2:00. The second graph shows that the outliers in real execution time, discussed above, occur near 2:00, when the Java heap seems to be quite full. The third graph shows that Full GCs are infrequent during the first few hours of execution. The rate of Full GCs (the slope of the cumulative Full GC line) changes near midnight.   plot(clipped.g1gc.z[,c("AfterSize","RealTime","FullCount")], xlab="Time of Day", col=c("#1b9e77","red","#1b9e77")) GC Analysis of heap recovered Each GC trace includes the amount of heap space in use before and after the individual GC event. During garbage collection, unreferenced objects are identified, the space holding the unreferenced objects is freed, and thus, the difference in before and after usage indicates how much space has been freed. The following box plot and bar chart both demonstrate the same point - the amount of heap space freed per garbage collection is surprisingly low. par(mfrow=c(2,1)) boxplot(as.vector(clipped.g1gc.z$Delta), main="Amount of Heap Recovered per GC Pass", xlab="Size in KB", horizontal = TRUE, col="red") hist(as.vector(clipped.g1gc.z$Delta), main="Amount of Heap Recovered per GC Pass", xlab="Size in KB", breaks=100, col="red") box(which = "outer", lty = "solid") This graph is the most interesting. The dark blue area shows how much heap is occupied by referenced Java objects. This represents memory that holds live data. The red fringe at the top shows how much data was recovered after each garbage collection. barplot(clipped.g1gc.z[,c("AfterSize","Delta")], col=c("#7570b3","#e7298a"), xlab="Time of Day", border=NA) legend("topleft", c("Live Objects","Heap Recovered on GC"), fill=c("#7570b3","#e7298a")) box(which = "outer", lty = "solid") When I discuss the data in the log files with the customer, I will ask for an explanation for the large amount of referenced data resident in the Java heap. There are two possibilities: There is a memory leak and the amount of space required to hold referenced objects will continue to grow, limited only by the maximum heap size. After the maximum heap size is reached, the JVM will throw an “Out of Memory” exception every time that the application tries to allocate a new object. If this is the case, the application needs to be debugged to identify why old objects are referenced when they are no longer needed. The application has a legitimate requirement to keep a large amount of data in memory. The customer may want to further increase the maximum heap size. Another possible solution would be to partition the application across multiple cluster nodes, where each node has responsibility for managing a unique subset of the data. Conclusion In conclusion, R is a very powerful tool for the analysis of Java garbage collection log files. The primary difficulty is data cleansing so that information can be read into an R data frame. Once the data has been read into R, a rich set of tools may be used for thorough evaluation.

    Read the article

  • How to understand these lines in apache.log

    - by chefnelone
    I just get 19000 lines like these in the apache.log file for my site example.com. My hosting provider shut down the hosting and notified me that I need to avoid to activate my hosting again. I understand that I got a big amount of visits but I don't know how to avoid this. 88.190.47.233 - - [27/Jun/2013:09:51:34 +0200] "GET / HTTP/1.0" 403 389 "http://example.com/" "Opera/9.80 (Windows NT 6.1; U; ru) Presto/2.10.289 Version/12.02" 417 88.190.47.233 - - [27/Jun/2013:09:51:34 +0200] "GET / HTTP/1.0" 403 389 "http://example.com/" "Opera/9.80 (Windows NT 6.1; U; ru) Presto/2.10.289 Version/12.02" 417 175.44.28.155 - - [27/Jun/2013:09:51:44 +0200] "GET /en/user/register HTTP/1.1" 403 503 "http://example.com/en/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1;)" 248 175.44.29.140 - - [27/Jun/2013:09:53:19 +0200] "GET /en/node/1557?page=2 HTTP/1.0" 403 517 "http://example.com/en/node/1557?page=2" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.12 Safari/535.11" 491 These are the lines from apache-error.log. There are more than 35000 lines like this. [Thu Jun 27 09:50:58 2013] [error] [client 5.39.19.183] (13)Permission denied: access to /index.php denied, referer: http://example.com/ [Thu Jun 27 09:51:03 2013] [error] [client 125.112.29.105] (13)Permission denied: access to /index.php denied, referer: http://example.com/en/ [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.php denied, referer: http://example.com/en/node/1557?page=1#comment-701 [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.php denied, referer: http://example.com/en/node/1557?page=1#comment-701 [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.html denied, referer: http://example.com/en/node/1557?page=1#comment-701 [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.htm denied, referer: http://example.com/en/node/1557?page=1#comment-701 [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.php denied, referer: http://example.com/ [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.html denied, referer: http://example.com/ [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.htm denied, referer: http://example.com/ [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.php denied, referer: http://example.com/ [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.html denied, referer: http://example.com/ [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.htm denied, referer: http://example.com/ [Thu Jun 27 09:51:44 2013] [error] [client 175.44.28.155] (13)Permission denied: access to /index.php denied, referer: http://example.com/en/ [Thu Jun 27 09:53:19 2013] [error] [client 175.44.29.140] (13)Permission denied: access to /index.php denied, referer: http://example.com/en/node/1557?page=2 [Thu Jun 27 09:53:20 2013] [error] [client 175.44.29.140] (13)Permission denied: access to /index.php denied, referer: http://example.com/en/node/1557?page=2 [Thu Jun 27 09:53:20 2013] [error] [client 175.44.29.140] (13)Permission denied: access to /index.html denied, referer: http://example.com/en/node/1557?page=2 [Thu Jun 27 09:53:20 2013] 
[error] [client 175.44.29.140] (13)Permission denied: access to /index.htm denied, referer: http://example.com/en/node/1557?page=2 [Thu Jun 27 09:53:21 2013] [error] [client 175.44.29.140] (13)Permission denied: access to /index.php denied, referer: http://example.com/ [Thu Jun 27 09:53:21 2013] [error] [client 175.44.29.140] (13)Permission denied: access to /index.html denied, referer: http://example.com/ [Thu Jun 27 09:53:21 2013] [error] [client 175.44.29.140] (13)Permission denied: access to /index.htm denied, referer: http://example.com/ [Thu Jun 27 09:53:22 2013] [error] [client 175.44.29.140] (13)Permission denied: access to /index.php denied, referer: http://example.com/ [Thu Jun 27 09:53:22 2013] [error] [client 175.44.29.140] (13)Permission denied: access to /index.html denied, referer: http://example.com/ [Thu Jun 27 09:53:22 2013] [error] [client 175.44.29.140] (13)Permission denied: access to /index.htm denied, referer: http://example.com/ [Thu Jun 27 09:56:53 2013] [error] [client 113.246.6.147] (13)Permission denied: access to /index.php denied, referer: http://example.com/en/ [Thu Jun 27 09:58:58 2013] [error] [client 108.62.71.180] (13)Permission denied: access to /index.php denied, referer: http://example.com/
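
    Each 'combined' access log line above starts with the client IP, followed by two identity fields (shown as -), the timestamp, the request, the status code (403, i.e. forbidden), the response size, the referer and the user agent. A minimal sketch in TypeScript (not a hardened log parser; apache.log is assumed to be the access log shown above) that counts requests per client IP, which makes it easy to see which addresses dominate the traffic:

      import { readFileSync } from "fs";

      // Tally requests per client IP from an Apache access log in "combined" format.
      // Assumes the client IP is the first whitespace-separated field on every line.
      const counts = new Map<string, number>();
      for (const line of readFileSync("apache.log", "utf8").split("\n")) {
        const ip = line.split(" ")[0];
        if (ip) counts.set(ip, (counts.get(ip) ?? 0) + 1);
      }

      // Print the ten busiest clients, e.g. "88.190.47.233 5123".
      [...counts.entries()]
        .sort((a, b) => b[1] - a[1])
        .slice(0, 10)
        .forEach(([ip, n]) => console.log(ip, n));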

    Read the article

  • Windows 7 CHKDSK log - What is "Internal Info"?

    - by Ron Klein
    If I run Disk Scan (CHKDSK) on Windows 7, I get the log in the event viewer. If I look inside it, I can see some kind of a binary dump: Internal Info: 00 4f 05 00 53 4a 05 00 ec 46 09 00 00 00 00 00 .O..SJ...F...... fa 03 00 00 5c 00 00 00 00 00 00 00 00 00 00 00 ....\........... 48 93 42 00 50 01 41 00 f8 1f 41 00 00 00 41 00 H.B.P.A...A...A. Is there any meaningful information in that field, other than debug info for the programmers who developed this tool?

    Read the article

  • SQL log shipping for reporting

    - by Patrick J Collins
    I would like to create a read-only copy of my SQL Server 2008 database on a secondary server for reporting and analysis. I've been testing log shipping, configured to run every 5 minutes or so. Alas, there appears to be a stumbling block, for exclusive access is required on the target database during the restore, which in turn requires killing all active connections. This is far from ideal, especially if a user is in the middle of running a report. Any better suggestions? Edit : I'm doing this on the Express edition.

    Read the article

  • transaction log shipping sql server 2005 to 2008

    - by Andrew Jahn
    I have a reporting setup with SSRS on our SQL Server 2005 database. Because SQL Server 2008 is not supported by the main program which populates our database, we are stuck with 2005 on our prod database. Unfortunately, when I run our weekly check reports the web interface constantly times out because the server can't do the conversion to PDF. I've read that SQL Server 2008's SSRS is a lot better with memory management. I was wondering if I can do some kind of transaction log shipping or publication/subscription from 2005 to 2008? Am I chasing a dream here? Currently I have to open up the SSRS project in Visual Studio and run the reports inside, because it never times out when doing the PDF conversion there; it only times out if I try to run it through the SSRS web interface.

    Read the article

  • Why are there unknown URLs in router log?

    - by Martin
    I recently looked at my router log. Why do I see a lot of requests that I didn't send, originating from a computer in my home network? They do not look like third-party advertisements / images embedded in a page. The requests have patterns, such as: top-visitor.com/look.php www.dottip.com/search/result.php?aff=8755&req=nickelodeon+games www.placeca.com/search/result.php?aff=3778&req=wireless+cell+phone www.bb5a.com/search.php?username=3348&keywords=flights www.blazerbox.com/search.php?username=2341&keywords=colorado+springs+real+estate www.freeautosource.com/search.php?username=sun100&keywords=vehicle www.1sp2.com/search.php?username=20190&keywords=las+the+hotel+vegas www.loadgeo.com/search/result.php?aff=10357&req=winamp www.exalt123.com/portal.php?ref=seo2007 www.7catalogs.com/search.php?username=la24&keywords=shutter www.theloaninstitute.com/search.php?username=kevin&keywords=webcam www.grammt.com/search.php?username=2530&keywords=bob And there are hundreds of these requests sent within a second. So what's happening?

    Read the article
