Search Results

Search found 23474 results on 939 pages for 'xml import'.


  • Have I pushed the limits of my current VPS or is there room for optimization?

    - by JRameau
    I am currently on a MediaTemple DV server (Basic) with 512 MB of dedicated RAM; it is a CentOS-based VPS with Plesk and Virtuozzo. My experience with it has been bad from day one, and I could only soothe my server issues with several caching "band-aids" — but my sites are not as small as they were a year ago either, so the issues have worsened.

    I have 3 Drupal installs running on separate (Plesk) domains. One of those Drupal installs is a multisite that consists of 5-6 sites, and 2 of those sites are bringing in actual traffic. The caching "band-aids" I mentioned are APC, which seemed to help a lot initially, and Drupal's Boost, which is considered a poor man's Varnish: it makes all my pages static for anonymous users. Last 30-day combined estimate on Google Analytics: 90k visitors, 260k pageviews.

    Issue: a lot of downtime. I am continually checking whether my sites are up, and lately I have been finding them down more than 3 times daily. Restarting Apache will bring things back up, for some time. I have googled every error message and looked up ways to optimize my DV server, and I am beyond stumped about what my next move should be. Is this server bad? Have I hit an impossibly low restriction such as the 12 MB kernel memory barrier (kmemsize)? Is it on my end — do I need to optimize some more? I have provided as much information as I can below; any help or suggestions will be appreciated.

    Common error messages I see in the log:

        [error] (12)Cannot allocate memory: fork: Unable to fork new process
        [error] make_obcallback: could not import mod_python.apache.
        Traceback (most recent call last):
          File "/usr/lib/python2.4/site-packages/mod_python/apache.py", line 21, in ?
            import traceback
          File "/usr/lib/python2.4/traceback.py", line 3, in ?
            import linecache
        ImportError: No module named linecache
        [error] python_handler: no interpreter callback found.
        [warn-phpd] mmap cache can't open /var/www/vhosts/***/httpdocs/*** - Too many open files in system (pid ***)
        [alert] Child 8125 returned a Fatal error... Apache is exiting!
        [emerg] (43)Identifier removed: couldn't grab the accept mutex
        [emerg] (22)Invalid argument: couldn't release the accept mutex

    cat /proc/user_beancounters:

        Version: 2.5
        uid     resource      held     maxheld  barrier   limit       failcnt
        41548:  kmemsize      4582652  5306699  12288832  13517715    21105036
                lockedpages   0        0        600       600         0
                privvmpages   38151    42676    229036    249036      0
                shmpages      16274    16274    17237     17237       2
                dummy         0        0        0         0           0
                numproc       43       46       300       300         0
                physpages     27260    29528    0         2147483647  0
                vmguarpages   0        0        131072    2147483647  0
                oomguarpages  27270    29538    131072    2147483647  0
                numtcpsock    21       29       300       300         0
                numflock      8        8        480       528         0
                numpty        1        1        30        30          0
                numsiginfo    0        1        1024      1024        0
                tcpsndbuf     648440   675272   2867477   4096277     1711499
                tcprcvbuf     301620   359716   2867477   4096277     0
                othersockbuf  4472     4472     1433738   2662538     0
                dgramrcvbuf   0        0        1433738   1433738     0
                numothersock  12       12       300       300         0
                dcachesize    0        0        2684271   2764800     0
                numfile       3447     3496     6300      6300        3872
                dummy         0        0        0         0           0
                dummy         0        0        0         0           0
                dummy         0        0        0         0           0
                numiptent     14       14       200       200         0

    top (in January the load average was really high, 3-10; I was able to bring it down to where it currently is by giving APC more memory to play around with):

        top - 16:46:07 up 2:13, 1 user, load average: 0.34, 0.20, 0.20
        Tasks: 40 total, 2 running, 37 sleeping, 0 stopped, 1 zombie
        Cpu(s): 0.3% us, 0.1% sy, 0.0% ni, 99.7% id, 0.0% wa, 0.0% hi, 0.0% si
        Mem: 916144k total, 156668k used, 759476k free, 0k buffers
        Swap: 0k total, 0k used, 0k free, 0k cached

    MySQLTuner (after optimizing every table and repairing any table with overhead, I got the fragmented count down to 86):

        [--] Data in MyISAM tables: 285M (Tables: 1105)
        [!!] Total fragmented tables: 86
        [--] Up for: 2h 44m 38s (409K q [41.421 qps], 6K conn, TX: 1B, RX: 174M)
        [--] Reads / Writes: 79% / 21%
        [--] Total buffers: 58.0M global + 2.7M per thread (100 max threads)
        [!!] Query cache prunes per day: 675307
        [!!] Temporary tables created on disk: 35% (7K on disk / 20K total)
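
    The failcnt column in that beancounters dump is the quickest pointer to which Virtuozzo limit is actually biting: numfile, tcpsndbuf, shmpages and kmemsize all show non-zero fail counts, which lines up with the fork failures and the "Too many open files" errors above. A minimal Python sketch of that check (assuming the standard /proc/user_beancounters layout shown above; the helper name is made up):

        #!/usr/bin/env python
        # Minimal sketch: list every Virtuozzo bean counter whose failcnt is
        # non-zero, i.e. every per-container limit this VPS has actually hit.
        # Assumes the standard /proc/user_beancounters layout shown above.

        def failed_counters(path="/proc/user_beancounters"):
            failures = []
            with open(path) as f:
                for line in f:
                    fields = line.split()
                    if not fields or fields[0] in ("Version:", "uid"):
                        continue                 # skip the two header lines
                    if fields[0].endswith(":"):
                        fields = fields[1:]      # drop the leading container id, e.g. "41548:"
                    if len(fields) != 6 or not fields[-1].isdigit():
                        continue
                    name = fields[0]
                    held, maxheld, barrier, limit, failcnt = (int(v) for v in fields[1:])
                    if failcnt > 0:
                        failures.append((name, failcnt, barrier, limit))
            return failures

        if __name__ == "__main__":
            for name, failcnt, barrier, limit in failed_counters():
                print("%-14s failcnt=%-10d barrier=%-12d limit=%d" % (name, failcnt, barrier, limit))

    Running something like this after each outage would show whether the fail counts keep climbing on the same resources, which is a more direct answer than watching top.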

    Read the article

  • Openswan ipsec transport tunnel not going up

    - by gparent
    On ClusterA and ClusterB I have installed the "openswan" package on Debian Squeeze. ClusterA's IP is 172.16.0.107, ClusterB's is 172.16.0.108. When they ping one another, the ping does not reach the destination.

    /etc/ipsec.conf:

        version 2.0 # conforms to second version of ipsec.conf specification

        config setup
            protostack=netkey
            oe=off

        conn L2TP-PSK-CLUSTER
            type=transport
            left=172.16.0.107
            right=172.16.0.108
            auto=start
            ike=aes128-sha1-modp2048
            authby=secret
            compress=yes

    /etc/ipsec.secrets:

        172.16.0.107 172.16.0.108 : PSK "L2TPKEY"
        172.16.0.108 172.16.0.107 : PSK "L2TPKEY"

    Here is the result of ipsec verify on both machines:

        root@cluster2:~# ipsec verify
        Checking your system to see if IPsec got installed and started correctly:
        Version check and ipsec on-path [OK]
        Linux Openswan U2.6.28/K2.6.32-5-amd64 (netkey)
        Checking for IPsec support in kernel [OK]
        NETKEY detected, testing for disabled ICMP send_redirects [OK]
        NETKEY detected, testing for disabled ICMP accept_redirects [OK]
        Checking that pluto is running [OK]
        Pluto listening for IKE on udp 500 [OK]
        Pluto listening for NAT-T on udp 4500 [FAILED]
        Checking for 'ip' command [OK]
        Checking for 'iptables' command [OK]
        Opportunistic Encryption Support [DISABLED]
        root@cluster2:~#

    This is the end of the output of ipsec auto --status:

        000 "cluster": 172.16.0.108<172.16.0.108>[+S=C]...172.16.0.107<172.16.0.107>[+S=C]; prospective erouted; eroute owner: #0
        000 "cluster": myip=unset; hisip=unset;
        000 "cluster": ike_life: 3600s; ipsec_life: 28800s; rekey_margin: 540s; rekey_fuzz: 100%; keyingtries: 0
        000 "cluster": policy: PSK+ENCRYPT+COMPRESS+PFS+UP+IKEv2ALLOW+lKOD+rKOD; prio: 32,32; interface: eth0;
        000 "cluster": newest ISAKMP SA: #1; newest IPsec SA: #0;
        000 "cluster": IKE algorithm newest: AES_CBC_128-SHA1-MODP2048
        000
        000 #3: "cluster":500 STATE_QUICK_R0 (expecting QI1); EVENT_CRYPTO_FAILED in 298s; lastdpd=-1s(seq in:0 out:0); idle; import:admin initiate
        000 #2: "cluster":500 STATE_QUICK_I1 (sent QI1, expecting QR1); EVENT_RETRANSMIT in 13s; lastdpd=-1s(seq in:0 out:0); idle; import:admin initiate
        000 #1: "cluster":500 STATE_MAIN_I4 (ISAKMP SA established); EVENT_SA_REPLACE in 2991s; newest ISAKMP; lastdpd=-1s(seq in:0 out:0); idle; import:admin initiate
        000

    Interestingly enough, if I run ike-scan against the server, it doesn't seem to take my IKE settings into account:

        root@cluster1:~# ike-scan -M 172.16.0.108
        Starting ike-scan 1.9 with 1 hosts (http://www.nta-monitor.com/tools/ike-scan/)
        172.16.0.108    Main Mode Handshake returned
            HDR=(CKY-R=641bffa66ba717b6)
            SA=(Enc=3DES Hash=SHA1 Auth=PSK Group=2:modp1024 LifeType=Seconds LifeDuration(4)=0x00007080)
            VID=4f45517b4f7f6e657a7b4351
            VID=afcad71368a1f1c96b8696fc77570100 (Dead Peer Detection v1.0)

        Ending ike-scan 1.9: 1 hosts scanned in 0.008 seconds (118.19 hosts/sec). 1 returned handshake; 0 returned notify
        root@cluster1:~#

    I can't tell what's going on here; this is pretty much the simplest config I can have according to the examples.
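
    The verify output flags NAT-T (UDP 4500) as FAILED while plain IKE (UDP 500) is up. A minimal Python sketch, purely diagnostic and not a fix, to confirm which of those IKE ports actually have a local UDP socket bound:

        #!/usr/bin/env python
        # Diagnostic sketch: check which of the IKE ports from the
        # "ipsec verify" output above have a local UDP socket bound.
        # UDP 500 is plain IKE; UDP 4500 is NAT-T (reported as FAILED above).
        # Only looks at IPv4 sockets in /proc/net/udp.

        IKE_PORTS = {500: "IKE (isakmp)", 4500: "NAT-T"}

        def bound_udp_ports(path="/proc/net/udp"):
            ports = set()
            with open(path) as f:
                next(f)                              # skip the header row
                for line in f:
                    local = line.split()[1]          # e.g. "00000000:01F4"
                    ports.add(int(local.split(":")[1], 16))
            return ports

        if __name__ == "__main__":
            bound = bound_udp_ports()
            for port, name in sorted(IKE_PORTS.items()):
                state = "bound" if port in bound else "NOT bound"
                print("udp/%d %-14s %s" % (port, name, state))

    If NAT-T were actually required, Openswan would also need nat_traversal=yes under config setup; with both peers on the same subnet that is probably unrelated, and the Quick Mode exchanges stuck in STATE_QUICK_R0 / STATE_QUICK_I1 above look like the more interesting lead.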

    Read the article

  • Upgraded Ubuntu, all drives in one zpool marked unavailable

    - by Matt Sieker
    I just upgraded to Ubuntu 14.04, and I had two ZFS pools on the server. There was a minor issue with me fighting the ZFS driver and the kernel version, but that's worked out now. One pool came online and mounted fine; the other didn't. The main difference between the two is that one was just a pool of disks (video/music storage) and the other was a raidz set (documents, etc.).

    I've already attempted exporting and re-importing the pool, to no avail; attempting to import gets me this:

        root@kyou:/home/matt# zpool import -fFX -d /dev/disk/by-id/
           pool: storage
             id: 15855792916570596778
          state: UNAVAIL
         status: One or more devices contains corrupted data.
         action: The pool cannot be imported due to damaged devices or data.
            see: http://zfsonlinux.org/msg/ZFS-8000-5E
         config:

                storage                                      UNAVAIL  insufficient replicas
                  raidz1-0                                   UNAVAIL  insufficient replicas
                    ata-SAMSUNG_HD103SJ_S246J90B134910       UNAVAIL
                    ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523  UNAVAIL
                    ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969  UNAVAIL

    The symlinks for those in /dev/disk/by-id also exist:

        root@kyou:/home/matt# ls -l /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910* /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51*
        lrwxrwxrwx 1 root root  9 May 27 19:31 /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910 -> ../../sdb
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part1 -> ../../sdb1
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part9 -> ../../sdb9
        lrwxrwxrwx 1 root root  9 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523 -> ../../sdd
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part1 -> ../../sdd1
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part9 -> ../../sdd9
        lrwxrwxrwx 1 root root  9 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969 -> ../../sde
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part1 -> ../../sde1
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part9 -> ../../sde9

    Inspecting the various /dev/sd* devices listed, they appear to be the correct ones (the 3 1TB drives that were in the raidz array). I've run zdb -l on each drive, dumped the output to a file, and diffed them. The only differences among the three are the guid fields (which I assume is expected). All 3 labels on each drive are basically identical, and are as follows:

        version: 5000
        name: 'storage'
        state: 0
        txg: 4
        pool_guid: 15855792916570596778
        hostname: 'kyou'
        top_guid: 1683909657511667860
        guid: 8815283814047599968
        vdev_children: 1
        vdev_tree:
            type: 'raidz'
            id: 0
            guid: 1683909657511667860
            nparity: 1
            metaslab_array: 33
            metaslab_shift: 34
            ashift: 9
            asize: 3000569954304
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 8815283814047599968
                path: '/dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part1'
                whole_disk: 1
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 18036424618735999728
                path: '/dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part1'
                whole_disk: 1
                create_txg: 4
            children[2]:
                type: 'disk'
                id: 2
                guid: 10307555127976192266
                path: '/dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part1'
                whole_disk: 1
                create_txg: 4
        features_for_read:

    Stupidly, I do not have a recent backup of this pool. However, the pool was fine before the reboot, and Linux sees the disks fine (I have smartctl running now to double check).

    So, in summary:

    - I upgraded Ubuntu and lost access to one of my two zpools.
    - The difference between the pools is that the one that came up was JBOD; the other was raidz.
    - All drives in the unmountable zpool are marked UNAVAIL, with no notes about corrupted data.
    - The pools were both created with disks referenced from /dev/disk/by-id/.
    - Symlinks from /dev/disk/by-id to the various /dev/sd devices seem to be correct.
    - zdb can read the labels from the drives.
    - The pool has already been exported and re-imported once, and cannot be imported again.

    Is there some sort of black magic I can invoke via zpool/zfs to bring these disks back into a reasonable array? Can I run zpool create zraid ... without losing my data? Is my data gone anyhow?
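
    The label comparison the poster did by hand (zdb -l per disk, then diff) can be scripted; a rough Python sketch, with the device paths taken from the question and everything else an assumption, that pulls out just the guid fields so the three labels can be compared at a glance:

        #!/usr/bin/env python
        # Rough automation of the manual check described above: dump the ZFS
        # label of each member disk with `zdb -l` and extract the guid fields
        # (guid, top_guid, pool_guid) so they can be compared side by side.
        # Device names come from the question; adjust for your own pool.
        import re
        import subprocess

        DISKS = [
            "/dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part1",
            "/dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part1",
            "/dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part1",
        ]

        def label_guids(device):
            out = subprocess.check_output(["zdb", "-l", device])
            if not isinstance(out, str):          # Python 3 returns bytes
                out = out.decode("utf-8", "replace")
            # collect every "<something>guid: <number>" pair in the label dump
            return sorted(set(re.findall(r"(\w*guid):\s*(\d+)", out)))

        if __name__ == "__main__":
            for disk in DISKS:
                print(disk)
                for key, value in label_guids(disk):
                    print("    %s = %s" % (key, value))

    If the pool_guid and top_guid agree across all three disks (as the pasted labels suggest), the metadata itself looks consistent, which makes the blanket UNAVAIL status all the stranger.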

    Read the article

  • problem with sIFR 3 not displaying in IE just getting XXX

    - by user288306
    I am having a problem with sIFR 3 not displaying in IE. I get 3 larges black XXX in IE yet it displays fine in Firefox. I have checked i do have the most recent version of flash installed correctly. Here is the code on the page <div id="features"> <div id="mainmessage_advertisers"><h2>Advertisers</h2><br /><br /><h3><a href="">Reach your customers where they browse. Buy directly from top web publishers.</a></h3><br /><br /><br /><a href=""><img src="img/buyads.gif" border="0"></a></div> <div id="mainmessage_publishers"><h2>Publishers</h2><br /><br /><h3>Take control of your ad space and start generating more revenue than <u>ever before</u>.</h3><br /><br /><br /><a href=""><img src="img/sellads.gif" border="0"></a></div> </div>` Here is the code from my global.css #mainmessage_advertisers { width: 395px; height: 200px; padding: 90px 50px; border: 1px; float: left; } #mainmessage_publishers { width: 395px; height: 200px; padding: 90px 50px; float: right; } and here is what i have in my sifr.js /*********************************************************************** SIFR 3.0 (BETA 1) FUNCTIONS ************************************************************************/ var parseSelector=(function(){var _1=/\s*,\s*/;var _2=/\s*([\s>+~(),]|^|$)\s*/g;var _3=/([\s>+~,]|[^(]\+|^)([#.:@])/g;var _4=/^[^\s>+~]/;var _5=/[\s#.:>+~()@]|[^\s#.:>+~()@]+/g;function parseSelector(_6,_7){_7=_7||document.documentElement;var _8=_6.split(_1),_9=[];for(var i=0;i<_8.length;i++){var _b=[_7],_c=toStream(_8[i]);for(var j=0;j<_c.length;){var _e=_c[j++],_f=_c[j++],_10="";if(_c[j]=="("){while(_c[j++]!=")"&&j<_c.length){_10+=_c[j]}_10=_10.slice(0,-1)}_b=select(_b,_e,_f,_10)}_9=_9.concat(_b)}return _9}function toStream(_11){var _12=_11.replace(_2,"$1").replace(_3,"$1*$2");if(_4.test(_12)){_12=" "+_12}return _12.match(_5)||[]}function select(_13,_14,_15,_16){return (_17[_14])?_17[_14](_13,_15,_16):[]}var _18={toArray:function(_19){var a=[];for(var i=0;i<_19.length;i++){a.push(_19[i])}return a}};var dom={isTag:function(_1d,tag){return (tag=="*")||(tag.toLowerCase()==_1d.nodeName.toLowerCase())},previousSiblingElement:function(_1f){do{_1f=_1f.previousSibling}while(_1f&&_1f.nodeType!=1);return _1f},nextSiblingElement:function(_20){do{_20=_20.nextSibling}while(_20&&_20.nodeType!=1);return _20},hasClass:function(_21,_22){return (_22.className||"").match("(^|\\s)"+_21+"(\\s|$)")},getByTag:function(tag,_24){return _24.getElementsByTagName(tag)}};var _17={"#":function(_25,_26){for(var i=0;i<_25.length;i++){if(_25[i].getAttribute("id")==_26){return [_25[i]]}}return []}," ":function(_28,_29){var _2a=[];for(var i=0;i<_28.length;i++){_2a=_2a.concat(_18.toArray(dom.getByTag(_29,_28[i])))}return _2a},">":function(_2c,_2d){var _2e=[];for(var i=0,_30;i<_2c.length;i++){_30=_2c[i];for(var j=0,_32;j<_30.childNodes.length;j++){_32=_30.childNodes[j];if(_32.nodeType==1&&dom.isTag(_32,_2d)){_2e.push(_32)}}}return _2e},".":function(_33,_34){var _35=[];for(var i=0,_37;i<_33.length;i++){_37=_33[i];if(dom.hasClass([_34],_37)){_35.push(_37)}}return _35},":":function(_38,_39,_3a){return (pseudoClasses[_39])?pseudoClasses[_39](_38,_3a):[]}};parseSelector.selectors=_17;parseSelector.pseudoClasses={};parseSelector.util=_18;parseSelector.dom=dom;return parseSelector})(); var sIFR=new function(){var _3b=this;var _3c="sIFR-active";var _3d="sIFR-replaced";var _3e="sIFR-flash";var _3f="sIFR-ignore";var _40="sIFR-alternate";var _41="sIFR-class";var _42="sIFR-layout";var _43="http://www.w3.org/1999/xhtml";var _44=6;var _45=126;var _46=8;var 
_47="SIFR-PREFETCHED";var _48=" ";this.isActive=false;this.isEnabled=true;this.hideElements=true;this.replaceNonDisplayed=false;this.preserveSingleWhitespace=false;this.fixWrap=true;this.registerEvents=true;this.setPrefetchCookie=true;this.cookiePath="/";this.domains=[];this.fromLocal=true;this.forceClear=false;this.forceWidth=true;this.fitExactly=false;this.forceTextTransform=true;this.useDomContentLoaded=true;this.debugMode=false;this.hasFlashClassSet=false;var _49=0;var _4a=false,_4b=false;var dom=new function(){this.getBody=function(){var _4d=document.getElementsByTagName("body");if(_4d.length==1){return _4d[0]}return null};this.addClass=function(_4e,_4f){if(_4f){_4f.className=((_4f.className||"")==""?"":_4f.className+" ")+_4e}};this.removeClass=function(_50,_51){if(_51){_51.className=_51.className.replace(new RegExp("(^|\\s)"+_50+"(\\s|$)"),"").replace(/^\s+|(\s)\s+/g,"$1")}};this.hasClass=function(_52,_53){return new RegExp("(^|\\s)"+_52+"(\\s|$)").test(_53.className)};this.create=function(_54){if(document.createElementNS){return document.createElementNS(_43,_54)}return document.createElement(_54)};this.setInnerHtml=function(_55,_56){if(ua.innerHtmlSupport){_55.innerHTML=_56}else{if(ua.xhtmlSupport){_56=["<root xmlns=\"",_43,"\">",_56,"</root>"].join("");var xml=(new DOMParser()).parseFromString(_56,"text/xml");xml=document.importNode(xml.documentElement,true);while(_55.firstChild){_55.removeChild(_55.firstChild)}while(xml.firstChild){_55.appendChild(xml.firstChild)}}}};this.getComputedStyle=function(_58,_59){var _5a;if(document.defaultView&&document.defaultView.getComputedStyle){_5a=document.defaultView.getComputedStyle(_58,null)[_59]}else{if(_58.currentStyle){_5a=_58.currentStyle[_59]}}return _5a||""};this.getStyleAsInt=function(_5b,_5c,_5d){var _5e=this.getComputedStyle(_5b,_5c);if(_5d&&!/px$/.test(_5e)){return 0}_5e=parseInt(_5e);return isNaN(_5e)?0:_5e};this.getZoom=function(){return _5f.zoom.getLatest()}};this.dom=dom;var ua=new function(){var ua=navigator.userAgent.toLowerCase();var _62=(navigator.product||"").toLowerCase();this.macintosh=ua.indexOf("mac")>-1;this.windows=ua.indexOf("windows")>-1;this.quicktime=false;this.opera=ua.indexOf("opera")>-1;this.konqueror=_62.indexOf("konqueror")>-1;this.ie=false/*@cc_on || true @*/;this.ieSupported=this.ie&&!/ppc|smartphone|iemobile|msie\s5\.5/.test(ua)/*@cc_on && @_jscript_version >= 5.5 @*/;this.ieWin=this.ie&&this.windows/*@cc_on && @_jscript_version >= 5.1 @*/;this.windows=this.windows&&(!this.ie||this.ieWin);this.ieMac=this.ie&&this.macintosh/*@cc_on && @_jscript_version < 5.1 @*/;this.macintosh=this.macintosh&&(!this.ie||this.ieMac);this.safari=ua.indexOf("safari")>-1;this.webkit=ua.indexOf("applewebkit")>-1&&!this.konqueror;this.khtml=this.webkit||this.konqueror;this.gecko=!this.webkit&&_62=="gecko";this.operaVersion=this.opera&&/.*opera(\s|\/)(\d+\.\d+)/.exec(ua)?parseInt(RegExp.$2):0;this.webkitVersion=this.webkit&&/.*applewebkit\/(\d+).*/.exec(ua)?parseInt(RegExp.$1):0;this.geckoBuildDate=this.gecko&&/.*gecko\/(\d{8}).*/.exec(ua)?parseInt(RegExp.$1):0;this.konquerorVersion=this.konqueror&&/.*konqueror\/(\d\.\d).*/.exec(ua)?parseInt(RegExp.$1):0;this.flashVersion=0;if(this.ieWin){var axo;var _64=false;try{axo=new ActiveXObject("ShockwaveFlash.ShockwaveFlash.7")}catch(e){try{axo=new ActiveXObject("ShockwaveFlash.ShockwaveFlash.6");this.flashVersion=6;axo.AllowScriptAccess="always"}catch(e){_64=this.flashVersion==6}if(!_64){try{axo=new 
ActiveXObject("ShockwaveFlash.ShockwaveFlash")}catch(e){}}}if(!_64&&axo){this.flashVersion=parseFloat(/([\d,?]+)/.exec(axo.GetVariable("$version"))[1].replace(/,/g,"."))}}else{if(navigator.plugins&&navigator.plugins["Shockwave Flash"]){var _65=navigator.plugins["Shockwave Flash"];this.flashVersion=parseFloat(/(\d+\.?\d*)/.exec(_65.description)[1]);var i=0;while(this.flashVersion>=_46&&i<navigator.mimeTypes.length){var _67=navigator.mimeTypes[i];if(_67.type=="application/x-shockwave-flash"&&_67.enabledPlugin.description.toLowerCase().indexOf("quicktime")>-1){this.flashVersion=0;this.quicktime=true}i++}}}this.flash=this.flashVersion>=_46;this.transparencySupport=this.macintosh||this.windows;this.computedStyleSupport=this.ie||document.defaultView&&document.defaultView.getComputedStyle&&(!this.gecko||this.geckoBuildDate>=20030624);this.css=true;if(this.computedStyleSupport){try{var _68=document.getElementsByTagName("head")[0];_68.style.backgroundColor="#FF0000";var _69=dom.getComputedStyle(_68,"backgroundColor");this.css=!_69||/\#F{2}0{4}|rgb\(255,\s?0,\s?0\)/i.test(_69);_68=null}catch(e){}}this.xhtmlSupport=!!window.DOMParser&&!!document.importNode;this.innerHtmlSupport;try{var n=dom.create("span");if(!this.ieMac){n.innerHTML="x"}this.innerHtmlSupport=n.innerHTML=="x"}catch(e){this.innerHtmlSupport=false}this.zoomSupport=!!(this.opera&&document.documentElement);this.geckoXml=this.gecko&&(document.contentType||"").indexOf("xml")>-1;this.requiresPrefetch=this.ieWin||this.khtml;this.verifiedKonqueror=false;this.supported=this.flash&&this.css&&(!this.ie||this.ieSupported)&&(!this.opera||this.operaVersion>=8)&&(!this.webkit||this.webkitVersion>=412)&&(!this.konqueror||this.konquerorVersion>3.5)&&this.computedStyleSupport&&(this.innerHtmlSupport||!this.khtml&&this.xhtmlSupport)};this.ua=ua;var _6b=new function(){function capitalize($){return $.toUpperCase()}this.normalize=function(str){if(_3b.preserveSingleWhitespace){return str.replace(/\s/g,_48)}return str.replace(/(\s)\s+/g,"$1")};this.textTransform=function(_6e,str){switch(_6e){case "uppercase":str=str.toUpperCase();break;case "lowercase":str=str.toLowerCase();break;case "capitalize":var _70=str;str=str.replace(/^\w|\s\w/g,capitalize);if(str.indexOf("function capitalize")!=-1){var _71=_70.replace(/(^|\s)(\w)/g,"$1$1$2$2").split(/^\w|\s\w/g);str="";for(var i=0;i<_71.length;i++){str+=_71[i].charAt(0).toUpperCase()+_71[i].substring(1)}}break}return str};this.toHexString=function(str){if(typeof (str)!="string"||!str.charAt(0)=="#"||str.length!=4&&str.length!=7){return str}str=str.replace(/#/,"");if(str.length==3){str=str.replace(/(.)(.)(.)/,"$1$1$2$2$3$3")}return "0x"+str};this.toJson=function(obj){var _75="";switch(typeof (obj)){case "string":_75="\""+obj+"\"";break;case "number":case "boolean":_75=obj.toString();break;case "object":_75=[];for(var _76 in obj){if(obj[_76]==Object.prototype[_76]){continue}_75.push("\""+_76+"\":"+_6b.toJson(obj[_76]))}_75="{"+_75.join(",")+"}";break}return _75};this.convertCssArg=function(arg){if(!arg){return {}}if(typeof (arg)=="object"){if(arg.constructor==Array){arg=arg.join("")}else{return arg}}var obj={};var _79=arg.split("}");for(var i=0;i<_79.length;i++){var $=_79[i].match(/([^\s{]+)\s*\{(.+)\s*;?\s*/);if(!$||$.length!=3){continue}if(!obj[$[1]]){obj[$[1]]={}}var _7c=$[2].split(";");for(var j=0;j<_7c.length;j++){var $2=_7c[j].match(/\s*([^:\s]+)\s*\:\s*([^\s;]+)/);if(!$2||$2.length!=3){continue}obj[$[1]][$2[1]]=$2[2]}}return obj};this.extractFromCss=function(css,_80,_81,_82){var 
_83=null;if(css&&css[_80]&&css[_80][_81]){_83=css[_80][_81];if(_82){delete css[_80][_81]}}return _83};this.cssToString=function(arg){var css=[];for(var _86 in arg){var _87=arg[_86];if(_87==Object.prototype[_86]){continue}css.push(_86,"{");for(var _88 in _87){if(_87[_88]==Object.prototype[_88]){continue}css.push(_88,":",_87[_88],";")}css.push("}")}return escape(css.join(""))}};this.util=_6b;var _5f={};_5f.fragmentIdentifier=new function(){this.fix=true;var _89;this.cache=function(){_89=document.title};function doFix(){document.title=_89}this.restore=function(){if(this.fix){setTimeout(doFix,0)}}};_5f.synchronizer=new function(){this.isBlocked=false;this.block=function(){this.isBlocked=true};this.unblock=function(){this.isBlocked=false;_8a.replaceAll()}};_5f.zoom=new function(){var _8b=100;this.getLatest=function(){return _8b};if(ua.zoomSupport&&ua.opera){var _8c=document.createElement("div");_8c.style.position="fixed";_8c.style.left="-65536px";_8c.style.top="0";_8c.style.height="100%";_8c.style.width="1px";_8c.style.zIndex="-32";document.documentElement.appendChild(_8c);function updateZoom(){if(!_8c){return}var _8d=window.innerHeight/_8c.offsetHeight;var _8e=Math.round(_8d*100)%10;if(_8e>5){_8d=Math.round(_8d*100)+10-_8e}else{_8d=Math.round(_8d*100)-_8e}_8b=isNaN(_8d)?100:_8d;_5f.synchronizer.unblock();document.documentElement.removeChild(_8c);_8c=null}_5f.synchronizer.block();setTimeout(updateZoom,54)}};this.hacks=_5f;var _8f={kwargs:[],replaceAll:function(){for(var i=0;i<this.kwargs.length;i++){_3b.replace(this.kwargs[i])}this.kwargs=[]}};var _8a={kwargs:[],replaceAll:_8f.replaceAll};function isValidDomain(){if(_3b.domains.length==0){return true}var _91="";try{_91=document.domain}catch(e){}if(_3b.fromLocal&&sIFR.domains[0]!="localhost"){sIFR.domains.unshift("localhost")}for(var i=0;i<_3b.domains.length;i++){if(_3b.domains[i]=="*"||_3b.domains[i]==_91){return true}}return false}this.activate=function(){if(!ua.supported||!this.isEnabled||this.isActive||!isValidDomain()){return}this.isActive=true;if(this.hideElements){this.setFlashClass()}if(ua.ieWin&&_5f.fragmentIdentifier.fix&&window.location.hash!=""){_5f.fragmentIdentifier.cache()}else{_5f.fragmentIdentifier.fix=false}if(!this.registerEvents){return}function handler(evt){_3b.initialize();if(evt&&evt.type=="load"){if(document.removeEventListener){document.removeEventListener("DOMContentLoaded",handler,false);document.removeEventListener("load",handler,false)}if(window.removeEventListener){window.removeEventListener("load",handler,false)}}}if(window.addEventListener){if(_3b.useDomContentLoaded&&ua.gecko){document.addEventListener("DOMContentLoaded",handler,false)}window.addEventListener("load",handler,false)}else{if(ua.ieWin){if(_3b.useDomContentLoaded&&!_4a){document.write("<scr"+"ipt id=__sifr_ie_onload defer src=//:></script>");document.getElementById("__sifr_ie_onload").onreadystatechange=function(){if(this.readyState=="complete"){handler();this.removeNode()}}}window.attachEvent("onload",handler)}}};this.setFlashClass=function(){if(this.hasFlashClassSet){return}dom.addClass(_3c,dom.getBody()||document.documentElement);this.hasFlashClassSet=true};this.removeFlashClass=function(){if(!this.hasFlashClassSet){return}dom.removeClass(_3c,dom.getBody());dom.removeClass(_3c,document.documentElement);this.hasFlashClassSet=false};this.initialize=function(){if(_4b||!this.isActive||!this.isEnabled){return}_4b=true;_8f.replaceAll();clearPrefetch()};function getSource(src){if(typeof (src)!="string"){if(src.src){src=src.src}if(typeof 
(src)!="string"){var _95=[];for(var _96 in src){if(src[_96]!=Object.prototype[_96]){_95.push(_96)}}_95.sort().reverse();var _97="";var i=-1;while(!_97&&++i<_95.length){if(parseFloat(_95[i])<=ua.flashVersion){_97=src[_95[i]]}}src=_97}}if(!src&&_3b.debugMode){throw new Error("sIFR: Could not determine appropriate source")}if(ua.ie&&src.charAt(0)=="/"){src=window.location.toString().replace(/([^:]+)(:\/?\/?)([^\/]+).*/,"$1$2$3")+src}return src}this.prefetch=function(){if(!ua.requiresPrefetch||!ua.supported||!this.isEnabled||!isValidDomain()){return}if(this.setPrefetchCookie&&new RegExp(";?"+_47+"=true;?").test(document.cookie)){return}try{_4a=true;if(ua.ieWin){prefetchIexplore(arguments)}else{prefetchLight(arguments)}if(this.setPrefetchCookie){document.cookie=_47+"=true;path="+this.cookiePath}}catch(e){if(_3b.debugMode){throw e}}};function prefetchIexplore(_99){for(var i=0;i<_99.length;i++){document.write("<embed src=\""+getSource(_99[i])+"\" sIFR-prefetch=\"true\" style=\"display:none;\">")}}function prefetchLight(_9b){for(var i=0;i<_9b.length;i++){new Image().src=getSource(_9b[i])}}function clearPrefetch(){if(!ua.ieWin||!_4a){return}try{var _9d=document.getElementsByTagName("embed");for(var i=_9d.length-1;i>=0;i--){var _9f=_9d[i];if(_9f.getAttribute("sIFR-prefetch")=="true"){_9f.parentNode.removeChild(_9f)}}}catch(e){}}function getRatio(_a0){if(_a0<=10){return 1.55}if(_a0<=19){return 1.45}if(_a0<=32){return 1.35}if(_a0<=71){return 1.3}return 1.25}function getFilters(obj){var _a2=[];for(var _a3 in obj){if(obj[_a3]==Object.prototype[_a3]){continue}var _a4=obj[_a3];_a3=[_a3.replace(/filter/i,"")+"Filter"];for(var _a5 in _a4){if(_a4[_a5]==Object.prototype[_a5]){continue}_a3.push(_a5+":"+escape(_6b.toJson(_6b.toHexString(_a4[_a5]))))}_a2.push(_a3.join(","))}return _a2.join(";")}this.replace=function(_a6,_a7){if(!ua.supported){return}if(_a7){for(var _a8 in _a6){if(typeof (_a7[_a8])=="undefined"){_a7[_a8]=_a6[_a8]}}_a6=_a7}if(!_4b){return _8f.kwargs.push(_a6)}if(_5f.synchronizer.isBlocked){return _8a.kwargs.push(_a6)}var _a9=_a6.elements;if(!_a9&&parseSelector){_a9=parseSelector(_a6.selector)}if(_a9.length==0){return}this.setFlashClass();var src=getSource(_a6.src);var css=_6b.convertCssArg(_a6.css);var _ac=getFilters(_a6.filters);var _ad=(_a6.forceClear==null)?_3b.forceClear:_a6.forceClear;var _ae=(_a6.fitExactly==null)?_3b.fitExactly:_a6.fitExactly;var _af=_ae||(_a6.forceWidth==null?_3b.forceWidth:_a6.forceWidth);var _b0=parseInt(_6b.extractFromCss(css,".sIFR-root","leading"))||0;var _b1=_6b.extractFromCss(css,".sIFR-root","background-color",true)||"#FFFFFF";var _b2=_6b.extractFromCss(css,".sIFR-root","opacity",true)||"100";if(parseFloat(_b2)<1){_b2=100*parseFloat(_b2)}var _b3=_6b.extractFromCss(css,".sIFR-root","kerning",true)||"";var _b4=_a6.gridFitType||_6b.extractFromCss(css,".sIFR-root","text-align")=="right"?"subpixel":"pixel";var _b5=_3b.forceTextTransform?_6b.extractFromCss(css,".sIFR-root","text-transform",true)||"none":"none";var _b6="";if(_ae){_6b.extractFromCss(css,".sIFR-root","text-align",true)}if(!_a6.modifyCss){_b6=_6b.cssToString(css)}var _b7=_a6.wmode||"";if(_b7=="transparent"){if(!ua.transparencySupport){_b7="opaque"}else{_b1="transparent"}}for(var i=0;i<_a9.length;i++){var _b9=_a9[i];if(!ua.verifiedKonqueror){if(dom.getComputedStyle(_b9,"lineHeight").match(/e\+08px/)){ua.supported=_3b.isEnabled=false;this.removeFlashClass();return}ua.verifiedKonqueror=true}if(dom.hasClass(_3d,_b9)||dom.hasClass(_3f,_b9)){continue}var 
_ba=false;if(!_b9.offsetHeight||!_b9.offsetWidth){if(!_3b.replaceNonDisplayed){continue}_b9.style.display="block";if(!_b9.offsetHeight||!_b9.offsetWidth){_b9.style.display="";continue}_ba=true}if(_ad&&ua.gecko){_b9.style.clear="both"}var _bb=null;if(_3b.fixWrap&&ua.ie&&dom.getComputedStyle(_b9,"display")=="block"){_bb=_b9.innerHTML;dom.setInnerHtml(_b9,"X")}var _bc=dom.getStyleAsInt(_b9,"width",ua.ie);if(ua.ie&&_bc==0){var _bd=dom.getStyleAsInt(_b9,"paddingRight",true);var _be=dom.getStyleAsInt(_b9,"paddingLeft",true);var _bf=dom.getStyleAsInt(_b9,"borderRightWidth",true);var _c0=dom.getStyleAsInt(_b9,"borderLeftWidth",true);_bc=_b9.offsetWidth-_be-_bd-_c0-_bf}if(_bb&&_3b.fixWrap&&ua.ie){dom.setInnerHtml(_b9,_bb)}var _c1,_c2;if(!ua.ie){_c1=dom.getStyleAsInt(_b9,"lineHeight");_c2=Math.floor(dom.getStyleAsInt(_b9,"height")/_c1)}else{if(ua.ie){var _bb=_b9.innerHTML;_b9.style.visibility="visible";_b9.style.overflow="visible";_b9.style.position="static";_b9.style.zoom="normal";_b9.style.writingMode="lr-tb";_b9.style.width=_b9.style.height="auto";_b9.style.maxWidth=_b9.style.maxHeight=_b9.style.styleFloat="none";var _c3=_b9;var _c4=_b9.currentStyle.hasLayout;if(_c4){dom.setInnerHtml(_b9,"<div class=\""+_42+"\">X<br />X<br />X</div>");_c3=_b9.firstChild}else{dom.setInnerHtml(_b9,"X<br />X<br />X")}var _c5=_c3.getClientRects();_c1=_c5[1].bottom-_c5[1].top;_c1=Math.ceil(_c1*0.8);if(_c4){dom.setInnerHtml(_b9,"<div class=\""+_42+"\">"+_bb+"</div>");_c3=_b9.firstChild}else{dom.setInnerHtml(_b9,_bb)}_c5=_c3.getClientRects();_c2=_c5.length;if(_c4){dom.setInnerHtml(_b9,_bb)}_b9.style.visibility=_b9.style.width=_b9.style.height=_b9.style.maxWidth=_b9.style.maxHeight=_b9.style.overflow=_b9.style.styleFloat=_b9.style.position=_b9.style.zoom=_b9.style.writingMode=""}}if(_ba){_b9.style.display=""}if(_ad&&ua.gecko){_b9.style.clear=""}_c1=Math.max(_44,_c1);_c1=Math.min(_45,_c1);if(isNaN(_c2)||!isFinite(_c2)){_c2=1}var _c6=Math.round(_c2*_c1);if(_c2>1&&_b0){_c6+=Math.round((_c2-1)*_b0)}var _c7=dom.create("span");_c7.className=_40;var _c8=_b9.cloneNode(true);for(var j=0,l=_c8.childNodes.length;j<l;j++){_c7.appendChild(_c8.childNodes[j].cloneNode(true))}if(_a6.modifyContent){_a6.modifyContent(_c8,_a6.selector)}if(_a6.modifyCss){_b6=_a6.modifyCss(css,_c8,_a6.selector)}var _cb=handleContent(_c8,_b5);if(_a6.modifyContentString){_cb=_a6.modifyContentString(_cb,_a6.selector)}if(_cb==""){continue}var _cc=["content="+_cb.replace(/\</g,"&lt;").replace(/>/g,"&gt;"),"width="+_bc,"height="+_c6,"fitexactly="+(_ae?"true":""),"tunewidth="+(_a6.tuneWidth||""),"tuneheight="+(_a6.tuneHeight||""),"offsetleft="+(_a6.offsetLeft||""),"offsettop="+(_a6.offsetTop||""),"thickness="+(_a6.thickness||""),"sharpness="+(_a6.sharpness||""),"kerning="+_b3,"gridfittype="+_b4,"zoomsupport="+ua.zoomSupport,"filters="+_ac,"opacity="+_b2,"blendmode="+(_a6.blendMode||""),"size="+_c1,"zoom="+dom.getZoom(),"css="+_b6];_cc=encodeURI(_cc.join("&amp;"));var _cd="sIFR_callback_"+_49++;var _ce={flashNode:null};window[_cd+"_DoFSCommand"]=(function(_cf){return function(_d0,arg){if(/(FSCommand\:)?resize/.test(_d0)){var $=arg.split(":");_cf.flashNode.setAttribute($[0],$[1]);if(ua.khtml){_cf.flashNode.innerHTML+=""}}}})(_ce);_c6=Math.round(_c2*getRatio(_c1)*_c1);var _d3=_af?_bc:"100%";var _d4;if(ua.ie){_d4=["<object classid=\"clsid:D27CDB6E-AE6D-11cf-96B8-444553540000\" id=\"",_cd,"\" sifr=\"true\" width=\"",_d3,"\" height=\"",_c6,"\" class=\"",_3e,"\">","<param name=\"movie\" value=\"",src,"\"></param>","<param name=\"flashvars\" 
value=\"",_cc,"\"></param>","<param name=\"allowScriptAccess\" value=\"always\"></param>","<param name=\"quality\" value=\"best\"></param>","<param name=\"wmode\" value=\"",_b7,"\"></param>","<param name=\"bgcolor\" value=\"",_b1,"\"></param>","<param name=\"name\" value=\"",_cd,"\"></param>","</object>","<scr","ipt event=FSCommand(info,args) for=",_cd,">",_cd,"_DoFSCommand(info, args);","</","script>"].join("")}else{_d4=["<embed class=\"",_3e,"\" type=\"application/x-shockwave-flash\" src=\"",src,"\" quality=\"best\" flashvars=\"",_cc,"\" width=\"",_d3,"\" height=\"",_c6,"\" wmode=\"",_b7,"\" bgcolor=\"",_b1,"\" name=\"",_cd,"\" allowScriptAccess=\"always\" sifr=\"true\"></embed>"].join("")}dom.setInnerHtml(_b9,_d4);_ce.flashNode=_b9.firstChild;_b9.appendChild(_c7);dom.addClass(_3d,_b9);if(_a6.onReplacement){_a6.onReplacement(_ce.flashNode)}}_5f.fragmentIdentifier.restore()};function handleContent(_d5,_d6){var _d7=[],_d8=[];var _d9=_d5.childNodes;var i=0;while(i<_d9.length){var _db=_d9[i];if(_db.nodeType==3){var _dc=_6b.normalize(_db.nodeValue);_dc=_6b.textTransform(_d6,_dc);_d8.push(_dc.replace(/\%/g,"%25").replace(/\&/g,"%26").replace(/\,/g,"%2C").replace(/\+/g,"%2B"))}if(_db.nodeType==1){var _dd=[];var _de=_db.nodeName.toLowerCase();var _df=_db.className||"";if(/\s+/.test(_df)){if(_df.indexOf(_41)){_df=_df.match("(\\s|^)"+_41+"-([^\\s$]*)(\\s|$)")[2]}else{_df=_df.match(/^([^\s]+)/)[1]}}if(_df!=""){_dd.push("class=\""+_df+"\"")}if(_de=="a"){var _e0=_db.getAttribute("href")||"";var _e1=_db.getAttribute("target")||"";_dd.push("href=\""+_e0+"\"","target=\""+_e1+"\"")}_d8.push("<"+_de+(_dd.length>0?" ":"")+escape(_dd.join(" "))+">");if(_db.hasChildNodes()){_d7.push(i);i=0;_d9=_db.childNodes;continue}else{if(!/^(br|img)$/i.test(_db.nodeName)){_d8.push("</",_db.nodeName.toLowerCase(),">")}}}if(_d7.length>0&&!_db.nextSibling){do{i=_d7.pop();_d9=_db.parentNode.parentNode.childNodes;_db=_d9[i];if(_db){_d8.push("</",_db.nodeName.toLowerCase(),">")}}while(i<_d9.length&&_d7.length>0)}i++}return _d8.join("").replace(/\n|\r/g,"")}}; sIFR.prefetch({ src: 'swf/sifr/helvetica.swf' }); sIFR.activate(); sIFR.replace({ selector: 'h2, h3', src: 'swf/sifr/helvetica.swf', wmode: 'transparent', css: { '.sIFR-root' : { 'color': '#000000', 'font-weight': 'bold', 'letter-spacing': '-1' }, 'a': { 'text-decoration': 'none' }, 'a:link': { 'color': '#000000' }, 'a:hover': { 'color': '#000000' }, '.span': { 'color': '#979797' }, 'label': { 'color': '#E11818' } } }); sIFR.replace({ selector: 'h4', src: 'swf/sifr/helvetica.swf', wmode: 'transparent', css: { '.sIFR-root' : { 'color': '#7E7E7E', 'font-weight': 'bold', 'letter-spacing': '-0.8' }, 'a': { 'text-decoration': 'none' }, 'a:link': { 'color': '#7E7E7E' }, 'a:hover': { 'color': '#7E7E7E' }, 'label': { 'color': '#E11818' } } }); sIFR.replace({ selector: '#cart p', src: 'swf/sifr/helvetica-lt.swf', wmode: 'transparent', css: { '.sIFR-root' : { 'color': '#979797', 'font-weight': 'bold', 'letter-spacing': '-0.8' }, 'a': { 'text-decoration': 'none' }, 'a:link': { 'color': '#979797' }, 'a:hover': { 'color': '#000000' }, 'label': { 'color': '#979797' } } }); Thank you in advance for your help!

    Read the article

  • How to obtain a random sub-datatable from another data table

    - by developerit
    Introduction

    In this article, I'll show how to get a random subset of data from a DataTable. This is useful when you already have queries that are filtered correctly but return all the rows.

    Analysis

    I came across this situation when I wanted to display a random tag cloud. I already had the query to get the keywords ordered by number of clicks, and I wanted to create a tag cloud from it. Tags that are the most popular should have a better chance of being picked and should be displayed larger than less popular ones.

    Implementation

    This code snippet contains everything you need.

        ' Min size, in pixels, for the tag
        Private Const MIN_FONT_SIZE As Integer = 9
        ' Max size, in pixels, for the tag
        Private Const MAX_FONT_SIZE As Integer = 14

        ' Basic function that retrieves Tags from a database
        Public Shared Function GetTags() As MediasTagsDataTable
            ' Simple call to the TableAdapter, to get the Tags ordered by number of clicks
            Dim dt As MediasTagsDataTable = taMediasTags.GetDataValide

            ' If the query returned no result, return an empty DataTable
            If dt Is Nothing OrElse dt.Rows.Count < 1 Then
                Return New MediasTagsDataTable
            End If

            ' Set the font-size of each group of data
            ' We are dividing our results into sub-sets, according to their number of clicks
            ' Example: 10 results -> [0,2] will get font size 9, [3,5] will get font size 10, [6,8] will get 11, ...
            ' This is the number of elements in one group
            Dim groupLength As Integer = CType(Math.Floor(dt.Rows.Count / (MAX_FONT_SIZE - MIN_FONT_SIZE)), Integer)
            ' Counter of elements in the same group
            Dim counter As Integer = 0
            ' Counter of groups
            Dim groupCounter As Integer = 0

            ' Loop through the list
            For Each row As MediasTagsRow In dt
                ' Set the font-size in a custom column
                row.c_FontSize = MIN_FONT_SIZE + groupCounter
                ' Increment the counter
                counter += 1
                ' If the current group is full, start a new group
                If groupLength <= counter Then
                    counter = 0
                    groupCounter += 1
                End If
            Next

            ' Return the new DataTable with font-size
            Return dt
        End Function

        ' Function that generates the random sub-set
        Public Shared Function GetRandomSampleTags(ByVal KeyCount As Integer) As MediasTagsDataTable
            ' Get the data
            Dim dt As MediasTagsDataTable = GetTags()
            ' Create a new DataTable that will contain the random set
            Dim rep As MediasTagsDataTable = New MediasTagsDataTable
            ' Count the number of rows in the new DataTable
            Dim count As Integer = 0
            ' Random number generator
            Dim rand As New Random()

            While count < KeyCount
                Randomize()
                ' Pick a random row (the upper bound of Random.Next is exclusive,
                ' so use Rows.Count to keep the last row eligible)
                Dim r As Integer = rand.Next(0, dt.Rows.Count)
                Dim tmpRow As MediasTagsRow = dt(r)
                ' Import it into the new DataTable
                rep.ImportRow(tmpRow)
                ' Remove it from the old one, to be sure not to pick it again
                dt.Rows.RemoveAt(r)
                ' Increment the counter
                count += 1
            End While

            ' Return the new sub-set
            Return rep
        End Function

    Pros

    This method is good because it doesn't require much work to get working fast. It is a good approach when you are working with small tables, let's say fewer than 100 records.

    Cons

    If you have more than 100 records, an out-of-memory exception may occur, since we are copying and duplicating rows. I would consider using a stored procedure instead.
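
    The same idea translates almost line for line into other languages; a quick Python sketch, purely illustrative and with made-up names, of the pick-random-rows-without-repeats step, using random.sample instead of the pick-then-remove loop:

        #!/usr/bin/env python
        # Illustrative sketch of the sampling step above: pick up to key_count
        # distinct rows at random and return them as a new list. "rows" stands
        # in for the typed DataTable; any sequence of row objects works.
        import random

        def get_random_sample(rows, key_count):
            """Return up to key_count randomly chosen rows, without repeats."""
            key_count = min(key_count, len(rows))
            return random.sample(rows, key_count)

        if __name__ == "__main__":
            tags = [{"tag": "t%d" % i, "clicks": 100 - i} for i in range(30)]
            for row in get_random_sample(tags, 5):
                print(row)

    random.sample draws without replacement in a single call, which sidesteps removing rows from the source table and leaves the original data intact.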

    Read the article

  • Redmine on Apache2 with Passenger issue

    - by nkr1pt
    I installed Redmine and run it in Apache2 with the Passenger module. Apache2 boots, Passenger module gets loaded and the Redmine welcome page is shown, however when trying to login or navigate to other parts of the Redmine site, the browser keeps loading and loading and loading forever, although the Redmine production.log indicates redirects and HTTP 200 codes in the header, so everything seems to work correctly according to the log. I tested in various browsers. Does anyone have an idea what could be wrong? I will add apache configuration and some relevant log snippets from both apache and redmine hereafter. Apache2 Redmine configuration: DocumentRoot /var/www <Directory /var/www/redmine> RailsEnv production AllowOverride all RailsBaseURI /redmine PassengerResolveSymLinksInDocumentRoot on </Directory> Apache2 error log after booting Apache: [Wed Feb 09 19:59:58 2011] [notice] Apache/2.2.14 (Ubuntu) Phusion_Passenger/3.0.2 DAV/2 SVN/1.6.6 configured -- resuming normal operations Redmine production log after logging in: Logfile created on Wed Feb 09 20:01:40 +0100 2011 Processing WelcomeController#index (for 192.168.1.55 at 2011-02-09 20:01:48) [GET] Parameters: {"action"=>"index", "controller"=>"welcome"} Rendering template within layouts/base Rendering welcome/index Completed in 220ms (View: 96, DB: 16) | 200 OK [http://sirius/redmine] Processing AccountController#login (for 192.168.1.55 at 2011-02-09 20:03:17) [GET] Parameters: {"action"=>"login", "controller"=>"account"} Rendering template within layouts/base Rendering account/login Completed in 85ms (View: 63, DB: 1) | 200 OK [http://sirius/redmine/login] Processing AccountController#login (for 192.168.1.55 at 2011-02-09 20:03:20) [POST] Parameters: {"back_url"=>"http%3A%2F%2Fsirius%2Fredmine", "action"=>"login", "authenticity_token"=>"cEMUZHhRKJU8w3p6d+xQQhJTk4/pnnzUdg5g5fwhxDU=", "username"=>"admin", "controller"=>"account", "password"=>"[FILTERED]", "login"=>"Login \302\273"} Redirected to http://sirius/redmine Completed in 37ms (DB: 6) | 302 Found [http://sirius/redmine/login] Processing WelcomeController#index (for 192.168.1.55 at 2011-02-09 20:03:20) [GET] Parameters: {"action"=>"index", "controller"=>"welcome"} Rendering template within layouts/base Rendering welcome/index Completed in 100ms (View: 77, DB: 6) | 200 OK [http://sirius/redmine] Apache2 error log afterwards: [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/apache/mod_instaweb.cc(247)] ModPagespeed OutputFilter called for request /redmine/login [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/apache/mod_instaweb.cc(272)] unparsed=/redmine/login, absolute_url=http://sirius/redmine/login [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: HtmlParse::StartParse [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/apache/mod_instaweb.cc(299)] Request headers:\nHTTP/1.1 0 Internal Server Error\r\nHost: sirius\r\nUser-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.8) Gecko/20100723 Ubuntu/10.04 (lucid) Firefox/3.6.8\r\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,/;q=0.8\r\nAccept-Language: en-us,en;q=0.5\r\nAccept-Encoding: gzip,deflate\r\nAccept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\nKeep-Alive: 115\r\nConnection: keep-alive\r\nReferer: http://sirius/redmine\r\nCookie: _redmine_session=BAh7BjoPc2Vzc2lvbl9pZCIlNmVlMzFiMDc4MWQxZDU5ZTI5MTk2NjU0NGY3MzJmYzQ%3D--ea4b7adbc35551051632b5544faaad138ae08d90\r\n\r\n 
[Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/apache/mod_instaweb.cc(302)] request-filename=/var/www/redmine/login, uri=/redmine/login [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/apache/mod_instaweb.cc(319)] ModPagespeed Response headers:\nHTTP/1.1 200 OK\r\nStatus: 200\r\nX-Mod-Pagespeed: 0.9.0.0-128\r\n\r\n [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 2157us: HtmlParse::Flush [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 2272us: HtmlParse::CoalesceAdjacentCharactersNodes [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 2342us: HtmlParse::ApplyFilter:AddHead [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 2407us: HtmlParse::SanityCheck [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 2504us: HtmlParse::ApplyFilter:CssCombine [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/apache/serf_url_async_fetcher.cc(632)] Initiating async fetch for http://sirius/redmine/stylesheets/application.css?1296181549 [Wed Feb 09 20:03:17 2011] [warn] [0209/200317:WARNING:net/instaweb/util/google_message_handler.cc(32)] Failed to create or read input resource /redmine/stylesheets/application.css?1296181549 [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/apache/serf_url_async_fetcher.cc(632)] Initiating async fetch for http://sirius/redmine/stylesheets/jstoolbar.css?1296181549 [Wed Feb 09 20:03:17 2011] [warn] [0209/200317:WARNING:net/instaweb/util/google_message_handler.cc(32)] Failed to create or read input resource /redmine/stylesheets/jstoolbar.css?1296181549 [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 3642us: HtmlParse::ApplyFilter:CssFilter [Wed Feb 09 20:03:17 2011] [error] [0209/200317:ERROR:net/instaweb/util/google_message_handler.cc(54)] net/instaweb/apache/serf_url_async_fetcher.cc:506: Creating connection [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/apache/serf_url_async_fetcher.cc(632)] Initiating async fetch for http://sirius/redmine/stylesheets/application.css?1296181549 [Wed Feb 09 20:03:17 2011] [error] [0209/200317:ERROR:net/instaweb/util/google_message_handler.cc(54)] http://sirius/redmine/login:9: Failed to load resource http://sirius/redmine/stylesheets/application.css?1296181549 [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/apache/serf_url_async_fetcher.cc(632)] Initiating async fetch for http://sirius/redmine/stylesheets/jstoolbar.css?1296181549 [Wed Feb 09 20:03:17 2011] [error] [0209/200317:ERROR:net/instaweb/util/google_message_handler.cc(54)] http://sirius/redmine/login:17: Failed to load resource http://sirius/redmine/stylesheets/jstoolbar.css?1296181549 [Wed Feb 09 20:03:17 2011] [error] [0209/200317:ERROR:net/instaweb/util/google_message_handler.cc(54)] net/instaweb/apache/serf_url_async_fetcher.cc:506: Failed to load resource http://sirius/redmine/stylesheets/jstoolbar.css?1296181549 [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 4863us: 
HtmlParse::ApplyFilter:Javascript [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:11: Found script with src /redmine/javascripts/prototype.js?1296181549 [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/apache/serf_url_async_fetcher.cc(632)] Initiating async fetch for http://sirius/redmine/javascripts/prototype.js?1296181549 [Wed Feb 09 20:03:17 2011] [error] [0209/200317:ERROR:net/instaweb/util/google_message_handler.cc(54)] net/instaweb/apache/serf_url_async_fetcher.cc:506: Creating connection [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(29)] http://sirius/redmine/login: Couldn't fetch resource /redmine/javascripts/prototype.js?1296181549 to rewrite. [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:12: Found script with src /redmine/javascripts/effects.js?1296181549 [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/apache/serf_url_async_fetcher.cc(632)] Initiating async fetch for http://sirius/redmine/javascripts/effects.js?1296181549 [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(29)] http://sirius/redmine/login: Couldn't fetch resource /redmine/javascripts/effects.js?1296181549 to rewrite. [Wed Feb 09 20:03:17 2011] [error] [0209/200317:ERROR:net/instaweb/util/google_message_handler.cc(54)] net/instaweb/apache/serf_url_async_fetcher.cc:506: http://sirius/redmine/login: Couldn't fetch resource /redmine/javascripts/effects.js?1296181549 to rewrite. [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:13: Found script with src /redmine/javascripts/dragdrop.js?1296181549 [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/apache/serf_url_async_fetcher.cc(632)] Initiating async fetch for http://sirius/redmine/javascripts/dragdrop.js?1296181549 [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(29)] Creating connection [Wed Feb 09 20:03:17 2011] [error] [0209/200317:ERROR:net/instaweb/util/google_message_handler.cc(54)] net/instaweb/apache/serf_url_async_fetcher.cc:506: Creating connection [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:14: Found script with src /redmine/javascripts/controls.js?1296181549 [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/apache/serf_url_async_fetcher.cc(632)] Initiating async fetch for http://sirius/redmine/javascripts/controls.js?1296181549 [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(29)] http://sirius/redmine/login: Couldn't fetch resource /redmine/javascripts/controls.js?1296181549 to rewrite. 
[Wed Feb 09 20:03:17 2011] [error] [0209/200317:ERROR:net/instaweb/util/google_message_handler.cc(54)] net/instaweb/apache/serf_url_async_fetcher.cc:506: Creating connection [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:15: Found script with src /redmine/javascripts/application.js?1296181549 [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/apache/serf_url_async_fetcher.cc(632)] Initiating async fetch for http://sirius/redmine/javascripts/application.js?1296181549 [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(29)] Creating connection [Wed Feb 09 20:03:17 2011] [error] [0209/200317:ERROR:net/instaweb/util/google_message_handler.cc(54)] net/instaweb/apache/serf_url_async_fetcher.cc:506: Creating connection [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 8389us: HtmlParse::SanityCheck [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 8588us: HtmlParse::CoalesceAdjacentCharactersNodes [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 8701us: HtmlParse::ApplyFilter:InlineCss [Wed Feb 09 20:03:17 2011] [error] [0209/200317:ERROR:net/instaweb/util/google_message_handler.cc(54)] net/instaweb/apache/serf_url_async_fetcher.cc:506: 8701us: HtmlParse::ApplyFilter:InlineCss [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/apache/serf_url_async_fetcher.cc(632)] Initiating async fetch for http://sirius/redmine/stylesheets/application.css?1296181549 [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(29)] http://sirius/redmine/login: Couldn't fetch resource /redmine/stylesheets/application.css?1296181549 to rewrite. [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 9199us: HtmlParse::ApplyFilter:InlineJs [Wed Feb 09 20:03:17 2011] [error] [0209/200317:ERROR:net/instaweb/util/google_message_handler.cc(54)] net/instaweb/apache/serf_url_async_fetcher.cc:506: Creating connection [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/apache/serf_url_async_fetcher.cc(632)] Initiating async fetch for http://sirius/redmine/javascripts/prototype.js?1296181549 [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(29)] http://sirius/redmine/login: Couldn't fetch resource /redmine/javascripts/prototype.js?1296181549 to rewrite. [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/apache/serf_url_async_fetcher.cc(632)] Initiating async fetch for http://sirius/redmine/javascripts/effects.js?1296181549 [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(29)] Creating connectionhttp://sirius/redmine/login: Couldn't fetch resource /redmine/javascripts/effects.js?1296181549 to rewrite. [Wed Feb 09 20:03:17 2011] [error] [0209/200317:ERROR:net/instaweb/util/google_message_handler.cc(54)] net/instaweb/apache/serf_url_async_fetcher.cc:506: Creating connectionhttp://sirius/redmine/login: Couldn't fetch resource /redmine/javascripts/effects.js?1296181549 to rewrite. 
[Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/apache/serf_url_async_fetcher.cc(632)] Initiating async fetch for http://sirius/redmine/javascripts/dragdrop.js?1296181549 [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(29)] http://sirius/redmine/login: Couldn't fetch resource /redmine/javascripts/dragdrop.js?1296181549 to rewrite. [Wed Feb 09 20:03:17 2011] [error] [0209/200317:ERROR:net/instaweb/util/google_message_handler.cc(54)] net/instaweb/apache/serf_url_async_fetcher.cc:506: Creating connection [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/apache/serf_url_async_fetcher.cc(632)] Initiating async fetch for http://sirius/redmine/javascripts/controls.js?1296181549 [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(29)] http://sirius/redmine/login: Couldn't fetch resource /redmine/javascripts/controls.js?1296181549 to rewrite. [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/apache/serf_url_async_fetcher.cc(632)] Initiating async fetch for http://sirius/redmine/javascripts/application.js?1296181549 [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(29)] http://sirius/redmine/login: Couldn't fetch resource /redmine/javascripts/application.js?1296181549 to rewrite. [Wed Feb 09 20:03:17 2011] [error] [0209/200317:ERROR:net/instaweb/util/google_message_handler.cc(54)] net/instaweb/apache/serf_url_async_fetcher.cc:506: Creating connection [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 11398us: HtmlParse::ApplyFilter:ImgRewrite [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 11506us: HtmlParse::ApplyFilter:CacheExtender [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/apache/serf_url_async_fetcher.cc(632)] Initiating async fetch for http://sirius/redmine/stylesheets/application.css?1296181549 [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(29)] http://sirius/redmine/login: Couldn't fetch resource /redmine/stylesheets/application.css?1296181549 to rewrite. [Wed Feb 09 20:03:17 2011] [error] [0209/200317:ERROR:net/instaweb/util/google_message_handler.cc(54)] net/instaweb/apache/serf_url_async_fetcher.cc:506: Creating connection [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/apache/serf_url_async_fetcher.cc(632)] Initiating async fetch for http://sirius/redmine/javascripts/prototype.js?1296181549 [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(29)] http://sirius/redmine/login: Couldn't fetch resource /redmine/javascripts/prototype.js?1296181549 to rewrite. [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/apache/serf_url_async_fetcher.cc(632)] Initiating async fetch for http://sirius/redmine/javascripts/effects.js?1296181549 [Wed Feb 09 20:03:17 2011] [error] [0209/200317:ERROR:net/instaweb/util/google_message_handler.cc(54)] net/instaweb/apache/serf_url_async_fetcher.cc:506: Creating connection [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(29)] http://sirius/redmine/login: Couldn't fetch resource /redmine/javascripts/effects.js?1296181549 to rewrite. 
[Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/apache/serf_url_async_fetcher.cc(632)] Initiating async fetch for http://sirius/redmine/javascripts/dragdrop.js?1296181549 [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(29)] http://sirius/redmine/login: Couldn't fetch resource /redmine/javascripts/dragdrop.js?1296181549 to rewrite. [Wed Feb 09 20:03:17 2011] [error] [0209/200317:ERROR:net/instaweb/util/google_message_handler.cc(54)] net/instaweb/apache/serf_url_async_fetcher.cc:506: Creating connection [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/apache/serf_url_async_fetcher.cc(632)] Initiating async fetch for http://sirius/redmine/javascripts/controls.js?1296181549 [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(29)] http://sirius/redmine/login: Couldn't fetch resource /redmine/javascripts/controls.js?1296181549 to rewrite. [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/apache/serf_url_async_fetcher.cc(632)] Initiating async fetch for http://sirius/redmine/javascripts/application.js?1296181549 [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(29)] http://sirius/redmine/login: Couldn't fetch resource /redmine/javascripts/application.js?1296181549 to rewrite. [Wed Feb 09 20:03:17 2011] [error] [0209/200317:ERROR:net/instaweb/util/google_message_handler.cc(54)] net/instaweb/apache/serf_url_async_fetcher.cc:506: Creating connection [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/apache/serf_url_async_fetcher.cc(632)] Initiating async fetch for http://sirius/redmine/stylesheets/jstoolbar.css?1296181549 [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(29)] http://sirius/redmine/login: Couldn't fetch resource /redmine/stylesheets/jstoolbar.css?1296181549 to rewrite. 
[Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 14401us: HtmlParse::ApplyFilter:HtmlWriter [Wed Feb 09 20:03:17 2011] [error] [0209/200317:ERROR:net/instaweb/util/google_message_handler.cc(54)] net/instaweb/apache/serf_url_async_fetcher.cc:506: Creating connection [Wed Feb 09 20:03:17 2011] [error] [0209/200317:ERROR:net/instaweb/util/google_message_handler.cc(54)] net/instaweb/apache/serf_url_async_fetcher.cc:506: Creating connection [Wed Feb 09 20:03:17 2011] [notice] [0209/200317:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 15218us: HtmlParse::FinishParse [Wed Feb 09 20:03:17 2011] [error] [0209/200317:ERROR:net/instaweb/util/google_message_handler.cc(54)] net/instaweb/apache/serf_url_async_fetcher.cc:506: Creating connection [Wed Feb 09 20:03:17 2011] [error] [0209/200317:ERROR:net/instaweb/util/google_message_handler.cc(54)] net/instaweb/apache/serf_url_async_fetcher.cc:506: Creating connection [Wed Feb 09 20:03:17 2011] [error] [0209/200317:ERROR:net/instaweb/util/google_message_handler.cc(54)] net/instaweb/apache/serf_url_async_fetcher.cc:506: Creating connection [Wed Feb 09 20:03:17 2011] [error] [0209/200317:ERROR:net/instaweb/util/google_message_handler.cc(54)] net/instaweb/apache/serf_url_async_fetcher.cc:506: Creating connection [Wed Feb 09 20:03:20 2011] [warn] [client 192.168.1.55] Not GET request: 2., referer: http://sirius/redmine/login [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/apache/mod_instaweb.cc(247)] ModPagespeed OutputFilter called for request /redmine/login [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/apache/mod_instaweb.cc(272)] unparsed=/redmine/login, absolute_url=http://sirius/redmine/login [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: HtmlParse::StartParse [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/apache/mod_instaweb.cc(299)] Request headers:\nHTTP/1.1 0 Internal Server Error\r\nHost: sirius\r\nUser-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.8) Gecko/20100723 Ubuntu/10.04 (lucid) Firefox/3.6.8\r\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,/;q=0.8\r\nAccept-Language: en-us,en;q=0.5\r\nAccept-Encoding: gzip,deflate\r\nAccept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\nKeep-Alive: 115\r\nConnection: keep-alive\r\nReferer: http://sirius/redmine/login\r\nCookie: _redmine_session=BAh7BzoPc2Vzc2lvbl9pZCIlNmVlMzFiMDc4MWQxZDU5ZTI5MTk2NjU0NGY3MzJmYzQ6EF9jc3JmX3Rva2VuIjFjRU1VWkhoUktKVTh3M3A2ZCt4UVFoSlRrNC9wbm56VWRnNWc1ZndoeERVPQ%3D%3D--8b195ac3cab88b5a1f408e3f18aaddc70782140e\r\nContent-Type: application/x-www-form-urlencoded\r\nContent-Length: 165\r\n\r\n [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/apache/mod_instaweb.cc(302)] request-filename=/var/www/redmine/login, uri=/redmine/login [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/apache/mod_instaweb.cc(319)] ModPagespeed Response headers:\nHTTP/1.1 302 Found\r\nLocation: http://sirius/redmine\r\nStatus: 302\r\nX-Mod-Pagespeed: 0.9.0.0-128\r\n\r\n [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 604us: HtmlParse::Flush [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 697us: 
HtmlParse::CoalesceAdjacentCharactersNodes [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 758us: HtmlParse::ApplyFilter:AddHead [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 813us: HtmlParse::SanityCheck [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 912us: HtmlParse::CoalesceAdjacentCharactersNodes [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 965us: HtmlParse::ApplyFilter:CssCombine [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 1020us: HtmlParse::ApplyFilter:CssFilter [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 1073us: HtmlParse::ApplyFilter:Javascript [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 1125us: HtmlParse::ApplyFilter:InlineCss [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 1179us: HtmlParse::ApplyFilter:InlineJs [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 1233us: HtmlParse::ApplyFilter:ImgRewrite [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 1285us: HtmlParse::ApplyFilter:CacheExtender [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 1338us: HtmlParse::ApplyFilter:HtmlWriter [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine/login:1: 1415us: HtmlParse::FinishParse [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/apache/mod_instaweb.cc(247)] ModPagespeed OutputFilter called for request /redmine [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/apache/mod_instaweb.cc(272)] unparsed=/redmine, absolute_url=http://sirius/redmine [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine:1: HtmlParse::StartParse [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/apache/mod_instaweb.cc(299)] Request headers:\nHTTP/1.1 0 Internal Server Error\r\nHost: sirius\r\nUser-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.8) Gecko/20100723 Ubuntu/10.04 (lucid) Firefox/3.6.8\r\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,/;q=0.8\r\nAccept-Language: en-us,en;q=0.5\r\nAccept-Encoding: gzip,deflate\r\nAccept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\nKeep-Alive: 115\r\nConnection: keep-alive\r\nReferer: http://sirius/redmine/login\r\nCookie: _redmine_session=BAh7BzoMdXNlcl9pZGkGOg9zZXNzaW9uX2lkIiVlYjNmYTY5NmZjNzMwYTdhMjA5ZDJmZmM4MTM0MzcyMw%3D%3D--57a4931aae681664d2a6ff6c039ac84b6ebc9e55\r\nIf-None-Match: "76628aff953f11fbdefb77ce3d575718"\r\n\r\n [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/apache/mod_instaweb.cc(302)] request-filename=/var/www/redmine, uri=/redmine [Wed Feb 09 20:03:20 2011] [notice] 
[0209/200320:INFO:net/instaweb/apache/mod_instaweb.cc(319)] ModPagespeed Response headers:\nHTTP/1.1 200 OK\r\nStatus: 200\r\nX-Mod-Pagespeed: 0.9.0.0-128\r\n\r\n [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine:1: 1870us: HtmlParse::Flush [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine:1: 1973us: HtmlParse::CoalesceAdjacentCharactersNodes [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine:1: 2040us: HtmlParse::ApplyFilter:AddHead [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine:1: 2101us: HtmlParse::SanityCheck [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/util/google_message_handler.cc(48)] http://sirius/redmine:1: 2231us: HtmlParse::ApplyFilter:CssCombine [Wed Feb 09 20:03:20 2011] [notice] [0209/200320:INFO:net/instaweb/apache/serf_url_async_fetcher.cc(632)] Initiating async fetch for http://sirius/redmine/stylesheets/application.css?1296181549

    Read the article

  • Mono 2.11 on nginx using fastcgi-mono-server4 will not work

    - by fuzzycow101
    I have mono 2.11 set up with my nginx 1.0.15 webserver running on centos 6.2. I built it from source and xps2, xps4 and fastcgi-mono-server2 work as expected. The problem is when I try and run fastcgi-mono-server4. When I run: fastcgi-mono-server4 /applications=site:/:/srv/www/html/ /socket=tcp:127.0.0.1:9000 /loglevels=Debug /printlog=true Here is what I get from fastcgi-mono-server2: [2012-06-06 23:51:07Z] Debug Record received. (Type: BeginRequest, ID: 1, Length: 8) [2012-06-06 23:51:07Z] Debug Record received. (Type: Params, ID: 1, Length: 801) [2012-06-06 23:51:07Z] Debug Record received. (Type: Params, ID: 1, Length: 0) [2012-06-06 23:51:07Z] Debug Read parameter. (QUERY_STRING = ) [2012-06-06 23:51:07Z] Debug Read parameter. (REQUEST_METHOD = GET) [2012-06-06 23:51:07Z] Debug Read parameter. (CONTENT_TYPE = ) [2012-06-06 23:51:07Z] Debug Read parameter. (CONTENT_LENGTH = ) [2012-06-06 23:51:07Z] Debug Read parameter. (SCRIPT_NAME = /) [2012-06-06 23:51:07Z] Debug Read parameter. (REQUEST_URI = /) [2012-06-06 23:51:07Z] Debug Read parameter. (DOCUMENT_URI = /) [2012-06-06 23:51:07Z] Debug Read parameter. (DOCUMENT_ROOT = /srv/www/html) [2012-06-06 23:51:07Z] Debug Read parameter. (SERVER_PROTOCOL = HTTP/1.1) [2012-06-06 23:51:07Z] Debug Read parameter. (GATEWAY_INTERFACE = CGI/1.1) [2012-06-06 23:51:07Z] Debug Read parameter. (SERVER_SOFTWARE = nginx/1.0.15) [2012-06-06 23:51:07Z] Debug Read parameter. (REMOTE_ADDR = 192.168.128.121) [2012-06-06 23:51:07Z] Debug Read parameter. (REMOTE_PORT = 62326) [2012-06-06 23:51:07Z] Debug Read parameter. (SERVER_ADDR = 192.168.128.125) [2012-06-06 23:51:07Z] Debug Read parameter. (SERVER_PORT = 80) [2012-06-06 23:51:07Z] Debug Read parameter. (SERVER_NAME = site) [2012-06-06 23:51:07Z] Debug Read parameter. (REDIRECT_STATUS = 200) [2012-06-06 23:51:07Z] Debug Read parameter. (PATH_INFO = ) [2012-06-06 23:51:07Z] Debug Read parameter. (SCRIPT_FILENAME = /srv/www/html/) [2012-06-06 23:51:07Z] Debug Read parameter. (HTTP_HOST = site) [2012-06-06 23:51:07Z] Debug Read parameter. (HTTP_USER_AGENT = Mozilla/5.0 (Windows NT 6.1; WOW64; rv:13.0) Gecko/20100101 Firefox/13.0) [2012-06-06 23:51:07Z] Debug Read parameter. (HTTP_ACCEPT = text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8) [2012-06-06 23:51:07Z] Debug Read parameter. (HTTP_ACCEPT_LANGUAGE = en-us,en;q=0.5) [2012-06-06 23:51:07Z] Debug Read parameter. (HTTP_ACCEPT_ENCODING = gzip, deflate) [2012-06-06 23:51:07Z] Debug Read parameter. (HTTP_CONNECTION = keep-alive) [2012-06-06 23:51:07Z] Debug Read parameter. (HTTP_COOKIE = ASP.NET_SessionId=0176BE8FC161E702439D3C91) [2012-06-06 23:51:07Z] Debug Record received. (Type: StandardInput, ID: 1, Length: 0) [2012-06-06 23:51:08Z] Debug Record sent. (Type: StandardOutput, ID: 1, Length: 196) [2012-06-06 23:51:08Z] Debug Record sent. (Type: StandardOutput, ID: 1, Length: 128) [2012-06-06 23:51:08Z] Debug Record sent. (Type: StandardOutput, ID: 1, Length: 0) [2012-06-06 23:51:08Z] Debug Record sent. (Type: EndRequest, ID: 1, Length: 8) And this is what I get from fastcgi-mono-server4: [2012-06-06 23:50:52Z] Debug Record received. (Type: BeginRequest, ID: 1, Length: 8) [2012-06-06 23:50:52Z] Debug Record received. (Type: Params, ID: 1, Length: 801) [2012-06-06 23:50:52Z] Debug Record received. (Type: Params, ID: 1, Length: 0) [2012-06-06 23:50:52Z] Debug Read parameter. (QUERY_STRING = ) [2012-06-06 23:50:52Z] Debug Read parameter. (REQUEST_METHOD = GET) [2012-06-06 23:50:52Z] Debug Read parameter. 
(CONTENT_TYPE = ) [2012-06-06 23:50:52Z] Debug Read parameter. (CONTENT_LENGTH = ) [2012-06-06 23:50:52Z] Debug Read parameter. (SCRIPT_NAME = /) [2012-06-06 23:50:52Z] Debug Read parameter. (REQUEST_URI = /) [2012-06-06 23:50:52Z] Debug Read parameter. (DOCUMENT_URI = /) [2012-06-06 23:50:52Z] Debug Read parameter. (DOCUMENT_ROOT = /srv/www/html) [2012-06-06 23:50:52Z] Debug Read parameter. (SERVER_PROTOCOL = HTTP/1.1) [2012-06-06 23:50:52Z] Debug Read parameter. (GATEWAY_INTERFACE = CGI/1.1) [2012-06-06 23:50:52Z] Debug Read parameter. (SERVER_SOFTWARE = nginx/1.0.15) [2012-06-06 23:50:52Z] Debug Read parameter. (REMOTE_ADDR = 192.168.128.121) [2012-06-06 23:50:52Z] Debug Read parameter. (REMOTE_PORT = 62326) [2012-06-06 23:50:52Z] Debug Read parameter. (SERVER_ADDR = 192.168.128.125) [2012-06-06 23:50:52Z] Debug Read parameter. (SERVER_PORT = 80) [2012-06-06 23:50:52Z] Debug Read parameter. (SERVER_NAME = site) [2012-06-06 23:50:52Z] Debug Read parameter. (REDIRECT_STATUS = 200) [2012-06-06 23:50:52Z] Debug Read parameter. (PATH_INFO = ) [2012-06-06 23:50:52Z] Debug Read parameter. (SCRIPT_FILENAME = /srv/www/html/) [2012-06-06 23:50:52Z] Debug Read parameter. (HTTP_HOST = site) [2012-06-06 23:50:52Z] Debug Read parameter. (HTTP_USER_AGENT = Mozilla/5.0 (Windows NT 6.1; WOW64; rv:13.0) Gecko/20100101 Firefox/13.0) [2012-06-06 23:50:52Z] Debug Read parameter. (HTTP_ACCEPT = text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8) [2012-06-06 23:50:52Z] Debug Read parameter. (HTTP_ACCEPT_LANGUAGE = en-us,en;q=0.5) [2012-06-06 23:50:52Z] Debug Read parameter. (HTTP_ACCEPT_ENCODING = gzip, deflate) [2012-06-06 23:50:52Z] Debug Read parameter. (HTTP_CONNECTION = keep-alive) [2012-06-06 23:50:52Z] Debug Read parameter. (HTTP_COOKIE = ASP.NET_SessionId=0176BE8FC161E702439D3C91) [2012-06-06 23:50:53Z] Debug Record received. (Type: StandardInput, ID: 1, Length: 0) [2012-06-06 23:50:53Z] Debug Record sent. (Type: EndRequest, ID: 1, Length: 8) I do not see what I am doing wrong. Any help would be great.
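
    For reference, here is a minimal sketch of the nginx server block that would match the /applications=site:/:/srv/www/html/ mapping used above. The server_name, root and TCP socket come from the question; the index/fastcgi_index values and the fastcgi_params include are assumptions, not a configuration known to fix the fastcgi-mono-server4 behaviour.

        server {
            listen       80;
            server_name  site;
            root         /srv/www/html;

            location / {
                index          index.html index.aspx default.aspx;
                fastcgi_index  default.aspx;
                fastcgi_pass   127.0.0.1:9000;   # same TCP socket the mono server listens on
                include        /etc/nginx/fastcgi_params;
            }
        }

    Comparing the parameters nginx sends (SCRIPT_NAME, PATH_INFO, SCRIPT_FILENAME) against the /applications mapping is a reasonable first check, since the server4 log above ends the request without ever writing a StandardOutput record.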

    Read the article

  • Real Excel Templates I

    - by Tim Dexter
    As promised, I'm starting to document the new Excel templates that I teased you all with a few weeks back. Leslie is buried in 11g documentation and will not get to officially documenting the templates for a while. I'll do my best to be professional and not ramble on about this and that, although the weather here has finally turned and it's 'scorchio' here in Colorado today. Maybe our stand of Aspen will finally come into leaf ... but I digress. Preamble These templates are not actually that new; I helped in a small way to develop them a few years back with Excel 'meistress' Shirley for a company that was trying to use the Report Manager (RR) Excel FSG outputs under EBS 12. The functionality they needed was just not there in the RR FSG templates; those templates are actually XSL that is created from the RR Excel template builder and fed to BIP for processing. Think of Excel from our RTF templates and you'll be there, i.e. not really Excel but HTML masquerading as Excel. Although still under controlled release in EBS, they have now made their way to the standalone release and are willing to share their Excel goodness. You get everything you have with the Excel Analyzer Excel templates plus so much more. Therein lies a question: what will happen to the Analyzer templates? My understanding is that both will come together into a single Excel template format some time in the post-11g release world. The new XLSX format for Excel 2007/10 is also in the mix, so watch this space. What more do these templates offer? Well, you can structure data in the Excel output. Similar to RTF templates, you can create sheets of data that have master-detail relationships. Although the Analyzer templates can do this, you have to get into macros, whereas BIP will do this all for you. You can also use native XSL functions in your data to manipulate it prior to rendering. BIP functions are not currently supported. The most impressive, for me at least, is the sheet 'bursting'. You can split your hierarchical data across multiple sheets and dynamically name those sheets. Finally, you, of course, still get all the native Excel functionality. Pre-reqs You must be on 10.1.3.4.1 plus the latest rollup patch, 9546699. You can patch up a BIP instance running with OBIEE, no problem. You need Excel 2000 or above to build the templates. Some patience - there is no Excel template builder for these new templates, so it's all going to have to be done by hand. It's not that tough but can get a little 'fiddly'. You cannot test the template from Excel; it has to be deployed and then run. Limitations The new templates are definitely superior to the Analyzer templates but there are a few limitations. Re-grouping is not supported. You can only follow a data hierarchy, not bend it to your will, unless you want to get into macros. No support for BIP functions. The templates support native XSL functions only. No template builder. Getting Started The templates make use of named cells and groups of cells to allow BIP to find the insertion point for data points. They also use a hidden sheet to store calculation mappings from named cells to XML data elements. To start with, in the great BIP tradition, we need some sample XML data. Because I wanted to show the master-detail output, we need some hierarchical data. If you have not yet gotten into the data templates, now is a good time; I wrote a post a while back going from the simple to the more complex. They generate ideal data sets for these templates. 
I'm working with the following data set: <EMPLOYEES> <LIST_G_DEPT> <G_DEPT> <DEPARTMENT_ID>10</DEPARTMENT_ID> <DEPARTMENT_NAME>Administration</DEPARTMENT_NAME> <LIST_G_EMP> <G_EMP> <EMPLOYEE_ID>200</EMPLOYEE_ID> <EMP_NAME>Jennifer Whalen</EMP_NAME> <EMAIL>JWHALEN</EMAIL> <PHONE_NUMBER>515.123.4444</PHONE_NUMBER> <HIRE_DATE>1987-09-17T00:00:00.000-06:00</HIRE_DATE> <SALARY>4400</SALARY> </G_EMP> </LIST_G_EMP> <TOTAL_EMPS>1</TOTAL_EMPS> <TOTAL_SALARY>4400</TOTAL_SALARY> <AVG_SALARY>4400</AVG_SALARY> <MAX_SALARY>4400</MAX_SALARY> <MIN_SALARY>4400</MIN_SALARY> </G_DEPT> ... </LIST_G_DEPT> </EMPLOYEES> Simple enough to follow, and bread-and-butter stuff for an RTF template. Building the Template For an Excel template we need to start by thinking about how we want to render the data. Come up with a sample output in Excel. It's all dummy data, nothing marked up yet, with one row of data for each level. I have the department name and then a repeating row for the employees. You can apply Excel formatting to the layout. The total is going to be derived from a data element. We'll get to Excel functions later. Marking Up Cells Next we need to start marking up the cells with custom names to map them to data elements. The cell names need to follow a specific format: for data grouping, XDO_GROUP_?group_name?; for data elements, XDO_?element_name?. Notice the question mark delimiter; the group_name and element_name are case sensitive. The next step is to find out how to name cells; the easiest method is to highlight the cell and then type in the name. You can also use the Name Manager dialog. I use 2007 and it's available on the ribbon under the Formulas section. Go through the process of naming all the cells for the element values you have. Using my data set from above, you should end up with something like this in your 'Name Manager' dialog. You can update any mistakes you might have made through this dialog. Creating Groups In the image above you can see there are a couple of named group cells. To create these, it's a simple case of highlighting the cells that make up the group and then naming them. For the EMP group, highlight the employee row and then type in the name, XDO_GROUP_?G_EMP?. Notice the 10,000 total is outside of the G_EMP group. It's actually named XDO_?TOTAL_SALARY?, a query-calculated value. For the department group, we need to include the department name cell and the sub EMP grouping and name it XDO_GROUP_?G_DEPT?. Notice the 10,000 total is included in the G_DEPT group. This will ensure it repeats at the department level. Lastly, we do need to include a special sheet in the workbook. We will not have anything meaningful in there for now, but it needs to be present. Create a new sheet and name it XDO_METADATA. The name is important, as the BIP rendering engine will be looking for it. For our current example we do not need anything other than the required stuff in our XDO_METADATA sheet, but it must be present. Easy enough to hide it. Here's what I have: the only cell that is important is the 'Data Constraints:' cell. The rest is optional. To save curious users getting distracted, hide the metadata sheet. Deploying & Running Templates We should now have a usable Excel template. Loading it into a report is easy enough using the browser UI, just like an RTF template. Set the template type to Excel. You will now be able to run the report and hopefully get something like this. You will not get the red highlighting; that's just some conditional formatting I added to the template using Excel functionality. 
Your dates are probably going to look raw too. I got around this for now using an Excel function on the cell: =--REPLACE(SUBSTITUTE(E8,"T"," "),LEN(E8)-6,6,"") Google to the rescue on that one. Try some other stuff out. To avoid constantly loading the template through the UI: if you have BIP running locally, or you can access the reports repository, then once you have loaded the template the first time, just save the template directly into the report folder. I have put together a sample report using a sample data set, available here. Just drop the XML data file, EmpbyDeptExcelData.xml, into the 'demo files' folder and you should be good to go. That's the basics; next we'll start using some XSL functions in the template and move on to the 'bursting' across sheets.

    Read the article

  • How to fix Solr - Server is shutting down issue?

    - by Krunal
    I was having a running Solr 4.1 on Windows Server 2008 R2. The Solr is deployed on Tomcat. However, today it stops suddenly, and while accessing Solr it gives following error. HTTP Status 503 - Server is shutting down type Status report message Server is shutting down description The requested service is not currently available. On further looking into Logs, we got following: Log File: tomcat7-stderr.2013-05-09.txt May 09, 2013 8:00:40 PM org.apache.solr.core.CoreContainer finalize SEVERE: CoreContainer was not shutdown prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!! instance=2221663 Log File: catalina.2013-05-09.txt May 09, 2013 7:59:25 PM org.apache.solr.core.SolrResourceLoader <init> INFO: new SolrResourceLoader for directory: 'c:\solrdir\' May 09, 2013 7:59:29 PM org.apache.solr.common.SolrException log SEVERE: Exception during parsing file: null:org.xml.sax.SAXParseException; systemId: file:/c:/solr/solr.xml; lineNumber: 2; columnNumber: 6; The processing instruction target matching "[xX][mM][lL]" is not allowed. at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(Unknown Source) at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLScanner.scanPIData(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanPIData(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLScanner.scanPI(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(Unknown Source) at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(Unknown Source) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(Unknown Source) at org.apache.solr.core.Config.<init>(Config.java:121) at org.apache.solr.core.CoreContainer.load(CoreContainer.java:428) at org.apache.solr.core.CoreContainer.load(CoreContainer.java:404) at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:336) at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:98) at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:281) at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:262) at org.apache.catalina.core.ApplicationFilterConfig.<init>(ApplicationFilterConfig.java:107) at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4656) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5309) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901) at 
org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:633) at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:977) at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1655) at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) at java.util.concurrent.FutureTask.run(Unknown Source) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) May 09, 2013 7:59:29 PM org.apache.solr.servlet.SolrDispatchFilter init SEVERE: Could not start Solr. Check solr/home property and the logs May 09, 2013 7:59:29 PM org.apache.solr.common.SolrException log SEVERE: null:org.apache.solr.common.SolrException: at org.apache.solr.core.CoreContainer.load(CoreContainer.java:431) at org.apache.solr.core.CoreContainer.load(CoreContainer.java:404) at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:336) at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:98) at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:281) at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:262) at org.apache.catalina.core.ApplicationFilterConfig.<init>(ApplicationFilterConfig.java:107) at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4656) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5309) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:633) at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:977) at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1655) at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) at java.util.concurrent.FutureTask.run(Unknown Source) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: org.xml.sax.SAXParseException; systemId: file:/c:/solrdir/solr.xml; lineNumber: 2; columnNumber: 6; The processing instruction target matching "[xX][mM][lL]" is not allowed. 
at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(Unknown Source) at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLScanner.scanPIData(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanPIData(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLScanner.scanPI(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(Unknown Source) at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(Unknown Source) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(Unknown Source) at org.apache.solr.core.Config.<init>(Config.java:121) at org.apache.solr.core.CoreContainer.load(CoreContainer.java:428) ... 20 more May 09, 2013 7:59:29 PM org.apache.solr.servlet.SolrDispatchFilter init INFO: SolrDispatchFilter.init() done May 09, 2013 7:59:29 PM org.apache.catalina.startup.HostConfig deployDirectory INFO: Deploying web application directory C:\Program Files (x86)\Apache Software Foundation\Tomcat 7.0\webapps\docs May 09, 2013 7:59:30 PM org.apache.catalina.startup.HostConfig deployDirectory INFO: Deploying web application directory C:\Program Files (x86)\Apache Software Foundation\Tomcat 7.0\webapps\manager May 09, 2013 7:59:30 PM org.apache.catalina.startup.HostConfig deployDirectory INFO: Deploying web application directory C:\Program Files (x86)\Apache Software Foundation\Tomcat 7.0\webapps\ROOT May 09, 2013 7:59:30 PM org.apache.coyote.AbstractProtocol start INFO: Starting ProtocolHandler ["http-bio-8983"] May 09, 2013 7:59:30 PM org.apache.coyote.AbstractProtocol start INFO: Starting ProtocolHandler ["ajp-bio-8009"] May 09, 2013 7:59:30 PM org.apache.catalina.startup.Catalina start INFO: Server startup in 9578 ms May 09, 2013 8:00:40 PM org.apache.solr.core.CoreContainer finalize SEVERE: CoreContainer was not shutdown prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!! instance=2221663 Any idea what could be wrong and how to fix?
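
    The SAXParseException in the trace ("The processing instruction target matching "[xX][mM][lL]" is not allowed", line 2, column 6) is the parser's way of saying that the <?xml ... ?> declaration in solr.xml is no longer the very first thing in the file: a blank line, stray characters or a UTF-8 BOM written by an editor in front of it produces exactly this error. As a sketch (the core names are illustrative; the structure follows the Solr 4.x legacy solr.xml layout), the file should begin at line 1, column 1 like this:

        <?xml version="1.0" encoding="UTF-8" ?>
        <solr persistent="true">
          <cores adminPath="/admin/cores" defaultCoreName="collection1">
            <core name="collection1" instanceDir="collection1" />
          </cores>
        </solr>

    Re-saving the file without a BOM or leading whitespace (or restoring it from a backup) and restarting Tomcat is usually enough once the prolog is clean.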

    Read the article

  • Productive Toolset for C# Developer

    - by Marko Apfel
    Programming Visual Studio ReSharper Agent Johnson Agent Smith StyleCop for ReSharper Keymaps SettingsManager Git Source Control Provider Gist NuGet Package Manager NDepend Productivity Power Tools PowerCommands for Visual Studio PostSharp Indent Guides Typemock Isolator VSCommands Ressource Refactor Clone Detective GhostDoc CR_Documentor AnkSVN Expression Blend SharpDevelop Notepad++, PS Pad StyleCop, FxCop, .. .NET Reflector, ILSpy, dotPeek, Just Decompile Git Extensions inkl. MSysGit, MinGW Github for Windows SmartGit PoSH-Git Console Enhancement Project LINQPad Mercurial RapidSVN SQL Management Studio Adventure Works Sample DB AdventureWorksLT Toad for SQL Server yEd Graph Editor TeX, LateX MiKTeX, TeXworks Pandoc Jenkins, TeamCity KompoZer XML Notepad Kaxaml KDiff3, WinMerge, Perforce Merge Handle DbgView FusLogVw FTP Commander HTML Help Workshop, Sandcastle, SHFB WiX Enterprise Architect InsightProfiler Putty Cygwin DXCore, DXCore Plugins FreeMind ProcessExplorer, ProcessMonitor Social Networking, Community Windows Live Writer Disgsby Skype TweetDeck FeedReader Sytem and others Microsoft Office (notably OneNote!!!) Adobe Reader PDF Creator SRWare Iron (Chrome) AddThis bit-ly del.icio.us InstaPaper Leo Dictionary Google Bookmarks Proxy Switchy! StumbleUpon K-Meleon FreeCommander, FAR 7-Zip Keyboard Jedi Launchy TrueCrypt Dropbox Ditto Greenshot Rainlendar2 Everything Daemon Tools inSSIDer VirtualBox Stardock Fences Media Player Classic VLC Media Player Winamp WinAmp Cue Player LAME Encoder CamStudio Youtube to MP3 Converter VirtualDub Image Resizer Powertoy Clone 2.0 Paint.NET Picasa Windy JediConcentrate, Ghoster TeamViewer Timerle TreeSizeFree WinDirStat Windows Sizer, WinResizer ZoomIt Sometimes nice to have ArcGIS TortoiseSVN, TortoiseCVS XnView GitJungle CowSpy Grindstone Free Download Manager CDBurnerXP Free Audio CD Burner SmartAssembly intellibook GMX SMS Manager BlackBerry Desktop Cisco Any Connect eRoom Foxit Reader Google Earth ThinkVantage GPS Gridy Bluefish The GodFather Tor Browser, Charon YouTube Downloader NCover Network Stumbler Remote Debugger WScite XML Pad DBVisualizer Microsoft Network Monitor, Fiddler2 Eclipse IDE Oracle Client, Oracle SQL Developer Bookmarks, Links http://pastebin.de/, http://pastebin.com/ http://followup.cc  http://trello.com http://tumblr.com https://bitly.com/, http://is.gd http://www.famkruithof.net/uuid/uuidgen, http://www.guidgenerator.com/ https://github.com/, https://bitbucket.org/ http://dict.leo.org/, http://translate.google.com/ http://prezi.com/ http://geekswithblogs.net/Default.aspx, http://codebetter.com/ http://duckduckgo.com/bang.html   http://de.schreibtrainer.com/index.php?site=3&menuId=3 http://www.mr-wetter.de/ this is an update to http://geekswithblogs.net/mapfel/archive/2010/07/12/140877.aspx

    Read the article

  • Tomcat6 Manager Webapp is 404 on apt-get install on Ubuntu 10.10

    - by Noel
    http://localhost:8080/manager/html gives a 404 error on apt-get install of tomcat6 (6.0.28 on JVM 1.6.0_20-b20 on 2.6.35-27-generic amd64). http://localhost:8080/host-manager/html works. Lists one Host name, localhost. Installed tomcat6-admin with apt-get. $ ls dpkg -l | grep -i tomcat6-admin ii tomcat6-admin 6.0.28-2ubuntu1.1 Servlet and JSP engine -- admin web applications $ cat /usr/share/tomcat6/conf/tomcat-users.xml <tomcat-users> <role rolename="admin"/> <role rolename="manager" /> <user username="tomcatuser" password="Password1" roles="admin,manager"/> </tomcat-users> $ cat /usr/share/tomcat6/conf/Catalina/localhost/manager.xml <Context path="/manager" docBase="/usr/share/tomcat6-admin/manager" antiResourceLocking="false" privileged="true" /> <role name="manager" /> <user name="manager" password="Password1" roles="manager" /> <user name="tomcatuser" password="Password1" roles="manager" /> Those two files are the only documentation I've seen on how to setup the Manager webapp, and they seem to be compliant with the requirements.
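
    One detail worth noting: <role> and <user> elements belong only in tomcat-users.xml; a Context descriptor such as manager.xml should not contain them. A sketch of the two files, using the paths from the question (on the Ubuntu packages /usr/share/tomcat6/conf is normally a link to /etc/tomcat6):

        <!-- /usr/share/tomcat6/conf/Catalina/localhost/manager.xml -->
        <Context path="/manager" docBase="/usr/share/tomcat6-admin/manager"
                 antiResourceLocking="false" privileged="true" />

        <!-- /usr/share/tomcat6/conf/tomcat-users.xml -->
        <tomcat-users>
          <role rolename="admin"/>
          <role rolename="manager"/>
          <user username="tomcatuser" password="Password1" roles="admin,manager"/>
        </tomcat-users>

    Tomcat needs a restart (service tomcat6 restart) after either file changes before the manager roles are picked up.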

    Read the article

  • Tomcat6 Manager Webapp returns a 404

    - by Noel
    http://localhost:8080/manager/html gives a 404 error on apt-get install of tomcat6 (6.0.28 on JVM 1.6.0_20-b20 on 2.6.35-27-generic amd64). http://localhost:8080/host-manager/html works. Lists one Host name, localhost. Installed tomcat6-admin with apt-get. ls dpkg -l | grep -i tomcat6-admin ii tomcat6-admin 6.0.28-2ubuntu1.1 Servlet and JSP engine -- admin web applications $ cat /usr/share/tomcat6/conf/tomcat-users.xml <tomcat-users> <role rolename="admin"/> <role rolename="manager" /> <user username="tomcatuser" password="Password1" roles="admin,manager"/> </tomcat-users> cat /usr/share/tomcat6/conf/Catalina/localhost/manager.xml <Context path="/manager" docBase="/usr/share/tomcat6-admin/manager" antiResourceLocking="false" privileged="true" /> <role name="manager" /> <user name="manager" password="Password1" roles="manager" /> <user name="tomcatuser" password="Password1" roles="manager" /> Those two files are the only documentation I've seen on how to setup the Manager webapp, and they seem to be compliant with the requirements.

    Read the article

  • SQL SERVER – Copy Data from One Table to Another Table – SQL in Sixty Seconds #031 – Video

    - by pinaldave
    Copying data from one table to another is one of the most frequently requested topics on forums, Facebook and Twitter. The question comes in many formats, and in many places I have seen developers using a cursor instead of this direct method. I wrote a similar article a few years ago: SQL SERVER – Insert Data From One Table to Another Table – INSERT INTO SELECT – SELECT INTO TABLE. The article has been very popular and I have received many interesting and constructive comments. However, two specific comments keep ending up in my mailbox: 1) the SQL Server AdventureWorks samples database does not have the table I used in the example, and 2) is there a video tutorial of the same example? After thinking it over carefully, I decided to build a new set of scripts for the example, very similar to the old ones, as well as a video tutorial. There was no better place than our SQL in Sixty Seconds series to cover this small but interesting concept. Let me know what you think of this video. Here is the updated script. -- Method 1 : INSERT INTO SELECT USE AdventureWorks2012 GO ----Create TestTable CREATE TABLE TestTable (FirstName VARCHAR(100), LastName VARCHAR(100)) ----INSERT INTO TestTable using SELECT INSERT INTO TestTable (FirstName, LastName) SELECT FirstName, LastName FROM Person.Person WHERE EmailPromotion = 2 ----Verify that Data in TestTable SELECT FirstName, LastName FROM TestTable ----Clean Up Database DROP TABLE TestTable GO --------------------------------------------------------- --------------------------------------------------------- -- Method 2 : SELECT INTO USE AdventureWorks2012 GO ----Create new table and insert into table using SELECT INTO SELECT FirstName, LastName INTO TestTable FROM Person.Person WHERE EmailPromotion = 2 ----Verify that Data in TestTable SELECT FirstName, LastName FROM TestTable ----Clean Up Database DROP TABLE TestTable GO Related Tips in SQL in Sixty Seconds: SQL SERVER – Insert Data From One Table to Another Table – INSERT INTO SELECT – SELECT INTO TABLE Powershell – Importing CSV File Into Database – Video SQL SERVER – 2005 – Export Data From SQL Server 2005 to Microsoft Excel Datasheet SQL SERVER – Import CSV File into Database Table Using SSIS SQL SERVER – Import CSV File Into SQL Server Using Bulk Insert – Load Comma Delimited File Into SQL Server SQL SERVER – 2005 – Generate Script with Data from Database – Database Publishing Wizard What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com)   Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Excel

    Read the article

  • Using linked servers, OPENROWSET and OPENQUERY

    - by BuckWoody
    SQL Server has a few mechanisms to reach out to another server (even another server type) and query data from within a Transact-SQL statement. Among them are a set of stored credentials and information (called a Linked Server), a statement that uses a linked server called OPENQUERY, another called OPENROWSET, and one called OPENDATASOURCE. This post isn’t about those particular functions or statements – hit the links for more if you’re new to those topics. I’m actually more concerned about where I see these used than the particular method. In many cases, a linked server isn’t another Relational Database Management System (RDBMS) like Oracle or DB2 (which is possible with a linked server), but another SQL Server. My concern is that linked servers are the new Data Transformation Services (DTS) from SQL Server 2000 – something that was designed for one purpose but which is being morphed into something much more. In the case of DTS, most of us turned that feature into a full-fledged job system. What was designed as a simple data import and export system has been pressed into service doing logic, routing and timing. And of course we all know how painful it was to move off of a complex DTS system onto SQL Server Integration Services. In the case of linked servers, what should be used as a method of running a simple query or two on another server where you have an occasional connection or need a quick import of a small data set is morphing into a full federation strategy. In some cases I’ve seen a complex web of linked servers, and when credentials, names or anything else changes there are huge problems. Now don’t get me wrong – linked servers and other forms of distributing queries are a fantastic set of tools that we have to move data around. I’m just saying that when you start having lots of workarounds and when things get really complicated, you might want to step back a little and ask if there’s a better way. Are you able to tolerate some latency? Perhaps you’re able to use Service Broker. Would you like to be platform-independent on the data source? Perhaps a middle tier might make more sense, abstracting the queries there and sending them to the proper server. Designed properly, I’ve seen these systems scale further and be more resilient than loading up on linked servers.
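
    For readers new to the statements named above, a minimal sketch of the two query styles against a linked SQL Server; the linked server name and data source are placeholders:

        -- one-time registration of the linked server (placeholder names)
        EXEC sp_addlinkedserver
             @server = N'REMOTESQL', @srvproduct = N'',
             @provider = N'SQLNCLI', @datasrc = N'remotehost';

        -- pass-through query: the remote server executes the inner statement as written
        SELECT * FROM OPENQUERY(REMOTESQL, 'SELECT name, create_date FROM sys.databases');

        -- four-part name: the local optimizer decides what to ship to the remote server
        SELECT name, create_date FROM REMOTESQL.master.sys.databases;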

    Read the article

  • LLBLGen Pro v3.0 has been released!

    - by FransBouma
    After two years of hard work we released v3.0 of LLBLGen Pro today! V3.0 comes with a completely new designer which has been developed from the ground up for .NET 3.5 and higher. Below I'll briefly mention some highlights of this new release: Entity Framework (v1 & v4) support NHibernate support (hbm.xml mappings & FluentNHibernate mappings) Linq to SQL support Allows both Model first and Database first development, or a mixture of both .NET 4.0 support Model views Grouping of project elements Linq-based project search Value Type (DDD) support Multiple Database types in single project XML based project file Integrated template editor Relational Model Data management Flexible attribute declaration for code generation, no more buddy classes needed Fine-grained project validation Update / Create DDL SQL scripts Fast Text-DSL based Quick mode Powerful text-DSL based Quick Model functionality Per target framework extensible settings framework much much more... Of course we still support our own O/R mapper framework: LLBLGen Pro v3.0 Runtime framework as well, which was updated with some minor features and was upgraded to use the DbProviderFactory system. Please watch the videos of the designer (more to come very soon!) to see some aspects of the new designer in action. The full version comes with Algorithmia in sourcecode as well. Algorithmia is an algorithm library written for .NET 3.5 which powers the heart of the designer with a fine-grained undo/redo command framework, graph classes and much more. I'd like to thank all beta-testers, our support team and others who have helped us with this massive release. :)

    Read the article

  • Php pdo_dblib - cannot find/unable to load freetds

    - by MaxPowers
    Self-hosted box, RHEL 6 PHP 5.3.3 PDO installed freetds installed pdo_dblib - so far no luck installing My goal is to use PDO with sybase. Attempting to install pdo_dblib from the appropriate version php source code. I have tried a variety of methods and searched quite a bit for help on this topic, but have yet to be successful. Method 1 Install freetds $ ./configure $ make $ su root Password: $ make install This is successful Install pdo_dblib inside the /ext/pdo_dblib folder: $ phpize $ ./configure $ make $ make test Error output: PHP Warning: PHP Startup: Unable to load dynamic library '/home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so' - /home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so: undefined symbol: php_pdo_register_driver in Unknown on line 0 Warning: PHP Startup: Unable to load dynamic library '/home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so' - /home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so: undefined symbol: php_pdo_register_driver in Unknown on line 0 PHP Warning: PHP Startup: Unable to load dynamic library '/home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so' - /home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so: undefined symbol: php_pdo_register_driver in Unknown on line 0 Warning: PHP Startup: Unable to load dynamic library '/home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so' - /home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so: undefined symbol: php_pdo_register_driver in Unknown on line 0 That doesn't look good...I researched this and found an interesting hack for this here. But changing pdo.ini to pdo_0.ini was not the solution, as I still got the same errors on make test. $ su $ make install Output: Installing shared extensions: /usr/lib64/php/modules/ That seems strange...and no, it doesn't actually install (not showing up on phpinfo after apache restart). Method 2 Install freetds following the instructions exactly, i add the prefix $ ./configure --prefix=/usr/local/freetds $ make $ su root Password: $ make install This is successful Install pdo_dblib inside the /ext/pdo_dblib folder: $ phpize $ ./configure --with-sybase=/usr/local/freetds This produces the following error at the bottom of the output ... checking for PDO_DBLIB support via FreeTDS... yes, shared configure: error: Cannot find FreeTDS in known installation directories Method 3 freetds ./configure variation (including or not include the --prefix...) did not change the result of this so I'll skip it. Install pdo_dblib pecl extension following the method specified here. pecl download pdo_dblib tar -xzvf PDO_DBLIB-1.0.tgz Removed the line, <dep type=”ext” rel=”ge” version=”1.0?>pdo</dep> Saved the package.xml file, and moved it in to the PDO_DBLIB directory. mv package.xml ./PDO_DBLIB-1.0 Navigated to the PDO_DBLIB directory, then installed the package from the directory. cd ./PDO_DBLIB-1.0 pecl install package.xml But, this command gives me the following error output, same as Method 2. checking for PDO_DBLIB support via FreeTDS... yes, shared configure: error: Cannot find FreeTDS in known installation directories ERROR: `/home/sybase/Install_items/pecl_pdo_dblib/PDO_DBLIB-1.0/configure' failed
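
    One thing that stands out in Methods 2 and 3: the configure switch for this extension is --with-pdo-dblib, not --with-sybase, and it takes the FreeTDS install prefix as its value. A sketch of the build against the /usr/local/freetds prefix used in Method 2 (the /etc/php.d path is an assumption for the RHEL 6 packaging):

        cd php-5.3.3/ext/pdo_dblib
        phpize
        ./configure --with-pdo-dblib=/usr/local/freetds
        make
        sudo make install                               # drops pdo_dblib.so into the PHP extension dir
        echo "extension=pdo_dblib.so" | sudo tee /etc/php.d/pdo_dblib.ini
        sudo service httpd restart

    The 'undefined symbol: php_pdo_register_driver' messages from Method 1 are the classic symptom of pdo_dblib being loaded without the core pdo extension being available first, so it is worth confirming that pdo.so loads before pdo_dblib.so once the new module is in place.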

    Read the article

  • connect() failed (111: Connection refused) while connecting to upstream

    - by Burning the Codeigniter
    I'm experiencing 502 gateway errors when accessing a PHP file in a directory (http://domain.com/dev/index.php), the logs simply says this: 2011/09/30 23:47:54 [error] 31160#0: *35 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xx.xx.xx, server: domain.com, request: "GET /dev/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "domain.com" I've never experienced this before, how do I do a solution for this type of 502 gateway error? This is the nginx.conf: user www-data; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } #mail { # # See sample authentication script at: # # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript # # # auth_http localhost/auth.php; # # pop3_capabilities "TOP" "USER"; # # imap_capabilities "IMAP4rev1" "UIDPLUS"; # # server { # listen localhost:110; # protocol pop3; # proxy on; # } # # server { # listen localhost:143; # protocol imap; # proxy on; # } #}
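
    "connect() failed (111: Connection refused) ... upstream: fastcgi://127.0.0.1:9000" means nginx itself is fine but nothing is accepting connections on 127.0.0.1:9000; with a setup like this that is normally PHP-FPM either not running or listening on a unix socket instead of that TCP port. A quick check, with the Debian/Ubuntu php5-fpm paths as an assumption:

        # is anything listening where nginx expects?
        netstat -lnpt | grep ':9000'
        # if not, see how php-fpm is actually configured to listen
        grep -r '^listen' /etc/php5/fpm/pool.d/
        # then either set "listen = 127.0.0.1:9000" there and restart php-fpm,
        # or point the vhost at the socket it really uses:
        #     fastcgi_pass unix:/var/run/php5-fpm.sock;
        sudo service php5-fpm restart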

    Read the article

  • Combining HBase and HDFS results in Exception in makeDirOnFileSystem

    - by utrecht
    Introduction An attempt to combine HBase and HDFS results in the following: 2014-06-09 00:15:14,777 WARN org.apache.hadoop.hbase.HBaseFileSystem: Create Dir ectory, retries exhausted 2014-06-09 00:15:14,780 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown. java.io.IOException: Exception in makeDirOnFileSystem at org.apache.hadoop.hbase.HBaseFileSystem.makeDirOnFileSystem(HBaseFile System.java:136) at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFi leSystem.java:428) at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSyst emLayout(MasterFileSystem.java:148) at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSyst em.java:133) at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.j ava:572) at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:432) at java.lang.Thread.run(Thread.java:744) Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/":vagrant:supergroup:drwxr-xr-x at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPe rmissionChecker.java:224) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPe rmissionChecker.java:204) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermi ssion(FSPermissionChecker.java:149) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(F SNamesystem.java:4891) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(F SNamesystem.java:4873) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAcce ss(FSNamesystem.java:4847) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FS Namesystem.java:3192) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNames ystem.java:3156) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesyst em.java:3137) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameN odeRpcServer.java:669) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTra nslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:419) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$Cl ientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:4497 0) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.cal l(ProtobufRpcEngine.java:453) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1752) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1748) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInforma tion.java:1438) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1746) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstruct orAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingC onstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteExce ption.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteExc eption.java:57) at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2153) at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2122) at 
org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSy stem.java:545) at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1915) at org.apache.hadoop.hbase.HBaseFileSystem.makeDirOnFileSystem(HBaseFile System.java:129) ... 6 more while configuration and system settings are as follows: [vagrant@localhost hadoop-hdfs]$ hadoop fs -ls hdfs://localhost/ Found 1 items -rw-r--r-- 3 vagrant supergroup 1010827264 2014-06-08 19:01 hdfs://localhost/u buntu-14.04-desktop-amd64.iso [vagrant@localhost hadoop-hdfs]$ /etc/hadoop/conf/core-site.xml <configuration> <property> <name>fs.defaultFS</name> <value>hdfs://localhost:8020</value> </property> </configuration> /etc/hbase/conf/hbase-site.xml <configuration> <property> <name>hbase.rootdir</name> <value>hdfs://localhost:8020/hbase</value> </property> <property> <name>hbase.cluster.distributed</name> <value>true</value> </property> </configuration> /etc/hadoop/conf/hdfs-site.xml <configuration> <property> <name>dfs.name.dir</name> <value>/var/lib/hadoop-hdfs/cache</value> </property> <property> <name>dfs.data.dir</name> <value>/tmp/hellodatanode</value> </property> </configuration> NameNode directory permissions [vagrant@localhost hadoop-hdfs]$ ls -ltr /var/lib/hadoop-hdfs/cache total 8 -rwxrwxrwx. 1 hbase hdfs 15 Jun 8 23:43 in_use.lock drwxrwxrwx. 2 hbase hdfs 4096 Jun 8 23:43 current [vagrant@localhost hadoop-hdfs]$ HMaster is able to start if fs.defaultFS property has been commented in core-site.xml NameNode is listening [vagrant@localhost hadoop-hdfs]$ netstat -nato | grep 50070 tcp 0 0 0.0.0.0:50070 0.0.0.0:* LIST EN off (0.00/0/0) tcp 0 0 33.33.33.33:50070 33.33.33.1:57493 ESTA BLISHED off (0.00/0/0) and accessible by navigating to http://33.33.33.33:50070/dfshealth.jsp. Question How to solve makeDirOnFileSystem exception and let HBase connect to HDFS?
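
    The root cause is the AccessControlException in the trace: the hbase user is trying to create /hbase directly under /, which is owned by vagrant:supergroup with mode drwxr-xr-x. The usual fix is to pre-create the HBase root directory as the HDFS superuser (here that is vagrant, the user that started the NameNode, as the inode listing shows) and hand it over to hbase, for example:

        # run as the HDFS superuser (vagrant on this box)
        hadoop fs -mkdir hdfs://localhost:8020/hbase
        hadoop fs -chown hbase hdfs://localhost:8020/hbase
        hadoop fs -ls hdfs://localhost:8020/

    After that, hbase.rootdir already points at hdfs://localhost:8020/hbase, so restarting the HMaster should let it lay out its directories inside a path it owns.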

    Read the article

  • Xen won't start after it had been working

    - by Paul Tomblin
    I've been setting up this Debian Stable system with a dom0 and 3 domUs. It was working fine for several days, and I'm almost ready to deploy it to the rack. But last night I shut it down with all three domUs still running for the first time, and today when I started it up, xend won't start. In /var/log/messages, I have: Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: blktapctrl: v1.0.0 Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: Found driver: [raw image (aio)] Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: Found driver: [raw image (sync)] Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: Found driver: [vmware image (vmdk)] Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: Found driver: [ramdisk image (ram)] Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: Found driver: [qcow disk (qcow)] Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: couldn't find device number for 'blktap0' Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: Unable to start blktapctrl and in /var/log/xen/xend.log, I have this: [2010-04-18 12:46:32 3523] INFO (SrvDaemon:219) Xend exited with status 1. [2010-04-18 13:01:34 4255] INFO (SrvDaemon:331) Xend Daemon started [2010-04-18 13:01:34 4255] INFO (SrvDaemon:335) Xend changeset: unavailable. [2010-04-18 13:01:34 4255] INFO (SrvDaemon:342) Xend version: Unknown. [2010-04-18 13:01:34 4255] ERROR (SrvDaemon:353) Exception starting xend (no element found: line 1, column 0) Traceback (most recent call last): File "/usr/lib/xen-3.2-1/lib/python/xen/xend/server/SrvDaemon.py", line 345, in run servers = SrvServer.create() File "/usr/lib/xen-3.2-1/lib/python/xen/xend/server/SrvServer.py", line 251, in create root.putChild('xend', SrvRoot()) File "/usr/lib/xen-3.2-1/lib/python/xen/xend/server/SrvRoot.py", line 40, in __init__ self.get(name) File "/usr/lib/xen-3.2-1/lib/python/xen/web/SrvDir.py", line 82, in get val = val.getobj() File "/usr/lib/xen-3.2-1/lib/python/xen/web/SrvDir.py", line 52, in getobj File "/usr/lib/xen-3.2-1/lib/python/xen/xend/server/SrvNode.py", line 30, in _ _init__ self.xn = XendNode.instance() File "/usr/lib/xen-3.2-1/lib/python/xen/xend/XendNode.py", line 709, in instance inst = XendNode() File "/usr/lib/xen-3.2-1/lib/python/xen/xend/XendNode.py", line 164, in __init__ saved_pifs = self.state_store.load_state('pif') File "/usr/lib/xen-3.2-1/lib/python/xen/xend/XendStateStore.py", line 104, in load_state dom = minidom.parse(xml_path) File "/usr/lib/python2.5/xml/dom/minidom.py", line 1915, in parse return expatbuilder.parse(file) File "/usr/lib/python2.5/xml/dom/expatbuilder.py", line 924, in parse result = builder.parseFile(fp) File "/usr/lib/python2.5/xml/dom/expatbuilder.py", line 211, in parseFile parser.Parse("", True) ExpatError: no element found: line 1, column 0 [2010-04-18 13:01:34 4253] INFO (SrvDaemon:219) Xend exited with status 1. Any clues as to what might be going wrong?
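
    The Python traceback bottoms out in XendStateStore.load_state('pif') with ExpatError "no element found: line 1, column 0", i.e. minidom was handed an empty or truncated XML file, presumably left behind by the shutdown with the domUs still running. A hedged check, assuming the usual Debian state directory /var/lib/xend/state:

        ls -l /var/lib/xend/state/
        # a zero-length or truncated pif.xml (or other *.xml) here matches the ExpatError;
        # move the damaged file aside so xend can recreate it, then try again
        mv /var/lib/xend/state/pif.xml /var/lib/xend/state/pif.xml.broken
        /etc/init.d/xend start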

    Read the article

  • Changing Endpoint URL for a Web Service Data Control

    - by vishal.s.jain(at)oracle.com
    When you move your application from Development to Production, there is more often then not, a need to change the web service endpoint URL in your ADF application. If you are using a Web Service Data Control(WSDC), you can do this in more than one ways. The following example illustrates how this can be done.At Design TimeIf the application workspace is in your control, you can quickly do this by updating the definition in DataControl.dcx file:Along with this, you will also need to change the endpoint in connections.xml. So invoke the Edit Connections dialog: Then, change the endpoint URL.At DeploymentAnother way to change is changing the endpoint at the ear level, at deployment. So when you select Deploy -> Application Server at the Application level, it will bring up a Deployment Configuration dialog, in which you can edit the WSDL URL:Also, change the Port URL:At Post DeploymentIf your need to change this post deployment, you can do it through Oracle Enterprise Manager. But for this, your application needs to be configured with a writable MDS repository. It is recommended you use a Database MDS store during deployment. So have your application configured (by having an entry in adf-config.xml) and server configured (by having a MDS store registered). Once done, you can configure the ADF Connection in EM for this application:Change the WSDL location here on 'Edit':Also, change the Port using Advance Connection Configuration:Change the Endpoint Address here:Apply Changes and you are done!

    Read the article

  • Loading a Template From a User Control

    - by Ricardo Peres
    What if you wanted to load a template (ITemplate property) from an external user control (.ascx) file? Yes, it is possible; there are a number of ways to do this, the one I'll talk about here is through a type converter. You need to apply a TypeConverterAttribute to your ITemplate property where you specify a custom type converter that does the job. This type converter relies on InstanceDescriptor. Here is the code for it: public class TemplateTypeConverter: TypeConverter { public override Boolean CanConvertFrom(ITypeDescriptorContext context, Type sourceType) { return ((sourceType == typeof(String)) || (base.CanConvertFrom(context, sourceType) == true)); } public override Boolean CanConvertTo(ITypeDescriptorContext context, Type destinationType) { return ((destinationType == typeof(InstanceDescriptor)) || (base.CanConvertTo(context, destinationType) == true)); } public override Object ConvertTo(ITypeDescriptorContext context, CultureInfo culture, Object value, Type destinationType) { if (destinationType == typeof(InstanceDescriptor)) { Object objectFactory = value.GetType().GetField("_objectFactory", BindingFlags.NonPublic | BindingFlags.Instance).GetValue(value); Object builtType = objectFactory.GetType().BaseType.GetField("_builtType", BindingFlags.NonPublic | BindingFlags.Instance).GetValue(objectFactory); MethodInfo loadTemplate = typeof(TemplateTypeConverter).GetMethod("LoadTemplate"); return (new InstanceDescriptor(loadTemplate, new Object [] { "~/" + (builtType as Type).Name.Replace('_', '/').Replace("/ascx", ".ascx") })); } return base.ConvertTo(context, culture, value, destinationType); } public static ITemplate LoadTemplate(String virtualPath) { using (Page page = new Page()) { return (page.LoadTemplate(virtualPath)); } } } And, on your control: public class MyControl: Control { [Browsable(false)] [TypeConverter(typeof(TemplateTypeConverter))] public ITemplate Template { get; set; } } This allows the following declaration: Hope this helps! SyntaxHighlighter.config.clipboardSwf = 'http://alexgorbatchev.com/pub/sh/2.0.320/scripts/clipboard.swf'; SyntaxHighlighter.brushes.CSharp.aliases = ['c#', 'c-sharp', 'csharp']; SyntaxHighlighter.brushes.Xml.aliases = ['xml']; SyntaxHighlighter.all();

    Read the article

  • Extending NerdDinner: Adding Geolocated Flair

    - by Jon Galloway
    NerdDinner is a website with the audacious goal of “Organizing the world’s nerds and helping them eat in packs.” Because nerds aren’t likely to socialize with others unless a website tells them to do it. Scott Hanselman showed off a lot of the cool features we’ve added to NerdDinner lately during his popular talk at MIX10, Beyond File | New Company: From Cheesy Sample to Social Platform. Did you miss it? Go ahead and watch it, I’ll wait. One of the features we wanted to add was flair. You know about flair, right? It’s a way to let folks who like your site show it off in their own site. For example, here’s my StackOverflow flair: Great! So how could we add some of this flair stuff to NerdDinner? What do we want to show? If we’re going to encourage our users to give up a bit of their beautiful website to show off a bit of ours, we need to think about what they’ll want to show. For instance, my StackOverflow flair is all about me, not StackOverflow. So how will this apply to NerdDinner? Since NerdDinner is all about organizing local dinners, in order for the flair to be useful it needs to make sense for the person viewing the web page. If someone visits from Egypt visits my blog, they should see information about NerdDinners in Egypt. That’s geolocation – localizing site content based on where the browser’s sitting, and it makes sense for flair as well as entire websites. So we’ll set up a simple little callout that prompts them to host a dinner in their area: Hopefully our flair works and there is a dinner near your viewers, so they’ll see another view which lists upcoming dinners near them: The Geolocation Part Generally website geolocation is done by mapping the requestor’s IP address to a geographic area. It’s not an exact science, but I’ve always found it to be pretty accurate. There are (at least) three ways to handle it: You pay somebody like MaxMind for a database (with regular updates) that sits on your server, and you use their API to do lookups. I used this on a pretty big project a few years ago and it worked well. You use HTML 5 Geolocation API or Google Gears or some other browser based solution. I think those are cool (I use Google Gears a lot), but they’re both in flux right now and I don’t think either has a wide enough of an install base yet to rely on them. You might want to, but I’ve heard you do all kinds of crazy stuff, and sometimes it gets you in trouble. I don’t mean talk out of line, but we all laugh behind your back a bit. But, hey, it’s up to you. It’s your flair or whatever. There are some free webservices out there that will take an IP address and give you location information. Easy, and works for everyone. That’s what we’re doing. I looked at a few different services and settled on IPInfoDB. It’s free, has a great API, and even returns JSON, which is handy for Javascript use. The IP query is pretty simple. 
We hit a URL like this: http://ipinfodb.com/ip_query.php?ip=74.125.45.100&timezone=false … and we get an XML response back like this… <?xml version="1.0" encoding="UTF-8"?> <Response> <Ip>74.125.45.100</Ip> <Status>OK</Status> <CountryCode>US</CountryCode> <CountryName>United States</CountryName> <RegionCode>06</RegionCode> <RegionName>California</RegionName> <City>Mountain View</City> <ZipPostalCode>94043</ZipPostalCode> <Latitude>37.4192</Latitude> <Longitude>-122.057</Longitude> </Response> So we’ll build some data transfer classes to hold the location information, like this: public class LocationInfo { public string Country { get; set; } public string RegionName { get; set; } public string City { get; set; } public string ZipPostalCode { get; set; } public LatLong Position { get; set; } } public class LatLong { public float Lat { get; set; } public float Long { get; set; } } And now hitting the service is pretty simple: public static LocationInfo HostIpToPlaceName(string ip) { string url = "http://ipinfodb.com/ip_query.php?ip={0}&timezone=false"; url = String.Format(url, ip); var result = XDocument.Load(url); var location = (from x in result.Descendants("Response") select new LocationInfo { City = (string)x.Element("City"), RegionName = (string)x.Element("RegionName"), Country = (string)x.Element("CountryName"), ZipPostalCode = (string)x.Element("CountryName"), Position = new LatLong { Lat = (float)x.Element("Latitude"), Long = (float)x.Element("Longitude") } }).First(); return location; } Getting The User’s IP Okay, but first we need the end user’s IP, and you’d think it would be as simple as reading the value from HttpContext: HttpContext.Current.Request.UserHostAddress But you’d be wrong. Sorry. UserHostAddress just wraps HttpContext.Current.Request.ServerVariables["REMOTE_ADDR"], but that doesn’t get you the IP for users behind a proxy. That’s in another header, “HTTP_X_FORWARDED_FOR". So you can either hit a wrapper and then check a header, or just check two headers. I went for uniformity: string SourceIP = string.IsNullOrEmpty(Request.ServerVariables["HTTP_X_FORWARDED_FOR"]) ? Request.ServerVariables["REMOTE_ADDR"] : Request.ServerVariables["HTTP_X_FORWARDED_FOR"]; We’re almost set to wrap this up, but first let’s talk about our views. Yes, views, because we’ll have two. Selecting the View We wanted to make it easy for people to include the flair in their sites, so we looked around at how other people were doing this. The StackOverflow folks have a pretty good flair system, which allows you to include the flair in your site as either an IFRAME reference or a Javascript include. We’ll do both. We have a ServicesController to handle use of the site information outside of NerdDinner.com, so this fits in pretty well there. We’ll be displaying the same information for both HTML and Javascript flair, so we can use one Flair controller action which will return a different view depending on the requested format. Here’s our general flow for our controller action: Get the user’s IP Translate it to a location Grab the top three upcoming dinners that are near that location Select the view based on the format (defaulted to “html”) Return a FlairViewModel which contains the list of dinners and the location information public ActionResult Flair(string format = "html") { string SourceIP = string.IsNullOrEmpty( Request.ServerVariables["HTTP_X_FORWARDED_FOR"]) ? 
Request.ServerVariables["REMOTE_ADDR"] : Request.ServerVariables["HTTP_X_FORWARDED_FOR"]; var location = GeolocationService.HostIpToPlaceName(SourceIP); var dinners = dinnerRepository. FindByLocation(location.Position.Lat, location.Position.Long). OrderByDescending(p => p.EventDate).Take(3); // Select the view we'll return. // Using a switch because we'll add in JSON and other formats later. string view; switch (format.ToLower()) { case "javascript": view = "JavascriptFlair"; break; default: view = "Flair"; break; } return View( view, new FlairViewModel { Dinners = dinners.ToList(), LocationName = string.IsNullOrEmpty(location.City) ? "you" : String.Format("{0}, {1}", location.City, location.RegionName) } ); } Note: I’m not in love with the logic here, but it seems like overkill to extract the switch statement away when we’ll probably just have two or three views. What do you think? The HTML View The HTML version of the view is pretty simple – the only thing of any real interest here is the use of an extension method to truncate strings that are would cause the titles to wrap. public static string Truncate(this string s, int maxLength) { if (string.IsNullOrEmpty(s) || maxLength <= 0) return string.Empty; else if (s.Length > maxLength) return s.Substring(0, maxLength) + "..."; else return s; }   So here’s how the HTML view ends up looking: <%@ Page Title="" Language="C#" Inherits="System.Web.Mvc.ViewPage<FlairViewModel>" %> <%@ Import Namespace="NerdDinner.Helpers" %> <%@ Import Namespace="NerdDinner.Models" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Nerd Dinner</title> <link href="/Content/Flair.css" rel="stylesheet" type="text/css" /> </head> <body> <div id="nd-wrapper"> <h2 id="nd-header">NerdDinner.com</h2> <div id="nd-outer"> <% if (Model.Dinners.Count == 0) { %> <div id="nd-bummer"> Looks like there's no Nerd Dinners near <%:Model.LocationName %> in the near future. Why not <a target="_blank" href="http://www.nerddinner.com/Dinners/Create">host one</a>?</div> <% } else { %> <h3> Dinners Near You</h3> <ul> <% foreach (var item in Model.Dinners) { %> <li> <%: Html.ActionLink(String.Format("{0} with {1} on {2}", item.Title.Truncate(20), item.HostedBy, item.EventDate.ToShortDateString()), "Details", "Dinners", new { id = item.DinnerID }, new { target = "_blank" })%></li> <% } %> </ul> <% } %> <div id="nd-footer"> More dinners and fun at <a target="_blank" href="http://nrddnr.com">http://nrddnr.com</a></div> </div> </div> </body> </html> You’d include this in a page using an IFRAME, like this: <IFRAME height=230 marginHeight=0 src="http://nerddinner.com/services/flair" frameBorder=0 width=160 marginWidth=0 scrolling=no></IFRAME> The Javascript view The Javascript flair is written so you can include it in a webpage with a simple script include, like this: <script type="text/javascript" src="http://nerddinner.com/services/flair?format=javascript"></script> The goal of this view is very similar to the HTML embed view, with a few exceptions: We’re creating a script element and adding it to the head of the document, which will then document.write out the content. Note that you have to consider if your users will actually have a <head> element in their documents, but for website flair use cases I think that’s a safe bet. Since the content is being added to the existing page rather than shown in an IFRAME, all links need to be absolute. 
That means we can’t use Html.ActionLink, since it generates relative routes. We need to escape everything since it’s being written out as strings. We need to set the content type to application/x-javascript. The easiest way to do that is to use the <%@ Page ContentType%> directive. <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<NerdDinner.Models.FlairViewModel>" ContentType="application/x-javascript" %> <%@ Import Namespace="NerdDinner.Helpers" %> <%@ Import Namespace="NerdDinner.Models" %> document.write('<script>var link = document.createElement(\"link\");link.href = \"http://nerddinner.com/content/Flair.css\";link.rel = \"stylesheet\";link.type = \"text/css\";var head = document.getElementsByTagName(\"head\")[0];head.appendChild(link);</script>'); document.write('<div id=\"nd-wrapper\"><h2 id=\"nd-header\">NerdDinner.com</h2><div id=\"nd-outer\">'); <% if (Model.Dinners.Count == 0) { %> document.write('<div id=\"nd-bummer\">Looks like there\'s no Nerd Dinners near <%:Model.LocationName %> in the near future. Why not <a target=\"_blank\" href=\"http://www.nerddinner.com/Dinners/Create\">host one</a>?</div>'); <% } else { %> document.write('<h3> Dinners Near You</h3><ul>'); <% foreach (var item in Model.Dinners) { %> document.write('<li><a target=\"_blank\" href=\"http://nrddnr.com/<%: item.DinnerID %>\"><%: item.Title.Truncate(20) %> with <%: item.HostedBy %> on <%: item.EventDate.ToShortDateString() %></a></li>'); <% } %> document.write('</ul>'); <% } %> document.write('<div id=\"nd-footer\"> More dinners and fun at <a target=\"_blank\" href=\"http://nrddnr.com\">http://nrddnr.com</a></div></div></div>'); Getting IP’s for Testing There are a variety of online services that will translate a location to an IP, which were handy for testing these out. I found http://www.itouchmap.com/latlong.html to be most useful, but I’m open to suggestions if you know of something better. Next steps I think the next step here is to minimize load – you know, in case people start actually using this flair. There are two places to think about – the NerdDinner.com servers, and the services we’re using for Geolocation. I usually think about caching as a first attack on server load, but that’s less helpful here since every user will have a different IP. Instead, I’d look at taking advantage of Asynchronous Controller Actions, a cool new feature in ASP.NET MVC 2. Async Actions let you call a potentially long-running webservice without tying up a thread on the server while waiting for the response. There’s some good info on that in the MSDN documentation, and Dino Esposito wrote a great article on Asynchronous ASP.NET Pages in the April 2010 issue of MSDN Magazine. But let’s think of the children, shall we? What about ipinfodb.com? Well, they don’t have specific daily limits, but they do throttle you if you put a lot of traffic on them. From their FAQ: We do not have a specific daily limit but queries that are at a rate faster than 2 per second will be put in "queue". If you stay below 2 queries/second everything will be normal. If you go over the limit, you will still get an answer for all queries but they will be slowed down to about 1 per second. This should not affect most users but for high volume websites, you can either use our IP database on your server or we can whitelist your IP for 5$/month (simply use the donate form and leave a comment with your server IP). 
Good programming practices such as not querying our API for all page views (you can store the data in a cookie or a database) will also help not reaching the limit. So the first step there is to save the geolocalization information in a time-limited cookie, which will allow us to look up the local dinners immediately without having to hit the geolocation service.

    Read the article

  • Storing SCA Metadata in the Oracle Metadata Services Repository by Nicolás Fonnegra Martinez and Markus Lohn

    - by JuergenKress
    The advantages of using the Oracle Metadata Services Repository as a central storage for the metadata. SCA has been available since the release of the Oracle SOA Suite 11g. This technology combines and orchestrates several SOA components inside an SCA composite, making design, development, deployment, and maintenance easier. SCA development is metadata-driven, meaning that metadata artifacts, such as Web Services Description Language (WSDL), XML Schema Definition (XSD), XML, others, define the composite's behavior. With the increased number of composites and the dependencies among them, it became necessary to manage all the metadata in an adequate way. This article will address the advantages of using the Oracle Metadata Services (MDS) repository as a central storage for the metadata. The MDS repository is a central part of the Oracle Fusion Middleware landscape, managing the metadata for several technologies, such as Oracle Application Development Framework (Oracle ADF), Oracle WebCenter, and the Oracle SOA Suite. This article is divided into three parts. The first part provides an overview of SCA and MDS. The second part describes some MDS tasks that help in the management of the SCA metadata files inside the repository. The third part shows how to develop SCA composites in combination with an MDS repository. Read the full article here. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SCA Metadata. Metadata Services Repository,Nicolás Fonnegra Martinez,Markus Lohn,SOA Community,Oracle SOA,Oracle BPM,BPM,Community,OPN,Jürgen Kress

    Read the article

< Previous Page | 446 447 448 449 450 451 452 453 454 455 456 457  | Next Page >