Search Results

Search found 4597 results on 184 pages for 'red tiger'.


  • Why isn't my :hover working?

    - by user1628488
    I have am nav, and when i hover some elements, the submenu should be displayed 'block', but this dont work. See <!doctype html> <html lang="pt-br"> <head> <meta charset="utf-8" /> <meta name="robots" content="noindex, nofollow" /> <meta name="generator" content="Notepad++" /> <meta name="author" content="Erick Ribeiro" /> <meta http-equiv="refresh" content="60" /> <title>Mozilla Firefox</title> <style type="text/css"> *{ font-family: calibri; } #menu { float: left; } .submenu { margin-top: 26px; padding: 10px; border: solid 1px rgb(224, 224, 224); background: rgb(254, 254, 254); color: rgb(0, 128, 224); border-radius: 0 0 4px 4px; } #menuHome:hover ~ #submenuControle { display: block; opacity: 0; color: red; } #submenuHome { display: none; opacity: 0; } #submenuControle { display: block; opacity: 1; } #submenuGestão { display: none; opacity: 0; } #submenuRL { display: none; opacity: 0; } #submenuSI { display: none; opacity: 0; } ul { float: left; list-style-type: none; padding: 0; margin: 0; } li { display: inline; float:left; } .primeiroItem { border: solid rgb(224, 224, 224); border-top-width: 1px; border-right-width: 1px; border-bottom-width: 1px; border-left-width: 1px; border-radius: 4px 0 0 4px; } .naoPrimeiroItem { border: solid rgb(224, 224, 224); border-top-width: 1px; border-right-width: 1px; border-bottom-width: 1px; border-left-width: 0; } .ultimoItem { border: solid rgb(224, 224, 224); border-top-width: 1px; border-right-width: 1px; border-bottom-width: 1px; border-left-width: 0; border-radius: 0 4px 4px 0; } a { text-decoration:none; padding: 8px; border: solid 1px; color: rgb(0, 0, 0); background: rgb(240,240, 240); } a:visited { color: rgb(0, 0, 0); } </style> <script type="text/javascript"> </script> </head> <body> <nav id="menu"> <ul> <li><a id="menuHome" class="primeiroItem" href="#">Home</a></li> <li><a id="menuControle" class="naoPrimeiroItem" href="#">Controle</a></li> <li><a id="menuGestao" class="naoPrimeiroItem" href="#">Gestão</a></li> <li><a id="menuRL" class="naoPrimeiroItem" href="#">Relatórios e Listas</a></li> <li><a id="menuSI" class="ultimoItem" href="#">Sistema Informação</a></li> </ul> <div id="submenuHome" class="submenu"> </div> <div id="submenuControle" class="submenu"> BSC Comunicação Treinamento Documentos Controle de Acesso </div> <div id="submenuGestão" class="submenu"> ASV Treinamento Suprimentos Chamados</div> <div id="submenuRL" class="submenu"> Listas Relatórios </div> <div id="submenuSI" class="submenu"> </div> </nav> </body> </html>
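
     A likely cause, going by the posted markup alone: the general sibling combinator "~" only matches elements that share the same parent, but #menuHome is an <a> inside an <li> inside the <ul>, while #submenuControle is a sibling of the <ul> itself, so #menuHome:hover ~ #submenuControle never matches anything. A minimal sketch of one way around it (restructured markup, not the original) is to keep each submenu <div> inside its own <li> so the anchor and the submenu really are siblings:

         <li>
           <a id="menuHome" class="primeiroItem" href="#">Home</a>
           <div id="submenuHome" class="submenu">...</div>
         </li>

         /* hypothetical CSS for that restructured markup */
         .submenu { display: none; }
         li:hover > .submenu,
         a:hover + .submenu { display: block; }

     Note that the posted rule also sets opacity: 0 together with display: block, so even with a matching selector the submenu would stay invisible.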


  • CSS: Centering a floated block-level element in IE6 (it almost works)

    - by Louis W
    I have a block level element which I am centering on the page. I have gotten it to work for all other browsers except IE6 where it ALMOST works. http://tinyurl.com/28sh9eq If I view the page in IE6 the red box is slightly off center of the pink one in IE. If I then resize the browser window it snaps into place where I want it. Uhhhhh.... yea.... what gives? How come resizing the window makes it work? I have also tried setting an explicit width on the wrapper with no avail. <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"> <html> <head> <title></title> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <meta http-equiv="X-UA-Compatible" content="IE=7" /> <style type="text/css"> BODY { text-align: center; font-family: Arial; } .row_wrap { height: 100px; margin-bottom: 30px; background-color: pink; } .row { float: right; position: relative; left: -50%; text-align: left; clear: both; } .button1 { color: #FFF; height: 36px; text-decoration: none; position: relative; padding: 0 30px; background: url('button.gif') no-repeat 0 0; display: block; float: left; left: 50%; } .button1 .end { width: 20px; height: 37px; position: absolute; right: -2px; top: 0; background: url('button.gif') no-repeat right 0; } .button1 .text { font-size: 16px; font-weight: bold; white-space: nowrap; height: 36px; padding-top: 7px; display: block; float: left; } .button1 .text .arrow { vertical-align: 1px; } </style> </head> <body> <h2>RTL: Button 1</h2> <div class="row_wrap"> <div class="row" dir="rtl"> <a href="#" class="button1"> <span class="end"></span> <span class="text"><span class="arrow">»</span> Hello 1.</span> </a> </div> </div> <h2>RTL: Button 1-2</h2> <div class="row_wrap" style="width: 400px;"> <div class="row" dir="rtl"> <a href="#" class="button1"> <span class="end"></span> <span class="text"><span class="arrow">»</span> Hello 1.</span> </a> </div> </div> <br/><br/> <h2>Normal: Button 1</h2> <div class="row_wrap"> <div class="row"> <a href="#" class="button1"> <span class="end"></span> <span class="text"><span class="arrow">»</span> Hello.</span> </a> </div> </div> </body> Thanks for your help.
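
     One hedged guess, since the linked test page is not reproduced here: "off-center until the window is resized" is a classic IE6 hasLayout symptom, where a float positioned with the left: 50% / left: -50% trick is laid out wrong until something forces a reflow. Giving the wrapper and the row "layout" with the proprietary zoom property is a common, harmless thing to try:

         /* hypothetical IE6-only additions */
         .row_wrap { zoom: 1; }
         .row      { zoom: 1; }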


  • clearTimeout not working as expected

    - by user543314
    Javascript sliding menu stay open. clearTimeout not working as expected -can you help me please <html> <head> <style> #Menu1 {position:absolute; top:-190px; left:150px; font-size:15px;visibility:visible; background-color:#D0BCFE; width:114px;z-index:0;border-style:solid; } #Menu2 {position:absolute; top:-190px; left:580px; font-size:15px;visibility:visible; background-color:#D0BCFE; width:114px;z-index:0;border-style:solid; } #Menu3 {position:absolute; top:-190px; left:1005px; font-size:15px;visibility:visible; background-color:#D0BCFE; width:114px;z-index:0;border-style:solid; } TD.TDHREFMENUS{font-size:20;color:red;position:relative;z-index:0;background-color:#C4ABFE;border-style:solid;width:114px;} </style> <script> var stopUp=null; var stopDown=null; var mov=-143; var on; function down(id){ if (!on){ on=true; clearTimeout(stopUp) stopUp=null; } var obj=document.getElementById(id) obj.style.top=mov +"px"; if (mov <=27){ mov+=2; stopDown=setTimeout(function (){ down(id) }, 20) } } function up(id){ if (on){ on=false; clearTimeout(stopDown) stopDown=null; } var obj=document.getElementById(id) obj.style.top=mov +"px"; if (mov >=-143){ mov-=2; stopUp=setTimeout(function(){ up(id)}, 20); } } </script> </head> <body leftmargin=0 marginwidth=0 topmargin=0 marginheight=0> <div id="Menu1" onmouseover="down('Menu1')" onmouseout="up('Menu1')"> URL 1<br> URL 2<br> URL 3<br> URL 4<br> URL 5<br> URL 6<br> URL 7<br> URL 8<br> </div> </div> <div id="Menu2" onmouseover="down('Menu2')" onmouseout="up('Menu2')"> URL 1<br> URL 2<br> URL 3<br> URL 4<br> URL 5<br> URL 6<br> URL 7<br> URL 8<br> </div> </div> <div id="Menu3" onmouseover="down('Menu3')" onmouseout="up('Menu3')"> URL 1<br> URL 2<br> URL 3<br> URL 4<br> URL 5<br> URL 6<br> URL 7<br> URL 8<br> </div> </div> <TABLE cellSpacing=0 cellPadding=0 BORDER=1 WIDTH=100%> <TBODY> <TR> <TD align=middle CLASS="TDHREFMENUS"><span onmouseover="down('Menu1')" onmouseout="up('Menu1')">MENU 1</span> </TD> <TD align=middle CLASS="TDHREFMENUS"><span onmouseover="down('Menu2')" onmouseout="up('Menu2')">MENU 2</span> </TD> <TD align=middle CLASS="TDHREFMENUS"><span onmouseover="down('Menu3')" onmouseout="up('Menu3')">MENU 3</span> </TD> </TR> </TBODY> </TABLE> <br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br> <html> <head>
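
     One thing that stands out in the posted script (a diagnosis, not a confirmed fix): mov, on, stopUp and stopDown are single globals shared by all three menus and by both directions, so the up() and down() timer chains end up fighting over the same position value. A minimal sketch that keeps one state object and one pending timer per menu, and always cancels the previous timer before scheduling the next step:

         // sketch only; assumes the same Menu1..Menu3 ids and the original -143..27 range
         var menus = {};
         function state(id) {
           return menus[id] || (menus[id] = { pos: -143, timer: null });
         }
         function down(id) {
           var s = state(id);
           clearTimeout(s.timer);                  // cancel any pending up() step
           document.getElementById(id).style.top = s.pos + "px";
           if (s.pos <= 27) {
             s.pos += 2;
             s.timer = setTimeout(function () { down(id); }, 20);
           }
         }
         function up(id) {
           var s = state(id);
           clearTimeout(s.timer);                  // cancel any pending down() step
           document.getElementById(id).style.top = s.pos + "px";
           if (s.pos >= -143) {
             s.pos -= 2;
             s.timer = setTimeout(function () { up(id); }, 20);
           }
         }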


  • The CSS is not displayed in Opera Mini and BlackBerry browsers

    - by p.karuppasamy
    Hi All, I 'm a php developer. I develope a site for mobile using php hawhaw.inc. i use the page sim for set the css for my page as $myPage->use_simulator("http://".$_SERVER["SERVER_NAME"].":".$_SERVER['SERVER_PORT'].dirname($_SERVER['PHP_SELF'])."/css/style.css") im my css file i write the css for table. it set in the windows mobile but it's not set in the opera mini and black berry browsers. The page is $myPage = new HAW_deck("Login ", HAW_ALIGN_CENTER); /*$myPage->set_bgcolor('#337fa6'); $myPage->set_background('../images/body_bg.jpg'); $myPage->set_css_class('skin');*/ $myHtmlCode = '<link href="css/style.css" rel="stylesheet" type="text/css" media="handheld" /><link rel="stylesheet" type="text/css" href="css/style.css" /><meta name="viewport" content="initial-scale=1.0"/><meta name="viewport" content="user-scalable=false" />'; $myAdblock = new HAW_raw(HAW_HTML, $myHtmlCode); $myPage->add_raw($myAdblock); /*$myPage->set_width('50%'); $myPage->set_height('50%');*/ $myPage->use_simulator("http://".$_SERVER["SERVER_NAME"].":".$_SERVER['SERVER_PORT'].dirname($_SERVER['PHP_SELF'])."/css/style.css"); $myForm = new HAW_form($_SERVER['PHP_SELF']); $text = new HAW_text("Please enter username & password:"); $text1 = new HAW_text("REMcloud", HAW_TEXTFORMAT_BOLD | HAW_TEXTFORMAT_BIG); $myImage1 = new HAW_image('','../images/logo.png','REMcloud',''); $myImage1->set_br(1); $myImage1->set_html_width(200); $myImage1->set_html_height(100); $error=new HAW_text($_REQUEST['msg'], HAW_TEXTFORMAT_BOLD); $error->set_color("#ff0000", "red"); $theID = new HAW_input("username", "", "Username", "*N"); $theID->set_size(4); $theID->set_maxlength(20); $text2 = new HAW_text(""); $thePW = new HAW_input("password", "", "Password", "*N"); $thePW->set_size(4); $thePW->set_maxlength(20); $thePW->set_type(HAW_INPUT_PASSWORD); $theSubmission = new HAW_submit("Submit", "submit"); // add the elements in the form //$myForm->add_text($text1); $myForm->add_image($myImage1); //$myForm->add_text($text2); $myForm->add_text($error); $myForm->add_text($text2); $myForm->add_text($text); $myForm->add_text($text2); $myForm->add_input($theID); $myForm->add_text($text2); $myForm->add_input($thePW); $myForm->add_text($text2); $myForm->add_submit($theSubmission); $myPage->add_form($myForm); $myPage->create_page(); the css is /* HAWHAW skin stylesheet for the hawhaw phone */ body { background-color: #FFFFFF; text-align: center; margin: 0px; padding: 0px; color: #222222; } #skin { margin: 0px auto; padding: 0px; text-align: left; width: 280px; height: 466px; background-image: url(../../images/bg.jpg); background-color:#337fa6; background-repeat:repeat-x; } #display { position: relative; top: 80px; /* left: 43px;*/ width: 100%; height: 85%; overflow: auto; } TABLE{ width:100%; padding-left:-10px; overflow:scroll; height:20px; border:none; font-family:Calibri; font-size:14px; } tr{ height:20px; } td{ color:#fff; border:none; } a { font-family:Calibri; font-size:16px; } if any one know please advice me for the same Thanks in advance
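
     A hedged guess rather than a confirmed answer: Opera Mini and several BlackBerry browsers apply the "screen" stylesheet and may skip a sheet restricted to media="handheld", so declaring the stylesheet for both media types (and checking that the relative path resolves from the generated page rather than from the simulator URL) is worth trying:

         <!-- sketch only; href assumed to be the same css/style.css used above -->
         <link rel="stylesheet" type="text/css" href="css/style.css" media="screen, handheld" />
         <meta name="viewport" content="width=device-width, initial-scale=1.0" />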


  • Nested <a> and <span> challenge

    - by PaddyO
    Hi all, Trying in vain to get a nested link working within a nested span. This is a working test page for the code below to explain what I'm trying to do. Any ideas on how to get this working in valid html? I guess it's either a nesting order or style syntax thing but I am at a loss. Any thoughts much appreciated. <div id="greyback"> <ul id="scrollbox"> <li class="listcat">List header</li> <li><a class="menu" href="#freeze">List item 1<span><b>This text has popped up because you have clicked the list item, which has an "a" tag and now has :focus. That "a" tag is the first of two.</b><br><br>What I am trying to do is to set the second "a" tag as a DIFFERENT "embedded" link in this box<span style="color: blue; background-color: yellow;">eg, here<a href="http://www.conservationenterprises.com" target="blank">This is the second (nested) "a" tag in this html nest. It is a link to an external site. Instead of this being an always-visible link, I want it to sit within the yellow box in the first span (click the List item 1 to display).</a></span> </a></span> </li> </a></span> </li> </ul> </div> and the CSS: #scrollbox {margin: 0 auto; margin-top: 1em; margin-bottom: 1em; width:19em; height:auto; max-height: 21em; overflow:auto; border-bottom: 0.1em solid #FFA500; border-top: 0.1em solid #FFA500;} #scrollbox a {float: left; color:#000000; text-decoration:none; width:18em; height: auto; margin-bottom: 0.5em; font-family: Arial, sans-serif; font-size: 0.9em; text-align:left;} #scrollbox a.menu {} #scrollbox a span {display:none; position:absolute; left:0em; top:0;} #scrollbox a span img {float: right; border:0; max-width:7.5em;} #scrollbox a:hover {border: 0; color: #7ebb11; font-size:0.9em;} #scrollbox a:hover span {border: 0; color: #535353;} #scrollbox a span:focus {color: blue;} #scrollbox a:active {border:none; color: #535353; text-decoration: none;} #scrollbox a:focus {border:0em solid #000; outline:0;} #scrollbox a:active span, #scrollbox li a:active span, #scrollbox a:focus span, #scrollbox li a:focus span {display: block; width: 52.5em; min-height: 20em; height: auto; left: 1.5em; top:18em; z-index:10; font-size:0.9em; text-align: left; padding: 1em; padding-bottom: 0em; background-color: #c3FFe3; color: #535353; border: solid #FFA500 0.25em;} #scrollbox li a:active span span, #scrollbox li a:focus span span{display: block; width: auto; height: auto; min-height: 2em; left: 25em; top:10em; z-index:10; font-size:0.9em; text-align: left; padding: 1em; padding-bottom: 0em; background-color: transparent; color: #535353; border: dashed red 1px;} .ul#scrollbox {padding-left: 0.1em;} #scrollbox li {float:left; list-style: none; background: url(blank.png) no-repeat left center; margin-left: 0em; font-family:Arial, sans-serif; font-size: 0.9em;} #scrollbox li.listcat {float: left; text-align:left; width: 18em; margin-left: 0em; margin-top: 0.1em; margin-bottom: 0.3em; padding-top:0.5em; color: green; font-size: 0.9em; font-weight:bold;} Cheers Patrick.
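
     The underlying constraint can be stated plainly: HTML does not allow one <a> to be nested inside another, so browsers close the first anchor early and the markup above can never validate or render as intended. A rough sketch of a valid alternative (class names invented for illustration) keeps the pop-up as a sibling of the trigger link, so the external link lives outside any other anchor:

         <li>
           <a class="menu" href="#freeze">List item 1</a>
           <span class="popup">
             Pop-up text ...
             <a href="http://www.conservationenterprises.com" target="_blank">external link</a>
           </span>
         </li>

         .popup { display: none; }
         a.menu:focus + .popup, a.menu:active + .popup { display: block; }

     Depending on the browser, the pop-up may need an extra rule such as .popup:hover { display: block; } so it stays open long enough for the inner link to be clicked.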


  • How to stop rendering invisible faces

    - by TheMorfeus
    I am making a voxel-based game, and for needs of it, i am creating a block rendering engine. Point is, that i need to generate lots of cubes. Every time i render more than 16x16x16 chunk of theese blocks, my FPS is dropped down hardly, because it renders all 6 faces of all of theese cubes. THat's 24 576 quads, and i dont want that. So, my question is, How to stop rendering vertices(or quads) that are not visible, and therefore increase performance of my game? Here is class for rendering of a block: public void renderBlock(int posx, int posy, int posz) { try{ //t.bind(); glEnable(GL_CULL_FACE); glCullFace(GL_BACK);// or even GL_FRONT_AND_BACK */); glPushMatrix(); GL11.glTranslatef((2*posx+0.5f),(2*posy+0.5f),(2*posz+0.5f)); // Move Right 1.5 Units And Into The Screen 6.0 GL11.glRotatef(rquad,1.0f,1.0f,1.0f); glBegin(GL_QUADS); // Draw A Quad GL11.glColor3f(0.5f, 0.4f, 0.4f); // Set The Color To Green GL11.glTexCoord2f(0,0); GL11.glVertex3f( 1f, 1f,-1f); // Top Right Of The Quad (Top) GL11.glTexCoord2f(1,0); GL11.glVertex3f(-1f, 1f,-1f); // Top Left Of The Quad (Top) GL11.glTexCoord2f(1,1); GL11.glVertex3f(-1f, 1f, 1f); // Bottom Left Of The Quad (Top) GL11.glTexCoord2f(0,1); GL11.glVertex3f( 1f, 1f, 1f); // Bottom Right Of The Quad (Top) //GL11.glColor3f(1.2f,0.5f,0.9f); // Set The Color To Orange GL11.glTexCoord2f(0,0); GL11.glVertex3f( 1f,-1f, 1f); // Top Right Of The Quad (Bottom) GL11.glTexCoord2f(0,1); GL11.glVertex3f(-1f,-1f, 1f); // Top Left Of The Quad (Bottom) GL11.glTexCoord2f(1,1); GL11.glVertex3f(-1f,-1f,-1f); // Bottom Left Of The Quad (Bottom) GL11.glTexCoord2f(1,0); GL11.glVertex3f( 1f,-1f,-1f); // Bottom Right Of The Quad (Bottom) //GL11.glColor3f(1.0f,0.0f,0.0f); // Set The Color To Red GL11.glTexCoord2f(0,0); GL11.glVertex3f( 1f, 1f, 1f); // Top Right Of The Quad (Front) GL11.glTexCoord2f(1,0); GL11.glVertex3f(-1f, 1f, 1f); // Top Left Of The Quad (Front) GL11.glTexCoord2f(1,1); GL11.glVertex3f(-1f,-1f, 1f); // Bottom Left Of The Quad (Front) GL11.glTexCoord2f(0,1); GL11.glVertex3f( 1f,-1f, 1f); // Bottom Right Of The Quad (Front) //GL11.glColor3f(1f,0.5f,0.0f); // Set The Color To Yellow GL11.glTexCoord2f(0,0); GL11.glVertex3f( 1f,-1f,-1f); // Bottom Left Of The Quad (Back) GL11.glTexCoord2f(1,0); GL11.glVertex3f(-1f,-1f,-1f); // Bottom Right Of The Quad (Back) GL11.glTexCoord2f(1,1); GL11.glVertex3f(-1f, 1f,-1f); // Top Right Of The Quad (Back) GL11.glTexCoord2f(0,1); GL11.glVertex3f( 1f, 1f,-1f); // Top Left Of The Quad (Back) //GL11.glColor3f(0.0f,0.0f,0.3f); // Set The Color To Blue GL11.glTexCoord2f(0,1); GL11.glVertex3f(-1f, 1f, 1f); // Top Right Of The Quad (Left) GL11.glTexCoord2f(1,1); GL11.glVertex3f(-1f, 1f,-1f); // Top Left Of The Quad (Left) GL11.glTexCoord2f(1,0); GL11.glVertex3f(-1f,-1f,-1f); // Bottom Left Of The Quad (Left) GL11.glTexCoord2f(0,0); GL11.glVertex3f(-1f,-1f, 1f); // Bottom Right Of The Quad (Left) //GL11.glColor3f(0.5f,0.0f,0.5f); // Set The Color To Violet GL11.glTexCoord2f(0,0); GL11.glVertex3f( 1f, 1f,-1f); // Top Right Of The Quad (Right) GL11.glTexCoord2f(1,0); GL11.glVertex3f( 1f, 1f, 1f); // Top Left Of The Quad (Right) GL11.glTexCoord2f(1,1); GL11.glVertex3f( 1f,-1f, 1f); // Bottom Left Of The Quad (Right) GL11.glTexCoord2f(0,1); GL11.glVertex3f( 1f,-1f,-1f); // Bottom Right Of The Quad (Right) //rquad+=0.0001f; glEnd(); glPopMatrix(); }catch(NullPointerException t){t.printStackTrace(); System.out.println("rendering block failed");} } Here is code that renders them: private void render() { 
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT|GL11.GL_DEPTH_BUFFER_BIT); for(int y=0; y<32; y++){ for(int x=0; x<16; x++){ for(int z=0; z<16; z++) { b.renderBlock(x, y, z); } } } }
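
     The usual technique is hidden-face culling done on the CPU: a face is only sent to OpenGL when the neighbouring cell in that direction is empty, so faces buried between two solid blocks are never drawn at all. A rough sketch (the world array and the per-face helpers are assumptions, not part of the code above):

         // sketch only; bounds match the 16x32x16 loop above
         boolean isSolid(int x, int y, int z) {
             return x >= 0 && y >= 0 && z >= 0
                 && x < 16 && y < 32 && z < 16
                 && world[x][y][z] != 0;            // hypothetical block array
         }

         void renderBlockCulled(int x, int y, int z) {
             if (!isSolid(x, y, z)) return;          // nothing to draw here
             if (!isSolid(x, y + 1, z)) renderTopFace(x, y, z);
             if (!isSolid(x, y - 1, z)) renderBottomFace(x, y, z);
             if (!isSolid(x, y, z + 1)) renderFrontFace(x, y, z);
             if (!isSolid(x, y, z - 1)) renderBackFace(x, y, z);
             if (!isSolid(x - 1, y, z)) renderLeftFace(x, y, z);
             if (!isSolid(x + 1, y, z)) renderRightFace(x, y, z);
         }

     Beyond that, building the whole chunk once into a display list or VBO and redrawing it, instead of issuing immediate-mode glVertex3f calls every frame, tends to matter even more for the frame rate.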


  • How to delete an object with a mouse click?

    - by Meko
    Hi all. I made a simple FlowChat Editor that creates rectangles and triangles and connects them to each other and shows the way from up to down. I can move this elements on screen too. I am now trying to create a button to delete the element which I clicked. There is problem that I can delete MyTriangle objects, but I can't delete MyRectangle objects. It deletes but not object which I clicked. I delete from first object to last. Here is my code: if (deleteObj) { if (rectsList.size() != 0) { for (int i = 0; i < rectsList.size(); i++) { MyRect rect = (MyRect) rectsList.get(i); if (e.getX() <= rect.c.x + 50 && e.getX() >= rect.c.x - 50 && e.getY() <= rect.c.y + 15 && e.getY() >= rect.c.y - 15) { rectsList.remove(rect); System.out.println("This is REctangle DELETED\n"); } } } if (triangleList.size() != 0) { for (int j = 0; j < triangleList.size(); j++) { MyTriangle trian = (MyTriangle) triangleList.get(j); if (e.getX() <= trian.c.x + 20 && e.getX() >= trian.c.x - 20 && e.getY() <= trian.c.y + 20 && e.getY() >= trian.c.y - 20) { triangleList.remove(trian); System.out.println("This is Triangle Deleted\n"); } } } Edit Here MyRectangle and MyTriangle classes public class MyRect extends Ellipse2D.Double { Point c; Point in; Point out; int posX; int posY; int width = 100; int height = 30; int count; public MyRect(Point center, Point input, Point output,int counter) { c = center; in = input; out = output; count=counter; } void drawMe(Graphics g) { // in.x=c.x+20; int posX = c.x; int posY = c.y; int posInX = in.x; int posInY = in.y; int posOutX = out.x; int posOutY = out.y; g.setColor(Color.MAGENTA); g.drawString(" S "+count ,posX-5, posY+5); g.setColor(Color.black); g.drawRect(posX-50, posY-15, width, height); g.setColor(Color.green); g.drawRect(posInX-3, posInY-9, 6, 6); g.setColor(Color.blue); g.drawRect(posOutX-3, posOutY+3, 6, 6); } } public class MyTriangle { Point c; Point in ; Point outYES ; Point outNO ; int posX; int posY; int count; public MyTriangle(Point center,Point input,Point outputYES,Point outputNO,int counter) { c = center; in = input; outYES = outputYES; outNO = outputNO; count=counter; } void drawMe(Graphics g) { int posX = c.x; int posY = c.y; int posInX=in.x; int posInY=in.y; int posOutYESX=outYES.x; int posOutYESY=outYES.y; int posOutNOX=outNO.x; int posOutNOY=outNO.y; int[] xPoints = {posX - 50, posX, posX + 50, posX}; int[] yPoints = {posY, posY - 30, posY, posY + 30}; g.setColor(Color.MAGENTA); g.drawString(" T "+count,posX-5, posY+5); g.setColor(Color.black); g.drawPolygon(xPoints, yPoints, 4); // draw input g.setColor(Color.green); g.drawRect(posInX-3,posInY-9, 6, 6); g.setColor(Color.blue); g.drawRect(posOutYESX-9,posOutYESY-3 , 6, 6); g.setColor(Color.red); g.drawRect(posOutNOX-3,posOutNOY+3 , 6, 6); } }
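
     One probable reason the wrong rectangle disappears (an assumption from the loop shown, not a tested fix): the loop keeps running after the first removal, and removing elements while stepping the index forward shifts the remaining entries. Walking the list backwards, removing by index and stopping after the first hit avoids both problems and removes the shape drawn on top:

         // sketch only; assumes rectsList is a java.util.List<MyRect> as in the code above
         for (int i = rectsList.size() - 1; i >= 0; i--) {
             MyRect rect = (MyRect) rectsList.get(i);
             boolean hit = e.getX() <= rect.c.x + 50 && e.getX() >= rect.c.x - 50
                        && e.getY() <= rect.c.y + 15 && e.getY() >= rect.c.y - 15;
             if (hit) {
                 rectsList.remove(i);   // remove only the topmost rectangle under the cursor
                 break;
             }
         }

     The same pattern applies to triangleList with its 20-pixel hit box.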


  • How do I update a MySQL database when posting a form without using hidden inputs?

    - by user1322707
    I have a "members" table in mysql which has approximately 200 field names. Each user is given up to 7 website templates with 26 different values they can insert unique data into for each template. Each time they create a template, they post the form with the 26 associated values. These 26 field names are the same for each template, but are differentiated by an integer at the end, ie _1, _2, ... _7. In the form submitting the template, I have a variable called $pid_sum which is inserted at the end of each field name to identify which template they are creating. For instance: <form method='post' action='create.template.php'> <input type='hidden' name='address_1' value='address_1'> <input type='hidden' name='city_1' value='city_1'> <input type='hidden' name='state_1' value='state_1'> etc... <input type='hidden' name='address_1' value='address_2'> <input type='hidden' name='city_1' value='city_2'> <input type='hidden' name='state_1' value='state_2'> etc... <input type='hidden' name='address_2' value='address_3'> <input type='hidden' name='city_2' value='city_3'> <input type='hidden' name='state_2' value='state_3'> etc... <input type='hidden' name='address_2' value='address_4'> <input type='hidden' name='city_2' value='city_4'> <input type='hidden' name='state_2' value='state_4'> etc... <input type='hidden' name='address_2' value='address_5'> <input type='hidden' name='city_2' value='city_5'> <input type='hidden' name='state_2' value='state_5'> etc... <input type='hidden' name='address_2' value='address_6'> <input type='hidden' name='city_2' value='city_6'> <input type='hidden' name='state_2' value='state_6'> etc... <input type='hidden' name='address_2' value='address_7'> <input type='hidden' name='city_2' value='city_7'> <input type='hidden' name='state_2' value='state_7'> etc... // Visible form user fills out in creating their template ($pid_sum converts // into an integer 1-7, depending on what template they are filling out) <input type='' name='address_$pid_sum'> <input type='' name='city_$pid_sum'> <input type='' name='state_$pid_sum'> etc... <input type='submit' name='save_button' id='save_button' value='Save Settings'> <form> Each of these need updated in a hidden input tag with each form post, or the values in the database table (which aren't submitted with the form) get deleted. So I am forced to insert approximately 175 hidden input tags with every creation of 26 new values for one of the 7 templates. Is there a PHP function or command that would enable me to update all these values without inserting 175 hidden input tags within each form post? 
Here is the create.template.php file which the form action calls: <?php $q=new Cdb; $t->set_file("content", "create_template.html"); $q2=new CDB; $query="SELECT menu_category FROM menus WHERE link='create.template.ag.php'"; $q2->query($query); $toall=0; if ($q2->nf()<1) { $toall=1; } while ($q2->next_record()) { if ($q2->f('menu_category')=="main") { $toall=1; } } if ($toall==0) { get_logged_info(); $q2=new CDB; $query="SELECT id FROM menus WHERE link='create_template.php'"; $q2->query($query); $q2->next_record(); $query="SELECT membership_id FROM menu_permissions WHERE menu_item='".$q2->f("id")."'"; $q2->query($query); while ($q2->next_record()) { $permissions[]=$q2->f("membership_id"); } if (count($permissions)>0) { $error='<center><font color="red"><b>You do not have access to this area!<br><br>Upgrade your membership level!</b></font></center>'; foreach ($permissions as $value) { if ($value==$q->f("membership_id")) { $error=''; break; } } if ($error!="") { die("$error"); } } } $member_id=$q->f("id"); $pid=$q->f("pid"); $pid_sum = $pid +1; $first_name=$q->f("first_name"); $last_name=$q->f("last_name"); $email=$q->f("email"); echo " // THIS IS WHERE THE HTML FORM GOES "; replace_tags_t($q->f("id"), $t); ?>
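
     The short answer, sketched here rather than taken from the site's own code: an UPDATE statement only touches the columns it names, so it is enough to update the 26 columns of the template being saved and leave every other column out of the query entirely, with no hidden inputs at all. A minimal sketch using the table and variables already shown above:

         // sketch only; column names follow the address_N / city_N / state_N pattern above
         $n       = (int) $pid_sum;                        // which of the 7 templates is being saved
         $address = mysql_real_escape_string($_POST["address_$n"]);
         $city    = mysql_real_escape_string($_POST["city_$n"]);
         $state   = mysql_real_escape_string($_POST["state_$n"]);

         $query = "UPDATE members
                   SET address_$n = '$address',
                       city_$n    = '$city',
                       state_$n   = '$state'
                   WHERE id = $member_id";
         mysql_query($query) or die(mysql_error());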


  • PHP setcookie warning

    - by Ranking
    Hello guys, I have a problem with 'setcookie' in PHP and I can't solve it. so I receive this error "Warning: Cannot modify header information - headers already sent by (output started at C:\Program Files\VertrigoServ\www\vote.php:14) in C:\Program Files\VertrigoServ\www\vote.php on line 86" and here is the file.. line 86 is setcookie ($cookie_name, 1, time()+86400, '/', '', 0); is there any other way to do this ?? <html> <head> <title>Ranking</title> <link href="style.css" rel="stylesheet" type="text/css"> </head> <body bgcolor="#EEF0FF"> <div align="center"> <br/> <div align="center"><div id="header"></div></div> <br/> <table width="800" border="0" align="center" cellpadding="5" cellspacing="0" class="mid-table"> <tr><td height="5"> <center> <table border="0" cellpadding="0" cellspacing="0" align="center" style="padding-top:5px;"> <tr> <td align="center" valign="top"><img src="images/ads/top_banner.png"></td> </tr> </table> </center> </td></tr> <tr><td height="5"></td></tr> </table> <br/> <?php include "conf.php"; $id = $_GET['id']; if (!isset($_POST['submitted'])) { if (isset($_GET['id']) && is_numeric($_GET['id'])) { $id = mysql_real_escape_string($_GET['id']); $query = mysql_query("SELECT SQL_CACHE id, name FROM s_servers WHERE id = $id"); $row = mysql_fetch_assoc($query); ?> <form action="" method="POST"> <table width="800" height="106" border="0" align="center" cellpadding="3" cellspacing="0" class="mid-table"> <tr><td><div align="center"> <p>Code: <input type="text" name="kod" class="port" /><img src="img.php" id="captcha2" alt="" /><a href="javascript:void(0);" onclick="document.getElementById('captcha2').src = document.getElementById('captcha2').src + '?' + (new Date()).getMilliseconds()">Refresh</a></p><br /> <p><input type="submit" class="vote-button" name="vote" value="Vote for <?php echo $row['name']; ?>" /></p> <input type="hidden" name="submitted" value="TRUE" /> <input type="hidden" name="id" value="<?php echo $row['id']; ?>" /> </div></td></tr> <tr><td align="center" valign="top"><img src="images/ads/top_banner.png"></td></tr> </table> </form> <?php } else { echo '<font color="red">You must select a valid server to vote for it!</font>'; } } else { $kod=$_POST['kod']; if($kod!=$_COOKIE[imgcodepage]) { echo "The code does not match"; } else { $id = mysql_real_escape_string($_POST['id']); $query = "SELECT SQL_CACHE id, votes FROM s_servers WHERE id = $id"; $result = mysql_query($query) OR die(mysql_error()); $row = mysql_fetch_array($result, MYSQL_ASSOC); $votes = $row['votes']; $id = $row['id']; $cookie_name = 'vote_'.$id; $ip = $_SERVER['REMOTE_ADDR']; $ltime = mysql_fetch_assoc(mysql_query("SELECT SQL_CACHE `time` FROM `s_votes` WHERE `sid`='$id' AND `ip`='$ip'")); $ltime = $ltime['time'] + 86400; $time = time(); if (isset($_COOKIE['vote_'.$id]) OR $ltime > $time) { echo 'You have already voted in last 24 hours! Your vote is not recorded.'; } else { $votes++; $query = "UPDATE s_servers SET votes = $votes WHERE id = $id"; $time = time(); $query2 = mysql_query("INSERT INTO `s_votes` (`ip`, `time`, `sid`) VALUES ('$ip', '$time', '$id')"); $result = mysql_query($query) OR die(mysql_error()); setcookie ($cookie_name, 1, time()+86400, '/', '', 0); } } } ?> <p><a href="index.php">[Click here if you don't want to vote]</a></p><br/> <p><a href="index.php">Ranking.net</a> &copy; 2010-2011<br> </p> </div> </body> </html> Thanks a lot!
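
     The warning itself names the usual cause: setcookie() adds an HTTP header, and headers can no longer be sent once vote.php has started printing HTML (the output starts at line 14, well before the setcookie call on line 86). Two common fixes, sketched only: move the whole vote/cookie block above the first line of HTML output, or buffer the output so the headers can still be modified later:

         <?php
         // sketch: first statement of vote.php, before any <html> markup is echoed
         ob_start();          // buffer all output so setcookie() further down still works
         ?>
         <html>
         ...
         <?php
         setcookie($cookie_name, 1, time() + 86400, '/', '', 0);   // now safe
         ob_end_flush();      // send the buffered page
         ?>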


  • Upgrading MySQL causes errors

    - by user1032531
    I am running Centos 5.8 (Linux 2.6.18-308.13.1.el5 on x86_64), and wish to upgrade MySQL version 5.0.95. I am using a tutorial found at http://www.if-not-true-then-false.com/2010/install-mysql-on-fedora-centos-red-hat-rhel/. When I get to the following, I get the host of errors. Seems to have something to do with going from 386 to 86x64. Can anyone help? Thanks yum --enablerepo=remi,remi-test update mysql mysql-server Transaction Check Error: file /etc/my.cnf from install of mysql50-5.0.96-2.ius.el5.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/info/mysql.info.gz from install of mysql50-5.0.96-2.ius.el5.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysql.1.gz from install of mysql50-5.0.96-2.ius.el5.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysqlaccess.1.gz from install of mysql50-5.0.96-2.ius.el5.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysqladmin.1.gz from install of mysql50-5.0.96-2.ius.el5.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysqldump.1.gz from install of mysql50-5.0.96-2.ius.el5.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysqlshow.1.gz from install of mysql50-5.0.96-2.ius.el5.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /etc/my.cnf from install of mysql-libs-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/charsets/Index.xml from install of mysql-libs-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/charsets/cp1250.xml from install of mysql-libs-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/charsets/cp1251.xml from install of mysql-libs-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/czech/errmsg.sys from install of mysql-libs-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/danish/errmsg.sys from install of mysql-libs-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/dutch/errmsg.sys from install of mysql-libs-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/english/errmsg.sys from install of mysql-libs-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/estonian/errmsg.sys from install of mysql-libs-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/french/errmsg.sys from install of mysql-libs-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/german/errmsg.sys from install of mysql-libs-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/greek/errmsg.sys from install of mysql-libs-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/hungarian/errmsg.sys from install of mysql-libs-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/italian/errmsg.sys from install of mysql-libs-5.5.28-1.el5.remi.x86_64 conflicts with file 
from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/japanese/errmsg.sys from install of mysql-libs-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/korean/errmsg.sys from install of mysql-libs-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/norwegian-ny/errmsg.sys from install of mysql-libs-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/norwegian/errmsg.sys from install of mysql-libs-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/polish/errmsg.sys from install of mysql-libs-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/portuguese/errmsg.sys from install of mysql-libs-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/romanian/errmsg.sys from install of mysql-libs-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/russian/errmsg.sys from install of mysql-libs-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/serbian/errmsg.sys from install of mysql-libs-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/slovak/errmsg.sys from install of mysql-libs-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/spanish/errmsg.sys from install of mysql-libs-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/swedish/errmsg.sys from install of mysql-libs-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/ukrainian/errmsg.sys from install of mysql-libs-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/bin/mysqlaccess from install of mysql-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/my_print_defaults.1.gz from install of mysql-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysql.1.gz from install of mysql-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysql_config.1.gz from install of mysql-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysql_find_rows.1.gz from install of mysql-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysql_waitpid.1.gz from install of mysql-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysqlaccess.1.gz from install of mysql-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysqladmin.1.gz from install of mysql-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysqldump.1.gz from install of mysql-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysqlshow.1.gz from install of mysql-5.5.28-1.el5.remi.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
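
     Going only by the error text above (so treat this as a guess, and back up the databases and /etc/my.cnf first): every conflict is between the new x86_64 packages and the 32-bit mysql-5.0.95-1.el5_7.1.i386 package that is already installed, so removing the leftover i386 copy before installing the 64-bit build is the usual way out:

         # sketch only
         yum remove mysql.i386
         yum --enablerepo=remi,remi-test install mysql.x86_64 mysql-server.x86_64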


  • WSAECONNRESET (10054) error using WebDrive to map to a Subversion/Apache WebDAV share

    - by Dylan Beattie
    Hello, I'm using WebDrive to map a drive letter to a WebDAV share running on Subversion with the SVNAutoversioning flag enabled. The Subversion server is running CollabNet Subversion Edge with LDAP authentication. When trying to connect using WebDrive, I get: Connecting to site myserver Connecting to http://myserver/webdrive/ Resolving url myserver to an IP address Url resolved to IP address 192.168.0.12 Connecting to 192.168.0.12 on port 80 Connected successfully to the server on port 80 Testing directory listing ... Connecting to 192.168.0.12 on port 80 Connected successfully to the server on port 80 Unable to connect to server, error information below Error: Socket receive failure (4507) Operation: Connecting to server Winsock Error: WSAECONNRESET (10054) The httpd.conf file running on the server contains the following section: <Location /webdrive/> DAV svn SVNParentPath "C:\Program Files\Subversion\data\repositories" SVNReposName "My Subversion WebDrive" AuthzSVNAccessFile "C:\Program Files\Subversion\data/conf/svn_access_file" SVNListParentPath On Allow from all AuthType Basic AuthName "My Subversion Repository" AuthBasicProvider csvn-file-users ldap-users Require valid-user ModMimeUsePathInfo on SVNAutoversioning on </Location> and in the Apache error_yyyy_mm_dd.log file on the server, I'm seeing this when I try to connect via WebDAV: [Mon Jan 10 14:53:22 2011] [debug] mod_authnz_ldap.c(379): [client 192.168.0.50] [5572] auth_ldap authenticate: using URL ldap://mydc/dc=mydomain,dc=com?sAMAccountName?sub [Mon Jan 10 14:53:22 2011] [debug] mod_authnz_ldap.c(484): [client 192.168.0.50] [5572] auth_ldap authenticate: accepting dylan.beattie [Mon Jan 10 14:53:22 2011] [info] [client 192.168.0.50] Access granted: 'dylan.beattie' OPTIONS webdrive:/ [Mon Jan 10 14:53:22 2011] [debug] mod_authnz_ldap.c(379): [client 192.168.0.50] [5572] auth_ldap authenticate: using URL ldap://mydc/dc=mydomain,dc=com?sAMAccountName?sub [Mon Jan 10 14:53:22 2011] [debug] mod_authnz_ldap.c(484): [client 192.168.0.50] [5572] auth_ldap authenticate: accepting dylan.beattie [Mon Jan 10 14:53:22 2011] [info] [client 192.168.0.50] Access granted: 'dylan.beattie' PROPFIND webdrive:/ [Mon Jan 10 14:53:25 2011] [notice] Parent: child process exited with status 3221225477 -- Restarting. [Mon Jan 10 14:53:25 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xcd0f18 rmm=0xcd0f48 for VHOST: myserver.mydomain.com [Mon Jan 10 14:53:25 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xcd0f18 rmm=0xcd0f48 for VHOST: myserver.mydomain.com [Mon Jan 10 14:53:25 2011] [info] APR LDAP: Built with Microsoft Corporation. LDAP SDK [Mon Jan 10 14:53:25 2011] [info] LDAP: SSL support unavailable: LDAP: CA certificates cannot be set using this method, as they are stored in the registry instead. 
[Mon Jan 10 14:53:25 2011] [notice] Apache/2.2.16 (Win32) DAV/2 SVN/1.6.13 configured -- resuming normal operations [Mon Jan 10 14:53:25 2011] [notice] Server built: Oct 4 2010 19:55:36 [Mon Jan 10 14:53:25 2011] [notice] Parent: Created child process 4368 [Mon Jan 10 14:53:25 2011] [debug] mpm_winnt.c(487): Parent: Sent the scoreboard to the child [Mon Jan 10 14:53:25 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xca2bb0 rmm=0xca2be0 for VHOST: myserver.mydomain.com [Mon Jan 10 14:53:25 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xca2bb0 rmm=0xca2be0 for VHOST: myserver.mydomain.com [Mon Jan 10 14:53:25 2011] [info] APR LDAP: Built with Microsoft Corporation. LDAP SDK [Mon Jan 10 14:53:25 2011] [info] LDAP: SSL support unavailable: LDAP: CA certificates cannot be set using this method, as they are stored in the registry instead. [Mon Jan 10 14:53:25 2011] [error] python_init: Python version mismatch, expected '2.5', found '2.5.4'. [Mon Jan 10 14:53:25 2011] [error] python_init: Python executable found 'C:\\Program Files\\Subversion\\bin\\httpd.exe'. [Mon Jan 10 14:53:25 2011] [error] python_init: Python path being used 'C:\\Program Files\\Subversion\\Python25\\python25.zip;C:\\Program Files\\Subversion\\Python25\\\\DLLs;C:\\Program Files\\Subversion\\Python25\\\\lib;C:\\Program Files\\Subversion\\Python25\\\\lib\\plat-win;C:\\Program Files\\Subversion\\Python25\\\\lib\\lib-tk;C:\\Program Files\\Subversion\\bin'. [Mon Jan 10 14:53:25 2011] [notice] mod_python: Creating 8 session mutexes based on 0 max processes and 64 max threads. [Mon Jan 10 14:53:25 2011] [notice] Child 4368: Child process is running [Mon Jan 10 14:53:25 2011] [debug] mpm_winnt.c(408): Child 4368: Retrieved our scoreboard from the parent. [Mon Jan 10 14:53:25 2011] [info] Parent: Duplicating socket 288 and sending it to child process 4368 [Mon Jan 10 14:53:25 2011] [info] Parent: Duplicating socket 276 and sending it to child process 4368 [Mon Jan 10 14:53:25 2011] [debug] mpm_winnt.c(564): Child 4368: retrieved 2 listeners from parent [Mon Jan 10 14:53:25 2011] [notice] Child 4368: Acquired the start mutex. [Mon Jan 10 14:53:25 2011] [notice] Child 4368: Starting 64 worker threads. [Mon Jan 10 14:53:25 2011] [debug] mpm_winnt.c(605): Parent: Sent 2 listeners to child 4368 [Mon Jan 10 14:53:25 2011] [notice] Child 4368: Starting thread to listen on port 49159. [Mon Jan 10 14:53:25 2011] [notice] Child 4368: Starting thread to listen on port 80. [Mon Jan 10 14:53:25 2011] [debug] mod_authnz_ldap.c(379): [client 192.168.0.50] [4368] auth_ldap authenticate: using URL ldap://mydc/dc=mydomain,dc=com?sAMAccountName?sub [Mon Jan 10 14:53:25 2011] [debug] mod_authnz_ldap.c(484): [client 192.168.0.50] [4368] auth_ldap authenticate: accepting dylan.beattie [Mon Jan 10 14:53:25 2011] [info] [client 192.168.0.50] Access granted: 'dylan.beattie' PROPFIND webdrive:/ [Mon Jan 10 14:53:28 2011] [notice] Parent: child process exited with status 3221225477 -- Restarting. [Mon Jan 10 14:53:28 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xcd4f90 rmm=0xcd4fc0 for VHOST: myserver.mydomain.com [Mon Jan 10 14:53:28 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xcd4f90 rmm=0xcd4fc0 for VHOST: myserver.mydomain.com [Mon Jan 10 14:53:28 2011] [info] APR LDAP: Built with Microsoft Corporation. 
LDAP SDK [Mon Jan 10 14:53:28 2011] [info] LDAP: SSL support unavailable: LDAP: CA certificates cannot be set using this method, as they are stored in the registry instead. [Mon Jan 10 14:53:28 2011] [notice] Apache/2.2.16 (Win32) DAV/2 SVN/1.6.13 configured -- resuming normal operations [Mon Jan 10 14:53:28 2011] [notice] Server built: Oct 4 2010 19:55:36 [Mon Jan 10 14:53:28 2011] [notice] Parent: Created child process 5440 [Mon Jan 10 14:53:28 2011] [debug] mpm_winnt.c(487): Parent: Sent the scoreboard to the child [Mon Jan 10 14:53:28 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xda2bb0 rmm=0xda2be0 for VHOST: myserver.mydomain.com [Mon Jan 10 14:53:28 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xda2bb0 rmm=0xda2be0 for VHOST: myserver.mydomain.com [Mon Jan 10 14:53:28 2011] [info] APR LDAP: Built with Microsoft Corporation. LDAP SDK [Mon Jan 10 14:53:28 2011] [info] LDAP: SSL support unavailable: LDAP: CA certificates cannot be set using this method, as they are stored in the registry instead. [Mon Jan 10 14:53:28 2011] [error] python_init: Python version mismatch, expected '2.5', found '2.5.4'. [Mon Jan 10 14:53:28 2011] [error] python_init: Python executable found 'C:\\Program Files\\Subversion\\bin\\httpd.exe'. [Mon Jan 10 14:53:28 2011] [error] python_init: Python path being used 'C:\\Program Files\\Subversion\\Python25\\python25.zip;C:\\Program Files\\Subversion\\Python25\\\\DLLs;C:\\Program Files\\Subversion\\Python25\\\\lib;C:\\Program Files\\Subversion\\Python25\\\\lib\\plat-win;C:\\Program Files\\Subversion\\Python25\\\\lib\\lib-tk;C:\\Program Files\\Subversion\\bin'. [Mon Jan 10 14:53:28 2011] [notice] mod_python: Creating 8 session mutexes based on 0 max processes and 64 max threads. [Mon Jan 10 14:53:28 2011] [notice] Child 5440: Child process is running [Mon Jan 10 14:53:28 2011] [debug] mpm_winnt.c(408): Child 5440: Retrieved our scoreboard from the parent. [Mon Jan 10 14:53:28 2011] [info] Parent: Duplicating socket 288 and sending it to child process 5440 [Mon Jan 10 14:53:28 2011] [info] Parent: Duplicating socket 276 and sending it to child process 5440 [Mon Jan 10 14:53:28 2011] [debug] mpm_winnt.c(564): Child 5440: retrieved 2 listeners from parent [Mon Jan 10 14:53:28 2011] [notice] Child 5440: Acquired the start mutex. [Mon Jan 10 14:53:28 2011] [notice] Child 5440: Starting 64 worker threads. [Mon Jan 10 14:53:28 2011] [debug] mpm_winnt.c(605): Parent: Sent 2 listeners to child 5440 [Mon Jan 10 14:53:28 2011] [notice] Child 5440: Starting thread to listen on port 49159. [Mon Jan 10 14:53:28 2011] [notice] Child 5440: Starting thread to listen on port 80. Browsing http://myserver/webdrive/ from a web browser is working fine, and I have a similar set-up working perfectly on a different SVN server that isn't running Collabnet but has had Subversion and Apache installed and configured separately. Any ideas? The python version error might be red herring - I've seen it in a couple of places in the log files and in other scenarios it doesn't appear to be breaking anything...


  • solved: puppet master REST API returns 403 when running under Passenger, but works when the master runs from the command line

    - by Anadi Misra
    I am using the standard auth.conf provided in puppet install for the puppet master which is running through passenger under Nginx. However for most of the catalog, files and certitifcate request I get a 403 response. ### Authenticated paths - these apply only when the client ### has a valid certificate and is thus authenticated # allow nodes to retrieve their own catalog path ~ ^/catalog/([^/]+)$ method find allow $1 # allow nodes to retrieve their own node definition path ~ ^/node/([^/]+)$ method find allow $1 # allow all nodes to access the certificates services path ~ ^/certificate_revocation_list/ca method find allow * # allow all nodes to store their reports path /report method save allow * # unconditionally allow access to all file services # which means in practice that fileserver.conf will # still be used path /file allow * ### Unauthenticated ACL, for clients for which the current master doesn't ### have a valid certificate; we allow authenticated users, too, because ### there isn't a great harm in letting that request through. # allow access to the master CA path /certificate/ca auth any method find allow * path /certificate/ auth any method find allow * path /certificate_request auth any method find, save allow * path /facts auth any method find, search allow * # this one is not stricly necessary, but it has the merit # of showing the default policy, which is deny everything else path / auth any Puppet master however does not seems to be following this as I get this error on client [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com [sudo] password for amisr1: Starting Puppet client version 3.0.1 Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110 Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110 Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110 Using cached catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110 and the server logs show XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? 
HTTP/1.1" 403 93 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby" thefile server conf file is as follows (and goin by what they say on puppet site, It is better to regulate access in auth.conf for reaching file server and then allow file server to server all) [files] path /apps/puppet/files allow * [private] path /apps/puppet/private/%H allow * [modules] allow * I am using server and client version 3 Nginx has been compiled using the following options nginx version: nginx/1.3.9 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/ and the standard nginx puppet master conf server { ssl on; listen 8140 ssl; server_name _; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; passenger_min_instances 5; access_log logs/puppet_access.log; error_log logs/puppet_error.log; root /apps/nginx/html/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; } Puppet is picking up the correct settings from the files mentioned because config print command points to /etc/puppet [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf async_storeconfigs = false authconfig = /etc/puppet/namespaceauth.conf autosign = /etc/puppet/autosign.conf catalog_cache_terminus = store_configs confdir = /etc/puppet config = /etc/puppet/puppet.conf config_file_name = puppet.conf config_version = "" configprint = all configtimeout = 120 dblocation = /var/lib/puppet/state/clientconfigs.sqlite3 deviceconfig = /etc/puppet/device.conf fileserverconfig = /etc/puppet/fileserver.conf genconfig = false hiera_config = /etc/puppet/hiera.yaml localconfig = /var/lib/puppet/state/localconfig name = config rest_authconfig = /etc/puppet/auth.conf storeconfigs = true storeconfigs_backend = puppetdb tagmap = /etc/puppet/tagmail.conf thin_storeconfigs = false I checked the firewall rules on this VM; 80, 443, 8140, 3000 are allowed. Do I still have to tweak any specifics to auth.conf for getting this to work? 
Update I added verbose logging to the puppet master and restarted nginx; here's the additional info I see in logs Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Could not resolve 10.209.47.31: no name for 10.209.47.31 Mon Dec 10 18:19:15 +0530 2012 access[/] (info): defaulting to no access for 10.209.47.31 Mon Dec 10 18:19:15 +0530 2012 Puppet (warning): Denying access: Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111 Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111 10.209.47.31 - - [10/Dec/2012:18:19:15 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby" On the agent machine facter fqdn and hostname both return a fully qualified host name [amisr1@blramisr195602 ~]$ sudo facter fqdn blramisr195602.XXXXXXX.com I then updated the agent configuration to add dns_alt_names = 10.209.47.31 cleaned all certificates on master and agent and regenerated the certificates and signed them on master using the option --allow-dns-alt-names [amisr1@bangvmpllDA02 ~]$ sudo puppet cert sign blramisr195602.XXXXXX.com Error: CSR 'blramisr195602.XXXXXX.com' contains subject alternative names (DNS:10.209.47.31, DNS:blramisr195602.XXXXXX.com), which are disallowed. Use `puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com` to sign this request. [amisr1@bangvmpllDA02 ~]$ sudo puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com Signed certificate request for blramisr195602.XXXXXX.com Removing file Puppet::SSL::CertificateRequest blramisr195602.XXXXXX.com at '/var/lib/puppet/ssl/ca/requests/blramisr195602.XXXXXX.com.pem' however, that doesn't help either; I get same errors as before. Not sure why in the logs it shows comparing access rules by IP and not hostname. Is there any Nginx configuration to change this behavior?
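
     One setting that is often missing in Nginx/Passenger setups like this (an assumption here, since the final answer is not quoted): auth.conf can only match on the client's certname if the master knows which CGI headers Nginx copies the certificate details into, which is exactly what the two passenger_set_cgi_param lines above provide:

         # sketch only: /etc/puppet/puppet.conf on the master
         [master]
             ssl_client_header        = HTTP_X_CLIENT_DN
             ssl_client_verify_header = HTTP_X_CLIENT_VERIFY

     Without those settings the master falls back to the client's IP address, which matches the "Could not resolve 10.209.47.31: no name" and "defaulting to no access" lines in the verbose log.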


  • How to get full control of umask/PAM/permissions?

    - by plua
    OUR SITUATION Several people from our company log in to a server and upload files. They all need to be able to upload and overwrite the same files. They have different usernames, but are all part of the same group. However, this is an internet server, so the "other" users should have (in general) just read-only access. So what I want to have is these standard permissions: files: 664 directories: 771 My goal is that all users do not need to worry about permissions. The server should be configured in such a way that these permissions apply to all files and directories, newly created, copied, or over-written. Only when we need some special permissions we'd manually change this. We upload files to the server by SFTP-ing in Nautilus, by mounting the server using sshfs and accessing it in Nautilus as if it were a local folder, and by SCP-ing in the command line. That basically covers our situation and what we aim to do. Now, I have read many things about the beautiful umask functionality. From what I understand umask (together with PAM) should allow me to do exactly what I want: set standard permissions for new files and directories. However, after many many hours of reading and trial-and-error, I still do not get this to work. I get many unexpected results. I really like to get a solid grasp of umask and have many question unanswered. I will post these questions below, together with my findings and an explanation of my trials that led to these questions. Given that many things appear to go wrong, I think that I am doing several things wrong. So therefore, there are many questions. NOTE: I am using Ubuntu 9.10 and therefore can not change the sshd_config to set the umask for the SFTP server. Installed SSH OpenSSH_5.1p1 Debian-6ubuntu2 < required OpenSSH 5.4p1. So here go the questions. 1. DO I NEED TO RESTART FOR PAM CHANGS TO TAKE EFFECT? Let's start with this. There were so many files involved and I was unable to figure out what does and what does not affect things, also because I did not know whether or not I have to restart the whole system for PAM changes to take effect. I did do so after not seeing the expected results, but is this really necessary? Or can I just log out from the server and log back in, and should new PAM policies be effective? Or is there some 'PAM' program to reload? 2. IS THERE ONE SINGLE FILE TO CHANGE THAT AFFECTS ALL USERS FOR ALL SESSIONS? So I ended up changing MANY files, as I read MANY different things. I ended up setting the umask in the following files: ~/.profile -> umask=0002 ~/.bashrc -> umask=0002 /etc/profile -> umask=0002 /etc/pam.d/common-session -> umask=0002 /etc/pam.d/sshd -> umask=0002 /etc/pam.d/login -> umask=0002 I want this change to apply to all users, so some sort of system-wide change would be best. Can it be achieved? 3. AFTER ALL, THIS UMASK THING, DOES IT WORK? So after changing umask to 0002 at every possible place, I run tests. 
------------SCP----------- TEST 1: scp testfile (which has 777 permissions for testing purposes) server:/home/ testfile 100% 4 0.0KB/s 00:00 Let's check permissions: user@server:/home$ ls -l total 4 -rwx--x--x 1 user uploaders 4 2011-02-05 17:59 testfile (711) ---------SSH------------ TEST 2: ssh server user@server:/home$ touch anotherfile user@server:/home$ ls -l total 4 -rw-rw-r-- 1 user uploaders 0 2011-02-05 18:03 anotherfile (664) --------SFTP----------- Nautilus: sftp://server/home/ Copy and paste newfile from client to server (777 on client) TEST 3: user@server:/home$ ls -l total 4 -rwxrwxrwx 1 user uploaders 3 2011-02-05 18:05 newfile (777) Create a new file through Nautilus. Check file permissions in terminal: TEST 4: user@server:/home$ ls -l total 4 -rw------- 1 user uploaders 0 2011-02-05 18:06 newfile (600) I mean... WHAT just happened here?! We should get 644 every single time. Instead I get 711, 777, 600, and then once 644. And the 644 is only achieved when creating a new, blank file through SSH, which is the least probable scenario. So I am asking, does umask/pam work after all? 4. SO WHAT DOES IT MEAN TO UMASK SSHFS? Sometimes we mount a server locally, using sshfs. Very useful. But again, we have permissions issues. Here is how we mount: sshfs -o idmap=user -o umask=0113 user@server:/home/ /mnt NOTE: we use umask = 113 because apparently, sshfs starts from 777 instead of 666, so with 113 we get 664 which is the desired file permission. But what now happens is that we see all files and directories as if they are 664. We browse in Nautilus to /mnt and: Right click - New File (newfile) --- TEST 5 Right click - New Folder (newfolder) --- TEST 6 Copy and paste a 777 file from our local client --- TEST 7 So let's check on the command line: user@client:/mnt$ ls -l total 8 -rw-rw-r-- 1 user 1007 3 Feb 5 18:05 copyfile (664) -rw-rw-r-- 1 user 1007 0 Feb 5 18:15 newfile (664) drw-rw-r-- 1 user 1007 4096 Feb 5 18:15 newfolder (664) But hey, let's check this same folder on the server-side: user@server:/home$ ls -l total 8 -rwxrwxrwx 1 user uploaders 3 2011-02-05 18:05 copyfile (777) -rw------- 1 user uploaders 0 2011-02-05 18:15 newfile (600) drwx--x--x 2 user uploaders 4096 2011-02-05 18:15 newfolder (711) What?! The REAL file permissions are very different from what we see in Nautilus. So does this umask on sshfs just create a 'filter' that shows unreal file permissions? And I tried to open a file from another user but the same group that had real 600 permissions but 644 'fake' permissions, and I could still not read this, so what good is this filter?? 5. UMASK IS ALL ABOUT FILES. BUT WHAT ABOUT DIRECTORIES? From my tests I can see that the umask that is being applied also somehow influences the directory permissions. However, I want my files to be 664 (002) and my directories to be 771 (006). So is it possible to have a different umask for directories? 6. PERHAPS UMASK/PAM IS REALLY COOL, BUT UBUNTU IS JUST BUGGY? On the one hand, I have read topics of people that have had success with PAM/UMASK and Ubuntu. On the other hand, I have found many older and newer bugs regarding umask/PAM/fuse on Ubuntu: https://bugs.launchpad.net/ubuntu/+source/gdm/+bug/241198 https://bugs.launchpad.net/ubuntu/+source/fuse/+bug/239792 https://bugs.launchpad.net/ubuntu/+source/pam/+bug/253096 https://bugs.launchpad.net/ubuntu/+source/sudo/+bug/549172 http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=314796 So I do not know what to believe anymore. Should I just give up? Would ACL solve all my problems? 
Or am I simply running into problems because of Ubuntu again? One word of caution about backups with tar: Red Hat/CentOS builds of tar support ACLs, but Ubuntu's tar does not preserve them when backing up, which means all ACLs will be lost when you create a backup. I am very willing to upgrade to Ubuntu 10.04 if that would solve my problems, but first I want to understand what is happening.
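For what it's worth, the behaviour this question is chasing (group-writable uploads regardless of each client's umask, while "other" stays effectively read-only) is usually achieved with a setgid directory plus default ACLs rather than with umask alone, and ACLs are exactly what the question hints at towards the end. A minimal sketch, assuming the shared tree is /home/uploads and the group is uploaders (both placeholders), the acl package is installed, and the filesystem is mounted with ACL support:

    # placeholders: shared tree /home/uploads, group "uploaders"
    sudo chgrp -R uploaders /home/uploads
    sudo find /home/uploads -type d -exec chmod 2771 {} \;   # setgid bit: new files inherit the directory's group

    # default ACLs: everything created below gets group rw (plus x on directories)
    sudo setfacl -R    -m g:uploaders:rwX /home/uploads
    sudo setfacl -R -d -m g:uploaders:rwX /home/uploads

    # inspect the effective and default entries
    getfacl /home/uploads

The effective group rights are still capped by the ACL mask that the creating program requests, so a tool that deliberately copies a restrictive source mode (scp -p of a 700 file, for instance) can still defeat this; and, as noted above, Ubuntu's tar will not carry the ACLs into backups.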

    Read the article

  • EC2 instance suddenly refusing SSH connections and won't respond to ping

    - by Chris
    My instance was running fine and this morning I was able to access a Ruby on Rails app hosted on it. An hour later I suddenly wasn't able to access my site, my SSH connection attempts were refused and the server wasn't even responding to ping. I didn't change anything on my system during that hour and reboots aren't fixing it. I've never had any problems connecting or pinging the system before. Can someone please help? This is on my production system! OS: CentOS 5 AMI ID: ami-10b55379 Type: m1.small [] ~% ssh -v *****@meeteor.com OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009 debug1: Reading configuration data /etc/ssh_config debug1: Connecting to meeteor.com [184.73.235.191] port 22. debug1: connect to address 184.73.235.191 port 22: Connection refused ssh: connect to host meeteor.com port 22: Connection refused [] ~% ping meeteor.com PING meeteor.com (184.73.235.191): 56 data bytes Request timeout for icmp_seq 0 Request timeout for icmp_seq 1 Request timeout for icmp_seq 2 ^C --- meeteor.com ping statistics --- 4 packets transmitted, 0 packets received, 100.0% packet loss [] ~% ========= System Log ========= Restarting system. Linux version 2.6.16-xenU ([email protected]) (gcc version 4.0.1 20050727 (Red Hat 4.0.1-5)) #1 SMP Mon May 28 03:41:49 SAST 2007 BIOS-provided physical RAM map: Xen: 0000000000000000 - 000000006a400000 (usable) 980MB HIGHMEM available. 727MB LOWMEM available. NX (Execute Disable) protection: active IRQ lockup detection disabled Built 1 zonelists Kernel command line: root=/dev/sda1 ro 4 Enabling fast FPU save and restore... done. Enabling unmasked SIMD FPU exception support... done. Initializing CPU#0 PID hash table entries: 4096 (order: 12, 65536 bytes) Xen reported: 2599.998 MHz processor. Dentry cache hash table entries: 131072 (order: 7, 524288 bytes) Inode-cache hash table entries: 65536 (order: 6, 262144 bytes) Software IO TLB disabled vmalloc area: ee000000-f53fe000, maxmem 2d7fe000 Memory: 1718700k/1748992k available (1958k kernel code, 20948k reserved, 620k data, 144k init, 1003528k highmem) Checking if this processor honours the WP bit even in supervisor mode... Ok. Calibrating delay using timer specific routine.. 5202.30 BogoMIPS (lpj=26011526) Mount-cache hash table entries: 512 CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line) CPU: L2 Cache: 1024K (64 bytes/line) Checking 'hlt' instruction... OK. Brought up 1 CPUs migration_cost=0 Grant table initialized NET: Registered protocol family 16 Brought up 1 CPUs xen_mem: Initialising balloon driver. highmem bounce pool size: 64 pages VFS: Disk quotas dquot_6.5.1 Dquot-cache hash table entries: 1024 (order 0, 4096 bytes) Initializing Cryptographic API io scheduler noop registered io scheduler anticipatory registered (default) io scheduler deadline registered io scheduler cfq registered i8042.c: No controller found. RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize Xen virtual console successfully installed as tty1 Event-channel device installed. netfront: Initialising virtual ethernet driver. 
mice: PS/2 mouse device common for all mice md: md driver 0.90.3 MAX_MD_DEVS=256, MD_SB_DISKS=27 md: bitmap version 4.39 NET: Registered protocol family 2 Registering block device major 8 IP route cache hash table entries: 65536 (order: 6, 262144 bytes) TCP established hash table entries: 262144 (order: 9, 2097152 bytes) TCP bind hash table entries: 65536 (order: 7, 524288 bytes) TCP: Hash tables configured (established 262144 bind 65536) TCP reno registered TCP bic registered NET: Registered protocol family 1 NET: Registered protocol family 17 NET: Registered protocol family 15 Using IPI No-Shortcut mode md: Autodetecting RAID arrays. md: autorun ... md: ... autorun DONE. kjournald starting. Commit interval 5 seconds EXT3-fs: mounted filesystem with ordered data mode. VFS: Mounted root (ext3 filesystem) readonly. Freeing unused kernel memory: 144k freed *************************************************************** *************************************************************** ** WARNING: Currently emulating unsupported memory accesses ** ** in /lib/tls glibc libraries. The emulation is ** ** slow. To ensure full performance you should ** ** install a 'xen-friendly' (nosegneg) version of ** ** the library, or disable tls support by executing ** ** the following as root: ** ** mv /lib/tls /lib/tls.disabled ** ** Offending process: init (pid=1) ** *************************************************************** *************************************************************** Pausing... 5Pausing... 4Pausing... 3Pausing... 2Pausing... 1Continuing... INIT: version 2.86 booting Welcome to CentOS release 5.4 (Final) Press 'I' to enter interactive startup. Setting clock : Fri Oct 1 14:35:26 EDT 2010 [ OK ] Starting udev: [ OK ] Setting hostname localhost.localdomain: [ OK ] No devices found Setting up Logical Volume Management: [ OK ] Checking filesystems Checking all file systems. [/sbin/fsck.ext3 (1) -- /] fsck.ext3 -a /dev/sda1 /dev/sda1: clean, 275424/1310720 files, 1161123/2621440 blocks [ OK ] Remounting root filesystem in read-write mode: [ OK ] Mounting local filesystems: [ OK ] Enabling local filesystem quotas: [ OK ] Enabling /etc/fstab swaps: [ OK ] INIT: Entering runlevel: 4 Entering non-interactive startup Starting background readahead: [ OK ] Applying ip6tables firewall rules: modprobe: FATAL: Module ip6_tables not found. ip6tables-restore v1.3.5: ip6tables-restore: unable to initializetable 'filter' Error occurred at line: 3 Try `ip6tables-restore -h' or 'ip6tables-restore --help' for more information. [FAILED] Applying iptables firewall rules: [ OK ] Loading additional iptables modules: ip_conntrack_netbios_ns [ OK ] Bringing up loopback interface: [ OK ] Bringing up interface eth0: Determining IP information for eth0... done. [ OK ] Starting auditd: [FAILED] Starting irqbalance: [ OK ] Starting portmap: [ OK ] FATAL: Module lockd not found. Starting NFS statd: [ OK ] Starting RPC idmapd: FATAL: Module sunrpc not found. FATAL: Error running install command for sunrpc Error: RPC MTAB does not exist. Starting system message bus: [ OK ] Starting Bluetooth services:[ OK ] [ OK ] Can't open RFCOMM control socket: Address family not supported by protocol Mounting other filesystems: [ OK ] Starting PC/SC smart card daemon (pcscd): [ OK ] Starting hidd: Can't open HIDP control socket: Address family not supported by protocol [FAILED] Starting autofs: Starting automount: automount: test mount forbidden or incorrect kernel protocol version, kernel protocol version 5.00 or above required. 
[FAILED] [FAILED] Starting sshd: [ OK ] Starting cups: [ OK ] Starting sendmail: [ OK ] Starting sm-client: [ OK ] Starting console mouse services: no console device found[FAILED] Starting crond: [ OK ] Starting xfs: [ OK ] Starting anacron: [ OK ] Starting atd: [ OK ] % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 390 100 390 0 0 58130 0 --:--:-- --:--:-- --:--:-- 58130 100 390 100 390 0 0 56984 0 --:--:-- --:--:-- --:--:-- 0 Starting yum-updatesd: [ OK ] Starting Avahi daemon... [ OK ] Starting HAL daemon: [ OK ] Starting OSSEC: [ OK ] Starting smartd: [ OK ] c CentOS release 5.4 (Final) Kernel 2.6.16-xenU on an i686 domU-12-31-39-00-C4-97 login: INIT: Id "2" respawning too fast: disabled for 5 minutes INIT: Id "3" respawning too fast: disabled for 5 minutes INIT: Id "4" respawning too fast: disabled for 5 minutes INIT: Id "5" respawning too fast: disabled for 5 minutes INIT: Id "6" respawning too fast: disabled for 5 minutes
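The console log above shows sshd reporting [ OK ] while several other services fail and init complains about gettys respawning too fast, so the daemon itself may well be running and something in front of it (the host firewall or the EC2 security group) may be refusing connections instead — a REJECT looks exactly like "Connection refused" from the outside, and a dropped ICMP rule explains the failed pings. Once a shell is reachable again (after the reboot that produced this log, or by attaching the volume to another instance), a quick CentOS 5 check might look like this; a diagnostic sketch only, not a fix:

    # is sshd actually listening?
    netstat -tlnp | grep ':22'

    # is the host firewall rejecting or dropping traffic?
    iptables -L -n --line-numbers

    # what has the daemon logged, and is it still up?
    tail -n 50 /var/log/secure
    service sshd status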

    Read the article

  • Failed to convert a wmv file to mp4 with ffmpeg

    - by Olaf Erlandsen
    i need a help with this command FFMPEG COMMAND: ffmpeg -y -i /input.wmv -vcodec libx264 -acodec libfaac -ac 2 -bufsize 20M -sameq -f mp4 /output.mp4 Output: ffmpeg version 1.0 Copyright (c) 2000-2012 the FFmpeg developers built on Oct 9 2012 07:04:08 with gcc 4.4.6 (GCC) 20120305 (Red Hat 4.4.6-4) [wmv3 @ 0x16a4800] Extra data: 8 bits left, value: 0 Guessed Channel Layout for Input Stream #0.0 : stereo Input #0, asf, from '/input.wmv': Metadata: WMFSDKVersion : 11.0.5721.5275 WMFSDKNeeded : 0.0.0.0000 IsVBR : 0 Duration: 00:01:35.10, start: 0.000000, bitrate: 496 kb/s Stream #0:0(spa): Audio: wmav2 (a[1][0][0] / 0x0161), 44100 Hz, stereo, s16, 64 kb/s Stream #0:1(spa): Video: wmv3 (Main) (WMV3 / 0x33564D57), yuv420p, 320x240, 425 kb/s, SAR 1:1 DAR 4:3, 29.97 tbr, 1k tbn, 1k tbc [libx264 @ 0x16c3000] VBV bufsize set but maxrate unspecified, ignored [libx264 @ 0x16c3000] using SAR=1/1 [libx264 @ 0x16c3000] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2 [libx264 @ 0x16c3000] profile High, level 1.3 [libx264 @ 0x16c3000] 264 - core 128 - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00 [wmv3 @ 0x16a4800] Extra data: 8 bits left, value: 0 Output #0, mp4, to '/output.mp4': Metadata: WMFSDKVersion : 11.0.5721.5275 WMFSDKNeeded : 0.0.0.0000 IsVBR : 0 encoder : Lavf54.29.104 Stream #0:0(spa): Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 320x240 [SAR 1:1 DAR 4:3], q=-1--1, 30k tbn, 29.97 tbc Stream #0:1(spa): Audio: aac ([64][0][0][0] / 0x0040), 44100 Hz, stereo, s16, 128 kb/s Stream mapping: Stream #0:1 -> #0:0 (wmv3 -> libx264) Stream #0:0 -> #0:1 (wmav2 -> libfaac) Press [q] to stop, [?] 
for help [libfaac @ 0x16b3600] Que input is backward in time [mp4 @ 0x16bb3a0] st:0 PTS: 6174 DTS: 6174 < 7169 invalid, clipping frame= 144 fps=0.0 q=29.0 size= 207kB time=00:00:03.38 bitrate= 500.3kbits/s frame= 259 fps=257 q=29.0 size= 447kB time=00:00:07.30 bitrate= 501.3kbits/s frame= 375 fps=248 q=29.0 size= 668kB time=00:00:11.01 bitrate= 496.5kbits/s frame= 487 fps=241 q=29.0 size= 836kB time=00:00:14.85 bitrate= 460.7kbits/s frame= 605 fps=240 q=29.0 size= 1080kB time=00:00:18.92 bitrate= 467.4kbits/s frame= 719 fps=238 q=29.0 size= 1306kB time=00:00:22.80 bitrate= 469.2kbits/s frame= 834 fps=237 q=29.0 size= 1546kB time=00:00:26.52 bitrate= 477.3kbits/s frame= 953 fps=237 q=29.0 size= 1763kB time=00:00:30.27 bitrate= 477.0kbits/s frame= 1071 fps=237 q=29.0 size= 1986kB time=00:00:34.36 bitrate= 473.4kbits/s frame= 1161 fps=231 q=29.0 size= 2160kB time=00:00:37.21 bitrate= 475.4kbits/s frame= 1221 fps=220 q=29.0 size= 2282kB time=00:00:39.53 bitrate= 472.9kbits/s frame= 1280 fps=212 q=29.0 size= 2392kB time=00:00:41.16 bitrate= 476.1kbits/s frame= 1331 fps=203 q=29.0 size= 2502kB time=00:00:43.23 bitrate= 474.1kbits/s frame= 1379 fps=195 q=29.0 size= 2618kB time=00:00:44.72 bitrate= 479.6kbits/s frame= 1430 fps=189 q=29.0 size= 2733kB time=00:00:46.34 bitrate= 483.0kbits/s frame= 1487 fps=184 q=29.0 size= 2851kB time=00:00:48.40 bitrate= 482.6kbits/s frame= 1546 fps=180 q=26.0 size= 2973kB time=00:00:50.43 bitrate= 482.9kbits/s frame= 1610 fps=177 q=29.0 size= 3112kB time=00:00:52.40 bitrate= 486.5kbits/s frame= 1672 fps=174 q=29.0 size= 3231kB time=00:00:54.35 bitrate= 487.0kbits/s frame= 1733 fps=171 q=29.0 size= 3348kB time=00:00:56.51 bitrate= 485.3kbits/s frame= 1792 fps=169 q=29.0 size= 3459kB time=00:00:58.28 bitrate= 486.2kbits/s frame= 1851 fps=166 q=29.0 size= 3588kB time=00:01:00.32 bitrate= 487.2kbits/s frame= 1910 fps=164 q=29.0 size= 3716kB time=00:01:02.36 bitrate= 488.1kbits/s frame= 1972 fps=162 q=29.0 size= 3833kB time=00:01:04.45 bitrate= 487.1kbits/s frame= 2032 fps=161 q=29.0 size= 3946kB time=00:01:06.40 bitrate= 486.8kbits/s frame= 2091 fps=159 q=29.0 size= 4080kB time=00:01:08.35 bitrate= 488.9kbits/s frame= 2150 fps=158 q=29.0 size= 4201kB time=00:01:10.54 bitrate= 487.9kbits/s frame= 2206 fps=156 q=29.0 size= 4315kB time=00:01:12.39 bitrate= 488.3kbits/s frame= 2263 fps=154 q=29.0 size= 4438kB time=00:01:14.21 bitrate= 489.9kbits/s frame= 2327 fps=154 q=29.0 size= 4567kB time=00:01:16.16 bitrate= 491.2kbits/s frame= 2388 fps=152 q=29.0 size= 4666kB time=00:01:18.48 bitrate= 487.0kbits/s frame= 2450 fps=152 q=29.0 size= 4776kB time=00:01:20.24 bitrate= 487.6kbits/s frame= 2511 fps=151 q=29.0 size= 4890kB time=00:01:22.15 bitrate= 487.6kbits/s frame= 2575 fps=150 q=29.0 size= 5015kB time=00:01:24.42 bitrate= 486.6kbits/s frame= 2635 fps=149 q=29.0 size= 5130kB time=00:01:26.62 bitrate= 485.2kbits/s frame= 2695 fps=148 q=29.0 size= 5258kB time=00:01:28.65 bitrate= 485.9kbits/s frame= 2758 fps=147 q=29.0 size= 5382kB time=00:01:30.64 bitrate= 486.4kbits/s frame= 2816 fps=147 q=29.0 size= 5521kB time=00:01:32.69 bitrate= 487.9kbits/s get_buffer() failed Error while decoding stream #0:0: Invalid argument frame= 2848 fps=143 q=-1.0 Lsize= 5787kB time=00:01:35.10 bitrate= 498.4kbits/s video:5099kB audio:581kB subtitle:0 global headers:0kB muxing overhead 1.884230% [libx264 @ 0x16c3000] frame I:12 Avg QP:22.64 size: 12092 [libx264 @ 0x16c3000] frame P:1508 Avg QP:25.39 size: 2933 [libx264 @ 0x16c3000] frame B:1328 Avg QP:30.62 size: 491 [libx264 @ 0x16c3000] 
consecutive B-frames: 10.0% 80.8% 8.1% 1.1% [libx264 @ 0x16c3000] mb I I16..4: 1.8% 72.1% 26.0% [libx264 @ 0x16c3000] mb P I16..4: 0.4% 2.4% 0.3% P16..4: 48.3% 19.6% 19.3% 0.0% 0.0% skip: 9.5% [libx264 @ 0x16c3000] mb B I16..4: 0.1% 0.2% 0.0% B16..8: 52.6% 6.6% 2.3% direct: 1.4% skip:36.8% L0:48.8% L1:42.5% BI: 8.7% [libx264 @ 0x16c3000] 8x8 transform intra:75.3% inter:55.4% [libx264 @ 0x16c3000] coded y,uvDC,uvAC intra: 77.9% 81.7% 33.1% inter: 24.2% 11.6% 1.1% [libx264 @ 0x16c3000] i16 v,h,dc,p: 25% 16% 44% 14% [libx264 @ 0x16c3000] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 19% 15% 29% 6% 5% 6% 6% 7% 7% [libx264 @ 0x16c3000] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 20% 15% 17% 7% 9% 8% 9% 7% 7% [libx264 @ 0x16c3000] i8c dc,h,v,p: 50% 19% 24% 7% [libx264 @ 0x16c3000] Weighted P-Frames: Y:3.8% UV:1.1% [libx264 @ 0x16c3000] ref P L0: 75.6% 19.1% 4.2% 1.0% 0.1% [libx264 @ 0x16c3000] ref B L0: 98.1% 1.9% 0.0% [libx264 @ 0x16c3000] ref B L1: 98.9% 1.1% [libx264 @ 0x16c3000] kb/s:439.47 FFMPEG Configuration: --enable-version3 --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libvpx --enable-libfaac --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-gpl --enable-postproc --enable-nonfree libavutil 51. 73.101 / 51. 73.101 libavcodec 54. 59.100 / 54. 59.100 libavformat 54. 29.104 / 54. 29.104 libavdevice 54. 2.101 / 54. 2.101 libavfilter 3. 17.100 / 3. 17.100 libswscale 2. 1.101 / 2. 1.101 libswresample 0. 15.100 / 0. 15.100 libpostproc 52. 0.100 / 52. 0.100 PROBLEM #1: [libfaac @ 0x16b3600] Que input is backward in time [mp4 @ 0x16bb3a0] st:0 PTS: 6174 DTS: 6174 < 7169 invalid, clipping PROBLEM #2: get_buffer() failed Error while decoding stream #0:0: Invalid argument
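Both messages point in a similar direction: PROBLEM #1 is libfaac and the mp4 muxer complaining about non-monotonic timestamps coming out of the WMV demuxer, and PROBLEM #2 is the decoder giving up on a damaged packet near the very end of the stream (the output file is still written). One commonly suggested variant is to regenerate presentation timestamps, let ffmpeg resync the audio, and replace -sameq (which, despite the name, is not "same quality" and has since been dropped) with an explicit CRF. A sketch only, using the same input and output paths as above:

    ffmpeg -y -fflags +genpts -i /input.wmv \
           -vcodec libx264 -crf 23 -preset medium \
           -acodec libfaac -ac 2 -ar 44100 -async 1 \
           -f mp4 /output.mp4

If the decode error at the same spot persists, the source WMV itself may be truncated or corrupt at that point.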

    Read the article

  • Windows Server 2008 R2 network adapter stops working, requires hard reboot

    - by Geoff Dalgas
    TL;DR version: Turns out this was a Windows Server 2008 R2 kernel networking bug. After siccing Microsoft support on it, we (eventually) got an unpublished kernel hotfix from Microsoft to address it. If you, too, are experiencing mysterious low-level network driver failures requiring a reboot/bluescreen cycle, you might want that hotfix (or maybe Service Pack 1 whenever it is released, too.) We have been using HAProxy along with heartbeat from the Linux-HA project. We are using two linux instances to provide a failover. Each server has with their own public IP and a single IP which is shared between the two using a virtual interface (eth1:1) at IP: 69.59.196.211 The virtual interface (eth1:1) IP 69.59.196.211 is configured as the gateway for the windows servers behind them and we use ip_forwarding to route traffic. We are experiencing an occasional network outage on one of our windows servers behind our linux gateways. HAProxy will detect the server is offline which we can verify by remoting to the failed server and attempting to ping the gateway: Pinging 69.59.196.211 with 32 bytes of data: Reply from 69.59.196.220: Destination host unreachable. Running arp -a on this failed server shows that there is no entry for the gateway address (69.59.196.211): Interface: 69.59.196.220 --- 0xa Internet Address Physical Address Type 69.59.196.161 00-26-88-63-c7-80 dynamic 69.59.196.210 00-15-5d-0a-3e-0e dynamic 69.59.196.212 00-21-5e-4d-45-c9 dynamic 69.59.196.213 00-15-5d-00-b2-0d dynamic 69.59.196.215 00-21-5e-4d-61-1a dynamic 69.59.196.217 00-21-5e-4d-2c-e8 dynamic 69.59.196.219 00-21-5e-4d-38-e5 dynamic 69.59.196.221 00-15-5d-00-b2-0d dynamic 69.59.196.222 00-15-5d-0a-3e-09 dynamic 69.59.196.223 ff-ff-ff-ff-ff-ff static 224.0.0.22 01-00-5e-00-00-16 static 224.0.0.252 01-00-5e-00-00-fc static 225.0.0.1 01-00-5e-00-00-01 static On our linux gateway instances arp -a shows: peak-colo-196-220.peak.org (69.59.196.220) at <incomplete> on eth1 stackoverflow.com (69.59.196.212) at 00:21:5e:4d:45:c9 [ether] on eth1 peak-colo-196-215.peak.org (69.59.196.215) at 00:21:5e:4d:61:1a [ether] on eth1 peak-colo-196-219.peak.org (69.59.196.219) at 00:21:5e:4d:38:e5 [ether] on eth1 peak-colo-196-222.peak.org (69.59.196.222) at 00:15:5d:0a:3e:09 [ether] on eth1 peak-colo-196-209.peak.org (69.59.196.209) at 00:26:88:63:c7:80 [ether] on eth1 peak-colo-196-217.peak.org (69.59.196.217) at 00:21:5e:4d:2c:e8 [ether] on eth1 Why would arp occasionally set the entry for this failed server as <incomplete>? Should we be defining our arp entries statically? I've always left arp alone since it works 99% of the time, but in this one instance it appears to be failing. Are there any additional troubleshooting steps we can take help resolve this issue? THINGS WE HAVE TRIED I added a static arp entry for testing on one of the linux gateways which still didn't help. 
root@haproxy2:~# arp -a peak-colo-196-215.peak.org (69.59.196.215) at 00:21:5e:4d:61:1a [ether] on eth1 peak-colo-196-221.peak.org (69.59.196.221) at 00:15:5d:00:b2:0d [ether] on eth1 stackoverflow.com (69.59.196.212) at 00:21:5e:4d:45:c9 [ether] on eth1 peak-colo-196-219.peak.org (69.59.196.219) at 00:21:5e:4d:38:e5 [ether] on eth1 peak-colo-196-209.peak.org (69.59.196.209) at 00:26:88:63:c7:80 [ether] on eth1 peak-colo-196-217.peak.org (69.59.196.217) at 00:21:5e:4d:2c:e8 [ether] on eth1 peak-colo-196-220.peak.org (69.59.196.220) at 00:21:5e:4d:30:8d [ether] PERM on eth1 root@haproxy2:~# arp -i eth1 -s 69.59.196.220 00:21:5e:4d:30:8d root@haproxy2:~# ping 69.59.196.220 PING 69.59.196.220 (69.59.196.220) 56(84) bytes of data. --- 69.59.196.220 ping statistics --- 7 packets transmitted, 0 received, 100% packet loss, time 6006ms Rebooting the windows web server solves this issue temporarily with no other changes to the network but our experience shows this issue will come back. Swapping network cards and switches I noticed the link light on the port of the switch for the failed windows server was running at 100Mb instead of 1Gb on the failed interface. I moved the cable to several other open ports and the link indicated 100Mb for each port that I tried. I also swapped the cable with the same result. I tried changing the properties of the network card in windows and the server locked up and required a hard reset after clicking apply. This windows server has two physical network interfaces so I have swapped the cables and network settings on the two interfaces to see if the problem follows the interface. If the public interface goes down again we will know that it is not an issue with the network card. (We also tried another switch we have on hand, no change) Changing network hardware driver versions We've had the same problem with the latest Broadcom driver, as well as the built-in driver that ships in Windows Server 2008 R2. Replacing network cables As a last ditch effort we remembered another change that occurred was the replacement of all of the patch cords between our servers / switch. We had purchased two sets, one green of lengths 1ft - 3ft for the private interfaces and another set of red cables for the public interfaces. We swapped out all of the public interface patch cables with a different brand and ran our servers without issue for a full week ... aaaaaand then the problem recurred. Disable checksum offload, remove TProxy We also tried disabling TCP/IP checksum offload in the driver, no change. We're now pulling out TProxy and moving to a more traditional x-forwarded-for network arrangement without any fancy IP address rewriting. We'll see if that helps. Switch Virtualization providers On the off chance this was related to Hyper-V in some way (we do host Linux VMs on it), we switched to VMWare Server. No change. Switch host model We've reached the end of our troubleshooting rope and are now formally involving Microsoft support. They recommended changing the host model: http://en.wikipedia.org/wiki/Host_model http://technet.microsoft.com/en-us/magazine/2007.09.cableguy.aspx We did that, and.. we'll see.
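Since the visible symptom is the gateway's ARP entry for 69.59.196.220 going <incomplete>, one more data point worth collecting while waiting on Microsoft is whether the Windows box ever answers who-has requests at the moment of failure. A small sketch to run on one of the Linux gateways (interface and address are the ones from the question; arping may need to be installed from iputils):

    # passively watch ARP traffic for the failing host; a healthy box answers every who-has
    tcpdump -i eth1 -n arp and host 69.59.196.220

    # actively probe once a minute and keep a log, so the exact time of failure is captured
    while true; do
        date
        arping -I eth1 -c 3 69.59.196.220
        sleep 60
    done | tee -a /var/log/arp-watch.log

If tcpdump shows the requests going out with no replies coming back, the failure is squarely on the Windows side of the wire, which lines up with the kernel hotfix mentioned in the TL;DR.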

    Read the article

  • Introducing the Earthquake Locator – A Bing Maps Silverlight Application, part 1

    - by Bobby Diaz
    Update: Live demo and source code now available!  The recent wave of earthquakes (no pun intended) being reported in the news got me wondering about the frequency and severity of earthquakes around the world. Since I’ve been doing a lot of Silverlight development lately, I decided to scratch my curiosity with a nice little Bing Maps application that will show the location and relative strength of recent seismic activity. Here is a list of technologies this application will utilize, so be sure to have everything downloaded and installed if you plan on following along. Silverlight 3 WCF RIA Services Bing Maps Silverlight Control * Managed Extensibility Framework (optional) MVVM Light Toolkit (optional) log4net (optional) * If you are new to Bing Maps or have not signed up for a Developer Account, you will need to visit www.bingmapsportal.com to request a Bing Maps key for your application. Getting Started We start out by creating a new Silverlight Application called EarthquakeLocator and specify that we want to automatically create the Web Application Project with RIA Services enabled. I cleaned up the web app by removing the Default.aspx and EarthquakeLocatorTestPage.html. Then I renamed the EarthquakeLocatorTestPage.aspx to Default.aspx and set it as my start page. I also set the development server to use a specific port, as shown below. RIA Services Next, I created a Services folder in the EarthquakeLocator.Web project and added a new Domain Service Class called EarthquakeService.cs. This is the RIA Services Domain Service that will provide earthquake data for our client application. I am not using LINQ to SQL or Entity Framework, so I will use the <empty domain service class> option. We will be pulling data from an external Atom feed, but this example could just as easily pull data from a database or another web service. This is an important distinction to point out because each scenario I just mentioned could potentially use a different Domain Service base class (i.e. LinqToSqlDomainService<TDataContext>). Now we can start adding Query methods to our EarthquakeService that pull data from the USGS web site. Here is the complete code for our service class: using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.ServiceModel.Syndication; using System.Web.DomainServices; using System.Web.Ria; using System.Xml; using log4net; using EarthquakeLocator.Web.Model;   namespace EarthquakeLocator.Web.Services {     /// <summary>     /// Provides earthquake data to client applications.     /// </summary>     [EnableClientAccess()]     public class EarthquakeService : DomainService     {         private static readonly ILog log = LogManager.GetLogger(typeof(EarthquakeService));           // USGS Data Feeds: http://earthquake.usgs.gov/earthquakes/catalogs/         private const string FeedForPreviousDay =             "http://earthquake.usgs.gov/earthquakes/catalogs/1day-M2.5.xml";         private const string FeedForPreviousWeek =             "http://earthquake.usgs.gov/earthquakes/catalogs/7day-M2.5.xml";           /// <summary>         /// Gets the earthquake data for the previous week.         
/// </summary>         /// <returns>A queryable collection of <see cref="Earthquake"/> objects.</returns>         public IQueryable<Earthquake> GetEarthquakes()         {             var feed = GetFeed(FeedForPreviousWeek);             var list = new List<Earthquake>();               if ( feed != null )             {                 foreach ( var entry in feed.Items )                 {                     var quake = CreateEarthquake(entry);                     if ( quake != null )                     {                         list.Add(quake);                     }                 }             }               return list.AsQueryable();         }           /// <summary>         /// Creates an <see cref="Earthquake"/> object for each entry in the Atom feed.         /// </summary>         /// <param name="entry">The Atom entry.</param>         /// <returns></returns>         private Earthquake CreateEarthquake(SyndicationItem entry)         {             Earthquake quake = null;             string title = entry.Title.Text;             string summary = entry.Summary.Text;             string point = GetElementValue<String>(entry, "point");             string depth = GetElementValue<String>(entry, "elev");             string utcTime = null;             string localTime = null;             string depthDesc = null;             double? magnitude = null;             double? latitude = null;             double? longitude = null;             double? depthKm = null;               if ( !String.IsNullOrEmpty(title) && title.StartsWith("M") )             {                 title = title.Substring(2, title.IndexOf(',')-3).Trim();                 magnitude = TryParse(title);             }             if ( !String.IsNullOrEmpty(point) )             {                 var values = point.Split(' ');                 if ( values.Length == 2 )                 {                     latitude = TryParse(values[0]);                     longitude = TryParse(values[1]);                 }             }             if ( !String.IsNullOrEmpty(depth) )             {                 depthKm = TryParse(depth);                 if ( depthKm != null )                 {                     depthKm = Math.Round((-1 * depthKm.Value) / 100, 2);                 }             }             if ( !String.IsNullOrEmpty(summary) )             {                 summary = summary.Replace("</p>", "");                 var values = summary.Split(                     new string[] { "<p>" },                     StringSplitOptions.RemoveEmptyEntries);                   if ( values.Length == 3 )                 {                     var times = values[1].Split(                         new string[] { "<br>" },                         StringSplitOptions.RemoveEmptyEntries);                       if ( times.Length > 0 )                     {                         utcTime = times[0];                     }                     if ( times.Length > 1 )                     {                         localTime = times[1];                     }                       depthDesc = values[2];                     depthDesc = "Depth: " + depthDesc.Substring(depthDesc.IndexOf(":") + 2);                 }             }               if ( latitude != null && longitude != null )             {                 quake = new Earthquake()                 {                     Id = entry.Id,                     Title = entry.Title.Text,                     Summary = entry.Summary.Text,                     Date = entry.LastUpdatedTime.DateTime,                     Url = 
entry.Links.Select(l => Path.Combine(l.BaseUri.OriginalString,                         l.Uri.OriginalString)).FirstOrDefault(),                     Age = entry.Categories.Where(c => c.Label == "Age")                         .Select(c => c.Name).FirstOrDefault(),                     Magnitude = magnitude.GetValueOrDefault(),                     Latitude = latitude.GetValueOrDefault(),                     Longitude = longitude.GetValueOrDefault(),                     DepthInKm = depthKm.GetValueOrDefault(),                     DepthDesc = depthDesc,                     UtcTime = utcTime,                     LocalTime = localTime                 };             }               return quake;         }           private T GetElementValue<T>(SyndicationItem entry, String name)         {             var el = entry.ElementExtensions.Where(e => e.OuterName == name).FirstOrDefault();             T value = default(T);               if ( el != null )             {                 value = el.GetObject<T>();             }               return value;         }           private double? TryParse(String value)         {             double d;             if ( Double.TryParse(value, out d) )             {                 return d;             }             return null;         }           /// <summary>         /// Gets the feed at the specified URL.         /// </summary>         /// <param name="url">The URL.</param>         /// <returns>A <see cref="SyndicationFeed"/> object.</returns>         public static SyndicationFeed GetFeed(String url)         {             SyndicationFeed feed = null;               try             {                 log.Debug("Loading RSS feed: " + url);                   using ( var reader = XmlReader.Create(url) )                 {                     feed = SyndicationFeed.Load(reader);                 }             }             catch ( Exception ex )             {                 log.Error("Error occurred while loading RSS feed: " + url, ex);             }               return feed;         }     } }   The only method that will be generated in the client side proxy class, EarthquakeContext, will be the GetEarthquakes() method. The reason being that it is the only public instance method and it returns an IQueryable<Earthquake> collection that can be consumed by the client application. GetEarthquakes() calls the static GetFeed(String) method, which utilizes the built in SyndicationFeed API to load the external data feed. You will need to add a reference to the System.ServiceModel.Web library in order to take advantage of the RSS/Atom reader. The API will also allow you to create your own feeds to serve up in your applications. Model I have also created a Model folder and added a new class, Earthquake.cs. The Earthquake object will hold the various properties returned from the Atom feed. Here is a sample of the code for that class. Notice the [Key] attribute on the Id property, which is required by RIA Services to uniquely identify the entity. using System; using System.Collections.Generic; using System.Linq; using System.Runtime.Serialization; using System.ComponentModel.DataAnnotations;   namespace EarthquakeLocator.Web.Model {     /// <summary>     /// Represents an earthquake occurrence and related information.     /// </summary>     [DataContract]     public class Earthquake     {         /// <summary>         /// Gets or sets the id.         
/// </summary>         /// <value>The id.</value>         [Key]         [DataMember]         public string Id { get; set; }           /// <summary>         /// Gets or sets the title.         /// </summary>         /// <value>The title.</value>         [DataMember]         public string Title { get; set; }           /// <summary>         /// Gets or sets the summary.         /// </summary>         /// <value>The summary.</value>         [DataMember]         public string Summary { get; set; }           // additional properties omitted     } }   View Model The recent trend to use the MVVM pattern for WPF and Silverlight provides a great way to separate the data and behavior logic out of the user interface layer of your client applications. I have chosen to use the MVVM Light Toolkit for the Earthquake Locator, but there are other options out there if you prefer another library. That said, I went ahead and created a ViewModel folder in the Silverlight project and added a EarthquakeViewModel class that derives from ViewModelBase. Here is the code: using System; using System.Collections.ObjectModel; using System.ComponentModel.Composition; using System.ComponentModel.Composition.Hosting; using Microsoft.Maps.MapControl; using GalaSoft.MvvmLight; using EarthquakeLocator.Web.Model; using EarthquakeLocator.Web.Services;   namespace EarthquakeLocator.ViewModel {     /// <summary>     /// Provides data for views displaying earthquake information.     /// </summary>     public class EarthquakeViewModel : ViewModelBase     {         [Import]         public EarthquakeContext Context;           /// <summary>         /// Initializes a new instance of the <see cref="EarthquakeViewModel"/> class.         /// </summary>         public EarthquakeViewModel()         {             var catalog = new AssemblyCatalog(GetType().Assembly);             var container = new CompositionContainer(catalog);             container.ComposeParts(this);             Initialize();         }           /// <summary>         /// Initializes a new instance of the <see cref="EarthquakeViewModel"/> class.         /// </summary>         /// <param name="context">The context.</param>         public EarthquakeViewModel(EarthquakeContext context)         {             Context = context;             Initialize();         }           private void Initialize()         {             MapCenter = new Location(20, -170);             ZoomLevel = 2;         }           #region Private Methods           private void OnAutoLoadDataChanged()         {             LoadEarthquakes();         }           private void LoadEarthquakes()         {             var query = Context.GetEarthquakesQuery();             Context.Earthquakes.Clear();               Context.Load(query, (op) =>             {                 if ( !op.HasError )                 {                     foreach ( var item in op.Entities )                     {                         Earthquakes.Add(item);                     }                 }             }, null);         }           #endregion Private Methods           #region Properties           private bool autoLoadData;         /// <summary>         /// Gets or sets a value indicating whether to auto load data.         
/// </summary>         /// <value><c>true</c> if auto loading data; otherwise, <c>false</c>.</value>         public bool AutoLoadData         {             get { return autoLoadData; }             set             {                 if ( autoLoadData != value )                 {                     autoLoadData = value;                     RaisePropertyChanged("AutoLoadData");                     OnAutoLoadDataChanged();                 }             }         }           private ObservableCollection<Earthquake> earthquakes;         /// <summary>         /// Gets the collection of earthquakes to display.         /// </summary>         /// <value>The collection of earthquakes.</value>         public ObservableCollection<Earthquake> Earthquakes         {             get             {                 if ( earthquakes == null )                 {                     earthquakes = new ObservableCollection<Earthquake>();                 }                   return earthquakes;             }         }           private Location mapCenter;         /// <summary>         /// Gets or sets the map center.         /// </summary>         /// <value>The map center.</value>         public Location MapCenter         {             get { return mapCenter; }             set             {                 if ( mapCenter != value )                 {                     mapCenter = value;                     RaisePropertyChanged("MapCenter");                 }             }         }           private double zoomLevel;         /// <summary>         /// Gets or sets the zoom level.         /// </summary>         /// <value>The zoom level.</value>         public double ZoomLevel         {             get { return zoomLevel; }             set             {                 if ( zoomLevel != value )                 {                     zoomLevel = value;                     RaisePropertyChanged("ZoomLevel");                 }             }         }           #endregion Properties     } }   The EarthquakeViewModel class contains all of the properties that will be bound to by the various controls in our views. Be sure to read through the LoadEarthquakes() method, which handles calling the GetEarthquakes() method in our EarthquakeService via the EarthquakeContext proxy, and also transfers the loaded entities into the view model’s Earthquakes collection. Another thing to notice is what’s going on in the default constructor. I chose to use the Managed Extensibility Framework (MEF) for my composition needs, but you can use any dependency injection library or none at all. To allow the EarthquakeContext class to be discoverable by MEF, I added the following partial class so that I could supply the appropriate [Export] attribute: using System; using System.ComponentModel.Composition;   namespace EarthquakeLocator.Web.Services {     /// <summary>     /// The client side proxy for the EarthquakeService class.     /// </summary>     [Export]     public partial class EarthquakeContext     {     } }   One last piece I wanted to point out before moving on to the user interface, I added a client side partial class for the Earthquake entity that contains helper properties that we will bind to later: using System;   namespace EarthquakeLocator.Web.Model {     /// <summary>     /// Represents an earthquake occurrence and related information.     /// </summary>     public partial class Earthquake     {         /// <summary>         /// Gets the location based on the current Latitude/Longitude.         
/// </summary>         /// <value>The location.</value>         public string Location         {             get { return String.Format("{0},{1}", Latitude, Longitude); }         }           /// <summary>         /// Gets the size based on the Magnitude.         /// </summary>         /// <value>The size.</value>         public double Size         {             get { return (Magnitude * 3); }         }     } }   View Now the fun part! Usually, I would create a Views folder to place all of my View controls in, but I took the easy way out and added the following XAML code to the default MainPage.xaml file. Be sure to add the bing prefix associating the Microsoft.Maps.MapControl namespace after adding the assembly reference to your project. The MVVM Light Toolkit project templates come with a ViewModelLocator class that you can use via a static resource, but I am instantiating the EarthquakeViewModel directly in my user control. I am setting the AutoLoadData property to true as a way to trigger the LoadEarthquakes() method call. The MapItemsControl found within the <bing:Map> control binds its ItemsSource property to the Earthquakes collection of the view model, and since it is an ObservableCollection<T>, we get the automatic two way data binding via the INotifyCollectionChanged interface. <UserControl x:Class="EarthquakeLocator.MainPage"     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"     xmlns:bing="clr-namespace:Microsoft.Maps.MapControl;assembly=Microsoft.Maps.MapControl"     xmlns:vm="clr-namespace:EarthquakeLocator.ViewModel"     mc:Ignorable="d" d:DesignWidth="640" d:DesignHeight="480" >     <UserControl.Resources>         <DataTemplate x:Key="EarthquakeTemplate">             <Ellipse Fill="Red" Stroke="Black" StrokeThickness="1"                      Width="{Binding Size}" Height="{Binding Size}"                      bing:MapLayer.Position="{Binding Location}"                      bing:MapLayer.PositionOrigin="Center">                 <ToolTipService.ToolTip>                     <StackPanel>                         <TextBlock Text="{Binding Title}" FontSize="14" FontWeight="Bold" />                         <TextBlock Text="{Binding UtcTime}" />                         <TextBlock Text="{Binding LocalTime}" />                         <TextBlock Text="{Binding DepthDesc}" />                     </StackPanel>                 </ToolTipService.ToolTip>             </Ellipse>         </DataTemplate>     </UserControl.Resources>       <UserControl.DataContext>         <vm:EarthquakeViewModel AutoLoadData="True" />     </UserControl.DataContext>       <Grid x:Name="LayoutRoot">           <bing:Map x:Name="map" CredentialsProvider="--Your-Bing-Maps-Key--"                   Center="{Binding MapCenter, Mode=TwoWay}"                   ZoomLevel="{Binding ZoomLevel, Mode=TwoWay}">             <bing:MapItemsControl ItemsSource="{Binding Earthquakes}"                                   ItemTemplate="{StaticResource EarthquakeTemplate}" />         </bing:Map>       </Grid> </UserControl>   The EarthquakeTemplate defines the Ellipse that will represent each earthquake, the Width and Height that are determined by the Magnitude, the Position on the map, and also the tooltip that will appear when we mouse over each data point. 
Running the application will give us the following result (shown with a tooltip example): That concludes this portion of our show but I plan on implementing additional functionality in later blog posts. Be sure to come back soon to see the next installments in this series. Enjoy!   Additional Resources USGS Earthquake Data Feeds Brad Abrams shows how RIA Services and MVVM can work together
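Since everything downstream depends on SyndicationFeed being able to read the USGS Atom feed, it can be worth exercising that piece in isolation before wiring up RIA Services and the map. A minimal console sketch using the same API calls the EarthquakeService relies on (the feed URL is the one from the post and may have moved since, so treat it as a placeholder; as above, it needs a reference to System.ServiceModel.Web):

    using System;
    using System.Linq;
    using System.ServiceModel.Syndication;
    using System.Xml;

    class FeedSmokeTest
    {
        static void Main()
        {
            const string url = "http://earthquake.usgs.gov/earthquakes/catalogs/7day-M2.5.xml";

            using (var reader = XmlReader.Create(url))
            {
                var feed = SyndicationFeed.Load(reader);

                // print the title and the georss "point" element of the first few entries
                foreach (var entry in feed.Items.Take(5))
                {
                    var point = entry.ElementExtensions
                                     .Where(e => e.OuterName == "point")
                                     .Select(e => e.GetObject<string>())
                                     .FirstOrDefault();

                    Console.WriteLine("{0}  ->  {1}", entry.Title.Text, point ?? "(no point)");
                }
            }
        }
    }

If this prints titles and coordinates, an empty map is more likely a binding or RIA Services issue than a feed-parsing one.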

    Read the article

  • Camera for 2.5D Game

    - by me--
    I'm hoping someone can explain this to me like I'm 5, because I've been struggling with this for hours and simply cannot understand what I'm doing wrong. I've written a Camera class for my 2.5D game. The intention is to support world and screen spaces like this: The camera is the black thing on the right. The +Z axis is upwards in that image, with -Z heading downwards. As you can see, both world space and screen space have (0, 0) at their top-left. I started writing some unit tests to prove that my camera was working as expected, and that's where things started getting...strange. My tests plot coordinates in world, view, and screen spaces. Eventually I will use image comparison to assert that they are correct, but for now my test just displays the result. The render logic uses Camera.ViewMatrix to transform world space to view space, and Camera.WorldPointToScreen to transform world space to screen space. Here is an example test: [Fact] public void foo() { var camera = new Camera(new Viewport(0, 0, 250, 100)); DrawingVisual worldRender; DrawingVisual viewRender; DrawingVisual screenRender; this.Render(camera, out worldRender, out viewRender, out screenRender, new Vector3(30, 0, 0), new Vector3(30, 40, 0)); this.ShowRenders(camera, worldRender, viewRender, screenRender); } And here's what pops up when I run this test: World space looks OK, although I suspect the z axis is going into the screen instead of towards the viewer. View space has me completely baffled. I was expecting the camera to be sitting above (0, 0) and looking towards the center of the scene. Instead, the z axis seems to be the wrong way around, and the camera is positioned in the opposite corner to what I expect! I suspect screen space will be another thing altogether, but can anyone explain what I'm doing wrong in my Camera class? UPDATE I made some progress in terms of getting things to look visually as I expect, but only through intuition: not an actual understanding of what I'm doing. Any enlightenment would be greatly appreciated. I realized that my view space was flipped both vertically and horizontally compared to what I expected, so I changed my view matrix to scale accordingly: this.viewMatrix = Matrix.CreateLookAt(this.location, this.target, this.up) * Matrix.CreateScale(this.zoom, this.zoom, 1) * Matrix.CreateScale(-1, -1, 1); I could combine the two CreateScale calls, but have left them separate for clarity. Again, I have no idea why this is necessary, but it fixed my view space: But now my screen space needs to be flipped vertically, so I modified my projection matrix accordingly: this.projectionMatrix = Matrix.CreatePerspectiveFieldOfView(0.7853982f, viewport.AspectRatio, 1, 2) * Matrix.CreateScale(1, -1, 1); And this results in what I was expecting from my first attempt: I have also just tried using Camera to render sprites via a SpriteBatch to make sure everything works there too, and it does. But the question remains: why do I need to do all this flipping of axes to get the space coordinates the way I expect? UPDATE 2 I've since improved my rendering logic in my test suite so that it supports geometries and so that lines get lighter the further away they are from the camera. I wanted to do this to avoid optical illusions and to further prove to myself that I'm looking at what I think I am. Here is an example: In this case, I have 3 geometries: a cube, a sphere, and a polyline on the top face of the cube. 
Notice how the darkening and lightening of the lines correctly identifies those portions of the geometries closer to the camera. If I remove the negative scaling I had to put in, I see: So you can see I'm still in the same boat - I still need those vertical and horizontal flips in my matrices to get things to appear correctly. In the interests of giving people a repro to play with, here is the complete code needed to generate the above. If you want to run via the test harness, just install the xunit package: Camera.cs: using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using System.Diagnostics; public sealed class Camera { private readonly Viewport viewport; private readonly Matrix projectionMatrix; private Matrix? viewMatrix; private Vector3 location; private Vector3 target; private Vector3 up; private float zoom; public Camera(Viewport viewport) { this.viewport = viewport; // for an explanation of the negative scaling, see: http://gamedev.stackexchange.com/questions/63409/ this.projectionMatrix = Matrix.CreatePerspectiveFieldOfView(0.7853982f, viewport.AspectRatio, 1, 2) * Matrix.CreateScale(1, -1, 1); // defaults this.location = new Vector3(this.viewport.Width / 2, this.viewport.Height, 100); this.target = new Vector3(this.viewport.Width / 2, this.viewport.Height / 2, 0); this.up = new Vector3(0, 0, 1); this.zoom = 1; } public Viewport Viewport { get { return this.viewport; } } public Vector3 Location { get { return this.location; } set { this.location = value; this.viewMatrix = null; } } public Vector3 Target { get { return this.target; } set { this.target = value; this.viewMatrix = null; } } public Vector3 Up { get { return this.up; } set { this.up = value; this.viewMatrix = null; } } public float Zoom { get { return this.zoom; } set { this.zoom = value; this.viewMatrix = null; } } public Matrix ProjectionMatrix { get { return this.projectionMatrix; } } public Matrix ViewMatrix { get { if (this.viewMatrix == null) { // for an explanation of the negative scaling, see: http://gamedev.stackexchange.com/questions/63409/ this.viewMatrix = Matrix.CreateLookAt(this.location, this.target, this.up) * Matrix.CreateScale(this.zoom) * Matrix.CreateScale(-1, -1, 1); } return this.viewMatrix.Value; } } public Vector2 WorldPointToScreen(Vector3 point) { var result = viewport.Project(point, this.ProjectionMatrix, this.ViewMatrix, Matrix.Identity); return new Vector2(result.X, result.Y); } public void WorldPointsToScreen(Vector3[] points, Vector2[] destination) { Debug.Assert(points != null); Debug.Assert(destination != null); Debug.Assert(points.Length == destination.Length); for (var i = 0; i < points.Length; ++i) { destination[i] = this.WorldPointToScreen(points[i]); } } } CameraFixture.cs: using Microsoft.Xna.Framework.Graphics; using System; using System.Collections.Generic; using System.Linq; using System.Windows; using System.Windows.Controls; using System.Windows.Media; using Xunit; using XNA = Microsoft.Xna.Framework; public sealed class CameraFixture { [Fact] public void foo() { var camera = new Camera(new Viewport(0, 0, 250, 100)); DrawingVisual worldRender; DrawingVisual viewRender; DrawingVisual screenRender; this.Render( camera, out worldRender, out viewRender, out screenRender, new Sphere(30, 15) { WorldMatrix = XNA.Matrix.CreateTranslation(155, 50, 0) }, new Cube(30) { WorldMatrix = XNA.Matrix.CreateTranslation(75, 60, 15) }, new PolyLine(new XNA.Vector3(0, 0, 0), new XNA.Vector3(10, 10, 0), new XNA.Vector3(20, 0, 0), new XNA.Vector3(0, 0, 0)) { WorldMatrix = 
XNA.Matrix.CreateTranslation(65, 55, 30) }); this.ShowRenders(worldRender, viewRender, screenRender); } #region Supporting Fields private static readonly Pen xAxisPen = new Pen(Brushes.Red, 2); private static readonly Pen yAxisPen = new Pen(Brushes.Green, 2); private static readonly Pen zAxisPen = new Pen(Brushes.Blue, 2); private static readonly Pen viewportPen = new Pen(Brushes.Gray, 1); private static readonly Pen nonScreenSpacePen = new Pen(Brushes.Black, 0.5); private static readonly Color geometryBaseColor = Colors.Black; #endregion #region Supporting Methods private void Render(Camera camera, out DrawingVisual worldRender, out DrawingVisual viewRender, out DrawingVisual screenRender, params Geometry[] geometries) { var worldDrawingVisual = new DrawingVisual(); var viewDrawingVisual = new DrawingVisual(); var screenDrawingVisual = new DrawingVisual(); const int axisLength = 15; using (var worldDrawingContext = worldDrawingVisual.RenderOpen()) using (var viewDrawingContext = viewDrawingVisual.RenderOpen()) using (var screenDrawingContext = screenDrawingVisual.RenderOpen()) { // draw lines around the camera's viewport var viewportBounds = camera.Viewport.Bounds; var viewportLines = new Tuple<int, int, int, int>[] { Tuple.Create(viewportBounds.Left, viewportBounds.Bottom, viewportBounds.Left, viewportBounds.Top), Tuple.Create(viewportBounds.Left, viewportBounds.Top, viewportBounds.Right, viewportBounds.Top), Tuple.Create(viewportBounds.Right, viewportBounds.Top, viewportBounds.Right, viewportBounds.Bottom), Tuple.Create(viewportBounds.Right, viewportBounds.Bottom, viewportBounds.Left, viewportBounds.Bottom) }; foreach (var viewportLine in viewportLines) { var viewStart = XNA.Vector3.Transform(new XNA.Vector3(viewportLine.Item1, viewportLine.Item2, 0), camera.ViewMatrix); var viewEnd = XNA.Vector3.Transform(new XNA.Vector3(viewportLine.Item3, viewportLine.Item4, 0), camera.ViewMatrix); var screenStart = camera.WorldPointToScreen(new XNA.Vector3(viewportLine.Item1, viewportLine.Item2, 0)); var screenEnd = camera.WorldPointToScreen(new XNA.Vector3(viewportLine.Item3, viewportLine.Item4, 0)); worldDrawingContext.DrawLine(viewportPen, new Point(viewportLine.Item1, viewportLine.Item2), new Point(viewportLine.Item3, viewportLine.Item4)); viewDrawingContext.DrawLine(viewportPen, new Point(viewStart.X, viewStart.Y), new Point(viewEnd.X, viewEnd.Y)); screenDrawingContext.DrawLine(viewportPen, new Point(screenStart.X, screenStart.Y), new Point(screenEnd.X, screenEnd.Y)); } // draw axes var axisLines = new Tuple<int, int, int, int, int, int, Pen>[] { Tuple.Create(0, 0, 0, axisLength, 0, 0, xAxisPen), Tuple.Create(0, 0, 0, 0, axisLength, 0, yAxisPen), Tuple.Create(0, 0, 0, 0, 0, axisLength, zAxisPen) }; foreach (var axisLine in axisLines) { var viewStart = XNA.Vector3.Transform(new XNA.Vector3(axisLine.Item1, axisLine.Item2, axisLine.Item3), camera.ViewMatrix); var viewEnd = XNA.Vector3.Transform(new XNA.Vector3(axisLine.Item4, axisLine.Item5, axisLine.Item6), camera.ViewMatrix); var screenStart = camera.WorldPointToScreen(new XNA.Vector3(axisLine.Item1, axisLine.Item2, axisLine.Item3)); var screenEnd = camera.WorldPointToScreen(new XNA.Vector3(axisLine.Item4, axisLine.Item5, axisLine.Item6)); worldDrawingContext.DrawLine(axisLine.Item7, new Point(axisLine.Item1, axisLine.Item2), new Point(axisLine.Item4, axisLine.Item5)); viewDrawingContext.DrawLine(axisLine.Item7, new Point(viewStart.X, viewStart.Y), new Point(viewEnd.X, viewEnd.Y)); screenDrawingContext.DrawLine(axisLine.Item7, new 
Point(screenStart.X, screenStart.Y), new Point(screenEnd.X, screenEnd.Y)); } // for all points in all geometries to be rendered, find the closest and furthest away from the camera so we can lighten lines that are further away var distancesToAllGeometrySections = from geometry in geometries let geometryViewMatrix = geometry.WorldMatrix * camera.ViewMatrix from section in geometry.Sections from point in new XNA.Vector3[] { section.Item1, section.Item2 } let viewPoint = XNA.Vector3.Transform(point, geometryViewMatrix) select viewPoint.Length(); var furthestDistance = distancesToAllGeometrySections.Max(); var closestDistance = distancesToAllGeometrySections.Min(); var deltaDistance = Math.Max(0.000001f, furthestDistance - closestDistance); // draw each geometry for (var i = 0; i < geometries.Length; ++i) { var geometry = geometries[i]; // there's probably a more correct name for this, but basically this gets the geometry relative to the camera so we can check how far away each point is from the camera var geometryViewMatrix = geometry.WorldMatrix * camera.ViewMatrix; // we order roughly by those sections furthest from the camera to those closest, so that the closer ones "overwrite" the ones further away var orderedSections = from section in geometry.Sections let startPointRelativeToCamera = XNA.Vector3.Transform(section.Item1, geometryViewMatrix) let endPointRelativeToCamera = XNA.Vector3.Transform(section.Item2, geometryViewMatrix) let startPointDistance = startPointRelativeToCamera.Length() let endPointDistance = endPointRelativeToCamera.Length() orderby (startPointDistance + endPointDistance) descending select new { Section = section, DistanceToStart = startPointDistance, DistanceToEnd = endPointDistance }; foreach (var orderedSection in orderedSections) { var start = XNA.Vector3.Transform(orderedSection.Section.Item1, geometry.WorldMatrix); var end = XNA.Vector3.Transform(orderedSection.Section.Item2, geometry.WorldMatrix); var viewStart = XNA.Vector3.Transform(start, camera.ViewMatrix); var viewEnd = XNA.Vector3.Transform(end, camera.ViewMatrix); worldDrawingContext.DrawLine(nonScreenSpacePen, new Point(start.X, start.Y), new Point(end.X, end.Y)); viewDrawingContext.DrawLine(nonScreenSpacePen, new Point(viewStart.X, viewStart.Y), new Point(viewEnd.X, viewEnd.Y)); // screen rendering is more complicated purely because I wanted geometry to fade the further away it is from the camera // otherwise, it's very hard to tell whether the rendering is actually correct or not var startDistanceRatio = (orderedSection.DistanceToStart - closestDistance) / deltaDistance; var endDistanceRatio = (orderedSection.DistanceToEnd - closestDistance) / deltaDistance; // lerp towards white based on distance from camera, but only to a maximum of 90% var startColor = Lerp(geometryBaseColor, Colors.White, startDistanceRatio * 0.9f); var endColor = Lerp(geometryBaseColor, Colors.White, endDistanceRatio * 0.9f); var screenStart = camera.WorldPointToScreen(start); var screenEnd = camera.WorldPointToScreen(end); var brush = new LinearGradientBrush { StartPoint = new Point(screenStart.X, screenStart.Y), EndPoint = new Point(screenEnd.X, screenEnd.Y), MappingMode = BrushMappingMode.Absolute }; brush.GradientStops.Add(new GradientStop(startColor, 0)); brush.GradientStops.Add(new GradientStop(endColor, 1)); var pen = new Pen(brush, 1); brush.Freeze(); pen.Freeze(); screenDrawingContext.DrawLine(pen, new Point(screenStart.X, screenStart.Y), new Point(screenEnd.X, screenEnd.Y)); } } } worldRender = worldDrawingVisual; 
viewRender = viewDrawingVisual; screenRender = screenDrawingVisual; } private static float Lerp(float start, float end, float amount) { var difference = end - start; var adjusted = difference * amount; return start + adjusted; } private static Color Lerp(Color color, Color to, float amount) { var sr = color.R; var sg = color.G; var sb = color.B; var er = to.R; var eg = to.G; var eb = to.B; var r = (byte)Lerp(sr, er, amount); var g = (byte)Lerp(sg, eg, amount); var b = (byte)Lerp(sb, eb, amount); return Color.FromArgb(255, r, g, b); } private void ShowRenders(DrawingVisual worldRender, DrawingVisual viewRender, DrawingVisual screenRender) { var itemsControl = new ItemsControl(); itemsControl.Items.Add(new HeaderedContentControl { Header = "World", Content = new DrawingVisualHost(worldRender)}); itemsControl.Items.Add(new HeaderedContentControl { Header = "View", Content = new DrawingVisualHost(viewRender) }); itemsControl.Items.Add(new HeaderedContentControl { Header = "Screen", Content = new DrawingVisualHost(screenRender) }); var window = new Window { Title = "Renders", Content = itemsControl, ShowInTaskbar = true, SizeToContent = SizeToContent.WidthAndHeight }; window.ShowDialog(); } #endregion #region Supporting Types // stupidly simple 3D geometry class, consisting of a series of sections that will be connected by lines private abstract class Geometry { public abstract IEnumerable<Tuple<XNA.Vector3, XNA.Vector3>> Sections { get; } public XNA.Matrix WorldMatrix { get; set; } } private sealed class Line : Geometry { private readonly XNA.Vector3 magnitude; public Line(XNA.Vector3 magnitude) { this.magnitude = magnitude; } public override IEnumerable<Tuple<XNA.Vector3, XNA.Vector3>> Sections { get { yield return Tuple.Create(XNA.Vector3.Zero, this.magnitude); } } } private sealed class PolyLine : Geometry { private readonly XNA.Vector3[] points; public PolyLine(params XNA.Vector3[] points) { this.points = points; } public override IEnumerable<Tuple<XNA.Vector3, XNA.Vector3>> Sections { get { if (this.points.Length < 2) { yield break; } var end = this.points[0]; for (var i = 1; i < this.points.Length; ++i) { var start = end; end = this.points[i]; yield return Tuple.Create(start, end); } } } } private sealed class Cube : Geometry { private readonly float size; public Cube(float size) { this.size = size; } public override IEnumerable<Tuple<XNA.Vector3, XNA.Vector3>> Sections { get { var halfSize = this.size / 2; var frontBottomLeft = new XNA.Vector3(-halfSize, halfSize, -halfSize); var frontBottomRight = new XNA.Vector3(halfSize, halfSize, -halfSize); var frontTopLeft = new XNA.Vector3(-halfSize, halfSize, halfSize); var frontTopRight = new XNA.Vector3(halfSize, halfSize, halfSize); var backBottomLeft = new XNA.Vector3(-halfSize, -halfSize, -halfSize); var backBottomRight = new XNA.Vector3(halfSize, -halfSize, -halfSize); var backTopLeft = new XNA.Vector3(-halfSize, -halfSize, halfSize); var backTopRight = new XNA.Vector3(halfSize, -halfSize, halfSize); // front face yield return Tuple.Create(frontBottomLeft, frontBottomRight); yield return Tuple.Create(frontBottomLeft, frontTopLeft); yield return Tuple.Create(frontTopLeft, frontTopRight); yield return Tuple.Create(frontTopRight, frontBottomRight); // left face yield return Tuple.Create(frontTopLeft, backTopLeft); yield return Tuple.Create(backTopLeft, backBottomLeft); yield return Tuple.Create(backBottomLeft, frontBottomLeft); // right face yield return Tuple.Create(frontTopRight, backTopRight); yield return Tuple.Create(backTopRight, 
backBottomRight); yield return Tuple.Create(backBottomRight, frontBottomRight); // back face yield return Tuple.Create(backBottomLeft, backBottomRight); yield return Tuple.Create(backTopLeft, backTopRight); } } } private sealed class Sphere : Geometry { private readonly float radius; private readonly int subsections; public Sphere(float radius, int subsections) { this.radius = radius; this.subsections = subsections; } public override IEnumerable<Tuple<XNA.Vector3, XNA.Vector3>> Sections { get { var latitudeLines = this.subsections; var longitudeLines = this.subsections; // see http://stackoverflow.com/a/4082020/5380 var results = from latitudeLine in Enumerable.Range(0, latitudeLines) from longitudeLine in Enumerable.Range(0, longitudeLines) let latitudeRatio = latitudeLine / (float)latitudeLines let longitudeRatio = longitudeLine / (float)longitudeLines let nextLatitudeRatio = (latitudeLine + 1) / (float)latitudeLines let nextLongitudeRatio = (longitudeLine + 1) / (float)longitudeLines let z1 = Math.Cos(Math.PI * latitudeRatio) let z2 = Math.Cos(Math.PI * nextLatitudeRatio) let x1 = Math.Sin(Math.PI * latitudeRatio) * Math.Cos(Math.PI * 2 * longitudeRatio) let y1 = Math.Sin(Math.PI * latitudeRatio) * Math.Sin(Math.PI * 2 * longitudeRatio) let x2 = Math.Sin(Math.PI * nextLatitudeRatio) * Math.Cos(Math.PI * 2 * longitudeRatio) let y2 = Math.Sin(Math.PI * nextLatitudeRatio) * Math.Sin(Math.PI * 2 * longitudeRatio) let x3 = Math.Sin(Math.PI * latitudeRatio) * Math.Cos(Math.PI * 2 * nextLongitudeRatio) let y3 = Math.Sin(Math.PI * latitudeRatio) * Math.Sin(Math.PI * 2 * nextLongitudeRatio) let start = new XNA.Vector3((float)x1 * radius, (float)y1 * radius, (float)z1 * radius) let firstEnd = new XNA.Vector3((float)x2 * radius, (float)y2 * radius, (float)z2 * radius) let secondEnd = new XNA.Vector3((float)x3 * radius, (float)y3 * radius, (float)z1 * radius) select new { First = Tuple.Create(start, firstEnd), Second = Tuple.Create(start, secondEnd) }; foreach (var result in results) { yield return result.First; yield return result.Second; } } } } #endregion }

    Read the article

  • Creating Custom Ajax Control Toolkit Controls

    - by Stephen Walther
    The goal of this blog entry is to explain how you can extend the Ajax Control Toolkit with custom Ajax Control Toolkit controls. I describe how you can create the two halves of an Ajax Control Toolkit control: the server-side control extender and the client-side control behavior. Finally, I explain how you can use the new Ajax Control Toolkit control in a Web Forms page. At the end of this blog entry, there is a link to download a Visual Studio 2010 solution which contains the code for two Ajax Control Toolkit controls: SampleExtender and PopupHelpExtender. The SampleExtender contains the minimum skeleton for creating a new Ajax Control Toolkit control. You can use the SampleExtender as a starting point for your custom Ajax Control Toolkit controls. The PopupHelpExtender control is a super simple custom Ajax Control Toolkit control. This control extender displays a help message when you start typing into a TextBox control. The animated GIF below demonstrates what happens when you click into a TextBox which has been extended with the PopupHelp extender. Here’s a sample of a Web Forms page which uses the control: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="ShowPopupHelp.aspx.cs" Inherits="MyACTControls.Web.Default" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html > <head runat="server"> <title>Show Popup Help</title> </head> <body> <form id="form1" runat="server"> <div> <act:ToolkitScriptManager ID="tsm" runat="server" /> <%-- Social Security Number --%> <asp:Label ID="lblSSN" Text="SSN:" AssociatedControlID="txtSSN" runat="server" /> <asp:TextBox ID="txtSSN" runat="server" /> <act:PopupHelpExtender id="ph1" TargetControlID="txtSSN" HelpText="Please enter your social security number." runat="server" /> <%-- Social Security Number --%> <asp:Label ID="lblPhone" Text="Phone Number:" AssociatedControlID="txtPhone" runat="server" /> <asp:TextBox ID="txtPhone" runat="server" /> <act:PopupHelpExtender id="ph2" TargetControlID="txtPhone" HelpText="Please enter your phone number." runat="server" /> </div> </form> </body> </html> In the page above, the PopupHelp extender is used to extend the functionality of the two TextBox controls. When focus is given to a TextBox control, the popup help message is displayed. An Ajax Control Toolkit control extender consists of two parts: a server-side control extender and a client-side behavior. For example, the PopupHelp extender consists of a server-side PopupHelpExtender control (PopupHelpExtender.cs) and a client-side PopupHelp behavior JavaScript script (PopupHelpBehavior.js). Over the course of this blog entry, I describe how you can create both the server-side extender and the client-side behavior. Writing the Server-Side Code Creating a Control Extender You create a control extender by creating a class that inherits from the abstract ExtenderControlBase class. For example, the PopupHelpExtender control is declared like this: public class PopupHelpExtender: ExtenderControlBase { } The ExtenderControlBase class is part of the Ajax Control Toolkit. This base class contains all of the common server properties and methods of every Ajax Control Toolkit extender control. The ExtenderControlBase class inherits from the ExtenderControl class. The ExtenderControl class is a standard class in the ASP.NET framework located in the System.Web.UI namespace. This class is responsible for generating a client-side behavior. 
The class generates a call to the Microsoft Ajax Library $create() method which looks like this: <script type="text/javascript"> $create(MyACTControls.PopupHelpBehavior, {"HelpText":"Please enter your social security number.","id":"ph1"}, null, null, $get("txtSSN")); }); </script> The JavaScript $create() method is part of the Microsoft Ajax Library. The reference for this method can be found here: http://msdn.microsoft.com/en-us/library/bb397487.aspx This method accepts the following parameters: type – The type of client behavior to create. The $create() method above creates a client PopupHelpBehavior. Properties – Enables you to pass initial values for the properties of the client behavior. For example, the initial value of the HelpText property. This is how server property values are passed to the client. Events – Enables you to pass client-side event handlers to the client behavior. References – Enables you to pass references to other client components. Element – The DOM element associated with the client behavior. This will be the DOM element associated with the control being extended such as the txtSSN TextBox. The $create() method is generated for you automatically. You just need to focus on writing the server-side control extender class. Specifying the Target Control All Ajax Control Toolkit extenders inherit a TargetControlID property from the ExtenderControlBase class. This property, the TargetControlID property, points at the control that the extender control extends. For example, the Ajax Control Toolkit TextBoxWatermark control extends a TextBox, the ConfirmButton control extends a Button, and the Calendar control extends a TextBox. You must indicate the type of control which your extender is extending. You indicate the type of control by adding a [TargetControlType] attribute to your control. For example, the PopupHelp extender is declared like this: [TargetControlType(typeof(TextBox))] public class PopupHelpExtender: ExtenderControlBase { } The PopupHelp extender can be used to extend a TextBox control. If you try to use the PopupHelp extender with another type of control then an exception is thrown. If you want to create an extender control which can be used with any type of ASP.NET control (Button, DataView, TextBox or whatever) then use the following attribute: [TargetControlType(typeof(Control))] Decorating Properties with Attributes If you decorate a server-side property with the [ExtenderControlProperty] attribute then the value of the property gets passed to the control’s client-side behavior. The value of the property gets passed to the client through the $create() method discussed above. The PopupHelp control contains the following HelpText property: [ExtenderControlProperty] [RequiredProperty] public string HelpText { get { return GetPropertyValue("HelpText", "Help Text"); } set { SetPropertyValue("HelpText", value); } } The HelpText property determines the help text which pops up when you start typing into a TextBox control. Because the HelpText property is decorated with the [ExtenderControlProperty] attribute, any value assigned to this property on the server is passed to the client automatically. For example, if you declare the PopupHelp extender in a Web Form page like this: <asp:TextBox ID="txtSSN" runat="server" /> <act:PopupHelpExtender id="ph1" TargetControlID="txtSSN" HelpText="Please enter your social security number." 
runat="server" />   Then the PopupHelpExtender renders the call to the the following Microsoft Ajax Library $create() method: $create(MyACTControls.PopupHelpBehavior, {"HelpText":"Please enter your social security number.","id":"ph1"}, null, null, $get("txtSSN")); You can see this call to the JavaScript $create() method by selecting View Source in your browser. This call to the $create() method calls a method named set_HelpText() automatically and passes the value “Please enter your social security number”. There are several attributes which you can use to decorate server-side properties including: ExtenderControlProperty – When a property is marked with this attribute, the value of the property is passed to the client automatically. ExtenderControlEvent – When a property is marked with this attribute, the property represents a client event handler. Required – When a value is not assigned to this property on the server, an error is displayed. DefaultValue – The default value of the property passed to the client. ClientPropertyName – The name of the corresponding property in the JavaScript behavior. For example, the server-side property is named ID (uppercase) and the client-side property is named id (lower-case). IDReferenceProperty – Applied to properties which refer to the IDs of other controls. URLProperty – Calls ResolveClientURL() to convert from a server-side URL to a URL which can be used on the client. ElementReference – Returns a reference to a DOM element by performing a client $get(). The WebResource, ClientResource, and the RequiredScript Attributes The PopupHelp extender uses three embedded resources named PopupHelpBehavior.js, PopupHelpBehavior.debug.js, and PopupHelpBehavior.css. The first two files are JavaScript files and the final file is a Cascading Style sheet file. These files are compiled as embedded resources. You don’t need to mark them as embedded resources in your Visual Studio solution because they get added to the assembly when the assembly is compiled by a build task. You can see that these files get embedded into the MyACTControls assembly by using Red Gate’s .NET Reflector tool: In order to use these files with the PopupHelp extender, you need to work with both the WebResource and the ClientScriptResource attributes. The PopupHelp extender includes the following three WebResource attributes. [assembly: WebResource("PopupHelp.PopupHelpBehavior.js", "text/javascript")] [assembly: WebResource("PopupHelp.PopupHelpBehavior.debug.js", "text/javascript")] [assembly: WebResource("PopupHelp.PopupHelpBehavior.css", "text/css", PerformSubstitution = true)] These WebResource attributes expose the embedded resource from the assembly so that they can be accessed by using the ScriptResource.axd or WebResource.axd handlers. The first parameter passed to the WebResource attribute is the name of the embedded resource and the second parameter is the content type of the embedded resource. The PopupHelp extender also includes the following ClientScriptResource and ClientCssResource attributes: [ClientScriptResource("MyACTControls.PopupHelpBehavior", "PopupHelp.PopupHelpBehavior.js")] [ClientCssResource("PopupHelp.PopupHelpBehavior.css")] Including these attributes causes the PopupHelp extender to request these resources when you add the PopupHelp extender to a page. 
If you open View Source in a browser which uses the PopupHelp extender then you will see the following link for the Cascading Style Sheet file: <link href="/WebResource.axd?d=0uONMsWXUuEDG-pbJHAC1kuKiIMteQFkYLmZdkgv7X54TObqYoqVzU4mxvaa4zpn5H9ch0RDwRYKwtO8zM5mKgO6C4WbrbkWWidKR07LD1d4n4i_uNB1mHEvXdZu2Ae5mDdVNDV53znnBojzCzwvSw2&amp;t=634417392021676003" type="text/css" rel="stylesheet" /> You also will see the following script include for the JavaScript file: <script src="/ScriptResource.axd?d=pIS7xcGaqvNLFBvExMBQSp_0xR3mpDfS0QVmmyu1aqDUjF06TrW1jVDyXNDMtBHxpRggLYDvgFTWOsrszflZEDqAcQCg-hDXjun7ON0Ol7EXPQIdOe1GLMceIDv3OeX658-tTq2LGdwXhC1-dE7_6g2&amp;t=ffffffff88a33b59" type="text/javascript"></script> The JavaScrpt file returned by this request to ScriptResource.axd contains the combined scripts for any and all Ajax Control Toolkit controls in a page. By default, the Ajax Control Toolkit combines all of the JavaScript files required by a page into a single JavaScript file. Combining files in this way really speeds up how quickly all of the JavaScript files get delivered from the web server to the browser. So, by default, there will be only one ScriptResource.axd include for all of the JavaScript files required by a page. If you want to disable Script Combining, and create separate links, then disable Script Combining like this: <act:ToolkitScriptManager ID="tsm" runat="server" CombineScripts="false" /> There is one more important attribute used by Ajax Control Toolkit extenders. The PopupHelp behavior uses the following two RequirdScript attributes to load the JavaScript files which are required by the PopupHelp behavior: [RequiredScript(typeof(CommonToolkitScripts), 0)] [RequiredScript(typeof(PopupExtender), 1)] The first parameter of the RequiredScript attribute represents either the string name of a JavaScript file or the type of an Ajax Control Toolkit control. The second parameter represents the order in which the JavaScript files are loaded (This second parameter is needed because .NET attributes are intrinsically unordered). In this case, the RequiredScript attribute will load the JavaScript files associated with the CommonToolkitScripts type and the JavaScript files associated with the PopupExtender in that order. The PopupHelp behavior depends on these JavaScript files. Writing the Client-Side Code The PopupHelp extender uses a client-side behavior written with the Microsoft Ajax Library. 
Here is the complete code for the client-side behavior: (function () { // The unique name of the script registered with the // client script loader var scriptName = "PopupHelpBehavior"; function execute() { Type.registerNamespace('MyACTControls'); MyACTControls.PopupHelpBehavior = function (element) { /// <summary> /// A behavior which displays popup help for a textbox /// </summmary> /// <param name="element" type="Sys.UI.DomElement">The element to attach to</param> MyACTControls.PopupHelpBehavior.initializeBase(this, [element]); this._textbox = Sys.Extended.UI.TextBoxWrapper.get_Wrapper(element); this._cssClass = "ajax__popupHelp"; this._popupBehavior = null; this._popupPosition = Sys.Extended.UI.PositioningMode.BottomLeft; this._popupDiv = null; this._helpText = "Help Text"; this._element$delegates = { focus: Function.createDelegate(this, this._element_onfocus), blur: Function.createDelegate(this, this._element_onblur) }; } MyACTControls.PopupHelpBehavior.prototype = { initialize: function () { MyACTControls.PopupHelpBehavior.callBaseMethod(this, 'initialize'); // Add event handlers for focus and blur var element = this.get_element(); $addHandlers(element, this._element$delegates); }, _ensurePopup: function () { if (!this._popupDiv) { var element = this.get_element(); var id = this.get_id(); this._popupDiv = $common.createElementFromTemplate({ nodeName: "div", properties: { id: id + "_popupDiv" }, cssClasses: ["ajax__popupHelp"] }, element.parentNode); this._popupBehavior = new $create(Sys.Extended.UI.PopupBehavior, { parentElement: element }, {}, {}, this._popupDiv); this._popupBehavior.set_positioningMode(this._popupPosition); } }, get_HelpText: function () { return this._helpText; }, set_HelpText: function (value) { if (this._HelpText != value) { this._helpText = value; this._ensurePopup(); this._popupDiv.innerHTML = value; this.raisePropertyChanged("Text") } }, _element_onfocus: function (e) { this.show(); }, _element_onblur: function (e) { this.hide(); }, show: function () { this._popupBehavior.show(); }, hide: function () { if (this._popupBehavior) { this._popupBehavior.hide(); } }, dispose: function() { var element = this.get_element(); $clearHandlers(element); if (this._popupBehavior) { this._popupBehavior.dispose(); this._popupBehavior = null; } } }; MyACTControls.PopupHelpBehavior.registerClass('MyACTControls.PopupHelpBehavior', Sys.Extended.UI.BehaviorBase); Sys.registerComponent(MyACTControls.PopupHelpBehavior, { name: "popupHelp" }); } // execute if (window.Sys && Sys.loader) { Sys.loader.registerScript(scriptName, ["ExtendedBase", "ExtendedCommon"], execute); } else { execute(); } })();   In the following sections, we’ll discuss how this client-side behavior works. Wrapping the Behavior for the Script Loader The behavior is wrapped with the following script: (function () { // The unique name of the script registered with the // client script loader var scriptName = "PopupHelpBehavior"; function execute() { // Behavior Content } // execute if (window.Sys && Sys.loader) { Sys.loader.registerScript(scriptName, ["ExtendedBase", "ExtendedCommon"], execute); } else { execute(); } })(); This code is required by the Microsoft Ajax Library Script Loader. You need this code if you plan to use a behavior directly from client-side code and you want to use the Script Loader. If you plan to only use your code in the context of the Ajax Control Toolkit then you can leave out this code. 
Registering a JavaScript Namespace The PopupHelp behavior is declared within a namespace named MyACTControls. In the code above, this namespace is created with the following registerNamespace() method: Type.registerNamespace('MyACTControls'); JavaScript does not have any built-in way of creating namespaces to prevent naming conflicts. The Microsoft Ajax Library extends JavaScript with support for namespaces. You can learn more about the registerNamespace() method here: http://msdn.microsoft.com/en-us/library/bb397723.aspx Creating the Behavior The actual Popup behavior is created with the following code. MyACTControls.PopupHelpBehavior = function (element) { /// <summary> /// A behavior which displays popup help for a textbox /// </summmary> /// <param name="element" type="Sys.UI.DomElement">The element to attach to</param> MyACTControls.PopupHelpBehavior.initializeBase(this, [element]); this._textbox = Sys.Extended.UI.TextBoxWrapper.get_Wrapper(element); this._cssClass = "ajax__popupHelp"; this._popupBehavior = null; this._popupPosition = Sys.Extended.UI.PositioningMode.BottomLeft; this._popupDiv = null; this._helpText = "Help Text"; this._element$delegates = { focus: Function.createDelegate(this, this._element_onfocus), blur: Function.createDelegate(this, this._element_onblur) }; } MyACTControls.PopupHelpBehavior.prototype = { initialize: function () { MyACTControls.PopupHelpBehavior.callBaseMethod(this, 'initialize'); // Add event handlers for focus and blur var element = this.get_element(); $addHandlers(element, this._element$delegates); }, _ensurePopup: function () { if (!this._popupDiv) { var element = this.get_element(); var id = this.get_id(); this._popupDiv = $common.createElementFromTemplate({ nodeName: "div", properties: { id: id + "_popupDiv" }, cssClasses: ["ajax__popupHelp"] }, element.parentNode); this._popupBehavior = new $create(Sys.Extended.UI.PopupBehavior, { parentElement: element }, {}, {}, this._popupDiv); this._popupBehavior.set_positioningMode(this._popupPosition); } }, get_HelpText: function () { return this._helpText; }, set_HelpText: function (value) { if (this._HelpText != value) { this._helpText = value; this._ensurePopup(); this._popupDiv.innerHTML = value; this.raisePropertyChanged("Text") } }, _element_onfocus: function (e) { this.show(); }, _element_onblur: function (e) { this.hide(); }, show: function () { this._popupBehavior.show(); }, hide: function () { if (this._popupBehavior) { this._popupBehavior.hide(); } }, dispose: function() { var element = this.get_element(); $clearHandlers(element); if (this._popupBehavior) { this._popupBehavior.dispose(); this._popupBehavior = null; } } }; The code above has two parts. The first part of the code is used to define the constructor function for the PopupHelp behavior. This is a factory method which returns an instance of a PopupHelp behavior: MyACTControls.PopupHelpBehavior = function (element) { } The second part of the code modified the prototype for the PopupHelp behavior: MyACTControls.PopupHelpBehavior.prototype = { } Any code which is particular to a single instance of the PopupHelp behavior should be placed in the constructor function. For example, the default value of the _helpText field is assigned in the constructor function: this._helpText = "Help Text"; Any code which is shared among all instances of the PopupHelp behavior should be added to the PopupHelp behavior’s prototype. 
For example, the public HelpText property is added to the prototype: get_HelpText: function () { return this._helpText; }, set_HelpText: function (value) { if (this._HelpText != value) { this._helpText = value; this._ensurePopup(); this._popupDiv.innerHTML = value; this.raisePropertyChanged("Text") } }, Registering a JavaScript Class After you create the PopupHelp behavior, you must register the behavior as a class by using the Microsoft Ajax registerClass() method like this: MyACTControls.PopupHelpBehavior.registerClass('MyACTControls.PopupHelpBehavior', Sys.Extended.UI.BehaviorBase); This call to registerClass() registers PopupHelp behavior as a class which derives from the base Sys.Extended.UI.BehaviorBase class. Like the ExtenderControlBase class on the server side, the BehaviorBase class on the client side contains method used by every behavior. The documentation for the BehaviorBase class can be found here: http://msdn.microsoft.com/en-us/library/bb311020.aspx The most important methods and properties of the BehaviorBase class are the following: dispose() – Use this method to clean up all resources used by your behavior. In the case of the PopupHelp behavior, the dispose() method is used to remote the event handlers created by the behavior and disposed the Popup behavior. get_element() -- Use this property to get the DOM element associated with the behavior. In other words, the DOM element which the behavior extends. get_id() – Use this property to the ID of the current behavior. initialize() – Use this method to initialize the behavior. This method is called after all of the properties are set by the $create() method. Creating Debug and Release Scripts You might have noticed that the PopupHelp behavior uses two scripts named PopupHelpBehavior.js and PopupHelpBehavior.debug.js. However, you never create these two scripts. Instead, you only create a single script named PopupHelpBehavior.pre.js. The pre in PopupHelpBehavior.pre.js stands for preprocessor. When you build the Ajax Control Toolkit (or the sample Visual Studio Solution at the end of this blog entry), a build task named JSBuild generates the PopupHelpBehavior.js release script and PopupHelpBehavior.debug.js debug script automatically. The JSBuild preprocessor supports the following directives: #IF #ELSE #ENDIF #INCLUDE #LOCALIZE #DEFINE #UNDEFINE The preprocessor directives are used to mark code which should only appear in the debug version of the script. The directives are used extensively in the Microsoft Ajax Library. For example, the Microsoft Ajax Library Array.contains() method is created like this: $type.contains = function Array$contains(array, item) { //#if DEBUG var e = Function._validateParams(arguments, [ {name: "array", type: Array, elementMayBeNull: true}, {name: "item", mayBeNull: true} ]); if (e) throw e; //#endif return (indexOf(array, item) >= 0); } Notice that you add each of the preprocessor directives inside a JavaScript comment. The comment prevents Visual Studio from getting confused with its Intellisense. The release version, but not the debug version, of the PopupHelpBehavior script is also minified automatically by the Microsoft Ajax Minifier. The minifier is invoked by a build step in the project file. Conclusion The goal of this blog entry was to explain how you can create custom AJAX Control Toolkit controls. In the first part of this blog entry, you learned how to create the server-side portion of an Ajax Control Toolkit control. 
You learned how to derive a new control from the ExtenderControlBase class and decorate its properties with the necessary attributes. Next, in the second part of this blog entry, you learned how to create the client-side portion of an Ajax Control Toolkit control by creating a client-side behavior with JavaScript. You learned how to use the methods of the Microsoft Ajax Library to extend your client behavior from the BehaviorBase class. Download the Custom ACT Starter Solution
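For quick reference, the server-side half described in this entry boils down to a skeleton along the following lines. This is a condensed sketch of the PopupHelpExtender code shown above, not the downloadable SampleExtender itself; the MyACTControls namespace and the PopupHelp.* resource names come from the article, while the using AjaxControlToolkit directive is my assumption about where ExtenderControlBase and the resource attributes live.

    using System.Web.UI;
    using System.Web.UI.WebControls;
    using AjaxControlToolkit;

    // Expose the embedded JavaScript and CSS so ScriptResource.axd/WebResource.axd can serve them.
    [assembly: WebResource("PopupHelp.PopupHelpBehavior.js", "text/javascript")]
    [assembly: WebResource("PopupHelp.PopupHelpBehavior.css", "text/css", PerformSubstitution = true)]

    namespace MyACTControls
    {
        // Tie the extender to its client-side behavior and stylesheet.
        [ClientScriptResource("MyACTControls.PopupHelpBehavior", "PopupHelp.PopupHelpBehavior.js")]
        [ClientCssResource("PopupHelp.PopupHelpBehavior.css")]
        // Only TextBox controls may be extended.
        [TargetControlType(typeof(TextBox))]
        public class PopupHelpExtender : ExtenderControlBase
        {
            // [ExtenderControlProperty] pushes the value into the generated $create() call,
            // so the client behavior receives it through set_HelpText().
            [ExtenderControlProperty]
            [RequiredProperty]
            public string HelpText
            {
                get { return GetPropertyValue("HelpText", "Help Text"); }
                set { SetPropertyValue("HelpText", value); }
            }
        }
    }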

    Read the article

  • Toorcon14

    - by danx
    Toorcon 2012 Information Security Conference San Diego, CA, http://www.toorcon.org/ Dan Anderson, October 2012 It's almost Halloween, and we all know what that means—yes, of course, it's time for another Toorcon Conference! Toorcon is an annual conference for people interested in computer security. This includes the whole range of hackers, computer hobbyists, professionals, security consultants, press, law enforcement, prosecutors, FBI, etc. We're at Toorcon 14—see earlier blogs for some of the previous Toorcon's I've attended (back to 2003). This year's "con" was held at the Westin on Broadway in downtown San Diego, California. The following are not necessarily my views—I'm just the messenger—although I could have misquoted or misparaphrased the speakers. Also, I only reviewed some of the talks, below, which I attended and interested me. MalAndroid—the Crux of Android Infections, Aditya K. Sood Programming Weird Machines with ELF Metadata, Rebecca "bx" Shapiro Privacy at the Handset: New FCC Rules?, Valkyrie Hacking Measured Boot and UEFI, Dan Griffin You Can't Buy Security: Building the Open Source InfoSec Program, Boris Sverdlik What Journalists Want: The Investigative Reporters' Perspective on Hacking, Dave Maas & Jason Leopold Accessibility and Security, Anna Shubina Stop Patching, for Stronger PCI Compliance, Adam Brand McAfee Secure & Trustmarks — a Hacker's Best Friend, Jay James & Shane MacDougall MalAndroid—the Crux of Android Infections Aditya K. Sood, IOActive, Michigan State PhD candidate Aditya talked about Android smartphone malware. There's a lot of old Android software out there—over 50% Gingerbread (2.3.x)—and most have unpatched vulnerabilities. Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit). Android protection includes sandboxing, security scanner, app permissions, and screened Android app market. The Android permission checker has fine-grain resource control, policy enforcement. Android static analysis also includes a static analysis app checker (bouncer), and a vulnerablity checker. What security problems does Android have? User-centric security, which depends on the user to grant permission and make smart decisions. But users don't care or think about malware (the're not aware, not paranoid). All they want is functionality, extensibility, mobility Android had no "proper" encryption before Android 3.0 No built-in protection against social engineering and web tricks Alternative Android app markets are unsafe. Simply visiting some markets can infect Android Aditya classified Android Malware types as: Type A—Apps. These interact with the Android app framework. For example, a fake Netflix app. Or Android Gold Dream (game), which uploads user files stealthy manner to a remote location. Type K—Kernel. Exploits underlying Linux libraries or kernel Type H—Hybrid. These use multiple layers (app framework, libraries, kernel). These are most commonly used by Android botnets, which are popular with Chinese botnet authors What are the threats from Android malware? These incude leak info (contacts), banking fraud, corporate network attacks, malware advertising, malware "Hackivism" (the promotion of social causes. For example, promiting specific leaders of the Tunisian or Iranian revolutions. Android malware is frequently "masquerated". That is, repackaged inside a legit app with malware. To avoid detection, the hidden malware is not unwrapped until runtime. 
The malware payload can be hidden in, for example, PNG files. Less common are Android bootkits—there's not many around. What they do is hijack the Android init framework—alteering system programs and daemons, then deletes itself. For example, the DKF Bootkit (China). Android App Problems: no code signing! all self-signed native code execution permission sandbox — all or none alternate market places no robust Android malware detection at network level delayed patch process Programming Weird Machines with ELF Metadata Rebecca "bx" Shapiro, Dartmouth College, NH https://github.com/bx/elf-bf-tools @bxsays on twitter Definitions. "ELF" is an executable file format used in linking and loading executables (on UNIX/Linux-class machines). "Weird machine" uses undocumented computation sources (I think of them as unintended virtual machines). Some examples of "weird machines" are those that: return to weird location, does SQL injection, corrupts the heap. Bx then talked about using ELF metadata as (an uintended) "weird machine". Some ELF background: A compiler takes source code and generates a ELF object file (hello.o). A static linker makes an ELF executable from the object file. A runtime linker and loader takes ELF executable and loads and relocates it in memory. The ELF file has symbols to relocate functions and variables. ELF has two relocation tables—one at link time and another one at loading time: .rela.dyn (link time) and .dynsym (dynamic table). GOT: Global Offset Table of addresses for dynamically-linked functions. PLT: Procedure Linkage Tables—works with GOT. The memory layout of a process (not the ELF file) is, in order: program (+ heap), dynamic libraries, libc, ld.so, stack (which includes the dynamic table loaded into memory) For ELF, the "weird machine" is found and exploited in the loader. ELF can be crafted for executing viruses, by tricking runtime into executing interpreted "code" in the ELF symbol table. One can inject parasitic "code" without modifying the actual ELF code portions. Think of the ELF symbol table as an "assembly language" interpreter. It has these elements: instructions: Add, move, jump if not 0 (jnz) Think of symbol table entries as "registers" symbol table value is "contents" immediate values are constants direct values are addresses (e.g., 0xdeadbeef) move instruction: is a relocation table entry add instruction: relocation table "addend" entry jnz instruction: takes multiple relocation table entries The ELF weird machine exploits the loader by relocating relocation table entries. The loader will go on forever until told to stop. It stores state on stack at "end" and uses IFUNC table entries (containing function pointer address). The ELF weird machine, called "Brainfu*k" (BF) has: 8 instructions: pointer inc, dec, inc indirect, dec indirect, jump forward, jump backward, print. Three registers - 3 registers Bx showed example BF source code that implemented a Turing machine printing "hello, world". More interesting was the next demo, where bx modified ping. Ping runs suid as root, but quickly drops privilege. BF modified the loader to disable the library function call dropping privilege, so it remained as root. Then BF modified the ping -t argument to execute the -t filename as root. It's best to show what this modified ping does with an example: $ whoami bx $ ping localhost -t backdoor.sh # executes backdoor $ whoami root $ The modified code increased from 285948 bytes to 290209 bytes. 
A BF tool compiles "executable" by modifying the symbol table in an existing ELF executable. The tool modifies .dynsym and .rela.dyn table, but not code or data. Privacy at the Handset: New FCC Rules? "Valkyrie" (Christie Dudley, Santa Clara Law JD candidate) Valkyrie talked about mobile handset privacy. Some background: Senator Franken (also a comedian) became alarmed about CarrierIQ, where the carriers track their customers. Franken asked the FCC to find out what obligations carriers think they have to protect privacy. The carriers' response was that they are doing just fine with self-regulation—no worries! Carriers need to collect data, such as missed calls, to maintain network quality. But carriers also sell data for marketing. Verizon sells customer data and enables this with a narrow privacy policy (only 1 month to opt out, with difficulties). The data sold is not individually identifiable and is aggregated. But Verizon recommends, as an aggregation workaround to "recollate" data to other databases to identify customers indirectly. The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007. Also, the carriers say mobile phone privacy is a FTC responsibility (not FCC). FTC is trying to improve mobile app privacy, but FTC has no authority over carrier / customer relationships. As a side note, Apple iPhones are unique as carriers have extra control over iPhones they don't have with other smartphones. As a result iPhones may be more regulated. Who are the consumer advocates? Everyone knows EFF, but EPIC (Electrnic Privacy Info Center), although more obsecure, is more relevant. What to do? Carriers must be accountable. Opt-in and opt-out at any time. Carriers need incentive to grant users control for those who want it, by holding them liable and responsible for breeches on their clock. Location information should be added current CPNI privacy protection, and require "Pen/trap" judicial order to obtain (and would still be a lower standard than 4th Amendment). Politics are on a pro-privacy swing now, with many senators and the Whitehouse. There will probably be new regulation soon, and enforcement will be a problem, but consumers will still have some benefit. Hacking Measured Boot and UEFI Dan Griffin, JWSecure, Inc., Seattle, @JWSdan Dan talked about hacking measured UEFI boot. First some terms: UEFI is a boot technology that is replacing BIOS (has whitelisting and blacklisting). UEFI protects devices against rootkits. TPM - hardware security device to store hashs and hardware-protected keys "secure boot" can control at firmware level what boot images can boot "measured boot" OS feature that tracks hashes (from BIOS, boot loader, krnel, early drivers). "remote attestation" allows remote validation and control based on policy on a remote attestation server. Microsoft pushing TPM (Windows 8 required), but Google is not. Intel TianoCore is the only open source for UEFI. Dan has Measured Boot Tool at http://mbt.codeplex.com/ with a demo where you can also view TPM data. TPM support already on enterprise-class machines. UEFI Weaknesses. 
UEFI toolkits are evolving rapidly, but UEFI has weaknesses: assume user is an ally trust TPM implicitly, and attached to computer hibernate file is unprotected (disk encryption protects against this) protection migrating from hardware to firmware delays in patching and whitelist updates will UEFI really be adopted by the mainstream (smartphone hardware support, bank support, apathetic consumer support) You Can't Buy Security: Building the Open Source InfoSec Program Boris Sverdlik, ISDPodcast.com co-host Boris talked about problems typical with current security audits. "IT Security" is an oxymoron—IT exists to enable buiness, uptime, utilization, reporting, but don't care about security—IT has conflict of interest. There's no Magic Bullet ("blinky box"), no one-size-fits-all solution (e.g., Intrusion Detection Systems (IDSs)). Regulations don't make you secure. The cloud is not secure (because of shared data and admin access). Defense and pen testing is not sexy. Auditors are not solution (security not a checklist)—what's needed is experience and adaptability—need soft skills. Step 1: First thing is to Google and learn the company end-to-end before you start. Get to know the management team (not IT team), meet as many people as you can. Don't use arbitrary values such as CISSP scores. Quantitive risk assessment is a myth (e.g. AV*EF-SLE). Learn different Business Units, legal/regulatory obligations, learn the business and where the money is made, verify company is protected from script kiddies (easy), learn sensitive information (IP, internal use only), and start with low-hanging fruit (customer service reps and social engineering). Step 2: Policies. Keep policies short and relevant. Generic SANS "security" boilerplate policies don't make sense and are not followed. Focus on acceptable use, data usage, communications, physical security. Step 3: Implementation: keep it simple stupid. Open source, although useful, is not free (implementation cost). Access controls with authentication & authorization for local and remote access. MS Windows has it, otherwise use OpenLDAP, OpenIAM, etc. Application security Everyone tries to reinvent the wheel—use existing static analysis tools. Review high-risk apps and major revisions. Don't run different risk level apps on same system. Assume host/client compromised and use app-level security control. Network security VLAN != segregated because there's too many workarounds. Use explicit firwall rules, active and passive network monitoring (snort is free), disallow end user access to production environment, have a proxy instead of direct Internet access. Also, SSL certificates are not good two-factor auth and SSL does not mean "safe." Operational Controls Have change, patch, asset, & vulnerability management (OSSI is free). For change management, always review code before pushing to production For logging, have centralized security logging for business-critical systems, separate security logging from administrative/IT logging, and lock down log (as it has everything). Monitor with OSSIM (open source). Use intrusion detection, but not just to fulfill a checkbox: build rules from a whitelist perspective (snort). OSSEC has 95% of what you need. Vulnerability management is a QA function when done right: OpenVas and Seccubus are free. Security awareness The reality is users will always click everything. Build real awareness, not compliance driven checkbox, and have it integrated into the culture. 
Pen test by crowd sourcing—test with logging. COSSP http://www.cossp.org/ - Comprehensive Open Source Security Project

What Journalists Want: The Investigative Reporters' Perspective on Hacking Dave Maas, San Diego CityBeat Jason Leopold, Truthout.org The difference between hackers and investigative journalists: For hackers, the motivation varies, but the method is the same, with technological specialties. For investigative journalists, it's about one thing—The Story, and they need broad info-gathering skills. J-School in 60 Seconds: Generic formula: Person or issue of public interest, new info, or angle. Generic criteria: proximity, prominence, timeliness, human interest, oddity, or consequence. Media awareness of hackers and trends: journalists becoming extremely aware of hackers with congressional debates (privacy, data breaches), demand for data-mining journalists, use of coding and web development for journalists, and journalists busted for hacking (Murdoch). Info gathering by investigative journalists includes public records laws. The Federal Freedom of Information Act (FOIA) is good, but slow. The California Public Records Act is a lot stronger. FOIA takes forever because of foot-dragging—it helps to be specific. Often need to sue (especially FBI). CPRA is faster, and requests can be vague. Dumps and leaks (a la Wikileaks) Journalists want: leads, protecting ourselves, our sources, and adapting tools for news gathering (Google hacking). Anonymity is important to whistleblowers. They want no digital footprint left behind (e.g., email, web log). They don't trust encryption, want to feel safe and secure. Whistleblower laws are very weak—there's no upside for whistleblowers—they have to be very passionate to do it.

Accessibility and Security or: How I Learned to Stop Worrying and Love the Halting Problem Anna Shubina, Dartmouth College Anna talked about how accessibility and security are related. Accessibility of digital content (not real-world accessibility) mostly refers to blind users and screen readers, for our purpose. Accessibility is about parsing documents, as are many security issues. "Rich" executable content causes accessibility to fail, and often causes security to fail. For example, MS Word has an executable format—it's not a document exchange format—more dangerous than PDF or HTML. Accessibility is often the first and maybe only sanity check with parsing. They have no choice because someone may want to read what you write. Google, for example, is very particular about the web browser you use and is bad at supporting other browsers. Uses JavaScript instead of links, often requiring mouseover to display content. PDF is a security nightmare. Executable format, embedded Flash, JavaScript, etc. 15 million lines of code. Google Chrome doesn't handle PDF correctly, causing several security bugs. PDF has an accessibility checker and PDF tagging, to help with accessibility. But no PDF checker checks for incorrect tags or untagged content, or validates lists or tables. None check executable content at all. The "Halting Problem" is: can one decide whether a program will ever stop? The answer, in general, is no (Rice's theorem). The same holds true for accessibility checkers. Language-theoretic Security says complicated data formats are hard to parse and cannot be solved due to the Halting Problem.
W3C Web Accessibility Guidelines: "Perceivable, Operable, Understandable, Robust" Not much help though, except for "Robust", but here are some gems: * all information should be parsable (paraphrasing) * if not parsable, cannot be converted to alternate formats * maximize compatibility in new document formats Executable webpages are bad for security and accessibility. They say it's for a better web experience. But is it necessary to stuff web pages with JavaScript for a better experience? A good example is The Drudge Report—it has hand-written HTML with no JavaScript, yet drives a lot of web traffic due to good content. A bad example is Google News—hidden scrollbars, guessing user input. Solutions: Accessibility and security problems come from the same source Expose the "better user experience" myth Keep your corner of the Internet parsable Remember the "Halting Problem"—recognize false solutions (checking and verifying tools)

Stop Patching, for Stronger PCI Compliance Adam Brand, protiviti @adamrbrand, http://www.picfun.com/ Adam talked about PCI compliance for retail sales. Take an example: for PCI compliance, 50% of Brian's time (an IT guy), 960 hours/year, was spent patching POSs in 850 restaurants. Often applying some patches makes no sense (like fixing a browser vulnerability on a server). "Scanner worship" is overuse of vulnerability scanners—it gives a warm and fuzzy and it's simple (red or green results—fix reds). Scanners give a false sense of security. In reality, breaches from missing patches are uncommon—more common problems are: default passwords, cleartext authentication, misconfiguration (firewall ports open). Patching Myths: Myth 1: install within 30 days of patch release (but PCI §6.1 allows a "risk-based approach" instead). Myth 2: vendor decides what's critical (also PCI §6.1). But §6.2 requires user ranking of vulnerabilities instead. Myth 3: scan and rescan until it passes. But PCI §11.2.1b says this applies only to high-risk vulnerabilities. Adam says good recommendations come from NIST 800-40. Instead use sane patching and focus on what's really important. From NIST 800-40: Proactive: Use a proactive vulnerability management process: use change control, configuration management, monitor file integrity. Monitor: start with NVD and other vulnerability alerts, not scanner results. Evaluate: public-facing system? workstation? internal server? (risk rank) Decide: on action and timeline Test: pre-test patches (stability, functionality, rollback) for change control Install: notify, change control, tickets

McAfee Secure & Trustmarks — a Hacker's Best Friend Jay James, Shane MacDougall, Tactical Intelligence Inc., Canada "McAfee Secure Trustmark" is a website seal marketed by McAfee. A website gets this badge if it passes their remote scanning. The problem is that removal of the trustmark acts as a flag that you're vulnerable. It is easy to view status changes by viewing the McAfee list on their website or on Google. "Secure TrustGuard" is similar to McAfee. Jay and Shane wrote Perl scripts to gather sites from McAfee and search engines. If a site's certification image changes to a 1x1 pixel image, then it is no longer certified. Their scripts take deltas of scans to see what changed daily. The bottom line is that a change in TrustGuard status is a flag for hackers to attack your site. The entire idea of seals is silly—you're raising a flag saying you're vulnerable.
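The "delta of scans" idea is easy to picture with a small sketch. The following is my illustration, not the speakers' Perl tooling; it assumes you have already scraped the certified-site list on two consecutive days into text files (certified-yesterday.txt and certified-today.txt, one domain per line, both names hypothetical) and simply reports which sites dropped off the list overnight—the exact signal the talk says attackers watch for.

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;

    class TrustmarkDelta
    {
        static void Main()
        {
            // Hypothetical input files produced by an earlier scraping step:
            // one certified domain per line.
            var yesterday = new HashSet<string>(
                File.ReadAllLines("certified-yesterday.txt").Select(d => d.Trim().ToLowerInvariant()));
            var today = new HashSet<string>(
                File.ReadAllLines("certified-today.txt").Select(d => d.Trim().ToLowerInvariant()));

            // Sites listed yesterday but missing today: the seal was pulled,
            // which flags them as interesting targets.
            foreach (var dropped in yesterday.Except(today).OrderBy(d => d))
            {
                Console.WriteLine("seal removed: " + dropped);
            }
        }
    }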

    Read the article

  • Toorcon 15 (2013)

    - by danx
The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo Introduction to Toorcon 15 (2013) A Tale of One Software Bypass of MS Windows 8 Secure Boot Breaching SSL, One Byte at a Time Running at 99%: Surviving an Application DoS Security Response in the Age of Mass Customized Attacks x86 Rewriting: Defeating RoP and other Shinanighans Clowntown Express: interesting bugs and running a bug bounty program Active Fingerprinting of Encrypted VPNs Making Attacks Go Backwards Mask Your Checksums—The Gorry Details Adventures with weird machines thirty years after "Reflections on Trusting Trust"

Introduction to Toorcon 15 (2013) Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here starting in 2003. As always, I've only summarized the talks I attended and that interested me enough to write about them. Be aware that I may have misrepresented the speakers' remarks and that they are not my remarks or opinion, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information.

A Tale of One Software Bypass of MS Windows 8 Secure Boot Andrew Furtak and Oleksandr Bazhaniuk Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak Yuri and Alex talked about UEFI and bootkits and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference. MS Windows 8 Secure Boot Overview UEFI (Unified Extensible Firmware Interface) is the interface between hardware and OS. UEFI is processor and architecture independent. Malware can replace the bootloader (bootx64.efi, bootmgfw.efi). Once replaced, it can modify the kernel. Trivial to replace the bootloader. Today there are many legacy bootkits—UEFI replaces most of them. MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure update (with signed updates). You would think Secure Boot would rely on ROM (such as used for phones), but you can't do that for PCs—PCs use writable memory with signatures. The DXE core verifies the UEFI boot loader(s). The OS Loader (winload.efi, winresume.efi) verifies the OS kernel. A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db) and a "forbidden" database (dbx). X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding/deleting keys (can't be accessed once the OS starts—which uses Run-Time (RT)). The root cert uses RSA-2048 public keys and PKCS#7 format signatures. SecureBoot — enable/disable image signature checks SetupMode — update keys, self-signed keys, and secure boot variables CustomMode — allows updating keys Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation Attacking MS Windows 8 Secure Boot Secure Boot does NOT protect from physical access. Can disable from console. Each BIOS vendor implements Secure Boot differently. There are several platform and BIOS vendors. It becomes a "zoo" of implementations—which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly.
Allow only UEFI firmware signed updates protect UEFI firmware from direct modification in flash memory protect FW update components program SPI controller securely protect secure boot policy settings in nvram protect runtime api disable compatibility support module which allows unsigned legacy Can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash. If PK is not found, FW enters setup mode wich secure boot turned off. Can also exploit TPM in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS though. But they found a bug that they can exploit from User Mode (undisclosed) and demoed the exploit. It loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable. They will disclose this exploit to vendors in the future. Recommendations: allow only signed updates protect UEFI fw in ROM protect EFI variable store in ROM Breaching SSL, One Byte at a Time Yoel Gluck and Angelo Prado Angelo Prado and Yoel Gluck, Salesforce.com CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME requests with every possible character and measures the ciphertext length. Look for the plaintext which compresses the most and looks for the cookie one byte-at-a-time. SSL Compression uses LZ77 to reduce redundancy. Huffman coding replaces common byte sequences with shorter codes. US CERT thinks the SSL compression problem is fixed, but it isn't. They convinced CERT that it wasn't fixed and they issued a CVE. BREACH, breachattrack.com BREACH exploits the SSL response body (Accept-Encoding response, Content-Encoding). It takes advantage of the fact that the response is not compressed. BREACH uses gzip and needs fairly "stable" pages that are static for ~30 seconds. It needs attacker-supplied content (say from a web form or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session's secret key. Can use compression to guess contents one byte at-a-time. For example, "Supersecret SupersecreX" (a wrong guess) compresses 10 bytes, and "Supersecret Supersecret" (a correct guess) compresses 11 bytes, so it can find each character by guessing every character. To start the guess, BREACH needs at least three known initial characters in the response sequence. Compression length then "leaks" information. Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include: lookahead (guess 2 or 3 characters at-a-time instead of 1 character). Expensive rollback to last known conflict check compression ratio can brute-force first 3 "bootstrap" characters, if needed (expensive) block ciphers hide exact plain text length. Solution is to align response in advance to block size Mitigations length: use variable padding secrets: dynamic CSRF tokens per request secret: change over time separate secret to input-less servlets Future work eiter understand DEFLATE/GZIP HTTPS extensions Running at 99%: Surviving an Application DoS Ryan Huber Ryan Huber, Risk I/O Ryan first discussed various ways to do a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets. Or download large files. 
Apache is not well suited to handling a large number of connections, but one can put something in front of it. Can use Apache alternatives, such as nginx. How to identify malicious hosts: short, sudden web requests user-agent is obvious (curl, python) same url requested repeatedly no web page referer (not normal) hidden links: hide a link and see if a bot gets it restricted access if not your geo IP (unless the website is global) missing common headers in request regular timing first seen IP at beginning of attack count requests per host (usually a very large number) Use of captcha can mitigate attacks, but you'll lose a lot of genuine users. Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer Bouncer is software written by Ryan in netflow. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (not nice about it, no proper close connection). The aggregator collects requests and controls your web proxies. Need NTP on the front-end web servers for clean data for use by Bouncer. Bouncer is also useful for a popularity storm ("Slashdotting") and scraper storms. Future features: gzip collection data, documentation, consumer library, multitask, logging destroyed connections. Takeaways: DoS mitigation is easier with a complete picture. Bouncer is designed to make it easier to detect and defend against DoS—not a complete cure.

Security Response in the Age of Mass Customized Attacks Peleus Uhley and Karthik Raman Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/ Peleus and Karthik talked about response to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School. Mass customization is differentiating a product for an individual customer, but at a mass production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world. Or designing your own PC from commodity parts. Exploit kits are another example of mass customization. The kits support multiple browsers and plugins, and allow new modules. Exploit kits are cheap and customizable. Organized gangs use exploit kits. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits. Characteristics of Mass Malware: potent, resilient, relatively low cost Technical characteristics: multiple OS, multiple payloads, multiple scenarios, multiple languages, obfuscation Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now. So the drive with malware is towards mass-customized exploits, to avoid detection. There's plenty of evidence that exploit development has Project Manager bureaucracy. They infer from the malware edicts to: support all versions of Reader support all versions of Windows support all versions of Flash support all browsers write large, complex, difficult-to-maintain code (8750 lines of JavaScript, for example) Exploits have "loose coupling" of multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software. Also allows exploits of more obscure software/OS/browsers and obscure versions. Gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs.
Security Response in the Age of Mass Customized Attacks
Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/

Peleus and Karthik talked about responding to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School. Mass customization is differentiating a product for an individual customer, but at a mass-production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world; or consider designing your own PC from commodity parts. Exploit kits are another example of mass customization. The kits support multiple browsers and plugins and allow new modules; they are cheap and customizable, and organized gangs use them. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits. Characteristics of mass malware: potent, resilient, relatively low cost. Technical characteristics: multiple OSes, multiple payloads, multiple scenarios, multiple languages, obfuscation. Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now, so the drive with malware is towards mass-customized exploits, to avoid detection. There's plenty of evidence that exploit development has Project Manager bureaucracy. From the malware they infer edicts to: support all versions of Reader; support all versions of Windows; support all versions of Flash; support all browsers; write large, complex, difficult-to-maintain code (8750 lines of JavaScript, for example). Exploits have "loose coupling" across multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software, and also allows exploits of more obscure software/OS/browser combinations and obscure versions. They gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs. However, these complete exploits are more likely to be buggy or fragile in themselves, and easier to defeat. Future research includes normalizing malware and JavaScript. Conclusion: the coming trend is that mass malware with mass zero-day attacks will result in mass customization of attacks.

x86 Rewriting: Defeating RoP and other Shinanighans
Richard Wartell

The attack vector addressed here: first, some malware causes a buffer overflow. The malware has no program access, but it has input access and overflows code onto the stack. Later the stack became non-executable; the workaround malware used was to write a bogus return address to the stack, jumping to the malware. Later came ASLR (Address Space Layout Randomization) to randomize the memory layout and make addresses non-deterministic; the workaround malware used was to jump to existing code segments in the program that can be used in bad ways. "RoP" is Return-Oriented Programming: RoP attacks use your own code, writing return addresses on the stack that point to (existing) exploitable code found in the program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is anti-RoP compilers that compile source code with no return instructions. ASLR does not randomize the address space at gadget granularity. IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine. Richard's goal was to randomize a binary with no source code access. He created "STIR" (Self-Transforming Instruction Relocation). STIR disassembles the binary and operates on "basic blocks" of code; the STIR disassembler is conservative in what it disassembles. Each basic block is moved to a random location in memory: STIR writes new code sections with copies of the basic blocks in randomized locations, the old code is copied and rewritten with jumps to the new code, and the original code sections in the file are marked non-executable. STIR has better entropy than ASLR in the location of code, which makes brute-force attacks much harder. STIR runs on MS Windows (PE) and Linux (ELF). It eliminated 99.96% or more of "gadgets" (i.e., moved their addresses). Overhead is usually 5-10% on MS Windows and about 1.5-4% on Linux (but some code actually runs faster!). The unique thing about STIR is that it requires no source access and the modified binary fully works. Current work is rewriting code to enforce security policies, for example "don't create a *.{exe,msi,bat} file" or "don't connect to the network after reading from the disk".

Clowntown Express: interesting bugs and running a bug bounty program
Collin Greene, Facebook

Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing on diffs. But there are lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were on software purchased from a third party (but bounty claimers don't know and don't care). We use security questions, as does everyone else, but they are basically insecure (often easily discoverable). Collin didn't expect many bugs from the bounty program, but they ended up getting 20+ good bugs in the first 24 hours, and good submissions continue to come in. Bug bounties bring in people with different perspectives, and are paid only for success. A bug bounty is a better use of a fixed amount of time and money versus just code review or static code analysis. The bounty program started July 2011 and has paid out $1.5 million to date.
14% of the submissions have been high-priority problems that needed to be fixed immediately. The best bugs come from a small percentage of submitters (as with everything else); the top paid submitters earn 6 figures a year. Spammers like to backstab competitors. The youngest submitter was 13. Some submitters have been hired. Bug bounties also let you see bugs that were missed by tools or reviews, allowing improvement in the process. Bug bounties might not work for traditional software companies where the product has a release cycle or is not on the Internet.

Active Fingerprinting of Encrypted VPNs
Anna Shubina, Dartmouth Institute for Security, Technology, and Society

(I missed the start of her talk because another track went overtime. But I have the DVD of the talk, so I'll expand later.) IPsec leaves fingerprints. Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (for example, DES-CBC versus AES-CBC). One can tell a lot about VPNs just from ping round trips (such as what router is used). Delayed packets are not informative about a network, especially if you are far away from the network. More exploration is needed of how TCP behaves in real life with respect to timing.

Making Attacks Go Backwards
FuzzyNop, Mandiant

This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. But who is making these malware threats? It's not a single person or group—they have diverse skill levels, and there are a lot of fat-fingered fumblers out there. Always look for low-hanging fruit first: "hiding" malware in the temp, recycle, or root directories; creation of unnamed scheduled tasks; obvious names of files and syscalls ("ClearEventLog"); uncleared event logs. Clearing the event log in itself, and the time of clearing, is a red flag and a good first clue to look for on a suspect system. Reverse engineering is hard; disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly. Key loggers are used a lot in targeted attacks. They are typically custom code or built into a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" or "[RightShift]"), or timestamp printf strings. Look for these in files. Presence is not proof they are used; absence is not proof they are not used. Java exploits: you can parse a jar file with idxparser.py and decompile the Java file. Java is typically used to target tech companies. Backdoors are the main persistence mechanism (provided externally) for malware; malware also typically needs command and control.
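A quick-and-dirty sketch of the "look for key-logger markers in files" triage step above. The directory and marker list are examples only, and as the notes say, presence is not proof the markers are used and absence is not proof they are not.

    using System;
    using System.IO;

    class KeyloggerArtifactScan
    {
        // Strings a key logger typically has to embed so it can print special keys.
        static readonly string[] Markers = { "[Ctrl]", "[RightShift]" };

        static void Main()
        {
            foreach (string path in Directory.EnumerateFiles(@"C:\Windows\Temp", "*",
                                                             SearchOption.AllDirectories))
            {
                string content;
                try { content = File.ReadAllText(path); }
                catch (IOException) { continue; }                  // locked files: skip
                catch (UnauthorizedAccessException) { continue; }  // no access: skip

                foreach (string marker in Markers)
                    if (content.Contains(marker))
                        Console.WriteLine("{0}: contains '{1}'", path, marker);
            }
        }
    }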
Application of Artificial Intelligence in Ad-Hoc Static Code Analysis
John Ashaman, Security Innovation

Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. He also tried using grep, but this fails to find anything even mildly complex. So next John decided to write his own tool. His approach was to first generate a call graph and then analyze the graph. However, the problem is that making a call graph is really hard; for example, one problem is "evil" coding techniques, such as passing function pointers. First the tool generated an Abstract Syntax Tree (AST), with the nodes created from method declarations and the edges created from method use. Then the tool generated a control flow graph, with the goal of finding a path through the AST (a maze) from source to sink. The algorithm is to look at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for the search order. The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP; later, he plans to add more PHP, then JSP and Java. For more information see his posts on the Security Innovation blog and NRefactory on GitHub.

Mask Your Checksums—The Gory Details
Eric (XlogicX) Davisson

Sometimes when emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found on stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care. If only the IP is masked, the IP may be guessed from the checksum (that is, it leaks data), and other parts of the packet may leak more data about the IP. TCP and IP checksums both cover the same data, so you can get more bits of information out of using both checksums than just one. Also, one can usually determine the OS from the TTL field and the ports in a packet header. If you get hundreds of possible results (16x for each masked nibble that is unknown), you can do other things to narrow the results, such as looking at the packet contents for domain or geo information. With hundreds of results, you can import them as CSV into a spreadsheet, correlate with geo data, and see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached; he was able to find the exact IP address, given the geo data and university of the sender. The point is: if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if at all, so as not to create a false impression of security.
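A small C# sketch of why an intact checksum betrays a masked address: it computes the standard IPv4 header checksum (one's-complement sum of 16-bit words) and then enumerates which values of a masked source-IP octet are still consistent with the published checksum. The header bytes are invented for illustration.

    using System;
    using System.Collections.Generic;

    class ChecksumLeakDemo
    {
        // Standard IPv4 header checksum: one's-complement sum over 16-bit words.
        static ushort Ipv4Checksum(byte[] header)
        {
            uint sum = 0;
            for (int i = 0; i < header.Length; i += 2)
                sum += (uint)((header[i] << 8) | header[i + 1]);
            while ((sum >> 16) != 0)
                sum = (sum & 0xFFFF) + (sum >> 16);
            return (ushort)~sum;
        }

        static void Main()
        {
            // Hypothetical 20-byte IPv4 header; source IP 192.0.2.55, checksum field zeroed.
            byte[] header = {
                0x45, 0x00, 0x00, 0x54, 0x1c, 0x46, 0x40, 0x00, 0x40, 0x01,
                0x00, 0x00, 192, 0, 2, 55, 198, 51, 100, 7 };
            ushort published = Ipv4Checksum(header);   // the value a "masked" report still shows

            // Mask the last octet of the source IP (byte 15) but leave the checksum intact:
            // only candidates that reproduce the published checksum remain.
            var candidates = new List<int>();
            for (int octet = 0; octet <= 255; octet++)
            {
                header[15] = (byte)octet;
                header[10] = 0; header[11] = 0;        // zero checksum field before recomputing
                if (Ipv4Checksum(header) == published)
                    candidates.Add(octet);
            }
            Console.WriteLine("Octets consistent with the checksum: " + string.Join(", ", candidates));
        }
    }

With more octets masked the candidate set grows (roughly 16x per unknown nibble), which is where the TTL, ports, and packet-content clues mentioned above come in to narrow things down.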
Adventures with weird machines thirty years after "Reflections on Trusting Trust"
Sergey Bratus, Dartmouth College (and Julian Bangert and Rebecca Shapiro, not present)

"Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper: "You can't trust code that you did not totally create yourself." There are invisible links in the chain of trust, such as "well-installed microcode bugs", bugs in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source. But suppose there are no bugs and you trust the author: can you trust the code? Hell no! There are too many factors; it's Babylonian in nature. Why not? Well: input is not well defined/recognized, so the code's assumptions about "checked" input will be violated (bug/vulnerability); for example, HTML is recursive, but regex checking is not recursive. Input can be well formed but so complex there's no telling what it does; for example, ELF file parsing is complex and has multiple ways of parsing. Input is seen differently by different pieces of the program or toolchain. Any input is a program: input executes on input handlers (it drives state changes and transitions), and only a well-defined execution model can be trusted (regex/DFA, PDA, CFG). An input handler either is a "recognizer" for the inputs as a well-defined language (see langsec.org) or it's a "virtual machine" for inputs to drive into pwn-age. ELF ABI (the UNIX/Linux executable file format) case study: problems can arise from these steps (without planting bugs): compiler, linker, loader, ld.so/rtld, relocator, DWARF (debugger info), exceptions. The problem is you can't really automatically analyze code (it's the "halting problem" and undecidable). The only solution is to freeze code and sign it. But you can't freeze everything! You can't freeze ASLR or loading—you must have tables and metadata. Any sufficiently complex input data is the same as VM byte code. Example: ELF relocation entries + dynamic symbols == a Turing-complete machine (TM). @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file; for more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata". @bxsays did the same thing with Mach-O bytecode. Or take DWARF exception handling data: .eh_frame + glibc == a Turing machine. x86 MMU (IDT, GDT, TSS): address translation was used to create a Turing machine; the page handler reads and writes memory (on page faults) and uses a page table, which can serve as Turing machine byte code. There is an example on GitHub using this TM that will fly a glider across the screen. Next Sergey talked about "parser differentials": having one input format but two parsers creates confusion and opportunity for exploitation. For example, CSRs are parsed during creation by the cert requestor and again by another parser at the CA. Another example is ELF—several parsers in the OS toolchain, which are all different. You can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs; the second PHDR can completely transform the executable. This is described in a paper in the first issue of the International Journal of PoC. Conclusions: trusting computers is not only about bugs! Bugs are part of the problem, but by far not all of it. Complex data formats mean bugs. There is no "chain of trust" in Babylon (that is, with parser differentials). We need to squeeze complexity out of data until data stops being "code equivalent". Further information: see langsec.org, and USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blogpost about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific for a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I'm in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database. I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it's mainly coming down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spend inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), it won't make a difference for your application as that 10ms difference won't be noticed. That's why it's very important to find the real locations of the problems so developers can fix them properly and don't get frustrated because their quest to get a fast, performing application failed. Performance tuning basics and rules Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason of a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or when you assume things will be bad/slow without doing analysis will lead to the path of premature optimization and won't actually solve your problems, only create new ones. The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic. 
If I solely look at the Linq query, the code consuming the resultset of the 10 rows and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always do realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem. The second most important rule you have to understand is based on the old saying "Penny wise, Pound Foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger part of the total time T for that same given task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: No analysis -> no problem -> no solution. One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O-characteristics so you know the performance you'll get plus you know the algorithm will work. The time taken by the algorithm implementing code is inevitable: you already implemented the best algorithm. You might find some optimizations on the technical level but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems. Isolate The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area with a clear begin and end and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task or the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.  Analyze Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem. 
This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eat up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating task is essential. Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. What does the code do along the way, verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths, just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing which means we collect data about what could be wrong, for each participating part of the complete application. Reviewing the code you wrote is a good tool to get deeper understanding of what is going on for a given task but ultimately it lacks precision and overview what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get deeper understanding about which parts contribute how much time to the total task, triggered by which other parts and for example how many times are they called. There are two different kind of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers. .NET profiling .NET profilers (e.g. dotTrace by JetBrains or Ants by Red Gate software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, how many times that code was called by other code and thus reveals where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise pound foolish saying: if a profiler reveals that a group of methods are fast, or don't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.  As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet. 
You navigate to the particular part which is slow, start profiling in the profiler, in your application you perform the actions which are considered slow, and afterwards you get a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area. O/R mapper and RDBMS profiling .NET profilers give you a good insight in the .NET side of things, but not in the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand which parts of the O/R mapper and database participate how much to the total time taken for task T, we need different tools. There are two kind of tools focusing on O/R mappers and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by hibernating rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating rhinos also have profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf) and work the same as LLBLGen Prof. For RDBMS profilers, you have to look whether the RDBMS vendor has a profiler. For example for SQL Server, the profiler is shipped with SQL Server, for Oracle it's build into the RDBMS, however there are also 3rd party tools. Which tool you're using isn't really important, what's important is that you get insight in which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage as they collect the time it took to execute the query from the application's perspective so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset or a resultset with large blob/clob/ntext/image fields takes more time to get transported across the network than a small resultset and a database profiler doesn't take this into account most of the time. Another tool to use in this case, which is more low level and not all O/R mappers support it (though LLBLGen Pro and NHibernate as well do) is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight in what's going on. Interpret After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid and which parts actually take place and if the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. 
don't contribute significantly to the total time taken) can be ignored from now on, we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now have to first look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task is depending on the time the data is fetched from the database. If there was just 1 query executed, according to tracing or O/R mapper profilers / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data. Interpret means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10 row resultset of a query which groups millions of rows will likely reveal that a long time is spend inside the database and almost no time is spend in the .NET code, meaning the RDBMS part contributes the most to the total time taken, the rest is compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide that further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected and in the area where the most time is contributed to the total time taken is room for improvement, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future when someone else looks at the application and starts asking questions you can answer them properly and new analysis is only necessary if situations changed. Fix After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromise memory consumption over performance) to avoid unnecessary re-querying data and re-consuming the results. After applying a change, it's key you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether it moved the problem to a different part of the application. Don't fall into the trap to do partly analysis: do the full analysis again: .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all. 
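The caching compromise mentioned above can be as simple as a small time-based cache in front of the data-access call, trading memory for fewer identical queries. The sketch below is illustrative only (the type and the loader delegate are placeholders, not an O/R mapper feature).

    using System;
    using System.Collections.Generic;

    // Minimal time-based cache: trade some memory for fewer repeated queries.
    public class SimpleCache<TKey, TValue>
    {
        private readonly Dictionary<TKey, Tuple<DateTime, TValue>> _entries =
            new Dictionary<TKey, Tuple<DateTime, TValue>>();
        private readonly TimeSpan _timeToLive;

        public SimpleCache(TimeSpan timeToLive)
        {
            _timeToLive = timeToLive;
        }

        public TValue GetOrAdd(TKey key, Func<TValue> load)
        {
            Tuple<DateTime, TValue> entry;
            if (_entries.TryGetValue(key, out entry) &&
                DateTime.UtcNow - entry.Item1 < _timeToLive)
            {
                return entry.Item2;                        // fresh enough: no query executed
            }

            TValue value = load();                         // cache miss: query once
            _entries[key] = Tuple.Create(DateTime.UtcNow, value);
            return value;
        }
    }

    // Usage (hypothetical): var countries = cache.GetOrAdd("countries", () => FetchCountriesFromDb());

As the fix step says, re-run the full analysis afterwards to confirm the cache actually removed the hotspot rather than just moving it elsewhere.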
Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly onto the RDBMS, these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data-access and databases in general. In most cases it's not a big issue: alternatives are often good choices too and the compromises aren't that hard to deal with. What is important is that you document why you made a choice, a compromise: which analysis data, which interpretation led you to the choice made. This is key for good maintainability in the years to come. Most common performance problems with O/R mappers Below is an incomplete list of common performance problems related to data-access / O/R mappers / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step. SELECT N+1: (Lazy-loading specific). Lazy loading triggered performance bottlenecks. Consider a list of Orders bound to a grid. You have a Field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) for each row the Customer row. This means you'll get for the single list not 1 query (for the orders) but 1+(the number of orders shown) queries. To solve this: use eager loading using a prefetch path to fetch the customers with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem. Prefetch paths using many path nodes or sorting, or limiting. Eager loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, it can be the number of data fetched in a child node is bigger than you think. Also consider that data in every node is merged on the client within the parent. This is fast, but it also can take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries. Deep inheritance hierarchies of type Target Per Entity/Type. If you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype- and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take. Fetching massive amounts of data by fetching large lists of entities. LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to process through large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure that you switch server-side paging on on the datasourcecontrol used. In this case, paging on the grid alone is not enough: this can lead to fetching a lot of data which is then loaded into the grid and paged there. Keep note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. 
when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have (e.g. due to a join): the datareader will do DISTINCT filtering on the client. this is a little slower but it does perform paging functionality on the data-reader so it won't fetch all rows even if the query suggests it does. Fetch massive amounts of data because blob/clob/ntext/image fields aren't excluded. LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spend on data-transport across the network. Use this optimization if you see that there's a big difference between query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call. Doing client-side aggregates/scalar calculations by consuming a lot of data. If possible, try to formulate a scalar query or group by query using the projection system or GetScalar functionality of LLBLGen Pro to do data consumption on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all in memory, then traverse the data in-memory to calculate a value. Using .ToList() constructs inside linq queries. It might be you use .ToList() somewhere in a Linq query which makes the query be run partially in-memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers in-memory and do an in-memory filtering, as the linq query is defined on an IEnumerable<T>, and not on the IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run. Fetching all entities to delete into memory first. To delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper however: if an O/R mapper relies on a cache, these kind of operations are likely not supported because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache. Fetching all entities to update with an expression into memory first. Similar to the previous point: it is more efficient to update a set of entities directly with a single UPDATE query using an expression instead of fetching the entities into memory first and then updating the entities in a loop, and afterwards saving them. It might however be a compromise you don't want to take as it is working around the idea of having an object graph in memory which is manipulated and instead makes the code fully aware there's a RDBMS somewhere. Conclusion Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success as well as knowing what's going on inside the application you built. 
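As a footnote to the .ToList() item in the list above, here is a compilable sketch of the difference between filtering an IQueryable<T> (translated to SQL by the provider) and filtering after .ToList() (everything fetched, then filtered in memory). The Customer type and the query source are placeholders for whatever your O/R mapper exposes.

    using System.Collections.Generic;
    using System.Linq;

    // Illustrative only: 'customers' stands for any LINQ provider's IQueryable source
    // (LLBLGen Pro's metaData.Customers, an Entity Framework DbSet, and so on).
    public class Customer { public string Country { get; set; } }

    public static class FilterExamples
    {
        public static List<Customer> DatabaseSide(IQueryable<Customer> customers)
        {
            // The Where clause stays in the IQueryable expression tree, so the provider
            // translates it to SQL: only Norwegian customers cross the network.
            return customers.Where(c => c.Country == "Norway").ToList();
        }

        public static List<Customer> ClientSide(IQueryable<Customer> customers)
        {
            // ToList() first materializes *every* customer; the filter then runs
            // in memory over an IEnumerable<T>, which is what the article warns about.
            return customers.ToList().Where(c => c.Country == "Norway").ToList();
        }
    }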
I hope you'll find this guide useful in tracking down performance problems and dealing with them in a useful way.  

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
Database delivery patterns & practices – STAGE 4: AUTOMATED DEPLOYMENT If you’ve been fortunate enough to get to the stage where you’ve implemented some sort of continuous integration process for your database updates, then hopefully you’re seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it’s going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear. Our Database Delivery Learning Program consists of four stages – really three: source controlling a database, running continuous integration processes, and then setting up automated deployment (the middle stage is split in two – basic and advanced continuous integration – making four stages in total). If you’ve managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn’t going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There’s a significant gap between your latest version being tested, and it being easily releasable. Just a quick note on terminology – there’s a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: “Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users”. There’s another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app). So, hopefully you’re convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or “release management”) process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can’t I just install one of the many release management tools available and hey presto, I’m ready! If only it were that simple. Below I list some of the areas that it’s worth spending a little time on, where a little planning and prep could go a long way.
It’s also worth pointing out, that this should really be an evolving process. Depending on your starting point of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912 For now, in this post, we’ll look at the following areas for your checklist: You and Your Team Environments The Deployment Process Rollback and Recovery Development Practices You and Your Team It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as: -2008 to present: overall development costs reduced by 40% -Number of programs under development increased by 140% -Development costs per program down 78% -Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40% But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org. I’ve spoken to many customers who have implemented database CI who describe their deployment process as “The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay isn’t it?” isn’t likely to go down well. 
There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process and so on. Or of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress. Actions: Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do or, if you’re the DBA yourself, get the dev team up-to-speed with your plans, Get your boss involved too and make sure he/she is bought in to the investment. Environments Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “What we want, to do continuous delivery properly” and “What we’re currently stuck with”. Some of these differences are: What we want What we’ve got Each developer with their own dedicated database environment A single shared “development” environment, used by everyone at once An Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit-tests running on that machine In fact if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!). Whether you have a full suite of unit tests running is a different question… Separate QA environment used explicitly for manual testing prior to release “We just test on the dev environments, or maybe pre-production” A proper pre-production (or “staging”) box that matches production as closely as possible Hopefully a pre-production box of some sort. But does it match production closely!? A production environment reproducible from source control A production box which has drifted significantly from anything in source control The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application. 
There’s a world if difference between starting from scratch with 4 or 5 clean environments (reproducible from source control of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have: Dedicated development databases, An Integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action], QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing, Pre-production. The environment you use to test the production release process, Production. * A note on the use of the word “automatic” – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single-click from a user. Actions: Get your environments set up and ready, Set access permissions appropriately, Make sure everyone understands what the environments will be used for (it’s not a “free-for-all” with all environments to be accessed, played with and changed by development). The Deployment Process As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?” Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring in to pre-prod, Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script, User (generally, the DBA), reviews the script. This often involves manually checking updates against a spreadsheet or similar, Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped), If all working, run the script on production.* * this assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you’re interested in testing early versions. There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process keys in and automatically deploys that change to the live box. Not for the faint hearted – and really not something we recommend. 
At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention, allowing for script approval by the DBA. One he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And anything in between of course – and other variations. But we’d strongly recommended sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?” NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes. Actions: Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?” Repeat for earlier environments (QA and so on). Rollback and Recovery If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move in to a more automated database deployment process, you’re far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we’ll explore in subsequent articles, things like: Immediately restore from backup, Have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!) Have fallback environments – for example, using a blue-green deployment pattern. Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups, is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism. Actions: Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and requirements for a completely failsafe process. Development Practices This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and linked application. 
So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “Branch by abstraction”. Explained nicely here, by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slidedeck, from Niek Bartholomeus explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage is a great read, if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515 But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start. Actions: For your business, work out how far down the path you want to go, amending your database development patterns to “best practice”. It’s a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes). Socialise these changes with your development group. No-one likes having “best practice” changes imposed on them, so good to introduce these ideas and the rationale behind them early.   Summary The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly in the areas of “Getting the team ready for the changes that are coming” and “Planning our your pipeline, environments, patterns and practices for development”, though there will be more detail, depending on where you’re coming from – and where you want to get to. This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.
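As a rough illustration of the pre-production-versus-production comparison and the drift problem discussed earlier in this article, the sketch below diffs column metadata between two SQL Server databases via INFORMATION_SCHEMA. The connection strings are placeholders, and a real comparison tool (SQL Compare, or the drift detection mentioned above) also covers indexes, constraints, permissions and programmable objects, which this deliberately does not.

    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;

    class DriftCheck
    {
        static HashSet<string> LoadColumns(string connectionString)
        {
            var columns = new HashSet<string>();
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                @"SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, IS_NULLABLE
                  FROM INFORMATION_SCHEMA.COLUMNS", connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                    while (reader.Read())
                        columns.Add(string.Join("|", reader[0], reader[1], reader[2], reader[3]));
            }
            return columns;
        }

        static void Main()
        {
            // Placeholder connection strings for the two environments being compared.
            var preProd = LoadColumns("Server=preprod;Database=App;Integrated Security=true");
            var prod = LoadColumns("Server=prod;Database=App;Integrated Security=true");

            var onlyInProd = new HashSet<string>(prod);
            onlyInProd.ExceptWith(preProd);   // drift: changes hacked into production directly?

            foreach (var column in onlyInProd)
                Console.WriteLine("In production but not pre-production: " + column);
        }
    }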

    Read the article

  • How to create Custom ListForm WebPart

    - by DipeshBhanani
    Mostly all who works extensively on SharePoint (including meJ) don’t like to use out-of-box list forms (DispForm.aspx, EditForm.aspx, NewForm.aspx) as interface. Actually these OOB list forms bind hands of developers for the customization. It gives headache to developers to add just one post back event, for a dropdown field and to populate other fields in NewForm.aspx or EditForm.aspx. On top of that clients always ask such stuff. So here I am going to give you guys a flight for SharePoint Customization world. In this blog, I will explain, how to create CustomListForm WebPart. In my next blogs, I am going to explain easy deployment of List Forms through features and last, guidance on using SharePoint web controls. 1.       First thing, create a class library project through Visual Studio and inherit the class with WebPart class.     public class CustomListForm : WebPart   2.       Declare the public variables and properties which we are going to use throughout the class. You will get to know these once you see them in use.         #region "Variable Declaration"           Table spTableCntl;         FormToolBar formToolBar;         Literal ltAlertMessage;         Guid SiteId;         Guid ListId;         int ItemId;         string ListName;           #endregion           #region "Properties"           SPControlMode _ControlMode = SPControlMode.New;         [Personalizable(PersonalizationScope.Shared),          WebBrowsable(true),          WebDisplayName("Control Mode"),          WebDescription("Set Control Mode"),          DefaultValue(""),          Category("Miscellaneous")]         public SPControlMode ControlMode         {             get { return _ControlMode; }             set { _ControlMode = value; }         }           #endregion     The property “ControlMode” is used to identify the mode of the List Form. The property is of type SPControlMode which is an enum type with values (Display, Edit, New and Invalid). When we will add this WebPart to DispForm.aspx, EditForm.aspx and NewForm.aspx, we will set the WebPart property “ControlMode” to Display, Edit and New respectively.     3.       Now, we need to override the CreateChildControl method and write code to manually add SharePoint Web Controls related to each list fields as well as ToolBar controls.         protected override void CreateChildControls()         {             base.CreateChildControls();               try             {                 SiteId = SPContext.Current.Site.ID;                 ListId = SPContext.Current.ListId;                 ListName = SPContext.Current.List.Title;                   if (_ControlMode == SPControlMode.Display || _ControlMode == SPControlMode.Edit)                     ItemId = SPContext.Current.ItemId;                   SPSecurity.RunWithElevatedPrivileges(delegate()                 {                     using (SPSite site = new SPSite(SiteId))                     {                         //creating a new SPSite with credentials of System Account                         using (SPWeb web = site.OpenWeb())                         {                               //<Custom Code for creating form controls>                         }                     }                 });             }             catch (Exception ex)             {                 ShowError(ex, "CreateChildControls");             }         }   Here we are assuming that we are developing this WebPart to plug into List Forms. Hence we will get the List Id and List Name from the current context. 
We can have an Item Id only in Display and Edit mode. We wrap our code in "RunWithElevatedPrivileges" to elevate privileges to the System Account. Now let's dig into the main code and expand "//<Custom Code for creating form controls>". Before instantiating any SharePoint control, we need to set the context of the SharePoint web controls explicitly so that they are instantiated with the elevated System Account user. The following line does the job.

    //To create SharePoint controls with new web object and System Account credentials
    SPControl.SetContextWeb(Context, web);

First, let's add the main table as the container for all controls.

    //Table to render webpart
    Table spTableMain = new Table();
    spTableMain.CellPadding = 0;
    spTableMain.CellSpacing = 0;
    spTableMain.Width = new Unit(100, UnitType.Percentage);
    this.Controls.Add(spTableMain);

Now we need to add the top toolbar with the Save and Cancel buttons, as you can see in the screenshot below.

    // Add Row and Cell for Top ToolBar
    TableRow spRowTopToolBar = new TableRow();
    spTableMain.Rows.Add(spRowTopToolBar);
    TableCell spCellTopToolBar = new TableCell();
    spRowTopToolBar.Cells.Add(spCellTopToolBar);
    spCellTopToolBar.Width = new Unit(100, UnitType.Percentage);

    ToolBar toolBarTop = (ToolBar)Page.LoadControl("/_controltemplates/ToolBar.ascx");
    toolBarTop.CssClass = "ms-formtoolbar";
    toolBarTop.ID = "toolBarTbltop";
    toolBarTop.RightButtons.SeparatorHtml = "<td class=ms-separator> </td>";

    if (_ControlMode != SPControlMode.Display)
    {
        SaveButton btnSave = new SaveButton();
        btnSave.ControlMode = _ControlMode;
        btnSave.ListId = ListId;

        if (_ControlMode == SPControlMode.New)
            btnSave.RenderContext = SPContext.GetContext(web);
        else
        {
            btnSave.RenderContext = SPContext.GetContext(this.Context, ItemId, ListId, web);
            btnSave.ItemContext = SPContext.GetContext(this.Context, ItemId, ListId, web);
            btnSave.ItemId = ItemId;
        }
        toolBarTop.RightButtons.Controls.Add(btnSave);
    }

    GoBackButton goBackButtonTop = new GoBackButton();
    toolBarTop.RightButtons.Controls.Add(goBackButtonTop);
    goBackButtonTop.ControlMode = SPControlMode.Display;

    spCellTopToolBar.Controls.Add(toolBarTop);

Here we have used "SaveButton" and "GoBackButton", which are internal SharePoint web controls that provide the save and cancel functionality. I have set some of the Save button's properties inside an if-else because we will not have an Item Id in New mode; the ItemId property identifies which SharePoint list item needs to be saved. Now add the FormToolBar to the page, which contains the "Attach File", "Delete Item" etc. buttons.
    // Add Row and Cell for FormToolBar
    TableRow spRowFormToolBar = new TableRow();
    spTableMain.Rows.Add(spRowFormToolBar);
    TableCell spCellFormToolBar = new TableCell();
    spRowFormToolBar.Cells.Add(spCellFormToolBar);
    spCellFormToolBar.Width = new Unit(100, UnitType.Percentage);

    //Use the class-level formToolBar field declared earlier
    formToolBar = new FormToolBar();
    formToolBar.ID = "formToolBar";
    formToolBar.ListId = ListId;
    if (_ControlMode == SPControlMode.New)
        formToolBar.RenderContext = SPContext.GetContext(web);
    else
    {
        formToolBar.RenderContext = SPContext.GetContext(this.Context, ItemId, ListId, web);
        formToolBar.ItemContext = SPContext.GetContext(this.Context, ItemId, ListId, web);
        formToolBar.ItemId = ItemId;
    }
    formToolBar.ControlMode = _ControlMode;
    formToolBar.EnableViewState = true;

    spCellFormToolBar.Controls.Add(formToolBar);

The ControlMode property takes care of which buttons are displayed on the toolbar, e.g. "Attach File" and "Delete Item" in new/edit forms, and "New Item", "Edit Item", "Delete Item", "Manage Permissions" etc. in display forms. Now add the main section which contains the form field controls.

    //Add a Blank Row with height of 5px to render space between ToolBar table and Control table
    TableRow spRowLine1 = new TableRow();
    spTableMain.Rows.Add(spRowLine1);
    TableCell spCellLine1 = new TableCell();
    spRowLine1.Cells.Add(spCellLine1);
    spCellLine1.Height = new Unit(5, UnitType.Pixel);
    spCellLine1.Controls.Add(new LiteralControl("<IMG SRC='/_layouts/images/blank.gif' width=1 height=1 alt=''>"));

    //Add Row and Cell for Form Controls Section
    TableRow spRowCntl = new TableRow();
    spTableMain.Rows.Add(spRowCntl);
    TableCell spCellCntl = new TableCell();

    //Create Form Field controls; they are built into the class-level table "spTableCntl"
    CreateFieldControls(web);
    //Add the cell and the table containing all form controls to the page
    spRowCntl.Cells.Add(spCellCntl);
    spCellCntl.Width = new Unit(100, UnitType.Percentage);
    spCellCntl.Controls.Add(spTableCntl);

    TableRow spRowLine2 = new TableRow();
    spTableMain.Rows.Add(spRowLine2);
    TableCell spCellLine2 = new TableCell();
    spRowLine2.Cells.Add(spCellLine2);
    spCellLine2.CssClass = "ms-formline";
    spCellLine2.Controls.Add(new LiteralControl("<IMG SRC='/_layouts/images/blank.gif' width=1 height=1 alt=''>"));

    // Add Blank row with height of 5 pixel
    TableRow spRowLine3 = new TableRow();
    spTableMain.Rows.Add(spRowLine3);
    TableCell spCellLine3 = new TableCell();
    spRowLine3.Cells.Add(spCellLine3);
    spCellLine3.Height = new Unit(5, UnitType.Pixel);
    spCellLine3.Controls.Add(new LiteralControl("<IMG SRC='/_layouts/images/blank.gif' width=1 height=1 alt=''>"));

You can also add a bottom toolbar to get the same look and feel as the OOB forms; I am not adding it here, as the blog would get too lengthy. At last, you need to write the following lines to allow unsafe updates for the Save and Delete buttons.
// Allow unsafe update on web for save button and delete button     if (this.Page.IsPostBack && this.Page.Request["__EventTarget"] != null         && (this.Page.Request["__EventTarget"].Contains("IOSaveItem")         || this.Page.Request["__EventTarget"].Contains("IODeleteItem")))     {         SPContext.Current.Web.AllowUnsafeUpdates = true;     }   So that’s all. We have finished writing Custom Code for adding field control. But something most important is skipped. In above code, I have called function “CreateFieldControls(web);” to add SharePoint field controls to the page. Let’s see the implementation of the function:     private void CreateFieldControls(SPWeb pWeb)     {         SPList listMain = pWeb.Lists[ListId];         SPFieldCollection fields = listMain.Fields;           //Main Table to render all fields         spTableCntl = new Table();         spTableCntl.BorderWidth = new Unit(0);         spTableCntl.CellPadding = 0;         spTableCntl.CellSpacing = 0;         spTableCntl.Width = new Unit(100, UnitType.Percentage);         spTableCntl.CssClass = "ms-formtable";           SPContext controlContext = SPContext.GetContext(this.Context, ItemId, ListId, pWeb);           foreach (SPField listField in fields)         {             string fieldDisplayName = listField.Title;             string fieldInternalName = listField.InternalName;               //Skip if the field is system field or hidden             if (listField.Hidden || listField.ShowInVersionHistory == false)                 continue;               //Skip if the control mode is display and field is read-only             if (_ControlMode != SPControlMode.Display && listField.ReadOnlyField == true)                 continue;               FieldLabel fieldLabel = new FieldLabel();             fieldLabel.FieldName = listField.InternalName;             fieldLabel.ListId = ListId;               BaseFieldControl fieldControl = listField.FieldRenderingControl;             fieldControl.ListId = ListId;             //Assign unique id using Field Internal Name             fieldControl.ID = string.Format("Field_{0}", fieldInternalName);             fieldControl.EnableViewState = true;               //Assign control mode             fieldLabel.ControlMode = _ControlMode;             fieldControl.ControlMode = _ControlMode;             switch (_ControlMode)             {                 case SPControlMode.New:                     fieldLabel.RenderContext = SPContext.GetContext(pWeb);                     fieldControl.RenderContext = SPContext.GetContext(pWeb);                     break;                 case SPControlMode.Edit:                 case SPControlMode.Display:                     fieldLabel.RenderContext = controlContext;                     fieldLabel.ItemContext = controlContext;                     fieldLabel.ItemId = ItemId;                       fieldControl.RenderContext = controlContext;                     fieldControl.ItemContext = controlContext;                     fieldControl.ItemId = ItemId;                     break;             }               //Add row to display a field row             TableRow spCntlRow = new TableRow();             spTableCntl.Rows.Add(spCntlRow);               //Add the cells for containing field lable and control             TableCell spCellLabel = new TableCell();             spCellLabel.Width = new Unit(30, UnitType.Percentage);             spCellLabel.CssClass = "ms-formlabel";             spCntlRow.Cells.Add(spCellLabel);             TableCell spCellControl = new TableCell();           
  spCellControl.Width = new Unit(70, UnitType.Percentage);             spCellControl.CssClass = "ms-formbody";             spCntlRow.Cells.Add(spCellControl);               //Add the control to the table cells             spCellLabel.Controls.Add(fieldLabel);             spCellControl.Controls.Add(fieldControl);               //Add description if there is any in case of New and Edit Mode             if (_ControlMode != SPControlMode.Display && listField.Description != string.Empty)             {                 FieldDescription fieldDesc = new FieldDescription();                 fieldDesc.FieldName = fieldInternalName;                 fieldDesc.ListId = ListId;                 spCellControl.Controls.Add(fieldDesc);             }               //Disable Name(Title) in Edit Mode             if (_ControlMode == SPControlMode.Edit && fieldDisplayName == "Name")             {                 TextBox txtTitlefield = (TextBox)fieldControl.Controls[0].FindControl("TextField");                 txtTitlefield.Enabled = false;             }         }         fields = null;     }   First of all, I have declared List object and got list fields in field collection object called “fields”. Then I have added a table for the container of all controls and assign CSS class as "ms-formtable" so that it gives consistent look and feel of SharePoint. Now it’s time to navigate through all fields and add them if required. Here we don’t need to add hidden or system fields. We also don’t want to display read-only fields in new and edit forms. Following lines does this job.             //Skip if the field is system field or hidden             if (listField.Hidden || listField.ShowInVersionHistory == false)                 continue;               //Skip if the control mode is display and field is read-only             if (_ControlMode != SPControlMode.Display && listField.ReadOnlyField == true)                 continue;   Let’s move to the next line of code.             FieldLabel fieldLabel = new FieldLabel();             fieldLabel.FieldName = listField.InternalName;             fieldLabel.ListId = ListId;               BaseFieldControl fieldControl = listField.FieldRenderingControl;             fieldControl.ListId = ListId;             //Assign unique id using Field Internal Name             fieldControl.ID = string.Format("Field_{0}", fieldInternalName);             fieldControl.EnableViewState = true;               //Assign control mode             fieldLabel.ControlMode = _ControlMode;             fieldControl.ControlMode = _ControlMode;   We have used “FieldLabel” control for displaying field title. The advantage of using Field Label is, SharePoint automatically adds red star besides field label to identify it as mandatory field if there is any. Here is most important part to understand. The “BaseFieldControl”. It will render the respective web controls according to type of the field. For example, if it’s single line of text, then Textbox, if it’s look up then it renders dropdown. Additionally, the “ControlMode” property tells compiler that which mode (display/edit/new) controls need to be rendered with. In display mode, it will render label with field value. In edit mode, it will render respective control with item value and in new mode it will render respective control with empty value. Please note that, it’s not always the case when dropdown field will be rendered for Lookup field or Choice field. You need to understand which controls are rendered for which list fields. 
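The post defers the full field-to-control mapping to a future article. Purely as an illustration (this helper is my own, not from the post, and the exact control can vary with the field's configuration), here is a sketch of how the concrete control type returned by FieldRenderingControl differs by field type:

        // Illustration only - not part of the original post. The concrete control type returned by
        // FieldRenderingControl depends on the field type; a few common cases are named here.
        private string DescribeRenderingControl(SPField listField)
        {
            BaseFieldControl ctrl = listField.FieldRenderingControl;

            if (ctrl is TextField)
                return "Single line of text: renders a TextBox child control named \"TextField\".";
            if (ctrl is NoteField)
                return "Multiple lines of text: renders a multi-line text editor.";
            if (ctrl is LookupField)
                return "Lookup: usually renders a DropDownList for single-value lookups.";
            if (ctrl is UserField)
                return "Person or Group: renders a PeopleEditor child control named \"UserField\".";
            if (ctrl is DateTimeField)
                return "Date and Time: renders a DateTimeControl.";

            return ctrl == null ? "No rendering control (e.g. computed field)." : ctrl.GetType().Name;
        }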
I am planning to write a separate blog on this, which I hope to publish very soon. Moreover, we also need to assign the list-field-specific properties such as List Id and Field Name to identify which SharePoint list field is attached to the control.

            switch (_ControlMode)
            {
                case SPControlMode.New:
                    fieldLabel.RenderContext = SPContext.GetContext(pWeb);
                    fieldControl.RenderContext = SPContext.GetContext(pWeb);
                    break;
                case SPControlMode.Edit:
                case SPControlMode.Display:
                    fieldLabel.RenderContext = controlContext;
                    fieldLabel.ItemContext = controlContext;
                    fieldLabel.ItemId = ItemId;

                    fieldControl.RenderContext = controlContext;
                    fieldControl.ItemContext = controlContext;
                    fieldControl.ItemId = ItemId;
                    break;
            }

Here I have separate code for New mode and for Edit/Display mode because we will not have an Item Id to assign in New mode. We also need to set the CSS classes for the cells containing the label and the control so that they get rendered with the SharePoint theme.

            spCellLabel.CssClass = "ms-formlabel";
            spCellControl.CssClass = "ms-formbody";

The "FieldDescription" control is used to add the field description if there is any. Now it's time to add some more customization:

            //Disable Name(Title) in Edit Mode
            if (_ControlMode == SPControlMode.Edit && fieldDisplayName == "Name")
            {
                TextBox txtTitlefield = (TextBox)fieldControl.Controls[0].FindControl("TextField");
                txtTitlefield.Enabled = false;
            }

The above code disables the title field in edit mode. You can add more code here to achieve whatever customization your requirements call for. Some examples follow:

            //Adding post back event on UserField to auto populate some other dependent field
            //in new mode and disable it in edit mode
            if (_ControlMode != SPControlMode.Display && fieldDisplayName == "Manager")
            {
                if (fieldControl.Controls[0].FindControl("UserField") != null)
                {
                    PeopleEditor pplEditor = (PeopleEditor)fieldControl.Controls[0].FindControl("UserField");
                    if (_ControlMode == SPControlMode.New)
                        pplEditor.AutoPostBack = true;
                    else
                        pplEditor.Enabled = false;
                }
            }

            //Add JavaScript Event on Dropdown field. Don't forget to add the JavaScript function on the page.
            if (_ControlMode == SPControlMode.Edit && fieldDisplayName == "Designation")
            {
                DropDownList ddlCategory = (DropDownList)fieldControl.Controls[0];
                ddlCategory.Attributes.Add("onchange", string.Format("javascript:DropdownChangeEvent('{0}');return false;", ddlCategory.ClientID));
            }

Following are the screenshots of my Custom ListForm WebPart (DispForm.aspx, EditForm.aspx and NewForm.aspx; images not reproduced here). Let's play a game: check out your OOB list forms in SharePoint, compare them with these screens, and find the differences.

Enjoy the SharePoint Soup!!!

    Read the article

  • Bitnami redmine error SVN

    - by Evgeniy
    I'm installing the Bitnami Redmine stack (redmine + subversion). Firstly I install configure and test it locally (Ubuntu 14.04 LTS). And everything is OK. I install Bitnami stack on server (Red Hat 4.4.7-4) and configure SVN. I commit files into SVN and connect project into Redmine with SVN repository, but when I try see it Rredmine displays 404 error. In the Redmine log file I see the following errors: Started GET "/redmine/projects/web-user-panel/repository" for 127.0.0.1 at 2014-04-24 11:34:20 +0300 Processing by RepositoriesController#show as HTML Parameters: {"id"=>"web-user-panel"} Current user: user (id=13) Error parsing svn output: #<REXML::ParseException: No close tag for /lists/list> /var/www/html/redmine/ruby/lib/ruby/1.9.1/rexml/parsers/treeparser.rb:28:in `parse' /var/www/html/redmine/ruby/lib/ruby/1.9.1/rexml/document.rb:245:in `build' /var/www/html/redmine/ruby/lib/ruby/1.9.1/rexml/document.rb:43:in `initialize' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/xml_mini/rexml.rb:30:in `new' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/xml_mini/rexml.rb:30:in `parse' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/xml_mini.rb:80:in `parse' /var/www/html/redmine/apps/redmine/htdocs/lib/redmine/scm/adapters/abstract_adapter.rb:313:in `parse_xml' /var/www/html/redmine/apps/redmine/htdocs/lib/redmine/scm/adapters/subversion_adapter.rb:106:in `block in entries' /var/www/html/redmine/apps/redmine/htdocs/lib/redmine/scm/adapters/abstract_adapter.rb:258:in `call' /var/www/html/redmine/apps/redmine/htdocs/lib/redmine/scm/adapters/abstract_adapter.rb:258:in `block in shellout' /var/www/html/redmine/apps/redmine/htdocs/lib/redmine/scm/adapters/abstract_adapter.rb:255:in `popen' /var/www/html/redmine/apps/redmine/htdocs/lib/redmine/scm/adapters/abstract_adapter.rb:255:in `shellout' /var/www/html/redmine/apps/redmine/htdocs/lib/redmine/scm/adapters/abstract_adapter.rb:212:in `shellout' /var/www/html/redmine/apps/redmine/htdocs/lib/redmine/scm/adapters/subversion_adapter.rb:100:in `entries' /var/www/html/redmine/apps/redmine/htdocs/app/models/repository.rb:198:in `scm_entries' /var/www/html/redmine/apps/redmine/htdocs/app/models/repository.rb:203:in `entries' /var/www/html/redmine/apps/redmine/htdocs/app/controllers/repositories_controller.rb:116:in `show' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_controller/metal/implicit_render.rb:4:in `send_action' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/abstract_controller/base.rb:167:in `process_action' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_controller/metal/rendering.rb:10:in `process_action' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/abstract_controller/callbacks.rb:18:in `block in process_action' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/callbacks.rb:491:in `_run__2883861927089110970__process_action__2542827355008294621__callbacks' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/callbacks.rb:405:in `__run_callback' 
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/callbacks.rb:385:in `_run_process_action_callbacks' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/callbacks.rb:81:in `run_callbacks' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/abstract_controller/callbacks.rb:17:in `process_action' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_controller/metal/rescue.rb:29:in `process_action' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_controller/metal/instrumentation.rb:30:in `block in process_action' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/notifications.rb:123:in `block in instrument' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/notifications/instrumenter.rb:20:in `instrument' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/notifications.rb:123:in `instrument' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_controller/metal/instrumentation.rb:29:in `process_action' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_controller/metal/params_wrapper.rb:207:in `process_action' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activerecord-3.2.17/lib/active_record/railties/controller_runtime.rb:18:in `process_action' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/abstract_controller/base.rb:121:in `process' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/abstract_controller/rendering.rb:45:in `process' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_controller/metal.rb:203:in `dispatch' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_controller/metal/rack_delegation.rb:14:in `dispatch' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_controller/metal.rb:246:in `block in action' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/routing/route_set.rb:73:in `call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/routing/route_set.rb:73:in `dispatch' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/routing/route_set.rb:36:in `call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/journey-1.0.4/lib/journey/router.rb:68:in `block in call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/journey-1.0.4/lib/journey/router.rb:56:in `each' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/journey-1.0.4/lib/journey/router.rb:56:in `call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/routing/route_set.rb:608:in `call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-openid-1.3.1/lib/rack/openid.rb:98:in `call' 
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/best_standards_support.rb:17:in `call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/etag.rb:23:in `call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/conditionalget.rb:25:in `call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/head.rb:14:in `call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/params_parser.rb:21:in `call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/flash.rb:242:in `call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/session/abstract/id.rb:210:in `context' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/session/abstract/id.rb:205:in `call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/cookies.rb:341:in `call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activerecord-3.2.17/lib/active_record/query_cache.rb:64:in `call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activerecord-3.2.17/lib/active_record/connection_adapters/abstract/connection_pool.rb:479:in `call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/callbacks.rb:28:in `block in call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/callbacks.rb:405:in `_run__1805290955544829105__call__1486932417638469082__callbacks' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/callbacks.rb:405:in `__run_callback' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/callbacks.rb:385:in `_run_call_callbacks' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/callbacks.rb:81:in `run_callbacks' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/callbacks.rb:27:in `call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/remote_ip.rb:31:in `call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/debug_exceptions.rb:16:in `call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/show_exceptions.rb:56:in `call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/railties-3.2.17/lib/rails/rack/logger.rb:32:in `call_app' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/railties-3.2.17/lib/rails/rack/logger.rb:16:in `block in call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/tagged_logging.rb:22:in `tagged' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/railties-3.2.17/lib/rails/rack/logger.rb:16:in `call' 
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/request_id.rb:22:in `call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/methodoverride.rb:21:in `call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/runtime.rb:17:in `call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/cache/strategy/local_cache.rb:72:in `call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/lock.rb:15:in `call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/static.rb:63:in `call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-cache-1.2/lib/rack/cache/context.rb:136:in `forward' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-cache-1.2/lib/rack/cache/context.rb:245:in `fetch' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-cache-1.2/lib/rack/cache/context.rb:185:in `lookup' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-cache-1.2/lib/rack/cache/context.rb:66:in `call!' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-cache-1.2/lib/rack/cache/context.rb:51:in `call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/railties-3.2.17/lib/rails/engine.rb:484:in `call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/railties-3.2.17/lib/rails/application.rb:231:in `call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/railties-3.2.17/lib/rails/railtie/configurable.rb:30:in `method_missing' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/builder.rb:134:in `call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/urlmap.rb:64:in `block in call' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/urlmap.rb:49:in `each' /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/urlmap.rb:49:in `call' /var/www/html/redmine/ruby/lib/ruby/gems/1.9.1/gems/passenger-4.0.40/lib/phusion_passenger/rack/thread_handler_extension.rb:74:in `process_request' /var/www/html/redmine/ruby/lib/ruby/gems/1.9.1/gems/passenger-4.0.40/lib/phusion_passenger/request_handler/thread_handler.rb:141:in `accept_and_process_next_request' /var/www/html/redmine/ruby/lib/ruby/gems/1.9.1/gems/passenger-4.0.40/lib/phusion_passenger/request_handler/thread_handler.rb:109:in `main_loop' /var/www/html/redmine/ruby/lib/ruby/gems/1.9.1/gems/passenger-4.0.40/lib/phusion_passenger/request_handler.rb:448:in `block (3 levels) in start_threads' ... No close tag for /lists/list Line: 4 Position: 93 Last 80 unconsumed characters: Output was: <?xml version="1.0" encoding="UTF-8"?> <lists> <list path="svn://127.0.0.1/voxysuser"> Rendered common/error.html.erb within layouts/base (0.1ms) Completed 404 Not Found in 69.1ms (Views: 15.1ms | ActiveRecord: 3.0ms) How can I resolve this problem? I googled it, but similar problem fixed should be fixed 3 years ago. I'm installing the latest Bitnami Redmine 2.5.1-1 stack. UPDATE Well, I found next way. If I use the http protocol it works fine, but I should remove access for svn by web. 
That's why I created a virtual host on localhost and let Redmine fetch the repository information from SVN over http via the 127.0.0.1 IP:

<VirtualHost 127.0.0.1:8000>
  <Location /repo>
    DAV svn
    SVNPath "PATH_TO_MY_REPOSITORY"
  </Location>
</VirtualHost>

And this works well.

    Read the article

< Previous Page | 177 178 179 180 181 182 183 184  | Next Page >