Search Results

Search found 3865 results on 155 pages for 't4 templates'.

Page 125/155 | < Previous Page | 121 122 123 124 125 126 127 128 129 130 131 132  | Next Page >

  • Scaling a node.js application, nginx as a base server, but varnish or redis for caching?

    - by AntelopeSalad
    I'm not close to being well versed in using nginx or varnish but this is my setup at the moment. I have a node.js server running which is serving either json, html templates, or socket.io events. Then I have nginx running in front of node which is serving all static content (css, js, etc.). At this point I would like to cache both static content and dynamic content to memory. It's to my understanding that varnish can cache static content quite well and it wouldn't require touching my application code. I also think it's capable of caching dynamic content too but there cannot be any cookie headers? I do use redis at the moment for holding session data and planned to use it for other things in the future like keeping track of non-crucial but fun stats. I just have no idea how I should handle caching everything on the site. I think it comes down to these options but there might be more: Throw varnish in front of nginx and let varnish cache static pages, no app code changes. Redis would cache dynamic db calls which would require modifying my app code. Ignore using varnish completely and let redis handle caching everything, then use one of the nginx-redis modules. I'm not sure if this would require a lot of app code changes (for the static files). I'm not having any luck finding benchmarks that compare nginx+varnish vs nginx+redis and I'm too inexperienced to bench it myself (high chances of my configs being awful). I'm basically looking for the solution that would be the most efficient in terms of req/sec and scalable in the future (throw new hardware at the problem + maybe adjust some values in a config = new servers up and running semi-painlessly).

    Read the article

  • Varnish going sick

    - by junke1990
    I'm having trouble with Varnish, it works for a couple of views and then just goes sick... The weird thing is that it does work for about 20 or 30 requests. If I call apache directly it works fine. I'm running Varnish Version: 3.0.3-1 on Debian Squeeze and, for now, Apache on port 80 and Varnish on port 8080 on the same server.. I'm using https://github.com/mattiasgeniar/varnish-3.0-configuration-templates as base for my VCLs and modified the VCLs to support Concrete5. Anyone any clue on how I should debug this? backend default { .host = "127.0.0.1"; .port = "80"; .connect_timeout = 1.5s; .first_byte_timeout = 45s; .between_bytes_timeout = 30s; .probe = { .url = "/"; .timeout = 1s; .interval = 10s; .window = 10; .threshold = 8; } } LOG 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1353791312 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1353791315 1.0 0 Backend_health - default Still sick 4--X-R- 0 8 10 0.000689 0.000000 HTTP/1.1 301 Moved Permanently (the 301 is because I check for www.)

    Read the article

  • Why is /dev/urandom only readable by root since Ubuntu 12.04 and how can I "fix" it?

    - by Joe Hopfgartner
    I used to work with Ubuntu 10.04 templates on a lot of servers. Since changing to 12.04 I have problems that I've now isolated. The /dev/urandom device is only accessible to root. This caused SSL engines, at least in PHP, for example file_get_contents(https://... to fail. It also broke redmine. After a chmod 644 it works fine, but that doesnt stay upon reboot. So my question. why is this? I see no security risk because... i mean.. wanna steal some random data? How can I "fix" it? The servers are isolated and used by only one application, thats why I use openvz. I think about something like a runlevel script or so... but how do I do it efficiently? Maby with dpkg or apt? The same goes vor /dev/shm. in this case i totally understand why its not accessible, but I assume I can "fix" it the same way to fix /dev/urandom

    Read the article

  • VMware ESX Linux Guest Customization

    - by andyh_ky
    Hello, I am interested in deploying several RHEL 4 Update 8 virtual machines for creation of a test environment. Here are the steps I am taking: In off hours, P2V/V2V the production machines and convert them to templates Deploy the virtual machines with a customization specification that changes hostname, IP address I am interested in how these processes are done and if there are any options for further customization. Are the machines brought on the network when they are powered on, before they are reconfigured? Is there a potential IP address conflict? Is there an option to run additional scripts which reside on the guest as a part of the reconfiguration? For example, restoring an Oracle Database. This is an option with Windows guests and sysprep, but I have been unable to locate anything showing a RHEL equivalent. I am dealing with a multi tier application. The main issue I am attempting to mitigate is that the application servers reference database servers by hostname and in tnsnames files. I am interested in scripting the reconfiguration of the application in the deployment so that the app/db servers are pointing to the test environment. I am OK with placing the 'cleanup' script on the source and executing it after the machine has been brought up. I am interested in the automation of the script's execution post clone/boot, as well as if there could be an IP address conflict. (cross posted to VMTN's ESX 4 community)

    Read the article

  • xslt cookbook example not working

    - by Liza dawson
    Hi I am working on this from xslt cookbook type my.xml <?xml version="1.0" encoding="UTF-8"?> <people> <person name="Al Zehtooney" age="33" sex="m" smoker="no"/> <person name="Brad York" age="38" sex="m" smoker="yes"/> <person name="Charles Xavier" age="32" sex="m" smoker="no"/> <person name="David Williams" age="33" sex="m" smoker="no"/> <person name="Edward Ulster" age="33" sex="m" smoker="yes"/> <person name="Frank Townsend" age="35" sex="m" smoker="no"/> <person name="Greg Sutter" age="40" sex="m" smoker="no"/> <person name="Harry Rogers" age="37" sex="m" smoker="no"/> <person name="John Quincy" age="43" sex="m" smoker="yes"/> <person name="Kent Peterson" age="31" sex="m" smoker="no"/> <person name="Larry Newell" age="23" sex="m" smoker="no"/> <person name="Max Milton" age="22" sex="m" smoker="no"/> <person name="Norman Lamagna" age="30" sex="m" smoker="no"/> <person name="Ollie Kensington" age="44" sex="m" smoker="no"/> <person name="John Frank" age="24" sex="m" smoker="no"/> <person name="Mary Williams" age="33" sex="f" smoker="no"/> <person name="Jane Frank" age="38" sex="f" smoker="yes"/> <person name="Jo Peterson" age="32" sex="f" smoker="no"/> <person name="Angie Frost" age="33" sex="f" smoker="no"/> <person name="Betty Bates" age="33" sex="f" smoker="no"/> <person name="Connie Date" age="35" sex="f" smoker="no"/> <person name="Donna Finster" age="20" sex="f" smoker="no"/> <person name="Esther Gates" age="37" sex="f" smoker="no"/> <person name="Fanny Hill" age="33" sex="f" smoker="yes"/> <person name="Geta Iota" age="27" sex="f" smoker="no"/> <person name="Hillary Johnson" age="22" sex="f" smoker="no"/> <person name="Ingrid Kent" age="21" sex="f" smoker="no"/> <person name="Jill Larson" age="20" sex="f" smoker="no"/> <person name="Kim Mulrooney" age="41" sex="f" smoker="no"/> <person name="Lisa Nevins" age="21" sex="f" smoker="no"/> </people> type generic-attr-to-csv.xslt <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:csv="http://www.ora.com/XSLTCookbook/namespaces/csv"> <xsl:param name="delimiter" select=" ',' "/> <xsl:output method="text" /> <xsl:strip-space elements="*"/> <xsl:template match="/"> <xsl:for-each select="$columns"> <xsl:value-of select="@name"/> <xsl:if test="position( ) != last( )"> <xsl:value-of select="$delimiter/> </xsl:if> </xsl:for-each> <xsl:text>&#xa;</xsl:text> <xsl:apply-templates/> </xsl:template> <xsl:template match="/*/*"> <xsl:variable name="row" select="."/> <xsl:for-each select="$columns"> <xsl:apply-templates select="$row/@*[local-name(.)=current( )/@attr]" mode="csv:map-value"/> <xsl:if test="position( ) != last( )"> <xsl:value-of select="$delimiter"/> </xsl:if> </xsl:for-each> <xsl:text>&#xa;</xsl:text> </xsl:template> <xsl:template match="@*" mode="map-value"> <xsl:value-of select="."/> </xsl:template> </xsl:stylesheet> type my.xsl <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:csv="http://www.ora.com/XSLTCookbook/namespaces/csv"> <xsl:import href="generic-attr-to-csv.xslt"/> <!--Defines the mapping from attributes to columns --> <xsl:variable name="columns" select="document('')/*/csv:column"/> <csv:column name="Name" attr="name"/> <csv:column name="Age" attr="age"/> <csv:column name="Gender" attr="sex"/> <csv:column name="Smoker" attr="smoker"/> <!-- Handle custom attribute mappings --> <xsl:template match="@sex" mode="csv:map-value"> <xsl:choose> <xsl:when test=".='m'">male</xsl:when> <xsl:when test=".='f'">female</xsl:when> <xsl:otherwise>error</xsl:otherwise> 
</xsl:choose> </xsl:template> </xsl:stylesheet> using the apache xalan parser D:\Test>java org.apache.xalan.xslt.Process -in my.xml -xsl my.xsl -out my.csv [Fatal Error] generic-attr-to-csv.xslt:15:6: The value of attribute "select" associated with an element type "xsl:v alue-of" must not contain the '<' character. file:///D:/Test/generic-attr-to-csv.xslt; Line #15; Column #6; org.xml.sax.SAXParseException: The value of attribut e "select" associated with an element type "xsl:value-of" must not contain the '<' character. java.lang.NullPointerException at org.apache.xalan.transformer.TransformerImpl.createSerializationHandler(TransformerImpl.java:1171) at org.apache.xalan.transformer.TransformerImpl.createSerializationHandler(TransformerImpl.java:1060) at org.apache.xalan.transformer.TransformerImpl.transform(TransformerImpl.java:1268) at org.apache.xalan.transformer.TransformerImpl.transform(TransformerImpl.java:1251) at org.apache.xalan.xslt.Process.main(Process.java:1048) Exception in thread "main" java.lang.RuntimeException at org.apache.xalan.xslt.Process.doExit(Process.java:1155) at org.apache.xalan.xslt.Process.main(Process.java:1128) Any ideas what am i doing wrong

    Read the article

  • $_GET['page'] loading content incorrectly

    - by s32ialx
    OK so here is my previous post PHP Templated Site w/ file_get_content links now i got that issue resolved BUT the problem is now that the content loads it displays UNDER the div i placed #CONTENT# inside so the styles are being ignored and it's posting #CONTENT# outside the divs at positions 0,0 any suggestions? Found out whats happening by using "View Source" seems that it's putting all of the #CONTENT#, content that's being loaded in front of the tag. Like this <doctype...> <div class="home"> blah blah </div> <head> <script src=""></script> </head> <body> <div class="header"></div> <div class="contents"> #CONTENT# < where content SHOULD load </div> <div class="footer"></div> </body> so anyone got a fix? OK so a better description I'll add relevant screen-shots Whats happening is /* file.class.php */ <?php $file = new file(); class file{ var $path = "templates/clean"; var $ext = "tpl"; function loadfile($filename){ return file_get_contents($this->path . "/" . $filename . "." . $this->ext); } function css($val,$content='',$contentvar='#CSS#') { if(is_array($val)) { $css = 'style="'; foreach($val as $p) { $css .= $p . ";"; } $css .= '"'; } else { $css = 'style="' . $val . '"'; } if($content!='') { return str_replace($contentvar,' ' . $css,$content); } else { return $css; } } function setsize($content,$width='-1',$height='-1',$border='-1'){ $css = ''; if($width!='-1') { $css = $css . "width=\"".$width."\""; } if($height!='-1') { $css = $css . "height=\"".$height."\""; } if($border!='-1') { $css = $css . "border=\"" . $border . "\""; } return str_replace('#SIZE#',' ' . $css,$content); } function setcontent($content,$newcontent,$vartoreplace='#CONTENT#'){ $val = str_replace($vartoreplace,$newcontent,$content); return $val; } function p($content) { $v = $content; $v = str_replace('#CONTENT#','',$v); $v = str_replace('#SIZE#','',$v); print $v; } } if (isset($_GET['page'])) { $content = $_GET['page'].'.php'; } else { $content = 'main.php'; } ?> is calling for a file_get_contents at the bottom which I use in /* index.php */ <?php include('classes/file.class.php'); // load the templates $header = $file->loadfile('header'); $body = $file->loadfile('body'); $footer = $file->loadfile('footer'); // fill body.tpl #CONTENT# slot with $content $body = $file->setcontent($body, $content); // cleanup and output the full page $file->p($header . $body . $footer); ?> and loads into /* body.tpl */ <div id="bodys"> <div id="bodt"></div> <div id="bodm"> <div id="contents"> #CONTENT# </div> </div> <div id="bodb"></div> </div> but the issue is as follows the $content loads properly img tags etc <h2> tags etc but CSS styling is TOTALY ignored for position width z-index etc. 
and as follows here's the screen-shot JUST incase you require the css for where $content is being loaded #bodys { top:91px; position:absolute; width:100%; } #bodt { margin-left:auto; margin-right:auto; top:3px; position:relative; width:820px; height:42px; background-image:url('images/pagetop.png'); background-repeat:no-repeat; z-index: 0; } #bodm { margin-left:auto; margin-right:auto; top:3px; position:relative; width:820px; background-image:url('images/pagemid.png'); background-repeat:repeat-y; z-index: 0; } #bodb { margin-left:auto; margin-right:auto; bottom:-42px; position:relative; width:820px; height:42px; background-image:url('images/pagebot.png'); background-repeat:no-repeat; z-index:-1; } #menuo { position:absolute; bottom:-2px; z-index:199; } #contents { position:relative; top:5px; left:25px; width:770px; z-index:10; overflow: auto; color: #000000; line-height: 1.3em; font-size: 12px; } #content { position:absolute; top:5px; left:25px; width:760px; z-index:1; color: #000000; line-height: 1.3em; font-size: 12px; } #contents p{ margin-bottom: 0.7em; } #contents a{ font-weight:bold; color: #6fa5fd; border-bottom: 1px dotted #6fa5fd; }

    Read the article

  • change password code error

    - by ejah85
    I've created a code to change a password. Now it seem contain an error. When I fill in the form to change password, and click save the error message: Warning: mysql_real_escape_string() expects parameter 2 to be resource, null given in C:\Program Files\xampp\htdocs\e-Complaint(FYP)\userChangePass.php on line 103 Warning: mysql_real_escape_string() expects parameter 2 to be resource, null given in C:\Program Files\xampp\htdocs\e-Complaint(FYP)\userChangePass.php on line 103 I really don’t know what the error message means. Please guys. Help me fix it. Here's is the code: <?php session_start(); ?> <?php # change password.php //set the page title and include the html header. $page_title = 'Change Your Password'; //include('templates/header.inc'); if(isset($_POST['submit'])){//handle the form require_once('connectioncomplaint.php');//connect to the db. //include "connectioncomplaint.php"; //create a function for escaping the data. function escape_data($data){ global $dbc;//need the connection. if(ini_get('magic_quotes_gpc')){ $data=stripslashes($data); } return mysql_real_escape_string($data, $dbc); }//end function $message=NULL;//create the empty new variable. //check for a username if(empty($_POST['userid'])){ $u=FALSE; $message .='<p> You forgot enter your userid!</p>'; }else{ $u=escape_data($_POST['userid']); } //check for existing password if(empty($_POST['password'])){ $p=FALSE; $message .='<p>You forgot to enter your existing password!</p>'; }else{ $p=escape_data($_POST['password']); } //check for a password and match againts the comfirmed password. if(empty($_POST['password1'])) { $np=FALSE; $message .='<p> you forgot to enter your new password!</p>'; }else{ if($_POST['password1'] == $_POST['password2']){ $np=escape_data($_POST['password1']); }else{ $np=FALSE; $message .='<p> your new password did not match the confirmed new password!</p>'; } } if($u && $p && $np){//if everything's ok. $query="SELECT userid FROM access WHERE (userid='$u' AND password=PASSWORD('$p'))"; $result=@mysql_query($query); $num=mysql_num_rows($result); if($num == 1){ $row=mysql_fetch_array($result, MYSQL_NUM); //make the query $query="UPDATE access SET password=PASSWORD('$np') WHERE userid=$row[0]"; $result=@mysql_query($query);//run the query. if(mysql_affected_rows() == 1) {//if it run ok. //send an email,if desired. echo '<p><b>your password has been changed.</b></p>'; include('templates/footer.inc');//include the HTML footer. exit();//quit the script. }else{//if it did not run OK. $message= '<p>Your password could not be change due to a system error.We apolpgize for any inconvenience.</p><p>' .mysql_error() .'</p>'; } }else{ $message= '<p> Your username and password do not match our records.</p>'; } mysql_close();//close the database connection. }else{ $message .='<p>Please try again.</p>'; } }//end oh=f the submit conditional. //print the error message if there is one. 
if(isset($message)){ echo'<font color="red">' , $message, '</font>'; } ?> <form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post"> <body> <script language="JavaScript1.2">mmLoadMenus();</script> <table width="604" height="599" border="0" align="center" cellpadding="0" cellspacing="0"> <tr> <td height="130" colspan="7"><img src="images/banner(E-Complaint)-.jpg" width="759" height="130" /></td> </tr> <tr> <td width="100" height="30" bgcolor="#ABD519"></td> <td width="100" bgcolor="#ABD519"></td> <td width="100" bgcolor="#ABD519"></td> <td width="100" bgcolor="#ABD519"></td> <td width="100" bgcolor="#ABD519"></td> <td width="160" bgcolor="#ABD519"> <?php include "header.php"; ?>&nbsp;</td> </tr> <tr> <td colspan="7" bgcolor="#FFFFFF"> <fieldset><legend> Enter your information in the form below:</legend> <p><b>User ID:</b> <input type="text" name="username" size="10" maxlength="20" value="<?php if(isset($_POST['userid'])) echo $_POST['userid']; ?>" /></p> <p><b>Current Password:</b> <input type="password" name="password" size="20" maxlength="20" /></p> <p><b>New Password:</b> <input type="password" name="password1" size="20" maxlength="20" /></p> <p><b>Confirm New Password:</b> <input type="password" name="password2" size="20" maxlength="20" /></p> </fieldset> <div align="center"> <input type="submit" name="submit" value="Change My Password" /></div> </form><!--End Form--> </td> </tr> </table> </body> </html>

    Read the article

  • What&rsquo;s New in ASP.NET 4.0 Part Two: WebForms and Visual Studio Enhancements

    - by Rick Strahl
    In the last installment I talked about the core changes in the ASP.NET runtime that I’ve been taking advantage of. In this column, I’ll cover the changes to the Web Forms engine and some of the cool improvements in Visual Studio that make Web and general development easier. WebForms The WebForms engine is the area that has received most significant changes in ASP.NET 4.0. Probably the most widely anticipated features are related to managing page client ids and of ViewState on WebForm pages. Take Control of Your ClientIDs Unique ClientID generation in ASP.NET has been one of the most complained about “features” in ASP.NET. Although there’s a very good technical reason for these unique generated ids - they guarantee unique ids for each and every server control on a page - these unique and generated ids often get in the way of client-side JavaScript development and CSS styling as it’s often inconvenient and fragile to work with the long, generated ClientIDs. In ASP.NET 4.0 you can now specify an explicit client id mode on each control or each naming container parent control to control how client ids are generated. By default, ASP.NET generates mangled client ids for any control contained in a naming container (like a Master Page, or a User Control for example). The key to ClientID management in ASP.NET 4.0 are the new ClientIDMode and ClientIDRowSuffix properties. ClientIDMode supports four different ClientID generation settings shown below. For the following examples, imagine that you have a Textbox control named txtName inside of a master page control container on a WebForms page. <%@Page Language="C#"      MasterPageFile="~/Site.Master"     CodeBehind="WebForm2.aspx.cs"     Inherits="WebApplication1.WebForm2"  %> <asp:Content ID="content"  ContentPlaceHolderID="content"               runat="server"               ClientIDMode="Static" >       <asp:TextBox runat="server" ID="txtName" /> </asp:Content> The four available ClientIDMode values are: AutoID This is the existing behavior in ASP.NET 1.x-3.x where full naming container munging takes place. <input name="ctl00$content$txtName" type="text"        id="ctl00_content_txtName" /> This should be familiar to any ASP.NET developer and results in fairly unpredictable client ids that can easily change if the containership hierarchy changes. For example, removing the master page changes the name in this case, so if you were to move a block of script code that works against the control to a non-Master page, the script code immediately breaks. Static This option is the most deterministic setting that forces the control’s ClientID to use its ID value directly. No naming container naming at all is applied and you end up with clean client ids: <input name="ctl00$content$txtName"         type="text" id="txtName" /> Note that the name property which is used for postback variables to the server still is munged, but the ClientID property is displayed simply as the ID value that you have assigned to the control. This option is what most of us want to use, but you have to be clear on that because it can potentially cause conflicts with other controls on the page. If there are several instances of the same naming container (several instances of the same user control for example) there can easily be a client id naming conflict. 
Note that if you assign Static to a data-bound control, like a list child control in templates, you do not get unique ids either, so for list controls where you rely on unique id for child controls, you’ll probably want to use Predictable rather than Static. I’ll write more on this a little later when I discuss ClientIDRowSuffix. Predictable The previous two values are pretty self-explanatory. Predictable however, requires some explanation. To me at least it’s not in the least bit predictable. MSDN defines this value as follows: This algorithm is used for controls that are in data-bound controls. The ClientID value is generated by concatenating the ClientID value of the parent naming container with the ID value of the control. If the control is a data-bound control that generates multiple rows, the value of the data field specified in the ClientIDRowSuffix property is added at the end. For the GridView control, multiple data fields can be specified. If the ClientIDRowSuffix property is blank, a sequential number is added at the end instead of a data-field value. Each segment is separated by an underscore character (_). The key that makes this value a bit confusing is that it relies on the parent NamingContainer’s ClientID to build its own ClientID value. This effectively means that the value is not predictable at all but rather very tightly coupled to the parent naming container’s ClientIDMode setting. For my simple textbox example, if the ClientIDMode property of the parent naming container (Page in this case) is set to “Predictable” you’ll get this: <input name="ctl00$content$txtName" type="text"         id="content_txtName" /> which gives an id that based on walking up to the currently active naming container (the MasterPage content container) and starting the id formatting from there downward. Think of this as a semi unique name that’s guaranteed unique only for the naming container. If, on the other hand, the Page is set to “AutoID” you get the following with Predictable on txtName: <input name="ctl00$content$txtName" type="text"         id="ctl00_content_txtName" /> The latter is effectively the same as if you specified AutoID because it inherits the AutoID naming from the Page and Content Master Page control of the page. But again - predictable behavior always depends on the parent naming container and how it generates its id, so the id may not always be exactly the same as the AutoID generated value because somewhere in the NamingContainer chain the ClientIDMode setting may be set to a different value. For example, if you had another naming container in the middle that was set to Static you’d end up effectively with an id that starts with the NamingContainers id rather than the whole ctl000_content munging. The most common use for Predictable is likely to be for data-bound controls, which results in each data bound item getting a unique ClientID. Unfortunately, even here the behavior can be very unpredictable depending on which data-bound control you use - I found significant differences in how template controls in a GridView behave from those that are used in a ListView control. For example, GridView creates clean child ClientIDs, while ListView still has a naming container in the ClientID, presumably because of the template container on which you can’t set ClientIDMode. Predictable is useful, but only if all naming containers down the chain use this setting. Otherwise you’re right back to the munged ids that are pretty unpredictable. 
Another property, ClientIDRowSuffix, can be used in combination with ClientIDMode of Predictable to force a suffix onto list client controls. For example: <asp:GridView runat="server" ID="gvItems"              AutoGenerateColumns="false"             ClientIDMode="Static"              ClientIDRowSuffix="Id">     <Columns>     <asp:TemplateField>         <ItemTemplate>             <asp:Label runat="server" id="txtName"                        Text='<%# Eval("Name") %>'                   ClientIDMode="Predictable"/>         </ItemTemplate>     </asp:TemplateField>     <asp:TemplateField>         <ItemTemplate>         <asp:Label runat="server" id="txtId"                     Text='<%# Eval("Id") %>'                     ClientIDMode="Predictable" />         </ItemTemplate>     </asp:TemplateField>     </Columns>  </asp:GridView> generates client Ids inside of a column in the master page described earlier: <td>     <span id="txtName_0">Rick</span> </td> where the value after the underscore is the ClientIDRowSuffix field - in this case “Id” of the item data bound to the control. Note that all of the child controls require ClientIDMode=”Predictable” in order for the ClientIDRowSuffix to be applied, and the parent GridView controls need to be set to Static either explicitly or via Naming Container inheritance to give these simple names. It’s a bummer that ClientIDRowSuffix doesn’t work with Static to produce this automatically. Another real problem is that other controls process the ClientIDMode differently. For example, a ListView control processes the Predictable ClientIDMode differently and produces the following with the Static ListView and Predictable child controls: <span id="ctrl0_txtName_0">Rick</span> I couldn’t even figure out a way using ClientIDMode to get a simple ID that also uses a suffix short of falling back to manually generated ids using <%= %> expressions instead. Given the inconsistencies inside of list controls using <%= %>, ids for the ListView might not be a bad idea anyway. Inherit The final setting is Inherit, which is the default for all controls except Page. This means that controls by default inherit the parent naming container’s ClientIDMode setting. For more detailed information on ClientID behavior and different scenarios you can check out a blog post of mine on this subject: http://www.west-wind.com/weblog/posts/54760.aspx. ClientID Enhancements Summary The ClientIDMode property is a welcome addition to ASP.NET 4.0. To me this is probably the most useful WebForms feature as it allows me to generate clean IDs simply by setting ClientIDMode="Static" on either the page or inside of Web.config (in the Pages section) which applies the setting down to the entire page which is my 95% scenario. For the few cases when it matters - for list controls and inside of multi-use user controls or custom server controls) - I can use Predictable or even AutoID to force controls to unique names. For application-level page development, this is easy to accomplish and provides maximum usability for working with client script code against page controls. ViewStateMode Another area of large criticism for WebForms is ViewState. ViewState is used internally by ASP.NET to persist page-level changes to non-postback properties on controls as pages post back to the server. 
It’s a useful mechanism that works great for the overall mechanics of WebForms, but it can also cause all sorts of overhead for page operation as ViewState can very quickly get out of control and consume huge amounts of bandwidth in your page content. ViewState can also wreak havoc with client-side scripting applications that modify control properties that are tracked by ViewState, which can produce very unpredictable results on a Postback after client-side updates. Over the years in my own development, I’ve often turned off ViewState on pages to reduce overhead. Yes, you lose some functionality, but you can easily implement most of the common functionality in non-ViewState workarounds. Relying less on heavy ViewState controls and sticking with simpler controls or raw HTML constructs avoids getting around ViewState problems. In ASP.NET 3.x and prior, it wasn’t easy to control ViewState - you could turn it on or off and if you turned it off at the page or web.config level, you couldn’t turn it back on for specific controls. In short, it was an all or nothing approach. With ASP.NET 4.0, the new ViewStateMode property gives you more control. It allows you to disable ViewState globally either on the page or web.config level and then turn it back on for specific controls that might need it. ViewStateMode only works when EnableViewState="true" on the page or web.config level (which is the default). You can then use ViewStateMode of Disabled, Enabled or Inherit to control the ViewState settings on the page. If you’re shooting for minimal ViewState usage, the ideal situation is to set ViewStateMode to disabled on the Page or web.config level and only turn it back on particular controls: <%@Page Language="C#"      CodeBehind="WebForm2.aspx.cs"     Inherits="Westwind.WebStore.WebForm2"        ClientIDMode="Static"                ViewStateMode="Disabled"     EnableViewState="true"  %> <!-- this control has viewstate  --> <asp:TextBox runat="server" ID="txtName"  ViewStateMode="Enabled" />       <!-- this control has no viewstate - it inherits  from parent container --> <asp:TextBox runat="server" ID="txtAddress" /> Note that the EnableViewState="true" at the Page level isn’t required since it’s the default, but it’s important that the value is true. ViewStateMode has no effect if EnableViewState="false" at the page level. The main benefit of ViewStateMode is that it allows you to more easily turn off ViewState for most of the page and enable only a few key controls that might need it. For me personally, this is a perfect combination as most of my WebForm apps can get away without any ViewState at all. But some controls - especially third party controls - often don’t work well without ViewState enabled, and now it’s much easier to selectively enable controls rather than the old way, which required you to pretty much turn off ViewState for all controls that you didn’t want ViewState on. Inline HTML Encoding HTML encoding is an important feature to prevent cross-site scripting attacks in data entered by users on your site. In order to make it easier to create HTML encoded content, ASP.NET 4.0 introduces a new Expression syntax using <%: %> to encode string values. 
The encoding expression syntax looks like this: <%: "<script type='text/javascript'>" +     "alert('Really?');</script>" %> which produces properly encoded HTML: &lt;script type=&#39;text/javascript&#39; &gt;alert(&#39;Really?&#39;);&lt;/script&gt; Effectively this is a shortcut to: <%= HttpUtility.HtmlEncode( "<script type='text/javascript'>" + "alert('Really?');</script>") %> Of course the <%: %> syntax can also evaluate expressions just like <%= %> so the more common scenario applies this expression syntax against data your application is displaying. Here’s an example displaying some data model values: <%: Model.Address.Street %> This snippet shows displaying data from your application’s data store or more importantly, from data entered by users. Anything that makes it easier and less verbose to HtmlEncode text is a welcome addition to avoid potential cross-site scripting attacks. Although I listed Inline HTML Encoding here under WebForms, anything that uses the WebForms rendering engine including ASP.NET MVC, benefits from this feature. ScriptManager Enhancements The ASP.NET ScriptManager control in the past has introduced some nice ways to take programmatic and markup control over script loading, but there were a number of shortcomings in this control. The ASP.NET 4.0 ScriptManager has a number of improvements that make it easier to control script loading and addresses a few of the shortcomings that have often kept me from using the control in favor of manual script loading. The first is the AjaxFrameworkMode property which finally lets you suppress loading the ASP.NET AJAX runtime. Disabled doesn’t load any ASP.NET AJAX libraries, but there’s also an Explicit mode that lets you pick and choose the library pieces individually and reduce the footprint of ASP.NET AJAX script included if you are using the library. There’s also a new EnableCdn property that forces any script that has a new WebResource attribute CdnPath property set to a CDN supplied URL. If the script has this Attribute property set to a non-null/empty value and EnableCdn is enabled on the ScriptManager, that script will be served from the specified CdnPath. [assembly: WebResource(    "Westwind.Web.Resources.ww.jquery.js",    "application/x-javascript",    CdnPath =  "http://mysite.com/scripts/ww.jquery.min.js")] Cool, but a little too static for my taste since this value can’t be changed at runtime to point at a debug script as needed, for example. Assembly names for loading scripts from resources can now be simple names rather than fully qualified assembly names, which make it less verbose to reference scripts from assemblies loaded from your bin folder or the assembly reference area in web.config: <asp:ScriptManager runat="server" id="Id"          EnableCdn="true"         AjaxFrameworkMode="disabled">     <Scripts>         <asp:ScriptReference          Name="Westwind.Web.Resources.ww.jquery.js"         Assembly="Westwind.Web" />     </Scripts>        </asp:ScriptManager> The ScriptManager in 4.0 also supports script combining via the CompositeScript tag, which allows you to very easily combine scripts into a single script resource served via ASP.NET. Even nicer: You can specify the URL that the combined script is served with. 
Check out the following script manager markup that combines several static file scripts and a script resource into a single ASP.NET served resource from a static URL (allscripts.js): <asp:ScriptManager runat="server" id="Id"          EnableCdn="true"         AjaxFrameworkMode="disabled">     <CompositeScript          Path="~/scripts/allscripts.js">         <Scripts>             <asp:ScriptReference                    Path="~/scripts/jquery.js" />             <asp:ScriptReference                    Path="~/scripts/ww.jquery.js" />             <asp:ScriptReference            Name="Westwind.Web.Resources.editors.js"                 Assembly="Westwind.Web" />         </Scripts>     </CompositeScript> </asp:ScriptManager> When you render this into HTML, you’ll see a single script reference in the page: <script src="scripts/allscripts.debug.js"          type="text/javascript"></script> All you need to do to make this work is ensure that allscripts.js and allscripts.debug.js exist in the scripts folder of your application - they can be empty but the file has to be there. This is pretty cool, but you want to be real careful that you use unique URLs for each combination of scripts you combine or else browser and server caching will easily screw you up royally. The script manager also allows you to override native ASP.NET AJAX scripts now as any script references defined in the Scripts section of the ScriptManager trump internal references. So if you want custom behavior or you want to fix a possible bug in the core libraries that normally are loaded from resources, you can now do this simply by referencing the script resource name in the Name property and pointing at System.Web for the assembly. Not a common scenario, but when you need it, it can come in real handy. Still, there are a number of shortcomings in this control. For one, the ScriptManager and ClientScript APIs still have no common entry point so control developers are still faced with having to check and support both APIs to load scripts so that controls can work on pages that do or don’t have a ScriptManager on the page. The CdnUrl is static and compiled in, which is very restrictive. And finally, there’s still no control over where scripts get loaded on the page - ScriptManager still injects scripts into the middle of the HTML markup rather than in the header or optionally the footer. This, in turn, means there is little control over script loading order, which can be problematic for control developers. MetaDescription, MetaKeywords Page Properties There are also a number of additional Page properties that correspond to some of the other features discussed in this column: ClientIDMode, ClientTarget and ViewStateMode. Another minor but useful feature is that you can now directly access the MetaDescription and MetaKeywords properties on the Page object to set the corresponding meta tags programmatically. Updating these values programmatically previously required either <%= %> expressions in the page markup or dynamic insertion of literal controls into the page. 
You can now just set these properties programmatically on the Page object in any Control derived class on the page or the Page itself: Page.MetaKeywords = "ASP.NET,4.0,New Features"; Page.MetaDescription = "This article discusses the new features in ASP.NET 4.0"; Note, that there’s no corresponding ASP.NET tag for the HTML Meta element, so the only way to specify these values in markup and access them is via the @Page tag: <%@Page Language="C#"      CodeBehind="WebForm2.aspx.cs"     Inherits="Westwind.WebStore.WebForm2"      ClientIDMode="Static"                MetaDescription="Article that discusses what's                      new in ASP.NET 4.0"     MetaKeywords="ASP.NET,4.0,New Features" %> Nothing earth shattering but quite convenient. Visual Studio 2010 Enhancements for Web Development For Web development there are also a host of editor enhancements in Visual Studio 2010. Some of these are not Web specific but they are useful for Web developers in general. Text Editors Throughout Visual Studio 2010, the text editors have all been updated to a new core engine based on WPF which provides some interesting new features for various code editors including the nice ability to zoom in and out with Ctrl-MouseWheel to quickly change the size of text. There are many more API options to control the editor and although Visual Studio 2010 doesn’t yet use many of these features, we can look forward to enhancements in add-ins and future editor updates from the various language teams that take advantage of the visual richness that WPF provides to editing. On the negative side, I’ve noticed that occasionally the code editor and especially the HTML and JavaScript editors will lose the ability to use various navigation keys like arrows, back and delete keys, which requires closing and reopening the documents at times. This issue seems to be well documented so I suspect this will be addressed soon with a hotfix or within the first service pack. Overall though, the code editors work very well, especially given that they were re-written completely using WPF, which was one of my big worries when I first heard about the complete redesign of the editors. Multi-Targeting Visual Studio now targets all versions of the .NET framework from 2.0 forward. You can use Visual Studio 2010 to work on your ASP.NET 2, 3.0 and 3.5 applications which is a nice way to get your feet wet with the new development environment without having to make changes to existing applications. It’s nice to have one tool to work in for all the different versions. Multi-Monitor Support One cool feature of Visual Studio 2010 is the ability to drag windows out of the Visual Studio environment and out onto the desktop including onto another monitor easily. Since Web development often involves working with a host of designers at the same time - visual designer, HTML markup window, code behind and JavaScript editor - it’s really nice to be able to have a little more screen real estate to work on each of these editors. Microsoft made a welcome change in the environment. IntelliSense Snippets for HTML and JavaScript Editors The HTML and JavaScript editors now finally support IntelliSense scripts to create macro-based template expansions that have been in the core C# and Visual Basic code editors since Visual Studio 2005. Snippets allow you to create short XML-based template definitions that can act as static macros or real templates that can have replaceable values that can be embedded into the expanded text. 
The XML syntax for these snippets is straight forward and it’s pretty easy to create custom snippets manually. You can easily create snippets using XML and store them in your custom snippets folder (C:\Users\rstrahl\Documents\Visual Studio 2010\Code Snippets\Visual Web Developer\My HTML Snippets and My JScript Snippets), but it helps to use one of the third-party tools that exist to simplify the process for you. I use SnippetEditor, by Bill McCarthy, which makes short work of creating snippets interactively (http://snippeteditor.codeplex.com/). Note: You may have to manually add the Visual Studio 2010 User specific Snippet folders to this tool to see existing ones you’ve created. Code snippets are some of the biggest time savers and HTML editing more than anything deals with lots of repetitive tasks that lend themselves to text expansion. Visual Studio 2010 includes a slew of built-in snippets (that you can also customize!) and you can create your own very easily. If you haven’t done so already, I encourage you to spend a little time examining your coding patterns and find the repetitive code that you write and convert it into snippets. I’ve been using CodeRush for this for years, but now you can do much of the basic expansion natively for HTML and JavaScript snippets. jQuery Integration Is Now Native jQuery is a popular JavaScript library and recently Microsoft has recently stated that it will become the primary client-side scripting technology to drive higher level script functionality in various ASP.NET Web projects that Microsoft provides. In Visual Studio 2010, the default full project template includes jQuery as part of a new project including the support files that provide IntelliSense (-vsdoc files). IntelliSense support for jQuery is now also baked into Visual Studio 2010, so unlike Visual Studio 2008 which required a separate download, no further installs are required for a rich IntelliSense experience with jQuery. Summary ASP.NET 4.0 brings many useful improvements to the platform, but thankfully most of the changes are incremental changes that don’t compromise backwards compatibility and they allow developers to ease into the new features one feature at a time. None of the changes in ASP.NET 4.0 or Visual Studio 2010 are monumental or game changers. The bigger features are language and .NET Framework changes that are also optional. This ASP.NET and tools release feels more like fine tuning and getting some long-standing kinks worked out of the platform. It shows that the ASP.NET team is dedicated to paying attention to community feedback and responding with changes to the platform and development environment based on this feedback. If you haven’t gotten your feet wet with ASP.NET 4.0 and Visual Studio 2010, there’s no reason not to give it a shot now - the ASP.NET 4.0 platform is solid and Visual Studio 2010 works very well for a brand new release. Check it out. © Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET  

    Read the article

  • SSH new connection begins to hang (not reject or terminate) after a day or so on Ubuntu 13.04 server

    - by kross
    Recently we upgraded the server from 12.04 LTS server to 13.04. All was well, including after a reboot. With all packages updated we began to see a strange issue, ssh works for a day or so (unclear on timing) then a later request for SSH hangs (cannot ctrl+c, nothing). It is up and serving webserver traffic etc. Port 22 is open (ips etc altered slightly for posting): nmap -T4 -A x.acme.com Starting Nmap 6.40 ( http://nmap.org ) at 2013-09-12 16:01 CDT Nmap scan report for x.acme.com (69.137.56.18) Host is up (0.026s latency). rDNS record for 69.137.56.18: c-69-137-56-18.hsd1.tn.provider.net Not shown: 998 filtered ports PORT STATE SERVICE VERSION 22/tcp open ssh OpenSSH 6.1p1 Debian 4 (protocol 2.0) | ssh-hostkey: 1024 54:d3:e3:38:44:f4:20:a4:e7:42:49:d0:a7:f1:3e:21 (DSA) | 2048 dc:21:77:3b:f4:4e:74:d0:87:33:14:40:04:68:33:a6 (RSA) |_256 45:69:10:79:5a:9f:0b:f0:66:15:39:87:b9:a1:37:f7 (ECDSA) 80/tcp open http Jetty 7.6.2.v20120308 | http-title: Log in as a Bamboo user - Atlassian Bamboo |_Requested resource was http://x.acme.com/userlogin!default.action;jsessionid=19v135zn8cl1tgso28fse4d50?os_destination=%2Fstart.action Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel Service detection performed. Please report any incorrect results at http://nmap.org/submit/ . Nmap done: 1 IP address (1 host up) scanned in 12.89 seconds Here is the ssh -vvv: ssh -vvv x.acme.com OpenSSH_5.9p1, OpenSSL 0.9.8x 10 May 2012 debug1: Reading configuration data /Users/tfergeson/.ssh/config debug1: Reading configuration data /etc/ssh_config debug1: /etc/ssh_config line 20: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to x.acme.com [69.137.56.18] port 22. debug1: Connection established. debug3: Incorrect RSA1 identifier debug3: Could not load "/Users/tfergeson/.ssh/id_rsa" as a RSA1 public key debug1: identity file /Users/tfergeson/.ssh/id_rsa type 1 debug1: identity file /Users/tfergeson/.ssh/id_rsa-cert type -1 debug1: identity file /Users/tfergeson/.ssh/id_dsa type -1 debug1: identity file /Users/tfergeson/.ssh/id_dsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_6.1p1 Debian-4 debug1: match: OpenSSH_6.1p1 Debian-4 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.9 debug2: fd 3 setting O_NONBLOCK debug3: load_hostkeys: loading entries for host "x.acme.com" from file "/Users/tfergeson/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /Users/tfergeson/.ssh/known_hosts:10 debug3: load_hostkeys: loaded 1 keys debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],ssh-rsa debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: [email protected],[email protected],ssh-rsa,[email protected],[email protected],ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: 
kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256 debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 130/256 debug2: bits set: 503/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA dc:21:77:3b:f4:4e:74:d0:87:33:14:40:04:68:33:a6 debug3: load_hostkeys: loading entries for host "x.acme.com" from file "/Users/tfergeson/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /Users/tfergeson/.ssh/known_hosts:10 debug3: load_hostkeys: loaded 1 keys debug3: load_hostkeys: loading entries for host "69.137.56.18" from file "/Users/tfergeson/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /Users/tfergeson/.ssh/known_hosts:6 debug3: load_hostkeys: loaded 1 keys debug1: Host 'x.acme.com' is known and matches the RSA host key. 
debug1: Found key in /Users/tfergeson/.ssh/known_hosts:10 debug2: bits set: 493/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /Users/tfergeson/.ssh/id_rsa (0x7ff189c1d7d0) debug2: key: /Users/tfergeson/.ssh/id_dsa (0x0) debug1: Authentications that can continue: publickey debug3: start over, passed a different list publickey debug3: preferred publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /Users/tfergeson/.ssh/id_rsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Server accepts key: pkalg ssh-rsa blen 277 debug2: input_userauth_pk_ok: fp 3c:e5:29:6c:9d:27:d1:7d:e8:09:a2:e8:8e:6e:af:6f debug3: sign_and_send_pubkey: RSA 3c:e5:29:6c:9d:27:d1:7d:e8:09:a2:e8:8e:6e:af:6f debug1: read PEM private key done: type RSA debug1: Authentication succeeded (publickey). Authenticated to x.acme.com ([69.137.56.18]:22). debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting [email protected] debug1: Entering interactive session. debug2: callback start debug2: client_session2_setup: id 0 debug2: fd 3 setting TCP_NODELAY debug2: channel 0: request pty-req confirm 1 debug1: Sending environment. debug3: Ignored env ATLAS_OPTS debug3: Ignored env rvm_bin_path debug3: Ignored env TERM_PROGRAM debug3: Ignored env GEM_HOME debug3: Ignored env SHELL debug3: Ignored env TERM debug3: Ignored env CLICOLOR debug3: Ignored env IRBRC debug3: Ignored env TMPDIR debug3: Ignored env Apple_PubSub_Socket_Render debug3: Ignored env TERM_PROGRAM_VERSION debug3: Ignored env MY_RUBY_HOME debug3: Ignored env TERM_SESSION_ID debug3: Ignored env USER debug3: Ignored env COMMAND_MODE debug3: Ignored env rvm_path debug3: Ignored env COM_GOOGLE_CHROME_FRAMEWORK_SERVICE_PROCESS/USERS/tfergeson/LIBRARY/APPLICATION_SUPPORT/GOOGLE/CHROME_SOCKET debug3: Ignored env JPDA_ADDRESS debug3: Ignored env APDK_HOME debug3: Ignored env SSH_AUTH_SOCK debug3: Ignored env Apple_Ubiquity_Message debug3: Ignored env __CF_USER_TEXT_ENCODING debug3: Ignored env rvm_sticky_flag debug3: Ignored env MAVEN_OPTS debug3: Ignored env LSCOLORS debug3: Ignored env rvm_prefix debug3: Ignored env PATH debug3: Ignored env PWD debug3: Ignored env JAVA_HOME debug1: Sending env LANG = en_US.UTF-8 debug2: channel 0: request env confirm 0 debug3: Ignored env JPDA_TRANSPORT debug3: Ignored env rvm_version debug3: Ignored env M2_HOME debug3: Ignored env HOME debug3: Ignored env SHLVL debug3: Ignored env rvm_ruby_string debug3: Ignored env LOGNAME debug3: Ignored env M2_REPO debug3: Ignored env GEM_PATH debug3: Ignored env AWS_RDS_HOME debug3: Ignored env rvm_delete_flag debug3: Ignored env EC2_PRIVATE_KEY debug3: Ignored env RUBY_VERSION debug3: Ignored env SECURITYSESSIONID debug3: Ignored env EC2_CERT debug3: Ignored env _ debug2: channel 0: request shell confirm 1 debug2: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 I can hard reboot (only mac monitors at that location) and it will again be 
accessible. This now happens every single time. It is imperative that I get it sorted. The strange thing is that it behaves initially then starts to hang after several hours. I perused logs previously and nothing stood out. From the auth.log, I can see that it has allowed me in, but still I get nothing back on the client side: Sep 20 12:47:50 cbear sshd[25376]: Accepted publickey for tfergeson from 10.1.10.14 port 54631 ssh2 Sep 20 12:47:50 cbear sshd[25376]: pam_unix(sshd:session): session opened for user tfergeson by (uid=0) UPDATES: Still occurring even after setting UseDNS no and commenting out #session optional pam_mail.so standard noenv This does not appear to be a network/dns related issue, as all services running on the machine are as responsive and accessible as ever, with the exception of sshd. Any thoughts on where to start?

    Read the article

  • CodePlex Daily Summary for Monday, June 14, 2010

    CodePlex Daily Summary for Monday, June 14, 2010New ProjectsBD File Hash: BD File Hash is a convenient file hash and hash compare tool for Windows which currently works with MD5, SHA-1, and SHA-256 algorithms. FileScan: This is an application that searches through a drive or directory structure for files matching a filter. This project was converted from VB to ...genesis9: genesis9HeinanOS: HeinanOS is an operating system developed mainly in C++. HeinanOS is a light OS (1.44 MB image) with a lot of capabilites and many more are being ...MediaBrowserWS - Creates a Web Service for the popular MediaBrowser plugin: Creates a web service in Media Center for accessing your MediaBrowser collection. Allows for external devices (Tablets/phones/laptops) to access a ...MME: New Edition of Managed Menu Extensions for Visual Studio 2010 The Main goal of "MME" is to provide easy access to adding Right Click menus in the ...MVMMapper: Generate the ViewModel and its mapping to the Model when implementing MVVM in .NET. Developed using T4 templates. Current version supports Silver...ProjectArDotNet: Si te agarro te parto! Si te agarro te emperno no me importa que seas menor de edad!Scriptagility for DotNetNuke: Scriptagility is a DotNetNuke module for Javascript developers. This module provides dynamic client scripting infrastructure for developing javascr...simpleLinux Distro: SimpleLinux. is a Linux distributions that is easy to use. Simple Linux website: http://simplelinux.tkTag Cloud Control for asp.net: Tag Cloud Control for asp.net allows the user to display the most important keywords to display in tag cloud. Each Tag has it own navigation url to...thefreeimdb: fsadie qwUppityUp: UppityUp is a simple and light-weight tray application which monitors a remote server and shows a notification when it comes online. This is usefu...Vivid3D 2 - DirectX 10 3D ToolKit: The sequel to my first ever engine wrote several years ago. It is not based on it in anyway. VSIDev: VSI DevXTQXK_WORK: Actionscript 3.0东坡博客: 这是一个ASP。net mvc 2博客。New Releases.NET Extensions - Extension Methods Library: Release 2010.08: Added extension methods for Bitmap manipulation (scaling for now): - Bitmap.ScaleToSize() - Bitmap.ScaleToSizeProportional() - Bitmap.ScaleProport...Black Falcon Software's Database Data-Access-Layers: “SQLHELPER”, “ORAHELPER” - Handling Binary Data: See attached document...BTech Networking Library: BTech Networking Library: Same as pervious just new namespace, extended networking coming soon!!!Community Forums NNTP bridge: Community Forums NNTP Bridge V37: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has ad...Generic Entity Model 2: GEM2 build 54383: This is second BETA release of GEM2! Please see source code change sets for updates! Following implementation is not included in this release: My...Hades: Projet Hadès - Official Demo - Version 0.1.0 Beta: ---------------------------------------------------------------------------- - Projet Hadès - Official Demo - Version 0.1.0 Beta ------------------...HeinanOS: HeinanOS M1 Source Code: You can download HeinanOS M1 Source Code and contribute to HeinanOS development! Be aware that you should not use this code for your own systems! ...HeinanOS: Milestone 1: This is the first major release for HeinanOS 1.0 Please note this is a PRE-RELEASE! 
This release includes the following features: -Bootable DOS-...HKGolden Express: HKGoldenExpress (Build 201006131900): New features: (None) Bug fix: Incorrect message submit date of message/ replies. (Note: Showing message submit date is enabled since Build 20100...HKGolden Express: HKGoldenExpress (Build 201006140110): New features: (None) Bug fix: (None) Improvements: (None) Other changes: Set time zone of message date as Hong Kong. Adjusted the format of messa...MediaCoder.NET: MediaCoder.NET v1.0 Beta 1.5: Installer file for MediaCoder.NET v1.0 beta 1.5. Now converts multiple files.MME: First release: Features of this release 1. One installer MME.msi. However you can also install MMEMenuManagerSetup.vsix which installs a project template that e...MSBuild Launch Pad (mPad): 1.1 Beta 1: Platform selection box is added.MVMMapper: MVMMapper Release v 1.0.1: This release has no downloadable documentation. Please use the Documentation section to get started.NginxTray: NginxTray 0.7 RC2: NginxTray 0.7 RC2PowerAuras: PowerAuras-3.0.0K-beta3: New Auras: Item Name Equipment Slot Tracking Changes from beta1 5 new aura textures Fixed Tracking bug Added graphical equipment slot sele...PowerAuras: PowerAuras-3.0.0K-beta4: New Auras: Item Name Equipment Slot Tracking Changes from beta1 5 new aura textures Fixed Tracking bug Added graphical equipment slot sele...Scriptagility for DotNetNuke: Scriptagility 1.0 (Beta): Initial public release please evaluate and feedbackSharpDevelop: SharpDevelop 4.0 Beta 1: Release notes: http://community.sharpdevelop.net/forums/t/11388.aspxsimpleLinux Distro: Project X3: This is an example of download for simpleLinuxSOAPI - StackOverflow API Parser/Wrapper Generator: SOAPI Beta 3: The SOAPI Beta 3 download will be made availabe later today when the initial documentation is complete. The previously available Beta 1 download h...Sofa: Initial release V1.0: This is the first release of Sofa. As it is made of code being previously used, as we tested it is a stable release. But bugs are always possible,...Tag Cloud Control for asp.net: Tag Cloud Control for asp.net: Tag Cloud Control for asp.net allows the user to display the most important keywords to display in tag cloud. Each Tag has it own navigation url to...UppityUp: UppityUp v0.1: First functional version, supports monitoring availability by ping (ICMP) requests. Fit for general use. Consists of one standalone .exe file - no...VCC: Latest build, v2.1.30613.0: Automatic drop of latest buildWindStyle ExifInfo for Windows Live Writer: 1.1.0.0: Add: Multiple Language(English and Simplified Chinese); Add: Insert multiple files; Fix: Error when insert pictures without Exif info; Update: Icon...Work Recorder - Hold on own time!: WorkRecorder 1.2: +Add a whole day chartXsltDb - DotNetNuke Module Builder: 01.01.24: Syntax highlighting delivered!New samples for RadControls. 
On single page you can find RadTreeView, RadRating, RadChart, RadFormDecorator, RadEdito...xUnit.net Contrib: xunitcontrib 0.4 (ReSharper 5.0 RTM + dotCover): xunitcontrib release 0.4 (ReSharper runner) This release provides a test runner plugin for Resharper 5.0, 4.5 and 4.1, targetting all versions of x...Most Popular ProjectsCommunity Forums NNTP bridgeRIA Services EssentialsNeatUploadBxf (Basic XAML Framework)Agile Personal Development Methodology.NET Transactional File ManagerSOLID by exampleASP.NET MVC Time PlannerWEI ShareSiverlight ProjectMost Active ProjectsjQuery Library for SharePoint Web Servicespatterns & practices – Enterprise LibraryNB_Store - Free DotNetNuke Ecommerce Catalog ModuleRhyduino - Arduino and Managed CodeCommunity Forums NNTP bridgeCassandraemonBlogEngine.NETLightweight Fluent WorkflowMediaCoder.NETAndrew's XNA Helpers

    Read the article

  • CodePlex Daily Summary for Sunday, May 23, 2010

    CodePlex Daily Summary for Sunday, May 23, 2010New ProjectsA2Command: Apple 2 port of CBM-Command (http://cbmcommand.codeplex.com)AgUnit: AgUnit is a plugin for Jetbrains ReSharper (R#) that allows you to run and debug Silverlight unit tests from within Visual Studio.BSonPosh Powershell Module: A collection of useful Powershell functions I have written and collected over the years. It is a Powershell v2 Module composed of mostly scripts.DB Restriker: Simple tool for lookup, parsing, searching some standard databases using wildcards and pattern recognition.Entity Framework Repository & Unit of Work Template: T4 Template for Entity Framework 4 for creating a data access layer using the repository and unit of work patterns. Designed to work well with dep...Fiction Catalog: A catalog project designed to store information about fictional literature.Giving a Presentation: Useful for people doing presentations, this application hides desktop icons, disables screensaver, closes chosen programs when presentation starts,...glueless: Glueless is a local message bus which allows architect to design highly decoupled systems and applications. Glueless is a step beyond dependency i...HtmlCodeIt: Take any code and format it so that it can be viewed properly on a web browser, blog post or website.just testproject :): just have a test!KanbanTaskboard: The aim of the project is to design and implement a functional prototype for visualizing and operating a multi-platform virtual "Kanban Taskboard”Life System: Life SystemOaSys Project: Project Oasys is a project that aims to help solve desertification. Scoring of pingPong Game: Scoring of pingPong GameSilverlight Web Comic: The Silverlight Web Comic makes easier for the people create your own comic with your own pictures o drawings, and add the globes of text like the ...TickSharp: C# Wrapper for http://TickSpot.com RESTful API.Traductor: El Traductor es una aplicación de escritorio para traducción de frases entre distintos idiomas basada en la plataforma Silverlight Out Of Browser y...WatchersNET.SkinObjects.ModulActionsMenu: Displays the Module Actions Menu as a Unsorted CSS Menu.xxfd1r4w96: testingNew ReleasesAgUnit: AgUnit 0.1: Initial release of AgUnit. Copy the extracted files from AgUnit-0.1.zip into the "Bin\Plugins\" folder of your ReSharper installation (default C:...ASP.NET MVC | SCAFFOLD: ASP.NET MVC SCAFFOLD - Beta 1.0: Release versão betaBizTalk Server 2006 Documenter: Documenter_v3.4.0.0: This is the new release of the documenter which has the following highlights Support for 64 bit systems Support for SxS scenarios (so now the sys...CassiniDev - Cassini 3.5/4.0 Developers Edition: CassiniDev 3.5.1 Beta 2- VS 2008 Replacement: The CassiniDev Visual Studio build is a fully compatibly Visual Studio 2008/2010 Development server drop-in replacement with all CassiniDev enhance...CBM-Command: 2010-05-22 Beta: Release Notes - 2010-05-22 BetaNew Features Simple text file viewer. 
Now when you use SHIFT-RETURN to open a file, it will ask if you want to view...Easy Validation: Documentation: Documentation for easyVal was created and presented at University of Texas at Austin in May of 2010.Entity Framework Repository & Unit of Work Template: 1.0: Initial ReleaseFrotz.NET: FrotzNet 1.0 beta: Many, many changes, including: - Got Adaptive Palette working for graphics - Got undo working - Implemented all zcodes - Added scripting as well as...Giving a Presentation: CTP: This release includes basic extensibility infrastructure and three extensions: hides desktop icons, disables screensaver, closes chosen programs wh...Gov 2.0 Kit: SharePoint 2010 MyPeeps Mysite Accelerators: SharePoint 2010 MyPeeps Mysite Accelerators. Attached are the installation and documentations files.HKGolden Express: HKGoldenExpress (Build 201005221900): New features: (None) Bug fix: Hong Kong special characters now can be posted without encoding problem. Improvements: (None) Other changes: (None) K...Intellibox - A WPF auto complete textbox search control: Beta 2: Updated the namespace of the Intellibox control from "System.Windows.Controls" to "FeserWard.Controls". Empty binding Path properties now work on...MDownloader: MDownloader-0.15.14.59111: Fixed DepositFile provider. Fixed FileFactory provider. Added simple fakeness detector (can check if .rar, .zip, .7z files have valid signature...Mute4: V1: Initial version of Mute4NLog - Advanced .NET Logging: Nightly Build 2010.05.22.003: Changes since the last build:No changes. Unit test results:Passed 191/191 (100%) Passed 191/191 (100%) Passed 214/214 (100%) Passed 216/216 (100%)...NSIS Autorun: NSIS Autorun 0.1.9: This release includes source code, executable binaries and example materials.Silverlight Gantt Chart: Silverlight Gantt Chart 1.3 (SL4): The latest release mainly makes the Gantt Chart useful in Silverlight 4 applications.SqlServerExtensions: V 0.2 beta: V 0.2 Beta release: New features available TrimStart - trim leading characters TrimEnd - trim trailing characters Remove - remove characters f...Traductor: Version 3.1: Nuevo en esta versión: El Traductor ahora permite escoger entre los motores de Microsoft y Google. El Text to Speech is es ahora habilitado por...VCC: Latest build, v2.1.30522.0: Automatic drop of latest buildVDialer Add-In for Outlook 2007 & 2010 - Dial your Vonage phone from Outlook: VDialer Add-In 1.0.3: This release adds new features related to Journal and use of Vonage API Changes in version 1.0.3 Added configurable option to automatically open J...WatchersNET.SkinObjects.ModulActionsMenu: ModulActionsMenu 01.00.00: First Release For Informations How To Install, the Skin Object Read the DocumentationMost Popular ProjectsCodeComment.NETRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryPHPExcelMicrosoft SQL Server Community & SamplesMost Active ProjectsRawrpatterns & practices – Enterprise LibraryCaliburn: An Application Framework for WPF and Silverlightpatterns & practices: Windows Azure Security GuidanceCassiniDev - Cassini 3.5/4.0 Developers EditionGMap.NET - Great Maps for Windows Forms & PresentationNB_Store - Free DotNetNuke Ecommerce Catalog ModuleSQL Server PowerShell ExtensionsBlogEngine.NETCodeReview

    Read the article

  • Running Solaris 11 as a control domain on a T2000

    - by jsavit
    There is increased adoption of Oracle Solaris 11, and many customers are deploying it on systems that previously ran Solaris 10. That includes older T1-processor based systems like T1000 and T2000. Even though they are old (from 2005) and don't have the performance of current SPARC servers, they are still functional, stable servers that customers continue to operate. One reason to install Solaris 11 on them is that older machines are attractive for testing OS upgrades before updating current, production systems. Normally this does not present a challenge, because Solaris 11 runs on any T-series or M-series SPARC server. One scenario adds a complication: running Solaris 11 in a control domain on a T1000 or T2000 hosting logical domains. Solaris 11 pre-installed Oracle VM Server for SPARC incompatible with T1 Unlike Solaris 10, Solaris 11 comes with Oracle VM Server for SPARC preinstalled. The ldomsmanager package contains the logical domains manager for Oracle VM Server for SPARC 2.2, which requires a SPARC T2, T2+, T3, or T4 server. It does not work with T1-processor systems, which are only supported by LDoms Manager 1.2 and earlier. The following screenshot shows what happens (bold font) if you try to use Oracle VM Server for SPARC 2.x commands in a Solaris 11 control domain. The commands were issued in a control domain on a T2000 that previously ran Solaris 10. We also display the version of the logical domains manager installed in Solaris 11: # psrinfo -vp The physical processor has 4 virtual processors (0-3) UltraSPARC-T1 (chipid 0, clock 1200 MHz) # prtconf|grep T SUNW,Sun-Fire-T200 # ldm -V Failed to connect to logical domain manager: Connection refused # pkg info ldomsmanager Name: system/ldoms/ldomsmanager Summary: Logical Domains Manager Description: LDoms Manager - Virtualization for SPARC T-Series Category: System/Virtualization State: Installed Publisher: solaris Version: 2.2.0.0 Build Release: 5.11 Branch: 0.175.0.8.0.3.0 Packaging Date: May 25, 2012 10:20:48 PM Size: 2.86 MB FMRI: pkg://solaris/system/ldoms/ldomsmanager@2.2.0.0,5.11-0.175.0.8.0.3.0:20120525T222048Z The 2.2 version of the logical domains manager will have to be removed, and 1.2 installed, in order to use this as a control domain. Preparing to change - create a new boot environment Before doing anything else, let's create a new boot environment: # beadm list BE Active Mountpoint Space Policy Created -- ------ ---------- ----- ------ ------- solaris NR / 2.14G static 2012-09-25 10:32 # beadm create solaris-1 # beadm activate solaris-1 # beadm list BE Active Mountpoint Space Policy Created -- ------ ---------- ----- ------ ------- solaris N / 4.82M static 2012-09-25 10:32 solaris-1 R - 2.14G static 2012-09-29 11:40 # init 0 Normally an init 6 to reboot would have been sufficient, but in the next step I reset the system anyway in order to put the system in factory default mode for a "clean" domain configuration. Preparing to change - reset to factory default There was a leftover domain configuration on the T2000, so I reset it to the factory install state. Since the ldm command isn't working yet, this can't be done from the control domain, so I did it by logging on to the service processor: $ ssh -X admin@t2000-sc Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved.
    Oracle Advanced Lights Out Manager CMT v1.7.9 Please login: admin Please Enter password: ******** sc> showhost Sun-Fire-T2000 System Firmware 6.7.10 2010/07/14 16:35 Host flash versions: OBP 4.30.4.b 2010/07/09 13:48 Hypervisor 1.7.3.c 2010/07/09 15:14 POST 4.30.4.b 2010/07/09 14:24 sc> bootmode config="factory-default" sc> poweroff Are you sure you want to power off the system [y/n]? y SC Alert: SC Request to Power Off Host. SC Alert: Host system has shut down. sc> poweron SC Alert: Host System has Reset At this point I rebooted into the new Solaris 11 boot environment, and Solaris commands showed it was running on the factory default configuration of a single domain owning all 32 CPUs and 32GB of RAM (that's what it looked like in 2005). # psrinfo -vp The physical processor has 8 cores and 32 virtual processors (0-31) The core has 4 virtual processors (0-3) The core has 4 virtual processors (4-7) The core has 4 virtual processors (8-11) The core has 4 virtual processors (12-15) The core has 4 virtual processors (16-19) The core has 4 virtual processors (20-23) The core has 4 virtual processors (24-27) The core has 4 virtual processors (28-31) UltraSPARC-T1 (chipid 0, clock 1200 MHz) # prtconf|grep Mem Memory size: 32640 Megabytes Note that the older processor has 4 virtual CPUs per core, while current processors have 8 per core. Remove ldomsmanager 2.2 and install the 1.2 version The Solaris 11 pkg command is now used to remove the 2.2 version that shipped with Solaris 11: # pkg uninstall ldomsmanager Packages to remove: 1 Create boot environment: No Create backup boot environment: No Services to change: 2 PHASE ACTIONS Removal Phase 130/130 PHASE ITEMS Package State Update Phase 1/1 Package Cache Update Phase 1/1 Image State Update Phase 2/2 Finally, LDoms 1.2 was installed via its install script, the same way it was done years ago: # unzip LDoms-1_2-Integration-10.zip # cd LDoms-1_2-Integration-10/Install/ # ./install-ldm Welcome to the LDoms installer. You are about to install the Logical Domains Manager package that will enable you to create, destroy and control other domains on your system. Given the capabilities of the LDoms domain manager, you can now change the security configuration of this Solaris instance using the Solaris Security Toolkit. ... ... normal install messages omitted ... The Solaris Security Toolkit applies to Solaris 10, and cannot be used in Solaris 11 (in which several things hardened by the Toolkit are already hardened by default), so answer b in the choice below: You are about to install the Logical Domains Manager package that will enable you to create, destroy and control other domains on your system. Given the capabilities of the LDoms domain manager, you can now change the security configuration of this Solaris instance using the Solaris Security Toolkit. Select a security profile from this list: a) Hardened Solaris configuration for LDoms (recommended) b) Standard Solaris configuration c) Your custom-defined Solaris security configuration profile Enter a, b, or c [a]: b ... other install messages omitted for brevity... After the install I ensure that the necessary services are enabled, and verify the version of the installed LDoms Manager: # svcs ldmd STATE STIME FMRI online 22:00:36 svc:/ldoms/ldmd:default # svcs vntsd STATE STIME FMRI disabled Aug_19 svc:/ldoms/vntsd:default # ldm -V Logical Domain Manager (v 1.2-debug) Hypervisor control protocol v 1.3 Using Hypervisor MD v 1.1 System PROM: Hypervisor v. 1.7.3.
    @(#)Hypervisor 1.7.3.c 2010/07/09 15:14 OpenBoot v. 4.30.4. @(#)OBP 4.30.4.b 2010/07/09 13:48 Set up control domain and domain services At this point we have a functioning LDoms 1.2 environment that can be configured in the usual fashion. One difference is that LDoms 1.2 went into 'delayed configuration' mode (as expected) during initial configuration, before rebooting the control domain. Another minor difference with a Solaris 11 control domain is that you define virtual switches using the 'vanity name' of the network interface, rather than the hardware driver name as in Solaris 10. # ldm list ------------------------------------------------------------------------------ Notice: the LDom Manager is running in configuration mode. Configuration and resource information is displayed for the configuration under construction; not the current active configuration. The configuration being constructed will only take effect after it is downloaded to the system controller and the host is reset. ------------------------------------------------------------------------------ NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-c-- SP 32 32640M 3.2% 4d 2h 50m # ldm add-vdiskserver primary-vds0 primary # ldm add-vconscon port-range=5000-5100 primary-vcc0 primary # ldm add-vswitch net-dev=net0 primary-vsw0 primary # ldm set-mau 2 primary # ldm set-vcpu 8 primary # ldm set-memory 4g primary # ldm add-config initial # ldm list-spconfig factory-default initial [current] That's it, really. After reboot, we are ready to install guest domains. Summary - new wine in old bottles This example shows that (new) Solaris 11 can be installed on (old) T2000 servers and used as a control domain. The main activity is to remove the preinstalled Oracle VM Server for SPARC 2.2 and install Logical Domains 1.2 - the last version of LDoms to support T1-processor systems. I tested Solaris 10 and Solaris 11 guest domains running on this server and they worked without any surprises. This is a viable way to get further into Solaris 11 adoption, even on older T-series equipment.
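    As a pointer for that next step, creating a guest domain under the reinstalled LDoms 1.2 stack follows the usual pattern. The sketch below is illustrative rather than taken from the original post - the domain name ldg1, the sizing, and the disk image path are assumptions:

      # ldm add-domain ldg1
      # ldm add-vcpu 8 ldg1
      # ldm add-memory 4g ldg1
      # ldm add-vnet vnet0 primary-vsw0 ldg1
      # ldm add-vdsdev /ldoms/ldg1-disk0.img vol1@primary-vds0
      # ldm add-vdisk vdisk0 vol1@primary-vds0 ldg1
      # ldm bind-domain ldg1
      # ldm start-domain ldg1
      # svcadm enable vntsd    <- the console service shown disabled above
      # telnet localhost 5000  <- guest console, from the 5000-5100 vcc range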

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service

    - by Elton Stoneman
    We're in the process of delivering an enabling project to expose on-premise WCF services securely to Internet consumers. The Azure Service Bus Relay is doing the clever stuff: we register our on-premise service with Azure, consumers call into our .servicebus.windows.net namespace, and their requests are relayed and serviced on-premise. In theory it's all wonderfully simple; by using the relay we get lots of protocol options, free HTTPS and load balancing, and by integrating to ACS we get plenty of security options. Part of our delivery is a suite of sample consumers for the service - .NET, jQuery, PHP - and this set of posts will cover setting up the service and the consumers. Part 1: Exposing the on-premise service In theory, this is ultra-straightforward. In practice, and on a dev laptop it is - but in a corporate network with firewalls and proxies, it isn't, so we'll walk through some of the pitfalls. Note that I'm using the "old" Azure portal which will soon be out of date, but the new shiny portal should have the same steps available and be easier to use. We start with a simple WCF service which takes a string as input, reverses the string and returns it. The Part 1 version of the code is on GitHub here: IPASBR Part 1. Configuring Azure Service Bus Start by logging into the Azure portal and registering a Service Bus namespace which will be our endpoint in the cloud. Give it a globally unique name, set it up somewhere near you (if you’re in Europe, remember Europe (North) is Ireland, and Europe (West) is the Netherlands), and enable ACS integration by ticking "Access Control" as a service: Authenticating and authorizing to ACS When we try to register our on-premise service as a listener for the Service Bus endpoint, we need to supply credentials, which means only trusted service providers can act as listeners. We can use the default "owner" credentials, but that has admin permissions so a dedicated service account is better (Neil Mackenzie has a good post On Not Using owner with the Azure AppFabric Service Bus with lots of permission details). Click on "Access Control Service" for the namespace, navigate to Service Identities and add a new one. Give the new account a sensible name and description: Let ACS generate a symmetric key for you (this will be the shared secret we use in the on-premise service to authenticate as a listener), but be sure to set the expiration date to something usable. The portal defaults to expiring new identities after 1 year - but when your year is up *your identity will expire without warning* and everything will stop working. In production, you'll need governance to manage identity expiration and a process to make sure you renew identities and roll new keys regularly. The new service identity needs to be authorized to listen on the service bus endpoint. This is done through claim mapping in ACS - we'll set up a rule that says if the nameidentifier in the input claims has the value serviceProvider, in the output we'll have an action claim with the value Listen. In the ACS portal you'll see that there is already a Relying Party Application set up for ServiceBus, which has a Default rule group.
    Edit the rule group and click Add to add this new rule: The values to use are: Issuer: Access Control Service Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier Input claim value: serviceProvider Output claim type: net.windows.servicebus.action Output claim value: Listen When your service namespace and identity are set up, open the Part 1 solution and put your own namespace, service identity name and secret key into the file AzureConnectionDetails.xml in Solution Items, e.g.: <azure namespace="sixeyed-ipasbr"> <!-- ACS credentials for the listening service (Part1):--> <service identityName="serviceProvider" symmetricKey="nuR2tHhlrTCqf4YwjT2RA2BZ/+xa23euaRJNLh1a/V4="/> </azure> Build the solution, and the T4 template will generate the Web.config for the service project with your Azure details in the transportClientEndpointBehavior: <behavior name="SharedSecret"> <transportClientEndpointBehavior credentialType="SharedSecret"> <clientCredentials> <sharedSecret issuerName="serviceProvider" issuerSecret="nuR2tHhlrTCqf4YwjT2RA2BZ/+xa23euaRJNLh1a/V4="/> </clientCredentials> </transportClientEndpointBehavior> </behavior> , and your service namespace in the Azure endpoint: <!-- Azure Service Bus endpoints --> <endpoint address="sb://sixeyed-ipasbr.servicebus.windows.net/net" binding="netTcpRelayBinding" contract="Sixeyed.Ipasbr.Services.IFormatService" behaviorConfiguration="SharedSecret"> </endpoint> The sample project is hosted in IIS, but it won't register with Azure until the service is activated. Typically you'd install AppFabric 1.1 for Windows Server and set the service to auto-start in IIS, but for dev just navigate to the local REST URL, which will activate the service and register it with Azure. Testing the service locally As well as an Azure endpoint, the service has a WebHttpBinding for local REST access: <!-- local REST endpoint for internal use --> <endpoint address="rest" binding="webHttpBinding" behaviorConfiguration="RESTBehavior" contract="Sixeyed.Ipasbr.Services.IFormatService" /> Build the service, then navigate to: http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/reverse?string=abc123 - and you should see the reversed string response: If your network allows it, you'll get the expected response, but in the background your service will also be listening in the cloud. Good stuff! Who needs network security? On to the next post for consuming the service with the netTcpRelayBinding. Setting up network access to Azure But, if you get an error, it's because your network is secured and it's doing something to stop the relay working. The Service Bus relay bindings try to use direct TCP connections to Azure, so if ports 9350-9354 are available *outbound*, then the relay will run through them. If not, the binding steps down to standard HTTP, and issues a CONNECT across port 443 or 80 to set up a tunnel for the relay.
    If your network security guys are doing their job, the first option will be blocked by the firewall, and the second option will be blocked by the proxy, so you'll get this error: System.ServiceModel.CommunicationException: Unable to reach sixeyed-ipasbr.servicebus.windows.net via TCP (9351, 9352) or HTTP (80, 443) - and that will probably be the start of lots of discussions. Network guys don't really like giving servers special permissions for the web proxy, and they really don't like opening ports, so they'll need to be convinced about this. The resolution in our case was to put up a dedicated box in a DMZ, tinker with the firewall and the proxy until we got a relay connection working, then run some traffic which the network guys monitored to do a security assessment afterwards. Along the way we hit a few more issues, diagnosed mainly with Fiddler and Wireshark: System.Net.ProtocolViolationException: Chunked encoding upload is not supported on the HTTP/1.0 protocol - this means the TCP ports are not available, so Azure tries to relay messaging traffic across HTTP. The service can access the endpoint, but the proxy is downgrading traffic to HTTP 1.0, which does not support tunneling, so Azure can’t make its connection. We were using the Squid proxy, version 2.6. The Squid project is incrementally adding HTTP 1.1 support, but there's no definitive list of what's supported in what version (here are some hints). System.ServiceModel.Security.SecurityNegotiationException: The X.509 certificate CN=servicebus.windows.net chain building failed. The certificate that was used has a trust chain that cannot be verified. Replace the certificate or change the certificateValidationMode. The revocation function was unable to check revocation because the revocation server was offline. - by this point we'd given up on the HTTP proxy and opened the TCP ports. We got this error when the relay binding does its authentication hop to ACS. The messaging traffic is TCP, but the control traffic still goes over HTTP, and as part of the ACS authentication the process checks with a revocation server to see if Microsoft’s ACS cert is still valid, so the proxy still needs some clearance. The service account (the IIS app pool identity) needs access to: www.public-trust.com mscrl.microsoft.com We still got this error periodically with different accounts running the app pool. We fixed that by ensuring the machine-wide proxy settings are set up, so every account uses the correct proxy: netsh winhttp set proxy proxy-server="http://proxy.x.y.z" - and you might need to run this to clear out your credential cache: certutil -urlcache * delete If your network guys end up grudgingly opening ports, they can restrict connections to the IP address range for your chosen Azure datacentre, which might make them happier - see Windows Azure Datacenter IP Ranges. After all that you've hopefully got an on-premise service listening in the cloud, which you can consume from pretty much any technology.
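    To give a flavour of what the next post covers, here is a minimal .NET consumer for the relayed endpoint. This is a sketch only: it assumes a reference to the Sixeyed.Ipasbr.Services contract assembly and the Microsoft.ServiceBus assembly of that era, a consumer identity (serviceConsumer) set up in ACS with a Send claim, and the Reverse operation name on IFormatService is illustrative:

      using System;
      using System.ServiceModel;
      using Microsoft.ServiceBus;

      class RelayClient
      {
          static void Main()
          {
              // address mirrors the sb:// endpoint the on-premise service registered
              var address = ServiceBusEnvironment.CreateServiceUri("sb", "sixeyed-ipasbr", "net");
              var factory = new ChannelFactory<IFormatService>(new NetTcpRelayBinding(),
                                                               new EndpointAddress(address));

              // authenticate to ACS as a consumer identity, not with the listener's secret
              factory.Endpoint.Behaviors.Add(new TransportClientEndpointBehavior
              {
                  TokenProvider = TokenProvider.CreateSharedSecretTokenProvider(
                      "serviceConsumer", "consumer-key-from-ACS")
              });

              var client = factory.CreateChannel();
              Console.WriteLine(client.Reverse("abc123")); // "321cba" if the relay is up
          }
      }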

    Read the article

  • Building a better mouse-trap – Improving the creation of XML Message Requests using Reflection, XML & XSLT

    - by paulschapman
    Introduction Previously, when creating messages to send to the GovTalk service, I used the XMLDocument to build the request. While this worked, it left a number of problems; not least that for every message a special function would need to be created. That is OK for the short term, but the biggest cost in any software project is maintenance, and this approach would be a headache to maintain. So the following is a somewhat better way of achieving the same thing. For the purposes of this article I am going to be using the CompanyNumberSearch request of the GovTalk service – although this technique would work for any service that accepts XML. The C# functions which send and receive the messages remain the same. The magic sauce in this is the XSLT which defines the structure of the request, and the use of objects in conjunction with reflection to provide the content. It is a bit like Sweet Chilli Sauce added to Chicken on a bed of rice. So on to the Sweet Chilli Sauce The Sweet Chilli Sauce The request to search for a company based on its number is as follows; <GovTalkMessage xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" > <EnvelopeVersion>1.0</EnvelopeVersion> <Header> <MessageDetails> <Class>NumberSearch</Class> <Qualifier>request</Qualifier> <TransactionID>1</TransactionID> </MessageDetails> <SenderDetails> <IDAuthentication> <SenderID>????????????????????????????????</SenderID> <Authentication> <Method>CHMD5</Method> <Value>????????????????????????????????</Value> </Authentication> </IDAuthentication> </SenderDetails> </Header> <GovTalkDetails> <Keys/> </GovTalkDetails> <Body> <NumberSearchRequest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://xmlgw.companieshouse.gov.uk/v1-0/schema/NumberSearch.xsd"> <PartialCompanyNumber>99999999</PartialCompanyNumber> <DataSet>LIVE</DataSet> <SearchRows>1</SearchRows> </NumberSearchRequest> </Body> </GovTalkMessage> This is the XML that we send to the GovTalk Service, and we get back a list of companies that match the criteria passed. A message is structured in two parts; The envelope, which identifies the person sending the request, with the name of the request, and the body, which gives the detail of the company we are looking for. The Chilli What makes it possible is the use of XSLT to define the message – and serialization to convert each request object into XML. To start we need to create an object which will represent the contents of the message we are sending. However, there are common properties in all the messages that we send to Companies House. These properties are as follows: SenderId – the id of the person sending the message SenderPassword – the password associated with that id TransactionId – Unique identifier for the message AuthenticationValue – authenticates the request Because these properties are specific to Companies House messages, and because they are shared by all of them, they are perfect candidates for a base class.
    The class is as follows; using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Security.Cryptography; using System.Text; using System.Text.RegularExpressions; using Microsoft.WindowsAzure.ServiceRuntime; namespace CompanyHub.Services { public class GovTalkRequest { public GovTalkRequest() { try { SenderID = RoleEnvironment.GetConfigurationSettingValue("SenderId"); SenderPassword = RoleEnvironment.GetConfigurationSettingValue("SenderPassword"); TransactionId = DateTime.Now.Ticks.ToString(); AuthenticationValue = EncodePassword(String.Format("{0}{1}{2}", SenderID, SenderPassword, TransactionId)); } catch (System.Exception ex) { throw ex; } } /// <summary> /// returns the Sender ID to be used when communicating with the GovTalk Service /// </summary> public String SenderID { get; set; } /// <summary> /// return the password to be used when communicating with the GovTalk Service /// </summary> public String SenderPassword { get; set; } // end SenderPassword /// <summary> /// Transaction Id - uses the Time and Date converted to Ticks /// </summary> public String TransactionId { get; set; } // end TransactionId /// <summary> /// calculate the authentication value that will be used when /// communicating with the GovTalk service /// </summary> public String AuthenticationValue { get; set; } // end AuthenticationValue property /// <summary> /// encodes password(s) using MD5 /// </summary> /// <param name="clearPassword"></param> /// <returns></returns> public static String EncodePassword(String clearPassword) { MD5CryptoServiceProvider md5Hasher = new MD5CryptoServiceProvider(); byte[] hashedBytes; UTF32Encoding encoder = new UTF32Encoding(); hashedBytes = md5Hasher.ComputeHash(ASCIIEncoding.Default.GetBytes(clearPassword)); String result = Regex.Replace(BitConverter.ToString(hashedBytes), "-", "").ToLower(); return result; } } } There is nothing particularly clever here, except for the EncodePassword method, which hashes the value made up of the SenderId, Password and Transaction id. Each message inherits from this object. So for the Company Number Search, in addition to the properties above, we need a partial company number, the dataset to search – for the purposes of the project we only need to search the LIVE set, so this can be set in the constructor – and the SearchRows. Again all are exposed as properties, with the SearchRows and DataSet initialized in the constructor. public class CompanyNumberSearchRequest : GovTalkRequest, IDisposable { /// <summary> /// /// </summary> public CompanyNumberSearchRequest() : base() { DataSet = "LIVE"; SearchRows = 1; } /// <summary> /// Company Number to search against /// </summary> public String PartialCompanyNumber { get; set; } /// <summary> /// What DataSet should be searched for the company /// </summary> public String DataSet { get; set; } /// <summary> /// How many rows should be returned /// </summary> public int SearchRows { get; set; } public void Dispose() { PartialCompanyNumber = String.Empty; DataSet = "LIVE"; SearchRows = 1; } } As well as inheriting from our base class, I have also inherited from IDisposable – not just because it is plain good practice to dispose of objects, but because it also gives us more versatility when using the object.
    There are four stages in making a request, and this is reflected in the four methods we execute in making a call to the Companies House service; Create a request Send a request Check the status If OK then get the results of the request I’ve implemented each of these stages within a static class called Toolbox – which also means I don’t need to create an instance of the class to use it. When creating a request there are three stages; Get the template for the message Serialize the object representing the message Transform the serialized object using a predefined XSLT file. I have defined each of my templates as an embedded resource. When retrieving a resource of this kind we have to include the full namespace of the resource. To make the code as reusable as possible, I defined the full ‘path’ within the GetRequest method. requestFile = String.Format("CompanyHub.Services.Schemas.{0}", RequestFile); So we now have the full path of the file within the assembly. Now all we need to do is retrieve the assembly and get the resource. asm = Assembly.GetExecutingAssembly(); sr = asm.GetManifestResourceStream(requestFile); Once retrieved, the stream can be returned to the calling function, and we now have a stream of XSLT to define the message. Time now to serialize the request to create the other side of this message. // Serialize object containing Request, Load into XML Document t = Obj.GetType(); ms = new MemoryStream(); serializer = new XmlSerializer(t); xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII); serializer.Serialize(xmlTextWriter, Obj); ms = (MemoryStream)xmlTextWriter.BaseStream; GovTalkRequest = Toolbox.ConvertByteArrayToString(ms.ToArray()); First off we need the type of the object, so we make a call to the GetType method of the object containing the Message properties. Next we need a MemoryStream, XmlSerializer and an XMLTextWriter, so these can be initialized. The object is serialized by making the call to the Serialize method of the serializer object. The result of that is then converted into a MemoryStream. That MemoryStream is then converted into a string. ConvertByteArrayToString This is a fairly simple function which uses an ASCIIEncoding object found within the System.Text namespace to convert an array of bytes into a string. public static String ConvertByteArrayToString(byte[] bytes) { System.Text.ASCIIEncoding enc = new System.Text.ASCIIEncoding(); return enc.GetString(bytes); } I only put it into a function because I will be using this in various places. The Sauce When adding support for other messages, the C# components do not need to change beyond creating a new object to store the properties of the message. It is in the XSLT file that the versatility of the technique lies. The XSLT file determines the format of the message.
    For the CompanyNumberSearch the XSLT file is as follows; <?xml version="1.0"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="/"> <GovTalkMessage xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" > <EnvelopeVersion>1.0</EnvelopeVersion> <Header> <MessageDetails> <Class>NumberSearch</Class> <Qualifier>request</Qualifier> <TransactionID> <xsl:value-of select="CompanyNumberSearchRequest/TransactionId"/> </TransactionID> </MessageDetails> <SenderDetails> <IDAuthentication> <SenderID><xsl:value-of select="CompanyNumberSearchRequest/SenderID"/></SenderID> <Authentication> <Method>CHMD5</Method> <Value> <xsl:value-of select="CompanyNumberSearchRequest/AuthenticationValue"/> </Value> </Authentication> </IDAuthentication> </SenderDetails> </Header> <GovTalkDetails> <Keys/> </GovTalkDetails> <Body> <NumberSearchRequest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://xmlgw.companieshouse.gov.uk/v1-0/schema/NumberSearch.xsd"> <PartialCompanyNumber> <xsl:value-of select="CompanyNumberSearchRequest/PartialCompanyNumber"/> </PartialCompanyNumber> <DataSet> <xsl:value-of select="CompanyNumberSearchRequest/DataSet"/> </DataSet> <SearchRows> <xsl:value-of select="CompanyNumberSearchRequest/SearchRows"/> </SearchRows> </NumberSearchRequest> </Body> </GovTalkMessage> </xsl:template> </xsl:stylesheet> The outer two tags define that this is an XSLT stylesheet and the root tag from which the nodes are searched for. The GovTalkMessage is the format of the message that will be sent to Companies House. We first set up the XslCompiledTransform object which will transform the XSLT template and the serialized object into the request to Companies House. xslt = new XslCompiledTransform(); resultStream = new MemoryStream(); writer = new XmlTextWriter(resultStream, Encoding.ASCII); doc = new XmlDocument(); The transform requires an XmlTextWriter to write the XML (writer) and a stream to receive the transformed output (resultStream). The XML will be loaded into an XMLDocument object (doc) prior to the transformation. // create XSLT Template xslTemplate = Toolbox.GetRequest(Template); xslTemplate.Seek(0, SeekOrigin.Begin); templateReader = XmlReader.Create(xslTemplate); xslt.Load(templateReader); I have stored all the templates as a series of Embedded Resources, and the GetRequest call takes the name of the template and extracts the relevant XSLT file. /// <summary> /// Gets the framework XML which makes the request /// </summary> /// <param name="RequestFile"></param> /// <returns></returns> public static Stream GetRequest(String RequestFile) { String requestFile = String.Empty; Stream sr = null; Assembly asm = null; try { requestFile = String.Format("CompanyHub.Services.Schemas.{0}", RequestFile); asm = Assembly.GetExecutingAssembly(); sr = asm.GetManifestResourceStream(requestFile); } catch (Exception) { throw; } finally { asm = null; } return sr; } // end private static stream GetRequest We first take the template name and expand it to include the full namespace to the Embedded Resource. I like to keep all my schemas in the same directory, and so the namespace reflects this. The rest is the default namespace for the project.
    Then we get the currently executing assembly (which will contain the resources) with the call to GetExecutingAssembly(). Finally we get a stream which contains the XSLT file. We use this stream and then load an XmlReader with the contents of the template, and that is in turn loaded into the XslCompiledTransform object. We convert the object containing the message properties into Xml by serializing it; calling the Serialize() method of the XmlSerializer object. To set up the object we do the following; t = Obj.GetType(); ms = new MemoryStream(); serializer = new XmlSerializer(t); xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII); We first determine the type of the object being transferred by calling GetType(). We create an XmlSerializer object by passing the type of the object being serialized. The serializer writes to a memory stream and that is linked to an XmlTextWriter. Next job is to serialize the object and load it into an XmlDocument. serializer.Serialize(xmlTextWriter, Obj); ms = (MemoryStream)xmlTextWriter.BaseStream; xmlRequest = new XmlTextReader(ms); GovTalkRequest = Toolbox.ConvertByteArrayToString(ms.ToArray()); doc.LoadXml(GovTalkRequest); Time to transform the XML to construct the full request. xslt.Transform(doc, writer); resultStream.Seek(0, SeekOrigin.Begin); request = Toolbox.ConvertByteArrayToString(resultStream.ToArray()); So that creates the full request to be sent to Companies House. Sending the request So far we have a string with a request for the Companies House service. Now we need to send the request to the Companies House Service. Configuration within an Azure project There are entire blog entries written about configuration within an Azure project – most of this is out of scope for this article but the following is a summary. Configuration is defined in two files within the parent project: *.csdef, which contains the definition of each configuration setting, and *.cscfg, which contains the values. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="OnlineCompanyHub" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WebRole name="CompanyHub.Host"> <InputEndpoints> <InputEndpoint name="HttpIn" protocol="http" port="80" /> </InputEndpoints> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" /> <Setting name="DataConnectionString" /> </ConfigurationSettings> </WebRole> <WebRole name="CompanyHub.Services"> <InputEndpoints> <InputEndpoint name="HttpIn" protocol="http" port="8080" /> </InputEndpoints> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" /> <Setting name="SenderId"/> <Setting name="SenderPassword" /> <Setting name="GovTalkUrl"/> </ConfigurationSettings> </WebRole> <WorkerRole name="CompanyHub.Worker"> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" /> </ConfigurationSettings> </WorkerRole> </ServiceDefinition> Above is the configuration definition from the project. What we are interested in however is the ConfigurationSettings tag of the CompanyHub.Services WebRole.
    There are four configuration settings here, but at the moment we are interested in the second to fourth settings; SenderId, SenderPassword and GovTalkUrl. The values of these settings are defined in the ServiceDefinition.cscfg file; <?xml version="1.0"?> <ServiceConfiguration serviceName="OnlineCompanyHub" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"> <Role name="CompanyHub.Host"> <Instances count="2" /> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="DataConnectionString" value="UseDevelopmentStorage=true" /> </ConfigurationSettings> </Role> <Role name="CompanyHub.Services"> <Instances count="2" /> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="SenderId" value="UserID"/> <Setting name="SenderPassword" value="Password"/> <Setting name="GovTalkUrl" value="http://xmlgw.companieshouse.gov.uk/v1-0/xmlgw/Gateway"/> </ConfigurationSettings> </Role> <Role name="CompanyHub.Worker"> <Instances count="2" /> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" /> </ConfigurationSettings> </Role> </ServiceConfiguration> Look for the Role tag that contains our project name (CompanyHub.Services). Having configured the parameters we can now transmit the request. This is done by ‘POST’ing a stream of XML to the Companies House servers. govTalkUrl = RoleEnvironment.GetConfigurationSettingValue("GovTalkUrl"); request = WebRequest.Create(govTalkUrl); request.Method = "POST"; request.ContentType = "text/xml"; writer = new StreamWriter(request.GetRequestStream()); writer.WriteLine(RequestMessage); writer.Close(); We use the WebRequest object to send the request. Set the method of sending to ‘POST’ and the type of data as text/xml. Once set up, all we do is write the request to the writer – this sends the request to Companies House. Did the Request Work Part I – Getting the response Having sent a request – we now need the result of that request. response = request.GetResponse(); reader = response.GetResponseStream(); result = Toolbox.ConvertByteArrayToString(Toolbox.ReadFully(reader)); The WebRequest object has a GetResponse() method which allows us to get the response sent back. Like many of these calls, the results come in the form of a stream which we convert into a string. Did the Request Work Part II – Translating the Response Much like XSLT and XML were used to create the original request, so they can be used to extract the response, and by deserializing the result we create an object that contains the response. Did it work? It would be really great if everything worked all the time.
    Of course if it did then I don’t suppose people would pay me and others the big bucks so that our programmes do not a) Collapse in a heap (this is an area of memory) b) Blow every fuse in the place in a shower of sparks (this will probably not happen, this being real life and not a Hollywood movie, but it was possible to blow the sound system of a BBC Model B with a poorly coded setting) c) Go nuts and trap everyone outside the airlock (this was from a movie and, unless NASA gets a manned moon/Mars mission set up, unlikely to happen) d) Go nuts and take over the world (this was also from a movie, but please note life has a habit of exceeding the wildest imaginations of Hollywood writers (note writers – Hollywood executives have no imagination and, judging by recent output of that town, have turned plagiarism into an art form)) e) Freeze in total confusion because the cleaner pulled the plug to the internet router (this has happened) So anyway – we need to check to see if our request actually worked. Within the GovTalk response there is a section that details the status of the message and a description of what went wrong (if anything did). I have defined an XSLT template which will extract these into an XML document. <?xml version="1.0"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:ev="http://www.govtalk.gov.uk/CM/envelope" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <xsl:template match="/"> <GovTalkStatus xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <Status> <xsl:value-of select="ev:GovTalkMessage/ev:Header/ev:MessageDetails/ev:Qualifier"/> </Status> <Text> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Text"/> </Text> <Location> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Location"/> </Location> <Number> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Number"/> </Number> <Type> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Type"/> </Type> </GovTalkStatus> </xsl:template> </xsl:stylesheet> The only difference from the previous XSL files is the reference to two namespaces, ev & gt. These are defined at the top of the GovTalk response; xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" If we do not put these references into the XSLT template, then the XslCompiledTransform object will not be able to find the relevant tags. Deserialization is a fairly simple activity. encoder = new ASCIIEncoding(); ms = new MemoryStream(encoder.GetBytes(statusXML)); serializer = new XmlSerializer(typeof(GovTalkStatus)); xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII); messageStatus = (GovTalkStatus)serializer.Deserialize(ms); We set up a serialization object using the object type containing the error state, and pass to it the results of a transformation between the XSLT above and the GovTalk response. Now we have an object containing any error state, and the error message. All we need to do is check the status. If there is an error then we can flag an error.
    If not, then we extract the results and pass them as an object back to the calling function. We do this by – guess what – defining an XSLT template for the result and using that to create an Xml Stream which can be deserialized into a .Net object. In this instance the XSLT to create the result of a Company Number Search is; <?xml version="1.0" encoding="us-ascii"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:ev="http://www.govtalk.gov.uk/CM/envelope" xmlns:sch="http://xmlgw.companieshouse.gov.uk/v1-0/schema" exclude-result-prefixes="ev"> <xsl:template match="/"> <CompanySearchResult xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <CompanyNumber> <xsl:value-of select="ev:GovTalkMessage/ev:Body/sch:NumberSearch/sch:CoSearchItem/sch:CompanyNumber"/> </CompanyNumber> <CompanyName> <xsl:value-of select="ev:GovTalkMessage/ev:Body/sch:NumberSearch/sch:CoSearchItem/sch:CompanyName"/> </CompanyName> </CompanySearchResult> </xsl:template> </xsl:stylesheet> and the object definition is; using System; using System.Collections.Generic; using System.Linq; using System.Web; namespace CompanyHub.Services { public class CompanySearchResult { public CompanySearchResult() { CompanyNumber = String.Empty; CompanyName = String.Empty; } public String CompanyNumber { get; set; } public String CompanyName { get; set; } } } Our entire code to send a request and interpret the results is; String request = String.Empty; String response = String.Empty; GovTalkStatus status = null; fault = null; try { using (CompanyNumberSearchRequest requestObj = new CompanyNumberSearchRequest()) { requestObj.PartialCompanyNumber = CompanyNumber; request = Toolbox.CreateRequest(requestObj, "CompanyNumberSearch.xsl"); response = Toolbox.SendGovTalkRequest(request); status = Toolbox.GetMessageStatus(response); if (status.Status.ToLower() == "error") { fault = new HubFault() { Message = status.Text }; } else { Object obj = Toolbox.GetGovTalkResponse(response, "CompanyNumberSearchResult.xsl", typeof(CompanySearchResult)); } } } catch (FaultException<ArgumentException> ex) { fault = new HubFault() { FaultType = ex.Detail.GetType().FullName, Message = ex.Detail.Message }; } catch (System.Exception ex) { fault = new HubFault() { FaultType = ex.GetType().FullName, Message = ex.Message }; } finally { } Wrap up So there we have it – a reusable set of functions to send and interpret XML results from an internet-based service. The code is reusable with little change for any service which uses XML as a transport mechanism – and as for the Companies House GovTalk service, all I need to do is create various objects for the result and message sent, and the relevant XSLT files. I might need minor changes for other services, but something like 70-90% will be exactly the same.
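    For reference, the fragments above assemble into a single Toolbox.CreateRequest helper along these lines. This is a reconstruction from the snippets in the post, so the exact signature and naming may differ from the real project source:

      public static String CreateRequest(Object obj, String template)
      {
          // 1. serialize the message object (a GovTalkRequest subclass) to XML
          MemoryStream ms = new MemoryStream();
          XmlSerializer serializer = new XmlSerializer(obj.GetType());
          XmlTextWriter xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII);
          serializer.Serialize(xmlTextWriter, obj);
          XmlDocument doc = new XmlDocument();
          doc.LoadXml(ConvertByteArrayToString(((MemoryStream)xmlTextWriter.BaseStream).ToArray()));

          // 2. load the XSLT template from the embedded resources
          XslCompiledTransform xslt = new XslCompiledTransform();
          using (XmlReader templateReader = XmlReader.Create(GetRequest(template)))
          {
              xslt.Load(templateReader);
          }

          // 3. run the serialized object through the template to build the full request
          MemoryStream resultStream = new MemoryStream();
          XmlTextWriter writer = new XmlTextWriter(resultStream, Encoding.ASCII);
          xslt.Transform(doc, writer);
          resultStream.Seek(0, SeekOrigin.Begin);
          return ConvertByteArrayToString(resultStream.ToArray());
      }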

    Read the article

  • Best Practices - which domain types should be used to run applications

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains). One question that frequently comes up is "which types of domain should I use to run applications?" There used to be a simple answer in most cases: "only run applications in guest domains", but enhancements to T-series servers, Oracle VM Server for SPARC and the advent of SPARC SuperCluster have made this question more interesting and worth qualifying differently. This article reviews the relevant concepts and provides suggestions on where to deploy applications in a logical domains environment. Review: division of labor and types of domain Oracle VM Server for SPARC offloads many functions from the hypervisor to domains (also called virtual machines). This is a modern alternative to using a "thick" hypervisor that provides all virtualization functions, as in traditional VM designs. This permits a simpler hypervisor design, which enhances reliability and security. It also reduces single points of failure by assigning responsibilities to multiple system components, which further improves reliability and security. In this architecture, management and I/O functionality are provided within domains. Oracle VM Server for SPARC does this by defining the following types of domain, each with their own roles: Control domain - management control point for the server, used to configure domains and manage resources. It is the first domain to boot on a power-up, is an I/O domain, and is usually a service domain as well. I/O domain - has been assigned physical I/O devices: a PCIe root complex, a PCI device, or a SR-IOV (single-root I/O Virtualization) function. It has native performance and functionality for the devices it owns, unmediated by any virtualization layer. Service domain - provides virtual network and disk devices to guest domains. Guest domain - a domain whose devices are all virtual rather than physical: virtual network and disk devices provided by one or more service domains. In common practice, this is where applications are run. Typical deployment A service domain is generally also an I/O domain: otherwise it wouldn't have access to physical device "backends" to offer to its clients. Similarly, an I/O domain is also typically a service domain in order to leverage the available PCI busses. Control domains must be I/O domains, because they boot up first on the server and require physical I/O. It's typical for the control domain to be a service domain too, so it doesn't "waste" the I/O resources it uses. A simple configuration consists of a control domain, which is also the one I/O and service domain, and some number of guest domains using virtual I/O. In production, customers typically use multiple domains with I/O and service roles to eliminate single points of failure: guest domains have virtual disk and virtual network devices provisioned from more than one service domain, so failure of a service domain or I/O path or device doesn't result in an application outage. This is also used for "rolling upgrades", in which service domains are upgraded one at a time while their guests continue to operate without disruption. (It should be noted that resiliency to I/O device failures can also be provided by the single control domain, using multi-path I/O.) In this type of deployment, control, I/O, and service domains are used for virtualization infrastructure, while applications run in guest domains.
    Changing application deployment patterns

    The above model has been widely and successfully used, but more configuration options are available now. Servers got bigger than the original T2000-class machines with 2 I/O busses, so there is more I/O capacity that can be used for applications. Increased T-series server capacity made it attractive to run more vertical applications, such as databases, with higher resource requirements than the "light" applications originally seen. This made it attractive to run applications in I/O domains so they could get bare-metal native I/O performance. This is leveraged by the SPARC SuperCluster engineered system, announced a year ago at Oracle OpenWorld. In SPARC SuperCluster, I/O domains are used for high performance applications, with native I/O performance for disk and network and optimized access to the InfiniBand fabric. Another technical enhancement is the introduction of Direct I/O (DIO) and Single Root I/O Virtualization (SR-IOV), which make it possible to give domains direct connections and native I/O performance for selected I/O devices. A domain with either a DIO or SR-IOV device is an I/O domain. In summary: not all I/O domains own PCIe root complexes, and there are increasingly more I/O domains that are not service domains; they use their I/O connectivity for the performance of their own applications.

    However, there are some limitations and considerations: at this time, a domain using physical I/O cannot be live-migrated to another server. There is also a need to plan for security and to avoid introducing unneeded dependencies: if an I/O domain is also a service domain providing virtual I/O to guests, it has the ability to affect the correct operation of its client guest domains. This is even more relevant for the control domain, where the ldm command has to be protected from unauthorized (or even mistaken) use that would affect other domains. As a general rule, running applications in the service domain or the control domain should be avoided. To recap:

    Guest domains with virtual I/O still provide the greatest operational flexibility, including features like live migration.
    I/O domains can be used for applications with high performance requirements. This is used to great effect in SPARC SuperCluster and in general T4 deployments. Direct I/O (DIO) and Single Root I/O Virtualization (SR-IOV) make this more attractive by giving direct I/O access to more domains, as sketched below.
    Service domains should in general not be used for applications, because compromised security in the domain, or an outage, can affect other domains that depend on it. This concern can be mitigated by provisioning guests' virtual I/O from more than one service domain, so an interruption of service in one service domain does not cause an application outage.
    The control domain should in general not be used to run applications, for the same reason. SPARC SuperCluster uses the control domain for applications, but it is an exception: it's not a general purpose environment; it's an engineered system with specifically configured applications optimized for performance.

    These are recommended "best practices" based on conversations with a number of Oracle architects. Keep in mind that "one size does not fit all", so you should evaluate these practices in the context of your own requirements.

    Summary

    Higher capacity T-series servers have made it more attractive to use them for applications with high resource requirements.
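    The DIO and SR-IOV assignments mentioned above are also driven by the ldm command. A hedged sketch follows; the device names are purely illustrative assumptions, and ldm list-io shows the real assignable devices on a given server:

        # Enumerate assignable PCIe endpoints and SR-IOV functions
        ldm list-io
        # Direct I/O: hand one PCIe slot directly to a domain
        ldm add-io /SYS/MB/PCIE5 appdom1
        # SR-IOV: create a virtual function on a network physical function,
        # then assign that VF to another domain
        ldm create-vf /SYS/MB/NET0/IOVNET.PF0
        ldm add-io /SYS/MB/NET0/IOVNET.PF0.VF0 appdom2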
    New deployment models permit native I/O performance for demanding applications by running them in I/O domains with direct access to their devices. This is leveraged in SPARC SuperCluster, and the same approach can be applied to general T-series deployments to provision high-performance applications in domains. Carefully planned, it can provide higher performance for critical applications.

    Read the article

  • Hierarchical View/ViewModel/Presenters in MVPVM

    - by Brian Flynn
    I've been working with MVVM for a while, but I've recently started using MVPVM and I want to know how to create a hierarchical View/ViewModel/Presenter app using this pattern. In MVVM I would typically build my application using a hierarchy of Views and corresponding ViewModels, e.g. I might define 3 views: an outer View A with two child regions, plus Views B and C to fill them. The View Models for these views would be as follows:

        public class AViewModel
        {
            public string Text { get { return "This is A!"; } }
            public object Child1 { get; set; }
            public object Child2 { get; set; }
        }

        public class BViewModel
        {
            public string Text { get { return "This is B!"; } }
        }

        public class CViewModel
        {
            public string Text { get { return "This is C!"; } }
        }

    I would then have some data templates to say that BViewModel and CViewModel should be presented using View B and View C:

        <DataTemplate DataType="{x:Type local:BViewModel}">
            <local:BView/>
        </DataTemplate>
        <DataTemplate DataType="{x:Type local:CViewModel}">
            <local:CView/>
        </DataTemplate>

    The final step would be to put some code in AViewModel's constructor that assigns values to Child1 and Child2:

        public AViewModel()
        {
            this.Child1 = new BViewModel();
            this.Child2 = new CViewModel();
        }

    The result of all this would be a screen showing View A with Views B and C nested inside it. Doing this in MVPVM would be fairly simple - just move the code from AViewModel's constructor to APresenter:

        public class APresenter
        {
            ....
            public void WireUp()
            {
                ViewModel.Child1 = new BViewModel();
                ViewModel.Child2 = new CViewModel();
            }
        }

    But if I want to have business logic for BViewModel and CViewModel I would need a BPresenter and a CPresenter - the problem is, I'm not sure where the best place to put them is. I could store references to the presenters for AViewModel.Child1 and AViewModel.Child2 in APresenter, i.e.:

        public class APresenter : IPresenter
        {
            private IPresenter child1Presenter;
            private IPresenter child2Presenter;

            public void WireUp()
            {
                child1Presenter = new BPresenter();
                child1Presenter.WireUp();
                child2Presenter = new CPresenter();
                child2Presenter.WireUp();
                ViewModel.Child1 = child1Presenter.ViewModel;
                ViewModel.Child2 = child2Presenter.ViewModel;
            }
        }

    But this solution seems inelegant compared to the MVVM approach. I have to keep track of both the presenter and the view model and ensure they stay in sync. If, for example, I wanted a button on View A which, when clicked, swapped the views in Child1 and Child2, I might have a command that did the following:

        var temp = ViewModel.Child1;
        ViewModel.Child1 = ViewModel.Child2;
        ViewModel.Child2 = temp;

    This would work as far as swapping the views on screen goes (assuming the correct property change notification code is in place), but now my APresenter.child1Presenter is pointing to the presenter for AViewModel.Child2, and APresenter.child2Presenter is pointing to the presenter for AViewModel.Child1. If something accesses APresenter.child1Presenter, any changes will actually happen to AViewModel.Child2. I can imagine this leading to all sorts of debugging fun. I know that I may be misunderstanding the pattern, and if this is the case a clarification of what I'm doing wrong would be appreciated.
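    One way to avoid that drift, sketched here against the question's own types (SwapChildren is a hypothetical helper on APresenter, not part of the question), is to perform the swap at the presenter level and re-derive the view model references from the presenters, so the presenter fields and the displayed view models can never point at different children:

        public void SwapChildren()
        {
            // Swap the presenters first...
            var temp = child1Presenter;
            child1Presenter = child2Presenter;
            child2Presenter = temp;

            // ...then re-derive the view model references from the presenters,
            // so the presenter hierarchy and the view model hierarchy stay aligned.
            ViewModel.Child1 = child1Presenter.ViewModel;
            ViewModel.Child2 = child2Presenter.ViewModel;
        }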

    Read the article

  • ASP.NET AJAX, jQuery and AJAX Control Toolkit – the roadmap

    - by Harish Ranganathan
    The opinions mentioned herein are solely mine and do not reflect those of my employer. I have wanted to post this for a long time but couldn't. I have been an ASP.NET developer for quite some time and have worked with versions 1.1, 2.0, 3.5 as well as the latest 4.0. With ASP.NET 2.0 and Visual Studio 2005 came the era of AJAX and rich UI style web applications. So, ASP.NET AJAX (codenamed "ATLAS") was released almost a year later. This was called ASP.NET 2.0 AJAX Extensions, and the release was supported further with Visual Studio 2005 Service Pack 1. The initial release of ASP.NET AJAX had 3 components:

    ASP.NET AJAX Library - the client library that is used internally by the server controls, as well as scripts that can be used to write hand-coded AJAX-style pages.
    ASP.NET AJAX Extensions - server controls, i.e. the ScriptManager, ScriptManagerProxy, UpdatePanel, UpdateProgress and Timer server controls. These work pretty much like other server controls in terms of development and render client side behavior automatically.
    AJAX Control Toolkit - a set of server controls that each extend a behavior or a capability, e.g. the AutoCompleteExtender.

    The AJAX Control Toolkit was a separate download from CodePlex, while the first two get installed when you install ASP.NET AJAX Extensions. With Visual Studio 2008, ASP.NET AJAX made its way into the runtime, so one doesn't need to separately install the AJAX Extensions. However, the AJAX Control Toolkit still remained a community project that can be downloaded from CodePlex. By then, the toolkit had close to 30 controls. So, the approach was clear: client side programming using the ASP.NET AJAX Library, and a server side model using the built-in controls (UpdatePanel) and/or the AJAX Control Toolkit. However, with Visual Studio 2008 Service Pack 1, we also added support for the ever more popular jQuery library. That is, you can use jQuery along with ASP.NET and also get IntelliSense for jQuery in Visual Studio 2008. Some of you who have played with Visual Studio 2010 Beta and .NET Framework 4 Beta would also have explored the new AJAX Library, which had a lot of templates, live bindings etc. But, overall, the roadmap ahead makes things much simpler:

    For client side programming using JavaScript for implementing AJAX in ASP.NET, the recommendation is to use jQuery, which will be shipped along with Visual Studio and provides IntelliSense as well.
    For server side programming, you can use the server controls like UpdatePanel etc., and also the AJAX Control Toolkit, which has close to 40 controls now. The AJAX Control Toolkit still remains a separate download at CodePlex. You can download the releases for the different versions of ASP.NET at http://ajaxcontroltoolkit.codeplex.com/

    The Microsoft AJAX Library will still be available through the CDN (Content Delivery Network) channels. You can view the CDN resources at http://www.asp.net/ajaxlibrary/CDN.ashx Similarly, even jQuery and the toolkit are available as CDN resources in case you choose not to download and ship them as part of your application. I think this makes AJAX development pretty simple. Earlier, having both the Microsoft AJAX Library and jQuery for client side scripting made it confusing to decide which one to use. With this roadmap, it is simple and clear. You can read more on this at http://ajax.asp.net I hope this post provided some clarity on the AJAX roadmap as I could decipher it from the various product teams. Cheers!!!
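    As an illustration, referencing CDN-hosted jQuery from a page needs nothing more than a script tag along these lines. The host name and version path here are assumptions based on the CDN listing linked above, so check it for the current URLs:

        <script src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.2.min.js" type="text/javascript"></script>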

    Read the article

  • How To Disable Control Panel in Windows 7

    - by Mysticgeek
    If you have a shared computer that your family and friends can access, you might not want them to mess around in the Control Panel, and luckily with a simple tweak you can disable it.

    Disable Control Panel with Group Policy

    Note: This process uses the Local Group Policy Editor, which is not available in Home versions of Windows 7. Skip down below for the registry hack version that works on Home editions as well. First type gpedit.msc into the Search box in the Start menu and hit Enter. When Local Group Policy Editor opens, navigate to User Configuration \ Administrative Templates, then select Control Panel in the left column. In the right column double-click on Prohibit access to the Control Panel. In the next window, select Enabled, click OK, then close out of Local Group Policy Editor. After the Control Panel is disabled, you'll notice it's no longer listed in the Start menu. If the user tries to type Control Panel into the Search box in the Start menu, they will get a message indicating it's restricted.

    Disable Control Panel with a Registry Tweak

    You can also tweak the Registry to disable Control Panel. This will work with all versions of Windows 7, Vista, and XP. Making changes in the Registry is not recommended for beginners, and you should create a Restore Point or back up the Registry before making any changes. Type regedit into the Search box in the Start menu and hit Enter. In Registry Editor navigate to HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer. Then right-click in the right pane and create a new DWORD (32-bit) Value. Name the value NoControlPanel, then right-click on the new value and click Modify. In the Value data field change the value to "1", then click OK. Close out of Registry Editor and restart the machine to complete the process. When you get back from the reboot, you'll notice Control Panel is no longer listed in the Start menu. If a user tries to access it by typing Control Panel into the Search box in the Start menu, they will get a message indicating it is restricted, just as if you had disabled it via Group Policy. If you want to re-enable the Control Panel, go back into the Registry, change the NoControlPanel value back to "0", then reboot the computer. This comes in handy if you have inexperienced users working on your machine and don't want them messing with Control Panel settings.
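    For reference, the same registry tweak can be captured in a .reg file and applied by double-clicking it; this is a minimal sketch, and changing the dword to 00000000 re-enables the Control Panel:

        Windows Registry Editor Version 5.00

        [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer]
        "NoControlPanel"=dword:00000001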

    Read the article

  • Looking into ASP.Net MVC 4.0 Mobile Development - part 1

    - by nikolaosk
    In this post I will be looking at how ASP.Net MVC 4.0 helps us create web solutions that target mobile devices. We all experience the magic that is the World Wide Web through mobile devices; millions of people around the world use tablets and smartphones to view the contents of websites, e-shops and portals. ASP.Net MVC 4.0 includes a new mobile project template and the ability to render a different set of views for different types of devices. There is also a new feature called browser overriding, which allows us to control exactly what a user is going to see from our web application regardless of what type of device they are using. In order to follow along with this post you must have Visual Studio 2012 and .NET Framework 4.5 installed on your machine. Download and install VS 2012 using this link. My machine runs on Windows 8 and Visual Studio 2012 works just fine. It will work fine on Windows 7 as well, so do not worry if you do not have the latest Microsoft operating system.

    1) Launch VS 2012 and create a new application by going to File -> New Project -> ASP.Net MVC 4 Web Application and then click OK. Have a look at the picture below.

    2) From the available templates select Mobile Application and then click OK. Have a look at the picture below.

    3) When I run the application I get the mobile view of the page. I would like to show you what a typical ASP.Net MVC 4.0 application looks like, so I will create a new simple ASP.Net MVC 4.0 Web Application. When I run that application I get the normal page view. Have a look at the picture below: on the left is the mobile view and on the right the normal view. As you can see we have more or less the same content in our mobile application (log in, register) compared with the normal ASP.Net MVC 4.0 application, but it is optimized for mobile devices.

    4) Let me explain how and when the mobile view is selected and finally rendered. There is a feature in MVC 4.0 called Display Modes, and with this feature the runtime selects a view. Say we have 2 views in our application, contact.mobile.cshtml and contact.cshtml, and the Controller at some point instructs the runtime to select and render a view named contact. The runtime will look at the browser making the request and determine whether it is a mobile browser or a desktop browser. So if there is a request from my iPhone Safari browser for a particular site, and a mobile view exists, MVC 4.0 will select and render it. If there is no mobile view, the normal view will be rendered.

    5) In the ASP.Net MVC 4.0 (Internet Application) project I created earlier (not the first project, which was a mobile one) I can run it once more and see how it looks in the browser. If I want to view it with a mobile browser I must download an emulator like Opera Mobile. You can download Opera Mobile here. When I run the application I get the same view in both the desktop and the mobile browser. That was to be expected. Have a look at the picture below.

    6) Then I create another version of the _Layout.cshtml view in the Shared folder. I simply copy and paste _Layout.cshtml into the same folder, rename the copy to _Layout.mobile.cshtml and then just alter its contents. When I run the application again I get a different view in the desktop browser and a different one in the Opera Mobile browser. Have a look at the picture below. The Controller will instruct the ASP.Net runtime to select and render the view named _Layout.mobile.cshtml when the request comes from a mobile browser. The runtime knows that a browser is a mobile one through the ASP.Net browser capability provider. Hope it helps!!!
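    To make the view selection rule concrete, display modes can also be registered explicitly. Here is a minimal sketch of Global.asax.cs; the "opera" suffix and the "Opera Mobi" user-agent substring are illustrative assumptions, not something the walkthrough above requires:

        using System;
        using System.Web;
        using System.Web.WebPages;

        public class MvcApplication : HttpApplication
        {
            protected void Application_Start()
            {
                // Requests whose user agent contains "Opera Mobi" will prefer
                // views with the .opera suffix, e.g. _Layout.opera.cshtml,
                // falling back to the default views when none exists.
                DisplayModeProvider.Instance.Modes.Insert(0, new DefaultDisplayMode("opera")
                {
                    ContextCondition = ctx => ctx.GetOverriddenUserAgent()
                        .IndexOf("Opera Mobi", StringComparison.OrdinalIgnoreCase) >= 0
                });
            }
        }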

    Read the article

  • Redaction in AutoVue

    - by [email protected]
    As the trend to digitize all paper assets continues, so does the push to digitize all the processes around these assets. One such process is redaction - removing sensitive or classified information from documents. While for some this may conjure up thoughts of old CIA documents filled with nothing but blacked out pages, there are actually many uses for redaction today beyond military and government. Many companies have a need to remove names, phone numbers, social security numbers, credit card numbers, etc. from documents that are being scanned in and/or released to the public or less privileged users - insurance companies, banks and legal firms are a few examples. The process of digital redaction actually isn't that far from the old paper method:

    Step 1. Find a folder with a big red stamp on it labeled "TOP SECRET"
    Step 2. Make a copy of that document, since some folks still need to access the original contents
    Step 3. Black out the text or pages you want to hide
    Step 4. Release or distribute this new 'redacted' copy

    So where does a solution like AutoVue come in? Well, we've really been doing all of these things for years!

    1. With AutoVue's VueLink integration and iSDK, we can integrate to virtually any content management system and view documents of almost any format with a single click. Finding the document and opening it in AutoVue: CHECK!
    2. With AutoVue's markup capabilities, adding filled boxes (or other shapes) around certain text is a no-brainer. You can even leverage AutoVue's powerful APIs to automate the addition of markups over certain text or pre-defined regions. Black out the text you want to hide: CHECK!
    3. With AutoVue's conversion capabilities, you can 'burn in' the comments into a new file, either as a TIFF, JPEG or PDF document. Burning in the redactions avoids slip-ups like the recent (well-publicized) TSA one. Through our tight integrations, the newly created copies can be directly checked into the content management system with no manual intervention. Make a copy of that document: CHECK!
    4. Again leveraging AutoVue's integrations, we can define rules in the system based on a user's privileges. An 'authorized' user wishing to view the document from the repository will get exactly that - no redactions. An 'unauthorized' user, when requesting to view that same document, can get redirected to open the redacted copy instead. Release or distribute the new 'redacted' copy: CHECK!

    See this movie (WMV format, 2 mins 20 secs, no audio) for a quick illustration of AutoVue's redaction capabilities. It shows how redactions can be added based on text searches, manual input or pre-defined templates/regions. Let us know what you think in the comments. And remember - this is all in our flagship AutoVue product - no additional software required!

    Read the article

  • Netbeans 7.2 Missing Modules Warning

    - by el10780
    Every time I start NetBeans, when the splash screen reaches the module-loading stage, I receive the following error message:

    Warning - could not install some modules:

    Editor Library 2 - None of the modules providing the capability org.netbeans.modules.editor.actions could be installed.
    Tags Based Editors Library - The module named org.netbeans.modules.editor.deprecated.pre65formatting/0-1 was needed and not found.
    Java Editor Library - The module named org.netbeans.modules.editor.deprecated.pre65formatting/0-1 was needed and not found.
    Preprocessor Bridge - None of the modules providing the capability org.netbeans.modules.java.preprocessorbridge.spi.JavaSourceUtilImpl could be installed.
    Freeform Ant Projects - The module named org.netbeans.modules.editor.indent.project/0-1 was needed and not found.
    Editor Code Templates - The module named org.netbeans.spi.editor.hints/0-1 was needed and not found.
    Static Analysis Core - The module named org.netbeans.spi.editor.hints/0-1 was needed and not found.
    Java Source - The module named org.netbeans.modules.editor.indent.project/0-1 was needed and not found.
    Eclipse Project Importer - The module named org.netbeans.modules.java.api.common/0-1 was needed and not found.
    Java Hints SPI - The module named org.netbeans.spi.editor.hints/0-1 was needed and not found.
    Java Refactoring - The module named org.netbeans.spi.editor.hints/0-1 was needed and not found.
    Java Editor - The module named org.netbeans.modules.editor.bracesmatching/0-1 was needed and not found.
    Java Editor - The module named org.netbeans.spi.editor.hints/0-1 was needed and not found.
    Java Editor - The module named org.netbeans.modules.editor.deprecated.pre65formatting/0-1 was needed and not found.
    Java Hints UI - The module named org.netbeans.modules.code.analysis/0-1 was needed and not found.
    Java Hints UI - The module named org.netbeans.spi.editor.hints/0-1 was needed and not found.
    Legacy Java Hints SPI - The module named org.netbeans.spi.editor.hints/0-1 was needed and not found.
    Java Hints - The module named org.netbeans.spi.editor.hints/0-1 was needed and not found.
    Java Declarative Hints - The module named org.netbeans.spi.editor.hints/0-1 was needed and not found.
    Javadoc - The module named org.netbeans.modules.editor.bracesmatching/0-1 was needed and not found.
    Javadoc - The module named org.netbeans.spi.editor.hints/0-1 was needed and not found.
    Common Scripting Language API (new) - The module named org.netbeans.spi.editor.hints/0-1 was needed and not found.
    XML Text Editor - The module named org.netbeans.modules.editor.bracesmatching/0-1 was needed and not found.
    XML Text Editor - The module named org.netbeans.modules.editor.deprecated.pre65formatting/0-1 was needed and not found.
    CSS Editor - The module named org.netbeans.modules.editor.bracesmatching/0-1 was needed and not found.
    HTML Editor - The module named org.netbeans.modules.editor.bracesmatching/0-1 was needed and not found.
    JavaScript Editing - The module named org.netbeans.modules.editor.bracesmatching/0-1 was needed and not found.
    JavaScript Hints - The module named org.netbeans.spi.editor.hints/0-1 was needed and not found.
    Editing Files - The module named org.netbeans.modules.editor.bracesmatching/0-1 was needed and not found.
    IDE Platform - The module named org.netbeans.modules.editor.macros/0-1 was needed and not found.
    Java SE Projects - The module named org.netbeans.modules.java.api.common/0-1 was needed and not found.
    86 further modules could not be installed due to the above problems.
    Whichever option I choose - Exit, or Disable Modules and Continue - or even if I close the window from the "X" button, the warning window closes and then NetBeans never starts. I have looked it up on the Internet, but I couldn't find a solution.

    Read the article

  • New Communications Industry Data Model with "Factory Installed" Predictive Analytics using Oracle Da

    - by charlie.berger
    Oracle Introduces Oracle Communications Data Model to Provide Actionable Insight for Communications Service Providers

    We've integrated pre-installed analytical methodologies with the new Oracle Communications Data Model to deliver automated, simple, yet powerful predictive analytics solutions for customers. Churn, sentiment analysis, identifying customer segments - all things that can be anticipated and hence preconceived and implemented inside applications. Read on for more information!

    TM Forum Management World, Nice, France - 18 May 2010

    News Facts

    To help communications service providers (CSPs) manage and analyze rapidly growing data volumes cost effectively, Oracle today introduced the Oracle Communications Data Model. With the Oracle Communications Data Model, CSPs can achieve rapid time to value by quickly implementing a standards-based enterprise data warehouse that features communications industry-specific reporting, analytics and data mining. The combination of the Oracle Communications Data Model, Oracle Exadata and the Oracle Business Intelligence (BI) Foundation represents the most comprehensive data warehouse and BI solution for the communications industry. Also announced today, Hong Kong Broadband Network enhanced its data warehouse system, going live on Oracle Communications Data Model in three months. The leading provider increased its subscriber base by 37 percent in six months and reduced customer churn to less than one percent.

    Product Details

    Oracle Communications Data Model provides industry-specific schema and embedded analytics that address key areas such as customer management, marketing segmentation, product development and network health. CSPs can efficiently capture and monitor critical data and transform it into actionable information to support development and delivery of next-generation services using:

    More than 1,300 industry-specific measurements and key performance indicators (KPIs) such as network reliability statistics, provisioning metrics and customer churn propensity.
    Embedded OLAP cubes for extremely fast dimensional analysis of business information.
    Embedded data mining models for sophisticated trending and predictive analysis.
    Support for multiple lines of business, such as cable, mobile, wireline and Internet, which can be easily extended to support future requirements.

    With Oracle Communications Data Model, CSPs can jump start the implementation of a communications data warehouse in line with communications-industry standards including the TM Forum Information Framework (SID), formerly known as the Shared Information Model. Oracle Communications Data Model is optimized for any Oracle Database 11g platform, including Oracle Exadata, which can improve call data record query performance by 10x or more.

    Supporting Quotes

    "Oracle Communications Data Model covers a wide range of business areas that are relevant to modern communications service providers and is a comprehensive solution - with its data model and pre-packaged templates including BI dashboards, KPIs, OLAP cubes and mining models. It helps us save a great deal of time in building and implementing a customized data warehouse and enables us to leverage the advanced analytics quickly and more effectively," said Yasuki Hayashi, executive manager, NTT Comware Corporation.

    "Data volumes will only continue to grow as communications service providers expand next-generation networks, deploy new services and adopt new business models. They will increasingly need efficient, reliable data warehouses to capture key insights on data such as customer value, network value and churn probability. With the Oracle Communications Data Model, Oracle has demonstrated its commitment to meeting these needs by delivering data warehouse tools designed to fill communications industry-specific needs," said Elisabeth Rainge, program director, Network Software, IDC.

    "The TM Forum Conformance Mark provides reassurance to customers seeking standards-based, and therefore cost-effective and flexible, solutions. TM Forum is extremely pleased to work with Oracle to certify its Oracle Communications Data Model solution. Upon successful completion, this certification will represent the broadest and most complete implementation of the TM Forum Information Framework to date, with more than 130 aggregate business entities," said Keith Willetts, chairman and chief executive officer, TM Forum.

    Supporting Resources

    Oracle Communications
    Oracle Communications Data Model Data Sheet
    Oracle Communications Data Model Podcast
    Oracle Data Warehousing
    Oracle Communications on YouTube
    Oracle Communications on Delicious
    Oracle Communications on Facebook
    Oracle Communications on Twitter
    Oracle Communications on LinkedIn
    Oracle Database on Twitter
    The Data Warehouse Insider Blog

    Read the article

  • Outlook 2010 – My Top 9 features

    - by Daniel Moth
    Office 2010 has reached RTM. Here are my favorite Outlook features.

    1. Speed. It is faster than previous versions and hangs much less…
    2. Ignore Conversation (Ctrl+Del). Not interested in a conversation? Click this button on the new ribbon and you'll never receive another message on that thread (they all go to your Deleted folder).
    3. Calendar Preview. When receiving a Meeting Request, before deciding whether to accept, you get to see a preview of your calendar for that day and where the new meeting would fit in. See the full description on the Outlook team blog post.
    4. Quick Steps. See the full description on the Outlook team blog post. I have created my own quick steps for filing conversations to folders, various pre-populated reply templates, creating calendar invites and creating TODOs from received emails.
    5. Search Interface. Many of us knew the magic keywords for making smart searches (e.g. from:Name), but it is great to learn many more through the Search Tools contextual ribbon tab.
    6. Next 7 days. Out of the many enhancements to the Calendar view, my favorite is being able, with a single click, to view the next 7 days - that is now my default view.
    7. MailTips. See the full description on the Outlook team blog post. The ones I particularly like are: when composing a mail to someone that has their Out of Office reply set, you get to read it before sending the mail (and hence can decide to postpone sending); and when composing a mail to a distribution list, a message informs you of the number of recipients. Hopefully, senders will use that as a clue for narrowing down the recipient list, or at least for verifying that their mail should indeed be sent to so many people.
    8. "You are not responding to the latest message in this conversation. Click here to open it." When composing a reply to a conversation where you have not picked the last message to reply to (don't you hate it when people split threads like that?), this is the inline message you see (under the MailTips area), and if you click on it, the last mail in the conversation opens so you can reply to that.
    9. Rich "Conversation Settings", and in particular "Show Messages from Other Folders". For example, you can see in your inbox not only the message you received but also the reply you sent (it gets pulled in from the Sent folder). Another example: a conversation has been taking place on a distribution list (so your rules filed it to a folder) and then someone adds you to the TO or CC line, so it appears in a different folder; regardless of which folder you open, you are able to see the entire conversation. Note that messages from folders other than the one you are browsing appear in grey text, so you can easily spot them. Reading them in one folder obviously marks them as read in the other folder…

    If you haven't yet, when are you making the move to Outlook 2010? Comments about this post are welcome at the original blog.

    Read the article

  • SQL SERVER – Top 10 “Ease of Use” Features of expressor Studio

    - by pinaldave
    expressor Studio is a new data integration platform that is being marketed as the easiest to use tool of its kind. But "easy to use" can be a relative term - an expert can find a very complex system easy, but a beginner might be stumped. A recent article online discussed exactly what makes expressor Studio so easy to use, and here is my view on the subject.

    Simple Installation. There is one pop-up for one .exe file, and nothing more. You can't get much simpler than this. It is also in the familiar Windows design, so there should be no surprises.

    No 3rd party software dependency. Have you ever tried to download software, only to be slowed down by the need to download a compatible system to run the program, and another to read the user manual, and so on? expressor Studio was designed specifically to avoid this problem.

    Microsoft Office-like Ribbon Bar and Menus. As mentioned before, everything is in the familiar Windows design, from the pop-up windows to the toolbars and menus. There should be no learning curve for using this program, or even for simply navigating around a new system.

    General Development Design Interface. This software has been designed to be simple and straightforward. Projects can be arranged in a simple "tree" design that is totally collapsible and can easily be added to or "trimmed" with the click of a button. It was meant to be logical and easy to follow.

    Integrated Contextual Help. This is a fancy way of saying that you can practically yell "help!" if you do get stuck on something. Solving a problem is as simple as highlighting and hitting F1 for contextual help.

    Visual Indicators and Messages. Wouldn't it be nice to know exactly where something has gone wrong before trying to complete a project? expressor Studio has a built-in system to catch mistakes, highlight them in a bright color, flash a warning message, and even disable functions before you can continue - and possibly lose hours of work.

    Property Inputs and Selectors. Every operator will have a list of requirements that need to be filled in. But don't worry; you won't have to make stuff up to fill in the boxes. Each one will have a drop-down menu with options to choose from - but not so many as to be confusing.

    Connection Wizards. Configuring connections can be the hardest part of a project. But not with the expressor Studio connection wizard. A familiar, Windows-style menu will walk you through connections so quickly you'll forget what trouble it used to be.

    Templates. With large, complex projects, a majority of your time is often spent simply setting up the files and inputting data. But expressor Studio allows you to create one file and then save it as a template, saving you hours of boring data input.

    Extension Manager. Let's say that you need a little more functionality or some new features in your program. A lot of software requires you to download complex plug-ins that need to be decompressed and installed. However, expressor Studio has extended its system with an Extension Manager, which allows for quick and easy installation of the functionality you need, with no downloading and decompressing required.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Book “Team Foundation Server 2012 Starter” published!

    - by Jakob Ehn
    During the summer and fall this year, my colleague Terje Sandstrøm and I worked together on a book project that has now finally hit the stores! The title of the book is Team Foundation Server 2012 Starter and it is published by Packt Publishing. You can find it at http://www.packtpub.com/team-foundation-server-2012-starter/book or from Amazon http://www.amazon.com/dp/1849688389

    The book is part of a concept Packt has with starter books, intended for people new to Team Foundation Server 2012 who want a quick guide to get it up and working. It covers the fundamentals, from installing and configuring it, to using it for source control, work items and builds. It is done as a step-by-step guide, but also includes best practice advice in the different areas. It covers the use of both the on-premises and the TFS Services version. It also has a list of links and references at the end to the most relevant Visual Studio 2012 ALM sites. Our good friend and fellow ALM MVP Mathias Olausson did the review of the book - thanks again, Mathias! We hope the book fills the gap between the different online guide sites and the more advanced books that are out. Check it out and please let us know what you think of the book!

    Book Description

    Your quick start guide to TFS 2012, top features, and best practices with hands-on examples.

    Overview

    Install TFS 2012 from scratch
    Get up and running with your first project
    Streamline release cycles for maximum productivity

    In Detail

    Team Foundation Server 2012 is Microsoft's leading ALM tool, integrating source control, work item and process handling, build automation, and testing. This practical "Team Foundation Server 2012 Starter Guide" will provide you with clear step-by-step exercises covering all major aspects of the product. This is essential reading for anyone wishing to set up, organize, and use TFS server. This hands-on guide looks at the top features in Team Foundation Server 2012, starting with a quick installation guide and then moving into using it for your software development projects. Manage your team projects with Team Explorer, one of the many new features for 2012. Covering all the main features in source control to help you work more efficiently, including tools for branching and merging, we will delve into the Agile Planning Tools for planning your product and sprint backlogs. Learn to set up build automation, allowing your team to become faster, more streamlined, and ultimately more productive with this "Team Foundation Server 2012 Starter Guide".

    What you will learn from this book

    Install TFS 2012 on premises
    Access TFS Services in the cloud
    Quickly get started with a new project with product backlogs, source control, and build automation
    Work efficiently with source control using the top features
    Understand how the tools for branching and merging in TFS 2012 help you isolate work and teams
    Learn about the existing process templates, such as Visual Studio Scrum 2.0
    Manage your product and sprint backlogs using the Agile planning tools

    Approach

    This Starter guide is a short, sharp introduction to Team Foundation Server 2012, covering everything you need to get up and running.

    Who this book is written for

    If you are a developer, project lead, tester, or IT administrator working with Team Foundation Server 2012, this guide will get you up to speed quickly and with minimal effort.

    Read the article
