Search Results

Search found 4391 results on 176 pages for 'tree hacker'.


  • ASP.NET TreeView populate child nodes. How can I avoid a postback to the server?

    - by mas_oz2k1
    I am trying to test populate-on-demand for a TreeView, following the procedure from this link: http://msdn.microsoft.com/en-us/library/e8z5184w.aspx. But the TreeView still makes a postback to the server when I expand one of the tree nodes (put a breakpoint in the first line of the Page_Load event to see it), thus refreshing the whole page. I am using VS2005 and ASP.NET 2.0 (the same issue occurs in VS2008). My simple test page markup is:

        <%@ Page Language="C#" AutoEventWireup="true" CodeFile="aspTreeview.aspx.cs" Inherits="aspTreeview" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title>Untitled Page</title>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <table>
                    <tr>
                        <td style="height: 80%; width: 45%;">
                            <asp:Panel ID="Panel1" runat="server" BorderColor="#0033CC" BorderStyle="Solid" ScrollBars="Both">
                                <asp:TreeView ID="TreeView1" runat="server" ShowLines="True"
                                    PopulateNodesFromClient="True" EnableClientScript="True" NodeWrap="True"
                                    ontreenodepopulate="TreeView1_TreeNodePopulate" ExpandDepth="0">
                                </asp:TreeView>
                            </asp:Panel>
                        </td>
                        <td style="width: 10%; height: 80%;">
                            <div>
                                <asp:Button ID="Button1" runat="server" Text="->" onclick="Button1_Click" />
                            </div>
                            <div>
                                <asp:Button ID="Button2" runat="server" Text="<-" />
                            </div>
                        </td>
                        <td style="width: 136px; height: 80%">
                            <asp:Panel ID="Panel2" runat="server" BorderColor="Lime" BorderStyle="Solid">
                                <asp:TreeView ID="TreeView2" runat="server" ShowLines="True" ExpandDepth="0">
                                </asp:TreeView>
                            </asp:Panel>
                        </td>
                    </tr>
                    <tr>
                        <td></td>
                        <td></td>
                        <td style="width: 136px"></td>
                    </tr>
                </table>
            </div>
            </form>
        </body>
        </html>

    The code-behind is:

        protected void Page_Load(object sender, EventArgs e)
        {
            Debug.WriteLine("Page_Load started.");
            if (!IsPostBack)
            {
                if (Request.Browser.SupportsCallback)
                    Debug.WriteLine("Browser supports callback scripts.");
                for (int i = 0; i < 3; i++)
                {
                    TreeNode node = new TreeNode("ENTRY " + i.ToString());
                    node.Value = i.ToString();
                    node.PopulateOnDemand = true;
                    node.Expanded = false;
                    TreeView1.Nodes.Add(node);
                }
            }
            Debug.WriteLine("Page_Load finished.");
        }

        protected void TreeView1_TreeNodePopulate(object sender, TreeNodeEventArgs e)
        {
            TreeNode targetNode = e.Node;
            for (int j = 0; j < 4200; j++)
            {
                TreeNode subnode = new TreeNode(String.Format("Sub ENTRY {0} {1}", targetNode.Value, j));
                subnode.PopulateOnDemand = true;
                subnode.Expanded = false;
                targetNode.ChildNodes.Add(subnode);
            }
        }
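    (Aside, not from the original question: with EnableClientScript and PopulateNodesFromClient enabled, node expansion travels as an ASP.NET client callback, and client callbacks still run the early page lifecycle on the server - including Page_Load - without re-rendering the page, so a breakpoint there does not by itself prove a full postback. If a genuine full-page postback is happening and ASP.NET AJAX is available, one commonly used workaround is to isolate the TreeView in an UpdatePanel. A minimal sketch, under that assumption:)

        <!-- Sketch only: assumes ASP.NET AJAX (ScriptManager/UpdatePanel) is available in the project. -->
        <asp:ScriptManager ID="ScriptManager1" runat="server" />
        <asp:UpdatePanel ID="UpdatePanel1" runat="server">
            <ContentTemplate>
                <!-- Same TreeView as above; node expansion now refreshes only this panel. -->
                <asp:TreeView ID="TreeView1" runat="server" ShowLines="True"
                    PopulateNodesFromClient="True" EnableClientScript="True" NodeWrap="True"
                    OnTreeNodePopulate="TreeView1_TreeNodePopulate" ExpandDepth="0" />
            </ContentTemplate>
        </asp:UpdatePanel>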

  • Ejabberd clustering problem with Amazon EC2 server

    - by user353362
    Hello guys! I have been trying to install an ejabberd server on an Amazon EC2 instance and am kind of stuck at this step right now. I am following this guide: http://tdewolf.blogspot.com/2009/07/clustering-ejabberd-nodes-using-mnes... From the guide I have successfully completed the "Set up First Node (on ejabberd1)" part, but am stuck in part 4 of "Set up Second Node (on ejabberd2)". All in all, I created the main node, am able to run the server on that node and can access its admin console from the internet. On the second node I have installed ejabberd, but I am stuck at point 4 of the setup instructions presented in that blog. I execute this command on the second server:

        erl -sname ejabberd@domU-12-31-39-0F-7D-14 -mnesia dir '"/var/lib/ejabberd/"' \
            -mnesia extra_db_nodes "['ejabberd@domU-12-31-39-02-C8-36']" -s mnesia

    and get a crashing error:

        root@domU-12-31-39-0F-7D-14:/var/lib/ejabberd# erl -sname ejabberd@domU-12-31-39-0F-7D-14 -mnesia dir '"/var/lib/ejabberd/"' -mnesia extra_db_nodes "['ejabberd@domU-12-31-39-02-C8-36']" -s mnesia
        {error_logger,{{2010,5,28},{23,52,25}},"Protocol: ~p: register error: ~p~n",["inet_tcp",{{badmatch,{error,duplicate_name}},[{inet_tcp_dist,listen,1},{net_kernel,start_protos,4},{net_kernel,start_protos,3},{net_kernel,init_node,2},{net_kernel,init,1},{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}]}
        {error_logger,{{2010,5,28},{23,52,25}},crash_report,[[{pid,<0.21.0>},{registered_name,net_kernel},{error_info,{exit,{error,badarg},[{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}},{initial_call,{net_kernel,init,['Argument__1']}},{ancestors,[net_sup,kernel_sup,<0.8.0>]},{messages,[]},{links,[#Port<0.52>,<0.18.0>]},{dictionary,[{longnames,false}]},{trap_exit,true},{status,running},{heap_size,610},{stack_size,23},{reductions,518}],[]]}
        {error_logger,{{2010,5,28},{23,52,25}},supervisor_report,[{supervisor,{local,net_sup}},{errorContext,start_error},{reason,{'EXIT',nodistribution}},{offender,[{pid,undefined},{name,net_kernel},{mfa,{net_kernel,start_link,[['ejabberd@domU-12-31-39-0F-7D-14',shortnames]]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]}]}
        {error_logger,{{2010,5,28},{23,52,25}},supervisor_report,[{supervisor,{local,kernel_sup}},{errorContext,start_error},{reason,shutdown},{offender,[{pid,undefined},{name,net_sup},{mfa,{erl_distribution,start_link,[]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]}]}
        {error_logger,{{2010,5,28},{23,52,25}},crash_report,[[{pid,<0.7.0>},{registered_name,[]},{error_info,{exit,{shutdown,{kernel,start,[normal,[]]}},[{application_master,init,4},{proc_lib,init_p_do_apply,3}]}},{initial_call,{application_master,init,['Argument_1','Argument_2','Argument_3','Argument_4']}},{ancestors,[<0.6.0>]},{messages,[{'EXIT',<0.8.0>,normal}]},{links,[<0.6.0>,<0.5.0>]},{dictionary,[]},{trap_exit,true},{status,running},{heap_size,233},{stack_size,23},{reductions,123}],[]]}
        {error_logger,{{2010,5,28},{23,52,25}},std_info,[{application,kernel},{exited,{shutdown,{kernel,start,[normal,[]]}}},{type,permanent}]}
        {"Kernel pid terminated",application_controller,"{application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}}"}
        Crash dump was written to: erl_crash.dump
        Kernel pid terminated (application_controller) ({application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}})
        root@domU-12-31-39-0F-7D-14:/var/lib/ejabberd#

    Any idea what is going on?
    I am not really sure how to solve this problem. How can I get ejabberd to accept registration only from one particular server?

    Follow-up submitted by privateson on Sat, 2010-05-29 00:11: before this I was getting the error shown below, which I solved by running:

        chmod 400 .erlang.cookie

    Also, to copy the cookie I simply created a file using vi on the second server and copied the secret code from server one into it. Is that the right way of copying the .erlang.cookie file? The earlier error was:

        root@domU-12-31-39-0F-7D-14:/etc/ejabberd# erl -sname ejabberd@domU-12-31-39-0F-7D-14 -mnesia dir '"/var/lib/ejabberd/"' -mnesia extra_db_nodes "['ejabberd@domU-12-31-39-02-C8-36']" -s mnesia
        {error_logger,{{2010,5,28},{23,28,56}},"Cookie file /root/.erlang.cookie must be accessible by owner only",[]}
        {error_logger,{{2010,5,28},{23,28,56}},crash_report,[[{pid,<0.20.0>},{registered_name,auth},{error_info,{exit,{"Cookie file /root/.erlang.cookie must be accessible by owner only",[{auth,init_cookie,0},{auth,init,1},{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]},[{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}},{initial_call,{auth,init,['Argument__1']}},{ancestors,[net_sup,kernel_sup,<0.8.0>]},{messages,[]},{links,[<0.18.0>]},{dictionary,[]},{trap_exit,true},{status,running},{heap_size,987},{stack_size,23},{reductions,439}],[]]}
        {error_logger,{{2010,5,28},{23,28,56}},supervisor_report,[{supervisor,{local,net_sup}},{errorContext,start_error},{reason,{"Cookie file /root/.erlang.cookie must be accessible by owner only",[{auth,init_cookie,0},{auth,init,1},{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}},{offender,[{pid,undefined},{name,auth},{mfa,{auth,start_link,[]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]}]}
        {error_logger,{{2010,5,28},{23,28,56}},supervisor_report,[{supervisor,{local,kernel_sup}},{errorContext,start_error},{reason,shutdown},{offender,[{pid,undefined},{name,net_sup},{mfa,{erl_distribution,start_link,[]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]}]}
        {error_logger,{{2010,5,28},{23,28,56}},crash_report,[[{pid,<0.7.0>},{registered_name,[]},{error_info,{exit,{shutdown,{kernel,start,[normal,[]]}},[{application_master,init,4},{proc_lib,init_p_do_apply,3}]}},{initial_call,{application_master,init,['Argument_1','Argument_2','Argument_3','Argument_4']}},{ancestors,[<0.6.0>]},{messages,[{'EXIT',<0.8.0>,normal}]},{links,[<0.6.0>,<0.5.0>]},{dictionary,[]},{trap_exit,true},{status,running},{heap_size,233},{stack_size,23},{reductions,123}],[]]}
        {error_logger,{{2010,5,28},{23,28,56}},std_info,[{application,kernel},{exited,{shutdown,{kernel,start,[normal,[]]}}},{type,permanent}]}
        {"Kernel pid terminated",application_controller,"{application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}}"}
        Crash dump was written to: erl_crash.dump
        Kernel pid terminated (application_controller) ({application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}})

    And the ejabberd log:

        root@domU-12-31-39-0F-7D-14:/var/lib/ejabberd# cat /var/log/ejabberd/ejabberd.log
        =INFO REPORT==== 2010-05-28 22:48:53 ===
        I(<0.321.0>:mod_pubsub:154) : pubsub init "localhost" [{access_createnode, pubsub_createnode}, {plugins, ["default","pep"]}]
        =INFO REPORT==== 2010-05-28 22:48:53 ===
        I(<0.321.0>:mod_pubsub:210) : ** tree plugin is nodetree_default
        =INFO REPORT==== 2010-05-28 22:48:53 ===
        I(<0.321.0>:mod_pubsub:214) : ** init default plugin
        =INFO REPORT==== 2010-05-28 22:48:53 ===
        I(<0.321.0>:mod_pubsub:214) : ** init pep plugin
        =ERROR REPORT==== 2010-05-28 23:40:08 ===
        ** Connection attempt from disallowed node 'ejabberdctl1275090008486951000@domU-12-31-39-0F-7D-14' **
        =ERROR REPORT==== 2010-05-28 23:41:10 ===
        ** Connection attempt from disallowed node 'ejabberdctl1275090070163253000@domU-12-31-39-0F-7D-14' **
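    (Aside, not from the original thread: {badmatch,{error,duplicate_name}} raised from inet_tcp_dist usually means an Erlang node with the same -sname is already registered on that host - most often the running ejabberd daemon itself, since it also starts as ejabberd@<hostname>. Under that assumption, a first check might look like this sketch:)

        # Sketch only: stop the running ejabberd node before starting an Erlang shell with the same name.
        ejabberdctl stop                 # or: /etc/init.d/ejabberd stop
        epmd -names                      # list Erlang node names currently registered on this host
        erl -sname ejabberd@domU-12-31-39-0F-7D-14 -mnesia dir '"/var/lib/ejabberd/"' \
            -mnesia extra_db_nodes "['ejabberd@domU-12-31-39-02-C8-36']" -s mnesia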

  • Drawing lines between views in a TableLayout

    - by RiThBo
    Firstly - sorry if you saw my other question, which I deleted; it was flawed. Here is a better version. If I have two views, how do I draw a (straight) line between them when one of them is touched? The line needs to be dynamic so it can follow the finger until it reaches the second view, where it "locks on". So, when view1 is touched, a straight line is drawn which then follows the finger until it reaches view2. I created a LineView class that extends View, but I don't know how to proceed. I read some tutorials but none show how to do this. I think I need to get the coordinates of both views and then create a path which "updates" on MotionEvent. I can get the coordinates and the ids of the views I want to draw a line between, but the line that I try to draw between them either goes above them, or does not reach past the width and height of the view. Any advice/code/clarity would be much appreciated! Here is some code. My layout - I want to draw a line between two views contained in the TableLayout:

        <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:id="@+id/activity_game_relative_layout"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            android:orientation="vertical" >
            <TableLayout
                android:layout_marginTop="35dp"
                android:layout_marginBottom="35dp"
                android:id="@+id/tableLayout1"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_centerInParent="true" >
                <TableRow android:id="@+id/table_row_1"
                    android:layout_width="wrap_content" android:layout_height="wrap_content"
                    android:padding="5dip" >
                    <com.example.view.DotView android:id="@+id/game_dot_1"
                        android:layout_width="wrap_content" android:layout_height="wrap_content"
                        android:layout_marginBottom="10dp" android:layout_marginLeft="10dp"
                        android:layout_marginRight="10dp" android:layout_marginTop="10dp" />
                    <com.example.view.DotView android:id="@+id/game_dot_2"
                        android:layout_width="wrap_content" android:layout_height="wrap_content"
                        android:layout_marginBottom="10dp" android:layout_marginLeft="10dp"
                        android:layout_marginRight="10dp" android:layout_marginTop="10dp" />
                    <com.example.view.DotView android:id="@+id/game_dot_3"
                        android:layout_width="wrap_content" android:layout_height="wrap_content"
                        android:layout_marginBottom="10dp" android:layout_marginLeft="10dp"
                        android:layout_marginRight="10dp" android:layout_marginTop="10dp" />
                </TableRow>
                <TableRow android:id="@+id/table_row_2"
                    android:layout_width="wrap_content" android:layout_height="wrap_content"
                    android:padding="5dip" >
                    <com.example.view.DotView android:id="@+id/game_dot_7"
                        android:layout_width="wrap_content" android:layout_height="wrap_content"
                        android:layout_marginBottom="10dp" android:layout_marginLeft="10dp"
                        android:layout_marginRight="10dp" android:layout_marginTop="10dp" />
                    <com.example.view.DotView android:id="@+id/game_dot_8"
                        android:layout_width="wrap_content" android:layout_height="wrap_content"
                        android:layout_marginBottom="10dp" android:layout_marginLeft="10dp"
                        android:layout_marginRight="10dp" android:layout_marginTop="10dp" />
                    <com.example.dotte.DotView android:id="@+id/game_dot_9"
                        android:layout_width="wrap_content" android:layout_height="wrap_content"
                        android:layout_marginBottom="10dp" android:layout_marginLeft="10dp"
                        android:layout_marginRight="10dp" android:layout_marginTop="10dp" />
                </TableRow>
            </TableLayout>
        </RelativeLayout>

    This is my LineView class. I use this to try and draw the actual line between the points:

        public class LineView extends View {
            Paint paint = new Paint();
            float startingX, startingY, endingX, endingY;

            public LineView(Context context) {
                super(context);
                paint.setColor(Color.BLACK);
                paint.setStrokeWidth(10);
            }

            public void setPoints(float startX, float startY, float endX, float endY) {
                startingX = startX;
                startingY = startY;
                endingX = endX;
                endingY = endY;
                invalidate();
            }

            @Override
            public void onDraw(Canvas canvas) {
                canvas.drawLine(startingX, startingY, endingX, endingY, paint);
            }
        }

    And this is my Activity:

        public class Game extends Activity {
            DotView dv1, dv2, dv3, dv4, dv5, dv6, dv7;

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.activity_game);
                getWindow().getDecorView().setSystemUiVisibility(View.SYSTEM_UI_FLAG_LOW_PROFILE);
                findDotIds();
                RelativeLayout rl = (RelativeLayout) findViewById(R.id.activity_game_relative_layout);
                LineView lv = new LineView(this);
                lv.setPoints(dv1.getLeft(), dv1.getTop(), dv7.getLeft(), dv7.getTop());
                rl.addView(lv);
                // TODO: Get the coordinates of all the dots in the TableLayout. Use a view tree observer.
            }

            private void findDotIds() {
                dv1 = (DotView) findViewById(R.id.game_dot_1);
                dv2 = (DotView) findViewById(R.id.game_dot_2);
                dv3 = (DotView) findViewById(R.id.game_dot_3);
                dv4 = (DotView) findViewById(R.id.game_dot_4);
                dv5 = (DotView) findViewById(R.id.game_dot_5);
                dv6 = (DotView) findViewById(R.id.game_dot_6);
                dv7 = (DotView) findViewById(R.id.game_dot_7);
            }
        }

    The views I want to draw lines between are in the TableLayout.
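    (Aside, not from the original question: one sketch of the "follow the finger" part is to overlay the LineView on the whole layout, start the line at the first dot's centre, update the end point on every ACTION_MOVE, and snap to the second dot's centre once the finger enters its bounds. Note that getLeft()/getTop() are relative to each view's parent, so inside a nested TableLayout they generally need translating into the overlay's coordinate space (e.g. via getLocationInWindow) - which matches the offset described above. The method below is assumed to live inside the Game activity, with lv, dv1 and dv7 as fields:)

        // Sketch only: assumes lv overlays the RelativeLayout and that dot
        // coordinates have already been translated into lv's coordinate space.
        @Override
        public boolean onTouchEvent(MotionEvent event) {
            switch (event.getAction()) {
                case MotionEvent.ACTION_DOWN:
                case MotionEvent.ACTION_MOVE:
                    float startX = dv1.getLeft() + dv1.getWidth() / 2f;  // centre of first dot
                    float startY = dv1.getTop() + dv1.getHeight() / 2f;
                    float endX = event.getX();
                    float endY = event.getY();
                    // "Lock on" when the finger is inside the second dot's bounds.
                    Rect bounds = new Rect(dv7.getLeft(), dv7.getTop(), dv7.getRight(), dv7.getBottom());
                    if (bounds.contains((int) endX, (int) endY)) {
                        endX = dv7.getLeft() + dv7.getWidth() / 2f;
                        endY = dv7.getTop() + dv7.getHeight() / 2f;
                    }
                    lv.setPoints(startX, startY, endX, endY);
                    return true;
            }
            return super.onTouchEvent(event);
        }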

  • Drupal Adding Span inside A tags in Nice Menus

    - by Chris
    I am trying to add drop-down menus to a Drupal theme which uses text sliding-door CSS rounding. The current version uses a primary-links injection of the span into the a tags, which works fine but doesn't support drop-down menus. Working code in the template:

        <?php print theme('links', $primary_links, array('class' => 'links primary-links')) ?>

    with this addition in the template.php file:

        <?php
        // Function for injecting spans inside anchors, which we need for the
        // theme's rounded-corner background images.
        function strands_guybrush_links($links, $attributes = array('class' => 'links')) {
          $output = '';
          if (count($links) > 0) {
            $output = '<ul'. drupal_attributes($attributes) .'>';
            $num_links = count($links);
            $i = 1;
            foreach ($links as $key => $link) {
              $class = $key;
              // Add first, last and active classes to the list of links to help out themers.
              if ($i == 1) {
                $class .= ' first';
              }
              if ($i == $num_links) {
                $class .= ' last';
              }
              if (isset($link['href']) && ($link['href'] == $_GET['q'] || ($link['href'] == '<front>' && drupal_is_front_page()))) {
                $class .= ' active';
              }
              $output .= '<li'. drupal_attributes(array('class' => $class)) .'>';
              if (isset($link['href'])) {
                $link['title'] = '<span class="link">' . check_plain($link['title']) . '</span>';
                $link['html'] = TRUE;
                // Pass in $link as $options, they share the same keys.
                $output .= l($link['title'], $link['href'], $link);
              }
              else if (!empty($link['title'])) {
                // Some links are actually not links, but we wrap these in <span>
                // for adding title and class attributes.
                if (empty($link['html'])) {
                  $link['title'] = check_plain($link['title']);
                }
                $span_attributes = '';
                if (isset($link['attributes'])) {
                  $span_attributes = drupal_attributes($link['attributes']);
                }
                $output .= '<span'. $span_attributes .'>'. $link['title'] .'</span>';
              }
              $i++;
              $output .= "</li>\n";
            }
            $output .= '</ul>';
          }
          return $output;
        }
        ?>

    So I have added the Nice Menus module (drupal.org/project/nice_menus), which works well and allows the drop-down menu functions for my navigation, now addressed from the template using:

        <?php print theme_nice_menu_primary_links() ?>

    The issue is that the a tags need to have spans inside to allow for the selected-state markup. I have tried every angle I could find to edit the Drupal function menu_item_link, which Nice Menus uses to build the links. I looked at the Drupal forum for two days and no joy. The lines in the module that build the links are:

        function theme_nice_menu_build($menu) {
          $output = '';
          // Find the active trail and pull out the menu ids.
          menu_set_active_menu_name('primary-links');
          $trail = menu_get_active_trail('primary-links');
          foreach ($trail as $item) {
            $trail_ids[] = $item['mlid'];
          }
          foreach ($menu as $menu_item) {
            $mlid = $menu_item['link']['mlid'];
            // Check to see if it is a visible menu item.
            if ($menu_item['link']['hidden'] == 0) {
              // Build class name based on menu path,
              // e.g. to give each menu item individual style.
              // Strip funny symbols.
              $clean_path = str_replace(array('http://', '<', '>', '&', '=', '?', ':'), '', $menu_item['link']['href']);
              // Convert slashes to dashes.
              $clean_path = str_replace('/', '-', $clean_path);
              $class = 'menu-path-'. $clean_path;
              $class .= in_array($mlid, $trail_ids) ? ' active' : '';
              // If it has children build a nice little tree under it.
              if ((!empty($menu_item['link']['has_children'])) && (!empty($menu_item['below']))) {
                // Keep passing children into the function 'til we get them all.
                $children = theme('nice_menu_build', $menu_item['below']);
                // Set the class to parent only if children are displayed.
                $class .= $children ? ' menuparent ' : '';
                // Add an expanded class for items in the menu trail.
                $output .= '<li id="menu-'. $mlid .'" class="'. $class .'">'. theme('menu_item_link', $menu_item['link']);
                // Build the child UL only if children are displayed for the user.
                if ($children) {
                  $output .= '<ul>';
                  $output .= $children;
                  $output .= "</ul>\n";
                }
                $output .= "</li>\n";
              }
              else {
                $output .= '<li id="menu-'. $mlid .'" class="'. $class .'">'. theme('menu_item_link', $menu_item['link']) .'</li>'."\n";
              }
            }
          }
          return $output;
        }

    As you can see, the $output uses menu_item_link to parse the array into links and to add the class of active to the selected navigation link. The question is: how do I add a span inside the a tags, OR how do I wrap the a tags with a span that has the active class, to style the sliding-door links? References: drupal.org/project/nice_menus, drupal.org/node/53233
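    (Aside, not from the original post: since theme('menu_item_link', ...) is themable, a Drupal 6 theme can override it in template.php and inject the span there - the same trick strands_guybrush_links() already uses. A sketch, assuming Drupal 6 where menu links carry a 'localized_options' array; THEMENAME stands for the theme's machine name:)

        <?php
        // Sketch only: override of theme_menu_item_link for a theme named THEMENAME.
        function THEMENAME_menu_item_link($link) {
          if (empty($link['localized_options'])) {
            $link['localized_options'] = array();
          }
          // Wrap the title in a span so the sliding-door backgrounds have a hook.
          $link['localized_options']['html'] = TRUE;
          $title = '<span class="link">' . check_plain($link['title']) . '</span>';
          return l($title, $link['href'], $link['localized_options']);
        }
        ?>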

  • Intermittent Internet Explorer Error whilst using Javascript to traverse XML Nodes

    - by Sykth
    Hi there all. Basically, I use a JavaScript SOAP plugin to send and receive XML from a web service. Recently I've been experiencing intermittent (1 in every 20-30 times) errors in IE when trying to access data in the XML file. Bear with me on this; I'm going to go into a fair amount of detail to help anyone who is willing to read this. I have an html file with an external JavaScript file attached. In the JavaScript file I include 5 global variables:

        var soapRes;
        var questionXML;
        var xRoot;
        var assessnode;
        var edStage = 0;

    Once the html file is at a ready state, I request the XML file from the web service like this:

        function initNextStage(strA) {
            // Set pl parameters here
            var pl = new SOAPClientParameters();
            // SOAP call
            var r = SOAPClient.invoke(url, "LoadAssessment", pl, true, initStage_callBack);
        }

        function initStage_callBack(r, soapResponse) {
            soapRes = soapResponse;
            initvalues();

            function initvalues() {
                xRoot = soapRes.documentElement;
                assessnode = xRoot.getElementsByTagName("ed")[0];
                questionXML = assessnode.getElementsByTagName("question")[edStage];
            }
        }

    Once this has completed, the XML should be loaded into memory via the global variable soapRes. I then save values to the XML before moving one node down the XML tree and pulling the next set of values out of it:

        function submitAns1() {
            var objAns;
            var corElem;
            var corElemTxt;
            questionXML = assessnode.getElementsByTagName("question")[edStage];
            objAns = questionXML.getElementsByTagName("item")[0].getAttribute("answer");
            corElem = soapRes.createElement("correct");
            corElemTxt = soapRes.createTextNode("value");
            questionXML.appendChild(corElem);
            corElem.appendChild(corElemTxt);
            if (objAns == "t") {
                corElemTxt.nodeValue = "true";
            } else {
                corElemTxt.nodeValue = "false";
            }
            edStage++;
            questionXML = assessnode.getElementsByTagName("question")[edStage];
            var inc1 = questionXML.getElementsByTagName("sentence")[0];
            var inc2 = questionXML.getElementsByTagName("sentence")[1];
            var edopt1 = questionXML.getElementsByTagName("item")[0];
            var edopt2 = questionXML.getElementsByTagName("item")[1];
            var edopt3 = questionXML.getElementsByTagName("item")[2];
            var edopt4 = questionXML.getElementsByTagName("item")[3];
            var edopt5 = questionXML.getElementsByTagName("item")[4];
            var temp1 = edopt1.childNodes[0].nodeValue;
            var temp2 = edopt2.childNodes[0].nodeValue;
            var temp3 = edopt3.childNodes[0].nodeValue;
            var temp4 = edopt4.childNodes[0].nodeValue;
            var temp5 = edopt5.childNodes[0].nodeValue;
        }

    This is an example of our XML:

        <ed>
            <appid>Administrator</appid>
            <formid>ED009</formid>
            <question id="1">
                <sentence id="1">[email protected]</sentence>
                <sentence id="2">[email protected]</sentence>
                <item id="0" answer="f" value="0">0</item>
                <item id="1" answer="f" value="1">1</item>
                <item id="2" answer="f" value="2">2</item>
                <item id="3" answer="t" value="3">3</item>
                <item id="4" answer="f" value="4">4</item>
            </question>
            <question id="2">
                <sentence id="1">Beausdene 13</sentence>
                <sentence id="2">Beauscene 83</sentence>
                <item id="0" answer="f" value="0">0</item>
                <item id="1" answer="f" value="1">1</item>
                <item id="2" answer="t" value="2">2</item>
                <item id="3" answer="f" value="3">3</item>
                <item id="4" answer="f" value="4">4</item>
            </question>
        </ed>

    The error I receive intermittently is that "questionXML" is null or not an object - specifically on this line, when I call the submitAns1() method:

        questionXML = assessnode.getElementsByTagName("question")[edStage];

    This solution works on FF, Opera, Chrome & Safari without any issues as far as I am aware. Am I accessing the XML nodes incorrectly? I've been toying with fixes for this bug for weeks now, and I'd really appreciate any insight into what I'm doing wrong. On a side note, this is a personal project of mine - it's not university related; I'm much too old for that! Any help greatly appreciated. Regards, Sykth.
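    (Aside, not from the original question: since the failure is intermittent and IE-only, one defensive sketch is to verify that the SOAP callback has actually (re)populated the globals before traversing them, and to bail out loudly instead of dereferencing null - the helper name getQuestion is illustrative, not from the post:)

        // Sketch only: guards around the global XML state before traversal.
        function getQuestion(stage) {
            if (!soapRes || !assessnode) {
                // The SOAP callback has not (re)populated the globals yet.
                alert("Assessment XML not loaded yet - please retry.");
                return null;
            }
            var questions = assessnode.getElementsByTagName("question");
            if (stage >= questions.length) {
                return null;  // walked past the last <question> node
            }
            return questions[stage];
        }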

  • Very simple, terse and easy GUI programming “frameworks”

    - by jetxee
    Please list GUI programming libraries, toolkits and frameworks which allow you to write GUI apps quickly. I mean in such a way that: the GUI is described entirely in a human-readable (and human-writable) plain text file (code); the code is terse (1 or 2 lines of code per widget/event pair), suitable for scripting; the structure and operation of the GUI are evident from the code (nesting of widgets and flow of events); details about how to build the GUI are hidden (things like the mainloop, attaching event listeners, etc.); and auto-layouts are supported (vboxes, hboxes, etc.). As answers suggest, this may be defined as declarative GUI programming, but it is not necessarily such. Any approach is OK if it works, is easy to use and is terse. There are some GUI libraries/toolkits like this; they are listed below. Please extend the list if you see a qualifying toolkit missing. Indicate if the project is crossplatform, mature and active, and give an example if possible. Please use this wiki to discuss only open source projects. This is the list so far (in alphabetical order):

    Fudgets. A Haskell library. Platform: Unix. Status: experimental, but still maintained. An example:

        import Fudgets
        main = fudlogue (shellF "Hello" (labelF "Hello, world!" >+< quitButtonF))

    GNUstep Renaissance. Renaissance allows you to describe the GUI in simple XML. Platforms: OSX/GNUstep. Status: part of GNUstep. An example:

        <window title="Example">
            <vbox>
                <label font="big">
                    Click the button below to quit the application
                </label>
                <button title="Quit" action="terminate:"/>
            </vbox>
        </window>

    HTML. HTML-based GUI (HTML + JS). Crossplatform, mature. Can be used entirely on the client side. Looking for a nice "helloworld" example.

    JavaFX. JavaFX is usable for standalone (desktop) apps as well as for web applications. Not completely crossplatform, not yet completely open source. Status: 1.0 release. An example (a screenshot is needed):

        Frame {
            content: Button {
                text: "Press Me"
                action: operation() {
                    System.out.println("You pressed me");
                }
            }
            visible: true
        }

    Phooey. Another Haskell library. Crossplatform (wxWidgets), HTML+JS backend planned. Mature and active. An example (a little more than a helloworld):

        ui1 :: UI ()
        ui1 = title "Shopping List" $ do
            a <- title "apples" $ islider (0,10) 3
            b <- title "bananas" $ islider (0,10) 7
            title "total" $ showDisplay (liftA2 (+) a b)

    PythonCard. PythonCard describes the GUI in a Python dictionary. Crossplatform (wxWidgets). Some apps use it, but the project seems stalled; there is an active fork. I skip the PythonCard example because it is too verbose for the contest.

    Shoes. Shoes for Ruby. Platforms: Win/OSX/GTK+. Status: young but active. A minimal app looks like this:

        Shoes.app {
            @push = button "Push me"
            @note = para "Nothing pushed so far"
            @push.click { @note.replace "Aha! Click!" }
        }

    Tcl/Tk. Crossplatform (its own widget set). Mature (probably even dated) and active. An example:

        #!/usr/bin/env wish
        button .hello -text "Hello, World!" -command { exit }
        pack .hello
        tkwait window .

    tekUI. tekUI for Lua (and C). Platforms: X11, DirectFB. Status: alpha (usable, but the API still evolves). An example:

        #/usr/bin/env lua
        ui = require "tek.ui"
        ui.Application:new {
            Children = {
                ui.Window:new {
                    Title = "Hello",
                    Children = {
                        ui.Text:new {
                            Text = "_Hello, World!",
                            Style = "button",
                            Mode = "button",
                        },
                    },
                },
            },
        }:run()

    Treethon. Treethon for Python. It describes the GUI in a YAML file (Python in a YAML tree). Platform: GTK+. Status: work in progress. A simple app looks like this:

        _import: gtk
        view: gtk.Window()
        add:
            - view: gtk.Button('Hello World')
              on clicked: print view.get_label()

    Yet-unnamed Python library by Richard Jones. This one is not released yet. The idea is to use Python context managers (the with keyword) to structure GUI code. See Richard Jones' blog for details:

        with gui.vertical:
            text = gui.label('hello!')
            items = gui.selection(['one', 'two', 'three'])
            with gui.button('click me!'):
                def on_click():
                    text.value = items.value
                    text.foreground = red

    XUL. XUL + JavaScript may be used to create stand-alone desktop apps with XULRunner, as well as Mozilla extensions. Mature, open source, crossplatform. An example:

        <?xml version="1.0"?>
        <?xml-stylesheet href="chrome://global/skin/" type="text/css"?>
        <window id="main" title="My App" width="300" height="300"
            xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
            <caption label="Hello World"/>
        </window>

    Thank you for your contributions!

  • C#: Inheritance, Overriding, and Hiding

    - by Rosarch
    I'm having difficulty with an architectural decision for my C# XNA game. The basic entity in the world, such as a tree, zombie, or the player, is represented as a GameObject. Each GameObject is composed of at least a GameObjectController, GameObjectModel, and GameObjectView. These three are enough for simple entities, like inanimate trees or rocks. However, as I try to keep the functionality as factored out as possible, the inheritance begins to feel unwieldy. Syntactically, I'm not even sure how best to accomplish my goals. Here is the GameObjectController:

        public class GameObjectController
        {
            protected GameObjectModel model;
            protected GameObjectView view;

            public GameObjectController(GameObjectManager gameObjectManager)
            {
                this.gameObjectManager = gameObjectManager;
                model = new GameObjectModel(this);
                view = new GameObjectView(this);
            }

            public GameObjectManager GameObjectManager
            {
                get { return gameObjectManager; }
            }

            public virtual GameObjectView View
            {
                get { return view; }
            }

            public virtual GameObjectModel Model
            {
                get { return model; }
            }

            public virtual void Update(long tick) { }
        }

    I want to specify that each subclass of GameObjectController will have accessible at least a GameObjectView and a GameObjectModel. If subclasses are fine using those classes, but are perhaps overriding for a more sophisticated Update() method, I don't want them to have to duplicate the code to produce those dependencies. So the GameObjectController constructor sets those objects up. However, some objects do want to override the model and view. This is where the trouble comes in. Some objects need to fight, so they are CombatantGameObjects:

        public class CombatantGameObject : GameObjectController
        {
            protected new readonly CombatantGameModel model;
            public new virtual CombatantGameModel Model
            {
                get { return model; }
            }

            protected readonly CombatEngine combatEngine;

            public CombatantGameObject(GameObjectManager gameObjectManager, CombatEngine combatEngine)
                : base(gameObjectManager)
            {
                model = new CombatantGameModel(this);
                this.combatEngine = combatEngine;
            }

            public override void Update(long tick)
            {
                if (model.Health <= 0)
                {
                    gameObjectManager.RemoveFromWorld(this);
                }
                base.Update(tick);
            }
        }

    Still pretty simple. Is my use of new to hide instance variables correct? Note that I'm assigning CombatantGameObject.model here, even though GameObjectController.Model was already set. And combatants don't need any special view functionality, so they leave GameObjectController.View alone. Then I get down to the PlayerController, at which point a bug is found:

        public class PlayerController : CombatantGameObject
        {
            private readonly IInputReader inputReader;

            private new readonly PlayerModel model;
            public new PlayerModel Model
            {
                get { return model; }
            }

            private float lastInventoryIndexAt;
            private float lastThrowAt;

            public PlayerController(GameObjectManager gameObjectManager, IInputReader inputReader, CombatEngine combatEngine)
                : base(gameObjectManager, combatEngine)
            {
                this.inputReader = inputReader;
                model = new PlayerModel(this);
                Model.Health = Constants.PLAYER_HEALTH;
            }

            public override void Update(long tick)
            {
                if (Model.Health <= 0)
                {
                    gameObjectManager.RemoveFromWorld(this);
                    for (int i = 0; i < 10; i++)
                    {
                        Debug.WriteLine("YOU DEAD SON!!!");
                    }
                    return;
                }
                UpdateFromInput(tick);
                // ....
            }
        }

    The first time this line is executed, I get a null reference exception:

        model.Body.ApplyImpulse(movementImpulse, model.Position);

    model.Position looks at model.Body, which is null. This is a function that initializes GameObjects before they are deployed into the world:

        public void Initialize(GameObjectController controller, IDictionary<string, string> data, WorldState worldState)
        {
            controller.View.read(data);
            controller.View.createSpriteAnimations(data, _assets);
            controller.Model.read(data);
            SetUpPhysics(controller, worldState, controller.Model.BoundingCircleRadius,
                Single.Parse(data["x"]), Single.Parse(data["y"]), bool.Parse(data["isBullet"]));
        }

    Every object is passed as a GameObjectController. Does that mean that if the object is really a PlayerController, controller.Model will refer to the base's GameObjectModel and not the PlayerController's overridden PlayerModel? In response to rh: this means that now, for a PlayerController p, p.Model is not equivalent to ((CombatantGameObject)p).Model, and also not equivalent to ((GameObjectController)p).Model. That is exactly what I do not want. I want:

        PlayerController p;
        p.Model == ((CombatantGameObject)p).Model
        p.Model == ((GameObjectController)p).Model

    How can I do this? override?
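    (Aside, not from the original question: new hides a member rather than overriding it, so binding is by the compile-time type of the reference - exactly why controller.Model sees the base field. One sketch that keeps a single storage slot in the base and adds typed accessors via casts (constructors trimmed for brevity; C# 2.0/3.x has no covariant return types, so the override cannot change the property's type):)

        // Sketch only: one protected field, one virtual property, typed views by cast.
        public class GameObjectController
        {
            protected GameObjectModel model;
            public virtual GameObjectModel Model { get { return model; } }
        }

        public class CombatantGameObject : GameObjectController
        {
            // No new field: the derived model is assigned into the single base slot.
            public CombatantGameModel CombatModel { get { return (CombatantGameModel)model; } }
            public CombatantGameObject() { model = new CombatantGameModel(this); }
        }

        public class PlayerController : CombatantGameObject
        {
            public PlayerModel PlayerModel { get { return (PlayerModel)model; } }
            public PlayerController()
            {
                model = new PlayerModel(this);   // every static type now sees the same object
            }
        }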

  • Merging elements inside a xml.etree.ElementTree

    - by theAlse
    I have a huge set of test data like the sample provided below (and yes, I have no control over this data). Each line actually has 6 parts, and I need to generate XML based on this data:

        Nav;Basic;Dest;Smoke;No;Yes;
        Nav;Dest;Recent;Regg;No;Yes;
        Nav;Dest;Favourites;Regg;No;Yes;
        ...
        Nav;Dest using on board;By POI;Smoke;No;Yes;
        Nav;Dest using on board;Other;Regg;No;Yes;

    The first 3 elements on each line denote "testsuite" XML elements, and the last 3 elements should create a "testcase" XML element. I have successfully converted it into XML using the following code:

        # testsuite (root)
        testsuite = ET.Element('testsuite')
        testsuite.set("name", "Tests")

        def _create_testcase_tag(elem):
            global testsuite
            level1, level2, level3, elem4, elem5, elem6 = elem
            # -- testsuite (level1)
            testsuite_level1 = ET.SubElement(testsuite, "testsuite")
            testsuite_level1.set("name", level1)
            # -- testsuite (level2)
            testsuite_level2 = ET.SubElement(testsuite_level1, "testsuite")
            testsuite_level2.set("name", level2)
            # -- testsuite (level3)
            testsuite_level2 = ET.SubElement(testsuite_level2, "testsuite")
            testsuite_level2.set("name", level3)
            # -- testcase
            testcase = ET.SubElement(testsuite_level2, "testcase")
            testcase.set("name", "TBD")
            summary = ET.SubElement(testcase, "summary")
            summary.text = "Test Type= %s, Automated= %s, Available=%s" % (elem4, elem5, elem6)

        lines_as_list = []
        with open(input_file) as in_file:
            for line_number, a_line in enumerate(in_file):
                try:
                    parameters = a_line.split(';')
                    if len(parameters) >= 6:
                        level1 = parameters[0].strip()
                        level2 = parameters[1].strip()
                        level3 = parameters[2].strip()
                        elem4 = parameters[3].strip()
                        elem5 = parameters[4].strip()
                        elem6 = parameters[5].strip()
                        lines_as_list.append((level1, level2, level3, elem4, elem5, elem6))
                except ValueError:
                    pass

        lines_as_list.sort()
        for elem in lines_as_list:
            _create_testcase_tag(elem)

        output_xml = ET.ElementTree(testsuite)
        ET.ElementTree.write(output_xml, output_file, xml_declaration=True, encoding="UTF-8")

    The above code generates XML like this:

        <testsuite name="Tests">
            <testsuite name="Nav">
                <testsuite name="Basic navigation">
                    <testsuite name="Set destination">
                        <testcase name="TBD">
                            <summary>Test Type= Smoke test Automated= No, Available=Yes</summary>
                        </testcase>
                    </testsuite>
                </testsuite>
            </testsuite>
            <testsuite name="Nav">
                <testsuite name="Set destination">
                    <testsuite name="Recent">
                        <testcase name="TBD">
                            <summary>
                                Test Type= Reggression test Automated= No, Available=Yes
                            </summary>
                        </testcase>
                    </testsuite>
                </testsuite>
            </testsuite>
        </testsuite>
        ...

    This is all correct, but as you can see I have created a whole tree for each line, and that is not what I need. I need to combine e.g. all testsuites with the same name into one testsuite, and also perform that recursively, so the XML looks like this instead:

        <testsuite name="Tests">
            <testsuite name="Nav">
                <testsuite name="Basic navigation">
                    <testsuite name="Set destination">
                        <testcase name="TBD">
                            <summary>Test Type= Smoke test Automated= No, Available=Yes</summary>
                        </testcase>
                    </testsuite>
                    <testsuite name="Recent">
                        <testcase name="TBD">
                            <summary>
                                Test Type= Reggression test Automated= No, Available=Yes
                            </summary>
                        </testcase>
                    </testsuite>
                </testsuite>
            </testsuite>
        </testsuite>

    I hope you can understand what I mean: level1, level2 and level3 should be unique, with testcases inside. How should I do this? Please do not suggest the use of any external libraries! I cannot install new libraries at the customer site; xml.etree.ElementTree is all I have. Thanks.
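    (Aside, not from the original question: one stdlib-only sketch of the merging itself, assuming the same sorted lines_as_list as above - keep a dictionary from the (level1, level2, level3) path to the SubElement already created, and only create a new <testsuite> the first time a path component is seen:)

        import xml.etree.ElementTree as ET

        testsuite = ET.Element('testsuite', name='Tests')
        suite_cache = {}  # maps a path tuple like ('Nav', 'Dest') to its element

        def get_suite(path):
            """Return the <testsuite> for this path, creating ancestors as needed."""
            if path in suite_cache:
                return suite_cache[path]
            parent = testsuite if len(path) == 1 else get_suite(path[:-1])
            elem = ET.SubElement(parent, 'testsuite', name=path[-1])
            suite_cache[path] = elem
            return elem

        for level1, level2, level3, elem4, elem5, elem6 in lines_as_list:
            suite = get_suite((level1, level2, level3))
            testcase = ET.SubElement(suite, 'testcase', name='TBD')
            summary = ET.SubElement(testcase, 'summary')
            summary.text = "Test Type= %s, Automated= %s, Available=%s" % (elem4, elem5, elem6)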

  • Getting a NullPointerException at seemingly random intervals, not sure why

    - by Miles
    I'm running an example from a Kinect library for Processing (http://www.shiffman.net/2010/11/14/kinect-and-processing/) and sometimes get a NullPointerException pointing to this line:

        int rawDepth = depth[offset];

    The depth array is created in this line:

        int[] depth = kinect.getRawDepth();

    I'm not exactly sure what a NullPointerException is, and much googling hasn't really helped. It seems odd to me that the code runs 70% of the time and returns the error unpredictably. Could the hardware itself be affecting it? Here's the whole example, if it helps:

        // Daniel Shiffman
        // Kinect Point Cloud example
        // http://www.shiffman.net
        // https://github.com/shiffman/libfreenect/tree/master/wrappers/java/processing

        import org.openkinect.*;
        import org.openkinect.processing.*;

        // Kinect Library object
        Kinect kinect;

        float a = 0;

        // Size of kinect image
        int w = 640;
        int h = 480;

        // We'll use a lookup table so that we don't have to repeat the math over and over
        float[] depthLookUp = new float[2048];

        void setup() {
          size(800, 600, P3D);
          kinect = new Kinect(this);
          kinect.start();
          kinect.enableDepth(true);
          // We don't need the grayscale image in this example
          // so this makes it more efficient
          kinect.processDepthImage(false);
          // Lookup table for all possible depth values (0 - 2047)
          for (int i = 0; i < depthLookUp.length; i++) {
            depthLookUp[i] = rawDepthToMeters(i);
          }
        }

        void draw() {
          background(0);
          fill(255);
          textMode(SCREEN);
          text("Kinect FR: " + (int)kinect.getDepthFPS() + "\nProcessing FR: " + (int)frameRate, 10, 16);

          // Get the raw depth as array of integers
          int[] depth = kinect.getRawDepth();

          // We're just going to calculate and draw every 4th pixel (equivalent of 160x120)
          int skip = 4;

          // Translate and rotate
          translate(width/2, height/2, -50);
          rotateY(a);

          for (int x = 0; x < w; x += skip) {
            for (int y = 0; y < h; y += skip) {
              int offset = x + y*w;

              // Convert kinect data to world xyz coordinate
              int rawDepth = depth[offset];
              PVector v = depthToWorld(x, y, rawDepth);

              stroke(255);
              pushMatrix();
              // Scale up by 200
              float factor = 200;
              translate(v.x*factor, v.y*factor, factor - v.z*factor);
              // Draw a point
              point(0, 0);
              popMatrix();
            }
          }

          // Rotate
          a += 0.015f;
        }

        // These functions come from: http://graphics.stanford.edu/~mdfisher/Kinect.html
        float rawDepthToMeters(int depthValue) {
          if (depthValue < 2047) {
            return (float)(1.0 / ((double)(depthValue) * -0.0030711016 + 3.3309495161));
          }
          return 0.0f;
        }

        PVector depthToWorld(int x, int y, int depthValue) {
          final double fx_d = 1.0 / 5.9421434211923247e+02;
          final double fy_d = 1.0 / 5.9104053696870778e+02;
          final double cx_d = 3.3930780975300314e+02;
          final double cy_d = 2.4273913761751615e+02;

          PVector result = new PVector();
          double depth = depthLookUp[depthValue]; //rawDepthToMeters(depthValue);
          result.x = (float)((x - cx_d) * depth * fx_d);
          result.y = (float)((y - cy_d) * depth * fy_d);
          result.z = (float)(depth);
          return result;
        }

        void stop() {
          kinect.quit();
          super.stop();
        }

    And here are the errors:

        processing.app.debug.RunnerException: NullPointerException
            at processing.app.Sketch.placeException(Sketch.java:1543)
            at processing.app.debug.Runner.findException(Runner.java:583)
            at processing.app.debug.Runner.reportException(Runner.java:558)
            at processing.app.debug.Runner.exception(Runner.java:498)
            at processing.app.debug.EventThread.exceptionEvent(EventThread.java:367)
            at processing.app.debug.EventThread.handleEvent(EventThread.java:255)
            at processing.app.debug.EventThread.run(EventThread.java:89)

        Exception in thread "Animation Thread" java.lang.NullPointerException
            at org.openkinect.processing.Kinect.enableDepth(Kinect.java:70)
            at PointCloud.setup(PointCloud.java:48)
            at processing.core.PApplet.handleDraw(PApplet.java:1583)
            at processing.core.PApplet.run(PApplet.java:1503)
            at java.lang.Thread.run(Thread.java:637)
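    (Aside, not from the original question: a NullPointerException simply means a reference was used before it pointed at an object; hardware timing can plausibly make kinect.getRawDepth() return null on frames before the device has delivered any depth data, which would make the failure look random. A defensive sketch for the draw loop:)

        // Sketch only: skip the frame when no depth data has arrived yet.
        void draw() {
          background(0);
          int[] depth = kinect.getRawDepth();
          if (depth == null || depth.length < w * h) {
            return;  // no complete depth frame available yet - try again next frame
          }
          // ... rest of the drawing code as before ...
        }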

  • C++ destructor seems to be called 'early'

    - by suicideducky
    Please see the "edit" section for the updated information. Sorry for yet another C++ dtor question... However, I can't seem to find one exactly like mine, as all the others are assigning to STL containers (which will delete objects themselves) whereas mine is to an array of pointers. So I have the following code fragment:

        #include <iostream>

        class Block {
        public:
            int x, y, z;
            int type;
            Block() {
                x = 1; y = 2; z = 3;
                type = -1;
            }
        };

        template <class T>
        class Octree {
            T* children[8];
        public:
            ~Octree() {
                for (int i = 0; i < 8; i++) {
                    std::cout << "del:" << i << std::endl;
                    delete children[i];
                }
            }

            Octree() {
                for (int i = 0; i < 8; i++)
                    children[i] = new T;
            }

            // place newchild in array at [i]
            void set_child(int i, T* newchild) { children[i] = newchild; }

            // return child at [i]
            T* get_child(int i) { return children[i]; }

            // place newchild at [i] and return the old [i]
            T* swap_child(int i, T* newchild) {
                T* p = children[i];
                children[i] = newchild;
                return p;
            }
        };

        int main() {
            Octree< Octree<Block> > here;
            std::cout << "nothing seems to have broken" << std::endl;
        }

    Looking through the output, I notice that the destructor is being called many times before I think it should be (as Octree is still in scope); the end of the output also shows:

        del:0
        del:0
        del:1
        del:2
        del:3
        Process returned -1073741819 (0xC0000005)   execution time : 1.685 s
        Press any key to continue.

    For some reason the destructor is going through the same point in the loop twice (0) and then dying. All of this occurs before the "nothing seems to have broken" line, which I expected before any dtor was called. Thanks in advance :)

    EDIT: The code I posted had some things removed that I thought were unnecessary, but after copying and compiling the code I pasted, I no longer get the error. What I removed was other integer attributes of the class. Here is the original:

        #include <iostream>

        class Block {
        public:
            int x, y, z;
            int type;
            Block() {
                x = 1; y = 2; z = 3;
                type = -1;
            }
            Block(int xx, int yy, int zz, int ty) {
                x = xx; y = yy; z = zz;
                type = ty;
            }
            Block(int xx, int yy, int zz) {
                x = xx; y = yy; z = zz;
                type = 0;
            }
        };

        template <class T>
        class Octree {
            int x, y, z;
            int size;
            T* children[8];
        public:
            ~Octree() {
                for (int i = 0; i < 8; i++) {
                    std::cout << "del:" << i << std::endl;
                    delete children[i];
                }
            }

            Octree(int xx, int yy, int zz, int size) {
                x = xx; y = yy; z = zz;
                size = size;
                for (int i = 0; i < 8; i++)
                    children[i] = new T;
            }

            Octree() {
                Octree(0, 0, 0, 10);
            }

            // place newchild in array at [i]
            void set_child(int i, T* newchild) { children[i] = newchild; }

            // return child at [i]
            T* get_child(int i) { return children[i]; }

            // place newchild at [i] and return the old [i]
            T* swap_child(int i, T* newchild) {
                T* p = children[i];
                children[i] = newchild;
                return p;
            }
        };

        int main() {
            Octree< Octree<Block> > here;
            std::cout << "nothing seems to have broken" << std::endl;
        }

    Also, as for the problems with set_child, get_child and swap_child leading to possible memory leaks: this will be solved, as a wrapper class will either use get before set, or use swap to get the old child and write it out to disk before freeing the memory itself. I am glad that it is not my memory management failing but rather another error. I have not made a copy and/or assignment operator yet, as I was just testing the block tree out; I will almost certainly make them all private very soon. This version spits out -1073741819. Thank you all for your suggestions, and I apologise for hijacking my own thread :$
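    (Aside, not from the original thread, offered as a hedged diagnosis: in C++03 the statement Octree(0, 0, 0, 10); inside Octree() does not delegate - it builds and immediately destroys an unnamed temporary, so the object actually being constructed leaves children[] full of garbage pointers, and its destructor later deletes them; the early "del:" output is those temporaries being destroyed. size = size; likewise just assigns the parameter to itself. One conventional fix is default arguments, sketched below:)

        // Sketch only: default arguments replace the broken "delegating" constructor (pre-C++11).
        template <class T>
        class Octree {
            int x, y, z;
            int size;
            T* children[8];
        public:
            Octree(int xx = 0, int yy = 0, int zz = 0, int sz = 10)
                : x(xx), y(yy), z(zz), size(sz) {
                for (int i = 0; i < 8; ++i)
                    children[i] = new T;          // every instance now initializes its array
            }
            ~Octree() {
                for (int i = 0; i < 8; ++i)
                    delete children[i];
            }
        private:
            // Suppress copying until a deep copy is written (rule of three).
            Octree(const Octree&);
            Octree& operator=(const Octree&);
        };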

  • Js/Jquery bind Datatable or Datalist

    - by Robin Rieger
    Scenario: I have an input box for postcodes. When you put in a postcode, it makes a call to a web service, gets all the suburb names back and binds them to a list, like an input drop-down list. This works when I return a List<string> from the web service for just the names. No problem. The problem arises when I go to save the values to the db: I cannot save the name of the suburb in the suburb column; I have to save the id of that suburb for the foreign key constraint. So I changed things slightly, cheated, and returned the following:

        <ArrayOfString>
            <string>8213</string>
            <string>BROOKFIELD</string>
            <string>8214</string>
            <string>CHAPEL HILL</string>
            <string>8215</string>
            <string>FIG TREE POCKET</string>
            ...

    The reason for this is so I can have the value of the item in the drop-down as the id, and hence save that in the code-behind on save. To do this, I did the following:

        $(result).each(function (index, value) {
            var suburbname;
            var pattern = new RegExp('[0-9*]');
            var m = pattern.exec(value);
            if (m == null) {
                suburbname = value;
                var o = new Option(suburbname, suburbid);
                // jquerify the DOM object 'o' so we can use the html method
                $(o).html(suburbname);
                $(document.getElementById('<%= suburb.ClientID %>')).append($("<option></option>")
                    .attr("value", suburbid)
                    .text(suburbname));
            } else {
                suburbid = value;
            }
        });

    That works as well... the drop-down only has names, and the value for each is the number. The problem....? I am making assumptions above which I don't like, i.e. that the name never has a number in it, and that the web service will always return the id first, setting the id before the add to the drop-down runs for the first time. It just feels wrong.... :S (thoughts?) So, if I change the web service to return a DataTable outputting the following (which is what the whole query returns):

        <Suburbs diffgr:id="Suburbs1" msdata:rowOrder="0">
            <SuburbID>8213</SuburbID>
            <SuburbName>BROOKFIELD</SuburbName>
            <StateID>4</StateID>
            <StateCode>07</StateCode>
            <CountryCode>61</CountryCode>
            <TimeZones>10.00</TimeZones>
            <Postcode>4069</Postcode>
        </Suburbs>

    is there a way of achieving the same as above in js/jquery? So when the user inputs the postcode, the web service returns the DataTable and then binds it to the select, with the SuburbID going into the value and the name going into the text? Any other ways I could return the data from the web service to solve this? Note: essentially I need the option to look like this:

        <option value="8213">BROOKFIELD</option>

    I also thought I could make a second call to get just the id on the bind of the text... but I kind of want to make only one call.... Thanks in advance. Cheers, Robin. (.net, C#, js, jquery, web service)

    Solution, with guidance from Billy: adding to the select using the other method did not work, but my original way did once I had the correct variables, so I just used that. The service returns:

        <ArrayOfMyClass>
            <MyClass>
                <ID>8213</ID>
                <Name>BROOKFIELD</Name>
            </MyClass>
            ... etc

    The js is (note: it runs onchange of the postcode input box, calls the web service, and then on success runs the following):

        function OnCompleted(result) {
            var _suburbs = result;
            var i = 0;
            $(_suburbs).each(function () {
                var SuburbName = _suburbs[i].Name;
                var SuburbID = _suburbs[i].ID;
                $(document.getElementById('<%= suburb.ClientID %>')).append($("<option></option>")
                    .attr("value", SuburbID)
                    .text(SuburbName));
                i = i + 1;
            });
        }
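    (Aside, not from the original post: for completeness, a hedged sketch of the service side that the solution implies - a small serializable pair class returned as a list, which ASP.NET web services serialize so the success callback receives objects with .ID and .Name. GetSuburbs and LookupSuburbs are illustrative names, not from the post:)

        // Sketch only: a minimal DTO plus the web method shape the JS above assumes.
        public class MyClass
        {
            public int ID { get; set; }
            public string Name { get; set; }
        }

        [WebMethod]
        public List<MyClass> GetSuburbs(string postcode)
        {
            // Illustrative lookup; the real query depends on the schema.
            return LookupSuburbs(postcode)
                .Select(s => new MyClass { ID = s.SuburbID, Name = s.SuburbName })
                .ToList();
        }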

  • JSF : How to refresh required field in ajax request

    - by Tama
    OK, here is the core problem. The page: I have two required input texts, and a command button that changes the bean value and re-renders the "job" input:

        <a4j:form id="pervForm">
            SURNAME:
            <h:inputText id="surname" label="Surname" value="#{prevManager.surname}" required="true" />
            <br/>
            JOB:
            <h:inputText value="#{prevManager.job}" id="job" maxlength="10" size="10"
                label="#{msg.common_label_job}" required="true" />
            <br/>
            <a4j:commandButton value="Set job to Programmer" ajaxSingle="true" reRender="job">
                <a4j:actionparam name="jVal" value="Programmer" assignTo="#{prevManager.job}"/>
            </a4j:commandButton>
            <h:commandButton id="save" value="save" action="save" class="HATSBUTTON"/>
        </a4j:form>

    Here is the simple manager:

        public class PrevManager {
            private String surname;
            private String job;

            public String getSurname() { return surname; }
            public void setSurname(String surname) { this.surname = surname; }
            public String getJob() { return job; }
            public void setJob(String job) { this.job = job; }

            public String save() {
                // do something
            }
        }

    Let's do this: write something in the job input text (such as "teacher"); leave the surname empty; save. A validation error appears (surname is mandatory). Now press "Set job to Programmer": nothing happens. Checking the bean value, I discovered that it is correctly updated - yet the component on the page is not updated! Well, according to the JBoss docs I found:

    "Ajax region is a key ajax component. It limits the part of the component tree to be processed on the server side when the ajax request comes. Processing means invocation during the Decode, Validation and Model Update phases. The most common reasons to use a region are: avoiding the aborting of JSF lifecycle processing during the validation of other form input unnecessary for a given ajax request; defining different strategies for when events will be delivered (immediate="true/false"); showing an individual indicator of an ajax status; increasing the performance of the rendering processing (selfRendered="true/false", renderRegionOnly="true/false"). The following two examples show the situation when a validation error does not allow an ajax input to be processed. Type the name. The outputText component should reappear after you. However, in the first case this activity will be aborted because of the other field with required="true". You will see only the error message while the "Job" field is empty."

    Here is the example:

        <ui:composition xmlns="http://www.w3.org/1999/xhtml"
            xmlns:ui="http://java.sun.com/jsf/facelets"
            xmlns:h="http://java.sun.com/jsf/html"
            xmlns:f="http://java.sun.com/jsf/core"
            xmlns:a4j="http://richfaces.org/a4j"
            xmlns:rich="http://richfaces.org/rich">
            <style>
                .outergridvalidationcolumn { padding: 0px 30px 10px 0px; }
            </style>
            <a4j:outputPanel ajaxRendered="true">
                <h:messages style="color:red" />
            </a4j:outputPanel>
            <h:panelGrid columns="2" columnClasses="outergridvalidationcolumn">
                <h:form id="form1">
                    <h:panelGrid columns="2">
                        <h:outputText value="Name" />
                        <h:inputText value="#{userBean.name}">
                            <a4j:support event="onkeyup" reRender="outname" />
                        </h:inputText>
                        <h:outputText value="Job" />
                        <h:inputText required="true" id="job2" value="#{userBean.job}" />
                    </h:panelGrid>
                </h:form>
                <h:form id="form2">
                    <h:panelGrid columns="2">
                        <h:outputText value="Name" />
                        <a4j:region>
                            <h:inputText value="#{userBean.name}">
                                <a4j:support event="onkeyup" reRender="outname" />
                            </h:inputText>
                        </a4j:region>
                        <h:outputText value="Job" />
                        <h:inputText required="true" id="job1" value="#{userBean.job}" />
                    </h:panelGrid>
                </h:form>
            </h:panelGrid>
            <h:outputText id="outname" style="font-weight:bold" value="Typed Name: #{userBean.name}" />
            <br />
        </ui:composition>

    Form1: the behaviour is incorrect - I need to fill in the job and then the name. Form2: the behaviour is correct - I do not need to fill in the job to see the correct value. Unfortunately, using an ajax region does not help (indeed, I used it in a bad way...) because my fields are both REQUIRED. That's the main difference. Any idea? Many thanks.
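    (Aside, not from the original thread, and necessarily tentative: after a failed validation, the job input still holds its submitted value, and a re-render shows that instead of the freshly assigned bean value. One workaround sketch is to bind the component into the bean and clear its submitted state when the job is set programmatically - jobInput and the binding attribute are assumptions, not from the post:)

        // Sketch only: assumes the inputText is bound via binding="#{prevManager.jobInput}".
        import javax.faces.component.UIInput;

        private UIInput jobInput;

        public UIInput getJobInput() { return jobInput; }
        public void setJobInput(UIInput jobInput) { this.jobInput = jobInput; }

        public void setJob(String job) {
            this.job = job;
            if (jobInput != null) {
                // Drop the stale submitted value left over from the failed validation,
                // so the re-render falls back to the model value.
                jobInput.setSubmittedValue(null);
                jobInput.setLocalValueSet(false);
                jobInput.setValid(true);
            }
        }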

  • System.Timers.Timer leaking due to "direct delegate roots"

    - by alimbada
    Apologies for the rather verbose and long-winded post, but this problem has been perplexing me for a few weeks now, so I'm posting as much information as I can in order to get it resolved quickly.

    We have a WPF UserControl which is being loaded by a 3rd-party app. The 3rd-party app is a presentation application which loads and unloads controls on a schedule defined by an XML file downloaded from a server. Our control, when it is loaded into the application, makes a request to a web service and uses the data from the response to display some information.

    We're using an MVVM architecture for the control. The entry point of the control is a method implementing an interface exposed by the main app, and this is where the control's configuration is set up. This is also where I set the DataContext of our control to our MainViewModel. The MainViewModel has two other view models as properties, and the main UserControl has two child controls. Depending on the data received from the web service, the main UserControl decides which child control to display: e.g. if there is an HTTP error or the data received is not valid, display child control A; otherwise, display child control B. As you'd expect, these two child controls bind to two separate view models, each of which is a property of MainViewModel.

    Now, child control B (which is displayed when the data is valid) has a RefreshService property/field. RefreshService is an object responsible for updating the model in a number of ways, and it contains 4 System.Timers.Timers: a _modelRefreshTimer, a _viewRefreshTimer, a _pageSwitchTimer, and a _retryFeedRetrievalOnErrorTimer (this last one is only enabled when something goes wrong with retrieving data).

    I should mention at this point that there are two types of data: the first changes every minute, the second changes every few hours. The control's configuration decides which type we are using/displaying. If the data is of the first type, we update the model quite frequently (every 30 seconds) using the _modelRefreshTimer's events. If the data is of the second type, we update the model at a longer interval. However, the view still needs to be refreshed every 30 seconds, as stale data needs to be removed from it (hence the _viewRefreshTimer). The control also paginates the data so we can see more than fits in the display area. This works by breaking the data up into Lists and switching the CurrentPage property of the view model (which is a List) to the right List, done by handling the _pageSwitchTimer's Elapsed event.

    Now the problem: the control, when removed from the visual tree, doesn't dispose of its timers. This was first noticed when we started getting an unusually high number of requests at the web server end very soon after deploying this control, and found that requests were being made at least once a second! We found that the timers were living on, not stopping, hours after the control had been removed from view, and that the more timers there were, the more requests piled up at the web server.

    My first solution was to implement IDisposable for the RefreshService and do some cleanup when the control's Unloaded event fired. Within RefreshService's Dispose method I set Enabled to false for all the timers, then called Stop() on all of them, then called Dispose() too and set them to null. None of this worked.

    After some reading around, I found that event handlers may hold references to Timers and prevent them from being disposed and collected. After some more reading and research, I found that the best way around this was to use the Weak Event Pattern. Using this blog and this blog, I've managed to work around the shortcomings in the Weak Event Pattern. However, none of this solves the problem: timers are still not being disabled or stopped (let alone disposed), and web requests continue to build up.

    Mem Profiler tells me: "This type has N instances that are directly rooted by a delegate. This can indicate the delegate has not been properly removed" (where N is the number of instances). As far as I can tell, though, all listeners of the timers' Elapsed events are being removed during cleanup, so I can't understand why the timers continue to run.

    Thanks for reading. Eagerly awaiting your suggestions/comments/solutions (if you got this far :-p)
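    (Aside, not from the original post: System.Timers.Timer roots its subscribers through the Elapsed delegate, so disposal should unhook handlers explicitly as well as stopping; it is also worth verifying the 3rd-party host actually raises Unloaded so the cleanup path runs at all. A sketch, where the OnModelRefresh-style handler names are assumptions, not names from the post:)

        // Sketch only: RefreshService owning the four timers described above.
        public void Dispose()
        {
            StopAndRelease(ref _modelRefreshTimer, OnModelRefresh);
            StopAndRelease(ref _viewRefreshTimer, OnViewRefresh);
            StopAndRelease(ref _pageSwitchTimer, OnPageSwitch);
            StopAndRelease(ref _retryFeedRetrievalOnErrorTimer, OnRetry);
        }

        private static void StopAndRelease(ref System.Timers.Timer timer,
                                           System.Timers.ElapsedEventHandler handler)
        {
            if (timer == null) return;
            timer.Stop();
            timer.Elapsed -= handler;   // remove the delegate root the profiler reports
            timer.Dispose();
            timer = null;
        }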

  • Unable to output XML data in a manageable way

    - by Rob
    I've been given data from a previous version of a website (a custom CMS) and am looking to get it into a state where I can import it into my Wordpress site. This is what I'm working on: http://www.teamworksdesign.com/clients/ciw/datatest/index.php. If you scroll down to row 187, the data starts to fail (there should be a red message) with the following error message:

        Fatal error: Uncaught exception 'Exception' with message 'String could not be parsed as XML' in
        /home/teamwork/public_html/clients/ciw/datatest/index.php:132
        Stack trace: #0 /home/teamwork/public_html/clients/ciw/datatest/index.php(132): SimpleXMLElement->__construct('

    Can anyone see what the problem is and how to fix it? This is how I'm outputting the data:

        <!DOCTYPE html>
        <html>
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
        </head>
        <body>
        <?php
        ini_set('memory_limit', '1024M');
        ini_set('max_execution_time', 500); // 300 seconds = 5 minutes
        echo "<br />memory_limit: " . ini_get('memory_limit') . "<br /><br />";
        echo "<br />max_execution_time: " . ini_get('max_execution_time') . "<br /><br />";
        libxml_use_internal_errors(true);

        $z = new XMLReader;
        $z->open('dbo_Content.xml');
        $doc = new DOMDocument;
        $doc->preserveWhiteSpace = false;

        // move to the first <product /> node
        while ($z->read() && $z->name !== 'dbo_Content');

        $c = 0;
        // now that we're at the right depth, hop to the next <product/> until the end of the tree
        while ($z->name === 'dbo_Content') {
            if ($c < 201) {
                // either one should work
                $node = simplexml_import_dom($doc->importNode($z->expand(), true));
                if ($node->ClassId == 'policydocument') {
                    $c++;
                    echo "<h1>Row: $c</h1>";
                    echo "<pre>";
                    echo htmlentities($node->XML) . "<br /><br /><br /><b>*******</b><br /><br /><br />";
                    echo "</pre>";
                    try {
                        $xmlObject = new SimpleXMLElement($node->XML);
                        foreach ($xmlObject->fields[0]->field as $field) {
                            switch ((string) $field['name']) {
                                case 'parentId':
                                    echo "<b>PARENT ID: </b> " . $field->value . "<br />";
                                    break;
                                case 'title':
                                    echo "<b>TITLE: </b> " . $field->value . "<br />";
                                    break;
                                case 'summary':
                                    echo "<b>SUMMARY: </b> " . $field->value . "<br />";
                                    break;
                                case 'body':
                                    echo "<b>BODY:</b> " . $field->value . "<br />";
                                    break;
                                case 'published':
                                    echo "<b>PUBLISHED:</b> " . $field->value . "<br />";
                                    break;
                            }
                        }
                        echo '<br /><h2 style="color:green;">Success on node: ' . $node->ContentId . '</h2><hr /><br />';
                    } catch (Exception $e) {
                        echo '<h2 style="color:red;">Failed on node: ' . $node->ContentId . '</h2>';
                    }
                }
                // go to next <product />
                $z->next('dbo_Content');
            }
        }
        ?>
        </body>
        </html>
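    (Aside, not from the original question: since libxml_use_internal_errors(true) is already set, the failing row's exact parse errors can be dumped instead of swallowed - typically an unescaped & or a stray control character in the legacy XML. A sketch using simplexml_load_string, which returns false instead of throwing, for the same $node inside the loop:)

        <?php
        // Sketch only: parse with libxml error capture and report why a row fails.
        $xmlObject = simplexml_load_string($node->XML);
        if ($xmlObject === false) {
            echo '<h2 style="color:red;">Failed on node: ' . $node->ContentId . '</h2>';
            foreach (libxml_get_errors() as $err) {
                echo '<pre>line ' . $err->line . ', col ' . $err->column . ': '
                    . htmlentities(trim($err->message)) . '</pre>';
            }
            libxml_clear_errors();
        }
        ?>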

    Read the article

  • Android Multiple objects in SimpleAdapter

    - by Adam Sherratt
    I have a need (unless you can think of a better way) of passing multiple objects to a custom list adapter. I know that I'm barking up the wrong tree here, and would appreciate someone setting me on the right course! Thanks.

    playlistadapter = new MyPlaylistAdapter(MyApplication.getAppContext(), songsList, retained_songsList, folderMode,
            R.layout.file_view,
            new String[] { "songTitle", "songAlbum", "songPath" },
            new int[] { R.id.checkTextView, R.id.text2, R.id.text3 });

    And my adapter class:

    public class MyPlaylistAdapter extends SimpleAdapter {

        private ArrayList<Song> songsList = new ArrayList<Song>();
        private ArrayList<Song> retained_songsList = new ArrayList<Song>();
        private ArrayList<Song> playlistcheck = new ArrayList<Song>();
        private String folderMode;
        private String TAG = "AndroidMediaCenter";

        public MyPlaylistAdapter(Context context, List<Song> SongsList, List<Song> Retained_songsList,
                String FolderMode, int resource, String[] from, int[] to) {
            super(context, null, resource, from, to);
            songsList.clear();
            songsList.addAll(SongsList);
            Log.i(TAG, "MyPlayListAdapter Songslist = " + songsList.size());
            retained_songsList.clear();
            retained_songsList.addAll(Retained_songsList);
            folderMode = FolderMode;
        }

        public View getView(int position, View convertView, ViewGroup parent) {
            //PlayListViewHolder holder;
            CheckedTextView checkTextView;
            TextView text2;
            TextView text3;
            if (convertView == null) {
                LayoutInflater inflater = (LayoutInflater) MyApplication.getAppContext()
                        .getSystemService(Context.LAYOUT_INFLATER_SERVICE);
                //LayoutInflater inflater = getLayoutInflater();
                convertView = inflater.inflate(R.layout.file_view, parent, false);
                //convertView.setBackgroundColor(0xFF00FF00);
                //holder = new PlayListViewHolder();
                checkTextView = (CheckedTextView) convertView.findViewById(R.id.checkTextView);
                text2 = (TextView) convertView.findViewById(R.id.text2);
                text3 = (TextView) convertView.findViewById(R.id.text3);
                //convertView.setTag(holder);
            } else {
                //holder = (PlayListViewHolder) convertView.getTag();
            }

            // put something into the text views
            String tracks = null;
            String tracks_Details = null;
            String trackspath = null;
            tracks = songsList.get(position).getSongTitle();
            tracks_Details = songsList.get(position).getAlbum() + " (" + songsList.get(position).getArtist() + ")";
            trackspath = songsList.get(position).getSongPath();

            checkTextView = (CheckedTextView) convertView.findViewById(R.id.checkTextView);
            text2 = (TextView) convertView.findViewById(R.id.text2);
            text3 = (TextView) convertView.findViewById(R.id.text3);
            checkTextView.setText(tracks);

            if (folderMode.equals("Playlists")) {
                checkTextView.setBackgroundColor(Color.GREEN);
                checkTextView.setChecked(false);
                try {
                    int listsize_rs = retained_songsList.size();
                    for (int j = 0; j < listsize_rs; j++) {
                        if ((retained_songsList.get(j).getSongPath()).equals(songsList.get(position).getSongPath())) {
                            checkTextView.setBackgroundColor(Color.TRANSPARENT);
                            // Need to check here whether the CheckedTextView is ticked or not
                            checkTextView.setChecked(true);
                            playlistcheck.add(songsList.get(position));
                            break;
                        }
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            } else {
                // Need to check here whether the CheckedTextView is ticked or not
                try {
                    if (songsList.get(position).getSongCheckedStatus() == true) {
                        checkTextView.setChecked(true);
                    } else {
                        checkTextView.setChecked(false);
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }

            text2.setText(tracks_Details);
            text3.setText(trackspath);
            Log.i(TAG, "MyPlayListAdapter Songslist = " + songsList.size());
            return convertView;
        }
    }
    However, this doesn't inflate, throwing the following errors:

    10-26 23:11:09.464: E/AndroidRuntime(2826): FATAL EXCEPTION: main
    10-26 23:11:09.464: E/AndroidRuntime(2826): java.lang.RuntimeException: Error receiving broadcast Intent { act=android.intent.action.GetMusicComplete flg=0x10 } in com.Nmidia.AMC.MusicActivity$18@414c5770
    10-26 23:11:09.464: E/AndroidRuntime(2826): at android.app.LoadedApk$ReceiverDispatcher$Args.run(LoadedApk.java:765)
    10-26 23:11:09.464: E/AndroidRuntime(2826): at android.os.Handler.handleCallback(Handler.java:615)
    10-26 23:11:09.464: E/AndroidRuntime(2826): at android.os.Handler.dispatchMessage(Handler.java:92)
    10-26 23:11:09.464: E/AndroidRuntime(2826): at android.os.Looper.loop(Looper.java:137)
    10-26 23:11:09.464: E/AndroidRuntime(2826): at android.app.ActivityThread.main(ActivityThread.java:4745)
    10-26 23:11:09.464: E/AndroidRuntime(2826): at java.lang.reflect.Method.invokeNative(Native Method)
    10-26 23:11:09.464: E/AndroidRuntime(2826): at java.lang.reflect.Method.invoke(Method.java:511)
    10-26 23:11:09.464: E/AndroidRuntime(2826): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:786)
    10-26 23:11:09.464: E/AndroidRuntime(2826): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:553)
    10-26 23:11:09.464: E/AndroidRuntime(2826): at dalvik.system.NativeStart.main(Native Method)
    10-26 23:11:09.464: E/AndroidRuntime(2826): Caused by: java.lang.NullPointerException
    10-26 23:11:09.464: E/AndroidRuntime(2826): at android.widget.SimpleAdapter.getCount(SimpleAdapter.java:93)
    10-26 23:11:09.464: E/AndroidRuntime(2826): at android.widget.ListView.setAdapter(ListView.java:460)
    10-26 23:11:09.464: E/AndroidRuntime(2826): at com.Nmidia.AMC.MusicActivity.setFilterMusic(MusicActivity.java:1230)
    10-26 23:11:09.464: E/AndroidRuntime(2826): at com.Nmidia.AMC.MusicActivity$18.onReceive(MusicActivity.java:996)
    10-26 23:11:09.464: E/AndroidRuntime(2826): at android.app.LoadedApk$ReceiverDispatcher$Args.run(LoadedApk.java:755)
    10-26 23:11:09.464: E/AndroidRuntime(2826): ... 9 more

    Read the article

  • Need help with writing from XML to a SQL Server database (detailed)

    - by fogedi
    I'm a bit of a newbie with XML and WebServices and stuff like that. I'm working on a project using GlassFish OpenESB to schedule a process that gets some information from a webservice and then stores it in a database. The criteria are basically that I have to use GlassFish OpenESB or EJB modules where I can expose webservices or something along those lines, AND I have to use SQL Server 2005. So far I've been able to talk to the webservice and receive something along these lines:

    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://schemas.xmlsoap.org/soap/envelope/ http://schemas.xmlsoap.org/soap/envelope/">
      <SOAP-ENV:Body>
        <m:entrypoint_getSettlementsOperationResponse xmlns:m="http://j2ee.netbeans.org/wsdl/BorgunTestBPEL/entrypoint_getSettlements">
          <part1>
            <GetSettlementsByMerchantResponse xmlns="http://Borgun.Services.Gateway/2010/04/Settlement">
              <GetSettlementsByMerchantResult xmlns:a="http://schemas.datacontract.org/2004/07/Borgun.Library.Common" xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns:msgns="http://Borgun.Services.Gateway/2010/04/Settlement" xmlns:ns0="http://j2ee.netbeans.org/wsdl/BorgunTestBPEL/entrypoint_getSettlements">
                <a:CreditCardSettlement>
                  <a:amexAmount>XXX</a:amexAmount>
                  <a:amount>XXXX</a:amount>
                  <a:batches>
                    <a:CreditCardBatch>
                      <a:batchdate>xxx</a:batchdate>
                      <a:batchnumber>XXXX</a:batchnumber>
                      <a:currencyCode>xxxx</a:currencyCode>
                      <a:merchantnumber>xxxx</a:merchantnumber>
                      <a:settlementRunNumber>xx4</a:settlementRunNumber>
                      <a:settlementdate>2010-04-06T00:00:00</a:settlementdate>
                      <a:slips>2</a:slips>
                      <a:sum>xxxx</a:sum>
                    </a:CreditCardBatch>
                    <a:CreditCardBatch>
                      <a:batchdate>xxx</a:batchdate>
                      <a:batchnumber>xxxxx</a:batchnumber>
                      <a:currencyCode>xxxx</a:currencyCode>
                      <a:merchantnumber>xxxx</a:merchantnumber>
                      <a:settlementRunNumber>xxxx</a:settlementRunNumber>
                      <a:settlementdate>xxxx</a:settlementdate>
                      <a:slips>x</a:slips>
                      <a:sum>xxx</a:sum>
                    </a:CreditCardBatch>
                  </a:batches>
                  <a:commission>xx</a:commission>
                  <a:currencyCode>xxx</a:currencyCode>
                  <a:deduction>-xxx</a:deduction>
                  <a:deductionItems>
                    <a:CrediCardSettlementDeduction>
                      <a:amount>-xxx</a:amount>
                      <a:code>VIÐSKF</a:code>
                      <a:currencyCode>ISK</a:currencyCode>
                      <a:merchantnumber>xxx</a:merchantnumber>
                      <a:settlementrunnumber>xxx</a:settlementrunnumber>
                      <a:text>Afsláttur v/ekorta</a:text>
                    </a:CrediCardSettlementDeduction>
                    <a:CrediCardSettlementDeduction>
                      <a:amount>-335.00</a:amount>
                      <a:code>ÁLAGKREK</a:code>
                      <a:currencyCode>ISK</a:currencyCode>
                      <a:merchantnumber>xxx</a:merchantnumber>
                      <a:settlementrunnumber>xxx</a:settlementrunnumber>
                      <a:text>xxx</a:text>
                    </a:CrediCardSettlementDeduction>
                  </a:deductionItems>
                </a:CreditCardSettlement>
              </GetSettlementsByMerchantResult>
            </GetSettlementsByMerchantResponse>
          </part1>
        </m:entrypoint_getSettlementsOperationResponse>
      </SOAP-ENV:Body>
    </SOAP-ENV:Envelope>

    I have access to the SQL Server 2005 server, which is remote, and I know I can insert into it, but given that I now have a one-to-many relationship I want to be able to roll back if something fails. So, in short: how can I insert from this XML into the DB, preferably without manually walking through the XML tree? I'm pretty sure I'm supposed to be able to use Entity and Session Beans or maybe JAXB bindings, but I'm simply not being successful.
    One of the reasons might be that the SOAP response contains an array of CreditCardSettlements, each of which contains an array of Batches and DeductionItems. It would be best if someone could help me do this via a BPEL in GlassFish OpenESB, but any hint at a Java solution is much appreciated.

    Read the article

  • Compat Wireless Drivers Centrino N-2230

    - by user2699451
    So I am using Linux and am having trouble installing the compat-wireless drivers.

    Hardware: Intel Centrino N-2230
    OS: Linux Mint 64-bit (kernel 3.8.0-19-generic)

    I followed this link: http://www.mathyvanhoef.com/2012/09/compat-wireless-injection-patch-for.html

    Output:

    apt-get install linux-headers-$(uname -r)
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    linux-headers-3.8.0-19-generic is already the newest version.
    0 upgraded, 0 newly installed, 0 to remove and 19 not upgraded.
    charles-W55xEU compat-wireless-2010-10-16 # cd ~
    charles-W55xEU ~ # dir
    adt-bundle-linux-x86_64-20130917.zip  Desktop  known_hosts_backup
    charles-W55xEU ~ # wget http://www.orbit-lab.org/kernel/compat-wireless-3-stable/v3.6/compat-wireless-3.6.2-1-snp.tar.bz2
    --2013-10-29 10:28:23-- http://www.orbit-lab.org/kernel/compat-wireless-3-stable/v3.6/compat-wireless-3.6.2-1-snp.tar.bz2
    Resolving www.orbit-lab.org (www.orbit-lab.org)... 128.6.192.131
    Connecting to www.orbit-lab.org (www.orbit-lab.org)|128.6.192.131|:80... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 4443700 (4,2M) [application/x-bzip2]
    Saving to: ‘compat-wireless-3.6.2-1-snp.tar.bz2’
    100%[======================================>] 4 443 700 13,5KB/s in 11m 3s
    2013-10-29 10:39:27 (6,55 KB/s) - ‘compat-wireless-3.6.2-1-snp.tar.bz2’ saved [4443700/4443700]
    charles-W55xEU ~ # tar -xf compat-wireless-3.6.2-1-snp.tar.bz2
    charles-W55xEU ~ # cd compat-wireless-3.6-rc6-1
    bash: cd: compat-wireless-3.6-rc6-1: No such file or directory
    charles-W55xEU ~ # dir
    adt-bundle-linux-x86_64-20130917.zip  Desktop  compat-wireless-3.6.2-1-snp  compat-wireless-3.6.2-1-snp.tar.bz2  known_hosts_backup
    charles-W55xEU ~ # cd compat-wireless-3.6.2-1-snp/
    charles-W55xEU compat-wireless-3.6.2-1-snp # dir
    code-metrics.txt  defconfigs  linux-next-pending  pending-stable  compat  drivers  MAINTAINERS  README  config.mk  enable-older-kernels  Makefile  scripts  COPYRIGHT  include  net  udev  crap  linux-next-cherry-picks  patches
    charles-W55xEU compat-wireless-3.6.2-1-snp # wget http://patches.aircrack-ng.org/mac80211.compat08082009.wl_frag+ack_v1.patch
    --2013-10-29 10:40:52-- http://patches.aircrack-ng.org/mac80211.compat08082009.wl_frag+ack_v1.patch
    Resolving patches.aircrack-ng.org (patches.aircrack-ng.org)... 213.186.33.2, 2001:41d0:1:1b00:213:186:33:2
    Connecting to patches.aircrack-ng.org (patches.aircrack-ng.org)|213.186.33.2|:80... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 1049 (1,0K) [text/plain]
    Saving to: ‘mac80211.compat08082009.wl_frag+ack_v1.patch’
    100%[======================================>] 1 049 --.-K/s in 0s
    2013-10-29 10:40:56 (180 MB/s) - ‘mac80211.compat08082009.wl_frag+ack_v1.patch’ saved [1049/1049]
    charles-W55xEU compat-wireless-3.6.2-1-snp # patch -p1 < mac80211.compat08082009.wl_frag+ack_v1.patch
    patching file net/mac80211/tx.c
    Hunk #1 succeeded at 792 (offset 115 lines).
    charles-W55xEU compat-wireless-3.6.2-1-snp # wget -Ocompatwireless_chan_qos_frag.patch http://pastie.textmate.org/pastes/4882675/download
    --2013-10-29 10:43:18-- http://pastie.textmate.org/pastes/4882675/download
    Resolving pastie.textmate.org (pastie.textmate.org)... 178.79.137.125
    Connecting to pastie.textmate.org (pastie.textmate.org)|178.79.137.125|:80... connected.
    HTTP request sent, awaiting response... 301 Moved Permanently
    Location: http://pastie.org/pastes/4882675/download [following]
    --2013-10-29 10:43:20-- http://pastie.org/pastes/4882675/download
    Resolving pastie.org (pastie.org)... 96.126.119.119
    Connecting to pastie.org (pastie.org)|96.126.119.119|:80... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 2036 (2,0K) [application/octet-stream]
    Saving to: ‘compatwireless_chan_qos_frag.patch’
    100%[======================================>] 2 036 --.-K/s in 0,001s
    2013-10-29 10:43:21 (3,35 MB/s) - ‘compatwireless_chan_qos_frag.patch’ saved [2036/2036]
    charles-W55xEU compat-wireless-3.6.2-1-snp # patch -p1 < compatwireless_chan_qos_frag.patch
    patching file drivers/net/wireless/rtl818x/rtl8187/dev.c
    patching file net/mac80211/tx.c
    Hunk #1 succeeded at 1495 (offset 8 lines).
    patching file net/wireless/chan.c
    charles-W55xEU compat-wireless-3.6.2-1-snp # make
    ./scripts/gen-compat-autoconf.sh /root/compat-wireless-3.6.2-1-snp/.config /root/compat-wireless-3.6.2-1-snp/config.mk > include/linux/compat_autoconf.h
    make -C /lib/modules/3.8.0-19-generic/build M=/root/compat-wireless-3.6.2-1-snp modules
    make[1]: Entering directory `/usr/src/linux-headers-3.8.0-19-generic'
    CC [M] /root/compat-wireless-3.6.2-1-snp/compat/main.o
    LD [M] /root/compat-wireless-3.6.2-1-snp/compat/compat.o
    CC [M] /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.o
    In file included from /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma.h:8:0,
                     from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:8,
                     from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8:
    /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma_driver_pci.h:217:23: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_core_pci_init’
    In file included from /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma.h:10:0,
                     from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:8,
                     from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8:
    /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma_driver_gmac_cmn.h:95:23: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_core_gmac_cmn_init’
    In file included from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8:0:
    /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:25:15: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_bus_register’
    /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:152:15: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_bus_register’
    /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:17:21: warning: ‘bcma_bus_next_num’ defined but not used [-Wunused-variable]
    /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:93:12: warning: ‘bcma_register_cores’ defined but not used [-Wunused-function]
    make[3]: *** [/root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.o] Error 1
    make[2]: *** [/root/compat-wireless-3.6.2-1-snp/drivers/bcma] Error 2
    make[1]: *** [_module_/root/compat-wireless-3.6.2-1-snp] Error 2
    make[1]: Leaving directory `/usr/src/linux-headers-3.8.0-19-generic'
    make: *** [modules] Error 2
    charles-W55xEU compat-wireless-3.6.2-1-snp # make install
    Warning: You may or may not need to update your initframfs, you should if any of the modules installed are part of your initramfs. To add support for your distribution to do this automatically send a patch against ./scripts/update-initramfs. If your distribution does not require this send a patch against the '/usr/bin/lsb_release -i -s': LinuxMint tag for your distribution to avoid this warning.
    make -C /lib/modules/3.8.0-19-generic/build M=/root/compat-wireless-3.6.2-1-snp modules
    make[1]: Entering directory `/usr/src/linux-headers-3.8.0-19-generic'
    CC [M] /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.o
    In file included from /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma.h:8:0,
                     from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:8,
                     from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8:
    /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma_driver_pci.h:217:23: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_core_pci_init’
    In file included from /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma.h:10:0,
                     from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:8,
                     from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8:
    /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma_driver_gmac_cmn.h:95:23: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_core_gmac_cmn_init’
    In file included from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8:0:
    /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:25:15: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_bus_register’
    /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:152:15: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_bus_register’
    /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:17:21: warning: ‘bcma_bus_next_num’ defined but not used [-Wunused-variable]
    /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:93:12: warning: ‘bcma_register_cores’ defined but not used [-Wunused-function]
    make[3]: *** [/root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.o] Error 1
    make[2]: *** [/root/compat-wireless-3.6.2-1-snp/drivers/bcma] Error 2
    make[1]: *** [_module_/root/compat-wireless-3.6.2-1-snp] Error 2
    make[1]: Leaving directory `/usr/src/linux-headers-3.8.0-19-generic'
    make: *** [modules] Error 2
    charles-W55xEU compat-wireless-3.6.2-1-snp #

    It keeps giving errors, and with other sites I get the same errors. I am lost; help needed!

    Read the article

  • sudo apt-get install mysql-server fails

    - by danwoods
    Hi all, I'm coming from a fresh install of Ubuntu Server 9.10 and trying to install mysql-server by using 'sudo apt-get install mysql-server'. I get the following errors:

    dan@dev:~$ sudo apt-get install mysql-server
    [sudo] password for dan:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following extra packages will be installed:
      libdbd-mysql-perl libdbi-perl libhtml-template-perl libnet-daemon-perl libplrpc-perl mysql-client-5.1 mysql-server-5.1
    Suggested packages:
      dbishell libipc-sharedcache-perl tinyca
    The following NEW packages will be installed:
      libdbd-mysql-perl libdbi-perl libhtml-template-perl libnet-daemon-perl libplrpc-perl mysql-client-5.1 mysql-server mysql-server-5.1
    0 upgraded, 8 newly installed, 0 to remove and 0 not upgraded.
    Need to get 16.5MB of archives.
    After this operation, 39.0MB of additional disk space will be used.
    Do you want to continue [Y/n]? y
    Get:1 http://us.archive.ubuntu.com karmic/main libnet-daemon-perl 0.43-1 [46.9kB]
    Get:2 http://us.archive.ubuntu.com karmic/main libplrpc-perl 0.2020-2 [36.0kB]
    Get:3 http://us.archive.ubuntu.com karmic/main libdbi-perl 1.609-1 [800kB]
    Get:4 http://us.archive.ubuntu.com karmic/main libdbd-mysql-perl 4.011-1ubuntu1 [136kB]
    Get:5 http://us.archive.ubuntu.com karmic-updates/main mysql-client-5.1 5.1.37-1ubuntu5.1 [8,202kB]
    Get:6 http://us.archive.ubuntu.com karmic-updates/main mysql-server-5.1 5.1.37-1ubuntu5.1 [7,186kB]
    Get:7 http://us.archive.ubuntu.com karmic/main libhtml-template-perl 2.9-1 [65.8kB]
    Get:8 http://us.archive.ubuntu.com karmic-updates/main mysql-server 5.1.37-1ubuntu5.1 [64.3kB]
    Fetched 16.5MB in 1min 34s (175kB/s)
    Preconfiguring packages ...
    Selecting previously deselected package libnet-daemon-perl.
    (Reading database ... 123083 files and directories currently installed.)
    Unpacking libnet-daemon-perl (from .../libnet-daemon-perl_0.43-1_all.deb) ...
    Selecting previously deselected package libplrpc-perl.
    Unpacking libplrpc-perl (from .../libplrpc-perl_0.2020-2_all.deb) ...
    Selecting previously deselected package libdbi-perl.
    Unpacking libdbi-perl (from .../libdbi-perl_1.609-1_i386.deb) ...
    Selecting previously deselected package libdbd-mysql-perl.
    Unpacking libdbd-mysql-perl (from .../libdbd-mysql-perl_4.011-1ubuntu1_i386.deb) ...
    Selecting previously deselected package mysql-client-5.1.
    Unpacking mysql-client-5.1 (from .../mysql-client-5.1_5.1.37-1ubuntu5.1_i386.deb) ...
    Selecting previously deselected package mysql-server-5.1.
    Unpacking mysql-server-5.1 (from .../mysql-server-5.1_5.1.37-1ubuntu5.1_i386.deb) ...
    Selecting previously deselected package libhtml-template-perl.
    Unpacking libhtml-template-perl (from .../libhtml-template-perl_2.9-1_all.deb) ...
    Selecting previously deselected package mysql-server.
    Unpacking mysql-server (from .../mysql-server_5.1.37-1ubuntu5.1_all.deb) ...
    Processing triggers for man-db ...
    Processing triggers for ureadahead ...
    ureadahead will be reprofiled on next reboot
    Setting up libnet-daemon-perl (0.43-1) ...
    Setting up libplrpc-perl (0.2020-2) ...
    Setting up libdbi-perl (1.609-1) ...
    Setting up libdbd-mysql-perl (4.011-1ubuntu1) ...
    Setting up mysql-client-5.1 (5.1.37-1ubuntu5.1) ...
    Setting up mysql-server-5.1 (5.1.37-1ubuntu5.1) ...
     * Stopping MySQL database server mysqld [ OK ]
     * Starting MySQL database server mysqld [fail]
    invoke-rc.d: initscript mysql, action "start" failed.
    dpkg: error processing mysql-server-5.1 (--configure):
     subprocess installed post-installation script returned error exit status 1
    Setting up libhtml-template-perl (2.9-1) ...
    dpkg: dependency problems prevent configuration of mysql-server:
     mysql-server depends on mysql-server-5.1; however:
      Package mysql-server-5.1 is not configured yet.
    dpkg: error processing mysql-server (--configure):
     dependency problems - leaving unconfigured
    No apport report written because the error message indicates its a followup error from a previous failure.
    Errors were encountered while processing:
     mysql-server-5.1
     mysql-server
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    What am I missing?

    [Update] mysqld returns:

    dan@dev:~$ sudo mysqld
    [sudo] password for dan:
    100220 12:18:17 [Note] Plugin 'FEDERATED' is disabled.
    InnoDB: Unable to lock ./ibdata1, error: 11
    InnoDB: Check that you do not already have another mysqld process
    InnoDB: using the same InnoDB data or log files.
    100220 12:18:17 InnoDB: Retrying to lock the first data file
    InnoDB: Unable to lock ./ibdata1, error: 11
    InnoDB: Check that you do not already have another mysqld process

    This goes on for a while...

    InnoDB: Unable to lock ./ibdata1, error: 11
    InnoDB: Check that you do not already have another mysqld process
    InnoDB: using the same InnoDB data or log files.
    InnoDB: Unable to lock ./ibdata1, error: 11
    InnoDB: Check that you do not already have another mysqld process
    InnoDB: using the same InnoDB data or log files.
    100220 12:19:57 InnoDB: Unable to open the first data file
    InnoDB: Error in opening ./ibdata1
    100220 12:19:57 InnoDB: Operating system error number 11 in a file operation.
    InnoDB: Error number 11 means 'Resource temporarily unavailable'.
    InnoDB: Some operating system error numbers are described at
    InnoDB: http://dev.mysql.com/doc/refman/5.1/en/operating-system-error-codes.html
    InnoDB: Could not open or create data files.
    InnoDB: If you tried to add new data files, and it failed here,
    InnoDB: you should now edit innodb_data_file_path in my.cnf back
    InnoDB: to what it was, and remove the new ibdata files InnoDB created
    InnoDB: in this failed attempt. InnoDB only wrote those files full of
    InnoDB: zeros, but did not yet use them in any way. But be careful: do not
    InnoDB: remove old data files which contain your precious data!
    100220 12:19:57 [ERROR] Plugin 'InnoDB' init function returned error.
    100220 12:19:57 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
    100220 12:19:57 [ERROR] Can't start server: Bind on TCP/IP port: Address already in use
    100220 12:19:57 [ERROR] Do you already have another mysqld server running on port: 3306 ?
    100220 12:19:57 [ERROR] Aborting
    100220 12:19:57 [Warning] Forcing shutdown of 1 plugins
    100220 12:19:57 [Note] mysqld: Shutdown complete

    How can I check what process is using port 3306?

    [Update]: sudo netstat -anp | grep LISTEN returns

    dan@dev:~$ sudo netstat -anp | grep LISTEN
    [sudo] password for dan:
    tcp  0 0 0.0.0.0:25      0.0.0.0:* LISTEN 1372/master
    tcp  0 0 127.0.0.1:3306  0.0.0.0:* LISTEN 4391/mysqld
    tcp  0 0 127.0.0.1:631   0.0.0.0:* LISTEN 1409/cupsd
    tcp6 0 0 ::1:631         :::*      LISTEN 1409/cupsd

    [More Updates]: I can log into mysql, if that makes a difference.

    Read the article

  • dpkg: warning: files list file for package 'x' missing

    - by Mark
    I get this warning for several packages every time I install any package or perform apt-get upgrade. Not sure what is causing it; it's a fresh Debian install on my OpenVZ server and I haven't changed any dpkg settings. Here's an example:

    root@debian:~# apt-get install cowsay
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Suggested packages:
      filters
    The following NEW packages will be installed:
      cowsay
    0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
    Need to get 21.9 kB of archives.
    After this operation, 91.1 kB of additional disk space will be used.
    Get:1 http://ftp.us.debian.org/debian/ unstable/main cowsay all 3.03+dfsg1-4 [21.9 kB]
    Fetched 21.9 kB in 0s (70.2 kB/s)
    Selecting previously unselected package cowsay.
    dpkg: warning: files list file for package 'libssh2-1:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libkrb5-3:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libwrap0:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libcap2:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libpam-ck-connector:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libc6:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libtalloc2:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libselinux1:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libp11-kit0:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libavahi-client3:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libbz2-1.0:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libpcre3:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libgpm2:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libgnutls26:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libavahi-common3:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libcroco3:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'liblzma5:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libpaper1:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libsensors4:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libbsd0:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libavahi-common-data:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libss2:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libblkid1:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libslang2:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libacl1:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libcomerr2:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libkrb5support0:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'e2fslibs:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'librtmp0:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libidn11:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libpcap0.8:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libattr1:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libdevmapper1.02.1:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'odbcinst1debian2:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libexpat1:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libltdl7:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libkeyutils1:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libcups2:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libsqlite3-0:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libck-connector0:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'zlib1g:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libnl1:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libfontconfig1:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libudev0:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libsepol1:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libmagic1:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libk5crypto3:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libunistring0:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libgpg-error0:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libusb-0.1-4:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libpam0g:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libpopt0:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libgssapi-krb5-2:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libgeoip1:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libcurl3-gnutls:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libtasn1-3:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libuuid1:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libgcrypt11:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libgdbm3:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libdbus-1-3:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libsysfs2:amd64' missing; assuming package has no files currently installed
    dpkg: warning: files list file for package 'libfreetype6:amd64' missing; assuming package has no files currently installed
    (Reading database ... 21908 files and directories currently installed.)
    Unpacking cowsay (from .../cowsay_3.03+dfsg1-4_all.deb) ...
    Processing triggers for man-db ...
    Setting up cowsay (3.03+dfsg1-4) ...
    root@debian:~#

    Everything works fine, but these warning messages are pretty annoying. Does anyone know how I can fix this?

    Read the article

  • Tools and Utilities for the .NET Developer

    - by mbcrump
    This is a list of the tools/utilities that I use to do my job/hobby. I wanted this page to load fast and contain only information that you care about. If I have missed a tool that you like, feel free to contact me and I will add it to the list. Also, this list took a lot of time to complete. Please do not steal my work; if you like the page then please link back to my site. I will keep the links/information updated as new tools/utilities are created.

    Windows/.NET Development – This is a list of tools that any Windows/.NET developer should have in his bag. I have used everything listed on this page at some point in my career, and below are the tools worth keeping.

    AnkhSVN – Subversion support for Visual Studio. It also works with VS2010. (Free)
    Aurora XAML Designer – One of the best XAML creation tools available. Has a ton of built-in templates that you can copy/paste into VS2010. (COST/Trial)
    BeyondCompare – Beyond Compare 3 is the ideal tool for comparing files and folders on your Windows or Linux system. Visualize changes in your code and carefully reconcile them. (COST/Trial)
    BuildIT – Automated task tool. Its main purpose is to automate tasks, whether it is the final packaging of a product, an automated daily build, maybe sending out a mailing list, even backing up files. (Free)
    C Sharper for VB – Convert VB to C#. (COST)
    CLRProfiler – Analyze and improve the behavior of your .NET app. (Free)
    CodeRush – Direct competitor to ReSharper; contains similar features. This is one of those "decide for yourself" tools. (COST/Trial)
    Disk2VHD – Disk2vhd is a utility that creates VHD (Virtual Hard Disk, Microsoft's Virtual Machine disk format) versions of physical disks for use in Microsoft Virtual PC or Microsoft Hyper-V virtual machines (VMs). (Free)
    Eazfuscator.NET – A free obfuscator for .NET. The main purpose is to protect the intellectual property of software. (Free)
    EQATEC Profiler – Make your .NET app run faster. No source code changes are needed. Just point the profiler to your app, run the modified code, and get a visual report. (COST)
    Expression Studio 3/4 – Comes with Web, Blend, SketchFlow and more. You can create websites, produce beautiful XAML and more. (COST/Trial)
    Expresso – The award-winning Expresso editor is equally suitable as a teaching tool for the beginning user of regular expressions or as a full-featured development environment for the experienced programmer or web designer with an extensive knowledge of regular expressions. (Free)
    Fiddler – Fiddler is a web debugging proxy which logs all HTTP(S) traffic between your computer and the internet. (Free)
    Firebug – Powerful web development tool. If you build websites, you will need this. (Free)
    FxCop – FxCop is an application that analyzes managed code assemblies (code that targets the .NET Framework common language runtime) and reports information about the assemblies, such as possible design, localization, performance, and security improvements. (Free)
    GAC Browser and Remover – Easy way to remove multiple assemblies from the GAC. Assemblies registered by programs like InstallShield can also be removed. (Free)
    GAC Util – The Global Assembly Cache tool allows you to view and manipulate the contents of the global assembly cache and download cache. (Free)
    HelpScribble – Help Scribble is a full-featured, easy-to-use help authoring tool for creating help files from start to finish. You can create WinHelp (.hlp) files, HTML Help (.chm) files, a printed manual and online documentation (on a web site), all from the same Help Scribble project. (COST/Trial)
    IETester – IETester is a free web browser that allows you to have the rendering and JavaScript engines of IE9 preview, IE8, IE7, IE6 and IE5.5 on Windows 7, Vista and XP, as well as the installed IE, in the same process. (Free)
    iTextSharp – iText# (iTextSharp) is a port of the iText open source Java library for PDF generation, written entirely in C# for the .NET platform. Use the iText mailing list to get support. (Free)
    Kaxaml – Kaxaml is a lightweight XAML editor that gives you a "split view" so you can see both your XAML and your rendered content. (Free)
    LINQPad – LINQPad lets you interactively query databases in LINQ. (Free)
    Linqer – Many programmers are familiar with SQL and will need help in the transition to LINQ. Sometimes there are complicated queries to be written, and Linqer can help by converting SQL scripts to LINQ. (COST/Trial)
    LiquidXML – Liquid XML Studio 2010 is an advanced XML developer's toolkit and IDE, containing all the tools needed for designing and developing XML schemas and applications. (COST/Trial)
    Log4Net – log4net is a tool to help the programmer output log statements to a variety of output targets. log4net is a port of the excellent log4j framework to the .NET runtime. The framework is similar in spirit to the original log4j while taking advantage of new features in the .NET runtime. For more information on log4net see the features document. (Free)
    Microsoft Web Platform Installer – The Microsoft Web Platform Installer 2.0 (Web PI) is a free tool that makes getting the latest components of the Microsoft Web Platform, including Internet Information Services (IIS), SQL Server Express, .NET Framework and Visual Web Developer, easy. (Free)
    Mono Development – Don't have Visual Studio? No problem! This is an open-source C# and .NET development environment for Linux, Windows, and Mac OS X. (Free)
    Net Mass Downloader – While it's great that Microsoft has released the .NET Reference Source Code, you can only get it one file at a time while you're debugging. If you'd like to batch download it for reading or to populate the cache, you'd have to write a program that instantiated and called each method in the Framework Class Library. Fortunately, .NET Mass Downloader comes to the rescue! (Free)
    nMap – Nmap ("Network Mapper") is a free and open source (license) utility for network exploration or security auditing. Many systems and network administrators also find it useful for tasks such as network inventory, managing service upgrade schedules, and monitoring host or service uptime. (Free)
    NoScript (Firefox add-in) – The NoScript Firefox extension provides extra protection for Firefox, Flock, Seamonkey and other Mozilla-based browsers: this free, open source add-on allows JavaScript, Java, Flash and other plug-ins to be executed only by trusted web sites of your choice (e.g. your online bank), and provides the most powerful anti-XSS protection available in a browser. (Free)
    NotePad 2 – Notepad2 is a fast and lightweight Notepad-like text editor with syntax highlighting. This program can be run out of the box without installation, and does not touch your system's registry. (Free)
    PageSpy – PageSpy is a small add-on for Internet Explorer that allows you to select any element within a webpage, select an option in the context menu, and view detailed information about both the coding behind the page and the element you selected. (Free)
    Phrase Express – PhraseExpress manages your frequently used text snippets in customizable categories for quick access. (Free)
    PowerGui – A free community for PowerGUI, a graphical user interface and script editor for Microsoft Windows PowerShell! (Free)
    Powershell – Comes with Win7; you can automate tasks by using the .NET Framework. Great for network admins. (Free)
    Process Explorer – Ever wondered which program has a particular file or directory open? Now you can find out. Process Explorer shows you information about which handles and DLLs processes have opened or loaded. Also included in the Sysinternals Suite. (Free)
    Process Monitor – Process Monitor is an advanced monitoring tool for Windows that shows real-time file system, Registry and process/thread activity. (Free)
    Reflector – Explore and analyze compiled .NET assemblies, viewing them in C#, Visual Basic, and IL. This is an essential for any .NET developer. (Free)
    Regular Expression Library – Stuck on a regular expression but you think someone has already figured it out? Chances are they have. (Free)
    Regulator – Regulator makes regular expressions easy. This is a must-have for a .NET developer. (Free)
    RenameMaestro – RenameMaestro is probably the easiest batch file renamer you'll find to instantly rename multiple files. (COST)
    ReSharper – The one program that I cannot live without. Supports VS2010 and offers simple refactoring, code analysis/assistance/cleanup/templates. One of the few applications that is worth the $$$. (COST/Trial)
    ScrewTurn Wiki – ScrewTurn Wiki allows you to create, manage and share wikis. A wiki is a collaboratively-edited, information-centered website: the most famous is Wikipedia. (Free)
    SharpDevelop – What is #develop? SharpDevelop is a free IDE for C# and VB.NET projects on Microsoft's .NET platform. (Free)
    Show Me The Template – Show Me The Template is a tool for exploring the templates, be they data, control or items panel, that come with the controls built into WPF, for all 6 themes. (Free)
    SnippetCompiler – Compiles code snippets without opening Visual Studio. It does not support .NET 4. (Free)
    SQL Prompt – SQL Prompt is a plug-in that increases how fast you can work with SQL. It provides code completion for SQL Server, reformatting, DB schema information and snippets. Awesome! (COST/Trial)
    SQLinForm – SQLinForm is an automatic SQL code formatter for all major databases, including ORACLE, SQL Server, DB2, UDB, Sybase, Informix, PostgreSQL, Teradata, MySQL, MS Access etc., with over 70 formatting options. (COST/OnlineFree)
    SSMS Tools – SSMS Tools Pack is an add-in for Microsoft SQL Server Management Studio (SSMS), including SSMS Express. (Free)
    Storm – STORM is a free and open source tool for testing web services. (Free)
    Telerik Code Converter – Convert code from VB to C# and vice versa. (Free)
    TortoiseSVN – TortoiseSVN is a really easy to use revision control / version control / source control software for Windows. Since it's not an integration for a specific IDE, you can use it with whatever development tools you like. (Free)
    UltraEdit – UltraEdit is the ideal text, HTML and hex editor, and an advanced PHP, Perl, Java and JavaScript editor for programmers. UltraEdit is also an XML editor including a tree-style XML parser. An industry-award winner, UltraEdit supports disk-based 64-bit file handling (standard) on 32-bit Windows platforms (Windows 2000 and later). (COST/Trial)
    Virtual Windows XP – Comes with some W7 versions and allows you to run WinXP alongside W7. (Free)
    VirtualBox – Virtualization by Sun Microsystems. You can virtualize Windows, Linux and more. (Free)
    Visual Log Parser – SQL queries against a variety of log files and other system data sources. (Free)
    WinMerge – WinMerge is an open source differencing and merging tool for Windows. WinMerge can compare both folders and files, presenting differences in a visual text format that is easy to understand and handle. (Free)
    Wireshark – Wireshark is one of the best network protocol analyzers for Unix and Windows. This has been used several times to get me out of a bind. (Free)
    XML Notepad 07 – Old, but still one of my favorite XML viewers. (Free)

    Productivity Tools – This is the list of tools that I use to save time or quickly navigate around Windows.

    AutoHotKey – Automate almost anything by sending keystrokes and mouse clicks. You can write a mouse or keyboard macro by hand or use the macro recorder. (Free)
    CLCL – CLCL is a clipboard caching utility. (Free)
    Ditto – Ditto is an extension to the standard Windows clipboard. It saves each item placed on the clipboard, allowing you access to any of those items at a later time. Ditto allows you to save any type of information that can be put on the clipboard: text, images, HTML, custom formats, ... (Free)
    Evernote – Remember everything, from notes to photos. It will sync between computers/devices. (Free)
    InfoRapid – InfoRapid is a search tool that will display all your search results in an HTML-like browser. If you click on a word in that browser, it will start another search on the word you clicked on. Handy if you want to track something back to its true origin. The word you looked for will be highlighted in red. Clicking on the red word will open the containing file in a text-based viewer. Clicking on any word in the opened document will start another search on that word. (Free)
    KatMouse – The prime purpose of the KatMouse utility is to enhance the functionality of mice with a scroll wheel, offering 'universal' scrolling: moving the mouse wheel will scroll the window directly beneath the mouse cursor (not the one with the keyboard focus, which is the default on Windows OSes). This is a major increase in the usefulness of the mouse wheel. (Free)
    ScreenR – Instant screencasts with nothing to download. Works with Mac or PC, and free. (Free)
    Start++ – Start++ is an enhancement for the Start Menu in Windows Vista. It also extends the Run box and the command line with customizable commands. For example, typing "w Windows Vista" will take you to the Windows Vista page on Wikipedia! (Free)
    Synergy – Synergy lets you easily share a single mouse and keyboard between multiple computers with different operating systems, each with its own display, without special hardware. It's intended for users with multiple computers on their desk, since each system uses its own monitor(s). (Free)
    Texter – Texter lets you define text substitution hot strings that, when triggered, will replace the hotstring with a larger piece of text. By entering your most commonly-typed snippets of text into Texter, you can save countless keystrokes in the course of the day. (Free)
    Total Commander – File handling, FTP, archive handling and much more. Even works with Win 3.11. (COST/Trial Available)
    Wizmouse – WizMouse is a mouse enhancement utility that makes your mouse wheel work on the window currently under the mouse pointer, instead of the currently focused window. This means you no longer have to click on a window before being able to scroll it with the mouse wheel. This is a far more comfortable and practical way to make use of the mouse wheel. (Free)
    Xmarks – Bookmark sync and search between computers. (Free)

    General Utilities – This is a list for power users or anyone who wants more out of Windows. I usually install a majority of these whenever I get a new system.

    µTorrent – µTorrent is a lightweight and efficient BitTorrent client for Windows or Mac with many features. I use this for downloading LEGAL media. (Free)
    Audacity – Audacity® is free, open source software for recording and editing sounds. It is available for Mac OS X, Microsoft Windows, GNU/Linux, and other operating systems. (Free)
    AVast – FREE antivirus. (Free)
    CD Burner XP Pro – CDBurnerXP is a free application to burn CDs and DVDs, including Blu-Ray and HD-DVDs. It also includes the feature to burn and create ISOs, as well as a multilanguage interface. (Free)
    CDEX – You can extract digital audio CDs into mp3/wav. (Free)
    Combofix – Combofix is freeware (a legitimate spyware remover created by sUBs). Combofix was designed to scan a computer for known malware and spyware (SurfSideKick, QooLogic, and Look2Me, as well as any other combination of the mentioned spyware applications) and remove them. (Free)
    Cpu-Z – Provides information about some of the main devices of your system. (Free)
    Cropper – Cropper is a screen capture utility written in C#. It makes it fast and easy to grab parts of your screen. Use it to easily crop out sections of vector graphic files such as Fireworks without having to flatten the files or open them in a new editor. Use it to easily capture parts of a web site, including text and images. It's also great for writing documentation that needs images of your application or web site. (Free)
    DropBox – Drag and drop files to sync between computers. (Free)
    DVD-Fab – Converts/copies DVDs/Blu-Ray to different formats (like mp4, mkv, avi). (COST/Trial Available)
    FastStone Capture – FastStone Capture is a powerful, lightweight, yet full-featured screen capture tool that allows you to easily capture and annotate anything on the screen, including windows, objects, menus, full screen, rectangular/freehand regions and even scrolling windows/web pages. (Free)
    ffdshow – FFDShow is a DirectShow decoding filter for decompressing DivX, XviD, H.264, FLV1, WMV, MPEG-1, MPEG-2 and MPEG-4 movies. (Free)
    Filezilla – FileZilla Client is a fast and reliable cross-platform FTP, FTPS and SFTP client with lots of useful features and an intuitive graphical user interface. You can also download a server version. (Free)
    FireFox – Web browser; do you really need an explanation? (Free)
    FireGestures – A customizable mouse gestures extension which enables you to execute various commands and user scripts with five types of gestures. (Free)
    FoxIt Reader – Lightweight PDF viewer. You should install this with the advanced setting, or it will install a toolbar and set up some shortcuts. (Free)
    gSynchIt – Sync Gmail and Outlook. Even supports Outlook 2010 32/64-bit. (COST/Trial Available)
    Hulu Desktop – At home or in a hotel, this has replaced my cable/satellite subscription. (Free)
    ImgBurn – ImgBurn is a lightweight CD / DVD / HD DVD / Blu-ray burning application that everyone should have in their toolkit! (Free)
    Infrarecorder – InfraRecorder is a free CD/DVD burning solution for Microsoft Windows. It offers a wide range of powerful features, all through an easy to use application interface and Windows Explorer integration. (Free)
    KeePass – KeePass is a free open source password manager, which helps you to manage your passwords in a secure way. (Free)
    LastPass – Another password manager: synchronization between browsers, automatic form filling and more. (Free)
    Live Essentials – One download and lots of programs, including Mail, Live Writer, Movie Maker and more! (Free)
    Monitores – MonitorES is a small Windows utility that helps you turn off the monitor display when you lock down your machine. Also, when you lock your machine, it will pause all your running media programs, set your IM status message to "Away" / a custom message (via options), and restore it back to normal when you're back. (Free)
    mRemote – mRemote is a full-featured, multi-tab remote connections manager. (Free)
    Open Office – OpenOffice.org 3 is the leading open-source office software suite for word processing, spreadsheets, presentations, graphics, databases and more. It is available in many languages and works on all common computers. It stores all your data in an international open standard format and can also read and write files from other common office software packages. It can be downloaded and used completely free of charge for any purpose. (Free)
    Paint.NET – Simple, intuitive, and innovative user interface for editing photos. (Free)
    Picasa – Picasa is free photo editing software from Google that makes your pictures look great. (Free)
    Pidgin – Pidgin is an easy to use and free chat client used by millions. Connect to AIM, MSN, Yahoo, and more chat networks all at once. (Free)
    PING – PING is a live Linux ISO, based on the excellent Linux From Scratch (LFS) documentation. It can be burnt on a CD and booted, or integrated into a PXE / RIS environment. (Free)
    Putty – PuTTY is an SSH and telnet client, developed originally by Simon Tatham for the Windows platform. (Free)
    Revo Uninstaller – Revo Uninstaller Pro helps you to uninstall software and remove unwanted programs installed on your computer easily, even if you have problems uninstalling them from the "Windows Add or Remove Programs" control panel applet. Revo Uninstaller is a much faster and more powerful alternative to the "Windows Add or Remove Programs" applet, with very powerful features to uninstall and remove programs. (Free)
    Security Essentials – Microsoft Security Essentials is a new, free consumer anti-malware solution for your computer. (Free)
    SetupVirtualCloneDrive – Virtual CloneDrive works and behaves just like a physical CD/DVD drive; however, it exists only virtually. Point to the .ISO file and it appears in Windows Explorer as a drive. (Free)
    Shark 007 Codec Pack – Play just about any file format with this download. Also includes my W7 Media Playlist Generator. (Free)
    Snagit 9 – Screen capture on steroids. Add arrows, captions, etc. to any screenshot. (COST/Trial Available)
    SysinternalsSuite – Go ahead and download the entire Sysinternals suite. I have mentioned multiple programs in this suite already. (Free)
    TeraCopy – TeraCopy is a compact program designed to copy and move files at the maximum possible speed, providing the user with a lot of features. (Free for Home)
    TrueCrypt – Free open-source disk encryption software for Windows 7/Vista/XP, Mac OS X, and Linux. (Free)
    TweetDeck – Fully featured Twitter client. (Free)
    UltraVNC – UltraVNC is a powerful, easy to use and free software that can display the screen of another computer (via internet or network) on your own screen. The program allows you to use your mouse and keyboard to control the other PC remotely. It means that you can work on a remote computer, as if you were sitting in front of it, right from your current location. (Free)
    Unlocker – Unlocks locked files. Pretty simple, right? (Free)
    VLC Media Player – VLC media player is a highly portable multimedia player and multimedia framework capable of reading most audio and video formats. (Free)
    Windows 7 Media Playlist – This program is special to my heart because I wrote it. It has been mentioned on podcasts and various websites. It allows you to quickly create wvx video playlists for Windows Media Center. (Free)
    WinRAR – WinRAR is a powerful archive manager. It can back up your data and reduce the size of email attachments, decompress RAR, ZIP and other files downloaded from the Internet, and create new archives in RAR and ZIP file format. (COST/Trial Available)

    Blogging – I use the following for my blog.

    Insert Code for Windows Live Writer – Insert Code for Windows Live Writer will format a snippet of text in a number of programming languages such as C#, HTML, MSH, JavaScript, Visual Basic and TSQL. (Free)
    LiveWriter – Included in Live Essentials, and the ultimate in Windows blogging. (Free)
    PasteAsVSCode – Plug-in for Windows Live Writer that pastes clipboard content as Visual Studio code. Preserves syntax highlighting, indentation and background color. Converts RTF, outputted by Visual Studio, into HTML. (Free)

    Desktop Management – The list below represents the best in Windows desktop management.

    7 Stacks – Allows users to have "stacks" of icons in their taskbar. (Free)
    Executor – Executor is a multi-purpose launcher and a more advanced and customizable version of Windows Run. (Free)
    Fences – Fences is a program that helps you organize your desktop and can hide your icons when they are not in use. (Free)
    RocketDock – RocketDock is a smoothly animated, alpha-blended application launcher. It provides a nice clean interface to drop shortcuts on for easy access and organization. With each item completely customizable, there is no end to what you can add and launch from the dock. (Free)
    WindowsTab – Tabbing is an essential feature of modern web browsers. Window Tabs brings the productivity of tabbed window management to all of your desktop applications. (Free)

    Read the article

  • VS 2010 SP1 and SQL CE

    - by ScottGu
Last month we released the Beta of VS 2010 Service Pack 1 (SP1). You can learn more about the VS 2010 SP1 Beta from Jason Zander’s two blog posts about it, and from Scott Hanselman’s blog post that covers some of the new capabilities enabled with it. You can download and install the VS 2010 SP1 Beta here. Last week I blogged about the new Visual Studio support for IIS Express that we are adding with VS 2010 SP1. In today’s post I’m going to talk about the new VS 2010 SP1 tooling support for SQL CE, and walk through some of the cool scenarios it enables.

SQL CE – What is it and why should you care?

SQL CE is a free, embedded database engine that enables easy database storage.

No Database Installation Required

SQL CE does not require you to run a setup or install a database server in order to use it. You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine. No setup or extra security permissions are required for it to run, and you do not need an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment.

SQL CE runs in-memory within your ASP.NET application; it starts up when you first access a SQL CE database and automatically shuts down when your application is unloaded. SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET applications.

Works with Existing Data APIs

SQL CE 4 works with existing .NET-based data APIs and supports a SQL Server compatible query syntax. This means you can use existing data APIs like ADO.NET, as well as higher-level ORMs like Entity Framework and NHibernate, with SQL CE. This enables you to use the same data programming skills and data APIs you know today.

Supports Development, Testing and Production Scenarios

SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios. With the SQL CE 4 release we’ve done the engineering work to ensure that SQL CE won’t crash or deadlock when used in a multi-threaded server scenario (like ASP.NET). This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and which explicitly blocked running in web-server environments. Starting with SQL CE 4 you can use it in a web server as well. There are no license restrictions with SQL CE. It is also totally free.

Easy Migration to SQL Server

SQL CE is an embedded database – which makes it ideal for development, testing, and light-usage scenarios. For high-volume sites and applications you’ll probably want to migrate your database to use SQL Server Express (which is free), SQL Server or SQL Azure. These servers enable much better scalability, more development features (including features like stored procedures – which aren’t supported with SQL CE), as well as more advanced data management capabilities.

We’ll ship migration tools that enable you to optionally take SQL CE databases and easily upgrade them to use SQL Server Express, SQL Server, or SQL Azure. You will not need to change your code when upgrading a SQL CE database to SQL Server or SQL Azure. Our goal is to enable you to simply change the database connection string in your web.config file and have your application just work.
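Because SQL CE 4 plugs into the regular ADO.NET provider model, even plain ADO.NET code works against an .sdf file. The snippet below is a minimal illustrative sketch – the file path and Products table are assumptions borrowed from the walkthrough later in this post, not code from the original article:

using System;
using System.Data.SqlServerCe; // ADO.NET provider that ships with SQL CE 4

class SqlCeQuickStart
{
    static void Main()
    {
        // Outside ASP.NET there is no |DataDirectory| token, so the path
        // to the .sdf file is given directly. Path and table are illustrative.
        const string connectionString = @"Data Source=App_Data\Store.sdf";

        using (var connection = new SqlCeConnection(connectionString))
        using (var command = new SqlCeCommand("SELECT COUNT(*) FROM Products", connection))
        {
            connection.Open();

            // ExecuteScalar returns the single scalar value the query produces.
            int productCount = (int)command.ExecuteScalar();
            Console.WriteLine("Products: {0}", productCount);
        }
    }
}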
New Tooling Support for SQL CE in VS 2010 SP1

VS 2010 SP1 includes much improved tooling support for SQL CE, and adds support for using SQL CE within ASP.NET projects for the first time. With VS 2010 SP1 you can now:

- Create new SQL CE databases
- Edit and modify SQL CE database schema and indexes
- Populate SQL CE databases with data
- Use the Entity Framework (EF) designer to create model layers against SQL CE databases
- Use EF Code First to define model layers in code, then create a SQL CE database from them, and optionally edit the DB with VS
- Deploy SQL CE databases to remote servers using Web Deploy and optionally convert them to full SQL Server databases

You can take advantage of all of the above features from within both ASP.NET Web Forms and ASP.NET MVC based projects.

Download

You can enable SQL CE tooling support within VS 2010 by first installing VS 2010 SP1 (beta). Once SP1 is installed, you’ll also need to install the SQL CE Tools for Visual Studio download. This is a separate download that enables the SQL CE tooling support for VS 2010 SP1.

Walkthrough of Two Scenarios

In this blog post I’m going to walk through how you can take advantage of SQL CE and VS 2010 SP1 using both an ASP.NET Web Forms and an ASP.NET MVC based application. Specifically, we’ll walk through:

- How to create a SQL CE database using VS 2010 SP1, then use the EF4 visual designers in Visual Studio to construct a model layer from it, and then display and edit the data using an ASP.NET GridView control.
- How to use an EF Code First approach to define a model layer using POCO classes and then have EF Code First “auto-create” a SQL CE database for us based on our model classes. We’ll then look at how we can use the new VS 2010 SP1 support for SQL CE to inspect the database that was created, populate it with data, and later make schema changes to it. We’ll do all this within the context of an ASP.NET MVC based application.

You can follow the two walkthroughs below on your own machine by installing VS 2010 SP1 (beta) and then installing the SQL CE Tools for Visual Studio download (which is a separate download that enables SQL CE tooling support for VS 2010 SP1).

Walkthrough 1: Create a SQL CE Database, Create EF Model Classes, Edit the Data with a GridView

This first walkthrough will demonstrate how to create and define a SQL CE database within an ASP.NET Web Forms application. We’ll then build an EF model layer for it and use that model layer to enable data editing scenarios with an <asp:GridView> control.

Step 1: Create a new ASP.NET Web Forms Project

We’ll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET Web Forms project. We’ll use the “ASP.NET Web Application” project template option so that it has a default UI skin implemented.

Step 2: Create a SQL CE Database

Right-click on the “App_Data” folder within the created project and choose the “Add->New Item” menu command. This will bring up the “Add Item” dialog box. Select the “SQL Server Compact 4.0 Local Database” item (new in VS 2010 SP1) and name the database file to create “Store.sdf”. Note that SQL CE database files have a .sdf filename extension. Place them within the /App_Data folder of your ASP.NET application to enable easy deployment. When we click the “Add” button, a Store.sdf file is added to our project.

Step 3: Adding a “Products” Table

Double-clicking the “Store.sdf” database file will open it up within the Server Explorer tab.
Since it is a new database, there are no tables within it. Right-click on the “Tables” icon and choose the “Create Table” menu command to create a new database table. We’ll name the new table “Products” and add 4 columns to it. We’ll mark the first column as a primary key (and make it an identity column so that its value will automatically increment with each new row). When we click “OK” our new Products table will be created in the SQL CE database.

Step 4: Populate with Data

Once our Products table is created it will show up within the Server Explorer. We can right-click it and choose the “Show Table Data” menu command to edit its data. Let’s add a few sample rows of data to it.

Step 5: Create an EF Model Layer

We have a SQL CE database with some data in it – let’s now create an EF model layer that will provide a way for us to easily query and update data within it. Right-click on the project and choose the “Add->New Item” menu command. This will bring up the “Add New Item” dialog – select the “ADO.NET Entity Data Model” item within it and name it “Store.edmx”. This will add a new Store.edmx item to our Solution Explorer and launch a wizard that allows us to quickly create an EF model.

Select the “Generate From Database” option and click Next. Choose to use the Store.sdf SQL CE database we just created and then click Next again. The wizard will then ask you what database objects you want to import into your model. Let’s choose to import the “Products” table we created earlier. When we click the “Finish” button, Visual Studio will open up the EF designer. It will have a Product entity already on it that maps to the “Products” table within our SQL CE database. The VS 2010 SP1 EF designer works exactly the same with SQL CE as it does already with SQL Server and SQL Express. The Product entity will be persisted as a class (called “Product”) that we can programmatically work against within our ASP.NET application.

Step 6: Compile the Project

Before using your model layer you’ll need to build your project. Do a Ctrl+Shift+B to compile the project, or use the Build->Build Solution menu command.

Step 7: Create a Page that Uses our EF Model Layer

Let’s now create a simple ASP.NET Web Form that contains a GridView control that we can use to display and edit our Products data (via the EF model layer we just created). Right-click on the project and choose the Add->New Item command. Select the “Web Form from Master Page” item template, and name the page you create “Products.aspx”. Base the master page on the “Site.Master” template that is in the root of the project.

Add an <h2>Products</h2> heading to the new page, and add an <asp:gridview> control within it. Then click the “Design” tab to switch into design view. Select the GridView control, and then click the top-right corner to display the GridView’s “Smart Tasks” UI. Choose the “New data source…” dropdown option. This will bring up a dialog which allows you to pick your data source type. Select the “Entity” data source option – which will allow us to easily connect our GridView to the EF model layer we created earlier. This will bring up another dialog that allows us to pick our model layer. Select the “StoreEntities” option in the dropdown – which is the EF model layer we created earlier.
Then click Next – which will allow us to pick which entity within it we want to bind to. Select the “Products” entity in the dialog – which indicates that we want to bind against the “Product” entity class we defined earlier. Then click the “Enable automatic updates” checkbox to ensure that we can both query and update Products. When you click “Finish”, VS will wire up an <asp:EntityDataSource> to your <asp:GridView> control. The last two steps are to click the “Enable Editing” checkbox on the grid (which will cause the grid to display an “Edit” link on each row) and (optionally) use the Auto Format dialog to pick a UI template for the grid.

Step 8: Run the Application

Let’s now run our application and browse to the /Products.aspx page that contains our GridView. When we do so we’ll see a grid UI of the Products within our SQL CE database. Clicking the “Edit” link for any of the rows will allow us to edit their values. When we click “Update” the GridView will post back the values, persist them through our EF model layer, and ultimately save them within our SQL CE database.

Learn More about using EF with ASP.NET Web Forms

Read this tutorial series on the http://asp.net site to learn more about how to use EF with ASP.NET Web Forms. The tutorial series uses SQL Express as the database – but the nice thing is that all of the same steps/concepts can now also be done with SQL CE.

Walkthrough 2: Using EF Code-First with SQL CE and ASP.NET MVC 3

We used a database-first approach with the sample above – where we first created the database, and then used the EF designer to create model classes from the database. In addition to supporting a designer-based development workflow, EF also enables a more code-centric option which we call “code first development”. Code-first development enables a pretty sweet workflow. It enables you to:

- Define your model objects by simply writing “plain old classes”, with no base classes or visual designer required
- Use a “convention over configuration” approach that enables database persistence without explicitly configuring anything
- Optionally override the convention-based persistence and use a fluent code API to fully customize the persistence mapping
- Optionally auto-create a database based on the model classes you define – allowing you to start from code first

I’ve done several blog posts about EF Code First in the past – I really think it is great. The good news is that it also works very well with SQL CE. The combination of SQL CE, EF Code First, and the new VS tooling support for SQL CE enables a pretty nice workflow. Below is a simple example of how you can use them to build a simple ASP.NET MVC 3 application.

Step 1: Create a new ASP.NET MVC 3 Project

We’ll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET MVC 3 project. We’ll use the “Internet Project” template so that it has a default UI skin implemented.

Step 2: Use NuGet to Install EFCodeFirst

Next we’ll use the NuGet package manager (automatically installed by ASP.NET MVC 3) to add the EFCodeFirst library to our project. We’ll use the Package Manager command shell to do this. Bring up the package manager console within Visual Studio by selecting the View->Other Windows->Package Manager Console menu command.
Then type:

install-package EFCodeFirst

within the package manager console to download the EFCodeFirst library and add it to our application.

Step 3: Build Some Model Classes

Using a “code first” based development workflow, we create our model classes first (even before we have a database). We create these model classes by writing code. For this sample, we will right-click on the “Models” folder of our project and add three classes to the project (they appeared as screenshots in the original post – hedged reconstructions follow after Step 6 below). The “Dinner” and “RSVP” model classes are “plain old CLR objects” (aka POCO). They do not need to derive from any base classes or implement any interfaces, and the properties they expose are standard .NET data types. No data persistence attributes or data code has been added to them. The “NerdDinners” class derives from the DbContext class (which is supplied by EFCodeFirst) and handles the retrieval/persistence of our Dinner and RSVP instances from a database.

Step 4: Listing Dinners

We’ve written all of the code necessary to implement our model layer for this simple project. Let’s now expose and implement the URL /Dinners/Upcoming within our project. We’ll use it to list upcoming dinners that happen in the future. We’ll do this by right-clicking on our “Controllers” folder and selecting the “Add->Controller” menu command. We’ll name the controller we want to create “DinnersController”. We’ll then implement an “Upcoming” action method within it that lists upcoming dinners using our model layer above, using a LINQ query to retrieve the data and pass it to a view to render (again, see the sketches after Step 6 below). We’ll then right-click within our Upcoming method and choose the “Add View” menu command to create an “Upcoming” view template that displays our dinners. We’ll use the “empty” template option within the “Add View” dialog and write the view template using Razor.

Step 5: Configure our Project to use a SQL CE Database

We have finished writing all of our code – our last step will be to configure a database connection string to use. We will point our NerdDinners model class at a SQL CE database by adding a <connectionStrings> entry to the web.config file at the top of our project (see the sketch after Step 6 below). EF Code First uses a default convention where context classes look for a connection string that matches the DbContext class name. Because we created a “NerdDinners” class earlier, we’ve also named our connection string “NerdDinners”. We are configuring our connection string to use SQL CE as the database, and telling it that our SQL CE database file will live within the \App_Data directory of our ASP.NET project.

Step 6: Running our Application

Now that we’ve built our application, let’s run it! We’ll browse to the /Dinners/Upcoming URL – doing so will display an empty list of upcoming dinners. You might ask – but where did it query to get the dinners from? We didn’t explicitly create a database?!? One of the cool features that EF Code First supports is the ability to automatically create a database (based on the schema of our model classes) when the database it points at doesn’t exist. Above we configured EF Code First to point at a SQL CE database in the \App_Data\ directory of our project. When we ran our application, EF Code First saw that the SQL CE database didn’t exist and automatically created it for us.
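The model classes, controller action and web.config entry referenced in Steps 3–5 were screenshots in the original post. The sketches below are reconstructions consistent with the surrounding text – the class names Dinner, RSVP and NerdDinners and the connection-string name come from the post itself, while the individual property names are assumptions:

using System;
using System.Collections.Generic;
using System.Data.Entity; // DbContext/DbSet are supplied by the EFCodeFirst package

// Plain POCO model classes - no base class, interface or attributes required.
public class Dinner
{
    public int DinnerID { get; set; }
    public string Title { get; set; }
    public DateTime EventDate { get; set; }
    public string Address { get; set; }
    public string HostedBy { get; set; }
    public virtual ICollection<RSVP> RSVPs { get; set; }
}

public class RSVP
{
    public int RsvpID { get; set; }
    public int DinnerID { get; set; }
    public string AttendeeEmail { get; set; }
    public virtual Dinner Dinner { get; set; }
}

// Handles retrieval and persistence of Dinner and RSVP instances.
public class NerdDinners : DbContext
{
    public DbSet<Dinner> Dinners { get; set; }
    public DbSet<RSVP> RSVPs { get; set; }
}

The Upcoming action method described in Step 4 might then look something like this:

using System;
using System.Linq;
using System.Web.Mvc;

public class DinnersController : Controller
{
    // GET: /Dinners/Upcoming - lists dinners that happen in the future.
    public ActionResult Upcoming()
    {
        var nerdDinners = new NerdDinners();

        // LINQ query against the model layer; executed when ToList() runs.
        var upcoming = nerdDinners.Dinners
                                  .Where(d => d.EventDate > DateTime.Now)
                                  .ToList();

        return View(upcoming);
    }
}

And the web.config entry from Step 5 plausibly looked like this (the |DataDirectory| token resolves to the application’s \App_Data folder):

<connectionStrings>
  <add name="NerdDinners"
       providerName="System.Data.SqlServerCe.4.0"
       connectionString="Data Source=|DataDirectory|NerdDinners.sdf" />
</connectionStrings>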
Step 7: Using VS 2010 SP1 to Explore our newly created SQL CE Database

Click the “Show All Files” icon within the Solution Explorer and you’ll see the “NerdDinners.sdf” SQL CE database file that was automatically created for us by EF Code First within the \App_Data\ folder. We can optionally right-click on the file and choose “Include in Project” to add it to our solution. We can also double-click the file (regardless of whether it is added to the project) and VS 2010 SP1 will open it as a database we can edit within the “Server Explorer” tab of the IDE.

When we double-click our NerdDinners.sdf SQL CE file, we can drill in to see the schema of the Dinners and RSVPs tables in the tree explorer. Notice how two tables – Dinners and RSVPs – were automatically created for us within our SQL CE database. This was done by EF Code First when we accessed the NerdDinners class by running our application above. We can right-click on a table and use the “Show Table Data” command to enter some upcoming dinners in our database, using the built-in editor that VS 2010 SP1 supports to populate our table data. And now when we hit “refresh” on the /Dinners/Upcoming URL within our browser we’ll see some upcoming dinners show up.

Step 8: Changing our Model and Database Schema

Let’s now modify the schema of our model layer and database, and walk through one way that the new VS 2010 SP1 tooling support for SQL CE can make this easier. With EF Code First you typically start making database changes by modifying the model classes. For example, let’s add an additional string property called “UrlLink” to our “Dinner” class. We’ll use this to point to a link with more information about the event. Now when we re-run our project and visit the /Dinners/Upcoming URL, we’ll see an error thrown.

We are seeing this error because EF Code First automatically created our database, and by default when it does this it adds a table that helps track whether the schema of our database is in sync with our model classes. EF Code First helpfully throws an error when they become out of sync – making it easier to track down issues at development time that you might otherwise only find (via obscure errors) at runtime. Note that if you do not want this feature you can turn it off by changing the default conventions of your DbContext class (in this case our NerdDinners class) to not track the schema version (a hedged sketch of how appears at the end of this step).

Our model classes and database schema are out of sync in the above example – so how do we fix this? There are two approaches you can use today:

1. Delete the database and have EF Code First automatically re-create the database based on the new model class schema (losing the data within the existing DB)
2. Modify the schema of the existing database to bring it in sync with the model classes (keeping/migrating the data within the existing DB)

There are a couple of ways you can do the second approach. Below I’m going to show how you can take advantage of the new VS 2010 SP1 tooling support for SQL CE to use a database schema tool to modify our database structure. We are also going to be supporting a “migrations” feature with EF in the future that will allow you to automate/script database schema migrations programmatically.
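As a side note on the schema-tracking convention mentioned above: in later EF Code First builds this is switched off by removing the metadata convention inside OnModelCreating. The exact type names varied between the CTP-era EFCodeFirst package and the eventual EF 4.1 release, so treat this as a hedged sketch rather than the exact code for the build used in this post:

using System.Data.Entity;
using System.Data.Entity.Infrastructure; // IncludeMetadataConvention in EF 4.1

public class NerdDinners : DbContext
{
    public DbSet<Dinner> Dinners { get; set; }
    public DbSet<RSVP> RSVPs { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Stop EF Code First from creating and checking the EdmMetadata
        // schema-version tracking table.
        modelBuilder.Conventions.Remove<IncludeMetadataConvention>();
    }
}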
Step 9: Modify our SQL CE Database Schema using VS 2010 SP1

The new SQL CE tooling support within VS 2010 SP1 makes it easy to modify the schema of our existing SQL CE database. To do this we’ll right-click on our “Dinners” table and choose the “Edit Table Schema” command. This will bring up the “Edit Table” dialog. We can rename, change or delete any of the existing columns in our table, or click at the bottom of the column listing and type to add a new column. I’ve added a new “UrlLink” column of type “nvarchar” (since our property is a string). When we click OK, our database will be updated to have the new column and our schema will now match our model classes.

Because we are manually modifying our database schema, there is one additional step we need to take to let EF Code First know that the database schema is in sync with our model classes. As I mentioned earlier, when a database is automatically created by EF Code First it adds an “EdmMetadata” table to the database to track schema versions (and hash our model classes against them to detect mismatches between our model classes and the database schema). Since we are manually updating and maintaining our database schema, we don’t need this table – and can just delete it. This will leave us with just the two tables that correspond to our model classes, and now when we re-run our /Dinners/Upcoming URL it will display the dinners correctly.

One last touch would be to update our view to check for the new UrlLink property and render an <a> link to it if an event has one. And now when we refresh /Dinners/Upcoming we will see hyperlinks for the events that have a UrlLink stored in the database.

Summary

SQL CE provides a free, embedded database engine that you can use to easily enable database storage. With SQL CE 4 you can now take advantage of it within ASP.NET projects and applications (both Web Forms and MVC). VS 2010 SP1 provides tooling support that enables you to easily create, edit and modify SQL CE databases – as well as use the standard EF designer against them. This allows you to re-use your existing skills and data knowledge while taking advantage of an embedded database option. This is useful both for small applications (where you don’t need the scalability of a full SQL Server), as well as for development and testing scenarios – where you want to be able to rapidly develop/test your application without having a full database instance. SQL CE makes it easy to later migrate your data to a full SQL Server or SQL Azure instance if you want to – without having to change any code in your application. All we would need to change in the above two scenarios is the <connectionString> value within the web.config file in order to have our code run against a full SQL Server. This provides the flexibility to scale up your application, starting from a small embedded database solution, as needed.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • IIS SSL Certificate Renewal Pain

    - by Rick Strahl
I’m in the middle of my annual certificate renewal for the West Wind site and I can honestly say that I hate IIS’s certificate system. When it works it’s fine, but when it doesn’t, man, can it be a pain. Because I deal with public certificates on my site merely once a year, and you have to perform the certificate dance just the right way, I seem to run into some sort of trouble every year – thinking that Microsoft surely must have addressed the issues I ran into previously. HA! Not so.

Don’t ever use the Renew Certificate Feature in IIS!

The first rule that I should never have forgotten is that certificate renewals in IIS (7 is what I’m using, but I think it’s no different in 7.5 and 8) simply don’t work if you’re submitting to get a public certificate from a certificate authority. I use DNSimple for my DNS domain management and SSL certificates because they provide ridiculously easy domain management and good prices for SSL certs – especially wildcard certificates, which is what I use on west-wind.com.

Certificates in IIS are pegged to the machine root. If you go into IIS Manager, go to the machine root of the tree and then click on Certificates, you get various certificate options. Both of these options create a new certificate request (CSR), which is just a text file. But if you’re silly enough, like me, to click the Renew button on your old certificate, you’ll find that you end up generating a very long certificate request that looks nothing like the original one, in a format that is not accepted by most certificate authorities. While I’m not sure exactly what the problem is, it simply looks like IIS respects none of your original certificate bit-size choices and generates a huge certificate request that is three times the size of a ‘normal’ certificate request. The end result (and I’ve done this at least twice now) is that the certificate processor is likely to fail processing those renewals.

Always create a new Certificate

While it’s a little more work and you have to remember how to fill out the certificate request properly, this is the safe way to make sure your certificate generates properly. First comes the Distinguished Name Properties dialog. Ah yes, you have to love the nomenclature of this stuff. Distinguished name, common name – WTF is a common name? It doesn’t look common to me! Make sure this form gets filled out correctly.

Common Name: This is the domain name of the Web site. In my case I’m creating a wildcard certificate, so I’m using the * prefix. If you’re purchasing a certificate for a specific domain, use www.west-wind.com or store.west-wind.com for example. Make sure this matches the EXACT domain you’re trying to use secure access on, because that’s all the certificate is going to work on – unless you get a wildcard certificate.

Organization: The name of your company or organization. Depending on the kind of certificate you purchase, this name will show up on your certificate. Most low-end SSL certificates (i.e. those that cost under $100 for single domains) don’t list the organization; the higher signature certificates that also require extensive validation by the cert authority do. Regardless, you should make sure this matches the right company/organization.

Organizational Unit: This can be anything. Not really sure what this is for, but traditionally I’ve always set this to Web because – well, this is a Web thing after all, right?
I’ve never seen this used anywhere that I can tell, other than to internally reference the cert.

State and Country: Pretty obvious. Should reflect the location of the business/organization/person or site.

Next you have to configure the bit size used for the certificate. The default on this dialog is 1024, but I’ve found that most providers these days request a minimum bit length of 2048, as did my DNSimple provider. Again, check with the provider when you submit to make sure. Bit-length mismatches can cause problems if you use a size that isn’t supported by the provider. I had that happen last year when I submitted my CSR and it got rejected quite a bit later, when certs usually are issued within an hour or less.

When you’re done here, the certificate request is saved to disk as a .txt file and it should look something like this (this is a 2048 bit length CSR):

-----BEGIN NEW CERTIFICATE REQUEST-----
MIIEVGCCAz0CAQAwdjELMAkGA1UEBhMCVVMxDzANBgNVBAgMBkhhd2FpaTENMAsG
A1UEBwwEUGFpYTEfMB0GA1UECgwWV2VzdCBXaW5kIFRlY2hub2xvZ2llczEMMAoG
B1UECwwDV2ViMRgwFgYDVQQDDA8qLndlc3Qtd2luZC5jb20wggEiMA0GCSqGSIb3
DQEBAQUAA4IBDwAwggEKAoIBAQDIPWOFMkMVRp2Ftj9w/cCVV4OYYhoZYtl+8lTk
oqDwKca0xWHLgioX/9v0rZLS6a82MHqKEBxVXu+cuCmSE4AQtB/1YH9lS4tpc/be
OZDvnTotP6l4MCEzzAfROcw4CiIg6X0RMSnl8IATAvv2V5LQM9TDdt9oDdMpX2IY
+vVC9RZ7PMHBmR9kwI2i/lrKitzhQKaHgpmKcRlM6iqpALUiX28w5HJaDKK1MDHN
607tyFJLHijuJKx7PdTqZYf50KkC3NupfZ2avVycf18Q13jHWj59tvwEOczoVzRL
l4LQivAqbhyiqMpWnrZunIOUZta5aGm+jo7O1knGWJjxuraTAgMBAAGgggGYMBoG
CisGAQQBgjcNAgMxDBYKNi4yLjkyMDAuMjA0BgkrBgEEAYI3FRQxJzAlAgEFDAZS
QVNYUFMMC1JBU1hQU1xSaWNrDAtJbmV0TWdyLmV4ZTByBgorBgEEAYI3DQICMWQw
YgIBAR5aAE0AaQBjAHIAbwBzAG8AZgB0ACAAUgBTAEEAIABTAEMAaABhAG4AbgBl
AGwAIABDAHIAeQBwAHQAbwBnAHIAYQBwAGgAaQBjACAAUAByAG8AdgBpAGQAZQBy
AwEAMIHPBgkqhkiG9w0BCQ4xgcEwgb4wDgYDVR0PAQH/BAQDAgTwMBMGA1UdJQQM
MAoGCCsGAQUFBwMBMHgGCSqGSIb3DQEJDwRrMGkwDgYIKoZIhvcNAwICAgCAMA4G
CCqGSIb3DQMEAgIAgDALBglghkgBZQMEASowCwYJYIZIAWUDBAEtMAsGCWCGSAFl
AwQBAjALBglghkgBZQMEAQUwBwYFKw4DAgcwCgYIKoZIhvcNAwcwHQYDVR0OBBYE
FD/yOsTbXE+GVFCFMmldzQvyloz9MA0GCSqGSIb3DQEBBQUAA4IBAQCK6LlsCuIM
1AU0niB6QZ9v0FTsGFxP1dYvVUnJyY6VEKNiGFiQjZac7UCs0p58yScdXWEFOE8V
OsjAYD3xYNc05+ckyD67UHRGEUAVB9RBvbKW23KeR/8kBmEzc8PemD52YOgExxAJ
57xWmAwEHAvbgYzQvhO8AOzH3TGvvHbg5UKM1pYgNmuwZq5DkL/IDoeIJwfk/wrI
wghNTuxxIFgbH4YrgLgv4PRvrS/LaTCRBdboaCgzATMczaOb1nd/DVNR+3fCtMhM
W0psTAjzRbmXF3nJyAQa7jF/52gkY0RfFX2lG5tJnG+XDsVNvKNvh9Qa5Tlmkm06
ILKCm9ciWCKk
-----END NEW CERTIFICATE REQUEST-----

You can take that certificate request and submit it to your certificate provider. Since this is base64 encoded, you can typically just paste it into a text box on the submission page, or some providers will ask you to upload the CSR as a file.

What does a Renewal look like?
Note the length of the CSR will vary somewhat with key strength, but compare this to a renewal request that IIS generated from my existing site:

-----BEGIN NEW CERTIFICATE REQUEST-----
MIIPpwYFKoZIhvcNAQcCoIIPmDCCD5QCAQExCzAJBgUrDgMCGgUAMIIIqAYJKoZI
hvcNAQcBoIIImQSCCJUwggiRMIIH+gIBADBdMSEwHwYDVQQLDBhEb21haW4gQ29u
dHJvbCBWYWxpFGF0ZWQxHjAcBgNVBAsMFUVzc2VudGlhbFNTTCBXaWxkY2FyZDEY
MBYGA1UEAwwPKi53ZXN0LXdpbmQuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCB
iQKBgQCK4OuIOR18Wb8tNMGRZiD1c9X57b332Lj7DhbckFqLs0ys8kVDHrTXSj+T
Ye9nmAvfPpZmBtE5p9qRNN79rUYugAdl+qEtE4IJe1bRfxXzcKa1SXa8+TEs3zQa
zYSmcR2dDuC8om1eAdeCtt0NnkvANgm1VLwGOor/UHMASaEhCQIDAQABoIIG8jAa
BgorBgEEAYI3DQIDMQwWCjYuMi45MjAwLjIwNAYJKwYBBAGCNxUUMScwJQIBBQwG
UkFTWFBTDAtSQVNYUFNcUmljawwLSW5ldE1nci5leGUwZgYKKwYBBAGCNw0CAjFY
MFYCAQIeTgBNAGkAYwByAG8AcwBvAGYAdAAgAFMAdAByAG8AbgBnACAAQwByAHkA
cAB0AG8AZwByAGEAcABoAGkAYwAgAFAAcgBvAHYAaQBkAGUAcgMBADCCAQAGCSqG
SIb3DQEJDjGB8jCB7zAOBgNVHQ8BAf8EBAMCBaAwDAYDVR0TAQH/BAIwADA0BgNV
HSUELTArBggrBgEFBQcDAQYIKwYBBQUHAwIGCisGAQQBgjcKAwMGCWCGSAGG+EIE
ATBPBgNVHSAESDBGMDoGCysGAQQBsjEBAgIHMCswKQYIKwYBBQUHAgEWHWh0dHBz
Oi8vc2VjdXJlLmNvbW9kby5jb20vQ1BTMAgGBmeBDAECATApBgNVHREEIjAggg8q
Lndlc3Qtd2luZC5jb22CDXdlc3Qtd2luZC5jb20wHQYDVR0OBBYEFEVLAyO8gDiv
lsfovKrx9mHPyrsiMIIFMAYJKwYBBAGCNw0BMYIFITCCBR0wggQFoAMCAQICEQDu
1E1T5Jvtkm5LOfSHabWlMA0GCSqGSIb3DQEBBQUAMHIxCzAJBgNVBAYTAkdCMRsw
GQYDVQQIExJHcmVhdGVyIE1hbmNoZXN0ZXIxEDAOBgNVBAcTB1NhbGZvcmQxGjAY
BgNVBAoTEUNPTU9ETyBDQSBMaW1pdGVkMRgwFgYDVQQDEw9Fc3NlbnRpYWxTU0wg
Q0EwHhcNMTQwNTA3MDAwMDAwWhcNMTUwNjA2MjM1OTU5WjBdMSEwHwYDVQQLExhE
b21haW4gQ29udHJvbCBWYWxpZGF0ZWQxHjAcBgNVBAsTFUVzc2VudGlhbFNTTCBX
aWxkY2FyZDEYMBYGA1UEAxQPKi53ZXN0LXdpbmQuY29tMIIBIjANBgkqhkiG9w0B
AQEFAAOCAQ8AMIIBCgKCAQEAiyKfL66XB51DlUfm6xXqJBcvMU2qorRHxC+WjEpB
amvg8XoqNfCKzDAvLMbY4BLhbYCTagqtslnP3Gj4AKhXqRKU0n6iSbmS1gcWzCJM
CHufZ5RDtuTuxhTdJxzP9YqZUfKV5abWQp/TK6V1ryaBJvdqM73q4tRjrQODtkiR
PfZjxpybnBHFJS8jYAf8jcOjSDZcgN1d9Evc5MrEJCp/90cAkozyF/NMcFtD6Yj8
UM97z3MzDT2JPDoH3kAr3cCgpUNyQ2+wDNCnL9eWYFkOQi8FZMsZol7KlZ5NgNfO
a7iZMVGbqDg6rkS//2uGe6tSQJTTs+mAZB+na+M8XT2UqwIDAQABo4IBwTCCAb0w
HwYDVR0jBBgwFoAU2svqrVsIXcz//CZUzknlVcY49PgwHQYDVR0OBBYEFH0AmLiL
RSEL9+sQD/n5O4N7/nnqMA4GA1UdDwEB/wQEAwIFoDAMBgNVHRMBAf8EAjAAMDQG
A1UdJQQtMCsGCCsGAQUFBwMBBggrBgEFBQcDAgYKKwYBBAGCNwoDAwYJYIZIAYb4
QgQBME8GA1UdIARIMEYwOgYLKwYBBAGyMQECAgcwKzApBggrBgEFBQcCARYdaHR0
cHM6Ly9zZWN1cmUuY29tb2RvLmNvbS9DUFMwCAYGZ4EMAQIBMDsGA1UdHwQ0MDIw
MKAuoCyGKmh0dHA6Ly9jcmwuY29tb2RvY2EuY29tL0Vzc2VudGlhbFNTTENBLmNy
bDBuBggrBgEFBQcBAQRiMGAwOAYIKwYBBQUHMAKGLGh0dHA6Ly9jcnQuY29tb2Rv
Y2EuY29tL0Vzc2VudGlhbFNTTENBXzIuY3J0MCQGCCsGAQUFBzABhhhodHRwOi8v
b2NzcC5jb21vZG9jYS5jb20wKQYDVR0RBCIwIIIPKi53ZXN0LXdpbmQuY29tgg13
ZXN0LXdpbmQuY29tMA0GCSqGSIb3DQEBBQUAA4IBAQBqBfd6QHrxXsfgfKARG6np
8yszIPhHGPPmaE7xq7RpcZjY9H+8l6fe4jQbGFjbA5uHBklYI4m2snhPaW2p8iF8
YOkm2V2hEsSTnkf5/flw9mZtlCFEDFXSsBxBdNz8RYTthPMu1h09C0XuDB30sztg
nR692FrxJN5/bXsk+MC9nEweTFW/t2HW+XZ8bhM7vsAS+pZionR4MyuQ0mYIt/lD
csZVZ91KxTsIm8rNMkkYGFoSIXjQ0+0tCbxMF0i2qnpmNRpA6PU8l7lxxvPkplsk
9KB8QIPFrR5p/i/SUAd9vECWh5+/ktlcrfFP2PK7XcEwWizsvMrNqLyvQVNXSUPT
MA0GCSqGSIb3DQEBBQUAA4GBABt/NitwMzc5t22p5+zy4HXbVYzLEjesLH8/v0ot
uLQ3kkG8tIWNh5RplxIxtilXt09H4Oxpo3fKUN0yw+E6WsBfg0sAF8pHNBdOJi48
azrQbt4HvKktQkGpgYFjLsormjF44SRtToLHlYycDHBNvjaBClUwMCq8HnwY6vDq
xikRoIIFITCCBR0wggQFoAMCAQICEQDu1E1T5Jvtkm5LOfSHabWlMA0GCSqGSIb3
DQEBBQUAMHIxCzAJBgNVBAYTAkdCMRswGQYDVQQIExJHcmVhdGVyIE1hbmNoZXN0
ZXIxEDAOBgNVBAcTB1NhbGZvcmQxGjAYBgNVBAoTEUNPTU9ETyBDQSBMaW1pdGVk
MRgwFgYDVQQDEw9Fc3NlbnRpYWxTU0wgQ0EwHhcNMTQwNTA3MDAwMDAwWhcNMTUw
NjA2MjM1OTU5WjBdMSEwHwYDVQQLExhEb21haW4gQ29udHJvbCBWYWxpZGF0ZWQx
HjAcBgNVBAsTFUVzc2VudGlhbFNTTCBXaWxkY2FyZDEYMBYGA1UEAxQPKi53ZXN0
LXdpbmQuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAiyKfL66X
B51DlUfm6xXqJBcvMU2qorRHxC+WjEpBamvg8XoqNfCKzDAvLMbY4BLhbYCTagqt
slnP3Gj4AKhXqRKU0n6iSbmS1gcWzCJMCHufZ5RDtuTuxhTdJxzP9YqZUfKV5abW
Qp/TK6V1ryaBJvdqM73q4tRjrQODtkiRPfZjxpybnBHFJS8jYAf8jcOjSDZcgN1d
9Evc5MrEJCp/90cAkozyF/NMcFtD6Yj8UM97z3MzDT2JPDoH3kAr3cCgpUNyQ2+w
DNCnL9eWYFkOQi8FZMsZol7KlZ5NgNfOa7iZMVGbqDg6rkS//2uGe6tSQJTTs+mA
ZB+na+M8XT2UqwIDAQABo4IBwTCCAb0wHwYDVR0jBBgwFoAU2svqrVsIXcz//CZU
zknlVcY49PgwHQYDVR0OBBYEFH0AmLiLRSEL9+sQD/n5O4N7/nnqMA4GA1UdDwEB
/wQEAwIFoDAMBgNVHRMBAf8EAjAAMDQGA1UdJQQtMCsGCCsGAQUFBwMBBggrBgEF
BQcDAgYKKwYBBAGCNwoDAwYJYIZIAYb4QgQBME8GA1UdIARIMEYwOgYLKwYBBAGy
MQECAgcwKzApBggrBgEFBQcCARYdaHR0cHM6Ly9zZWN1cmUuY29tb2RvLmNvbS9D
UFMwCAYGZ4EMAQIBMDsGA1UdHwQ0MDIwMKAuoCyGKmh0dHA6Ly9jcmwuY29tb2Rv
Y2EuY29tL0Vzc2VudGlhbFNTTENBLmNybDBuBggrBgEFBQcBAQRiMGAwOAYIKwYB
BQUHMAKGLGh0dHA6Ly9jcnQuY29tb2RvY2EuY29tL0Vzc2VudGlhbFNTTENBXzIu
Y3J0MCQGCCsGAQUFBzABhhhodHRwOi8vb2NzcC5jb21vZG9jYS5jb20wKQYDVR0R
BCIwIIIPKi53ZXN0LXdpbmQuY29tgg13ZXN0LXdpbmQuY29tMA0GCSqGSIb3DQEB
BQUAA4IBAQBqBfd6QHrxXsfgfKARG6np8yszIPhHGPPmaE7xq7RpcZjY9H+8l6fe
4jQbGFjbA5uHBklYI4m2snhPaW2p8iF8YOkm2V2hEsSTnkf5/flw9mZtlCFEDFXS
sBxBdNz8RYTthPMu1h09C0XuDB30sztgnR692FrxJN5/bXsk+MC9nEweTFW/t2HW
+XZ8bhM7vsAS+pZionR4MyuQ0mYIt/lDcsZVZ91KxTsIm8rNMkkYGFoSIXjQ0+0t
CbxMF0i2qnpmNRpA6PU8l7lxxvPkplsk9KB8QIPFrR5p/i/SUAd9vECWh5+/ktlc
rfFP2PK7XcEwWizsvMrNqLyvQVNXSUPTMYIBrzCCAasCAQEwgYcwcjELMAkGA1UE
BhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UEBxMHU2Fs
Zm9yZDEaMBgGA1UEChMRQ09NT0RPIENBIExpbWl0ZWQxGDAWBgNVBAMTD0Vzc2Vu
dGlhbFNTTCBDQQIRAO7UTVPkm+2Sbks59IdptaUwCQYFKw4DAhoFADANBgkqhkiG
9w0BAQEFAASCAQB8PNQ6bYnQpWfkHyxnDuvNKw3wrqF2p7JMZm+SuN2qp3R2LpCR
mW2LrGtQIm9Iob/QOYH+8houYNVdvsATGPXX2T8gzn+anof4tOG0vCTK1Bp9bwf9
MkRP+1c8RW/vkYmUW4X5/C+y3CZpMH5dDTaXBIpXFzjX/fxNpH/rvLzGiaYYL3Cn
OLO+aOADr9qq5yoqwpiYCSfYNNYKTUNNGfYIidQwYtbHXEYhSukB2oR89xD2sZZ4
bOqFjUPgTa5SsERLDDeg3omMKiIXVYGxlqBEq51Kge6IQt4qQV9P9VgInW7cWmKe
dTqNHI9ri3ttewdEnT++TKGKKfTjX9SR8Waj
-----END NEW CERTIFICATE REQUEST-----

Clearly there’s something very different between this and my original request! And it didn’t work. IIS creates a custom CSR that is encoded in a format that no certificate authority I’ve ever used accepts. If you want the gory details of what’s in there, look at this ServerFault question (thanks to Mika in the comments). In the end it doesn’t matter though – no certificate authority knows what to do with this CSR. So create a new CSR and skip the renewal. Always!

Use the same Server

Keep in mind that on IIS, at least, you should always create your certificate on a single server and then, when you receive the final certificate from your provider, import it on that same server. IIS tracks the CSR it created and requires it in order to import the final certificate properly. So if for some reason you try to install the certificate on another server, it won’t work. I’ve also run into trouble trying to install the same certificate twice – this time around I didn’t give my certificate the proper friendly name, and IIS wouldn’t let me assign the certificate to any of my Web sites. So I removed the certificate and tried to import again, only to find that it failed the second time around. There are other ways to fix this, but in my case I had to have the certificate re-issued – not what you want to do.
Regardless of what you do, though, when you import, make sure you do it right the first time by crossing all your t’s and dotting your i’s – it’ll save you a lot of grief! You don’t actually have to use the server that the certificate gets installed on to generate the CSR and first install it, but it is generally a good idea to do so, just so you can get the certificate installed into the right place right away. If you have access to the server where you need to install the certificate, you might as well use it. But you can use another machine to generate and install the certificate, then export the certificate and move it to another machine as needed. So you can use your dev machine to create a certificate, then export it and install it on a live server. More on installation and backup/export later.

Installing the Certificate

Once you’ve submitted a CSR, your provider will process the request and eventually issue you a new final certificate – another text file with the final key to import into your certificate store. IIS does this by combining the content in your certificate request with the original CSR. If all goes well your new certificate shows up in the certificate list and you’re ready to assign the certificate to your sites. Make sure you use a friendly name that matches the domain name of your site. So use *.mysite.com or www.mysite.com or store.mysite.com to ensure IIS recognizes the certificate. I made the mistake of not naming my friendly name this way and found that IIS was unable to link my sites to my wildcard certificate – it needed to have the *. as part of the certificate, otherwise the Hostname input field was blanked out.

Changing the Friendly Name

If you accidentally used an invalid friendly name, you can change it later in the Windows certificate store (a scripted alternative is sketched at the end of this section):

1. Bring up a Run box and type MMC
2. File | Add/Remove Snap In
3. Add Certificates | Computer Account | Local Computer
4. Drill into Certificates | Personal | Certificates
5. Find your certificate | Right-click | Properties
6. Edit the Friendly Name | Click OK

Backing up your Certificate

The first thing you should do once your certificate is successfully installed is to back it up! In case your server crashes or you otherwise lose your configuration, this ensures you have an easy way to recover and reinstall your certificate, either on the same server or a different one. If you’re running a server farm or using a wildcard certificate, you also need to get the certificate onto other machines, and a PFX file import is the easiest way to do this. To back up your certificate, select your certificate and choose Export from the context or sidebar menu. The Export Certificate option allows you to export a password-protected binary file that you can import in a single step. You can copy the resulting binary PFX file to back up, or copy it to other machines to install on. Importing the certificate on another machine is as easy as pointing at the PFX file and specifying the password – IIS handles the rest.

Assigning a new certificate to your Site

Once you have the new certificate installed, all that’s left to do is assign it to your site. In IIS, select your Web site and bring up the Site Bindings from the right sidebar. Add a new binding for https, bind it to port 443, specify your hostname, and pick the certificate from the pick list. If you’re using a root site, make sure to set up your certificate for www.yoursite.com and also for yoursite.com so that both work properly with SSL.
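If you’d rather script the friendly-name fix from the “Changing the Friendly Name” steps above than click through MMC, a small .NET program run as administrator can do it. This is a hedged sketch – the thumbprint and friendly-name values are placeholders you would substitute:

using System;
using System.Security.Cryptography.X509Certificates;

class FixFriendlyName
{
    static void Main()
    {
        // Placeholder thumbprint - use your certificate's actual thumbprint.
        const string thumbprint = "0123456789ABCDEF0123456789ABCDEF01234567";

        var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
        store.Open(OpenFlags.ReadWrite);
        try
        {
            var matches = store.Certificates.Find(
                X509FindType.FindByThumbprint, thumbprint, false);

            if (matches.Count == 1)
            {
                // FriendlyName lives in the certificate store, so setting it
                // on a store-backed certificate persists the change.
                matches[0].FriendlyName = "*.west-wind.com";
                Console.WriteLine("Friendly name updated.");
            }
        }
        finally
        {
            store.Close();
        }
    }
}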
Note that you need to explicitly configure each hostname for a certificate if you plan to use SSL. Luckily, if you update your SSL certificate in the following year, IIS prompts you and asks whether you’d like to update all other sites that are using the existing cert to the newer cert. And you’re done.

So what’s the Pain?

So, all of this is old hat and it doesn’t look all that bad, right? So what’s the pain here? Well, if you follow the instructions and do everything right, then the process is about as straightforward as you would expect it to be. You create a cert request, you import it, and you assign it to your sites. Those are the basic steps, and to be perfectly fair it works well – if nothing goes wrong. However, renewing tends to be the problem. The first unintuitive issue is that you simply shouldn’t renew, but create a new CSR and generate your new certificate from that. Over the years I’ve fallen prey to the belief that Microsoft eventually will fix this so that the renewal creates the same type of CSR as the old cert, but apparently that will just never happen. Booo!

The other problem I ran into is that I accidentally misnamed my imported certificate, which in turn set off a chain of events that caused my originally issued certificate to become uninstallable. When I received my completed certificate I installed it and it installed just fine, but the friendly name was wrong. As a result, IIS refused to assign the certificate to any of my host-headered sites. That’s strike number one. Why the heck should the friendly name have any effect on the ability to attach the certificate???

Next I uninstalled the certificate, because I figured that would be the easiest way to make sure I get it right. But I found that I could not reinstall my certificate. I kept getting these stop errors – "ASN1 bad tag value met" – that would prevent the installation from completing. After searching around for this error and reading countless long messages on forums, I found that this error supposedly does not actually mean the install failed; rather, the list wouldn’t refresh. Comodo has this to say:

  Note: There is a known issue in IIS 7 giving the following error: "Cannot find the certificate request associated with this certificate file. A certificate request must be completed on the computer where it was created." You may also receive a message stating "ASN1 bad tag value met". If this is the same server that you generated the CSR on then, in most cases, the certificate is actually installed. Simply cancel the dialog and press "F5" to refresh the list of server certificates. If the new certificate is now in the list, you can continue with the next step. If it is not in the list, you will need to reissue your certificate using a new CSR (see our CSR creation instructions for IIS 7). After creating a new CSR, login to your Comodo account and click the 'replace' button for your certificate.

Not sure if this issue is fixed in IIS 8, but that’s an insane bug to have crop up. As it turns out, in my case the refresh didn’t work and the certificate didn’t show up in the IIS list after the reinstall. In fact, when looking at the certificate store I could see my certificate was installed in the right place, but the private key was missing – which is most likely why IIS was not picking it up. It looks like IIS could not match the final cert to the original CSR it generated. But again, some sort of message to that effect might be helpful instead of "ASN1 bad tag value met".
Recovering the Private Key

So it turns out my original problem was that I received the issued certificate, but when I imported it the private key was missing. There’s a relatively easy way to recover from this. If your certificate doesn’t show up in IIS, check in the certificate store for the local machine (see the steps above on how to bring this up). If you look at the certificate in Certificates | Personal | Certificates, make sure you see the key icon on the certificate – if it’s missing, the certificate most likely lacks its private key. To fix the certificate you can do the following:

1. Double-click the certificate
2. Go to the Details tab
3. Copy down the serial number

The serial number will be in a format like 00 a7 9b a1 a4 9d 91 63 57 d6 9f 26 b8 ee 79 b5 cb, and you’ll need to strip out the spaces in order to use it in the next step. Next, open up an administrative command prompt and issue the following command:

certutil -repairstore my 00a79ba1a49d916357d69f26b8ee79b5cb

You should get a confirmation message that the repair worked. If you now go back to the certificate store, you should see the key icon show up on the certificate. Your certificate is fixed. Now go back into IIS Manager and refresh the list of certificates; if all goes well, you should see all the certificates that showed in the cert store. Remember – back up the key first, then map it to your site…

Summary

I deal with a lot of customers who run their own IIS servers, and I can’t tell you how often I hear about botched SSL installations. When I posted some of my issues on Twitter yesterday I got a hell storm of “me too” responses. I’m clearly not the only one who’s run into this, especially with renewals. I feel pretty comfortable with IIS configuration and I do a lot of it for support purposes, but the SSL configuration is one thing that never seems to go seamlessly. This blog post is meant as a reminder to myself to read next time I do a renewal – so I can dot my i’s and cross my t’s before I get caught in the mess I’m dealing with today. Hopefully some of you find this useful as well.

© Rick Strahl, West Wind Technologies, 2005-2014. Posted in IIS7, Security.

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance while reducing the memory grant to almost nothing.

Test Data

The two tables in this example use a common partition scheme. The partition function uses 41 equal-size partitions:

CREATE PARTITION FUNCTION PFT (integer)
AS RANGE RIGHT
FOR VALUES
(
    125000, 250000, 375000, 500000,
    625000, 750000, 875000, 1000000,
    1125000, 1250000, 1375000, 1500000,
    1625000, 1750000, 1875000, 2000000,
    2125000, 2250000, 2375000, 2500000,
    2625000, 2750000, 2875000, 3000000,
    3125000, 3250000, 3375000, 3500000,
    3625000, 3750000, 3875000, 4000000,
    4125000, 4250000, 4375000, 4500000,
    4625000, 4750000, 4875000, 5000000
);
GO
CREATE PARTITION SCHEME PST
AS PARTITION PFT
ALL TO ([PRIMARY]);

The two tables are:

CREATE TABLE dbo.T1
(
    TID integer NOT NULL IDENTITY(0,1),
    Column1 integer NOT NULL,
    Padding binary(100) NOT NULL DEFAULT 0x,
    CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID)
);

CREATE TABLE dbo.T2
(
    TID integer NOT NULL,
    Column1 integer NOT NULL,
    Padding binary(100) NOT NULL DEFAULT 0x,
    CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID)
);

The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID:

INSERT dbo.T1 WITH (TABLOCKX) (Column1)
SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1
FROM dbo.Numbers AS N
WHERE n BETWEEN 1 AND 5000000;

In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows:

CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);

WITH
L0 AS(SELECT 1 AS c UNION ALL SELECT 1),
L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B),
L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B),
L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B),
L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B),
L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B),
Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5)
INSERT dbo.Numbers WITH (TABLOCKX)
SELECT TOP (10000000) n
FROM Nums
ORDER BY n
OPTION (MAXDOP 1);

Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way.

INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1)
SELECT T.TID, N.n
FROM dbo.T1 AS T
JOIN dbo.Numbers AS N
    ON N.n >= 1
    AND N.n <= T.Column1;

Table T2 ends up containing about 15 million rows. The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone.

Partition Distribution

The following query shows the number of rows in each partition of table T1:

SELECT
    PartitionID = CA1.P,
    NumRows = COUNT_BIG(*)
FROM dbo.T1 AS T
CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
GROUP BY CA1.P
ORDER BY CA1.P;

There are 40 partitions containing 125,000 rows each (40 * 125k = 5m rows); the rightmost partition remains empty. The next query shows the distribution for table T2:

SELECT
    PartitionID = CA1.P,
    NumRows = COUNT_BIG(*)
FROM dbo.T2 AS T
CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
GROUP BY CA1.P
ORDER BY CA1.P;

There are roughly 375,000 rows in each partition (the rightmost partition is also empty). OK, that’s the test data done.
Test Query and Execution Plan

The task is to count the rows resulting from joining tables 1 and 2 on the TID column:

SET STATISTICS IO ON;
DECLARE @s datetime2 = SYSUTCDATETIME();

SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID;

SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
SET STATISTICS IO OFF;

The optimizer chooses a plan using parallel hash join and partial aggregation. The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads. With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched. Run without the actual execution plan or STATISTICS IO collection (for maximum performance), the query returns in around 2600ms.

Execution Plan Analysis

The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads.

For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function, so rows from T1 and T2 with the same join key must end up on the same hash join thread.

Expensive Exchanges

This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism:

SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
WHERE $PARTITION.PFT(T1.TID) = 1
AND $PARTITION.PFT(T2.TID) = 1
OPTION (MAXDOP 1);

The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question: if the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query?
Forcing a Merge Join

Let’s force the optimizer to use a merge join on the test query using a hint:

SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
OPTION (MERGE JOIN);

This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads.

Parallel Merge Join

We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649:

SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
OPTION (MERGE JOIN, QUERYTRACEON 8649);

This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice, though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good.

A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro). CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs). Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail.

The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning.

Collocated Joins

In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next.

Costing and Plan Selection

The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query.
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use a collocated join with our test query:

-- Pretend IOs are 50x cost temporarily
DBCC SETIOWEIGHT(50);

-- Co-located hash join
SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
OPTION (RECOMPILE);

-- Reset IO costing
DBCC SETIOWEIGHT(1);

Collocated Join Plan

The estimated execution plan for the collocated join is:

The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange, and so on until all partitions have been processed.

It is important to understand that the parallel scans in this plan are different from those in the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition:

The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition 'n' is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges.

CPU and Memory Efficiency Improvements

The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan.

In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time.

Collocated Hash Join Performance

The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is by far the worst result so far, so what went wrong? It turns out that cardinality estimation for the single-partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimate was 121,951 rows.
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb:

A level 1 spill doesn't sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions.

As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information such as sys.partitions (a quick look at what the system tables report appears at the end of this section).

Collocated Merge Join

We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb), but we do know:

- Merge join does not require a memory grant; and
- Merge join was the optimizer's preferred join option for a single-partition join

Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do a collocated hash join. So where does this leave us?

CROSS APPLY sys.partitions

We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea:

SELECT row_count = SUM(Subtotals.cnt)
FROM
(
    -- Partition numbers
    SELECT p.partition_number
    FROM sys.partitions AS p
    WHERE p.[object_id] = OBJECT_ID(N'T1', N'U')
    AND p.index_id = 1
) AS P
CROSS APPLY
(
    -- Count per collocated join
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = p.partition_number
    AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals;

The estimated plan is:

The cardinality estimates aren't all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts.

Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole: summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:

Using a Temporary Table

Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table.
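As an aside, here is the quick metadata look-up promised above (again my own illustrative query, not part of the original article). It shows that sys.partitions already records the per-partition row counts the optimizer's estimate missed:

-- Hedged sketch: per-partition row counts from metadata rather than statistics.
-- Same filters as the CROSS APPLY query above: clustered index (index_id = 1) of T1.
SELECT p.partition_number, p.[rows]
FROM sys.partitions AS p
WHERE p.[object_id] = OBJECT_ID(N'T1', N'U')
AND p.index_id = 1
ORDER BY p.partition_number;

For the test data this reports 125,000 rows in every partition of T1. Note that the [rows] column is documented as approximate, which is one reason the optimizer prefers column and index statistics over this metadata.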
We can work around the system-table restriction by writing the partition numbers to a temporary table (or table variable):

SET STATISTICS IO ON;
DECLARE @s datetime2 = SYSUTCDATETIME();

CREATE TABLE #P
(
    partition_number integer PRIMARY KEY
);

INSERT #P (partition_number)
SELECT p.partition_number
FROM sys.partitions AS p
WHERE p.[object_id] = OBJECT_ID(N'T1', N'U')
AND p.index_id = 1;

SELECT row_count = SUM(Subtotals.cnt)
FROM #P AS p
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = p.partition_number
    AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals;

DROP TABLE #P;

SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
SET STATISTICS IO OFF;

Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn't choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to:

In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time):

Unfortunately, the parallel plan it found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer's cost model not reducing operator CPU costs on the inner side of a nested loops join. Don't get me started on that, we'll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan.

Parallel Collocated Merge Join

We can produce the desired parallel plan using trace flag 8649 again:

SELECT row_count = SUM(Subtotals.cnt)
FROM #P AS p
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = p.partition_number
    AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals
OPTION (QUERYTRACEON 8649);

The actual execution plan is:

One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post.

Performance

The important thing is the performance of this parallel collocated merge join: just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate), from quickest to slowest:

- Collocated parallel merge join: 1350ms
- Parallel hash join: 2600ms
- Collocated serial merge join: 3500ms
- Serial merge join: 5000ms
- Parallel merge join: 8400ms
- Collocated parallel hash join: 25,300ms (hash spill per partition)

The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers).
This plan uses 16 threads at DOP 8, but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you.

Parallel Collocated Merge Join with Demand Partitioning

This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions):

SELECT row_count = SUM(Subtotals.cnt)
FROM
(
    VALUES
        (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),
        (11),(12),(13),(14),(15),(16),(17),(18),(19),(20),
        (21),(22),(23),(24),(25),(26),(27),(28),(29),(30),
        (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41)
) AS P (partition_number)
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = p.partition_number
    AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals
OPTION (QUERYTRACEON 8649);

The actual execution plan is:

The parallel collocated hash join plan is reproduced below for comparison:

The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer's collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms.

Final Words

It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won't Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated: down from 569MB to 1.2MB.

The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning.

The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition.

From a thread's point of view...

If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let's look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators.

Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time).
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34), and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once.

Related Reading

- Understanding and Using Parallelism in SQL Server
- Parallel Execution Plans Suck

© 2013 Paul White – All Rights Reserved

Twitter: @SQL_Kiwi


  • Toorcon 15 (2013)

    - by danx
The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo

Talks summarized below:

- Introduction to Toorcon 15 (2013)
- A Tale of One Software Bypass of MS Windows 8 Secure Boot
- Breaching SSL, One Byte at a Time
- Running at 99%: Surviving an Application DoS
- Security Response in the Age of Mass Customized Attacks
- x86 Rewriting: Defeating RoP and other Shinanighans
- Clowntown Express: interesting bugs and running a bug bounty program
- Active Fingerprinting of Encrypted VPNs
- Making Attacks Go Backwards
- Mask Your Checksums—The Gorry Details
- Adventures with weird machines thirty years after "Reflections on Trusting Trust"

Introduction to Toorcon 15 (2013)

Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here, starting in 2003. As always, I've only summarized the talks I attended that interested me enough to write about. Be aware that I may have misrepresented the speakers' remarks, and that they are not my remarks or opinions, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information.

A Tale of One Software Bypass of MS Windows 8 Secure Boot

Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak

Yuri and Alex talked about UEFI and bootkits and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference.

MS Windows 8 Secure Boot Overview

UEFI (Unified Extensible Firmware Interface) is the interface between hardware and OS. UEFI is processor and architecture independent. Malware can replace the bootloader (bootx64.efi, bootmgfw.efi); once replaced, it can modify the kernel. It is trivial to replace the bootloader, and many legacy bootkits do so today; UEFI replaces most of them.

MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes:

- UEFI firmware relies on secure update (with signed updates). You would think Secure Boot would rely on ROM (such as used for phones), but you can't do that for PCs: PCs use writable memory with signatures.
- The DXE core verifies the UEFI boot loader(s).
- The OS Loader (winload.efi, winresume.efi) verifies the OS kernel.

A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db) and a "forbidden" database (dbx): X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding/deleting keys; they can't be accessed once the OS starts, which uses Run-Time (RT) services instead. The root cert uses RSA-2048 public keys and PKCS#7 format signatures. The key variables are:

- SecureBoot: enable/disable image signature checks
- SetupMode: update keys, self-signed keys, and secure boot variables
- CustomMode: allows updating keys

Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation.

Attacking MS Windows 8 Secure Boot

Secure Boot does NOT protect from physical access; it can be disabled from the console. Each BIOS vendor implements Secure Boot differently, and there are several platform and BIOS vendors, so it becomes a "zoo" of implementations, which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly.
In particular, vendors must:

- allow only signed UEFI firmware updates
- protect UEFI firmware from direct modification in flash memory
- protect firmware update components
- program the SPI controller securely
- protect secure boot policy settings in NVRAM
- protect the runtime API
- disable the compatibility support module, which allows unsigned legacy code

An attacker can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash. If the PK is not found, the firmware enters setup mode with secure boot turned off. A TPM can be exploited in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS, though. But the speakers found a bug that can be exploited from User Mode (undisclosed) and demoed the exploit: it loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable, and the exploit will be disclosed to vendors in the future.

Recommendations:

- allow only signed updates
- protect UEFI firmware in ROM
- protect the EFI variable store in ROM

Breaching SSL, One Byte at a Time

Angelo Prado and Yoel Gluck, Salesforce.com

CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME issues requests with every possible character and measures the ciphertext length, looking for the plaintext which compresses the most, recovering the cookie one byte at a time. SSL compression uses LZ77 to reduce redundancy; Huffman coding replaces common byte sequences with shorter codes. US-CERT thought the SSL compression problem was fixed, but it wasn't. The speakers convinced CERT that it wasn't fixed, and a CVE was issued.

BREACH, breachattack.com

BREACH exploits the SSL response body (the Accept-Encoding request header and the Content-Encoding response header). It takes advantage of the fact that, although SSL/TLS-level compression was turned off in response to CRIME, HTTP response bodies are still compressed. BREACH uses gzip and needs fairly "stable" pages that are static for ~30 seconds. It needs attacker-supplied content (say, from a web form or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session secret.

Compression can be used to guess contents one byte at a time. For example, "Supersecret SupersecreX" (a wrong guess) compresses by 10 bytes, while "Supersecret Supersecret" (a correct guess) compresses by 11 bytes, so each character can be found by guessing every possible character and comparing lengths. To start the guess, BREACH needs at least three known initial characters in the response sequence. Compression length then "leaks" information.

Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include:

- lookahead (guess 2 or 3 characters at a time instead of 1); expensive
- rollback to the last known conflict
- check the compression ratio
- brute-force the first 3 "bootstrap" characters, if needed (expensive)
- block ciphers hide the exact plaintext length; the solution is to align the response in advance to the block size

Mitigations:

- length: use variable padding
- secrets: dynamic CSRF tokens per request
- secrets: change them over time
- separate secrets to input-less servlets

Future work: either understand DEFLATE/GZIP, or HTTPS extensions.

Running at 99%: Surviving an Application DoS

Ryan Huber, Risk I/O

Ryan first discussed various ways to do a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets, or download large files.
Apache is not well suited to handling a large number of connections, but one can put something in front of it, or use Apache alternatives such as nginx.

How to identify malicious hosts:

- short, sudden web requests
- an obvious user-agent (curl, python)
- the same URL requested repeatedly
- no web page referer (not normal)
- hidden links: hide a link and see if a bot gets it
- restricted access if not your geo IP (unless the website is global)
- missing common headers in the request
- regular timing
- first-seen IP at the beginning of the attack
- a count of requests per host (usually a very large number)

Use of captcha can mitigate attacks, but you'll lose a lot of genuine users.

Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer

Bouncer is software written by Ryan in netflow. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (it is not nice about it; there is no proper connection close). An aggregator collects requests and controls your web proxies. You need NTP on the front-end web servers to give Bouncer clean timing data. Bouncer is also useful for a popularity storm ("Slashdotting") and scraper storms.

Future features: gzip collection data, documentation, a consumer library, multitasking, and logging destroyed connections.

Takeaways:

- DoS mitigation is easier with a complete picture
- Bouncer is designed to make it easier to detect and defend against DoS; it is not a complete cure

Security Response in the Age of Mass Customized Attacks

Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/

Peleus and Karthik talked about response to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School. Mass customization is differentiating a product for an individual customer, but at a mass-production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world. Or designing your own PC from commodity parts.

Exploit kits are another example of mass customization. The kits support multiple browsers and plugins, and allow new modules. Exploit kits are cheap and customizable, and organized gangs use them. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits.

Characteristics of mass malware: potent, resilient, relatively low cost. Technical characteristics: multiple OSes, multiple payloads, multiple scenarios, multiple languages, obfuscation.

Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now, so the drive with malware is toward mass-customized exploits, to avoid detection. There's plenty of evidence that exploit development has Project Manager bureaucracy. From the malware, they infer edicts to:

- support all versions of Reader
- support all versions of Windows
- support all versions of Flash
- support all browsers
- write large, complex, difficult-to-maintain code (8,750 lines of JavaScript, for example)

Exploits have "loose coupling" across multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software, and also allows exploits of more obscure software, OSes, browsers, and versions. They gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs.
However, these complete exploits are more likely to be buggy or fragile in themselves, and easier to defeat. Future research includes normalizing malware and JavaScript. Conclusion: the coming trend is that mass malware with mass zero-day attacks will result in mass customization of attacks.

x86 Rewriting: Defeating RoP and other Shinanighans

Richard Wartell

The attack vector addressed here is: first, some malware causes a buffer overflow. The malware has no program access, but it has input access, and the buffer overflow places code onto the stack. Later, the stack became non-executable; the workaround malware used was to write a bogus return address to the stack, jumping to the malware. Later came ASLR (Address Space Layout Randomization) to randomize the memory layout and make addresses non-deterministic; the workaround malware used was to jump to existing code segments in the program that can be used in bad ways.

"RoP" is Return-oriented Programming. RoP attacks use the program's own code, writing return addresses on the stack that point to (existing) exploitable code fragments found in the program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is anti-RoP compilers that compile source code with NO return instructions. ASLR randomizes only the base of the address space, not the location of "gadgets" within the code. IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine.

Richard's goal was to randomize a binary with no source code access. He created "STIR" (Self-Transforming Instruction Relocation). STIR disassembles the binary and operates on "basic blocks" of code; the STIR disassembler is conservative in what it disassembles. Each basic block is moved to a random location in memory. Next, STIR writes new code sections with copies of the basic blocks in randomized locations. The old code is copied and rewritten with jumps to the new code, and the original code sections in the file are marked non-executable.

STIR has better entropy than ASLR in the location of code, making brute-force attacks much harder. STIR runs on MS Windows (PE) and Linux (ELF). It eliminated 99.96% or more of the "gadgets" (i.e., moved their addresses). Overhead is usually 5-10% on MS Windows and about 1.5-4% on Linux (but some code actually runs faster!). The unique thing about STIR is that it requires no source access, and the modified binary fully works! Current work is to rewrite code to enforce security policies. For example: don't create a *.{exe,msi,bat} file, or don't connect to the network after reading from the disk.

Clowntown Express: interesting bugs and running a bug bounty program

Collin Greene, Facebook

Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing on diffs. But there are lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were on software purchased from a third party (but bounty claimers don't know and don't care). FB uses security questions, as does everyone else, but they are basically insecure (often easily discoverable).

Collin didn't expect many bugs from the bounty program, but they ended up getting 20+ good bugs in the first 24 hours, and good submissions continue to come in. Bug bounties bring in people with different perspectives, and they are paid only for success. A bug bounty is a better use of a fixed amount of time and money than just code review or static code analysis. The bounty program started in July 2011 and has paid out $1.5 million to date.
14% of the submissions have been high-priority problems that needed to be fixed immediately. The best bugs come from a small percentage of submitters (as with everything else); the top paid submitters earn 6 figures a year. Spammers like to backstab competitors. The youngest submitter was 13, and some submitters have been hired. Bug bounties also surface bugs that were missed by tools or reviews, allowing improvement in the process. Bug bounties might not work for traditional software companies where the product has a release cycle or is not on the Internet.

Active Fingerprinting of Encrypted VPNs

Anna Shubina, Dartmouth Institute for Security, Technology, and Society

(I missed the start of her talk because another track went overtime. But I have the DVD of the talk, so I'll expand later.)

- IPsec leaves fingerprints. Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (for example, DES-CBC versus AES-CBC).
- One can tell a lot about VPNs just from ping round trips (such as what router is used).
- Delayed packets are not informative about a network, especially if far away from the network.
- More exploration is needed of how TCP works in real life with respect to timing.

Making Attacks Go Backwards

FuzzyNop, Mandiant

This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. But who is making these malware threats? It's not a single person or group; the authors have diverse skill levels, and there are a lot of fat-fingered fumblers out there.

Always look for low-hanging fruit first:

- "hiding" malware in the temp, recycle, or root directories
- creation of unnamed scheduled tasks
- obvious names of files and syscalls ("ClearEventLog")
- uncleared event logs; clearing the event log in itself, and the time of clearing, is a red flag and a good first clue to look for on a suspect system

Reverse engineering is hard, and disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly.

Key loggers are used a lot in targeted attacks. They are typically custom code or built into a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" or "[RightShift]"), or timestamp printf strings; look for these in files. Presence is not proof they are used; absence is not proof they are not used.

Java exploits: one can parse a jar file with idxparser.py and decompile the Java files. Java is typically used to target tech companies. Backdoors are the main persistence mechanism (provided externally) for malware; malware also typically needs command and control.

Application of Artificial Intelligence in Ad-Hoc Static Code Analysis

John Ashaman, Security Innovation

Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. He also tried using grep, but that fails to find anything even mildly complex. So John decided to write his own tool. His approach was to first generate a call graph, then analyze the graph. However, the problem is that making a call graph is really hard; for example, one problem is "evil" coding techniques, such as passing function pointers. First the tool generated an Abstract Syntax Tree (AST), with nodes created from method declarations and edges created from method use. Then the tool generated a control flow graph, with the goal of finding a path through the AST (a maze) from source to sink.
The algorithm is to look at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for the search order. The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later, he plans to add more PHP, then JSP and Java. For more information, see his posts on the Security Innovation blog and NRefactory on GitHub.

Mask Your Checksums—The Gorry Details

Eric (XlogicX) Davisson

Sometimes, in emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found on stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care. If only the IP is masked, the IP may be guessed from the checksum (that is, it leaks data), and other parts of the packet may leak more data about the IP. The TCP and IP checksums both refer to the same data, so one can get more bits of information out of using both checksums than just one. Also, one can usually determine the OS from the TTL field and the ports in a packet header.

If there are hundreds of possible results (16x for each unknown masked nibble), one can do other things to narrow them down, such as look at the packet contents for domain or geo information. With hundreds of results, one can import them as CSV into a spreadsheet, correlate with geo data, and see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached; he was able to find the exact IP address, given the geo data and the university of the sender.

The point is: if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if at all, so as not to create a false impression of security.

Adventures with weird machines thirty years after "Reflections on Trusting Trust"

Sergey Bratus, Dartmouth College (and Julian Bangert and Rebecca Shapiro, not present)

"Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper: "You can't trust code that you did not totally create yourself." There are invisible links in the chain of trust, such as "well-installed microcode bugs", bugs in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source. But suppose there are no bugs and you trust the author: can you trust the code? Hell no! There are too many factors; it's Babylonian in nature.

Why not?

- Input is not well-defined/recognized: the code's assumptions about "checked" input will be violated (bug/vulnerability). For example, HTML is recursive, but regex checking is not recursive.
- Input may be well-formed but so complex there's no telling what it does. For example, ELF file parsing is complex and has multiple ways of parsing.
- Input is seen differently by different pieces of the program or toolchain.

Any input is a program: input "executes" on input handlers (it drives state changes and transitions), and only a well-defined execution model can be trusted (regex/DFA, PDA, CFG). An input handler either is a "recognizer" for the inputs as a well-defined language (see langsec.org), or it is a "virtual machine" for inputs to drive into pwn-age.

ELF ABI (the UNIX/Linux executable file format) case study: problems can arise from any of these steps (without planting bugs):

- compiler
- linker
- loader
- ld.so/rtld
- relocator
- DWARF (debugger info)
- exceptions

The problem is that you can't really automatically analyze code (it's the "halting problem", and undecidable).
The only solution is to freeze code and sign it. But you can't freeze everything! You can't freeze ASLR or loading; there must be tables and metadata. Any sufficiently complex input data is the same as VM byte code. For example, ELF relocation entries + dynamic symbols == a Turing-complete machine (TM). @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file. For more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata". @bxsays did the same thing with Mach-O bytecode. Similarly, DWARF exception handling data: .eh_frame + glibc == a Turing machine.

The x86 MMU (IDT, GDT, TSS) can be used the same way: address translation alone creates a Turing machine. The page handler reads and writes memory (on page fault), using a page table, which can be used as Turing machine byte code. There is an example on GitHub using this TM that will fly a glider across the screen.

Next Sergey talked about "parser differentials": having one input format but two parsers creates confusion and opportunity for exploitation. For example, CSRs are parsed during creation by the cert requestor and again by another parser at the CA. Another example is ELF: there are several parsers in the OS toolchain, all of which are different. An ELF file can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs, and the second PHDR can completely transform the executable. This is described in a paper in the first issue of the International Journal of PoC.

Conclusions:

- Trusting computers is not only about bugs! Bugs are part of the problem, but by far not all of it.
- Complex data formats mean bugs.
- There is no "chain of trust" in Babylon! (that is, with parser differentials)
- We need to squeeze complexity out of data until data stops being "code equivalent".

Further information: see langsec.org, and USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.


  • Errors when installing Open Office

    - by user109036
I followed the first set of instructions on this page to install Open Office: How to install Open Office? However, at the last step, which says to change the permissions (chmod) of a folder, I got an error saying that the directory does not exist. Open Office now appears in my Ubuntu start menu, but clicking on it does nothing. I tried a reboot. Below is what I could copy from my terminal. I am running the latest Ubuntu. I have not uninstalled LibreOffice as suggested somewhere. The reason is that in the Ubuntu Software Centre, LibreOffice appears to be made up of several components and I don't know which ones to remove (or all of them, maybe?). They are LibreOffice Draw, Math, Writer, and Calc.

After this operation, 480 MB of additional disk space will be used. Do you want to continue [Y/n]? y Get:1 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe openjdk-6-jre-lib all 6b24-1.11.5-0ubuntu1~12.10.1 [6,135 kB] Get:2 http://ppa.launchpad.net/upubuntu-com/office/ubuntu/ quantal/main openoffice amd64 3.4~oneiric [321 MB] Get:3 http://gb.archive.ubuntu.com/ubuntu/ quantal/main ca-certificates-java all 20120721 [13.2 kB] Get:4 http://gb.archive.ubuntu.com/ubuntu/ quantal/main tzdata-java all 2012e-0ubuntu2 [140 kB] Get:5 http://gb.archive.ubuntu.com/ubuntu/ quantal/main java-common all 0.43ubuntu3 [61.7 kB] Get:6 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe openjdk-6-jre-headless amd64 6b24-1.11.5-0ubuntu1~12.10.1 [25.4 MB] Get:7 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libgif4 amd64 4.1.6-9.1ubuntu1 [31.3 kB] Get:8 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe openjdk-6-jre amd64 6b24-1.11.5-0ubuntu1~12.10.1 [234 kB] Get:9 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libatk-wrapper-java all 0.30.4-0ubuntu4 [29.8 kB] Get:10 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libatk-wrapper-java-jni amd64 0.30.4-0ubuntu4 [31.1 kB] Get:11 http://gb.archive.ubuntu.com/ubuntu/ quantal/main xorg-sgml-doctools all 1:1.10-1 [12.0 kB] Get:12 http://gb.archive.ubuntu.com/ubuntu/ quantal/main x11proto-core-dev all 7.0.23-1 [744 kB] Get:13 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libice-dev amd64 2:1.0.8-2 [57.6 kB] Get:14 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libpthread-stubs0 amd64 0.3-3 [3,258 B] Get:15 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libpthread-stubs0-dev amd64 0.3-3 [2,866 B] Get:16 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libsm-dev amd64 2:1.2.1-2 [19.9 kB] Get:17 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxau-dev amd64 1:1.0.7-1 [10.2 kB] Get:18 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxdmcp-dev amd64 1:1.1.1-1 [26.9 kB] Get:19 http://gb.archive.ubuntu.com/ubuntu/ quantal/main x11proto-input-dev all 2.2-1 [133 kB] Get:20 http://gb.archive.ubuntu.com/ubuntu/ quantal/main x11proto-kb-dev all 1.0.6-2 [269 kB] Get:21 http://gb.archive.ubuntu.com/ubuntu/ quantal/main xtrans-dev all 1.2.7-1 [84.3 kB] Get:22 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxcb1-dev amd64 1.8.1-1ubuntu1 [82.6 kB] Get:23 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libx11-dev amd64 2:1.5.0-1 [912 kB] Get:24 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libx11-doc all 2:1.5.0-1 [2,460 kB] Get:25 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxt-dev amd64 1:1.1.3-1 [492 kB] Get:26 http://gb.archive.ubuntu.com/ubuntu/ quantal/main ttf-dejavu-extra all 2.33-2ubuntu1 [3,420 kB] Get:27 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe icedtea-6-jre-cacao amd64 6b24-1.11.5-0ubuntu1~12.10.1 [417 kB]
Get:28 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe icedtea-6-jre-jamvm amd64 6b24-1.11.5-0ubuntu1~12.10.1 [581 kB] Get:29 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/main icedtea-netx-common all 1.3-1ubuntu1.1 [617 kB] Get:30 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/main icedtea-netx amd64 1.3-1ubuntu1.1 [16.2 kB] Get:31 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe openjdk-6-jdk amd64 6b24-1.11.5-0ubuntu1~12.10.1 [11.1 MB] Fetched 374 MB in 9min 18s (671 kB/s) Extract templates from packages: 100% Selecting previously unselected package openjdk-6-jre-lib. (Reading database ... 143191 files and directories currently installed.) Unpacking openjdk-6-jre-lib (from .../openjdk-6-jre-lib_6b24-1.11.5-0ubuntu1~12.10.1_all.deb) ... Selecting previously unselected package ca-certificates-java. Unpacking ca-certificates-java (from .../ca-certificates-java_20120721_all.deb) ... Selecting previously unselected package tzdata-java. Unpacking tzdata-java (from .../tzdata-java_2012e-0ubuntu2_all.deb) ... Selecting previously unselected package java-common. Unpacking java-common (from .../java-common_0.43ubuntu3_all.deb) ... Selecting previously unselected package openjdk-6-jre-headless:amd64. Unpacking openjdk-6-jre-headless:amd64 (from .../openjdk-6-jre-headless_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ... Selecting previously unselected package libgif4:amd64. Unpacking libgif4:amd64 (from .../libgif4_4.1.6-9.1ubuntu1_amd64.deb) ... Selecting previously unselected package openjdk-6-jre:amd64. Unpacking openjdk-6-jre:amd64 (from .../openjdk-6-jre_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ... Selecting previously unselected package libatk-wrapper-java. Unpacking libatk-wrapper-java (from .../libatk-wrapper-java_0.30.4-0ubuntu4_all.deb) ... Selecting previously unselected package libatk-wrapper-java-jni:amd64. Unpacking libatk-wrapper-java-jni:amd64 (from .../libatk-wrapper-java-jni_0.30.4-0ubuntu4_amd64.deb) ... Selecting previously unselected package xorg-sgml-doctools. Unpacking xorg-sgml-doctools (from .../xorg-sgml-doctools_1%3a1.10-1_all.deb) ... Selecting previously unselected package x11proto-core-dev. Unpacking x11proto-core-dev (from .../x11proto-core-dev_7.0.23-1_all.deb) ... Selecting previously unselected package libice-dev:amd64. Unpacking libice-dev:amd64 (from .../libice-dev_2%3a1.0.8-2_amd64.deb) ... Selecting previously unselected package libpthread-stubs0:amd64. Unpacking libpthread-stubs0:amd64 (from .../libpthread-stubs0_0.3-3_amd64.deb) ... Selecting previously unselected package libpthread-stubs0-dev:amd64. Unpacking libpthread-stubs0-dev:amd64 (from .../libpthread-stubs0-dev_0.3-3_amd64.deb) ... Selecting previously unselected package libsm-dev:amd64. Unpacking libsm-dev:amd64 (from .../libsm-dev_2%3a1.2.1-2_amd64.deb) ... Selecting previously unselected package libxau-dev:amd64. Unpacking libxau-dev:amd64 (from .../libxau-dev_1%3a1.0.7-1_amd64.deb) ... Selecting previously unselected package libxdmcp-dev:amd64. Unpacking libxdmcp-dev:amd64 (from .../libxdmcp-dev_1%3a1.1.1-1_amd64.deb) ... Selecting previously unselected package x11proto-input-dev. Unpacking x11proto-input-dev (from .../x11proto-input-dev_2.2-1_all.deb) ... Selecting previously unselected package x11proto-kb-dev. Unpacking x11proto-kb-dev (from .../x11proto-kb-dev_1.0.6-2_all.deb) ... Selecting previously unselected package xtrans-dev. Unpacking xtrans-dev (from .../xtrans-dev_1.2.7-1_all.deb) ... Selecting previously unselected package libxcb1-dev:amd64. 
Unpacking libxcb1-dev:amd64 (from .../libxcb1-dev_1.8.1-1ubuntu1_amd64.deb) ... Selecting previously unselected package libx11-dev:amd64. Unpacking libx11-dev:amd64 (from .../libx11-dev_2%3a1.5.0-1_amd64.deb) ... Selecting previously unselected package libx11-doc. Unpacking libx11-doc (from .../libx11-doc_2%3a1.5.0-1_all.deb) ... Selecting previously unselected package libxt-dev:amd64. Unpacking libxt-dev:amd64 (from .../libxt-dev_1%3a1.1.3-1_amd64.deb) ... Selecting previously unselected package ttf-dejavu-extra. Unpacking ttf-dejavu-extra (from .../ttf-dejavu-extra_2.33-2ubuntu1_all.deb) ... Selecting previously unselected package icedtea-6-jre-cacao:amd64. Unpacking icedtea-6-jre-cacao:amd64 (from .../icedtea-6-jre-cacao_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ... Selecting previously unselected package icedtea-6-jre-jamvm:amd64. Unpacking icedtea-6-jre-jamvm:amd64 (from .../icedtea-6-jre-jamvm_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ... Selecting previously unselected package icedtea-netx-common. Unpacking icedtea-netx-common (from .../icedtea-netx-common_1.3-1ubuntu1.1_all.deb) ... Selecting previously unselected package icedtea-netx:amd64. Unpacking icedtea-netx:amd64 (from .../icedtea-netx_1.3-1ubuntu1.1_amd64.deb) ... Selecting previously unselected package openjdk-6-jdk:amd64. Unpacking openjdk-6-jdk:amd64 (from .../openjdk-6-jdk_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ... Selecting previously unselected package openoffice. Unpacking openoffice (from .../openoffice_3.4~oneiric_amd64.deb) ... Processing triggers for doc-base ... Processing 2 added doc-base files... Processing triggers for man-db ... Processing triggers for desktop-file-utils ... Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for gnome-menus ... Processing triggers for hicolor-icon-theme ... Processing triggers for fontconfig ... Processing triggers for gnome-icon-theme ... Processing triggers for shared-mime-info ... Setting up tzdata-java (2012e-0ubuntu2) ... Setting up java-common (0.43ubuntu3) ... Setting up libgif4:amd64 (4.1.6-9.1ubuntu1) ... Setting up xorg-sgml-doctools (1:1.10-1) ... Setting up x11proto-core-dev (7.0.23-1) ... Setting up libice-dev:amd64 (2:1.0.8-2) ... Setting up libpthread-stubs0:amd64 (0.3-3) ... Setting up libpthread-stubs0-dev:amd64 (0.3-3) ... Setting up libsm-dev:amd64 (2:1.2.1-2) ... Setting up libxau-dev:amd64 (1:1.0.7-1) ... Setting up libxdmcp-dev:amd64 (1:1.1.1-1) ... Setting up x11proto-input-dev (2.2-1) ... Setting up x11proto-kb-dev (1.0.6-2) ... Setting up xtrans-dev (1.2.7-1) ... Setting up libxcb1-dev:amd64 (1.8.1-1ubuntu1) ... Setting up libx11-dev:amd64 (2:1.5.0-1) ... Setting up libx11-doc (2:1.5.0-1) ... Setting up libxt-dev:amd64 (1:1.1.3-1) ... Setting up ttf-dejavu-extra (2.33-2ubuntu1) ... Setting up icedtea-netx-common (1.3-1ubuntu1.1) ... Setting up openjdk-6-jre-lib (6b24-1.11.5-0ubuntu1~12.10.1) ... Setting up openjdk-6-jre-headless:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... 
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java to provide /usr/bin/java (java) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/keytool to provide /usr/bin/keytool (keytool) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/pack200 to provide /usr/bin/pack200 (pack200) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/rmid to provide /usr/bin/rmid (rmid) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/rmiregistry to provide /usr/bin/rmiregistry (rmiregistry) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/unpack200 to provide /usr/bin/unpack200 (unpack200) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/orbd to provide /usr/bin/orbd (orbd) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/servertool to provide /usr/bin/servertool (servertool) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/tnameserv to provide /usr/bin/tnameserv (tnameserv) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/lib/jexec to provide /usr/bin/jexec (jexec) in auto mode Setting up ca-certificates-java (20120721) ... Adding debian:Deutsche_Telekom_Root_CA_2.pem Adding debian:Comodo_Trusted_Services_root.pem Adding debian:Certum_Trusted_Network_CA.pem Adding debian:thawte_Primary_Root_CA_-_G2.pem Adding debian:UTN_USERFirst_Hardware_Root_CA.pem Adding debian:AddTrust_Low-Value_Services_Root.pem Adding debian:Microsec_e-Szigno_Root_CA.pem Adding debian:SwissSign_Silver_CA_-_G2.pem Adding debian:ComSign_Secured_CA.pem Adding debian:Buypass_Class_2_CA_1.pem Adding debian:Verisign_Class_1_Public_Primary_Certification_Authority_-_G3.pem Adding debian:Certum_Root_CA.pem Adding debian:AddTrust_External_Root.pem Adding debian:Chambers_of_Commerce_Root_-_2008.pem Adding debian:Starfield_Root_Certificate_Authority_-_G2.pem Adding debian:Verisign_Class_1_Public_Primary_Certification_Authority_-_G2.pem Adding debian:Visa_eCommerce_Root.pem Adding debian:Digital_Signature_Trust_Co._Global_CA_3.pem Adding debian:AC_Raíz_Certicámara_S.A..pem Adding debian:NetLock_Arany_=Class_Gold=_Fotanúsítvány.pem Adding debian:Taiwan_GRCA.pem Adding debian:Camerfirma_Chambers_of_Commerce_Root.pem Adding debian:Juur-SK.pem Adding debian:Entrust.net_Premium_2048_Secure_Server_CA.pem Adding debian:XRamp_Global_CA_Root.pem Adding debian:Security_Communication_RootCA2.pem Adding debian:AddTrust_Qualified_Certificates_Root.pem Adding debian:NetLock_Qualified_=Class_QA=_Root.pem Adding debian:TC_TrustCenter_Class_2_CA_II.pem Adding debian:DST_ACES_CA_X6.pem Adding debian:thawte_Primary_Root_CA.pem Adding debian:thawte_Primary_Root_CA_-_G3.pem Adding debian:GeoTrust_Universal_CA_2.pem Adding debian:ACEDICOM_Root.pem Adding debian:Security_Communication_EV_RootCA1.pem Adding debian:America_Online_Root_Certification_Authority_2.pem Adding debian:TC_TrustCenter_Universal_CA_I.pem Adding debian:SwissSign_Platinum_CA_-_G2.pem Adding debian:Global_Chambersign_Root_-_2008.pem Adding debian:SecureSign_RootCA11.pem Adding debian:GeoTrust_Global_CA_2.pem Adding debian:Buypass_Class_3_CA_1.pem Adding debian:Baltimore_CyberTrust_Root.pem Adding debian:UbuntuOne-Go_Daddy_Class_2_CA.pem Adding debian:Equifax_Secure_eBusiness_CA_1.pem Adding debian:SwissSign_Gold_CA_-_G2.pem Adding debian:AffirmTrust_Premium_ECC.pem Adding 
debian:TC_TrustCenter_Universal_CA_III.pem Adding debian:ca.pem Adding debian:Verisign_Class_3_Public_Primary_Certification_Authority_-_G2.pem Adding debian:NetLock_Express_=Class_C=_Root.pem Adding debian:VeriSign_Class_3_Public_Primary_Certification_Authority_-_G5.pem Adding debian:Firmaprofesional_Root_CA.pem Adding debian:Comodo_Secure_Services_root.pem Adding debian:cacert.org.pem Adding debian:GeoTrust_Primary_Certification_Authority.pem Adding debian:RSA_Security_2048_v3.pem Adding debian:Staat_der_Nederlanden_Root_CA.pem Adding debian:Cybertrust_Global_Root.pem Adding debian:DigiCert_High_Assurance_EV_Root_CA.pem Adding debian:TDC_OCES_Root_CA.pem Adding debian:A-Trust-nQual-03.pem Adding debian:Equifax_Secure_CA.pem Adding debian:Digital_Signature_Trust_Co._Global_CA_1.pem Adding debian:GeoTrust_Global_CA.pem Adding debian:Starfield_Class_2_CA.pem Adding debian:ApplicationCA_-_Japanese_Government.pem Adding debian:Swisscom_Root_CA_1.pem Adding debian:Verisign_Class_2_Public_Primary_Certification_Authority_-_G2.pem Adding debian:Camerfirma_Global_Chambersign_Root.pem Adding debian:QuoVadis_Root_CA_3.pem Adding debian:QuoVadis_Root_CA.pem Adding debian:Comodo_AAA_Services_root.pem Adding debian:ComSign_CA.pem Adding debian:AddTrust_Public_Services_Root.pem Adding debian:DigiCert_Assured_ID_Root_CA.pem Adding debian:UTN_DATACorp_SGC_Root_CA.pem Adding debian:CA_Disig.pem Adding debian:E-Guven_Kok_Elektronik_Sertifika_Hizmet_Saglayicisi.pem Adding debian:GlobalSign_Root_CA_-_R3.pem Adding debian:QuoVadis_Root_CA_2.pem Adding debian:Entrust_Root_Certification_Authority.pem Adding debian:GTE_CyberTrust_Global_Root.pem Adding debian:ValiCert_Class_1_VA.pem Adding debian:Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem Adding debian:GeoTrust_Primary_Certification_Authority_-_G2.pem Adding debian:spi-ca-2003.pem Adding debian:America_Online_Root_Certification_Authority_1.pem Adding debian:AffirmTrust_Premium.pem Adding debian:Sonera_Class_1_Root_CA.pem Adding debian:Verisign_Class_2_Public_Primary_Certification_Authority_-_G3.pem Adding debian:Certplus_Class_2_Primary_CA.pem Adding debian:TURKTRUST_Certificate_Services_Provider_Root_2.pem Adding debian:Network_Solutions_Certificate_Authority.pem Adding debian:Go_Daddy_Class_2_CA.pem Adding debian:StartCom_Certification_Authority.pem Adding debian:Hongkong_Post_Root_CA_1.pem Adding debian:Hellenic_Academic_and_Research_Institutions_RootCA_2011.pem Adding debian:Thawte_Premium_Server_CA.pem Adding debian:EBG_Elektronik_Sertifika_Hizmet_Saglayicisi.pem Adding debian:TURKTRUST_Certificate_Services_Provider_Root_1.pem Adding debian:NetLock_Business_=Class_B=_Root.pem Adding debian:Microsec_e-Szigno_Root_CA_2009.pem Adding debian:DigiCert_Global_Root_CA.pem Adding debian:VeriSign_Class_3_Public_Primary_Certification_Authority_-_G4.pem Adding debian:IGC_A.pem Adding debian:TWCA_Root_Certification_Authority.pem Adding debian:S-TRUST_Authentication_and_Encryption_Root_CA_2005_PN.pem Adding debian:VeriSign_Universal_Root_Certification_Authority.pem Adding debian:DST_Root_CA_X3.pem Adding debian:Verisign_Class_1_Public_Primary_Certification_Authority.pem Adding debian:Root_CA_Generalitat_Valenciana.pem Adding debian:UTN_USERFirst_Email_Root_CA.pem Adding debian:ssl-cert-snakeoil.pem Adding debian:Starfield_Services_Root_Certificate_Authority_-_G2.pem Adding debian:GeoTrust_Primary_Certification_Authority_-_G3.pem Adding debian:Certinomis_-_Autorité_Racine.pem Adding debian:Verisign_Class_3_Public_Primary_Certification_Authority.pem 
Adding debian:TDC_Internet_Root_CA.pem Adding debian:UbuntuOne-ValiCert_Class_2_VA.pem Adding debian:AffirmTrust_Commercial.pem Adding debian:spi-cacert-2008.pem Adding debian:Izenpe.com.pem Adding debian:EC-ACC.pem Adding debian:Go_Daddy_Root_Certificate_Authority_-_G2.pem Adding debian:COMODO_ECC_Certification_Authority.pem Adding debian:CNNIC_ROOT.pem Adding debian:NetLock_Notary_=Class_A=_Root.pem Adding debian:Equifax_Secure_eBusiness_CA_2.pem Adding debian:Verisign_Class_3_Public_Primary_Certification_Authority_-_G3.pem Adding debian:Secure_Global_CA.pem Adding debian:UbuntuOne-Go_Daddy_CA.pem Adding debian:GeoTrust_Universal_CA.pem Adding debian:Wells_Fargo_Root_CA.pem Adding debian:Thawte_Server_CA.pem Adding debian:WellsSecure_Public_Root_Certificate_Authority.pem Adding debian:TC_TrustCenter_Class_3_CA_II.pem Adding debian:COMODO_Certification_Authority.pem Adding debian:Equifax_Secure_Global_eBusiness_CA.pem Adding debian:Security_Communication_Root_CA.pem Adding debian:GlobalSign_Root_CA_-_R2.pem Adding debian:TÜBITAK_UEKAE_Kök_Sertifika_Hizmet_Saglayicisi_-_Sürüm_3.pem Adding debian:Verisign_Class_4_Public_Primary_Certification_Authority_-_G3.pem Adding debian:certSIGN_ROOT_CA.pem Adding debian:RSA_Root_Certificate_1.pem Adding debian:ePKI_Root_Certification_Authority.pem Adding debian:Entrust.net_Secure_Server_CA.pem Adding debian:OISTE_WISeKey_Global_Root_GA_CA.pem Adding debian:Sonera_Class_2_Root_CA.pem Adding debian:Certigna.pem Adding debian:AffirmTrust_Networking.pem Adding debian:ValiCert_Class_2_VA.pem Adding debian:GlobalSign_Root_CA.pem Adding debian:Staat_der_Nederlanden_Root_CA_-_G2.pem Adding debian:SecureTrust_CA.pem done. Setting up openjdk-6-jre:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/policytool to provide /usr/bin/policytool (policytool) in auto mode Setting up libatk-wrapper-java (0.30.4-0ubuntu4) ... Setting up icedtea-6-jre-cacao:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... Setting up icedtea-6-jre-jamvm:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... Setting up icedtea-netx:amd64 (1.3-1ubuntu1.1) ... update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/javaws to provide /usr/bin/javaws (javaws) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/itweb-settings to provide /usr/bin/itweb-settings (itweb-settings) in auto mode update-alternatives: using /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/javaws to provide /usr/bin/javaws (javaws) in auto mode update-alternatives: using /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/itweb-settings to provide /usr/bin/itweb-settings (itweb-settings) in auto mode Setting up openjdk-6-jdk:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... 
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/appletviewer to provide /usr/bin/appletviewer (appletviewer) in auto mode
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/extcheck to provide /usr/bin/extcheck (extcheck) in auto mode
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/idlj to provide /usr/bin/idlj (idlj) in auto mode
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jar to provide /usr/bin/jar (jar) in auto mode
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jarsigner to provide /usr/bin/jarsigner (jarsigner) in auto mode
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in auto mode
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/javadoc to provide /usr/bin/javadoc (javadoc) in auto mode
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/javah to provide /usr/bin/javah (javah) in auto mode
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/javap to provide /usr/bin/javap (javap) in auto mode
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jconsole to provide /usr/bin/jconsole (jconsole) in auto mode
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jdb to provide /usr/bin/jdb (jdb) in auto mode
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jhat to provide /usr/bin/jhat (jhat) in auto mode
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jinfo to provide /usr/bin/jinfo (jinfo) in auto mode
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jmap to provide /usr/bin/jmap (jmap) in auto mode
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jps to provide /usr/bin/jps (jps) in auto mode
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jrunscript to provide /usr/bin/jrunscript (jrunscript) in auto mode
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jsadebugd to provide /usr/bin/jsadebugd (jsadebugd) in auto mode
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jstack to provide /usr/bin/jstack (jstack) in auto mode
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jstat to provide /usr/bin/jstat (jstat) in auto mode
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jstatd to provide /usr/bin/jstatd (jstatd) in auto mode
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/native2ascii to provide /usr/bin/native2ascii (native2ascii) in auto mode
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/rmic to provide /usr/bin/rmic (rmic) in auto mode
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/schemagen to provide /usr/bin/schemagen (schemagen) in auto mode
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/serialver to provide /usr/bin/serialver (serialver) in auto mode
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/wsgen to provide /usr/bin/wsgen (wsgen) in auto mode
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/wsimport to provide /usr/bin/wsimport (wsimport) in auto mode
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/xjc to provide /usr/bin/xjc (xjc) in auto mode
Setting up openoffice (3.4~oneiric) ...
Setting up libatk-wrapper-java-jni:amd64 (0.30.4-0ubuntu4) ...
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place
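Both java-6 and java-7 alternatives get registered in the log above, so it is worth confirming which JDK ends up as the system default before blaming OpenOffice. A minimal sketch using the stock update-alternatives tool (nothing here is specific to this install):

# Show which binary currently provides /usr/bin/java
update-alternatives --display java

# Interactively switch between the java-6 and java-7 alternatives
sudo update-alternatives --config java

# Confirm the active runtime
java -version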
philip@X301-2:~$ sudo apt-get install libxrandr2:i386 libxinerama1:i386
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following package was automatically installed and is no longer required:
  linux-headers-3.5.0-17
Use 'apt-get autoremove' to remove it.
The following extra packages will be installed:
  gcc-4.7-base:i386 libc6:i386 libgcc1:i386 libx11-6:i386 libxau6:i386
  libxcb1:i386 libxdmcp6:i386 libxext6:i386 libxrender1:i386
Suggested packages:
  glibc-doc:i386 locales:i386
The following NEW packages will be installed:
  gcc-4.7-base:i386 libc6:i386 libgcc1:i386 libx11-6:i386 libxau6:i386
  libxcb1:i386 libxdmcp6:i386 libxext6:i386 libxinerama1:i386
  libxrandr2:i386 libxrender1:i386
0 upgraded, 11 newly installed, 0 to remove and 93 not upgraded.
Need to get 4,936 kB of archives.
After this operation, 11.9 MB of additional disk space will be used.
Do you want to continue [Y/n]? y
Get:1 http://gb.archive.ubuntu.com/ubuntu/ quantal/main gcc-4.7-base i386 4.7.2-2ubuntu1 [15.5 kB]
Get:2 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libc6 i386 2.15-0ubuntu20 [3,940 kB]
Get:3 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libgcc1 i386 1:4.7.2-2ubuntu1 [53.5 kB]
Get:4 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxau6 i386 1:1.0.7-1 [8,582 B]
Get:5 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxdmcp6 i386 1:1.1.1-1 [13.1 kB]
Get:6 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxcb1 i386 1.8.1-1ubuntu1 [48.7 kB]
Get:7 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libx11-6 i386 2:1.5.0-1 [776 kB]
Get:8 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxext6 i386 2:1.3.1-2 [33.9 kB]
Get:9 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxinerama1 i386 2:1.1.2-1 [8,118 B]
Get:10 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxrender1 i386 1:0.9.7-1 [20.1 kB]
Get:11 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxrandr2 i386 2:1.4.0-1 [18.8 kB]
Fetched 4,936 kB in 30s (161 kB/s)
Preconfiguring packages ...
Selecting previously unselected package gcc-4.7-base:i386.
(Reading database ... 146005 files and directories currently installed.)
Unpacking gcc-4.7-base:i386 (from .../gcc-4.7-base_4.7.2-2ubuntu1_i386.deb) ...
Selecting previously unselected package libc6:i386.
Unpacking libc6:i386 (from .../libc6_2.15-0ubuntu20_i386.deb) ...
Selecting previously unselected package libgcc1:i386.
Unpacking libgcc1:i386 (from .../libgcc1_1%3a4.7.2-2ubuntu1_i386.deb) ...
Selecting previously unselected package libxau6:i386.
Unpacking libxau6:i386 (from .../libxau6_1%3a1.0.7-1_i386.deb) ...
Selecting previously unselected package libxdmcp6:i386.
Unpacking libxdmcp6:i386 (from .../libxdmcp6_1%3a1.1.1-1_i386.deb) ...
Selecting previously unselected package libxcb1:i386.
Unpacking libxcb1:i386 (from .../libxcb1_1.8.1-1ubuntu1_i386.deb) ...
Selecting previously unselected package libx11-6:i386.
Unpacking libx11-6:i386 (from .../libx11-6_2%3a1.5.0-1_i386.deb) ...
Selecting previously unselected package libxext6:i386.
Unpacking libxext6:i386 (from .../libxext6_2%3a1.3.1-2_i386.deb) ...
Selecting previously unselected package libxinerama1:i386.
Unpacking libxinerama1:i386 (from .../libxinerama1_2%3a1.1.2-1_i386.deb) ...
Selecting previously unselected package libxrender1:i386.
Unpacking libxrender1:i386 (from .../libxrender1_1%3a0.9.7-1_i386.deb) ...
Selecting previously unselected package libxrandr2:i386.
Unpacking libxrandr2:i386 (from .../libxrandr2_2%3a1.4.0-1_i386.deb) ...
Setting up gcc-4.7-base:i386 (4.7.2-2ubuntu1) ...
Setting up libc6:i386 (2.15-0ubuntu20) ...
Setting up libgcc1:i386 (1:4.7.2-2ubuntu1) ...
Setting up libxau6:i386 (1:1.0.7-1) ...
Setting up libxdmcp6:i386 (1:1.1.1-1) ...
Setting up libxcb1:i386 (1.8.1-1ubuntu1) ...
Setting up libx11-6:i386 (2:1.5.0-1) ...
Setting up libxext6:i386 (2:1.3.1-2) ...
Setting up libxinerama1:i386 (2:1.1.2-1) ...
Setting up libxrender1:i386 (1:0.9.7-1) ...
Setting up libxrandr2:i386 (2:1.4.0-1) ...
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place

$ sudo chmod a+rx /opt/openoffice.org3/share/uno_packages/cache/uno_packages
chmod: cannot access `/opt/openoffice.org3/share/uno_packages/cache/uno_packages': No such file or directory
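That chmod fails because nothing exists at /opt/openoffice.org3 on this machine; the OpenOffice 3.4 package installed above may use a different prefix. A minimal sketch for locating the real uno_packages directory first, then pointing the same chmod at whatever path find actually reports (the example target in the comment is a placeholder, not a known path):

# Search the usual install prefixes for the uno_packages cache directory
sudo find /opt /usr/lib -type d -name uno_packages 2>/dev/null

# Grant read/execute on the directory find printed, for example:
# sudo chmod a+rx <path-reported-above>/cache/uno_packages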

    Read the article
