Search Results


  • Codeigniter Active record help

    - by sea_1987
    Hello, I am trying to increment an INT column by 1 if a certain field is not null on an update request. Currently I have this updating two columns:

        public function updateCronDetails($transaction_reference, $flag, $log) {
            $data = array(
                'flag' => $flag,
                'log' => "$log"
            );
            $this->db->where('transaction_reference', $transaction_reference);
            $this->db->update('sy_cron', $data);
        }

    What I need to know is how I can check whether the value being sent to the log field is NULL, and if it is, how I could increment a column called count by 1.
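    One way to approach this (a minimal sketch, assuming CodeIgniter's Active Record and that the counter column is named count) is to test $log before building the update, and use set() with the escape flag turned off so the increment is sent as a raw SQL expression:

        public function updateCronDetails($transaction_reference, $flag, $log) {
            $data = array('flag' => $flag, 'log' => $log);
            if ($log === NULL) {
                // FALSE disables escaping, so `count` + 1 is evaluated by the database
                $this->db->set('count', '`count` + 1', FALSE);
            }
            $this->db->where('transaction_reference', $transaction_reference);
            $this->db->update('sy_cron', $data);
        }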


  • I can't upload a file with CGIHTTPServer

    - by sdemingo
    Hi all, I'm using CGIHTTPServer to implement a simple CGI server. I'm trying to upload a file through a form with the POST method and the multipart/form-data enctype, but I have problems when I retrieve the values of the fields in the CGI script. When the script catches the form fields, the value of the file is a MiniFieldStorage with only two fields (key and file name), and I can't recover the content of the file. As the API doc shows, this content is in the value field of a FieldStorage, but in a MiniFieldStorage this field doesn't exist. How can I get a FieldStorage with the content of the file instead of a MiniFieldStorage? Is there another method to upload a file using CGIHTTPServer? Thanks a lot
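    For reference, a minimal sketch of the receiving script, assuming Python 2 and a form field named userfile (a hypothetical name). Getting a MiniFieldStorage back usually means the request was not actually parsed as multipart/form-data (for example the form's enctype or method was wrong, or the values came from the query string); a genuine multipart upload yields a FieldStorage whose file attribute holds the content:

        #!/usr/bin/env python
        import cgi

        form = cgi.FieldStorage()      # parses the POST body from stdin
        item = form['userfile']        # 'userfile' is the form field's name attribute
        if item.filename:              # a real upload has a filename and a file object
            data = item.file.read()    # the uploaded file's content
            print "Content-Type: text/plain\n"
            print "received %d bytes of %s" % (len(data), item.filename)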


  • has_many in rails

    - by user363243
    I have a 'users' table which stores all of my website's users, some of whom are technicians. The users table has 'first_name' and 'last_name' fields as well as other fields. The other table is called 'service_tickets' and has a foreign key to users called technician_id. This gives me a real problem because when I am looking at the service_tickets table, the related user is actually a technician. That's why the service_tickets table has a technician_id and not a user_id field. I am trying to accomplish something like this:

        t = ServiceTicket.find_by_id(7)
        t.technician.first_name # notice how I don't do t.user.first_name

    Is this possible in Rails? I can't seem to get it to work... Thank you for your help!
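    A minimal sketch of how this is usually declared, assuming Rails 2/3-era ActiveRecord syntax: point the association at the User class while keeping the technician name and foreign key:

        class ServiceTicket < ActiveRecord::Base
          # exposes the related user as t.technician
          belongs_to :technician, :class_name => "User", :foreign_key => "technician_id"
        end

        class User < ActiveRecord::Base
          has_many :service_tickets, :foreign_key => "technician_id"
        end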


  • Assigning parameter as array length

    - by Jcolnz
    I am currently stuck with a homework assignment, question below:

    Define a default constructor for Deck that initialises the deck field with an array of size 0. Also define a constructor that takes an integer parameter and initialises the deck field with an array of that size. The constructor should also initialise every element with a new random MovieCard. The cards should be uniquely named.

    So far my code is:

        public class Deck {
            MovieCard[] deck = new MovieCard[] {};
            public Deck() {
                MovieCard deck[];
            }
            public Deck(int size) {
                MovieCard deck = new MovieCard[];
            }
        }

    Obviously this is incomplete, but I can't find any references in my previous notes about passing a parameter in as an array length.
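    A minimal sketch of the two constructors, hedged: MovieCard's constructor is not shown in the question, so the name-taking constructor below is hypothetical. The key point is that new MovieCard[size] uses the parameter directly as the array length:

        public class Deck {
            private MovieCard[] deck;

            public Deck() {
                deck = new MovieCard[0];            // an array of size 0
            }

            public Deck(int size) {
                deck = new MovieCard[size];         // the parameter is the array length
                for (int i = 0; i < size; i++) {
                    // hypothetical constructor; appending i keeps the names unique
                    deck[i] = new MovieCard("Card " + i);
                }
            }
        }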


  • appending and reading text file

    - by Rod
    Environment: any .NET Framework welcomed. I have a log file that gets written to 24/7. I am trying to create an application that will read the log file and process the data. What's the best way to read the log file efficiently? I imagine monitoring the file with something like FileSystemWatcher, but how do I make sure I don't read the same data once it's been processed by my application? Or, say the application aborts for some unknown reason, how would it pick up where it left off? There's usually a header and footer around the payload that's in the log file, and maybe an id field in the content as well (not sure yet whether the id field will be there). I also imagined saving the number of lines read somewhere and using that as a bookmark.
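    A minimal sketch of the line-count bookmark idea in C#, with hypothetical file names; opening the stream with FileShare.ReadWrite lets the producer keep appending while the reader works:

        using System;
        using System.IO;

        class LogProcessor
        {
            const string LogPath = "app.log";           // hypothetical paths
            const string BookmarkPath = "bookmark.txt";

            static void Main()
            {
                int processed = File.Exists(BookmarkPath)
                    ? int.Parse(File.ReadAllText(BookmarkPath))
                    : 0;

                using (var fs = new FileStream(LogPath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
                using (var reader = new StreamReader(fs))
                {
                    int lineNo = 0;
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        lineNo++;
                        if (lineNo <= processed) continue;  // already handled on a previous run
                        Console.WriteLine(line);            // process the line here
                        processed = lineNo;
                    }
                }

                // persisting per line (or per batch) narrows what is lost on a crash
                File.WriteAllText(BookmarkPath, processed.ToString());
            }
        }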


  • date enable disable

    - by Newbie
    I have an "Add New Product" page, and the database table "jos_stockmovement" has a field "start_week" which contains the year and week. On the page I have a dropdown list named "Start Week" which holds the year and week, for example "2012/36". How do I disable past weeks? All I want to enable is the current week. Here's my code:

        <td colspan="99">
          <%= fld.select :start_week, options_for_select( StockMovement.order("year DESC, week DESC").map { | val | [ "#{ val.year }/#{ val.week }", val.id ] }, :selected => @product.start_week ), :class => "ddl_SW" %>
        </td>
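    One way to approach this is to filter the records rather than disable options, so only the current and later weeks are offered. A sketch, assuming ActiveRecord and that the week column follows Ruby's Date#cweek numbering (an assumption):

        today = Date.today
        options = StockMovement.
          where("year > :y OR (year = :y AND week >= :w)", :y => today.year, :w => today.cweek).
          order("year ASC, week ASC").
          map { |val| ["#{val.year}/#{val.week}", val.id] }

    and then pass options straight to options_for_select in the view; use equality instead of >= if only the current week should appear.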


  • Wifi not working after a few minutes

    - by drtanz
    I'm using a few MacBooks and iPads connected to a router via WiFi. The problem is that a few minutes after they connect via WiFi, the connection stops working. This happens on all devices. I went into the router settings by connecting via cable and everything seems in order. Connecting a laptop via cable to the router, I can use the internet as normal; the problem is only with WiFi. What could be the problem here? Here are the connected clients:

        MAC Address        Idle(s)  RSSI(dBm)  IP Addr       Host Name       Mode  Speed (kbps)
        14:10:9F:F3:48:D6  1        -36        192.168.0.5   Jeans-Air       n     78000
        14:99:E2:C6:41:10  1        -36        192.168.0.8   JeanGaleasiPad  n     24000

    Here's the router event log:

        Mon Dec 30 04:12:30 2013 Notice (6) WiFi Interface [wl0] set to Channel 1 (Side-Band Channel:N/A)...
        Mon Dec 30 04:12:25 2013 Notice (6) WiFi Interface [wl0] set to Channel 1 (Side-Band Channel:5) -...
        Mon Dec 30 02:17:56 2013 Notice (6) WiFi Interface [wl0] set to Channel 40 (Side-Band Channel:36)...
        Mon Dec 30 02:16:04 2013 Notice (6) WiFi Interface [wl0] set to Channel 11 (Side-Band Channel:7) ...
        Mon Dec 30 01:59:26 2013 Notice (6) WiFi Interface [wl0] set to Channel 6 (Side-Band Channel:N/A)...
        Mon Dec 30 01:59:22 2013 Notice (6) WiFi Interface [wl0] set to Channel 6 (Side-Band Channel:2) -...
        Sun Dec 29 23:27:51 2013 Notice (6) WiFi Interface [wl0] set to Channel 1 (Side-Band Channel:N/A)...
        Sun Dec 29 23:27:49 2013 Notice (6) WiFi Interface [wl0] set to Channel 11 (Side-Band Channel:N/A...
        Sun Dec 29 14:32:55 2013 Critical (3) Started Unicast Maintenance Ranging - No Response received - ...
        Sat Dec 28 13:08:19 2013 Error (4) DHCP REBIND WARNING - Field invalid in response ;CM-MAC=1c:3e...
        Fri Dec 27 18:10:19 2013 Critical (3) Started Unicast Maintenance Ranging - No Response received - ...
        Fri Dec 27 16:08:55 2013 Error (4) Map Request Retry Timeout;CM-MAC=1c:3e:84:f1:6b:84;CMTS-MAC=0...
        Thu Dec 26 21:08:53 2013 Notice (6) WiFi Interface [wl0] set to Channel 11 (Side-Band Channel:7) ...
        Thu Dec 26 20:43:50 2013 Notice (6) WiFi Interface [wl0] set to Channel 11 (Side-Band Channel:N/A...
        Tue Dec 24 12:45:03 2013 Critical (3) Started Unicast Maintenance Ranging - No Response received - ...
        Tue Dec 24 04:55:52 2013 Error (4) Map Request Retry Timeout;CM-MAC=1c:3e:84:f1:6b:84;CMTS-MAC=0...
        Mon Dec 23 12:32:00 2013 Notice (6) TLV-11 - unrecognized OID;CM-MAC=1c:3e:84:f1:6b:84;CMTS-MAC=0...
        Mon Dec 23 12:32:00 2013 Error (4) Missing BP Configuration Setting TLV Type: 17.9;CM-MAC=1c:3e:...
        Mon Dec 23 12:32:00 2013 Error (4) Missing BP Configuration Setting TLV Type: 17.8;CM-MAC=1c:3e:...
        Mon Dec 23 12:32:00 2013 Warning (5) DHCP WARNING - Non-critical field invalid in response ;CM-MAC...
        Mon Dec 23 18:32:02 2013 Notice (6) Honoring MDD; IP provisioning mode = IPv4
        Mon Dec 23 18:31:10 2013 Critical (3) No Ranging Response received - T3 time-out;CM-MAC=1c:3e:84:f1...
        Mon Dec 23 18:28:57 2013 Critical (3) Received Response to Broadcast Maintenance Request, But no Un...
        Mon Dec 23 18:28:25 2013 Critical (3) Started Unicast Maintenance Ranging - No Response received - ...
        Mon Dec 23 12:17:48 2013 Notice (6) TLV-11 - unrecognized OID;CM-MAC=1c:3e:84:f1:6b:84;CMTS-MAC=0...
        Mon Dec 23 12:17:48 2013 Error (4) Missing BP Configuration Setting TLV Type: 17.9;CM-MAC=1c:3e:...
        Mon Dec 23 12:17:48 2013 Error (4) Missing BP Configuration Setting TLV Type: 17.8;CM-MAC=1c:3e:...
        Mon Dec 23 12:17:48 2013 Warning (5) DHCP WARNING - Non-critical field invalid in response ;CM-MAC...
        Mon Dec 23 18:17:48 2013 Notice (6) Honoring MDD; IP provisioning mode = IPv4
        Mon Dec 23 18:16:58 2013 Critical (3) No Ranging Response received - T3 time-out;CM-MAC=1c:3e:84:f1...
        Mon Dec 23 18:16:15 2013 Critical (3) Received Response to Broadcast Maintenance Request, But no Un...
        Mon Dec 23 18:15:43 2013 Critical (3) Started Unicast Maintenance Ranging - No Response received - ...


  • Microsoft and jQuery

    - by Rick Strahl
    The jQuery JavaScript library has been steadily getting more popular and with recent developments from Microsoft, jQuery is also getting ever more exposure on the ASP.NET platform, including now directly from Microsoft. jQuery is a lightweight, open source DOM manipulation library for JavaScript that has changed how many developers think about JavaScript. You can download it and find more information on jQuery at www.jquery.com. For me jQuery has had a huge impact on how I develop Web applications and was probably the main reason I went from dreading JavaScript development to actually looking forward to implementing client side JavaScript functionality. It has also had a profound impact on my JavaScript skill level, from seeing how the library accomplishes things (and often reviewing the terse but excellent source code). jQuery made an uncomfortable development platform (JavaScript + DOM) a joy to work on. Although jQuery is by no means the only JavaScript library out there, its ease of use, small size, huge community of plug-ins and pure usefulness has made it easily the most popular JavaScript library available today. As a long time jQuery user, I've been excited to see the developments from Microsoft that are bringing jQuery to more ASP.NET developers and providing more integration with jQuery for ASP.NET's core features rather than relying on the ASP.NET AJAX library. Microsoft and jQuery – making Friends jQuery is an open source project but in the last couple of years Microsoft has really thrown its weight behind supporting this open source library as a supported component on the Microsoft platform. When I say supported I literally mean supported: Microsoft now offers actual tech support for jQuery as part of their Product Support Services (PSS), as jQuery integration has become part of several of the ASP.NET toolkits and ships in several of the default Web project templates in Visual Studio 2010. The ASP.NET MVC 3 framework (still in Beta) also uses jQuery for a variety of client side support features including client side validation, and we can look forward to more integration of client side functionality via jQuery in both MVC and WebForms in the future. In other words jQuery is becoming an optional but included component of the ASP.NET platform. PSS support means that support staff will answer jQuery related support questions as part of any support incidents related to ASP.NET, which provides some peace of mind to corporate development shops that require end-to-end support from Microsoft. In addition to including jQuery and supporting it, Microsoft has also been getting involved in providing development resources for extending jQuery's functionality via plug-ins. Microsoft's last version of the Microsoft Ajax Library – which is the successor to the native ASP.NET AJAX Library – included some really cool functionality for client templates, databinding and localization. As it turns out Microsoft has rebuilt most of that functionality using jQuery as the base API and provided jQuery plug-ins of these components. Very recently these three plug-ins were submitted and have been approved for inclusion in the official jQuery plug-in repository and been taken over by the jQuery team for further improvements and maintenance.
    Even more surprising: The jQuery Templates component has actually been approved for inclusion in the next major update of the jQuery core in jQuery V1.5, which means it will become a native feature that doesn't require additional script files to be loaded. Imagine this – an open source contribution from Microsoft that has been accepted into a major open source project for a core feature improvement. Microsoft has come a long way indeed! What the Microsoft Involvement with jQuery means to you For Microsoft jQuery support is a strategic decision that affects their direction in client side development, but nothing stopped you from using jQuery in your applications prior to Microsoft's official backing, and in fact a large chunk of developers did so readily prior to Microsoft's announcement. Official support from Microsoft brings a few benefits to developers however. jQuery support in Visual Studio 2010 means built-in support for jQuery IntelliSense, automatically added jQuery scripts in many project types and a common base for client side functionality that actually uses what most developers are already using. If you have already been using jQuery and were worried about straying from the Microsoft line and their internal Microsoft Ajax Library – worry no more. With official support and the change in direction towards jQuery, Microsoft is now following along with what most in the ASP.NET community had already been doing by using jQuery, which is likely the reason for Microsoft's shift in direction in the first place. ASP.NET AJAX and the Microsoft AJAX Library weren't bad technology – there was a ton of useful functionality buried in these libraries. However, these libraries never got off the ground, mainly because early incarnations were squarely aimed at control/component developers rather than application developers. For all the functionality that these controls provided for control developers, they lacked useful and easily usable application developer functionality that was easily accessible in day to day client side development. The result was that even though Microsoft shipped support for these tools in the box (in .NET 3.5 and 4.0), other than for the internal support in ASP.NET for things like the UpdatePanel and the ASP.NET AJAX Control Toolkit as well as some third party vendors, the Microsoft client libraries were largely ignored by the developer community, opening the door for other client side solutions. Microsoft seems to be acknowledging developer choice in this case: Many more developers were going down the jQuery path rather than using the Microsoft built libraries and there seems to be little sense in continuing development of a technology that largely goes unused by the majority of developers. Kudos to Microsoft for recognizing this and gracefully changing directions. Note that even though there will be no further development in the Microsoft client libraries they will continue to be supported, so if you're using them in your applications there's no reason to start running for the exit in a panic and start re-writing everything with jQuery. Although that might be a reasonable choice in some cases, jQuery and the Microsoft libraries work well side by side, so you can leave existing solutions untouched even as you enhance them with jQuery.
    The Microsoft jQuery Plug-ins – Solid Core Features One of the most interesting developments in Microsoft's embracing of jQuery is that Microsoft has started contributing to jQuery via the standard mechanism set up for jQuery developers: by submitting plug-ins. Microsoft took some of the nicest new features of the unpublished Microsoft Ajax Client Library, re-wrote these components for jQuery and then submitted them as plug-ins to the jQuery plug-in repository. Accepted plug-ins get taken over by the jQuery team, and that's exactly what happened with the three plug-ins submitted by Microsoft, with the templating plug-in even getting slated to be published as part of the jQuery core in the next major release (1.5). The following plug-ins are provided by Microsoft: jQuery Templates – a client side template rendering engine jQuery Data Link – a client side databinder that can synchronize changes without code jQuery Globalization – provides formatting and conversion features for dates and numbers The first two are ports of functionality that was slated for the Microsoft Ajax Library, while the globalization library provides functionality that was already found in the original ASP.NET AJAX library. To me all three plug-ins address a pressing need in client side applications and provide functionality I've previously used in other incarnations, but with more complete implementations. Let's take a close look at these plug-ins. jQuery Templates http://api.jquery.com/category/plugins/templates/ Client side templating is a key component for building rich JavaScript applications in the browser. Templating on the client saves you from manually creating markup by building DOM nodes and injecting them individually into the document via code. Rather, you can create markup templates – similar to the way you create classic ASP server markup – and merge data into these templates to render HTML, which you can then inject into the document or replace existing content with. Output from templates is rendered as a jQuery matched set and can then be easily inserted into the document as needed. Templating is key to minimizing client side code and reducing repeated rendering logic. Instead a single template can be used in many places for updating and adding content to existing pages. Further, if you build pure AJAX interfaces that rely entirely on client rendering of the initial page content, templates allow you to use a single markup template to handle all rendering of each specific HTML section/element. I've used a number of different client rendering template engines with jQuery in the past, including jTemplates (a PHP style templating engine) and a modified version of John Resig's MicroTemplating engine which I built into my own set of libraries because it's such a commonly used feature in my client side applications. jQuery Templates adds a much richer templating model that allows for sub-templates and access to the data items. Like John Resig's original Micro Template engine, the core of the templating engine creates JavaScript code, which means that templates can include JavaScript code. To give you a basic idea of how templates work, imagine I have an application that downloads a set of stock quotes based on a symbol list and then displays them in the document.
    To do this you can create an 'item' template that describes how each of the quotes is rendered, as a template inside of the document: <script id="stockTemplate" type="text/x-jquery-tmpl"> <div id="divStockQuote" class="errordisplay" style="width: 500px;"> <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div> <div class="label">Last Price:</div><div>${LastPrice}</div> <div class="label">Net Change:</div><div> {{if NetChange > 0}} <b style="color:green" >${NetChange}</b> {{else}} <b style="color:red" >${NetChange}</b> {{/if}} </div> <div class="label">Last Update:</div><div>${LastQuoteTimeString}</div> </div> </script> The 'template' is little more than HTML with some markup expressions inside of it that define the template language. Notice the embedded ${} expressions which reference data from the quote objects returned from an AJAX call on the server. You can embed any JavaScript or value expression in these template expressions. There are also a number of structural commands like {{if}} and {{each}} that provide for rudimentary logic inside of your templates, as well as commands ({{tmpl}} and {{wrap}}) for nesting templates. You can find more about the full set of markup expressions available in the documentation. To load up this data you can use code like the following: <script type="text/javascript"> //var Proxy = new ServiceProxy("../PageMethods/PageMethodsService.asmx/"); $(document).ready(function () { $("#btnGetQuotes").click(GetQuotes); }); function GetQuotes() { var symbols = $("#txtSymbols").val().split(","); $.ajax({ url: "../PageMethods/PageMethodsService.asmx/GetStockQuotes", data: JSON.stringify({ symbols: symbols }), // parameter map type: "POST", // data has to be POSTed contentType: "application/json", timeout: 10000, dataType: "json", success: function (result) { var quotes = result.d; var jEl = $("#stockTemplate").tmpl(quotes); $("#quoteDisplay").empty().append(jEl); }, error: function (xhr, status) { alert(status + "\r\n" + xhr.responseText); } }); }; </script> In this case an ASMX AJAX service is called to retrieve the stock quotes. The service returns an array of quote objects. The result is returned as an object with the .d property (in Microsoft service style) that returns the actual array of quotes. The template is applied with: var jEl = $("#stockTemplate").tmpl(quotes); which selects the template script tag and uses the .tmpl() function to apply the data to it. The result is a jQuery matched set of elements that can then be appended to the quote display element in the page. The template is merged against an array in this example. When the result is an array, the template is automatically applied to each array item. If you pass a single data item – like say a stock quote – the template works exactly the same way but is applied only once. Templates also have access to a $data item which provides the current data item and information about the template that is currently executing. This makes it possible to keep state within the context of the template itself and also to pass context from a parent template to a child template, which is very powerful. Templates can be evaluated by using the template selector and calling the .tmpl() function on the jQuery matched set as shown above, or you can use the static $.tmpl() function to provide a template as a string. This allows you to dynamically create templates in code or – more likely – to load templates from the server via AJAX calls.
    In short, there are options. The above shows off some of the basics, but there's much more functionality available in the template engine. Check the documentation link for more information and links to additional examples. The plug-in download also comes with a number of examples that demonstrate functionality. jQuery templates will become a native component in jQuery Core 1.5, so it's definitely worthwhile checking out the engine today and getting familiar with this interface. As much as I'm stoked about templating becoming part of the jQuery core, because it's such an integral part of many applications, there are also a couple of shortcomings in the current incarnation: Lack of Error Handling Currently if you embed an expression that is invalid it's simply not rendered. There's no error rendered into the template, nor do the various template functions throw errors, which leaves finding bugs as a runtime exercise. I would like some mechanism – optional if possible – to be able to get error info on what is failing in a template when it's rendered. No String Output Templates are always rendered into a jQuery matched set and there's no way that I can see to directly render to a string. String output can be useful for debugging as well as opening up templating for creating non-HTML string output. Limited JavaScript Access Unlike John Resig's original MicroTemplating Engine, which was entirely based on JavaScript code generation, these templates are limited to a few structured commands that can 'execute'. There's no arbitrary code execution inside of template code, which means you're limited to calling expressions available in global objects or the data item passed in. This may or may not be a big deal depending on the complexity of your template logic. Error handling has been discussed quite a bit and it's likely there will be some solution to that particular issue by the time jQuery templates ship. The others are relatively minor issues but something to think about anyway. jQuery Data Link http://api.jquery.com/category/plugins/data-link/ jQuery Data Link provides the ability to do two-way data binding between input controls and an underlying object's properties. The typical scenario is linking a textbox to a property of an object, having the object updated when the text in the textbox is changed, and having the textbox change when the value in the object or the entire object changes. The plug-in also supports converter functions that can be applied to provide the conversion logic from string to some other value, typically necessary for mapping things like textbox string input to, say, a number property, and potentially applying additional formatting and calculations. In theory this sounds great; however, in reality this plug-in has some serious usability issues. Using the plug-in you can do things like the following to bind data: person = { firstName: "rick", lastName: "strahl"}; $(document).ready( function() { // provide for two-way linking of inputs $("form").link(person); // bind to non-input elements explicitly $("#objFirst").link(person, { firstName: { name: "objFirst", convertBack: function (value, source, target) { $(target).text(value); } } }); $("#objLast").link(person, { lastName: { name: "objLast", convertBack: function (value, source, target) { $(target).text(value); } } }); }); This code hooks up two-way linking between a couple of textboxes on the page and the person object.
    The first line in the .ready() handler provides mapping of the object to form fields with the same field names as properties on the object. Note that .link() does NOT bind items into the textboxes when you call .link() – changes are mapped only when values change and you move out of the field. Strike one. The two following commands allow manual binding of values to specific DOM elements, which is effectively a one-way bind. You specify the object and then an explicit mapping where name is an ID in the document. The converter is required to explicitly assign the value to the element. Strike two. You can also detect changes to the underlying object and cause updates to the bound input elements. Unfortunately the syntax to do this is not very natural, as you have to rely on the jQuery data object. Updating an object's properties and getting change notification looks like this: function updateFirstName() { $(person).data("firstName", person.firstName + " (code updated)"); } This works fine in causing any linked fields to be updated. In the bindings above, both the firstName input field and the objFirst DOM element get updated. But the syntax requires you to use a jQuery .data() call for each property change to ensure that the changes are tracked properly. Really? Sure, you're binding through multiple layers of abstraction now, but how is that better than just manually assigning values? The code savings (if any) are going to be minimal. As much as I would like to have a WPF/Silverlight/Observable-like binding mechanism in client script, this plug-in doesn't help much towards that goal in its current incarnation. While you can bind values, the 'binder' is too limited to be really useful. If initial values can't be assigned from the mappings, you're going to end up duplicating work loading the data using some other mechanism. There's no easy way to re-bind data with a different object altogether, since updates trigger only through the .data members. Finally, any non-input elements have to be bound via code that's fairly verbose and frankly may be more voluminous than what you might write by hand for manual binding and unbinding. Two way binding can be very useful but it has to be easy and most importantly natural. If it's more work to hook up a binding than writing a couple of lines to do binding/unbinding, this sort of thing helps very little in most scenarios. In talking to some of the developers, the feature set for Data Link is not complete and they are still soliciting input for features and functionality. If you have ideas on how you want this feature to be more useful, get involved and post your recommendations. As it stands, it looks to me like this component needs a lot of love to become useful. For this component to really provide value, bindings need to be able to be refreshed easily and work at the object level, not just the property level. It seems to me we would be much better served by a model binder object that can perform these binding/unbinding tasks in bulk rather than a tool where each link has to be mapped first. I also find the choice of creating a jQuery plug-in questionable – it seems a standalone object – albeit one that relies on the jQuery library – would provide a more intuitive interface than the current forcing of options onto a plug-in style interface. Out of the three Microsoft created components this is by far the least useful and least polished implementation at this point.
    jQuery Globalization http://github.com/jquery/jquery-global Globalization in JavaScript applications often gets short shrift, and part of the reason for this is that natively in JavaScript there's little support for formatting and parsing of numbers and dates. There are a number of JavaScript libraries out there that provide some support for globalization, but most are limited to a particular portion of globalization. As .NET developers we're fairly spoiled by the richness of the APIs provided in the framework, and when dealing with client development one really notices the lack of these features. While you may not necessarily need to localize your application, the globalization plug-in also helps with some basic tasks for non-localized applications: dealing with formatting and parsing of dates and time values. Dates in particular are problematic in JavaScript as there are no formatters whatsoever except the .toString() method, which outputs a verbose and next to useless long string. With the globalization plug-in you get a good chunk of the formatting and parsing functionality that the .NET framework provides on the server. You can write code like the following, for example, to format numbers and dates: var date = new Date(); var output = $.format(date, "MMM. dd, yy") + "\r\n" + $.format(date, "d") + "\r\n" + // 10/25/2010 $.format(1222.32213, "N2") + "\r\n" + $.format(1222.33, "c") + "\r\n"; alert(output); This becomes even more useful if you combine it with templates, which can also include any JavaScript expressions. Assuming the globalization plug-in is loaded, you can create template expressions that use the $.format function. Here's the template I used earlier for the stock quote again with a couple of formats applied: <script id="stockTemplate" type="text/x-jquery-tmpl"> <div id="divStockQuote" class="errordisplay" style="width: 500px;"> <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div> <div class="label">Last Price:</div> <div>${$.format(LastPrice,"N2")}</div> <div class="label">Net Change:</div><div> {{if NetChange > 0}} <b style="color:green" >${NetChange}</b> {{else}} <b style="color:red" >${NetChange}</b> {{/if}} </div> <div class="label">Last Update:</div> <div>${$.format(LastQuoteTime,"MMM dd, yyyy")}</div> </div> </script> There are also parsing methods that can parse dates and numbers from strings easily: alert($.parseDate("25.10.2010")); alert($.parseInt("12.222")); // de-DE uses . for thousands separators As you can see, culture specific options are taken into account when parsing. The globalization plugin provides rich support for a variety of locales: get a list of all available cultures, query cultures for culture items (like currency symbol, separators etc.), localized string names for all calendar related items (days of week, months), all generated off of .NET's supported locales. In short you get much of the same functionality that you already might be using in .NET on the server side. The plugin includes a huge number of locales and a Globalization.all.min.js file that contains the text defaults for each of these locales, as well as small locale specific script files that define each of the locale specific settings. It's highly recommended that you NOT use the huge globalization file that includes all locales, but rather add script references to only those languages you explicitly care about. Overall this plug-in is a welcome helper.
    Even if you use it with a single locale (like en-US) and do no other localization, you'll gain solid support for number and date formatting, which is a vital feature of many applications. Changes for Microsoft It's good to see Microsoft coming out of its shell and away from the 'not-built-here' mentality that has been so pervasive in the past. It's especially good to see it applied to jQuery – a technology that has stood in drastic contrast to Microsoft's own internal efforts in terms of design, usage model and… popularity. It's great to see that Microsoft is paying attention to what customers prefer to use and supporting the customer sentiment – even if it meant drastically changing course of policy and moving into a more open and sharing environment in the process. The additional jQuery support that has been introduced in the last two years certainly has made life easier for many developers on the ASP.NET platform. It's also nice to see Microsoft submitting proposals through the standard jQuery plug-in process and getting them accepted for various very useful projects. Certainly the jQuery Templates plug-in is going to be very useful to many, especially since it will be baked into the jQuery core in jQuery 1.5. I hope we see more of this type of involvement from Microsoft in the future. Kudos! © Rick Strahl, West Wind Technologies, 2005-2010. Posted in jQuery, ASP.NET


  • Unity: how to apply programmatical changes to the Terrain SplatPrototype?

    - by Shivan Dragon
    I have a script to which I add a Terrain object (I drag and drop the terrain into the public Terrain field). The Terrain is already set up in Unity to have 2 PaintTextures: the first is a square (set up with a tile size so that it forms a checkered pattern) and the 2nd one is a grass image: Also the Target Strength of the first PaintTexture is lowered so that the checkered pattern also reveals some of the grass underneath. Now I want, at run time, to change the Tile Size of the first PaintTexture, i.e. have more or fewer checkers depending on various run time conditions. I've looked through Unity's documentation and I've seen you have the Terrain.terrainData.splatPrototypes array which allows you to change this. Also there's a RefreshPrototypes() method on the terrainData object and a Flush() method on the Terrain object. So I made a script like this: public class AStarTerrain : MonoBehaviour { public int aStarCellColumns, aStarCellRows; public GameObject aStarCellHighlightPrefab; public GameObject aStarPathMarkerPrefab; public GameObject utilityRobotPrefab; public Terrain aStarTerrain; void Start () { //I've also tried NOT drag and dropping the Terrain on the public field //and instead just using the commented line below, but I get the same results //aStarTerrain = this.GetComponents<Terrain>()[0]; Debug.Log ("Got terrain "+aStarTerrain.name); SplatPrototype[] splatPrototypes = aStarTerrain.terrainData.splatPrototypes; Debug.Log("Terrain has "+splatPrototypes.Length+" splat prototypes"); SplatPrototype aStarCellSplat = splatPrototypes[0]; Debug.Log("Re-tyling splat prototype "+aStarCellSplat.texture.name); aStarCellSplat.tileSize = new Vector2(2000,2000); Debug.Log("Tyling is now "+aStarCellSplat.tileSize.x+"/"+aStarCellSplat.tileSize.y); aStarTerrain.terrainData.RefreshPrototypes(); aStarTerrain.Flush(); } //... Problem is, nothing gets changed; the checker map is not re-tiled. The console output correctly tells me that I've got the Terrain object with the right name, that it has the right number of splat prototypes, and that I'm modifying the tileSize on the SplatPrototype object corresponding to the right texture. It also tells me the value has changed. But nothing gets updated in the actual graphical view. So please, what am I missing?
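    For what it's worth, a minimal sketch of a likely fix: in this generation of the Unity API, the terrainData.splatPrototypes getter returns a copy of the array, so in-place edits are lost unless the modified array is assigned back to the property:

        SplatPrototype[] splats = aStarTerrain.terrainData.splatPrototypes;
        splats[0].tileSize = new Vector2(2000, 2000);
        // assigning the array back is what actually commits the change
        aStarTerrain.terrainData.splatPrototypes = splats;
        aStarTerrain.Flush();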


  • TFS 2010 Build Custom Activity for Merging Assemblies

    - by Jakob Ehn
    *** The sample build process template discussed in this post is available for download from here: http://cid-ee034c9f620cd58d.office.live.com/self.aspx/BlogSamples/ILMerge.xaml ***   In my previous post I talked about library builds that we use to build and replicate dependencies between applications in TFS. This is typically used for common libraries and tools that several other applications need to reference. When the libraries grow in size over time, so does the number of assemblies. So all solutions that use the common library must reference all the necessary assemblies that they need, and if we for example do a refactoring and extract some code into a new assembly, all the clients must update their references to reflect these changes, otherwise they won't compile. To improve on this, we use a tool from Microsoft Research called ILMerge (Download from here). It can be used to merge several assemblies into one assembly that contains all types. If you haven't used this tool before, you should check it out. Previously I have implemented this in builds using a simple batch file that contains the full command, something like this: "%ProgramFiles(x86)%\microsoft\ilmerge\ilmerge.exe" /target:library /attr:ClassLibrary1.bl.dll /out:MyNewLibrary.dll ClassLibrary1.dll ClassLibrary2.dll ClassLibrary3.dll This merges 3 assemblies (ClassLibrary1, 2 and 3) into a new assembly called MyNewLibrary.dll. It will copy the attributes (file version, product version etc..) from ClassLibrary1.dll, using the /attr switch. For more info on the ILMerge command line tool, see the above link. This approach works, but requires a little bit too much knowledge for the developers creating builds, therefore I have implemented a custom activity that wraps the use of ILMerge. This makes it much simpler to set up a new build definition and have the build automatically do the merging. The usage of the activity is then implemented as part of the Library Build process template mentioned in the previous post. For this article I have just created a simple build process template that only performs the ILMerge operation.   Below is the code for the custom activity. To make it compile, you need to reference the ILMerge.exe assembly. /// <summary> /// Activity for merging a list of assemblies into one, using ILMerge /// </summary> public sealed class ILMergeActivity : BaseCodeActivity { /// <summary> /// A list of file paths to the assemblies that should be merged /// </summary> [RequiredArgument] public InArgument<IEnumerable<string>> InputAssemblies { get; set; } /// <summary> /// Full path to the generated assembly /// </summary> [RequiredArgument] public InArgument<string> OutputFile { get; set; } /// <summary> /// Which input assembly the attributes for the generated assembly should be copied from. /// Optional. If not specified, the first input assembly will be used /// </summary> public InArgument<string> AttributeFile { get; set; } /// <summary> /// Kind of assembly to generate, dll or exe /// </summary> public InArgument<TargetKindEnum> TargetKind { get; set; } // If your activity returns a value, derive from CodeActivity<TResult> // and return the value from the Execute method.
    protected override void Execute(CodeActivityContext context) { string message = InputAssemblies.Get(context).Aggregate("", (current, assembly) => current + (assembly + " ")); TrackMessage(context, "Merging " + message + " into " + OutputFile.Get(context)); ILMerge m = new ILMerge(); m.SetInputAssemblies(InputAssemblies.Get(context).ToArray()); m.TargetKind = TargetKind.Get(context) == TargetKindEnum.Dll ? ILMerge.Kind.Dll : ILMerge.Kind.Exe; m.OutputFile = OutputFile.Get(context); m.AttributeFile = !String.IsNullOrEmpty(AttributeFile.Get(context)) ? AttributeFile.Get(context) : InputAssemblies.Get(context).First(); m.SetTargetPlatform(RuntimeEnvironment.GetSystemVersion().Substring(0,2), RuntimeEnvironment.GetRuntimeDirectory()); m.Merge(); TrackMessage(context, "Generated " + m.OutputFile); } } [Browsable(true)] public enum TargetKindEnum { Dll, Exe } NB: The activity inherits from a BaseCodeActivity class, which is an internal helper class that contains some methods and properties useful for most custom activities. In this case, it uses the TrackMessage method for writing to the build log. You either need to remove the TrackMessage method calls, or implement this yourself (which is not very hard… ) The custom activity has the following input arguments: InputAssemblies A list with the (full) paths to the assemblies to merge OutputFile The name of the resulting merged assembly AttributeFile Which assembly to use as the template for the attributes of the merged assembly. This argument is optional and if left blank, the first assembly in the input list is used TargetKind Decides what type of assembly to create, can be either a dll or an exe Of course, there are more switches to ILMerge.exe, and these can be exposed as input arguments as well if you need them. To show how the custom activity can be used, I have attached a build process template (see link at the top of this post) that merges the output of the projects being built (CommonLibrary.dll and CommonLibrary2.dll) into a merged assembly (NewLibrary.dll). The build process template has the following custom process parameters:   The Assemblies To Merge argument is passed into a FindMatchingFiles activity to locate all assemblies that are located in the BinariesDirectory folder after the compilation has been performed by Team Build. Here is the complete sequence of activities that performs the merge operation. It is located at the end of the Try, Compile, Test and Associate… sequence: It splits the AssembliesToMerge parameter and appends the full path (using the BinariesDirectory variable) and then enumerates the matching files using the FindMatchingFiles activity. When running the build, you can see that it merges two assemblies into a new one:     And the merged assembly (and associated pdb file) is copied to the drop location together with the rest of the assemblies:


  • Android - creating a custom preferences activity screen

    - by Bill Osuch
    Android applications can maintain their own internal preferences (and allow them to be modified by users) with very little coding. In fact, you don't even need to write any code to explicitly save these preferences, it's all handled automatically! Create a new Android project, with an initial activity titled Main. Create two more activities: ShowPrefs, which extends Activity, and SetPrefs, which extends PreferenceActivity. Add these two to your AndroidManifest.xml file: <activity android:name=".SetPrefs"></activity> <activity android:name=".ShowPrefs"></activity> Now we'll work on fleshing out each activity. First, open up the main.xml layout file and add a couple of buttons to it: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent"> <Button android:text="Edit Preferences" android:id="@+id/prefButton" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center_horizontal"/> <Button android:text="Show Preferences" android:id="@+id/showButton" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center_horizontal"/> </LinearLayout> Next, create a couple of button listeners in Main.java to handle the clicks and start the other activities: Button editPrefs = (Button) findViewById(R.id.prefButton); editPrefs.setOnClickListener(new View.OnClickListener() { public void onClick(View view) { Intent myIntent = new Intent(view.getContext(), SetPrefs.class); startActivityForResult(myIntent, 0); } }); Button showPrefs = (Button) findViewById(R.id.showButton); showPrefs.setOnClickListener(new View.OnClickListener() { public void onClick(View view) { Intent myIntent = new Intent(view.getContext(), ShowPrefs.class); startActivityForResult(myIntent, 0); } }); Now, we'll create the actual preferences layout. You'll need to create a file called preferences.xml inside res/xml, and you'll likely have to create the xml directory as well. Add the following xml: <?xml version="1.0" encoding="utf-8"?> <PreferenceScreen xmlns:android="http://schemas.android.com/apk/res/android"> </PreferenceScreen> First we'll add a category, which is just a way to group similar preferences... sort of a horizontal bar. Add this inside the PreferenceScreen tags: <PreferenceCategory android:title="First Category"> </PreferenceCategory> Now add a CheckBox and an EditText box (inside the PreferenceCategory tags): <CheckBoxPreference android:key="checkboxPref" android:title="Checkbox Preference" android:summary="This preference can be true or false" android:defaultValue="false"/> <EditTextPreference android:key="editTextPref" android:title="EditText Preference" android:summary="This allows you to enter a string" android:defaultValue="Nothing"/> The key is how you will refer to the preference in code, the title is the large text that will be displayed, and the summary is the smaller text (this will make sense when you see it). Let's say we've got a second group of preferences that apply to a different part of the app.
    Add a new category just below the first one: <PreferenceCategory android:title="Second Category"> </PreferenceCategory> In there we'll add a list with radio buttons, so add: <ListPreference android:key="listPref" android:title="List Preference" android:summary="This preference lets you select an item in a array" android:entries="@array/listArray" android:entryValues="@array/listValues" /> When complete, your full xml file should look like this: <?xml version="1.0" encoding="utf-8"?> <PreferenceScreen xmlns:android="http://schemas.android.com/apk/res/android"> <PreferenceCategory android:title="First Category"> <CheckBoxPreference android:key="checkboxPref" android:title="Checkbox Preference" android:summary="This preference can be true or false" android:defaultValue="false"/> <EditTextPreference android:key="editTextPref" android:title="EditText Preference" android:summary="This allows you to enter a string" android:defaultValue="Nothing"/> </PreferenceCategory> <PreferenceCategory android:title="Second Category"> <ListPreference android:key="listPref" android:title="List Preference" android:summary="This preference lets you select an item in a array" android:entries="@array/listArray" android:entryValues="@array/listValues" /> </PreferenceCategory> </PreferenceScreen> However, when you try to save it, you'll get an error because you're missing your array definition. To fix this, add a file called arrays.xml in res/values, and paste in the following: <?xml version="1.0" encoding="utf-8"?> <resources> <string-array name="listArray"> <item>Value 1</item> <item>Value 2</item> <item>Value 3</item> </string-array> <string-array name="listValues"> <item>1</item> <item>2</item> <item>3</item> </string-array> </resources> Finally (for the preferences screen at least...) add the code that will display the preferences layout to the SetPrefs.java file: @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); addPreferencesFromResource(R.xml.preferences); } OK, so now we've got an activity that will set preferences, and save them without the need to write custom save code. Let's throw together an activity to work with the saved preferences.
    Create a new layout called showpreferences.xml and give it three TextViews: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent"> <TextView android:id="@+id/textview1" android:layout_width="fill_parent" android:layout_height="wrap_content" android:text="textview1"/> <TextView android:id="@+id/textview2" android:layout_width="fill_parent" android:layout_height="wrap_content" android:text="textview2"/> <TextView android:id="@+id/textview3" android:layout_width="fill_parent" android:layout_height="wrap_content" android:text="textview3"/> </LinearLayout> Open up the ShowPrefs.java file and have it use that layout: setContentView(R.layout.showpreferences); Then add the following code to load the DefaultSharedPreferences and display them: SharedPreferences prefs = PreferenceManager.getDefaultSharedPreferences(this); TextView text1 = (TextView)findViewById(R.id.textview1); TextView text2 = (TextView)findViewById(R.id.textview2); TextView text3 = (TextView)findViewById(R.id.textview3); text1.setText(new Boolean(prefs.getBoolean("checkboxPref", false)).toString()); text2.setText(prefs.getString("editTextPref", "<unset>")); text3.setText(prefs.getString("listPref", "<unset>")); Fire up the application in the emulator and click the Edit Preferences button. Set various things, click the back button, then the Edit Preferences button again. Notice that your choices have been saved. Now click the Show Preferences button, and you should see the results of what you set: There are two more preference types that I did not include here: RingtonePreference - shows a radioGroup that lists your ringtones PreferenceScreen - allows you to embed a second preference screen inside the first - it opens up a new set of preferences when clicked


  • SQL SERVER – Difference Between DATETIME and DATETIME2

    - by pinaldave
    Yesterday I wrote a very quick blog post on SQL SERVER – Difference Between GETDATE and SYSDATETIME and I got a tremendous response to it. I suggest you read that blog post before continuing with this one. I had asked people to honestly take part and share their view about the above two system functions. There are a few emails as well as a few comments on the blog post asking how I came to know the difference between the two. The answer is real world issues. I was called in for a performance tuning consultancy where I was asked a very strange question by one developer. Here is the situation he was facing. The system had a single table with two different datetime columns. One column was datelastmodified and the second column was datefirstmodified. One of the columns was DATETIME and the other was DATETIME2. The developer was populating both with SYSDATETIME(). He was always thinking that the values inserted in the table would be the same. This table was only accessed by INSERT statements and no updates were done to it in the application. One fine day he ran DISTINCT on both of these columns and was in for a surprise. He always thought that both of the columns would have the same data, but in fact they had very different data. He presented this scenario to me. I said this could not be possible, but when I looked at the resultset, I had to agree with him. Here is a simple script generated to demonstrate the problem he was facing. This is just a sample of the original table. DECLARE @Intveral INT SET @Intveral = 10000 CREATE TABLE #TimeTable (FirstDate DATETIME, LastDate DATETIME2) WHILE (@Intveral > 0) BEGIN INSERT #TimeTable (FirstDate, LastDate) VALUES (SYSDATETIME(), SYSDATETIME()) SET @Intveral = @Intveral - 1 END GO SELECT COUNT(DISTINCT FirstDate) D_GETDATE, COUNT(DISTINCT LastDate) D_SYSGETDATE FROM #TimeTable GO SELECT DISTINCT a.FirstDate, b.LastDate FROM #TimeTable a INNER JOIN #TimeTable b ON a.FirstDate = b.LastDate GO SELECT * FROM #TimeTable GO DROP TABLE #TimeTable GO Let us see the resultset. You can clearly see from the result that SYSDATETIME() does not populate the same value in both of the fields. In fact the value is either rounded down or rounded up in the DATETIME field. Even though we are populating the same value, the values are totally different in the two columns, causing the SELF JOIN to fail and DISTINCT to display different values. The best policy is: if you are using DATETIME use GETDATE(), and if you are using DATETIME2 use SYSDATETIME(), to populate them with the current date and time and accurately address the precision. As DATETIME2 was introduced in SQL Server 2008, the above script will only work with SQL Server 2008 and later versions. I hope I have answered a few of the questions asked yesterday. Reference: Pinal Dave (http://www.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL DateTime, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
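    The rounding is easy to see in isolation: DATETIME stores time in 1/300-second increments (fractional seconds always end in .000, .003 or .007), so a DATETIME2 value loses precision when converted. A minimal illustration (assumes SQL Server 2008 or later):

        DECLARE @d2 DATETIME2 = SYSDATETIME()
        SELECT @d2 AS [datetime2 value],
        CAST(@d2 AS DATETIME) AS [datetime value] -- rounded to a 1/300-second tick
        GO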


  • Microsoft Tech-Ed North America 2010 - SQL Server Upgrade, 2000 - 2005 - 2008: Notes and Best Practices from the Field

    - by ssqa.net
    It is just a week to go until Tech-Ed North America 2010 in New Orleans, and this time I'm also speaking at the conference on the subject - SQL Server Upgrade, 2000 - 2005 - 2008: Notes and Best Practices from the Field... more from here.. It is a coincidence that this is the 2nd time this talk, which I have presented at SQLBits before, has been selected for Tech-Ed North America....(read more)


  • How to create item in SharePoint2010 document library using SharePoint Web service

    - by ybbest
    Today, I'd like to show you how to create an item in a SharePoint 2010 document library using the SharePoint web services. Originally, I thought I could use WebSvcLists (lists.asmx), which provides methods for working with lists and list data. However, after a bit of Googling, I realized that I need to use WebSvcCopy (copy.asmx). Here is the code used: private const string siteUrl = "http://ybbest"; private static void Main(string[] args) { using (CopyWSProxyWrapper copyWSProxyWrapper = new CopyWSProxyWrapper(siteUrl)) { copyWSProxyWrapper.UploadFile("TestDoc2.pdf", new[] {string.Format("{0}/Shared Documents/TestDoc2.pdf", siteUrl)}, Resource.TestDoc, GetFieldInfos().ToArray()); } } private static List<FieldInformation> GetFieldInfos() { var fieldInfos = new List<FieldInformation>(); //The InternalName , DisplayName and FieldType are both required to make it work fieldInfos.Add(new FieldInformation { InternalName = "Title", Value = "TestDoc2.pdf", DisplayName = "Title", Type = FieldType.Text }); return fieldInfos; } Here is the code for the proxy wrapper. public class CopyWSProxyWrapper : IDisposable { private readonly string siteUrl; public CopyWSProxyWrapper(string siteUrl) { this.siteUrl = siteUrl; } private readonly CopySoapClient proxy = new CopySoapClient(); public void UploadFile(string testdoc2Pdf, string[] destinationUrls, byte[] testDoc, FieldInformation[] fieldInformations) { using (CopySoapClient proxy = new CopySoapClient()) { proxy.Endpoint.Address = new EndpointAddress(String.Format("{0}/_vti_bin/copy.asmx", siteUrl)); proxy.ClientCredentials.Windows.ClientCredential = CredentialCache.DefaultNetworkCredentials; proxy.ClientCredentials.Windows.AllowedImpersonationLevel = TokenImpersonationLevel.Impersonation; CopyResult[] copyResults = null; try { proxy.CopyIntoItems(testdoc2Pdf, destinationUrls, fieldInformations, testDoc, out copyResults); } catch (Exception e) { System.Console.WriteLine(e); } if (copyResults != null) System.Console.WriteLine(copyResults[0].ErrorMessage); System.Console.ReadLine(); } } public void Dispose() { proxy.Close(); } } You can download the source code here. ******Update********** It seems to be a bug that you cannot set the content type when creating a document item using copy.asmx. In SP2007 the field type was Choice; however, in SP2010 it is actually Computed. I have tried using the Computed field type with no luck. I have also tried sending the ContentTypeId and this does not work. You might have to write your own web services to handle this. You can check my previous blog on how to get started with your own custom WCF service in SP2010 here. References: SharePoint 2010 Web Services SharePoint 2007 Web Services SharePoint MSDN Forum


  • Sending Messages to SignalR Hubs from the Outside

    - by Ricardo Peres
    Introduction You are by now probably familiar with SignalR, Microsoft's API for real-time web functionality. This is, in my opinion, one of the greatest products Microsoft has released in recent times. Usually, people log in to a site and enter some page which is connected to a SignalR hub. Then they can send and receive messages – not just text messages, mind you – to other users in the same hub. The server can also take the initiative to send messages to all or a specified subset of users on its own; this is known as server push. The normal flow is pretty straightforward; Microsoft has done a great job with the API, it's clean and quite simple to use. And for the latter – the server taking the initiative – it's also quite simple, it just involves a little more work. The Problem The API for sending messages is available from inside a hub – an instance of the Hub class – which is something that we don't have if we are the server and we want to send a message to some user or group of users: the Hub instance is only instantiated in response to a client message. The Solution It is possible to acquire a hub's context from outside of an actual Hub instance, by calling GlobalHost.ConnectionManager.GetHubContext<T>(). This API allows us to: Broadcast messages to all connected clients (possibly excluding some); Send messages to a specific client; Send messages to a group of clients. So, we have groups and clients, each identified by a string. Client strings are called connection ids and group names are free-form, given by us. The problem with connection ids is, we do not know how these map to actual users. One way to achieve this mapping is by overriding the Hub's OnConnected and OnDisconnected methods and managing the association there. Here's an example:

        public class MyHub : Hub
        {
            private static readonly IDictionary<String, ISet<String>> users = new ConcurrentDictionary<String, ISet<String>>();

            public static IEnumerable<String> GetUserConnections(String username)
            {
                ISet<String> connections;

                users.TryGetValue(username, out connections);

                return (connections ?? Enumerable.Empty<String>());
            }

            private static void AddUser(String username, String connectionId)
            {
                ISet<String> connections;

                if (users.TryGetValue(username, out connections) == false)
                {
                    connections = users[username] = new HashSet<String>();
                }

                connections.Add(connectionId);
            }

            private static void RemoveUser(String username, String connectionId)
            {
                users[username].Remove(connectionId);
            }

            public override Task OnConnected()
            {
                AddUser(this.Context.Request.User.Identity.Name, this.Context.ConnectionId);
                return (base.OnConnected());
            }

            public override Task OnDisconnected()
            {
                RemoveUser(this.Context.Request.User.Identity.Name, this.Context.ConnectionId);
                return (base.OnDisconnected());
            }
        }

    As you can see, I am using a static field to store the mapping between a user and its possibly many connections – for example, multiple open browser tabs or even multiple browsers accessing the same page with the same login credentials. The user identity, as is normal in .NET, is obtained from the IPrincipal, which in SignalR hubs is stored in Context.Request.User. Of course, this property will only have a meaningful value if we enforce authentication.
Another way to go is by creating a group for each user that connects:

    public class MyHub : Hub
    {
        public override Task OnConnected()
        {
            this.Groups.Add(this.Context.ConnectionId, this.Context.Request.User.Identity.Name);
            return (base.OnConnected());
        }

        public override Task OnDisconnected()
        {
            this.Groups.Remove(this.Context.ConnectionId, this.Context.Request.User.Identity.Name);
            return (base.OnDisconnected());
        }
    }

    In this case, we will have a one-to-one equivalence between users and groups. All connections belonging to the same user will fall in the same group. So, if we want to send messages to a user from outside an instance of the Hub class, we can do something like this, for the first option – user mappings stored in a static field:

    public void SendUserMessage(String username, String message)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<MyHub>();

        foreach (String connectionId in MyHub.GetUserConnections(username))
        {
            context.Clients.Client(connectionId).sendUserMessage(message);
        }
    }

    And for using groups, it's even simpler:

    public void SendUserMessage(String username, String message)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<MyHub>();

        context.Clients.Group(username).sendUserMessage(message);
    }

    Using groups has the advantage that the IHubContext interface returned from GetHubContext has direct support for groups, with no need to send messages to individual connections. Of course, you can wrap both mapping options in a common API, perhaps exposed through IoC. One example of its interface might be:

    public interface IUserToConnectionMappingService
    {
        // Associate and dissociate connections to users

        void AddUserConnection(String username, String connectionId);

        void RemoveUserConnection(String username, String connectionId);
    }

    SignalR has built-in dependency resolution, by means of the static GlobalHost.DependencyResolver property:

    // For using groups (in the Global class)
    GlobalHost.DependencyResolver.Register(typeof(IUserToConnectionMappingService), () => new GroupsMappingService());

    // For using a static field (in the Global class)
    GlobalHost.DependencyResolver.Register(typeof(IUserToConnectionMappingService), () => new StaticMappingService());

    // Retrieving the current service (in the Hub class)
    var mapping = GlobalHost.DependencyResolver.Resolve<IUserToConnectionMappingService>();

    Now all you have to do is implement GroupsMappingService and StaticMappingService with the code I've shown here and change the SendUserMessage method to rely on the dependency resolver for the actual implementation. Stay tuned for more SignalR posts!
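    To make that last step concrete, here is a minimal sketch of what the groups-based implementation could look like. The class and interface names come from the post itself; the method bodies are my assumption, built on the same IHubContext.Groups API used above:

    public class GroupsMappingService : IUserToConnectionMappingService
    {
        // One group per user: adding a connection to the user's group makes
        // context.Clients.Group(username) reach all of that user's connections.
        public void AddUserConnection(String username, String connectionId)
        {
            var context = GlobalHost.ConnectionManager.GetHubContext<MyHub>();
            context.Groups.Add(connectionId, username);
        }

        public void RemoveUserConnection(String username, String connectionId)
        {
            var context = GlobalHost.ConnectionManager.GetHubContext<MyHub>();
            context.Groups.Remove(connectionId, username);
        }
    }

    The hub's OnConnected and OnDisconnected overrides would then resolve IUserToConnectionMappingService and call AddUserConnection and RemoveUserConnection, instead of talking to Groups or the static dictionary directly.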

    Read the article

  • How Mary Meeker’s Latest Findings May Make You Re-Imagine Commerce

    - by Brenna Johnson-Oracle
    Today, Mary Meeker released her highly anticipated annual "Internet Trends" presentation for 2014. All 164 slides are jam-packed with pretty much everything you need to know about the state of the Internet. And as luck would have it, Oracle is staying ahead of these trends (but we'll talk about that later). There were a few surprises, some stats to solidify what you likely already know, and Meeker's novel observations about where we are all going. What interested me the most is not only how people are engaging in their personal lives, but how they engage with brands. As you could probably predict, Internet usage growth is slowing while tablet user and mobile data traffic growth continue their meteoric rise around the globe, with tremendous growth in underpenetrated markets like China, India, Brazil and Indonesia. Now hold the "Internet is dead" comments. Keep in mind there's still plenty of room to grow, and a multiscreen model is Meeker's vision for our future. Despite 1.5x YOY growth for mobile traffic, mobile still only makes up about 23% of all traffic today. With tablet shipments easily outpacing figures for PCs even at their height (in 2007), mobile will only continue on its path, but won't be everything to everyone. Mobile won't replace every touchpoint; it's just created our shorter attention spans and demand for simpler, more personal experiences. As Meeker points out, TVs, tablets, PCs, and smartphones are used for different activities at present, but lines will blur (for example, 84% of smartphone owners use their device while watching TV). Day-to-day activities are being re-imagined through simple, beautiful user experiences. It seems like every day I discover a new way a brand/site/app made the most mundane or mounting task enjoyable and frictionless – and I'm not alone. Meeker points out that the evolution of how we do everything from how we communicate, get information, use money, meet someone, get places, order a meal, and consume media is all done through new user interfaces that make day-to-day tasks simpler. This movement has caused just about everyone's patience for a poor UX to take a nosedive. And it's not just the digital user experience; technology is making a lot of people's offline lives easier, and less expensive. Today 47% of online shopping utilizes free shipping – nearly half. And Meeker predicts same-day local delivery will be the "next big thing" (and you can take a guess on who will own that). Content, Community and Commerce create the "Internet Trifecta." Meeker pointed out that when content, communities and commerce occur in a single experience it's embraced by consumers, which translates to big dollars for brands. The magic happens when consumers can get inspired, research, and buy in a single experience. 
As the buying cycle has changed and touchpoints (Web, mobile, social, store) are no longer tied to "roles" or steps in the customer journey, brands must make all experiences (content and commerce) available in a single, adaptable experience. (We at Oracle Commerce have a lot to say on this topic – stay tuned!) And in what Meeker calls the "biggest re-imagination of all": consumers enabled with smartphones and sensors are creating troves of findable and sharable data, which she says is in the early stages, but growing rapidly. She notes that the transparency and patterns of consumers with this hardware (FYI – there are up to 10 sensors embedded in smartphones now) have created a Big Data treasure chest to be mined to improve business and the life of the consumer. The opportunities are endless. So what does it all mean for a company doing business online? Start thinking about how you can:

Re-imagine your experience. Not your online experience and your mobile experience and your social experience – your overall experience. When consumers can research, buy, and advocate from anywhere (and their attention spans are at an all-time low) channels don't exist. Enable simple and beautiful interactions informed by all of the online and offline data you leverage across your enterprise. Ethically leverage the endless supply of data (user generated content, clicks, purchases, in-store behavior, social activity) to make experiences more beautiful, more accurate, and more personalized (not to mention, more lucrative for you).

Re-imagine content and commerce. Content and commerce must co-exist in a single destination where shoppers can get inspired, explore, research, share, and purchase in a collective experience. Think of how you can deliver an experience where all types of experiences (brand stories and commerce) adapt to every customer need. (Look for more on this topic coming soon.)

Re-imagine your reach. Look to Meeker's findings to see how the global appetite for digital experiences is growing, but under-served in many places (i.e.: India, Mexico, Indonesia, Brazil, Philippines, etc.). Growing your online business to a new geography doesn't have to mean starting from scratch or having an entirely new team manage the new endeavor. Expand using what you've already built in a multisite framework, with global language support. And of course, make sure it's optimized for mobile!

Re-imagine the possible. After every Meeker report, I'm always left with the thought "we are just at the beginning." Every day there is more data, more possibilities, more online consumers, and more opportunities to use the latest technology to get closer to your customers and be more successful. There's a lot going on in our Product Development and Product Innovations groups to automate innovation for our customers, so that they can continue to stay ahead of these trends, without disrupting their business. Check out a recent interview with our Innovations Team on some of these new possibilities.

Staying on track despite the seemingly endless possibilities out there is the hard part. Prioritizing where you will focus based on your unique brand promise, customers and goals is what you do best. To learn how Oracle Commerce can help your business achieve your goals, check out oracle.com/commerce. Check out Meeker's entire report here.

    Read the article

  • Solaris 11 Launch Blog Carnival Roundup

    - by constant
    Solaris 11 is here! And together with the official launch activities, a lot of Oracle and non-Oracle bloggers contributed helpful and informative blog articles to help your datacenter go to eleven. Here are some notable blog postings, sorted by category for your Solaris 11 blog-reading pleasure:

    Getting Started/Overview

    A lot of people speculated that the official launch of Solaris 11 would be on 11/11 (whatever way you want to turn it), but it actually happened two days earlier. Larry Wake himself offers 11 Reasons Why Oracle Solaris 11 11/11 Isn't Being Released on 11/11/11. Then, Larry goes on with a summary: Oracle Solaris 11: The First Cloud OS gives you a short and sweet rundown of what the major new features of Solaris 11 are. Jeff Victor has his own list of What's New in Oracle Solaris 11. A popular Solaris 11 meme is to write a blog post about 11 favourite features: Jim Laurent's 11 Reasons to Love Solaris 11, Darren Moffat's 11 Favourite Solaris 11 Features, and Mike Gerdt's 11 of My Favourite Things! are just three examples of "11 Favourite Things..." type blog posts; I'm sure many more will follow... More official overview content for Solaris 11 is available from the Oracle Tech Network Solaris 11 Portal. Also, check out Rick Ramsey's blog post Solaris 11 Resources for System Administrators on the OTN Blog and his secret 5 Commands That Make Solaris Administration Easier post from the OTN Garage.

    (Automatic) Installation and the Image Packaging System (IPS)

    The brand new Image Packaging System (IPS) and the Automatic Installer (AI), together with numerous other install/packaging/boot/patching features, are among the most significant improvements in Solaris 11. But before installing, you may wonder whether Solaris 11 will support your particular set of hardware devices. Again, the OTN Garage comes to the rescue with Rick Ramsey's post How to Find Out Which Devices Are Supported By Solaris 11. Included is a useful guide to all the first steps to get your Solaris 11 system up and running. Tim Foster had a whole handful of blog posts lined up for the launch, teaching you everything you need to know about IPS but didn't dare to ask: The IPS System Repository, IPS Self-assembly - Part 1: Overlays and Part 2: Multiple Packages Delivering Configuration. Watch out for more IPS posts from Tim! If installing packages or upgrading your system from the net makes you uneasy, then you're not alone: Jim Laurent will teach you how Building a Solaris 11 Repository Without Network Connection will make your life easier. Many of you have already peeked into the future by installing Solaris 11 Express. If you're now wondering whether you can upgrade or whether a fresh install is necessary, then check out Alan Hargreaves's post Upgrading Solaris 11 Express b151a with support to Solaris 11. The trick is in upgrading your pkg(1M) first.

    Networking

    One of the first things to do after installing Solaris 11 (or any operating system for that matter) is to set it up for networking. Solaris 11 comes with the brand new "Network Auto-Magic" feature which can figure out everything by itself. For those cases where you want to exercise a little more control, Solaris 11 left a few people scratching their heads. Fortunately, Tschokko wrote up this cool blog post, Solaris 11 manual IPv4 & IPv6 configuration, right after the launch ceremony. Thanks, Tschokko! 
And Milek points out a long awaited networking feature in Solaris 11 called Solaris 11 - hostmodel, which I know for a fact that many customers have looked forward to: how to "bind" a Solaris 11 system to a specific gateway for a specific IP address it is using. Steffen Weiberle teaches us how to tune the Solaris 11 networking stack the proper way: ipadm(1M). No more fiddling with ndd(1M)! Check out his tutorial on Solaris 11 Network Tunables. And if you want to get even deeper into the networking stack, there's nothing better than DTrace. Alan Maguire teaches you in DTracing TCP Congestion Control how to probe deeply into the Solaris 11 TCP/IP stack, the TCP congestion control part in particular. Don't miss his other DTrace and TCP related blog posts!

DTrace

And there we are: DTrace, the king of all observability tools. Long time DTrace veteran and co-author of The DTrace book*, Brendan Gregg blogged about Solaris 11 DTrace syscall provider changes. BTW, after you install Solaris 11, check out the DTrace toolkit which is installed by default in /usr/dtrace/DTT. It is chock full of handy DTrace scripts, many of which were contributed by Brendan himself!

Security

Another big theme in Solaris 11, and one that is crucial for the success of any operating system in the Cloud, is Security. Here are some notable posts in this category: Darren Moffat starts by showing us how to completely get rid of root: Completely Disabling Root Logins on Solaris 11. With no root user, there's one major entry point less to worry about. But that's only the start. In Immutable Zones on Encrypted ZFS, Darren shows us how to double the security of your services: first by locking them into the new Immutable Zones feature, then by encrypting their data using the new ZFS encryption feature. And if you're still missing sudo from your Linux days, Darren again has a solution: Password (PAM) caching for Solaris su - "a la sudo". If you're wondering how much compute power all this encryption will cost you, you're in luck: The Solaris X86 AESNI OpenSSL Engine will make sure you'll use your Intel's embedded crypto support to its fullest. And if you own a brand new SPARC T4 machine you're even luckier: it comes with its own SPARC T4 OpenSSL Engine. Dan Anderson's posts show how there really is no excuse not to encrypt any more...

Developers

Solaris 11 has a lot to offer to developers as well. Ali Bahrami has a series of blog posts that cover diverse developer topics: elffile: ELF Specific File Identification Utility, Using Stub Objects and The Stub Proto: Not Just For Stub Objects Anymore, to name a few. BTW, if you're a developer and want to shape the future of Solaris 11, then Vijay Tatkar has a hint for you: Oracle (Sun Systems Group) is hiring!

Desktop and Graphics

Yes, Solaris 11 is a 100% server OS, but it can also offer a decent desktop environment, especially if you are a developer. Alan Coopersmith starts by discussing S11 X11: ye olde window system in today's new operating system, then Calum Benson shows us around What's new on the Solaris 11 Desktop. Even accessibility is a first-class citizen in the Solaris 11 user interface. Peter Korn celebrates: Accessible Oracle Solaris 11 - released!

Performance

Gone are the days of "Slowaris", when Solaris was among the few OSes that "did the right thing" while others cut corners just to win benchmarks. Today, Solaris continues doing the right thing, and it delivers the right performance at the same time. Need proof? 
Check out Brian's BestPerf blog with continuous updates from the benchmarking lab, including Recent Benchmarks Using Oracle Solaris 11!

Send Me More Solaris 11 Launch Articles!

These are just a few of the more interesting blog articles that came out around the Solaris 11 launch; I'm sure there are many more! Feel free to post a comment below if you find a particularly interesting blog post that hasn't been listed so far and share your enthusiasm for Solaris 11!

*Affiliate link: Buy cool stuff and support this blog at no extra cost. We both win!

    Read the article

  • Partitioning Webcast Details - 17/03/2010

    - by Alex Blyth
    Hi All,
    Here are the details for Wednesday's (17th March 2010) webcast on Partitioning:
    Webcast is at http://strtc.oracle.com (IE6, 7 & 8 supported only)
    Conference ID for the webcast is 6168728
    There is no conference key
    Please use your real name in the name field (just makes it easier for us to help you out if we can't answer your questions on the call)
    Audio details:
    NZ Toll Free - 0800888157 or
    AU Toll Free - 1800420354
    Meeting ID: 7914841
    Meeting Passcode: 17032010
    Talk to you all Wednesday
    Alex

    Read the article

  • A couple of nice features when using OracleTextSearch

    - by kyle.hatlestad
    If you have your UCM/URM instance configured to use the Oracle 11g database as the search engine, you can use OracleTextSearch as the search definition. OracleTextSearch uses the advanced features of Oracle Text for indexing and searching. This includes the ability to specify metadata fields to be optimized for the search index, fast rebuilding, and index optimization. If you are on 10g of UCM, then you'll need to load the OracleTextSearch component that is available in the CS10gR35UpdateBundle component on the support site (patch #6907073). If you are on 11g, no component is needed. Then you specify the search indexer name with the configuration flag of SearchIndexerEngineName=OracleTextSearch. Please see the docs for other configuration settings and setup instructions. So I thought I would highlight a couple of other unique features available with OracleTextSearch. The first is the Drill Down feature. This feature allows you to specify metadata fields whose values break down the total results. So in the above graphic, you can see how it broke down the extensions and gives a count for each. Then you just need to click on that link to drill into those results. This setting is perfect for option list fields and ones with a distinct set of possible values. By default, it will use the fields Type, Security Group, and Account (if enabled). But you can also specify your own fields. In 10g, you can use the following configuration entry:

    DrillDownFields=xWebsiteObjectType,dExtension,dSecurityGroup,dDocType

    And in 11g, you can specify it through the Configuration Manager applet. Simply click on the Advanced Search Design, highlight the field to filter, click Edit, and check 'Is a filter category'. The other feature you get with OracleTextSearch is search snippets. These snippets show the occurrence of the search term in the context of its usage. This is very similar to how Google displays its results. If you are on 10g, this is enabled by default. If you are on 11g, you need to turn on the feature. The following configuration entry will enable it:

    OracleTextDisableSearchSnippet=false

    Once enabled, you can add the snippets to your search results. Go to Change View -> Customize and add a new search result view. In the Available Fields in the Special section, select Snippet and move it to the Main or Additional Information. If you want to include the snippets with the Classic results, you can add the idoc variable of <$srfDocSnippet$> to display them. One caveat is that this can affect search performance on large collections. So plan the infrastructure accordingly.

    Read the article

  • SSIS Design Pattern: Loading Variable-Length Rows

    - by andyleonard
    Introduction

    I encounter flat file sources with variable-length rows on occasion. Here, I supply one SSIS design pattern for loading them.

    What's a Variable-Length Row Flat File?

    Great question – let's start with a definition. A variable-length row flat file is a text source of some flavor – comma-separated values (CSV), tab-delimited file (TDF), or even fixed-length, positional-, or ordinal-based (where the location of the data on the row defines its field). The major difference between a "normal"...(read more)

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blogpost about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro, but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific to a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I've been in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database. I'm not suggesting there's no room for improvement in today's O/R mapper frameworks – there always is – but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it's mainly coming down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spent inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), that 10ms difference won't be noticed in your application. That's why it's very important to find the real locations of the problems, so developers can fix them properly and not get frustrated because their quest for a fast, performing application failed.

    Performance tuning basics and rules

    Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason for a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or assuming things will be bad/slow without doing analysis, will lead you down the path of premature optimization and won't actually solve your problems, only create new ones. The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic. 
If I solely look at the Linq query and the code consuming the resultset of the 10 rows, and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem. The second most important rule you have to understand is based on the old saying "Penny wise, Pound Foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger part of the total time T for that same given task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: No analysis -> no problem -> no solution. One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance as possible out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O-characteristics, so you know the performance you'll get plus you know the algorithm will work. The time taken by the algorithm-implementing code is inevitable: you already implemented the best algorithm. You might find some optimizations on the technical level but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems.

Isolate

The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area with a clear begin and end and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task or the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.

Analyze

Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem. 
This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eats up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating part is essential. Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. What does the code do along the way? Verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths, just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing, which means we collect data about what could be wrong, for each participating part of the complete application. Reviewing the code you wrote is a good tool to get a deeper understanding of what is going on for a given task, but ultimately it lacks precision and an overview of what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get a deeper understanding of which parts contribute how much time to the total task, triggered by which other parts, and, for example, how many times they are called. There are two different kinds of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers.

.NET profiling

.NET profilers (e.g. dotTrace by JetBrains or Ants by Red Gate software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, how many times that code was called by other code, and thus reveal where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise pound foolish saying: if a profiler reveals that a group of methods are fast, or don't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter. As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet. 
You navigate to the particular part which is slow, start profiling in the profiler, perform the actions in your application which are considered slow, and afterwards you get a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area.

O/R mapper and RDBMS profiling

.NET profilers give you a good insight into the .NET side of things, but not into the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand which parts of the O/R mapper and database contribute how much to the total time taken for task T, we need different tools. There are two kinds of tools focusing on O/R mappers and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by Hibernating Rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating Rhinos also have profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf), which work the same as LLBLGen Prof. For RDBMS profilers, you have to check whether the RDBMS vendor has a profiler. For example, for SQL Server the profiler is shipped with SQL Server; for Oracle it's built into the RDBMS; however, there are also 3rd party tools. Which tool you're using isn't really important; what's important is that you get insight into which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage as they collect the time it took to execute the query from the application's perspective, so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset or a resultset with large blob/clob/ntext/image fields takes more time to get transported across the network than a small resultset, and a database profiler doesn't take this into account most of the time. Another tool to use in this case, which is more low level and not all O/R mappers support it (though LLBLGen Pro and NHibernate do), is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight into what's going on.

Interpret

After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid, which parts actually take place, and whether the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed, and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. 
don't contribute significantly to the total time taken) can be ignored from now on; we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now have to first look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task depends on the time the data is fetched from the database. If there was just 1 query executed, according to tracing or O/R mapper profilers / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data. Interpret means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10 row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time is spent in the .NET code, meaning the RDBMS part contributes the most to the total time taken; the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide whether further action in this area is necessary, based on what the analysis results show: if the analysis results were unexpected and there is room for improvement in the area which contributes the most time, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future, when someone else looks at the application and starts asking questions, you can answer them properly, and new analysis is only necessary if the situation changes.

Fix

After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromising memory consumption over performance) to avoid unnecessarily re-querying data and re-consuming the results. After applying a change, it's key you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether they moved the problem to a different part of the application. Don't fall into the trap of doing partial analysis: do the full analysis again: .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all. 
Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly onto the RDBMS; these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data-access and databases in general. In most cases it's not a big issue: alternatives are often good choices too and the compromises aren't that hard to deal with. What is important is that you document why you made a choice, a compromise: which analysis data, which interpretation led you to the choice made. This is key for good maintainability in the years to come.

Most common performance problems with O/R mappers

Below is an incomplete list of common performance problems related to data-access / O/R mappers / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step.

SELECT N+1 (lazy-loading specific). Lazy loading triggered performance bottlenecks. Consider a list of Orders bound to a grid. You have a field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) for each row the Customer row. This means you'll get for the single list not 1 query (for the orders) but 1+(the number of orders shown) queries. To solve this: use eager loading using a prefetch path to fetch the customers with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem.

Prefetch paths using many path nodes, or sorting, or limiting. Eager loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, it can be that the amount of data fetched in a child node is bigger than you think. Also consider that data in every node is merged on the client within the parent. This is fast, but it also can take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries.

Deep inheritance hierarchies of type Target Per Entity/Type. If you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take.

Fetching massive amounts of data by fetching large lists of entities. LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to processing large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure that you switch on server-side paging on the datasourcecontrol used. In this case, paging on the grid alone is not enough: this can lead to fetching a lot of data which is then loaded into the grid and paged there. Note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. 
when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the datareader will do DISTINCT filtering on the client. This is a little slower, but it does perform paging functionality on the data-reader, so it won't fetch all rows even if the query suggests it does.

Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded. LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data-transport across the network. Use this optimization if you see that there's a big difference between query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call.

Doing client-side aggregates/scalar calculations by consuming a lot of data. If possible, try to formulate a scalar query or group by query using the projection system or GetScalar functionality of LLBLGen Pro to do data consumption on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all in memory, then traverse the data in-memory to calculate a value.

Using .ToList() constructs inside linq queries. It might be that you use .ToList() somewhere in a Linq query, which makes the query run partially in-memory. Example:

var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c;

This will actually fetch all customers in-memory and do an in-memory filtering, as the linq query is defined on an IEnumerable<T>, and not on the IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run (a short sketch of this pitfall follows after the conclusion below).

Fetching all entities to delete into memory first. To delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache.

Fetching all entities to update with an expression into memory first. Similar to the previous point: it is more efficient to update a set of entities directly with a single UPDATE query using an expression instead of fetching the entities into memory first, updating them in a loop, and afterwards saving them. It might however be a compromise you don't want to take, as it works around the idea of having an object graph in memory which is manipulated, and instead makes the code fully aware there's a RDBMS somewhere.

Conclusion

Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success, as well as knowing what's going on inside the application you built. 
I hope you'll find this guide useful in tracking down performance problems and dealing with them effectively.
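
As a quick illustration of the .ToList() pitfall mentioned in the list above, here is a minimal sketch – the metaData context and Customers property follow the example in this post and are illustrative only:

// BAD: ToList() materializes ALL customers client-side first; the Where
// filter then runs in memory over an IEnumerable<T>.
var inMemory = (from c in metaData.Customers.ToList()
                where c.Country == "Norway"
                select c).ToList();

// GOOD: the query stays an IQueryable<T>, so the filter is translated
// into SQL and only the Norwegian customers cross the wire.
var inDatabase = (from c in metaData.Customers
                  where c.Country == "Norway"
                  select c).ToList();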

    Read the article

  • How to apply programmatic changes to the Terrain SplatPrototype

    - by Shivan Dragon
    I have a script to which I add a Terrain object (I drag and drop the terrain into the public Terrain field). The Terrain is already set up in Unity to have 2 PaintTextures: the first is a square (set up with a tile size so that it forms a checkered pattern) and the second one is a grass image. Also, the Target Strength of the first PaintTexture is lowered so that the checkered pattern also reveals some of the grass underneath. Now I want, at run time, to change the Tile Size of the first PaintTexture, i.e. have more or fewer checkers depending on various run time conditions. I've looked through Unity's documentation and I've seen you have the Terrain.terrainData.splatPrototypes array which allows you to change this. Also there's a RefreshPrototypes() method on the terrainData object and a Flush() method on the Terrain object. So I made a script like this:

    public class AStarTerrain : MonoBehaviour
    {
        public int aStarCellColumns, aStarCellRows;
        public GameObject aStarCellHighlightPrefab;
        public GameObject aStarPathMarkerPrefab;
        public GameObject utilityRobotPrefab;
        public Terrain aStarTerrain;

        void Start()
        {
            // I've also tried NOT drag and dropping the Terrain on the public field
            // and instead just using the commented line below, but I get the same results
            //aStarTerrain = this.GetComponents<Terrain>()[0];
            Debug.Log("Got terrain " + aStarTerrain.name);

            SplatPrototype[] splatPrototypes = aStarTerrain.terrainData.splatPrototypes;
            Debug.Log("Terrain has " + splatPrototypes.Length + " splat prototypes");

            SplatPrototype aStarCellSplat = splatPrototypes[0];
            Debug.Log("Re-tiling splat prototype " + aStarCellSplat.texture.name);
            aStarCellSplat.tileSize = new Vector2(2000, 2000);
            Debug.Log("Tiling is now " + aStarCellSplat.tileSize.x + "/" + aStarCellSplat.tileSize.y);

            aStarTerrain.terrainData.RefreshPrototypes();
            aStarTerrain.Flush();
        }
        //...

    Problem is, nothing gets changed; the checker map is not re-tiled. The console outputs correctly tell me that I've got the Terrain object with the right name, that it has the right number of splat prototypes and that I'm modifying the tileSize on the SplatPrototype object corresponding to the right texture. It also tells me the value has changed. But nothing gets updated in the actual graphical view. So please, what am I missing?
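    For what it's worth, here is a hedged sketch of the fix that is commonly suggested for this situation – the assumption (based on how Unity's TerrainData API behaves) is that the splatPrototypes getter returns a copy of the array, so in-place edits are lost unless the modified array is assigned back before refreshing:

    SplatPrototype[] splatPrototypes = aStarTerrain.terrainData.splatPrototypes;
    splatPrototypes[0].tileSize = new Vector2(2000, 2000);

    // Assign the modified array back to the terrain data; without this,
    // the changes only live on the copied array (assumption, see above).
    aStarTerrain.terrainData.splatPrototypes = splatPrototypes;
    aStarTerrain.terrainData.RefreshPrototypes();
    aStarTerrain.Flush();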

    Read the article

  • Customize Team Build 2010 – Part 13: Get control over the Build Output

    In the series the following parts have been published:
    Part 1: Introduction
    Part 2: Add arguments and variables
    Part 3: Use more complex arguments
    Part 4: Create your own activity
    Part 5: Increase AssemblyVersion
    Part 6: Use custom type for an argument
    Part 7: How is the custom assembly found
    Part 8: Send information to the build log
    Part 9: Impersonate activities (run under other credentials)
    Part 10: Include Version Number in the Build Number
    Part 11: Speed up opening my build process template
    Part 12: How to debug my custom activities
    Part 13: Get control over the Build Output
    Part 14: Execute a PowerShell script
    Part 15: Fail a build based on the exit code of a console application

    In part 8, I explained how you can add informational messages, warnings or errors to the build output. If you want to integrate other lines of text into the build output, you need to do more. This post will show you how you can add extra steps, additional information and hyperlinks to the build output.

    UPDATE 13-12-2010: Thanks to Jason Pricket, it is now also possible to not show every activity in the build log. This is really useful when you are doing for-loops in your template. To see how you can do that, check out Jason's blog: http://blogs.msdn.com/b/jpricket/archive/2010/12/09/tfs-2010-making-your-build-log-less-noisy.aspx

    Add a hyperlink to the end of the build output

    Let's start with a simple example of how you can adjust the build output. In this case we are going to add, at the end of the build output, a hyperlink the user can click to, for example, start the deployment to the test environment. In part 4 you can find information on how you can create a custom activity. To add information to the build output, you need the BuildDetail. This value is a variable in your xaml and is thus easily transferable to your custom activity. Besides the BuildDetail, the user also has to specify the text and the url that have to be added to the end of the build output. The following code segment shows you how you can achieve this.

    [BuildActivity(HostEnvironmentOption.All)]
    public sealed class AddHyperlinkToBuildOutput : CodeActivity
    {
        [RequiredArgument]
        public InArgument<IBuildDetail> BuildDetail { get; set; }

        [RequiredArgument]
        public InArgument<string> DisplayText { get; set; }

        [RequiredArgument]
        public InArgument<string> Url { get; set; }

        protected override void Execute(CodeActivityContext context)
        {
            // Obtain the runtime value of the input arguments
            IBuildDetail buildDetail = context.GetValue(this.BuildDetail);
            string displayText = context.GetValue(this.DisplayText);
            string url = context.GetValue(this.Url);

            // Add the hyperlink
            buildDetail.Information.AddExternalLink(displayText, new Uri(url));
            buildDetail.Information.Save();
        }
    }

    If you add this activity somewhere in your build process template (within the scope Run on Agent), you will get the following build output.

    Add a line of text to the build output

    The next challenge is to add this kind of output not only at the end of the build output but at the step that is currently executing. To be able to do this, you need the current node in the build output. The following code shows you how you can achieve this. 
First you need to get the current activity tracking, which you can get with the following line of code:

    IActivityTracking currentTracking = context.GetExtension<IBuildLoggingExtension>().GetActivityTracking(context);

    Then you can create a new node, set its type to Activity Tracking Node (so copy it from the current node) and do nice things with the node:

    IBuildInformationNode childNode = currentTracking.Node.Children.CreateNode();
    childNode.Type = currentTracking.Node.Type;
    childNode.Fields.Add("DisplayText", "This text is displayed.");

    You can also add a build step to display progress:

    IBuildStep buildStep = childNode.Children.AddBuildStep("Custom Build Step", "This is my custom build step");
    buildStep.FinishTime = DateTime.Now.AddSeconds(10);
    buildStep.Status = BuildStepStatus.Succeeded;

    Or you can add a hyperlink to the node:

    childNode.Children.AddExternalLink("My link", new Uri("http://www.ewaldhofman.nl"));

    When you combine this together you get the following result in the build output.

    You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.
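    Putting these fragments together, a minimal custom activity that writes its own child node could look like the sketch below. It only reuses the APIs shown in this post; the activity name and argument are mine, and depending on your template you may still need to persist the build information (as done with buildDetail.Information.Save() in the first example):

    [BuildActivity(HostEnvironmentOption.All)]
    public sealed class AddTextToCurrentStep : CodeActivity
    {
        [RequiredArgument]
        public InArgument<string> DisplayText { get; set; }

        protected override void Execute(CodeActivityContext context)
        {
            string displayText = context.GetValue(this.DisplayText);

            // Locate the node of the activity that is currently executing.
            IActivityTracking currentTracking = context.GetExtension<IBuildLoggingExtension>().GetActivityTracking(context);

            // Create a child node of the same type so it renders like the
            // surrounding activity nodes in the build log.
            IBuildInformationNode childNode = currentTracking.Node.Children.CreateNode();
            childNode.Type = currentTracking.Node.Type;
            childNode.Fields.Add("DisplayText", displayText);
        }
    }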

    Read the article

  • Built-in card-reader doesn't work. HP Compaq nx6325 notebook

    - by user10940
    I have an HP Compaq nx6325 notebook with a built-in card reader (SD, MS/Pro, MMC, SM, xD), and Ubuntu (10.10) doesn't see it. I've tried to install it manually, following these steps (and using this tifmxx driver), but it doesn't work. The compile log:

    $ echo /home/tvera/downloads/cr_install
    /home/tvera/downloads/cr_install
    $ make -C /lib/modules/2.6.35-25-generic/build M=/home/tvera/downloads/cr_install
    make[1]: Entering directory `/usr/src/linux-headers-2.6.35-25-generic'
    CC [M] /home/tvera/downloads/cr_install/tifm_core.o
    In file included from /home/tvera/downloads/cr_install/tifm_core.c:12:
    /home/tvera/downloads/cr_install/linux/tifm.h:128: error: field ‘cdev’ has incomplete type
    /home/tvera/downloads/cr_install/tifm_core.c: In function ‘tifm_uevent’:
    /home/tvera/downloads/cr_install/tifm_core.c:69: warning: passing argument 1 of ‘add_uevent_var’ from incompatible pointer type
    include/linux/kobject.h:244: note: expected ‘struct kobj_uevent_env *’ but argument is of type ‘char **’
    /home/tvera/downloads/cr_install/tifm_core.c:69: warning: passing argument 2 of ‘add_uevent_var’ makes pointer from integer without a cast
    include/linux/kobject.h:244: note: expected ‘const char *’ but argument is of type ‘int’
    /home/tvera/downloads/cr_install/tifm_core.c: At top level:
    /home/tvera/downloads/cr_install/tifm_core.c:161: warning: initialization from incompatible pointer type
    /home/tvera/downloads/cr_install/tifm_core.c: In function ‘tifm_free’:
    /home/tvera/downloads/cr_install/tifm_core.c:170: warning: type defaults to ‘int’ in declaration of ‘__mptr’
    /home/tvera/downloads/cr_install/tifm_core.c:170: warning: initialization from incompatible pointer type
    /home/tvera/downloads/cr_install/tifm_core.c: At top level:
    /home/tvera/downloads/cr_install/tifm_core.c:177: error: unknown field ‘release’ specified in initializer
    /home/tvera/downloads/cr_install/tifm_core.c:178: warning: initialization from incompatible pointer type
    /home/tvera/downloads/cr_install/tifm_core.c: In function ‘tifm_alloc_adapter’:
    /home/tvera/downloads/cr_install/tifm_core.c:190: error: implicit declaration of function ‘class_device_initialize’
    /home/tvera/downloads/cr_install/tifm_core.c: In function ‘tifm_add_adapter’:
    /home/tvera/downloads/cr_install/tifm_core.c:211: error: ‘BUS_ID_SIZE’ undeclared (first use in this function)
    /home/tvera/downloads/cr_install/tifm_core.c:211: error: (Each undeclared identifier is reported only once
    /home/tvera/downloads/cr_install/tifm_core.c:211: error: for each function it appears in.)
    /home/tvera/downloads/cr_install/tifm_core.c:212: error: implicit declaration of function ‘class_device_add’
    /home/tvera/downloads/cr_install/tifm_core.c: In function ‘tifm_remove_adapter’:
    /home/tvera/downloads/cr_install/tifm_core.c:237: error: implicit declaration of function ‘class_device_del’
    /home/tvera/downloads/cr_install/tifm_core.c: In function ‘tifm_free_adapter’:
    /home/tvera/downloads/cr_install/tifm_core.c:243: error: implicit declaration of function ‘class_device_put’
    /home/tvera/downloads/cr_install/tifm_core.c: In function ‘tifm_alloc_device’:
    /home/tvera/downloads/cr_install/tifm_core.c:275: error: ‘struct device’ has no member named ‘bus_id’
    /home/tvera/downloads/cr_install/tifm_core.c:275: error: ‘BUS_ID_SIZE’ undeclared (first use in this function)
    make[2]: *** [/home/tvera/downloads/cr_install/tifm_core.o] Error 1
    make[1]: *** [_module_/home/tvera/downloads/cr_install] Error 2
    make[1]: Leaving directory `/usr/src/linux-headers-2.6.35-25-generic'
    make: *** [all] Error 2

    The output of lsusb:

    Bus 001 Device 005: ID 05e3:0702 Genesys Logic, Inc. USB 2.0 IDE Adapter
    Bus 003 Device 003: ID 0458:003a KYE Systems Corp. (Mouse Systems) NetScroll+ Mini Traveler
    Bus 003 Device 002: ID 08ff:2580 AuthenTec, Inc. AES2501 Fingerprint Sensor
    Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
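    From what I can tell, the errors suggest the tifmxx sources target an older kernel: class_device_* and the bus_id field of struct device were removed from the driver model well before 2.6.35. If that's right, the driver would need a migration roughly like the sketch below (only my guess; I haven't checked the actual tifmxx code, and names like tifm_dev are made up):

    #include <linux/device.h>

    /* Sketch: registering a device with the 2.6.35-era driver model.
     * The old code apparently did something like:
     *   class_device_initialize(...);
     *   snprintf(dev.bus_id, BUS_ID_SIZE, "tifm%u", id);
     *   class_device_add(...);
     */
    static struct device tifm_dev;

    static int register_tifm_device(unsigned int id)
    {
        int ret;

        device_initialize(&tifm_dev);

        /* dev_set_name() replaces writing directly to the removed bus_id field */
        ret = dev_set_name(&tifm_dev, "tifm%u", id);
        if (ret)
            return ret;

        /* device_add() replaces the removed class_device_add() */
        return device_add(&tifm_dev);
    }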

    Read the article

< Previous Page | 185 186 187 188 189 190 191 192 193 194 195 196  | Next Page >