Search Results

Search found 5819 results on 233 pages for 'compiler theory'.


  • How can I refactor this JavaScript code to avoid making functions in a loop?

    - by Bungle
    I wrote the following code for a project that I'm working on: var clicky_tracking = [ ['related-searches', 'Related Searches'], ['related-stories', 'Related Stories'], ['more-videos', 'More Videos'], ['web-headlines', 'Publication'] ]; for (var x = 0, length_x = clicky_tracking.length; x < length_x; x++) { links = document.getElementById(clicky_tracking[x][0]) .getElementsByTagName('a'); for (var y = 0, length_y = links.length; y < length_y; y++) { links[y].onclick = (function(name, url) { return function() { clicky.log(url, name, 'outbound'); }; }(clicky_tracking[x][1], links[y].href)); } } What I'm trying to do is: define a two-dimensional array, with each instance the inner arrays containing two elements: an id attribute value (e.g., "related-searches") and a corresponding description (e.g., "Related Searches"); for each of the inner arrays, find the element in the document with the corresponding id attribute, and then gather a collection of all <a> elements (hyperlinks) within it; loop through that collection and attach an onclick handler to each hyperlink, which should call clicky.log, passing in as parameters the description that corresponds to the id (e.g., "Related Searches" for the id "related-searches") and the value of the href attribute for the <a> element that was clicked. Hopefully that wasn't thoroughly confusing! The code may be more self-explanatory than that. I believe that what I've implemented here is a closure, but JSLint complains: http://img.skitch.com/20100526-k1trfr6tpj64iamm8r4jf5rbru.png So, my questions are: How can I refactor this code to make JSLint agreeable? Or, better yet, is there a best-practices way to do this that I'm missing, regardless of what JSLint thinks? Should I rely on event delegation instead? That is, attaching onclick event handlers to the document elements with the id attributes in my arrays, and then looking at event.target? I've done that once before and understand the theory, but I'm very hazy on the details, and would appreciate some guidance on what that would look like - assuming this is a viable approach. Thanks very much for any help!
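
    One way to satisfy JSLint and use event delegation at the same time is to build the handler in a named factory function and attach a single handler per container, picking the clicked link out of event.target. A sketch, assuming the same clicky_tracking array and clicky.log API as above:

        function makeContainerHandler(name) {
            return function (event) {
                var e = event || window.event;           // fallback for older IE
                var target = e.target || e.srcElement;
                if (target && target.nodeName === 'A') { // only react to clicks on links
                    clicky.log(target.href, name, 'outbound');
                }
            };
        }

        for (var x = 0; x < clicky_tracking.length; x++) {
            document.getElementById(clicky_tracking[x][0]).onclick =
                makeContainerHandler(clicky_tracking[x][1]);
        }

    Because makeContainerHandler is defined once and merely called inside the loop, no function object is created per iteration; if the links can contain nested markup (say an <img> inside the <a>), walk up parentNode from event.target until an A element is found before logging.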

    Read the article

  • Strange Map Reduce Behavior in CouchDB. Rereduce?

    - by Tony
    I have a mapreduce issue with couchdb (both functions shown below): when I run it with grouplevel = 2 (exact) I get accurate output: {"rows":[ {"key":["2011-01-11","staff-1"],"value":{"total":895.72,"count":2,"services":6,"services_ignored":6,"services_liked":0,"services_disliked":0,"services_disliked_avg":0,"Revise":{"total":275.72,"count":1},"Review":{"total":620,"count":1}}}, {"key":["2011-01-11","staff-2"],"value":{"total":8461.689999999999,"count":2,"services":41,"services_ignored":37,"services_liked":4,"services_disliked":0,"services_disliked_avg":0,"Revise":{"total":4432.4,"count":1},"Review":{"total":4029.29,"count":1}}}, {"key":["2011-01-11","staff-3"],"value":{"total":2100.72,"count":1,"services":10,"services_ignored":4,"services_liked":3,"services_disliked":3,"services_disliked_avg":2.3333333333333335,"Revise":{"total":2100.72,"count":1}}}, However, changing to grouplevel=1 so the values for all the different staff keys should be all grouped by date no longer gives accurate output (notice the total is currect but all others are wrong): {"rows":[ {"key":["2011-01-11"],"value":{"total":11458.130000000001,"count":2,"services":0,"services_ignored":0,"services_liked":0,"services_disliked":0,"services_disliked_avg":0,"None":{"total":11458.130000000001,"count":2}}}, My only theory is this has something to do with rereduce, which I have not yet learned. Should I explore that option or am I missing something else here? This is the Map function: function(doc) { if(doc.doc_type == 'Feedback') { emit([doc.date.split('T')[0], doc.staff_id], doc); } } And this is the Reduce: function(keys, vals) { // sum all key points by status: total, count, services (liked, rejected, ignored) var ret = { 'total':0, 'count':0, 'services': 0, 'services_ignored': 0, 'services_liked': 0, 'services_disliked': 0, 'services_disliked_avg': 0, }; var total_disliked_score = 0; // handle status function handle_status(doc) { if(!doc.status || doc.status == '' || doc.status == undefined) { status = 'None'; } else if (doc.status == 'Declined') { status = 'Rejected'; } else { status = doc.status; } if(!ret[status]) ret[status] = {'total':0, 'count':0}; ret[status]['total'] += doc.total; ret[status]['count'] += 1; }; // handle likes / dislikes function handle_services(services) { ret.services += services.length; for(var a in services) { if (services[a].user_likes == 10) { ret.services_liked += 1; } else if (services[a].user_likes >= 1) { ret.services_disliked += 1; total_disliked_score += services[a].user_likes; if (total_disliked_score >= ret.services_disliked) { ret.services_disliked_avg = total_disliked_score / ret.services_disliked; } } else { ret.services_ignored += 1; } } } // loop thru docs for(var i in vals) { // increment the total $ ret.total += vals[i].total; ret.count += 1; // update totals and sums for the status of this route handle_status(vals[i]); // do the likes / dislikes stats if(vals[i].groups) { for(var ii in vals[i].groups) { if(vals[i].groups[ii].services) { handle_services(vals[i].groups[ii].services); } } } // handle deleted services if(vals[i].hidden_services) { if (vals[i].hidden_services) { handle_services(vals[i].hidden_services); } } } return ret; }
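
    The symptom at group_level=1 is what rereduce looks like: CouchDB re-runs the reduce over its own partial results, which have no doc.groups, doc.status and so on, so everything except total collapses. A sketch of the shape the function needs, assuming the same field names as above; the rereduce branch merges already-aggregated objects instead of raw documents:

        function (keys, values, rereduce) {
            if (rereduce) {
                // 'values' are outputs of earlier reduce calls, not emitted docs
                var ret = { total: 0, count: 0, services: 0, services_ignored: 0,
                            services_liked: 0, services_disliked: 0, services_disliked_avg: 0 };
                for (var i = 0; i < values.length; i++) {
                    ret.total += values[i].total;
                    ret.count += values[i].count;
                    ret.services += values[i].services;
                    ret.services_ignored += values[i].services_ignored;
                    ret.services_liked += values[i].services_liked;
                    ret.services_disliked += values[i].services_disliked;
                    // merge the per-status sub-objects ('Revise', 'Review', ...) the same way;
                    // to keep services_disliked_avg correct, also carry the raw disliked score
                    // in the value and recompute the average here rather than averaging averages
                }
                return ret;
            }
            // ...the existing per-document logic goes here...
        }

    Emitting only the fields the reduce actually needs, rather than the whole doc, also keeps the view small and makes the rereduce branch easier to write.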

    Read the article

  • how to bind repeater control as message threading

    - by Shalin Gajjar
    i have crm application. i have one difficulties that how i bind repeater control as message threading. like first thread as question and second thread as answer of that question. if user asked multiple question then first,second,.. threads as question and.as it is like message chatting... for keeping data from database i use this stored procedure: set ANSI_NULLS ON set QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[ViewMessageThreads] (@inquiry_id varchar(50)) AS BEGIN SET NOCOUNT ON; select i.body as master_body, h.body as history_body, q.body as question_body, q.Created_date as question_timestamp, a.body as answer_body, a.Created_date as answer_timestamp, t.Type_name as user_type from tbl_Inquiry_History i left join tbl_Inquiry_master h on h.Inquiry_id=i.Inquiry_id left join tbl_Question q on q.Inquiry_id=i.Inquiry_id left join tbl_Answer a on a.Question_id=q.Inquiry_id left join tbl_User_master u on u.Id=i.User_id left join tbl_Login_master l on l.Id=u.User_id left join tbl_Type t on t.Id = l.type_id where (i.Inquiry_id=@inquiry_id) END and this gives me result as: master_body history_body question_body question_t.. answer_body answer_t.. user_type __________________________________________________________________________________________ question 1 NULL question 1 2005-03-14... NULL NULL User question 1 NULL question 2 2005-03-14... NULL NULL User and i include this design source of repeater: <asp:Repeater ID="Repeater_Inquiry_Messages" runat="server"> <ItemTemplate> <table id="ctl00_ContentPlaceHolder1_dl_ticketmsg" cellspacing="0" border="0" style="width:100%;border-collapse:collapse;"> <tbody><tr> <td style="background-color:#F5F5FF;"> <table cellpadding="0" cellspacing="0" border="0"> <tbody><tr> <td class="header"> <span id="ctl00_ContentPlaceHolder1_dl_ticketmsg_ctl00_lbl_msg_no"><%#Container.ItemIndex+1 %></span></td> <td class="normaltext" valign="bottom"> <span id="ctl00_ContentPlaceHolder1_dl_ticketmsg_ctl00_lbl_tagline">Message By <b><asp:Label ID="lbl_user_t" runat="server" Text='<%#Eval("user_type")%>'/></b> on <asp:Label ID="lbldatetime" runat="server" Text='<%# DataBinder.Eval(Container.DataItem, "question_timestamp","{0:ddd, dd MMMM yyyy}")%>'/></span></td> </tr> <tr> <td class="header"> &nbsp;</td> <td class="normaltext" valign="bottom"> <b>Message :</b><br> <span id="ctl00_ContentPlaceHolder1_dl_ticketmsg_ctl00_Label1"><asp:Label ID="lbl_inquiry_desc" runat="server" Text='<%#Eval("question_body")%>'/></span></td> </tr> </tbody></table> </td> </tr> </tbody></table> </ItemTemplate> <SeparatorTemplate> <table> <tr> <td style="height:3px"></td> </tr> </table> </SeparatorTemplate> <ItemTemplate> <table id="ctl00_ContentPlaceHolder1_dl_ticketmsg1" cellspacing="0" border="0" style="width:100%;border-collapse:collapse;"> <tbody><tr> <td style="background-color:#F5F5FF;"> <table cellpadding="0" cellspacing="0" border="0"> <tbody><tr> <td class="header"> <span id="ctl00_ContentPlaceHolder1_dl_ticketmsg1_ctl00_lbl_msg_no"><%#Container.ItemIndex+1 %></span></td> <td class="normaltext" valign="bottom"> <span id="ctl00_ContentPlaceHolder1_dl_ticketmsg1_ctl00_lbl_tagline">Message By <b><asp:Label ID="Label1" runat="server" Text='<%#Eval("user_type")%>'/></b> on <asp:Label ID="Label2" runat="server" Text='<%# DataBinder.Eval(Container.DataItem, "answer_timestamp","{0:ddd, dd MMMM yyyy}")%>'/></span></td> </tr> <tr> <td class="header"> &nbsp;</td> <td class="normaltext" valign="bottom"> <b>Message :</b><br> <span id="ctl00_ContentPlaceHolder1_dl_ticketmsg1_ctl00_Label1"><asp:Label 
ID="Label3" runat="server" Text='<%#Eval("answer_body")%>'/></span></td> </tr> <tr> <td class="header"> &nbsp;</td> <td class="normaltext" valign="bottom"> <b></b> </td> </tr> </tbody></table> </td> </tr> </tbody></table> </ItemTemplate> </asp:Repeater>

    However, this only gives me the question thread when I comment out the second message template.

    Updated: please help me.

    Updated: Server Error in '/OmInvestmentStockMarketing_new' Application. Compilation Error. Description: An error occurred during the compilation of a resource required to service this request. Please review the following specific error details and modify your source code appropriately. Compiler Error Message: CS1026: ) expected. Source Error:

        Line 162: <span id="ctl00_ContentPlaceHolder1_dl_ticketmsg_ctl00_lbl_msg_no"><%#Container.ItemIndex+1 %></span></td>
        Line 163: <td class="normaltext" valign="bottom">
        Line 164: <span id="ctl00_ContentPlaceHolder1_dl_ticketmsg_ctl00_lbl_tagline">Message By <b><asp:Label ID="lbl_user_t" runat="server" Text='<%# If(Eval("cargo2").ToString() Is "Admin", "You", Eval("cargo2"))%>'/></b>
        Line 165: on <asp:Label ID="lbldatetime" runat="server" Text='<%# DataBinder.Eval(Container.DataItem,"cargo1","{0:ddd, dd MMMM yyyy}")%>'/></span></td>
        Line 166: </tr>

    Source File: c:\Documents and Settings\Vishal\My Documents\Visual Studio 2005\WebSites\OmInvestmentStockMarketing_new\Admin\OWM_Inquiry.aspx Line: 164

    Version Information: Microsoft .NET Framework Version:2.0.50727.3053; ASP.NET Version:2.0.50727.3053
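
    The CS1026 in the update comes from using VB-style If(condition, a, b) inside a C# page; in C# the equivalent is the conditional operator. A sketch of line 164 rewritten, with the same cargo2 field assumed:

        <asp:Label ID="lbl_user_t" runat="server"
            Text='<%# Eval("cargo2").ToString() == "Admin" ? "You" : Eval("cargo2").ToString() %>' />

    Note also that a Repeater only honours a single <ItemTemplate>, so to render both the question block and the answer block for every row, both tables need to live inside that one template (or the answers can be bound to a nested Repeater).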

    Read the article

  • What would you do to make this code more "over-engineered"? [closed]

    - by Mez
    A friend and I got bored, and, long story short, decided to make an over-engineered FizzBuzz in PHP <?php interface INumber { public function go(); public function setNumber($i); } class FBNumber implements INumber { private $value; private $fizz; private $buzz; public function __construct($fizz = 3 , $buzz = 5) { $this->setFizz($fizz); $this->setBuzz($buzz); } public function setNumber($i) { if(is_int($i)) { $this->value = $i; } } private function setFizz($i) { if(is_int($i)) { $this->fizz = $i; } } private function setBuzz($i) { if(is_int($i)) { $this->buzz = $i; } } private function isFizz() { return ($this->value % $this->fizz == 0); } private function isBuzz() { return ($this->value % $this->buzz == 0); } private function isNeither() { return (!$this->isBuzz() AND !$this->isFizz()); } private function isFizzBuzz() { return ($this->isFizz() OR $this->isBuzz()); } private function fizz() { if ($this->isFizz()) { return "Fizz"; } } private function buzz() { if ($this->isBuzz()) { return "Buzz"; } } private function number() { if ($this->isNeither()) { return $this->value; } } public function go() { return $this->fizz() . $this->buzz() . $this->number(); } } class FizzBuzz { private $limit; private $number_class; private $numbers = array(); function __construct(INumber $number_class, $limit = 100) { $this->number_class = $number_class; $this->limit = $limit; } private function collectNumbers() { for ($i=1; $i <= $this->limit; $i++) { $n = clone($this->number_class); $n->setNumber($i); $this->numbers[$i] = $n->go(); unset($n); } } private function printNumbers() { $return = ''; foreach($this->numbers as $number){ $return .= $number . "\n"; } return $return; } public function go() { $this->collectNumbers(); return $this->printNumbers(); } } $fb = new FizzBuzz(new FBNumber()); echo $fb->go(); In theory, what could we/would you do to make it even more "over-engineered"?
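
    In the same spirit, one more layer that could be bolted on is injecting the output strategy, so even the line separator becomes a dependency. A sketch (names invented for the joke):

        interface IRenderer {
            public function render(array $numbers);
        }

        class LineFeedRenderer implements IRenderer {
            private $separator;

            public function __construct($separator = "\n") {
                $this->separator = $separator;
            }

            public function render(array $numbers) {
                return implode($this->separator, $numbers) . $this->separator;
            }
        }

        // FizzBuzz::__construct(INumber $number_class, IRenderer $renderer, $limit = 100)
        // would store the renderer, and printNumbers() would simply
        // return $this->renderer->render($this->numbers).
        $fb = new FizzBuzz(new FBNumber(), new LineFeedRenderer());
        echo $fb->go();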

    Read the article

  • Using Core Data Concurrently and Reliably

    - by John Topley
    I'm building my first iOS app, which in theory should be pretty straightforward but I'm having difficulty making it sufficiently bulletproof for me to feel confident submitting it to the App Store. Briefly, the main screen has a table view, upon selecting a row it segues to another table view that displays information relevant for the selected row in a master-detail fashion. The underlying data is retrieved as JSON data from a web service once a day and then cached in a Core Data store. The data previous to that day is deleted to stop the SQLite database file from growing indefinitely. All data persistence operations are performed using Core Data, with an NSFetchedResultsController underpinning the detail table view. The problem I am seeing is that if you switch quickly between the master and detail screens several times whilst fresh data is being retrieved, parsed and saved, the app freezes or crashes completely. There seems to be some sort of race condition, maybe due to Core Data importing data in the background whilst the main thread is trying to perform a fetch, but I'm speculating. I've had trouble capturing any meaningful crash information, usually it's a SIGSEGV deep in the Core Data stack. The table below shows the actual order of events that happen when the detail table view controller is loaded: Main Thread Background Thread viewDidLoad Get JSON data (using AFNetworking) Create child NSManagedObjectContext (MOC) Parse JSON data Insert managed objects in child MOC Save child MOC Post import completion notification Receive import completion notification Save parent MOC Perform fetch and reload table view Delete old managed objects in child MOC Save child MOC Post deletion completion notification Receive deletion completion notification Save parent MOC Once the AFNetworking completion block is triggered when the JSON data has arrived, a nested NSManagedObjectContext is created and passed to an "importer" object that parses the JSON data and saves the objects to the Core Data store. The importer executes using the new performBlock method introduced in iOS 5: NSManagedObjectContext *child = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType]; [child setParentContext:self.managedObjectContext]; [child performBlock:^{ // Create importer instance, passing it the child MOC... }]; The importer object observes its own MOC's NSManagedObjectContextDidSaveNotification and then posts its own notification which is observed by the detail table view controller. When this notification is posted the table view controller performs a save on its own (parent) MOC. I use the same basic pattern with a "deleter" object for deleting the old data after the new data for the day has been imported. This occurs asynchronously after the new data has been fetched by the fetched results controller and the detail table view has been reloaded. One thing I am not doing is observing any merge notifications or locking any of the managed object contexts or the persistent store coordinator. Is this something I should be doing? I'm a bit unsure how to architect this all correctly so would appreciate any advice.
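
    One commonly suggested arrangement for this kind of import is to give the importer its own context attached to the same persistent store coordinator (rather than a nested child), observe its save notification, and merge the changes into the main-queue context on that context's own queue, so the NSFetchedResultsController never sees the store change mid-fetch. A sketch of the observer side, assuming self.managedObjectContext is the main-queue context behind the fetched results controller:

        // In the view controller that owns the main-queue context:
        - (void)viewDidLoad
        {
            [super viewDidLoad];
            [[NSNotificationCenter defaultCenter] addObserver:self
                                                     selector:@selector(contextDidSave:)
                                                         name:NSManagedObjectContextDidSaveNotification
                                                       object:nil];
        }

        - (void)contextDidSave:(NSNotification *)notification
        {
            if (notification.object == self.managedObjectContext) {
                return; // ignore saves made by the main context itself
            }
            NSManagedObjectContext *mainContext = self.managedObjectContext;
            [mainContext performBlock:^{
                // Merge on the main context's queue so fetches and merges cannot overlap.
                [mainContext mergeChangesFromContextDidSaveNotification:notification];
            }];
        }

    Whichever shape is kept, the key point is that every touch of a context happens on that context's own queue (performBlock/performBlockAndWait), including the deletion pass; no explicit locking of the persistent store coordinator should then be needed.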

    Read the article

  • What is the best practice to segment c#.net projects based on a single base project

    - by Anthony
    Honestly, I can't word my question any better without describing it. I have a base project (with all its glory, DLLs, resources etc.) which is a CMS. I need to use this project as a base for other custom-bake projects. This base project is to be maintained and updated among all custom-bake projects. I use Subversion (CollabNet and TortoiseSVN). I have two questions:

    1 - Can I use Subversion to share the base project among other projects? What I mean here is: can I "checkout" the base project into another "checked out" project and have both update and commit separately? So, to paint a picture, let's say I am working on a custom project and I modify the core/base project in some way (which I know will suit the others); can I then commit those changes and, upon doing so, when I update the base project in the other "checked out" resources, will it pull the changes? In short, I would like not to have to manually deploy updated core files to each separate project whenever I make changes.

    2 - If I create a custom file (let's say a web control or aspx page etc.) can I have it compile separately from the base project? Another tricky one to explain. When I publish my web application it creates DLLs based on the namespaces of projects attached to it. So I may have a number of DLLs including the "Website's" namespace DLL, which could simply be Website. I want to be able to make a separate, custom control which does not compile into those DLLs, as the custom files should not rely on those DLLs to run. Is it as simple as setting a separate namespace for those files, like CustomFiles.ProjectName for example? Think of the whole idea as adding modules to the .NET project: I don't want the module's code in any of the core DLLs, but I do need the module to be able to access the core DLLs. (There is no need for the core project to access the module code, as it should be one way only in theory, though I reckon it would not be possible anyway without using JSON/SOAP or something like that; maybe I am wrong.) I want to create a pluggable environment much like that of Joomla/WordPress; since PHP generally doesn't have to be compiled first, I see this as the reason why all this is possible/easy there. The idea is to allow pluggable themes, modules etc. (I haven't tried simply adding .NET themes after compile/publish, but I am assuming this is possible anyway? Or does the compiler need to reference items in the files?)
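
    For question 1, Subversion's svn:externals property is the usual way to pull the shared core into each custom project's working copy while keeping updates and commits separate. A sketch, with a hypothetical repository URL and folder name:

        cd CustomProject
        svn propset svn:externals "BaseCMS https://svn.example.com/repos/basecms/trunk" .
        svn commit -m "Pull the shared CMS core in as an external"
        svn update    # checks out BaseCMS/ as a nested working copy of the core repository

    Changes made under BaseCMS/ are committed against the core repository, so every other project that references the same external picks them up on its next update. For question 2, the usual route is to put the custom controls in their own class-library project (with its own root namespace such as CustomFiles.ProjectName); it compiles to its own assembly that references the core DLLs without being compiled into them.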

    Read the article

  • How does Sentry aggregate errors?

    - by Hugo Rodger-Brown
    I am using Sentry (in a django project), and I'd like to know how I can get the errors to aggregate properly. I am logging certain user actions as errors, so there is no underlying system exception, and am using the culprit attribute to set a friendly error name. The message is templated, and contains a common message ("User 'x' was unable to perform action because 'y'"), but is never exactly the same (different users, different conditions). Sentry clearly uses some set of attributes under the hood to determine whether to aggregate errors as the same exception, but despite having looked through the code, I can't work out how. Can anyone short-cut my having to dig further into the code and tell me what properties I need to set in order to manage aggregation as I would like? [UPDATE 1: event grouping] This line appears in sentry.models.Group: class Group(MessageBase): """ Aggregated message which summarizes a set of Events. """ ... class Meta: unique_together = (('project', 'logger', 'culprit', 'checksum'),) ... Which makes sense - project, logger and culprit I am setting at the moment - the problem is checksum. I will investigate further, however 'checksum' suggests that binary equivalence, which is never going to work - it must be possible to group instances of the same exception, with differenct attributes? [UPDATE 2: event checksums] The event checksum comes from the sentry.manager.get_checksum_from_event method: def get_checksum_from_event(event): for interface in event.interfaces.itervalues(): result = interface.get_hash() if result: hash = hashlib.md5() for r in result: hash.update(to_string(r)) return hash.hexdigest() return hashlib.md5(to_string(event.message)).hexdigest() Next stop - where do the event interfaces come from? [UPDATE 3: event interfaces] I have worked out that interfaces refer to the standard mechanism for describing data passed into sentry events, and that I am using the standard sentry.interfaces.Message and sentry.interfaces.User interfaces. Both of these will contain different data depending on the exception instance - and so a checksum will never match. Is there any way that I can exclude these from the checksum calculation? (Or at least the User interface value, as that has to be different - the Message interface value I could standardise.) [UPDATE 4: solution] Here are the two get_hash functions for the Message and User interfaces respectively: # sentry.interfaces.Message def get_hash(self): return [self.message] # sentry.interfaces.User def get_hash(self): return [] Looking at these two, only the Message.get_hash interface will return a value that is picked up by the get_checksum_for_event method, and so this is the one that will be returned (hashed etc.) The net effect of this is that the the checksum is evaluated on the message alone - which in theory means that I can standardise the message and keep the user definition unique. I've answered my own question here, but hopefully my investigation is of use to others having the same problem. (As an aside, I've also submitted a pull request against the Sentry documentation as part of this ;-)) (Note to anyone using / extending Sentry with custom interfaces - if you want to avoid your interface being use to group exceptions, return an empty list.)
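
    As a practical illustration of standardising the Message value, the variable parts can be passed as logging arguments instead of being pre-formatted into the string; the client then records the constant template in the Message interface, so get_hash (and therefore the checksum) is identical for every occurrence while the User interface still varies. A hedged sketch, assuming the standard raven logging integration:

        import logging

        logger = logging.getLogger('user.actions')

        def report_failed_action(user, reason):
            # The format string is the grouping key; user and reason travel as params.
            logger.error("User %s was unable to perform action because %s",
                         user.username, reason)

    This relies on the client storing the unformatted template (not the interpolated string) in sentry.interfaces.Message, which is what the get_hash shown above returns.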

    Read the article

  • Coping with feelings of technical mediocrity

    - by Karim
    As I've progressed as a programmer, I noticed more nuance and areas I could study in depth. In part, I've come to think of myself from, at one point, a "guru" to now much less, even mediocre or inadequate. Is this normal, or is it a sign of a destructive excessive ambition? Background I started to program when I was still a kid, I had about 10 or 11 years. I really enjoy my work and never get bored from it. It's amazing how somebody could be paid for what he really likes to do and would be doing it anyway even for free. When I first started to program, I was feeling proud of what I was doing, each application I built was for me a success and after 2-3 year I had a feeling that I'm a coding guru. It was a nice feeling. ;-) But the more I was in the field and the more types of software I started to develop, I was starting to have a feeling that I'm completely wrong in thinking I'm a guru. I felt that I'm not even a mediocre developer. Each new field I start to work on is giving me this feeling. Like when I once developed a device driver for a client, I saw how much I need to learn about device drivers. When I developed a video filter for an application, I saw how much do I still need to learn about DirectShow, Color Spaces, and all the theory behind that. The worst thing was when I started to learn algorithms. It was several years ago. I knew then the basic structures and algorithms like the sorting, some types of trees, some hashtables, strings, etc. and when I really wanted to learn a group of structures I learned about 5-6 new types and saw that in fact even this small group has several hundred subtypes of structures. It's depressing how little time people have in their lives to learn all this stuff. I'm now a software developer with about 10 years of experience and I still feel that I'm not a proficient developer when I think about things that others do in the industry.

    Read the article

  • Extend argparse to write set names in the help text for optional argument choices and define those sets once at the end

    - by Kent
    Example of the problem If I have a list of valid option strings which is shared between several arguments, the list is written in multiple places in the help string. Making it harder to read: def main(): elements = ['a', 'b', 'c', 'd', 'e', 'f'] parser = argparse.ArgumentParser() parser.add_argument( '-i', nargs='*', choices=elements, default=elements, help='Space separated list of case sensitive element names.') parser.add_argument( '-e', nargs='*', choices=elements, default=[], help='Space separated list of case sensitive element names to ' 'exclude from processing') parser.parse_args() When running the above function with the command line argument --help it shows: usage: arguments.py [-h] [-i [{a,b,c,d,e,f} [{a,b,c,d,e,f} ...]]] [-e [{a,b,c,d,e,f} [{a,b,c,d,e,f} ...]]] optional arguments: -h, --help show this help message and exit -i [{a,b,c,d,e,f} [{a,b,c,d,e,f} ...]] Space separated list of case sensitive element names. -e [{a,b,c,d,e,f} [{a,b,c,d,e,f} ...]] Space separated list of case sensitive element names to exclude from processing What would be nice It would be nice if one could define an option list name, and in the help output write the option list name in multiple places and define it last of all. In theory it would work like this: def main_optionlist(): elements = ['a', 'b', 'c', 'd', 'e', 'f'] # Two instances of OptionList are equal if and only if they # have the same name (ALFA in this case) ol = OptionList('ALFA', elements) parser = argparse.ArgumentParser() parser.add_argument( '-i', nargs='*', choices=ol, default=ol, help='Space separated list of case sensitive element names.') parser.add_argument( '-e', nargs='*', choices=ol, default=[], help='Space separated list of case sensitive element names to ' 'exclude from processing') parser.parse_args() And when running the above function with the command line argument --help it would show something similar to: usage: arguments.py [-h] [-i [ALFA [ALFA ...]]] [-e [ALFA [ALFA ...]]] optional arguments: -h, --help show this help message and exit -i [ALFA [ALFA ...]] Space separated list of case sensitive element names. -e [ALFA [ALFA ...]] Space separated list of case sensitive element names to exclude from processing sets in optional arguments: ALFA {a,b,c,d,e,f} Question I need to: Replace the {'l', 'i', 's', 't', 's'} shown with the option name, in the optional arguments. At the end of the help text show a section explaining which elements each option name consists of. So I ask: Is this possible using argparse? Which classes would I have to inherit from and which methods would I need to override? I have tried looking at the source for argparse, but as this modification feels pretty advanced I don´t know how to get going.
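
    Much of this is available without subclassing anything: metavar replaces the expanded choice list in the usage and help lines, and an epilog (kept verbatim with RawDescriptionHelpFormatter) can carry the "sets in optional arguments" legend. A sketch along those lines:

        import argparse

        def main_optionlist():
            elements = ['a', 'b', 'c', 'd', 'e', 'f']

            parser = argparse.ArgumentParser(
                formatter_class=argparse.RawDescriptionHelpFormatter,
                epilog='sets in optional arguments:\n  ALFA  {%s}' % ','.join(elements))
            parser.add_argument(
                '-i', nargs='*', choices=elements, default=elements, metavar='ALFA',
                help='Space separated list of case sensitive element names.')
            parser.add_argument(
                '-e', nargs='*', choices=elements, default=[], metavar='ALFA',
                help='Space separated list of case sensitive element names to '
                     'exclude from processing')
            parser.parse_args()

    Invalid values are still rejected against choices; only the displayed name changes. A reusable OptionList object that prints its own legend would still need a custom HelpFormatter, but metavar plus epilog covers the output shown above.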

    Read the article

  • Applying styles to a GridView matching certain criteria

    - by NickK
    Hi everyone. I'm fairly new to ASP.Net so it's probably just me being a bit stupid, but I just can't figure out why this isn't working. Basically, I have a GridView control (GridView1) on a page which is reading from a database. I already have a CSS style applied to the GridView and all I want to do is change the background image applied in the style depending on if a certain cell has data in it or not. The way I'm trying to handle this change is updating the CSS class applied to each row through C#. I have the code below doing this: protected void GridView1_RowDataBound(object sender, GridViewRowEventArgs e) { GridViewRow row = e.Row; string s = row.Cells[7].Text; if (s.Length > 0) { row.CssClass = "newRowBackground"; } else { row.CssClass = "oldRowBackground"; } } In theory, the data from Cell[7] will either be null or be a string (in this case, likely a person's name). The problem is that when the page loads, every row in the GridView has the new style applied to it, whether it's empty or not. However, when I change it to use hard coded examples, it works fine. So for example, the below would work exactly how I want it to: protected void GridView1_RowDataBound(object sender, GridViewRowEventArgs e) { GridViewRow row = e.Row; string s = row.Cells[7].Text; if (s == "Smith") //Matching a name in one of the rows { row.CssClass = "newRowBackground"; } else { row.CssClass = "oldRowBackground"; } } It seems as if the top piece of code is always returning the string with a value greater than 0, but when I check the database the fields are all null (except for my test record of "Smith"). I'm probably doing something very simple that's wrong here, but I can't see what. Like I said, I'm still very new to this. One thing I have tried is changing the argument in the if statement to things like: if (s != null), if (s != "") and if (s == string.empty) all with no luck. Any help is greatly appreciated and don't hesitate to tell me if I'm just being stupid here. :)
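
    A likely culprit is that ASP.NET renders empty bound cells as &nbsp;, so Text is never a zero-length string even when the column is NULL. A sketch of the handler with that accounted for, and header/footer rows skipped:

        protected void GridView1_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType != DataControlRowType.DataRow)
            {
                return; // header, footer and pager rows have no data to test
            }

            // Empty cells arrive as "&nbsp;", so decode it to whitespace and trim it away.
            string s = HttpUtility.HtmlDecode(e.Row.Cells[7].Text).Trim();

            e.Row.CssClass = s.Length > 0 ? "newRowBackground" : "oldRowBackground";
        }

    The hard-coded "Smith" test worked because that row genuinely contained text, while every "empty" row was really being compared against the literal string "&nbsp;".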

    Read the article

  • Insert Registration Data in MySQL using PHP

    - by J M 4
    I may not be asking this in the best way possible but i will try my hardest. Thank you ahead of time for your help: I am creating an enrollment website which allows an individual OR manager to enroll for medical testing services for professional athletes. I will NOT be using the site as a query DB which anybody can view information stored within the database. The information is instead simply stored, and passed along in a CSV format to our network provider so they can use as needed after the fact. There are two possible scenarios: Scenario 1 - Individual Enrollment If an individual athlete chooses to enroll him/herself, they enter their personal information, submit their payment information (credit/bank account) for processing, and their information is stored in an online database as Athlete1. Scenario 2 - Manager Enrollment If a manager chooses to enroll several athletes he manages/ promotes for, he enters his personal information, then enters the personal information for each athlete he wishes to pay for (name, address, ssn, dob, etc), then submits payment information for ALL athletes he is enrolling. This number can range from 1 single athlete, up to 20 athletes per single enrollment (he can return and complete a follow up enrollment for additional athletes). Initially, I was building the database to house ALL information regardless of enrollment type in a single table which housed over 400 columns (think 20 athletes with over 10 fields per athlete such as name, dob, ssn, etc). Now that I think about it more, I believe create multiple tables (manager(s), athlete(s)) may be a better idea here but still not quite sure how to go about it for the following very important reasons: Issue 1 If I list the manager as the parent table, I am afraid the individual enrolling athlete will not show up in the primary table and will not be included in the overall registration file which needs to be sent on to the network providers. Issue 2 All athletes being enrolled by a manager are being stored in SESSION as F1FirstName, F2FirstName where F1 and F2 relate to the id of the fighter. I am not sure technically speaking how to store multiple pieces of information within the same table under separate rows using PHP. For example, all athleteswill have a first name. The very basic theory of what i am trying to do is: If number_of_athletes 1, store F1FirstName in row 1, column 1 of Table "Athletes"; store F1LastName in row 1, column 2 of Table "Athletes"; store F2FirstName in row 2, column 1 of Table "Athletes"; store F2LastName in row 2, column 2 of table "Athletes"; Does this make sense? I know this question is very long and probably difficult so i appreciate the guidance.
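
    Rather than one 400-column table, the usual shape is one row per person: the paying manager is stored once and each athlete row points back at it (or at nothing, for a self-enrolled athlete), so both scenarios end up in the same athletes table for the CSV export. A hypothetical sketch of the two tables:

        CREATE TABLE managers (
            manager_id  INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            first_name  VARCHAR(50) NOT NULL,
            last_name   VARCHAR(50) NOT NULL
            -- address, payment reference, etc.
        );

        CREATE TABLE athletes (
            athlete_id  INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            manager_id  INT UNSIGNED NULL,  -- NULL when the athlete enrolled themselves
            first_name  VARCHAR(50) NOT NULL,
            last_name   VARCHAR(50) NOT NULL,
            dob         DATE,
            ssn         CHAR(11),
            FOREIGN KEY (manager_id) REFERENCES managers (manager_id)
        );

    On the PHP side, a loop over the session keys then inserts one row per athlete with a prepared statement, for example binding $_SESSION["F{$i}FirstName"], $_SESSION["F{$i}LastName"] and the shared $manager_id for $i from 1 up to the number of athletes submitted.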

    Read the article

  • Directly Jump to another C++ function

    - by maligree
    I'm porting a small academic OS from TriCore to ARM Cortex (Thumb-2 instruction set). For the scheduler to work, I sometimes need to JUMP directly to another function without modifying the stack nor the link register. On TriCore (or, rather, on tricore-g++), this wrapper template (for any three-argument-function) works: template< class A1, class A2, class A3 > inline void __attribute__((always_inline)) JUMP3( void (*func)( A1, A2, A3), A1 a1, A2 a2, A3 a3 ) { typedef void (* __attribute__((interrupt_handler)) Jump3)( A1, A2, A3); ( (Jump3)func )( a1, a2, a3 ); } //example for using the template: JUMP3( superDispatch, this, me, next ); This would generate the assembler instruction J (a.k.a. JUMP) instead of CALL, leaving the stack and CSAs unchanged when jumping to the (otherwise normal) C++ function superDispatch(SchedulerImplementation* obj, Task::Id from, Task::Id to). Now I need an equivalent behaviour on ARM Cortex (or, rather, for arm-none-linux-gnueabi-g++), i.e. generate a B (a.k.a. BRANCH) instruction instead of BLX (a.k.a. BRANCH with link and exchange). But there is no interrupt_handler attribute for arm-g++ and I could not find any equivalent attribute. So I tried to resort to asm volatile and writing the asm code directly: template< class A1, class A2, class A3 > inline void __attribute__((always_inline)) JUMP3( void (*func)( A1, A2, A3), A1 a1, A2 a2, A3 a3 ) { asm volatile ( "mov.w r0, %1;" "mov.w r1, %2;" "mov.w r2, %3;" "b %0;" : : "r"(func), "r"(a1), "r"(a2), "r"(a3) : "r0", "r1", "r2" ); } So far, so good, in my theory, at least. Thumb-2 requires function arguments to be passed in the registers, i.e. r0..r2 in this case, so it should work. But then the linker dies with undefined reference to `r6' on the closing bracket of the asm statement ... and I don't know what to make of it. OK, I'm not the expert in C++, and the asm syntax is not very straightforward... so has anybody got a hint for me? A hint to the correct __attribute__ for arm-g++ would be one way, a hint to fix the asm code would be another. Another way maybe would be to tell the compiler that a1..a3 should already be in the registers r0..r2 when the asm statement is entered (I looked into that a bit, but did not find any hint).
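
    For the inline-asm route, two details usually bite here: b expects a label rather than a register (bx is the branch-to-register form), and the argument registers are better pinned with explicit register variables than with mov instructions the compiler cannot see through. A rough sketch under those assumptions (each argument must fit in a single core register):

        template< class A1, class A2, class A3 >
        inline void __attribute__((always_inline)) JUMP3( void (*func)( A1, A2, A3), A1 a1, A2 a2, A3 a3 )
        {
            // Pin the arguments to the AAPCS argument registers before the jump.
            register A1 arg0 asm("r0") = a1;
            register A2 arg1 asm("r1") = a2;
            register A3 arg2 asm("r2") = a3;

            asm volatile (
                "bx %0"   /* branch to func; lr and sp are left untouched */
                :
                : "r"(func), "r"(arg0), "r"(arg1), "r"(arg2)
            );
        }

    This is deliberately a sketch: it assumes the inlined call site has no live stack frame that func would have needed to unwind, which is the same assumption the TriCore interrupt_handler trick makes.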

    Read the article

  • Declare Locally or Globally in Delphi?

    - by lkessler
    I have a procedure my program calls tens of thousands of times that uses a generic structure like this: procedure PrintIndiEntry(JumpID: string); type TPeopleIncluded = record IndiPtr: pointer; Relationship: string; end; var PeopleIncluded: TList<TPeopleIncluded>; PI: TPeopleIncluded; begin { PrintIndiEntry } PeopleIncluded := TList<TPeopleIncluded>.Create; { A loop here that determines a small number (up to 100) people to process } while ... do begin PI.IndiPtr := ...; PI.Relationship := ...; PeopleIncluded.Add(PI); end; DoSomeProcess(PeopleIncluded); PeopleIncluded.Clear; PeopleIncluded.Free; end { PrintIndiEntry } Alternatively, I can declare PeopleIncluded globally rather than locally as follows: unit process; interface type TPeopleIncluded = record IndiPtr: pointer; Relationship: string; end; var PeopleIncluded: TList<TPeopleIncluded>; PI: TPeopleIncluded; procedure PrintIndiEntry(JumpID: string); begin { PrintIndiEntry } { A loop here that determines a small number (up to 100) people to process } while ... do begin PI.IndiPtr := ...; PI.Relationship := ...; PeopleIncluded.Add(PI); end; DoSomeProcess(PeopleIncluded); PeopleIncluded.Clear; end { PrintIndiEntry } procedure InitializeProcessing; begin PeopleIncluded := TList<TPeopleIncluded>.Create; end; procedure FinalizeProcessing; begin PeopleIncluded.Free; end; My question is whether in this situation it is better to declare PeopleIncluded globally rather than locally. I know the theory is to define locally whenever possible, but I would like to know if there are any issues to worry about with regards to doing tens of thousands of of "create"s and "free"s? Making them global will do only one create and one free. What is the recommended method to use in this case? If the recommended method is to still define it locally, then I'm wondering if there are any situations where it is better to define globally when defining locally is still an option.

    Read the article

  • avoiding enums as interface identifiers c++ OOP

    - by AlasdairC
    Hi I'm working on a plugin framework using dynamic loaded shared libraries which is based on Eclipse's (and probally other's) extension-point model. All plugins share similar properties (name, id, version etc) and each plugin could in theory satisfy any extension-point. The actual plugin (ie Dll) handling is managed by another library, all I am doing really is managing collections of interfaces for the application. I started by using an enum PluginType to distinguish the different interfaces, but I have quickly realised that using template functions made the code far cleaner and would leave the grunt work up to the compiler, rather than forcing me to use lots of switch {...} statements. The only issue is where I need to specify like functionality for class members - most obvious example is the default plugin which provides a particular interface. A Settings class handles all settings, including the default plugin for an interface. ie Skin newSkin = settings.GetDefault<ISkin>(); How do I store the default ISkin in a container without resorting to some other means of identifying the interface? As I mentioned above, I currently use a std::map<PluginType, IPlugin> Settings::defaults member to achieve this (where IPlugin is an abstract base class which all plugins derive from. I can then dynamic_cast to the desired interface when required, but this really smells of bad design to me and introduces more harm than good I think. would welcome any tips edit: here's an example of the current use of default plugins typedef boost::shared_ptr<ISkin> Skin; typedef boost::shared_ptr<IPlugin> Plugin; enum PluginType { skin, ..., ... } class Settings { public: void SetDefault(const PluginType type, boost::shared_ptr<IPlugin> plugin) { m_default[type] = plugin; } boost::shared_ptr<IPlugin> GetDefault(const PluginType type) { return m_default[type]; } private: std::map<PluginType, boost::shared_ptr<IPlugin> m_default; }; SkinManager::Initialize() { Plugin thedefault = g_settings.GetDefault(skinplugin); Skin defaultskin = boost::dynamic_pointer_cast<ISkin>(theskin); defaultskin->Initialize(); } I would much rather call the getdefault as the following, with automatic casting to the derived class. However I need to specialize for every class type. template<> Skin Settings::GetDefault<ISkin>() { return boost::dynamic_pointer_cast<ISkin>(m_default(skin)); }
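
    One way to drop the PluginType enum entirely is to key the defaults map on the interface type itself, for example by typeid name, and let a member template do the cast in one place. A sketch, assuming every plugin interface (ISkin and friends) derives from IPlugin:

        #include <map>
        #include <string>
        #include <typeinfo>
        #include <boost/shared_ptr.hpp>

        class Settings
        {
        public:
            template<typename Interface>
            void SetDefault(boost::shared_ptr<Interface> plugin)
            {
                m_default[typeid(Interface).name()] = plugin;  // implicit upcast to IPlugin
            }

            template<typename Interface>
            boost::shared_ptr<Interface> GetDefault()
            {
                return boost::dynamic_pointer_cast<Interface>(
                    m_default[typeid(Interface).name()]);
            }

        private:
            std::map<std::string, boost::shared_ptr<IPlugin> > m_default;
        };

        // usage: Skin defaultskin = settings.GetDefault<ISkin>();

    No per-interface specialisation of GetDefault is needed and the dynamic cast lives in a single spot; if typeid(...).name() differing across shared libraries on some toolchains is a worry, a small trait that supplies an explicit key string per interface is the safer variant.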

    Read the article

  • SwingWorker exceptions lost even when using wrapper classes

    - by Ti Strga
    I've been struggling with the usability problem of SwingWorker eating any exceptions thrown in the background task, for example, described on this SO thread. That thread gives a nice description of the problem, but doesn't discuss recovering the original exception. The applet I've been handed needs to propagate the exception upwards. But I haven't been able to even catch it. I'm using the SimpleSwingWorker wrapper class from this blog entry specifically to try and address this issue. It's a fairly small class but I'll repost it at the end here just for reference. The calling code looks broadly like try { // lots of code here to prepare data, finishing with SpecialDataHelper helper = new SpecialDataHelper(...stuff...); helper.execute(); } catch (Throwable e) { // used "Throwable" here in desperation to try and get // anything at all to match, including unchecked exceptions // // no luck, this code is never ever used :-( } The wrappers: class SpecialDataHelper extends SimpleSwingWorker { public SpecialDataHelper (SpecialData sd) { this.stuff = etc etc etc; } public Void doInBackground() throws Exception { OurCodeThatThrowsACheckedException(this.stuff); return null; } protected void done() { // called only when successful // never reached if there's an error } } The feature of SimpleSwingWorker is that the actual SwingWorker's done()/get() methods are automatically called. This, in theory, rethrows any exceptions that happened in the background. In practice, nothing is ever caught, and I don't even know why. The SimpleSwingWorker class, for reference, and with nothing elided for brevity: import java.util.concurrent.ExecutionException; import javax.swing.SwingWorker; /** * A drop-in replacement for SwingWorker<Void,Void> but will not silently * swallow exceptions during background execution. * * Taken from http://jonathangiles.net/blog/?p=341 with thanks. */ public abstract class SimpleSwingWorker { private final SwingWorker<Void,Void> worker = new SwingWorker<Void,Void>() { @Override protected Void doInBackground() throws Exception { SimpleSwingWorker.this.doInBackground(); return null; } @Override protected void done() { // Exceptions are lost unless get() is called on the // originating thread. We do so here. try { get(); } catch (final InterruptedException ex) { throw new RuntimeException(ex); } catch (final ExecutionException ex) { throw new RuntimeException(ex.getCause()); } SimpleSwingWorker.this.done(); } }; public SimpleSwingWorker() {} protected abstract Void doInBackground() throws Exception; protected abstract void done(); public void execute() { worker.execute(); } }
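
    Part of the surprise is that execute() returns immediately: the RuntimeException is rethrown later, inside done() on the Event Dispatch Thread, long after the caller's try/catch block has exited, so there is nothing left to catch it. If the caller genuinely needs the original exception, one hedged option is to let the wrapper expose the worker's blocking get(), for example by adding something like this to SimpleSwingWorker:

        // Hypothetical addition to SimpleSwingWorker: block until the background
        // task finishes and rethrow whatever it threw.
        public final void executeAndWait() throws Exception {
            worker.execute();
            try {
                worker.get();
            } catch (final java.util.concurrent.ExecutionException ex) {
                final Throwable cause = ex.getCause();
                if (cause instanceof Exception) {
                    throw (Exception) cause; // the original exception from doInBackground()
                }
                throw new RuntimeException(cause);
            }
        }

    The calling code's try/catch then sees the real exception, at the cost of blocking, so this belongs on a thread that can afford to wait (not the EDT); the non-blocking alternative is to pass a failure callback into the helper and invoke it from the catch blocks in done().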

    Read the article

  • PHP/MySQL Interview - How would you have answered?

    - by martincarlin87
    I was asked this interview question so thought I would post it here to see how other users would answer: Please write some code which connects to a MySQL database (any host/user/pass), retrieves the current date & time from the database, compares it to the current date & time on the local server (i.e. where the application is running), and reports on the difference. The reporting aspect should be a simple HTML page, so that in theory this script can be put on a web server, set to point to a particular database server, and it would tell us whether the two servers’ times are in sync (or close to being in sync). This is what I put: // Connect to database server $dbhost = 'localhost'; $dbuser = 'xxx'; $dbpass = 'xxx'; $dbname = 'xxx'; $conn = mysql_connect($dbhost, $dbuser, $dbpass) or die (mysql_error()); // Select database mysql_select_db($dbname) or die(mysql_error()); // Retrieve the current time from the database server $sql = 'SELECT NOW() AS db_server_time'; // Execute the query $result = mysql_query($sql) or die(mysql_error()); // Since query has now completed, get the time of the web server $php_server_time = date("Y-m-d h:m:s"); // Store query results in an array $row = mysql_fetch_array($result); // Retrieve time result from the array $db_server_time = $row['db_server_time']; echo $db_server_time . '<br />'; echo $php_server_time; if ($php_server_time != $db_server_time) { // Server times are not identical echo '<p>Database server and web server are not in sync!</p>'; // Convert the time stamps into seconds since 01/01/1970 $php_seconds = strtotime($php_server_time); $sql_seconds = strtotime($db_server_time); // Subtract smaller number from biggest number to avoid getting a negative result if ($php_seconds > $sql_seconds) { $time_difference = $php_seconds - $sql_seconds; } else { $time_difference = $sql_seconds - $php_seconds; } // convert the time difference in seconds to a formatted string displaying hours, minutes and seconds $nice_time_difference = gmdate("H:i:s", $time_difference); echo '<p>Time difference between the servers is ' . $nice_time_difference; } else { // Timestamps are exactly the same echo '<p>Database server and web server are in sync with each other!</p>'; } Yes, I know that I have used the deprecated mysql_* functions but that aside, how would you have answered, i.e. what changes would you make and why? Are there any factors I have omitted which I should take into consideration? The interesting thing is that my results always seem to be an exact number of minutes apart when executed on my hosting account: 2012-12-06 11:47:07 2012-12-06 11:12:07
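
    Two things stand out when reviewing this answer: date("Y-m-d h:m:s") is a format bug, since m is the month and h the 12-hour clock (which is why the web-server time above reads ":12:" in December and the gap always looks like whole minutes), and comparing formatted strings drags time zones into the comparison. A sketch of a tighter version that compares epoch seconds over PDO (connection details assumed):

        $pdo = new PDO("mysql:host=$dbhost;dbname=$dbname", $dbuser, $dbpass,
                       array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));

        // UNIX_TIMESTAMP() and time() both count seconds since the epoch (UTC),
        // so the comparison is immune to time zone and formatting differences.
        $dbTime  = (int) $pdo->query('SELECT UNIX_TIMESTAMP()')->fetchColumn();
        $phpTime = time();

        $diff = abs($dbTime - $phpTime);

        echo $diff <= 1
            ? '<p>Database server and web server are in sync.</p>'
            : "<p>Servers are out of sync by {$diff} seconds.</p>";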

    Read the article

  • Parallelism in .NET – Part 20, Using Task with Existing APIs

    - by Reed
    Although the Task class provides a huge amount of flexibility for handling asynchronous actions, the .NET Framework still contains a large number of APIs that are based on the previous asynchronous programming model.  While Task and Task<T> provide a much nicer syntax as well as extending the flexibility, allowing features such as continuations based on multiple tasks, the existing APIs don’t directly support this workflow. There is a method in the TaskFactory class which can be used to adapt the existing APIs to the new Task class: TaskFactory.FromAsync.  This method provides a way to convert from the BeginOperation/EndOperation method pair syntax common throughout the .NET Framework directly to a Task<T> containing the results of the operation in the task’s Result property. While this method does exist, it unfortunately comes at a cost – the method overloads are far from simple to decipher, and the resulting code is not always as easily understood as newer code based directly on the Task class.  For example, a single call to handle WebRequest.BeginGetResponse/EndGetResponse, one of the easiest “pairs” of methods to use, looks like the following: var task = Task.Factory.FromAsync<WebResponse>( request.BeginGetResponse, request.EndGetResponse, null); The compiler is unfortunately unable to infer the correct type, and, as a result, the WebResponse must be explicitly mentioned in the method call.  As a result, I typically recommend wrapping this into an extension method to ease use.  For example, I would place the above in an extension method like: public static class WebRequestExtensions { public static Task<WebResponse> GetResponseAsync(this WebRequest request) { return Task.Factory.FromAsync<WebResponse>( request.BeginGetResponse, request.EndGetResponse, null); } } This dramatically simplifies usage.  For example, if we wanted to asynchronously check to see if this blog supported XHTML 1.0, and report that in a text box to the user, we could do: var webRequest = WebRequest.Create("http://www.reedcopsey.com"); webRequest.GetResponseAsync().ContinueWith(t => { using (var sr = new StreamReader(t.Result.GetResponseStream())) { string str = sr.ReadLine(); this.textBox1.Text = string.Format("Page at {0} supports XHTML 1.0: {1}", t.Result.ResponseUri, str.Contains("XHTML 1.0")); } }, TaskScheduler.FromCurrentSynchronizationContext());   By using a continuation with a TaskScheduler based on the current synchronization context, we can keep this request asynchronous, check based on the first line of the response string, and report the results back on our UI directly.

    Read the article

  • Michael Crump&rsquo;s notes for 70-563 PRO &ndash; Designing and Developing Windows Applications usi

    - by mbcrump
    TIME TO GO PRO! This is my notes for 70-563 PRO – Designing and Developing Windows Applications using .NET Framework 3.5 I created it using several resources (various certification web sites, msdn, official ms 70-548 book). The reason that I created this review is because a) I am taking the exam. b) MS did not create a book for this exam. Use the(MS 70-548)book. c) To make sure I am familiar with each before the exam. I hope that it provides a good start for your own notes. I hope that someone finds this useful. At least, it will give you a starting point of what to expect to know on the PRO exam. Also, for those wondering, the PRO exam does contains very little code. It is basically all theory. 1. Validation Controls – How to prevent users from entering invalid data on forms. (MaskedTextBox control and RegEx) 2. ServiceController – used to start and control the behavior of existing services. 3. User Feedback (know winforms Status Bar, Tool Tips, Color, Error Provider, Context-Sensitive and Accessibility) 4. Specific (derived) exceptions must be handled before general (base class) exceptions. By moving the exception handling for the base type Exception to after exception handling of ArgumentNullException, all ArgumentNullException thrown by the Helper method will be caught and logged correctly. 5. A heartbeat method is a method exposed by a Web service that allows external applications to check on the status of the service. 6. New users must master key tasks quickly. Giving these tasks context and appropriate detail will help. However, advanced users will demand quicker paths. Shortcuts, accelerators, or toolbar buttons will speed things along for the advanced user. 7. MSBuild uses project files to instruct the build engine what to build and how to build it. MSBuild project files are XML files that adhere to the MSBuild XML schema. The MSBuild project files contain complete file, build action, and dependency information for each individual projects. 8. Evaluating whether or not to fix a bug involves a triage process. You must identify the bug's impact, set the priority, categorize it, and assign a developer. Many times the person doing the triage work will assign the bug to a developer for further investigation. In fact, the workflow for the bug work item inside of Team System supports this step. Developers are often asked to assess the impact of a given bug. This assessment helps the person doing the triage make a decision on how to proceed. When assessing the impact of a bug, you should consider time and resources to fix it, bug risk, and impacts of the bug. 9. In large projects it is generally impossible and unfeasible to fix all bugs because of the impact on schedule and budget. 10. Code reviews should be conducted by a technical lead or a technical peer. 11. Testing Applications 12. WCF Services – application state 13. SQL Server 2005 / 2008 Express Edition – reliable storage of data / Microsoft SQL Server 3.5 Compact Database– used for client computers to retrieve and save data from a shared location. 14. SQL Server 2008 Compact Edition – used for minimum possible memory and can synchronize data with a corporate SQL Server 2008 Database. Supports offline user and minimum dependency on external components. 15. MDI and SDI Forms (specifically IsMDIContainer) 16. GUID – in the case of data warehousing, it is important to define unique keys. 17. Encrypting / Security Data 18. Understanding of Isolated Storage/Proper location to store items 19. LINQ to SQL 20. Multithreaded access 21. 
ADO.NET Entity Framework model 22. Marshal.ReleaseComObject 23. Common User Interface Layout (ComboBox, ListBox, Listview, MaskedTextBox, TextBox, RichTextBox, SplitContainer, TableLayoutPanel, TabControl) 24. DataSets Class - http://msdn.microsoft.com/en-us/library/system.data.dataset%28VS.71%29.aspx 25. SQL Server 2008 Reporting Services (SSRS) 26. SystemIcons.Shield (Vista UAC) 27. Leverging stored procedures to perform data manipulation for a database schema that can change. 28. DataContext 29. Microsoft Windows Installer Packages, ClickOnce(bootstrapping features), XCopy. 30. Client Application Services – will authenticate users by using the same data source as a ASP.NET web application. 31. SQL Server 2008 Caching 32. StringBuilder 33. Accessibility Guidelines for Windows Applications http://msdn.microsoft.com/en-us/library/ms228004.aspx 34. Logging erros 35. Testing performance related issues. 36. Role Based Security, GenericIdentity and GenericPrincipal 37. System.Net.CookieContainer will store session data for webapps (see isolated storage for winforms) 38. .NET CLR Profiler tool will identify objects that cause performance issues. 39. ADO.NET Synchronization (SyncGroup) 40. Globalization - CultureInfo 41. IDisposable Interface- reports on several questions relating to this. 42. Adding timestamps to determine whether data has changed or not. 43. Converting applications to .NET Framework 3.5 44. MicrosoftReportViewer 45. Composite Controls 46. Windows Vista KNOWN folders. 47. Microsoft Sync Framework 48. TypeConverter -Provides a unified way of converting types of values to other types, as well as for accessing standard values and sub properties. http://msdn.microsoft.com/en-us/library/system.componentmodel.typeconverter.aspx 49. Concurrency control mechanisms The main categories of concurrency control mechanisms are: Optimistic - Delay the checking of whether a transaction meets the isolation rules (e.g., serializability and recoverability) until its end, without blocking any of its (read, write) operations, and then abort a transaction, if the desired rules are violated. Pessimistic - Block operations of a transaction, if they may cause violation of the rules. Semi-optimistic - Block operations in some situations, and do not block in other situations, while delaying rules checking to transaction's end, as done with optimistic. 50. AutoResetEvent 51. Microsoft Messaging Queue (MSMQ) 4.0 52. Bulk imports 53. KeyDown event of controls 54. WPF UI components 55. UI process layer 56. GAC (installing, removing and queuing) 57. Use a local database cache to reduce the network bandwidth used by applications. 58. Sound can easily be annoying and distracting to users, so use it judiciously. Always give users the option to turn sound off. Because a user might have sound off, never convey important information through sound alone.

    Read the article

  • Agile Awakenings and the Rules of Agile

    - by Robert May
    For those that care, you can read my history of management and technology to understand why I think I’m qualified to talk about this at all.  It’s boring, so feel free to skip it. Awakenings I first started to play around with the idea of “agile” in 2004 or 2005.  I found a book on the Rational Unified Process that I thought was good, and attempted to implement parts of it.  I thought I was agile, but really, it wasn’t.   I still didn’t understand the concept of a team.  I still wanted to tell the team what to do and how to get it done.  I still thought I was smarter than the team. After that job, I started work on another project and began helping that team.  The first few months were really rough.  We were implementing Scrum, which was relatively new to everyone on the team, and, quite frankly, I was doing a poor job of it.  I was trying to micro-manage every aspect of the teams work, and we were all miserable. The moment of change came when the senior architect bailed on the project.  His comment to me was: “This isn’t Agile.  Where are the stand-ups?  Where are the stories?”  He was dead on, and I finally woke up.  I finally realized that I was the problem!  I wasn’t trusting the team.  I wasn’t helping the team.  I was being a manager. Like many (most?), I was claiming to be Agile and use Scrum, but I wasn’t in fact following the rules Scrum.  Since then, I’ve done a lot of studying, hands on practice, coaching of many different teams, and other learning around Scrum, and I have discovered that Scrum has some rules that must be followed for success, even though the process is about continuous improvement. I’ve been practicing Scrum right for about 4 years now and have helped multiple teams implement it successfully, so what you’re about to get is based on experience, rather than just theory. The Rules of Scrum In my experience, what I’ve found is that most companies that claim to be doing Scrum or Agile are actually NOT doing either.  This stems largely because they think that they can “adopt the rules of Agile that fit their organization.”  Sadly, many of them think that this means they can adopt iterations (sprints) and not much else.  Either that, or they think they can do whatever they want, or were doing before, and call it Scrum.  This is simply not true. Here are some rules that must be followed for you to really be doing Scrum.  I’ll go into detail on each one of these posts in future blog posts and update links here.  My intent is that this will help other teams implementing scrum to see more success. Agile does not allow you to do whatever you want A Product Owner is required A ScrumMaster is required The team must function as a Team, and QA must be part of the team Support from upper management is required A prioritized product backlog is required A prioritized sprint backlog is required Release planning is required Complete spring planning is required Showcases are required Velocity must be measured Retrospectives are required Daily stand-ups are required Visibility is absolutely required For now, I think that’s enough, although I reserve the right to add more.  If you’re breaking any of these rules, you’re probably not doing Scrum.  There are exceptions to these rules, but until you have practiced Scrum for a while, you don’t know what those exceptions are. Breaking the Rules Many teams break these rules because they are the ones that expose the most pain.  Scrum is not Advil.  It’s not intended to mask the pain, its intended to cure it.  
Let me explain that analogy a bit more. Recently, my 7-year-old son broke his arm, quite severely (see the X-Ray to the right). That caused him a great deal of pain. We went first to one doctor, and after viewing the X-Ray, they determined that there was no way that they’d cast the arm at their location. It was simply too bad of a break for them to deal with. They did, however, give him some Advil for the pain and put a splint on his arm to stabilize the broken bones. Within minutes, he was feeling much better. Had we been stupid, we could have gone home and he’d have been just as happy as ever . . . until the pain medication wore off or one of his siblings touched the splint. Then, all of that pain would come right back to the top. Sure, he could make it go away by just taking more Advil and moving the splint out of the way, but that wasn’t going to fix the problem permanently. We ended up in an emergency room with a doctor who could fix his arm. However, we were warned that the fix was going to be VERY painful, and it was. Even with heavy sedation (Propofol), my son was in enough pain that he squirmed and wiggled trying to get his arm away from the doctor. He had to endure this pain in order to have a functional arm. But the setting wasn’t the end. He had to have several casts, had to have the arm re-broken once since the first setting didn’t take, and finally was given a clean bill of health. Agile implementation is much like this story. Agile was developed as a result of people recognizing that the development methodologies then in place were simply ineffective. However, the fix to the broken development that’s been festering for many years is not painless. Many people start Agile thinking that things will be wonderful. They won’t! Agile is about visibility, and often, it brings great pain to the surface. It causes all of the missed deadlines, the cowboy coders, the coasters, the micro-managers, the lazy, and all of the other problems that are really part of your development process now to become painfully visible to EVERYONE. Many people don’t like this exposure. Agile will make the pain better, but not if you remove the cast (the rules above) prematurely and start breaking the rules that expose the most pain. The healing will take time and is not instant (like Advil). Figuring out what the true source of pain is and fixing it is very valuable to you, your team, and your company. Remember as you’re doing this that Agile isn’t the source of the pain; it’s really just exposing it. Find the source. My recommendation is that ALL of these rules be followed for a minimum of six months, and preferably for an entire year, before you decide to break any of them. Get a few good releases under your belt. Figure out what your velocity is and start firing as a team. Chances are, after you see Agile really in action, you won’t want to break the rules because you’ll see their value. More Reading Jean Tabaka recently published a list of 78 Things I Have Learned in 6 Years of Agile Coaching. Highly recommended. Technorati Tags: Agile,Scrum,Rules

    Read the article

  • ASP.NET Server-side comments

    - by nmarun
I believe a good number of you know about server-side commenting. This post is just a quick refresher. When you write comments in your .aspx/.ascx files, you usually write them as: 1: <!-- This is a comment. --> To show that using the server-side commenting technique actually makes a difference, I’ve started a web application project and my default.aspx page looks like this: 1: <%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="ServerSideComment._Default" %> 2: <asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent"> 3: </asp:Content> 4: <asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent"> 5: <h2> 6: <!-- This is a comment --> 7: Welcome to ASP.NET! 8: </h2> 9: <p> 10: To learn more about ASP.NET visit <a href="http://www.asp.net" title="ASP.NET Website">www.asp.net</a>. 11: </p> 12: <p> 13: You can also find <a href="http://go.microsoft.com/fwlink/?LinkID=152368&amp;clcid=0x409" 14: title="MSDN ASP.NET Docs">documentation on ASP.NET at MSDN</a>. 15: </p> 16: </asp:Content> Note the comment on line 6. When I run the app and view the page source in the browser, it shows up as: 1: <h2> 2: <!-- This is a comment --> 3: Welcome to ASP.NET! 4: </h2> Using Fiddler, I captured the page size for this version. Let’s change the comment style and use the server-side commenting technique. 1: <h2> 2: <%-- This is a comment --%> 3: Welcome to ASP.NET! 4: </h2> Upon rendering, the view source looks like: 1: <h2> 2: 3: Welcome to ASP.NET! 4: </h2> Fiddler now shows a smaller page size. The difference is that client-side comments are ignored by the browser, but they are still sent down the pipe. With server-side comments, the compiler ignores everything inside the block, so nothing is sent to the client. Visual Studio’s Text Editor toolbar also inserts comments as server-side ones. If you want to give it a shot, go to your markup page and press Ctrl+K, Ctrl+C on some selected text and you’ll see it commented in the server-side commenting style.

    Read the article

  • SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Introduction – Day 1 of 31

    - by pinaldave
List of all the Interview Questions and Answers Series blogs. Posts covering interview questions and answers always make for interesting reading. Some people like these posts for their helpful hints and thought-provoking subject matter, and others dislike them because they feel they are nothing more than cheating. I’d like to discuss the pros and cons of a Question and Answer format here. Interview Questions and Answers are Helpful Just like blog posts, books, and articles, interview Question and Answer discussions are learning material. The popular Dummies books or Idiot’s Guides are not only for “dummies,” but can help everyone relearn the fundamentals. Question and Answer discussions can serve the same purpose. You could call this SQL Server Fundamentals or SQL Server 101. I have administered hundreds of interviews during my career and I have noticed that sometimes an interviewee with several years of experience lacks an understanding of the fundamentals. These individuals have been in the industry for so long, usually working on a very specific project, that the ABCs of the business have slipped their mind. Or, when a college graduate is looking to get into the industry, he is not expected to have experience since he has just graduated. However, the new grad is expected to have an understanding of fundamentals and theory. Sometimes after the stress of final exams and graduation, it can be difficult to remember the correct answers to interview questions, though. An interview Question and Answer discussion can be very helpful to both these individuals. It is simply a way to go back over the building blocks of a topic. Many times a simple review like this will help “jog” your memory, and all those previously-memorized facts will come flooding back to you. It is not a way to re-learn a topic, but a way to remind yourself of what you already know. A Question and Answer discussion can also be a way to go over old topics in a more interesting manner. Especially if you have been working in the industry, or taking lots of classes on the topic, everything you read can sound like a repeat of what you already know. Going over a topic in a new format can make the material seem fresh and interesting. And an interested mind will be more engaged and remember more in the end. Interview Questions and Answers are Harmful A common argument against a Question and Answer discussion is that it will give someone a “cheat sheet.” A new guy with relatively little experience can read the interview questions and answers, and then memorize them. When an interviewer asks him the same questions, he will repeat the answers and get the job. Honestly, is he a good hire because he memorized the interview questions? Wouldn’t it be better for the interviewer to hire someone with actual experience? The answer is not as easy as it seems – there are many different factors to be considered. If the interviewer is asking fundamentals-related questions only, he gets the answers he wants to hear, and then hires this first candidate – there is a good chance that he is hiring based on personality rather than experience. If the interviewer is smart he will ask deeper questions, have more than one person on the interview team, and interview a variety of candidates. If one interviewee happens to memorize some answers, it usually doesn’t mean he will automatically get the job at the expense of more qualified candidates.
Another argument against interview Questions and Answers is that they will give candidates a false sense of confidence, and that they will appear more qualified than they are. Well, if that is true, it will not last past the first interview, when the candidate is asked difficult questions and cannot find the answers in the list of interview Questions and Answers. Besides, confidence is one of the best things to walk into an interview with! In today’s competitive job market, there are often hundreds of candidates applying for the same position. With so many applicants to choose from, interviewers must make decisions about who to call back and who to hire based on their gut feeling. One drawback to reading an interview Question and Answer article is that you might sound very boring in your interview – saying the same thing as every single candidate, and parroting answers that sound like someone else wrote them for you – because they did. However, it is definitely better to go to an interview prepared; just make sure that you give a lot of thought to your answers to make them sound like your own voice. Remember that you will be hired based on your skills as well as your personality, so don’t think that having all the right answers will get you hired. A good interviewee will be prepared, confident, and know how to stand out. My Opinion A list of interview Questions and Answers is really helpful as a refresher or for beginners. To really ace an interview, one needs to have real-world, hands-on experience with SQL Server as well. Interview questions just serve as a starter or easy read for experienced professionals. When I have to learn a new technology, I often search online for interview questions to get an idea about the breadth and depth of the technology. Next Action I am going to write about interview Questions and Answers for the next 30 days. I have previously written a series of interview questions and answers; now I have re-written them keeping the latest version of SQL Server and current industry progress in mind. If you have faced interesting interview questions or situations, please write to me and I will publish them as a guest post. If you want me to add a few more details, leave a comment and I will do my best to accommodate. Tomorrow we will start the interview Questions and Answers series, with a few interesting stories, best practices and guest posts. We will have a prize give-away and other awards when the series ends. List of all the Interview Questions and Answers Series blogs Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Interview Questions and Answers, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Subterranean IL: Pseudo custom attributes

    - by Simon Cooper
Custom attributes were designed to make the .NET framework extensible; if a .NET language needs to store additional metadata on an item that isn't expressible in IL, then an attribute could be applied to the IL item to represent this metadata. For instance, the C# compiler uses DecimalConstantAttribute and DateTimeConstantAttribute to represent compile-time decimal or datetime constants, which aren't allowed in pure IL, and FixedBufferAttribute to represent fixed struct fields. How attributes are compiled Within a .NET assembly are a series of tables containing all the metadata for items within the assembly; for instance, the TypeDef table stores metadata on all the types in the assembly, and MethodDef does the same for all the methods and constructors. Custom attribute information is stored in the CustomAttribute table, which has references to the IL item the attribute is applied to, the constructor used (which implies the type of attribute applied), and a binary blob representing the arguments and name/value pairs used in the attribute application. For example, the following C# class: [Obsolete("Please use MyClass2", true)] public class MyClass { // ... } corresponds to the following IL class definition: .class public MyClass { .custom instance void [mscorlib]System.ObsoleteAttribute::.ctor(string, bool) = { string('Please use MyClass2') bool(true) } // ... } and results in the following entry in the CustomAttribute table: TypeDef(MyClass) MemberRef(ObsoleteAttribute::.ctor(string, bool)) blob -> { string('Please use MyClass2') bool(true) } However, there are some attributes that don't compile in this way. Pseudo custom attributes Just like there are some concepts in a language that can't be represented in IL, there are some concepts in IL that can't be represented in a language. This is where pseudo custom attributes come into play. The most obvious of these is SerializableAttribute. Although it looks like an attribute, it doesn't compile to a CustomAttribute table entry; it instead sets the serializable bit directly within the TypeDef entry for the type. This flag is fully expressible within IL; this C#: [Serializable] public class MySerializableClass {} compiles to this IL: .class public serializable MySerializableClass {} For those interested, a full list of pseudo custom attributes is available here. For the rest of this post, I'll be concentrating on the ones that deal with P/Invoke. P/Invoke attributes P/Invoke is built right into the CLR at quite a deep level; there are 2 metadata tables within an assembly dedicated solely to p/invoke interop, and many more that affect it. Furthermore, all the attributes used to specify p/invoke methods in C# or VB have their own keywords and syntax within IL. For example, the following C# method declaration: [DllImport("mscorsn.dll", SetLastError = true)] [return: MarshalAs(UnmanagedType.U1)] private static extern bool StrongNameSignatureVerificationEx( [MarshalAs(UnmanagedType.LPWStr)] string wszFilePath, [MarshalAs(UnmanagedType.U1)] bool fForceVerification, [MarshalAs(UnmanagedType.U1)] ref bool pfWasVerified); compiles to the following IL definition: .method private static pinvokeimpl("mscorsn.dll" lasterr winapi) bool marshal(unsigned int8) StrongNameSignatureVerificationEx( string marshal(lpwstr) wszFilePath, bool marshal(unsigned int8) fForceVerification, bool& marshal(unsigned int8) pfWasVerified) cil managed preservesig {} As you can see, all the p/invoke and marshal properties are specified directly in IL, rather than using attributes.
And, rather than creating entries in CustomAttribute, a whole bunch of metadata is emitted to represent this information. This single method declaration results in the following metadata being output to the assembly:
- A MethodDef entry containing basic information on the method
- Four ParamDef entries for the 3 method parameters and return type
- An entry in ModuleRef to mscorsn.dll
- An entry in ImplMap linking ModuleRef and MethodDef, along with the name of the function to import and the pinvoke options (lasterr winapi)
- Four FieldMarshal entries containing the marshal information for each parameter
Phew! Applying attributes Most of the time, when you apply an attribute to an element, an entry in the CustomAttribute table will be created to represent that application. However, some attributes represent concepts in IL that aren't expressible in the language you're coding in, and can instead result in a single bit change (SerializableAttribute and NonSerializedAttribute), or many extra metadata table entries (the p/invoke attributes) being emitted to the output assembly.
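To tie the two halves of the post together, here is a small, self-contained C# sketch based on the examples above, annotating where each kind of attribute ends up when compiled. The StrongNameNative wrapper class name is just for illustration and is not from the original article.

using System;
using System.Runtime.InteropServices;

// A true custom attribute: compiles to a row in the CustomAttribute table
// (a reference to the attribute constructor plus a blob holding the arguments).
[Obsolete("Please use MyClass2", true)]
public class MyClass { }

// A pseudo custom attribute: no CustomAttribute row is emitted; the compiler
// simply sets the serializable flag on the type's TypeDef entry.
[Serializable]
public class MySerializableClass { }

public static class StrongNameNative
{
    // P/Invoke pseudo custom attributes: DllImport and MarshalAs also produce no
    // CustomAttribute rows. Instead the compiler emits ModuleRef, ImplMap and
    // FieldMarshal metadata describing the import and the marshalling rules.
    [DllImport("mscorsn.dll", SetLastError = true)]
    [return: MarshalAs(UnmanagedType.U1)]
    public static extern bool StrongNameSignatureVerificationEx(
        [MarshalAs(UnmanagedType.LPWStr)] string wszFilePath,
        [MarshalAs(UnmanagedType.U1)] bool fForceVerification,
        [MarshalAs(UnmanagedType.U1)] ref bool pfWasVerified);
}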

    Read the article

  • Robotic Arm &ndash; Hardware

    - by Szymon Kobalczyk
This is the first in a series of articles about a project I've been building in my spare time since last summer. Actually, it all began when I was researching the topic of modeling human motion kinematics in order to create a gesture recognition library for Kinect. This ties heavily into the motion theory of robotic manipulators, so I also glanced at some designs of robotic arms. Somehow I stumbled upon this cool looking open source robotic arm: It was featured on Thingiverse and published by user jjshortcut (Jan-Jaap). Since I had for some time been hooked on toying with microcontrollers, robots and other electronics, I decided to give it a try and build it myself. In this post I will describe the hardware build of the arm and in later posts I will be writing about the software to control it. Another reason to build the arm myself was the cost factor. Even small commercial robotic arms are quite expensive – products from Lynxmotion and Dagu look great but both cost around USD $300 (actually there is one cheap arm available but it looks more like a toy to me). In comparison, this design is quite cheap. It uses seven hobby-grade servos and even the cheapest ones should work fine. The structure is built from a set of laser cut parts connected with a few metal spacers (15mm and 47mm) and lots of M3 screws. Other than that you'd only need a microcontroller board to drive the servos. So in total it comes out a lot cheaper to build it yourself than to buy an off-the-shelf robotic arm. Oh, and if you don't like this one there are a few more robotic arm projects at Thingiverse (including one by oomlout). Laser cut parts Some time ago I built another robot using laser cut parts so I knew the process already. You can grab the design files in both DXF and EPS format from Thingiverse, and there are also 3D models of each part in STL. Actually the design is split into a second project for the mini servo gripper (there is also a standard servo version available but it won't fit this arm).  I wanted to make some small adjustments to the layout and add measurements to the parts before sending them for cutting. I looked at some free 2D CAD programs, and finally did all this work using QCad 3 Beta, which worked great for me (I also tried LibreCAD but it didn't work that well). All parts are cut from 4 mm thick material. Because I was worried that acrylic is too fragile and might break, I also ordered another set cut from plywood. In the end I built it from plywood because it was easier to glue (I was told acrylic requires a special glue). Btw. I found a great laser cutter service in Kraków and highly recommend it (www.ebbox.com.pl). It cost me only USD $26 for both sets ($16 acrylic + $10 plywood). Metal parts I bought all the M3 screws and nuts at a local hardware store. Make sure to look for nylon lock (nyloc) nuts for the gripper because otherwise it unscrews and comes apart quickly. I couldn't find a local store with metal spacers and had to order them online (you'd need 11 x 47mm and 3 x 15mm). I think I paid less than USD $10 for all metal parts. Servos This arm uses five standard size servos to drive the arm itself, and two micro servos are used on the gripper. The author of the project used the Modelcraft RS-2 Servo and Modelcraft ES-05 HT Servo. I had two Futaba S3001 servos lying around, and ordered additional TowerPro SG-5010 standard size servos and TowerPro SG90 micro servos. However it turned out that the SG90 wouldn't fit in the gripper, so I had to replace it with a slightly smaller E-Sky EK2-0508 micro servo. 
Later it also turned out that the Futaba servos make some strange noise while working, so I swapped one with a TowerPro SG-5010, which has higher torque (8 kg/cm). I also bought three servo extension cables. All servos cost me USD $45. Assembly The build process is not difficult but you need to think carefully about the order of assembly. You can do the base and upper arm first. Because the two servos in the base are close together, you need to put the first one in with one piece of the lower arm already connected before you put in the second servo. Then you connect the upper arm and finally add the second piece of the lower arm to hold it together. The gripper and base require some gluing, so think that through too. Make sure to look closely at all the photos on Thingiverse (also other people's copies) and read the additional posts on jjshortcut's blog: My mini servo grippers and completed robotic arm, and Multiply the robotic arm and electronics. Here is also Rob's copy cut from aluminum. My assembled arm looks like this – I think it turned out really nice: Servo controller board The last piece of hardware I needed was an electronic board that would take commands from the PC and drive all seven servos. I could probably have used an Arduino for this task, and in fact there are several Arduino servo shields available (for example from Adafruit or Renbotics). However, one problem is that most support only up to six servos, and another is that their accuracy is limited by Arduino's timer frequency. So instead I looked for a dedicated servo controller and found the series of Maestro boards from Pololu. I picked the Pololu Mini Maestro 12-Channel USB Servo Controller. It has many nice features including a native USB connection, high resolution pulses (0.25µs) with no jitter, built-in speed and acceleration control, and even scripting capability. Another cool feature is that besides servo control, each channel can be configured as either a general input or output. So far I'm using seven channels so I still have five available to connect some sensors (for example, a distance sensor mounted on the gripper might be useful). And the last but important factor was that they have an SDK in .NET – what more could I wish for! The board itself is very small – half the size of a Tic-Tac box. I picked one up for about USD $35 in this store. Perhaps another good alternative would be the Phidgets Advanced Servo 8-Motor – but it is significantly more expensive at USD $87.30. The Maestro Controller Driver and Software package includes the Maestro Control Center program, which lets you immediately configure the board. For each servo I first figured out its move range and set the min/max limits. I played with setting the speed and acceleration values as well. A big issue for me was that there are two servos that control the position of the lower arm (shoulder joint), and both have to be moved at the same time. This is where the scripting feature of the Pololu board turned out to be very helpful. I wrote a script that synchronizes the position of the second servo with the first one – so now I only need to move one servo and the other will follow automatically. This turned out to be tricky because I couldn't find a simple offset mapping between the move ranges of the two servos – I had to divide the range into several sub-ranges and map each individually. The scripting language is a bit assembler-like but gets the job done. And there is even runtime debugging and a stack view available. Altogether I'm very happy with the Pololu Mini Maestro Servo Controller, and with this final piece I completed the build and was able to move my arm from the Maestro Control Center program.   
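The author implemented this shoulder synchronization in the Maestro's own script language; purely to illustrate the piecewise sub-range mapping idea, here is a rough C# sketch of the same logic. The calibration points, pulse values, and class name are made up for the example and are not taken from the project.

using System;

// Illustrative only: the shoulder joint is driven by two servos, and the second
// one has to mirror the first. A single fixed offset did not line them up over
// the whole range, so the mapping uses a few piecewise-linear sub-ranges.
public static class ShoulderFollower
{
    // (master, slave) calibration pairs in servo pulse units; values are made up.
    private static readonly (int Master, int Slave)[] Calibration =
    {
        (1000, 2000),
        (1250, 1760),
        (1500, 1500),
        (1750, 1240),
        (2000, 1000),
    };

    // Interpolate the slave servo target that corresponds to a master position.
    public static int MapToSlave(int masterPos)
    {
        if (masterPos <= Calibration[0].Master) return Calibration[0].Slave;
        for (int i = 1; i < Calibration.Length; i++)
        {
            if (masterPos <= Calibration[i].Master)
            {
                var (m0, s0) = Calibration[i - 1];
                var (m1, s1) = Calibration[i];
                double t = (double)(masterPos - m0) / (m1 - m0);
                return (int)Math.Round(s0 + t * (s1 - s0));
            }
        }
        return Calibration[Calibration.Length - 1].Slave;
    }
}

Moving the arm then amounts to setting the master channel and immediately setting the mirrored channel to MapToSlave(masterPos) through whatever servo controller API you are using.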
The total cost of my robotic arm was:
$10 laser cut parts
$10 metal parts
$45 servos
$35 servo controller
-----------------------
$100 total
So here you have all the information about the hardware. In the next post I'll start talking about the software that I wrote in Microsoft Robotics Developer Studio 4. Stay tuned!

    Read the article

  • bluetooth not working on Ubuntu 13.10

    - by iacopo
I upgraded Ubuntu from 13.04 to 13.10 and my Bluetooth stopped working. When I open the Bluetooth settings I'm able to turn it ON, but visibility doesn't show anything and it doesn't detect any device. When I type: dmesg | grep Blue [ 2.046249] usb 3-1: Product: Bluetooth V2.0 Dongle [ 2.046252] usb 3-1: Manufacturer: Bluetooth v2.0 [ 15.255710] Bluetooth: Core ver 2.16 [ 15.255748] Bluetooth: HCI device and connection manager initialized [ 15.255759] Bluetooth: HCI socket layer initialized [ 15.255765] Bluetooth: L2CAP socket layer initialized [ 15.255776] Bluetooth: SCO socket layer initialized [ 20.110379] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 [ 20.110386] Bluetooth: BNEP filters: protocol multicast [ 20.110400] Bluetooth: BNEP socket layer initialized [ 20.120635] Bluetooth: RFCOMM TTY layer initialized [ 20.120656] Bluetooth: RFCOMM socket layer initialized [ 20.120660] Bluetooth: RFCOMM ver 1.11 When I type: lsusb Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 002 Device 002: ID 0bc2:2300 Seagate RSS LLC Expansion Portable Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 006 Device 002: ID 0e6a:6001 Megawin Technology Co., Ltd GEMBIRD Flexible keyboard KB-109F-B-DE Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 005 Device 002: ID 13ee:0001 MosArt Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 003 Device 002: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode) Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub When I type: hciconfig -a hci0: Type: BR/EDR Bus: USB BD Address: 00:1B:10:00:2A:EC ACL MTU: 1017:8 SCO MTU: 64:0 DOWN RX bytes:457 acl:0 sco:0 events:16 errors:0 TX bytes:68 acl:0 sco:0 commands:16 errors:0 Features: 0xff 0xff 0x8d 0xfe 0x9b 0xf9 0x00 0x80 Packet type: DM1 DM3 DM5 DH1 DH3 DH5 HV1 HV2 HV3 Link policy: Link mode: SLAVE ACCEPT When I type: rfkill list 0: phy0: Wireless LAN Soft blocked: yes Hard blocked: no 1: hci0: Bluetooth Soft blocked: no Hard blocked: no When I type: sudo gedit /etc/bluetooth/main.conf [General] # List of plugins that should not be loaded on bluetoothd startup #DisablePlugins = network,input # Default adaper name # %h - substituted for hostname # %d - substituted for adapter id Name = %h-%d # Default device class. Only the major and minor device class bits are # considered. Class = 0x000100 # How long to stay in discoverable mode before going back to non-discoverable # The value is in seconds. Default is 180, i.e. 3 minutes. # 0 = disable timer, i.e. stay discoverable forever DiscoverableTimeout = 0 # How long to stay in pairable mode before going back to non-discoverable # The value is in seconds. Default is 0. # 0 = disable timer, i.e. stay pairable forever PairableTimeout = 0 # Use some other page timeout than the controller default one # which is 16384 (10 seconds). PageTimeout = 8192 # Automatic connection for bonded devices driven by platform/user events. # If a platform plugin uses this mechanism, automatic connections will be # enabled during the interval defined below. Initially, this feature # intends to be used to establish connections to ATT channels. AutoConnectTimeout = 60 # What value should be assumed for the adapter Powered property when # SetProperty(Powered, ...) hasn't been called yet. 
Defaults to true InitiallyPowered = true # Remember the previously stored Powered state when initializing adapters RememberPowered = false # Use vendor id source (assigner), vendor, product and version information for # DID profile support. The values are separated by ":" and assigner, VID, PID # and version. # Possible vendor id source values: bluetooth, usb (defaults to usb) #DeviceID = bluetooth:1234:5678:abcd # Do reverse service discovery for previously unknown devices that connect to # us. This option is really only needed for qualification since the BITE tester # doesn't like us doing reverse SDP for some test cases (though there could in # theory be other useful purposes for this too). Defaults to true. ReverseServiceDiscovery = true # Enable name resolving after inquiry. Set it to 'false' if you don't need # remote devices name and want shorter discovery cycle. Defaults to 'true'. NameResolving = true # Enable runtime persistency of debug link keys. Default is false which # makes debug link keys valid only for the duration of the connection # that they were created for. DebugKeys = false # Enable the GATT functionality. Default is false EnableGatt = false when I digit: dmesg | grep Bluetooth [ 2.013041] usb 3-1: Product: Bluetooth V2.0 Dongle [ 2.013049] usb 3-1: Manufacturer: Bluetooth v2.0 [ 13.798293] Bluetooth: Core ver 2.16 [ 13.798338] Bluetooth: HCI device and connection manager initialized [ 13.798352] Bluetooth: HCI socket layer initialized [ 13.798357] Bluetooth: L2CAP socket layer initialized [ 13.798368] Bluetooth: SCO socket layer initialized [ 20.184162] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 [ 20.184173] Bluetooth: BNEP filters: protocol multicast [ 20.184197] Bluetooth: BNEP socket layer initialized [ 20.238947] Bluetooth: RFCOMM TTY layer initialized [ 20.238983] Bluetooth: RFCOMM socket layer initialized [ 20.239018] Bluetooth: RFCOMM ver 1.11 When I digit: uname -a Linux casa-desktop 3.11.0-13-generic #20-Ubuntu SMP Wed Oct 23 07:38:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux When I digit: lsmod Module Size Used by parport_pc 32701 0 rfcomm 69070 4 bnep 19564 2 ppdev 17671 0 ip6t_REJECT 12910 1 xt_hl 12521 6 ip6t_rt 13507 3 nf_conntrack_ipv6 18938 9 nf_defrag_ipv6 34616 1 nf_conntrack_ipv6 ipt_REJECT 12541 1 xt_LOG 17718 8 xt_limit 12711 11 xt_tcpudp 12884 32 xt_addrtype 12635 4 nf_conntrack_ipv4 15012 9 nf_defrag_ipv4 12729 1 nf_conntrack_ipv4 xt_conntrack 12760 18 ip6table_filter 12815 1 ip6_tables 27025 1 ip6table_filter nf_conntrack_netbios_ns 12665 0 nf_conntrack_broadcast 12589 1 nf_conntrack_netbios_ns nf_nat_ftp 12741 0 nf_nat 26653 1 nf_nat_ftp kvm_amd 59958 0 nf_conntrack_ftp 18608 1 nf_nat_ftp kvm 431315 1 kvm_amd nf_conntrack 91736 8 nf_nat_ftp,nf_conntrack_netbios_ns,nf_nat,xt_conntrack,nf_conntrack_broadcast,nf_conntrack_ftp,nf_conntrack_ipv4,nf_conntrack_ipv6 iptable_filter 12810 1 crct10dif_pclmul 14289 0 crc32_pclmul 13113 0 ip_tables 27239 1 iptable_filter snd_hda_codec_realtek 55704 1 ghash_clmulni_intel 13259 0 aesni_intel 55624 0 aes_x86_64 17131 1 aesni_intel snd_hda_codec_hdmi 41117 1 x_tables 34059 13 ip6table_filter,xt_hl,ip_tables,xt_tcpudp,xt_limit,xt_conntrack,xt_LOG,iptable_filter,ip6t_rt,ipt_REJECT,ip6_tables,xt_addrtype,ip6t_REJECT lrw 13257 1 aesni_intel snd_hda_intel 48171 5 gf128mul 14951 1 lrw glue_helper 13990 1 aesni_intel ablk_helper 13597 1 aesni_intel joydev 17377 0 cryptd 20329 3 ghash_clmulni_intel,aesni_intel,ablk_helper snd_hda_codec 188738 3 snd_hda_codec_realtek,snd_hda_codec_hdmi,snd_hda_intel arc4 12608 2 
snd_hwdep 13602 1 snd_hda_codec rt2800pci 18690 0 snd_pcm 102033 3 snd_hda_codec_hdmi,snd_hda_codec,snd_hda_intel radeon 1402449 3 rt2800lib 79963 1 rt2800pci btusb 28267 0 rt2x00pci 13287 1 rt2800pci rt2x00mmio 13603 1 rt2800pci snd_page_alloc 18710 2 snd_pcm,snd_hda_intel rt2x00lib 55238 4 rt2x00pci,rt2800lib,rt2800pci,rt2x00mmio snd_seq_midi 13324 0 mac80211 596969 3 rt2x00lib,rt2x00pci,rt2800lib snd_seq_midi_event 14899 1 snd_seq_midi ttm 83995 1 radeon snd_rawmidi 30095 1 snd_seq_midi cfg80211 479757 2 mac80211,rt2x00lib drm_kms_helper 52651 1 radeon snd_seq 61560 2 snd_seq_midi_event,snd_seq_midi bluetooth 371880 12 bnep,btusb,rfcomm microcode 23518 0 eeprom_93cx6 13344 1 rt2800pci snd_seq_device 14497 3 snd_seq,snd_rawmidi,snd_seq_midi crc_ccitt 12707 1 rt2800lib snd_timer 29433 2 snd_pcm,snd_seq snd 69141 21 snd_hda_codec_realtek,snd_hwdep,snd_timer,snd_hda_codec_hdmi,snd_pcm,snd_seq,snd_rawmidi,snd_hda_codec,snd_hda_intel,snd_seq_device,snd_seq_midi psmouse 97626 0 drm 296739 5 ttm,drm_kms_helper,radeon k10temp 13126 0 soundcore 12680 1 snd serio_raw 13413 0 i2c_algo_bit 13413 1 radeon i2c_piix4 22106 0 video 19318 0 mac_hid 13205 0 lp 17759 0 parport 42299 3 lp,ppdev,parport_pc hid_generic 12548 0 usbhid 53014 0 hid 105818 2 hid_generic,usbhid pata_acpi 13038 0 usb_storage 62062 1 r8169 67341 0 sdhci_pci 18985 0 sdhci 42630 1 sdhci_pci mii 13934 1 r8169 pata_atiixp 13242 0 ohci_pci 13561 0 ahci 25819 2 libahci 31898 1 ahci Someone can help me?

    Read the article

  • PeopleSoft Upgrades, Fusion, & BI for Leading European PeopleSoft Applications Customers

    - by Mark Rosenberg
    With so many industry-leading services firms around the globe managing their businesses with PeopleSoft, it’s always an adventure setting up times and meetings for us to keep in touch with them, especially those outside of North America who often do not get to join us at Oracle OpenWorld. Fortunately, during the first two weeks of May, Nigel Woodland (Oracle’s Service Industries Director for the EMEA region) and I successfully blocked off our calendars to visit seven different customers spanning four countries in Western Europe. We met executives and leaders at four Staffing industry firms, two Professional Services firms that engage in consulting and auditing, and a Financial Services firm. As we shared the latest information regarding product capabilities and plans, we also gained valuable insight into the hot technology topics facing these businesses. What we heard was both informative and inspiring, and I suspect other Oracle PeopleSoft applications customers can benefit from one or more of the following observations from our trip. Great IT Plans Get Executed When You Respect the Users Each of our visits followed roughly the same pattern. After introductions, Nigel outlined Oracle’s product and technology strategy, including a discussion of how we at Oracle invest in each layer of the “technology stack” to provide customers with unprecedented business management capabilities and choice. Then, I provided the specifics of the PeopleSoft product line’s investment strategy, detailing the dramatic number of rich usability and functionality enhancements added to release 9.1 since its general availability in 2009 and the game-changing capabilities slated for 9.2. What was most exciting about each of these discussions was that shortly after my talking about what customers can do with release 9.1 right now to drive up user productivity and satisfaction, I saw the wheels turning in the minds of our audiences. Business analyst and end user-configurable tools and technologies, such as WorkCenters and the Related Action Framework, that provide the ability to tailor a “central command center” to the exact needs of each recruiter, biller, and every other role in the organization were exactly what each of our customers had been looking for. Every one of our audiences agreed that these tools which demonstrate a respect for the user would finally help IT pole vault over the wall of resistance that users had often raised in the past. With these new user-focused capabilities, IT is positioned to definitively partner with the business, instead of drag the business along, to unlock the value of their investment in PeopleSoft. This topic of respecting the user emerged during our very first visit, which was at Vital Services Group at their Head Office “The Mill” in Manchester, England. (If you are a student of architecture and are ever in Manchester, you should stop in to see this amazingly renovated old mill building.) I had just finished explaining our PeopleSoft 9.2 roadmap, and Mike Code, PeopleSoft Systems Manager for this innovative staffing company, said, “Mark, the new features you’ve shown us in 9.1/9.2 are very relevant to our business. As we forge ahead with the 9.1 upgrade, the ability to configure a targeted user interface with WorkCenters, Related Actions, Pivot Grids, and Alerts will enable us to satisfy the business that this upgrade is for them and will deliver tangible benefits. 
In fact, you’ve highlighted that we need to start talking to the business to keep up the momentum to start reviewing the 9.2 upgrade after we get to 9.1, because as much as 9.1 and PeopleTools 8.52 offers, what you’ve shown us for 9.2 is what we’ve envisioned was ultimately possible with our investment in PeopleSoft applications.” We also received valuable feedback about our investment for the Staffing industry when we visited with Hans Wanders, CIO of Randstad (the second largest Staffing company in the world) in the Netherlands. After our visit, Hans noted, “It was very interesting to see how the PeopleSoft applications have developed. I was truly impressed by many of the new developments.” Hans and Mike, sincere thanks for the validation that our team’s hard work and dedication to “respecting the users” is worth the effort! Co-existence of PeopleSoft and Fusion Applications Just Makes Sense As a “product person,” one of the most rewarding things about visiting customers is that they actually want to talk to me. Sometimes, they want to discuss a product area that we need to enhance; other times, they are interested in learning how to extract more value from their applications; and still others, they want to tell me how they are using the applications to drive real value for the business. During this trip, I was very pleased to hear that several of our customers not only thought the co-existence of Fusion applications alongside PeopleSoft applications made sense in theory, but also that they were aggressively looking at how to deploy one or more Fusion applications alongside their PeopleSoft HCM and FSCM applications. The most common deployment plan in the works by three of the organizations is to upgrade to PeopleSoft 9.1 or 9.2, and then adopt one of the new Fusion HCM applications, such as Fusion Performance Management or the full suite of  Fusion Talent Management. For example, during an applications upgrade planning discussion with the staffing company Hays plc., Mark Thomas, who is Hays’ UK IT Director, commented, “We are very excited about where we can go with the latest versions of the PeopleSoft applications in conjunction with Fusion Talent Management.” Needless to say, this news was very encouraging, because it reiterated that our applications investment strategy makes good business sense for our customers. Next Generation Business Intelligence Is the Key to the Future The third, and perhaps most exciting, lesson I learned during this journey is that our audiences already know that the latest generation of Business Intelligence technologies will be the “secret sauce” for organizations to transform business in radical ways. While a number of the organizations we visited on the trip have deployed or are deploying Oracle Business Intelligence Enterprise Edition and the associated analytics applications to provide dashboards of easy-to-understand, user-configurable metrics that help optimize business performance according to current operating procedures, what’s most exciting to them is being able to use Business Intelligence to change the way an organization does business, grows revenue, and makes a profit. In particular, several executives we met asked whether we can help them minimize the need to have perfectly structured data and at the same time generate analytics that improve order fulfillment decision-making. 
To them, the path to future growth lies in having the ability to analyze unstructured data rapidly and intuitively and leveraging technology’s ability to detect patterns that a human cannot reasonably be expected to see. For illustrative purposes, here is a good example of a business problem where analyzing a combination of structured and unstructured data can produce better results. If you have a resource manager trying to decide which person would be the best fit for an assignment in terms of ensuring (a) client satisfaction, (b) the individual’s satisfaction with the work, (c) least travel distance, and (d) highest margin, you traditionally compare resource qualifications to assignment needs, calculate margins on past work with the client, and measure distances. To perform these comparisons, you are likely to need the organization to have profiles setup, people ranked against profiles, margin targets setup, margins measured, distances setup, distances measured, and more. As you can imagine, this requires organizations to plan and implement data setup, capture, and quality management initiatives to ensure that dependable information is available to support resourcing analysis and decisions. In the fast-paced, tight-budget world in which most organizations operate today, the effort and discipline required to maintain high-quality, structured data like those described in the above example are certainly not desirable and in some cases are not feasible. You can imagine how intrigued our audiences were when I informed them that we are ready to help them analyze volumes of unstructured data, detect trends, and produce recommendations. Our discussions delved into examples of how the firms could leverage Oracle’s Secure Enterprise Search and Endeca technologies to keyword search against, compare, and learn from unstructured resource and assignment data. We also considered examples of how they could employ Oracle Real-Time Decisions to generate statistically significant recommendations based on similar resourcing scenarios that have produced the desired satisfaction and profit margin results. --- Although I had almost no time for sight-seeing during this trip to Europe, I have to say that it may have been one of the most energizing and engaging trips of my career. Showing these dedicated customers how they can give every user a uniquely tailored set of tools and address business problems in ways that have to date been impossible made the journey across the Atlantic more than worth it. If any of these three topics intrigue you, I’d recommend you contact your Oracle applications representative to arrange for more detailed discussions with the appropriate members of our organization.

    Read the article
