Search Results

Search found 208 results on 9 pages for 'ptah opener of the mouth'.

  • Simulate stochastic bipartite network based on trait values of species - in R

    - by Scott Chamberlain
    I would like to create bipartite networks in R. For example, if you have a data.frame of two types of species (that can only interact across species, not within species), and each species has a trait value (e.g., mouth size in the predator determines which prey species it gets to eat), how do we simulate a network based on the traits of the species (that is, two species can only interact if their trait values overlap, for instance)? UPDATE: Here is a minimal example of what I am trying to do: 1) create phylogenetic trees; 2) simulate traits on the phylogenies; 3) create networks based on species trait values.

        # packages
        install.packages(c("ape", "phytools"))
        library(ape); library(phytools)

        # Make phylogenetic trees
        tree_predator <- rcoal(10)
        tree_prey <- rcoal(10)

        # Simulate traits on each tree
        trait_predator <- fastBM(tree_predator)
        trait_prey <- fastBM(tree_prey)

        # Create network of predator and prey
        ## This is the part I can't do yet. I want to create bipartite networks, where
        ## predator and prey interact based on certain criteria. For example, predator
        ## species A and prey species B only interact if their body size ratio is
        ## greater than X.
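    A possible shape for that last step, sketched in JavaScript only because the pairing rule itself is language-neutral; the ratio rule, the threshold X and the function name are illustrative assumptions, not a known answer. Given the two simulated trait vectors, test every predator/prey pair against the criterion and keep the qualifying pairs as edges:

        // Hypothetical sketch: build bipartite edges from two trait vectors.
        // Assumed rule: predator i eats prey j if the trait ratio exceeds X.
        function bipartiteEdges(predTraits, preyTraits, X) {
          const edges = [];
          predTraits.forEach((pred, i) => {
            preyTraits.forEach((prey, j) => {
              if (pred / prey > X) {
                edges.push([i, j]); // an interaction between predator i and prey j
              }
            });
          });
          return edges;
        }

        // Example: three predators, three prey, threshold 1.5
        console.log(bipartiteEdges([2.0, 3.5, 1.0], [1.0, 2.0, 0.5], 1.5));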

    Read the article

  • Can't get JavaScript to check for null objects

    - by CountMurphy
    I really don't know what the issue is here. As far as I can see, the code is simple and should work fine.

        var Prices = "";
        for (var PriceCount = 1; PriceCount <= 120; PriceCount++) {
            var CurrentPrice = "Price" + PriceCount;
            if (prevDoc.getElementById(CurrentPrice).value != null) {
                if (Prices == "") {
                    Prices = prevDoc.getElementById(CurrentPrice).value;
                } else {
                    Prices += "," + prevDoc.getElementById(CurrentPrice).value;
                }
            } else {
                break;
            }
        }

    There could be up to 120 hidden inputs on the form. The moment we check for an input that doesn't exist, the loop should break. My test page has two input elements that get pulled. On the third (the null) I get this error in Firebug:

        prevDoc.getElementById(CurrentPrice) is null
        if (prevDoc.getElementById(CurrentPrice).value != null) {

    Yes, it is null... that's what the check is for ?_? Does anyone know what I'm doing wrong? This seems like it should be really straightforward. EDIT: for clarity's sake, prevDoc = window.opener.document
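    For what it's worth, the error message points at the likely culprit: getElementById returns null when no element matches, so the expression dereferences null with .value before the != null comparison ever runs. A minimal sketch of the usual fix (assuming the same prevDoc setup), testing the element itself before touching its value:

        var Prices = [];
        for (var i = 1; i <= 120; i++) {
            var el = prevDoc.getElementById("Price" + i);
            if (el == null) break;  // input missing: stop without dereferencing
            Prices.push(el.value);  // input exists: safe to read its value
        }
        Prices = Prices.join(",");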

    Read the article

  • How to upgrade a 1.4.3 TortoiseSVN-created repository to 1.6.x?

    - by SiegeX
    A few years ago, TortoiseSVN 1.4.3 was deployed to our software development team, and we are now looking at upgrading the client to the latest 1.6.x version. I had hoped this upgrade would be transparent, with the additional features and modifications being client-side. For the most part, this was true except for a very important feature -- merging. When I try to merge a feature branch back into trunk I get a show-stopping "Merge tracking not supported" error. Here are some facts worth noting: When the repo was first created (before I was on board), it was created via the TortoiseSVN client itself. We do not have an 'svn server daemon' per se; rather, the repository folders/database reside on a share folder that is accessible from our workstation machines via file:///. This was actually an eye opener for me; I had always thought there was some SVN server daemon we were talking to. We do not have any access to the underlying machine hosting the SVN share other than the ability to read/write to the share itself. I don't even know what OS the machine is running. This share server was chosen because its drives are backed up nightly by our IT group. In all honesty, we really don't need the merge tracking feature, although it would be nice to have. For the time being it would be sufficient to be able to use a 1.6.x TortoiseSVN client on the 1.4.3 repository and have it merge (sans tracking) without error. So now the question becomes: how does one upgrade a client-created 1.4.3 repo to a 1.6.x-compatible version without access to the underlying machine the repo resides on? I was hoping the TortoiseSVN client itself had the ability to do this, but that does not appear to be the case. Will I be forced to copy the entire repo over to my local drive, run some svn commands to upgrade the repo locally, then copy the repo back to the share point? If so, will doing this break any compatibility with the 1.4.3 clients in case we can't upgrade them all at the same time? Thanks for the help.

    Read the article

  • SQLAuthority News – Meeting SQL Friends – SQLPASS 2011 Event Log

    - by pinaldave
    One of the biggest reasons I go to SQLPASS is that my friends are going there too. There are so many friends with whom I often talk on Facebook and Twitter, but I rarely get time to meet them and talk with them. One thing I am usually sure of: many of them will attend SQLPASS. This is one event which every SQL Server enthusiast should attend. Just like everybody, I had a pleasant time meeting many of my SQL friends. There were so many friends I met without clicking a photo, and so many friends who clicked photos on their cameras which I do not have. Here is the 1% of the photos which I do have. If you are not in a photo, it does not mean I have less respect for our friendship. Please post a link to our photo together :) I was very fortunate that I was able to snap a quick photograph with Dr. David DeWitt. I stood outside of the hall waiting for him to show up, and when he was heading down from the convention center I asked him if I could have one photo for my memory lane; very politely he agreed. It indeed made my day! Pinal Dave with Dr. David DeWitt Every single time I meet Steve, I make sure I have one photo for my memory. Steve is so kind every single time. If you know SQL and do not know Steve Jones, you do not know SQL (IMHO). Following is the photograph with Michael McLean. More details about this photo in a future blog post! Pinal Dave, Michael McLean, and Rick Morelan Arnie always shares his wisdom with me. I still remember when I visited the USA for the very first time: I was standing alone in a corner, and Arnie walked up to me and introduced me to every single person he knew. Talking to Arnie is always a pleasure and inspiring. Arnie Rowland and Pinal Dave I am now a published author and have written two books so far. I am fortunate to have Rick Morelan as co-author of both of my books. He is a great guy and very easy to become friends with. I am very much impressed by him and his kindness during book co-authoring. Here is the very first of our photographs together at SQLPASS. Rick Morelan and Pinal Dave Diego Nogare and I have been talking for a long time on Twitter and on various social media channels. I finally got the chance to meet my friend from Brazil. It was an excellent experience to meet a friend whom one has wanted to meet for a long time and never had the chance to before. Buck Woody – who does not know Buck? He is funny, kind and, most important, a friend to everyone. Buck is so kind that he does not hesitate to approach people, even though he is famous and well known in the community. Every time I meet him I learn something. He is always smiling and approachable. Pinal Dave and Buck Woody Rushabh Mehta is the current SQL PASS president and a personal friend. He always has a smiling face and tremendous love for the SQL community. I often wonder where he gets the time for all the effort he puts in for the community. I never miss a chance to meet and greet him. Even though he is a renowned SQL guru and an extremely busy person, every single time I meet him he asks me, "How are Nupur and Shaivi?" He even remembers my wife's and daughter's names. I am touched. Rushabh Mehta and Pinal Dave Nigel Sammy has an extremely good sense of humor and passion for the community. We have excellent synergy when we are together. The attached photo was taken while I was talking to him on the Seattle shoreline about SQL. Pinal Dave and Nigel Sammy Rick Morelan wanted this trip of mine to be memorable. I am vegetarian, and I told him that I do not like seafood. Well, to prove a point, he took me to a fantastic seafood restaurant in Seattle and treated me to mouth-watering vegetarian dishes. I think when I go to Seattle next time, I am going to make him take me to the same place again. Rick, Rushabh, Pinal and Paras Well, this is a short summary of a few of the friends I met at Seattle. What is life without friends, eh? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL PASS, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

    Read the article

  • Why Software Sucks...and What You Can Do About It – book review

    - by DigiMortal
    How do our users see the products we are writing for them, and how happy are they with our work? Are they able to get their work done without fighting with cool features and crashes, or are they just switching off the resistance part of their brain to survive our software? Yeah, the overall picture of the software usability landscape is not very nice. Okay, it is not even close to nice. But, fortunately, Why Software Sucks...and What You Can Do About It by David S. Platt explains everything. Why Software Sucks… is a book for software users, but I consider it a must-read also for developers, and especially for their managers, whose politics often kill all usability topics as soon as they appear. For managers, usability is a soft topic that can be manipulated whichever way best suits the current state of the project. Although developers are not UI designers and usability experts, they are still very often forced to deal with these topics, and this is how usability problems start (of course, designers are also able to produce designs that are stupid and too hard for users to use, but this blog here is about development). I found this book to be a very interesting and funny read. It is not a humor book, but it explains everything in a way that makes you remember very well what you just read. It took me about three evenings to go through this book, and I am still enjoying what I found and how the author explains our weird young field to end users. I suggest this book to all developers – while you are demanding that your management hire or outsource a usability expert, you will at least be causing less pain to end users. So, go and buy this book, just like I did. And… then, thanks to Mr. Platt :) There is one more book I suggest you read if you are interested in usability - Don't Make Me Think: A Common Sense Approach to Web Usability, 2nd Edition by Steve Krug.

    Editorial review from Amazon: Today's software sucks. There's no other good way to say it. It's unsafe, allowing criminal programs to creep through the Internet wires into our very bedrooms. It's unreliable, crashing when we need it most, wiping out hours or days of work with no way to get it back. And it's hard to use, requiring large amounts of head-banging to figure out the simplest operations. It's no secret that software sucks. You know that from personal experience, whether you use computers for work or personal tasks. In this book, programming insider David Platt explains why that's the case and, more importantly, why it doesn't have to be that way. And he explains it in plain, jargon-free English that's a joy to read, using real-world examples with which you're already familiar. In the end, he suggests what you, as a typical user, without a technical background, can do about this sad state of our software—how you, as an informed consumer, don't have to take the abuse that bad software dishes out. As you might expect from the book's title, Dave's expose is laced with humor—sometimes outrageous, but always dead on. You'll laugh out loud as you recall incidents with your own software that made you cry. You'll slap your thigh with the same hand that so often pounded your computer desk and wished it was a bad programmer's face. But Dave hasn't written this book just for laughs. He's written it to give long-overdue voice to your own discovery—that software does, indeed, suck, but it shouldn't.

    Table of contents:
    Acknowledgments
    Introduction
    Chapter 1: Who're You Calling a Dummy? (Where We Came From; Why It Still Sucks Today; Control versus Ease of Use; I Don't Care How Your Program Works; A Bad Feature and a Good One; Stopping the Proceedings with Idiocy; Testing on Live Animals; Where We Are and What You Can Do)
    Chapter 2: Tangled in the Web (Where We Came From; How It Works; Why It Still Sucks Today; Client-Centered Design versus Server-Centered Design; Where's My Eye Opener?; It's Obvious—Not!; Splash, Flash, and Animation; Testing on Live Animals; What You Can Do about It)
    Chapter 3: Keep Me Safe (The Way It Was; Why It Sucks Today; What Programmers Need to Know, but Don't; A Human Operation; Budgeting for Hassles; Users Are Lazy; Social Engineering; Last Word on Security; What You Can Do)
    Chapter 4: Who the Heck Are You? (Where We Came From; Why It Still Sucks Today; Incompatible Requirements; OK, So Now What?)
    Chapter 5: Who're You Looking At? (Yes, They Know You; Why It Sucks More Than Ever Today; Users Don't Know Where the Risks Are; What They Know First; Milk You with Cookies?; Privacy Policy Nonsense; Covering Your Tracks; The Google Conundrum; Solution)
    Chapter 6: Ten Thousand Geeks, Crazed on Jolt Cola (See Them in Their Native Habitat; All These Geeks; Who Speaks, and When, and about What; Selling It; The Next Generation of Geeks—Passing It On)
    Chapter 7: Who Are These Crazy Bastards Anyway? (Homo Logicus; Testosterone Poisoning; Control and Contentment; Making Models; Geeks and Jocks; Jargon; Brains and Constraints; Seven Habits of Geeks)
    Chapter 8: Microsoft: Can't Live With 'Em and Can't Live Without 'Em (They Run the World; Me and Them; Where We Came From; Why It Sucks Today; Damned if You Do, Damned if You Don't; We Love to Hate Them; Plus ça Change; Growing-Up Pains; What You Can Do about It; The Last Word)
    Chapter 9: Doing Something About It (1. Buy; 2. Tell; 3. Ridicule; 4. Trust; 5. Organize)
    Epilogue
    About the Author

    Read the article

  • EU Digital Agenda scores 85/100

    - by trond-arne.undheim
    If the Digital Agenda were a bottle of wine and I were wine critic Robert Parker, I would say the Digital Agenda has "a great bouquet, many good elements, with an astringent, dry and puckering mouthfeel that will not please everyone, but still displaying some finesse. A somewhat controlled effort with no surprises and a few noticeable flaws in the delivery. Noticeably shorter aftertaste than advertised by the producers. Score: 85/100. Enjoy now". The EU Digital Agenda states that "standards are vital for interoperability" and has a whole chapter on interoperability and standards. With this strong emphasis, there is hope the EU's outdated standardization system is finally headed for reform. It has been 23 years since the legal framework of standardisation was completed by Council Decision 87/95/EEC in the Information and Communications Technology (ICT) sector. Standardization is market driven. For several decades the IT industry has been developing standards and specifications in global open standards development organisations (fora/consortia), many of which have transparency procedures and practices far superior to those of the European Standards Organizations. The Digital Agenda rightly states: "reflecting the rise and growing importance of ICT standards developed by certain global fora and consortia". Some fora/consortia, of course, are distorted, influenced by single vendors, have a poor track record, and need constant vigilance, but they are the minority. Therefore, the recognition needs to be accompanied by eligibility criteria focused on openness. Will the EU reform its ICT standardization by the end of 2010? Possibly, and only if DG Enterprise takes on board that Information and Communications Technologies (ICTs) have driven half of the productivity growth in Europe over the past 15 years, a prominent fact in the EU's excellent Digital Competitiveness report 2010 published on Monday 17 May. It is OK to single out the ICT sector. It simply is the most important sector right now, as it fuels growth in all other sectors. Let's not wait for the entire standardization package, which may take another few years. Europe does not have time. The Digital Agenda is an umbrella strategy with deliveries from a host of actors across the Commission. For instance, the EU promises to issue "guidance on transparent ex-ante disclosure rules for essential intellectual property rights and licensing terms and conditions in the context of standard setting" by 2011, in the Horizontal Guidelines now out for public consultation by DG COMP, and to some extent in DG ENTR's standardization policy reform. This is important. The EU will issue procurement guidance as interoperability frameworks are put into practice. This is a joint responsibility of several DGs, and is likely to suffer coordination problems, controversy and delays. We have seen plenty of the latter already, and I have commented on the Commission's own interoperability elsewhere, with mixed luck. :( Yesterday, I watched the cartoonesque Korean western film The Good, the Bad and the Weird. In the movie (and I mean in the movie only), a bandit, a thief, and a bounty hunter, all excellent at whatever they do, fight for a treasure map. Whether that is a good analogy for the situation within the Commission, others are better judges of than I. However, as a movie fanatic, I still await the final shoot-out, and, as in the film, the only certainty is that "life is about chasing and being chased". The missed opportunity (in this case not following up the push from Member States to better define open-standards-based interoperability) is a casualty of the chaos that ensued in the European Wild West (and I mean that in the most endearing sense, and my excuses beforehand to actors who perhaps justifiably cannot bear being compared to fictional movie characters). Instead of exposing the ongoing fight, the EU opted for the legalistic use of the term "standards" throughout the document. This is a term that, to the EU, excludes most standards used by the IT industry worldwide. So, while it, for a moment, meant "weapon down", it will not lead to lasting peace. The Digital Agenda calls for the Member States to "Implement commitments on interoperability and standards in the Malmö and Granada Declarations by 2013". This is a far cry from the actual Ministerial Declarations, which called upon the Commission to help them with this implementation by recognizing and further defining open-standards-based interoperability. Unless there is more forthcoming from the Commission, the market's judgement will be: you simply fall short. Generally, I think the EU focus now should be "from policy to practice", and the Digital Agenda does indeed stop short of tackling some highly practical issues. There is need for progress beyond the Digital Agenda. Here are some suggestions that would help Europe re-take global leadership on openness, public sector reform, and economic growth:

    A strong European software strategy centred around open-standards-based interoperability, by 2011.
    An ambitious new eCommission strategy for 2011-15, focused on migration to open standards by 2015.
    Aligning the IT portfolio across the Commission into one Digital Agenda DG by 2012.
    Focusing all best-practice exchange in eGovernment on one social networking site, epractice.eu (full disclosure: I had a role in getting that site up and running).
    Prioritizing public sector needs in global standardization over European standardization by 2014.

    Read the article

  • In Social Relationship Management, the Spirit is Willing, but Execution is Weak

    - by Mike Stiles
    In our final talk in this series with Aberdeen’s Trip Kucera, we wanted to find out if enterprise organizations are actually doing anything about what they’re learning around the importance of communicating via social and using social listening for a deeper understanding of customers and prospects. We found out that if your brand is lagging behind, you’re not alone. Spotlight: How was Aberdeen able to find out if companies are putting their money where their mouth is when it comes to implementing social across the enterprise? Trip: One way to think about the relative challenges a business has in a given area is to look at the gap between “say” and “do.” The first of those words reveals the brand’s priorities, while the second reveals their ability to execute on those priorities. In Aberdeen’s research, we capture this by asking firms to rank the value of a set of activities from one on the low end to five on the high end. We then ask them to rank their ability to execute those same activities, again on a one to five, not effective to highly effective scale. Spotlight: And once you get their self-assessments, what is it you’re looking for? Trip: There are two things we’re looking for in this analysis. The first is we want to be able to identify the widest gaps between perception of value and execution. This suggests impediments to adoption or simply a high level of challenge, be it technical or otherwise. It may also suggest areas where we can expect future investment and innovation. Spotlight: So the biggest potential pain points surface, places where they know something is critical but also know they aren’t doing much about it. What’s the second thing you look for? Trip: The second thing we want to do is look at specific areas in which high-performing companies, the Leaders, are out-executing the Followers. This points to the business impact of these activities since Leaders are defined by a set of business performance metrics. Put another way, we’re correlating adoption of specific business competencies with performance, looking for what high-performers do differently. Spotlight: Ah ha, that tells us what steps the winners are taking that are making them winners. So what did you find out? Trip: Generally speaking, we see something of a glass curtain when it comes to the social relationship management execution gap. There isn’t a single social media activity in which more than 50% of respondents indicated effectiveness, which would be a 4 or 5 on that 1-5 scale. This despite the fact that 70% of firms indicate that generating positive social media mentions is valuable or very valuable, a 4 or 5 on our 1-5 scale. Spotlight: Well at least they get points for being honest. The verdict they’re giving themselves is that they just aren’t cutting it in these highly critical social development areas. Trip: And the widest gap is around directly engaging with customers and/or prospects on social networks, which 69% of firms rated as valuable but only 34% of companies say they are executing well. Perhaps even more interesting is that these two are interdependent since you’re most likely to generate goodwill on social through happy, engaged customers. This data also suggests that social is largely being used as a broadcast channel rather than for one-to-one engagement. As we’ve discussed previously, social is an inherently personal medium. 
Spotlight: And if they’re still using it as a broadcast channel, that shows they still fail to understand the root of social and see it as just another outlet for their ads and push-messaging. That’s depressing. Trip: A second way to evaluate this data is by using Aberdeen’s performance benchmarking. The story is a bit different, but consistent in its own way. The first thing we notice is that Leaders are more effective in their execution of several key social relationship management capabilities, namely generating positive mentions and engaging with “influencers” and customers. Based on the fact that Aberdeen uses a broad set of performance metrics, from website conversion to annual revenue growth, to rank the respondents as either “Leaders” (top 35% in weighted performance) or “Followers” (bottom 65% in weighted performance), we can then correlate high social effectiveness with company performance. We can also connect the specific social capabilities used by Leaders with effectiveness. We spoke about a few of those key capabilities last time and also discuss them in a new report: Social Powers Activate: Engineering Social Engagement to Win the Hidden Sales Cycle. Spotlight: What all that tells me is there are rewards for making the effort and getting it right. That’s how you become a Leader. Trip: But there’s another part of the story, which is that overall effectiveness, even among Leaders, is muted. There’s just one activity in which more than a majority of Leaders cite high effectiveness: the generation of positive buzz. While 80% of Leaders indicate “directly engaging with customers” through social media channels is valuable, the highest rated activity among Leaders, only 42% say they’re effective. This gap even among Leaders shows the challenges still involved in effective social relationship management. @mikestiles Photo: stock.xchng

    Read the article

  • Finding it Hard to Deliver Right Customer Experience: Think BPM!

    - by Ajay Khanna
    Our relationship with our customers is not just a single interaction, and we should not treat it like one. A customer’s relationship with a vendor is like a journey which starts way before the customer makes a purchase and lasts long after that. The journey may start with the customer researching a product, which may lead to the eventual purchase, and may continue with support or service needs for the product. A typical customer journey can be represented as shown below. As you may notice, customers tend to use multiple channels to interact with a company throughout their journey. They also expect a consistent experience, no matter what interaction channel they choose. Customers do not like to repeat information they have already provided, and they expect companies to remember their preferences and offer them relevant products and services. If the company fails to meet this expectation, customers not only will abandon the purchase and go to the competitor, but may also influence others’ purchase decisions. Gone are the days when word of mouth was the only medium and a customer could influence “six” others. This is the age of social media, and a customer’s good or bad experience, especially a bad one, gets highly amplified and may influence hundreds of others. Challenges that face B2C companies today include:

    Delivering a consistent experience: This is challenging due to fragmented data, disjointed systems and siloed multichannel interactions. Customers tend to get different service quality if they use the web vs. the phone vs. the store. They get different responses from different service agents, or get inconsistent answers if they call the sales vs. the service group in the company. Such inconsistent experiences result in lower customer satisfaction or NPS (net promoter score) numbers.

    Increasing revenue: To stay competitive, companies frequently introduce new products and services. A delay in launching such offerings has a significant impact on revenue realization. In addition to new product revenue, there are multiple opportunities to up-sell and cross-sell that impact the bottom line. If companies are not able to identify such opportunities, bring a product to market quickly, or offer the right product to the right customer at the right time, a significant loss of revenue may occur.

    Ensuring compliance: Companies must be compliant with ever-changing regulations; these could be about Know Your Customer (KYC), export/import regulations, or taxation policies. In addition to government agencies, companies also need to comply with the SLAs that they have committed to their customers. A lapse in meeting any of these requirements may lead to serious fines, penalties and loss of business. Companies have to make sure that they are in compliance with all such regulations and SLA commitments at any given time.

    With the advent of social networks and mobile technology, companies not only need to focus on process efficiency but also on customer engagement. Improving engagement means delivering the customer experience the customer expects and interacting with the customer at the right time using the right channel. Customers expect to be able to contact you via any channel of their choice (web, email, chat, mobile, social media) and purchase via any viable channel (web, phone, store, mobile). Customers expect companies to understand their particular needs and remember their preferences on repeated visits.
    To deliver such an integrated, consistent, and contextual experience, the power of BPM is a must. Your company may be organized in departments like Marketing, Sales, and Service. You may hold prospect data in SFA, order information in ERP, and customer issues in CRM. However, the experience delivered to the customer must not be constrained by your system legacy. BPM helps in designing the right experience for the right customer and integrates all the underlying channels, systems, and applications to make sure the right information is delivered to the right knowledge worker, or to the customer, every single time. Orchestrating information across all systems (MDM, CRM, ERP), departments (commerce, merchandising, marketing, service) and channels (email, phone, web, social) is the key, and that’s what BPM delivers. In addition to orchestrating systems and channels for consistency, BPM also provides the ability for analysis and decision management. By using data from historical transactions, social media and other systems, users can determine customer preferences, customer value, and churn propensity. This information, in context, is then used while making a decision at a process step. A real-time decision management system can also suggest the right up-sell or cross-sell offers, discounts or next-best-action steps for a particular customer. Timely action on customer issues or requests is also a key tenet of a good customer experience. BPM’s complex event processing capabilities help companies take proactive actions before issues get escalated. A BPM system can be designed to listen for certain event patterns, deduce customer situations from them (credit card stolen, baggage lost, change of address) and do a triage before the situation goes out of control. If such a situation arises, you can send alerts to the right people or immediately invoke corrective actions. Last but not least, one of BPM’s key values is to drive continuous improvement. Learning about customers’ past experiences, interactions and social conversations provides valuable insight. Such insight can be used to improve products, customer-facing processes, and customer experience. You may take these insights as input to design better, more efficient and customer-friendly sales, contact center or self-service processes. If customer experience is important for your business, make sure you have incorporated BPM as a part of your strategy to design, orchestrate and improve your customer-facing processes.
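    To make the complex-event-processing idea above concrete, here is a toy sketch in JavaScript: watch a per-customer stream of events and, when a configured pattern matches, fire a triage action before the situation escalates. The event names, the pattern rule and the alert are made-up placeholders, not any particular BPM product’s API.

        // Hypothetical complex-event-processing sketch: pattern -> action
        const patterns = [
          { name: "possible stolen card",
            // assumed toy rule: three or more declined charges
            match: events => events.filter(e => e.type === "declined").length >= 3,
            action: customer => console.log(`Alert fraud team about ${customer}`) },
        ];

        const history = {}; // per-customer event log

        function onEvent(customer, event) {
          (history[customer] = history[customer] || []).push(event);
          for (const p of patterns) {
            if (p.match(history[customer])) p.action(customer); // proactive triage
          }
        }

        // Example stream
        onEvent("alice", { type: "declined" });
        onEvent("alice", { type: "declined" });
        onEvent("alice", { type: "declined" }); // third decline fires the alert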

    Read the article

  • How can I get the new Facebook Javascript SDK to work in IE8?

    - by archbishop
    I've boiled down my page to the simplest possible thing, and it still doesn't work in IE8. Here's the entire HTML page:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xmlns:fb="http://www.facebook.com/2008/fbml" xml:lang="en" lang="en">
        <head></head>
        <body>
          <div id="fb-root"></div>
          <fb:login-button></fb:login-button>
          <script src="http://connect.facebook.net/en_US/all.js"></script>
          <script>
            FB.init({appId: 'd663755ef4dd07c246e047ea97b44d6a', status: true, cookie: true, xfbml: true});
            FB.Event.subscribe('auth.sessionChange', function(response) {
              alert(JSON.stringify(response));
            });
            FB.getLoginStatus(function (response) {
              alert(JSON.stringify(response));
            });
          </script>
        </body>
        </html>

    In Firefox, Safari, and Chrome (on a Mac), I get the behavior I expect: if I am not logged into Facebook, I get a dialog on page load with an empty session. When I click the Login button and log in, I get a second dialog with a session. If I am logged into Facebook, I get two dialogs with sessions: one from the getLoginStatus call, and another from the event. In IE8, I get no dialogs when I load the page. The getLoginStatus callback is not invoked. When I click the Login button, I get a dialog, but it has a very strange error in it: "Invalid Argument. The Facebook Connect cross-domain receiver URL (http://static.ak.fbcdn.net/connect/xd_proxy.php#?=&cb=f3e91da434653f2&origin=http%3A%2F%2Fsword.sqprod.com%2Ff210cba91f2a6d4&relation=opener&transport=flash&frame=f27aa957225164&result=xxRESULTTOKENxx) must have the application's Connect URL (http://mysiteurl.com/) as a prefix. You can configure the Connect URL in the Application Settings Editor." I've sanitized the Connect URL above, but it is correct. The dialog does have username/password fields. If I log in, the dialog box gets redirected to my Connect URL, but there's no fb cookie, so of course nothing works. What am I doing wrong here?

    Read the article

  • Which credentials should I put in for Google App Engine BulkLoader at development server?

    - by Hoang Pham
    Hello everyone, I would like to ask what kind of credentials I need to put in for importing data using the Google App Engine BulkLoader class:

        appcfg.py upload_data --config_file=models.py --filename=listcountries.csv --kind=CMSCountry --url=http://localhost:8178/remote_api vit/

    And then it asks me for credentials: "Please enter login credentials for localhost". Here is an extract of the content of models.py; I use this listcountries.csv file:

        class CMSCountry(db.Model):
            sortorder = db.StringProperty()
            name = db.StringProperty(required=True)
            formalname = db.StringProperty()
            type = db.StringProperty()
            subtype = db.StringProperty()
            sovereignt = db.StringProperty()
            capital = db.StringProperty()
            currencycode = db.StringProperty()
            currencyname = db.StringProperty()
            telephonecode = db.StringProperty()
            lettercode = db.StringProperty()
            lettercode2 = db.StringProperty()
            number = db.StringProperty()
            countrycode = db.StringProperty()

        class CMSCountryLoader(bulkloader.Loader):
            def __init__(self):
                bulkloader.Loader.__init__(self, 'CMSCountry',
                                           [('sortorder', str),
                                            ('name', str),
                                            ('formalname', str),
                                            ('type', str),
                                            ('subtype', str),
                                            ('sovereignt', str),
                                            ('capital', str),
                                            ('currencycode', str),
                                            ('currencyname', str),
                                            ('telephonecode', str),
                                            ('lettercode', str),
                                            ('lettercode2', str),
                                            ('number', str),
                                            ('countrycode', str)])

        loaders = [CMSCountryLoader]

    Every attempt to enter the email and password results in "Authentication Failed", so I could not import the data to the development server. I don't think that I have any problem with my files or my models, because I have successfully uploaded the data to the appspot.com application. So what should I put in for the localhost credentials? I also tried to use Eclipse with PyDev, but I still got the same message :( Here is the output:

        Uploading data records.
        [INFO ] Logging to bulkloader-log-20090820.121659
        [INFO ] Opening database: bulkloader-progress-20090820.121659.sql3
        [INFO ] [Thread-1] WorkerThread: started
        [INFO ] [Thread-2] WorkerThread: started
        [INFO ] [Thread-3] WorkerThread: started
        [INFO ] [Thread-4] WorkerThread: started
        [INFO ] [Thread-5] WorkerThread: started
        [INFO ] [Thread-6] WorkerThread: started
        [INFO ] [Thread-7] WorkerThread: started
        [INFO ] [Thread-8] WorkerThread: started
        [INFO ] [Thread-9] WorkerThread: started
        [INFO ] [Thread-10] WorkerThread: started
        Password for [email protected]:
        [DEBUG ] Configuring remote_api. url_path = /remote_api, servername = localhost:8178
        [DEBUG ] Bulkloader using app_id: abc
        [INFO ] Connecting to /remote_api
        [ERROR ] Exception during authentication
        Traceback (most recent call last):
          File "D:\Projects\GoogleAppEngine\google_appengine\google\appengine\tools\bulkloader.py", line 2802, in Run
            request_manager.Authenticate()
          File "D:\Projects\GoogleAppEngine\google_appengine\google\appengine\tools\bulkloader.py", line 1126, in Authenticate
            remote_api_stub.MaybeInvokeAuthentication()
          File "D:\Projects\GoogleAppEngine\google_appengine\google\appengine\ext\remote_api\remote_api_stub.py", line 488, in MaybeInvokeAuthentication
            datastore_stub._server.Send(datastore_stub._path, payload=None)
          File "D:\Projects\GoogleAppEngine\google_appengine\google\appengine\tools\appengine_rpc.py", line 344, in Send
            f = self.opener.open(req)
          File "C:\Python25\lib\urllib2.py", line 381, in open
            response = self._open(req, data)
          File "C:\Python25\lib\urllib2.py", line 399, in _open
            '_open', req)
          File "C:\Python25\lib\urllib2.py", line 360, in _call_chain
            result = func(*args)
          File "C:\Python25\lib\urllib2.py", line 1107, in http_open
            return self.do_open(httplib.HTTPConnection, req)
          File "C:\Python25\lib\urllib2.py", line 1082, in do_open
            raise URLError(err)
        URLError: <urlopen error (10061, 'Connection refused')>
        [INFO ] Authentication Failed

    Thank you!

    Read the article

  • Blit Queue Optimization Algorithm

    - by martona
    I'm looking to implement a module that manages a blit queue. There's a single surface, and portions of this surface (bounded by rectangles) are copied to elsewhere within the surface:

        add_blt(rect src, point dst);

    There can be any number of operations posted, in order, to the queue. Eventually the user of the queue will stop posting blits and ask for an optimal set of operations to actually perform on the surface. The task of the module is to ensure that no pixel is copied unnecessarily. This gets tricky because of overlaps, of course. A blit could re-blit a previously copied pixel. Ideally, blit operations would be subdivided in the optimization phase in such a way that every block goes to its final place with a single operation. It's tricky but not impossible to put this together. I'm just trying not to reinvent the wheel. I looked around on the 'net, and the only thing I found was the SDL_BlitPool Library, which assumes that the source surface differs from the destination. It also does a lot of grunt work, seemingly unnecessarily: regions and similar building blocks are a given. I'm looking for something higher-level. Of course, I'm not going to look a gift horse in the mouth, and I also don't mind doing actual work... If someone can come forward with a basic idea that makes this problem seem less complex than it does right now, that'd be awesome too.

    EDIT: Thinking about aaronasterling's answer... could this work?

    Implement customized region handler code that can maintain metadata for every rectangle it contains. When the region handler splits up a rectangle, it will automatically associate the metadata of this rectangle with the resulting sub-rectangles.
    When the optimization run starts, create an empty region handled by the above customized code; call this the master region.
    Iterate through the blt queue, and for every entry:
      Let srcrect be the source rectangle for the blt being examined.
      Get the intersection of srcrect and the master region into temp region.
      Remove temp region from the master region, so the master region no longer covers temp region.
      Promote srcrect to a region (srcrgn) and subtract temp region from it.
      Offset temp region and srcrgn by the vector of the current blt: their union will cover the destination area of the current blt.
      Add to the master region all rects in temp region, retaining the original source metadata (step one of adding the current blt to the master region).
      Add to the master region all rects in srcrgn, adding the source information for the current blt (step two of adding the current blt to the master region).
    Optimize the master region by checking whether adjacent sub-rectangles that are merge candidates have the same metadata. Two sub-rectangles are merge candidates if (r1.x1 == r2.x1 && r1.x2 == r2.x2) || (r1.y1 == r2.y1 && r1.y2 == r2.y2). If yes, combine them.
    Enumerate the master region's sub-rectangles. Every rectangle returned is an optimized blt operation destination. The associated metadata is the blt operation's source.
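    As a reference model for what any optimizer here must preserve, a brute-force sketch in JavaScript: track, for every destination pixel, where its data originally came from as the queued blits run in order. An optimized operation set is correct exactly when it reproduces this final mapping; the unit-pixel granularity is purely illustrative and deliberately ignores the region machinery discussed above.

        // Pixel-level model: where does each destination pixel's data originate?
        // origin maps "x,y" -> "sx,sy" (the pixel's position in the original surface).
        function applyBlit(origin, src, dst) {
          const moves = {};
          for (let dx = 0; dx < src.w; dx++) {
            for (let dy = 0; dy < src.h; dy++) {
              const from = `${src.x + dx},${src.y + dy}`;
              const to = `${dst.x + dx},${dst.y + dy}`;
              // The copied pixel carries whatever origin it already had.
              moves[to] = origin[from] || from;
            }
          }
          Object.assign(origin, moves); // commit the whole blit atomically
        }

        // Example: two overlapping blits on one surface
        const origin = {};
        applyBlit(origin, { x: 0, y: 0, w: 2, h: 1 }, { x: 1, y: 0 });
        applyBlit(origin, { x: 1, y: 0, w: 2, h: 1 }, { x: 2, y: 0 });
        console.log(origin); // final origin of each touched pixel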

    Read the article

  • jQuery - Form input return confirm to delete

    - by bruno
    Hello guys. I struggled a lot before posting here :) Now, I want to replace my default JavaScript confirmation for deleting a file. I saw a lot of examples here, but no example with form input. I have this form:

        <form action="delete.php" method="post">
          <input type="hidden" name="id" value="<{$pid}>" />
          <input type="hidden" name="picture" value="<{$lang_del_pic}>" />
          <input type="image" src="<{xoImgUrl}>img/del-icon.gif" width="16" height="16" align="bottom" border="0" alt="Delete media" name="pictured" value="<{$lang_del_pic}>" onclick="javascript: return confirm('<{$lang_confirm_del}>');" />
        </form>

    Now, I did everything. I have this div:

        <div id="dialog-confirm" title="Empty the recycle bin?">
          <p><span class="ui-icon ui-icon-alert" style="float:left; margin:0 7px 20px 0;"></span>These items will be permanently deleted and cannot be recovered. Are you sure?</p>
        </div>

    this JavaScript:

        <script type="text/javascript">
        $(document).ready(function() {
          $("#dialog").dialog({
            autoOpen: false,
            modal: true
          });
        });
        $(".confirmLink").click(function(e) {
          e.preventDefault();
          var targetUrl = $(this).attr("href");
          $("#dialog").dialog({
            buttons : {
              "Confirm" : function() {
                window.location.href = targetUrl;
              },
              "Cancel" : function() {
                $(this).dialog("close");
              }
            }
          });
          $("#dialog").dialog("open");
        });
        </script>

    and this new form:

        <form name="dialog-confirm" id="dialog-confirm" method="post">
          <input type="hidden" name="id" value="<{$pid}>" />
          <input type="hidden" name="picture" value="<{$lang_del_pic}>" />
          <input type="image" src="<{xoImgUrl}>img/del-icon.gif" width="16" height="16" align="bottom" border="0" alt="Delete media" name="pictured" value="" id="opener" />
        </form>

    On press, I call the jQuery modal dialog successfully and everything works, but somehow, when I press 'delete all', the script tells me "the script is called without the necessary parameters". Now I guess I am failing to send the pic ID to be deleted with the jQuery... but I do not know how to fix it. Any ideas?
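    One hedged guess at the cause: the dialog's Confirm handler does window.location.href = targetUrl, which issues a plain GET, so the hidden id and picture fields are never posted. A minimal sketch of a common fix, assuming the form gets its own id (dialog-form here, since reusing dialog-confirm for both the dialog div and the form would collide): intercept the submit, and only let the POST through once the user confirms.

        $(document).ready(function () {
          $("#dialog-confirm").dialog({ autoOpen: false, modal: true });

          $("#dialog-form").submit(function (e) {
            e.preventDefault();            // hold the POST until confirmed
            var form = this;
            $("#dialog-confirm").dialog({
              buttons: {
                "Confirm": function () {
                  $(this).dialog("close");
                  form.submit();           // native submit: posts the hidden fields,
                                           // and does not re-fire this handler
                },
                "Cancel": function () { $(this).dialog("close"); }
              }
            }).dialog("open");
          });
        });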

    Read the article

  • South Florida Code Camp 2010 &ndash; VI &ndash; 2010-02-27

    - by Dave Noderer
    Catching up after our sixth code camp here in the Ft. Lauderdale, FL area. Website at: http://www.fladotnet.com/codecamp. For the 5th time, DeVry University hosted the event, which makes everything else really easy!

    Statistics from 2010 South Florida Code Camp:
    848 registered (we use Microsoft Group Events)
    ~600 attended (516 took name badges)
    64 speakers (including speaker idol)
    72 sessions
    12 parallel tracks

    Food:
    400 waters
    600 sodas
    900 cups of coffee (it was cold!)
    200 pounds of ice
    200 pizzas
    10 large salad trays
    900 mouse pads

    Photos on Facebook:
    Dave Noderer: http://www.facebook.com/home.php#!/album.php?aid=190812&id=693530361
    Joe Healy: http://www.facebook.com/devfish?ref=mf#!/album.php?aid=202787&id=720054950
    Will Strohl: http://www.facebook.com/home.php#!/album.php?aid=2045553&id=1046966128&ref=mf
    Veronica Gonzalez: http://www.facebook.com/home.php#!/album.php?aid=150954&id=672439484

    Florida Speaker Idol: One of the sessions at code camp was the South Florida regional speaker idol competition. After user-group-level competitions there are five competitors. I acted as MC and scorekeeper while Ed Hill, Bob O'Connell, John Dunagan and Shervin Shakibi were judges. This statewide competition is being run by Roy Lawsen in Lakeland, and the winner, Jeff Truman from Naples, will move on to the state finals to be held at the Orlando Code Camp on 3/27/2010: http://www.orlandocodecamp.com/. Each speaker has 10 minutes. The participants were: Alex Koval, Jeff Truman, Jared Nielsen, Chris Catto, Venkat Narayanasamy. They all did a great job, and I'm working with each to make sure they don't stop there and start speaking at meetings. Thanks to everyone involved!

    Volunteers: As always, events like this don't happen without a lot of help! The key people were:

    Ed Hill, Bob O'Connell – DeVry: For the months leading up to the event, Ed collects all of the swag, books, etc. and stores them. He holds meetings with various DeVry departments to coordinate the day, he works with the students in the days before code camp to stuff bags, print signs, arrange tables and visit BJ's for our supplies (I go and pay, but have a small car!). And of course, the day of the event he is there at 5:30 am!! We took two SUVs to BJ's; I was really worried that the 36 cases of water were going to break his rear axle! He also helps with the students and works very hard before and after the event.

    Rainer Haberman – Speakers, and Volunteer of the Year: Rainer has helped over the past couple of years, but this time he took full control of arranging the tracks. I did some preliminary work soliciting speakers, but he took over all communications after that. We have tried various organizations around speakers – a chair per track, a central team – but having someone paying attention to the details is definitely the way to go! This was the first year I did not have to jump in at the last minute and re-arrange everything. There were lots of kudos from the speakers too, saying they felt it was more organized than anything they had experienced at any code camp in the past. Thanks Rainer!

    Ray Alamonte – Book Swap: We saw the idea of a book swap from the Alabama Code Camp and thought we would give it a try. Ray jumped in and took control. The idea was to get people to bring their old technical books to swap or for others to buy. You got a ticket for each book you brought that you could then turn in to buy another book. If you did not have a ticket you could buy a book for $1. Net proceeds were $153, which I rounded up and donated to the Red Cross. There is plenty going on in Haiti and Chile! I don't think we really got a count of how many books came in. In many cases the books barely hit the table before being picked up again. At the end we were left with a dozen books, which we donated to the DeVry library. A great success we will definitely repeat!

    Jace Weiss / Ratchelen Hut – Coffee and Snacks: Wow, this was an eye opener. In past years a few of us would struggle to give some attention to coffee, snacks, etc. But it was always tenuous, and we always ended up running out of coffee. In the past we have tried buying Dunkin' Donuts coffee, renting urns, borrowing urns, etc. This year I actually purchased two 100-cup Westbend commercial brewers plus a couple of small urns (30- and 60-cup, which we used for decaf). We got them both started early (although I forgot to push the on button on one!) and primed them with 10 boxes of Joe from Dunkin'. Then Jace and Rachelen took over: once a batch was brewed they would refill the boxes, keep the area clean, and at one point were filling cups. We never ran out of coffee and served a few hundred more than last year. We did look, but next year I'll get a large insulated (like Gatorade) dispensing container. It all went very smoothly, and having help focused on that one area was a big win. Thanks Jace and Rachelen!

    Ken & Shirley Golding / Roberta Barbosa – Registration: Ken & Shirley showed up and took over registration. This year we printed small name tags for everyone registered, which was great because it is much easier to remember someone's name when they are labeled! In any case, it went the smoothest it has ever gone. All three were actively pulling people through registration, answering questions, and directing them to bags and information very quickly. I did not see too big a line at any time. Thanks!!

    Scott Katarincic / Vishal Shukla – Website: For the 3rd(?) year in a row, Scott was in charge of the website, starting in August or September when I start on code camp. He handles all the requests, makes changes to the site and administers it. Two years ago he wrote all the backend administration; he tunes it and the website a bit, but things are pretty stable. The only thing I do is put up the sponsors. It is a big pressure off of me!! Thanks Scott! Vishal jumped into the web end this year and created a new Silverlight agenda page to replace the old Ajax page. We will continue to enhance this, but it is definitely a good step forward! Thanks!

    Alex Funkhouser – T-shirts/Mouse pads/tables/sponsors: Alex helps in many areas. He helps me bring in sponsors and handles all the logistics for t-shirts, sponsor tables and, this year, the mouse pads. He is also a key person in promoting the event, not to mention the after-after-party, which I did not attend and don't want to know much about!

    Students: There were a number of student volunteers, but I don't have all of their names. Thanks to them: they stuffed bags, patrolled pizza and helped with moving things around.

    Sponsors: We had a bunch of great sponsors, which allowed us to feed people and give away a lot of great swag. Our major sponsors – DeVry, Microsoft (both DPE and UGSS), Infragistics, Telerik, SQL Share (End to End, SQL Saturdays), and Interclick – are very much appreciated. The other sponsors were Applied Innovations (who also supply code camp hosting), Ultimate Software (a great local SW company), Linxter (reliable cloud messaging we are lucky to have here!), Mediascend (a media startup), SoftwareFX (another local SW company we are happy to have back participating in CC), CozyRoc (if you do SSIS, check them out), Arrow Design (local DNN and Silverlight experts), Boxes and Arrows (a local SW consulting company) and Robert Half. One thing we did this year besides a t-shirt was a mouse pad. I like it because it will be around for a long time on many desks. After much investigation and years of using mouse pads, I've determined that the 1/8" fabric top is the best, and that is what we got!

    So now I get a break for a few months before starting again!

    Read the article

  • TechEd 2010 Day One – How I Travel

    - by BuckWoody
    Normally when I blog on the first day of a conference, well, there hasn't been a first day yet. So I talk about the value of a conference or some other facet. And normally in my (non-conference) blogs, I show you how I have learned to be a data professional – things I've learned how to do over the years. But in all that time, I don't think I've ever talked about a big part of my job – traveling. I've traveled a lot throughout the years, when I've taught, gone to conferences, consulted, and in my current role assisting Microsoft customers with large-scale database system designs. So I'll share a few thoughts about what I do. Keep in mind that I travel for short durations, just a day or so, and sometimes I travel internationally. For those trips I prepare differently – what I'm talking about here is what I do for a multi-day, same-country trip. Hopefully you find it useful. I'll tag a few other travelers I know to add their thoughts.

    Preparing for Travel: When I'm notified of a trip, I begin researching the location. I find the flights, hotel and (if I have to) a car to use while I'm away. We have an in-house system we use to book the travel, but when I travel not-for-Microsoft I use Expedia and Kayak to find what I need. Traveling on Sunday and Friday is the worst. I have to do it sometimes (like this week) and it's always a bad idea. But you can blunt the impact by booking as early as you can stand it. That means I have to be up super-early, but the flights are normally on time. I stay flexible, and always have a backup plan in case the flights are delayed or canceled. For the hotel, I tend to go on the cheaper side, and I look for older hotels that have been renovated, or quirky ones. For instance, in Boise, ID recently I stayed at a '60s-themed (think Mad Men) hotel that was very cool. I always go on the less expensive side – I find the "luxury" hotels nail me for Internet, food, everything. The cheaper places include all kinds of things, and even have breakfasts, shuttles and all kinds of extras that start to add up. I even call ahead to make sure there's an iron and ironing board available, since I'll need those when I get there. I find any way I can not to get a car. I use mass transit wherever possible, and try to make friends and pay their gas to take me places. In a pinch, I'll use a taxi. It ends up being cheaper, faster, and less stressful all around.

    Packing: Over the years I've learned to avoid checking luggage whenever I can. To do that, I lay out everything I want to take with me on the bed, and then try to make sure I'm really going to use it. I wear a dark wool set of pants, which I can clean and wear in hot and cold climates. I bring undies and socks of course, and for most places I have to wear "dress up" shirts. I bring at least two print t-shirts in case I want to dress down for something while I'm gone, but I only bring one set of shoes. All the clothes are rolled as tightly as possible, as I learned in the military. Then I use those to cushion the electronics I take. For toiletries I bring a shaver, toothpaste and toothbrush, D/O and a small brush. Everything else the hotel will provide. For entertainment, I take a small Zune, a full PC headset (so I can make IP calls on the road) and my laptop. I don't take books or anything else – everything is electronic. I use e-books (downloaded from our library), audio books (on the Zune) and I also bring along a Kaossilator (more here) to play music in the hotel room or even on the plane without being heard.
    If I can, I pack into one roll-on bag. There's not a lot better than this one, but I also have a bag I was given as a prize for something or other here at Microsoft. Either way, I like something with fewer pockets and more big, open compartments. Everything gets rolled up and packed in, with all of the wires and chargers in small bags my wife made for me. The laptop (and anything I don't want gate-checked) goes on top or in an outside pouch so I can grab it quickly if I have to gate-check the bag. As much as I can, I try to go in one bag. When I can't (like this week) I use this bag, since it can expand, roll up, crush and even be put away later. It's super-heavy canvas and worth the price. This allows me not to check a bag.

    Journey Logistics: The day of the trip, I have everything ready since I'm getting up early. I pack a few small snacks inside a plastic large-mouth water bottle, which protects the snacks and lets me get water in the terminal. I bring along those little powdered drink mixes to add to the water. At the airport, I make a beeline for the power outlets. I charge up my laptop and phone, and download all my e-mails so I can work on them offline in the air. I don't travel as often as I used to – just every month or so now, so I don't have a membership to an airline club. If I travel much more, I'll invest in one again – they are WELL worth the money, for the wifi, food and quiet, if for nothing else. I print out my logistics on paper and put that in my pocket – flight numbers, hotel addresses and phones for everything. That way if I have to make a change, I don't have to boot up anything or even have power to be able to roll with the punches if things change.

    Working While Away: While I'm away, I realize I'm going to be swamped with things at the conference or with my clients. So I turn on out-of-office notifications to let people know I won't be as responsive, and I keep my Outlook calendar up to date so my co-workers know what I'm up to. I even update it with hotel and phone info in case they really need to reach me. I share my calendar with my wife so my family knows what I'm doing as well. I check my e-mail during breaks, but I only respond to them in the evening or early morning at the hotel. I tweet during conferences. The point is to be as present as possible during the event or when I'm at the clients'. Both deserve it. So those are my initial thoughts. I'll tag Brent Ozar, Brad McGeHee and Paul Randal, and they can tag whomever they wish.

    Read the article

  • How can a code editor effectively hint at code nesting level - without using indentation?

    - by pgfearo
I've written an XML text editor that provides two view options for the same XML text, one indented (virtually), the other left-justified. The motivation for the left-justified view is to help users 'see' the whitespace characters they're using for indentation of plain-text or XPath code without interference from indentation that is an automated side-effect of the XML context. I want to provide visual clues (in the non-editable part of the editor) for the left-justified mode that will help the user, but without getting too elaborate. I tried just using connecting lines, but that seemed too busy. The best I've come up with so far is shown in a mocked-up screenshot of the editor below, but I'm seeking better/simpler alternatives (that don't require too much code). [Edit] Taking the heatmap idea (from: @jimp) I get this and 3 alternatives - labelled a, b and c: The following section describes the accepted answer as a proposal, bringing together ideas from a number of other answers and comments. As this question is now community wiki, please feel free to update this.

NestView: the name for this idea, which provides a visual method to improve the readability of nested code without using indentation.
Contour Lines: the name for the differently shaded lines within the NestView.

The image above shows the NestView used to help visualise an XML snippet. Though XML is used for this illustration, any other code syntax that uses nesting could have been used.

An Overview:
- The contour lines are shaded (as in a heatmap) to convey nesting level.
- The contour lines are angled to show when a nesting level is being either opened or closed.
- A contour line links the start of a nesting level to the corresponding end.
- The combined width of contour lines gives a visual impression of nesting level, in addition to the heatmap.
- The width of the NestView may be manually resizable, but should not change as the code changes. Contour lines can be either compressed or truncated to achieve this.
- Blank lines are sometimes used in code to break up text into more digestible chunks. Such lines could trigger special behaviour in the NestView: for example, the heatmap could be reset or a background-color contour line used, or both.
- One or more contour lines associated with the currently selected code can be highlighted. The contour line associated with the selected code level would be emphasized the most, but other contour lines could also 'light up' to help highlight the containing nested group.
- Different behaviors (such as code folding or code selection) can be associated with clicking/double-clicking on a contour line. Different parts of a contour line (leading, middle or trailing edge) may have different dynamic behaviors associated.
- Tooltips can be shown on a mouse hover event over a contour line.
- The NestView is updated continuously as the code is edited. Where nesting is not well-balanced, assumptions can be made about where the nesting level should end, but the associated temporary contour lines must be highlighted in some way as a warning.
- Drag-and-drop behaviors of contour lines can be supported. Behaviour may vary according to the part of the contour line being dragged.
- Features commonly found in the left margin, such as line numbering and colour highlighting for errors and change state, could overlay the NestView.

Additional Functionality

The proposal addresses a range of additional issues - many are outside the scope of the original question, but a useful side-effect.

Visually linking the start and end of a nested region: the contour lines connect the start and end of each nested level.
Highlighting the context of the currently selected line: as code is selected, the associated nest-level in the NestView can be highlighted.
Differentiating between code regions at the same nesting level: in the case of XML, different hues could be used for different namespaces. Programming languages (such as C#) support named regions that could be used in a similar way.
Dividing areas within a nesting area into different visual blocks: extra lines are often inserted into code to aid readability. Such empty lines could be used to reset the saturation level of the NestView's contour lines.

Multi-Column Code View

Code without indentation makes the use of a multi-column view more effective because word-wrap or horizontal scrolling is less likely to be required. In this view, once code has reached the bottom of one column, it flows into the next one.

Usage beyond merely providing a visual aid

As proposed in the overview, the NestView could provide a range of editing and selection features which would be broadly in line with what is expected from a TreeView control. The key difference is that a typical TreeView node has 2 parts: an expander and the node icon. A NestView contour line can have as many as 3 parts: an opener (sloping), a connector (vertical) and a closer (sloping).

On Indentation

The NestView presented alongside non-indented code complements, but is unlikely to replace, the conventional indented code view. It's likely that any solutions adopting a NestView will provide a method to switch seamlessly between indented and non-indented code views without affecting any of the code text itself - including whitespace characters. One technique for the indented view would be 'Virtual Formatting' - where a dynamic left-margin is used in lieu of tab or space characters. The same nesting-level data used to dynamically render the NestView could also be used for the more conventional-looking indented view.

Printing

Indentation will be important for the readability of printed code. Here, the absence of tab/space characters and a dynamic left-margin means that the text can wrap at the right-margin and still maintain the integrity of the indented view. Line numbers can be used as visual markers that indicate where code is word-wrapped and also the exact position of indentation.

Screen Real-Estate: Flat Vs Indented

Addressing the question of whether the NestView uses up valuable screen real-estate: contour lines work well with a width the same as the code editor's character width, so a NestView width of 12 character widths can accommodate 12 levels of nesting before contour lines are truncated/compressed. If an indented view uses 3 character widths for each nesting level, then space is saved until nesting reaches 4 levels; beyond that the flat view has a space-saving advantage that increases with each nesting level. Note: a minimum indentation of 4 character widths is often recommended for code, however XML often manages with less. Also, Virtual Formatting permits less indentation to be used because there's no risk of alignment issues. A comparison of the 2 views is shown below. Based on the above, it's probably fair to conclude that view style choice will be based on factors other than screen real-estate. The one exception is where screen space is at a premium, for example on a Netbook/Tablet or when multiple code windows are open. In these cases, the resizable NestView would seem to be a clear winner.

Use Cases

Examples of real-world cases where NestView may be a useful option:
- Where screen real-estate is at a premium: a. on devices such as tablets, notepads and smartphones; b. when showing code on websites; c. when multiple code windows need to be visible on the desktop simultaneously.
- Where consistent whitespace indentation of text within code is a priority.
- For reviewing deeply nested code, for example where sub-languages (e.g. Linq in C# or XPath in XSLT) might cause high levels of nesting.

Accessibility

Resizing and color options must be provided to aid those with visual impairments, and also to suit environmental conditions and personal preferences.

Compatibility of edited code with other systems

A solution incorporating a NestView option should ideally be capable of stripping leading tab and space characters (identified as only having a formatting role) from imported code. Then, once stripped, the code could be rendered neatly in both the left-justified and indented views without change. For many users relying on systems such as merging and diff tools that are not whitespace-aware, this will be a major concern (if not a complete show-stopper).

Other Works: Visualisation of Overlapping Markup

Published research by Wendell Piez, dated from 2004, addresses the issue of the visualisation of overlapping markup, specifically LMNL. This includes SVG graphics with significant similarities to the NestView proposal; as such, they are acknowledged here. The visual differences are clear in the images (below); the key functional distinction is that NestView is intended only for well-nested XML or code, whereas Wendell Piez's graphics are designed to represent overlapped nesting. The graphics above were reproduced - with kind permission - from http://www.piez.org

Sources: Towards Hermeneutic Markup; Half-steps toward LMNL
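
To make the idea concrete, here is a minimal sketch (in Python, added purely for illustration; it is not part of the original proposal) of how the per-line nesting data behind a NestView might be computed. It assumes brace-delimited, well-nested code for simplicity - an XML editor would substitute a tag scanner - and the shade mapping is an arbitrary stand-in for the heatmap colors:

# Minimal sketch: per-line nesting data a NestView-style margin could render.
# Assumes well-nested, brace-delimited code; an XML editor would use a tag scanner.
def nesting_profile(lines, open_ch="{", close_ch="}"):
    """For each line, return (depth_at_start, opens, closes)."""
    depth, profile = 0, []
    for line in lines:
        opens, closes = line.count(open_ch), line.count(close_ch)
        profile.append((depth, opens, closes))
        depth += opens - closes  # depth carried into the next line
    return profile

def heat_shade(depth, max_depth):
    """Map a nesting depth to a grey level: light for shallow, dark for deep."""
    t = (depth / max_depth) if max_depth else 0.0
    level = int(255 - 160 * min(t, 1.0))
    return "#{0:02x}{0:02x}{0:02x}".format(level)

code = ["root() {", "  branch {", "    leaf()", "  }", "}"]
profile = nesting_profile(code)
max_depth = max(d for d, _, _ in profile)
for line, (d, o, c) in zip(code, profile):
    print(heat_shade(d, max_depth), "depth", d, line)

The same per-line (depth, opens, closes) triples would also drive the sloping opener/closer and vertical connector segments of each contour line, and could feed the 'Virtual Formatting' indented view mentioned above.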

    Read the article

  • What Makes a Good Design Critic? CHI 2010 Panel Review

    - by jatin.thaker
Author: Daniel Schwartz, Senior Interaction Designer, Oracle Applications User Experience Oracle Applications UX Chief Evangelist Patanjali Venkatacharya organized and moderated an innovative and stimulating panel discussion titled "What Makes a Good Design Critic? Food Design vs. Product Design Criticism" at CHI 2010, the annual ACM Conference on Human Factors in Computing Systems. The panelists included Janice Rohn, VP of User Experience at Experian; Tami Hardeman, a food stylist; Ed Seiber, a restaurant architect and designer; John Kessler, a food critic and writer at the Atlanta Journal-Constitution; and Larry Powers, Chef de Cuisine at Shaun's restaurant in Atlanta, Georgia. Building off the momentum of his highly acclaimed panel at CHI 2009 on what interaction design can learn from food design (for which I was on the other side as a panelist), Venkatacharya brought together new people with different roles in the restaurant and software interaction design fields. The session was also quite delicious -- but more on that later. Criticism, as it applies to food and product or interaction design, was the tasty topic for this forum and showed that strong parallels exist between food and interaction design criticism. Figure 1. The panelists in discussion: (left to right) Janice Rohn, Ed Seiber, Tami Hardeman, and John Kessler. The panelists had great insights to share from their respective fields, and they enthusiastically discussed as if they were at a casual collegial dinner. John Kessler stated that he generally prefers one professional critic's opinion to a large sampling of customers; however, "Web sites like Yelp get users excited by the collective approach. People are attracted to things desired by so many." Janice Rohn added that this collective desire was especially true for users of consumer products. Ed Seiber remarked that while people looked to the popular view for their target tastes and product choices, "professional critics like John [Kessler] still hold a big weight on public opinion." Chef Powers indicated that chefs take in feedback from all sources, adding, "word of mouth is very powerful. We also look heavily at the sales of the dishes to see what's moving; what's selling and thus successful." Hearing this discussion validates our design work at Oracle in that we listen to our users (our diners) and industry feedback (our critics) to ensure an optimal user experience of our products. Rohn considers that restaurateur Danny Meyer's book, Setting the Table: The Transforming Power of Hospitality in Business, which is about creating successful restaurant experiences, has many applicable parallels to user experience design. Meyer actually argues that the customer is not always right, but that "they must always feel heard." Seiber agreed, but noted "customers are not designers," and while designers need to listen to customer feedback, it is the designer's job to synthesize it. Seiber feels it's the critic's job to point out when something is missing or not well-prioritized. In interaction design, our challenges are quite similar, if not parallel. Software tasks are like puzzles that are in search of a solution on how to be best completed. As a food stylist, Tami Hardeman has the demanding and challenging task of presenting food to be as delectable as can be. To present food in its best light requires a lot of creativity and insight into consumer tastes. 
It's no doubt then that this former fashion stylist came up with the ultimate catch phrase to capture the emotion that clients want to draw from their users: "craveability." The phrase was a hit with the audience and panelists alike. Sometime later in the discussion, Seiber remarked, "designers strive to apply craveability to products, and I do so for restaurants in my case." Craveability is also very applicable to interaction design. Creating straightforward and smooth workflows for users of Oracle Applications is a primary goal for my colleagues. We want our users to really enjoy working with our products where it makes them more efficient and better at their jobs. That's our "craveability." Patanjali Venkatacharya asked the panel, "if a design's "craveability" appeals to some cultures but not to others, then what is the impact to the food or product design process?" Rohn stated that "taste is part nature and part nurture" and that the design must take the full context of a product's usage into consideration. Kessler added, "good design is about understanding the context" that the experience necessitates. Seiber remarked how important seat comfort is for diners and how the quality of seating will add so much to the complete dining experience. Sometimes if these non-food factors are not well executed, they can also take away from an otherwise pleasant dining experience. Kessler recounted a time when he was dining at a restaurant that actually had very good food, but the photographs hanging on all the walls did not fit in with the overall décor and created a negative overall dining experience. While the tastiness of the food is critical to a restaurant's success, it is a captivating complete user experience, as in interaction design, which will keep customers coming back and ultimately making the restaurant a hit. Figure 2. Patanjali Venkatacharya enjoyed the Sardinian flatbread salad. As a surprise, Chef Powers brought out a signature dish from Shaun's restaurant for all the panelists to sample and critique. The Sardinian flatbread dish showcased Atlanta's taste for fresh and local produce and cheese at its finest as a salad served on a crispy flavorful flat bread. Hardeman said it could be photographed from any angle, a high compliment coming from a food stylist. Seiber really enjoyed the colors that the dish brought together and thought it would be served very well in a casual restaurant on a summer's day. The panel really appreciated the taste and quality of the different components and how the rosemary brought all the flavors together. Seiber remarked that "a lot of effort goes into the appearance of simplicity." Rohn indicated that the same notion holds true with software user interface design. A tremendous amount of work goes into crafting straightforward interfaces, including user research, prototyping, design iterations, and usability studies. Design criticism for food and software interfaces clearly share many similarities. Both areas value expert opinions and user feedback. Both areas understand the importance of great design needing to work well in its context. Last but not least, both food and interaction design criticism value "craveability" and how having users excited about experiencing and enjoying the designs is an important goal. Now if we can just improve the taste of software user interfaces, people may choose to dine on their enterprise applications over a fresh organic salad.

    Read the article

  • Grow Your Business with Security

    - by Darin Pendergraft
Author: Kevin Moulton Kevin Moulton has been in the security space for more than 25 years, and with Oracle for 7 years. He manages the East Enterprise Security Sales Consulting Team. He is also a Distinguished Toastmaster. Follow Kevin on Twitter at twitter.com/kevin_moulton, where he sometimes tweets about security, but might also tweet about running, beer, food, baseball, football, good books, or whatever else grabs his attention. Kevin will be a regular contributor to this blog so stay tuned for more posts from him. It happened again! There I was, reading something interesting online, and realizing that a friend might find it interesting too. I clicked on the little email link, thinking that I could easily forward this to my friend, but no! Instead, a new screen popped up where I was asked to create an account. I was expected to create a User ID and password, not to mention providing some personally identifiable information, just for the privilege of helping that website spread their word. Of course, I didn’t want to have to remember a new account and password, I didn’t want to provide the requisite information, and I didn’t want to waste my time. I gave up, closed the web page, and moved on to something else. I was left with a bad taste in my mouth, and my friend might never find her way to this interesting website. If you were this content provider, would this be the outcome you were looking for? A few days later, I had a similar experience, but this one went a little differently. I was surfing the web, when I happened upon some little tchotchke that I just had to have. I added it to my cart. When I went to buy the item, I was again brought to a page to create an account. Groan! But wait! On this page, I also had the option to sign in with my OpenID account, my Facebook account, my Yahoo account, or my Google Account. I have all of those! No new account to create, no new password to remember, and no personally identifiable information to be given to someone else (I’ve already given it all to those other guys, after all). In this case, the vendor was easy to deal with, and I happily completed the transaction. That pleasant experience will bring me back again. This is where security can grow your business. It’s a differentiator. You’ve got to have a presence on the web, and that presence has to take into account all the smart phones everyone’s carrying, and the tablets that took over Cyber Monday this year. If you are a company that a customer can deal with securely, and do so easily, then you are a company customers will come back to again and again. I recently had a need to open a new bank account. Every bank has a web presence now, but they are certainly not all the same. I wanted one that I could deal with easily using my laptop, but I also wanted 2-factor authentication in case I had to login from a shared machine, and I wanted an app for my iPad. I found a bank with all three, and that’s who I am doing business with. Let’s say, for example, that I’m in a regular Texas Hold’em game on Friday nights, so I move a couple of hundred bucks from checking to savings on Friday afternoons. I move a similar amount each week and I do it from the same machine. The bank trusts me, and they trust my machine. Most importantly, they trust my behavior. This is adaptive authentication. There should be no reason for my bank to make this transaction difficult for me. Now let's say that I login from a Starbucks in Uzbekistan, and I transfer $2,500. What should my bank do now? 
Should they stop the transaction? Should they call my home number? (My former bank did exactly this once when I was taking money out of an ATM on a business trip, even though I had provided my cell phone number as my primary contact. When I asked them why they called my home number rather than my cell, they told me that their “policy” is to call the home number. If I'm on the road, what exactly is the use of trying to reach me at home to verify my transaction?) But, back to Uzbekistan… Should my bank assume that I am happily at home in New Jersey, and someone is trying to hack into my account? Perhaps they think they are protecting me, but I wouldn’t be very happy if I happened to be traveling on business in Central Asia. What if my bank were to automatically analyze my behavior and calculate a risk score? Clearly, this scenario would be outside of my typical behavior, so my risk score would necessitate something more than a simple login and password. Perhaps, in this case, a one-time password to my cell phone would prove that this is not just some hacker halfway around the world. But, what if you're not a bank? Do you need this level of security? If you want to be a business that is easy to deal with while also protecting your customers, then of course you do. You want your customers to trust you, but you also want them to enjoy doing business with you. Make it easy for them to do business with you, and they’ll come back, and perhaps even Tweet about it, or Like you, and then their friends will follow. How can Oracle help? Oracle has the technology and expertise to help you to grow your business with security. Oracle Adaptive Access Manager will help you to prevent fraud while making it easier for your customers to do business with you by providing the risk analysis I discussed above, step-up authentication, and much more. Oracle Mobile and Social Access Service will help you to secure mobile access to applications by expanding on your existing back-end identity management infrastructure, and allowing your customers to transact business with you using the social media accounts they already know. You also have device fingerprinting and metrics to help you to grow your business securely. Security is not just a cost anymore. It’s a way to set your business apart. With Oracle’s help, you can be the business that everyone’s tweeting about. Image courtesy of Flickr user shareski
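
To make the idea concrete, here is a hypothetical sketch (in Python) of the kind of risk scoring described above. The factor names, weights and thresholds are invented for illustration; they are not the actual model used by Oracle Adaptive Access Manager or any bank:

# Hypothetical adaptive-authentication sketch; weights and thresholds are
# invented for illustration, not any product's real scoring model.
def risk_score(txn, profile):
    """Score a transaction against the user's usual behavior (higher = riskier)."""
    score = 0
    if txn["device"] not in profile["known_devices"]:
        score += 30   # unfamiliar machine
    if txn["country"] != profile["home_country"]:
        score += 40   # unusual location
    if txn["amount"] > 3 * profile["typical_amount"]:
        score += 30   # amount well outside the norm
    return score

def required_auth(score):
    """Step up the authentication challenge as the risk score rises."""
    if score < 30:
        return "password only"                      # the usual Friday transfer
    if score < 70:
        return "password + one-time code to cell"   # mild anomaly
    return "hold transaction and contact customer"  # the Uzbekistan scenario

profile = {"known_devices": {"home-laptop"}, "home_country": "US", "typical_amount": 200}
txn = {"device": "cafe-pc", "country": "UZ", "amount": 2500}
print(required_auth(risk_score(txn, profile)))  # -> hold transaction and contact customer

The point of the sketch is the shape of the logic: familiar behavior passes with minimal friction, while each departure from the learned profile raises the bar, which is exactly the trade-off between ease of use and protection described above.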

    Read the article

  • The Presentation Isn't Over Until It's Over

    - by Phil Factor
The senior corporate dignitaries settled into their seats, looking important in a blue-suited sort of way. The lights dimmed as I strode out in front to give my presentation. I had ten vital minutes to make my pitch. I was about to dazzle the top management of a large software company who were considering the purchase of my software product. I would present them with a dazzling synthesis of diagrams and graphs, followed by a live demonstration of my software projected from my laptop. My preparation had been meticulous: it had to be, since a year’s hard work was at stake, so I’d prepared it to perfection. I stood up and took them all in, with a gaze of sublime confidence. Then the laptop expired. There are several possible alternative plans of action when this happens:
A. Stare at the smoking laptop vacuously, flapping one’s mouth slowly up and down
B. Stand frozen like a statue, locked in indecision between fright and flight
C. Run out of the room, weeping
D. Pretend that this was all planned
E. Abandon the presentation in favour of a stilted and tedious dissertation about the software
F. Shake your fist at the sky, and curse the sense of humour of your preferred deity
I started, for a few seconds, on plan B, normally referred to as the ‘rabbit in the headlamps of the car’ technique. Suddenly, a little voice inside my head spoke. It spoke the famous inane words of Yogi Berra: ‘The game isn't over until it's over.’ ‘Too right’, I thought. What to do? I ran through the alternatives A-F inclusive in my mind but none appealed to me. I was completely unprepared for this. Nowadays, longevity has since taught me more than I wanted to know about the wacky sense of humour of fate, and I would have taken two laptops. I hadn’t, but decided to do the presentation anyway as planned. I started out ignoring the dead laptop, pretending instead that it was still working. The audience looked startled. They were expecting plan B to be succeeded by plan C, I suspect. They weren’t used to denial on this scale. After my introductory talk, which didn’t require any visuals, I came to the diagram that described the application I’d written. I’d taken ages over it and it was hot stuff. Well, it would have been had it been projected onto the screen. It wasn’t. Before I describe what happened then, I must explain that I have thespian tendencies. My triumph as Professor Higgins in My Fair Lady at the local operatic society is now long forgotten, but I remember, at the time of my finest performance, glancing up over the vast audience of moist-eyed faces during the poignant scene between Eliza and Higgins at the end, and realising that I had a talent that one day could possibly be harnessed for commercial use. I just talked about the diagram as if it was there, throwing in some extra description. The audience nodded helpfully when I’d done enough. Emboldened, I began a sort of mime, well, more of a ballet, to represent each slide as I came to it. Heaven knows I’d done my preparation and, in my mind’s eye, I could see every detail, but I had to somehow project the reality of that vision to the audience, much the same way any actor playing Macbeth should do for the ghost of Banquo. My desperation gave me a manic energy. If you’ve ever demonstrated a Windows application entirely by mime, gesture and florid description, you’ll understand the scale of the challenge, but then I had nothing to lose. 
With a brief sentence of description here and there, and arms flailing whilst outlining the size and shape of graphs and diagrams, I used the many tricks of mime, gesture and body-language learned from playing Captain Hook, or the Sheriff of Nottingham in pantomime. I set out determinedly on my desperate venture. There wasn’t time to do anything but focus on the challenge of the task: the world around me narrowed down to ten faces and my presentation: ten souls who had to be hypnotized into seeing a Windows application: one that was slick, well organized and functional. I don’t remember the details. Eight minutes of my life are gone completely. I was a thespian berserker. I know, however, that I followed the basic plan of building the presentation in a carefully controlled crescendo until the dazzling finale where the results were displayed on-screen. ‘And here you see the results, neatly formatted and grouped carefully to enhance the significance of the figures, together with running trend-graphs!’ I waved a mime to signify an animated window opening, and looked up, in my first pause, to gaze defiantly at the audience. It was a sight I’ll never forget. Ten pairs of eyes were gazing in rapt attention at the imaginary window, and several pairs of eyes were glancing at the imaginary graphs and figures. I hadn’t had an audience like that since my starring role in Beauty and the Beast. At that moment, I realized that my desperate ploy might work. I sat down, slightly winded, when my ten minutes were up. For the first and last time in my life, the audience of a ‘PowerPoint’ presentation burst into spontaneous applause. ‘Any questions?’ ‘Yes. Have you got an agent?’ Yes, in case you’re wondering, I got the deal. They bought the software product from me there and then. However, it was a life-changing experience for me and I have never again trusted technology as part of a presentation. Even if things can’t go wrong, they’ll go wrong, and they’ll kill the flow of what you’re presenting. If you can’t do something without the techno-props, then you shouldn’t do it. The greatest lesson of all is that great presentations require preparation and ‘stage-presence’ rather than fancy graphics. Slides are a great supporting aid, but they should never dominate to the point that you’re lost without them.

    Read the article

  • First PC Build (Part 1)

    - by Anthony Trudeau
Originally posted on: http://geekswithblogs.net/tonyt/archive/2014/08/05/157959.aspx

A couple of months ago I made the decision to build myself a new computer. The intended use is gaming and for using the last real version of Photoshop. I was motivated by the poor state of console gaming and a simple desire to do something I haven’t done before – build a PC from the ground up. I’ve been using PCs for more than two decades. I’ve replaced a component here and there, but for the last 10 years or so I’ve only used laptops. Therefore, this article will be written from the perspective of someone familiar with PCs, but completely new at building. I’m not an expert and this is not a definitive guide for building a PC, but I do hope that it encourages you to try it yourself.

Component List

Research

There was a lot of research necessary, because building a PC is completely new to me, and I haven’t kept up with what’s out there. The first thing you want to do is nail down what your goals are. Your goals are going to be driven by what you want to do with your computer and by personal choice. Don’t neglect the second one, because if you’re doing this for fun you want to get what you want. In my case, I focused on three things: performance, longevity, and aesthetics. The performance aspect is important for gaming and Photoshop, and will drive which components you get. For example, heavy gaming use is going to drive your choice of graphics card. Longevity is relevant to me, because I don’t want to be changing things out anytime soon for the next hot game. The consequence of performance and longevity is cost. Finally, aesthetics was my next consideration. I could have just built a box, but it wouldn’t have been nearly as fun for me. Aesthetics might not be important to you. They are for me. I also like gadgets, and that played into at least one purchase for this build. I used PC Part Picker to put together my component list. I found it invaluable during the process and I’d recommend it to everyone. One caveat is that I wouldn’t trust the compatibility aspects. It does a pretty good job of not steering you wrong, but do your own research. The rest of it isn’t really sexy. I started out with what appealed to me and then I made changes and additions as I dived deep into researching each component and interaction I could find. The resources I used are innumerable. I used reviews, product descriptions, forum posts (praises and problems), et al. to assist me. I also asked friends into gaming what they thought about my component list. And when I got near the end I posted my list to the Reddit /r/buildapc forum. I cannot stress enough the value of extra sets of eyeballs and first-hand experiences. Some of the resources I used:
- PC Part Picker
- Tom’s Hardware
- bit-tech
- Reddit

Purchase

PC Part Picker favors certain vendors. You should look at others too. In my case I found their favorites to be the best. My priorities were out-the-door price and shipping time. I knew that once I started getting parts I’d want to start building. Luckily, I timed it well and everything arrived within the span of a few days. Here are my opinions on the vendors I ended up using, in alphabetical order. Amazon.com is a good, reliable choice. They have excellent customer service in my experience, and I knew I wouldn’t have trouble with them. However, shipping time is often a problem when you use their free shipping unless you order expensive items (I’ve found items over $100 ship quickly). 
Ultimately though, price wasn’t always the best, and their collection of sales tax in my state turned me off them. I did purchase my case from them. I ordered the mouse as well, but I cancelled after it was stuck four days in a “shipping soon” state. I purchased the mouse locally. Best Buy is not my favorite place to do business. There’s a lot of history with poor, uninterested sales representatives, and they used to have a lot of bad anti-consumer policies. That’s a lot better now, but the bad taste is still in my mouth. I ended up purchasing the accessories from them, including the mouse (locally) and headphones. NCIX is a company that I’d never heard of before. It popped up as a recommendation for my CPU cooler on PC Part Picker. I didn’t do a lot of research on the company, because their policy of having you buy insurance for your orders turned me off. That policy makes it clear to me that the company holds me responsible for the shipment once it leaves their dock. That’s not right, and may run afoul of state laws. Regardless, they shipped my CPU cooler quickly and I didn’t have a problem. NewEgg.com is a well-known company. I had never done business with them, but I’m glad I did. They shipped quickly and provided good visibility over everything. The prices were also the best in most cases. My main complaint is that they have a lot of exchange-only return policies on components. To their credit, those policies are listed in the cart underneath each item. That visibility tells me that they’re not playing any shenanigans and made me comfortable dealing with the risk. The vast majority of what I ordered came from them.

Coming Next

In the next part I’ll tackle my build experience.

    Read the article

  • Bug Triage

In this blog post brain dump, I'll attempt to describe the process my team tries to follow when dealing with new bug reports (specifically, code defect reports). This is not official Microsoft policy, just the way we do things… if you do things differently and want to share, you can do so at the bottom in the comments (or on your blog).

Feature Triage Team

A subset of the feature crew, the triage team (which has representation from the PM, Dev and QA disciplines), looks at all unassigned bugs at regular intervals. This can be weekly or daily (or some other frequency) depending on which part of the product cycle we are in and what the untriaged bug load looks like. They discuss each bug, considering the evidence, and decide whether the bug goes from Not Yet Assigned to Assigned (plus the name of the DEV to fix it) or whether it goes from Active to Resolved (which means it gets assigned back to the requestor for closure, or further debate if they were not present at the triage meeting). Close to critical milestones, the feature triage team needs to further justify bugs they take to additional higher-level triage teams.

Bug Opened = Not Yet Assigned

Someone (typically an SDET from the QA team) creates the bug item (e.g. in TFS), ensuring they populate all the relevant fields including: Title, Description, Repro Steps (including the Actual Result at the end of the steps), attachments of code and/or screenshots, Build number that they observed the issue in, regression details if applicable, how it was found, whether a test case exists or needs to be created, etc. They also indicate their opinion on the Priority and Severity. The bug status is left as Not Yet Assigned.

"Issue" versus "Fix for issue"

The solution to some bugs is easy to determine, e.g. "bug: the column name is misspelled". Obviously the fix is to correct the spelling – still, the triage team should be explicit and enter the correct spelling in the bug's Description. Note that a bad bug name here would be "bug: fix the spelling of the column" (it describes the solution, rather than the problem). Other solutions are trickier to establish, e.g. "bug: the column header is not accessible (can only be clicked on with the mouse, not reached via keyboard)". What is the correct solution here? The last thing to do is leave this undetermined and just assign it to a developer. The solution has to be entered in the description. Behind this type of bug usually hides a spec defect or a new feature request. The person opening the bug should focus on describing the issue, rather than the solution. They indicate what the fix is in their opinion by stating the Expected Result (immediately after stating the Actual Result). If they have a complex suggested solution, that should be split out in a separate part, but the triage team has the final say before assigning it. If the solution is lengthy/complicated to describe, the bug can be assigned to the PM. Note: the strict interpretation suggests that any bug with no clear, obvious solution is always a hole in the spec and should always go to the PM. This also ensures the spec gets updated.

Not Yet Assigned - Not Yet Assigned (on someone else's plate)

If the bug is observed in our feature, but the cause is actually another team, we change the Area Path (which is the way we identify teams in TFS) and leave it as Not Yet Assigned. The triage team may add more comments as appropriate, including potentially changing the repro steps. In some cases, we may even resolve the bug in our area path and open a new bug in the area path of the other team. Even though there is no action for a dev on our team, the bug still needs to be tracked. One way of doing this is to implement a notification system that informs the team when the tracked bug changes status; another is to occasionally run a global query (against all area paths) for bugs that were opened by a member of the team and follow up with the current owners for stale bugs.

Not Yet Assigned - Resolved

This state transition can only be made by the Feature Triage Team.
0. Sometimes the bug description is not clear, in which case it gets Resolved as More Information Needed, so the original requestor can provide it.
After understanding what the bug item is about, the first decision is to determine whether it needs to go to a dev.
1. If it is a known bug, it gets resolved as "Duplicate" and linked to the existing bug.
2. If it is "By Design" it gets resolved as such, indicating that the triage team does not think this is a bug.
3. If the bug does not repro on the latest bits, it is resolved as "No Repro".
4. The most painful: if it is decided that we cannot fix it for this release, it gets resolved as "Postponed" or "Won't Fix". The former is typically due to resource and time constraints, while the latter is due to deciding that it is not important enough to consume our resources in any release (yes, not all bugs must be fixed!). For both cases, other factors contribute to the decision, such as: existence of a reasonable workaround, frequency we expect users to encounter the issue, dependencies on other teams to offer a solution, whether it breaks a core scenario, whether it prohibits customer feedback on a major feature, whether it is a regression from a previous release, impact of the fix on other partner teams (e.g. User Education, User Experience, Localization/Globalization), whether this is the right fix, whether the fix impacts performance goals, and last but not least, severity of the bug (e.g. loss of customer data, security threat, crash, hang). The bar for fixing a bug goes up as the release date approaches. The triage team becomes hardnosed about which bugs to take, while the developers are busy resolving assigned bugs; thus everyone drives for Zero Bug Bounce (ZBB). ZBB is when you have 0 active bugs older than 48 hours.

Not Yet Assigned - Assigned

If the bug is something we decide to fix in this release and the solution is known, then it is assigned to a DEV. This is either the developer that will do the work, or a Lead who can further assign it to one of their developer team based on a load-balancing algorithm of their choosing. Sometimes the triage team needs the dev to do some investigation work before deciding whether to take the fix; similarly, the check-in for the fix may be gated on code review by the triage team. In these cases, the instructions are provided in the comments section of the bug, and when the developer is done they notify the triage team for the final decision. Additionally, a Priority and Severity (from 0 to 4) has to be entered; e.g. a P0 means "drop anything you are doing and fix this now" whereas a P4 is something you get to after all P0, P1, P2 and P3 bugs are fixed. From a testing perspective, if the bug was found through ad-hoc testing or by an external team, a decision is made on whether test cases should be added to avoid future regressions. This is communicated to the QA team.

Assigned - Resolved

When the developer receives the bug (they should be checking daily for new bugs on their plate, looking at bugs in order of priority and from older to newer), they can send it back to triage if the information is not clear. Otherwise, they investigate the bug, setting the Sub Status to "Investigating"; if they cannot make progress, they set the Sub Status to "Blocked" and discuss this with triage or whoever else can help them get unblocked. Once they are unblocked, they set the Sub Status to "Working on Solution"; once they are code complete they send a code review request, setting the Sub Status to "Fix Available". After the iterative code review process is over and everyone is happy with the fix, the developer checks it in and changes the state of the bug from Active (and Assigned to them) to Resolved (and Assigned to someone else). The developer needs to ensure that when the status is changed to Resolved it is assigned to a QA person. For example, maybe the PM opened the bug, but it should be a QA person that verifies the fix - the developer needs to manually change the assignee in that case. Typically the QA person will send an email to the original requestor notifying them that the fix is verified.

Resolved - ??

In all cases above, note that the final state was Resolved. What happens after that? The final step should be Closed. The bug is closed once the QA person verifying the fix is happy with it. If the person is not happy, then they change the state from Resolved to Active, thus sending it back to the developer. If the developer and QA person cannot reach agreement, then triage can be brought into it. An easy way to do that is to change the status back to Not Yet Assigned with appropriate comments so the triage team can re-review. It is important to note that only QA can close a bug. That means that if the opener of the bug was a PM, when the bug gets resolved by the dev it may land on the PM's plate, and after a quick review the PM would re-assign it to an SDET, which is the only role that can close bugs. One exception is if the person that filed the bug is external: in that case, we leave it Resolved and assigned to them, and also send them a notification that they need to verify the fix. Another exception is if specialized developer knowledge is needed for verifying the bug fix (e.g. it was a refactoring suggestion bug, typically not observable by the user), in which case it is fine to have a developer verify the fix - ideally a different developer to the one that opened the bug.

Other links on bug triage

A quick search reveals that others have talked about this subject, e.g. here, here, here, here and here.

Your take?

If you have other best practices your team uses to deal with incoming bug reports, feel free to share in the comments below or on your blog. Comments about this post welcome at the original blog.
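
As an illustration only - this mirrors the workflow described in this post, not any official TFS process template - the state transitions can be modeled as a small table (in Python here), which makes rules like "only QA can close a bug" trivial to enforce:

# Rough sketch of the bug workflow described above; the states and rules
# mirror this post, not any official process template.
ALLOWED = {
    "Not Yet Assigned": {"Assigned", "Resolved", "Not Yet Assigned"},  # triage decides, or re-routes the area path
    "Assigned":         {"Resolved", "Not Yet Assigned"},              # dev fixes, or sends back to triage
    "Resolved":         {"Closed", "Active", "Not Yet Assigned"},      # QA verifies, reactivates, or escalates
    "Active":           {"Resolved"},
    "Closed":           set(),
}

def transition(bug, new_state, actor):
    """Apply a state change, enforcing the workflow rules."""
    if new_state not in ALLOWED[bug["state"]]:
        raise ValueError(bug["state"] + " -> " + new_state + " is not allowed")
    if new_state == "Closed" and actor != "QA":
        raise ValueError("only QA can close a bug")
    bug["state"] = new_state
    return bug

bug = {"id": 42, "state": "Not Yet Assigned"}
transition(bug, "Assigned", "triage")   # triage hands it to a dev
transition(bug, "Resolved", "dev")      # dev checks in the fix
transition(bug, "Closed", "QA")         # QA verifies; a PM here would raise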

    Read the article

  • Craftsmanship is ALL that Matters

    - by Wayne Molina
Today, I'm going to talk about a touchy subject: the notion of working in a company that doesn't use the prescribed "best practices" in its software development endeavours. Over the years I have, using a variety of pseudonyms, asked this question on popular programming forums. Although I always add in some minor variation of the story to avoid suspicion that it's the same person posting, the crux of the tale remains the same:

A Programmer’s Tale

A junior software developer has just started a new job at an average company, creating average line-of-business applications for internal use (the most typical scenario programmers find themselves in). This hypothetical newbie has spent a lot of time reading up on the "theory" of software development, devouring books, blogs and screencasts from well-known and respected software developers in the community in order to broaden his knowledge and "do what the pros do". He begins his new job, eager to apply what he's learned on a real-world project, only to discover that his new teammates don't use any of those concepts and techniques. They hack their way through development, or in a best-case scenario use some homebrew, thrown-together semblance of a framework for their applications that follows not one of the best practices suggested by the “elite” in the software community - things like TDD (TDD as a "best practice" is the only subjective part of this post, but it's included here due to a very large following of respected developers who consider it one), the SOLID principles, well-known and venerable tools, even version control in a worst-case and truly nightmarish scenario. Our protagonist is frustrated that he isn't doing things the "proper" way - a way he's spent personal time digesting and learning about and, more importantly, a way that some of the top developers in the industry advocate - and turns to a forum to ask the advice of his peers. Invariably the answer I, in the guise of the concerned newbie, will receive is that A) I don't know anything and should just shut my mouth and sling code the bad way like everybody else on the team, and B) these "best practices" are a fad or a joke, and the only thing that matters is shipping software to your customers. I am here today to say that anyone who says this, or anything like it, is not only full of crap but indicative of exactly the type of “developer” that has helped to give our industry a bad name. Here is why:

One Who Knows Nothing, Understands Nothing

On one hand, you have the cognoscenti of the .NET development world. Guys like James Avery, Jeremy Miller, Ayende Rahien and Rob Conery; all well-respected and noted programmers that are pretty much our version of celebrities. These guys write blogs, books, and post videos outlining the "correct" way of writing software to make sure it not only works but is maintainable and extensible and a joy to work with. They tout the virtues of the SOLID principles, or of using TDD/BDD, or using a mature ORM like NHibernate, Subsonic or even Entity Framework. On the other hand, you have Joe Everyman, Lead Software Developer at Initrode Corporation - in our hypothetical story Joe is the junior developer's new boss. Joe's been with Initrode for 10 years, starting as the company’s very first programmer and over the years building up a little fiefdom of his own until at present he’s in charge of all Initrode’s software development. Joe writes code the same way he always has, without bothering to learn much, if anything. 
He looked at NHibernate once and found it was "too hard", so he uses a primitive implementation of the TableDataGateway pattern as a wrapper around SqlClient.SqlConnection and SqlClient.SqlCommand instead of an actual ORM (or, in a better-case scenario, has created his own ORM); the thought of using LINQ or Entity Framework or really anything other than his own hastily homebrewed solution has never occurred to him. He doesn't understand TDD and considers “testing” to be using the .NET debugger to step through code, or simply loading up an app and entering some values to see if it works. He doesn't really understand SOLID, and he doesn't care to. He's worked as a programmer for years, and that's all that counts. Right? WRONG. Who would you rather trust? Someone with years of experience who writes books, creates well-known software and is akin to a celebrity, or someone with no credibility outside their own minute environment who throws around their clout and company seniority as "proof" of their ability? Joe Everyman may have years of experience at Initrode as a programmer, and says to do things "his way", but someone like Jeremy Miller or Ayende Rahien has years of experience at companies just like Initrode; THEY know ten times more than Joe Everyman knows or could ever hope to know, and THEY say to do things "this way". Here's another way of thinking about it: if you wanted to get into politics and needed advice on the best way to do it, would you rather listen to the mayor of Hicktown, USA or Barack Obama? One is a small-time nobody while the other is very well-known and, as such, would probably have much more accurate and beneficial advice. NOTE: The selection of Barack Obama as an example in no way, shape, or form suggests a political affiliation or political bent to this post or blog, and no political innuendo should be mistakenly read from it; the intent was merely to compare a small-time persona with a well-known persona in a non-software field. Feel free to replace the name "Barack Obama" with any well-known Congressman, Senator or US President of your choice.

DIY Considered Harmful

I will say right now that the homebrew development environment is the WORST one for an aspiring programmer, because it relies on nothing outside its own little box - no useful skill outside of the small pond. If you are forced to use some half-baked, homebrew ORM created by your Director of Software, you are not learning anything valuable you can take with you in the future; now, if you plan to stay at Initrode for 10 years like Joe Everyman, this is fine and dandy. However if, like most of us, you want to advance your career outside a very narrow space, you will do more harm than good by sticking it out in an environment where you, to be frank, know better than everybody else because you are aware of alternative and, in almost all cases, better tools for the job. A junior developer who understands why the SOLID principles are good to follow, or why TDD is beneficial, or who knows that it's better to use NHibernate/Subsonic/EF/LINQ/a well-known ORM versus some in-house one, knows better than a senior developer with 20 years' experience who doesn't understand any of that, plain and simple. Anyone who disagrees is either a liar, or someone who, just like Joe Everyman, Lead Developer, relies on seniority and tenure rather than adapting their knowledge as things evolve. 
In many cases, the Joe Everymans of the world act this way out of fear - they cannot possibly fathom that a “junior” could know more than them; after all, they’ve spent 10 or more years at the same company, doing the same job, cranking out the same shoddy software. And here comes a newbie who hasn’t spent 10+ years doing the same things, with a fresh and often radical take on the craft, and Joe Everyman is afraid he might have to put some real effort into his career again instead of just pointing to his 10 years of service at Initrode as “proof” that he’s good, or that he might have to learn something new to improve; in most cases the problem is that Joe Everyman, and by extension Initrode itself, has a mentality of being just “good enough”, and mediocrity is the rule of the day.

A Thorn Bush is No Place for a Phoenix

My advice: if you work on a team that doesn't use the best practices that some of the most famous developers in our field say are the "right" way to do things (and who have legions of people who agree), and YOU are aware of these practices and can see why they work, then LEAVE the company. Find a company where they DO care about quality and craftsmanship; otherwise you will never be happy. There is no point in "dumbing" yourself down to the level of your co-workers and slinging code without care for craftsmanship. In 95% of these situations there will be no point in bringing it to the attention of Joe Everyman, because he won't listen; he might even get upset that someone is trying to "upstage" him and fire the newbie, replacing someone with loads of untapped potential with a drone who will just nod affirmatively and grind out the tasks assigned without question.

Find a company that has people smart enough to listen to the "best and brightest", and be happy. Do not, I repeat, DO NOT waste away in a job working for ignorant people. At the end of the day software development IS a craft, and a level of craftsmanship is REQUIRED of any serious professional. When you have knowledgeable people with the credibility to back it up saying one thing, and small-time people who are, to put it bluntly, nobodies in the field saying and doing something totally different because they can't comprehend it, leave the nobodies to their own devices to fade into obscurity. Work for a company that uses REAL software engineering techniques and really cares about craftsmanship. The biggest issue affecting our career, and the reason software development has never been the respected, white-collar career it was meant to be, is that hacks and charlatans can pass themselves off as professional programmers without following a lick of good advice from programmers much better at the craft than they are. These modern-day snake-oil salesmen entrench themselves in companies by hoodwinking non-technical businesspeople and customers with their shoddy wares, end up in senior/lead/executive positions, and push their lack of knowledge on everybody unfortunate enough to work with/for/under them, crushing any dissent or voices of reason and change under their tyrannical heel and leaving behind a trail of dismayed and, often, unemployed junior developers who were made examples of to keep up the facade and avoid having the shadow of doubt cast upon them.

To sum this up another way: if you surround yourself with learned people, you will learn. Surround yourself with ignorant people who can't, as the saying goes, see the forest for the trees, and you'll learn nothing of any real value.
There is more to software development than just writing code, and the end goal should not be just "shipping software"; it should be shipping software that is extensible, maintainable and, above all else, software whose creation has broadened your knowledge in some capacity, even if a minor one. An eager newbie who knows theory and thirsts for knowledge can easily be moulded and taught the advanced topics, but the same can't be said of someone who only cares about the finish line. This industry needs more people espousing the benefits of software craftsmanship and proper software engineering techniques, and fewer Joe Everymans who are unwilling to adapt or foster new ways of thinking.

Conclusion - I Cast “Protection from Fire”

I am fairly certain this post will spark some controversy and might even invite the flames. Please keep in mind these are opinions and nothing more. A little healthy rant and subsequent flamewar can be good for the soul once in a while. To paraphrase The Godfather: it helps to get rid of the bad blood.

    Read the article

  • JQuery/JavaScript confusion with Previous and Next buttons.

    - by Anders H
    I've got some inherited jQuery code that isn't working as I'd think, and I'm just not even sure what to research or look up next. The problem: I've got a few DIVs within the HTML: a container, a "frame" and the content. If the content is longer than the frame, it's cut off using overflow:hidden and a Next button appears. The next button works correctly. However, there's also a previous button with similar code that does not work as expected: it just displays whenever the next button does. And whenever either button is hidden, it will not reappear again when navigating back through the "pages". I may be overlooking something in the code, but here it is in full.

The HTML:

<div id="draggable" class="ui-widget-content">
  <div id="draggable-title" class="cufon">about</div>
  <a id="draggable-close" href="javascript:void(0);"><div class="close-img icon"></div></a>
  <div class="clear"></div>
  <div id="draggable-frame">
    <div id="draggable-content">
    </div>
  </div>
  <a id="prevContent" href="javascript:void(0);">&laquo; previous</a><a id="nextContent" href="javascript:void(0);">next &raquo;</a>
</div>

The JavaScript:

$(function() {
  $("#draggable").draggable();
});

$(document).ready(function(){
  $("#draggable-frame").scrollTop(0);
  $("#prevContent").click(function(){
    $("#draggable-content").fadeOut("medium");
    setTimeout("showPrev()", 250);
    if($("#draggable-frame").scrollTop()+$("#draggable-frame").height() >= $("#draggable-content").height()) {
      $("#prevContent").hide();
    }
    $("#draggable-content").fadeIn("medium");
  });
  $("#nextContent").click(function(){
    $("#draggable-content").fadeOut("medium");
    setTimeout("showNext()", 250);
    if($("#draggable-frame").scrollTop()+$("#draggable-frame").height() >= $("#draggable-content").height()) {
      $("#nextContent").hide();
    }
    $("#draggable-content").fadeIn("medium");
  });
  $("#draggable-close").click(function(){
    $("#draggable").fadeOut("medium");
  });
  $("#prevContent").hide();
  $("#nextContent").hide();
  showContent("about");
  $(".opener").click(function(){
    $("#draggable-frame").scrollTop(0);
    $("#draggable").fadeIn("medium");
    showContent($(this).attr("title"));
  });
});

function showPrev() {
  $("#draggable-frame").scrollTop($("#draggable-frame").scrollTop() - $("#draggable-frame").height());
}

function showNext() {
  $("#draggable-frame").scrollTop($("#draggable-frame").scrollTop() + $("#draggable-frame").height());
}

function showContent(title) {
  $("#draggable-title").html(title);
  $("#draggable-content").html($("#"+title).html());
  Cufon.replace('.cufon', { fontFamily: 'cav', hover: true });
  $("#nextContent").hide();
  $("#prevContent").hide();
  if($("#draggable-content").height() > $("#draggable-frame").height()) {
    $("#nextContent").show();
  }
  if($("#draggable-content").height() > $("#draggable-frame").height()) {
    $("#prevContent").show();
  }
}

Even just pointing me in the right direction to research would be a big help right now. Thank you.
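One hedged sketch of where to look (using the names from the question's code; this is a suggestion, not a verified fix): both if-conditions in showContent test the identical expression, so the two buttons can only ever appear together, and the click handlers only ever hide buttons without re-showing them. Deriving visibility from the frame's scroll position after every page change would address both symptoms:

// Hypothetical helper, not part of the original code: recompute button
// visibility from the current scroll offset instead of toggling it
// piecemeal inside each click handler.
function updateNav() {
  var frame = $("#draggable-frame");
  var top = frame.scrollTop();
  // "previous" only makes sense once we've scrolled past the first page
  $("#prevContent").toggle(top > 0);
  // "next" only makes sense while content remains below the visible frame
  $("#nextContent").toggle(top + frame.height() < $("#draggable-content").height());
}

Calling updateNav() at the end of showPrev, showNext and showContent (after the scrollTop change, so the delayed setTimeout calls run it with the new position) should let both buttons come and go correctly as the reader pages back and forth.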

    Read the article

  • You should NOT be writing jQuery in SharePoint if…

    - by Mark Rackley
    Yes… another one of these posts. What can I say? I’m a pot stirrer… a rabble rouser *rabble rabble*. jQuery in SharePoint seems to be a fairly polarizing issue, with one side thinking it is the most awesome thing since Princess Leia as the slave girl in Return of the Jedi and the other side thinking it is the worst idea since Mannequin 2: On the Move. The correct answer is OF COURSE “it depends”. But what are those deciding factors that make jQuery an awesome fit or leave a bad taste in your mouth? Let’s see if I can drive the discussion here with some polarizing comments of my own… I know some of you are getting ready to leave your comments even now, before reading the rest of the blog, which is great! Iron sharpens iron… These discussions hopefully open us up to understanding the entire process better and thinking about things in a different way.

You should not be writing jQuery in SharePoint if you are not a developer…

Let’s start off with the most polarizing and rant-filled portion of the blog post. If you don’t know what you are doing, or you don’t have a background that helps you understand the implications of what you are writing, then you should not be writing jQuery in SharePoint! I truly believe that one of the biggest reasons for the jQuery haters is all the bad jQuery out there. If you don’t know what you are doing you can do some NASTY things! One of the best stories I’ve heard about this is from my good friend John Ferringer (@ferringer). John tells this story during our Mythbusters session we do together. One of his clients was undergoing a Denial of Service attack and they couldn’t figure out what was going on! After much searching they found that some genius jQuery developer wrote some code for an image rotator but did not take into account what happens when there are no images to load! The code just kept hitting the servers over and over and over again, which prevented anything else from getting done! Now, I’m NOT saying that I have not done the same sort of thing in the past or am immune from such mistakes. My point is that if you don’t know what you are doing, there are very REAL consequences that can have a major impact on your organization AND they will be hard to track down. Think how happy your boss will be after you copy and paste some jQuery from a blog without understanding what it does, it brings down the farm, AND it takes them 3 days to track it back to you. :/ Good times will not be had.

Like it or not, JavaScript/jQuery is a programming language. While you .NET people sit on your high horses because your code is compiled and “runs faster” (also debatable), the rest of us will be actually getting work done and delivering solutions while you are trying to figure out why your widget won’t deploy. I can pick at that scab because I write .NET code too and speak from experience. I can do both, and do both well. So, I am not speaking from ignorance here. In JavaScript/jQuery you have variables, loops, conditionals, functions, arrays, events, and built-in methods. If you are not a developer you just aren’t going to take advantage of all of that and use it correctly.

Ahhh… but there is hope! There are a lot of jQuery resources out there to help you learn, and learn well! There are many experts on the subject who will gladly tell you when you are smoking crack. I just this minute saw a tweet from @cquick with a link to “jQuery Fundamentals”. I just glanced through it and this may be a great primer for you aspiring jQuery devs.
Take advantage of all the resources and become a developer! Hey, it will look awesome on your resume, right?

You should not be writing jQuery in SharePoint if it depends too much on client resources for a good user experience

I’ve said it once and I’ll say it over and over until you understand: jQuery is executed on the client’s computer. Got it? If you are looping through hundreds of rows of data, searching through an enormous DOM, or performing many calculations, it is going to take some time! AND if your user happens to be sitting on some old PC somewhere that they picked up at a garage sale, their experience will be that much worse! If you can’t give the user a good experience, they will not use the site. So, if jQuery is causing the user to have a bad experience, don’t use it. I sometimes go as far as to say that you should NOT go to jQuery as a first option for external-facing web sites, because you have ZERO control over what the end user’s computer will be. You just can’t guarantee an awesome user experience all of the time.

Ahhh… but you have no choice? (Where have I heard that before?) Well… if you really have no choice, here are some tips to help improve the experience:

Avoid screen scraping: This is not 1999 and SharePoint is not an old green screen from a mainframe… so why are you treating it like it is? Screen scraping is time consuming and client intensive. Take advantage of tools like SPServices to do your data retrieval when possible.

Fine-tune your DOM searches: A lot of time can be eaten up just searching the DOM and skipping past table rows that you don’t need. Write better jQuery to only loop through the table rows you need, or only access the specific elements you need. Take advantage of element IDs to return the one element you are looking for instead of looping through all the DOM over and over again (see the short sketch after this list).

Write better jQuery: Remember, this is development. Think about how you can write cleaner, faster jQuery. This directly relates to the previous point of improving your DOM searches, but also applies when using arrays, variables and loops. Do you REALLY need to loop through that array 3 times? How can you knock it down to 2 times, or even 1? When you have lots of calculations and data that you are manipulating, every operation adds up. Think about how you can streamline it. Back in the old days, before RAM was abundant, cores were plentiful and dinosaurs roamed the earth, us developers had to take performance into account in everything we did. It’s a lost art that really needs to be used here.
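As a minimal sketch of the DOM-search point above (the selectors and class names here are made up for illustration, not taken from any real page):

// Slow: re-scans every <td> in the entire document on every call
$("td").each(function() {
  if ($(this).text() === "Active") { $(this).addClass("highlight"); }
});

// Faster: jump straight to one element by ID, then scope the traversal
// to rows inside that element only
var $table = $("#statusTable");           // hypothetical table ID
$table.find("tr.data-row td.status")      // hypothetical row/cell classes
      .addClass("highlight");

The second form lets the browser locate one node via getElementById and walk only that subtree, instead of crawling the whole page over and over.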
You should not be writing jQuery in SharePoint if you are sending a lot of data over the wire…

Developer: “Awesome… you can easily call SharePoint’s web services to retrieve and write data using SPServices!”

Administrator: “Crap! You can easily call SharePoint’s web services to retrieve and write data using SPServices!”

SPServices may indeed be the best thing that happened to SharePoint since the invention of SharePoint Saturdays by Godfather Lotter… BUT you HAVE to use it wisely! (I REFUSE to make the Spiderman reference.) If you do not know what you are doing, your code will bring back EVERY field and EVERY row from a list and push that over the internet with all that lovely XML wrapped around it. That can be a HUGE amount of data and will GREATLY impact performance! Calling several web service methods at the same time can cause the same problem and can negatively impact your SharePoint servers. These problems, thankfully, are not difficult to rectify if you are careful:

Limit list data retrieved: Use CAML to reduce the number of rows returned, and limit the fields returned using ViewFields (a short sketch follows this list). You should definitely be doing this regardless; if you aren’t, I hope your admin thumps you upside the head.

Batch large list updates: You may or may not have noticed that if you try to do large updates (hundreds of rows), the performance is either completely abysmal or the update fails over half the time. You can greatly improve performance and avoid timeouts by breaking up your updates into several smaller batches. I don’t know if there is a magic number for best performance; it really depends on how much data you are sending back more than on the number of rows. However, I have found that 200 rows generally works well. Play around and find the right number for your situation.

Delay web service calls when possible: One of the cool things about jQuery and SPServices is that you can delay queries to the server until they are actually needed instead of doing them all at once. This can lead to performance improvements over DataViewWebParts and even .NET code in the right situations. So, don’t load the data until it’s needed. In some instances you may not need to retrieve the data at all, so why retrieve it ALL the time?
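To make the first tip concrete, here is a hedged sketch of a trimmed-down SPServices GetListItems call (the list and field names are hypothetical, and you should check the SPServices documentation for the exact options your version supports):

// Retrieve only two fields from only the active items, capped at 100 rows,
// instead of pulling the entire list over the wire
$().SPServices({
  operation: "GetListItems",
  async: true,
  listName: "Tasks",                                     // hypothetical list
  CAMLViewFields: "<ViewFields>" +
                  "<FieldRef Name='Title' />" +
                  "<FieldRef Name='Status' />" +
                  "</ViewFields>",
  CAMLQuery: "<Query><Where>" +
             "<Eq><FieldRef Name='Status' /><Value Type='Text'>Active</Value></Eq>" +
             "</Where></Query>",
  CAMLRowLimit: 100,
  completefunc: function (xData, Status) {
    // Walk only the rows the server actually returned
    $(xData.responseXML).SPFilterNode("z:row").each(function () {
      console.log($(this).attr("ows_Title"));
    });
  }
});

Every field you leave out of ViewFields and every row the Where clause filters out is XML that never has to cross the network.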
You should not be writing jQuery in SharePoint if there is a better solution…

jQuery is NOT the silver bullet in SharePoint; it is not the answer to every question, it is just another tool in the developer's toolkit. I urge all developers to know what options exist out there and choose the right one! Sometimes it will be jQuery, sometimes it will be .NET, sometimes it will be XSL, and sometimes it will be some other choice… So, when is there a better solution than jQuery?

When you can’t get away from performance problems: Sometimes jQuery will just give you horrible performance regardless of what you do, because of unavoidable obstacles. In these situations you are going to have to figure out an alternative. Can I do it with a DataViewWebPart, or do I have to crack open Visual Studio?

When you need to do something that jQuery can’t do: There are lots of things you can’t do in jQuery, like elevate privileges, write event handlers or workflows, or interact with back-end systems that have no web service interface. It just can’t do everything.

When it can be done faster and more efficiently another way: Why are you spending time writing jQuery to replicate a DataViewWebPart that would take 5 minutes? Or why are you trying to implement complicated logic that would be simple to do in .NET? If your answer is that you don’t have the option, okay. BUT if you do have the option, don’t reinvent the wheel! Take advantage of the other tools. The answer is not always jQuery… sorry… the kool-aid tastes good, but sweet tea is pretty awesome too.

You should not be using jQuery in SharePoint if you are a moron…

Let’s finish up the blog on a high note… Yes, it’s true, I sometimes type things just to get a reaction… guess this section title might be a good example, but it feels good sometimes just to type the words that a lot of us think… So… don’t be that guy! Another good buddy of mine who works for Microsoft told me: “I loved jQuery in SharePoint… until I had to support it.” He went on to explain that some user was making several web service calls on a page using jQuery and then was calling Microsoft and COMPLAINING because the page took so long to load… DUH! What do you expect to happen when you are pushing that much data over the wire and are making that many web service calls at once!! It’s one thing to write that kind of code and accept that it’s just going to take a while; it’s COMPLETELY another issue to do that and then complain when it’s not lightning fast! Someone’s gene pool needs some chlorine.

So, I think this is a nice summary of the blog… DON’T be that guy… don’t be a moron. How can you stop yourself from being a moron? Ah… glad you asked, here are some tips:

Think: Is jQuery the right solution to my problem? Is there a better approach? What are the implications and pitfalls of using jQuery in this situation?

Search: What are others doing? Does someone have a better solution? Is there a third-party library that does the same thing I need?

Plan: Write good jQuery. Limit calculations and data sent over the wire, and don’t reinvent the wheel when possible.

Test: Okay, it works well on your machine. Try it on others, ESPECIALLY if this is for an external site. Test with empty data. Test with hundreds of rows of data. Test as many scenarios as possible. Monitor server resources to see the impact there as well.

Ask the experts: As smart as you are, there are people smarter than you. Even the experts talk to each other to make sure they aren't doing something stupid. And for the MOST part they are pretty nice guys. Marc Anderson and Christophe Humbert are two guys who regularly keep me in line. Make sure you aren’t doing something stupid.

Repeat: So, when you think you have the best solution possible, repeat the steps above just to be safe.

Conclusion

jQuery is an awesome tool and has come in handy on many occasions. I’m even teaching a half-day SharePoint & jQuery workshop at the upcoming SPTechCon in Boston if you want to berate me in person. However, it’s only as awesome as the developer behind the keyboard. It IS development and has its pitfalls. Knowledge and experience are invaluable to giving the user the best experience possible. Let’s face it: in the end, no matter our opinions, prejudices, or egos, providing our clients, customers, and users with the best solution possible is what counts. Period… end of sentence…

    Read the article

< Previous Page | 4 5 6 7 8 9  | Next Page >