Search Results

Search found 4872 results on 195 pages for 'comments'.


  • Bug Triage

    In this blog post brain dump, I'll attempt to describe the process my team tries to follow when dealing with new bug reports (specifically, code defect reports). This is not official Microsoft policy, just the way we do things… if you do things differently and want to share, you can do so at the bottom in the comments (or on your blog).

    Feature Triage Team
    A subset of the feature crew, the triage team (which has representation from the PM, Dev and QA disciplines), looks at all unassigned bugs at regular intervals. This can be weekly or daily (or some other frequency), depending on which part of the product cycle we are in and what the untriaged bug load looks like. They discuss each bug, considering the evidence, and decide whether the bug goes from Not Yet Assigned to Assigned (plus the name of the DEV to fix it) or from Active to Resolved (which means it gets assigned back to the requestor for closure, or for further debate if they were not present at the triage meeting). Close to critical milestones, the feature triage team needs to further justify the bugs it takes to additional, higher-level triage teams.

    Bug Opened = Not Yet Assigned
    Someone (typically an SDET from the QA team) creates the bug item (e.g. in TFS), ensuring they populate all the relevant fields, including: Title, Description, Repro Steps (including the Actual Result at the end of the steps), attachments of code and/or screenshots, the Build number they observed the issue in, regression details if applicable, how it was found, whether a test case exists or needs to be created, etc. They also indicate their opinion on the Priority and Severity. The bug status is left as Not Yet Assigned.

    "Issue" versus "Fix for issue"
    The solution to some bugs is easy to determine, e.g. "bug: the column name is misspelled". Obviously the fix is to correct the spelling – still, the triage team should be explicit and enter the correct spelling in the bug's Description. Note that a bad bug name here would be "bug: fix the spelling of the column" (it describes the solution rather than the problem). Other solutions are trickier to establish, e.g. "bug: the column header is not accessible (can only be clicked on with the mouse, not reached via keyboard)". What is the correct solution here? The last thing to do is leave this undetermined and just assign it to a developer. The solution has to be entered in the description; behind this type of bug usually hides a spec defect or a new feature request. The person opening the bug should focus on describing the issue rather than the solution. They indicate what the fix is, in their opinion, by stating the Expected Result (immediately after stating the Actual Result). If they have a complex suggested solution, that should be split out as a separate part, but the triage team has the final say before assigning it. If the solution is lengthy or complicated to describe, the bug can be assigned to the PM. Note: the strict interpretation suggests that any bug with no clear, obvious solution is always a hole in the spec and should always go to the PM. This also ensures the spec gets updated.

    Not Yet Assigned - Not Yet Assigned (on someone else's plate)
    If the bug is observed in our feature, but the cause actually lies with another team, we change the Area Path (which is the way we identify teams in TFS) and leave it as Not Yet Assigned. The triage team may add more comments as appropriate, including potentially changing the repro steps. In some cases, we may even resolve the bug in our area path and open a new bug in the area path of the other team. Even though there is no action on a dev on the team, the bug still needs to be tracked. One way of doing this is to implement some notification system that informs the team when the tracked bug changes status; another way is to occasionally run a global query (against all area paths) for bugs that have been opened by a member of the team and follow up with the current owners about stale bugs.

    Not Yet Assigned - Resolved
    This state transition can only be made by the Feature Triage Team.
    0. Sometimes the bug description is not clear, and in that case it gets Resolved as More Information Needed so the original requestor can provide it. After understanding what the bug item is about, the first decision is whether it needs to go to a dev.
    1. If it is a known bug, it gets resolved as "Duplicate" and linked to the existing bug.
    2. If it is "By Design", it gets resolved as such, indicating that the triage team does not think this is a bug.
    3. If the bug does not repro on the latest bits, it is resolved as "No Repro".
    4. The most painful: if it is decided that we cannot fix it for this release, it gets resolved as "Postponed" or "Won't Fix". The former is typically due to resource and time constraints, while the latter comes from deciding that it is not important enough to consume our resources in any release (yes, not all bugs must be fixed!). For both cases, other factors contribute to the decision, such as: the existence of a reasonable workaround, how frequently we expect users to encounter the issue, dependencies on another team to offer a solution, whether it breaks a core scenario, whether it prohibits customer feedback on a major feature, whether it is a regression from a previous release, the impact of the fix on other partner teams (e.g. User Education, User Experience, Localization/Globalization), whether this is the right fix, whether the fix impacts performance goals, and, last but not least, the severity of the bug (e.g. loss of customer data, security threat, crash, hang). The bar for fixing a bug goes up as the release date approaches. The triage team becomes hardnosed about which bugs to take, while the developers are busy resolving assigned bugs; thus everyone drives for Zero Bug Bounce (ZBB). ZBB is when you have 0 active bugs older than 48 hours.

    Not Yet Assigned - Assigned
    If the bug is something we decide to fix in this release and the solution is known, then it is assigned to a DEV. This is either the developer who will do the work, or a Lead who can further assign it to one of his developer team based on a load-balancing algorithm of their choosing. Sometimes the triage team needs the dev to do some investigation work before deciding whether to take the fix; similarly, the checkin for the fix may be gated on a code review by the triage team. In these cases, the instructions are provided in the comments section of the bug, and when the developer is done they notify the triage team for the final decision. Additionally, a Priority and Severity (from 0 to 4) have to be entered; e.g. a P0 means "drop anything you are doing and fix this now", whereas a P4 is something you get to after all P0, P1, P2 and P3 bugs are fixed. From a testing perspective, if the bug was found through ad-hoc testing or by an external team, a decision is made on whether test cases should be added to avoid future regressions. This is communicated to the QA team.

    Assigned - Resolved
    When the developer receives the bug (they should be checking daily for new bugs on their plate, looking at bugs in order of priority and from older to newer), they can send it back to triage if the information is not clear. Otherwise, they investigate the bug, setting the Sub Status to "Investigating"; if they cannot make progress, they set the Sub Status to "Blocked" and discuss this with triage or whoever else can help them get unblocked. Once they are unblocked, they set the Sub Status to "Working on Solution"; once they are code complete, they send a code review request, setting the Sub Status to "Fix Available". After the iterative code review process is over and everyone is happy with the fix, the developer checks it in and changes the state of the bug from Active (and Assigned to them) to Resolved (and Assigned to someone else). The developer needs to ensure that when the status is changed to Resolved, the bug is assigned to a QA person. For example, maybe the PM opened the bug, but it should be a QA person who verifies the fix; the developer needs to manually change the assignee in that case. Typically the QA person will send an email to the original requestor notifying them that the fix is verified.

    Resolved - ??
    In all the cases above, note that the final state was Resolved. What happens after that? The final step should be Closed. The bug is closed once the QA person verifying the fix is happy with it. If that person is not happy, they change the state from Resolved back to Active, thus sending it back to the developer. If the developer and QA person cannot reach agreement, then triage can be brought into it. An easy way to do that is to change the status back to Not Yet Assigned with appropriate comments so the triage team can re-review. It is important to note that only QA can close a bug. That means that if the opener of the bug was a PM, when the bug gets resolved by the dev it may land on the PM's plate, and after a quick review the PM would re-assign it to an SDET, which is the only role that can close bugs. One exception to this is if the person who filed the bug is external: in that case, we leave it Resolved and assigned to them, and also send them a notification that they need to verify the fix. Another exception is if specialized developer knowledge is needed to verify the bug fix (e.g. it was a refactoring suggestion bug, typically not observable by the user), in which case it is fine to have a developer verify the fix, ideally a different developer to the one that opened the bug.

    Other links on bug triage
    A quick search reveals that others have talked about this subject, e.g. here, here, here, here and here.

    Your take?
    If you have other best practices your team uses to deal with incoming bug reports, feel free to share in the comments below or on your blog. Comments about this post welcome at the original blog.

    Read the article

  • SQLAuthority News – Download Whitepaper – Understanding and Controlling Parallel Query Processing in SQL Server

    - by pinaldave
    My recent article SQL SERVER – Reducing CXPACKET Wait Stats for High Transactional Database has received many good comments regarding MAXDOP 1 and MAXDOP 0. I really enjoyed reading the comments, as they came from industry leaders and gurus. I researched the subject further and ended up on the following white paper written by Microsoft.

    Understanding and Controlling Parallel Query Processing in SQL Server
    Data warehousing and general reporting applications tend to be CPU intensive because they need to read and process a large number of rows. To facilitate quick data processing for queries that touch a large amount of data, Microsoft SQL Server exploits the power of multiple logical processors to provide parallel query processing operations such as parallel scans. Through extensive testing, we have learned that, for most large queries that are executed in a parallel fashion, SQL Server can deliver linear or nearly linear response time speedup as the number of logical processors increases. However, some queries in high parallelism scenarios perform suboptimally. There are also some parallelism issues that can occur in a multi-user parallel query workload. This white paper describes parallel performance problems you might encounter when you run such queries and workloads, and it explains why these issues occur. In addition, it presents how data warehouse developers can detect these issues, and how they can work around them or mitigate them.

    To review the document, please download the Understanding and Controlling Parallel Query Processing in SQL Server Word document. Note: the above abstract has been taken from here. The real question: have parallel queries made the life of the DBA much simpler, or are they viewed with suspicion as a potential source of performance degradation?

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology
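
    For readers who want to experiment with the settings discussed in the comments, here is a minimal T-SQL sketch (the table and query are hypothetical) of the two common knobs: the instance-wide default and a per-query hint.

    -- Instance-wide default: 0 means SQL Server may use all available schedulers.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism', 0;
    RECONFIGURE;

    -- Per-query override: cap a single (hypothetical) reporting query at one
    -- scheduler, a common mitigation when CXPACKET waits dominate.
    SELECT CustomerID, SUM(OrderTotal) AS Total
    FROM dbo.Orders
    GROUP BY CustomerID
    OPTION (MAXDOP 1);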

    Read the article

  • Picking a code review tool

    - by marcog
    We are a startup looking to migrate from Fogbugz/Kiln to a new issue tracker/code review system. We are very happy with Jira, especially the configurability, but we are undecided on a code review tool. We have been trialing Bitbucket, but it doesn't fit our workflow well. Here are the problems we have identified with BB:
    - Comments can be hard to find: when commenting on code not visible in the diff; when code that is commented on is later changed; and viewing the full file doesn't include comments (it also doesn't show changes)
    - Viewing comments on individual commits can be a pain
    - We have the implementer merge the diff and close the issue, whereas pull requests are more suited to the open source model where someone with commit rights merges
    - We would like to automate creation of the code review (either from Jira or a command line tool)
    - No syntax highlighting
    - Once the pull request exceeds a certain size, BB won't show the whole thing and you have to view individual commits
    - Linking BB pull requests to Jira issues is a bit janky: we have a pull request URL field in Jira, but this doesn't work when there are changes in multiple repositories
    Does anyone have any good suggestions given the above? We are tight on budget, and Jira integration is a big plus. We also have multiple commits per issue, and would like the option of viewing individual commits in the review. It might also be worth noting that we have a separate reviewer and tester for each issue.

    Read the article

  • Applying WCAG 2.0 to Non-Web ICT: second draft published from WCAG2ICT Task Force - for public review

    - by Peter Korn
    Last Thursday the W3C published an updated Working Draft of Guidance on Applying WCAG 2.0 to Non-Web Information and Communications Technologies. As I noted last July when the first draft was published, the motivation for this guidance comes from the Section 508 refresh draft, and also the European Mandate 376 draft, both of which seek to apply the WCAG 2.0 level A and AA Success Criteria to non-web ICT documents and software.

    This second Working Draft represents a major step forward in harmonization with the December 5th, 2012 Mandate 376 draft documents, including specifically Draft EN 301549 "European accessibility requirements for public procurement of ICT products and services". This work greatly increases the likelihood of harmonization between the European and American technical standards for accessibility, for web sites and web applications, non-web documents, and non-web software.

    As I noted last October at the European Policy Centre event "The Accessibility Act – Ensuring access to goods and services across the EU", and again last month at the follow-up EPC event "Accessibility - From European challenge to global opportunity": "There isn't a 'German Macular Degeneration', a 'French Cerebral Palsy', an 'American Autism Spectrum Disorder'. Disabilities are part of the human condition. They're not unique to any one country or geography – just like ICT. Even the built environment – phones, trains and cars – is the same worldwide. The definition of 'accessible' should be global – and the solutions should be too. Harmonization should be global, and not just EU-wide. It doesn't make sense for the EU to have a different definition to the US or Japan."

    With these latest drafts from the W3C and the Mandate 376 team, we've moved a major step forward toward that goal of a global "definition of 'accessible' ICT." I strongly encourage all interested parties to read the Call for Review, and to submit comments during the current review period, which runs through 15 February 2013. Comments should be sent to public-wcag2ict-comments-AT-w3.org.

    I want to thank my colleagues on the WCAG2ICT Task Force for the incredible time, energy and expertise they brought to this work, including particularly my co-authors Judy Brewer, Loïc Martínez Normand, Mike Pluke, Andi Snow-Weaver, and Gregg Vanderheiden, and the document editors Michael Cooper and Andi Snow-Weaver.

    Read the article

  • Disqus ads are disqusting and here is how you turn them off

    - by Gopinath
    After a couple of months, I spent some time yesterday reviewing my blog and coziie.com to see if everything was fine. Disqus, the best commenting system and an unusual suspect, was looking weird. The commenting sections of my sites displayed links to third-party sites which I was not aware of. The content is annoying to me, and I believe my site users are annoyed too. I don't remember configuring anything in Disqus to display ads or earn money by promoting others' content. Why on earth would I want to show content from someone else's website right inside the comments section and annoy readers? Here is a screen grab of the comment section that shows ads.

    It turns out that Disqus automatically enabled a feature called "Discovery" for all publishers who upgraded the commenting system to the latest release. I remember upgrading the commenting system to the latest release a couple of months ago, but I don't remember specifically allowing Disqus to spam my comment section!! I'm extremely unhappy with the way Disqus automatically enabled spamming of comment sections in the name of so-called new features that benefit bloggers.

    How to turn off Discovery or Ads in Disqus
    I turned them off as soon as I noticed them, and it's very easy to do. Here are the steps to follow to turn off ads in comments:
    1. Log in to Disqus
    2. Switch to the Settings tab
    3. Click on the Discovery tab
    4. Choose the option Just comments
    5. Save the settings
    Though it's easy to turn off the ads, it would have been nice if Disqus had not enabled them by default. Hey guys at Disqus, you lost my trust, and from now on I'll double-check before opting in to any new features.

    Read the article

  • JDK bug migration: bugs.sun.com now backed by JIRA

    - by darcy
    The JDK bug migration from a Sun legacy system to JIRA has reached another planned milestone: the data displayed on bugs.sun.com is now backed by JIRA rather than by the legacy system. Besides maintaining the URLs to old bugs, bugs filed since the migration to JIRA are now visible too. The basic information presented about a bug is the same as before, but reformatted and using JIRA terminology:
    - Instead of a "category", a bug now has a "component / subcomponent" classification. As outlined previously, part of the migration effort was reclassifying bugs according to a new classification scheme; I'll write more about the new scheme in a subsequent blog post.
    - Instead of a list of JDK versions a bug is "reported against," there is a list of "affected versions." The names of the JDK versions have largely been regularized; code names like "tiger" and "mantis" have been replaced by the release numbers like "5.0" and "1.4.2".
    - Instead of "release fixed," there are now "Fixed Versions."
    - The legacy system had many fields that could hold a sequence of text entries, including "Description," "Workaround", and "Evaluation." JIRA instead only has two analogous fields, labeled as "Description" and a unified stream of "Comments."
    Nearly coincident with switching to JIRA, we also enabled an agent which automatically updates a JIRA issue in response to pushes into JDK-related Hg repositories. These comments include the changeset URL, the user making the push, and a time stamp. These comments are first added when a fix is pushed to a team integration repository and then added again when the fix is pushed into the master repository for a release. We're still in early days of production usage of JIRA for JDK bug tracking, but the transition to production went smoothly and over 1,000 new issues have already been filed. Many other facets of the migration are still in the works, including hosting new incidents filed at bugs.sun.com in a tailored incidents project in JIRA.

    Read the article

  • Is there any way to send a column value from outer query to inner sub query? [closed]

    - by chetan
    'Discussions' table schema:

    title    | description | desid | replyto | upvote | downvote | views
    ---------+-------------+-------+---------+--------+----------+------
    browser  | used        | a1    | none    | 1      | 1        | 12
    -        | bad topic   | b2    | a1      | 2      | 3        | 14
    sql      | database    | a3    | none    | 4      | 5        | 34
    -        | crome       | b4    | a3      | 3      | 4        | 12

    The above table holds two content types: Main Topics and Comments. The unique content identifier 'desid' tells them apart: 'desid' starts with 'a' for a Main Topic and with 'b' for a Comment. For a comment, 'replyto' is the 'desid' of the main topic the comment is associated with.

    I would like to get the list of top main topics, ordered by the sum (upvote + downvote + views + number of comments on the topic). The following query gives the top topics ordered by (upvote + downvote + views):

    select * from [DB_user1212].[dbo].[discussions]
    where desid like 'a%'
    order by (upvote + downvote + visited) desc

    For (comments + upvote + downvote + views) I tried:

    select * from [DB_user1212].[dbo].[discussions]
    where desid like 'a%'
    order by ((select count(*) from [DB_user1212].[dbo].[discussions] where replyto = desid) + upvote + downvote + visited) desc

    but it didn't work, because it's not possible to send desid from the outer query to the inner subquery. How do I solve this? Please note that I want a solution in the query language only.
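
    For what it's worth, SQL Server does let an inner subquery reference the outer row if the outer table is aliased (a correlated subquery). A sketch against the schema above (using the views column from the schema, and a select-list alias for the ordering expression to keep it readable):

    select d.*,
           ((select count(*)
             from [DB_user1212].[dbo].[discussions] c
             where c.replyto = d.desid)
            + d.upvote + d.downvote + d.views) as score
    from [DB_user1212].[dbo].[discussions] d
    where d.desid like 'a%'
    order by score desc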

    Read the article

  • Service layer coupling

    - by Justin
    I am working on writing a service layer for an order system in PHP. It's the typical scenario: you have an Order that can have multiple Line Items. So let's say a request is received to store a line item with pictures and comments. I might receive a JSON request such as:

    {
        "type": "Bike",
        "color": "Red",
        "commentIds": [3193, 3194],
        "attachmentIds": [123, 413]
    }

    My idea was to have a Service_LineItem_Bike class that knows how to take the JSON data and store an entity for a bike. My question is, the Service_LineItem class now needs to fetch comments and file attachments, and store the relationships. Service_LineItem seems like it should interact with a Service_Comment and a Service_FileUpload. Should instances of these two other services be instantiated and passed to the Service_LineItem constructor, or set by getters and setters? Dependency injection seems like the right solution; allowing a service access to a 'service fetching helper' seems wrong, and this should stay at the application level. I am using Doctrine 2 as an ORM, and I can technically write a DQL query inside Service_LineItem to fetch the comments and file uploads necessary for the association, but this seems like it would create tighter coupling, rather than leaving this up to the right service object.
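
    As a hedged illustration of the constructor-injection option (the class names are from the question; the findByIds and createFromJson method names are hypothetical):

    <?php
    class Service_LineItem_Bike
    {
        private $commentService;
        private $fileUploadService;

        // The application layer wires the dependencies in; the service
        // never goes looking for them itself.
        public function __construct(Service_Comment $commentService,
                                    Service_FileUpload $fileUploadService)
        {
            $this->commentService = $commentService;
            $this->fileUploadService = $fileUploadService;
        }

        public function createFromJson(array $data)
        {
            $comments = $this->commentService->findByIds($data['commentIds']);
            $attachments = $this->fileUploadService->findByIds($data['attachmentIds']);
            // ...build the Bike entity and persist it with its associations
        }
    }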

    Read the article

  • Generating HTML Help files based on XML documentation

    - by geekrutherford
    Since discovering the XML commenting features built into .NET years ago, I have been using them to make my code more readable and simpler for other developers to understand exactly what the code is doing. Entering /// preceding a line of code causes Visual Studio to insert "summary" tags. It also results in additional tags being generated if you are commenting a method with parameters and a return type. I already knew that Intellisense would pick up these comments and display them when coding and selecting properties, methods, etc. from a class. I also knew that you could set Visual Studio to generate an XML file containing said comments. Only recently did I begin to wonder if I could generate some kind of readable help files based on these comments I so diligently added. After searching the web I came across NDoc, an open source project which creates documentation for you based on the XML files generated by Visual Studio. Unfortunately, NDoc has become stale and is no longer supported (the last release was back in 2005). Fortunately, there is a little-known tool from Microsoft themselves called "Sandcastle Help File Builder". This nifty little tool gives you a graphical interface that allows you to specify multiple DLL and XML files from which to generate an MSDN-like HTML Help File for your own projects! You can check it out here: http://shfb.codeplex.com/ If you are curious how to set Visual Studio to generate the above reference XML documentation files, simply go to your project's property page and edit as shown below (my paths are specific, you can leave yours at the default values):
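
    As a quick illustration of the /// comments described above, here is a minimal sketch (the method itself is hypothetical):

    /// <summary>
    /// Calculates the total price of an order, including tax.
    /// </summary>
    /// <param name="subtotal">The pre-tax order amount.</param>
    /// <param name="taxRate">The tax rate, e.g. 0.08 for 8%.</param>
    /// <returns>The order total, including tax.</returns>
    public decimal CalculateTotal(decimal subtotal, decimal taxRate)
    {
        return subtotal * (1 + taxRate);
    }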

    Read the article

  • Functional Programming, JavaScript and UI - some neophyte questions

    - by jamesson
    This has been discussed in other threads, however I am hoping for some comments relevant to UI, and an explanation of some vitriol I had flung my way in a Certain IRC Channel Which shall remain nameless. In the discussion here, the comments in the accepted answer suggest that I approach the given code from a functional perspective, which was new to me at the time. Wikipedia said, among other things, that FP "avoids state and mutable data", which, according to the discussion, includes global vars. Now, being that I am already pretty far along in my project, I am not going to learn FP before I finish, but...

    1. How is it possible to avoid global vars if, for instance, I have a UI whose entire functionality changes if a mousebutton is down? I have a number of things like this.
    2. Why was there a strong negative reaction in the Certain IRC Channel to implementing FP in JS? When I brought up what seemed to me to be supportive comments by Crockford, people got even madder. Now, this being IRC, there is no rep system, but they at least gave indication of having read TGP (which I haven't gotten to yet), so I'm assuming they're not idiots.

    Many thanks in advance
    Joe
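
    For question 1, the usual FP-leaning answer in JavaScript is to scope the state in a closure rather than a global; a minimal sketch (the names are hypothetical):

    // Mouse state lives in a closure, visible only to these handlers.
    function makeTool(canvas) {
        var mouseDown = false;

        canvas.addEventListener('mousedown', function () { mouseDown = true; });
        canvas.addEventListener('mouseup', function () { mouseDown = false; });
        canvas.addEventListener('mousemove', function (e) {
            if (mouseDown) {
                // drag behavior here
            } else {
                // hover behavior here
            }
        });
    }

    Note that this doesn't eliminate mutable state, which is what strict FP asks for; it just stops the state from being global.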

    Read the article

  • Are there any good examples of open source C# projects with a large number of refactorings?

    - by Arjen Kruithof
    I'm doing research into software evolution and C#/.NET, specifically on identifying refactorings from changesets, so I'm looking for a suitable (XP-like) project that may serve as a test subject for extracting refactorings from version control history. Which open source C# projects have undergone a large (number of) refactorings?

    Criteria
    A suitable project has its change history publicly available, has compilable code at most commits, and has had at least several refactorings applied in the past. It does not have to be well-known, and the code quality or number of bugs is irrelevant. Preferably the code is in a Git or SVN repository.

    The result of this research will be a tool that automatically creates informative, concise comments for a changeset. This should improve on the common development practice of just not leaving any comments at all.

    EDIT: As Peter argues, ideally all commit comments would be teleological (goal-oriented). Practically, if a comment is made at all, it is often descriptive, merely a summary of the changes. Sadly we're a long way from automatically inferring developer intentions!

    Read the article

  • Contact form problem - I do receive messages, but no contents (blank page).

    - by nitbuntu
    I have a contact form on my site which used to work, but over the last few months it has stopped working properly. This could be due to some coding error that I can't figure out. What happens is that I receive the messages sent, but they are completely blank, with no contents at all. What could be the problem? I'm attaching first the front-end page, and then the back-end.

    Sample of contact.php, the front-end code:

    <div id="content">
      <h2 class="newitemsxl">Contact Us</h2>
      <div id="contactcontent">
        <form method="post" action="contactus.php">
          Name:<br />
          <input type="text" name="Name" /><br />
          Email:<br />
          <input type="text" name="replyemail" /><br />
          Your message:<br />
          <textarea name="comments" cols="40" rows="4"></textarea><br /><br />
          <?php require("ClassMathGuard.php"); MathGuard::insertQuestion(); ?><br />
          <input type="submit" name="submit" value="Send" /> * Refresh browser for a different question. :-)
        </form>
      </div>
    </div>

    Sample of contactus.php (back-end code):

    <?php
    /* first we need to require our MathGuard class */
    require ("ClassMathGuard.php");

    /* this condition checks the user input. Don't change the condition, just the body within the curly braces */
    if (MathGuard :: checkResult($_REQUEST['mathguard_answer'], $_REQUEST['mathguard_code'])) {
        $mailto = "[email protected]";
        $pcount = 0;
        $gcount = 0;
        $subject = "A Stylish Goods Enquiry";
        $from = "[email protected]";
        echo ("Great, you're message has been sent !");
        //insert your code that will be executed when user enters the correct answer
    } else {
        echo ("Sorry, wrong answer, please go back and try again !");
        //insert your code which tells the user he is spamming your website
    }

    while (list($key, $val) = each($HTTP_POST_VARS)) {
        $pstr = $pstr . "$key : $val \n ";
        ++$pcount;
    }
    while (list($key, $val) = each($HTTP_GET_VARS)) {
        $gstr = $gstr . "$key : $val \n ";
        ++$gcount;
    }

    if ($pcount > $gcount) {
        $comments = $pstr;
        mail($mailto, $subject, $comments, "From:" . $from);
    } else {
        $comments = $gstr;
        mail($mailto, $subject, $comments, "From:" . $from);
    }
    ?>
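
    A likely culprit, for what it's worth: $HTTP_POST_VARS and $HTTP_GET_VARS are the long-deprecated "long arrays", which are disabled on many hosts via the register_long_arrays setting and removed entirely in PHP 5.4. If the host upgraded PHP, both loops would iterate over nothing and the mail body would be empty. A sketch of the superglobal equivalent:

    // Build the mail body from the superglobals instead of the long arrays.
    $pcount = 0;
    $pstr = '';
    foreach ($_POST as $key => $val) {
        $pstr .= "$key : $val \n ";
        ++$pcount;
    }
    $gcount = 0;
    $gstr = '';
    foreach ($_GET as $key => $val) {
        $gstr .= "$key : $val \n ";
        ++$gcount;
    }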

    Read the article

  • Javascript to add cdata section on the fly?

    - by Chris G.
    I'm having trouble with special characters that exist in an xml node attribute. To combat this, I'm trying to render the attributes as child nodes and, where necessary, using cdata sections to get around the special characters. The problem is, I can't seem to get the cdata section appended to the node correctly. I'm iterating over the source xml node's attributes and creating new nodes. If the attribute.name is "Description", I want to put the attribute.text() in a cdata section and append the new node. That's where I jump the track.

    // newXMLData is the new xml document that I've created in memory
    for (var ctr = 0; ctr < this.attributes.length; ctr++) { // iterate over the attributes
        if (this.attributes[ctr].name == "Description") {
            // if the attribute name is "Description" add a CDATA section
            var thisNodeName = this.attributes[ctr].name;
            newXMLDataNode.append("<" + thisNodeName + "></" + thisNodeName + ">");
            var cdata = newXMLData.createCDATASection('test'); // here's where it breaks.
        } else {
            // It's not "Description" so just append the new node.
            newXMLDataNode.append("<" + this.attributes[ctr].name + ">" + $(this.attributes[ctr]).text() + "</" + this.attributes[ctr].name + ">");
        }
    }

    Any ideas? Is there another way to add a cdata section? Here's a sample snippet of the source...

    <row pSiteID="4" pSiteTile="Test Site Name " pSiteURL="http://www.cnn.com" ID="1"
         Description="<div>blah blah blah since June 2007.&amp;nbsp; T<br>&amp;nbsp;<br>blah blah blah blah&amp;nbsp; </div>"
         CreatedDate="2010-09-20 14:46:18" Comments="Comments example.&#10;" >

    here's what I'm trying to create...

    <Site>
      <PSITEID>4</PSITEID>
      <PSITETILE>Test Site Name</PSITETILE>
      <PSITEURL>http://www.cnn.com</PSITEURL>
      <ID>1</ID>
      <DESCRIPTION><![CDATA[<div>blah blah blah since June 2007.&amp;nbsp; T<br>&amp;nbsp;<br>blah blah blah blah&amp;nbsp; </div ]]></DESCRIPTION>
      <CREATEDDATE>2010-09-20 14:46:18</CREATEDDATE>
      <COMMENTS><![CDATA[ Comments example.&#10;]]></COMMENTS>
    </Site>
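
    For what it's worth, a sketch of the standard DOM route (assuming newXMLData really is an XML Document, e.g. one created via document.implementation.createDocument): build the element and the CDATA node from the same document, then append nodes instead of markup strings:

    var el = newXMLData.createElement(thisNodeName);
    var cdata = newXMLData.createCDATASection($(this.attributes[ctr]).text());
    el.appendChild(cdata);
    newXMLDataNode.append(el); // jQuery's append() accepts DOM nodes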

    Read the article

  • Monitoring the Application alongside SQL Server

    - by Tony Davis
    Sometimes, on Simple-Talk, it takes a while to spot strange and unexpected patterns of user activity, or small bugs. For example, one morning we spotted that an article's comment count had leapt to 1485, but that only four were displayed. With some rooting around in Google Analytics, and the endlessly annoying Community Server admin-interface, we were able to work out that a few days previously the article had been subject to a spam attack and that the comment count was for some reason including both accepted and unaccepted comments (which in turn uncovered a bug in the SQL).

    This sort of incident made us a lot keener on monitoring Simple-Talk website usage more effectively. However, the metrics we wanted are troublesome, because they are far too specific for Google Analytics to measure, and the SQL Server backend doesn't keep sufficient information to enable us to plot trends. The latter could provide, for example, the total number of comments made on, or votes cast for, articles, over all time, but not the number that occur by hour over a set time. We lacked a baseline, in other words.

    We couldn't alter the database, as it is a bought-in package. We had neither the resources nor inclination to build in dedicated application monitoring. Possibly, we could investigate a third-party tool to do the job; but then it occurred to us that we were already using a monitoring tool (SQL Monitor) to keep an eye on the database. It stored data, made graphs and sent alerts. Could we get it to monitor some aspects of the application as well?

    Of course, SQL Monitor's single purpose is to check and monitor SQL Server, over time, rather than to monitor applications that use SQL Server. However, how different is the business of gathering and plotting SQL Server Wait Stats from gathering and plotting various aspects of user activity on the site? Not a lot, it turns out. The latest version allows us to write our own custom monitoring scripts, meaning that we could now monitor any metric in the application that returns an integer. It took little time to write a simple SQL query that collects basic metrics of the total number of subscribers, votes cast, comments made, or views of articles, over time. The SQL Monitor database polls Simple-Talk every second or so in order to get the latest totals, and can then store and plot this information, or even correlate SQL Server usage to application usage.

    You can see the live data by visiting monitor.red-gate.com. Click the "Analysis" tab, and select one of the "Simple-talk:" entries in the "Show" box and an appropriate date range (e.g. last 30 days).

    It's nascent, and we're still working on it, but it's already given us more confidence that we'll quickly spot trends, bugs, or bursts of 'abnormal' activity. If there is a sudden rise in comments, we get an alert, and if it's due to a spam attack, we can moderate or ban the perpetrator very quickly.

    We've often argued that a tool should perform a single job well rather than turn into a Swiss-army knife, but ironically we've rather appreciated being able to make best use of what's there anyway for a slightly different purpose. Is this a good or common practice? What do you think?

    Cheers, Tony.

    Read the article

  • Lease Accounting Closed for Comment

    - by Theresa Hickman
    December 15, 2010 marked the last day to send public comments to FASB and IASB on lease accounting. June 2011 is the deadline for the final consideration of the Leases Exposure Draft that will be given to standard setters in order to create a new lease accounting standard.

    Landlords, lessees, retailers, the airline industry, etc. are all worried right now about the changes to lease accounting. They feel the changes will be too costly and complex without adding significant improvement to the quality and relevance of financial statements. In a nutshell, IASB and FASB want to abolish operating leases, where the lessee records the periodic payments as an expense over time. The proposed changes will mean that the accounting for leases will move from the P&L and hit both the lessee's and lessor's balance sheets. For companies that occupy a lot of property, this could significantly increase their liabilities, not to mention front-load much of the costs that they were able to spread out over time before.

    Why are IASB and FASB doing this? Their goal is to have consistent accounting for both the lessees and lessors with higher quality financial statements. Leasing is one of four major projects being undertaken by the IASB and FASB in order to complete convergence between US GAAP and IFRS.

    I spoke to our resident accounting expert Seamus Moran about this to better understand how it might impact accounting software. He reminded me that the proposed changes to both US GAAP and IFRS in respect to leases are "proposed." It is still inappropriate to account for leases the way they are being proposed, and we still need to account for them in accordance with the current regulations, which is what current accounting software programs, such as E-Business Suite Release 12.1 (and prior) and PeopleSoft Enterprise, support.

    The FASB (US GAAP) and IASB (IFRS) exposure drafts (EDs) that outline the proposal were published. The FASB edition was published on August 17th, with comments due by December 15th. The IASB edition was published on the same date, and comments were due in London on the same date. Exposure drafts are the method both the FASB and the IASB use to solicit General Acceptance, the "GA" in GAAP. Both Boards will consider the input they have received, and perhaps revise the proposal. The proposal has come in for some criticism, both from the finance houses and the users of the leased assets. There is, given the opposition to it, an excellent chance that the Leasing proposal will be modified or rewritten. We will know this in about six months, the usual time it takes for the FASB and IASB to digest the comments they receive. If they feel the proposal has General Acceptance, they will issue the final Standard at that time; if not, they will issue a revised proposal with another year of comment and drafting.

    Oracle participates in the standard setting process and is fully aware of the leasing proposal. We have designs that would reflect the proposal in hand. These designs will be finalized when the proposal is finalized. It is likely that customers will develop new financial arrangements if the proposal is finalized, and we are working with customers and partners to stay in touch with people's business responses to the proposal.

    The IASB and FASB are aware that ERP companies will have to revise their software, and that the companies filing results under IFRS or under US GAAP will have to implement such software.
The form and timing of the release of the updated software will depend on the schedule of the take up of the new standard, the complexity of the standard, and the releases supported at the time the standard becomes effective.

    Read the article

  • 2-column; multi-accordion pane

    - by Josh
    Alright, I'm having some issues and I believe it's a CSS one. Here is what I'm working on currently: http://www.notedls.com/demo/ Focusing on the News accordion menu, the idea here is to have a small image (50x50 with padding) and then a huge headline next to it. When the user clicks the headline, it expands to the article. If the user wants to read comments or make a comment themselves, they can then click View Comments to expand it even further. The issue I'm having (if it isn't clear) is the spacing between the image and the text. I could simply increase the height of the ui.accordion-acc or -left to make everything fit, but that doesn't solve the issue. If you notice, when you click on the first expansion of Headline 1, it wraps View Comments underneath the image. This is something I don't want; I've tried separating these elements into additional divs and even floating, but it's just not working. Essentially, I want blank space infinitely underneath the image, for however long the article plus comments may take up the field.
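
    One common CSS sketch for this (the class names are placeholders, since the real stylesheet isn't shown): float the image and give the text column a left margin at least as wide as the image, so that however tall the article plus comments grow, nothing wraps back under the image:

    /* Placeholder selectors: adapt to the real accordion markup */
    .accordion-item img.thumb { float: left; }
    .accordion-item .text-col { margin-left: 60px; /* 50px image + padding */ }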

    Read the article

  • JavaScript - Building JSON object

    - by user208662
    Hello, I'm trying to understand how to build a JSON object in JavaScript. This JSON object will get passed to a jQuery ajax call. Currently, I'm hard-coding my JSON and making my jQuery call as shown here:

    $.ajax({
        url: "/services/myService.svc/PostComment",
        type: "POST",
        contentType: "application/json; charset=utf-8",
        data: '{"comments":"test","priority":"1"}',
        dataType: "json",
        success: function (res) { alert("Thank you!"); },
        error: function (req, msg, obj) { alert("There was an error"); }
    });

    This approach works. But I need to dynamically build my JSON and pass it on to the jQuery call. However, I cannot figure out how to dynamically build the JSON object. Currently, I'm trying the following without any luck:

    var comments = $("#commentText").val();
    var priority = $("#priority").val();
    var json = { "comments": comments, "priority": priority };

    $.ajax({
        url: "/services/myService.svc/PostComment",
        type: "POST",
        contentType: "application/json; charset=utf-8",
        data: json,
        dataType: "json",
        success: function (res) { alert("Thank you!"); },
        error: function (req, msg, obj) { alert("There was an error"); }
    });

    Can someone please tell me what I am doing wrong? I noticed that with the second version, my service is not even getting reached. Thank you
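
    For what it's worth, the usual explanation: when data is a plain object, jQuery form-encodes it (comments=...&priority=...), which a JSON endpoint can't parse. Serializing the object first makes it match the working hard-coded string; a sketch using the native JSON object (built into modern browsers, or available via json2.js in older ones):

    var json = JSON.stringify({ comments: comments, priority: priority });

    $.ajax({
        url: "/services/myService.svc/PostComment",
        type: "POST",
        contentType: "application/json; charset=utf-8",
        data: json, // now a string, just like the hard-coded version
        dataType: "json",
        success: function (res) { alert("Thank you!"); },
        error: function (req, msg, obj) { alert("There was an error"); }
    });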

    Read the article

  • Error number 13 - Remote access svn with dav_svn failing

    - by C. Ross
    I'm getting the following error on my svn repository:

    <D:error>
      <C:error/>
      <m:human-readable errcode="13">
        Could not open the requested SVN filesystem
      </m:human-readable>
    </D:error>

    I've followed the instructions from the How-To Geek and the Ubuntu Community Page, but with no success. I've even given the repository 777 permissions.

    <Location /svn/myProject>
      # Uncomment this to enable the repository
      DAV svn
      # Set this to the path to your repository
      SVNPath /svn/myProject
      # Comments
      # Comments
      # Comments
      AuthType Basic
      AuthName "My Subversion Repository"
      AuthUserFile /etc/apache2/dav_svn.passwd
      # More Comments
    </Location>

    The permissions follow:

    drwxrwsrwx 6 www-data webdev 4096 2010-02-11 22:02 /svn/myProject

    And svnadmin validates the directory:

    $ svnadmin verify /svn/myProject/
    * Verified revision 0.

    and I'm accessing the repository at http://ipAddress/svn/myProject

    Edit: The apache error log says:

    [Fri Feb 12 13:55:59 2010] [error] [client <ip>] (20014)Internal error: Can't open file '/svn/myProject/format': Permission denied
    [Fri Feb 12 13:55:59 2010] [error] [client <ip>] Could not fetch resource information.  [500, #0]
    [Fri Feb 12 13:55:59 2010] [error] [client <ip>] Could not open the requested SVN filesystem  [500, #13]
    [Fri Feb 12 13:55:59 2010] [error] [client <ip>] Could not open the requested SVN filesystem  [500, #13]

    Even though I confirmed that this file is ugo readable and writable. What am I doing wrong?
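
    Some hedged diagnostics, since the root cause isn't visible from the output above: the Apache user needs execute permission on every path component leading to the repository, and read/write permission on the files inside it, not just on the top-level directory. Something along these lines is a common fix:

    # Every component of the path needs +x for the Apache user:
    ls -ld / /svn /svn/myProject
    # The files inside the repository (db/, format, ...) matter, not just the
    # top-level directory; -R applies the change recursively:
    sudo chown -R www-data:webdev /svn/myProject
    sudo chmod -R ug+rw /svn/myProject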

    Read the article

  • Apache POI Comment Excel

    - by Marquinio
    I need to add a comment to an HSSF Cell in Excel. Everything works fine the very first time, but if I open the same file and run the code again, it corrupts the file. I've also noticed that I need to create a Drawing object on a Sheet only once:

    _sheet.createDrawingPatriarch();

    If the line above gets executed more than once, comments will not work. So has anyone tried adding comments to cells, closing the file, opening the file again, and trying to add more comments to different cells? The below code works, but if I open the file again then comments are not added, plus the file gets corrupted!!! Is there a way to get the existing Drawing object from a Sheet? Any ideas appreciated. Thanks!!

    _drawing = (HSSFPatriarch) _sheet.createDrawingPatriarch();
    Row row = _sheet.getRow(rowIndex_);
    Cell cell = row.getCell(0);
    CreationHelper factory = _workbook.getCreationHelper();
    HSSFAnchor anchor = new HSSFClientAnchor(0, 0, 0, 0, (short) 4, 2, (short) 6, 5);
    org.apache.poi.ss.usermodel.Comment comment = _drawing.createComment(anchor);
    RichTextString str = factory.createRichTextString("Hello, World " + rowIndex_);
    comment.setString(str);
    cell.setCellComment(comment);
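
    Regarding "Is there a way to get the existing Drawing object from a Sheet?": HSSFSheet does expose one, for what it's worth. A sketch of the guard (behavior has varied across POI versions, so treat this as an assumption to verify against your build):

    // Reuse the existing patriarch if the sheet already has one; creating a
    // second one is what clobbers the previously saved comments.
    HSSFPatriarch drawing = _sheet.getDrawingPatriarch();
    if (drawing == null) {
        drawing = _sheet.createDrawingPatriarch();
    }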

    Read the article

  • JQuery & Wordpress - Hide multiple divs inside unique ID?

    - by steelfrog
    I'm trying to write a short jQuery script for Wordpress comments that would allow users to toggle specific comments on and off. This is my first script, and I'm having a tough time. Once overly simplified, the comments are formatted like so:

    <li id="li-comment-<?php comment_ID() ?>">
        <div class="gravatar"><img src="#" /></div>
        <div class="comment_poster">Username</div>
        <div class="comment_options">Option buttons</div>
        <div class="comment_content">Comment</div>
    </li>

    In the "comment_options" div is a series of buttons that control the individual comments (reply, quote, edit, close, etc.). The close button is what I'm trying to write this script for. I need it to toggle the "gravatar" and "comment_content" divs, but leave the rest in place so that it still displays the user ID and controls. However, I can't seem to figure out how to contain the action. This is what I have so far:

    $(document).ready(function() {
        $("div.trigger").click(function() {
            $("div.gravatar").slideToggle();
            $("div.comment_content").slideToggle();
        });
    });

    The problem with this is that it toggles all the .gravatar and .comment_content divs on the page, not just the ones found in the same list item. If you're curious, this is the page I'm working on. Any idea how I could resolve this? Again, this is my first time with jQuery, so I'm a little fuzzy on how it all works. Thanks!
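
    A sketch of the usual scoping fix (assuming the trigger sits inside the same <li> as in the markup above): walk up from the clicked trigger to its own comment and toggle only within it:

    $(document).ready(function() {
        $("div.trigger").click(function() {
            var comment = $(this).closest("li"); // just this comment's <li>
            comment.find("div.gravatar, div.comment_content").slideToggle();
        });
    });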

    Read the article

  • How will Arel affect Rails' includes() capabilities?

    - by Tim Snowhite
    I've looked over the Arel sources, and some of the ActiveRecord sources for Rails 3.0, but I can't seem to glean a good answer for myself as to whether Arel will be changing our ability to use includes(), when constructing queries, for the better. There are instances when one might want to modify the conditions on an ActiveRecord :include query in 2.3.5 and before, for the association records which would be returned. But as far as I know, this is not programmatically tenable for all :include queries. (I know some AR-find-includes make t#{n}.c#{m} renames for all the attributes, and one could conceivably add conditions to these queries to limit the joined sets' results; but others do n_joins + 1 number of queries over the id sets iteratively, and I'm not sure how one might hack AR to edit these iterated queries.) Will Arel allow us to construct ActiveRecord queries which specify the resulting associated model objects when using includes()?

    Ex: User has_many :posts (and Post has_many :comments)

    User.all(:include => :posts)
    # say I wanted the post objects to have their comment counts loaded
    # without adding a comment_count column to `posts`.
    # At the post level, one could do so by:
    posts_with_counts = Post.all(
      :select => 'posts.*, count(comments.id) as comment_count',
      :joins => 'left outer join comments on comments.post_id = posts.id',
      :group_by => 'posts.id') # i believe
    # But it seems impossible to do so while linking these post objects to each
    # user as well, without running User.all() and then zippering the objects
    # into some other collection (ugly),
    # OR running posts.group_by(&:user) (even uglier, with the n user queries)

    Read the article

  • Using NHibernate to select entities based on activity of children entities

    - by mannish
    I'm having a case of the Mondays... I need to select blog posts based on recent activity in the post's comments collection (a Post has a List<Comment> property and, likewise, a Comment has a Post property, establishing the relationship). I don't want to show the same post twice, and I only need a subset of the entities, not all of the posts. My first thought was to grab all posts that have comments, then order those based on the most recent comment. For this to work, I'm pretty sure I'd have to limit the comments for each Post to the first/newest Comment. Last, I'd simply take the top 5 (or whatever max results number I want to pass into the method). My second thought was to grab all of the comments, ordered by CreatedOn, and filter so there's only one Comment per Post, then return those top (whatever) posts. This seems the same as the first option, just going through the back door. I've got an ugly two-query option working, with some LINQ on the side for filtering, but I know there's a more elegant way to do it using the NHibernate API. Hoping to see some good ideas here.
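
    For what it's worth, one HQL sketch of the second idea, assuming the property names mentioned in the question (Comment.Post, Comment.CreatedOn) and an integer id: group the comments by post id, order each group by its newest comment, and let SetMaxResults do the limiting; the posts can then be loaded by id, preserving that order:

    // A sketch only; names and the int id type are assumptions.
    var topPostIds = session.CreateQuery(
            @"select c.Post.id from Comment c
              group by c.Post.id
              order by max(c.CreatedOn) desc")
        .SetMaxResults(5)
        .List<int>();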

    Read the article

  • entity framework - getting null exception using foreign key

    - by Nick
    Having some trouble with what should be a very simple scenario. For example purposes, I have two tables: Users and Comments. There is a one-to-many relationship set up for this; there is a foreign key from Comments.CommentorID to Users.UserID. When I do the LINQ query and try to bind to a DataList, I get a null exception. Here is the code:

    FKMModel.FKMEntities ctx = new FKMModel.FKMEntities();

    IQueryable<Comment> CommentQuery = from x in ctx.Comment
                                       where x.SiteID == 101
                                       select x;

    List<Comment> Comments = CommentQuery.ToList();
    dl_MajorComments.DataSource = Comments;
    dl_MajorComments.DataBind();

    In the ASPX page, I have the following as an ItemTemplate (I simplified it and took out the styling, etc., for purposes of posting here, since it's irrelevant):

    <div>
        <%# ((FKMModel.Comment)Container.DataItem).FKMUser.Username %>
        <%# ((FKMModel.Comment)Container.DataItem).CommentDate.Value.ToShortDateString() %>
        <%# ((FKMModel.Comment)Container.DataItem).CommentTime %>
    </div>

    The exception occurs on the first binding (FKMUser.Username). Since the foreign key is set up, I should have no problem accessing any properties from the Users table. Intellisense set up the FKMUser navigation property and it knows the properties of that foreign table. What is going on here??? Thanks, Nick
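
    For what it's worth, the classic cause here is that Entity Framework (v1, or v4 with lazy loading disabled) does not load related entities unless asked, so FKMUser is null at bind time. A sketch using eager loading (the navigation property name is taken from the markup above):

    IQueryable<Comment> CommentQuery = from x in ctx.Comment.Include("FKMUser")
                                       where x.SiteID == 101
                                       select x;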

    Read the article

  • How to query JDO persistent objects in unowned relationship model?

    - by Paul B
    Hello, I'm trying to migrate my app from PHP and an RDBMS (MySQL) to Google App Engine, and I'm having a hard time figuring out the data model and relationships in JDO. In my current app I use a lot of JOIN queries like:

    SELECT users.name, comments.comment
    FROM users, comments
    WHERE users.user_id = comments.user_id AND users.email = '[email protected]'

    As I understand it, JOIN queries are not supported in this way, so the only(?) way to store data is using unowned relationships and "foreign" keys. There is documentation regarding that, but no useful examples. So far I have something like this:

    @PersistenceCapable
    public class Users {
        @PrimaryKey
        @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
        private Key key;

        @Persistent
        private String name;

        @Persistent
        private String email;

        @Persistent
        private Set<Key> commentKeys;

        // Accessors...
    }

    @PersistenceCapable
    public class Comments {
        @PrimaryKey
        @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
        private Key key;

        @Persistent
        private String comment;

        @Persistent
        private Date commentDate;

        @Persistent
        private Key userKey;

        // Accessors...
    }

    So, how do I get a list with the commenter's name, comment and date in one query? I see how I could probably get away with 3 queries, but that seems wrong and would create unnecessary overhead. Please, help me out with some code examples. -- Paul.
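
    A sketch of the usual two-step pattern on App Engine, since the datastore has no joins (the email literal and the getCommentKeys() accessor are placeholders for whatever the real code uses):

    // Step 1: find the user by email.
    Query userQuery = pm.newQuery(Users.class, "email == emailParam");
    userQuery.declareParameters("String emailParam");
    List<Users> users = (List<Users>) userQuery.execute("someone@example.com");
    Users user = users.get(0);

    // Step 2: batch-get that user's comments by key. On App Engine, filtering
    // a key field against a list parameter is executed as a single batch get.
    Query commentQuery = pm.newQuery("select from " + Comments.class.getName()
            + " where key == :keys");
    List<Comments> comments = (List<Comments>) commentQuery.execute(user.getCommentKeys());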

    Read the article

  • Pulling specific entries from RSS feed [PHP]

    - by n0s
    So, I have an RSS feed with variations of each item. What I want to do is get just the entries that contain a specific piece of text. For example:

    <item>
      <title>RADIO SHOW - CF64K - 05-20-10 + WRAPUP </title>
      <link>http://linktoradioshow.com</link>
      <comments>Radio show from 05-20-10</comments>
      <pubDate>Thu, 20 May 2010 19:12:12 +0200</pubDate>
      <category domain="http://linktoradioshow.com/browse/199">Audio / Other</category>
      <dc:creator>n0s</dc:creator>
      <guid>http://otherlinktoradioshow.com/</guid>
      <enclosure url="http://linktoradioshow.com/" length="13005" />
    </item>
    <item>
      <title>RADIO SHOW - CF128K - 05-20-10 + WRAPUP </title>
      <link>http://linktoradioshow.com</link>
      <comments>Radio show from 05-20-10</comments>
      <pubDate>Thu, 20 May 2010 19:12:12 +0200</pubDate>
      <category domain="http://linktoradioshow.com/browse/199">Audio / Other</category>
      <dc:creator>n0s</dc:creator>
      <guid>http://otherlinktoradioshow.com/</guid>
      <enclosure url="http://linktoradioshow.com/" length="13005" />
    </item>

    I only want to display the results that contain the string CF64K. While it's probably really simple regex, I can't seem to wrap my head around getting it right. I always seem to only be able to display the string 'CF64K' itself, and not the stuff that surrounds it. Thanks in advance.
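
    For what it's worth, a sketch that sidesteps the regex entirely by walking the feed with SimpleXML and keeping only the items whose title contains CF64K (the feed URL is a placeholder):

    $feed = simplexml_load_file('http://example.com/feed.rss'); // placeholder URL
    foreach ($feed->channel->item as $item) {
        if (strpos((string) $item->title, 'CF64K') !== false) {
            echo $item->title . ' - ' . $item->link . "\n";
        }
    }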

    Read the article
