Search Results

Search found 29638 results on 1186 pages for 'phone number'.


  • SQL Server 2000 incorrect message: duplicate key with unique index

    - by iar
    Database server: SQL Server 2000 - 8.00.760 - SP3 - Standard Edition. I have a table with 5 columns into which I want to insert a large number (300,000) of records. There is a unique index on the combination of two columns. If I insert the records with this unique index in place, SQL Server gives this error message:

        Msg 2601, Level 14, State 3, Line 1
        Cannot insert duplicate key row in object 'TESTTABLE' with unique index 'test'.
        The statement has been terminated.
        (0 row(s) affected)

    However, if I drop the unique index, insert the records, and then create the unique index afterwards, I do not get any error.
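
    For illustration, one quick way to check whether the source data itself contains duplicate key pairs is sketched below in Python; the file name and column positions are assumptions, not details from the question:

        import csv
        from collections import Counter

        # Count occurrences of each (col1, col2) key pair in the load file.
        # "load_data.csv" and the column positions are hypothetical.
        with open("load_data.csv", newline="") as f:
            pairs = Counter((row[0], row[1]) for row in csv.reader(f))

        # Any pair that appears more than once would trip the unique index.
        for pair, n in pairs.items():
            if n > 1:
                print(pair, n)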


  • Creating an excel macro to sum lines with duplicate values

    - by john
    I need a macro that looks at the list of data below, reports the number of times each name appears, and sums the values for each name. I know a pivot table or a series of formulas could work, but I'm doing this for a coworker and it has to be a 'one click here' kind of deal. The data is as follows:

        A       B
        Smith   200.00
        Dean    100.00
        Smith   100.00
        Smith    50.00
        Wilson   25.00
        Dean     25.00
        Barry   100.00

    The end result would look like this:

        Smith   3   350.00
        Dean    2   125.00
        Wilson  1    25.00
        Barry   1   100.00

    Thanks in advance for any help you can offer!
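
    For illustration, the count-and-sum logic the macro needs is sketched below in Python; assuming a VBA version would mirror the same dictionary idea (e.g. with a Scripting.Dictionary) is mine, not the question's:

        from collections import defaultdict

        rows = [("Smith", 200.00), ("Dean", 100.00), ("Smith", 100.00),
                ("Smith", 50.00), ("Wilson", 25.00), ("Dean", 25.00),
                ("Barry", 100.00)]

        totals = defaultdict(lambda: [0, 0.0])  # name -> [count, sum]
        for name, amount in rows:
            totals[name][0] += 1
            totals[name][1] += amount

        for name, (count, total) in totals.items():
            print(name, count, total)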


  • exception with Linq to SQL using sdf file

    - by Ben
    Hi, I've set up a project with an SDF local database file and am trying to access it using a LINQ to SQL (".dbml") file. I have used the connection string provided by the SDF file and can instantiate the object without a problem:

        thisDataContext = new MyDataContext(GetConnectionString());

    However, whenever I try to access any information from it, e.g.

        var collection = (from MyObject p in thisDataContext.MyTable select p);

    I get the error: "The table name is not valid. [ Token line number (if known) = 2, Token line offset (if known) = 14, Table name = Person ]" I am using Visual Studio 2008 SP1, .NET 3.5 and SQL 2008 CE. I gather something similar happened with SQL 2005 CE and a hotfix was released, but I would have thought that fix would have been rolled into this version before release. Does anyone know the fix for this? Thanks


  • Microsoft T-SQL Counting Consecutive Records

    - by JeffW
    Problem: starting from the most recent day per person, count the number of consecutive days that each person has received 0 points for being good. Sample data to work from:

        Date        Name  Points
        2010-05-07  Jane  0
        2010-05-06  Jane  1
        2010-05-07  John  0
        2010-05-06  John  0
        2010-05-05  John  0
        2010-05-04  John  0
        2010-05-03  John  1
        2010-05-02  John  1
        2010-05-01  John  0

    Expected answer: Jane was bad on 5/7 but good the day before that, so Jane was only bad 1 day in a row most recently. John was bad on 5/7, again on 5/6, 5/5 and 5/4, and good on 5/3, so John was bad the last 4 days in a row. Code to create the sample data:

        IF OBJECT_ID('tempdb..#z') IS NOT NULL
        BEGIN
            DROP TABLE #z
        END

        select getdate() as Date, 'John' as Name, 0 as Points into #z
        insert into #z values (getdate()-1, 'John', 0)
        insert into #z values (getdate()-2, 'John', 0)
        insert into #z values (getdate()-3, 'John', 0)
        insert into #z values (getdate()-4, 'John', 1)
        insert into #z values (getdate(), 'Jane', 0)
        insert into #z values (getdate()-1, 'Jane', 1)

        select * from #z order by name, date desc
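
    For illustration, the counting logic (not the T-SQL itself) is sketched below in Python: order each person's rows newest first and count zeros until the first non-zero value:

        from itertools import takewhile

        # (date, name, points) rows from the sample data
        rows = [
            ("2010-05-07", "Jane", 0), ("2010-05-06", "Jane", 1),
            ("2010-05-07", "John", 0), ("2010-05-06", "John", 0),
            ("2010-05-05", "John", 0), ("2010-05-04", "John", 0),
            ("2010-05-03", "John", 1), ("2010-05-02", "John", 1),
            ("2010-05-01", "John", 0),
        ]

        names = {name for _, name, _ in rows}
        for name in sorted(names):
            # ISO dates sort correctly as strings; newest first
            history = sorted((r for r in rows if r[1] == name), reverse=True)
            streak = sum(1 for _ in takewhile(lambda r: r[2] == 0, history))
            print(name, streak)  # Jane 1, John 4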


  • Apache ErrorDocument not working for PHP 500 error

    - by Jason
    I have a number of ErrorDocument directives set up in my .htaccess file for errors such as 404, 401 and 403, which all redirect to my error page, but the ErrorDocument set for a 500 error is never displayed when PHP reports a 500. The 500 code is sent to the browser and the output is blank. Is there something special I need to do to enable 500 error documents for use with PHP? My directives look like this:

        ErrorDocument 401 /errorpage.php?error=401
        ErrorDocument 403 /errorpage.php?error=403
        ErrorDocument 404 /errorpage.php?error=404
        ErrorDocument 500 /errorpage.php?error=500

    I've looked through php.ini and can't see anything that would obviously override the Apache settings, and there are no ErrorDocument directives in my httpd.conf either. Anywhere else I should be looking? Thanks in advance.


  • Maven SVN checkout example

    - by c0mrade
    Concerning my previous question - http://stackoverflow.com/questions/2622191/is-there-a-svn-maven - I went through the usage page - http://maven.apache.org/scm/maven-scm-plugin/usage.html - but didn't find quite what I was looking for. Here is what I would like to do: check out a list of projects which I specify, for example http://mysvnurl/myprojectname, where I could parameterize the project name. Then, say there are 3, 4 or n sub-projects (modules), whose names I can also specify, that I want to check out, with a choice of branch (trunk) and revision. To simplify: check out the trunk of certain sub-projects (modules) of myprojectname, e.g.:

        http://mysvnurl/myprojectname/project1/trunk/* // get everything
        http://mysvnurl/myprojectname/project2/trunk/* // get everything

    How would I do that with Maven? Can someone with more experience explain how to do this, or point me to somewhere I can read how to? My googling has turned up nothing specific to my needs. Thank you


  • Boost.Program_Options not working with short options

    - by inajamaica
    I have the following options_description:

        po::options_description config("Configuration File or Command Line");
        config.add_options()
            ("run-time,t", po::value(&runTime_)->default_value(1440.0), "set max simulation duration")
            ("starting-iteration,i", po::value(&startingIteration_)->default_value(1), "set starting simulation iteration")
            ("repetitions,r", po::value(&repetitions_)->default_value(100), "set number of iterations")
            ...
        ;

    As you can see, each of the three options shown has both a long and a short name. The long versions all work. However, none of the short ones do, and each time I try a -t 12345.0 or a -i 12345, etc., I get the following from Program_Options:

        std::logic_error: in option 'starting-iteration': invalid option value

    I'm using Boost 1.42 on Win32. Any thoughts on what might be going on here? Thanks!


  • jQuery Autocomplete: how to refresh list?

    - by cinematic
    I have a form with a variable number of autocomplete fields that use the same select list. These fields are added and removed as needed. Whenever a parameter on the page is changed, I (try to) update the list for all the fields by first calling unbind(), then autocomplete() with a new parameter added to the URL:

        $('input.foo').unbind().autocomplete(url + '?new_param=bar');

    The problem is that unbind() does not seem to be unbinding. When I type in the input field, it fires off the entire history of autocomplete events. I also tried flushCache, to no avail. How do I clear out the old events? Thanks.


  • Default values for model fields in a Ruby on Rails form

    - by Callum Rogers
    I have a model which has the fields username, data, tags, date and votes. I have a form using form_for that creates a new item and puts it into the database. However, as you can guess, I want the votes field to equal 0 and the date field to equal the current date when the record is placed into the database. How and where would I set these values on the item? I can get it to work with hidden fields in the form, but this comes with obvious issues (someone could set the votes field to a massive number).


  • Pain Comes Instantly

    - by user701213
    When I look back at recent blog entries – many of which are not all that current (more on where my available writing time is going later) – I am struck by how many of them focus on public policy or legislative issues instead of, say, the latest nefarious cyberattack or exploit (or everyone’s favorite new pastime: coining terms for the Coming Cyberpocalypse: “digital Pearl Harbor” is so 1941). Speaking of which, I personally hope evil hackers from Malefactoria will someday hack into my bathroom scale – which in a future time will be connected to the Internet because, gosh, wouldn’t it be great to have absolutely everything in your life Internet-enabled? – and recalibrate it so I’m 10 pounds thinner. The horror.

    In part, my focus on public policy is due to an admitted limitation of my skill set. I enjoy reading technical articles about exploits and cybersecurity trends, but writing a blog entry on those topics would take more research than I have time for and, quite honestly, doesn’t play to my strengths. The first rule of writing is “write what you know.”

    The bigger contributing factor to my recent paucity of blog entries is that more and more of my waking hours are spent engaging in “thrust and parry” activity involving emerging regulations of some sort or other. I’ve opined in earlier blogs about what constitutes good and reasonable public policy so nobody can accuse me of being reflexively anti-regulation. That said, you have so many cycles in the day, and most of us would rather spend it slaying actual dragons than participating in focus groups on whether dragons are really a problem, whether lassoing them (with organic, sustainable and recyclable lassos) is preferable to slaying them – after all, dragons are people, too - and whether we need lasso compliance auditors to make sure lassos are being used correctly and humanely. (A point that seems to evade many rule makers: slaying dragons actually accomplishes something, whereas talking about “approved dragon slaying procedures and requirements” wastes the time of those who are competent to dispatch actual dragons and who were doing so very well without the input of “dragon-slaying theorists.”)

    Unfortunately for so many of us who would just get on with doing our day jobs, cybersecurity is rapidly devolving into the “focus groups on dragon dispatching” realm, which actual dragon slayers have little choice but to participate in. The general trend in cybersecurity is that powers-that-be – which encompasses groups other than just legislators – are often increasingly concerned and therefore feel they need to Do Something About Cybersecurity. Many seem to believe that if only we had the right amount of regulation and oversight, there would be no data breaches: a breach simply must mean Someone Is At Fault and Needs Supervision. (Leaving aside the fact that we have lots of home invasions despite a) guard dogs b) liberal carry permits c) alarm systems d) etc.) Also note that many well-managed and security-aware organizations, like the US Department of Defense, still get hacked.

    More specifically, many powers-that-be feel they must direct industry in a multiplicity of ways, up to and including how we actually build and deploy information technology systems. The more prescriptive the requirement, the more regulators or overseers a) can be seen to be doing something b) feel as if they are doing something regardless of whether they are actually doing something useful or cost effective.
    Note: an unfortunate concomitant of Doing Something is that often the cure is worse than the ailment. That is, doing what overseers want creates unfortunate byproducts that they either didn’t foresee or worse, don’t care about. After all, the logic goes, we Did Something.

    Prescriptive practice in the IT industry is problematic for a number of reasons. For a start, prescriptive guidance is really only appropriate if:

    • It is cost effective
    • It is “current” (meaning, the guidance doesn’t require the use of the technical equivalent of buggy whips long after horse-drawn transportation has become passé)*
    • It is practical (that is, pragmatic, proven and effective in the real world, not theoretical and unproven)
    • It solves the right problem

    With the above in mind, heading up the list of “you must be joking” regulations are recent disturbing developments in the Payment Card Industry (PCI) world. I’d like to give PCI kahunas the benefit of the doubt about their intentions, except that efforts by Oracle among others to make them aware of “unfortunate side effects of your requirements” – which is as tactful as I can be for reasons that I believe will become obvious below – have gone, to date, unanswered and, more importantly, unchanged.

    A little background on PCI before I get too wound up. In 2008, the Payment Card Industry (PCI) Security Standards Council (SSC) introduced the Payment Application Data Security Standard (PA-DSS). That standard requires vendors of payment applications to ensure that their products implement specific requirements and undergo security assessment procedures. In order to have an application listed as a Validated Payment Application (VPA) and available for use by merchants, software vendors are required to execute the PCI Payment Application Vendor Release Agreement (VRA). (Are you still with me through all the acronyms?)

    Beginning in August 2010, the VRA imposed new obligations on vendors that are extraordinary and extraordinarily bad, short-sighted and unworkable. Specifically, PCI requires vendors to disclose (dare we say “tell all?”) to PCI any known security vulnerabilities and associated security breaches involving VPAs. ASAP. Think about the impact of that. PCI is asking a vendor to disclose to them:

    • Specific details of security vulnerabilities
    • Including exploit information or technical details of the vulnerability
    • Whether or not there is any mitigation available (as in a patch)

    PCI, in turn, has the right to blab about any and all of the above – specifically, to distribute all the gory details of what is disclosed – to the PCI SSC, qualified security assessors (QSAs), and any affiliate or agent or adviser of those entities, who are in turn permitted to share it with their respective affiliates, agents, employees, contractors, merchants, processors, service providers and other business partners. This assorted crew can’t be more than, oh, hundreds of thousands of entities. Does anybody believe that several hundred thousand people can keep a secret? Or that several hundred thousand people are all equally trustworthy? Or that not one of the people getting all that information would blab vulnerability details to a bad guy, even by accident? Or be a bad guy who uses the information to break into systems? (Wait, was that the Easter Bunny that just hopped by? Bringing world peace, no doubt.) Sarcasm aside, common sense tells us that telling lots of people a secret is guaranteed to “unsecret” the secret.
    Notably, being provided details of a vulnerability (without a patch) is of little or no use to companies running the affected application. Few users have the technological sophistication to create a workaround, and even if they do, most workarounds break some other functionality in the application or surrounding environment. Also, given the differences among corporate implementations of any application, it is highly unlikely that a single workaround is going to work for all corporate users. So until a patch is developed by the vendor, users remain at risk of exploit: even more so if the details of the vulnerability have been widely shared. Sharing that information widely before a patch is available therefore does not help users, and instead helps only those wanting to exploit known security bugs. There’s a shocker for you.

    Furthermore, we already know that insider information about security vulnerabilities inevitably leaks, which is why most vendors closely hold such information and limit dissemination until a patch is available (and frequently limit dissemination of technical details even with the release of a patch). That’s the industry norm, not that PCI seems to realize or acknowledge that. Why would anybody release a bunch of highly technical exploit information to a cast of thousands, whose only “vetting” is that they are members of a PCI consortium?

    Oracle has had personal experience with this problem, which is one reason why information on security vulnerabilities at Oracle is “need to know” (we use our own row level access control to limit access to security bugs in our bug database, and thus less than 1% of development has access to this information), and we don’t provide some customers with more information than others or with vulnerability information and/or patches earlier than others. Failure to remember “insider information always leaks” creates problems in the general case, and has created problems for us specifically.

    A number of years ago, one of the UK intelligence agencies had information about a non-public security vulnerability in an Oracle product that they circulated among other UK and Commonwealth defense and intelligence entities. Nobody, it should be pointed out, bothered to report the problem to Oracle, even though only Oracle could produce a patch. The vulnerability was finally reported to Oracle by (drum roll) a US-based commercial company, to whom the information had leaked. (Note: every time I tell this story, the MI-whatever agency that created the problem gets a bit shirty with us. I know they meant well and have improved their vulnerability handling/sharing processes but, dudes, next time you find an Oracle vulnerability, try reporting it to us first before blabbing to lots of people who can’t actually fix the problem. Thank you!)

    Getting back to PCI: clearly, these new disclosure obligations increase the risk of exploitation of a vulnerability in a VPA and thus, of misappropriation of payment card data and customer information that a VPA processes, stores or transmits. It stands to reason that the VRA’s current requirement for the widespread distribution of security vulnerability exploit details – at any time, but particularly before a vendor can issue a patch or a workaround – is very poor public policy. It effectively publicizes information of great value to potential attackers while not providing compensating benefits – actually, any benefits – to payment card merchants or consumers. In fact, it magnifies the risk to payment card merchants and consumers.
    The risk is most prominent in the time before a patch has been released, since customers often have little option but to continue using an application or system despite the risks. However, the risk is not limited to the time before a patch is issued: customers often need days, or weeks, to apply patches to systems, based upon the complexity of the issue and dependence on surrounding programs. Rather than decreasing the available window of exploit, this requirement increases the available window of exploit, both as to the time available to exploit a vulnerability and the ease with which it can be exploited. Also, why would hackers focus on finding new vulnerabilities to exploit if they can get “EZHack” handed to them in such a manner: a) a vulnerability b) in a payment application c) with exploit code: the “Hacking Trifecta!” It’s fair to say that this is probably the exact opposite of what PCI – or any of us – would want.

    Established industry practice concerning vulnerability handling avoids the risks created by the VRA’s vulnerability disclosure requirements. Specifically, the norm is not to release information about a security bug until the associated patch (or a pretty darn good workaround) has been issued. Once a patch is available, the notice to the user community is a high-level communication discussing the product at issue, the level of risk associated with the vulnerability, and how to apply the patch. The notices do not include either the specific customers affected by the vulnerability or forensic reports with maps of the exploit (both of which are required by the current VRA). In this way, customers have the tools they need to prioritize patching and to help prevent an attack, and the information released does not increase the risk of exploit. Furthermore, many vendors already use industry standards for vulnerability description: Common Vulnerabilities and Exposures (CVE) and the Common Vulnerability Scoring System (CVSS). CVE helps ensure that customers know which particular issues a patch addresses and CVSS helps customers determine how severe a vulnerability is on a relative scale. Industry already provides the tools customers need to know what the patch contains and how bad the problem is that the patch remediates.

    So, what’s a poor vendor to do? Oracle is reaching out to other vendors subject to PCI and attempting to enlist them in a broad effort to engage PCI in rethinking (that is, eradicating) these requirements. I would therefore urge all who care about this issue, but especially those in the vendor community whose applications are subject to PCI and who may not have known they were being asked to tell all to PCI and put their customers at risk, to do one of the following:

    • Contact PCI with your concerns
    • Contact Oracle (we are looking for vendors to sign our statement of concern)
    • And make sure you tell your customers that you have to rat them out to PCI if there is a breach involving the payment application

    I like to be charitable and say “PCI meant well,” but in as important a public policy issue as what you disclose about vulnerabilities, to whom and when, meaning well isn’t enough. We need to do well. PCI, as regards this particular issue, has not done well, and has compounded the error by thus far being nonresponsive to those of us who have labored mightily to try to explain why they might want to rethink telling the entire planet about security problems with no solutions.
    By Way of Explanation…

    Unrelated to PCI whatsoever, and the explanation for why I have not been blogging much recently: I have been working on Other Writing Venues with my sister Diane (who has also worked in the tech sector, inflicting upgrades on unsuspecting and largely ungrateful end users). I am pleased to note that we have recently (self-)published the first in the Miss Information Technology Murder Mystery series, Outsourcing Murder. The genre might best be described as “chick lit meets geek scene.” Our sisterly nom de plume is Maddi Davidson and (shameless plug follows): you can order the paper version of the book on Amazon, or the Kindle or Nook versions on www.amazon.com or www.bn.com, respectively.

    From our book jacket: Emma Jones, a 20-something IT consultant, is working on an outsourcing project at Tahiti Tacos, a restaurant chain offering Polynexican cuisine: refried poi, anyone? Emma despises her boss Padmanabh, a brilliant but arrogant partner in GD Consulting. When Emma discovers His-Royal-Padness’s body (verdict: death by cricket bat), she becomes a suspect. With her overprotective family and her best friend Stacey providing endless support and advice, Emma stumbles her way through an investigation of Padmanabh’s murder, bolstered by fusion food feeding frenzies, endless cups of frou-frou coffee and serious surfing sessions. While Stacey knows a PI who owes her a favor, landlady Magda urges Emma to tart up her underwear drawer before the next cute cop with a search warrant arrives. Emma’s mother offers to fix her up with a PhD student at Berkeley and showers her with self-defense gizmos while her old lover Keoni beckons from Hawai’i. And everyone, even Shaun the barista, knows a good lawyer.

    Book 2, Denial of Service, is coming out this summer.

    * Given the rate of change in technology, today’s “thou shalts” are easily next year’s “buggy whip guidance.”


  • Stable random color algorithm

    - by Olmo
    Here we have an interesting real-world algorithm requirement involving colors.

    1) Nice random colors: in order to draw a beautiful chart (e.g. a pie chart) we need to pick a random set of colors that: a) are different enough, and b) play nicely together. This doesn't look hard: for example, you fix brightness and saturation and divide the hue into steps of 360/num_sectors.

    2) Stable: given Pie1 with sectors labelled ('A','B','C') and Pie2 with sectors labelled ('B','C','D'), it would be nice if color('B', pie1) = color('B', pie2), and the same for 'C' and so on, so people don't get confused when seeing similar updated charts, even if some sectors appeared, some disappeared, or the number of sectors changed. The label is the only stable thing.

    3) Hard-coded colors: the algorithm accepts hard-coded label-color relationships as an input but still does a good job of (1) and (2) for the rest of the free labels.

    I think this algorithm, even if it looks quite ad hoc, will be useful in more than one situation. Any ideas?
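
    One way to get requirement (2) for free is sketched below in Python: derive the hue deterministically from a hash of the label, fix saturation and brightness, and let an overrides dict satisfy (3). This is an illustration of the idea rather than a complete answer; hashing alone does not guarantee (1), since two labels can land on nearby hues, so a real version might nudge colliding hues apart:

        import colorsys
        import hashlib

        def stable_color(label, overrides=None):
            """Map a label to an (r, g, b) color that depends only on the label."""
            if overrides and label in overrides:
                return overrides[label]
            # Hash the label to a hue in [0, 1); saturation/value are fixed.
            digest = hashlib.md5(label.encode("utf-8")).digest()
            hue = int.from_bytes(digest[:4], "big") / 2**32
            r, g, b = colorsys.hsv_to_rgb(hue, 0.65, 0.95)
            return (round(r * 255), round(g * 255), round(b * 255))

        print(stable_color("B"))                      # same output in every chart
        print(stable_color("B", {"B": (0, 0, 255)}))  # hard-coded override wins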


  • What is the worst C#/.NET gotcha?

    - by MusiGenesis
    This question is similar to this one, but focused on C# and .NET. I was recently working with a DateTime object, and wrote something like this:

        DateTime dt = DateTime.Now;
        dt.AddDays(1);
        return dt; // still today's date! WTF?

    The IntelliSense documentation for AddDays says it adds a day to the date, which it doesn't - it actually returns a date with a day added to it, so you have to write it like:

        DateTime dt = DateTime.Now;
        dt = dt.AddDays(1);
        return dt; // tomorrow's date

    This one has bitten me a number of times before, so I thought it would be useful to catalog the worst C# gotchas.


  • Replacing Credit Card Numbers

    - by Del
    Hi, I'm using a SQL statement to replace credit card numbers with xxxx, and finding that REGEXP_REPLACE does not consistently replace everything. Below is the SET command I'm using in the SQL:

        SET COMMENTS_LONG = REGEXP_REPLACE (COMMENTS_LONG,
            '\D[1-6]\d{3}.\d{4}.\d{4}.\d{3}(\d{1}.\d{3})?|\D[1-6]\d{12,15}|\D[1-6]\d{3}.\d{3}.?\d{3}.\d{5}',
            ' XXXXXXXXXXXXXXXX')

    Before: Elizabeth aclled to change address.5430-6000-2111-1931 A
    After: Elizabeth aclled to change address XXXXXXXXXXXXXXXX1 A

    I tried increasing the number of Xs but the result is the same. I also find that I have to put a space in front of the first X, as the replacement appears to move one character to the left.
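
    For what it's worth, the stray trailing digit is consistent with the first alternative matching only 15 of the 16 digits: the optional (\d{1}.\d{3})? group needs a separator plus three more digits, so it never matches here, and the leading \D also consumes the character before the number. A simpler pattern is sketched below in Python for quick testing; carrying it over to REGEXP_REPLACE is an assumption, and the pattern will also match other long digit runs such as account numbers:

        import re

        # 13-16 digits, allowing single space/dot/dash separators between digits.
        CARD = re.compile(r"\b(?:\d[ .-]?){12,15}\d\b")

        def mask(text):
            return CARD.sub("XXXXXXXXXXXXXXXX", text)

        print(mask("Elizabeth aclled to change address.5430-6000-2111-1931 A"))
        # -> Elizabeth aclled to change address.XXXXXXXXXXXXXXXX A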


  • Correct permutation cycle for Verhoeff algorithm

    - by James
    Hello, I'm implementing the Verhoeff algorithm for a check digit scheme, but there seems to be some disagreement in web sources as to which permutation cycle should form the basis of the permutation table. Wikipedia uses (36)(01589427), while apparently Numerical Recipes uses a different cycle, and this book uses (0)(14)(23)(56789), quoted from a 1990 article by Winters. It also notes that Verhoeff used the one Wikipedia quotes. Now, my number theory is a little rusty, but the Wikipedia cycle will clearly repeat after the 8th power, while the book's will take 10, despite the book saying that s^8 = s. Table 2.14(b) has other errors in the 2-cycles, so it is dubious anyway. Unfortunately, I don't have copies of the original articles (and am too tight to pay/disgusted that 40-year-old knowledge is still being held to ransom by publishers), nor a copy of Numerical Recipes to check (and am loath to install their paranoia-induced copy-protection plug-in to view online). So does anyone know which is correct? Are they both correct?
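
    For reference, here is a sketch in Python built from the cycle the question quotes from Wikipedia, (36)(01589427): row 1 of the permutation table is that permutation, row i is its i-th power (which is why the table repeats with period 8), and the multiplication table is that of the dihedral group D5. These are the commonly published tables; treat them as a working reference rather than a verdict on the 1990 article:

        # Multiplication table of the dihedral group D5.
        d = [
            [0,1,2,3,4,5,6,7,8,9],
            [1,2,3,4,0,6,7,8,9,5],
            [2,3,4,0,1,7,8,9,5,6],
            [3,4,0,1,2,8,9,5,6,7],
            [4,0,1,2,3,9,5,6,7,8],
            [5,9,8,7,6,0,4,3,2,1],
            [6,5,9,8,7,1,0,4,3,2],
            [7,6,5,9,8,2,1,0,4,3],
            [8,7,6,5,9,3,2,1,0,4],
            [9,8,7,6,5,4,3,2,1,0],
        ]
        inv = [0,4,3,2,1,5,6,7,8,9]  # d[j][inv[j]] == 0

        # Row 1 is the cycle (36)(01589427); row i is its i-th power.
        p = [[0,1,2,3,4,5,6,7,8,9], [1,5,7,6,2,8,3,0,9,4]]
        for i in range(2, 8):
            p.append([p[i-1][p[1][j]] for j in range(10)])

        def check_digit(number: str) -> str:
            c = 0
            for i, digit in enumerate(reversed(number)):
                c = d[c][p[(i + 1) % 8][int(digit)]]
            return str(inv[c])

        def is_valid(number: str) -> bool:
            c = 0
            for i, digit in enumerate(reversed(number)):
                c = d[c][p[i % 8][int(digit)]]
            return c == 0

        n = "236"
        print(check_digit(n), is_valid(n + check_digit(n)))  # 3 True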


  • How to work with varargs and reflection

    - by PeterMmm
    Simple question: how do I make this code work?

        public class T {
            public static void main(String[] args) throws Exception {
                new T().m();
            }

            public // as mentioned by Bozho
            void foo(String... s) {
                System.err.println(s[0]);
            }

            void m() throws Exception {
                String[] a = new String[]{"hello", "kitty"};
                System.err.println(a.getClass());
                Method m = getClass().getMethod("foo", a.getClass());
                m.invoke(this, (Object[]) a);
            }
        }

    Output:

        class [Ljava.lang.String;
        Exception in thread "main" java.lang.IllegalArgumentException: wrong number of arguments
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)


  • Removing duplicates without overriding hash method

    - by Javi
    Hello, I have a List which contains a list of objects, and I want to remove from this list all the elements which have the same values in two of their attributes. I had thought about doing something like this:

        List<Class1> myList;
        ...
        Set<Class1> mySet = new HashSet<Class1>();
        mySet.addAll(myList);

    and overriding the hash method in Class1 so it returns a number which depends only on the attributes I want to consider. The problem is that I need to do a different filtering in another part of the application, so I can't override the hash method in this way (I would need two different hash methods). What's the most efficient way of doing this filtering without overriding the hash method? Thanks
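
    The underlying idea, deduplicating by a key derived from chosen attributes instead of by the object's own hash, is sketched below in Python; a Java version would keep a Set of the same keys while iterating. The names here are hypothetical:

        def dedup_by(items, key):
            """Keep the first item for each key; later duplicates are dropped."""
            seen = set()
            result = []
            for item in items:
                k = key(item)
                if k not in seen:
                    seen.add(k)
                    result.append(item)
            return result

        records = [("a", 1, "x"), ("a", 1, "y"), ("b", 2, "z")]
        # Filter on the first two attributes only; a different filtering
        # elsewhere can simply pass a different key function.
        print(dedup_by(records, key=lambda r: (r[0], r[1])))
        # [('a', 1, 'x'), ('b', 2, 'z')]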


  • Reduce lines of code

    - by coffeeaddict
    I'm not a JavaScript guru (yet). I am trying to figure out a way to cut down the number of lines below... are there any shortcuts for, let's say, the if statements?

        function showDialog(divID) {
            var dialogDiv = $("#" + divID);
            var height = 500;
            var width = 400;
            var resizable = false;

            if (dialogDiv.attr("height") != "") {
                height = parseInt(dialogDiv.attr("height"));
            }
            if (dialogDiv.attr("width") != "") {
                width = parseInt(dialogDiv.attr("width"));
            }
            if (dialogDiv.attr("resizable") != "") {
                resizable = dialogDiv.attr("resizable");
            }

            dialogDiv.dialog({
                resizable: resizable,
                width: width,
                height: height,
                bgiframe: true,
                modal: true,
                autoOpen: false,
                show: 'blind'
            });

            dialogDiv.dialog("open");
        }


  • Base class for windows service

    - by user187020
    My new project has a design in which there are a number of Windows services for performing different tasks. I have been given the task of creating a base class from which all of the Windows services will inherit. This base class will perform common functions like creating instances of other Windows services by iterating through the config file (maybe with Activator.CreateInstance), doing event logging on OnStart, OnStop etc., and it may contain some more functionality. Before I start developing, I'm wondering whether there is an established pattern for this, or whether someone with a good understanding of this kind of functionality can advise on how to implement it. Any help appreciated.


  • TFS and code coverage for web application (MVC) assemblies not working

    - by Andrew
    I've got an MVC web application with associated controller tests that run under a TFS build as per normal. I can see the tests running and passing in the build log and they appear in the "Result details for Any CPU/Release" section of the build I also have a number of other assemblies with associated tests that are running in the same build. Tests are passing and the details are being shown in the results and logs just fine. I've enabled code coverage in the build script and the testrunconfig. The coverage is appearing for all assemblies EXCEPT the web application even though it looks like the tests have been run for it. Is there anything obvious that I have missed or some sort of work around that I need to do? I've searched around for a while and haven't found an answer. Has anyone got code coverage working for MVC web applications?


  • Comparison of Code Review Tools/Systems

    - by SytS
    There are a number of tools/systems available aimed at streamlining and enhancing the code review process, including:

    • CodeStriker
    • Review Board, the code review system in use at VMware
    • Code Collaborator, a commercial product by SmartBear
    • Rietveld, based on Mondrian, the code review system in use at Google
    • Crucible, a commercial product by Atlassian

    These systems all have varying feature sets, and differ in degrees of maturity and polish; the selection is a little bewildering for someone who is evaluating code review systems for the first time. Some of these tools have already been mentioned in other questions/answers on Stack Overflow, but I would like to see a more comprehensive comparison of the more popular systems, especially with respect to:

    • integration with source control systems
    • integration with bug tracking systems
    • supported workflow (reviews pre/post commit, review of contiguous/non-contiguous revision ranges, etc.)
    • deployment/maintenance requirements


  • Why is thread local storage so slow?

    - by dsimcha
    I'm working on a custom mark-release style memory allocator for the D programming language that works by allocating from thread-local regions. It seems that the thread local storage bottleneck is causing a huge (~50%) slowdown in allocating memory from these regions compared to an otherwise identical single threaded version of the code, even after designing my code to have only one TLS lookup per allocation/deallocation. This is based on allocating/freeing memory a large number of times in a loop, and I'm trying to figure out if it's an artifact of my benchmarking method. My understanding is that thread local storage should basically just involve accessing something through an extra layer of indirection, similar to accessing a variable via a pointer. Is this incorrect? How much overhead does thread-local storage typically have? Note: Although I mention D, I'm also interested in general answers that aren't specific to D, since D's implementation of thread-local storage will likely improve if it is slower than the best implementations.
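
    For illustration, a micro-benchmark shape that isolates the per-access cost of thread-local storage from the surrounding allocation work is sketched below in Python. CPython's threading.local is not the compiler-level TLS that D uses, so the absolute numbers say nothing about D; the point is only the methodology of timing a TLS read against a plain read with nothing else in the loop:

        import threading
        import timeit

        class Plain:
            pass

        plain = Plain()
        plain.value = 0

        local = threading.local()
        local.value = 0

        # Time a bare attribute read against a thread-local attribute read.
        t_plain = timeit.timeit("plain.value", globals=globals(), number=5_000_000)
        t_local = timeit.timeit("local.value", globals=globals(), number=5_000_000)
        print(f"plain: {t_plain:.3f}s  thread-local: {t_local:.3f}s")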


  • sql: can i do a max(count(*)) ?

    - by every_answer_gets_a_point
    Here's my code:

        select yr, count(*)
        from movie
        join casting on casting.movieid = movie.id
        join actor on casting.actorid = actor.id
        where actor.name = 'John Travolta'
        group by yr

    Here's the question: which were the busiest years for 'John Travolta'? Show the number of movies he made for each year. Here's the table structure:

        movie(id, title, yr, score, votes, director)
        actor(id, name)
        casting(movieid, actorid, ord)

    This is the output I am getting:

        yr    count(*)
        1976  1
        1977  1
        1978  1
        1981  1
        1994  1
        etc, etc.

    I need to get the rows for which count(*) is at its maximum. How do I do this?
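
    For illustration, the selection logic the query still needs (keep only the groups whose count equals the maximum count) is sketched below in Python; in SQL, one common shape is a HAVING clause that compares count(*) with a subquery computing the maximum:

        from collections import Counter

        years = [1976, 1977, 1978, 1978, 1981, 1994, 1978]  # hypothetical yr values
        counts = Counter(years)

        busiest = max(counts.values())
        print([(yr, n) for yr, n in counts.items() if n == busiest])  # [(1978, 3)]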


  • How to model "target day" in UML Classdiagrams

    - by Tobiask
    Hi there, I want to describe the following situation in a UML class diagram: a day on which a newspaper is sent to a customer. This day could be something like "every Friday" or "every first day of the month". My idea to represent this in a UML class diagram:

        -targetDay: Integer
        -targetDayGrid: Enumeration

    targetDay would be something like "1" (for Monday) or "5" (for Friday), or it could be "1" for the first day of the month or "10" for the 10th day of the month. targetDayGrid is an enum: weekly, monthly. So the enum sets the semantic meaning of the number in targetDay. I'm not happy with this; do you know any other solution to represent my problem? Or do you think my solution is okay?


  • Displaying webcam feed using opencv and python

    - by Mitch
    Hi, I've been trying to create a simple program with Python which uses OpenCV to get a video feed from my webcam and display it on the screen. I know I'm partly there because the window is created and the light on my webcam flicks on, but it just doesn't seem to show anything in the window. Hopefully someone can explain what I'm doing wrong.

        import cv

        cv.NamedWindow("w1", cv.CV_WINDOW_AUTOSIZE)
        capture = cv.CaptureFromCAM(0)

        def repeat():
            frame = cv.QueryFrame(capture)
            cv.ShowImage("w1", frame)

        while True:
            repeat()

    On an unrelated note, I have noticed that my webcam sometimes changes its index number in cv.CaptureFromCAM, and sometimes I need to put in 0, 1 or 2 even though I only have one camera connected and I haven't unplugged it (I know because the light doesn't come on unless I change the index). Is there a way to get Python to determine the correct index? Thanks, Mitch
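
    For what it's worth, the usual culprit with this API is the missing cv.WaitKey call: HighGUI only processes window events and repaints inside WaitKey, so a loop that never calls it shows a blank window. A sketch of the loop with that call added (a likely fix, not a verified one):

        import cv

        cv.NamedWindow("w1", cv.CV_WINDOW_AUTOSIZE)
        capture = cv.CaptureFromCAM(0)

        while True:
            frame = cv.QueryFrame(capture)
            cv.ShowImage("w1", frame)
            # WaitKey pumps the HighGUI event loop; without it nothing is drawn.
            if cv.WaitKey(10) == 27:  # Esc to quit
                break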


  • NHibernate - mappping composite-id to table with non-composite primary key

    - by Dmitry
    I have a table like this:

        ID - PK
        KEY1 - UNIQUE1
        KEY2 - UNIQUE1

    If I don't want to use ID in my mappings, can I tell NHibernate to use KEY1 & KEY2 as a composite id? When I declare a composite id like this:

        <composite-id>
            <key-property name="KEY1"/>
            <key-property name="KEY2"/>
        </composite-id>

    I get FKUnmatchingColumnsException (Foreign key ... must have same number of columns as the referenced primary key ...)

