Search Results

Search found 4140 results on 166 pages for 'alias analysis'.

  • FxCop CA2227 warning and ReadOnlyCollection<T>

    - by brickner
    In my VS2008 SP1, .NET 3.5 SP1 project, I have different classes that contain different properties. I use C# 3.0 auto properties a lot. Some of these properties need to be collections. Since I want to keep things simple, I use ReadOnlyCollection&lt;T&gt; for these properties. I don't want to use IEnumerable&lt;T&gt; since I want random access to the elements. I use Code Analysis (FxCop rules) and I get the CA2227 warning. I don't understand why ReadOnlyCollection&lt;T&gt; should have a set method when it can't be changed... The set method can only do exactly what the property can do. Example:

        using System.Collections.ObjectModel;

        namespace CA2227
        {
            public class MyClass
            {
                public ReadOnlyCollection<int> SomeNumbers { get; set; }
            }
        }

    CA2227 : Microsoft.Usage : Change 'MyClass.SomeNumbers' to be read-only by removing the property setter. C:\Users...\Visual Studio 2008\Projects\CA2227\MyClass.cs 7 CA2227
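
    One way to satisfy the rule, sketched under the assumption that the collection is fixed at construction time (C# 3.0 auto properties cannot be get-only, so a readonly backing field stands in; the names come from the example above):

        using System.Collections.ObjectModel;

        public class MyClass
        {
            private readonly ReadOnlyCollection<int> _someNumbers;

            public MyClass(ReadOnlyCollection<int> someNumbers)
            {
                _someNumbers = someNumbers;
            }

            // No setter at all, so CA2227 has nothing to complain about.
            public ReadOnlyCollection<int> SomeNumbers
            {
                get { return _someNumbers; }
            }
        }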

  • Should I suppress CA1062: Validate arguments of public methods?

    - by brickner
    I've recently upgraded my project to Visual Studio 2010 from Visual Studio 2008. In Visual Studio 2008, this Code Analysis rule doesn't exist. Now I'm not sure if I should use this rule or not. I'm building an open source library, so it seems important to keep people safe from making mistakes. However, if all I'm going to do is throw ArgumentNullException when the parameter is null, it seems like writing useless code, since ArgumentNullException will be thrown even if I don't write that code. Should I remove that rule or fix the violations?
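
    For reference, the kind of guard clause the rule asks for is short; a minimal sketch, with a hypothetical method and parameter name:

        using System;

        public class Greeter
        {
            // The guard CA1062 asks for: validate before the first dereference,
            // so callers get an ArgumentNullException instead of a
            // NullReferenceException from deep inside the method.
            public void Process(string text)
            {
                if (text == null)
                    throw new ArgumentNullException("text");

                Console.WriteLine(text.Length);
            }
        }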

  • DataMining / Analyzing responses to Multiple Choice Questions in a survey

    - by Shailesh Tainwala
    Hi, I have a set of training data consisting of 20 multiple choice questions (A/B/C/D) answered by a hundred respondents. The answers are purely categorical and cannot be scaled to numerical values. 50 of these respondents were selected for a free product trial. The selection process is not known. What interesting knowledge can be mined from this information? The following is a list of what I have come up with so far:

    - A study of percentages (example: the percentage of people who answered B on Qs.5 and got selected for the free product trial)
    - Conditional probabilities (example: the probability that a person will get selected for the free product trial given that he answered B on Qs.5)
    - Naive Bayesian classifier (this can be used to predict whether a person will be selected or not for a given set of values for any subset of questions)

    Can you think of any other interesting analysis or data-mining activities that can be performed? The usual suspects like correlation can be eliminated as the response is not quantifiable/scoreable. Is my approach correct?
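
    The second item reduces to a ratio of counts; a minimal sketch of the estimate, assuming a hypothetical Respondent type (loading the actual survey data is out of scope here):

        using System.Linq;

        // Hypothetical shape of one survey record: 20 answers plus the outcome.
        public class Respondent
        {
            public char[] Answers;   // Answers[4] is question 5 (zero-based)
            public bool Selected;    // picked for the free product trial?
        }

        public static class SurveyStats
        {
            // Estimates P(selected | answered `answer` on question index `q`)
            // as count(selected and matching) / count(matching).
            public static double PSelectedGiven(Respondent[] data, int q, char answer)
            {
                var matching = data.Where(r => r.Answers[q] == answer).ToArray();
                if (matching.Length == 0)
                    return 0.0;
                return (double)matching.Count(r => r.Selected) / matching.Length;
            }
        }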

  • How to properly rewrite ASSERT code to pass /analyze in msvc?

    - by Sorin Sbarnea
    Visual Studio added code analysis (/analyze) for C/C++ in order to help identify bad code. This is quite a nice feature, but when you deal with an old project you may be overwhelmed by the number of warnings. Most of the problems are generated because the old code does some ASSERT at the beginning of the method or function. I think this is the ASSERT definition used in the code (from afx.h):

        #define ASSERT(f) DEBUG_ONLY((void) ((f) || !::AfxAssertFailedLine(THIS_FILE, __LINE__) || (AfxDebugBreak(), 0)))

    Example code:

        ASSERT(pBytes != NULL);
        *pBytes = 0; // <- warning C6011: Dereferencing NULL pointer 'pBytes'

    I'm looking for an easy and safe solution to resolve these warnings that does not imply disabling them. Did I mention that there are lots of occurrences in the current codebase?

  • Using "Show values as" option in Excel 2007 pivot table when source is SSAS cube?

    - by Brad
    I have an Excel 2007 pivot table showing "Year" across the top and "Month" down the side. What I am trying to do is represent the values as "% Difference" from the same month of the previous year. (Ex. If Jan-07 is $100,000 and Jan-08 is $120,000, I would like Jan-08 to show '20%'). However, every time I try to do this (using the "Show values as" tab of Value Field Settings) all of my numbers go to '#N/A'. Is there a way to do this using an Analysis Services cube as the data source? When I do this exact same thing using data on a different sheet as the data source for the pivot table, it works fine. Thanks in advance for any insight into this.

  • Utilise Surv object in ggplot or lattice

    - by Misha
    Anyone know how to take advantage of ggplot or lattice in doing survival analysis? It would be nice to do trellis/facet-like survival graphs. So in the end I played around and sort of found a solution for a Kaplan-Meier plot. Apologies for the messy code in taking the list elements into a data frame, but I couldn't figure out another way. Note: it only works with two levels of stratum. If anyone knows how I can use x <- length(stratum) to do this, please let me know (in Stata I could append to a macro; unsure how this works in R)...

        library(survival)
        library(ggplot2)

        ggkm <- function(time, event, stratum) {
          m2s <- Surv(time, as.numeric(event))
          fit <- survfit(m2s ~ stratum)
          f <- data.frame(
            time   = fit$time,
            surv   = fit$surv,
            upper  = fit$upper,
            lower  = fit$lower,
            strata = c(rep(names(fit$strata)[1], fit$strata[1]),
                       rep(names(fit$strata)[2], fit$strata[2]))
          )
          r <- ggplot(f, aes(x = time, y = surv, fill = strata, group = strata)) +
            geom_line() +
            geom_ribbon(aes(ymin = lower, ymax = upper), alpha = 0.3)
          return(r)
        }

  • Visualize compiler warnings

    - by christoffer
    I'm looking for a way to visualize compiler warnings and remarks, by annotating or otherwise showing which lines cause a report. This is much like what a modern IDE such as NetBeans or Eclipse already does, but I'd like to take output from several compilers (and other static code analysis tools) at once and create one single annotation, in order to get a better overview. The rationale is that we've seen some problems go completely undetected by, say, Visual Studio 2005, but accurately detected with a proprietary ARM compiler, and vice versa. Cross-referencing warnings could potentially locate problems better, but doing so completely manually is infeasible. Have you heard of such a tool? Could an open-source IDE like Eclipse be extended to use several compilers at once, or has it already been done?

  • Is there a way to circumvent CA1726:UsePreferredTerms?

    - by klausbyskov
    I have a problem with the code analysis rule CA1726:UsePreferredTerms. Our business domain has two crucial concepts named Case and Flag. According to CA, it's apparently a deadly sin to use these names; however, I really don't care since, as I said, they are crucial concepts in our domain model. CA complains not only about the type declarations but about every method parameter name as well. So does anyone know if there is a workaround other than adding loads of suppressions or disabling the rule altogether? Could I add the names to a custom dictionary?

  • MATLAB fit exp2

    - by HelloWorld
    I'm unsuccessfully looking for documentation of the fit function using exp2 (a sum of two exponentials). How to call the function is clear:

        [curve, gof] = fit(x, y, 'exp2');

    But since there are multiple ways to fit a sum of exponentials, I'm trying to find out what algorithm is used. In particular, what happens when I'm fitting one exponential (the raw data) with a bit of noise - how are the exponentials spread? I've simulated several cases, and it seems that it "drops" all the weight on the second set of coefficients, but raw data analysis often shows different behavior. Does anyone have suggestions for documentation?

  • Determining Connections between data in a single table

    - by user1689749
    Hi, I'm a BA / programmer type doing data analysis on a legacy system. I've been teaching myself SQL to help, but I appear to have hit upon a problem bigger than my abilities. I have two tables (generalized for simplicity):

        Table Objects
            Object_PK

        Table Components
            Component_PK
            Object_FK
            Component_Type

    There are 100+ distinct values in Component_Type_Code. Given that any object can have N number of components, how can I see which Component_Type(s) appear with other Component_Type(s)? For example, the following query tells me what component_types appear with the component_type 'component_type_1':

        select component_type_code, count(*)
        from components
        where object_fk in (
            select object_fk
            from components
            where component_type_code = 'component_type_1'
        )
        group by component_type_code

    I'd like to get a query that shows me all such connections. My apologies for the formatting. Any help is appreciated. I've looked at cube and rollup, but didn't know how to apply them to this situation.

  • How to deal with static analyzer output

    - by Jim
    We have started using a static analyzer (Coverity) on our code base. We were promptly stupefied by the sheer number of warnings we received (it's in the hundreds of thousands); it would take the entire team a few months to clear them all (obviously impossible). The options we have discussed so far are:

    1) Hire a contractor to sort out the warnings and fix them - the drawback: we will probably need very experienced people to do all these modifications, and no contractor will have the required understanding of the code.

    2) Filter out the warnings and deal only with the dangerous ones - the problem here is that our static analysis output will always be cluttered by warnings, making it difficult for us to isolate problems. Also, filtering the warnings is a major effort in itself.

    Either way, bringing our code to a state where the static analyzer can be a useful tool for us seems a monumental task. So how is it possible to work with the static analyzer without bringing current development efforts to a complete standstill?

  • Why do I get CA1811 when I call a private method from a public method in C++/CLI?

    - by brickner
    I've recently upgraded my project from Visual Studio 2008 to Visual Studio 2010. By enabling Code Analysis and building on Release, I'm getting warning CA1811: Avoid uncalled private code. I've managed to reduce the code to this:

    .h file:

        public ref class Foo
        {
        public:
            virtual System::String^ ToString() override;

        private:
            static System::String^ Bar();
        };

    .cpp file:

        String^ Foo::ToString()
        {
            return Bar();
        }

        String^ Foo::Bar()
        {
            return "abc";
        }

    The warning I get:

        CA1811 : Microsoft.Performance : 'Foo::Bar(void)' appears to have no upstream public or protected callers.

    It doesn't matter if Bar() is static or not. I've tried to reproduce it in C# but I can't. I can only reproduce it in C++/CLI. Why do I get this warning? Is this a Visual Studio 2010 bug?
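
    If it does turn out to be an analyzer limitation with C++/CLI call sites, a source-level suppression is the usual escape hatch. A sketch in C# syntax - the same attribute is available to C++/CLI, and the justification text here is hypothetical:

        using System.Diagnostics.CodeAnalysis;

        public class Foo
        {
            public override string ToString()
            {
                return Bar();
            }

            // Suppresses CA1811 for this member only, with a recorded reason.
            [SuppressMessage("Microsoft.Performance", "CA1811:AvoidUncalledPrivateCode",
                Justification = "Called from ToString; the analyzer misses the call site.")]
            private static string Bar()
            {
                return "abc";
            }
        }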

  • How can I load an MP3 or similar music file for display and analysis in wxWidgets?

    - by Jon Cage
    I'm developing a GUI in wxPython which allows a user to generate sequences of colours for some toys I'm building. Part of the program needs to load an MP3 (and potentially other formats further down the line) and display it to the user. That should be sufficient to get started, but later I'd like to add features like identifying beats and some crude frequency analysis. Is there any simple way of loading / understanding an MP3's contents to display a plot of its amplitudes on the screen using wxWidgets? I later intend to port to C++/wxWidgets for speed and to avoid having to distribute wxPython.

  • Is there some way to assume @Nullable as default? (using FindBugs or any other free tool).

    - by alex2k8
    Consider code such as:

        public void m1(String text) {
            if (text == null)
                text = "<empty>";
            System.out.println(text.toLowerCase());
        }

    And this is a buggy version:

        public void m1(String text) {
            System.out.println(text.toLowerCase());
        }

    If a null value is passed, a NullPointerException may be thrown. I would like the static-analysis tool (e.g. FindBugs) to report this issue. Unfortunately, FindBugs (at least by default) requires me to specify the @Nullable annotation explicitly:

        public void m1(@Nullable String text) {
            // FindBugs: text must be nonnull but is marked as nullable
            System.out.println(text.toLowerCase());
        }

    The problem is that if I forget to annotate it, the bug will be missed!!! How can I make FindBugs (or any other free tool) assume @Nullable by default?

  • How to raise a warning if a return value is disregarded - gcc or static code check?

    - by Drakosha
    I'd like to see all the places in my code (C++) which disregard the return value of a function. How can I do it - with gcc or a static code analysis tool? Bad code example:

        int f(int z) {
            return z + (z*2) + z/3 + z*z + 23;
        }

        int main() {
            int i = 7;
            f(i); ///// <<----- here I disregard the return value
            return 1;
        }

    Update: it should work even if the function and its use are in different files; a free static check tool is preferred.

  • Is object clearing/array deallocation really necessary in VB6/VBA (Pros/Cons?)

    - by Oorang
    Hello, a lot of what I have learned about VB I learned from using static code analysis (particularly Aivosto's Project Analyzer). One of the things it checks for is whether or not you cleared all objects and arrays. I used to just do this blindly because PA said so. But now that I know a little bit more about the way VB releases resources, it seems to me that these things should be happening automatically. Is this a legacy feature from pre-VB6, or is there a reason why you should explicitly set objects back to Nothing and use Erase on arrays?

  • Preserving Language across inline Calculated Members in SSAS

    - by Tullo
    Problem: I need to retrieve the language of a given cell from the cube. The cell is defined by code-generated MDX, which can have an arbitrary level of indirection as far as calculated members and sets go (defined in the WITH clause). SSAS appears to ignore the Language of the specified members when you declare a calculated member inline in the query. Example:

    - The cube's default locale is 1033 (en-US).
    - The cube contains a calculated measure called [Net Pounds], which is defined as [Net Amt], language=2057 (en-GB).
    - The query requests this measure alongside an inline calculated measure which is simply an alias to [Net Pounds].

    When used directly, the measure is formatted in the en-GB locale, but when aliased, the measure falls back to using the cube default of en-US. Here's what the query looks like:

        WITH MEMBER [Measures].[Pounds Indirect] AS [Measures].[Net Pounds]
        SELECT {
            [Measures].[Pounds Indirect],
            [Measures].[Net Pounds]
        } ON AXIS(0)
        FROM [Cube]
        CELL PROPERTIES language, value, formatted_value

    The query returns the expected two cells, but only uses the [Net Pounds] locale when used directly. Is there an option or switch somewhere in SSAS that will allow locale information to be visible in calculated members? I realise that it is possible to declare the inline calculated member in a particular locale, but this would involve extracting the locale from the tuple first, which (since the cube's member is isolated in the application's query schema) is unknown.

  • Apache: Isn't chmod 755 enough to set up a symlink or alias on Apache httpd on Mac OS 10.5?

    - by eed3si9n
    On my Mac OS 10.5 machine, I would like to set up a subfolder of ~/Documents, like ~/Documents/foo/html, to be http://localhost/foo. The first thing I thought of doing is using Alias as follows:

        Alias /foo /Users/someone/Documents/foo/html
        <Directory "/Users/someone/Documents/foo/html">
            Options Indexes FollowSymLinks MultiViews
            Order allow,deny
            Allow from all
        </Directory>

    This got me 403 Forbidden. In the error_log I got:

        [error] [client ::1] (13)Permission denied: access to /foo denied

    The subfolder in question has chmod 755 access. I've tried specifying links like http://localhost/foo/test.php, but that didn't work either. Next, I tried the symlink route. I went into /Library/WebServer/Documents and made a symlink to ~/Documents/foo/html. The document root has:

        Options Indexes FollowSymLinks MultiViews

    This still got me 403 Forbidden:

        Symbolic link not allowed or link target not accessible: /Library/WebServer/Documents/foo

    What else do I need to set this up?

    Solution:

        $ chmod 755 ~/Documents

    In general, the folder to be shared and all of its ancestor folders need to be viewable by the www service user.

  • How do I set up an Alias on Apache with XAMPP on Linux? (Permission problem)

    - by knarf
    XAMPP works fine, but I want http://localhost/f to point to /home/knarf/prog/php/fwyxz. I've run chmod -R 777 /home/knarf/prog/php/fwyxz and added the following at the end of httpd.conf:

        Alias /f /home/knarf/prog/php/fwyxz

    When I try to access it, I get a 403. From the Apache error_log:

        [error] [client 127.0.0.1] (13)Permission denied: access to /f denied.

    I've already tried several solutions (userdir and symlinks), but they both failed with the same error. I've also tried to add this after the Alias:

        <Directory "/home/knarf/prog/php/fwyxz">
            Order allow,deny
            Allow from all
        </Directory>

    But again, permission denied. Now if I change the User/Group under which Apache runs from nobody to knarf, it seems to work (static files are OK) but PHP can't use/initialize sessions:

        [error] [client 127.0.0.1] PHP Warning: session_start() [function.session-start]: open(/tmp/sess_r5nrmu4ugqguqqe83rs53lq6k0, O_RDWR) failed: Permission denied (13) in /home/knarf/prog/php/fwyxz/index.php on line 3
        [error] [client 127.0.0.1] PHP Warning: Unknown: open(/tmp/sess_r5nrmu4ugqguqqe83rs53lq6k0, O_RDWR) failed: Permission denied (13) in Unknown on line 0
        [error] [client 127.0.0.1] PHP Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct () in Unknown on line 0

    This is really frustrating.

  • CA2000 passing object reference to base constructor in C#

    - by Timothy
    I receive a warning when I run some code through Visual Studio's Code Analysis utility which I'm not sure how to resolve. Perhaps someone here has come across a similar issue, resolved it, and is willing to share their insight. I'm programming a custom-painted cell used in a DataGridView control. The code resembles:

        public class DataGridViewMyCustomColumn : DataGridViewColumn
        {
            public DataGridViewMyCustomColumn()
                : base(new DataGridViewMyCustomCell())
            {
            }
        }

    It generates the following warning:

        CA2000 : Microsoft.Reliability : In method 'DataGridViewMyCustomColumn.DataGridViewMyCustomColumn()' call System.IDisposable.Dispose on object 'new DataGridViewMyCustomCell()' before all references to it are out of scope.

    I understand it is warning me that DataGridViewMyCustomCell (or a class that it inherits from) implements the IDisposable interface, and that the Dispose() method should be called to clean up any resources claimed by DataGridViewMyCustomCell when it is no longer needed. The examples I've seen on the internet suggest a using block to scope the lifetime of the object and have the system automatically dispose of it, but base isn't recognized when moved into the body of the constructor, so I can't write a using block around it... which I'm not sure I'd want to do anyway, since wouldn't that instruct the runtime to free the object, which could still be used later inside the base class?

    My question then: is the code okay as is? Or how could it be refactored to resolve the warning? I don't want to suppress the warning unless it is truly appropriate to do so.
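
    One common resolution, sketched on the assumption that the base DataGridViewColumn stores the cell as its CellTemplate and takes over its lifetime: suppress this specific instance with a recorded justification rather than dispose an object the base class still holds. (DataGridViewMyCustomCell here is the custom cell type from the question.)

        using System.Diagnostics.CodeAnalysis;
        using System.Windows.Forms;

        public class DataGridViewMyCustomColumn : DataGridViewColumn
        {
            // CA2000 fires on the temporary, but the base constructor stores
            // it as the column's CellTemplate, so it must not be disposed here.
            [SuppressMessage("Microsoft.Reliability",
                "CA2000:DisposeObjectsBeforeLosingScope",
                Justification = "Ownership is transferred to the base class.")]
            public DataGridViewMyCustomColumn()
                : base(new DataGridViewMyCustomCell())
            {
            }
        }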

  • CA2000 and disposal of WCF client

    - by Mayo
    There is plenty of information out there concerning WCF clients and the fact that you cannot simply rely on a using statement to dispose of the client. This is because the Close method can throw an exception (e.g. if the server hosting the service doesn't respond). I've done my best to implement something that adheres to the numerous suggestions out there:

        public void DoSomething()
        {
            MyServiceClient client = new MyServiceClient(); // from service reference
            try
            {
                client.DoSomething();
            }
            finally
            {
                client.CloseProxy();
            }
        }

        public static void CloseProxy(this ICommunicationObject proxy)
        {
            if (proxy == null)
                return;

            try
            {
                if (proxy.State != CommunicationState.Closed &&
                    proxy.State != CommunicationState.Faulted)
                {
                    proxy.Close();
                }
                else
                {
                    proxy.Abort();
                }
            }
            catch (CommunicationException)
            {
                proxy.Abort();
            }
            catch (TimeoutException)
            {
                proxy.Abort();
            }
            catch
            {
                proxy.Abort();
                throw;
            }
        }

    This appears to be working as intended. However, when I run Code Analysis in Visual Studio 2010 I still get a CA2000 warning:

        CA2000 : Microsoft.Reliability : In method 'DoSomething()', call System.IDisposable.Dispose on object 'client' before all references to it are out of scope.

    Is there something I can do to my code to get rid of the warning, or should I use SuppressMessage to hide this warning once I am comfortable that I am doing everything possible to be sure the client is disposed of? Related resources that I've found:

    - http://www.theroks.com/2011/03/04/wcf-dispose-problem-with-using-statement/
    - http://www.codeproject.com/Articles/151755/Correct-WCF-Client-Proxy-Closing.aspx
    - http://codeguru.earthweb.com/csharp/.net/net_general/tipstricks/article.php/c15941/
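
    If suppression ends up being the answer, one way to keep it contained is to funnel every create/use/close cycle through a single hypothetical helper, so only one method ever carries the attribute; a sketch building on the CloseProxy extension above:

        using System;
        using System.ServiceModel;

        public static class ServiceProxy
        {
            // The only place a client is created without a using block, and so
            // the only place a CA2000 suppression would ever need to live.
            public static void Use<TClient>(Func<TClient> create, Action<TClient> action)
                where TClient : ICommunicationObject
            {
                TClient client = create();
                try
                {
                    action(client);
                }
                finally
                {
                    client.CloseProxy(); // the extension method shown above
                }
            }
        }

        // Usage: ServiceProxy.Use(() => new MyServiceClient(), c => c.DoSomething());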

  • How to obtain dependency metrics from Java source code?

    - by Bram Schoenmakers
    For an assignment we have to extract some software metrics from the Hibernate project. We have to extract the afferent coupling and efferent coupling metrics (dependency fan-in, fan-out) from each revision of each package in Hibernate. Some tools were provided which are able to extract these metrics, such as ckjm and JDepend. Other tools I have checked were Sonar, javancss and AOP. There is also the Metrics Eclipse plugin which I didn't get to work either. What these tools have in common, as far as I can see, is that they all operate on bytecode (*.class files). This is a problem, because I have to build every revision from source in order to run, say, JDepend on it. Older revisions won't build because my development stack is too recent. What I would like to do is to do this kind of analysis on source files so that I don't have to build each revision. Is this possible? Or is there a good reason why all these tools only operate on bytecode?

  • Time complexity with bit cost

    - by Keyser
    I think I might have completely misunderstood bit cost analysis. I'm trying to wrap my head around the concept of studying an algorithm's time complexity with respect to bit cost (instead of unit cost), and it seems to be impossible to find anything on the subject. Is this considered to be so trivial that no one ever needs to have it explained to them? Well, I do. (Also, there doesn't even seem to be anything on Wikipedia, which is very unusual.) Here's what I have so far: the bit cost of multiplication and division of two numbers with n bits is O(n^2) (in general?). So, for example:

        int number = 2;
        for (int i = 0; i < n; i++) {
            number = i * i;
        }

    This has a time complexity with respect to bit cost of O(n^3), because it does n multiplications (right?) But in a regular scenario we want the time complexity with respect to the input. So, how does that scenario work? The number of bits in i could be considered a constant, which would make the time complexity the same as with unit cost, except with a bigger constant (and both would be linear). Also, I'm guessing addition and subtraction can be done in constant time, O(1). I couldn't find any info on it, but it seems reasonable since it's one assembler operation.
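
    A hedged back-of-envelope for the loop above, assuming schoolbook O(b^2) multiplication of b-bit numbers; the result depends on what counts as the input. (At bit cost, adding two b-bit numbers is O(b) rather than O(1), though a machine adds word-sized operands in one instruction.)

        % If n is the loop bound, the counter i stays below n, so i has
        % b = O(log n) bits and one product i*i costs O(log^2 n) bit operations:
        \[
          T(n) \;=\; \sum_{i=0}^{n-1} O\!\left(\log^{2} n\right)
               \;=\; O\!\left(n \log^{2} n\right).
        \]
        % The O(n^3) figure quoted above instead treats each operand as an
        % n-bit number: n multiplications at O(n^2) bit operations each.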
