Search Results

Search found 3226 results on 130 pages for 'portfolio analysis'.

  • Why use sealed instead of static on a class?

    - by sq33G
    Our system has several utility classes. Some people on our team use (A) a class with all-static methods and a private constructor. Others use (B) a class with all-static methods but no private constructor (these are the juniors). On code analysis, both (A) and (B) raise warning CA1052, which recommends marking the class as sealed. The MSDN documentation includes the following advice: if you are targeting .NET Framework 2.0 or earlier, a better approach is to mark the type as static. Why does this make any sense? I would have thought the opposite; AFAIK, prior to 2.0 there was no way to mark a class as static.

    Read the article

  • How to make a good portfolio for an IT student (who loves programming) like me?

    - by Viet
    I am currently a college student, and I am going to apply to a university probably next month. Unlike art students, who can easily put their works (models, designs, and so on) into a portfolio, I am hitting a dead end trying to find a "creative" way to showcase my work as a programmer. It would be normal for a programmer to show a good project with source code and everything else. That would be no problem with actually good projects, but all of my projects are crappy (can't help it, since I am still a student and don't have much work experience) and I don't even know if they are worth showing. Nonetheless, I have learned a lot in only one year since I started programming. I am now familiar with Java, PHP, ActionScript 3, C# and Objective-C, and I am on my way to learning Ruby. I plan to build a Flash portfolio using ActionScript with Ruby as the backend to show what I have learned. The problem is ideas. How do I show people that I have learned a lot of useful things? Otherwise I hit a dead end and just show what I have on GitHub (but I certainly never want that...)

    Read the article

  • Development methodology for single web developer?

    - by CaseTA
    I'm a web developer who mostly works with the LAMP stack on my own projects. Most of the time I just start coding on a project, fixing bugs and adding features as I go along. Often I'll try to use an existing solution such as Wordpress or Drupal. Now that I'm thinking of creating my own web application with businesses as the target group, I feel there's a need for proper analysis and design: something lightweight enough for a one-person project, yet solid enough to handle requirements, user interfaces, security, etc. If you could recommend methodologies and literature, I would be grateful.

    Read the article

  • A question on using Microsoft Excel to do data analysis

    - by Gemma
    Hello everyone, my classmates and I have been given an Excel spreadsheet which contains data from a census report. Each column corresponds to a year (2000-2010), each row to a city/town, and each cell is simply the population of a town in a particular year. We want to do some analysis, like "which town had its first population gain in 10 years?" My question is simply: can we do this in Excel, or do we need to export the data to a database (SQL) and then do the programming? Thanks in advance.
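
    For comparison, outside Excel this is only a few lines of pandas. A minimal sketch, assuming the sheet loads with towns as rows and year columns in chronological order (the file name and the "Town" index column are hypothetical):

        import pandas as pd

        # Load the census sheet: rows are towns, columns are the years 2000-2010.
        df = pd.read_excel("census.xlsx", index_col="Town")

        # Year-over-year population change for each town.
        growth = df.diff(axis=1)

        # First year showing a gain, per town; towns with no gain become NaN.
        gains = growth.gt(0)
        first_gain = gains.idxmax(axis=1).where(gains.any(axis=1))
        print(first_gain.dropna())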

    Read the article

  • Best ORM, simple data structures, strong query analysis

    - by sayth
    What is the best ORM/DB combination for simple data structures? That is, data that contains names and locations as identifiers, but whose main interaction will be numerical data: times (sports durations) and currency-related data. Initially I want to create a sports database that will take names and statistics. Later I plan to start an investment and stock analysis DB. Which ORM suits storing many numerical types and has strong query functions? I am really not biased toward any DB engine (most likely I'd use SQLite or Mongo), so any suggestions for the best embedded (serverless) database to suit said ORM are appreciated.
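
    As one data point, here is a minimal SQLAlchemy sketch on top of SQLite, the embedded option mentioned above. The model, column names and values are hypothetical, just to show numeric columns and an aggregate query:

        from sqlalchemy import (Column, Float, Integer, Numeric, String,
                                create_engine, func, select)
        from sqlalchemy.orm import Session, declarative_base

        Base = declarative_base()

        class Result(Base):
            __tablename__ = "results"
            id = Column(Integer, primary_key=True)
            athlete = Column(String, index=True)
            location = Column(String)
            duration_s = Column(Float)            # sports duration, in seconds
            prize_money = Column(Numeric(12, 2))  # currency as an exact decimal

        engine = create_engine("sqlite:///sports.db")  # embedded, serverless file
        Base.metadata.create_all(engine)

        with Session(engine) as session:
            session.add(Result(athlete="A. Runner", location="Oslo",
                               duration_s=754.3, prize_money=1500))
            session.commit()
            # Aggregate query: average duration per athlete.
            averages = session.execute(
                select(Result.athlete, func.avg(Result.duration_s))
                .group_by(Result.athlete)
            ).all()
            print(averages)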

    Read the article

  • Matlab: Analysis of signal

    - by Mateusz
    Hi, I have a problem with this task: for a free route, perform frequency analysis and give the parameters of each signal component:

    - time of beginning and end of each component
    - beginning and ending frequency
    - amplitude (in the time domain) at the beginning and end of each component
    - level of noise in dB

    Assume that the parameters of each component, like amplitude and frequency, change linearly in time. The sampling frequency is 1000 Hz. For example, I have a signal like this:

        Nx = 64;
        fs = 1000;
        t = 1/fs*(0:Nx-1);
        %==========================
        A1 = 1; A2 = 4;
        f1 = 500; f2 = 1000;
        x1 = A1*cos(2*pi*f1*t);
        x2 = A2*sin(2*pi*f2*t);
        %==========================
        x = x1 + x2;

    Read the article

  • What threading analysis tools do you recommend?

    - by glutz78
    My primary IDE is Visual Studio 2005 and I have a large C/C++ project. I'm interested in which thread analysis tools are recommended. By that I mean I want a tool, static or dynamic, to help find race conditions, deadlocks, and the like. So far I've casually researched the following:

    1. Intel Thread Checker: I don't believe it ties into VS 2005?
    2. Valgrind/Helgrind: free.
    3. Coverity: a costly tool, if I understand correctly.

    Does anyone have experience with any of these, or others? I'd much appreciate any advice. Thank you.

    Read the article

  • Cluster analysis on two columns that contain names of persons in R

    - by Alka Shah
    I am a beginner in R. I have to do cluster analysis on data that contains two columns with names of persons. I converted it into a data frame, but it is of character type, and to use the dist() function the data frame must be numeric. An example of my data:

        Interviewed.Type  interviewed.Relation.Type
        1.    An1         Xuan
        2.    An2         The
        3.    An3         Ngoc
        4.    Bui         Thi
        5.    ANT         feed
        7.    Bach        Thi
        8.    Gian1       Thi
        9.    Lan5        Thi
        ...
        1100. Xung        Van

    I will be grateful for your help.
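
    The question is about R, but the underlying trick is language-independent: turn pairwise string similarity into a numeric distance matrix and cluster on that. A rough Python sketch of the idea, using names from the sample above (in R the analogue would be building a string-distance matrix and passing it to hclust):

        from difflib import SequenceMatcher
        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import squareform

        names = ["An1 Xuan", "An2 The", "An3 Ngoc", "Bui Thi", "Bach Thi"]
        n = len(names)
        dist = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                # distance = 1 - similarity ratio of the two name strings
                sim = SequenceMatcher(None, names[i], names[j]).ratio()
                dist[i, j] = dist[j, i] = 1 - sim

        # Condense the square matrix and run average-linkage clustering.
        labels = fcluster(linkage(squareform(dist), method="average"),
                          t=0.5, criterion="distance")
        print(dict(zip(names, labels)))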

    Read the article

  • (Visual) C++ project dependency analysis

    - by polyglot
    I have a few large projects I am working on at my new place of work, which have a complicated set of statically linked library dependencies between them. The libs number around 40-50, and it's really hard to determine what the structure was initially meant to be; there isn't clear documentation of the full dependency map. What tools would anyone recommend to extract such data? Presumably, in the simplest manner, if one did the following:

    - define the set of paths which correspond to library units
    - set all .cpp/.h files within those paths to belong to those compilation units
    - capture the first-order #include dependency tree

    one would have enough information to compose a map, refactor, and recompose the map until some order is created. I note that http://www.ndepend.com has something nice, but that's exclusively .NET, unfortunately. I read something about Doxygen being able to accomplish some static dependency analysis with configuration; has anyone ever pressed it into service to accomplish such a task?
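
    Failing a ready-made tool, the three steps above are scriptable. A minimal sketch (the unit paths are hypothetical placeholders) that maps headers to owning units and counts first-order unit-to-unit #include edges:

        import collections
        import os
        import re

        units = {"libfoo": r"D:\src\libfoo", "libbar": r"D:\src\libbar"}
        include_re = re.compile(r'^\s*#\s*include\s*[<"]([^>"]+)[>"]')

        # Steps 1+2: map each header file name to the unit that owns it.
        owner = {}
        for unit, root in units.items():
            for dirpath, _, files in os.walk(root):
                for f in files:
                    if f.endswith((".h", ".hpp")):
                        owner[f] = unit

        # Step 3: count unit-to-unit edges implied by #include lines.
        edges = collections.Counter()
        for unit, root in units.items():
            for dirpath, _, files in os.walk(root):
                for f in files:
                    if not f.endswith((".cpp", ".h", ".hpp")):
                        continue
                    with open(os.path.join(dirpath, f), errors="ignore") as fh:
                        for line in fh:
                            m = include_re.match(line)
                            if m:
                                target = owner.get(os.path.basename(m.group(1)))
                                if target and target != unit:
                                    edges[(unit, target)] += 1

        for (src, dst), count in edges.most_common():
            print(f"{src} -> {dst} ({count} includes)")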

    Read the article

  • Tool for response time analysis on JBoss server?

    - by Ariel Vardi
    I am running a pretty high-traffic cluster of JBoss servers serving REST requests, and I am interested in tools that read the access logs in Tomcat format (with the %D parameter) to provide a detailed analysis of response time on a per-call basis. Ideally this tool would generate a chart showing the progression of the response time throughout the day, hour by hour, then a weekly view with daily averages, and a monthly view with weekly averages (CACTI style). I've looked for such tools and couldn't find anything. Is any of you aware of something close to that, before I start writing my own? I haven't looked into CACTI extensions yet, but would that be an option?
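
    The "write my own" route starts small. A rough Python sketch, assuming each access-log line carries a bracketed timestamp and ends with the numeric %D field (the log path is a placeholder):

        import re
        from collections import defaultdict
        from datetime import datetime

        # Bracketed timestamp (captured down to the hour) plus a trailing %D value.
        line_re = re.compile(r'\[([^\]:]+:\d{2}):\d{2}:\d{2}[^\]]*\].*?(\d+)$')

        hourly = defaultdict(list)
        with open("access.log") as fh:
            for line in fh:
                m = line_re.search(line.strip())
                if not m:
                    continue
                stamp, elapsed = m.groups()
                hour = datetime.strptime(stamp, "%d/%b/%Y:%H")
                hourly[hour].append(int(elapsed))

        # Hour-by-hour averages: the raw material for a CACTI-style chart.
        for hour in sorted(hourly):
            times = hourly[hour]
            print(f"{hour:%Y-%m-%d %H}:00  avg={sum(times)/len(times):.1f}  n={len(times)}")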

    Read the article

  • Matlab Question - Principal Component Analysis

    - by Jack
    I have a set of 100 observations, where each observation has 45 characteristics, and each of those observations has a label attached which I want to predict based on those 45 characteristics. So it's an input matrix with dimension 45 x 100 and a target matrix with dimension 1 x 100. The thing is that I want to know how many of those 45 characteristics are relevant in my data set, basically via principal component analysis, and I understand that I can do this with the Matlab function processpca. Could you please tell me how? Suppose that the input matrix is x, with 45 rows and 100 columns, and y is a vector with 100 elements.
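
    Not an answer on processpca itself, but for illustration, the computation PCA performs on this data shape can be sketched in a few lines of Python with numpy (the random x is a placeholder for the real 45 x 100 input):

        import numpy as np

        x = np.random.rand(45, 100)             # placeholder for the real input

        xc = x - x.mean(axis=1, keepdims=True)  # center each characteristic
        # Columns of u are principal directions; s**2 is proportional to variance.
        u, s, vt = np.linalg.svd(xc, full_matrices=False)

        explained = (s ** 2) / (s ** 2).sum()
        k = int(np.searchsorted(np.cumsum(explained), 0.95)) + 1
        print(f"{k} of 45 components explain 95% of the variance")

        scores = u[:, :k].T @ xc                # data projected onto top k components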

    Read the article

  • SAS (Statistical Analysis System) career as a computer science student

    - by Renju
    Hi. I completed an MSc in Computer Science this academic year, so I am a fresher. During my graduation and post-graduation I did many projects using PHP and MySQL. Now I have the opportunity to get into a SAS (Statistical Analysis System) career, and I have heard that SAS has better career growth than PHP development. For the past 4 days I have been working with SAS, and I find the work interesting. My questions are: Since I am a computer science student, will I have any problem with career growth in SAS? I am ready to learn statistics as well; is there anything else I have to do? Will doing a certification in SAS make any difference? Is it a bad idea to get into SAS with only a CS background? Please guide me in my career...

    Read the article

  • Any recommended VC++ settings for better PDB analysis on release builds

    - by Brian R. Bondy
    Are there any VC++ settings I should know about to generate better PDB files that contain more information? I have a crash dump analysis system in place based on the project crashrpt. Also, my production build server has the source code installed on the D:\, but my development machine has the source code on the C:\. I entered the source path in the VC++ settings, but when looking through the call stack of a crash, it doesn't automatically jump to my source code. I believe if I had my dev machine's source code on the D:\ it would work.

    Read the article

  • MDX Studio download #mdx #ssas

    - by Marco Russo (SQLBI)
    Short version: the latest available version of MDX Studio can be downloaded from http://www.sqlbi.com/tools/mdx-studio/ Long version: Last week Stacia Misner tweeted that the online version of MDX Studio was no longer available. It was hosted on http://mdx.mosha.com. It was sad news, and it is also not good that nobody is maintaining the desktop version of MDX Studio. The latest release is 0.4.14 and, as I write, it is still available on a SkyDrive link provided by Mosha Pasumansky, who wrote MDX Studio. Mosha no longer works at Microsoft, and the entire BI community hopes that somebody will continue his work on this product. Unfortunately, it cannot be published on CodePlex because of some IP restrictions. Only bad news? Well, I hope not. The first good news is that MDX Studio also works with Analysis Services 2012 in Multidimensional mode. The second is that, after having checked that we can do so, we created a page on the SQLBI web site to download the latest available release of MDX Studio. I hope it will be necessary to update it in the future; for now, it is just a way to simplify finding and downloading this precious tool, and to ensure it will not disappear in case the SkyDrive folder currently hosting the download is discontinued, as happened to the MDX Studio online version. Now a question for the BI community: I know that there was some tutorial content available for MDX Studio. I'd like to gather it and put it all in a single place. If you have such content, please contact me directly, writing to marco (dot) russo (at) sqlbi [dot] com. Thanks!

    Read the article

  • Meet SQLBI at PASS Summit 2012 #sqlpass

    - by Marco Russo (SQLBI)
    Next week Alberto Ferrari and I will be in Seattle at PASS Summit 2012. You can meet us at our sessions, at a book signing, and hopefully watching some other sessions during the conference. Here are our appointments:

    Thursday, November 08, 2012, 10:15 AM - 11:45 AM – Alberto Ferrari – Room 606-607
    Querying and Optimizing DAX (BIA-321-S)
    Do you want to learn how to write DAX queries and how to optimize them? Don't miss this session!

    Thursday, November 08, 2012, 12:00 PM - 12:30 PM – Bookstore
    Book signing event at the Bookstore corner with Alberto Ferrari, Marco Russo and Chris Webb
    Visit the bookstore and sign your copy of our Microsoft SQL Server 2012 Analysis Services: The BISM Tabular Model book.

    Thursday, November 08, 2012, 1:30 PM - 2:45 PM – Marco Russo – Room 611
    Near Real-Time Analytics with xVelocity (without DirectQuery) (BIA-312)
    What's the latency you can tolerate for your data? Discover the limit in Tabular without using DirectQuery, and learn how to optimize your data model and your queries for a near real-time analytical system. Not a trivial task, but more affordable than you might think.

    Friday, November 09, 2012, 9:45 AM - 11:00 AM
    Parent-Child Hierarchies in Tabular (BIA-301)
    Multidimensional has more advanced support for hierarchies than Tabular, but in reality you can do almost the same things by using data modeling, DAX functions and BIDS Helper!

    Friday, November 09, 2012, 1:00 PM - 2:15 PM – Marco Russo – Room 612
    Inside DAX Query Plans (BIA-403)
    Discover the query plan for your DAX query, and learn how to read it and how to optimize a DAX query using this information.

    If you meet us at the conference, stop us and say hello: it's always nice to know our readers!

    Read the article

  • DIVIDE vs division operator in #dax

    - by Marco Russo (SQLBI)
    Alberto Ferrari wrote an interesting article about DIVIDE performance in DAX. This new function was introduced in SQL Server Analysis Services 2012 SP1, so it is also available in Excel 2013 (which still doesn't have other features/fixes introduced by subsequent Cumulative Updates...). The idea is that instead of writing:

        IF ( Sales[Quantity] <> 0, Sales[Amount] / Sales[Quantity], BLANK () )

    you can write:

        DIVIDE ( Sales[Amount], Sales[Quantity] )

    There is a third optional argument in DIVIDE that defines the result in case the denominator (second argument) is zero; by default its value is BLANK, so I omitted the third argument in my example. Using DIVIDE is very important, especially when you use a measure in MDX (for example in an Excel PivotTable), because it raises the chance that the non empty evaluation of the result is done in bulk mode instead of cell-by-cell. However, from a DAX point of view, you might find it's better to use the standard division operator, removing the IF statement. I suggest you read Alberto's article, because you will find that an expression applying a filter using FILTER is faster than using CALCULATE, which is against any rule of thumb you might have read until now! Again, this is not always true and depends on many conditions. Trying to simplify: for a simple calculation, the query plan generated by FILTER could be more efficient, but, as usual, it depends, and 90% of the time using FILTER instead of CALCULATE produces slower performance. Do not take anything for granted, and always check the query plan when performance is your first concern!

    Read the article

  • How to show or direct a business analyst to a data modelling subject?

    - by AaronLS
    Our business analysts pushed hard to collect data through a spreadsheet. I am the programmer responsible for importing that data. Usually when they push hard for something like this, I never know how well it will work out until a few weeks later, when I have time assigned to work on the task of programming the import of the data. I have tried to do as much as possible along the way: named ranges, data validations, etc. But I usually don't have time to take a detailed look at all the data and compare it to the destination in the database to determine how well it matches up. A lot of times there will be a little table of items that I somehow have to relate to something else in the database, but there are no natural or business keys present that would allow me to do so. I make the best of it, trying to write something that can compare strings and make a best guess, and then go through the effort of creating interfaces for a user to match the imported data to the destination. I feel that if the business analysts were actually creating a data model, they would be forced to think about these relationships, and would have an appreciation for the need for natural or business keys to be part of the spreadsheet for the purposes of smoothly importing the data. The closest they come to business analysis is a big flat list of fields. That would be fine if it were like any other data dictionary and included data types and relationships, but it isn't; they are just a bunch of names, with no indication of what type of data they might hold, and it is up to me to guess. When I have pushed for more detail, they say that it is just busy work. How can I explain the importance of data modelling? How can I tell them what it is and how to do it? It feels impossible, because they don't have an appreciation for its importance. They do, however, usually have an interest in helping out in whatever way they can; it's just that this in particular has never gotten a motivated response.

    Read the article

  • Tender vs. Requirements vs. Solution Design

    - by Tom Tom
    Conventionally, which of the above documents is deemed to hold the most weight when it comes to system acceptance? I recently had a conversation along these lines: It was argued that the initial requirements / tender documentation should be used to determine system acceptance. It was said that the solution design only serves to describe the way in which the system will solve the problem, not the problem it will solve. Furthermore, it was argued that if requirements are missed during solution design, the requirements should be referenced during system acceptance and that if any requirements were missed then the original tender should be referenced. Conversely, I suggested that - while requirements may be based on the original tender - they supersede it once agreed with the stakeholders. Furthermore, during solution design, analysis is performed to address and refine these initial requirements, translating them into a system capable of meeting the actual requirements. Once signed off by the relevant users, this solution design should absolutely represent the requirements (by virtue of the fact that it's designed upon them) but actually supersedes them as the basis for system acceptance. Is one of the above arguments more valid than the other?

    Read the article

  • How many copies are needed to enlarge an array?

    - by user10326
    I am reading an analysis of dynamic arrays (from Skiena's algorithm manual), i.e. a structure where, each time we run out of space, we allocate a new array of double the size of the original. It describes the waste that occurs when the array has to be resized. It says that elements (n/2)+1 through n will be moved at most once or not at all. This is clear. Then, since half the elements move once, a quarter of the elements twice, and so on, the total number of movements M is given by:

        M = sum_{i=1}^{lg n} i * n / 2^i

    This seems to me to add more copies than actually happen. E.g. if we have the following:

        array of 1 element
        +--+
        |a |
        +--+

        double the array (2 elements)
        +--++--+
        |a ||b |
        +--++--+

        double the array (4 elements)
        +--++--++--++--+
        |a ||b ||c ||c |
        +--++--++--++--+

        double the array (8 elements)
        +--++--++--++--++--++--++--++--+
        |a ||b ||c ||c ||x ||x ||x ||x |
        +--++--++--++--++--++--++--++--+

        double the array (16 elements)
        +--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--+
        |a ||b ||c ||c ||x ||x ||x ||x ||  ||  ||  ||  ||  ||  ||  ||  |
        +--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--+

    We have the x elements copied 4 times, the c elements copied 4 times, the b element copied 4 times and the a element copied 5 times, so the total is 4+4+4+5 = 17 copies/movements. But according to the formula we should have 1*(16/2) + 2*(16/4) + 3*(16/8) + 4*(16/16) = 8+8+6+4 = 26 copies of elements for the enlargement of the array to 16 elements. Is this a mistake, or is the aim of the formula to provide a rough upper-bound approximation? Or am I misunderstanding something here?
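
    For reference, the sum above (reconstructed here from the worked numbers, since the original formula image was lost) is the textbook upper bound rather than an exact count. Written out in LaTeX, using the standard identity that the series i/2^i sums to 2:

        M \;=\; \sum_{i=1}^{\lg n} i \cdot \frac{n}{2^i}
          \;\le\; n \sum_{i=1}^{\infty} \frac{i}{2^i}
          \;=\; 2n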

    Read the article

  • C# performance analysis - how to count CPU cycles?

    - by Lirik
    Is this a valid way to do performance analysis? I want to get nanosecond accuracy and determine the performance of typecasting:

        class PerformanceTest
        {
            static double last = 0.0;
            static List<object> numericGenericData = new List<object>();
            static List<double> numericTypedData = new List<double>();

            static void Main(string[] args)
            {
                double totalWithCasting = 0.0;
                double totalWithoutCasting = 0.0;
                for (double d = 0.0; d < 1000000.0; ++d)
                {
                    numericGenericData.Add(d);
                    numericTypedData.Add(d);
                }

                Stopwatch stopwatch = new Stopwatch();
                for (int i = 0; i < 10; ++i)
                {
                    // Note: Start() resumes the watch without resetting it, so
                    // ElapsedTicks accumulates across all previous iterations.
                    stopwatch.Start();
                    testWithTypecasting();
                    stopwatch.Stop();
                    totalWithCasting += stopwatch.ElapsedTicks;

                    stopwatch.Start();
                    testWithoutTypeCasting();
                    stopwatch.Stop();
                    totalWithoutCasting += stopwatch.ElapsedTicks;
                }

                Console.WriteLine("Avg with typecasting = {0}", (totalWithCasting / 10));
                Console.WriteLine("Avg without typecasting = {0}", (totalWithoutCasting / 10));
                Console.ReadKey();
            }

            static void testWithTypecasting()
            {
                foreach (object o in numericGenericData)
                {
                    last = ((double)o * (double)o) / 200;
                }
            }

            static void testWithoutTypeCasting()
            {
                foreach (double d in numericTypedData)
                {
                    last = (d * d) / 200;
                }
            }
        }

    The output is:

        Avg with typecasting = 468872.3
        Avg without typecasting = 501157.9

    I'm a little suspicious... it looks like there is nearly no impact on the performance. Is casting really that cheap?

    Read the article

  • Sentiment analysis for twitter in python

    - by Ran
    I'm looking for an open source implementation, preferably in Python, of textual sentiment analysis (http://en.wikipedia.org/wiki/Sentiment_analysis). Is anyone familiar with such an open source implementation I can use? I'm writing an application that searches twitter for some search term, say "youtube", and counts "happy" tweets vs. "sad" tweets. I'm using Google's App Engine, so it's in Python. I'd like to classify the returned search results from twitter, and I'd like to do that in Python, but I haven't been able to find such a sentiment analyzer so far, specifically not in Python. If it's not in Python, hopefully I can translate it. Note that the texts I'm analyzing are VERY short; they are tweets, so ideally this classifier would be optimized for such short texts. BTW, twitter does support the ":)" and ":(" operators in search, which aim to do just this, but unfortunately the classification they provide isn't that great, so I figured I might give this a try myself. Thanks! BTW, an early demo is here and the code I have so far is here, and I'd love to open-source it with any interested developer.
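
    Pending a real library, a crude baseline in the spirit of twitter's ":)" / ":(" search operators takes only a few lines of Python. The word lists here are illustrative stand-ins, not a trained model:

        # Minimal keyword/emoticon classifier - a rough baseline, not a
        # replacement for a trained sentiment model.
        HAPPY = {":)", ":-)", ":d", "happy", "love", "great", "awesome", "good"}
        SAD = {":(", ":-(", "sad", "hate", "awful", "terrible", "bad"}

        def classify(tweet: str) -> str:
            tokens = tweet.lower().split()
            score = sum(t in HAPPY for t in tokens) - sum(t in SAD for t in tokens)
            return "happy" if score > 0 else "sad" if score < 0 else "neutral"

        print(classify("I love youtube :)"))      # happy
        print(classify("youtube is down, sad"))   # sad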

    Read the article

  • Usage of static analysis tools - with Clear Case/Quest

    - by boyd4715
    We are in the process of defining our software development process and wanted to get some feedback from the group on this topic. Our team is spread out across the US, Canada and India, and I would like to put in place some simple standard rules that all teams will apply to their code. We make use of Clear Case/Quest and RAD. I have been looking at PMD, CPP, Checkstyle and FindBugs as a start. My thought is to just put these into Ant and have the developers run them manually. I realize that doing it this way, you have to trust that each developer will actually do it. The other thought is to add some builders into the IDE which would run a subset of the rules (to keep the build process light), and then run a heavier set when they check in the code. Another idea is to make use of something like CruiseControl and have it set up to run these static analysis tools, along with the unit tests, whenever Clear Case/Quest is idle. I'm wondering if others have done this, whether it was successful, and what lessons were learned.

    Read the article

  • exclude dependencies when running sonar analysis

    - by achraf
    I have a test project requiring some heavy jars, which I put in ${M2_HOME}\test\src\main\resources\ and add to the pom.xml using:

        <dependency>
            <groupId>server</groupId>
            <artifactId>server</artifactId>
            <version>1.0</version>
            <scope>system</scope>
            <systemPath>${M2_HOME}\test\src\main\resources\server.jar</systemPath>
        </dependency>
        <dependency>
            <groupId>client</groupId>
            <artifactId>client</artifactId>
            <version>6.0</version>
            <scope>system</scope>
            <systemPath>${M2_HOME}\test\src\main\resources\client.jar</systemPath>
        </dependency>

    I want to know if it is possible to exclude them during Sonar analysis or, more generally, to analyze only the Java sources folder.

    Read the article

  • Syntactical analysis with Flex/Bison part 2

    - by Imran
    Hello, I need help with Lex/Yacc programming. I wrote a compiler that performs syntactical analysis on inputs of many statements. Now I have a specific problem. Given an input, the compiler produces the right output: which statement is used, constant operator, or a jmp instruction to a given label. Now, when an if statement comes, the first command (the one before the else) must be emitted; when the if condition is true, execution must then jump to the end, because the command after the else isn't needed; after this jmp, the second command must be emitted. I'll show it in an example, maybe you will understand what I mean:

        Input       adr.   Output
        if(x==0)    10     if(x==0)
          Wait 5    20     WAIT 5
        else        30     JMP 50
          Wait 1    40     WAIT 1
        end         50     END

    I have an idea: maybe I can do it with a special if rule like

        IF exp jmp_stmt_end stmt_seq END

    where, when the if statement appears in the input, the compiler recognizes the end of the statement and, like the jmp_stmt in my compiler (you can download the files from http://bitbucket.org/matrix/changed-tiny), jumps only to the end. I hope you understand my problem. Thanks.

    Read the article
