Search Results

Search found 32512 results on 1301 pages for 'object oriented analysis'.

Page 13 of 1301

  • C++ include statement required if defining a map in a header file.

    - by Justin
    I was doing a project for a computer course on programming concepts. The project was to be completed in C++ using the object-oriented designs we learned throughout the course. Anyhow, I have two files, symboltable.h and symboltable.cpp. I want to use a map as the data structure, so I define it in the private section of the header file. I #include <map> in the cpp file before I #include "symboltable.h". I get several errors from the compiler (MS VS 2008 Pro) when I go to debug/run the program, the first of which is:

        Error 1 error C2146: syntax error : missing ';' before identifier 'table' c:\users\jsmith\documents\visual studio 2008\projects\project2\project2\symboltable.h 22 Project2

    To fix this I had to #include <map> in the header file, which to me seems strange. Here are the relevant code files:

        // symboltable.h
        #include <map>

        class SymbolTable {
        public:
            SymbolTable() {}
            void insert(string variable, double value);
            double lookUp(string variable);
            void init(); // Added as part of the spec given in the conference area.
        private:
            map<string, double> table; // Our container for variables and their values.
        };

    and

        // symboltable.cpp
        #include <map>
        #include <string>
        #include <iostream>
        using namespace std;

        #include "symboltable.h"

        void SymbolTable::insert(string variable, double value) {
            table[variable] = value; // Creates a new map entry; if the variable name already exists, it overwrites the last value.
        }

        double SymbolTable::lookUp(string variable) {
            if (table.find(variable) == table.end()) // find() returns an iterator; if that's end(), we didn't find the variable.
                throw exception("Error: Uninitialized variable");
            else
                return table[variable];
        }

        void SymbolTable::init() {
            table.clear(); // Clears the map, removes all elements.
        }

    Read the article

  • How should I go about implementing a points-to analysis in Maude?

    - by reprogrammer
    I'm going to implement a points-to analysis algorithm, mainly based on the algorithm by Whaley and Lam. Whaley and Lam use a BDD-based implementation of Datalog to represent and compute the points-to relations. The following lists some of the relations that are used in a typical points-to analysis. Note that D(w, z) :- A(w, x), B(x, y), C(y, z) means D(w, z) is true if A(w, x), B(x, y), and C(y, z) are all true. BDDs are the data structure used to represent these relations.

        Relations
            input  vP0    (variable : V, heap : H)
            input  store  (base : V, field : F, source : V)
            input  load   (base : V, field : F, dest : V)
            input  assign (dest : V, source : V)
            output vP     (variable : V, heap : H)
            output hP     (base : H, field : F, target : H)

        Rules
            vP(v, h)      :- vP0(v, h)
            vP(v1, h)     :- assign(v1, v2), vP(v2, h)
            hP(h1, f, h2) :- store(v1, f, v2), vP(v1, h1), vP(v2, h2)
            vP(v2, h2)    :- load(v1, f, v2), vP(v1, h1), hP(h1, f, h2)

    I need to understand whether Maude is a good environment for implementing a points-to analysis. I noticed that Maude uses a BDD library called BuDDy, but it looks like Maude uses BDDs for a different purpose, namely unification. So I thought I might be able to use Maude instead of a Datalog engine to compute the relations of my points-to analysis. I assume Maude propagates independent information concurrently, and this concurrency could potentially make my points-to analysis faster than sequential processing of the rules. But I don't know the best way to represent my relations in Maude. Should I implement BDDs in Maude myself, or does Maude's internal BDD-based unification have the same effect?
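
    For reference, the four rules above amount to a fixpoint computation over sets of tuples. The following is a minimal sketch of that fixpoint in Python (my own illustration, independent of Maude, BDDs, or any Datalog engine; the relation names mirror the ones in the question):

        # Naive fixpoint over the points-to rules from the question.
        # vP0, assign, store and load are input relations (sets of tuples);
        # vP and hP are the computed output relations.
        def points_to(vP0, assign, store, load):
            vP = set(vP0)   # vP(v, h) :- vP0(v, h)
            hP = set()
            changed = True
            while changed:
                # vP(v1, h) :- assign(v1, v2), vP(v2, h)
                new_vP = {(v1, h) for (v1, v2) in assign
                          for (v, h) in vP if v == v2}
                # hP(h1, f, h2) :- store(v1, f, v2), vP(v1, h1), vP(v2, h2)
                new_hP = {(h1, f, h2) for (v1, f, v2) in store
                          for (va, h1) in vP if va == v1
                          for (vb, h2) in vP if vb == v2}
                # vP(v2, h2) :- load(v1, f, v2), vP(v1, h1), hP(h1, f, h2)
                new_vP |= {(v2, h2) for (v1, f, v2) in load
                           for (va, h1) in vP if va == v1
                           for (hb, fb, h2) in hP if hb == h1 and fb == f}
                changed = not (new_vP <= vP and new_hP <= hP)
                vP |= new_vP
                hP |= new_hP
            return vP, hP

        # Example: x = new A(); y = x; y.f = x
        # => y points to A, and field f of A points to A.
        vP, hP = points_to(vP0={("x", "A")}, assign={("y", "x")},
                           store={("y", "f", "x")}, load=set())
        print(sorted(vP))  # [('x', 'A'), ('y', 'A')]
        print(sorted(hP))  # [('A', 'f', 'A')]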

    Read the article

  • Google I/O 2011: Large-scale Data Analysis Using the App Engine Pipeline API

    Google I/O 2011: Large-scale Data Analysis Using the App Engine Pipeline API Brett Slatkin The Pipeline API makes it easy to analyze complex data using App Engine. This talk will cover how to build multi-phase Map Reduce workflows; how to merge multiple large data sources with "join" operations; and how to build reusable analysis components. It will also cover the API's concurrency model, how to debug in production, and built-in testing facilities. From: GoogleDevelopers Views: 3320 17 ratings Time: 51:39 More in Science & Technology
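
    The Pipeline API referenced here is a Python library in which each stage subclasses pipeline.Pipeline and implements run(); a generator-style run() can yield child pipelines and pass their outputs (futures) on to later stages, which is how multi-phase workflows and "join"-style fan-ins are built. The sketch below is only a rough illustration of that shape and is not material from the talk: the class names are invented, and the exact import path depends on how the library is added to a project.

        # Rough sketch of a fan-out/fan-in workflow with the Pipeline API.
        # Class names and wiring here are illustrative assumptions, not the
        # talk's examples; the import path may differ in your project layout.
        import pipeline


        class CountWords(pipeline.Pipeline):
            # First phase: count words in a single text shard.
            def run(self, text):
                counts = {}
                for word in text.split():
                    counts[word] = counts.get(word, 0) + 1
                return counts


        class MergeCounts(pipeline.Pipeline):
            # Fan-in ("join") phase: merge the per-shard dictionaries.
            def run(self, *shard_counts):
                merged = {}
                for counts in shard_counts:
                    for word, n in counts.items():
                        merged[word] = merged.get(word, 0) + n
                return merged


        class AnalyzeCorpus(pipeline.Pipeline):
            # Root workflow: fan out over the shards, then hand the futures to the merge.
            def run(self, *shards):
                partials = []
                for shard in shards:
                    partials.append((yield CountWords(shard)))
                yield MergeCounts(*partials)


        # job = AnalyzeCorpus("shard one text", "shard two text")
        # job.start()  # executes asynchronously on App Engine task queues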

    Read the article

  • Performance tuning of tabular data models in Analysis Services

    - by Greg Low
    More and more practical information around working with tabular data models is starting to appear as more and more sites get deployed. At SQL Down Under, we've already helped quite a few customers move to tabular data models in Analysis Services and have started to collect quite a bit of information on what works well (and what doesn't) in terms of performance of these models. We've also been running a lot of training on tabular data models. It was great to see a whitepaper on the performance of these models released today. Performance Tuning of Tabular Models in SQL Server 2012 Analysis Services was written by John Sirmon, Greg Galloway, Cindy Gross and Karan Gulati. You'll find it here: http://msdn.microsoft.com/en-us/library/dn393915.aspx

    Read the article

  • Code Analysis Rule Sets in Visual Studio 2010

    - by Anthony Trudeau
    Microsoft Visual Studio 2010 introduces the concept of rule sets when configuring code analysis. This is a valuable change from Visual Studio 2008 that I didn't even realize I wanted. Visual Studio 2008 by default selected all rules and then you had to remove rules on an item-by-item basis.

    The rule sets fall into logical groups including "Microsoft All Rules", "Microsoft Basic Correctness Rules", "Microsoft Security Rules", et al. And within the project properties you can select one rule set, multiple rule sets, or you can define your own rule set based upon another.

    Selecting a single rule set is obviously the easiest option. The default rule set when you create a new project is the "Microsoft Minimum Recommended Rules". However, in my opinion the recommended rules are just too permissive. For that reason you might want to change your rule set to "Microsoft All Rules" until you get around to creating your own rule set; or alternately you can select multiple rule sets, which is an option from the rule set combo box. The Visual Studio documentation has comprehensive help on what is contained within the rule sets.

    Creating your own rule set is easy if not obvious. You need to start a rule set from an existing rule set. To get started, select a rule set in the combo box within the Code Analysis tab of the project properties. I selected "Microsoft All Rules" for my rule set, but you may find it easier to start with the "Microsoft Minimum Recommended Rules" if your rules are on the more permissive side.

    Once your rule set is selected, click the Open button. This will display a dialog that is similar in composition to the rules selection from Visual Studio 2008. Browsing through the tree view, you can select or deselect individual rules within their categories, and you can indicate that the rules are flagged as errors instead of the default, which is a warning. A nice touch to the form is that you get a help pane when you select an individual rule. That helped me considerably when I first configured my rule set.

    Once you have finished selecting your rules, click the Save tool button, specify a location and name, and click the Save button on the Save As dialog. Once you're back on the Code Analysis tab, choose the Browse option within the combo box and open the file you just created.

    Read the article

  • Book Review: Expert Cube Development with Microsoft SQL Server 2008 Analysis Services

    - by Greg Low
    I spent last week on campus in Redmond with the SQL Server Analysis Services Maestro program. It was great to have a chance to focus on SSAS for a week. As part of that, I did quite a bit of reading as I had quite a bit of travelling time. Ironically, I re-read a few books. The first was Marco Russo, Alberto Ferrari and Chris Webb's book Expert Cube Development with Microsoft SQL Server 2008 Analysis Services. I've often told the BI classes I've been teaching that this is a really good book and...(read more)

    Read the article

  • How do you get past analysis paralysis when working on a new project?

    - by Cape Cod Gunny
    I've been struggling with how to get my project going. I've got an old software package that is in desperate need of a rewrite. I haven't compiled the source code since 2004. It still sells and it's stable, but it does require the “Run this program in compatibility mode for:” setting on a lot of the newer Windows systems. It's also one of those hard-coded 640 x 480 screen resolution programs. Yuck!

    I can't seem to get started with this rewrite. I'm constantly fiddling around with different things. I'll play around with different fluid layouts for a while. Then I start looking around at how the main menu should work/look. I quickly find out that there's this thing called "Cool Bars" and I'll spend hours playing with that. Then I start thinking about stuff like "Oh, I need to make sure that the screen sizes are preserved, so when the application gets relaunched it remembers how the screens were positioned." Which leads to: what happens if they have two monitors? Which leads to: what happens if they have a quad screen? Yikes, it's got to stop.

    I have always been a slow starter. I think about stuff long and hard up front. This has always plagued me. Once I get my mind made up then bam... I'm off and running. I'm looking for advice from other one-person software companies that can help someone like me get off to a quicker start.

    Read the article

  • Code analysis: Global project/assembly suppression

    - by klausbyskov
    I have several CA1704:IdentifiersShouldBeSpelledCorrectly warnings that I want to suppress. Basically they refer to the company name, which is deemed to be spelled incorrectly. The company name is part of several namespaces in my project, and in order to suppress all the warnings I would need to add a lot of suppressions to the GlobalSuppressions file. Is there any way to suppress all of these warnings with a single line, to avoid my GlobalSuppressions file becoming overly cluttered?

    Read the article

  • What static analysis tool do you prefer?

    - by glutz78
    Ideally, I'm looking for something that integrates into Visual Studio 2005, but if I could run it on the command line on Windows, or on Linux for gcc, that would be okay also. I'm looking for something in the $1000 range, so I understand I won't get the best tools available. But I would also prefer something better than cppcheck, which is free. Any ideas? Thanks.

    Read the article

  • ORM market analysis

    - by bonefisher
    I would like to hear about your experience with the popular ORM tools out there, like NHibernate, LLBLGen, EF, S2Q, Genom-e, LightSpeed, DataObjects.NET, OpenAccess, ... From my experience:

    - Genom-e is quite capable in terms of LINQ and performance, with good dev support.
    - EF lacks some key features like lazy loading, POCO support and persistence ignorance, but in 4.0 it may have overcome these.
    - DataObjects.NET: so far so good, although I found some bugs.
    - NHibernate: steep learning curve, no 100% LINQ support (unlike Genom-e and DataObjects.NET), but very supportive, extensible and mature.

    Read the article

  • Implementation of a general-purpose object structure (property bag)

    - by Thomas Wanner
    We need to implement a general-purpose object structure, much like an object in dynamic languages, that would give us the possibility of creating the whole object graph on the fly. This class has to be serializable and somewhat user friendly. So far we have experimented with a class derived from Dictionary<string, object>, using dot-notation paths to store properties and collections in the object tree. We have also found an article that implements something similar, but it doesn't seem to fit completely into our picture either. Do you know about some good implementations / libraries that deal with a similar problem, or do you have any (non-trivial) ideas that could help us with our own implementation? Also, I should probably say that we are using .NET 3.5, so we can't take advantage of the new features in .NET 4.0 like the dynamic type etc. (as far as I know it's also not possible to use any subset of it in a .NET 3.5 solution).
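
    As a language-neutral illustration of the approach described above (a dictionary-backed structure addressed by dot-notation paths, built on the fly), here is a minimal sketch in Python; the class and method names are mine, and serialization is shown with the standard json module rather than a .NET serializer.

        import json


        class PropertyBag:
            # Minimal nested property bag addressed by dot-notation paths.
            def __init__(self):
                self._data = {}

            def set(self, path, value):
                # "customer.address.city" -> nested dicts created on the fly.
                node = self._data
                parts = path.split(".")
                for part in parts[:-1]:
                    node = node.setdefault(part, {})
                node[parts[-1]] = value

            def get(self, path, default=None):
                node = self._data
                for part in path.split("."):
                    if not isinstance(node, dict) or part not in node:
                        return default
                    node = node[part]
                return node

            def to_json(self):
                return json.dumps(self._data)


        bag = PropertyBag()
        bag.set("customer.name", "Thomas")
        bag.set("customer.orders.count", 3)
        print(bag.get("customer.orders.count"))  # 3
        print(bag.to_json())  # {"customer": {"name": "Thomas", "orders": {"count": 3}}}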

    Read the article

  • ASP .NET Code analysis tool to check cross-site scripting

    - by Prashant
    I am aware of a tool that MS has provided which tells you about cross-site scripting attacks etc. The tool is http://www.microsoft.com/downloads/details.aspx?FamilyId=0178e2ef-9da8-445e-9348-c93f24cc9f9d&displaylang=en But are there other tools that you have used for ASP.NET applications which do something similar to this, and which one is most widely used in ASP.NET applications?

    Read the article

  • Alternatives to CAT.NET for website security analysis

    - by Gavin Miller
    I'm looking for an alternative tool to CAT.NET for performing static security scans on .NET code. Currently the CAT.NET tooling/development is at a somewhat fragile stage and doesn't offer the reliability that I'm looking for. Are there any alternative static code analyzers that you use for detecting security issues?

    Read the article

  • Fowler Analysis Patterns lately?

    - by Berryl
    As much as I've always loved this book, I've always wished there were more meaty examples of how to apply some of the concepts it covers. Is anyone aware of anything out there worth looking at that attempts to do that? Cheers, Berryl

    Read the article

  • spl_object_hash for PHP < 5.2 (unique ID for object instances)

    - by Rowan
    I'm trying to get unique IDs for object instances in PHP 5+. The function spl_object_hash() is available from PHP 5.2, but I'm wondering if there's a workaround for older versions. There are a couple of functions in the comments on php.net, but they're not working for me. The first (simplified):

        function spl_object_hash($object){
            if (is_object($object)){
                return md5((string)$object);
            }
            return null;
        }

    does not work with native objects (such as DOMDocument), and the second:

        function spl_object_hash($object){
            if (is_object($object)){
                ob_start();
                var_dump($object);
                $dump = ob_get_contents();
                ob_end_clean();
                if (preg_match('/^object\(([a-z0-9_]+)\)\#(\d)+/i', $dump, $match)) {
                    return md5($match[1] . $match[2]);
                }
            }
            return null;
        }

    looks like it could be a major performance buster! Does anybody have anything up their sleeve?

    Read the article

  • Power Analysis in [R] for Two-Way Anova

    - by Thomas
    I am trying to calculate the necessary sample size for a 2x2 factorial design. I have two questions.

    1) I am using the pwr package and its one-way ANOVA function to calculate the necessary sample size, using the following code:

        pwr.anova.test(k = , n = , f = , sig.level = , power = )

    However, I would like to look at a two-way ANOVA, since this is more efficient at estimating group means than a one-way ANOVA. There is no two-way ANOVA function that I could find. Is there a package or routine in [R] to do this?

    2) Moreover, am I safe in assuming that since I am using a one-way ANOVA power calculation, the sample size will be more conservative (i.e. larger)?
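
    When no closed-form routine is available, one general fallback is simulation-based power analysis: repeatedly simulate data from assumed cell means and error SD, fit the two-way ANOVA, and record how often the effect of interest is significant. The sketch below does this in Python with statsmodels rather than R's pwr package; the cell means, error SD and per-cell sample size are placeholder assumptions to replace with your own.

        # Simulation-based power for the A main effect in a balanced 2x2 design.
        # The assumed cell means, sigma and n_per_cell are illustrative only.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf
        from statsmodels.stats.anova import anova_lm

        def simulated_power(cell_means, sigma, n_per_cell,
                            alpha=0.05, n_sims=1000, seed=1):
            rng = np.random.default_rng(seed)
            hits = 0
            for _ in range(n_sims):
                rows = []
                for (a, b), mu in cell_means.items():
                    for v in rng.normal(mu, sigma, n_per_cell):
                        rows.append({"A": a, "B": b, "y": v})
                df = pd.DataFrame(rows)
                fit = smf.ols("y ~ C(A) * C(B)", data=df).fit()
                table = anova_lm(fit, typ=2)
                if table.loc["C(A)", "PR(>F)"] < alpha:
                    hits += 1
            return hits / n_sims

        # Assumed effect: factor A shifts the mean by 2 units, no B or A:B effect.
        cell_means = {("a1", "b1"): 10.0, ("a1", "b2"): 10.0,
                      ("a2", "b1"): 12.0, ("a2", "b2"): 12.0}
        print(simulated_power(cell_means, sigma=4.0, n_per_cell=20))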

    Read the article

  • DMX Analysis Services question

    - by user282382
    Hi, I have two mining models, both time series. One is [Company_Inputs] and the other is [Booking_Projections]. What I want to do is use EXTEND_MODEL_CASES to join the results of [Company_Inputs] as the extended cases. So basically something like:

        Select Flattened
            PredictTimeSeries([Bookings], 1, 6, EXTEND_MODEL_CASES)
        From [Booking_Projections]
        Natural Prediction Join
            (Select Flattened PredictTimeSeries([Metric1], 1, 6)
             From [Company_Inputs]) AS T

    This code of course doesn't work, but the idea is to use the predictions made from [Company_Inputs] as cases for predicting future values of [Booking_Projections]. If anyone has an idea of how I can accomplish this I would appreciate it very much.

    Read the article

  • Object Oriented Perl interface to read from and write to a socket

    - by user654967
    I need a Perl client-server implementation as a wrapper for a server in C#. A Perl script passes the server address, port number and an input string to a module; this module has to create the socket and send the input string to the server. The data sent has to follow ISO-8859-1 encoding. On receiving the information, the client has to first read 3 bytes, then the next 8 bytes; these hold the length of the data that has to be received next, so based on that length the client has to read the remaining data. Each piece of data that is read has to be stored in a variable and sent to another module for further processing. Currently this is what my Perl client looks like, which I'm sure isn't right. Could someone tell me how to do this and set me on the right direction?

        sub WriteInfo {
            my ($addr, $port, $Input) = @_;
            $socket = IO::Socket::INET->new(
                Proto    => "tcp",
                PeerAddr => $addr,
                PeerPort => $port,
            );
            unless ($socket) { die "cannot connect to remote" }
            while (1) {
                $socket->send($Input);
            }
        }

        sub ReadData {
            while (1) {
                my $ExecutionResult = $socket->recv($recv_data, 3);
                my $DataLength      = $socket->recv($recv_data, 8);
                $DataLength =~ s/^0+//;
                my $decval = hex($DataLength);
                my $Data   = $socket->recv($recv_data, $decval);
                return ($Data);
            }
        }

    thanks a lot.. Archer
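
    The framing described above (a 3-byte execution result, an 8-byte length field, then a payload of that length) is independent of Perl. As a language-neutral sketch, assuming the length field is hexadecimal as the hex() call in the question suggests, the receive side looks roughly like this in Python:

        import socket

        def recv_exact(sock, n):
            # Read exactly n bytes; recv() may legally return fewer, so loop.
            buf = b""
            while len(buf) < n:
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("peer closed the connection early")
                buf += chunk
            return buf

        def read_message(sock):
            status = recv_exact(sock, 3)                             # 3-byte execution result
            length = int(recv_exact(sock, 8).decode("ascii"), 16)    # 8-byte hex length
            payload = recv_exact(sock, length).decode("iso-8859-1")  # data of that length
            return status, payload

        def write_message(sock, text):
            sock.sendall(text.encode("iso-8859-1"))  # ISO-8859-1 as required

        # Usage (host and port are placeholders):
        # with socket.create_connection(("server.example", 9000)) as s:
        #     write_message(s, "some input string")
        #     status, data = read_message(s)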

    Read the article

  • Visual Studio 2008 profiler analysis - missing time

    - by Scott Vercuski
    I ran the Visual Studio 2008 profiler against my ASP.NET application and came up with the following result set:

        CURRENT FUNCTION                                     TIME (msec)
        ---------------------------------------------------|------------
        Data.GetItem(params)                                |  10,158.12

        Functions that were called by Data.GetItem(params)   TIME (msec)
        ---------------------------------------------------|------------
        Model.GetSubItem(params)                            |       0.83
        Model.GetSubItem2(params)                           |       0.77
        Model.GetSubItem3(params)                           |       0.76
        etc.

    The issue I'm facing is that the times of the functions called by Data.GetItem(params) do not sum up to the 10,158.12 msec total. This would lead me to believe that the bulk of the time is actually spent executing the code within that method. My question is: does Visual Studio provide a way to analyze the method itself, so I can see which sections of code are taking the longest? If it does not, are there any recommended tools to do this, or should I start writing my own timing scripts? Thank you

    Read the article

  • Investment advice data dump analysis

    - by portoalet
    For my year-end pet project, I'd like to analyze investment advice and its correlation to stock market performance. The problem is: where do I get a dump of investment advice data (for free)? Something like the stackoverflow.com data dump would be nice. Or maybe it's easier to do distributed crawling and crawl public finance webpages for investment advice? By investment advice I mean buy/sell advice for stocks/forex, issued by an institution or investment advisor.

    Read the article
