Search Results

Search found 3402 results on 137 pages for 'statistical analysis soft'.


  • The best way to organize WPF projects

    - by Mike
    Hello everybody, I've recently started developing a new application in WPF, and I still don't know the best way to organize it so that I can be productive in both Visual Studio and Expression Blend. I've noticed two annoying things I'd like to solve: I'm using Code Contracts in my projects, and when I run the project from Expression Blend, it launches the static analysis of the code. How can I stop that? Which project configuration does Blend use by default? I've tried disabling Code Contracts in a new configuration; it works in VS, where the static analysis is no longer launched, but it has no effect in Blend. I've also thought about splitting the Windows application into two parts: the first containing the WPF views (app.exe) and the second being the core of the project with the logic code (app.core.dll), and I would open only the former project in Blend. Any thoughts about that? Thanks in advance, Mike

  • In MySQL, what is the most effective query design for joining large tables with many-to-many relationships?

    - by lighthouse65
    In our application, we collect data on automotive engine performance -- basically source data on engine performance based on the engine type, the vehicle running it and the engine design. Currently, the basis for new row inserts is an engine on-off period; we monitor performance variables based on a change in engine state from active to inactive and vice versa. The related engineState table looks like this:

        +---------+--------+--------------+---------------------+---------------------+-----------------+
        | vehicle | engine | engine_state | state_start_time    | state_end_time      | engine_variable |
        +---------+--------+--------------+---------------------+---------------------+-----------------+
        | 080025  | E01    | active       | 2008-01-24 16:19:15 | 2008-01-24 16:24:45 |             720 |
        | 080028  | E02    | inactive     | 2008-01-24 16:19:25 | 2008-01-24 16:22:17 |             304 |
        +---------+--------+--------------+---------------------+---------------------+-----------------+

    For a specific analysis, we would like to analyze table content based on a row granularity of minutes, rather than the current basis of active/inactive engine state. For this, we are thinking of creating a simple productionMinute table with a row for each minute in the period we are analyzing, and joining the productionMinute and engineState tables on the date-time columns in each table. So if our period of analysis is from 2009-12-01 to 2010-02-28, we would create a new table with 129,600 rows, one for each minute of each day of that three-month period. The first few rows of the productionMinute table:

        +-------------------+
        | production_minute |
        +-------------------+
        | 2009-12-01 00:00  |
        | 2009-12-01 00:01  |
        | 2009-12-01 00:02  |
        | 2009-12-01 00:03  |
        +-------------------+

    The join between the tables would be:

        engineState AS es
        LEFT JOIN productionMinute AS pm
               ON es.state_start_time <= pm.production_minute
              AND pm.production_minute <= es.state_end_time

    This join, however, raises several issues: the engineState table has 5 million rows and the productionMinute table has 130,000 rows; when an engineState row spans more than one minute (i.e. the difference between es.state_start_time and es.state_end_time is greater than one minute), as in the example above, multiple productionMinute rows join to a single engineState row; and when more than one engine is in operation during any given minute, also as in the example above, multiple engineState rows join to a single productionMinute row. In testing our logic with only a small extract (one day rather than three months for the productionMinute table), the query takes over an hour to run. To improve performance so that querying three months of data becomes feasible, our thought was to create a temporary table from engineState, eliminating any data that is not critical for the analysis, and to join that temporary table to productionMinute. We are also planning to experiment with different joins -- specifically an inner join -- to see if that improves performance. What is the best query design for joining tables with the many-to-many relationship between the join predicates outlined above? What is the best join type (left/right, inner)?

  • Dynamic function docstring

    - by Tom Aldcroft
    I'd like to write a Python function that has a dynamically created docstring. In essence, for a function func() I want func.__doc__ to be a descriptor that calls a custom __get__ function to create the docstring on request. Then help(func) should return the dynamically generated docstring. The context here is writing a Python package that wraps a large number of command-line tools in an existing analysis package. Each tool becomes a similarly named module function (created via a function factory and inserted into the module namespace), with the function documentation and interface arguments dynamically generated via the analysis package.
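
    For illustration only (not from the question), a minimal sketch of the factory approach in plain Python; the tool names, parameters, and the build_docstring helper are all hypothetical. It sets __doc__ once at wrapper-creation time; a docstring computed lazily on every help() call would instead need a callable class whose __doc__ is a property.

        def build_docstring(tool_name, params):
            # Hypothetical helper: in a real package this text would come from the
            # analysis package's own metadata.
            lines = ["Run the '%s' command line tool." % tool_name, "", "Parameters:"]
            lines += ["  %s -- %s" % (name, desc) for name, desc in params]
            return "\n".join(lines)

        def make_tool(tool_name, params):
            # Factory: returns a wrapper function whose docstring is generated.
            def tool(**kwargs):
                # A real wrapper would build and run the command line here.
                return tool_name, kwargs
            tool.__name__ = tool_name
            tool.__doc__ = build_docstring(tool_name, params)
            return tool

        # Insert a generated function into the module namespace and inspect its help.
        extract = make_tool("extract", [("infile", "input event file"),
                                        ("clobber", "overwrite existing output")])
        help(extract)   # shows the generated docstring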

  • ANCOVA in Python with Scipy/Numpy stats

    - by Shax
    I would like to know a way of performing ANCOVA (analysis of covariance) using Python with scipy. It is basically a statistical comparison of regression lines. I know Python can do ANOVA, and it can also do regression line fitting with scipy.stats. I'm not sure how to put those together to get an effective ANCOVA, though, if it is possible. Regards, Shax
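
    As an illustrative sketch only (it leans on statsmodels and pandas rather than pure scipy, and the data and column names are invented): fit one model with a slope-by-group interaction to compare the regression lines, and one without it to test the adjusted group difference.

        import pandas as pd
        import statsmodels.formula.api as smf

        # Toy data: y is the response, x the covariate, group the treatment factor.
        df = pd.DataFrame({
            "y":     [2.1, 2.9, 3.8, 4.4, 1.5, 2.2, 3.1, 3.6],
            "x":     [1.0, 2.0, 3.0, 4.0, 1.0, 2.0, 3.0, 4.0],
            "group": ["a", "a", "a", "a", "b", "b", "b", "b"],
        })

        # Interaction model: the x:C(group) term tests whether the slopes differ.
        slopes = smf.ols("y ~ x * C(group)", data=df).fit()
        print(slopes.summary())

        # Classic ANCOVA model: common slope, C(group) tests the adjusted group effect.
        ancova = smf.ols("y ~ x + C(group)", data=df).fit()
        print(ancova.summary())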

  • Keeping track of changes - Django

    - by RadiantHex
    Hi folks! I have various models that I would like to keep track of and collect statistical data on. The problem is how to store the changes over time. I have thought of a few alternatives: storing a log in a TextField and updating it every time the model is saved; pickling a list and storing it in a TextField; or saving log files on disk. What are your suggestions?
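
    One more alternative, sketched here purely as an illustration (model and field names are invented, and it assumes a reasonably modern Django): write one row to a separate log table from a post_save signal, then aggregate statistics from that table later with the ORM.

        from django.db import models
        from django.db.models.signals import post_save
        from django.dispatch import receiver

        class Widget(models.Model):
            name = models.CharField(max_length=100)
            score = models.IntegerField(default=0)

        class WidgetLog(models.Model):
            # One row per save of the tracked model.
            widget = models.ForeignKey(Widget, on_delete=models.CASCADE)
            score = models.IntegerField()
            logged_at = models.DateTimeField(auto_now_add=True)

        @receiver(post_save, sender=Widget)
        def record_change(sender, instance, **kwargs):
            WidgetLog.objects.create(widget=instance, score=instance.score)

        # Later, statistics come straight from the ORM, e.g.:
        #   WidgetLog.objects.filter(widget=some_widget).count()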

  • Vim, LaTeX, word-wrapping, and version control

    - by Bkkbrad
    I'm writing a LaTeX document in vim, and I have it hard wrapping at 80 characters to make reading easier. However, this causes problems with tracking changes in version control. For example, inserting "Lorem ipsum" at the beginning of this text:

        1 Dolor sit amet, consectetur adipiscing elit. Phasellus bibendum lobortis lectus
        2 quis porta. Aenean vestibulum magna vel purus laoreet at molestie massa
        3 suscipit. Vestibulum vestibulum, mauris nec convallis ultrices, tellus sapien
        4 ullamcorper elit, dignissim consectetur justo tellus et nunc.

    results in:

        1 Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus bibendum
        2 lobortis lectus quis porta. Aenean vestibulum magna vel purus laoreet at
        3 molestie massa suscipit. Vestibulum vestibulum, mauris nec convallis ultrices,
        4 tellus sapien ullamcorper elit, dignissim consectetur justo tellus et nunc.

    When I review this change in git, it tells me that all the lines of the paragraph have changed because of the re-wrapping, even though only one semantic change has occurred. One way around this problem is to put every sentence on its own line. This looks the same in the rendered document, but the source is now harder to read, because the lines have quite different lengths:

        1 Lorem ipsum dolor sit amet, consectetur adipiscing elit.
        2 Phasellus bibendum lobortis lectus quis porta.
        3 Aenean vestibulum magna vel purus laoreet at molestie massa suscipit.
        4 Vestibulum vestibulum, mauris nec convallis ultrices, tellus sapien ullamcorper elit, dignissim consectetur justo tellus et nunc.

    (If I soft wrap at 80, things still look bad, just in a different way.) Is it possible to have my text on disk with one newline per sentence, but display and edit it in vim as if the text of each paragraph were one long line, soft wrapped at 80 characters? I assume it requires some vim-foo rather than tweaking git or LaTeX.

  • Simple database design and LINQ

    - by Anders Svensson
    I have very little experience designing databases, and now I want to create a very simple database that does the same thing I previously did with XML. Here's the XML:

        <services>
          <service type="writing">
            <small>125</small>
            <medium>100</medium>
            <large>60</large>
            <xlarge>30</xlarge>
          </service>
          <service type="analysis">
            <small>56</small>
            <medium>104</medium>
            <large>200</large>
            <xlarge>250</xlarge>
          </service>
        </services>

    Now, I wanted to create the same thing in a SQL database, and started with this (four columns and two rows):

        ServiceType   Small   Medium   Large
        Writing       125     100      60
        Analysis      56      104      200

    This didn't work too well, since I then wanted to use LINQ to select, say, the Large value for Writing (60). But I couldn't use LINQ for this (as far as I know) with a variable for the size (see the parameters in the method below). I could only do that if I had a column like "Size" where Small, Medium, and Large would be the values. But that doesn't feel right either, because then I would get several rows with ServiceType = Writing (3 in this case, one for each size), and the same for Analysis. And if I were to add more service types I would have to do the same. Simply repetitive... Is there any smart way to do this using relationships or something? Using the second design above (although not good), I could use the following LINQ to select a value with parameters sent to the method:

        protected int GetHourRateDB(string serviceType, Size size)
        {
            CalculatorLinqDataContext context = new CalculatorLinqDataContext();
            var data = (from calculatorData in context.CalculatorDatas
                        where calculatorData.Service == serviceType
                              && calculatorData.Size == size.ToString()
                        select calculatorData).Single();
            return data.Hours;
        }

    But if there is another, better design, could you please also describe how to do the same selection using LINQ with that design? Please keep in mind that I am a rookie at database design, so please be as explicit and pedagogical as possible :-) Thanks! Anders

  • Finding C++ static initialization order problems

    - by Fred Larson
    We've run into some problems with the static initialization order fiasco, and I'm looking for ways to comb through a whole lot of code to find possible occurrences. Any suggestions on how to do this efficiently? Edit: I'm getting some good answers on how to SOLVE the static initialization order problem, but that's not really my question. I'd like to know how to FIND objects that are subject to this problem. Evan's answer seems to be the best so far in this regard; I don't think we can use valgrind, but we may have memory analysis tools that could perform a similar function. That would catch problems only where the initialization order is wrong for a given build, and the order can change with each build. Perhaps there's a static analysis tool that would catch this. Our platform is the IBM XL C/C++ compiler running on AIX.

  • How do you measure latency in low-latency environments?

    - by Ajaxx
    Here's the setup... Your system is receiving a stream of data that contains discrete messages (usually between 32 and 128 bytes per message). As part of your processing pipeline, each message passes through two physically separate applications, which exchange the data using a low-latency approach (such as messaging over UDP, or RDMA), and it finally reaches a client via the same mechanism. Assuming you can inject yourself at any level, including wire protocol analysis, what tools and/or techniques would you use to measure the latency of your system? As part of this, I'm assuming that every message delivered to the system results in a corresponding (though not equivalent) message being pushed through the system and delivered to the client. The only tool I've seen on the market like this is TS-Associates TipOff. I'm sure that with the right access you could probably measure the same information using a wire analysis tool (a la Wireshark) and the right dissectors, but is this the right approach, or are there any commodity solutions that I can use?
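
    Alongside capture-based tools, the cheapest technique is software instrumentation: stamp each message where it enters the system, stamp it again at delivery, and report percentiles. A toy sketch of that idea only (the timing points and counts are placeholders, and software clocks add their own jitter, which is why capture appliances with hardware timestamps exist):

        import time
        import statistics

        def percentile(samples, pct):
            ordered = sorted(samples)
            index = int(round(pct / 100.0 * (len(ordered) - 1)))
            return ordered[index]

        latencies_us = []
        for _ in range(10000):
            t_ingress = time.perf_counter()    # stamp taken when the message enters
            # ... the message would pass through both applications here ...
            t_delivery = time.perf_counter()   # stamp taken when the client receives it
            latencies_us.append((t_delivery - t_ingress) * 1e6)

        print("median %.2f us, p99 %.2f us, max %.2f us" % (
            statistics.median(latencies_us),
            percentile(latencies_us, 99),
            max(latencies_us)))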

  • Can I ask PostgreSQL to ignore errors within a transaction?

    - by fmark
    I use PostgreSQL with the PostGIS extensions for ad-hoc spatial analysis. I generally construct and issue SQL queries by hand from within psql. I always wrap an analysis session in a transaction, so if I issue a destructive query I can roll it back. However, when I issue a query that contains an error, it cancels the transaction. Any further queries elicit the following warning:

        ERROR: current transaction is aborted, commands ignored until end of transaction block

    Is there a way I can turn this behaviour off? It is tiresome to roll back the transaction and rerun previous queries every time I make a typo.
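
    In case it helps frame answers: the usual workaround is savepoints. If every statement is preceded by a savepoint, a failed statement can be rolled back to that savepoint without aborting the surrounding transaction (psql's ON_ERROR_ROLLBACK variable automates exactly this). A rough sketch of the savepoint pattern from Python/psycopg2, with placeholder connection details and table names, not the asker's actual setup:

        import psycopg2

        conn = psycopg2.connect("dbname=spatial")   # placeholder DSN; autocommit stays off
        cur = conn.cursor()

        def run(sql):
            # Wrap one statement in a savepoint so a typo only undoes that statement.
            cur.execute("SAVEPOINT before_stmt")
            try:
                cur.execute(sql)
                rows = cur.fetchall() if cur.description else None
            except psycopg2.Error as exc:
                cur.execute("ROLLBACK TO SAVEPOINT before_stmt")
                print("statement failed, transaction still usable:", exc)
                return None
            cur.execute("RELEASE SAVEPOINT before_stmt")
            return rows

        print(run("SELECT 1"))
        run("SELEC oops")         # syntax error: rolled back to the savepoint only
        print(run("SELECT 2"))    # still works; no 'transaction is aborted' error
        conn.rollback()           # discard the ad-hoc session at the end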

  • Get highest frequency terms from Lucene index

    - by Julia
    Hello! I need to extract the terms with the highest frequencies from several Lucene indexes, to use them for some semantic analysis. So I want to get maybe the top 30 most frequently occurring terms (I still haven't decided on a threshold; I will analyze the results) and their per-index counts. I am aware that I might lose some precision because of potentially dropped duplicates, but for now let's say I am OK with that. For the proposed solutions (needless to say, maybe), speed is not important, since I would be doing static analysis. I would put the accent on simplicity of implementation, because I'm not so skilled with Lucene (not a programming guru either :/ ) and can't wrap my mind around many of its concepts. I can't find any code samples for anything similar, so I will very much appreciate any concrete advice (code, pseudocode, links to code samples...)! Thank you!

  • GIS: When and why to use ArcObjects over GDAL programming to work with ArcGIS rasters and vectors?

    - by anotherobject
    I'm just starting out with GDAL + Python to support operations that cannot be done with ArcGIS Python geoprocessing scripting. Mainly I am doing spatial modeling/analysis/editing of raster and vector data. I am a bit confused about when ArcObjects development is required versus when GDAL can be used. Is there functionality in ArcObjects that GDAL does not provide? Is the opposite true too? I am assuming that ArcObjects are more useful for developing online tools, versus desktop analysis and modeling where the difference is more a matter of preference? In my case I prefer GDAL because of its Python support, which I believe ArcObjects lack. Thanks!
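
    For a sense of what the GDAL/OGR Python bindings feel like for this kind of work, a minimal sketch (file names, layers, and fields are placeholders):

        from osgeo import gdal, ogr

        # Raster side: read a band into a NumPy array for modeling/analysis.
        raster = gdal.Open("elevation.tif")
        band = raster.GetRasterBand(1)
        values = band.ReadAsArray()
        print(raster.RasterXSize, raster.RasterYSize, values.mean())

        # Vector side: iterate features, reading attributes and geometry.
        vectors = ogr.Open("parcels.shp")
        layer = vectors.GetLayer(0)
        for feature in layer:
            geom = feature.GetGeometryRef()
            print(feature.GetFID(), geom.GetArea())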

  • Data Mining open source tools

    - by Andriyev
    Hi, I'm about to take up a project that involves data mining. Before I jump in, I wanted to look around for different data mining tools (preferably open source) that allow web-based reporting. In my scenario all the data would be provided to me, so I'm not supposed to crawl for it. In a nutshell, I am looking for a tool that does data analysis and web-based reporting, and provides some kind of dashboard and mining features. I have worked with Microsoft Analysis Services and BOXI, and of late I have been looking at Pentaho, which seems to be a good option. Please share your experiences with any such tools you know of. Cheers

  • WebService or WebRequest for database updates in ASP.NET

    - by eugeneK
    I'm currently building a system that will gather statistical reports from different sites for every user transaction there. My question is which approach would be better to implement, in your experience. All of the websites that will report data to the statistics site are on my servers. So what is better: to use WebRequest to send GET data to a page, or to use a web service for that? Thanks

  • Using a string inside the DocumentBuilder parse method (need it for parsing XML using XPath)

    - by dierre
    Hi guys! I'm trying to create a RESTful web service using a Java servlet. The problem is that I have to pass a request to a web server via the POST method, and the content of this request is not a parameter but the body itself. So I basically send something like this from Ruby:

        url = URI.parse(@host)
        req = Net::HTTP::Post.new('/WebService/WebServiceServlet')
        req['Content-Type'] = "text/xml"
        # req.basic_auth 'account', 'password'
        req.body = data
        response = Net::HTTP.start(url.host, url.port){ |http| puts http.request(req).body }

    Then I have to retrieve the body of this request in my servlet. I use the classic readLine, so I end up with a string. The problem is when I have to parse it as XML:

        private void useXML(final String soft, final PrintWriter out)
                throws ParserConfigurationException, SAXException, IOException,
                       XPathExpressionException, FileNotFoundException {
            DocumentBuilderFactory domFactory = DocumentBuilderFactory.newInstance();
            domFactory.setNamespaceAware(true); // never forget this!
            DocumentBuilder builder = domFactory.newDocumentBuilder();
            Document doc = builder.parse(soft);
            XPathFactory factory = XPathFactory.newInstance();
            XPath xpath = factory.newXPath();
            XPathExpression expr = xpath.compile("//software/text()");
            Object result = expr.evaluate(doc, XPathConstants.NODESET);
            NodeList nodes = (NodeList) result;
            for (int i = 0; i < nodes.getLength(); i++) {
                out.println(nodes.item(i).getNodeValue());
            }
        }

    The problem is that builder.parse() accepts parse(File f), parse(InputSource is), and parse(InputStream is). Is there any way I can turn my XML string into an InputSource or something like that? I know it could be a dummy question, but Java is not my thing; I'm forced to use it and I'm not very skilled.

  • How to allow an unnamed user in an svn authz file?

    - by dtrosset
    I have a Subversion server running with Apache. It authenticates users using LDAP in the Apache configuration and uses SVN authorization to limit user access to certain repositories. This works perfectly.

    Apache:

        DAV svn
        SVNParentPath /srv/svn
        SVNListParentPath Off
        SVNPathAuthz Off
        AuthType Basic
        AuthName "Subversion Repository"
        AuthBasicProvider ldap
        AuthLDAPBindDN       # private stuff
        AuthLDAPBindPassword # private stuff
        AuthLDAPURL          # private stuff
        Require valid-user
        AuthzSVNAccessFile /etc/apache2/dav_svn.authz

    Subversion:

        [groups]
        soft = me, and, all, other, developpers

    Adding anonymous access from one machine: Now, I have a service I want to set up (Rietveld, for code reviews) that needs anonymous access to the repository. As this is a web service, accesses always come from the same server, so I added Apache configuration to allow all accesses from that machine. This did not work until I added an additional line in the authorization file to allow read access to the user "-".

    Apache:

        <Limit GET PROPFIND OPTIONS REPORT>
            Order allow,deny
            Allow from # private IP address
            Satisfy Any
        </Limit>

    Subversion:

        [Software:/]
        @soft = rw
        - = r    # <-- This is the added line

    Before I added this, all users were authenticated and thus had a name. Now, some accesses are done without a user name! I found this "-" user name in the Apache log files. But is this line equal to * = r, which I absolutely do not want to enable, or does it only allow the anonymous, unnamed user (who is allowed access only from the Rietveld server)?

  • Is there a Java Descriptor-like thing in .NET?

    - by Sun Liwen
    I'm working on a static analysis tool for .NET assemblies. In Java, there is a Descriptor that can be used to represent a method or field as a string with a specified grammar. For example, the field double d[][][]; has the descriptor [[[D. This is especially useful when doing bytecode analysis, because it is easy to describe. Is there a similar thing in the .NET CLR? Or is there a better way to achieve this? Thanks!

  • Statistical graph for my website

    - by rabeea
    I have a freelance-work website where programmers bid on projects. I want to draw a bell curve / normal distribution chart of the bids placed on each project, to show the percentage of bids higher or lower than the average bid. Is there any statistical tool available that could take the data from the site and draw such a graph?
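
    If the site's backend can pull the bids for a project out of the database, one lightweight option is Python with NumPy and matplotlib. A rough sketch with invented bid values (not a finished integration):

        import numpy as np
        import matplotlib.pyplot as plt

        bids = np.array([250, 300, 320, 400, 410, 450, 500, 620, 700])  # made-up bids
        mu, sigma = bids.mean(), bids.std()

        above = (bids > mu).mean() * 100      # percentage of bids above the average
        print("average %.0f; %.0f%% of bids above, %.0f%% below" % (mu, above, 100 - above))

        # Histogram of the bids plus a fitted normal (bell) curve.
        x = np.linspace(bids.min(), bids.max(), 200)
        pdf = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
        plt.hist(bids, bins=8, density=True, alpha=0.5, label="bids")
        plt.plot(x, pdf, label="fitted normal curve")
        plt.axvline(mu, linestyle="--", label="average bid")
        plt.legend()
        plt.savefig("bid_distribution.png")   # image the website could serve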

  • How to exclude a filter from a facet?

    - by gjb
    I have come from a Solr background and am trying to find the equivalent of "tagging" and "excluding" in Elasticsearch. In the following example, how can I exclude the price filter from the calculation of the prices facet? In other words, the prices facet should take into account all of the filters except for price.

        {
          "query" : {
            "filtered" : {
              "query" : { "match_all" : {} },
              "filter" : {
                "and" : [
                  { "term" : { "colour" : "Red" } },
                  { "term" : { "feature" : "Square" } },
                  { "term" : { "feature" : "Shiny" } },
                  { "range" : { "price" : { "from" : "10", "to" : "20" } } }
                ]
              }
            }
          },
          "facets" : {
            "colours" : { "terms" : { "field" : "colour" } },
            "features" : { "terms" : { "field" : "feature" } },
            "prices" : { "statistical" : { "field" : "price" } }
          }
        }

  • Creating a new buffer with text using EmacsClient

    - by User1
    I have a program that can send text to any other program for further analysis (e.g. sed, grep, etc.). I would like it to send the data to Emacs and do the analysis there. How would I do that? emacsclient takes a filename by default, but this is a data string, not a file, and I really don't want to create and delete files just to send data to Emacs. emacsclient has an "eval" command-line option that lets you execute Lisp code instead of opening files. Is there a simple Lisp function that will open a new buffer containing the given text?

  • How do I know if this is random enough?

    - by David
    I wrote a program in Java that rolls a die and records the total number of times each value 1-6 is rolled. I rolled 6 million times. Here's the distribution:

        # of 0's:       0
        # of 1's: 1000068
        # of 2's:  999375
        # of 3's:  999525
        # of 4's: 1001486
        # of 5's: 1000059
        # of 6's:  999487

    (0 wasn't an option.) Is this distribution consistent with random dice rolls? What objective statistical tests might confirm that the dice rolls are indeed random enough?
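
    One standard objective check is Pearson's chi-square goodness-of-fit test against a uniform distribution over the six faces. A short sketch using the counts quoted above (assumes SciPy is available):

        from scipy.stats import chisquare

        # Observed face counts from the question; a fair die expects each count
        # to be one sixth of the total.
        observed = [1000068, 999375, 999525, 1001486, 1000059, 999487]
        expected = [sum(observed) / 6.0] * 6

        stat, p_value = chisquare(observed, f_exp=expected)
        print("chi-square = %.2f, p-value = %.3f" % (stat, p_value))
        # A large p-value (e.g. > 0.05) means no evidence of biased face frequencies;
        # note this tests the counts only, not independence between successive rolls.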

  • What programming languages are good for statistics?

    - by Jason Baker
    I've been doing a bit more statistical analysis on some things lately, and I'm curious whether there are any programming languages that are particularly good for this purpose. I know about R, but I'd kind of prefer something a bit more general-purpose (or is R pretty general-purpose?). What suggestions do you have? Are there any languages out there whose syntax/semantics are particularly oriented towards this? Or are there any languages that have exceptionally good libraries?

  • How to draw shadows that don't suck?

    - by mystify
    A CAShapeLayer uses a CGPathRef to draw its content. So I have a star path, and I want a smooth drop shadow with a radius of about 15 units. There is probably some nice functionality for this in newer iPhone OS versions, but I need to do it myself for an old version, 3.0 (which most people still use). I tried to do some REALLY nasty stuff: I created a for-loop and sequentially created about 15 of those paths, scaling them with a transform step by step to become bigger. Then I assigned each to a newly created CAShapeLayer and decreased its alpha a little bit on every iteration. Not only is this scaling mathematically incorrect (it should happen relative to the outline!), the shadow is also not rounded and looks really ugly. That's why nice soft shadows have a radius. The tips of a star shouldn't appear totally sharp after a shadow size of 15 units; they should be soft like cream. But in my ugly solution they're just as sharp as the star itself, since all I do is scale the star 15 times and decrease its alpha 15 times. Ugly. I wonder how the big guys do it? If you had an arbitrary path, and that path had to throw a shadow, how would the algorithm for that work? Probably the path would have to be expanded something like 30 times, point by point, relative to the tangent of the outline, away from the filled part, and just by 0.5 units each time to get a nice blend. Before I reinvent the wheel, maybe someone has a handy example or link?
