Search Results

Search found 2562 results on 103 pages for 'morphological analysis'.

  • How to prove worst-case number of inversions in a heap is O(n log n)?

    - by Jacques
    I am busy preparing for exams, working through some old exam papers. The question below is the only one I can't seem to do (I don't really know where to start). Any help would be greatly appreciated. Use the O(n log n) comparison-sort bound, the Θ(n) bound for bottom-up heap construction, and the order complexity of insertion sort to show that the worst-case number of inversions in a heap is O(n log n).
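
    For what it's worth, the three hints chain together as a sketch like this (note that, read strictly, the argument bounds the worst case from below, i.e. it gives Ω(n log n)):

      build a heap from the n inputs:        Θ(n)        (bottom-up construction)
      insertion-sort the heap's array:       O(n + I)    (I = number of inversions)
      total: a correct comparison sort in    Θ(n) + O(n + I)

    Every comparison sort needs Ω(n log n) comparisons on some input, so there must be a heap for which n + I = Ω(n log n); that is, the worst-case number of inversions in a heap is at least of order n log n.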

  • How does phpmyvisitors work?

    - by hd
    Hi, I have installed the "phpmyvisitors" CMS to get statistics on my site's visits. It is written in PHP and is open source. It gives me a lot of useful information, like:
    - total visits
    - viewed pages
    - visitor browser information
    - visitor distribution over the world
    - how visitors reach the site
    - how much time they spend on the site
    and so on. It is something like Google Analytics, but with fewer features. My question is: how does it do all of that?
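
    In broad terms, trackers like this embed a small JavaScript snippet on every page that calls back to a logging endpoint; the server stores one row per page view (timestamp, page, IP, User-Agent, referrer), and every report is an aggregation over those rows - a GeoIP lookup on the IP for the world map, User-Agent parsing for the browser breakdown, per-visit time deltas for time on site. A minimal sketch of the logging side, in Python rather than PHP and with hypothetical field names:

      from datetime import datetime, timezone
      from flask import Flask, request

      app = Flask(__name__)

      @app.route("/hit")
      def hit():
          # One row per page view; every statistic in the reports is an
          # aggregation over rows like this one.
          row = {
              "ts": datetime.now(timezone.utc).isoformat(),
              "page": request.args.get("page", ""),         # sent by the JS snippet
              "ip": request.remote_addr,                    # input to a GeoIP lookup
              "ua": request.headers.get("User-Agent", ""),  # parsed into browser/OS
              "ref": request.headers.get("Referer", ""),    # how the visitor arrived
          }
          print(row)  # a real tracker inserts this into a database table instead
          return "", 204  # a real tracker usually returns a 1x1 transparent GIF

      if __name__ == "__main__":
          app.run()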

  • Big-Oh running time of code in Java (are my answers accurate?)

    - by Terry Frederick
    The method hasTwoTrueValues returns true if at least two values in an array of booleans are true. Provide the Big-Oh running time for all three implementations proposed.

      // Version 1
      public boolean hasTwoTrueValues( boolean [ ] arr ) {
          int count = 0;
          for( int i = 0; i < arr.length; i++ )
              if( arr[ i ] )
                  count++;
          return count >= 2;
      }

      // Version 2
      public boolean hasTwoTrueValues( boolean [ ] arr ) {
          for( int i = 0; i < arr.length; i++ )
              for( int j = i + 1; j < arr.length; j++ )
                  if( arr[ i ] && arr[ j ] )
                      return true;
          return false;  // fall through: fewer than two true values
      }

      // Version 3
      public boolean hasTwoTrueValues( boolean [ ] arr ) {
          for( int i = 0; i < arr.length; i++ )
              if( arr[ i ] )
                  for( int j = i + 1; j < arr.length; j++ )
                      if( arr[ j ] )
                          return true;
          return false;
      }

    For Version 1 I say the running time is O(n), for Version 2 O(n^2), and for Version 3 O(n^2). I am really new to this Big-Oh notation, so if my answers are incorrect could you please explain and help?

  • Scientific Data processing (Graph comparison and interpretation)

    - by pinkynobrain
    Hi stackoverflow friends, I'm trying to write a program to automate one of my more boring and repetitive work tasks. I have some programming experience, but none with processing or interpreting large volumes of data, so I am seeking your advice (both suggestions of techniques to try and things to read to learn more about doing this stuff). I have a piece of equipment that monitors an experiment by taking repeated samples and displays the readings on its screen as a graph. The experiment's input can be altered, and one of these changes should produce a change in a section of the graph, which I currently identify by eye and which is what I'm looking for in the experiment. I want to automate it so that a computer looks at a set of results and spots the experiment input that causes the change. I can already extract the results from the machine. Currently the results for a run are in the form of an integer array, with the index being the sample number and the corresponding value being the measurement. The overall shape of the graph will be similar for each experiment run, and the change I'm looking for will be roughly the same and will occur in approximately the same place every time for the correct experiment input. Unfortunately there are a few gotchas that make this problem more difficult. There is some noise in the measuring process, which means there is some random variation in the measured values between different runs, although the overall shape of the graph remains the same. The time the experiment takes varies slightly each run, causing two effects: first, a whole graph may be shifted slightly on the x axis relative to another run's graph; second, individual features may appear slightly wider or narrower in different runs. In both cases the variation isn't particularly large, and you can assume that the only non-random variation is caused by the correct input being found. Thank you for your time, Pinky
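
    A common starting point for noisy 1D traces with small x-axis shifts is to align each run against a reference via cross-correlation and then score the residual in the window where the change is expected; dynamic time warping is the usual next step if the slight widening/narrowing of features turns out to matter. A minimal sketch with NumPy - the window bounds are placeholders:

      import numpy as np

      def align_and_score(run, reference, lo, hi):
          # Remove the mean so the correlation tracks shape, not offset.
          run = run - run.mean()
          ref = reference - reference.mean()
          # Cross-correlate to estimate the x-axis shift between the runs.
          corr = np.correlate(run, ref, mode="full")
          shift = corr.argmax() - (len(ref) - 1)
          aligned = np.roll(run, -shift)
          # Residual energy inside the window where the change should appear.
          return float(np.sum((aligned[lo:hi] - ref[lo:hi]) ** 2))

      # Usage sketch: the run whose score stands out in the window is the
      # candidate for "the input that caused the change".
      # scores = [align_and_score(np.asarray(r, float), baseline, 400, 500)
      #           for r in runs]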

  • Generating data on an unlevel background

    - by bluejay93
    I want to make an unlevel background and then generate some test data on it using Matlab. I was not clear when I asked this question earlier. So for this simple example,

      for i = 1:10
          for j = 1:10
              f(i,j) = X.^2 + Y.^2;
          end
      end

    where X and Y have already been defined, it plots the function on a flat surface. I don't want to distort the function itself, but I want the surface it goes onto to be unlevel, changed by some degree or something. I hope that's a little clearer.
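
    One way to read this: generate the background as its own surface (a tilted plane, or smoothed random noise) and add it to the test function before plotting, leaving the function itself untouched. A sketch of the idea in Python/NumPy - the same few lines translate almost one-for-one to MATLAB with meshgrid and rand:

      import numpy as np

      x = np.linspace(-1, 1, 10)
      X, Y = np.meshgrid(x, x)

      f = X**2 + Y**2                          # the undistorted test function
      tilt = 0.5 * X + 0.2 * Y                 # deterministic unlevel background
      bumps = 0.1 * np.random.randn(*X.shape)  # optional random unevenness
      surface = f + tilt + bumps               # data now sits on the unlevel background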

  • LaTeX printing only first two pages of a document

    - by Peter Flom
    I am working in LaTeX, and when I create a pdf file (using the LaTeX button, the pdfLaTeX button, or yap) the pdf has only the first two pages. No errors. It just stops. If I make the first page longer by adding text, it still stops at the end of the 2nd page. Any ideas? OK, responding to the first comment, here is the code:

      \documentclass{article}
      \title{Outline of Book}
      \author{Peter L. Flom}
      \begin{document}
      \maketitle
      \section*{Preface}
      \subsection*{Audience}
      \subsection*{What makes this book different?}
      \subsection*{Necessary background}
      \subsection*{How to read this book}
      \section{Introduction}
      \subsection{The purpose of logistic regression}
      \subsection{The need for logistic regression}
      \subsection{Types of logistic regression}
      \section{General issues in logistic regression}
      \subsection{Transforming independent and dependent variables}
      \subsection{Interactions}
      \subsection{Model selection}
      \subsection{Parameter estimates, confidence intervals, p values}
      \subsection{Summary and further reading}
      \section{Dichotomous logistic regression}
      \subsection{Introduction, theory, examples}
      \subsection{Exploratory plots and analysis}
      \subsection{Basic model fitting}
      \subsection{Advanced and special issues in model fitting}
      \subsection{Diagnostic and descriptive plots and analysis}
      \subsection{Traps and gotchas}
      \subsection{Power analysis}
      \subsection{Summary and further reading}
      \subsection{Exercises}
      \section{Ordinal logistic regression}
      \subsection{Introduction, theory, examples}
      \subsubsection{Introduction - what are ordinal variables?}
      \subsubsection{Theory of the model}
      \subsubsection{Examples for this chapter}
      \subsection{Exploratory plots and analysis}
      \subsection{Basic model fitting}
      \subsection{Advanced and special issues in model fitting}
      \subsection{Diagnostic and descriptive plots and analysis}
      \subsection{Traps and gotchas}
      \subsection{Power analysis}
      \subsection{Summary and further reading}
      \subsection{Exercises}
      \section{Multinomial logistic regression}
      \subsection{Introduction, theory, examples}
      \subsection{Exploratory plots and analysis}
      \subsection{Basic model fitting}
      \subsection{Advanced and special issues in model fitting}
      \subsection{Diagnostic and descriptive plots and analysis}
      \subsection{Traps and gotchas}
      \subsection{Power analysis}
      \subsection{Summary and further reading}
      \subsection{Exercises}
      \section{Choosing a model}
      \subsection{NOIR and its problems}
      \subsection{Linear vs. ordinal}
      \subsection{Ordinal vs. multinomial}
      \subsection{Summary and further reading}
      \subsection{Exercises}
      \section{Extensions and related models}
      \subsection{Other logistic models}
      \subsection{Multilevel models - PROC NLMIXED and GLIMMIX}
      \subsection{Loglinear models - PROC CATMOD}
      \section{Summary}
      \end{document}

    thanks Peter

  • SSAS dimension source table changed - how to propagate changes to analysis server?

    - by Phil
    Hi, sorry if the question isn't phrased very well, but I'm new to SSAS and don't know the correct terms. I have changed the name of a table and its columns. I am using said table as a dimension for my cube, so now the cube won't process. Presumably I need to make updates on the analysis server to reflect the changes to the source database? I have no idea where to start - any help gratefully received. Thanks, Phil

  • Overview of Microsoft SQL Server 2008 Upgrade Advisor

    - by Akshay Deep Lamba
    Problem: Like most organizations, we are planning to upgrade our database server from SQL Server 2005 to SQL Server 2008. I would like to know: is there an easy way to tell in advance what kind of issues one may encounter when upgrading to a newer version of SQL Server? One way of doing this is to use the Microsoft SQL Server 2008 Upgrade Advisor to plan for upgrades from SQL Server 2000 or SQL Server 2005. In this tip we will take a look at how one can use the SQL Server 2008 Upgrade Advisor to identify potential issues before the upgrade.

    Solution: SQL Server 2008 Upgrade Advisor is a free tool from Microsoft that identifies potential issues before you upgrade your environment to a newer version of SQL Server. The following prerequisites need to be installed before installing the Upgrade Advisor:
    - .NET Framework 2.0 or a higher version
    - Windows Installer 4.5 or a higher version
    - Windows Server 2003 SP1 or a higher version, Windows Server 2008, Windows XP SP2 or a higher version, or Windows Vista

    You can download SQL Server 2008 Upgrade Advisor from the following link. Once you have successfully installed Upgrade Advisor, follow the steps below to see how you can use the tool to identify potential issues before upgrading your environment.
    1. Click Start -> Programs -> Microsoft SQL Server 2008 -> SQL Server 2008 Upgrade Advisor.
    2. Click Launch Upgrade Advisor Analysis Wizard to open the wizard.
    3. On the wizard welcome screen, click Next to continue.
    4. On the SQL Server Components screen, enter the server name and click the Detect button to identify the components that need to be analyzed, then click Next.
    5. On the Connection Parameters screen, choose the instance name and authentication, then click Next.
    6. On the SQL Server Parameters screen, select the databases you want to analyze, along with any trace files and SQL batch files, then click Next.
    7. On the Reporting Services Parameters screen, specify the Report Server instance name, then click Next.
    8. On the Analysis Services Parameters screen, specify an Analysis Server instance name, then click Next.
    9. The Confirm Upgrade Advisor Settings screen shows a quick summary of the options you have selected so far. Click Run to start the analysis.
    10. The Upgrade Advisor Progress screen shows the progress of the analysis. The Upgrade Advisor runs predefined rules that help identify potential issues that can affect your environment once you upgrade from a lower version of SQL Server to SQL Server 2008.
    11. Once Upgrade Advisor has completed its analysis of SQL Server, Analysis Services, and Reporting Services, click the Launch Report button at the bottom of the wizard screen to see the output.
    12. The View Report screen shows a summary of issues that can affect you once you upgrade. To learn more about an issue, expand it and read the detailed description.

  • Is there a sample set of web log data available for testing analysis against?

    - by Peter
    Sorry if this isn't strictly speaking a programming question, but I figure my best chance of success would be to ask here. I'm developing some web log file analysis algorithms, but to date I only have access to a fairly small amount of web log data to process. One algorithm I want to use makes some assumptions about 'the shape' of typical web log data, and so I'd like to test it against a larger 'exemplar' - perhaps the logs of a busy site with a good distribution of traffic from different sources etc. Is there a set of such data available somewhere? Thanks for any help.

  • Do "if" statements affect in the time complexity analysis?

    - by FranXh
    According to my analysis, the running time of this algorithm should be N², because each of the loops goes once through all the elements. I am not sure whether the presence of the if statement changes the time complexity?

      for(int i=0; i<N; i++){
          for(int j=1; j<N; j++){
              System.out.println("Yayyyy");
              if(i<=j){
                  System.out.println("Yayyy not");
              }
          }
      }

  • Best neural network for certain type of pattern analysis?

    - by fred basset
    Hi All, I'm working on a system that will send telemetry data on machine operation back to a central server for analysis. One of the machine parameters we're measuring is motor current drawn vs. time. After an operation is finished, we plan to send back an array of currents vs. time to the server. A successful operation would have a pattern like a trapezoid; problematic operations would have a completely different pattern, more like a large spike in values. Can anyone recommend a type of neural network that would be good at classifying these 1D vectors of current values into a pass/fail type output? Thanks, Fred
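
    For fixed-length 1D vectors like these, a small feed-forward network (multi-layer perceptron) is a reasonable first baseline before anything fancier such as a 1D convolutional net; the main preprocessing is resampling every trace to a common length and scale. A minimal sketch assuming scikit-learn is available - the shapes and layer size are placeholders:

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      def to_fixed_length(trace, n=128):
          # Resample a current-vs-time trace to n points and normalize, so
          # operations of slightly different duration become comparable.
          trace = np.asarray(trace, dtype=float)
          grid = np.linspace(0.0, 1.0, n)
          resampled = np.interp(grid, np.linspace(0.0, 1.0, len(trace)), trace)
          peak = np.abs(resampled).max()
          return resampled / peak if peak > 0 else resampled

      # traces: list of current arrays; labels: 1 = pass (trapezoid), 0 = fail (spike)
      # X = np.stack([to_fixed_length(t) for t in traces])
      # clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000).fit(X, labels)
      # verdicts = clf.predict(np.stack([to_fixed_length(t) for t in new_traces]))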

  • Sentiment analysis with NLTK (Python) for sentences, using sample data or a web service?

    - by Ke
    I am embarking on an NLP project for sentiment analysis. I have successfully installed NLTK for Python (it seems like a great piece of software for this). However, I am having trouble understanding how it can be used to accomplish my task. Here is my task: I start with one long piece of data (let's say several hundred tweets on the subject of the UK election, from their web service). I would like to break this up into sentences (or info no longer than 100 or so chars) (I guess I can just do this in Python?), then search through all the sentences for specific instances within each sentence, e.g. "David Cameron". Then I would like to check for positive/negative sentiment in each sentence and count them accordingly. NB: I am not really worried too much about accuracy, because my data sets are large, and also not worried too much about sarcasm. Here are the troubles I am having: All the data sets I can find, e.g. the corpus movie review data that comes with NLTK, aren't in web service format. It looks like this has had some processing done already. As far as I can see the processing (by Stanford) was done with WEKA. Is it not possible for NLTK to do all this on its own? Here all the data sets have already been organised into positive/negative, e.g. the polarity dataset http://www.cs.cornell.edu/People/pabo/movie-review-data/ How is this done? (To organise the sentences by sentiment, is it definitely WEKA, or something else?) I am not sure I understand why WEKA and NLTK would be used together; they seem to do much the same thing. If I'm processing the data with WEKA first to find sentiment, why would I need NLTK? Is it possible to explain why this might be necessary? I have found a few scripts that get somewhat near this task, but all are using the same pre-processed data. Is it not possible to process this data myself to find sentiment in sentences, rather than using the data samples given in the link? Any help is much appreciated and will save me much hair! Cheers, Ke
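
    On the WEKA question: a baseline doesn't need WEKA at all - NLTK ships its own classifiers, and the movie_reviews corpus already carries pos/neg labels, so you can train NLTK's Naive Bayes on it and then score each sentence that mentions the target phrase. A rough sketch (the corpus downloads are one-time steps, `text` is a placeholder for the scraped tweets, and movie-review polarity only transfers approximately to political tweets):

      import random
      from collections import Counter

      import nltk
      from nltk.corpus import movie_reviews

      # One-time downloads: nltk.download("movie_reviews"); nltk.download("punkt")

      def features(words):
          # Bag-of-words presence features: crude, but a standard baseline.
          return {w.lower(): True for w in words}

      labeled = [(features(movie_reviews.words(fid)), cat)
                 for cat in movie_reviews.categories()
                 for fid in movie_reviews.fileids(cat)]
      random.shuffle(labeled)
      classifier = nltk.NaiveBayesClassifier.train(labeled[200:])
      print("held-out accuracy:", nltk.classify.accuracy(classifier, labeled[:200]))

      text = "..."  # placeholder: the scraped tweets go here
      counts = Counter()
      for sent in nltk.sent_tokenize(text):
          if "David Cameron" in sent:
              counts[classifier.classify(features(nltk.word_tokenize(sent)))] += 1
      print(counts)  # e.g. Counter({'pos': 12, 'neg': 7})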

  • Overriding Object.Equals() instance method in C#; now Code Analysis / FxCop warning CA2218: "should also redefine GetHashCode"

    - by Chris W. Rea
    I've got a complex class in my C# project on which I want to be able to do equality tests. It is not a trivial class; it contains a variety of scalar properties as well as references to other objects and collections (e.g. IDictionary). For what it's worth, my class is sealed. To enable a performance optimization elsewhere in my system (an optimization that avoids a costly network round-trip), I need to be able to compare instances of these objects to each other for equality - other than the built-in reference equality - and so I'm overriding the Object.Equals() instance method. However, now that I've done that, Visual Studio 2008's Code Analysis, a.k.a. FxCop, which I keep enabled by default, is raising the following warning:

      warning : CA2218 : Microsoft.Usage : Since 'MySuperDuperClass' redefines Equals, it should also redefine GetHashCode.

    I think I understand the rationale for this warning: if I am going to be using such objects as the key in a collection, the hash code is important, i.e. see this question. However, I am not going to be using these objects as the key in a collection. Ever. Feeling justified to suppress the warning, I looked up code CA2218 in the MSDN documentation to get the full name of the warning so I could apply a SuppressMessage attribute to my class as follows:

      [SuppressMessage("Microsoft.Usage", "CA2218:OverrideGetHashCodeOnOverridingEquals",
          Justification = "This class is not to be used as key in a hashtable.")]

    However, while reading further, I noticed the following:

      How to Fix Violations: To fix a violation of this rule, provide an implementation of GetHashCode. For a pair of objects of the same type, you must ensure that the implementation returns the same value if your implementation of Equals returns true for the pair.
      When to Suppress Warnings: Do not suppress a warning from this rule. [emphasis mine]

    So, I'd like to know: why shouldn't I suppress this warning as I was planning to? Doesn't my case warrant suppression? I don't want to code up an implementation of GetHashCode() for this object that will never get called, since my object will never be the key in a collection. If I wanted to be pedantic, instead of suppressing, would it be more reasonable for me to override GetHashCode() with an implementation that throws a NotImplementedException? Update: I just looked this subject up again in Bill Wagner's good book Effective C#, and he states in "Item 10: Understand the Pitfalls of GetHashCode()":

      If you're defining a type that won't ever be used as the key in a container, this won't matter. Types that represent window controls, web page controls, or database connections are unlikely to be used as keys in a collection. In those cases, do nothing. All reference types will have a hash code that is correct, even if it is very inefficient. [...] In most types that you create, the best approach is to avoid the existence of GetHashCode() entirely.

    ... that's where I originally got the idea that I need not be concerned about GetHashCode() always.

  • Design considerations on JSON schema for scalars with a consistent attachment property

    - by casperOne
    I'm trying to create a JSON schema for the results of doing statistical analysis based on disparate pieces of data. The current schema I have looks something like this:

      {
          // Basic key information.
          video : "http://www.youtube.com/watch?v=7uwfjpfK0jo",
          start : "00:00:00",
          end : null,
          // For results of analysis, to be populated:
          // *** This is where it gets interesting ***
          analysis : {
              game : {
                  value : "Super Street Fighter 4: Arcade Edition Ver. 2012",
                  confidence : 0.9725
              },
              teams : [
                  {
                      player : {
                          value : "Desk",
                          confidence : 0.95
                      },
                      characters : [
                          { value : "Hakan", confidence : 0.80 }
                      ]
                  }
              ]
          }
      }

    The issue is the tuples that are used to store a value and the confidence related to that value (i.e. { value : "some value", confidence : 0.85 }), populated after the results of the analysis. This leads to a creep of this tuple for every value. Take a fully-fleshed-out value from the characters array:

      {
          name : { value : "Hakan", confidence : 0.80 },
          ultra : { value : 1, confidence : 0.90 }
      }

    As the structures that represent the values become more and more detailed (and more analysis is done on them to try and determine the confidence behind that analysis), the nesting of the tuples adds a great deal of noise to the overall structure, considering that the final result (when verified) will be:

      {
          name : "Hakan",
          ultra : 1
      }

    (And recall that this is just a nested value.) In .NET (which I'll be using to work with this data), I'd have a little helper like this:

      public class UnknownValue<T>
      {
          T Value { get; set; }
          double? Confidence { get; set; }
      }

    Which I'd then use like so:

      public class Character
      {
          public UnknownValue<Character> Name { get; set; }
      }

    While the same as the JSON representation in code, it doesn't have the same creep because I don't have to redefine the tuple every time, and property accessors hide the appearance of creep. Of course, this is an apples-to-oranges comparison; the above is code while the JSON is data. Is there a more formalized/cleaner/best-practice way of containing the creep of these tuples in JSON, or is the approach above an accepted approach for the type of data I'm trying to store (and I'm just perceiving it the wrong way)? Note: this is being represented in JSON because it will ultimately go in a document database (something like RavenDB or elasticsearch). I'm not concerned about being able to serialize into the object above, because I can always use data transfer objects to facilitate getting data into/out of my underlying data store.
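
    For what it's worth, one pattern sometimes used to contain this kind of creep is to keep the object in its final, verified shape and collect all the confidences in a single parallel member keyed by field name, so the tuple shape is defined once rather than at every leaf. A hypothetical sketch rather than an established best practice:

      {
          name : "Hakan",
          ultra : 1,
          confidence : { name : 0.80, ultra : 0.90 }
      }

    Producing the verified document is then just a matter of dropping the confidence member.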

  • Learn How to Use Oracle’s Spatial and BI Tools for Location-aware Predictive Analytics

    - by Mandy Ho
    November 29, 2-3pm EST

    Are you an OBIEE (Oracle Business Intelligence Enterprise Edition) user? Do you have location data you'd like to incorporate into your analysis as well? Then this is a great webinar for you! Join us as Oracle experts from both teams show how to perform predictive analytics, network analytics, and spatial analysis, combined together, in real-world scenarios. We will include demos evaluating airline on-time performance and retail establishment performance.

    Learn how to:
    - Gain better business insights and improve ROI with Oracle Spatial and Graph, Oracle Advanced Analytics, and Oracle Business Intelligence Enterprise Edition (OBIEE).
    - Streamline and remove the complexity of building applications with OBIEE's built-in location and analytics features.
    - Create the statistical model, build interactive reports and dashboards including location analysis and map visualization, and incorporate network analytics for geomarketing and site scoring.
    - Perform location analysis and processing such as proximity, containment, geocoding, aggregation of geographic regions, and more.

    Speakers include Jayant Sharma, Director, Product Management, Oracle Spatial and Mapping Technologies; Jean Ihm, Principal Product Manager, Oracle Spatial and Mapping Technologies; and Abhinav Agarwal, OBIEE Product Management.

    Who should attend: This webinar is appropriate for CIOs, business and technical managers, developers, and analysts involved in the design and management of analytic applications and solutions where spatial analysis can add insight and value to business processes. Sign up today at the link below!
    https://www2.gotomeeting.com/register/764677554

  • Web application project management methodologies

    - by dutchiexl
    I am looking to streamline my company's web development process, including analysis. I myself am specialized in XP and Scrum, but we are building web applications with a process cycle of 3-4 weeks and a lifetime of 1-4 months. Only when a project is sold do the project managers get involved (= the people who do the analysis but know nothing about it; a small flow chart and some screenshots pass for analysis). What is happening is:
    - A LOT of change requests
    - Minimal development time
    - Minimal analysis time
    NOW, the main question :) can you recommend me some methodologies and books to read for the entire project management? Thanks in advance. @Edit: I myself was looking at a combination of SCRUM for the management with flowcharts, plus RAD/LD for development, and trying to distill something from that.

  • What is Google Page Rank and Why is it So Important?

    What exactly is PageRank? It is basically a link analysis algorithm, influenced by citation analysis - a field dating back to the fifties, when it was conceived by Eugene Garfield - and later applied to hyperlinked documents by Massimo Marchiori. The algorithm takes a set of hyperlinked documents, weighs them numerically, and assigns each a number between zero and ten.
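
    At its core, the algorithm is a fixed-point iteration: each page's rank is a damped sum of the ranks of the pages linking to it, each divided by that page's outgoing link count. A toy sketch of the iteration in Python (it assumes every page has at least one outgoing link; the zero-to-ten toolbar score is a separate, roughly logarithmic rescaling of these raw values):

      def pagerank(links, d=0.85, iters=50):
          """links: dict mapping each page to the list of pages it links to."""
          pages = list(links)
          n = len(pages)
          rank = {p: 1.0 / n for p in pages}
          for _ in range(iters):
              new = {p: (1.0 - d) / n for p in pages}
              for p, outs in links.items():
                  for q in outs:
                      new[q] += d * rank[p] / len(outs)  # p splits its rank among its links
              rank = new
          return rank

      # Tiny example: three pages in a cycle, with "b" getting an extra vote.
      print(pagerank({"a": ["b"], "b": ["c"], "c": ["a", "b"]}))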

  • SSRS 2005 - Usability analysis - Is SSRS a good option for this scenario?

    - by Sach
    How practical is it to consider SSRS 2005 or SSRS 2008 as a reporting solution for a report that has to show millions of records (anywhere from 3 to 10 million)? Is there any threshold on the size of a report in SSRS? How do I know whether, for a huge report, SSRS will consume all available memory and start paging operations to disk, or give a memory leak error? Even if I keep increasing the memory, how can I be sure that a certain amount will be sufficient for such huge reports on the report server? All the above questions are haunting me because I have a dedicated report server with a decent hardware and OS configuration (8 processors, 8GB RAM, 64-bit OS and 64-bit SQL Server 2005). Still, my report with around 2 million records takes more than 6 minutes, and going from one page to another takes 3 minutes! My data source is on a separate server, and when I execute only the stored proc there, it returns the results in less than 2 minutes.

  • Know of any Java garbage collection log analysis tools?

    - by braveterry
    I'm looking for a tool or a script that will take the console log from my web app, parse out the garbage collection information, and display it in a meaningful way. I'm starting up on a Sun Java 1.4.2 JVM with the following flags:

      -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails

    The log output looks like this:

      54.736: [Full GC 54.737: [Tenured: 172798K->18092K(174784K), 2.3792658 secs] 257598K->18092K(259584K), [Perm : 20476K->20476K(20480K)], 2.4715398 secs]

    Making sense of a few hundred of these kinds of log entries would be much easier if I had a tool that would visually graph garbage collection trends.
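
    Absent a dedicated tool, the fields in these lines are regular enough to pull out with a small script: the leading number is seconds since JVM start and the trailing number is the pause length. A rough sketch - the regex is written against Full GC lines in the exact format above and would need widening for minor collections:

      import re, sys

      # ts: [Full GC [Tenured: ...] heap-before->heap-after(total), ..., pause secs]
      FULL_GC = re.compile(
          r"^(?P<ts>\d+\.\d+): \[Full GC .*\] "
          r"(?P<before>\d+)K->(?P<after>\d+)K\((?P<total>\d+)K\), "
          r".*(?P<pause>\d+\.\d+) secs\]\s*$")

      for line in sys.stdin:
          m = FULL_GC.match(line)
          if m:
              ts, pause = float(m["ts"]), float(m["pause"])
              freed = int(m["before"]) - int(m["after"])
              print(f"{ts:10.3f}s  pause={pause:.2f}s  freed={freed}K")
      # Graphing the (ts, pause, freed) triples with any plotting tool then
      # shows the garbage collection trends over the run.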
