Search Results

Search found 5568 results on 223 pages for 'dependency analysis'.

Page 161/223 | < Previous Page | 157 158 159 160 161 162 163 164 165 166 167 168  | Next Page >

  • Merge overlapping date intervals

    - by leoinfo
    Is there a better way of merging overlapping date intervals? The solution I came up with is so simple that now I wonder if someone else has a better idea of how this could be done.

        /***** DATA EXAMPLE *****/
        DECLARE @T TABLE (d1 DATETIME, d2 DATETIME)
        INSERT INTO @T (d1, d2)
        SELECT '2010-01-01','2010-03-31' UNION
        SELECT '2010-04-01','2010-05-31' UNION
        SELECT '2010-06-15','2010-06-25' UNION
        SELECT '2010-06-26','2010-07-10' UNION
        SELECT '2010-08-01','2010-08-05' UNION
        SELECT '2010-08-01','2010-08-09' UNION
        SELECT '2010-08-02','2010-08-07' UNION
        SELECT '2010-08-08','2010-08-08' UNION
        SELECT '2010-08-09','2010-08-12' UNION
        SELECT '2010-07-04','2010-08-16' UNION
        SELECT '2010-11-01','2010-12-31' UNION
        SELECT '2010-03-01','2010-06-13'

        /***** INTERVAL ANALYSIS *****/
        WHILE (1=1)
        BEGIN
            UPDATE t1
            SET t1.d2 = t2.d2
            FROM @T AS t1
            INNER JOIN @T AS t2
                ON DATEADD(day, 1, t1.d2) BETWEEN t2.d1 AND t2.d2
                -- AND t1.d2 <= t2.d2 /***** this condition is useless *****/
            IF @@ROWCOUNT = 0 BREAK
        END

        /***** RESULT *****/
        SELECT StartDate = MIN(d1), EndDate = d2
        FROM @T
        GROUP BY d2
        ORDER BY StartDate, EndDate

        /***** OUTPUT *****/
        /*****
        StartDate   EndDate
        2010-01-01  2010-06-13
        2010-06-15  2010-08-16
        2010-11-01  2010-12-31
        *****/

    EDIT: I realized that the t1.d2 <= t2.d2 condition is not really useful.
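
    For comparison, the same merge can be expressed as a single sort-and-sweep pass outside of SQL. The sketch below is Python and only illustrates the algorithm, not a drop-in replacement for the T-SQL above; it assumes inclusive date intervals and, like the query, treats intervals that touch on adjacent days as mergeable.

        from datetime import date, timedelta

        def merge_intervals(intervals):
            """Merge inclusive (start, end) date intervals that overlap or touch."""
            merged = []
            for start, end in sorted(intervals):
                # Extend the previous interval if this one begins no later than
                # the day after it ends; otherwise start a new interval.
                if merged and start <= merged[-1][1] + timedelta(days=1):
                    merged[-1][1] = max(merged[-1][1], end)
                else:
                    merged.append([start, end])
            return [tuple(pair) for pair in merged]

        # Two of the rows from the example above
        print(merge_intervals([(date(2010, 1, 1), date(2010, 3, 31)),
                               (date(2010, 4, 1), date(2010, 5, 31))]))
        # -> [(datetime.date(2010, 1, 1), datetime.date(2010, 5, 31))]

    Because the intervals are sorted first, each one is only compared against the interval currently being grown, so the whole pass is O(n log n) instead of the repeated self-join of the iterative UPDATE.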

    Read the article

  • Screening (multi)collinearity in a regression model

    - by aL3xa
    I hope that this one is not going to be an "ask-and-answer" question... here goes: (multi)collinearity refers to extremely high correlations between predictors in a regression model. How to cure it... well, sometimes you don't need to "cure" collinearity, since it doesn't affect the regression model itself, only the interpretation of the effect of individual predictors. One way to spot collinearity is to treat each predictor as a dependent variable and the other predictors as independent variables, determine R2, and if it's larger than .9 (or .95), consider that predictor redundant. This is one "method"... what about other approaches? Some of them are time consuming, like excluding predictors from the model and watching for b-coefficient changes - they should be noticeably different. Of course, we must always bear in mind the specific context/goal of the analysis... Sometimes the only remedy is to repeat the research, but right now I'm interested in the various ways of screening redundant predictors when (multi)collinearity occurs in a regression model.
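
    The screening rule described here (regress each predictor on all the others and flag R2 above some cutoff) is easy to prototype; the sketch below is Python/NumPy, with the 0.9 threshold and the toy data purely as placeholder assumptions. The same quantity is often reported as a variance inflation factor, VIF = 1 / (1 - R2).

        import numpy as np

        def flag_redundant_predictors(X, names, threshold=0.9):
            """Regress each column of X on the remaining columns and flag
            those whose R^2 exceeds the threshold."""
            flagged = []
            for j in range(X.shape[1]):
                y = X[:, j]
                others = np.delete(X, j, axis=1)
                design = np.column_stack([np.ones(len(y)), others])  # intercept + other predictors
                coef, *_ = np.linalg.lstsq(design, y, rcond=None)
                r2 = 1 - (y - design @ coef).var() / y.var()
                if r2 > threshold:
                    flagged.append((names[j], round(r2, 3)))
            return flagged

        # Toy example: x3 is (almost) a linear combination of x1 and x2
        rng = np.random.default_rng(0)
        x1, x2 = rng.normal(size=200), rng.normal(size=200)
        x3 = 2 * x1 - x2 + rng.normal(scale=0.01, size=200)
        print(flag_redundant_predictors(np.column_stack([x1, x2, x3]), ["x1", "x2", "x3"]))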

    Read the article

  • What is the most useful R trick?

    - by Dirk Eddelbuettel
    In order to share some more tips and tricks for R, what is your single most useful feature or trick? Clever vectorization? Data input/output? Visualization and graphics? Statistical analysis? Special functions? The interactive environment itself? One item per post, and we will see if we get a winner by means of votes. [Edit 25-Aug 2008]: So after one week, it seems that the simple str() won the poll. As I like to recommend that one myself, it is an easy answer to accept.

    Read the article

  • What was that tutorial on pointers?

    - by pecker
    Hello, I once read a tutorial/article on pointers somewhere. It was not a general tutorial but it explained how to clearly understand the complex and confusing pointers (especially the ones that are usually asked about in interviews). It was more like http://www.codeweblog.com/right-left-rule-complex-pointer-analysis/ I'm unable to find it. Could anyone post it here? PS: I did try to Google it but couldn't find it. I'm asking it here because I thought it was popular.

    Read the article

  • WPF Show wait cursor before application fully loads

    - by e28Makaveli
    I want to show the wait cursor before my WPF application, composed using CAL, fully loads. In the constructor of the main window, I have the following code:

        public MainWindow([Dependency] IUnityContainer container)
        {
            InitializeComponent();

            Cursor = System.Windows.Input.Cursors.Wait;
            Mouse.OverrideCursor = System.Windows.Input.Cursors.Wait;
            ForceCursor = true;
            //this.Cursor = System.Windows.Input.Cursors.AppStarting;

            // event subscriptions
            PresenterBase.EventAggregate.GetEvent<ModulesLoadedEvent>().Subscribe(OnModulesLoaded);
        }

    After all modules have been loaded, the following handler is invoked:

        private void OnModulesLoaded(EventArgs e)
        {
            allModulesLoaded = true;
            Mouse.OverrideCursor = null;
            Cursor = System.Windows.Input.Cursors.Arrow;
        }

    Problem is, I do not see the wait cursor. What am I missing here? FWIW, I got a hint from this post: http://stackoverflow.com/questions/2078766/showing-the-wait-cursor TIA.

    Read the article

  • Unit testing custom controls in Silverlight

    - by Hrvoje
    I have several custom controls (some kind of frames for content and layout management, like a wrap panel), and would like to write unit tests for them. It's hard to find any good examples except the Silverlight control toolkit, which has some helper classes to do unit tests and is quite complicated. For MVVM classes it's easy to write tests because they don't use the SL dependency system and infrastructure. Questions:

        - how to unit test a DependencyProperty, and what do I need to test
        - how to test an attached property
        - do I test bindings with theme or UserControl, like simple textblock content binding, or command/event binding in MVVM with UserControl
        - what else do I test in my custom controls, besides my business logic
        - any good tutorial to achieve tests like those in the control toolkit

    How do I start? Is the SL control toolkit the only option for learning? For the testing framework I'm using the one from the control toolkit, and for continuous integration on the TFS build server I planned to use Statlight (from codeplex). Any advice on that?

    Read the article

  • "_FILE_AND_LINE_ is not defined in this scope" (compiling RakNet NAT examples in OS X)

    - by Michael F
    Hello! I'm working on a RakNet-based project (using 3.8 on OS X 10.6), and I'm trying to work through the various examples that demonstrate the parts of RakNet I want to use. For the "NatCompleteClient" example, I've imported the source into a command-line project in XCode, along with the UPNP dependency. At compile time I've had a few errors in the UPNP section, though, and I can't find any guidance on this. In UPNPPortForwarder.mm, there are 7 lines that use _FILE_AND_LINE_, and the compiler is not happy; for example, line 232:

        foundInterfaces.Deallocate(r1,_FILE_AND_LINE_);

    causes:

        UPNPPortForwarder.mm:232: error: '_FILE_AND_LINE_' was not declared in this scope

    Can anyone tell me what this is all about? That variable doesn't seem to get talked about very often... or Google doesn't like to find it.

    Read the article

  • Making the #include square

    - by David
    I'm trying to write a makefile using CC on Solaris 10. [Only the first bit of that really matters, I think.] I have the following rule for foo.o:

        foo.o: foo.cc common_dependencies.h
                CC -c foo.cc -I../../common

    Unfortunately, common_dependencies.h includes all sorts of idiosyncratic trash, in directories not named '.' or '../../common'. Is this just going to be a brute-force makefile where I ferret out all of the dependency paths? All of the dependencies are somewhere under '../..', but sometimes 1 level down and sometimes 2 levels down.

    Read the article

  • How to Deploy my Open Source Projects using Maven's Central Repository?

    - by sfussenegger
    Is there anything I could do to get my own open source stuff into Maven's Central repository? I've wondered many times how I could get my own projects into Maven's Central repository. I was asking this myself especially as I've seen some well-known projects hosting their own repository, requiring users to add both the dependency and the repository. At the same time, it's getting difficult for other projects to depend on those projects. As I neither want others to add an additional repository nor to host one myself, I'm looking for other ways. And why aren't some projects using the option to deploy to Maven Central instead of their self-hosted repository? Any good reasons that aren't obvious?

    Read the article

  • Leak caused by fread

    - by Jack
    I'm profiling code of a game I wrote and I'm wondering how it is possible that the following snippet causes a heap increase of 4 KB (I'm profiling with the Heapshot Analysis of Xcode) every time it is executed:

        u8 WorldManager::versionOfMap(FILE *file)
        {
            char magic[4];
            u8 version;

            fread(magic, 4, 1, file);    // <-- this is the line
            fread(&version, 1, 1, file);
            fseek(file, 0, SEEK_SET);

            return version;
        }

    According to the profiler, the highlighted line allocates 4.00 KB of memory with a malloc every time the function is called, memory which is never released. The same thing seems to happen with other calls to fread around the code, but this was the most glaring one. Is there anything trivial I'm missing? Is it something internal I shouldn't care about? Just as a note: I'm profiling it on an iPhone and it's compiled as release (-O2).

    Read the article

  • Explain DLL Dependencies to a lay person

    - by wheaties
    This follows from a previous posting I made about the lack of a clean test machine for software installations. I'm doing a bad job of explaining how DLL dependencies work and how some machines might not have the right libraries at the time of installation. The problem is that it's being viewed as a defect with the build process. I'm trying to educate the higher-ups that it's not the build process per se but rather the installation process which is to blame. Here's a quote from my boss, relating subcontractor work to our work, to put it into perspective: "I'm not a software person. All I see is that when they hand something to us it just works, but when we hand something to the client there's all sorts of problems. There must be something wrong with how you're building the code." It's very easy to see how someone who is smart (scarily smart) could come to the wrong conclusion. So how would you explain the whole DLL dependency issue?

    Read the article

  • Creating an Excel Template for different data size

    - by dassouki
    I created an Excel template for a file I've done for a routine work calculation. The file takes data from the data logger, does some analysis on it, and outputs one number regardless of the input size. The problem I'm having is that I have to modify the sheet to suit the number of rows, as every day the data logger outputs a different number of rows. There are about 15 sheets in the workbook and it's annoying to have to change every one of them every day. What I'd like to do is input the data logger CSV and, boom, the result gets outputted. Is there a way, through VBA or not, to achieve this?

    Read the article

  • Convert object to DateRange

    - by user655832
    I'm querying an underlying PostgreSQL database using Pandas 0.8. Pandas is returning the DataFrame properly, but the underlying timestamp column in my database is being returned as a generic "object" type in Pandas. As I would eventually like to do seasonal normalization of my data, I am curious as to how to convert this generic "object" column to something that is appropriate for analysis. Here is my current code to retrieve the data:

        # get records from db example
        import pandas.io.sql as psql
        import psycopg2

        # define query to get all subs created this year
        QRY = """
        select i i, i * random() f,
               case when random() > 0.5 then true else false end t,
               (current_date - (i*random())::int)::timestamp with time zone tsz
        from generate_series(1,1000) as s(i)
        order by 4;
        """

        CONN_STRING = "host='localhost' port=5432 dbname='postgres' user='postgres'"

        # connect to db
        conn = psycopg2.connect(CONN_STRING)

        # get some data set index on relid column
        df = psql.frame_query(QRY, con=conn)
        print "Row count retrieved: %i" % (len(df),)

    Thanks for any help you can render. M
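
    A common way to handle this is to convert the column explicitly and move it into the index. The sketch below uses pandas.to_datetime; note the question is on Pandas 0.8, where the exact API may differ, and the small stand-in frame below only mimics the "tsz"/"f" columns produced by the query above.

        import pandas as pd

        # Stand-in for the frame returned by frame_query above; "tsz" arrives
        # as dtype object (strings), just as it does from the database.
        df = pd.DataFrame({"f": [1.5, 2.0, 0.7],
                           "tsz": ["2010-01-03 10:00:00+00",
                                   "2010-02-14 08:30:00+00",
                                   "2010-03-01 23:15:00+00"]})

        # Convert the object column to real timestamps (unparseable rows become NaT).
        df["tsz"] = pd.to_datetime(df["tsz"], errors="coerce", utc=True)

        # With a DatetimeIndex, seasonal grouping and resampling become straightforward.
        df = df.set_index("tsz").sort_index()
        print(df["f"].groupby(df.index.month).mean())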

    Read the article

  • make: invoke command for multiple targets of multiple files?

    - by marvin2k
    Hi, I'm looking to optimize an existing Makefile. It's used to create multiple plots (using Octave) for every logfile in a given directory, using a scriptfile for every plot which takes a logfilename as an argument. At the moment, I use one single rule for every kind of plot available, with a handwritten call to Octave giving the specific scriptfile/logfile as an argument. It would be nice if every plot had "its" Octave script as a dependency (plus the logfile, of course), so only one plot is regenerated if its script is changed. Since I don't want to type that much, I wonder how I can simplify this by using only one general rule to build "a" plot? To make it clearer:

        Logfile:    "$(LOGNAME).log"
        Scriptfile: "plot$(PLOTNAME).m"
        creates:    "$(LOGNAME)_$(PLOTNAME).png"

    The first thing I had in mind:

        %1_%2.png: %1.log
                $(OCTAVE) --eval "plot$<2('$<1')"

    But this seems not to be allowed. Could someone give me a hint?

    Read the article

  • Using ASP.NET Session for Lifetime Management (Unity)

    - by Sigray
    I am considering using Unity to manage the lifetime of a custom user class instance. I am planning on extending the LifetimeManager with a custom ASP.NET session manager. What I want to be able to do is store and retrieve the currently logged in user object from my custom classes, and have Unity get the instance of User from the session object in ASP.NET, or (when in a Win32 project) retrieve it statically or from the current thread. So far my best solution is to create a static instance of my Unity container on startup, and use the Resolve method to get my User object from each of my classes. However, this seems to create a dependency on the unity container in my other classes. What is the more "Unity" way of accomplishing this goal? I would like to be able to read/replace the current User instance from any class.

    Read the article

  • Should I use Spring or Guice for a Tomcat/Wicket/Hibernate project?

    - by Trevor Allred
    I'm building a new web application that uses Linux, Apache, Tomcat, Wicket, JPA/Hibernate, and MySQL. My primary need is Dependency Injection, which both Spring and Guice can do well. I think I need the transaction support that would come with Spring and JTA, but I'm not sure. The site will probably have about 20 pages and I'm not expecting huge traffic. Should I use Spring or Guice? Feel free to ask any follow-up questions and I'll do my best to update this.

    Read the article

  • Java resource management: please help me understand these Findbugs results.

    - by java.is.for.desktop
    Hello, everyone! Findbugs bugs me about a method which opens two Closeable instances, but I can't understand why.

    Source:

        public static void sourceXmlToBeautifiedXml(File input, File output)
                throws TransformerException, IOException, JAXBException {
            FileReader fileReader = new FileReader(input);
            FileWriter fileWriter = new FileWriter(output);
            try {
                // may throw something
                sourceXmlToBeautifiedXml(fileReader, fileWriter);
            } finally {
                try {
                    fileReader.close();
                } finally {
                    fileWriter.close();
                }
            }
        }

    Findbugs analysis: Findbugs tells me "Method [...] may fail to clean up java.io.Reader [...]" and points to the line with FileReader fileReader = ...

    Question: Who is wrong: me or Findbugs?

    Read the article

  • Processing an n-ary ANTLR AST one child at a time

    - by Chris Lieb
    I currently have a compiler that uses an AST where all children of a code block are on the same level (i.e., block.children == {stm1, stm2, stm3, etc...}). I am trying to do liveness analysis on this tree, which means that I need to take the value returned from the processing of stm1 and then pass it to stm2, then take the value returned by stm2 and pass it to stm3, and so on. I do not see a way of executing the child rules in this fashion when the AST is structured this way. Is there a way to allow me to chain the execution of the child grammar items with my given AST, or am I going to have to go through the painful process of refactoring the parser to generate a nested structure and updating the rest of the compiler to work with the new AST? Example ANTLR grammar fragment:

        block
            : ^(BLOCK statement*)
            ;
        statement
            : // stuff
            ;

    What I hope I don't have to go to:

        block
            : ^(BLOCK statementList)
            ;
        statementList
            : ^(StmLst statement statement+)
            | ^(StmLst statement)
            ;
        statement
            : // stuff
            ;

    Read the article

  • R: including model specifications in xtable(anova(...))

    - by HamiltonUlmer
    Hello R comrades: I have a bunch of loglinear models, which, for our purposes, will just be glm() objects called mx, my, mz. I want to get a nicely-formatted xtable of the analysis of deviance, so naturally I would want to perform xtable(anova(mx, my, mz, test = "Chisq")). The vanilla output of xtable, however, doesn't include the model specifications. I'd like to include those for all the ANOVA tests I'm running, so if there isn't a parameter I'm missing that does this, I'll probably just have to hack up my own solution. But looking over the help page, there doesn't seem to be an easy way to include the model specifications. Any thoughts? Alternatives? If it helps, this was done in R 2.9.1 with xtable 1.5-5.

    Read the article

  • Screen capture during testing

    - by Edwward
    This is an application for reviewing performance tests. Simple in concept, tricky to describe. Picture:

        1) Recording interactions with a WPF program so the inputs can be played back.
        2) Playing the inputs back while doing a continuous screen capture.
        3) Capturing wall time as well as continuous CPU percentages during playback.
        4) Repeating steps (2) and (3) lots of times.
        5) Writing the relevant stuff out to files/db.
        6) Reading it and putting it all in a fancy UI for easy review/analysis.

    The killer for me is (2). I could use some guidance on a good, possibly commercial, screen capture SDK. I would also welcome the news that my whole problem already has a solution. And of course any thoughts on the overall idea would also be great. Thanks. Ed
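
    Step (3) on its own is cheap to prototype. The sketch below is Python with psutil, purely to illustrate the wall-time/CPU sampling loop that runs alongside playback; the library choice, sampling interval, and CSV output are assumptions, not part of the original question (the real application is WPF/.NET).

        import csv
        import threading
        import time

        import psutil  # assumed available; any per-interval CPU sampler would do

        def sample_cpu(stop_event, out_path, interval=0.5):
            """Write (elapsed seconds, system CPU %) rows until stop_event is set."""
            start = time.monotonic()
            with open(out_path, "w", newline="") as f:
                writer = csv.writer(f)
                writer.writerow(["elapsed_s", "cpu_percent"])
                while not stop_event.is_set():
                    # cpu_percent(interval=...) blocks for the interval and returns
                    # the average utilisation over it, so this loop self-paces.
                    writer.writerow([round(time.monotonic() - start, 3),
                                     psutil.cpu_percent(interval=interval)])

        stop = threading.Event()
        sampler = threading.Thread(target=sample_cpu, args=(stop, "playback_cpu.csv"))
        sampler.start()
        time.sleep(2)  # stand-in for step (2): playing the recorded inputs back
        stop.set()
        sampler.join()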

    Read the article

  • Have you switched from CodeIgniter to Kohana?

    - by Eli
    Hi All, I usually just work with straight PHP, but want to try MVC and see if a framework will really speed up development. After much waffling, analysis paralysis, and many dumb SO questions, I thought I had settled on CodeIgniter for my next PHP project. However, I am now seriously considering Kohana. Has anyone made the switch from CI to Kohana? If so, why? What's better about the actual code, libraries, etc? Edit: Hi All, I did end up going with Kohana. It's easy to use, but more importantly, it's easy NOT to use, since there are a lot of things I like to work with native PHP for. It's ridiculously extensible, well coded, and seems like it is beginning to pull out ahead of CI in a few things like putting views in views, passing subview data, etc. I am sure CI will catch up, but Kohana should be 3 steps ahead by then =o)

    Read the article

  • How do I use dependencies in a makefile without calling a target?

    - by rassie
    I'm using makefiles to convert an internal file format to an XML file which is sent to other colleagues. They would make changes to the XML file and send it back to us (don't ask, this needs to be this way ;)). I'd like to use my makefile to update the internal files when this XML changes. So I have these rules:

        %.internal: $(DATAFILES)
                # Read changes from XML if any
                # Create internal representation here

        %.xml: %.internal
                # Convert to XML here

    Now the XML could change because of the workflow described above. But since no data files have changed, make would tell me that file.internal is up to date. I would like to avoid making the %.internal target phony, and a circular dependency on %.xml obviously doesn't work. Any other way I could force make to check for changes in the XML file and re-build %.internal?

    Read the article

  • Performance Overhead of Perf Event Subsystem in Linux Kernel

    - by Bo Xiao
    Performance counters for Linux are a new kernel-based subsystem that provides a framework for all things performance analysis. It covers hardware-level features (CPU/PMU, Performance Monitoring Unit) as well as software features (software counters, tracepoints). Since 2.6.33, the kernel provides the 'perf_event_create_kernel_counter' kernel API for developers to create kernel counters to collect system runtime information. What concerns me most is the performance impact on the overall system when tracepoints/ftrace are enabled. There are no docs I can find about this. I was once told that ftrace was implemented by dynamically patching code; will it slow the system dramatically?

    Read the article

  • JAXWS and sessions

    - by Pace
    I'm fairly new to writing web services. I'm working on a SOAP service using JAXWS. I'd like to be able to have users log in and, in my service, know which user is issuing a command. In other words, have some session handling. One way I've seen to do this is to use cookies and access the HTTP layer from my web service. However, this puts a dependency on using HTTP as the transport layer (I'm aware HTTP is almost always the transport layer, but I'm a purist). Is there a better approach which keeps the service layer unaware of the transport layer? Is there some way I can accomplish this with servlet filters? I'd like the answer to be as framework-agnostic as possible.

    Read the article

  • NetBeans Platform Unit Test Library Dependencies

    - by Ben Hammond
    I am working on a Netbeans Platform RCP application. I use jmock in my unit tests and I have created a Library Wrapper Module to import the necessary libraries. The module has a section named 'Libraries' and another section named 'Unit Test Libraries'. I hoped that I could add the JMock Library Wrapper to the 'Unit Test Libraries', but when I run the unit tests I get the error 'package org.jmock does not exist'. If I import the JMock Library Wrapper into the main 'Libraries' element then it works, but this feels wrong. Maven allows me to specify unit-test-only dependencies, and I assumed that the NetBeans Platform did the same. Should this be possible? Am I doing something wrong? Should I resign myself to a run-time dependency on the unit-test libraries (ugh)?

    Read the article
