Search Results

Search found 16914 results on 677 pages for 'single threaded'.

Page 546/677

  • Intent Bundle returns Null every time?

    - by Max
    I have some extras sent to a new intent. The receiving activity grabs the bundle and tests whether it is null. Every single time the test says it is null, even though I am able to get the passed values and use them. Can anyone see what is wrong with the if statement?

        Intent i = getIntent();
        Bundle b = i.getExtras();
        int picked = b.getInt("PICK");
        int correct = b.getInt("CORR");
        type = b.getString("RAND");
        if (b == null || !b.containsKey("WELL")) {
            Log.v("BUNDLE", "bun is null");
        } else {
            Log.v("BUNDLE", "Got bun well");
        }

    EDIT: Here is where the bundle is created.

        Intent intent = new Intent(this, app.pack.son.class);
        Bundle b = new Bundle();
        b.putInt("PICK", pick);
        b.putInt("CORR", corr);
        b.putString("RAND", "yes");
        intent.putExtras(b);
        startActivity(intent);
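    A hedged reading of the code above: the bundle is dereferenced (b.getInt, b.getString) before the null test, and the test also requires a "WELL" key that the creating code never puts in, so the "bun is null" branch fires even when the bundle itself is present. A minimal reordered sketch, assuming the same keys as in the question:

        Bundle b = getIntent().getExtras();
        if (b == null) {
            Log.v("BUNDLE", "no extras at all");
        } else if (!b.containsKey("WELL")) {
            // only PICK, CORR and RAND are ever put into the bundle
            Log.v("BUNDLE", "extras present, but no WELL key");
        } else {
            Log.v("BUNDLE", "got WELL");
            int picked = b.getInt("PICK"); // safe to read once b is known to be non-null
        }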

    Read the article

  • FLV performance and garbage collection

    - by justinbach
    I'm building a large flash site (AS3) that uses huge FLVs as transition videos from section to section. The FLVs are 1280x800 and are being scaled to 1680x1050 (much of which is not displayed to users with smaller screens), and are around 5-8 seconds apiece. I'm encoding the videos using On2's hi-def codec, VP6-S, and playback is pretty good with native FLV players, Perian-equipped Quicktime, and simple proof-of-concept FLV playback apps built in AS3. The problem I'm having is that in the context of the actual site, playback isn't as smooth; the framerate isn't quite as good as it should be, and more problematically, there's occasional jerkiness and dropped frames (sometimes pausing the video for as long as a quarter of a second or so). My guess is that this is being caused by garbage collection in the Flash player, which happens nondeterministically and is therefore hard to test and control for. I'm using a single instance of FLVPlayback to play the videos; I originally was using NetStream objects and so forth directly but switched to FLVPlayback for this reason. Has anyone experienced this sort of jerkiness with FLVPlayback (or more generally, with hi-def Flash video)? Am I right about GC being the culprit here, and if so, is there any way to prevent it during playback of these system-intensive transitions?

    Read the article

  • Perf4J Graph Output from Log File

    - by manyxcxi
    I currently have a long-running process that I am trying to analyze with Perf4J. It writes results in CSV format to its own log file using the AsyncCoalescingStatisticsAppender and a StatisticsCsvLayout on the file appender. My question is: when I try to use the --graph option from the command line (using the perf4j jar), it doesn't populate the data points - it doesn't populate anything. Are my appenders set up incorrectly? The log file contains hundreds (sometimes thousands) of data points for about 10 different tag names.

        <appender name="perfAppender" class="org.apache.log4j.FileAppender">
            <param name="File" value="perfStats.log"/>
            <layout class="org.perf4j.log4j.StatisticsCsvLayout">
            </layout>
        </appender>

        <appender name="CoalescingStatistics" class="org.perf4j.log4j.AsyncCoalescingStatisticsAppender">
            <!-- The TimeSlice option is used to determine the time window for which all received
                 StopWatch logs are aggregated to create a single GroupedTimingStatistics log.
                 Here we set it to 10 seconds, overriding the default of 30000 ms -->
            <param name="TimeSlice" value="10000"/>
            <appender-ref ref="ConsoleAppender"/>
            <appender-ref ref="CompositeRollingFileAppender"/>
            <appender-ref ref="perfAppender"/>
        </appender>

    Read the article

  • Modify loggingConfiguration programmatically (Enterprise Library)

    - by alhambraeidos
    Hi all, I have an app.config in my Windows application with a loggingConfiguration section (Enterprise Library 4.1). I need to do the following programmatically: get a list of all listeners in loggingConfiguration, modify the fileName=".\Trazas\Excepciones.log" property of several RollingFlatFileTraceListeners, and modify several properties of the AuthenticatingEmailTraceListener listener. Any help, please - I haven't found any reference or samples. Thanks in advance. Greetings

        <listeners>
            <add name="Excepciones RollingFile Listener"
                 fileName=".\Trazas\Excepciones.log"
                 formatter="Text Single Formatter"
                 footer="&lt;/Excepcion&gt;"
                 header="&lt;Excepcion&gt;"
                 rollFileExistsBehavior="Overwrite"
                 rollInterval="None"
                 rollSizeKB="1500"
                 timeStampPattern="yyyy-MM-dd"
                 listenerDataType="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.RollingFlatFileTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
                 traceOutputOptions="None"
                 filter="All"
                 type="Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.RollingFlatFileTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
            <add name="AuthEmailTraceListener"
                 type="zzzz.Frk.Logging.AuthEmailTraceListener.AuthenticatingEmailTraceListener, zzzz.Frk.Logging.AuthEmailTraceListener"
                 listenerDataType="zzzz.Frk.Logging.AuthEmailTraceListener.AuthenticatingEmailTraceListenerData, zzzz.Frk.Logging.AuthEmailTraceListener"
                 formatter="Exception Formatter"
                 traceOutputOptions="None"
                 toAddress="[email protected]"
                 fromAddress="[email protected]"
                 subjectLineStarter=" Excepción detectada - "
                 subjectLineEnder="incidencias"
                 smtpServer="smtp.gmail.com"
                 smtpPort="587"
                 authenticate="true"
                 username="[email protected]"
                 password="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
                 enableSsl="true" />
        </listeners>

    Read the article

  • Trouble compiling C/C++ project in NetBeans 6.8 with MinGW on Windows

    - by dontoo
    I am learning C, and because VC++ 2008 doesn't support C99 features I have just installed NetBeans and configured it to work with MinGW. I can compile a single-file project (main.c) and use the debugger, but when I add a new file to the project I get an "undefined reference to ..." error for the functions defined in that file. Apparently MinGW doesn't link my files, or I don't know how to add them to my project properly (the C standard library files work fine).

        /bin/make -f nbproject/Makefile-Debug.mk SUBPROJECTS= .build-conf
        make[1]: Entering directory `/c/Users/don/Documents/NetBeansProjects/CppApplication_7'
        /bin/make -f nbproject/Makefile-Debug.mk dist/Debug/MinGW-Windows/cppapplication_7.exe
        make[2]: Entering directory `/c/Users/don/Documents/NetBeansProjects/CppApplication_7'
        mkdir -p dist/Debug/MinGW-Windows
        gcc.exe -o dist/Debug/MinGW-Windows/cppapplication_7 build/Debug/MinGW-Windows/main.o
        build/Debug/MinGW-Windows/main.o: In function `main':
        C:/Users/don/Documents/NetBeansProjects/CppApplication_7/main.c:5: undefined reference to `X'
        collect2: ld returned 1 exit status
        make[2]: *** [dist/Debug/MinGW-Windows/cppapplication_7.exe] Error 1
        make[2]: Leaving directory `/c/Users/don/Documents/NetBeansProjects/CppApplication_7'
        make[1]: *** [.build-conf] Error 2
        make[1]: Leaving directory `/c/Users/don/Documents/NetBeansProjects/CppApplication_7'
        make: *** [.build-impl] Error 2
        BUILD FAILED (exit value 2, total time: 1s)

    main.c

        #include "header.h"

        int main(int argc, char** argv) {
            X();
            return (EXIT_SUCCESS);
        }

    header.h

        #ifndef _HEADER_H
        #define _HEADER_H

        #include <stdio.h>
        #include <stdlib.h>

        void X(void);

        #endif

    source.c

        #include "header.h"

        void X(void) {
            printf("dsfdas");
        }

    Read the article

  • How to deploy SQL Reporting 2005 when Data Sources are locked?

    - by spoulson
    The DBAs here maintain all SQL Server and SQL Reporting servers. I have a custom-developed SQL Reporting 2005 project in Visual Studio that runs fine against my local SQL Database and Reporting instances. I need to deploy to a production server, so I had a folder created on a SQL Reporting 2005 server with permissions to upload files. Normally, a deploy from within Visual Studio is all that is needed to upload the report files. However, for security purposes, data sources are maintained explicitly by the DBAs and stored in a separate, locked-down common folder on the reporting server. I had them create the data source for me. When I attempt to deploy from VS, it gives me the error "The item '/Data Sources' already exists." I get this whether I'm deploying the whole project or just a single report file. I already set OverwriteDataSources=false in the project properties. The TargetServer URL and folder are verified correct. I suppose I could copy the files manually, but I'd like to be able to deploy from within VS. What could I be doing wrong?

    Read the article

  • Core data migration failing with "Can't find model for source store" but managedObjectModel for source is present

    - by Ira Cooke
    I have a Cocoa application using Core Data which is now at the 4th version of its managed object model. My managed object model contains abstract entities, but so far I have managed to get migration working by creating appropriate mapping models and creating my persistent store using addPersistentStoreWithType:configuration:URL:options:error: with the NSMigratePersistentStoresAutomaticallyOption set to YES.

        NSDictionary *optionsDictionary =
            [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                        forKey:NSMigratePersistentStoresAutomaticallyOption];
        NSURL *url = [NSURL fileURLWithPath:[applicationSupportFolder stringByAppendingPathComponent:@"MyApp.xml"]];
        NSError *error = nil;
        [theCoordinator addPersistentStoreWithType:NSXMLStoreType
                                     configuration:nil
                                               URL:url
                                           options:optionsDictionary
                                             error:&error];

    This works fine when I migrate from model version 3 to 4, a migration that involves adding attributes to several entities. Now when I try to add a new model version (version 5), the call to addPersistentStoreWithType returns nil and the error remains empty. The migration from 4 to 5 involves adding a single attribute. I am struggling to debug the problem and have checked all of the following:

    - The source database is in fact at version 4 and the persistentStoreCoordinator's managed object model is at version 5.
    - The 4-to-5 mapping model as well as the managed object models for versions 4 and 5 are present in the resources folder of my built application.
    - I've tried various model upgrade paths. Strangely, I find that upgrading from an early version 3 to 5 works, but upgrading from 4 to 5 fails.
    - I've tried adding a custom entity migration policy for migration of the entity whose attributes are changing; in this case I overrode the method beginEntityMapping:manager:error:. Interestingly, this method does get called when migration works (i.e. when I migrate from 3 to 4, or from 3 to 5), but it does not get called in the case that fails (4 to 5).

    I'm pretty much at a loss as to where to proceed. Any ideas to help debug this problem would be much appreciated.

    Read the article

  • What is the best way to auto-generate INSERT statements for a SQL Server table?

    - by JosephStyons
    We are writing a new application, and while testing we will need a bunch of dummy data. I've added that data by using MS Access to dump Excel files into the relevant tables. Every so often we want to "refresh" the relevant tables, which means dropping them all, re-creating them, and running a saved MS Access append query. The first part (dropping and re-creating) is an easy SQL script, but the last part makes me cringe. I want a single setup script that has a bunch of INSERTs to regenerate the dummy data. I have the data in the tables now. What is the best way to automatically generate a big list of INSERT statements from that dataset? I'm thinking of something like in TOAD (for Oracle), where you can right-click on a grid and click Save As > Insert Statements, and it will just dump a big SQL script wherever you want. The only way I can think of doing it is to save the table to an Excel sheet and then write an Excel formula to create an INSERT for every row, which is surely not the best way. I'm using the 2008 Management Studio to connect to a SQL Server 2005 database.

    Read the article

  • Thread-local memory: using std::string's internal buffer for C-style scratch memory

    - by Hassan Syed
    I am using Protocol Buffers and OpenSSL to generate HMACs and then CBC-encrypt the two fields to obfuscate the session cookies -- similar to Kerberos tokens. Protocol Buffers' API communicates with std::strings and has a buffer caching mechanism; I exploit the caching mechanism for successive calls in the same thread by placing it in thread-local memory; additionally, the OpenSSL HMAC and EVP CTXs are also placed in the same thread-local memory structure (see this question for some detail on why I use thread-local memory and the massive amount of speedup it enables even with a single thread). The generation and deserialization of these cookie strings -- "my algorithms" -- use intermediary void*s and std::strings, and since Protocol Buffers has an internal memory retention mechanism I want the same characteristics for "my algorithms". So how do I implement a common scratch memory? I don't know much about the rdbuf (streambuf - stringbuf??) of the std::string object. I would presumably need to grow it to the lowest common size ever encountered during the execution of "my algorithms". Thoughts? My question, I guess, would be: "is the internal buffer of a string re-usable, and if so, how?" Edit: See comments to Vlad's answer please.

    Read the article

  • Hotkeys in webapps

    - by Johoo
    When creating webapps, are there any guidelines on which keys you can use for your own hotkeys without overriding too many of the browser's default hotkeys? For example, I might want to have a custom copy command for copying entire sets of data that only make sense for my program, instead of just text. The logical combination for this would be Ctrl+C, but that would destroy the default copy hotkey for normal text. One solution I was thinking about is to only catch the hotkey when it "makes sense", but when you use some advanced custom selection it might be hard to differentiate whether your data is focused, whether text is selected, or both. Right now I am only using single keys as the hotkey, so just 'c' for the example above, and this seems to be what most other sites are doing too. The problem is that if you have text input this doesn't work so well. Is this the best solution? To clarify, I'm talking about advanced webapps that behave more like normal programs and not just some website presenting information (even though I think these guidelines would be valid for both cases). So for the copy example it might not be a big deal if you can't copy the text in the menu, but if Ctrl+Tab, Alt+D or Ctrl+E didn't work I would be really pissed, cough Flash cough.

    Read the article

  • iPhone OS: Is there a way to set up KVO between two ManagedObject Entities?

    - by nickthedude
    I have 2 entities I want to link with KVO: a single statTracker class that keeps track of different stats, and an achievement class that contains information about achievements. Ideally, I want to be able to set up KVO by having an instance of the achievement class observe a value on the statTracker class, and also set up a threshold value at which the achievement instance should be "triggered" (triggering in this case would mean showing a UIAlertView and changing a property on the achievement class). I'd also like to set these relationships up on instantiation of the achievement class if possible, so kind of like this:

        Achievement *achievement1 = (Achievement *)[NSEntityDescription
            insertNewObjectForEntityForName:@"Achievement"
                     inManagedObjectContext:[[CoreDataSingleton sharedCoreDataSingleton] managedObjectContext]];
        [achievement1 setAchievementName:@"2 time launcher"];
        [achievement1 setAchievementDescription:@"So you've decided to come back for more eh? Here are some achievement points to get you going"];
        [achievement1 setAchievementPoints:[NSNumber numberWithInt:300]];
        [achievement1 setObjectToObserve:@"statTrackerInstace"
                       propertyToObserve:@"timesLaunched"
          valueOfPropertToSatisfyAchievement:2];

    Does anyone out there know how I would set this up? Is there some way I could do this by way of relationships that I'm not seeing? Thanks, Nick

    Read the article

  • How can I make this method more Scalalicious

    - by Neil Chambers
    I have a function that calculates the left and right node values for some collection of treeNodes, given a simple node.id / node.parentId association. It's very simple and works well enough... but, well, I am wondering if there is a more idiomatic approach. Specifically, is there a way to track the left/right values without using some externally tracked value but still keep the tasty recursion?

        /*
         * A tree node
         */
        case class TreeNode(val id: String, val parentId: String) {
          var left: Int = 0
          var right: Int = 0
        }

        /*
         * a method to compute the left/right node values
         */
        def walktree(node: TreeNode) = {
          /*
           * increment state for the inner function
           */
          var c = 0

          /*
           * A method to set the increment state
           */
          def increment = { c += 1; c } // poo

          /*
           * the tasty inner method
           * treeNodes is a List[TreeNode]
           */
          def walk(node: TreeNode): Unit = {
            node.left = increment

            /*
             * recurse on all direct descendants
             */
            treeNodes filter (_.parentId == node.id) foreach (walk(_))

            node.right = increment
          }

          walk(node)
        }

        walktree(someRootNode)

    Edit - The list of nodes is taken from a database. Pulling the nodes into a proper tree would take too much time; I am pulling a flat list into memory, and all I have is an association via node ids as pertains to parents and children. Adding left/right node values allows me to get a snapshot of all children (and children's children) with a single SQL query. The calculation needs to run very quickly in order to maintain data integrity should parent-child associations change (which they do very frequently). In addition to using the awesome Scala collections, I've also boosted speed by using parallel processing for some pre/post filtering on the tree nodes. I wanted to find a more idiomatic way of tracking the left/right node values. After looking at the answers listed, I have settled on this synthesised version:

        def walktree(node: TreeNode) = {
          def walk(node: TreeNode, counter: Int): Int = {
            node.left = counter
            node.right = treeNodes
              .filter(_.parentId == node.id)
              .foldLeft(counter + 1) { (counter, curnode) => walk(curnode, counter) + 1 }
            node.right
          }

          walk(node, 1)
        }

    Read the article

  • How do I start up an NSRunLoop, and ensure that it has an NSAutoreleasePool that gets emptied?

    - by Nick Forge
    I have a "sync" task that relies on several "sub-tasks", which include asynchronous network operations, but which all require access to a single NSManagedObjectContext. Due to the threading requirements of NSManagedObjectContexts, I need every one of these sub-tasks to execute on the same thread. Due to the amount of processing being done in some of these tasks, I need them to be on a background thread. At the moment, I'm launching a new thread by doing this in my singleton SyncEngine object's -init method:

        [self performSelectorInBackground:@selector(initializeSyncThread) withObject:nil];

    The -initializeSyncThread method looks like this:

        - (void)initializeSyncThread
        {
            self.syncThread = [NSThread currentThread];
            self.managedObjectContext = [(MyAppDelegate *)[UIApplication sharedApplication].delegate createManagedObjectContext];
            NSRunLoop *runLoop = [NSRunLoop currentRunLoop];
            [runLoop run];
        }

    Is this the correct way to start up the NSRunLoop for this thread? Is there a better way to do it? The run loop only needs to handle 'performSelector' sources, and it (and its thread) should be around for the lifetime of the process. When it comes to setting up an NSAutoreleasePool, should I do this by using Run Loop Observers to create the autorelease pool and drain it after every run-through?

    Read the article

  • Printing HTML blocks

    - by Lem0n
    I have a page with a few tables that I want to be printable. I want the following behaviour:

    1) Add a page break if the next table fits on a single page but won't fit on the current page (because of other stuff already printed on this page).
    2) Print the "table header" again in case a table has to be broken (I guess that's the default behaviour).

    Any ideas, especially on the first issue? Maybe some CSS can help? I'll give an example. I have a page with 4 tables, all of them with 10 lines, except the third one, which has 50 lines. The first and second go on the first page. Since the third one won't fit on the same page, but will fit on a page alone, it's printed on a page alone... and then the fourth table is printed on the third page (in case it doesn't fit together on the second page). But if the third table had 300 lines and would be broken anyway, it could have started printing on the first page.

    Read the article

  • categorize a set of phrases into a set of similar phrases

    - by Dingo
    I have a few apps that generate textual tracing information (logs) to log files. The tracing information is the typical printf() style - i.e. there are a lot of log entries that are similar (same format argument to printf) but differ where the format string had parameters. What would be an algorithm (URL, books, articles, ...) that would allow me to analyze the log entries and categorize them into several bins/containers, where each bin has one associated format? Essentially, what I would like is to transform the raw log entries into (formatA, arg0 ... argN) instances, where formatA is shared among many log entries. formatA does not have to be the exact format used to generate the entry (even more so if this makes the algorithm simpler). Most of the literature and web info I found deals with exact matching, maximum substring matching, or k-difference matching (with k known/fixed ahead of time). Also, it focuses on matching a pair of (long) strings, or on a single bin output (one match among all inputs). My case is somewhat different, since I have to discover what represents a (good-enough) match (generally a sequence of discontinuous strings), and then categorize each input entry into one of the discovered matches. Lastly, I'm not looking for a perfect algorithm, but something simple/easy to maintain. Thanks!
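    A minimal sketch of the simplest normalize-and-hash approach, in Java, using a hypothetical LogBinner helper (not from the question): tokens that look variable (numbers, hex ids, quoted strings) are replaced with a placeholder, and entries that collapse to the same template land in the same bin. If that proves too coarse, token-position clustering (e.g. the SLCT family of log-template miners) is the usual next step.

        import java.util.*;
        import java.util.regex.Pattern;

        // Hypothetical helper: bins printf-style log entries by an approximate format template.
        public class LogBinner {
            // Treat numbers, hex ids and quoted strings as the variable parts of an entry.
            private static final Pattern VARIABLE =
                    Pattern.compile("\"[^\"]*\"|0x[0-9a-fA-F]+|\\d+");

            static String templateOf(String entry) {
                return VARIABLE.matcher(entry).replaceAll("<*>");
            }

            static Map<String, List<String>> bin(Iterable<String> entries) {
                Map<String, List<String>> bins = new LinkedHashMap<String, List<String>>();
                for (String entry : entries) {
                    String key = templateOf(entry);
                    List<String> bucket = bins.get(key);
                    if (bucket == null) {
                        bucket = new ArrayList<String>();
                        bins.put(key, bucket);
                    }
                    bucket.add(entry);
                }
                return bins;
            }
        }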

    Read the article

  • JSESSIONID collision between two servers on same ip but different ports

    - by Steve Armstrong
    I've got a situation where I have two different webapps running on a single server, using different ports. They're both running Java's Jetty servlet container, so they both use a cookie parameter named JSESSIONID to track the session id. These two webapps are fighting over the session id:

    1. Open a Firefox tab and go to WebApp1.
    2. WebApp1's HTTP response has a Set-Cookie header with JSESSIONID=1.
    3. Firefox now has a Cookie header with JSESSIONID=1 in all its HTTP requests to WebApp1.
    4. Open a second Firefox tab and go to WebApp2.
    5. The HTTP request to WebApp2 also has a Cookie header with JSESSIONID=1, but in the doGet, when I call req.getSession(false); I get null. And if I call req.getSession(true) I get a new Session object, but then the HTTP response from WebApp2 has a Set-Cookie header with JSESSIONID=20.
    6. Now WebApp2 has a working session, but WebApp1's session is gone. Going to WebApp1 will give me a new session, blowing away WebApp2's session.
    7. Continue forever.

    So the sessions are thrashing between each web app. I'd really like req.getSession(false) to return a valid session if there's already a JSESSIONID cookie defined. One option is to basically reimplement the session framework with a HashMap and cookies called WEBAPP1SESSIONID and WEBAPP2SESSIONID, but that sucks, and means I'll have to hack the new session stuff into ActionServlet and a few other places. This must be a problem others have encountered. Is Jetty's HttpServletRequest.getSession(boolean) just crappy?
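    A minimal sketch of the cookie-rename idea without reimplementing sessions, assuming a Servlet 3.0+ container (older Jetty versions expose an equivalent session-cookie-name setting on their session manager instead); the listener class name is made up for illustration. WebApp2 would register the same kind of listener with a different name, so the two cookies stop colliding on the shared host (browsers ignore the port when matching cookies):

        import javax.servlet.ServletContextEvent;
        import javax.servlet.ServletContextListener;

        // Registered in WebApp1's web.xml (or annotated with @WebListener).
        public class SessionCookieNameListener implements ServletContextListener {
            public void contextInitialized(ServletContextEvent sce) {
                // Rename this webapp's session cookie so it no longer clashes with WebApp2's.
                sce.getServletContext().getSessionCookieConfig().setName("WEBAPP1SESSIONID");
            }

            public void contextDestroyed(ServletContextEvent sce) {
                // nothing to clean up
            }
        }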

    Read the article

  • Testing performance of queries in MySQL

    - by Unreason
    I am trying to set up a script that would test the performance of queries on a development MySQL server. Here are more details:

    - I have root access.
    - I am the only user accessing the server.
    - I am mostly interested in InnoDB performance.
    - The queries I am optimizing are mostly search queries (SELECT ... LIKE '%xy%').

    What I want to do is create a reliable testing environment for measuring the speed of a single query, free from dependencies on other variables. Until now I have been using SQL_NO_CACHE, but sometimes the results of such tests also show caching behaviour - taking much longer to execute on the first run and taking less time on subsequent runs. If someone can explain this behaviour in full detail I might stick with SQL_NO_CACHE; I do believe that it might be due to the file system cache and/or caching of the indexes used to execute the query, as this post explains. It is not clear to me when the Buffer Pool and Key Buffer get invalidated or how they might interfere with testing. So, short of restarting the MySQL server, how would you recommend setting up an environment that would be reliable in determining whether one query performs better than the other?

    Read the article

  • PHP: Aggregate Model Classes or Uber Model Classes?

    - by sunwukung
    In many of the discussions regarding the M in MVC (sidestepping ORM controversies for a moment), I commonly see Model classes described as object representations of table data (be that an Active Record, Table Gateway, Row Gateway or Domain Model/Mapper). Martin Fowler warns against the development of an anemic domain model, i.e. a class that is nothing more than a wrapper for CRUD functionality. I've been working on an MVC application for a couple of months now. The DBAL in the application started out simple (on account of my understanding - oh, the benefits of hindsight), and is organised so that Controllers invoke Business Logic classes, which in turn access the database via DAO/Transaction Script classes pertinent to the task at hand. There are a few "Entity" classes that aggregate these DAO objects to provide a convenient CRUD wrapper, but also embody some of the "behaviour" of that domain concept (for example, a User - since it's easy to isolate). Taking a look at some of the code, and thinking about refactoring some of it into a Rich Domain Model, it occurred to me that were I to try to wrap the CRUD routines and behaviour of, say, a Company into a single "Model" class, that would be a sizeable class. So, my question is this: do Models represent domain objects, business logic, service layers, or all of the above combined? How do you go about defining the responsibilities for these components?

    Read the article

  • Better way to call superclass method in ExtJS

    - by Rene Saarsoo
    All the ExtJS documentation and examples I have read suggest calling superclass methods like this:

        MyApp.MyPanel = Ext.extend(Ext.Panel, {
            initComponent: function() {
                // do something MyPanel specific here...
                MyApp.MyPanel.superclass.initComponent.call(this);
            }
        });

    I have been using this pattern for quite some time, and the main problem is that when you rename your class you also have to change all the calls to superclass methods. That's quite inconvenient; often I will forget, and then I have to track down strange errors. But reading the source of Ext.extend() I discovered that instead I could use the superclass() or supr() methods that Ext.extend() adds to the prototype:

        MyApp.MyPanel = Ext.extend(Ext.Panel, {
            initComponent: function() {
                // do something MyPanel specific here...
                this.superclass().initComponent.call(this);
            }
        });

    In this code, renaming MyPanel to something else is simple - I just have to change the one line. But I have doubts... I haven't seen this documented anywhere, and the old wisdom says I shouldn't rely on undocumented behaviour. I haven't found a single use of these superclass() and supr() methods in the ExtJS source code. Why create them when you aren't going to use them? Maybe these methods were used in some older version of ExtJS but are deprecated now? But it seems such a useful feature; why would you deprecate it? So, should I use these methods or not?

    Read the article

  • SQL query - how to apply a limit within GROUP BY

    - by Raj
    Hey guys, assume I have a table named t1 with the following fields: ROWID, CID, PID, Score, SortKey. It has the following data:

        ROWID, CID, PID, Score, SortKey
        1, C1, P1, 10, 1
        2, C1, P2, 20, 2
        3, C1, P3, 30, 3
        4, C2, P4, 20, 3
        5, C2, P5, 30, 2
        6, C3, P6, 10, 1
        7, C3, P7, 20, 2

    What query do I write so that it applies GROUP BY on CID but, instead of returning a single result per group, returns a maximum of 2 results per group? Also, the WHERE condition is Score >= 20, and I want the results ordered by CID and SortKey. If I ran my query on the above data, I would expect the following result:

        RESULTS FOR C1 - note: ROWID 1 is not considered as its score < 20
        C1, P2, 20, 2
        C1, P3, 30, 3

        RESULTS FOR C2 - note: ROWID 5 appears before ROWID 4 as ROWID 5 has a lesser SortKey value
        C2, P5, 30, 2
        C2, P4, 20, 3

        RESULTS FOR C3 - note: ROWID 6 does not appear as its score is less than 20, so only 1 record is returned here
        C3, P7, 20, 2

    In short, I want a LIMIT within a GROUP BY. I want the simplest solution and want to avoid temp tables; subqueries are fine. Also note I am using SQLite for this.

    Read the article

  • TypeError: Error #1009 (null reference error) with Flash

    - by Wind Chimez
    I am not an expert in Flash, but I do work with AS and tweak Flash projects, though I don't have deep expertise in it. Currently I need to revamp a Flash website done by another guy, and the code base given to me throws the following error on execution:

        TypeError: Error #1009: Cannot access a property or method of a null object reference.
            at NewSite_fla::MainTimeline/__setProp_ContactOutP1_ContactOut_Contents_0()
            at NewSite_fla::MainTimeline/frame1()

    The project is structured so that the different sections are split into separate movie clips. There is no single main timeline; click actions on different areas of the separate movie clips navigate between them. All the AS event-handling logic is written inline in the FLA; no separate Document class exists. The Preloader movie clip is the first one to load. As I understand it, the error is thrown right at startup and is not caused by any inline ActionScript logic, because it is thrown even before the first inline AS code is hit. I am not able to figure out what exactly is causing the problem, or where to fix it. I am really stuck at this point. Any help would be great. I have not seen the particular solution I am looking for anywhere yet, even though Error #1009 is common. I uploaded the FLA here ( http://tinyurl.com/249e95p ) for reference. It may not work, since the included/referenced video files and so on are not there, but reviewing the action code/movie clips will be possible. Please let me know if somebody is able to trace the issue exactly.

    Read the article

  • Filtering records in app-engine (Java)

    - by Manjoor
    I have the following code running perfectly. It filters records based on a single parameter:

        public List<Orders> GetOrders(String email) {
            PersistenceManager pm = PMF.get().getPersistenceManager();
            Query query = pm.newQuery(Orders.class);
            query.setFilter("Email == pEmail");
            query.setOrdering("Id desc");
            query.declareParameters("String pEmail");
            query.setRange(0, 50);
            return (List<Orders>) query.execute(email);
        }

    Now I want to filter on multiple parameters. sdate and edate are the start date and end date; in the datastore they are saved as Date (not String).

        public List<Orders> GetOrders(String email, String icode, String sdate, String edate) {
            PersistenceManager pm = PMF.get().getPersistenceManager();
            Query query = pm.newQuery(Orders.class);
            query.setFilter("Email == pEmail");
            query.setFilter("ItemCode == pItemCode");
            query.declareParameters("String pEmail");
            query.declareParameters("String pItemCode");
            // ... set the filter and declare the other 2 parameters
            // ...
            query.setRange(0, 50);
            query.setOrdering("Id desc");
            return (List<Orders>) query.execute(email, icode, sdate, edate);
        }

    Any clue?
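    A minimal sketch of one way to combine the filters, assuming the Orders date property is called OrderDate (the real property name isn't shown above) and that sdate/edate are parsed into java.util.Date values first, since the datastore field is a Date and String parameters will not match. Note that setFilter and declareParameters each replace, rather than add to, the previous call, so everything goes into one string; executeWithArray is used because execute() only takes up to three positional parameters:

        public List<Orders> GetOrders(String email, String icode,
                                      java.util.Date startDate, java.util.Date endDate) {
            PersistenceManager pm = PMF.get().getPersistenceManager();
            Query query = pm.newQuery(Orders.class);
            query.setFilter("Email == pEmail && ItemCode == pItemCode"
                    + " && OrderDate >= pStart && OrderDate <= pEnd");
            query.declareParameters(
                    "String pEmail, String pItemCode, java.util.Date pStart, java.util.Date pEnd");
            query.setOrdering("Id desc");
            query.setRange(0, 50);
            return (List<Orders>) query.executeWithArray(email, icode, startDate, endDate);
        }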

    Read the article

  • JTextField vs JComboBox behaviour in JTable

    - by Ash
    Okay, this is a hard one to explain, but I'll try my best. I have a JTextField and a JComboBox in a JTable, whose getCellEditor method has been overridden as follows:

        public TableCellEditor getCellEditor(int row, int column) {
            if (column == 3) {
                // m_table is the JTable
                if (m_table.getSelectedRowCount() == 1) {
                    JComboBox choices = new JComboBox();
                    choices.setEditable(true);
                    choices.addItem(new String("item 1"));
                    return new DefaultCellEditor(choices);
                }
            }
            return super.getCellEditor(row, column);
        }

    Here are the behavioural differences (NOTE that from this point on, when I say JTextField or JComboBox, I mean the CELL in the JTable containing either component):

    1. When I click once on a JTextField, the cell is highlighted. Double clicking brings up the caret and I can input text. Whereas, with a JComboBox, single clicking brings up the caret to input text, as well as the combo drop-down button.
    2. When I tab or use the arrow keys to navigate to a JTextField and then start typing, the characters I type automatically get entered into the cell. Whereas, when I navigate to a JComboBox the same way and then start typing, nothing happens apart from the combo drop-down button appearing. None of the characters I type get entered unless I hit F2 first.

    So here's my question: what do I need to do to have JComboBoxes behave exactly like JTextFields in the two instances described above? Please don't ask why I'm doing what I'm doing or suggest alternatives (it's the way it is and I need to do it this way) and yes, I've read the API for all components in question... the problem is, it's a Swing API. Thanks in advance, Ash
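    A hedged sketch of the two knobs that usually account for these differences, written as a drop-in variant of the override above: DefaultCellEditor starts editing after one click for combo boxes but two for text fields, and JTable.setSurrendersFocusOnKeystroke controls whether a typed key is handed to the editor component when editing starts from the keyboard. This narrows the gap, though it may not remove it entirely:

        public TableCellEditor getCellEditor(int row, int column) {
            if (column == 3 && m_table.getSelectedRowCount() == 1) {
                JComboBox choices = new JComboBox();
                choices.setEditable(true);
                choices.addItem("item 1");

                DefaultCellEditor editor = new DefaultCellEditor(choices);
                editor.setClickCountToStart(2); // match the text field's double-click-to-edit behaviour
                return editor;
            }
            return super.getCellEditor(row, column);
        }

        // once, where the table is constructed:
        // m_table.setSurrendersFocusOnKeystroke(true); // forward the typed key to the combo's editor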

    Read the article

  • Commit is VERY slow in my NHibernate / SQLite project

    - by Tom Bushell
    I've just started doing some real-world performance testing on my Fluent NHibernate / SQLite project, and am experiencing some serious delays when I Commit to the database. By serious, I mean taking 20 - 30 seconds to Commit 30 K of data! This delay seems to get worse as the database grows. When the SQLite DB file is empty, commits happen almost instantly, but when it grows to 10 MB, I see these huge delays. The database has 16 tables, averaging 10 columns each. One possible problem is that I'm storing a dozen or so IList members, but they are typically only 200 elements long. But this is a recent addition to Fluent NHibernate automapping, which stores each float in a single table row, so maybe that's a potential problem. Any suggestions on how to track this down? I suspect SQLite is the culprit, but maybe it's NHibernate? I don't have any experience with profilers, but am thinking of getting one. I'm aware of NHibernate Profiler - any recommendations for profilers that work well with SQLite? Here's the method that saves the data - it's just a SaveOrUpdate call and a Commit, if you ignore all the error handling and debug logging.

        public static void SaveMeasurement(object measurement)
        {
            Debug.WriteLine("\r\n---SaveMeasurement---");

            // Get the application's database session
            var session = GetSession();

            using (var transaction = session.BeginTransaction())
            {
                try
                {
                    session.SaveOrUpdate(measurement);
                }
                catch (Exception e)
                {
                    throw new ApplicationException(
                        "\r\n SaveMeasurement->SaveOrUpdate failed\r\n\r\n", e);
                }

                try
                {
                    Debug.WriteLine("\r\n---Commit---");
                    transaction.Commit();
                    Debug.WriteLine("\r\n---Commit Complete---");
                }
                catch (Exception e)
                {
                    throw new ApplicationException(
                        "\r\n SaveMeasurement->Commit failed\r\n\r\n", e);
                }
            }
        }

    Read the article

  • Simplest way to automatically alter "const" value in Java during compile time

    - by Michael Mao
    Hi all: This question corresponds to my uni assignment, so I am very sorry to say I cannot adopt any of the following best practices in a short time -- you know -- the assignment is due tomorrow :(

    link to Best way to alter const in Java on StackOverflow

    Basically the only task (I hope) left for me is performance tuning. I've got a bunch of predefined "const" values in my single-class agent source code like this:

        // static final values
        private static final long FT_THRESHOLD = 400;
        private static final long FT_THRESHOLD_MARGIN = 50;
        private static final long FT_SMOOTH_PRICE_IDICATOR = 20;
        private static final long FT_BUY_PRICE_MODIFIER = 0;
        private static final long FT_LAST_ROUNDS_STARTTIME = 90;
        private static final long FT_AMOUNT_ADJUST_MODIFIER = 5;
        private static final long FT_HISTORY_PIRCES_LENGTH = 10;
        private static final long FT_TRACK_DURATION = 5;
        private static final int MAX_BED_BID_NUM_PER_AUC = 12;

    I can definitely alter the values manually and then compile the code to give it another go-around, but a thorough "statistical analysis" usually requires over 2000 executions, which typically lasts more than half an hour on my own laptop... So I hope there is a way to alter the values without digging into the source code to change the "const" values there, so I can automatically distribute the compiled code to other people's PCs and let them run the statistical analysis instead. One other reason for automatic value adjustment is that I can try using my own agent to defeat itself by choosing different "const" values. Although my values are derived from previous history and statistical results, they are far from the most optimized values. I hope there is an easy way I can quickly adopt, so I can have a good sleep tonight while the computer does everything for me... :) Any hints on this sort of stuff? Any suggestion is welcome and much appreciated.
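    A minimal sketch of the usual no-recompile workaround, assuming the values may come from JVM system properties (the property names below are made up): the fields stay static final, the defaults match the old hard-coded numbers, and each run can be launched with different -D flags, e.g. java -Dft.threshold=350 -jar agent.jar, so a driving script can sweep values across the 2000 runs without touching the source.

        // Defaults mirror the previous constants; override per run with -Dft.threshold=... etc.
        private static final long FT_THRESHOLD        = Long.getLong("ft.threshold", 400L);
        private static final long FT_THRESHOLD_MARGIN = Long.getLong("ft.threshold.margin", 50L);
        private static final long FT_TRACK_DURATION   = Long.getLong("ft.track.duration", 5L);
        private static final int  MAX_BED_BID_NUM_PER_AUC =
                Integer.getInteger("ft.max.bed.bid.num.per.auc", 12);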

    Read the article
