Search Results

Search found 19393 results on 776 pages for 'reference count'.


  • Weak hashmap with weak references to the values?

    - by Razor Storm
    I am building an android app where each entity has a bitmap that represents its sprite. However, each entity can be duplicated (there might be 3 copies of entity asdf, for example). One approach is to load all the sprites upfront and then pass the correct sprite into the constructors of the entities. However, I want to decode the bitmaps lazily, so that the constructors of the entities will decode the bitmaps. The only problem with this is that duplicated entities will load the same bitmap twice, using 2x the memory (or n times if the entity is created n times). To fix this, I built a SingularBitmapFactory that stores a decoded Bitmap in a hash and, if the same bitmap is asked for again, simply returns the previously hashed one instead of building a new one. The problem with this, though, is that the factory holds a copy of all bitmaps, so they won't ever get garbage collected. What's the best way to switch the hashmap to one with weakly referenced values? In other words, I want a structure where the values won't be GC'd as long as some other object holds a reference to them, but once no other object refers to them, they can be GC'd.
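
    For what it's worth, java.util.WeakHashMap weakens its keys rather than its values, so the usual workaround is a plain map whose values are WeakReferences. A minimal sketch (Android's Bitmap/BitmapFactory; the resource-id key and the decode call are placeholder assumptions):

        import java.lang.ref.WeakReference;
        import java.util.HashMap;
        import java.util.Map;

        import android.content.res.Resources;
        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;

        public class SingularBitmapFactory {
            // Values are weak: once no entity holds a bitmap any more it may be
            // collected, and the next request simply decodes it again.
            private final Map<Integer, WeakReference<Bitmap>> cache =
                    new HashMap<Integer, WeakReference<Bitmap>>();
            private final Resources resources;

            public SingularBitmapFactory(Resources resources) {
                this.resources = resources;
            }

            public Bitmap get(int resId) {
                WeakReference<Bitmap> ref = cache.get(resId);
                Bitmap bitmap = (ref != null) ? ref.get() : null;
                if (bitmap == null) {                  // never decoded, or already GC'd
                    bitmap = BitmapFactory.decodeResource(resources, resId);
                    cache.put(resId, new WeakReference<Bitmap>(bitmap));
                }
                return bitmap;
            }
        }

    Stale WeakReference entries stay in the map until they are overwritten; a ReferenceQueue or an occasional sweep can prune them if that ever matters.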


  • How to get a list of all Subversion commit author usernames?

    - by Quinn Taylor
    I'm looking for an efficient way to get the list of unique commit authors for an SVN repository as a whole, or for a given resource path. I haven't been able to find an SVN command specifically for this (and don't expect one), but I'm hoping there may be a better way than what I've tried so far in Terminal (on OS X):

        svn log --quiet | grep "^r" | awk '{print $3}'
        svn log --quiet --xml | grep author | sed -E "s:</?author>::g"

    Either of these will give me one author name per line, but they both require filtering out a fair amount of extra information. They also don't handle duplicates of the same author name, so for lots of commits by few authors, there's tons of redundancy flowing over the wire. More often than not I just want to see the unique author usernames. (It might occasionally be handy to infer the commit count for each author, but even then it would be better if the aggregated data were sent across instead.) I'm generally working with client-only access, so svnadmin commands are less useful, but if strictly necessary or much more efficient, I might be able to ask a special favor of the repository admin. The repositories I'm working with have tens of thousands of commits and many active users, and I don't want to inconvenience anyone.
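
    A small variation on the first pipeline (standard svn client plus sort/uniq, nothing server-side) collapses the duplicates locally and can optionally show a per-author commit count; it still pulls the full log over the wire, so it only trims the noise on the client:

        # unique author usernames
        svn log --quiet | grep '^r' | awk '{print $3}' | sort -u

        # or, with a commit count per author
        svn log --quiet | grep '^r' | awk '{print $3}' | sort | uniq -c | sort -rn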


  • c++ Design pattern for CoW, inherited classes, and variable shared data?

    - by krunk
    I've designed a copy-on-write base class. The class holds the default set of data needed by all children in a shared-data/CoW model. The derived classes also have data that only pertains to them, but that should be CoW-shared between other instances of that derived class. I'm looking for a clean way to implement this. If I had a base class FooInterface with shared data FooDataPrivate and a derived object FooDerived, I could create a FooDerivedDataPrivate. The underlying data structure would not affect the exposed getter/setter API, so it's not about how a user interfaces with the objects. I'm just wondering if this is a typical MO for such cases or if there's a better/cleaner way. What piques my interest is that I see the potential for inheritance between the private data classes, e.g. FooDerivedDataPrivate : public FooDataPrivate, but I'm not seeing a way to take advantage of that polymorphism in my derived classes.

        class FooDataPrivate {
        public:
            Ref ref; // atomic reference counting object
            int a;
            int b;
            int c;
        };

        class FooInterface {
        public:
            // constructors and such
            // ....

            // methods are implemented to be copy on write.
            void setA(int val);
            void setB(int val);
            void setC(int val);

            // copy constructors, destructors, etc. all CoW friendly
        private:
            FooDataPrivate *data;
        };

        class FooDerived : public FooInterface {
        public:
            FooDerived() : FooInterface() {}
        private:
            // need more shared data for FooDerived
            // this is the ???, how is this best done cleanly?
        };
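
    One possible shape (a compact C++11 sketch, not a drop-in: the real atomic Ref, the assignment operator and the existing CoW plumbing are left out) is to mirror the interface hierarchy in the private data classes and give the base data a virtual clone(), so the base class's detach logic copies the derived block without knowing its concrete type; the derived interface just casts in its own accessors:

        #include <iostream>

        struct FooDataPrivate {
            int refs = 1;                  // stand-in for the atomic Ref object
            int a = 0, b = 0, c = 0;
            virtual ~FooDataPrivate() {}
            virtual FooDataPrivate *clone() const { return new FooDataPrivate(*this); }
        };

        struct FooDerivedDataPrivate : FooDataPrivate {
            int d = 0;                     // shared only between FooDerived instances
            FooDataPrivate *clone() const override { return new FooDerivedDataPrivate(*this); }
        };

        class FooInterface {
        public:
            FooInterface(const FooInterface &o) : data(o.data) { ++data->refs; }
            ~FooInterface() { if (--data->refs == 0) delete data; }
            void setA(int val) { detach()->a = val; }
            int a() const { return data->a; }
        protected:
            explicit FooInterface(FooDataPrivate *d) : data(d) {}
            // clone() is virtual, so detaching here also copies the derived data block.
            FooDataPrivate *detach() {
                if (data->refs > 1) { --data->refs; data = data->clone(); data->refs = 1; }
                return data;
            }
            FooDataPrivate *data;
        };

        class FooDerived : public FooInterface {
        public:
            FooDerived() : FooInterface(new FooDerivedDataPrivate) {}
            void setD(int val) { static_cast<FooDerivedDataPrivate *>(detach())->d = val; }
            int d() const { return static_cast<const FooDerivedDataPrivate *>(data)->d; }
        };

        int main() {
            FooDerived x;
            FooDerived y(x);    // shares the same data block
            y.setD(42);         // detaches; x is untouched
            std::cout << x.d() << " " << y.d() << "\n";   // prints: 0 42
        }

    The virtual clone() is what lets the base class's copy-on-write machinery duplicate the full derived data without a cast, which is the polymorphism the question was hoping to exploit.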


  • Unit testing, mocking - simple case: Service - Repository

    - by rafek
    Consider the following chunk of a service:

        public class ProductService : IProductService
        {
            private IProductRepository _productRepository;

            // Some initialization stuff

            public Product GetProduct(int id)
            {
                try
                {
                    return _productRepository.GetProduct(id);
                }
                catch (Exception e)
                {
                    // log, wrap then throw
                }
            }
        }

    Let's consider a simple unit test:

        [Test]
        public void GetProduct_return_the_same_product_as_getProduct_on_productRepository()
        {
            var product = EntityGenerator.Product();
            _productRepositoryMock.Setup(pr => pr.GetProduct(product.Id)).Returns(product);

            Product returnedProduct = _productService.GetProduct(product.Id);

            Assert.AreEqual(product, returnedProduct);
            _productRepositoryMock.VerifyAll();
        }

    At first it seems that this test is OK. But let's change our service method a little bit:

        public Product GetProduct(int id)
        {
            try
            {
                var product = _productRepository.GetProduct(id);
                product.Owner = "totallyDifferentOwner";
                return product;
            }
            catch (Exception e)
            {
                // log, wrap then throw
            }
        }

    How do I rewrite the given test so that it passes with the first service method and fails with the second one? How do you handle this kind of simple scenario? HINT: The given test is bad because product and returnedProduct are actually the same reference.
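
    One way to make the test sensitive to that kind of mutation (a sketch using the same NUnit/Moq fixture fields as above; the snapshot of Owner before the call is the only addition) is to assert on the property values you care about rather than on the object reference:

        [Test]
        public void GetProduct_returns_product_unchanged_from_repository()
        {
            var product = EntityGenerator.Product();
            var expectedOwner = product.Owner;   // snapshot the state before the service runs
            _productRepositoryMock.Setup(pr => pr.GetProduct(product.Id)).Returns(product);

            Product returnedProduct = _productService.GetProduct(product.Id);

            Assert.AreEqual(product.Id, returnedProduct.Id);
            Assert.AreEqual(expectedOwner, returnedProduct.Owner);  // fails if the service rewrites Owner
            _productRepositoryMock.VerifyAll();
        }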


  • Get more error information from unhandled error

    - by Andrew Simpson
    I am using C# in a desktop application. I am calling a DLL written in C that I do not have the source code for. Whenever I call this DLL I get an untrapped error, which I trap in an UnhandledException event/delegate. The error is: "object reference not set to an instance of an object". But the stack trace is empty. When I Googled this, the info that came back was that the error was being handled elsewhere and then rethrown. But this can only be in the DLL I do not have the source code for. So, is there any way I can get more info about this error? This is my code, in program.cs:

        AppDomain.CurrentDomain.UnhandledException +=
            new UnhandledExceptionEventHandler(CurrentDomain_UnhandledException);

        static void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e)
        {
            try
            {
                Exception _ex = (Exception)e.ExceptionObject;
                // the stack trace property is empty here..
            }
            finally
            {
                Application.Exit();
            }
        }

    My DLL import:

        [DllImport("AutoSearchDevice.dll", EntryPoint = "Start", ExactSpelling = false,
                   CallingConvention = CallingConvention.StdCall)]
        public static extern int Start(int ASD_HANDLE);

    And I call it like so:

        public static void AutoSearchStart()
        {
            try
            {
                Start(m_pASD);
            }
            catch (Exception ex) { }
        }
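
    A small sketch of pulling everything available out of the handler (standard System.Diagnostics tracing; names follow the question). Walking the InnerException chain at least shows whether the real error was wrapped somewhere with its own stack trace:

        using System.Diagnostics;

        static void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e)
        {
            try
            {
                var ex = e.ExceptionObject as Exception;
                while (ex != null)
                {
                    // ToString() includes the type, the message and the stack trace (if any).
                    Trace.WriteLine(ex.ToString());
                    ex = ex.InnerException;   // follow the chain in case the error was wrapped
                }
            }
            finally
            {
                Application.Exit();
            }
        }

    The empty catch block around Start(m_pASD) will also swallow anything thrown synchronously by that call, so logging there as well may be worth a try.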


  • Gridview Paging via ObjectDataSource: Why is maximumRows being set to -1?

    - by Bryan
    So before I tried custom gridview paging via ObjectDataSource... I think I read every tutorial known to man just to be sure I got it. It didn't look like rocket science. I've set the AllowPaging = True on my gridview. I've specified PageSize="10" on my gridview. I've set EnablePaging="True" on the ObjectDataSource. I've added the 2 paging parms (maximumRows & startRowIndex) to my business object's select method. I've created an analogous "count" method with the same signature as the select method. The only problem I seem to have is during execution... the ObjectDataSource is supplying my business object with a maximumRows value of -1 and I can't for the life of me figure out why. I've searched to the end of the web for anyone else having this problem and apparently I'm the only one. The StartRowIndex parameter seems to be working just fine. Any ideas?


  • scanf a byte then print it out?

    - by Sarah
    I've searched around to see if I can find this answer but I can't seem to (please let me know if I'm wrong). I am trying to use scanf to read in a byte, an unsigned int and a char in one .c file, and I am trying to access this byte in a different .c file and print it out. (I have already checked to make sure I have included all the appropriate parameters everywhere.) But I keep getting errors. The warnings are:

        database.c: In function ‘addCitizen’:
        database.c:23:2: warning: format ‘%hhu’ expects argument of type ‘int’, but argument 2 has type ‘byte *’ [-Wformat]
        database.c:24:2: warning: format ‘%u’ expects argument of type ‘unsigned int’, but argument 2 has type ‘int *’ [-Wformat]
        database.c:25:2: warning: format ‘%c’ expects argument of type ‘int’, but argument 2 has type ‘char *’ [-Wformat]

    Where I'm scanf'ing:

        // Request loop
        while (count-- != 0) {
            while (1) {
                // Get values from the user
                int error = scanf("%79s %hhu %u %c", tname, &tdist, &tyear, &tgender);
                addCitizen(db, tname, &tdist, &tyear, &tgender);

    Where I'm printing:

        void addCitizen(Database *db, char *tname, byte *tdist, int *tyear, char *tgender) {
            // needs to find the right place in memory to put this stuff and then put it there
            printf("\nName is: %79s\n", tname);
            printf("District is: %hhu\n", tdist);
            printf("Year of birth is: %u\n", tyear);
            printf("Gender is:%c\n", tgender);

    I'm not sure where I'm going wrong.
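
    The warnings all point at the printf calls: addCitizen receives pointers but passes them where the format expects values. A self-contained sketch of the same pattern with the pointers dereferenced (the byte typedef and the simplified signature are assumptions; the Database parameter is dropped just to keep it compilable on its own):

        #include <stdio.h>

        typedef unsigned char byte;

        static void addCitizen(const char *tname, const byte *tdist,
                               const unsigned int *tyear, const char *tgender)
        {
            printf("Name is: %s\n", tname);
            printf("District is: %hhu\n", *tdist);      /* dereference: pass the value */
            printf("Year of birth is: %u\n", *tyear);
            printf("Gender is: %c\n", *tgender);
        }

        int main(void)
        {
            char tname[80];
            byte tdist;
            unsigned int tyear;
            char tgender;

            if (scanf("%79s %hhu %u %c", tname, &tdist, &tyear, &tgender) == 4)
                addCitizen(tname, &tdist, &tyear, &tgender);
            return 0;
        }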


  • most efficient method of turning multiple 1D arrays into columns of a 2D array

    - by Ty W
    As I was writing a for loop earlier today, I thought that there must be a neater way of doing this... so I figured I'd ask. I looked briefly for a duplicate question but didn't see anything obvious.

    The problem: given N arrays of length M, turn them into an M-row by N-column 2D array.

    Example:

        $id = [1,5,2,8,6]
        $name = [a,b,c,d,e]
        $result = [[1,a], [5,b], [2,c], [8,d], [6,e]]

    My solution: pretty straightforward and probably not optimal, but it does work:

        <?php
        // $row is returned from a DB query
        // $row['<var>'] is a comma separated string of values
        $categories = array();
        $ids = explode(",", $row['ids']);
        $names = explode(",", $row['names']);
        $titles = explode(",", $row['titles']);

        for($i = 0; $i < count($ids); $i++) {
            $categories[] = array("id" => $ids[$i], "name" => $names[$i], "title" => $titles[$i]);
        }
        ?>

    Note: I didn't put the name = value bit in the spec, but it'd be awesome if there was some way to keep that as well.
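
    A shorter variant (plain PHP, closures need 5.3+; no extension required) uses array_map. With null as the callback it zips the arrays into the plain numeric rows from the spec, and with a callback it keeps the named keys, so adding a column means adding one argument rather than another index expression:

        <?php
        $ids    = explode(",", $row['ids']);
        $names  = explode(",", $row['names']);
        $titles = explode(",", $row['titles']);

        // Numeric rows, as in the spec: [[1,'a'], [5,'b'], ...]
        $pairs = array_map(null, $ids, $names);

        // Keyed rows, matching the original loop's output
        $categories = array_map(
            function ($id, $name, $title) {
                return array("id" => $id, "name" => $name, "title" => $title);
            },
            $ids, $names, $titles
        );
        ?>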


  • Optimizing an embedded SELECT query in mySQL

    - by Crazy Serb
    Ok, here's a query that I am running right now on a table that has 45,000 records and is 65MB in size... and is just about to get bigger and bigger (so I gotta think of the future performance as well here):

        SELECT count(payment_id) as signup_count, sum(amount) as signup_amount
        FROM payments p
        WHERE tm_completed BETWEEN '2009-05-01' AND '2009-05-30'
          AND completed > 0
          AND tm_completed IS NOT NULL
          AND member_id NOT IN (SELECT p2.member_id
                                FROM payments p2
                                WHERE p2.completed=1
                                  AND p2.tm_completed < '2009-05-01'
                                  AND p2.tm_completed IS NOT NULL
                                GROUP BY p2.member_id)

    And as you might or might not imagine - it chokes the mysql server to a standstill... What it does is - it simply pulls the number of new users who signed up, have at least one "completed" payment, tm_completed is not empty (as it is only populated for completed payments), and (the embedded select) that member has never had a "completed" payment before - meaning he's a new member (just because the system does rebills and whatnot, and this is the only way to sort of differentiate between an existing member who just got rebilled and a new member who got billed for the first time). Now, is there any possible way to optimize this query to use less resources or something, and to stop taking my mysql resources down on their knees...? Am I missing any info to clarify this any further? Let me know...

    EDIT: Here are the indexes already on that table:

        Key name       Type      Cardinality   Columns
        PRIMARY        PRIMARY   46757         payment_id
        member_id      INDEX     23378         member_id
        payer_id       INDEX     11689         payer_id
        coupon_id      INDEX     1             coupon_id
        tm_added       INDEX     46757         tm_added, product_id
        tm_completed   INDEX     46757         tm_completed, product_id
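
    One rewrite worth benchmarking against the original (a sketch only; whether the old MySQL optimizer likes it better depends on having an index such as (member_id, completed, tm_completed), which is an assumption here) replaces the NOT IN subquery with a correlated NOT EXISTS, so the "has this member ever completed a payment before May" check can be answered per member from an index instead of building the full grouped list:

        SELECT COUNT(p.payment_id) AS signup_count,
               SUM(p.amount)       AS signup_amount
        FROM payments p
        WHERE p.tm_completed BETWEEN '2009-05-01' AND '2009-05-30'
          AND p.completed > 0
          AND p.tm_completed IS NOT NULL
          AND NOT EXISTS (
                SELECT 1
                FROM payments p2
                WHERE p2.member_id = p.member_id
                  AND p2.completed = 1
                  AND p2.tm_completed < '2009-05-01'
                  AND p2.tm_completed IS NOT NULL
              );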


  • JavaScript (SVG drawing): Positioning x amount of points in an area

    - by Jack
    I'm using http://raphaeljs.com/ to try and draw multiple small circles. The problem I'm having is that the canvas has a fixed width, and if I want to draw, say, 1000 circles, they don't wrap onto a 'new line' (because you have to specify the xy position of each circle).

    E.g. I want this:

        ..................................................

    to look like this:

        ............................
        ......................

    At the moment I'm doing this:

        for ( var i = 0; i < 1000; i++ ) {
            var multiplier = i*3;
            if ( i <= 50 ) {
                paper.circle((2*multiplier),2,2);
            } else if ( i >= 51 && i <= 101 ) {
                paper.circle((2*multiplier) - 304,8,2);
            } else if ( i >= 152 && i <= 202 ) {
                paper.circle((2*multiplier) - 910,14,2);
            }
        }

    For reference: circle(x co-ord, y co-ord, radius). This is messy. I have to add an if statement for every new line I want. Must be a better way of doing it..?
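
    A sketch of the usual row/column arithmetic (paper.circle as in the question; perRow, spacing and radius are arbitrary values to adjust), which wraps automatically without any per-line if statements:

        var radius  = 2,
            spacing = 6,        // distance between circle centres
            perRow  = 50;       // circles per line before wrapping

        for (var i = 0; i < 1000; i++) {
            var col = i % perRow;                 // position within the line
            var row = Math.floor(i / perRow);     // which line we are on
            paper.circle(radius + col * spacing,
                         radius + row * spacing,
                         radius);
        }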


  • Why do Scala maps have poor performance relative to Java?

    - by Mike Hanafey
    I am working on a Scala app that consumes large amounts of CPU time, so performance matters. The prototype of the system was written in Python, and performance was unacceptable. The application does a lot with inserting and manipulating data in maps. Rex Kerr's Thyme was used to look at the performance of updating and retrieving data from maps. Basically "n" random Ints were stored in maps, and retrieved from the maps, with the time relative to java.util.HashMap used as a reference. The full results for a range of "n" are here. Sample (n=100,000) performance relative to Java, smaller is worse:

                      Update    Read
        Mutable       16.06%    76.51%
        Immutable     31.30%    20.68%

    I do not understand why the Scala immutable map beats the Scala mutable map in update performance. Using sizeHint on the mutable map does not help (it appears to be ignored in the tested implementation, 2.10.3). Even more surprisingly, the immutable read performance is worse than the mutable read performance, and more significantly so with larger maps. The update performance of the Scala mutable map is surprisingly bad, relative to both the Scala immutable map and plain Java. What is the explanation?


  • Parse a CSV file using python (to make a decision tree later)

    - by Margaret
    First off, full disclosure: This is going towards a uni assignment, so I don't want to receive code. :) I'm more looking for approaches; I'm very new to python, having read a book but not yet written any code.

    The entire task is to import the contents of a CSV file, create a decision tree from the contents of the CSV file (using the ID3 algorithm), and then parse a second CSV file to run against the tree. There's a big (understandable) preference to have it capable of dealing with different CSV files (I asked if we were allowed to hard code the column names, mostly to eliminate it as a possibility, and the answer was no). The CSV files are in a fairly standard format; the header row is marked with a # then the column names are displayed, and every row after that is a simple series of values. Example:

        # Column1, Column2, Column3, Column4
        Value01, Value02, Value03, Value04
        Value11, Value12, Value13, Value14

    At the moment, I'm trying to work out the first part: parsing the CSV. To make the decisions for the decision tree, a dictionary structure seems like it's going to be the most logical; so I was thinking of doing something along these lines:

        Read in each line, character by character
            If the character is not a comma or a space
                Append character to temporary string
            If the character is a comma
                Append the temporary string to a list
                Empty string
        Once a line has been read
            Create a dictionary using the header row as the key (somehow!)
            Append that dictionary to a list

    However, if I do things that way, I'm not sure how to make a mapping between the keys and the values. I'm also wondering whether there is some way to perform an action on every dictionary in a list, since I'll need to be doing things to the effect of "Everyone return their values for columns Column1 and Column4, so I can count up who has what!" - I assume that there is some mechanism, but I don't think I know how to do it. Is a dictionary the best way to do it? Would I be better off doing things using some other data structure? If so, what?
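
    Since the asker wants approaches rather than finished code, this is purely an illustration of the data-structure side for other readers (Python 3 standard library; the file name and column names are made up): the csv module plus zip turns each row into a dict keyed by the header, and iterating that list of dicts covers the "everyone return their values" step:

        import csv
        from collections import Counter

        # Header row looks like "# Column1, Column2, ...", so strip the leading "# "
        with open("train.csv", newline="") as f:
            reader = csv.reader(f, skipinitialspace=True)
            header = [h.lstrip("# ") for h in next(reader)]
            rows = [dict(zip(header, row)) for row in reader]

        # "Everyone return their values for Column1 and Column4, so I can count up who has what"
        counts = Counter((r["Column1"], r["Column4"]) for r in rows)
        print(counts)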


  • erlide, which eclipse/which packages?

    - by KevinDTimm
    I have downloaded eclipse 3.4 (java version) for MacOSX (carbon). I have tried to 'update' to the erlide plug-in, but see many (duplicated) options (many erlide entries, options that say 'only for erl SDK updates', etc.). Sometimes I get 403 errors when attempting to access http://erlide.org/update and http://erlide.sourceforge.net/update. Finally, when I get some set of options installed, I either get errors like:

        Loading of /Users/kevindtimm/Documents/eclipse-java-ganymede-SR2-macosx-carbon/eclipse/plugins/org.erlide.kernel.common_0.8.1.201005250801/ebin/erlide_kernel_common.beam failed: badfile
        (hello_world@ktmac)1>
        =ERROR REPORT==== 24-Nov-2010::19:17:32 ===
        beam/beam_load.c(1768): Error loading function erlide_kernel_common:monitor/0: op put_string u u x:
          please re-compile this module with an R14B compiler

    or, when I've done different installations of erlide, I get no response in the console to: hello:hello(). Does anybody have a good reference for how to load this plug-in and which items I should install?

        -module(hello).
        -export([hello/0]).

        hello() ->
            io:write("Hello World\n").

    [edit] I have installed eclipse 3.6 (c++) as requested below, and the following code still can't find hello:hello().

        %%file_comment
        -module(hello).

        %%
        %% Include files
        %%

        %%
        %% Exported Functions
        %%
        -export([hello/0]).

        %%
        %% API Functions
        %%

        %%
        %% Local Functions
        %%
        hello() ->
            io:write("Hello World\n").

    [/edit]


  • 'LINQ query plan' horribly inefficient but 'Query Analyser query plan' is perfect for same SQL!

    - by Simon_Weaver
    I have a LINQ to SQL query that generates the following SQL:

        exec sp_executesql N'SELECT COUNT(*) AS [value]
        FROM [dbo].[SessionVisit] AS [t0]
        WHERE ([t0].[VisitedStore] = @p0) AND (NOT ([t0].[Bot] = 1)) AND ([t0].[SessionDate] > @p1)',
        N'@p0 int,@p1 datetime', @p0=1, @p1='2010-02-15 01:24:00'

    (This is the actual SQL taken from SQL Profiler on SQL Server 2008.) The query plan generated when I run this SQL from within Query Analyser is perfect. It uses an index containing VisitedStore, Bot, SessionDate. The query returns instantly. However, when I run this from C# (with LINQ) a different query plan is used that is so inefficient it doesn't even return in 60 seconds. This query plan is trying to do a key lookup on the clustered primary key, which contains a couple million rows. It has no chance of returning. What I just can't understand, though, is that the EXACT same SQL is being run - either from within LINQ or from within Query Analyser - yet the query plan is different. I've run the two queries many, many times and they're now running in isolation from any other queries. The date is DateTime.Now.AddDays(-7), but I've even hardcoded that date to eliminate caching problems. Is there anything I can change in LINQ to SQL to affect the query plan, or a way to debug this further? I'm very very confused!


  • Oracle - UPSERT with update not executed for unmodified values

    - by Buthrakaur
    I'm using the following update-or-insert Oracle statement at the moment:

        BEGIN
            UPDATE DSMS
               SET SURNAME = :SURNAME, FIRSTNAME = :FIRSTNAME, VALID = :VALID
             WHERE DSM = :DSM;
            IF (SQL%ROWCOUNT = 0) THEN
                INSERT INTO DSMS (DSM, SURNAME, FIRSTNAME, VALID)
                VALUES (:DSM, :SURNAME, :FIRSTNAME, :VALID);
            END IF;
        END;

    This runs fine except that the update statement performs a dummy update if the data is the same as the parameter values provided. I would not mind the dummy update in a normal situation, but there's a replication/synchronization system built over this table, using triggers on tables to capture updated records, and executing this statement frequently for many records simply means that I'd cause huge traffic in the triggers and the sync system. Is there any simple way to reformulate this code so that the update statement doesn't update the record when unnecessary, without using the following IF-EXISTS check, which I find not sleek enough and maybe also not the most efficient for this task?

        DECLARE
            CNT NUMBER;
        BEGIN
            SELECT COUNT(1) INTO CNT FROM DSMS WHERE DSM = :DSM;
            IF SQL%FOUND THEN
                UPDATE DSMS
                   SET SURNAME = :SURNAME, FIRSTNAME = :FIRSTNAME, VALID = :VALID
                 WHERE DSM = :DSM
                   AND (SURNAME != :SURNAME OR FIRSTNAME != :FIRSTNAME OR VALID != :VALID);
            ELSE
                INSERT INTO DSMS (DSM, SURNAME, FIRSTNAME, VALID)
                VALUES (:DSM, :SURNAME, :FIRSTNAME, :VALID);
            END IF;
        END;
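
    One alternative worth sketching is Oracle's MERGE (10g syntax for the WHERE clause on the matched branch; nullable columns would need NVL-style comparisons, which this sketch does not handle), which folds both branches into one statement and only fires the update when something actually changed:

        MERGE INTO DSMS d
        USING (SELECT :DSM AS DSM, :SURNAME AS SURNAME,
                      :FIRSTNAME AS FIRSTNAME, :VALID AS VALID
               FROM dual) s
        ON (d.DSM = s.DSM)
        WHEN MATCHED THEN
            UPDATE SET d.SURNAME   = s.SURNAME,
                       d.FIRSTNAME = s.FIRSTNAME,
                       d.VALID     = s.VALID
            WHERE d.SURNAME   != s.SURNAME
               OR d.FIRSTNAME != s.FIRSTNAME
               OR d.VALID     != s.VALID
        WHEN NOT MATCHED THEN
            INSERT (DSM, SURNAME, FIRSTNAME, VALID)
            VALUES (s.DSM, s.SURNAME, s.FIRSTNAME, s.VALID);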


  • How do I display java.lang.* object allocations in Eclipse profiler?

    - by Martin Wickman
    I am profiling an application using the Eclipse profiler. I am particularly interested in the number of allocated object instances of classes from java.lang (for instance java.lang.String or java.util.HashMap). I also want to know stuff like the number of calls to String.equals() etc. I use the "Object Allocations" tab and it shows all classes in my application and a count. It also shows all int[], byte[], long[] etc., but there is no mention of any standard Java classes. For instance, this silly code:

        public static void main(String[] args) {
            Object obj[] = new Object[1000];
            for (int i = 0; i < 1000; i++) {
                obj[i] = new StringBuffer("foo" + i);
            }
            System.out.println(obj[30]);
        }

    shows up in the Object Allocations tab as 7 byte[]s, 4 char[]s and 2 int[]s. It doesn't matter if I use 1000 or 1 iterations. It seems the profiler simply ignores everything that is in any of the java.* packages. The same applies to Execution Statistics as well. Any idea how to display instances of java.* in the Eclipse Profiler?


  • Is there a better way to write this LINQ query?

    - by Raj Aththanayake
    Hi. Is there a better, simplified way to write this query? My logic is: if the collection contains customers whose CustomerID matches the parameter (and whose country code is in the filter), run the query ordered by CustomerID ascending. If there are no ID matches, order by CustomerName instead. I'm not really familiar with complex lambdas.

        var custIdResult = (from Customer c in CustomerCollection
                            where (c.CustomerID.ToLower().Contains(param.ToLower())
                                   && (countryCodeFilters.Any(item => item.Equals(c.CountryCode))))
                            select c).ToList();

        if (custIdResult.Count > 0)
        {
            return from Customer c in custIdResult
                   where (c.CustomerName.ToLower().Contains(param.ToLower())
                          && countryCodeFilters.Any(item => item.Equals(c.CountryCode)))
                   orderby c.CustomerID ascending
                   select c;
        }
        else
        {
            return from Customer c in CustomerCollection
                   where (c.CustomerName.ToLower().Contains(param.ToLower())
                          && countryCodeFilters.Any(item => item.Equals(c.CountryCode)))
                   orderby c.CustomerName descending
                   select c;
        }


  • boost::filesystem - how to create a boost path from a windows path string on posix plattforms?

    - by VolkA
    I'm reading path names from a database which are stored as relative paths in Windows format, and try to create a boost::filesystem::path from them on a Unix system. What happens is that the constructor call interprets the whole string as the filename. I need the path to be converted to a correct Posix path as it will be used locally. I didn't find any conversion functions in the boost::filesystem reference, nor through google. Am I just blind, is there an obvious solution? If not, how would you do this? Example:

        std::string win_path("foo\\bar\\asdf.xml");
        std::string posix_path("foo/bar/asdf.xml");

        // loops just once, as part is the whole win_path interpreted as a filename
        boost::filesystem::path boost_path(win_path);
        BOOST_FOREACH(boost::filesystem::path part, boost_path) {
            std::cout << part << std::endl;
        }

        // prints each path component separately
        boost::filesystem::path boost_path_posix(posix_path);
        BOOST_FOREACH(boost::filesystem::path part, boost_path_posix) {
            std::cout << part << std::endl;
        }
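
    In the absence of a built-in conversion, one small workaround (a sketch; it assumes the stored paths are simple relative paths with no drive letters or UNC prefixes) is to normalise the separators before constructing the path:

        #include <algorithm>
        #include <string>
        #include <boost/filesystem.hpp>

        boost::filesystem::path from_windows_relative(std::string win_path)
        {
            // Turn "foo\\bar\\asdf.xml" into "foo/bar/asdf.xml" so the
            // POSIX-style grammar applies on any platform.
            std::replace(win_path.begin(), win_path.end(), '\\', '/');
            return boost::filesystem::path(win_path);
        }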


  • Sql Server Compact Edition version error.

    - by Tim
    I am working on a .NET ClickOnce project that uses Sql Server 2005 Compact Edition to synchronize remote data through the use of a Merge replication. This application has been live for nearly a year now, and while we encounter occasional synchronization errors, things run quite smoothly for the most part. Yesterday a user reported an error that I have never seen before and have yet to find any information for online. Many users synchronize every night, and I haven't received error reports from anyone else, so this issue must be isolated to this particular user / client machine. Here are the full details of the error:

        Error Code         : 80004005
        Message            : The message contains an unexpected replication operation code.
                             The version of SQL Server Compact Edition Client Agent and
                             SQL Server Compact Edition Server Agent should match.
                             [ replication operation code = 31 ]
        Minor Error        : 28526
        Source             : Microsoft SQL Server Compact Edition
        Numeric Parameters : 31

    One interesting thing that I've found is that his data does get synchronized to the server, so this error must occur after the upload completes. I have yet to determine whether or not changes at the server are still being downloaded to his subscription. Thinking that maybe there was some kind of version conflict going on, I had a remote desktop session with this user last night and uninstalled both the application and the SQL Server Compact Edition prerequisite, then reinstalled both from our ClickOnce publication site. I also removed his existing local database file so that upon synchronization an entirely new subscription would be issued to him. Still his errors continue. I suppose the error may be somewhat general, and the text in the error message stating that the versions should match may not necessarily reflect the problem at hand. This site contains the only official reference to this error that I've been able to find, and it offers no more detail than the error message itself. Has anyone else encountered this error? Or does anyone at least know more about SQL Compact to have a better guess as to what is going on here? Any help / suggestions will be greatly appreciated!


  • How might I wrap the FindXFile-style APIs to the STL-style Iterator Pattern in C++?

    - by BillyONeal
    Hello everyone :) I'm working on wrapping up the ugly innards of the FindFirstFile/FindNextFile loop (though my question applies to other similar APIs, such as RegEnumKeyEx or RegEnumValue, etc.) inside iterators that work in a manner similar to the Standard Template Library's istream_iterators. I have two problems here.

    The first is with the termination condition of most "foreach" style loops. STL-style iterators typically use operator!= inside the exit condition of the for, i.e.

        std::vector<int> test;
        for(std::vector<int>::iterator it = test.begin(); it != test.end(); it++)
        {
            //Do stuff
        }

    My problem is I'm unsure how to implement operator!= with such a directory enumeration, because I do not know when the enumeration is complete until I've actually finished with it. I have sort of a hacked-together solution in place now that enumerates the entire directory at once, where each iterator simply tracks a reference-counted vector, but this seems like a kludge which can be done a better way.

    The second problem I have is that there are multiple pieces of data returned by the FindXFile APIs. For that reason, there's no obvious way to overload operator* as required for iterator semantics. When I overload that item, do I return the file name? The size? The modified date? How might I convey the multiple pieces of data to which such an iterator must refer later in an idiomatic way? I've tried ripping off the C#-style MoveNext design but I'm concerned about not following the standard idioms here.

        class SomeIterator
        {
        public:
            bool next(); //Advances the iterator and returns true if successful, false if the iterator is at the end.
            std::wstring fileName() const;
            //other kinds of data....
        };

    EDIT: And the caller would look like:

        SomeIterator x = ??; //Construct somehow
        while(x.next())
        {
            //Do stuff
        }

    Thanks! Billy3
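
    A sketch of the usual shape for this (input-iterator semantics only; FindFirstFileW/FindNextFileW from the Win32 API; copy semantics and error handling are pared down): a default-constructed iterator acts as the end value, operator!= asks whether exactly one of the two has run out, and operator* returns a small struct so the name, size and timestamp travel together.

        #include <windows.h>
        #include <string>

        struct DirectoryEntry {
            std::wstring name;
            unsigned long long size;
            FILETIME lastWrite;
        };

        class DirectoryIterator {
        public:
            DirectoryIterator() : handle_(INVALID_HANDLE_VALUE) {}          // the "end" iterator
            explicit DirectoryIterator(const std::wstring& pattern)
                : handle_(::FindFirstFileW(pattern.c_str(), &data_)) {}

            ~DirectoryIterator() {
                if (handle_ != INVALID_HANDLE_VALUE) ::FindClose(handle_);
            }

            DirectoryEntry operator*() const {
                DirectoryEntry e;
                e.name = data_.cFileName;
                e.size = (static_cast<unsigned long long>(data_.nFileSizeHigh) << 32)
                         | data_.nFileSizeLow;
                e.lastWrite = data_.ftLastWriteTime;
                return e;
            }

            DirectoryIterator& operator++() {
                if (!::FindNextFileW(handle_, &data_)) {                    // enumeration finished
                    ::FindClose(handle_);
                    handle_ = INVALID_HANDLE_VALUE;
                }
                return *this;
            }

            bool operator!=(const DirectoryIterator& other) const {
                return (handle_ == INVALID_HANDLE_VALUE)
                    != (other.handle_ == INVALID_HANDLE_VALUE);
            }

        private:
            HANDLE handle_;
            WIN32_FIND_DATAW data_;
        };

        // Usage:
        //   for (DirectoryIterator it(L"C:\\temp\\*"), end; it != end; ++it)
        //       std::wcout << (*it).name << L"\n";

    A production version would also disable or share-count copies (the raw HANDLE must not be closed twice), which is exactly the part this sketch leaves out.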


  • Emptying the datastore in GAE

    - by colwilson
    I know what you're thinking, 'Oh, not that again!', but here we are, since Google have not yet provided a simpler method. I have been using a queue based solution which worked fine:

        import datetime
        import logging

        from google.appengine.ext import db, deferred

        from models import *

        DELETABLE_MODELS = [Alpha, Beta, AlphaBeta]

        def initiate_purge():
            for e in DELETABLE_MODELS:
                deferred.defer(delete_entities, e, 'purging', _queue = 'purging')

        class NotEmptyException(Exception):
            pass

        def delete_entities(e, queue):
            try:
                q = e.all(keys_only=True)
                db.delete(q.fetch(200))
                ct = q.count(1)
                if ct > 0:
                    raise NotEmptyException('there are still entities to be deleted')
                else:
                    logging.info('processing %s completed' % queue)
            except Exception, err:
                deferred.defer(delete_entities, e, queue, _queue = queue)
                logging.info('processing %s deferred: %s' % (queue, err))

    All this does is queue a request to delete some data (once for each class) and then, if the queued process either fails or knows there is still some stuff to delete, it re-queues itself. This beats the heck out of hitting refresh in a browser for 10 minutes. However, I'm having trouble deleting AlphaBeta entities; there are always a few left at the end. I think it's because the model contains Reference Properties:

        class AlphaBeta(db.Model):
            alpha = db.ReferenceProperty(Alpha, required=True, collection_name='betas')
            beta = db.ReferenceProperty(Beta, required=True, collection_name='alphas')

    I have tried deleting the indexes relating to these entity types, but that did not make any difference. Any advice would be appreciated please.


  • Removing "Using temporary; Using filesort" from this MySQL select+join+group by query

    - by claytontstanley
    I have the following query:

        select t.Chunk     as LeftChunk,
               t.ChunkHash as LeftChunkHash,
               q.Chunk     as RightChunk,
               q.ChunkHash as RightChunkHash,
               count(t.ChunkHash) as ChunkCount
        from chunksubset as t
        join chunksubset as q on t.ID = q.ID
        group by LeftChunkHash, RightChunkHash

    And the following explain table:

        id  select_type  table    type    possible_keys                key          key_len  ref                      rows    Extra
        1   SIMPLE       subsets  ref     PRIMARY,IDIndex,SubsetIndex  SubsetIndex  767      const                    522014  "Using where; Using temporary; Using filesort"
        1   SIMPLE       subsets  eq_ref  PRIMARY,IDIndex,SubsetIndex  PRIMARY      771      sotero.subsets.Id,const  1       "Using where; Using index"
        1   SIMPLE       c        ref     IDIndex                      IDIndex      4        sotero.subsets.Id        12      "Using where"
        1   SIMPLE       c        ref     IDIndex                      IDIndex      4        sotero.subsets.Id        12

    Note the "Using temporary; Using filesort". When this query is run, I quickly run out of RAM (presumably b/c of the temp table), and then the HDD kicks in, and the query slows to a halt. I thought it might be an index issue, so I started adding a few that sort of made sense:

        Table   Non_unique  Key_name                   Seq_in_index  Column_name  Collation  Cardinality  Sub_part  Packed  Index_type
        chunks  0           PRIMARY                    1             ChunkId      A          17796190     NULL      NULL    BTREE
        chunks  1           ChunkHashIndex             1             ChunkHash    A          243783       NULL      NULL    BTREE
        chunks  1           IDIndex                    1             Id           A          1483015      NULL      NULL    BTREE
        chunks  1           ChunkIndex                 1             Chunk        A          243783       NULL      NULL    BTREE
        chunks  1           ChunkTypeIndex             1             ChunkType    A          2            NULL      NULL    BTREE
        chunks  1           chunkHashByChunkIDIndex    1             ChunkHash    A          243783       NULL      NULL    BTREE
        chunks  1           chunkHashByChunkIDIndex    2             ChunkId      A          17796190     NULL      NULL    BTREE
        chunks  1           chunkHashByChunkTypeIndex  1             ChunkHash    A          243783       NULL      NULL    BTREE
        chunks  1           chunkHashByChunkTypeIndex  2             ChunkType    A          261708       NULL      NULL    BTREE
        chunks  1           chunkHashByIDIndex         1             ChunkHash    A          243783       NULL      NULL    BTREE
        chunks  1           chunkHashByIDIndex         2             Id           A          17796190     NULL      NULL    BTREE

    But still using the temporary table. The db engine is MyISAM. How can I get rid of the "using temporary; using filesort" in this query? Just changing to InnoDB w/o explaining the underlying cause is not a particularly satisfying answer. Besides, if the solution is to just add the proper index, then that's much easier than migrating to another db engine.
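
    One MySQL-specific detail worth trying as a baseline (on older MySQL, GROUP BY implies sorting the grouped result; an explicit ORDER BY NULL suppresses that part, though the temporary table generally remains because the grouping columns come from two different table aliases) is simply:

        SELECT t.Chunk     AS LeftChunk,
               t.ChunkHash AS LeftChunkHash,
               q.Chunk     AS RightChunk,
               q.ChunkHash AS RightChunkHash,
               COUNT(t.ChunkHash) AS ChunkCount
        FROM chunksubset AS t
        JOIN chunksubset AS q ON t.ID = q.ID
        GROUP BY LeftChunkHash, RightChunkHash
        ORDER BY NULL;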


  • How to close InAppBrowser itself in Phonegap Application?

    - by Shashi
    I am developing a Phonegap application and currently I am using the InAppBrowser to display external pages. On some of the external pages I place a close button, and I want to close the InAppBrowser from there. Because the InAppBrowser is what displays these pages, the reference to it is not accessible from inside them to close it - and please do not suggest the ChildBrowser plugin.

        window.close();  // did not work for me
        // or
        iabRef.close();  // also did not work for me, because iabRef is not accessible
                         // inside the InAppBrowser; it is created on the parent window

    Some Android and iOS devices display a Done button to close it, as does the iPad, but on an Android tablet there is no button of any kind to close it.

    UPDATE: Here is my full code:

        var iabRef = null;

        function iabLoadStart(event) {
        }

        function iabLoadStop(event) {
        }

        function iabClose(event) {
            iabRef.removeEventListener('loadstart', iabLoadStart);
            iabRef.removeEventListener('loadstop', iabLoadStop);
            iabRef.removeEventListener('exit', iabClose);
        }

        function startInAppB() {
            var myURL = encodeURI('http://www.domain.com/some_path/mypage.html');
            iabRef = window.open(myURL, '_blank', 'location=yes');
            iabRef.addEventListener('loadstart', iabLoadStart);
            iabRef.addEventListener('loadstop', iabLoadStop);
            iabRef.addEventListener('exit', iabClose);
        }
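
    One pattern sometimes used for this (a sketch built only from the APIs already shown: the parent listens for loadstart and closes the browser when the external page navigates to an agreed marker URL, for example from the close button's href; the marker URL here is made up) keeps the close logic on the side that actually holds iabRef:

        function iabLoadStart(event) {
            // The close button inside the external page links to this marker URL.
            if (event.url && event.url.indexOf('http://www.domain.com/close_inappbrowser') === 0) {
                iabRef.close();
            }
        }

    Whether loadstart fires reliably for such a navigation on every Android/iOS WebView version is something to verify on the target devices.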


  • C# Virtual method call in constructor - how to refactor?

    - by Cristi Diaconescu
    I have an abstract class for database-agnostic cursor actions. Derived from that, there are classes that implement the abstract methods for handling database-specific stuff. The problem is, the base class ctor needs to call an abstract method - when the ctor is called, it needs to initialize the database-specific cursor. I know why this shouldn't be done, I don't need that explanation! This is my first implementation, that obviously doesn't work - it's the textbook "wrong way" of doing it. The overridden method accesses a field from the derived class, which is not yet instantiated:

        public abstract class CursorReader
        {
            private readonly int m_rowCount;

            protected CursorReader()
            {
                m_rowCount = CreateCursor(sqlCmd);  //virtual call !
            }

            protected abstract int CreateCursor(string sqlCmd);
        }

        public class SqlCursorReader : CursorReader
        {
            private SqlConnection m_sqlConnection;

            public SqlCursorReader(string sqlCmd, SqlConnection sqlConnection)
            {
                m_sqlConnection = sqlConnection;  //field initialized here
            }

            protected override int CreateCursor(string sqlCmd)
            {
                //uses not-yet-initialized member *m_sqlConnection*
                //so this throws a NullReferenceException
                var cursor = new CustomCursor(sqlCmd, m_sqlConnection);
                return cursor.Count();
            }
        }

    I will follow up with an answer on my attempts to fix this...
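
    One refactoring that sidesteps the virtual call in the constructor (a sketch; the names follow the question, and lazy initialisation through a nullable field is the assumption here) is to defer cursor creation until first use, by which point the derived object is fully constructed:

        public abstract class CursorReader
        {
            private readonly string m_sqlCmd;
            private int? m_rowCount;                      // created lazily, not in the ctor

            protected CursorReader(string sqlCmd)
            {
                m_sqlCmd = sqlCmd;                        // no virtual call here
            }

            public int RowCount
            {
                get
                {
                    if (!m_rowCount.HasValue)
                        m_rowCount = CreateCursor(m_sqlCmd);   // derived class is complete by now
                    return m_rowCount.Value;
                }
            }

            protected abstract int CreateCursor(string sqlCmd);
        }

    An alternative with the same effect is a static factory method that constructs the object first and then calls an Initialize() method which performs the virtual call.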


  • Using capistrano to deploy from different git branches

    - by Toms Mikoss
    I am using capistrano to deploy a RoR application. The codebase is in a git repository, and branching is widely used in development. Capistrano uses the deploy.rb file for its settings, one of them being the branch to deploy from. My problem is this: let's say I create a new branch A from master. The deploy file will reference the master branch. I edit that, so A can be deployed to the test environment. I finish working on the feature, and merge branch A into master. Since the deploy.rb file from A is fresher, it gets merged in, and now the deploy.rb in the master branch references A. Time to edit again. That's a lot of seemingly unnecessary manual editing - the parameter should always match the current branch name. On top of that, it is easy to forget to edit the settings each and every time. What would be the best way to automate this process? Edit: Turns out someone had already done exactly what I needed.
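
    One common trick (a sketch for deploy.rb; it assumes the deploy is always run from a checkout of the branch that should be deployed) is to derive the branch at deploy time instead of hard-coding it:

        # deploy.rb
        set :branch, ENV['BRANCH'] || `git rev-parse --abbrev-ref HEAD`.chomp

    Then `cap deploy` uses whatever branch is currently checked out, and `BRANCH=master cap deploy` overrides it explicitly, so the setting never needs to be edited when branches are merged.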

