Search Results

Search found 20199 results on 808 pages for 'ebs release 12'.


  • Fixing predicated NSFetchedResultsController/NSFetchRequest performance with SQLite backend?

    - by Jaanus
    I have a series of NSFetchedResultsControllers powering some table views, and their performance on device was abysmal, on the order of seconds. Since it all runs on main thread, it's blocking my app at startup, which is not great. I investigated and turns out the predicate is the problem: NSPredicate *somePredicate = [NSPredicate predicateWithFormat:@"ANY somethings == %@", something]; [fetchRequest setPredicate:somePredicate]; I.e the fetch entity, call it "things", has a many-to-many relation with entity "something". This predicate is a filter that limits the results to only things that have a relation with a particular "something". When I removed the predicate for testing, fetch time (the initial performFetch: call) dropped (for some extreme cases) from 4 seconds to around 100ms or less, which is acceptable. I am troubled by this, though, as it negates a lot of the benefit I was hoping to gain with Core Data and NSFRC, which otherwise seems like a powerful tool. So, my question is, how can I optimize this performance? Am I using the predicate wrong? Should I modify the model/schema somehow? And what other ways there are to fix this? Is this kind of degraded performance to be expected? (There are on the order of hundreds of <1KB objects.) EDIT WITH DETAILS: Here's the code: [fetchRequest setFetchLimit:200]; NSLog(@"before fetch"); BOOL success = [frc performFetch:&error]; if (!success) { NSLog(@"Fetch request error: %@", error); } NSLog(@"after fetch"); Updated logs (previously, I had some application inefficiencies degrading the performance here. These are the updated logs that should be as close to optimal as you can get under my current environment): 2010-02-05 12:45:22.138 Special Ppl[429:207] before fetch 2010-02-05 12:45:22.144 Special Ppl[429:207] CoreData: sql: SELECT DISTINCT 0, t0.Z_PK, t0.Z_OPT, <model fields> FROM ZTHING t0 LEFT OUTER JOIN Z_1THINGS t1 ON t0.Z_PK = t1.Z_2THINGS WHERE t1.Z_1SOMETHINGS = ? ORDER BY t0.ZID DESC LIMIT 200 2010-02-05 12:45:22.663 Special Ppl[429:207] CoreData: annotation: sql connection fetch time: 0.5094s 2010-02-05 12:45:22.668 Special Ppl[429:207] CoreData: annotation: total fetch execution time: 0.5240s for 198 rows. 2010-02-05 12:45:22.706 Special Ppl[429:207] after fetch If I do the same fetch without predicate (by commenting out the two lines in the beginning of the question): 2010-02-05 12:44:10.398 Special Ppl[414:207] before fetch 2010-02-05 12:44:10.405 Special Ppl[414:207] CoreData: sql: SELECT 0, t0.Z_PK, t0.Z_OPT, <model fields> FROM ZTHING t0 ORDER BY t0.ZID DESC LIMIT 200 2010-02-05 12:44:10.426 Special Ppl[414:207] CoreData: annotation: sql connection fetch time: 0.0125s 2010-02-05 12:44:10.431 Special Ppl[414:207] CoreData: annotation: total fetch execution time: 0.0262s for 200 rows. 2010-02-05 12:44:10.457 Special Ppl[414:207] after fetch 20-fold difference in times. 500ms is not that great, and there does not seem to be a way to do it in background thread or otherwise optimize that I can think of. (Apart from going to a binary store where this becomes a non-issue, so I might do that. Binary store performance is consistently ~100ms for the above 200-object predicated query.) (I nested another question here previously, which I now moved away).
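
    One way to keep this predicated fetch from blocking startup — a sketch only, not part of the question, which gives up NSFetchedResultsController for the initial load and assumes an entity named "Thing" plus an NSManagedObjectID ivar somethingID for the related object — is to run the fetch against a separate context on a background thread and hand the resulting object IDs back to the main thread:

        - (void)startBackgroundFetch {
            [self performSelectorInBackground:@selector(fetchThings) withObject:nil];
        }

        - (void)fetchThings {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

            // A private context for this thread; the coordinator is shared.
            NSManagedObjectContext *context = [[NSManagedObjectContext alloc] init];
            [context setPersistentStoreCoordinator:[self persistentStoreCoordinator]];

            NSFetchRequest *request = [[NSFetchRequest alloc] init];
            [request setEntity:[NSEntityDescription entityForName:@"Thing"   // assumed entity name
                                           inManagedObjectContext:context]];
            // Comparing against the object ID keeps main-thread objects off this thread.
            [request setPredicate:[NSPredicate predicateWithFormat:@"ANY somethings == %@", somethingID]];
            [request setFetchLimit:200];

            NSError *error = nil;
            NSArray *results = [context executeFetchRequest:request error:&error];

            // NSManagedObjectID values are thread-safe; re-fault them with -objectWithID:
            // on the main thread before feeding the table view.
            NSArray *objectIDs = [results valueForKey:@"objectID"];
            [self performSelectorOnMainThread:@selector(showObjectIDs:)
                                   withObject:objectIDs
                                waitUntilDone:NO];

            [request release];
            [context release];
            [pool release];
        }

    This does not make the JOIN itself any cheaper, but it moves the half-second wait off the main thread, which is the part that blocks the app at startup.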

    Read the article

  • CURL - HTTPS Weird error

    - by Vincent
    All, I am having trouble requesting info from HTTPS site using CURL and PHP. I am using Solaris 10. It so happens that sometimes it works and sometimes it doesn't. I am not sure what is the cause. If it doesn't work, this is the entry recorded in the verbose log: * About to connect() to 10.10.101.12 port 443 (#0) * Trying 10.10.101.12... * connected * Connected to 10.10.101.12 (10.10.101.12) port 443 (#0) * error setting certificate verify locations, continuing anyway: * CAfile: /etc/opt/webstack/curl/curlCA CApath: none * error:80089077:lib(128):func(137):reason(119) * Closing connection #0 If it works, this is the entry recorded in the verbose log: * About to connect() to 10.10.101.12 port 443 (#0) * Trying 10.10.101.12... * connected * Connected to 10.10.101.12 (10.10.101.12) port 443 (#0) * error setting certificate verify locations, continuing anyway: * CAfile: /etc/opt/webstack/curl/curlCA CApath: none * SSL connection using DHE-RSA-AES256-SHA * Server certificate: * subject: C=CA, ST=British Columnbia, L=Vancouver, O=google, OU=FDN, CN=g.googlenet.com, [email protected] * start date: 2007-07-24 23:06:32 GMT * expire date: 2027-09-07 23:06:32 GMT * issuer: C=US, ST=California, L=Sunnyvale, O=Google, OU=Certificate Authority, CN=support, [email protected] * SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway. > POST /gportal/gpmgr HTTP/1.1^M Host: 10.10.101.12^M Accept: */*^M Accept-Encoding: gzip,deflate^M Content-Length: 1623^M Content-Type: application/x-www-form-urlencoded^M Expect: 100-continue^M ^M < HTTP/1.1 100 Continue^M < HTTP/1.1 200 OK^M < Date: Wed, 28 Apr 2010 21:56:15 GMT^M < Server: Apache^M < Cache-Control: no-cache^M < Pragma: no-cache^M < Vary: Accept-Encoding^M < Content-Encoding: gzip^M < Content-Length: 1453^M < Content-Type: application/json^M < ^M * Connection #0 to host 10.10.101.12 left intact * Closing connection #0 My CURL options are as under: $ch = curl_init(); $devnull = fopen('/tmp/curlcookie.txt', 'w'); $fp_err = fopen('/tmp/verbose_file.txt', 'ab+'); fwrite($fp_err, date('Y-m-d H:i:s')."\n\n"); curl_setopt($ch, CURLOPT_STDERR, $devnull); curl_setopt($ch, CURLOPT_POST, 1); curl_setopt($ch, CURLOPT_URL, $desturl); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false); curl_setopt($ch, CURLOPT_HEADER, false); curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 0); curl_setopt($ch, CURLOPT_CONNECTTIMEOUT,120); curl_setopt($ch, CURLOPT_AUTOREFERER, true); curl_setopt($ch, CURLOPT_ENCODING, 'gzip,deflate'); curl_setopt($ch, CURLOPT_POSTFIELDS, $postdata); curl_setopt($ch, CURLOPT_VERBOSE,1); curl_setopt($ch, CURLOPT_FAILONERROR, true); curl_setopt($ch, CURLOPT_STDERR, $fp_err); $ret = curl_exec($ch); Anybody has any idea, why it works sometimes but fails mostly? Thanks
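
    The failing runs stop right after "error setting certificate verify locations", so one thing worth trying — a sketch only; the bundle path below is an assumption, not something from the question — is to hand cURL a CA bundle it can actually read instead of the /etc/opt/webstack/curl/curlCA default it keeps failing to load, and then log curl_error() so the exact TLS failure is visible:

        <?php
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $desturl);
        curl_setopt($ch, CURLOPT_POST, 1);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $postdata);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        // Point at a readable PEM bundle (example path), after which verification
        // can be switched back on instead of being skipped.
        curl_setopt($ch, CURLOPT_CAINFO, '/path/to/cacert.pem');
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);
        $ret = curl_exec($ch);
        if ($ret === false) {
            // curl_error() reports which TLS step failed on the bad runs
            error_log('curl error: ' . curl_error($ch));
        }
        curl_close($ch);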

    Read the article

  • Concatenate row values T-SQL

    - by Robert
    I am trying to pull together some data for a report and need to concatenate the row values of one of the tables. Here is the basic table structure: Reviews ReviewID ReviewDate Reviewers ReviewerID ReviewID UserID Users UserID FName LName This is a M:M relationship. Each Review can have many Reviewers; each User can be associated with many Reviews. Basically, all I want to see is Reviews.ReviewID, Reviews.ReviewDate, and a concatenated string of the FName's of all the associated Users for that Review (comma delimited). Instead of: ReviewID---ReviewDate---User 1----------12/1/2009----Bob 1----------12/1/2009----Joe 1----------12/1/2009----Frank 2----------12/9/2009----Sue 2----------12/9/2009----Alice Display this: ReviewID---ReviewDate----Users 1----------12/1/2009-----Bob, Joe, Frank 2----------12/9/2009-----Sue, Alice I have found this article describing some ways to do this, but most of these seem to only deal with one table, not multiple; unfortunately, my SQL-fu is not strong enough to adapt these to my circumstances. I am particularly interested in the example on that site which utilizes FOR XML PATH() as that looks the cleanest and most straight forward. SELECT p1.CategoryId, ( SELECT ProductName + ', ' FROM Northwind.dbo.Products p2 WHERE p2.CategoryId = p1.CategoryId ORDER BY ProductName FOR XML PATH('') ) AS Products FROM Northwind.dbo.Products p1 GROUP BY CategoryId; Can anyone give me a hand with this? Any help would be greatly appreciated!
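
    The FOR XML PATH approach carries over to the three-table case by correlating the inner query on ReviewID; here is a sketch against the tables described above (names taken from the question), where the STUFF wrapper just strips the leading comma and space:

        SELECT r.ReviewID,
               r.ReviewDate,
               STUFF((SELECT ', ' + u.FName
                      FROM Reviewers rv
                      JOIN Users u ON u.UserID = rv.UserID
                      WHERE rv.ReviewID = r.ReviewID
                      ORDER BY u.FName
                      FOR XML PATH('')), 1, 2, '') AS Users
        FROM Reviews r;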

    Read the article

  • Why does perl crash with "*** glibc detected *** perl: munmap_chunk(): invalid pointer"?

    - by sid_com
    #!/usr/bin/env perl use warnings; use strict; use 5.012; use XML::LibXML::Reader; my $reader = XML::LibXML::Reader->new( location => 'http://www.heise.de/' ) or die $!; while ( $reader->read ) { say $reader->name; } At the end of the output from this script I get this error-messages: * glibc detected * perl: munmap_chunk(): invalid pointer: 0x0000000000b362e0 * ======= Backtrace: ========= /lib64/libc.so.6[0x7fb84952fc76] ... ======= Memory map: ======== 00400000-0053d000 r-xp 00000000 08:01 182002 /usr/local/bin/perl ... Is this due a bug? perl -V: Summary of my perl5 (revision 5 version 12 subversion 0) configuration: Platform: osname=linux, osvers=2.6.31.12-0.2-desktop, archname=x86_64-linux uname='linux linux1 2.6.31.12-0.2-desktop #1 smp preempt 2010-03-16 21:25:39 +0100 x86_64 x86_64 x86_64 gnulinux ' config_args='-Dnoextensions=ODBM_File' hint=recommended, useposix=true, d_sigaction=define useithreads=undef, usemultiplicity=undef useperlio=define, d_sfio=undef, uselargefiles=define, usesocks=undef use64bitint=define, use64bitall=define, uselongdouble=undef usemymalloc=n, bincompat5005=undef Compiler: cc='cc', ccflags ='-fno-strict-aliasing -pipe -fstack-protector -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64', optimize='-O2', cppflags='-fno-strict-aliasing -pipe -fstack-protector -I/usr/local/include' ccversion='', gccversion='4.4.1 [gcc-4_4-branch revision 150839]', gccosandvers='' intsize=4, longsize=8, ptrsize=8, doublesize=8, byteorder=12345678 d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=16 ivtype='long', ivsize=8, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8 alignbytes=8, prototype=define Linker and Libraries: ld='cc', ldflags =' -fstack-protector -L/usr/local/lib' libpth=/usr/local/lib /lib /usr/lib /lib64 /usr/lib64 /usr/local/lib64 libs=-lnsl -ldl -lm -lcrypt -lutil -lc perllibs=-lnsl -ldl -lm -lcrypt -lutil -lc libc=/lib/libc-2.10.1.so, so=so, useshrplib=false, libperl=libperl.a gnulibc_version='2.10.1' Dynamic Linking: dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-Wl,-E' cccdlflags='-fPIC', lddlflags='-shared -O2 -L/usr/local/lib -fstack-protector' Characteristics of this binary (from libperl): Compile-time options: PERL_DONT_CREATE_GVSV PERL_MALLOC_WRAP USE_64_BIT_ALL USE_64_BIT_INT USE_LARGE_FILES USE_PERLIO USE_PERL_ATOF Built under linux Compiled at Apr 15 2010 13:25:46 @INC: /usr/local/lib/perl5/site_perl/5.12.0/x86_64-linux /usr/local/lib/perl5/site_perl/5.12.0 /usr/local/lib/perl5/5.12.0/x86_64-linux /usr/local/lib/perl5/5.12.0 .

    Read the article

  • Enormous data and PHP errors

    - by salamis
    I am currently using the following Highcharts/Highstock chart: http://www.highcharts.com/stock/demo/data-grouping in order to display the data returned from the server. We retrieve the data from a MySQL database and it is really big; we are storing sensor metrics every second. After a while we got the following error: [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 4756882 bytes) in C:\\wamp\\www\\admin\\getTrends.php on line 156, referer: http://localhost/admin/trends.php [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP Stack trace:, referer: http://localhost/admin/trends.php [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP 1. {main}() C:\\wamp\\www\\admin\\getTrends.php:0, referer: http://localhost/admin/trends.php [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP 2. getTrendsDataAI() C:\\wamp\\www\\admin\\getTrends.php:33, referer: http://localhost/admin/trends.php [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP 3. printResults() C:\\wamp\\www\\admin\\getTrends.php:102, referer: http://localhost/admin/trends.php [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP 4. createData() C:\\wamp\\www\\admin\\getTrends.php:230, referer: http://localhost/admin/trends.php [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP 5. implode() C:\\wamp\\www\\admin\\getTrends.php:156, referer: http://localhost/admin/trends.php What is the best way to return this data as a JSON object to Highstock for viewing? And how can we overcome the PHP memory limitation? Shall we return a chunk of data each time? How do people usually present enormous amounts of data to users and create charts and reports from it? Another big problem we need to overcome is that the returned JSON object is enormous. At this point it is around 20-30 MB and it will be much larger in the future. Is it OK to return this data to the user and perform everything client side? Any suggestions or thoughts welcome.
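
    A direction worth sketching (table and column names below are assumptions, not from the question): instead of building the whole response with implode() and holding it in memory, stream the rows to the client as they come back from MySQL, so PHP's memory use stays flat however many points the query returns. Aggregating on the server — for example one averaged point per minute — shrinks the 20-30 MB payload far more than any client-side trick will, and Highstock's dataGrouping then has far fewer points to chew on.

        <?php
        header('Content-Type: application/json');
        echo '[';
        $first = true;
        // Assumed table/columns; average per minute so the browser gets thousands
        // of points instead of millions.
        $sql = 'SELECT FLOOR(UNIX_TIMESTAMP(ts) / 60) * 60 AS t, AVG(value) AS v
                FROM sensor_metrics GROUP BY t ORDER BY t';
        $result = mysql_query($sql);
        while ($row = mysql_fetch_assoc($result)) {
            if (!$first) {
                echo ',';
            }
            // Highstock wants [timestamp_in_ms, value] pairs
            echo '[', $row['t'] * 1000, ',', $row['v'], ']';
            $first = false;
        }
        echo ']';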

    Read the article

  • Replacing a word in a text file with a value using python

    - by Jamde Jam
    I have been trying to replace a word in a text file with a value (say 1), but my outfile is blank. I am new to python (it's only been a month since I started learning it). My file is relatively large, but I just want to replace a word with the value 1 for now. Here is a segment of what the file looks like: NAME SECOND_1 ATOM 1 6 0 0 0 # ORB 1 ATOM 2 2 0 12/24 0 # ORB 2 ATOM 3 2 12/24 0 0 # ORB 2 ATOM 4 2 0 0 4/24 # ORB 3 ATOM 5 2 0 0 20/24 # ORB 3 ATOM 6 2 0 0 8/24 # ORB 3 ATOM 7 2 0 0 16/24 # ORB 3 ATOM 8 6 0 0 12/24 # ORB 1 ATOM 9 2 12/24 0 12/24 # ORB 2 ATOM 10 2 0 12/24 12/24 # ORB 2 #1 #2 #3 I want to first replace the word ATOM with the value 1. Next I want to replace #ORB with a space. Here is what I am trying thus far. input = open('SECOND_orbitsJ22.txt','r') output=open('SECOND_orbitsJ22_out.txt','w') for line in input: word=line.split(',') if(word[0]=='ATOM'): word[0]='1' output.write(','.join(word)) Can anyone offer any suggestions or help? Thanks so much.
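
    Two things in the snippet explain the blank output: the file is whitespace-delimited, so line.split(',') returns the whole line as a single element and word[0] is never exactly 'ATOM', and the output file is never closed, so the buffer is never flushed. A minimal corrected sketch (the '# ORB' spelling follows the sample data; adjust it if the real file differs):

        with open('SECOND_orbitsJ22.txt') as infile, \
             open('SECOND_orbitsJ22_out.txt', 'w') as outfile:
            for line in infile:
                words = line.split()                      # split on whitespace, not ','
                if words and words[0] == 'ATOM':
                    line = line.replace('ATOM', '1', 1)   # replace the word with 1
                line = line.replace('# ORB', ' ', 1)      # replace the marker with a space
                outfile.write(line)                       # the with-block closes and flushes both files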

    Read the article

  • need to clean malformed tags using regular expression

    - by Brian
    Looking to find the appropriate regular expression for the following conditions: I need to clean certain tags within free-flowing text. For example, within the text I have two important tag types: a date tag such as <2004:04:12> and a name tag such as <John Doe>. Unfortunately, some of the tags are missing the "<" or ">" delimiter. For example, some are as follows: 1) <2004:04:12 , I need this to be <2004:04:12> 2) 2004:04:12>, I need this to be <2004:04:12> 3) <John Doe , I need this to be <John Doe> I attempted to use the following for situation 1: String regex = "<\\d{4}-\\d{2}-\\d{2}\\w*{2}[^>]"; String output = content.replaceAll(regex,"$0>"); This did find all instances of "<2004:04:12" and the result was "<2004:04:12 ". However, I need to eliminate the space prior to the ending tag. Not sure this is the best way. Any suggestions? Thanks
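
    A sketch of one way to handle the two date cases (the pattern uses the colon-separated form shown in the examples; switch the separators if the real data uses hyphens, as the attempted regex did). Capturing only the date and re-emitting it between both delimiters avoids the [^>] that was dragging the space into the match:

        // case 1: "<2004:04:12 " -> "<2004:04:12>" (the \\s* also swallows the stray space)
        String output = content.replaceAll("<(\\d{4}:\\d{2}:\\d{2})\\s*(?!>)", "<$1>");
        // case 2: "2004:04:12>" -> "<2004:04:12>"
        output = output.replaceAll("(?<!<)(\\d{4}:\\d{2}:\\d{2})>", "<$1>");
        // case 3 (names) needs a similar pattern keyed to whatever the names look like in the data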

    Read the article

  • capture text, including tags from string, and then reorder tags with text

    - by Brian
    I have the following text: abcdef<CONVERSION>abcabcabcabc<2007-01-12><name1><2007-01-12>abcabcabcabc<name2><2007-01-11>abcabcabcabc<name3><2007-02-12>abcabcabcabc<name4>abcabcabcabc<2007-03-12><name5><date>abcabcabcabc<name6> I need to use regular expressions in order to clean the above text. The basic extraction rule is: <2007-01-12>abcabcabcabc<name2> I have no problem extracting this pattern. My issue is that within the text I have malformed sequences: if the text doesn't start with a date and end with a name, my extraction fails. For example, the text above may have several malformed sequences, such as: abcabcabcabc<2007-01-12><name1> Should be: <2007-01-12>abcabcabcabc<name1> Is it possible to have a regular expression that would clean the above, prior to extracting my consistent pattern? In short, I need to find all malformed patterns, and then take the date tag and put it in front of the text, as provided in the example above. Thanks.
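
    One option — a sketch, assuming the malformed runs always look like text, then the date tag, then the name tag — is a pre-pass that swaps the date tag to the front of its text run, after which the normal <date>text<name> extraction applies unchanged:

        // "abcabcabcabc<2007-01-12><name1>"  becomes  "<2007-01-12>abcabcabcabc<name1>";
        // well-formed runs are untouched because their text is followed by a name tag,
        // not a date tag.
        String cleaned = text.replaceAll(
                "([^<>]+)(<\\d{4}-\\d{2}-\\d{2}>)(<[^<>]+>)",
                "$2$1$3");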

    Read the article

  • glibc detected ./a.out: free(): invalid pointer

    - by ExtremeBlue
    typedef struct _PERSON { size_t age; unsigned char* name; }PERSON; int init(PERSON** person) { (* person) = (PERSON *) malloc(sizeof(struct _PERSON)); (* person)->age = 1; (* person)->name = (unsigned char *) malloc(sizeof(4)); (* person)->name = "NAME"; return 0; } void close(PERSON** person) { (* person)->age = 0; if((* person)->name != NULL) { free((* person)->name); } if((* person) != NULL) { free((* person)); } } int main(int argc, char* argv[]) { PERSON* p; init(&p); printf("%d\t%s\n", (int) p->age, p->name); close(&p); return 0; } 1 NAME *** glibc detected *** ./a.out: free(): invalid pointer: 0x000000000040079c *** ======= Backtrace: ========= /lib/libc.so.6(+0x774b6)[0x7fa9027054b6] /lib/libc.so.6(cfree+0x73)[0x7fa90270bc83] ./a.out(close+0x3d)[0x400651] ./a.out[0x40069f] /lib/libc.so.6(__libc_start_main+0xfe)[0x7fa9026acd8e] ./a.out[0x4004f9] ... 7fa8fc000000-7fa8fc021000 rw-p 00000000 00:00 0 7fa8fc021000-7fa900000000 ---p 00000000 00:00 0 7fa902478000-7fa90248d000 r-xp 00000000 08:12 23068732 /lib/libgcc_s.so.1 7fa90248d000-7fa90268c000 ---p 00015000 08:12 23068732 /lib/libgcc_s.so.1 7fa90268c000-7fa90268d000 r--p 00014000 08:12 23068732 /lib/libgcc_s.so.1 7fa90268d000-7fa90268e000 rw-p 00015000 08:12 23068732 /lib/libgcc_s.so.1 7fa90268e000-7fa902808000 r-xp 00000000 08:12 23068970 /lib/libc-2.12.1.so 7fa902808000-7fa902a07000 ---p 0017a000 08:12 23068970 /lib/libc-2.12.1.so 7fa902a07000-7fa902a0b000 r--p 00179000 08:12 23068970 /lib/libc-2.12.1.so 7fa902a0b000-7fa902a0c000 rw-p 0017d000 08:12 23068970 /lib/libc-2.12.1.so 7fa902a0c000-7fa902a11000 rw-p 00000000 00:00 0 7fa902a11000-7fa902a31000 r-xp 00000000 08:12 23068966 /lib/ld-2.12.1.so 7fa902c25000-7fa902c28000 rw-p 00000000 00:00 0 7fa902c2e000-7fa902c31000 rw-p 00000000 00:00 0 7fa902c31000-7fa902c32000 r--p 00020000 08:12 23068966 /lib/ld-2.12.1.so 7fa902c32000-7fa902c33000 rw-p 00021000 08:12 23068966 /lib/ld-2.12.1.so 7fa902c33000-7fa902c34000 rw-p 00000000 00:00 0 7fff442d5000-7fff442f6000 rw-p 00000000 00:00 0 [stack] 7fff44308000-7fff44309000 r-xp 00000000 00:00 0 [vdso] ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall] Aborted
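
    The crash comes from the two name lines in init: the malloc'd pointer is immediately overwritten with the address of the string literal "NAME", and calling free() on a string literal is exactly what glibc is aborting on (the malloc(sizeof(4)) would also be one byte short for "NAME" plus its terminator). A corrected sketch, with close renamed to destroy so it doesn't collide with the POSIX close():

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        typedef struct _PERSON {
            size_t age;
            char  *name;
        } PERSON;

        int init(PERSON **person, const char *name)
        {
            *person = malloc(sizeof(**person));
            if (*person == NULL)
                return -1;
            (*person)->age  = 1;
            (*person)->name = malloc(strlen(name) + 1);  /* room for the terminator */
            if ((*person)->name == NULL) {
                free(*person);
                return -1;
            }
            strcpy((*person)->name, name);               /* copy the bytes, don't reassign the pointer */
            return 0;
        }

        void destroy(PERSON **person)
        {
            if (*person != NULL) {
                free((*person)->name);                   /* now a pointer that malloc actually returned */
                free(*person);
                *person = NULL;
            }
        }

        int main(void)
        {
            PERSON *p;
            if (init(&p, "NAME") == 0) {
                printf("%zu\t%s\n", p->age, p->name);
                destroy(&p);
            }
            return 0;
        }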

    Read the article

  • Reference an object, based on a variable with its name in it

    - by James G
    I'm looking for a way to reference an object, based on a variable with it's name in it. I know I can do this for properties and sub properties: var req = {body: {jobID: 12}}; console.log(req.body.jobID); //12 var subProperty = "jobID"; console.log(req.body[subProperty ]); //12 var property = "body"; console.log(req[property][subProperty]); //12 is it possible for the object itself? var req = {body: {jobID: 12}}; var object = "req"; var property = "body"; var subProperty = "jobID"; console.log([object][property][subProperty]); //12 or console.log(this[object][property][subProperty]); //12 Note: I'm doing this in node.js not a browser. Here is an exert from the function: if(action.render){ res.render(action.render,renderData); }else if(action.redirect){ if(action.redirect.args){ var args = action.redirect.args; res.redirect(action.redirect.path+req[args[0]][args[1]]); }else{ res.redirect(action.redirect.path); } } I could work around it by changing it to this, but I was looking for something more dynamic. if(action.render){ res.render(action.render,renderData); }else if(action.redirect){ if(action.redirect.args){ var args = action.redirect.args; if(args[0]==="req"){ res.redirect(action.redirect.path+req[args[1]][args[2]]); }else if(args[0]==="rows"){ rows.redirect(action.redirect.path+rows[args[1]][args[2]]); } }else{ res.redirect(action.redirect.path); } }
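
    Local variables aren't reachable by name in Node the way globals are via window in a browser, so the usual way out — a sketch reusing the names from the snippet above — is to put the candidate objects into a lookup map and let the string in args[0] pick an entry from that map instead of a variable:

        // the string in args[0] now selects an entry in this map, not a variable name
        var sources = { req: req, rows: rows };

        if (action.render) {
          res.render(action.render, renderData);
        } else if (action.redirect) {
          if (action.redirect.args) {
            var args = action.redirect.args;            // e.g. ["req", "body", "jobID"]
            var value = sources[args[0]][args[1]][args[2]];
            res.redirect(action.redirect.path + value);
          } else {
            res.redirect(action.redirect.path);
          }
        }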

    Read the article

  • .NET 3.0 Unit Testing getting System.MethodAccessException calling .NET 2.0

    - by NealWalters
    Is there any way to get around this exception? Can I not call a .NET 2.0 assembly from 3.5? I have to write .NET 2.0 code to maintain compatibility with BizTalk 2006/R2, but I would like to test with VS2008 unit tests to be consistent with the other non-BizTalk code that we are testing. Test method ABC.UnitTest.UnitTest1.TestReferenceCode1 threw exception: System.MethodAccessException: ABC.EasyRegEx.extractUsingRegEx(System.String, System.String).

    Read the article

  • pointer being freed was not allocated. Complex malloc history help

    - by Martin KS
    I've followed the guides helpfully linked here: http://stackoverflow.com/questions/295778/iphone-debugging-pointer-being-freed-was-not-allocated-errors but the malloc_history is really throwing me for a loop, can anyone shed any light on the following: ALLOC 0x185c600-0x18605ff [size=16384]: thread_a068a4e0 |start | main | UIApplicationMain | -[UIApplication _run] | CFRunLoopRunInMode | CFRunLoopRunSpecific | PurpleEventCallback | _UIApplicationHandleEvent | -[UIApplication sendEvent:] | -[UIApplication handleEvent:withNewEvent:] | -[UIApplication _reportAppLaunchFinished] | CA::Transaction::commit() | CA::Context::commit_transaction(CA::Transaction*) | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CA::Context::commit_layer(_CALayer*, unsigned int, unsigned int, void*) | CA::Render::encode_set_object(CA::Render::Encoder*, unsigned long, unsigned int, CA::Render::Object*, unsigned int) | CA::Render::Layer::encode(CA::Render::Encoder*) const | CA::Render::Image::encode(CA::Render::Encoder*) const | CA::Render::Encoder::encode_data_async(void const*, unsigned long, void (*)(void const*, void*), void*) | CA::Render::Encoder::encode_bytes(void const*, unsigned long) | CA::Render::Encoder::grow(unsigned long) | realloc | malloc_zone_realloc ---- FREE 0x185c600-0x18605ff [size=16384]: thread_a068a4e0 |start | main | UIApplicationMain | -[UIApplication _run] | CFRunLoopRunInMode | CFRunLoopRunSpecific | PurpleEventCallback | _UIApplicationHandleEvent | -[UIApplication sendEvent:] | -[UIApplication handleEvent:withNewEvent:] | -[UIApplication _reportAppLaunchFinished] | CA::Transaction::commit() | CA::Context::commit_transaction(CA::Transaction*) | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CA::Context::commit_layer(_CALayer*, unsigned int, unsigned int, void*) | CA::Render::encode_set_object(CA::Render::Encoder*, unsigned long, unsigned int, CA::Render::Object*, unsigned int) | CA::Render::Layer::encode(CA::Render::Encoder*) const | CA::Render::Image::encode(CA::Render::Encoder*) const | CA::Render::Encoder::encode_data_async(void const*, unsigned long, void (*)(void const*, void*), void*) | CA::Render::Encoder::encode_bytes(void const*, unsigned long) | CA::Render::Encoder::grow(unsigned long) | realloc | malloc_zone_realloc ALLOC 0x185e000-0x185e62f [size=1584]: thread_a068a4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[UITableView _userSelectRowAtIndexPath:] | -[UITableView _selectRowAtIndexPath:animated:scrollPosition:notifyDelegate:] | -[PLAlbumView tableView:didSelectRowAtIndexPath:] | -[PLUIAlbumViewController albumView:selectedPhoto:] | PLNotifyImagePickerOfImageAvailability | -[UIImagePickerController _imagePickerDidCompleteWithInfo:] | -[GalleryViewController imagePickerController:didFinishPickingMediaWithInfo:] | UIImageJPEGRepresentation | CGImageDestinationFinalize | _CGImagePluginWriteJPEG | writeOne | _cg_jpeg_start_compress | _cg_jinit_compress_master | _cg_jinit_c_prep_controller | alloc_sarray | alloc_large | malloc | malloc_zone_malloc ---- FREE 0x185e000-0x185e62f [size=1584]: thread_a068a4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[UITableView _userSelectRowAtIndexPath:] | -[UITableView 
_selectRowAtIndexPath:animated:scrollPosition:notifyDelegate:] | -[PL AlbumView tableView:didSelectRowAtIndexPath:] | -[PLUIAlbumViewController albumView:selectedPhoto:] | PLNotifyImagePickerOfImageAvailability | -[UIImagePickerController _imagePickerDidCompleteWithInfo:] | -[GalleryViewController imagePickerController:didFinishPickingMediaWithInfo:] | UIImageJPEGRepresentation | CGImageDestinationFinalize | _CGImagePluginWriteJPEG | writeOne | _cg_jpeg_abort | free_pool | free ALLOC 0x185c800-0x185ea1f [size=8736]: thread_a068a4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[UITableView _userSelectRowAtIndexPath:] | -[UITableView _selectRowAtIndexPath:animated:scrollPosition:notifyDelegate:] | -[PLAlbumView tableView:didSelectRowAtIndexPath:] | -[PLUIAlbumViewController albumView:selectedPhoto:] | PLNotifyImagePickerOfImageAvailability | -[UIImagePickerController _imagePickerDidCompleteWithInfo:] | -[GalleryViewController imagePickerController:didFinishPickingMediaWithInfo:] | -[UIImage initWithData:] | _UIImageRefFromData | CGImageSourceCreateImageAtIndex | makeImagePlus | _CGImagePluginInitJPEG | initImageJPEG | calloc | malloc_zone_calloc

    Read the article

  • When is a webapp called Beta, alpha, pre-alpha, or none

    - by dmontain
    I've come across many apps on the web that call themselves Beta. I've come across other apps that had an alpha designation. I've even come across some that called themselves pre-alpha, whatever that means (if you know please clarify). Then I've come across some really bad webapps that shouldn't have left the developer's computer and they didn't have any beta designations. I've also seen some well built apps that called themselves Beta, including Stack Exchange (the mother site of SO) which I believe is very full featured to be called a Beta. I'm a little confused. It seems people are doing it at their whims. Is there an established rule or a checklist that can help decide what stage an app is in (beta, alpha, pre-alpha, or none)? P.S. Please feel free to retag as appropriate.

    Read the article

  • having a test debug app and a released debug app side by side

    - by Tristan
    Yo! When I download my app from the iStore, the latest test version installed to my phone gets overwritten. Does anyone know how to have two versions of the same app side by side? On a test project, I edited the build settings so that "release" and "debug" have different product names. This seemed to solve my problem; however, when I try this same trick on my actual project, the two overwrite each other again. Does anyone have a recommendation? I don't mind how it's done. Thanks! Tristan

    Read the article

  • Random Page Cost and Planning

    - by Dave Jarvis
    A query (see below) that extracts climate data from weather stations within a given radius of a city using the dates for which those weather stations actually have data. The query uses the table's only index, rather effectively: CREATE UNIQUE INDEX measurement_001_stc_idx ON climate.measurement_001 USING btree (station_id, taken, category_id); Reducing the server's configuration value for random_page_cost from 2.0 to 1.1 had a massive performance improvement for the given range (nearly an order of magnitude) because it suggested to PostgreSQL that it should use the index. While the results now return in 5 seconds (down from ~85 seconds), problematic lines remain. Bumping the query's end date by a single year causes a full table scan: sc.taken_start >= '1900-01-01'::date AND sc.taken_end <= '1997-12-31'::date AND How do I persuade PostgreSQL to use the indexes regardless of years between the two dates? (A full table scan against 43 million rows is probably not the best plan.) Find the EXPLAIN ANALYSE results below the query. Thank you! Query SELECT extract(YEAR FROM m.taken) AS year, avg(m.amount) AS amount FROM climate.city c, climate.station s, climate.station_category sc, climate.measurement m WHERE c.id = 5182 AND earth_distance( ll_to_earth(c.latitude_decimal,c.longitude_decimal), ll_to_earth(s.latitude_decimal,s.longitude_decimal)) / 1000 <= 30 AND s.elevation BETWEEN 0 AND 3000 AND s.applicable = TRUE AND sc.station_id = s.id AND sc.category_id = 1 AND sc.taken_start >= '1900-01-01'::date AND sc.taken_end <= '1996-12-31'::date AND m.station_id = s.id AND m.taken BETWEEN sc.taken_start AND sc.taken_end AND m.category_id = sc.category_id GROUP BY extract(YEAR FROM m.taken) ORDER BY extract(YEAR FROM m.taken) 1900 to 1996: Index "Sort (cost=1348597.71..1348598.21 rows=200 width=12) (actual time=2268.929..2268.935 rows=92 loops=1)" " Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))" " Sort Method: quicksort Memory: 32kB" " -> HashAggregate (cost=1348586.56..1348590.06 rows=200 width=12) (actual time=2268.829..2268.886 rows=92 loops=1)" " -> Nested Loop (cost=0.00..1344864.01 rows=744510 width=12) (actual time=0.807..2084.206 rows=134893 loops=1)" " Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (sc.station_id = m.station_id))" " -> Nested Loop (cost=0.00..12755.07 rows=1220 width=18) (actual time=0.502..521.937 rows=23 loops=1)" " Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)" " -> Index Scan using city_pkey1 on city c (cost=0.00..2.47 rows=1 width=16) (actual time=0.014..0.015 rows=1 loops=1)" " Index Cond: (id = 5182)" " -> Nested Loop (cost=0.00..9907.73 rows=3659 width=34) (actual time=0.014..28.937 rows=3458 loops=1)" " -> Seq Scan on station_category sc (cost=0.00..970.20 rows=3659 width=14) (actual time=0.008..10.947 rows=3458 loops=1)" " Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1996-12-31'::date) AND (category_id = 1))" " -> Index Scan using station_pkey1 on station s (cost=0.00..2.43 rows=1 width=20) (actual time=0.004..0.004 rows=1 loops=3458)" " Index Cond: (s.id = sc.station_id)" " Filter: (s.applicable AND (s.elevation >= 0) AND (s.elevation <= 3000))" " -> Append (cost=0.00..1072.27 rows=947 width=18) (actual time=6.996..63.199 rows=5865 loops=23)" 
" -> Seq Scan on measurement m (cost=0.00..25.00 rows=6 width=22) (actual time=0.000..0.000 rows=0 loops=23)" " Filter: (m.category_id = 1)" " -> Bitmap Heap Scan on measurement_001 m (cost=20.79..1047.27 rows=941 width=18) (actual time=6.995..62.390 rows=5865 loops=23)" " Recheck Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))" " -> Bitmap Index Scan on measurement_001_stc_idx (cost=0.00..20.55 rows=941 width=0) (actual time=5.775..5.775 rows=5865 loops=23)" " Index Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))" "Total runtime: 2269.264 ms" 1900 to 1997: Full Table Scan "Sort (cost=1370192.26..1370192.76 rows=200 width=12) (actual time=86165.797..86165.809 rows=94 loops=1)" " Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))" " Sort Method: quicksort Memory: 32kB" " -> HashAggregate (cost=1370181.12..1370184.62 rows=200 width=12) (actual time=86165.654..86165.736 rows=94 loops=1)" " -> Hash Join (cost=4293.60..1366355.81 rows=765061 width=12) (actual time=534.786..85920.007 rows=139721 loops=1)" " Hash Cond: (m.station_id = sc.station_id)" " Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end))" " -> Append (cost=0.00..867005.80 rows=43670150 width=18) (actual time=0.009..79202.329 rows=43670079 loops=1)" " -> Seq Scan on measurement m (cost=0.00..25.00 rows=6 width=22) (actual time=0.001..0.001 rows=0 loops=1)" " Filter: (category_id = 1)" " -> Seq Scan on measurement_001 m (cost=0.00..866980.80 rows=43670144 width=18) (actual time=0.008..73312.008 rows=43670079 loops=1)" " Filter: (category_id = 1)" " -> Hash (cost=4277.93..4277.93 rows=1253 width=18) (actual time=534.704..534.704 rows=25 loops=1)" " -> Nested Loop (cost=847.87..4277.93 rows=1253 width=18) (actual time=415.837..534.682 rows=25 loops=1)" " Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)" " -> Index Scan using city_pkey1 on city c (cost=0.00..2.47 rows=1 width=16) (actual time=0.012..0.014 rows=1 loops=1)" " Index Cond: (id = 5182)" " -> Hash Join (cost=847.87..1352.07 rows=3760 width=34) (actual time=6.427..35.107 rows=3552 loops=1)" " Hash Cond: (s.id = sc.station_id)" " -> Seq Scan on station s (cost=0.00..367.25 rows=7948 width=20) (actual time=0.004..23.529 rows=7949 loops=1)" " Filter: (applicable AND (elevation >= 0) AND (elevation <= 3000))" " -> Hash (cost=800.87..800.87 rows=3760 width=14) (actual time=6.416..6.416 rows=3552 loops=1)" " -> Bitmap Heap Scan on station_category sc (cost=430.29..800.87 rows=3760 width=14) (actual time=2.316..5.353 rows=3552 loops=1)" " Recheck Cond: (category_id = 1)" " Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1997-12-31'::date))" " -> Bitmap Index Scan on station_category_station_category_idx (cost=0.00..429.35 rows=6376 width=0) (actual time=2.268..2.268 rows=6339 loops=1)" " Index Cond: (category_id = 1)" "Total runtime: 86165.936 ms"

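    One low-risk way to probe this — a sketch, not from the question — is to override the planner settings only for the session or transaction that runs the report, and see whether the index plan still wins for the wider date range before touching the server-wide configuration:

        BEGIN;
        SET LOCAL random_page_cost = 1.1;   -- or, purely as a test: SET LOCAL enable_seqscan = off;
        -- run the climate query for 1900-01-01 .. 1997-12-31 here and EXPLAIN ANALYZE it
        COMMIT;

    If the forced index plan does turn out to be faster for 1900-1997, the thing to chase next may be the planner's row estimate for station_category (a fresh ANALYZE, or a higher statistics target on taken_start/taken_end) rather than the cost constants themselves.
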
    Read the article

  • How best to store Subversion version information in EARs?

    - by Rene
    When receiving a bug report or an it-doesnt-work message, one of my initial questions is always: what version? With different builds being at many stages of testing, planning and deploying, this is often a non-trivial question. In the case of releasing Java JAR (ear, jar, rar, war) files I would like to be able to look in/at the JAR and switch to the same branch, version or tag that was the source of the released JAR. How can I best adjust the ant build process so that the version information in the svn checkout remains in the created build? I was thinking along the lines of: adding a VERSION file, but with what content? storing information in the META-INF file, but under what property with which content? copying sources into the result archive; adding svn:properties to all sources with keywords in places the compiler leaves them be. I ended up using the svnversion approach (the accepted answer), because it scans the entire subtree as opposed to svn info which just looks at the current file / directory. For this I defined the SVN task in the ant file to make it more portable. <taskdef name="svn" classname="org.tigris.subversion.svnant.SvnTask"> <classpath> <pathelement location="${dir.lib}/ant/svnant.jar"/> <pathelement location="${dir.lib}/ant/svnClientAdapter.jar"/> <pathelement location="${dir.lib}/ant/svnkit.jar"/> <pathelement location="${dir.lib}/ant/svnjavahl.jar"/> </classpath> </taskdef> Not all builds result in webservices. The ear file before deployment must keep the same name because of updating in the application server. Making the file executable is still an option, but until then I just include a version information file. <target name="version"> <svn><wcVersion path="${dir.source}"/></svn> <echo file="${dir.build}/VERSION">${revision.range}</echo> </target> Refs: svnrevision: http://svnbook.red-bean.com/en/1.1/re57.html svn info: http://svnbook.red-bean.com/en/1.1/re13.html subclipse svn task: http://subclipse.tigris.org/svnant/svn.html svn client: http://svnkit.com/

    Read the article

  • Releasing Excel after using Interop

    - by figus
    Hi everyone, I've read many posts looking for my answer, but all are similar to this: http://stackoverflow.com/questions/1610743/reading-excel-files-in-vb-net-leaves-excel-process-hanging My problem is that I don't quit the app... The idea is this: if a user has Excel open, and he has the file I'm interested in open, get that Excel instance and do whatever I want to do... But I don't want to close his file after I'm done... I want him to keep working on it. The problem is that when he closes Excel... the process keeps running... and running... and running after the user closes Excel with the X button... This is how I try to do it. This piece is used to know if he has Excel open, and in the For I check for the file name I'm interested in: Try oApp = GetObject(, "Excel.Application") libroAbierto = True For Each libro As Microsoft.Office.Interop.Excel.Workbook In oApp.Workbooks If libro.Name = EquipoASeccionIdSeccion.Text & ".xlsm" Then Exit Try End If Next libroAbierto = False Catch ex As Exception oApp = New Microsoft.Office.Interop.Excel.Application End Try Here would be my code... if he doesn't have Excel open, I create a new instance, open the file and everything else. My code ends with this: If Not libroAbierto Then libroSeccion.Close(SaveChanges:=True) oApp.Quit() Else oApp.UserControl = True libroSeccion.Save() End If System.Runtime.InteropServices.Marshal.FinalReleaseComObject(libroOriginal) System.Runtime.InteropServices.Marshal.FinalReleaseComObject(libroSeccion) System.Runtime.InteropServices.Marshal.FinalReleaseComObject(origen) System.Runtime.InteropServices.Marshal.FinalReleaseComObject(copiada) System.Runtime.InteropServices.Marshal.FinalReleaseComObject(oApp) libroOriginal = Nothing libroSeccion = Nothing oApp = Nothing origen = Nothing copiada = Nothing nuevosGuardados = True So you can see that, if I opened the file, I call oApp.Quit() and everything else, and the Excel process ends after a few seconds (maybe 5 approx.), BUT if I want the user to keep the file open (not calling Quit()), the Excel process keeps running after the user closes Excel with the X button. Is there any way to do what I'm trying to do? Control an open instance of Excel and release everything so that when the user closes it with the X button, the Excel process dies normally? Thanks!!!
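
    A sketch of the usual fix (variable names follow the snippet above): every COM object touched keeps EXCEL.EXE alive until its runtime wrapper is released, and the For Each over oApp.Workbooks quietly creates wrappers for the Workbooks collection and for every workbook it enumerates. Releasing those, and then forcing a garbage collection so the finalizers actually run, normally lets the process exit once the user clicks the X:

        Dim books As Microsoft.Office.Interop.Excel.Workbooks = oApp.Workbooks
        For Each libro As Microsoft.Office.Interop.Excel.Workbook In books
            If libro.Name = EquipoASeccionIdSeccion.Text & ".xlsm" Then
                libroSeccion = libro          ' keep the one we want; release it later
                Exit For
            End If
            System.Runtime.InteropServices.Marshal.ReleaseComObject(libro)
        Next
        System.Runtime.InteropServices.Marshal.ReleaseComObject(books)

        ' ... after the work is done, when the user keeps the workbook open:
        oApp.UserControl = True
        System.Runtime.InteropServices.Marshal.FinalReleaseComObject(libroSeccion)
        System.Runtime.InteropServices.Marshal.FinalReleaseComObject(oApp)
        libroSeccion = Nothing
        oApp = Nothing
        GC.Collect()
        GC.WaitForPendingFinalizers()
        GC.Collect()
        GC.WaitForPendingFinalizers()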

    Read the article

  • Do I have to release modifications made to a GPL v2 CMS?

    - by John McCollum
    If we use a CMS that is covered by the GPL (v2), do we have to re-release the source code of the CMS if we make modifications to the core? The GPL v2 states: The GPL does not require you to release your modified version. You are free to make modifications and use them privately, without ever releasing them. This applies to organizations (including companies), too; an organization can make a modified version and use it internally without ever releasing it outside the organization. But if you release the modified version to the public in some way, the GPL requires you to make the modified source code available to the program's users, under the GPL. The grey area for me here is the part that states "if you release the modified version to the public in some way" - does displaying a website to the public count as "releasing it to the public"? What about if a custom plugin is written which integrates with the CMS - are we required to release the source? Does this count as a modification?

    Read the article

  • Retain count = 0 in other function? memory-management problem?

    - by rdesign
    Hey guys, I declared an NSMutableArray in the header file with: NSMutableArray *myMuArr; and @property (nonatomic, retain) NSMutableArray *myMuArr; In the .m file I've got a delegate from another class: -(void)didGrabData:(NSArray*)theArray { self.myMuArr = [[[NSMutableArray alloc] initWithArray:myMuArr]retain]; } If I want to access self.myMuArr in cellForRowAtIndexPath it's empty (I checked the retain count of the array and it's 0). What am I doing wrong? Of course it's released in the dealloc, nowhere else. I would be very thankful for any help :0)
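
    A sketch of the likely fix: the array is being built from the still-empty myMuArr ivar instead of the incoming theArray, which is why it shows up empty in cellForRowAtIndexPath; the extra -retain on top of a retain property would also leak once the data did arrive. Letting the property do the retaining reduces the delegate method to:

        - (void)didGrabData:(NSArray *)theArray {
            self.myMuArr = [NSMutableArray arrayWithArray:theArray]; // autoreleased; the retain property keeps it alive
            [self.tableView reloadData];   // assumption: refresh the table once the data arrives
        }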

    Read the article

  • why does make say "no rule to make target"

    - by guilin ??
    Isn't the Makefile syntax target: require_files cmd...? Why did I get this problem? Makefile MXMLC = /opt/flex/bin/mxmlc MXMLC_RELEASE = $(MXMLC) -debug=false -compiler.optimize=true release: bin-release/Wrapper.swf, bin-release/Application.swf bin-release/Application.swf: src/**/*.as, lib/*.swc $(MXMLC_RELEASE) -output bin-release/Application.swf src/Application.as @@-rm ../server/public/game/Application.swf $(CP) bin-release/Application.swf ../server/public/game/Application.swf bin-release/Wrapper.swf: src/*.as, src/engine/**/*.as, lib/*.swc $(MXMLC_RELEASE) -output bin-release/Wrapper.swf src/Wrapper.as @@-rm ../server/public/game/Wrapper.swf $(CP) bin-release/Wrapper.swf ../server/public/game/Wrapper.swf $: make bin-release/Application.swf ~/workspace/project/src/flash [2]19:20 make: *** No rule to make target `src/constant/*.as,', needed by `bin-release/Application.swf'. Stop.
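
    The error message is the clue: make separates prerequisites with spaces, not commas, so the comma becomes part of the prerequisite name ("src/constant/*.as,") and no rule exists for that. GNU make also does not expand ** recursively. A sketch of one way to write it (recipe lines must begin with a real tab):

        AS_SOURCES := $(shell find src -name '*.as')
        SWC_LIBS   := $(wildcard lib/*.swc)

        release: bin-release/Wrapper.swf bin-release/Application.swf

        bin-release/Application.swf: $(AS_SOURCES) $(SWC_LIBS)
            $(MXMLC_RELEASE) -output bin-release/Application.swf src/Application.as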

    Read the article

  • How to embed revision information using mercurial and maven (and svn)

    - by Zwei Steinen
    Our project had a nice hack (although I'm guessing there are better ways to do it) to embed revision information into the artifacts (jar etc.) when we used svn. Now we have migrated to mercurial, and we want to have a similar thing, but before I start working on a similar hack with mercurial, I wanted to know if there are better ways to do this. Thanks for your answers! <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>exec-maven-plugin</artifactId> <executions> <execution> <phase>process-classes</phase> <id>svninfo</id> <goals> <goal>exec</goal> </goals> <configuration> <executable>svn</executable> <arguments> <argument>info</argument> <argument>../</argument> <argument>></argument> <argument>target/some-project/META-INF/svninfo.txt</argument> </arguments> </configuration> </execution> </executions> </plugin>
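
    A sketch of one Maven-native route (not from the question): the buildnumber-maven-plugin asks the project's <scm> provider for the current revision — Mercurial is supported through a scm:hg:... connection in the POM — and exposes it as ${buildNumber}, which the archiver plugins can then write into the jar/ear manifest, so no exec hack is needed:

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>buildnumber-maven-plugin</artifactId>
          <executions>
            <execution>
              <phase>validate</phase>
              <goals>
                <goal>create</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-jar-plugin</artifactId>
          <configuration>
            <archive>
              <manifestEntries>
                <!-- ${buildNumber} is set by the buildnumber plugin at validate time -->
                <SCM-Revision>${buildNumber}</SCM-Revision>
              </manifestEntries>
            </archive>
          </configuration>
        </plugin>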

    Read the article
