Search Results

Search found 15637 results on 626 pages for 'memory efficient'.


  • Base 128 or 256 Encoding for the Binary Lexical Octet Adhoc Transport Protocol?

    - by Randolpho
    I'm in the process of implementing a network driver for the Binary Lexical Octet Adhoc Transport (BLOAT) protocols, in the hopes of replacing the TCP/UDP/IP stack with a much more flexible XML structure. BLOAT is detailed in RFC 3252, so if you're unfamiliar with the protocol I highly recommend you read the entire RFC before providing any comments. Don't worry, it's short and sweet; you might even enjoy it.

    Anyway, my problem is this: BLOAT requires that the payload be Base64 encoded, which doesn't make sense to me. I mean, sure, it's the internet standard for binary payloads, but there are better, more efficient encodings available: Base128 and Base256, for example. That the RFC requires Base64 and doesn't allow for any other payload encoding really bothers me.

    To that end, I'm considering a small optional change to the protocol. Embrace and extend, right? I'd like to modify the payload element to accept an encoding attribute, which can extend the encoding to Base128 or Base256, or even to other encodings I can't conceive of at the moment. If the encoding attribute isn't present, Base64 would be assumed.

    So my question is this: should I? BLOAT is an accepted standard, even if it isn't exactly omnipresent. If I make this change, will there be compatibility issues? I don't foresee any, but perhaps you, oh great Stack Overflow community, can? If I do implement this change, should I contact the original RFC author? Should I offer a supplemental RFC?
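
    For concreteness, a hypothetical sketch of the proposed extension (the attribute name and token values are illustrative assumptions, not part of RFC 3252):

        <!-- explicit encoding requested -->
        <payload encoding="base256">...</payload>

        <!-- no encoding attribute: Base64 is assumed, as today -->
        <payload>dGhlIHF1aWNrIGJyb3duIGZveA==</payload>

    An old implementation that ignores unknown attributes would silently mis-decode a base256 payload, which is exactly the compatibility question raised above.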

    Read the article

  • Subset a data.frame by list and apply function on each part, by rows

    - by aL3xa
    This may seem like a typical plyr problem, but I have something different in mind. Here's the function that I want to optimize (i.e. skip the for loop):

        # dummy data
        set.seed(1985)
        lst <- list(a=1:10, b=11:15, c=16:20)
        m <- matrix(round(runif(200, 1, 7)), 10)
        m <- as.data.frame(m)

        dfsub <- function(dt, lst, fun) {
          # check whether dt is a data.frame
          stopifnot(is.data.frame(dt))
          # check if vectors in lst are "whole"/integer numbers,
          # since vector elements should be column indexes
          is.wholenumber <- function(x, tol = .Machine$double.eps^0.5) abs(x - round(x)) < tol
          # fail if there are any non-integers in the list
          idx <- rapply(lst, is.wholenumber)
          stopifnot(idx)
          # check that the list covers all columns
          stopifnot(ncol(dt) == length(idx))
          # subset the data
          subs <- list()
          for (i in 1:length(lst)) {
            # apply function on each part, by row
            subs[[i]] <- apply(dt[, lst[[i]]], 1, fun)
          }
          # preserve names
          names(subs) <- names(lst)
          # convert to data.frame
          subs <- as.data.frame(subs)
          # guess what =)
          return(subs)
        }

    And now a short demonstration... actually, let me explain what I primarily intended to do. I wanted to subset a data.frame by vectors gathered in a list object. Since this is part of a function that accompanies data manipulation in psychological research, you can consider m as the results of a personality questionnaire (10 subjects, 20 vars). The vectors in the list hold column indexes that define the questionnaire subscales (e.g. personality traits); each subscale is defined by several items (columns in the data.frame). If we presuppose that the score on each subscale is nothing more than the sum (or some other function) of row values (the results on that part of the questionnaire for each subject), you could run:

        > dfsub(m, lst, sum)
            a  b  c
        1  46 20 24
        2  41 24 21
        3  41 13 12
        4  37 14 18
        5  57 18 25
        6  27 18 18
        7  28 17 20
        8  31 18 23
        9  38 14 15
        10 41 14 22

    I took a glance at this function and I must admit that this little loop isn't spoiling the code at all... BUT, if there's an easier/more efficient way of doing this, please let me know!
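
    For reference, a loop-free sketch of the same subset-and-apply idea (same assumptions as dfsub; the sanity checks are omitted):

        dfsub2 <- function(dt, lst, fun) {
          # apply `fun` row-wise to each column subset listed in `lst`
          as.data.frame(lapply(lst, function(cols) apply(dt[, cols], 1, fun)))
        }
        dfsub2(m, lst, sum)  # same a/b/c data.frame as above

    Since lst is a named list, lapply preserves the subscale names automatically.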

    Read the article

  • Assistance with building an inverted-index

    - by tipu
    It's part of an information retrieval thing I'm doing for school. The plan is to create a hashmap of words, using the first two letters of the word as a key and any words starting with those two letters saved as a string value, so:

        hashmap["ba"] = "bad barley base"

    Once I'm done tokenizing a line I take that hashmap, serialize it, and append it to the text file named after the key. The idea is that if I take my data and spread it over hundreds of files, I'll lessen the time it takes to fulfill a search by lessening the density of each file. The problem I am running into is that when I'm making 100+ files in each run, it happens to choke on creating a few files for whatever reason, and so those entries are empty. Is there any way to make this more efficient? Is it worth continuing this, or should I abandon it?

    I'd like to mention I'm using PHP. The two languages I know relatively intimately are PHP and Java. I chose PHP because the front end will be very simple to do, and I will be able to add features like autocompletion/suggested search without a problem. I also see no benefit in using Java. Any help is appreciated, thanks.
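
    On the silently empty entries: a hedged sketch of the write step with an explicit error check and an advisory lock, so a failed create surfaces in the log instead of producing an empty postings file (the file layout here is an assumption):

        <?php
        // append this hashmap entry's postings, e.g. $key = "ba"
        $path = "index/" . $key . ".txt";
        $data = serialize($words) . "\n";
        if (file_put_contents($path, $data, FILE_APPEND | LOCK_EX) === false) {
            error_log("failed to write postings file: " . $path);
        }
        ?>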

    Read the article

  • Javascript functional inheritance with prototypes

    - by cdmckay
    In Douglas Crockford's JavaScript: The Good Parts, he recommends that we use functional inheritance. Here's an example:

        var mammal = function(spec, my) {
            var that = {};
            my = my || {};

            // Protected
            my.clearThroat = function() {
                return "Ahem";
            };

            that.getName = function() {
                return spec.name;
            };

            that.says = function() {
                return my.clearThroat() + ' ' + spec.saying || '';
            };

            return that;
        }

        var cat = function(spec, my) {
            var that = {};
            my = my || {};

            spec.saying = spec.saying || 'meow';
            that = mammal(spec, my);

            that.purr = function() {
                return my.clearThroat() + " purr";
            };

            that.getName = function() {
                return that.says() + ' ' + spec.name + ' ' + that.says();
            };

            return that;
        };

        var kitty = cat({name: "Fluffy"});

    The main issue I have with this is that every time I make a mammal or a cat, the JavaScript interpreter has to re-create all the functions in it. That is, you don't get to share the code between instances. My question is: how do I make this code more efficient? For example, if I was making thousands of cat objects, what is the best way to modify this pattern to take advantage of the prototype object?
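
    A conventional prototype-based sketch of the same hierarchy (the protected `my` object is given up in exchange for method functions that are created once and shared by every instance):

        function Mammal(spec) {
            this.spec = spec;
        }
        Mammal.prototype.clearThroat = function () { return "Ahem"; };
        Mammal.prototype.getName = function () { return this.spec.name; };
        Mammal.prototype.says = function () {
            return this.clearThroat() + ' ' + (this.spec.saying || '');
        };

        function Cat(spec) {
            spec.saying = spec.saying || 'meow';
            Mammal.call(this, spec);              // per-instance state only
        }
        Cat.prototype = new Mammal({});           // share Mammal's methods
        Cat.prototype.purr = function () { return this.clearThroat() + " purr"; };
        Cat.prototype.getName = function () {
            return this.says() + ' ' + this.spec.name + ' ' + this.says();
        };

        var kitty = new Cat({name: "Fluffy"});    // no functions rebuilt per cat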

    Read the article

  • Nonblocking TCP server

    - by hoodoos
    It's not a question really, I'm just looking for some guidelines :) I'm currently writing an abstract TCP server which should use as few threads as it can. Currently it works this way: I have a thread doing the listening and some worker threads. The listener thread just sits and waits for clients to connect; I expect to have a single listener thread per server instance. The worker threads do all the read/write/processing work on the clients' sockets.

    So my problem is in building an efficient worker process, and I came to a problem I can't really solve yet. The worker code is something like this (the code is really simple, just to show the place where I have my problem):

        List<Socket> readSockets = new List<Socket>();
        List<Socket> writeSockets = new List<Socket>();
        List<Socket> errorSockets = new List<Socket>();

        while (true)
        {
            Socket.Select(readSockets, writeSockets, errorSockets, 10);

            foreach (Socket readSocket in readSockets)
            {
                // do reading here
            }

            foreach (Socket writeSocket in writeSockets)
            {
                // do writing here
            }

            // POINT2: and here's the problem I describe below
        }

    It all works smoothly except for 100% CPU utilization, because the while loop cycles over and over again. If my clients do a send-receive-disconnect routine it's not that painful, but if I try to keep them alive doing send-receive-send-receive all over again, it really eats up all the CPU. So my first idea was to put a sleep in there: I check whether all sockets have their data sent, and then put a Thread.Sleep at POINT2, just for 10ms. But that 10ms later produces a huge delay of the same 10ms when I want to receive the next command from the client socket. For example, if I don't try to "keep alive", commands are executed within 10-15ms, and with keep-alive it becomes worse by at least 10ms :(

    Maybe it's just a poor architecture? What can be done so that my processor won't hit 100% utilization and my server reacts to something appearing in the client socket as soon as possible? Maybe somebody can point to a good example of a nonblocking server and the architecture it should maintain?
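
    One hedged sketch of a way out: hand the waiting to the kernel with asynchronous receives, so nothing spins while every socket is idle and the callback fires the moment data arrives:

        // queue an async read; the callback runs only when data (or EOF) arrives
        void StartReceive(Socket client)
        {
            byte[] buffer = new byte[4096];
            client.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, ar =>
            {
                int read = client.EndReceive(ar);
                if (read > 0)
                {
                    // process buffer[0..read), write any reply, then re-arm
                    StartReceive(client);
                }
                else
                {
                    client.Close();   // zero bytes: the peer disconnected
                }
            }, null);
        }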

    Read the article

  • Java EE suitability for a social network using a Cassandra datastore?

    - by Marcos
    We are in the process of making some important technology decisions for a social networking application. We're planning to have Cassandra (a NoSQL database) to support efficient data storage, and we would be using Hector (a Java client) to interact with Cassandra.

    1.) Would Java EE be a good choice over PHP for a social networking application in terms of performance, scalability & complexity?
    2.) As another possible implementation strategy, is it suitable to have the backend alone in Java and the rest in PHP?
    3.) What differences (as compared to PHP) does it make in terms of costs at the various stages of application development, deployment and maintenance?
    4.) What are the things to keep in mind as we move along with Java development & deployment (as we are relatively new to the Java background)?
    5.) Could you list some major production deployments of similar (social network) applications in Java?

    Thank you!

    Read the article

  • To Ajax or Not to Ajax a listing page

    - by kaivalya
    Here I am talking about product listing pages, where there are multiple filters that narrow the list of products appearing on the page: product types, categories, price range, etc. I have built such pages both with and without Ajax in the past.

    What I like about using Ajax on such a page is that when filters are selected, I only update the section that contains the product list. There is no need to refresh the whole page, which could end up re-loading the images in the top bar, banners, etc. and slow things down for the user. The Ajax way, in my opinion, is more compact and responsive from a user-experience standpoint. The downside of the Ajax route for me is that since the filter states are not maintained in the URL, I end up maintaining them on the server. This becomes complicated if I want to handle multi-window scenarios, and it is also costly to maintain such state in server memory for each session.

    Not using Ajax and simply keeping all filter values in the URL and refreshing the page is quite simple, but the luxury of refreshing only the pane that really needs to be refreshed is lost.

    Lately I am seeing a lot of large-scale e-commerce sites taking the non-Ajax approach on their listing pages, and this is making me question one more time whether it might be more efficient to build the non-Ajax listing page for the long-term ease of maintenance, and sacrifice a little bit of user experience. I am about to start implementing a new listing page for a product, I have the flexibility to go either way, and I would appreciate your input.

    Read the article

  • Fastest image iteration in Python

    - by Greg
    I am creating a simple green screen app with Python 2.7.4 but am getting quite slow results. I am currently using PIL 1.1.7 to load and iterate the images, and saw huge speed-ups changing from the old getpixel() to the newer load() and pixel access object indexing. However, the following loop still takes around 2.5 seconds to run for an image of around 720p resolution:

        def colorclose(Cb_p, Cr_p, Cb_key, Cr_key, tola, tolb):
            temp = math.sqrt((Cb_key - Cb_p)**2 + (Cr_key - Cr_p)**2)
            if temp < tola:
                return 0.0
            else:
                if temp < tolb:
                    return (temp - tola) / (tolb - tola)
                else:
                    return 1.0

        ....

        for x in range(width):
            for y in range(height):
                Y, cb, cr = fg_cbcr_list[x, y]
                mask = colorclose(cb, cr, cb_key, cr_key, tola, tolb)
                mask = 1 - mask
                bgr, bgg, bgb = bg_list[x, y]
                fgr, fgg, fgb = fg_list[x, y]
                pixels[x, y] = (
                    (int)(fgr - mask*key_color[0] + mask*bgr),
                    (int)(fgg - mask*key_color[1] + mask*bgg),
                    (int)(fgb - mask*key_color[2] + mask*bgb))

    Am I doing anything hugely inefficient here which makes it run so slow? I have seen similar, simpler examples where the loop is replaced by a boolean matrix, for instance, but for this case I can't see a way to replace the loop. The pixels[x, y] assignment seems to take the most time, but not knowing Python very well I am unsure of a more efficient way to do this. Any help would be appreciated.
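
    The boolean/float-matrix replacement becomes possible once the planes are NumPy arrays; a hedged sketch (the names are assumptions: cb, cr are float arrays of shape (w, h), and fg, bg are float arrays of shape (w, h, 3)):

        import numpy as np

        dist = np.sqrt((cb_key - cb)**2 + (cr_key - cr)**2)
        # same piecewise rule as colorclose(): 0 below tola, linear ramp, 1 above tolb
        mask = np.clip((dist - tola) / (tolb - tola), 0.0, 1.0)
        mask = (1.0 - mask)[..., np.newaxis]           # broadcast over the RGB axis
        out = (fg - mask * key_color + mask * bg).astype(np.uint8)

    This replaces both nested loops and the per-pixel tuple assignment with four whole-image operations.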

    Read the article

  • Is Berkeley DB XML a viable database backend?

    - by w00t
    Apparently, BDB-XML has been around since at least 2003, but I only recently stumbled upon it on Oracle's website: Berkeley DB XML. Here's the blurb:

        Oracle Berkeley DB XML is an open source, embeddable XML database with XQuery-based
        access to documents stored in containers and indexed based on their content. Oracle
        Berkeley DB XML is built on top of Oracle Berkeley DB and inherits its rich features
        and attributes. Like Oracle Berkeley DB, it runs in process with the application with
        no need for human administration. Oracle Berkeley DB XML adds a document parser, XML
        indexer and XQuery engine on top of Oracle Berkeley DB to enable the fastest, most
        efficient retrieval of data.

    To me it seems that the underlying ideas are technically sound and probably more mature than the newer document-based DBs like CouchDB or MongoDB. It has support for C, C++, Ruby and Perl, as far as I can determine. It even has HA capabilities like automatic replication using a master/slave model with automatic election.

    However, I can't seem to find any projects that use it. Is there something fundamentally wrong with it? Is the license too onerous? Is it too complicated? Why is it not being used?

    Read the article

  • Excel regex, or export to Python? "Vlookup" in Python?

    - by victorhooi
    heya,

    We have an Excel file with a worksheet containing people records.

    1. Phone Number Sanitation

    One of the fields is a phone number field, which contains phone numbers in the format e.g.:

        +XX(Y)ZZZZ-ZZZZ

    (where X, Y and Z are integers). There are also some records which have fewer digits, e.g.:

        +XX(Y)ZZZ-ZZZZ

    and others with really screwed-up formats:

        +XX(Y)ZZZZ-ZZZZ / ZZZZ

    or:

        ZZZZZZZZ

    We need to sanitise these all into the format 0YZZZZZZZZ (or 0YZZZZZZ for those with fewer digits).

    2. Fill in Supervisor Details

    Each person also has a supervisor, given as a numeric ID. We need to do a lookup to get the name and email address of that supervisor, and add them to the line. This lookup will be firstly on the same worksheet (i.e. searching itself), and it can then fall back to another workbook with more people.

    3. Approach?

    For the first issue, I was thinking of using regex in Excel/VBA somehow to do the parsing. My Excel-fu isn't the best, but I suppose I can learn... lol. Any particular points on this one? However, would I be better off exporting the XLS to a CSV (e.g. using xlrd), then using Python to fix up the phone numbers?

    For the second issue, I was thinking of just using vlookups in Excel to pull in the data, and somehow having it fall through: first searching itself, then the external workbook, then just putting in error text. Not sure how to do that last part. However, if I do happen to choose to export to CSV and do it in Python, what's an efficient way of doing the vlookup? (Should I convert to a dict, or just iterate? Or is there a better, or more idiomatic, way?)

    Cheers,
    Victor
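
    On the vlookup question: once the data is in Python, a dict is the idiomatic choice, since it is built once and each lookup is O(1); a hedged sketch (file and column names are assumptions):

        import csv

        # build the lookup once; setdefault keeps the first hit, so the
        # worksheet's own export wins over the external workbook's
        supervisors = {}
        for filename in ('people.csv', 'more_people.csv'):
            for row in csv.DictReader(open(filename)):
                supervisors.setdefault(row['ID'], (row['Name'], row['Email']))

        name, email = supervisors.get(supervisor_id, ('NOT FOUND', ''))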

    Read the article

  • Beginner video capture and processing/Camera selection

    - by mattbauch
    I'll soon be undertaking a research project in real-time event recognition but have no experience with the programming aspect of video capture (I'm an upperclassman undergraduate in computer engineering). I want to start off on the right foot, so advice from anyone with experience would be great. The ultimate goal is to track events such as a person standing up/sitting down, entering/leaving a room, possibly even shrugging/slumping in posture, etc. from a security-camera-like vantage point.

    First of all, which cameras/companies would you recommend? I'm looking to spend ~$100, more if necessary but not much. Great resolution isn't a must, but is desirable if affordable. What about IP network cameras vs. a USB-type webcam? Webcams are less expensive, but IP cameras seem like they'd be much less work to deal with in software. What features should I look for in the camera?

    Once I've selected a camera, what does converting its output to a series of RGB bitmaps entail? I've never dealt with video encoding/decoding, so a starting point or a tutorial that will guide me up to this point would be great if anyone has suggestions.

    Finally, what is the best (least complicated/most efficient) way to display video from the camera plus my own superimposed images (boxes around events in progress, for instance) in a GUI application? I can work on any operating system in any language. I have some experience with win32 GUIs and Java GUIs. The focus of the project is on the algorithm, so I'm trying to get the video capture/display portion of the app done cleanly and quickly.

    Thanks for any responses!!

    Read the article

  • Querying a self referencing join with NHibernate Linq

    - by Ben
    In my application I have a Category domain object. Category has a property Parent (of type Category), so in my NHibernate mapping I have:

        <many-to-one name="Parent" column="ParentID"/>

    Before I switched to NHibernate I had the ParentId property on my domain model (mapped to the corresponding database column). This made it easy to query for, say, all top-level categories (ParentID = 0):

        where(c => c.ParentId == 0)

    However, I have since removed the ParentId property from my domain model (because of NHibernate), so I now have to do the same query (using NHibernate.Linq) like so:

        public IList<Category> GetCategories(int parentId)
        {
            if (parentId == 0)
                return _catalogRepository.Categories.Where(x => x.Parent == null).ToList();
            else
                return _catalogRepository.Categories.Where(x => x.Parent.Id == parentId).ToList();
        }

    The real impact that I can see is the SQL generated. Instead of a simple 'select x,y,z from categories where parentid = 0', NHibernate generates a left outer join:

        SELECT this_.CategoryId as CategoryId4_1_, this_.ParentID as ParentID4_1_,
               this_.Name as Name4_1_, this_.Slug as Slug4_1_,
               parent1_.CategoryId as CategoryId4_0_, parent1_.ParentID as ParentID4_0_,
               parent1_.Name as Name4_0_, parent1_.Slug as Slug4_0_
        FROM Categories this_
        left outer join Categories parent1_ on this_.ParentID = parent1_.CategoryId
        WHERE this_.ParentID is null

    which doesn't seem much less efficient than what I had before. Still, is there a better way of querying these self-referencing joins? It's very tempting to drop the ParentId back onto my domain model for this reason.

    Thanks,
    Ben

    Read the article

  • SQL Joining on a one-to-many relationship

    - by Harley
    Ok, here is my original question. Table one contains:

        ID | Name
        1  | Mary
        2  | John

    Table two contains:

        ID | Color
        1  | Red
        1  | Blue
        2  | Blue
        2  | Green
        2  | Black

    I want to end up with:

        ID | Name | Red | Blue | Green | Black
        1  | Mary | Y   | Y    |       |
        2  | John |     | Y    | Y     | Y

    It seems that because there are 11 unique values for color, and 1000's upon 1000's of records in table one, there is no 'good' way to do this. So, two other questions.

    Is there an efficient way to query to get this result? I can then create a crosstab in my application to get the desired output:

        ID | Name | Color
        1  | Mary | Red
        1  | Mary | Blue
        2  | John | Blue
        2  | John | Green
        2  | John | Black

    If I wanted to limit the number of records returned, how could I write a query to do something like this?

        Where ((color='blue') AND (color<>'red' OR color<>'green'))

    So, using the above example, I would then get back:

        ID | Name | Color
        1  | Mary | Blue
        2  | John | Blue
        2  | John | Black

    I connect to Visual FoxPro tables via ADODB to use SQL. Thanks!
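
    For the second query, a hedged sketch (table names assumed; VFP's SQL dialect permitting): keep every row that isn't red or green, but only for IDs that also own a blue row:

        SELECT t1.ID, t1.Name, t2.Color
        FROM one t1
        INNER JOIN two t2 ON t2.ID = t1.ID
        WHERE t2.Color NOT IN ('Red', 'Green')
          AND t1.ID IN (SELECT ID FROM two WHERE Color = 'Blue')
        ORDER BY t1.ID

    Dropping the WHERE clause gives the plain join that feeds the crosstab in the first result.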

    Read the article

  • Best practices to develop and maintain code for complex jQuery/jQuery UI based applications

    - by dafi
    I'm working on my first very complex jQuery-based application. A single web page can contain hundreds of lines of jQuery-related code, for example for jQuery UI dialogs. Now I want to organize the code in separate files. For example, I'm moving all the dialog initialization code, $("#dialog-xxx").dialog({...}), into separate files, and due to reuse I wrap each one in a single function call:

        // dialogs.js
        function initDialog_1() {
            $("#dialog-1").dialog({});
        }

        function initDialog_2() {
            $("#dialog-2").dialog({});
        }

    This simplifies the function code and makes the calling page clear:

        $(function() {
            // do some init stuff
            initDialog_1();
            initTooltip_2();
        });

    Is this the correct pattern? Are you using more efficient techniques? I know that splitting code into many JS files introduces ugly bandwidth usage, so: does there exist some good practice or tool to 'join' files for production environments? I imagine some tool that does more work than simply minifying and/or compressing JS code.

    Read the article

  • What is your best-practice advice on implementing SQL stored procedures (in a C# WinForms application)?

    - by JYelton
    I have read these very good questions on SO about SQL stored procedures: "When should you use stored procedures?" and "Are stored procedures more efficient, in general, than inline statements on modern RDBMSs?"

    I am a beginner at integrating .NET/SQL, though I have used basic SQL functionality for more than a decade in other environments. It's time to advance with regards to organization and deployment. I am using .NET C# 3.5, Visual Studio 2008 and SQL Server 2008, though this question can be regarded as language- and database-agnostic, meaning that it could easily apply to other environments that use stored procedures and a relational database.

    Given that I have an application with inline SQL queries, and I am interested in converting to stored procedures for organizational and performance purposes, what are your recommendations for doing so? Here are some additional questions in my mind related to this subject that may help shape the answers:

    Should I create the stored procedures in SQL using SQL Management Studio and simply re-create the database when it is installed for a client? Or am I better off creating all of the stored procedures in my application, inside of a database initialization method? It seems logical to assume that creating stored procedures must follow the creation of tables in a new installation. My database initialization method creates new tables and inserts some default data. My plan is to create stored procedures following that step, but I am beginning to think there might be a better way to set up a database from scratch (such as in the installer of the program). Thoughts on this are appreciated.

    I have a variety of queries throughout the application. Some queries are incredibly simple (SELECT id FROM table) and others are extremely long and complex, performing several joins and accepting approximately 80 parameters. Should I replace all queries with stored procedures, or only those that might benefit from doing so?

    Finally, as this topic obviously requires some research and education, can you recommend an article, book, or tutorial that covers the nuances of using stored procedures instead of direct statements?

    Read the article

  • Sample Java code for approximate string matching, or Boyer-Moore extended for approximate string matching

    - by Dolphin
    Hi. I need to find (1) mismatches (incorrectly played notes), (2) insertions (additionally played notes), and (3) deletions (missed notes) in a music piece (e.g. note pitches [string values] stored in a table) against a reference music piece.

    This is possible either through exact string matching algorithms or through dynamic programming / approximate string matching. However, I realised that approximate string matching is more appropriate for my problem, since it identifies mismatches, insertions and deletions of notes. Or perhaps an extended version of Boyer-Moore that supports approximate matching.

    Is there any link to sample Java code I can try out for approximate string matching? I keep finding complex explanations and equations, but I hope I could do well with some sample code and simple explanations. Or can I find any sample Java code for Boyer-Moore extended for approximate string matching? I understand the Boyer-Moore concept, but I'm having trouble adjusting it to support approximate matching (i.e. to support mismatch, insertion, deletion).

    Also, what is the most efficient approximate string matching algorithm (the way Boyer-Moore is among exact string matching algorithms)? I greatly appreciate any insight/suggestions. Many thanks in advance.
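
    For orientation, a minimal dynamic-programming sketch of what such sample code usually looks like, counting exactly the three cases asked about (mismatch, insertion and deletion each cost 1):

        // edit distance between the played notes and the reference piece
        static int editDistance(String[] played, String[] reference) {
            int m = played.length, n = reference.length;
            int[][] d = new int[m + 1][n + 1];
            for (int i = 0; i <= m; i++) d[i][0] = i;   // i extra played notes
            for (int j = 0; j <= n; j++) d[0][j] = j;   // j missed notes
            for (int i = 1; i <= m; i++) {
                for (int j = 1; j <= n; j++) {
                    int subst = d[i - 1][j - 1]
                            + (played[i - 1].equals(reference[j - 1]) ? 0 : 1);
                    d[i][j] = Math.min(subst,
                            Math.min(d[i - 1][j] + 1,   // extra played note
                                     d[i][j - 1] + 1)); // missed reference note
                }
            }
            return d[m][n];  // backtracking through d recovers which notes differ
        }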

    Read the article

  • RESTful idempotence

    - by DutrowLLC
    I'm designing a RESTful web service utilizing ROA (Resource Oriented Architecture). I'm trying to work out an efficient way to guarantee idempotence for PUT requests that create new resources, in cases where the server designates the resource key.

    From my understanding, the traditional approach is to create a type of transaction resource such as /CREATE_PERSON. The client-server interaction for creating a new person resource would then be in two parts:

    Step 1: Get a unique transaction id for creating the new PERSON resource:

        Client request:
        GET /CREATE_PERSON

        Server response:
        200 OK
        transaction-id: "as8yfasiob"

    Step 2: Create the new person resource in a request guaranteed to be unique by using the transaction id:

        Client request:
        PUT /CREATE_PERSON/{transaction_id}
        first_name="Big bubba"

        Server response:
        201 Created
        PersonKey="398u4nsdf"

    (If the request is a duplicate, the server would send the same response without creating a new resource. It would perhaps send an error response if the transaction id were used on a non-duplicate request, but I have control over the client, so I can guarantee that this won't happen.)

    The problem that I see with this approach is that it requires sending two requests to the server in order to do the single operation of creating a new PERSON resource. This creates a performance issue, increasing the chance that the user will be waiting around for the client to complete their request. I've been trying to hash out ideas for eliminating the first step, such as pre-sending transaction ids with each request, but most of my ideas have other issues or involve sacrificing the statelessness of the application. Is there a way to do this?
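
    One common alternative, sketched here with an assumed URI layout: let the client mint the key (e.g. a UUID), so the PUT names its resource up front and is naturally idempotent with no prior round trip:

        Client request:
        PUT /person/8a6b2c34-9f1e-4c1d-b8a7-3f2e9d0c1b5a
        first_name="Big bubba"

        Server response:
        201 Created                  (first attempt)
        200 OK / 204 No Content      (replays of the same PUT)

    The trade-off is that the server no longer designates the key, which may or may not be acceptable here.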

    Read the article

  • Django: Odd mark_safe behaviour?

    - by Mark
    I wrote this little function for writing out HTML tags:

        def html_tag(tag, content=None, close=True, attrs={}):
            lst = ['<', tag]
            for key, val in attrs.iteritems():
                lst.append(' %s="%s"' % (key, escape_html(val)))
            if close:
                if content is None:
                    lst.append(' />')
                else:
                    lst.extend(['>', content, '</', tag, '>'])
            else:
                lst.append('>')
            return mark_safe(''.join(lst))

    It worked great, but then I read this article on efficient string concatenation (I know it doesn't really matter here, but I wanted consistency) and decided to update my script:

        def html_tag(tag, body=None, close=True, attrs={}):
            s = StringIO()
            s.write('<%s' % tag)
            for key, val in attrs.iteritems():
                s.write(' %s="%s"' % (key, escape_html(val)))
            if close:
                if body is None:
                    s.write(' />')
                else:
                    s.write('>%s</%s>' % (body, tag))
            else:
                s.write('>')
            return mark_safe(s.getvalue())

    But now my HTML gets escaped when I try to render it from my template. Everything else is exactly the same. It works properly if I replace the last line with return mark_safe(unicode(s.getvalue())). I checked the return type of s.getvalue(); it should be a str, just like in the first function, so why is this failing?? It also fails with SafeString(s.getvalue()) but succeeds with SafeUnicode(s.getvalue()). I'd also like to point out that I used return mark_safe(s.getvalue()) in a different function with no odd behavior.

    The "call stack" looks like this:

        class Input(Widget):
            def render(self):
                return html_tag('input', attrs={'type': self.itype, 'id': self.id,
                                                'name': self.name, 'value': self.value,
                                                'class': self.itype})

        class Field:
            def __unicode__(self):
                return mark_safe(self.widget.render())

    And then {{myfield}} is in the template. So it does get mark_safe'd twice, which I thought might have been the problem, but I tried removing that too... I really have no idea what's causing this, but it's not too hard to work around, so I guess I won't fret about it.

    Read the article

  • Efficiently Determine if EF 4 POCO Already in ObjectSet

    - by Eric J.
    I'm trying EF 4 with POCOs on a small project for the first time. In my Repository implementation, I want to provide a method AddOrUpdate that will add a passed-in POCO to the repository if it's new, or else do nothing (as the updated POCO will be saved when SaveChanges is called). My first thought was to do this:

        public void AddOrUpdate(Poco p)
        {
            if (!Ctx.Pocos.Contains<Poco>(p))
            {
                Ctx.Pocos.AddObject(p);
            }
        }

    However, that results in a NotSupportedException, as documented under "Referencing Non-Scalar Variables Not Supported" (bonus question: why would that not be supported?).

    Just removing the Contains part and always calling AddObject results in an InvalidOperationException:

        An object with the same key already exists in the ObjectStateManager. The existing
        object is in the Unchanged state. An object can only be added to the
        ObjectStateManager again if it is in the added state.

    So clearly EF 4 knows somewhere that this is a duplicate, based on the key. What's a clean, efficient way for the Repository to handle POCOs in AddOrUpdate, whether new or pre-existing, so that the subsequent call to SaveChanges() will do the right thing? I did consider carrying an isNew flag on the object itself, but I'm trying to take persistence ignorance as far as practical.
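
    A hedged sketch of one way to keep persistence ignorance and still detect the duplicate: ask the ObjectStateManager instead of querying the set (the entity set name is an assumption; note this checks what the context is tracking, not what exists in the database):

        public void AddOrUpdate(Poco p)
        {
            // build the EntityKey that EF would assign this instance
            EntityKey key = Ctx.CreateEntityKey("Pocos", p);
            ObjectStateEntry entry;
            if (!Ctx.ObjectStateManager.TryGetObjectStateEntry(key, out entry))
            {
                Ctx.Pocos.AddObject(p);   // unknown to the context: treat as new
            }
            // otherwise it is already attached; SaveChanges() persists any edits
        }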

    Read the article

  • NSUndoManager grouping problem?

    - by anonymous
    I'm working on a barebones drawing app. I'm attempting to implement undo/redo capability, so I tell the view's undoManager to save the current image before updating the display. This works perfectly (yes, I understand that redrawing/saving the entire view is not incredibly efficient, but I want to solve this problem before attempting to optimize the code). However, as expected, when I undo or redo, only the minute change is reflected. My goal is to have the whole finger stroke undone/redone. To do that, I told the undoManager to beginUndoGrouping in the touchesBegan method, and to endUndoGrouping in touchesEnded. That works for a bit, but after drawing a few strokes, the app crashes and gdb exits with EXC_BAD_ACCESS. I'm very grateful for any insight you can give me.

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
        {
            mouseDragged = YES;
            currentPoint = [[touches anyObject] locationInView:self];

            UIGraphicsBeginImageContext(drawingImageView.bounds.size);
            [drawingImageView.image drawInRect:drawingImageView.bounds];

            CGContextRef ctx = UIGraphicsGetCurrentContext();
            CGContextSetLineCap(ctx, kCGLineCapRound);
            CGContextSetLineWidth(ctx, drawingWidth);
            [drawingColor setStroke];
            CGContextBeginPath(ctx);
            CGContextMoveToPoint(ctx, previousPoint.x, previousPoint.y);
            CGContextAddLineToPoint(ctx, currentPoint.x, currentPoint.y);
            CGContextStrokePath(ctx);

            [self.undoManager registerUndoWithTarget:drawingImageView
                                            selector:@selector(setImage:)
                                              object:drawingImageView.image];

            drawingImageView.image = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();

            previousPoint = currentPoint;
        }

    Read the article

  • template engine implementation

    - by qwerty
    I am currently building this small template engine. It takes a string containing the template in parameter, and a dictionary of "tags,values" to fill in the template. In the engine, I have no idea of the tags that will be in the template and the ones that won't. I am currently iterating (foreach) on the dictionnary, parsing my string that I have put in a string builder, and replacing the tags in the template by their corresponding value. Is there a more efficient/convenient way of doing this? I know the main drawback here is that the stringbuilder is parsed everytime entirely for each tag, which is quite bad... (I am also checking, though not included in the sample, after the process that my template does not contain any tag anymore. They are all formated in the same way: @@tag@@) //Dictionary<string, string> tagsValueCorrespondence; //string template; StringBuilder outputBuilder = new StringBuilder(template); foreach (string tag in tagsValueCorrespondence.Keys) { outputBuilder.Replace(tag, tagsValueCorrespondence[tag]); } template = outputBuilder.ToString(); Responses: @Marc: string template = "Some @@foobar@@ text in a @@bar@@ template"; StringDictionary data = new StringDictionary(); data.Add("foo", "value1"); data.Add("bar", "value2"); data.Add("foo2bar", "value3"); Output: "Some text in a value2 template" instead of: "Some @@foobar@@ text in a value2 template"

    Read the article

  • Distance between long/lat coordinates using SQLite

    - by munchine
    I've got an SQLite DB with the longitude and latitude of shops, and I want to find the closest 5 shops. The following code works fine:

        if (sqlite3_prepare_v2(db, sqlStatement, -1, &compiledStatement, NULL) == SQLITE_OK) {
            while (sqlite3_step(compiledStatement) == SQLITE_ROW) {
                NSString *branchStr = [NSString stringWithUTF8String:
                    (char *)sqlite3_column_text(compiledStatement, 0)];
                NSNumber *fLat = [NSNumber numberWithFloat:
                    (float)sqlite3_column_double(compiledStatement, 1)];
                NSNumber *fLong = [NSNumber numberWithFloat:
                    (float)sqlite3_column_double(compiledStatement, 2)];

                NSLog(@"Address %@, Lat = %@, Long = %@", branchStr, fLat, fLong);

                CLLocation *location1 = [[CLLocation alloc]
                    initWithLatitude:currentLocation.coordinate.latitude
                           longitude:currentLocation.coordinate.longitude];
                CLLocation *location2 = [[CLLocation alloc]
                    initWithLatitude:[fLat floatValue]
                           longitude:[fLong floatValue]];

                NSLog(@"Distance in meters: %f", [location1 getDistanceFrom:location2]);

                [location1 release];
                [location2 release];
            }
        }

    So I know the distance from where I am to each shop. My question is: is it better to put the distance back into the SQLite row? I have the row as I step through the database. How do I do that? Do I use the UPDATE statement? Does someone have a piece of code to help me?

    Alternatively, I can read the rows into an array and then sort the array. Do you recommend this over the above approach? Is it more efficient? Finally, if someone has a better way to get the closest 5 shops, I'd love to hear it.
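
    If writing the distance back is the route taken, an UPDATE with bound parameters is the usual shape; a hedged sketch (table and column names are assumptions):

        const char *sql = "UPDATE shops SET distance = ? WHERE branch = ?;";
        sqlite3_stmt *update;
        if (sqlite3_prepare_v2(db, sql, -1, &update, NULL) == SQLITE_OK) {
            sqlite3_bind_double(update, 1, [location1 getDistanceFrom:location2]);
            sqlite3_bind_text(update, 2, [branchStr UTF8String], -1, SQLITE_TRANSIENT);
            sqlite3_step(update);
            sqlite3_finalize(update);
        }

    With the distances stored, "ORDER BY distance LIMIT 5" then yields the closest five shops directly.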

    Read the article

  • Windows Mobile phone for business application - or design way out of scrolling problem

    - by Peter
    We have a business application written for Windows Mobile. It has people doing inventories. The fastest way to do this is to keep one hand on the product being counted and the other on the phone. The five-way lets you hit Enter to accept the current inventory as correct (the norm), or move one left to remove one. This is very fast. If I am two low, I have a quick glance at the phone, move my eyes back to count the next row, and with my right hand hit "left, left, Enter" on the five-way. Very fast.

    Our problem is that with i-mate out of business, we cannot find a modern cell phone that has a decent-size screen and a five-way. Everyone is going to candy-bar designs and flick gestures. Those work great with two hands when you are looking at the phone, but they are not efficient for us.

    Other than the $1,500 ruggedized units, does anybody make a business-style Windows Mobile phone with a five-way? If not, any ideas on how to design the app to make it easy to enter data like this quickly without a five-way?

    Read the article

  • Versioning friendly, extendible binary file format

    - by Bas Bossink
    In the project I'm currently working on there is a need to save a sizable data structure to disk (edit: think dozens of MBs). Being an optimist, I thought that there must be a standard solution for such a problem; however, up to now I haven't found a solution that satisfies the following requirements:

    1. .NET 2.0 support, preferably with a FOSS implementation
    2. Version friendly (this should be interpreted as: reading an old version of the format should be relatively simple if the changes in the underlying data structure are simple, say adding/dropping fields)
    3. Ability to do some form of random access where part of the data can be extended after initial creation (think of this as extending intermediate results)
    4. Space and time efficient (XML has been excluded as an option given this requirement)

    Options considered so far:

    - Protocol Buffers: was turned down by verdict of the documentation about Large Data Sets - since this comment suggested adding another layer on top, this would call for additional complexity which I wish to have handled by the file format itself.
    - HDF5, EXI: do not seem to have .NET implementations.
    - SQLite / SQL Server Compact Edition: the data structure at hand would result in a pretty complex table structure that seems too heavyweight for the intended use.
    - BSON: does not appear to support requirement 3.
    - Fast Infoset: only seems to have paid .NET implementations.

    Any recommendations or pointers are greatly appreciated. Furthermore, if you believe any of the information above is not true, please provide pointers/examples to prove me wrong.

    Read the article

  • Excel 2003 VBA - Method to duplicate this code that selects and colors rows

    - by Justin
    So this is a fragment of a procedure that exports a dataset from Access to Excel:

        Dim rs As Recordset
        Dim intMaxCol As Integer
        Dim intMaxRow As Integer
        Dim objxls As Excel.Application
        Dim objWkb As Excel.Workbook
        Dim objSht As Excel.Worksheet

        Set rs = CurrentDb.OpenRecordset("qryOutput", dbOpenSnapshot)
        intMaxCol = rs.Fields.Count
        If rs.RecordCount > 0 Then
            rs.MoveLast: rs.MoveFirst
            intMaxRow = rs.RecordCount

            Set objxls = New Excel.Application
            objxls.Visible = True
            With objxls
                Set objWkb = .Workbooks.Add
                Set objSht = objWkb.Worksheets(1)
                With objSht
                    On Error Resume Next
                    .Range(.Cells(1, 1), .Cells(intMaxRow, intMaxCol)).CopyFromRecordset rs
                    .Name = conSHT_NAME
                    .Cells.WrapText = False
                    .Cells.EntireColumn.AutoFit
                    .Cells.RowHeight = 17
                    .Cells.Select
                    With Selection.Font
                        .Name = "Calibri"
                        .Size = 10
                    End With
                    .Rows("1:1").Select
                    With Selection
                        .Insert Shift:=xlDown
                    End With
                    .Rows("1:1").Interior.ColorIndex = 15
                    .Rows("1:1").RowHeight = 30
                    .Rows("2:2").Select
                    With Selection.Interior
                        .ColorIndex = 40
                        .Pattern = xlSolid
                    End With
                    .Rows("4:4").Select
                    With Selection.Interior
                        .ColorIndex = 40
                        .Pattern = xlSolid
                    End With
                    .Rows("6:6").Select
                    With Selection.Interior
                        .ColorIndex = 40
                        .Pattern = xlSolid
                    End With
                    .Rows("1:1").Select
                    With Selection.Borders(xlEdgeBottom)
                        .LineStyle = xlContinuous
                        .Weight = xlMedium
                        .ColorIndex = xlAutomatic
                    End With
                End With
            End With
        End If

        Set objSht = Nothing
        Set objWkb = Nothing
        Set objxls = Nothing
        Set rs = Nothing
        Set DB = Nothing
        End Sub

    See where I am coloring the rows? I want to select and fill (with any color) every other row, kind of like some Access reports. I can do it manually, coding each and every row, but there are two problems: 1) it's a pain, and 2) I don't know the record count beforehand. How can I make the code more efficient in this respect, while incorporating the record count so it knows how many rows to loop through?

    EDIT: Another question I have is about the selection methods I am using in the module. Is there a better Excel syntax instead of these With Selection blocks?

        .Cells.Select
        With Selection.Font
            .Name = "Calibri"
            .Size = 10
        End With

    This is the only way I've figured out how to accomplish this piece, but literally every other time I run this code it fails. It says there is no object and points to the .Font... every other time? Is this because the code is poor, or because I am not closing the Excel app in the code? If so, how do I do that? Thanks as always!
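
    On the banding: since intMaxRow already holds the record count, a hedged sketch of the loop (rows are offset by one because of the inserted header row):

        Dim lngRow As Long
        ' shade every other data row: 2, 4, 6, ... down to the last record
        For lngRow = 2 To intMaxRow + 1 Step 2
            With objSht.Rows(lngRow).Interior
                .ColorIndex = 40
                .Pattern = xlSolid
            End With
        Next lngRow

    On the intermittent failure: an unqualified Selection refers to whatever is selected in the active window, which is unreliable when the automated Excel instance is not in focus. Working against the range directly, with no Select at all, sidesteps it:

        With objSht.Cells.Font
            .Name = "Calibri"
            .Size = 10
        End With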

    Read the article
