Search Results

Search found 9017 results on 361 pages for 'efficient storage'.

Page 38/361 | < Previous Page | 34 35 36 37 38 39 40 41 42 43 44 45  | Next Page >

  • What is a better way to handle data retrieval in an application that works with a limited amount of data?

    - by Milanix
    Just moved this question from Stack Overflow, since adding my code snippets would make it really long. I am interested in better ways to handle data retrieval in an application that works with a limited amount of data which isn't updated regularly. Take this example: I am writing an application which gets a schedule as XML from a server. I have written logic to parse the XML version and update the database only if that version is newer than the local one. Although the update check runs automatically or manually on a daily basis, depending on user preference, an actual version update happens only once every few months or so, because the schedule is published by an authority that doesn't provide an API but announces changes publicly. The XML contains (n groups) x (days in a week) x (n schedules); the group count is usually 6 and the schedule count usually 2, so there would usually be only around 100 strings in total. I am using SQLite at the moment, and I want to know how to apply the update. Should I show a progress dialog saying the application is updating and exit the app when it's done? Since my updates are infrequent, I don't think this will really harm the user experience, but is there a better way to do it? I don't want the update to run while the user is searching, which also uses the database; that causes a "database already open" exception, or at least it has for me before. Is it better to parse the XML every time the user wants to view certain things, or to use SQLite? And since I make heavy use of adapters in my app to create lists, will that degrade performance?
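
    One common way to avoid the "database already open" collision is to route all access through a single connection and apply the whole refresh as one short transaction, so readers see either the old schedule or the new one, never a half-applied update. The question is Android/Java, but the shape of the fix is the same everywhere; a minimal sketch in Python's sqlite3, with hypothetical table and function names:

        import sqlite3

        def apply_update(db_path, new_version, rows):
            """Replace the schedule atomically; rows is a list of (group, day, entry)."""
            con = sqlite3.connect(db_path)
            try:
                with con:  # one transaction: commits on success, rolls back on error
                    (local_version,) = con.execute("SELECT version FROM meta").fetchone()
                    if new_version <= local_version:
                        return False  # nothing to do
                    con.execute("DELETE FROM schedule")
                    con.executemany(
                        "INSERT INTO schedule(grp, day, entry) VALUES (?, ?, ?)", rows)
                    con.execute("UPDATE meta SET version = ?", (new_version,))
                return True
            finally:
                con.close()

    With ~100 rows the transaction takes milliseconds, so there is no need for a blocking progress dialog or an app restart.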

    Read the article

  • Why is Web SQL database deprecated?

    - by user221287
    I am making a hybrid Android app. At first I decided to use localStorage; after spending two days on it, I found it very strange and dropped it. Then I picked up IndexedDB; after spending today's whole day on it and actually getting output in Google Chrome, it would not run inside the WebView of the Android app. And I never used the Web SQL Database at all, because it was deprecated. Anyhow, it has come to my notice that PhoneGap still uses Web SQL and Android's browsers support it. Why was Web SQL deprecated in the first place? And would it be a good idea for me to go with Web SQL now?

    Read the article

  • Strategy to store/average logs of pings

    - by José Tomás Tocino
    I'm developing a site to monitor web services. The most basic type of check sends a ping and stores the response time in a CheckLog object. By default, PingCheck objects are triggered every minute, so in one hour you get 60 CheckLogs and in one day you get 1440. That's a lot of them, and I don't need to store that level of detail, so I've set up a collapsing mechanism that periodically takes the uncollapsed CheckLogs older than 24 hours and collapses (averages) them into intervals of 30 minutes. So if you have 360 CheckLogs saved from 0:00 to 6:00, after collapsing you retain just 12 of them. The problem is this: after averaging the response times, the graph changes drastically. What can I do to improve this? I guess one option could be narrowing the interval duration to 15 minutes. I've seen the graphs on the GitHub status page and they do not seem to suffer from this problem. I'd appreciate any kind of information you could give me about this area.
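
    One detail that changes how a collapsed graph looks is which statistics are kept per bucket: storing min and max alongside the mean preserves the spikes that a plain average flattens out. A minimal bucketing sketch in Python, with hypothetical field names:

        from collections import defaultdict

        BUCKET = 30 * 60  # 30-minute buckets, in seconds

        def collapse(logs):
            """logs: iterable of (unix_timestamp, response_ms) older than 24h."""
            buckets = defaultdict(list)
            for ts, ms in logs:
                buckets[ts - ts % BUCKET].append(ms)  # floor to bucket start
            # keep mean, min and max so spikes survive the averaging
            return [(start, sum(v) / len(v), min(v), max(v))
                    for start, v in sorted(buckets.items())]

    Plotting the mean as a line with a min/max band is one way status pages keep collapsed graphs from looking drastically different from the raw data.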

    Read the article

  • How should a team share/store game content during development?

    - by irwinb
    Other than Dropbox, what has been especially useful for storing and sharing game content such as images during development (with a similar feature set to Dropbox: working offline, automatic syncing, and Windows/OS X support)? We are looking into hosting our own SharePoint server, but it seems to be really focused on documents... Maybe Box.net would work? EDIT: For code, we are using Git. To be more precise, I was looking for an easy, automatic way for content produced by artists/audio engineers to be available to everyone. Features like asset approval don't hurt either. Following the answer linked by Tetrad, Alienbrain looked pretty interesting, but it is way out of our budget (maybe something to invest in in the future). What we ended up doing: we were going to go with Box.net, but downloading the sync apps for desktop use required us to wait to be contacted by them for some reason. We did not have much time to wait, so we ended up going with Dropbox Teams. Box.net has a nice feature set, but we never really felt held back without it. Thanks for the help :).

    Read the article

  • Storing Projects on Google Drive (Cloud)

    - by JamesKraw
    I've started using Google Drive for my cloud needs and for backing up pretty much everything, and I've got the app installed so it auto-syncs all my content. My question is this: I am currently coding for iOS (although this applies to any coding project) and am split on storing my project files on Google Drive while sync is on. My theory is that if I did use it, I'd never have to worry about system crashes or code lost between backups; but if I do use it, it will be syncing a lot, and I suspect there might be problems with it detecting changes and trying to sync, for example, halfway through a compile. Bandwidth isn't an issue, as I have a fast connection and an unlimited monthly allowance. Has anyone ever used this, or similar cloud-based syncing (Dropbox etc.), for this purpose, and do you know whether it works, or whether there are any potential problems?

    Read the article

  • Top Exastack Partner Headlines - 7 Nov

    - by Roxana Babiciu
    R-Style Softlab recommends RS-solutions with Exadata and SuperCluster to improve business performance for their customers. Read more. WingArc, a subsidiary of 1st Holdings, Inc., finds that Oracle Exadata's substantial computing power enables their enterprise output management solution (SVF/RD), helping to provide an integrated output platform for high-volume businesses. Read more.

    Read the article

  • Oracle Virtual Compute Appliance Launch Channel Update Webcast - May 28

    - by Roxana Babiciu
    Join us for an Oracle Virtual Compute Appliance (OVCA) launch update for the channel. This training webcast is a follow-up to the OVCA launch on April 16. We will provide a brief product overview of OVCA, followed by OPN program content, resell criteria, OPN Incentive Program and Demo Equipment Program details. There will be two sessions to accommodate each region. Additionally, don't miss the latest Oracle Virtual Compute Appliance article, packed with great information!

    Read the article

  • Why use binary files to stack up different versions on DMSs?

    - by edgarator
    I've used both Liferay and Alfresco, trying to use them as the document management system for an intranet. I noticed the following: they use both the file system and the database to store files; they use a GUID to name the file on the file system, and that GUID is used as an id in the database; the GUID-named file is a binary file; the GUID-named binary file stores all versions of a given file; the path of the file in the DMS doesn't match the one in the file system; and the URL references the GUID when a certain file is requested. What I want to know is why this is, and what would be the best way of doing it. For example, how would you create the binary file (zip?), and which parts would you keep in the binary file versus in the database (metadata, path?). I'm assuming some of the benefits of doing it like this: having the same URL for a file regardless of its current document path, and having only one file even if the file has changed names over time.
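
    The core of the pattern is that the database holds the logical path and metadata and maps them to a GUID, while the bytes live on disk under that GUID; renames and moves then only touch a database row, and the URL stays stable. A minimal sketch of that idea in Python with hypothetical table and directory names (not how Liferay or Alfresco actually implement it):

        import os
        import sqlite3
        import uuid

        STORE = "filestore"  # hypothetical blob directory

        def save_document(con, logical_path, data):
            """Write bytes under a GUID; keep path + metadata in the database."""
            guid = uuid.uuid4().hex
            os.makedirs(STORE, exist_ok=True)
            with open(os.path.join(STORE, guid), "wb") as f:
                f.write(data)
            con.execute("INSERT INTO documents(guid, path, size) VALUES (?, ?, ?)",
                        (guid, logical_path, len(data)))
            return guid  # URLs can reference this, independent of the path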

    Read the article

  • Setting up an efficient and effective development process

    - by christopher-mccann
    I am in the midst of setting up the development environment (PHP/MySQL) for my start-up. We use three sets of servers: LIVE (the servers that provide the actual application), TEST (a testing version before actual release) and DEV (the development servers). The development servers run SVN, with each developer checking out a local copy. At the end of each day, completed fixes are checked in; we then use Hudson to automate our build process and transfer the build to TEST. A tester checks that the application still functions correctly, and if everything is fine we move it to LIVE. I am happy with this process, but I do have a few questions: How would you recommend we do local testing? As each developer adds new pages or changes functionality, I want them to be able to test what they are doing; would you just set up a local Apache and a local database and have them test on their own machines? How would you recommend dealing with data layer changes? Is there anything else you would recommend doing to make our development process as easy and efficient as possible? Thanks in advance.

    Read the article

  • Efficient Multiple Linear Regression in C# / .Net

    - by mrnye
    Does anyone know of an efficient way to do multiple linear regression in C#, where the number of simultaneous equations may be in the 1000s (with 3 or 4 different inputs)? After reading this article on multiple linear regression I tried implementing it with a matrix equation:

        Matrix y = new Matrix(
            new double[,]{{745}, {895}, {442}, {440}, {1598}});
        Matrix x = new Matrix(
            new double[,]{{1, 36, 66}, {1, 37, 68}, {1, 47, 64},
                          {1, 32, 53}, {1, 1, 101}});
        Matrix b = (x.Transpose() * x).Inverse() * x.Transpose() * y;
        for (int i = 0; i < b.Rows; i++)
        {
            Trace.WriteLine("INFO: " + b[i, 0].ToDouble());
        }

    However, it does not scale well to thousands of equations, due to the matrix inversion operation. I can call the R language and use that, but I was hoping there would be a pure .NET solution that scales to these large sets. Any suggestions?

    EDIT #1: I have settled on using R for the time being. Using statconn (downloaded here), I have found this method both fast and relatively easy to use. Here is a small code snippet; it really isn't much code at all to use the R statconn library (note: this is not all the code!):

        _StatConn.EvaluateNoReturn(string.Format("output <- lm({0})", equation));
        object intercept = _StatConn.Evaluate("coefficients(output)['(Intercept)']");
        parameters[0] = (double)intercept;
        for (int i = 0; i < xColCount; i++)
        {
            object parameter = _StatConn.Evaluate(
                string.Format("coefficients(output)['x{0}']", i));
            parameters[i + 1] = (double)parameter;
        }
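
    For what it's worth, the least-squares problem itself scales fine at this size if the explicit inverse is replaced by a solver based on an orthogonal factorization: with only 3 or 4 inputs, the factored matrix stays tiny no matter how many equations there are. The question asks for .NET, but as a sketch of the numeric approach, here it is in Python with NumPy:

        import numpy as np

        # 5,000 equations, 3 inputs plus an intercept column of ones
        rng = np.random.default_rng(0)
        X = np.column_stack([np.ones(5000), rng.random((5000, 3))])
        y = X @ np.array([2.0, 1.5, -3.0, 0.5]) + rng.normal(0, 0.1, 5000)

        # lstsq uses an orthogonal factorization instead of forming inv(X'X)
        coeffs, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
        print(coeffs)  # close to [2.0, 1.5, -3.0, 0.5]

    Any .NET linear algebra library exposing a least-squares or QR solve should behave the same way; the thing to avoid is the (X'X)^-1 formulation, which is both slower and numerically worse.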

    Read the article

  • Cache-efficient code

    - by goldenmean
    This could sound like a subjective question, but what I am looking for are specific instances you have encountered related to this. 1) How do you make code cache-effective/cache-friendly (more cache hits, as few cache misses as possible)? From both perspectives: the data cache and the program cache (instruction cache). That is, what things in one's code, related to data structures and code constructs, should one take care of to make it cache-effective? 2) Are there any particular data structures one must use or avoid, or particular ways of accessing the members of a structure, etc., to make code cache-effective? 3) Are there any program constructs (if, for, switch, break, goto, ...) or code flows (a for inside an if, an if inside a for, etc.) one should follow or avoid in this matter? I am looking forward to hearing individual experiences related to writing cache-efficient code in general. It can be any programming language (C, C++, assembly, ...), any hardware target (ARM, Intel, PowerPC, ...), any OS (Windows, Linux, Symbian, ...), etc. More variety will help to understand it deeply.
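
    The classic data-cache example is traversing a 2D array in the order it is laid out in memory: row-major traversal touches consecutive addresses and reuses cache lines that are already loaded, while column-major traversal jumps a whole row per step and pays a miss per element. The effect is clearest in C, but it is visible even from Python with NumPy arrays (which are row-major by default); a small illustration, timings machine-dependent:

        import time
        import numpy as np

        a = np.zeros((4096, 4096))

        def walk_rows(m):    # consecutive addresses: cache-friendly
            s = 0.0
            for row in m:
                s += row.sum()
            return s

        def walk_cols(m):    # strided accesses: roughly one cache line per element
            s = 0.0
            for j in range(m.shape[1]):
                s += m[:, j].sum()
            return s

        for f in (walk_rows, walk_cols):
            t = time.perf_counter()
            f(a)
            print(f.__name__, time.perf_counter() - t)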

    Read the article

  • Efficient way to handle multiple HTMLPurifier configs.

    - by anomareh
    I'm using HTMLPurifier in a current project and I'm not sure about the most efficient way to handle multiple configs. For the most part, the only major thing that changes is the allowed tags. Currently, each class using HTMLPurifier has a private method that gets called when a config is needed and creates one from the default. I'm not really happy with this: if two classes use the same config, that's duplicated code, and what if a class needs two configs? It just feels messy. The one upside is that it's lazy, in the sense that nothing is created until it's needed; the various classes could be used and never have to purify anything, so I don't want to create a bunch of config objects that might never be used. I just recently found out that you can create a new instance of HTMLPurifier and directly set config options, like so:

        $purifier = new HTMLPurifier();
        $purifier->config->set('HTML.Allowed', '');

    I'm not sure whether that's bad form, and if it isn't, I'm not really sure of a good way to put it to use. My latest idea was to create a config handler class that would just return an HTMLPurifier config for use in subsequent purify calls. At some point it could be expanded to allow setting configs, but to start they'd just be hardcoded, with a method parameter run through a switch to grab the requested config. Perhaps I could pass the stored purifier instance as an argument and have the method set the config on it directly, as shown above? This seems the best of the few ways I've thought of, though I'm not sure if such a task warrants creating a class to handle it, and if so, whether I'm handling it in the best way. I'm not too familiar with the inner workings of HTMLPurifier, so I'm not sure if there are better mechanisms in place for handling multiple configs. Thanks for any insight anyone can offer.

    Read the article

  • Efficient algorithm to find first available name

    - by Avrahamshuk
    I have an array containing the names of items. I want to give the user the option to create items without specifying a name, so my program has to supply a unique default name, like "Item 1". The challenge is that the name has to be unique, so I have to check the whole array for that default name, and if there is an item with the same name I have to change the name to "Item 2", and so on, until I find an available name. The obvious solution would be something like this:

        String name;
        for (int i = 1; !isAvailable(name = "Item " + i); i++);

    My problem with that algorithm is that it runs in O(N^2). I wonder if there is a known (or new) more efficient algorithm for this case. In other words, my question is this: is there any algorithm that finds the first greater-than-zero number that does NOT exist in a given array, and runs in less than O(N^2)? Thanks!
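
    Yes: collect the used numbers into a hash set in one pass, then probe 1, 2, 3, ... until a number is missing. Each membership test is O(1) on average, and since n names can only occupy numbers up to n, the probe loop ends within n+1 steps, giving O(n) overall. A sketch in Python, using the "Item N" convention from the question:

        def first_free_name(names, prefix="Item "):
            """Return the first 'Item N' (N >= 1) not present in names. O(n)."""
            used = set()
            for name in names:
                if name.startswith(prefix):
                    suffix = name[len(prefix):]
                    if suffix.isdigit():
                        used.add(int(suffix))
            n = 1
            while n in used:
                n += 1
            return prefix + str(n)

        print(first_free_name(["Item 1", "Item 2", "foo", "Item 4"]))  # Item 3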

    Read the article

  • Is there a more efficient way to use a table variable instead of a temporary table?

    - by varta shrimali
    We are writing a script to display banners on a web page, using a temporary table in a MySQL procedure. Is there a more efficient alternative, such as a table variable, instead of the temporary table? We are using the following code:

        -- banner location CURSOR
        DECLARE banner_location_cursor CURSOR FOR
            select bm.id as masterId, bm.section as masterName,
                   bs.id as locationId, bs.sectionName as locationName
            from banner_master as bm
            inner join banner_section as bs on bm.id = bs.masterId
            where bm.section = sCode;

        -- DECLARE banner CURSOR
        DECLARE banner_cursor CURSOR FOR
            SELECT bd.id as bannerId, bd.sectionId, bd.bannerName, bd.websiteURL,
                   bd.paymentType, bd.status, bd.startDate, bd.endDate,
                   bd.bannerDisplayed, bs.id, bs.sectionName
            from banner_detail as bd
            inner join banner_section as bs on bs.id = bd.sectionId
            where bs.id = location_id and bd.status = 'A'
              and (dates between cast(bd.startDate as DATE) and cast(bd.endDate as DATE))
            order by rand(), bd.bannerDisplayed asc
            limit 1;

        DECLARE CONTINUE HANDLER FOR NOT FOUND SET no_more_rows = 1;

        SET dates = (select curdate());

        -- RESULTS TABLE WHICH WILL BE RETURNED
        CREATE temporary TABLE test (
            b_id INT, s_id INT, b_name varchar(128), w_url varchar(128),
            p_type varchar(128), st char(1), s_date datetime, e_date datetime,
            b_display int, sec_id int, s_name varchar(128)
        );

        -- OPEN banner location CURSOR
        OPEN banner_location_cursor;
        the_loop: LOOP
            FETCH banner_location_cursor
                INTO master_id, master_name, location_id, location_name;
            IF no_more_rows THEN
                CLOSE banner_location_cursor;
                LEAVE the_loop;
            END IF;

            OPEN banner_cursor;
            -- select FOUND_ROWS();
            the_loop2: LOOP
                FETCH banner_cursor
                    INTO banner_id, section_id, banner_name, website_url, payment,
                         status, start_date, end_date, banner_displayed, sec_id,
                         section_name;
                IF no_more_rows THEN
                    SET no_more_rows = 0;
                    CLOSE banner_cursor;
                    LEAVE the_loop2;
                END IF;

                INSERT INTO test (
                    b_id, s_id, b_name, w_url, p_type, st, s_date, e_date,
                    b_display, sec_id, s_name
                ) VALUES (
                    banner_id, section_id, banner_name, website_url, payment,
                    status, start_date, end_date, banner_displayed, sec_id,
                    section_name
                );

                UPDATE banner_detail
                SET bannerDisplayed = (banner_displayed + 1)
                WHERE id = banner_id;
            END LOOP the_loop2;
        END LOOP the_loop;

        -- RETURN result
        SELECT * FROM test;

        -- DROP RESULTS TABLE
        DROP TABLE test;
        END

    Read the article

  • More efficient approach to XSLT for-each

    - by Paul
    I have an XSLT which takes a period-delimited string and splits it into two fields for a SQL statement:

        <xsl:for-each select="tokenize(Path,'\.')">
          <xsl:choose>
            <xsl:when test="position() = 1 and position() = last()">SITE = '<xsl:value-of select="."/>' AND PATH = ''</xsl:when>
            <xsl:when test="position() = 1 and position() != last()">SITE = '<xsl:value-of select="."/>' </xsl:when>
            <xsl:when test="position() = 2 and position() = last()">AND PATH = '<xsl:value-of select="."/>' </xsl:when>
            <xsl:when test="position() = 2">AND PATH = '<xsl:value-of select="."/></xsl:when>
            <xsl:when test="position() > 2 and position() != last()">.<xsl:value-of select="."/></xsl:when>
            <xsl:when test="position() > 2 and position() = last()">.<xsl:value-of select="."/>' </xsl:when>
            <xsl:otherwise>zxyarglfaux</xsl:otherwise>
          </xsl:choose>
        </xsl:for-each>

    The results are as follows:

        INPUT: North          OUTPUT: SITE = 'North' AND PATH = ''
        INPUT: North.A        OUTPUT: SITE = 'North' AND PATH = 'A'
        INPUT: North.A.B      OUTPUT: SITE = 'North' AND PATH = 'A.B'
        INPUT: North.A.B.C    OUTPUT: SITE = 'North' AND PATH = 'A.B.C'

    This works, but is very lengthy. Can anyone see a more efficient approach? Thanks!
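
    The pattern the loop implements is just "everything before the first dot is SITE, everything after it is PATH", which suggests splitting on the first separator instead of iterating over tokens. A sketch of that logic in Python (str.partition splits on the first occurrence only); in XSLT itself, substring-before/substring-after could do the same, with a check for the no-dot case:

        def site_and_path(s):
            site, _, path = s.partition(".")
            return "SITE = '%s' AND PATH = '%s'" % (site, path)

        print(site_and_path("North"))        # SITE = 'North' AND PATH = ''
        print(site_and_path("North.A.B.C"))  # SITE = 'North' AND PATH = 'A.B.C'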

    Read the article

  • How to implement a really efficient bit-vector sort in Python

    - by xiao
    Hello guys! This is actually an interesting topic from Programming Pearls: sorting 10-digit telephone numbers in limited memory with an efficient algorithm. You can find the whole story here. What I am interested in is just how fast the implementation can be in Python. I have done a naive implementation with the BitVector module. The code is as follows:

        from BitVector import BitVector
        import random
        import sys
        import time

        def sort(input_li):
            return sorted(input_li)

        def vec_sort(input_li):
            bv = BitVector(size=len(input_li))
            for i in input_li:
                bv[i] = 1
            res_li = []
            for i in range(len(bv)):
                if bv[i]:
                    res_li.append(i)
            return res_li

        if __name__ == "__main__":
            test_data = range(int(sys.argv[1]))
            print 'test_data size is:', sys.argv[1]
            random.shuffle(test_data)

            start = time.time()
            sort(test_data)
            elapsed = time.time() - start
            print "sort function takes " + str(elapsed)

            start = time.time()
            vec_sort(test_data)
            elapsed = time.time() - start
            print "vec_sort function takes " + str(elapsed)

    I have tested array sizes from 100 to 10,000,000 on my MacBook (2 GHz Intel Core 2 Duo, 2 GB SDRAM); the results are as follows:

        test_data size is: 1000
        sort function takes 0.000274896621704
        vec_sort function takes 0.00383687019348

        test_data size is: 10000
        sort function takes 0.00380706787109
        vec_sort function takes 0.0371489524841

        test_data size is: 100000
        sort function takes 0.0520560741425
        vec_sort function takes 0.374383926392

        test_data size is: 1000000
        sort function takes 0.867373943329
        vec_sort function takes 3.80475401878

        test_data size is: 10000000
        sort function takes 12.9204008579
        vec_sort function takes 38.8053860664

    What disappoints me is that even when the test_data size is 10,000,000, the sort function is still faster than vec_sort. Is there any way to accelerate the vec_sort function?
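
    Most of the cost here is likely the per-element Python method-call overhead of BitVector's indexing, not the bit manipulation itself; the built-in sort runs entirely in C, so the pure-Python loop loses. One way to cut the overhead is to use a plain bytearray as the presence table, so the inner loop does only native indexing. A sketch, assuming the same inputs as in the question (a shuffled permutation of 0..n-1):

        def vec_sort_bytes(input_li):
            # one byte per possible value: wasteful in space but fast in Python
            seen = bytearray(len(input_li))
            for i in input_li:
                seen[i] = 1
            return [i for i, hit in enumerate(seen) if hit]

    This keeps the O(n) character of the bit-vector idea while avoiding a Python-level method call per bit; a real bit-per-value table would need C-speed bit operations (e.g. via array slicing or a C extension) to compete with sorted().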

    Read the article

  • 'Fixed' for loop - what is more efficient?

    - by pimvdb
    I'm creating a tic-tac-toe game, and one of the functions has to iterate through each of the 9 fields (tic-tac-toe is played on a 3x3 grid). I was wondering which is more efficient (which is perhaps faster, or what is the preferred way of scripting in such a situation): using two nested for loops, like this:

        for (var i = 0; i < 3; i++) {
            for (var j = 0; j < 3; j++) {
                checkField(i, j);
            }
        }

    or hard-coding it, like this:

        checkField(0, 0);
        checkField(0, 1);
        checkField(0, 2);
        checkField(1, 0);
        checkField(1, 1);
        checkField(1, 2);
        checkField(2, 0);
        checkField(2, 1);
        checkField(2, 2);

    As there are only 9 combinations, it would perhaps be overkill to use two nested for loops, but then again, that version is clearer to read. The for loop, however, will increment variables and check whether i and j are smaller than 3 every time as well. In this example the time saving might be negligible, but what is the preferred way of coding in this case? Thanks.

    Read the article

  • Most efficient way to update attribute of one instance

    - by Begbie00
    Hi all - I'm creating an arbitrary number of instances (using for loops and ranges). At some event in the future, I need to change an attribute for only one of the instances. What's the best way to do this? Right now, I'm doing the following: 1) Manage the instances in a list. 2) Iterate through the list to find a key value. 3) Once I find the right object within the list (i.e. key value = value I'm looking for), change whatever attribute I need to change. for Instance within ListofInstances: if Instance.KeyValue == SearchValue: Instance.AttributeToChange = 10 This feels really inefficient: I'm basically iterating over the entire list of instances, even through I only need to change an attribute in one of them. Should I be storing the Instance references in a structure more suitable for random access (e.g. dictionary with KeyValue as the dictionary key?) Is a dictionary any more efficient in this case? Should I be using something else? Thanks, Mike

    Read the article

  • More efficient comparison of numbers

    - by Pez Cuckow
    I have an array which is part of a small JS game I am working on. I need to check (as often as is reasonable) that none of the elements in the array have left the "stage" or "playground", so I can remove them and save the script load. I have written the code below, which runs every 50 ms (it deals with the movement), and was wondering whether anyone knows a faster/more efficient way to do this. Here bots[i][1] is the movement in X and bots[i][2] is the movement in Y (mutually exclusive):

        for (var i in bots) {
            var left = parseInt($("#" + i).css("left"));
            var top = parseInt($("#" + i).css("top"));
            var nextleft = left + bots[i][1];
            var nexttop = top + bots[i][2];
            if (bots[i][1] > 0 && nextleft >= PLAYGROUND_WIDTH) {
                remove_bot(i);
            } else if (bots[i][1] < 0 && nextleft <= -GRID_SIZE) {
                remove_bot(i);
            } else if (bots[i][2] > 0 && nexttop >= PLAYGROUND_HEIGHT) {
                remove_bot(i);
            } else if (bots[i][2] < 0 && nexttop <= -GRID_SIZE) {
                remove_bot(i);
            } else {
                //alert(nextleft + ":" + nexttop);
                $("#" + i).css("left", nextleft + "px");
                $("#" + i).css("top", nexttop + "px");
            }
        }

    On a similar note, is the remove_bot(i) function below correct? (I can't splice, as that changes the IDs of the elements in the array.)

        function remove_bot(i) {
            $("#" + i).remove();
            bots[i] = false;
        }

    Many thanks for any advice given!

    Read the article

  • Efficient way to store order in MySQL for a list of items

    - by ninumedia
    I want to write cleaner and more efficient code, and I'd like suggestions for the following problem. I have a MySQL database that holds data about a set of photographs — say, 100 photograph names.

    Table 1 (photos) has the fields photo_id and photo_name:

        1 | sunshine.jpg
        2 | cloudy.jpg
        3 | rainy.jpg
        4 | hazy.jpg
        ...

    Table 2 (categories) has the fields category_id, category_name and category_order:

        1 | Summer Shots | 1,2,4
        2 | Winter Shots | 2,3
        3 | All Seasons  | 1,2,3,4
        ...

    Is it efficient to store the order of the photos as comma-delimited values like this? It's one approach I have seen used before, but I wanted to know whether something else is faster at run time. Using this scheme, I don't think it is possible to do a direct INNER JOIN between the category table and the photo table to get a single matched list of all the photographs per category — e.g. Summer Shots = sunshine.jpg, cloudy.jpg, hazy.jpg, matched against 1,2,4. Iterating through all the categories and then the photos would be O(n^2), and there has to be a better/faster way. Please educate me :)
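
    The usual relational answer is a junction table with an explicit position column, one row per (category, photo) pair; the ordering then lives in data the database can join and sort on. A sketch using SQLite from Python (the idea is the same in MySQL; table and column names are illustrative):

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE photos (photo_id INTEGER PRIMARY KEY, photo_name TEXT);
            CREATE TABLE categories (category_id INTEGER PRIMARY KEY, category_name TEXT);
            CREATE TABLE category_photos (      -- junction table: one row per pairing
                category_id INTEGER,
                photo_id INTEGER,
                position INTEGER,               -- explicit sort order per category
                PRIMARY KEY (category_id, photo_id)
            );
        """)
        con.executemany("INSERT INTO photos VALUES (?, ?)",
                        [(1, "sunshine.jpg"), (2, "cloudy.jpg"), (4, "hazy.jpg")])
        con.execute("INSERT INTO categories VALUES (1, 'Summer Shots')")
        con.executemany("INSERT INTO category_photos VALUES (?, ?, ?)",
                        [(1, 1, 1), (1, 2, 2), (1, 4, 3)])

        # one JOIN replaces the O(n^2) iteration
        for row in con.execute("""
                SELECT p.photo_name FROM category_photos cp
                JOIN photos p ON p.photo_id = cp.photo_id
                WHERE cp.category_id = 1 ORDER BY cp.position"""):
            print(row[0])  # sunshine.jpg, cloudy.jpg, hazy.jpg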

    Read the article

  • How would you code an efficient Circular Buffer in Java or C#

    - by Cheeso
    I want a simple class that implements a fixed-size circular buffer. It should be efficient, easy on the eyes, and generically typed. EDIT: It need not be MT-capable for now; I can always add a lock later, and it won't be high-concurrency in any case. Methods should be .Add and, I guess, .List, where I retrieve all the entries. On second thought, I think retrieval should be done via an indexer: at any moment I will want to be able to retrieve any element in the buffer by index. But keep in mind that from one moment to the next, Element[n] may be different, as the circular buffer fills up and rolls over. This isn't a stack, it's a circular buffer. Regarding "overflow": I would expect that internally there would be an array holding the items, and over time the head and tail of the buffer will rotate around that fixed array, but that should be invisible to the user. There should be no externally detectable "overflow" event or behavior. This is not a school assignment; it will most commonly be used for an MRU cache or a fixed-size transaction or event log.
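
    The question asks for Java or C#, but the shape of the answer is language-independent: a fixed array, a head index, a count, and an indexer that maps a logical position to a physical slot modulo the capacity. A compact sketch of that structure in Python:

        class CircularBuffer:
            """Fixed-size ring buffer: oldest entries are overwritten silently."""

            def __init__(self, capacity):
                self._items = [None] * capacity
                self._head = 0   # physical index of the oldest element
                self._count = 0

            def add(self, item):
                tail = (self._head + self._count) % len(self._items)
                self._items[tail] = item
                if self._count < len(self._items):
                    self._count += 1
                else:            # buffer full: overwrite and advance the head
                    self._head = (self._head + 1) % len(self._items)

            def __getitem__(self, i):   # buf[0] is the oldest surviving element
                if not 0 <= i < self._count:
                    raise IndexError(i)
                return self._items[(self._head + i) % len(self._items)]

            def __len__(self):
                return self._count

        buf = CircularBuffer(3)
        for n in range(5):
            buf.add(n)
        print([buf[i] for i in range(len(buf))])  # [2, 3, 4]

    In Java or C# the same layout becomes a generic class (CircularBuffer&lt;T&gt; with a T[] backing array and an indexer/this[int] accessor); no overflow event is ever raised because add simply advances the head.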

    Read the article

  • Most efficient way to bind a Listbox with SelectionMode=Multiple

    - by Draak
    Hi, I have an ASP.NET webform that has a listbox (lbxRegions) with the multi-select option enabled. In my db, I have a table with an xml field that holds a list of regions. I need to populate the listbox with all available regions and then "check off" the list items that match the regions in the db table. The list options also need to be ordered by region name. So, I wrote the following code that works just fine -- no problems. But I was wondering if anyone can think of a better (more succinct, more efficient) way to have done the same thing. Thanks in advance.

        Dim allRegions = XElement.Load(Server.MapPath(Request.ApplicationPath) & "\Regions.xml").<country>.<regions>.<region>
        Dim selectedRegions = (From ev In dc.Events Where ev.EventId = 2951).Single.CEURegions.<country>.<regions>.<region>
        Dim unselectedRegions = allRegions.Except(selectedRegions)

        Dim selectedItems = From x In selectedRegions Select New ListItem() _
            With {.Value = x.@code, .Text = x.Value, .Selected = True}
        Dim unselectedItems = From x In unselectedRegions Select New ListItem() _
            With {.Value = x.@code, .Text = x.Value}

        Dim allItems = selectedItems.Union(unselectedItems).OrderBy(Function(x) x.Text)
        lbxRegions.Items.AddRange(allItems.ToArray())

    P.S. You can post code in C# if you like.

    Read the article

  • Efficient way to maintain a sorted list of access counts in Python

    - by David
    Let's say I have a list of objects. (All together now: "I have a list of objects.") In the web application I'm writing, each time a request comes in, I pick out up to one of these objects according to unspecified criteria and use it to handle the request. Basically like this: def handle_request(req): for h in handlers: if h.handles(req): return h return None Assuming the order of the objects in the list is unimportant, I can cut down on unnecessary iterations by keeping the list sorted such that the most frequently used (or perhaps most recently used) objects are at the front. I know this isn't something to be concerned about - it'll make only a miniscule, undetectable difference in the app's execution time - but debugging the rest of the code is driving me crazy and I need a distraction :) so I'm asking out of curiosity: what is the most efficient way to maintain the list in sorted order, descending, by the number of times each handler is chosen? The obvious solution is to make handlers a list of (count, handler) pairs, and each time a handler is chosen, increment the count and resort the list. def handle_request(req): for h in handlers[:]: if h[1].handles(req): h[0] += 1 handlers.sort(reverse=True) return h[1] return None But since there's only ever going to be at most one element out of order, and I know which one it is, it seems like some sort of optimization should be possible. Is there something in the standard library, perhaps, that is especially well-suited to this task? Or some other data structure? (Even if it's not implemented in Python) Or should/could I be doing something completely different?

    Read the article

  • Efficient database access when dealing with multiple abstracted repositories

    - by Nathan Ridley
    I want to know how most people deal with the repository pattern when it involves hitting the same database multiple times (sometimes transactionally), and how to do so efficiently while maintaining database agnosticism and using multiple repositories together. Let's say we have repositories for three different entities: Widget, Thing and Whatsit. Each repository is abstracted via a base interface, as per normal decoupling practice, so the base interfaces are IWidgetRepository, IThingRepository and IWhatsitRepository. Now we have our business layer, or equivalent (whatever you want to call it). In this layer we have classes that access the various repositories. Often the methods in these classes need to do batch/combined operations involving multiple repositories, and sometimes one method may use another method internally, while that method can still be called independently. What about when the operation needs to be transactional? Example:

        class Bob
        {
            private IWidgetRepository _widgetRepo;
            private IThingRepository _thingRepo;
            private IWhatsitRepository _whatsitRepo;

            public Bob(IWidgetRepository widgetRepo, IThingRepository thingRepo,
                       IWhatsitRepository whatsitRepo)
            {
                _widgetRepo = widgetRepo;
                _thingRepo = thingRepo;
                _whatsitRepo = whatsitRepo;
            }

            public void DoStuff()
            {
                _widgetRepo.StoreSomeStuff();
                _thingRepo.ReadSomeStuff();
                _whatsitRepo.SaveSomething();
            }

            public void DoOtherThing()
            {
                _widgetRepo.UpdateSomething();
                DoStuff();
            }
        }

    How do I keep my access to the database efficient, without a constant stream of open-close-open-close on connections and inadvertent invocation of MSDTC and whatnot? If my database is something like SQLite, standard mechanisms like creating nested transactions will inherently fail, yet the business layer should not have to concern itself with such things. How do you handle these issues? Does ADO.NET provide simple mechanisms to handle this, or do most people end up wrapping their own custom bits of code around ADO.NET to solve these types of problems?
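
    A common answer is a unit-of-work object that owns one connection and one transaction, with the repositories borrowing it rather than opening their own; the business layer opens the unit of work, calls any mix of repository methods, and commits once. A language-agnostic sketch of that shape in Python with sqlite3 (names are illustrative, not any particular framework's API):

        import sqlite3

        class UnitOfWork:
            """Owns the connection/transaction that repositories share."""
            def __init__(self, db_path):
                self.con = sqlite3.connect(db_path)

            def __enter__(self):
                return self

            def __exit__(self, exc_type, exc, tb):
                if exc_type is None:
                    self.con.commit()
                else:
                    self.con.rollback()
                self.con.close()

        class WidgetRepository:
            def __init__(self, uow):
                self.con = uow.con  # borrowed; never opened or closed here

            def store_some_stuff(self):
                self.con.execute("INSERT INTO widgets(name) VALUES ('w')")

        # business layer: one connection, one transaction, many repositories
        with UnitOfWork("app.db") as uow:
            uow.con.execute("CREATE TABLE IF NOT EXISTS widgets(name TEXT)")  # sketch setup
            widgets = WidgetRepository(uow)
            widgets.store_some_stuff()

    Because every repository rides the same connection, nothing escalates to a distributed transaction, and databases without nested transactions (like SQLite) see exactly one begin/commit per unit of work.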

    Read the article

< Previous Page | 34 35 36 37 38 39 40 41 42 43 44 45  | Next Page >