Search Results

Search found 9017 results on 361 pages for 'efficient storage'.


  • Automatic tracking algorithm

    - by nico
    Hi everyone, I'm trying to write a simple tracking routine to track some points in a movie. Essentially I have a series of 100-frame movies showing bright spots on a dark background. There are ~100-150 spots per frame, and they move over the course of the movie. I would like to track them, so I'm looking for an efficient (but not overkill to implement) routine to do that. A few more details:
    - The spots are a few (e.g. 5x5) pixels in size.
    - The movements are not big: a spot generally does not move more than 5-10 pixels from its original position, and the movements are generally smooth.
    - The "shape" of these spots is essentially fixed; they don't grow or shrink, but they do become less bright as the movie progresses.
    - The spots don't move in a particular direction; they can move right, then left, then right again.
    - The user will select a region around each spot and that region will be tracked, so I do not need to find the points automatically.
    As the videos are b/w, I thought I should rely on brightness. For instance, I thought I could move the region around and calculate the correlation of the region's area in the previous frame with that at the various candidate positions in the next frame. I understand that this is a fairly naïve solution, but do you think it may work? Does anyone know specific algorithms that do this? It doesn't need to be super fast; as long as it is accurate, I'm happy. Thank you, nico
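
    A minimal sketch of the correlation-based search described above, assuming each frame is a 2-D NumPy array of grayscale intensities; the function and parameter names are illustrative, not from the original post. Using a mean-subtracted, normalized correlation score also makes the match tolerant of the spots dimming over time.

        import numpy as np

        def track_spot(prev_frame, next_frame, center, size=5, search_radius=10):
            """Find the best match for a size x size template around `center` in next_frame."""
            half = size // 2
            cy, cx = center
            template = prev_frame[cy - half:cy + half + 1, cx - half:cx + half + 1].astype(float)
            template -= template.mean()

            best_score, best_pos = -np.inf, center
            for dy in range(-search_radius, search_radius + 1):
                for dx in range(-search_radius, search_radius + 1):
                    y, x = cy + dy, cx + dx
                    patch = next_frame[y - half:y + half + 1, x - half:x + half + 1].astype(float)
                    if patch.shape != template.shape:
                        continue  # search window fell off the edge of the frame
                    patch = patch - patch.mean()
                    denom = np.sqrt((template ** 2).sum() * (patch ** 2).sum())
                    if denom == 0:
                        continue
                    score = (template * patch).sum() / denom  # normalized cross-correlation
                    if score > best_score:
                        best_score, best_pos = score, (y, x)
            return best_pos

    Running track_spot per selected region, frame by frame, and feeding each result back in as the next frame's center gives the short-range, smooth tracking the question describes.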

    Read the article

  • Simple description of worker and I/O threads in .NET

    - by Konstantin
    It's very hard to find a detailed but simple description of worker and I/O threads in .NET. What's clear to me regarding this topic (though it may not be technically precise): worker threads are threads that should use the CPU for their work; I/O threads (also called "completion port threads") should hand work off to device drivers and essentially "do nothing", only monitoring the completion of non-CPU operations. What is not clear:
    - Although ThreadPool.GetAvailableThreads returns the number of available threads of both types, there seems to be no public API to schedule work onto an I/O thread. Can you only manually create worker threads in .NET?
    - It seems that a single I/O thread can monitor multiple I/O operations. Is that true? If so, why does the ThreadPool have so many available I/O threads by default?
    - In some texts I read that the callback triggered after an I/O operation completes is performed by an I/O thread. Is that true? Isn't this a job for a worker thread, considering that the callback is a CPU operation? To be more specific: do ASP.NET asynchronous pages use I/O threads?
    - What exactly is the performance benefit of switching I/O work to a separate thread instead of increasing the maximum number of worker threads? Is it because a single I/O thread monitors multiple operations, or does Windows do more efficient context switching when using I/O threads?

    Read the article

  • Enumerate all paths in a weighted graph from A to B where path length is between C1 and C2

    - by awmross
    Given two points A and B in a weighted graph, find all paths from A to B where the length of the path is between C1 and C2. Ideally, each vertex should only be visited once, although this is not a hard requirement. I suppose I could use a heuristic to sort the results of the algorithm to weed out "silly" paths (e.g. a path that just visits the same two nodes over and over again). I can think of simple brute-force algorithms, but are there any more sophisticated algorithms that will make this more efficient? I can imagine that as the graph grows this could become expensive. In the application I am developing, A and B are actually the same point (i.e. the path must return to the start), if that makes any difference. Note that this is an engineering problem, not a computer science problem, so I can use an algorithm that is fast but not necessarily 100% accurate, i.e. it is OK if it returns most of the possible paths, or if most of the paths returned are within the given length range.
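
    A hedged brute-force sketch of the enumeration (not from the original post): depth-first search over simple paths, pruning any branch whose accumulated weight already exceeds C2. The dict-of-dicts graph format is an assumption.

        def paths_in_range(graph, start, end, c1, c2):
            """Enumerate simple paths (or cycles if start == end) with total weight in [c1, c2]."""
            results = []

            def dfs(node, visited, path, length):
                if length > c2:
                    return                              # prune; assumes non-negative edge weights
                if node == end and len(path) > 1:
                    if length >= c1:
                        results.append(list(path))
                    return                              # stop at end to keep the path simple
                for nbr, w in graph.get(node, {}).items():
                    if nbr in visited and nbr != end:
                        continue                        # each intermediate vertex at most once
                    newly_marked = nbr not in visited
                    if newly_marked:
                        visited.add(nbr)
                    path.append(nbr)
                    dfs(nbr, visited, path, length + w)
                    path.pop()
                    if newly_marked:
                        visited.remove(nbr)

            dfs(start, {start}, [start], 0)
            return results

        # Works for the cycle case too, e.g. paths_in_range(g, 'A', 'A', 10, 20)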

    Read the article

  • Running an existing LINQ query against a dynamic object (DataTable like)

    - by TomTom
    Hello, I am working on a generic OData provider to go against a custom data provider that we have here. This is fully dynamic, in that I query the data provider for the tables it knows. I have a basic storage structure in place so far, based on the OData sample code. My problem is: OData supports queries and expects me to hand in an IQueryable implementation. On the lower side, I don't have any query support. Not a joke: the provider returns tables and the WHERE clause is not supported. Performance is not an issue here, the tables are small, and it is OK to sort them in the OData provider. My main problem is this: I submit a SQL statement to get the data out of a table. The result is some sort of ADO.NET data reader. I need to expose an IQueryable implementation for this data to potentially allow later filtering. Any idea how best to approach that? .NET 3.5 only (no 4.0 planned for some time). I was seriously thinking of creating dynamic DTO classes for every table (emitting bytecode) so I can use standard LINQ. Right now I am using a dictionary per entry (not too efficient), but I see no real way to filter/sort based on them.

    Read the article

  • How to avoid multiple, unused has_many associations when using multiple models for the same entity (

    - by mikep
    Hello, I'm looking for a nice, Ruby/Rails-esque solution for something. I'm trying to split up some data using multiple tables, rather than just using one gigantic table. My reasoning is pretty much to try and avoid the performance drop that would come with having a big table. So, rather than have one table called books, I have multiple tables: books1, books2, books3, etc. (I know that I could use a partition, but, for now, I've decided to go the 'multiple tables' route.) Each user has their books placed into a specific table. The actual book table is chosen when the user is created, and all of their books go into the same table. The goal is to try and keep each table pretty much even, but that's a different issue. One thing I don't particularly want to have is a bunch of unused associations in the User class. Right now, it looks like I'd have to do the following:
        class User < ActiveRecord::Base
          has_many :books1, :books2, :books3, :books4, :books5
        end

        class Books1 < ActiveRecord::Base
          belongs_to :user
        end

        class Books2 < ActiveRecord::Base
          belongs_to :user
        end
    First off, for each specific user, only one of the book tables would be usable/applicable, since all of a user's books are stored in the same table. So, only one of the associations would be in use at any time, and any other has_many :bookX association that was loaded would be a waste. I don't really know what Ruby/Rails does internally with all of those has_many associations, so maybe it's not so bad. But right now I'm thinking that it's really wasteful, and that there may just be a better, more efficient way of doing this. Is there some sort of special Ruby/Rails methodology that could be applied here to avoid having to have all of those has_many associations? Also, does anyone have any advice on how to abstract the fact that there are multiple book tables behind a single books model/class?

    Read the article

  • Best way to randomly select columns from random rows of SQL results.

    - by LesterDove
    A search of SO yields many results describing how to select random rows of data from a database table. My requirement is a bit different, though, in that I'd like to select individual columns from across random rows in the most efficient/random/interesting way possible. To better illustrate: I have a large Customers table, and from that I'd like to generate a bunch of fictitious demo Customer records that aren't real people. I'm thinking of just querying randomly from the Customers table, and then randomly pairing FirstNames with LastNames, Address, City, State, etc. So if this is my real Customer data (simplified):
        FirstName  LastName  State
        ===========================
        Sally      Simpson   SD
        Will       Warren    WI
        Mike       Malone    MN
        Kelly      Kline     KS
    then I'd generate several records that look like this:
        FirstName  LastName  State
        ===========================
        Sally      Warren    MN
        Kelly      Malone    SD
    Etc. My initial approach works, but it lacks the elegance that I'm hoping the final answer will provide. (I'm particularly unhappy with the repetitiveness of the subqueries, and the fact that this solution requires a known/fixed number of fields and therefore isn't reusable.)
        SELECT
            FirstName = (SELECT TOP 1 FirstName FROM Customer ORDER BY newid()),
            LastName  = (SELECT TOP 1 LastName  FROM Customer ORDER BY newid()),
            State     = (SELECT TOP 1 State     FROM Customer ORDER BY newid())
    Thanks!
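
    One way to picture the column-scrambling outside of SQL, as a hedged sketch in plain Python: fetch the rows once, shuffle each column independently, then recombine. The data is the simplified example above; names are illustrative.

        import random

        def scramble_columns(rows, columns):
            """rows: list of dicts keyed by column name; returns new, recombined rows."""
            shuffled = {col: [row[col] for row in rows] for col in columns}
            for values in shuffled.values():
                random.shuffle(values)          # permute every column on its own
            return [
                {col: shuffled[col][i] for col in columns}
                for i in range(len(rows))
            ]

        customers = [
            {"FirstName": "Sally", "LastName": "Simpson", "State": "SD"},
            {"FirstName": "Will",  "LastName": "Warren",  "State": "WI"},
            {"FirstName": "Mike",  "LastName": "Malone",  "State": "MN"},
            {"FirstName": "Kelly", "LastName": "Kline",   "State": "KS"},
        ]
        demo_customers = scramble_columns(customers, ["FirstName", "LastName", "State"])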

    Read the article

  • Compile time float packing/punning

    - by detly
    I'm writing C for the PIC32MX, compiled with Microchip's PIC32 C compiler (based on GCC 3.4). My problem is this: I have some reprogrammable numeric data that is stored either on EEPROM or in the program flash of the chip. This means that when I want to store a float, I have to do some type punning:
        typedef union {
            int intval;
            float floatval;
        } IntFloat;

        unsigned int float_as_int(float fval) {
            IntFloat intf;
            intf.floatval = fval;
            return intf.intval;
        }

        // Stores an int of data in whatever storage we're using
        void StoreInt(unsigned int data, unsigned int address);

        void StoreFPVal(float data, unsigned int address) {
            StoreInt(float_as_int(data), address);
        }
    I also include default values as an array of compile time constants. For (unsigned) integer values this is trivial, I just use the integer literal. For floats, though, I have to use this Python snippet to convert them to their word representation to include them in the array:
        import struct
        hex(struct.unpack("I", struct.pack("f", float_value))[0])
    ...and so my array of defaults has these indecipherable values like:
        const unsigned int DEFAULTS[] = {
            0x00000001, // Some default integer value, 1
            0x3C83126F, // Some default float value, 0.005
        }
    (These actually take the form of X macro constructs, but that doesn't make a difference here.) Commenting is nice, but is there a better way? It'd be great to be able to do something like:
        const unsigned int DEFAULTS[] = {
            0x00000001, // Some default integer value, 1
            COMPILE_TIME_CONVERT(0.005), // Some default float value, 0.005
        }
    ...but I'm completely at a loss, and I don't even know if such a thing is possible. Notes:
    - Obviously "no, it isn't possible" is an acceptable answer if true.
    - I'm not overly concerned about portability, so implementation-defined behaviour is fine; undefined behaviour is not (I have the IDB appendix sitting in front of me).
    - As far as I'm aware, this needs to be a compile time conversion, since DEFAULTS is in the global scope. Please correct me if I'm wrong about this.

    Read the article

  • Coding the Python way

    - by Aaron Moodie
    I've just spent the last half semester at Uni learning Python. I've really enjoyed it, and was hoping for a few tips on how to write more 'pythonic' code. This is the __init__ method from a recent assignment I did. At the time I wrote it, I was trying to work out how I could rewrite it using lambdas, or in a neater, more efficient way, but ran out of time.
        def __init__(self, dir):
            def _read_files(_, dir, files):
                for file in files:
                    if file == "classes.txt":
                        class_list = readtable(dir+"/"+file)
                        for item in class_list:
                            Enrol.class_info_dict[item[0]] = item[1:]
                            if item[1] in Enrol.classes_dict:
                                Enrol.classes_dict[item[1]].append(item[0])
                            else:
                                Enrol.classes_dict[item[1]] = [item[0]]
                    elif file == "subjects.txt":
                        subject_list = readtable(dir+"/"+file)
                        for item in subject_list:
                            Enrol.subjects_dict[item[0]] = item[1]
                    elif file == "venues.txt":
                        venue_list = readtable(dir+"/"+file)
                        for item in venue_list:
                            Enrol.venues_dict[item[0]] = item[1:]
                    elif file.endswith('.roll'):
                        roll_list = readlines(dir+"/"+file)
                        file = os.path.splitext(file)[0]
                        Enrol.class_roll_dict[file] = roll_list
                        for item in roll_list:
                            if item in Enrol.enrolled_dict:
                                Enrol.enrolled_dict[item].append(file)
                            else:
                                Enrol.enrolled_dict[item] = [file]
            try:
                os.path.walk(dir, _read_files, None)
            except:
                print "There was a problem reading the directory"
    As you can see, it's a little bulky. If anyone has the time or inclination, I'd really appreciate a few tips on some Python best practices. Thanks.
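
    Not the assignment's intended answer, just a hedged sketch of one possible restructuring: collections.defaultdict drops the if/else-append bookkeeping, and a small dispatch table keyed by file name replaces the if/elif chain. It reuses readtable, readlines and the Enrol dictionaries from the snippet above, and assumes classes_dict and enrolled_dict are defaultdict(list).

        import os
        from collections import defaultdict

        def _load_classes(path):
            for item in readtable(path):                      # readtable/readlines as in the post
                Enrol.class_info_dict[item[0]] = item[1:]
                Enrol.classes_dict[item[1]].append(item[0])   # classes_dict assumed defaultdict(list)

        def _load_subjects(path):
            for item in readtable(path):
                Enrol.subjects_dict[item[0]] = item[1]

        def _load_venues(path):
            for item in readtable(path):
                Enrol.venues_dict[item[0]] = item[1:]

        def _load_roll(path):
            roll_list = readlines(path)
            name = os.path.splitext(os.path.basename(path))[0]
            Enrol.class_roll_dict[name] = roll_list
            for item in roll_list:
                Enrol.enrolled_dict[item].append(name)        # enrolled_dict assumed defaultdict(list)

        HANDLERS = {"classes.txt": _load_classes,
                    "subjects.txt": _load_subjects,
                    "venues.txt": _load_venues}

        def read_enrolment_dir(directory):
            for root, _dirs, files in os.walk(directory):
                for fname in files:
                    handler = HANDLERS.get(fname)
                    if handler is None and fname.endswith(".roll"):
                        handler = _load_roll
                    if handler:
                        handler(os.path.join(root, fname))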

    Read the article

  • Should I persist images on EBS or S3?

    - by javanes
    I am migrating my Java, Tomcat, MySQL server to AWS EC2. I have already attached an EBS volume for storing MySQL data. In my web application people may upload images, so I need to persist them. There are two alternatives in my mind: save uploaded images to the EBS volume, or use the S3 service. The following are my notes; please be skeptical about them, as my expertise is in software development, not servers.
    - EBS plus: S3 storage is more expensive ($0.15/GB vs. $0.10/GB).
    - S3 plus: Serving static files from EBS may hurt my web server's performance. Is this true? Does serving images affect server performance notably? With S3, my server will not be responsible for serving statics.
    - S3 plus: Serving statics from EBS incurs I/O cost, though it will probably be minor.
    - EBS plus: People say EBS is faster.
    - S3 plus: People say S3 is safer for persistence.
    - EBS plus: No need to learn an API; it is straightforward to save the images to the EBS volume.
    In short, I cannot decide and would be happy if you could guide me. Thanks

    Read the article

  • php parsing speed optimization

    - by Arnaud
    I would like to add a tooltip or generate a link according to the elements available in the database. For example, if the HTML page printed is:
        to reboot your linux host in single-user mode you can ...
    I will use explode(" ", $row[page]), and the idea is then to look up every single word in the page to find out whether it has a related reference. In this example, let's say I've got a reference table with one entry for reboot and one for linux:
        reboot: restart a computer
        linux: operating system
    Now my output will look like (< and > replaced by @):
        to @a href="ref/reboot"@reboot@/a@ your @a href="ref/linux"@linux@/a@ host in single-user mode you can ...
    Instead of having a static list generated when I saved the content, if I add more keywords in the future the text will become more interactive. My main concern and question is: how can I create an efficient enough process to do it? Should I store all the DB entries in an array and compare them? Do an SQL query for each word (seems crazy)? Dump the table to a file and use a very long regex, or a "grep -f pattern data" way of doing it? I'm sure there must be a better way of doing it, I just don't have a clue about it; or maybe this will be far too resource-unfriendly and I should avoid doing such things. Cheers!
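
    The question is about PHP, but the "one lookup structure, one pass" idea can be sketched briefly; the sample below is Python, using the two reference entries from the example to build a single alternation regex from the reference table and replace every known keyword in one pass over the page text.

        import re

        references = {            # would come from the reference table in the database
            "reboot": "restart a computer",
            "linux": "operating system",
        }

        # Longest keywords first so overlapping entries prefer the longer match.
        pattern = re.compile(
            r"\b(" + "|".join(sorted(map(re.escape, references), key=len, reverse=True)) + r")\b"
        )

        def link_keywords(page_text):
            def repl(match):
                word = match.group(1)
                return '<a href="ref/%s" title="%s">%s</a>' % (word, references[word], word)
            return pattern.sub(repl, page_text)

        print(link_keywords("to reboot your linux host in single-user mode you can ..."))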

    Read the article

  • Save memory in Python. How to iterate over the lines and save them efficiently with a 2 million line

    - by skyl
    I have a tab-separated data file with a little over 2 million lines and 19 columns. You can find it, in US.zip: http://download.geonames.org/export/dump/. I started to run the following, but with for l in f.readlines(). I understand that just iterating over the file is supposed to be more efficient, so I'm posting that below. Still, with this small optimization, I'm using 10% of my memory on the process and have only done about 3% of the records. It looks like, at this pace, it will run out of memory like it did before. Also, the function I have is very slow. Is there anything obvious I can do to speed it up? Would it help to del the objects with each pass of the for loop?
        def run():
            from geonames.models import POI
            f = file('data/US.txt')
            for l in f:
                li = l.split('\t')
                try:
                    p = POI()
                    p.geonameid = li[0]
                    p.name = li[1]
                    p.asciiname = li[2]
                    p.alternatenames = li[3]
                    p.point = "POINT(%s %s)" % (li[5], li[4])
                    p.feature_class = li[6]
                    p.feature_code = li[7]
                    p.country_code = li[8]
                    p.ccs2 = li[9]
                    p.admin1_code = li[10]
                    p.admin2_code = li[11]
                    p.admin3_code = li[12]
                    p.admin4_code = li[13]
                    p.population = li[14]
                    p.elevation = li[15]
                    p.gtopo30 = li[16]
                    p.timezone = li[17]
                    p.modification_date = li[18]
                    p.save()
                except IndexError:
                    pass

        if __name__ == "__main__":
            run()
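
    Two things that often bite Django bulk loads like this one, offered as a hedged sketch rather than a verified fix: with DEBUG=True Django keeps every executed query in memory, and committing once per row is slow. The batch size, the reset_queries() call and the old-style commit_manually decorator below are illustrative assumptions for the Django of that era, not code from the original post.

        from django.db import reset_queries, transaction

        @transaction.commit_manually                # assumption: old-style manual transactions
        def run(path='data/US.txt', batch_size=10000):
            from geonames.models import POI         # model from the snippet above
            try:
                with open(path) as f:
                    for i, line in enumerate(f):
                        fields = line.rstrip('\n').split('\t')
                        if len(fields) < 19:
                            continue
                        p = POI(geonameid=fields[0], name=fields[1], asciiname=fields[2])
                        # ... assign the remaining columns exactly as in the snippet above ...
                        p.save()
                        if i % batch_size == 0:
                            transaction.commit()    # one commit per batch, not per row
                            reset_queries()         # drop the query log kept when DEBUG=True
                transaction.commit()
            except:
                transaction.rollback()
                raise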

    Read the article

  • Drawing only part of a

    - by Ben Reeves
    ..Continued on from my previous question. I have a 320*480 RGB565 framebuffer which I wish to draw using OpenGL ES 1.0 on the iPhone.
        - (void)setupView {
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, (int[4]){0, 0, 480, 320});
            glEnable(GL_TEXTURE_2D);
        }

        // Updates the OpenGL view when the timer fires
        - (void)drawView {
            // Make sure that you are drawing to the current context
            [EAGLContext setCurrentContext:context];

            //Get the 320*480 buffer
            const int8_t * frameBuf = [source getNextBuffer];

            //Create enough storage for a 512x512 power of 2 texture
            int8_t lBuf[2*512*512];
            memcpy (lBuf, frameBuf, 320*480*2);

            //Upload the texture
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 512, 512, 0, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, lBuf);

            //Draw it
            glDrawTexiOES(0, 0, 1, 480, 320);
            [context presentRenderbuffer:GL_RENDERBUFFER_OES];
        }
    If I produce the original texture at 512*512 the output is cropped incorrectly but otherwise looks fine. However, using the required output size of 320*480, everything is distorted and messed up. I'm pretty sure it's the way I'm copying the framebuffer into the new 512*512 buffer. I have tried this routine:
        int8_t lBuf[512][512][2];
        const char * frameDataP = frameData;
        for (int ii = 0; ii < 480; ++ii) {
            memcpy(lBuf[ii], frameDataP, 320);
            frameDataP += 320;
        }
    which is better, but the width appears to be stretched and the height is messed up. Any help appreciated.
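
    The question's code is Objective-C, but the row copy is essentially a stride problem, sketched here in NumPy as an illustration only: an RGB565 pixel is 2 bytes, so a 320-pixel row is 640 bytes and each row of the padded 512-wide texture is 1024 bytes; copying 320 bytes per row, as in the second routine above, grabs only half a row.

        import numpy as np

        def pad_rgb565_frame(frame_bytes, width=320, height=480, tex_size=512):
            frame = np.frombuffer(frame_bytes, dtype=np.uint16).reshape(height, width)
            texture = np.zeros((tex_size, tex_size), dtype=np.uint16)
            texture[:height, :width] = frame    # one full 640-byte row per row copy
            return texture.tobytes()

        # Example with a dummy frame: 480 rows x 320 RGB565 pixels.
        dummy = np.arange(480 * 320, dtype=np.uint16).tobytes()
        tex = pad_rgb565_frame(dummy)
        assert len(tex) == 512 * 512 * 2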

    Read the article

  • How to do this Python / MySQL manipulation (match) more efficiently?

    - by NJTechie
    Following is my data. Company Table:
        ID  Company  Address      City    State  Zip    Phone
        1   ABC      123 Oak St   Philly  PA     17542  7329878901
        2   CDE      111 Joe St   Newark  NJ     08654
        3   GHI      211 Foe St   Brick   NJ     07740  7321178901
        4   JAK      777 Wall     Ocean   NJ     07764  7322278901
        5   KLE      87 Ilk St    Plains  NY     07654  7376578901
        6   AB       1 W.House    SField  PA     87656  7329878901
    Branch Office Table:
        ID  Address      City    State  Zip    Phone
        1   323 Alk St   Philly  PA     17542  7329832221
        1   171 Joe St   Newark  NJ     08654
        3   287 Foe St   Brick   NJ     07740  7321178901
        3   700 Wall     Ocean   NJ     07764  7322278901
        1   89 Blk St    Surrey  NY     07154  7376222901
    File to be Matched (in MySQL):
        ID  Company  Address      City    State  Zip    Phone
        1   ABC      123 Oak St   Philly  PA     17542  7329878901
        2   AB       171 Joe St   Newark  NJ     08654
        3   GHI      211 Foe St   Brick   NJ     07740  7321178901
        4   JAK      777 Wall     Ocean   NJ     07764  7322278901
        5   K        87 Ilk St    Plains  NY     07654  7376578901
    Resulting File (last column is appendedID):
        1   ABC      123 Oak St   Philly  PA     17542  7329878901      [Original record, field always empty]
        1   ABC      171 Joe St   Newark  NJ     08654              1   [Company Table]
        1   ABC      323 Alk St   Philly  PA     17542  7329832221  1   [Branch Office Table]
        1   AB       1 W.House    SField  PA     87656  7329878901  6   [Partial firm and State, Zip match]
        2   CDE      111 Joe St   Newark  NJ     08654
        3   GHI      211 Foe St   Brick   NJ     07740  7321178901
        3   GHI      700 Wall     Ocean   NJ     07764  7322278901  3
        3   GHI      287 Foe St   Brick   NJ     07740  7321178901  3
        4   JAK      777 Wall     Ocean   NJ     07764  7322278901
        5   KLE      87 Ilk St    Surrey  NY     07654  7376578901
        5   KLE      89 Blk St    Surrey  NY     07154  7376222901  5
    Requirements:
    1) I have to match each firm in the 'File to be Matched' against the Company and Branch Office tables (MySQL).
    2) If there are multiple exact/partial matches, the ID from the Company or Branch Office table is inserted as a new row in the resulting file.
    3) Not all the firms will be matched perfectly; in that case I have to match on partial company names (like 5/8th of the company name) and any of the address fields, and insert them in the resulting file.
    Please help me out with the most efficient solution for this problem.
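
    A hedged sketch of the fuzzy company-name part in plain Python (standard library only; the column names follow the simplified example above, and the 0.625 threshold is just a stand-in for the "5/8th of the company name" rule, not a value from the original post):

        from difflib import SequenceMatcher

        def name_similarity(a, b):
            return SequenceMatcher(None, a.upper(), b.upper()).ratio()

        def find_candidates(record, companies, threshold=0.625):
            """record and companies are dicts / lists of dicts keyed by column name."""
            hits = []
            for comp in companies:
                score = name_similarity(record["Company"], comp["Company"])
                exact_addr = (record["State"] == comp["State"] and record["Zip"] == comp["Zip"])
                if score == 1.0 or (score >= threshold and exact_addr):
                    hits.append((comp["ID"], score))
            return sorted(hits, key=lambda t: -t[1])

        companies = [
            {"ID": 6, "Company": "AB",  "State": "PA", "Zip": "87656"},
            {"ID": 1, "Company": "ABC", "State": "PA", "Zip": "17542"},
        ]
        print(find_candidates({"Company": "ABC", "State": "PA", "Zip": "17542"}, companies))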

    Read the article

  • Getting a lightweight installation of java eclipse.

    - by liam
    Having dealt with yet another stupid eclipse problem, I want to try to get the lightest, most minimal eclipse installation possible. To be clear, I use eclipse for two things:
    - Editing Java
    - Debugging Java
    Everything else I do through emacs/zsh (editing jsp/xml/js, file management, svn check-in, etc). I have not found any aspect of working in eclipse on those tasks to be efficient or even reliable, so I do not want plug-ins that relate to them. From the eclipse.org site, this is the lightest install of eclipse that they have, and I don't want any of those things (bugzilla, mylyn, cvs, xml_ui), and have actually had problems with each of them even though I do not use them. So what is the minimal build I can get that will:
    1) Ignore svn metadata
    2) Include the full-featured editor (intellisense and type-finding)
    3) Include the full-featured debugger (standard eclipse/jdk)
    and not have any extra plug-ins, platforms, or "integrations" with other platforms? Specifically, I don't want to deal with plug-ins relating to Maven, JSP validation, Javascript editing or validation, CVS or SVN, Mylyn, Spring or Hibernate "natures", app servers like a bundled tomcat/glassfish/etc, J2EE tools, or anything of the like. I do primarily spring/hibernate/web-mvc apps, and have never dealt with an eclipse plug-in that handles any of it gracefully. I can work effectively with my own toolset, but eclipse extensions do nothing but get in the way. I have worked with plain eclipse up to Ganymede, MyEclipse (up to 7.5), and the latest version of SpringSource Tools, and find that they are all saddled with buggy, useless plug-ins (though the combination is always different). Switching to netbeans/intellij is not an option, and my teammates work with svn-controlled .class/.project files, so it pretty much has to be eclipse. Does anyone have any good advice on how I can save a few grey hairs?

    Read the article

  • Reading text files line by line, with exact offset/position reporting

    - by Benjamin Podszun
    Hi. My simple requirement: reading a huge (a million) line test file (for this example assume it's a CSV of some sort) and keeping a reference to the beginning of each line for faster lookup in the future (read a line starting at X). I tried the naive and easy way first, using a StreamReader and accessing the underlying BaseStream.Position. Unfortunately that doesn't work as I intended. Given a file containing the following:
        Foo
        Bar
        Baz
        Bla
        Fasel
    and this very simple code:
        using (var sr = new StreamReader(@"C:\Temp\LineTest.txt")) {
            string line;
            long pos = sr.BaseStream.Position;
            while ((line = sr.ReadLine()) != null) {
                Console.Write("{0:d3} ", pos);
                Console.WriteLine(line);
                pos = sr.BaseStream.Position;
            }
        }
    the output is:
        000 Foo
        025 Bar
        025 Baz
        025 Bla
        025 Fasel
    I can imagine that the stream is trying to be helpful/efficient and probably reads in (big) chunks whenever new data is necessary. For me this is bad. The question, finally: is there any way to get the (byte, char) offset while reading a file line by line, without using a basic Stream and messing with \r, \n, \r\n and string encoding etc. manually? Not a big deal, really, I just don't like to build things that might exist already.

    Read the article

  • How to get a list of "fastest miles" from a set of GPS points

    - by santiagobasulto
    I'm trying to solve a weird problem. Maybe you guys know of some algorithm that takes care of this. I have data for a cargo freight truck and want to extract some information. Suppose I've got a list of sorted points that I get from the GPS; that's the route for that truck:
        [
            { "lng": "-111.5373066", "lat": "40.7231711", "time": "1970-01-01T00:00:04Z", "elev": "1942.1789265256325" },
            { "lng": "-111.5372056", "lat": "40.7228762", "time": "1970-01-01T00:00:07Z", "elev": "1942.109892409177" }
        ]
    Now, what I want to get is a list of the "fastest miles". An example: given the points A, B, C, D, E, F, the distance from point A to point B is 1 mile, and the cargo took 10:32 minutes. From point B to point D I've got another mile, and the cargo took 10 minutes, etc. So I need a list sorted by time, similar to:
        B -> D: 10
        A -> B: 10:32
        D -> F: 11:02
    Do you know any efficient algorithm that lets me calculate that? Thank you all. PS: I'm using Python.
    EDIT: I've got the distance. I know how to calculate it and there are plenty of posts on how to do that. What I need is an algorithm to tokenize by mile and get the speed from that. Having a distance function is not helpful enough:
        results = {}
        for point in points:
            aux_points = points.takeWhile(point>n) # This doesn't exist, just trying to be simple
            for aux_point in aux_points:
                d = distance(point, aux_point)
                if d == 1_MILE:
                    time_elapsed = time(point, aux_point)
                    results[time_elapsed] = (point, aux_point)
    I'm still doing some pretty inefficient calculations.
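
    A hedged sketch of a "tokenize by mile" pass (not from the original post): it assumes points sorted by time, as stated, and a distance(p, q) helper returning miles, which the poster says they already have. A sliding window over the cumulative distance finds, for each start point, the first point at least one mile away in a single pass instead of comparing every pair.

        from datetime import datetime

        def parse_time(p):
            return datetime.strptime(p["time"], "%Y-%m-%dT%H:%M:%SZ")

        def fastest_miles(points, distance, mile=1.0):
            # cumulative distance from the first point to every point
            cum = [0.0]
            for prev, curr in zip(points, points[1:]):
                cum.append(cum[-1] + distance(prev, curr))

            segments = []
            j = 0
            for i in range(len(points)):
                if j < i:
                    j = i
                while j < len(points) and cum[j] - cum[i] < mile:
                    j += 1                      # advance until the window spans >= 1 mile
                if j == len(points):
                    break                       # no full mile left from here on
                elapsed = parse_time(points[j]) - parse_time(points[i])
                segments.append((elapsed, i, j))
            return sorted(segments)             # fastest (shortest elapsed time) first

        # Each segment is an (elapsed, start_index, end_index) triple, sorted by elapsed time.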

    Read the article

  • Does the Lucene search function work on large documents?

    - by shaon-fan
    Hi there, I have a problem when searching with Lucene. First, the Lucene indexing function works well on huge documents, such as a .pst file (the Outlook mail storage): it can build an index that includes all the information in the .pst. The only problem is that the indexed document is sometimes too large and includes very many words. When I search using Lucene, it can only process the front part of the indexed document; if a word appears in the back part, it can't be found and there are no hits in the result. But when I split the indexed document into several parts (in a crude way, while debugging) and search each part, it works well. So I want to know how to split the index, and what size the search limit should be. Cheers, and waiting for a reply.
    Update: following what Coady said, I set the length to the maximum, 2^31-1, but the search results still don't include what I want. Simply put, I convert the document into a string array to analyze; one document has 79,680 words, including spaces and symbols. When I search for a certain word it returns only 300 hits, although there are actually more than 300 results. For the same reason, when I search for a word in the back part of the document, it still can't be found.
        // set the length
        indexWriter.SetMaxFieldLength(2147483647);
        // search
        IndexSearcher searcher = new IndexSearcher(Program.Parameters["INDEX_LOCATION"].ToString());
        Hits hits = searcher.Search(query);
    This is my code, the same as others use. I found this problem when I needed to count every word's hits in a document, and also found that it couldn't find words in the back part of the document. Please help me figure out whether there is some searcher length to set somewhere. How have you handled this problem?

    Read the article

  • Filtering on a left join in SQLalchemy

    - by Adam Ernst
    Using SQLAlchemy I want to perform a left outer join and filter out rows that DO have a match in the joined table. I'm sending push notifications, so I have a Notification table. This means I also have an ExpiredDeviceId table to store device_ids that are no longer valid. (I don't want to just delete the affected notifications, as the user might later re-install the app, at which point the notifications should resume according to Apple's docs.)
        CREATE TABLE Notification (device_id TEXT, time DATETIME);
        CREATE TABLE ExpiredDeviceId (device_id TEXT PRIMARY KEY, expiration_time DATETIME);
    Note: there may be multiple Notifications per device_id. There is no "Device" table for each device. So when doing SELECT FROM Notification I should filter accordingly. I can do it in SQL:
        SELECT * FROM Notification
        LEFT OUTER JOIN ExpiredDeviceId ON Notification.device_id = ExpiredDeviceId.device_id
        WHERE expiration_time IS NULL
    But how can I do it in SQLAlchemy?
        sess.query(
            Notification, ExpiredDeviceId
        ).outerjoin(
            (ExpiredDeviceId, Notification.device_id == ExpiredDeviceId.device_id)
        ).filter(
            ???
        )
    Alternately I could do this with a device_id NOT IN (SELECT device_id FROM ExpiredDeviceId) clause, but that seems way less efficient.
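
    A hedged sketch of the usual SQLAlchemy idiom for this kind of anti-join (reusing sess, Notification and ExpiredDeviceId from the question): outer-join and then keep only the rows where the joined table contributed nothing, i.e. one of its columns came back NULL. Comparing a column to None makes SQLAlchemy emit IS NULL.

        live_notifications = (
            sess.query(Notification)
            .outerjoin(
                ExpiredDeviceId, Notification.device_id == ExpiredDeviceId.device_id
            )
            .filter(ExpiredDeviceId.device_id == None)   # no matching expired row
            .all()
        )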

    Read the article

  • Comparing all values within a List against each other

    - by Kave
    I am a bit stuck here and can't think further.
        public struct CandidateDetail {
            public int CellX { get; set; }
            public int CellY { get; set; }
            public int CellId { get; set; }
        }

        var dic = new Dictionary<int, List<CandidateDetail>>();
    How can I compare each CandidateDetail item against the other CandidateDetail items within the same dictionary in the most efficient way? Example: there are three keys in the dictionary: 5, 6 and 1, so we have three entries. Each of these key entries has a List associated with it; in this case let's say each of the three numbers has exactly two CandidateDetail items within its list. In other words, we have two 5s, two 6s and two 1s in different (or the same) cells. I would like to know:
        if [5].1stItem.CellId == [6].1stItem.CellId => we got a hit. That means we have a 5 and a 6 within the same cell.
        if [5].2ndItem.CellId == [6].2ndItem.CellId => perfect. We found out that the other 5 and 6 are together within a different cell.
        if [1].1stItem.CellId == ...
    Now I need to check the 1 against the other 5 and 6 as well, to see if the 1 exists within the same two cells or not. Could a LINQ expression help here, perhaps? I am quite stuck... maybe I am taking the wrong approach. I am trying to solve the "hidden pair" of the game Sudoku. :) http://www.sudokusolver.eu/ExplainSolveMethodD.aspx Many Thanks, Kave
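
    A hedged sketch of the comparison itself in Python (the question is C#, but the idea carries over): for a hidden pair, two candidate values must be restricted to exactly the same two cells of a unit, so comparing per-value sets of cell ids replaces the item-by-item checks. The data layout here is an assumption, not the question's dictionary.

        from itertools import combinations

        def hidden_pairs(candidates):
            """candidates: dict mapping value -> list of cell ids where it can still go."""
            cells_by_value = {v: frozenset(cells) for v, cells in candidates.items()}
            pairs = []
            for a, b in combinations(cells_by_value, 2):
                if cells_by_value[a] == cells_by_value[b] and len(cells_by_value[a]) == 2:
                    pairs.append((a, b, cells_by_value[a]))
            return pairs

        # e.g. values 5 and 6 both restricted to cells 12 and 27 of a unit:
        print(hidden_pairs({5: [12, 27], 6: [12, 27], 1: [12, 30, 44]}))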

    Read the article

  • AJAX Autosave

    - by antony.trupe
    What's the best javascript library, or plugin or extension to a library, that has implemented autosaving functionality? The specific need is to be able to 'save' a data grid. Think Gmail and Google Documents' autosave. I don't want to reinvent the wheel if it's already been invented. I'm looking for an existing implementation of the magical autoSave() function. Auto-saving: pushing to server code that saves to persistent storage, usually a DB. The server code framework is outside the scope of this question. Note that I'm not looking for an Ajax library, but a library/framework a level higher, one that interacts with the form itself. daemach introduced an implementation on top of jQuery at http://ideamill.synaptrixgroup.com/?p=3; I'm not convinced it meets the lightweight and well-engineered criteria, though. Criteria:
    - stable, lightweight, well engineered
    - saves onChange and/or onBlur
    - saves no more frequently than a given number of milliseconds
    - handles multiple updates happening at the same time
    - doesn't save if no change has occurred since the last save
    - saves to different urls per input class
    Update: I've stabilized a solution. See my answer below for links.

    Read the article

  • How to eliminate duplicate nodes based on values of multiple attributes?

    - by JayRaj
    Hello all, how can I eliminate duplicate nodes based on the values of multiple (more than 1) attributes, where the attribute names are passed as parameters to the stylesheet? I am aware of the Muenchian method of grouping that uses an <xsl:key> element, but I understand that XSLT 1.0 does not allow parameters/variables in <xsl:key>. Is there another method to achieve duplicate node removal? It is fine if it is not as efficient as the Muenchian method. Update from a previous question, the XML:
        <data id="root">
          <record id="1" operator1='xxx' operator2='yyy' operator3='zzz'/>
          <record id="2" operator1='abc' operator2='yyy' operator3='zzz'/>
          <record id="3" operator1='abc' operator2='yyy' operator3='zzz'/>
          <record id="4" operator1='xxx' operator2='yyy' operator3='zzz'/>
          <record id="5" operator1='xxx' operator2='lkj' operator3='tyu'/>
          <record id="6" operator1='xxx' operator2='yyy' operator3='zzz'/>
          <record id="7" operator1='abc' operator2='yyy' operator3='zzz'/>
          <record id="8" operator1='abc' operator2='yyy' operator3='zzz'/>
          <record id="9" operator1='xxx' operator2='yyy' operator3='zzz'/>
          <record id="10" operator1='rrr' operator2='yyy' operator3='zzz'/>
        </data>

    Read the article

  • SSIS - Bulk Update at Database Field Level

    - by Adam
    Hello, here's our mission:
    - Receive files from clients. Each file contains anywhere from 1 to 1,000,000 records.
    - Records are loaded to a staging area and business-rule validation is applied.
    - Valid records are then pumped into an OLTP database in a batch fashion, with the following rules: if the record does not exist (we have a key, so this isn't an issue), create it; if the record exists, optionally update each database field. The decision is made based on one of 3 factors... I don't believe it's important what those factors are.
    Our main problem is finding an efficient method of optionally updating the data at a field level. This is applicable across ~12 different database tables, with anywhere from 10 to 150 fields in each table (the original DB design leaves much to be desired, but it is what it is). Our first attempt has been to introduce a table that mirrors the staging environment (1 field in staging for each system field) and contains a masking flag. The value of the masking flag represents the 3 factors. We've then put an UPDATE similar to...
        UPDATE OLTPTable1
        SET Field1 = CASE
            WHEN Mask.Field1 = 0 THEN Staging.Field1
            WHEN Mask.Field1 = 1 THEN COALESCE( Staging.Field1 , OLTPTable1.Field1 )
            WHEN Mask.Field1 = 2 THEN COALESCE( OLTPTable1.Field1 , Staging.Field1 )
        ...
    As you can imagine, the performance is rather horrendous. Has anyone tackled a similar requirement? We're a MS shop using a Windows Service to launch SSIS packages that handle the data processing. Unfortunately, we're pretty much novices at this stuff.

    Read the article

  • Save PyML.classifiers.multi.OneAgainstRest(SVM()) object?

    - by Michael Aaron Safyan
    I'm using PyML to construct a multiclass linear support vector machine (SVM). After training the SVM, I would like to be able to save the classifier, so that on subsequent runs I can use it right away without retraining. Unfortunately, the .save() function is not implemented for that classifier, and attempting to pickle it (both with standard pickle and cPickle) yields the following error message:
        pickle.PicklingError: Can't pickle : it's not found as __builtin__.PySwigObject
    Does anyone know of a way around this, or of an alternative library without this problem? Thanks.
    Edit/Update: I am now training and attempting to save the classifier with the following code:
        mc = multi.OneAgainstRest(SVM());
        mc.train(dataset_pyml, saveSpace=False);
        for i, classifier in enumerate(mc.classifiers):
            filename = os.path.join(prefix, labels[i] + ".svm");
            classifier.save(filename);
    Notice that I am now saving with the PyML save mechanism rather than with pickling, and that I have passed "saveSpace=False" to the training function. However, I am still getting an error:
        ValueError: in order to save a dataset you need to train as: s.train(data, saveSpace = False)
    However, I am passing saveSpace=False... so, how do I save the classifier(s)? P.S. The project I am using this in is pyimgattr, in case you would like a complete testable example... the program is run with "./pyimgattr.py train"; that will get you this error. Also, a note on version information:
        [michaelsafyan@codemage /Volumes/Storage/classes/cse559/pyimgattr]$ python
        Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29)
        [GCC 4.2.1 (Apple Inc. build 5646)] on darwin
        Type "help", "copyright", "credits" or "license" for more information.
        >>> import PyML
        >>> print PyML.__version__
        0.7.0

    Read the article

  • Migrating a Large amount of data from old publishing site to new site

    - by tommizzle
    Hi, I am currently in the process of creating a new news/publishing site on the Movable Type platform. There are around 20 or so sites with 20,000+ rows of data to be moved/aggregated into ~8 sites (we have a number of location-specific sites and are going to aggregate the content from these into one single site for each niche). We have discussed how to do this and came to the conclusion that it would probably be better to hire somebody to do it (I could probably do it, but I'm limited on time and am sure that a specialist would be more efficient). So my questions to you guys are:
    1) What kind of skill set should we look for in an applicant?
    2) There will be a large amount of input from our side... is getting somebody to work remotely out of the question?
    3) How long would a task like this traditionally take (I know this question is very subjective, but an estimate would be awesome)?
    4) Do you have any recommendations for firms who would be able to take on a large task like this?
    Thanks in advance, Tom

    Read the article

  • pattern for the following condition in java

    - by zahir hussain
    Hi, I want to know how to write a pattern. For example, the text is:
        "AboutGoogle AdWords Drive traffic and customers to your site. Pay through Cheque, Net Banking or Credit Card. Google Toolbar Add a search box to your browser. Google SMS To find out local information simply SMS to 54664. Gmail Free email with 7.2GB storage and less spam. Try Gmail today. Our ProductsHelp Help with Google Search, Services and ProductsGoogle Web Search Features Translation, I'm Feeling Lucky, CachedGoogle Services & Tools Toolbar, Google Web APIs, ButtonsGoogle Labs Ideas, Demos, ExperimentsFor Site OwnersAdvertising AdWords, AdSenseBusiness Solutions Google Search Appliance, Google Mini, WebSearchWebmaster Central One-stop shop for comprehensive info about how Google crawls and indexes websitesSubmit your content to Google Add your site, Google SitemapsOur CompanyPress Center News, Images, ZeitgeistJobs at Google Openings, Perks, CultureCorporate Info Company overview, Philosophy, Diversity, AddressesInvestor Relations Financial info, Corporate governanceMore GoogleContact Us FAQs, Feedback, NewsletterGoogle Logos Official Logos, Holiday Logos, Fan LogosGoogle Blog Insights to Google products and cultureGoogle Store Pens, Shirts, Lava lamps©2010 Google - Privacy Policy - Terms of Service"
    I have to search for some word, for example "google insights", so how do I write the code in Java? I have written a small piece of code; please check it and answer my question. The code below only finds where the search word is, but I need to display some words in front of the search word and some words after it, similar to a Google search result. My code is:
        Pattern p = Pattern.compile("(?i)(.*?)"+search+"");
        Matcher m = p.matcher(full);
        String title = "";
        while (m.find() == true) {
            title = m.group(1);
            System.out.println(title);
        }
    Here full is the original content and search is the search word. Thanks in advance.
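
    A hedged sketch of the snippet idea in Python (the question is Java, but the regex approach is the same): capture a few words on either side of the match instead of everything before it. The word counts and helper name are illustrative assumptions.

        import re

        def snippet(text, phrase, context_words=5):
            # up to `context_words` whitespace-separated words before and after the phrase
            before = r"(?:\S+\s+){0,%d}" % context_words
            pattern = re.compile(
                r"(%s)(%s)(\s*(?:\S+\s*){0,%d})" % (before, re.escape(phrase), context_words),
                re.IGNORECASE)
            m = pattern.search(text)
            if not m:
                return None
            return "...%s%s%s..." % (m.group(1), m.group(2), m.group(3))

        print(snippet("Google Blog Insights to Google products and culture Google Store Pens",
                      "google products", 3))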

    Read the article
