Search Results

Search found 9017 results on 361 pages for 'efficient storage'.

  • Can I split a single SQL 2008 DB Table into multiple filegroups, based on a discriminator column?

    - by Pure.Krome
    Hi folks, I've got a SQL Server 2008 R2 database which has a number of tables. Two of these tables contain a lot of large data, mainly because one of them is VARBINARY(MAX) and the sister table is GEOGRAPHY. (Why two tables? Read below if you're interested.***) The data in these tables are geospatial shapes, such as zipcode boundaries. Now, the first 70K-odd rows are for DataType = 1; the remaining 5 million rows are for DataType = 2. Is it possible to split the table data into two files, so that all rows where DataType != 2 go into File_A and all rows where DataType = 2 go into File_B? That way, when I back up the DB, I can skip adding File_B so my download is waaaaay smaller. Is this possible? I'm guessing you might be thinking: why not keep them as TWO extra tables? Mainly because in the code the data is conceptually the same; it just happens that I want to split the storage of this model data. It really messes up my model if I now have two aggregates in my model instead of one. ***Entity Framework doesn't like tables with GEOGRAPHY, so I have to create a new table which transforms the GEOGRAPHY to VARBINARY and then drop that into EF.

  • ASP.NET or PHP: Is Memcached useful for storing user-state information?

    - by hamlin11
    This question may expose my ignorance as a web developer, but that wouldn't exactly be a bad thing for me, now would it? I need to store user-state information. Examples of information I need to store per user (define "user": an unauthenticated visitor): whether the user arrived at the site from Google/Bing/Yahoo; whether the user utilized the search feature (true/false); the list of product pages visited on the current visit. It is my understanding that I could store this in the view state, but that hurts page-load time from the end-user's perspective, because a significant amount of non-viewable information is transferred to and from the end-user even though the server is the only side that needs the info. On a similar note, it is my understanding that session state can be used to store such information, but doesn't this also result in the same information being transferred to the user and stored in their cookie? (Not quite as bad as viewstate, but it still doesn't feel ideal.) This leaves me with either a server-only session storage system or a memcached solution. Is memcached the only good option here?

  • Find messages from a certain key to a certain key while being able to remove stale keys

    - by Alfred
    My problem: let's say I add messages to some sort of data structure: 1. "dude" 2. "where" 3. "is" 4. "my" 5. "car". Asking for messages with indexes [4,5] should return: "my", "car". Next, let's assume that after a while I would like to purge old messages, because they aren't useful anymore and I want to save memory. Let's say at time x, messages 1-3 became stale. I assume it would be most efficient to just do the deletion once every x seconds. After that, my data structure should contain: 4. "my" 5. "car". My solution? I was thinking of using a ConcurrentSkipListSet or ConcurrentSkipListMap, and of deleting the old messages from inside a newSingleThreadScheduledExecutor. I would like to know how you would implement this efficiently and thread-safely, or whether a library already does it.
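
    One way this could look with the JDK's ConcurrentSkipListMap, keyed by sequence number (a sketch; the stale-watermark supplier is an assumption, everything else is standard JDK API):

        import java.util.Collection;
        import java.util.concurrent.ConcurrentSkipListMap;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;
        import java.util.function.LongSupplier;

        public class MessageLog {
            private final ConcurrentSkipListMap<Long, String> messages =
                new ConcurrentSkipListMap<>();

            public void add(long seq, String text) {
                messages.put(seq, text);
            }

            // Range query: get(4, 5) returns "my", "car" in key order.
            public Collection<String> get(long from, long to) {
                return messages.subMap(from, true, to, true).values();
            }

            // Purge every periodSeconds. headMap returns a live view of the map,
            // so clearing it removes every entry below the stale watermark.
            public ScheduledExecutorService startPurge(LongSupplier staleBelow, long periodSeconds) {
                ScheduledExecutorService purger = Executors.newSingleThreadScheduledExecutor();
                purger.scheduleAtFixedRate(
                    () -> messages.headMap(staleBelow.getAsLong(), false).clear(),
                    periodSeconds, periodSeconds, TimeUnit.SECONDS);
                return purger;
            }
        }

    Both subMap and headMap are views backed by the map itself, so the range read and the scheduled purge are thread-safe without any external locking.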

  • Interesting Scala typing solution, doesn't work in 2.7.7?

    - by djc
    I'm trying to build some image algebra code that can work with images (basically a linear pixel buffer + dimensions) that have different types for the pixel. To get this to work, I've defined a parametrized Pixel trait with a few methods that should be usable with any Pixel subclass. (For now, I'm only interested in operations that work on the same Pixel type.) Here it is:

        trait Pixel[T <: Pixel[T]] {
          def mul(v: Double): T
          def max(v: T): T
          def div(v: Double): T
          def div(v: T): T
        }

    Now I define a single Pixel type whose storage is based on three doubles (basically RGB 0.0-1.0); I've called it TripleDoublePixel:

        class TripleDoublePixel(v: Array[Double]) extends Pixel[TripleDoublePixel] {
          var data: Array[Double] = v

          def this() = this(Array(0.0, 0.0, 0.0))

          def toString(): String = {
            "(" + data(0) + ", " + data(1) + ", " + data(2) + ")"
          }

          def increment(v: TripleDoublePixel) {
            data(0) += v.data(0)
            data(1) += v.data(1)
            data(2) += v.data(2)
          }

          def mul(v: Double): TripleDoublePixel = {
            new TripleDoublePixel(data.map(x => x * v))
          }

          def div(v: Double): TripleDoublePixel = {
            new TripleDoublePixel(data.map(x => x / v))
          }

          def div(v: TripleDoublePixel): TripleDoublePixel = {
            var tmp = new Array[Double](3)
            tmp(0) = data(0) / v.data(0)
            tmp(1) = data(1) / v.data(1)
            tmp(2) = data(2) / v.data(2)
            new TripleDoublePixel(tmp)
          }

          def max(v: TripleDoublePixel): TripleDoublePixel = {
            val lv = data(0) * data(0) + data(1) * data(1) + data(2) * data(2)
            val vv = v.data(0) * v.data(0) + v.data(1) * v.data(1) + v.data(2) * v.data(2)
            if (lv > vv) (this) else v
          }
        }

    Now I want to write code that uses this without having to know what type the pixels are. For example:

        def idiv[T](a: Image[T], b: Image[T]) {
          for (i <- 0 until a.data.size) {
            a.data(i) = a.data(i).div(b.data(i))
          }
        }

    Unfortunately, this doesn't compile:

        (fragment of lindet-gen.scala):145: error: value div is not a member of T
            a.data(i) = a.data(i).div(b.data(i))

    I was told in #scala that this worked for someone else, but that was on 2.8. I've tried to get 2.8-rc1 going, but it doesn't compile for me. Is there any way to get this to work in 2.7.7?

  • Stable/repeatable random sort (MySQL, Rails)

    - by Matt Rogish
    I'd like to paginate through a randomly sorted list of ActiveRecord models (rows from a MySQL database). However, this randomization needs to persist on a per-session basis, so that other people who visit the website also receive a random, paginate-able list of records. Let's say there are enough entities (tens of thousands) that storing the randomly sorted ID values in either the session or a cookie is too large, so I must temporarily persist them some other way (MySQL, file, etc.). Initially I thought I could create a function based on the session ID and the page ID (returning the object IDs for that page), but since the object ID values in MySQL are not sequential (there are gaps), that seemed to fall apart as I was poking at it. The nice thing is that it would require no/minimal storage, but the downsides are that it is likely pretty complex to implement and probably CPU intensive. My feeling is I should create an intersection table, something like:

        random_sorts(sort_id, created_at, user_id NULL if guest)
        random_sort_items(sort_id, item_id, position)

    And then simply store the sort_id in the session. Then I can paginate the random_sorts WHERE sort_id = n ORDER BY position LIMIT ... as usual. Of course, I'd have to put some sort of reaper in there to remove them after some period of inactivity (based on random_sorts.created_at). Unfortunately, I'd have to invalidate the sort as new objects were created (and/or old objects were removed, although deletion is very rare), and as load increases, the size/performance of this table (even properly indexed) drops. It seems like this ought to be a solved problem, but I can't find any Rails plugins that do it... Any ideas? Thanks!!
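
    Before building the intersection table, one storage-free alternative may be worth testing: MySQL's RAND() accepts a seed, and ORDER BY RAND(seed) produces the same ordering on every execution as long as the underlying rows are unchanged, so persisting a single integer per session is enough for repeatable pagination. A sketch of the idea (shown as JDBC rather than Rails; the items table and the session plumbing are assumptions), with the question's caveats intact: inserts and deletes still perturb the ordering, and ORDER BY RAND() filesorts the whole table on every page, so measure before committing to it:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class RandomPager {
            // seed: one random int generated when the session starts and kept in
            // the session store; same seed => same shuffle => stable pagination.
            public static ResultSet fetchPage(Connection conn, int seed, int perPage, int page)
                    throws SQLException {
                // The seed is an integer we generate ourselves, so inlining it as a
                // literal is safe; RAND(constant) yields a repeatable ordering in
                // MySQL while the underlying row set stays the same.
                PreparedStatement ps = conn.prepareStatement(
                    "SELECT id FROM items ORDER BY RAND(" + seed + ") LIMIT ? OFFSET ?");
                ps.setInt(1, perPage);
                ps.setInt(2, page * perPage);
                return ps.executeQuery();
            }
        }

    If the per-page filesort proves too slow at tens of thousands of rows, the random_sorts/random_sort_items design above remains the more scalable option.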

  • Mgmt wants to re-title my position: Any help...? [closed]

    - by JohnFlyTN
    Management here wants to re-title my position, since I'm doing quite a bit of different work than was originally planned. They want my input. After a quick glance over my skill set and job duties, what would we need to describe this position as? I'll list only things I'm at least proficient in, not things I have a passing knowledge of. About me: ~10 years of software development. Languages: C, C++, Perl, PHP, C#, TCL, Unix shell scripting, SQL (T-SQL, PL/SQL). Systems: MS-DOS, Windows 3.1 to 7 for client, NT 4 to 2008 for server, OS/2, IBM MVS & z/OS, Linux (multiple distros), AIX. Current position: I write all sorts of in-house software. The range is single-user apps to large systems spanning multiple OSes. One of the larger projects I've designed and coded is about 100k lines of C#, plus a database where I have been the sole designer and maintainer. I have near-total freedom to design as I see fit; constraints are usually budgetary. Skills required to replace me in my current role: Windows and Unix admin, database design, .NET up to 3.5 (C#, ASP.NET), C++, Perl, and good skills in designing large and efficient data-processing systems. Given this small amount of information, what would you see this position being titled? (Is more information required to render a decision?)

  • Quickest way to compare a bunch of arrays or lists of values

    - by zapping
    Can you please let me know the quickest and most efficient way to compare a large set of values? There is a list of parent codes (strings), and each code has a series of child values (strings). The child lists have to be compared with each other to find duplicates and count how many times they repeat:

        code1(code1_value1, code1_value2, code3_value3, ..., code1_valueN);
        code2(code2_value1, code1_value2, code2_value3, ..., code2_valueN);
        code3(code2_value1, code3_value2, code3_value3, ..., code3_valueN);
        ...
        codeN(codeN_value1, codeN_value2, codeN_value3, ..., codeN_valueN);

    The lists are huge: say there are 100 parent codes and each has about 250 values. There will not be duplicates within a single code list. I'm doing it in Java, and the solution I could figure out is: store the values of the first code list as codeMap.put(codeValue, duplicateCount), with the count initialized to 0, then compare the rest of the values with this; if a value is in the map, increment its count, otherwise add it to the map. The downfall of this is getting the duplicates out: another iteration needs to be performed over a very large list. An alternative is to maintain another hashmap for duplicates, like duplicateCodeMap.put(codeValue, duplicateCount), and change the initial hashmap to codeMap.put(codeValue, codeValue). Speed is the requirement. Hope one of you can help me with it.
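
    A single-pass sketch of the map-based approach described above: recording a value the first time its count reaches 2 removes the need for the second iteration over the data (method and map names are illustrative):

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class DuplicateCounter {
            // One pass over all ~100 x 250 values; no second iteration is needed,
            // because a value is recorded the moment its count reaches 2.
            public static Map<String, Integer> findDuplicates(List<List<String>> codeLists) {
                Map<String, Integer> counts = new HashMap<>(64 * 1024); // pre-sized to avoid rehashing
                List<String> duplicates = new ArrayList<>();
                for (List<String> codeList : codeLists) {
                    for (String value : codeList) {
                        int c = counts.merge(value, 1, Integer::sum); // count every occurrence
                        if (c == 2) {
                            duplicates.add(value); // added exactly once per duplicate value
                        }
                    }
                }
                Map<String, Integer> result = new HashMap<>();
                for (String value : duplicates) {
                    result.put(value, counts.get(value)); // final repeat counts only
                }
                return result;
            }
        }

    At ~25,000 strings total the hash lookups dominate, so pre-sizing the map (avoiding rehash/resize) is the main constant-factor win.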

  • .Net Hash Codes no longer persistent?

    - by RobV
    I have an API where various types have custom hash codes. These hash codes are based on getting the hash of a string representation of the object in question. Various salting techniques are used so that, as far as possible, hash codes do not collide and objects of different types with equivalent string representations have different hash codes. Obviously, since the hash codes are based on strings, there are some collisions (infinite strings vs the limited range of 32-bit integers). I use hashes based on string representations because I need the hashes to persist over sessions, particularly for database storage of objects. Suddenly today my code has started generating different hash codes for objects, which is breaking all kinds of things. It was working earlier today and I haven't touched any of the code involved in hash code generation. I'm aware that the .NET documentation allows the implementation of hash codes to change between .NET framework versions (and between 32- and 64-bit versions), but I haven't changed the framework version, and there have been no framework updates recently as far as I can remember. Any ideas? This seems really weird. Edit: hash codes are generated as follows:

        //Compute Hash Code
        this._hashcode = (this._nodetype + this.ToString() + PlainLiteralHashCodeSalt).GetHashCode();
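
    Since string.GetHashCode is documented as unstable across framework versions and bitness (the usual culprit when nothing was recompiled is the process switching between 32- and 64-bit, e.g. an AnyCPU build landing on an x64 machine), anything persisted to a database is safer with a hash whose algorithm you control. A sketch of one common portable choice, 32-bit FNV-1a (shown in Java; the same arithmetic ports directly to C# with unchecked math, and the question's salting would be applied to the input string first):

        import java.nio.charset.StandardCharsets;

        public final class StableHash {
            // 32-bit FNV-1a: same output on every platform, version, and run.
            public static int fnv1a32(String s) {
                int hash = 0x811C9DC5;                 // FNV offset basis
                for (byte b : s.getBytes(StandardCharsets.UTF_8)) {
                    hash ^= (b & 0xFF);                // fold in one byte
                    hash *= 0x01000193;                // FNV prime (overflow is intended)
                }
                return hash;
            }
        }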

  • Long primitive or AtomicLong for a counter?

    - by Rich
    Hi, I need a counter of type long with the following requirements/facts: incrementing the counter should take as little time as possible; the counter will only be written to by one thread; reading from the counter will be done in another thread; the counter will be incremented regularly (as much as a few thousand times per second) but only read once every five seconds; precise accuracy isn't essential, only a rough idea of the size of the counter; and the counter is never cleared or decremented. Based upon these requirements, how would you choose to implement your counter: as a simple long, as a volatile long, or using an AtomicLong? Why? At the moment I have a volatile long, but was wondering whether another approach would be better. I am also incrementing my long by doing ++counter as opposed to counter++. Is this really any more efficient (as I have been led to believe elsewhere) because there is no assignment being done? Thanks in advance, Rich
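
    For the single-writer case described here, a volatile long is the cheapest correct option: counter++ is a read-modify-write, but with exactly one writer there is no lost-update race, and the volatile guarantees the reader thread sees a recent value. (On the last point: when the result of the expression is discarded, javac compiles ++counter and counter++ to identical bytecode, so neither is faster.) A small sketch contrasting the two:

        import java.util.concurrent.atomic.AtomicLong;

        public class Counters {
            // Option 1: volatile long. Correct ONLY because a single thread writes.
            private volatile long count;
            void increment() { count++; }      // one writer => no lost updates
            long read()      { return count; } // volatile read sees a recent value

            // Option 2: AtomicLong. Needed if a second writer ever appears,
            // at the cost of an atomic CAS-style instruction per increment.
            private final AtomicLong atomicCount = new AtomicLong();
            void incrementAtomic() { atomicCount.incrementAndGet(); }
            long readAtomic()      { return atomicCount.get(); }
        }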

  • Best solution for __autoload

    - by tpk
    As our PHP5 OO application grew (in both size and traffic), we decided to revisit the __autoload() strategy. We always name the file after the class definition it contains, so class Customer would be contained within Customer.php. We used to list the directories in which a file could potentially exist, until the right .php file was found. This is quite inefficient, because you're potentially going through a number of directories you don't need to, and doing so on every request (thus making loads of stat() calls). Solutions that come to my mind:

    - Use a naming convention that dictates the directory name (similar to PEAR). Disadvantages: doesn't scale too well, resulting in horrible class names.
    - Come up with some kind of pre-built array of the locations (Propel does this for its __autoload). Disadvantage: requires a rebuild before any deploy of new code.
    - Build the array "on the fly" and cache it. This seems to be the best solution, as it allows any class names and directory structure you want, and is fully flexible in that new files just get added to the list. The concerns are: where to store it, and what about deleted/moved files? For storage we chose APC, as it doesn't have the disk I/O overhead. As for file deletes, it doesn't matter, as you probably don't want to require them anywhere anyway. As to moves... that's unresolved (we ignore it, as historically it didn't happen very often for us).

    Any other solutions?

  • LINQ to SQL - Lightweight O/RM?

    - by CoffeeAddict
    I've heard from some that LINQ to SQL is good for lightweight apps. But then I see LINQ to SQL being used by Stack Overflow and a bunch of other .coms I know (from interviewing with them). OK, so is this true? For an e-commerce site that's bringing in millions, where you're typically only doing basic CRUD most of the time, with the exception of an occasional stored proc for something more complex, is LINQ to SQL complete enough and performance-wise good enough (or tweakable enough) to run happily on an e-commerce site? I've heard that you just need to tweak performance on the DB side when using LINQ to SQL. So there are really two questions here. 1) Meaning/scope/definition of a "lightweight" O/RM solution: what the heck does "lightweight" mean when people say LINQ to SQL is a "lightweight O/RM", and is that true? If it's so lightweight, then why do I see a bunch of huge .coms using it? Is it good enough to run major .coms (obviously it looks like it is), and what determines the context of "lightweight"? It's such a generic statement. 2) Performance: I'm working on my own .com and researching different O/RMs. I'm not really looking at the Entity Framework (yet); I just want to figure out the LINQ to SQL basics here and determine if it will be efficient enough for me. The problem, I think, is that you can't tweak or control the SQL it generates...

  • Sorting by some field and fetching whole tree from DB

    - by Niaxon
    Hello everyone, I am trying to build a file browser in tree form, and I have a problem sorting it. I use PHP and MySQL for this. I've created a mixed (nested set + adjacency list) table 'element' with the following fields: element_id, left_key, right_key, level, parent_id, element_name, element_type (enum: 'folder', 'file'), element_size. (Let's not discuss right now that it would be better to move the element information (name, type, size) into another table.) The function that scans a specified directory and fills the table works correctly. Noteworthy: I am adding elements to the tree in a specific order, folders first and only then files. After that I can easily fetch and display the whole table on the page using a simple query:

        SELECT * FROM element WHERE 1=1 ORDER BY left_key

    With the result of that query and another function I can generate the correct HTML code (<ul><li>... and so on). Now back to the question (finally, huh?). I am struggling to add sorting functionality. For example, I want to order my result by size. Here I need to keep in mind the whole hierarchy of the tree and the rule: folders first, files later. I believe I can do that by generating recursive queries in PHP:

        SELECT * FROM element WHERE parent_id = {$parentId} ORDER BY element_type, size

    (element_type so folders come first; size could also be name, for example.) After that, for each result which is a folder, I will send another query to get its contents. It's also possible to fetch the whole tree by left_key and then sort it in PHP as an array, but I guess that would be worse :) I wonder if there is a better and more efficient way to do this?
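
    One alternative to a query per folder: fetch the whole tree once (ORDER BY left_key, as in the first query), rebuild it in memory via parent_id, and sort each children list with "folders first, then the requested field". That is one round-trip instead of one per folder. A sketch of that pattern (in Java rather than PHP; field names follow the 'element' table):

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class ElementTree {
            static class Element {
                long elementId;
                Long parentId;           // null for the root
                String elementType;      // "folder" or "file"
                long elementSize;
                List<Element> children = new ArrayList<>();
            }

            // rows must be the full result of: SELECT ... ORDER BY left_key,
            // which guarantees every parent appears before its children.
            static Element build(List<Element> rows) {
                Map<Long, Element> byId = new HashMap<>();
                Element root = null;
                for (Element e : rows) {
                    byId.put(e.elementId, e);
                    if (e.parentId == null) root = e;
                    else byId.get(e.parentId).children.add(e);
                }
                return root;
            }

            // Folders before files, then by size (swap in a name comparator as needed).
            static void sortBySize(Element node) {
                node.children.sort(
                    Comparator.<Element>comparingInt(e -> "folder".equals(e.elementType) ? 0 : 1)
                              .thenComparingLong(e -> e.elementSize));
                for (Element child : node.children) sortBySize(child);
            }
        }

    With thousands of rows the whole tree fits comfortably in memory, so the in-application sort is cheap compared to issuing one query per folder.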

  • Rails: creating a model in the new action

    - by Joseph Silvashy
    I have an interesting situation (well, it's probably not that unique at all), but I'm not totally sure how to tackle it. I have a model, in this case a recipe, and the user navigates to the new path /recipes/new. However, I need to let the user upload images and make associations to that model in the new action, and the model doesn't have an ID yet. So I assume I need to rethink my controller, but I don't want to have redirects and whatnot. How can I accomplish this? Here is the basic controller, barebones obviously:

        ...
        def new
          # I should be creating the model first, so it has an ID
          @recipe = Recipe.new
        end

        def create
          @recipe = Recipe.new(params[:recipe])
          if @recipe.save
            redirect_to @recipe
          else
            render 'new'
          end
        end
        ...

    Update: perhaps I could have a column like state, which could have values like new/incomplete/complete or what-have-you. I'm mostly trying to figure out what would also be most efficient for the DB. It would be nice if I could still have a URL that says '/new' instead of the edit path with the ID, for usability's sake, but I'm not sure this can be simply accomplished in the new action of my controller. Thoughts?

  • Optimized 2D Tile Scrolling in OpenGL

    - by silicus
    Hello, I'm developing a 2D side-scrolling game and I need to optimize my tiling code to get a better frame rate. As of right now I'm using a texture atlas and 16x16 tiles at 480x320 screen resolution. The level scrolls in both directions and is significantly larger than one screen (thousands of pixels). I use glTranslate for the actual scrolling. So far I've tried: drawing only the on-screen tiles using GL_TRIANGLES, two per square tile (too much overhead); drawing the entire map as a display list (great on a small level, way too slow on a large one); and partitioning the map into display lists half the size of the screen, then culling display lists (still slows down for two-directional scrolling; the overdraw is not efficient). Any advice is appreciated, but in particular I'm wondering: I've seen vertex arrays/VBOs suggested for this because they're dynamic. What's the best way to take advantage of this? If I simply keep one screen of vertices plus a bit of overdraw, I'd have to recopy the array every few frames to account for the change in relative coordinates (shift everything over and add the new rows/columns). If I use more overdraw this doesn't seem like a big win; it's like the half-screen display list idea. Does glScissor give any gain if used on a bunch of small tiles like this, be it a display list or a vertex array/VBO? Would it be better just to build the level out of large textures and then use glScissor? Would losing the memory savings of tiling be an issue for mobile development if I do this (just curious, I'm currently on a PC)? This approach was mentioned here. Thanks :)

  • Is it possible to create multi-tiered WHERE statements in MySQL

    - by Brendan
    I'm currently developing a program that will generate reports based upon lead data. My issue is that I'm running three queries for something that I would like to only need one query for. For instance, I want to gather data for leads generated in the past day (submission_date > NOW() - INTERVAL 1 DAY), and I would like to find out how many total leads there were and how many of them were sold in that timeframe (sold = 1 / sold = 0). The issue is that this is currently done with two queries, one with WHERE sold = 1 and one with WHERE sold = 0. This is all well and good, but when I want to generate this data for the past day, week, month, year, and all time, I will have to run 10 queries to obtain it. I feel like there HAS to be a more efficient way of doing this. I know I can create a MySQL function for this, but I don't see how that would solve the problem. Thanks!!
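
    One query can likely replace all ten: in MySQL a boolean expression evaluates to 1 or 0, so SUM(condition) counts matching rows, and each timeframe/sold combination becomes one output column. A sketch of that query (held in a Java string here; the table name leads is an assumption, while sold and submission_date come from the question):

        public class LeadReport {
            static final String REPORT_SQL =
                "SELECT "
                + "  SUM(submission_date > NOW() - INTERVAL 1 DAY)  AS day_total, "
                + "  SUM(sold = 1 AND submission_date > NOW() - INTERVAL 1 DAY)  AS day_sold, "
                + "  SUM(submission_date > NOW() - INTERVAL 1 WEEK) AS week_total, "
                + "  SUM(sold = 1 AND submission_date > NOW() - INTERVAL 1 WEEK) AS week_sold, "
                // month and year columns follow the same pattern
                // (INTERVAL 1 MONTH / INTERVAL 1 YEAR)
                + "  COUNT(*)      AS all_total, "
                + "  SUM(sold = 1) AS all_sold "
                + "FROM leads";
        }

    The single table scan computes every bucket at once, which is the "multi-tiered WHERE" the title asks about: the conditions move from the WHERE clause into conditional aggregates.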

  • What is the best way to get a reference to a Spring bean in the backend layers?

    - by java_pill
    I have two Spring config files, and I'm specifying them in my web.xml as below. web.xml snippet:

        <context-param>
          <param-name>contextConfigLocation</param-name>
          <param-value>WEB-INF/classes/domain-context.xml WEB-INF/classes/client-ws.xml</param-value>
        </context-param>
        <listener>
          <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
        </listener>

    From my domain object I have to invoke a Web Service client, and in order to get a reference to it I do this:

        ApplicationContext context = new ClassPathXmlApplicationContext("client-ws.xml"); // b'cos I don't want to use WebApplicationContextUtils
        ProductServiceClient client = (ProductServiceClient) context.getBean("productClient");
        ...
        client.find(prodID); // calls a Web Service

    However, I have concerns that looking up the client-ws.xml file and getting a reference to the ProductServiceClient bean this way is not efficient. I thought of getting it using WebApplicationContextUtils, but I don't want my domain objects to have a dependency on the ServletContext (a web/control-layer object), and WebApplicationContextUtils depends on ServletContext. What is the best way to get a reference to a Spring bean in the backend layers? Thanks!
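
    A note on the snippet above: every new ClassPathXmlApplicationContext builds (and never closes) a whole second container, even though ContextLoaderListener already loaded both files once. The usual alternative is to have Spring hand the client to the domain object up front, so the backend never touches the context at all. A minimal constructor-injection sketch (bean and class names beyond productClient are assumptions):

        // In client-ws.xml, alongside the existing productClient bean:
        // <bean id="productService" class="com.example.ProductService">
        //     <constructor-arg ref="productClient"/>
        // </bean>
        public class ProductService {
            private final ProductServiceClient client; // wired once at container startup

            public ProductService(ProductServiceClient client) {
                this.client = client;
            }

            public void find(String prodID) {
                client.find(prodID); // calls the Web Service; no context lookup per call
            }
        }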

  • Better way to compare neighboring cells in matrix

    - by HyperCube
    Suppose I have a matrix of size 100x100 and I would like to compare each pixel to its direct neighbors (left, upper, right, lower) and then do some operations on the current matrix or a new one of the same size. Sample code in Python/NumPy could look like the following (the comparison against 0.5 has no meaning; I just want to give a working example of some operation that compares neighbors):

        import numpy as np

        my_matrix = np.random.rand(100, 100)
        new_matrix = np.zeros((100, 100))
        my_range = np.arange(1, 99)
        for i in my_range:
            for j in my_range:
                if my_matrix[i, j+1] > 0.5:
                    new_matrix[i, j+1] = 1
                if my_matrix[i, j-1] > 0.5:
                    new_matrix[i, j-1] = 1
                if my_matrix[i+1, j] > 0.5:
                    new_matrix[i+1, j] = 1
                if my_matrix[i-1, j] > 0.5:
                    new_matrix[i-1, j] = 1
                if my_matrix[i+1, j+1] > 0.5:
                    new_matrix[i+1, j+1] = 1
                if my_matrix[i+1, j-1] > 0.5:
                    new_matrix[i+1, j-1] = 1
                if my_matrix[i-1, j+1] > 0.5:
                    new_matrix[i-1, j+1] = 1

    This can get really nasty if I want to step into one neighboring cell and start from it to do a similar task... Do you have suggestions for how this can be done in a more efficient manner? Is this even possible?

  • Passing C++ object to C++ code through Python?

    - by cornail
    Hi all, I have written some physics simulation code in C++, and parsing the input text files is a bottleneck. As one of the input parameters, the user has to specify a math function which will be evaluated many times at run time. The C++ code has some pre-defined function classes for this (they are actually quite complex on the math side) and some limited parsing capability, but I am not satisfied with this construction at all. What I need is for both the algorithm and the function evaluation to remain speedy, so it is advantageous to keep them both as compiled code (and preferably the math functions as C++ function objects). However, I thought of gluing the whole simulation together with Python: the user could specify the input parameters in a Python script, while also implementing storage, visualization of the results (matplotlib), and the GUI in Python. I know that exposing C++ classes can usually be done, e.g. with SWIG, but I still have a question concerning the parsing of the user-defined math function in Python: is it possible to somehow construct a C++ function object in Python and pass it to the C++ algorithm? E.g. when I call

        f = WrappedCPPGaussianFunctionClass(sigma=0.5)
        WrappedCPPAlgorithm(f)

    in Python, it would return a pointer to a C++ object which would then be passed to a C++ routine requiring such a pointer, or something similar... (don't ask me about memory management in this case, though :S). The point is that no callback should be made into Python code from inside the algorithm. Later I would like to extend this example to also do some simple expression parsing on the Python side, such as sums or products of functions, and return some compound, parse-tree-like C++ object, but let's stay with the basics for now. Sorry for the long post, and thanks for any suggestions in advance.

  • How to get notified about history changes made via history.pushState?

    - by Felix Kling
    Now that HTML5 introduces history.pushState to change the browser's history, websites are starting to use it in combination with Ajax instead of changing the fragment identifier of the URL. Sadly, that means those calls can no longer be detected via onhashchange. My question is: is there a reliable way (hack? ;)) to detect when a website uses history.pushState? The specification does not say anything about events that are raised (at least I couldn't find anything). I tried to create a facade and replaced window.history with my own JavaScript object, but it didn't have any effect at all. Further explanation: I'm developing a Firefox add-on that needs to detect these changes and act accordingly. I know there was a similar question a few days ago that asked whether listening to some DOM events would be efficient, but I would rather not rely on that, because these events can be generated for a lot of different reasons. Update: here is a jsfiddle (use Firefox 4 or Chrome 8) that shows that onpopstate is not triggered when pushState is called (or am I doing something wrong? Feel free to improve it!). Update 2: another (side) problem is that window.location is not updated when using pushState (but I read about this already here on SO, I think).

  • Logic: Best way to sample & count bytes of a 100MB+ file

    - by Jami
    Let's say I have a 170MB file (roughly 180 million bytes). What I need to do is create a table that lists: all 4096-byte combinations (windows) found in the file [column 'bytes'], and the number of times each combination appeared [column 'occurrences']. Assume two things: I can save data very fast, but I can update my saved data very slowly. How should I sample the file and save the needed information? Here are some approaches that are (extremely) slow:

    - Go through each 4096-byte combination in the file and save each one, but search the table first for existing combinations and update their values. This is unbelievably slow.
    - Go through each 4096-byte combination in the file, saving until there are 1 million rows of data in a temporary table. Go through that table and fix the entries (combine repeating byte combinations), then copy them to the big table. Repeat with the next 1 million rows. This is faster by a bit, but still unbelievably slow.

    This is kind of like taking the statistics of the file. NOTE: I know that sampling the file can generate tons of data (around 22GB from experience), and I know that any solution posted would take a bit of time to finish. I need the most efficient saving process.
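
    One way to avoid materializing every 4096-byte window as a table key: compute a 64-bit Rabin-Karp rolling hash per window (O(1) per step after the first window) and count the hashes in memory, writing the final tallies out in one pass at the end. Distinct windows can collide on a 64-bit hash, so this trades exactness for speed; a sketch under that assumption:

        import java.util.HashMap;
        import java.util.Map;

        public class WindowCounter {
            private static final int K = 4096;        // window size in bytes
            private static final long B = 1_000_003L; // hash base (arbitrary odd constant)

            // Returns occurrence counts keyed by the rolling hash of each K-byte window.
            public static Map<Long, Long> countWindows(byte[] data) {
                Map<Long, Long> counts = new HashMap<>();
                if (data.length < K) return counts;

                long pow = 1;                          // B^(K-1), wrapping mod 2^64
                for (int i = 0; i < K - 1; i++) pow *= B;

                long h = 0;                            // hash of the first window
                for (int i = 0; i < K; i++) h = h * B + (data[i] & 0xFF);
                counts.merge(h, 1L, Long::sum);

                for (int i = K; i < data.length; i++) {
                    h -= (data[i - K] & 0xFF) * pow;   // drop the outgoing byte
                    h = h * B + (data[i] & 0xFF);      // bring in the incoming byte
                    counts.merge(h, 1L, Long::sum);
                }
                return counts;                         // write these rows out once, at the end
            }
        }

    For ~180 million windows a boxed HashMap will not fit comfortably in memory either; a primitive long-to-long map, or sorting the hash stream and run-length counting it, is the production-sized version of the same idea.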

  • How can multiple variables be passed to a function cleanly in C?

    - by aquanar
    I am working on an embedded system that has different output capabilities (digital out, serial, analog, etc.). I am trying to figure out a clean way to pass the many variables that control those functions. I don't need to pass ALL of them too often, but I was hoping to have a function that would read the input data (in this case from a TCP network) and then parse it (e.g., the 3rd byte contains the states of 8 of the digital outputs, according to which bits in that byte are high or low), putting the results into variables I can then use elsewhere in the program. I wanted that function to be separate from main(), but doing so would require passing pointers to some 20 or so variables that it would be writing to. I know I could make the variables global, but I am trying to make debugging easier by making it obvious when a function is allowed to edit a variable, by passing it to the function. My best idea was a struct, passing a pointer to it, but I wasn't sure whether there is a more efficient way, especially since only one function really needs to access all of them at once, while most others only need parts of the information stored in this bunch of state variables. So anyway: is there a clean way to pass many variables that need to be edited between functions at once?

  • Parallelizing L2S Entity Retrieval

    - by MarkB
    Assuming a typical domain entity approach with SQL Server and a dbml/L2S DAL with a logic layer on top of that: In situations where lazy loading is not an option, I have settled on a convention where getting a list of entities does not also get each item's child entities (no loading), but getting a single entity does (eager loading). Since getting a single entity also gets children, it causes a cascading effect in which each child then gets its children too. This sounds bad, but as long as the model is not too deep, I usually don't see performance problems that outweigh the benefits of the ease of use. So if I want to get a list in which each of the items is fully hydrated with children, I combine the GetList and GetItem methods. So I'll get a list and then loop through it getting each item with the full cascade. Even this is generally acceptable in many of the projects I've worked on - but I have recently encountered situations with larger models and/or more data in which it needs to be more efficient. I've found that partitioning the loop and executing it on multiple threads yields excellent results. In my first experiment with a list of 50 items from one particular project, I did 5 threads of 10 items each and got a 3X improvement in time. Of course, the mileage will vary depending on the project but all else being equal this is clearly a big opportunity. However, before I go further, I was wondering what others have done that have already been through this. What are some good approaches to parallelizing this type of thing?
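
    The partition-and-hydrate pattern itself is language-agnostic: fan the IDs out to a fixed pool of workers, hydrate each item with the eager single-entity fetch, and join the results in order. A sketch of that pattern (in Java; the loader callback stands in for the GetItem call, and in the L2S case each worker must create its own DataContext, since DataContext is not thread-safe):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class ParallelHydrator<T> {
            public interface ItemLoader<T> { T load(long id); } // wraps the eager GetItem

            // Hydrates all ids on a fixed-size pool (5 workers gave the 3X result above).
            public List<T> hydrate(List<Long> ids, ItemLoader<T> loader, int threads)
                    throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(threads);
                try {
                    List<Future<T>> futures = new ArrayList<>(ids.size());
                    for (long id : ids) {
                        Callable<T> task = () -> loader.load(id);
                        futures.add(pool.submit(task));
                    }
                    List<T> results = new ArrayList<>(ids.size());
                    for (Future<T> f : futures) {
                        results.add(f.get()); // blocks; preserves input order
                    }
                    return results;
                } finally {
                    pool.shutdown();
                }
            }
        }

    The pool size is the main tuning knob: past the database's comfortable concurrency, extra workers just queue on the same connections.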

  • Interview Q: sorting an almost sorted array (elements misplaced by no more than k)

    - by polygenelubricants
    I was asked this interview question recently: You're given an array that is almost sorted, in that each of the N elements may be misplaced by no more than k positions from the correct sorted order. Find a space-and-time efficient algorithm to sort the array. I have an O(N log k) solution as follows. Let's denote arr[0..n) to mean the elements of the array from index 0 (inclusive) to N (exclusive).

    - Sort arr[0..2k). Now we know that arr[0..k) are in their final sorted positions... but arr[k..2k) may still be misplaced by k!
    - Sort arr[k..3k). Now we know that arr[k..2k) are in their final sorted positions... but arr[2k..3k) may still be misplaced by k.
    - Sort arr[2k..4k). ...
    - Continue until you sort arr[ik..N); then you're done! (This final step may be cheaper than the other steps when you have fewer than 2k elements left.)

    In each step, you sort at most 2k elements in O(k log k), putting at least k elements in their final sorted positions at the end of each step. There are O(N/k) steps, so the overall complexity is O(N log k). My questions are: Is O(N log k) optimal? Can this be improved upon? Can you do this without (partially) re-sorting the same elements?
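
    For comparison, a min-heap of size k+1 reaches the same O(N log k) bound in a single pass and answers the third question: no element is ever re-sorted, since each one enters and leaves the heap exactly once. The invariant is that every element sits at most k slots from its final position, so the element belonging at the next output position is always among the next k+1 unconsumed elements, and the heap's minimum is safe to emit. A sketch:

        import java.util.PriorityQueue;

        public class NearSort {
            // Sorts in place; each element is at most k positions from its final spot.
            public static void sortAlmostSorted(int[] arr, int k) {
                PriorityQueue<Integer> window = new PriorityQueue<>(k + 1);
                int write = 0;
                for (int value : arr) {
                    window.add(value);
                    if (window.size() > k) {
                        arr[write++] = window.poll(); // min of k+1 candidates is final
                    }
                }
                while (!window.isEmpty()) {
                    arr[write++] = window.poll();     // drain the tail
                }
            }
        }

    On the first question: O(N log k) is optimal for comparison-based sorting, since a k-misplaced array can still realize on the order of (k!)^(N/k) permutations, whose logarithm is Ω(N log k) comparisons.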

  • What's a good way to test a plug-in on multiple Windows and Outlook versions?

    - by Andrei
    Hello, we're building a plug-in for Outlook that should work on multiple Windows versions (XP, Vista, 7) and also with different Outlook versions (2003, 2007, 2010). The testing problem I am facing right now is that I can't figure out a good/convenient/thorough way to test the application on multiple Windows and Outlook versions. At the moment, I have a VirtualBox host which runs many virtual machines with different Windows and Outlook versions: one machine with Windows 7 testing Outlook 2010, another with Windows 7 testing Outlook 2007, one with Windows Vista and Outlook 2010, and so on, covering some of the possible combinations. It kind of gets the job done, although it is cumbersome and takes a long time to test. Some of the testing included in the application is unit testing, but this is also rather tied to the machine I test it on (Windows 7 with Outlook 2010). For example, I was using ManagementObject recently, which worked fine on my system (and thus passed the unit test for that method); however, using that object threw an exception on another person's system, which crashed the application. I work in Visual Studio 2010 Ultimate. The questions: Is there a way to make the testing process more streamlined and more efficient? Any other testing methods you recommend? How would you deal with this problem? Thanks! Looking forward to your replies.

  • Is programming overrated?

    - by aengine
    [Subjective; intended to be a community wiki.] I am sorry for such an offensive question, but here are my arguments. Most of the progress in "computing" has come from non-programming sources; i.e., people invented faster microprocessors, better routers, and novel memory devices. I don't think that, on average, people are writing more efficient programs than those written 10 years ago, and the newer, popular languages are in fact slower than C (though speed is one of the lesser criteria). Most of the progress came from novel paradigms: the web, the Internet, cloud computing, and social networking are novel paradigms, and they did not involve progress in programming as such. Heck, even Facebook was written in PHP and not some extreme language. Though it did face scalability issues (same with Twitter), I believe money and better programmers (who came in much later) took care of that. Thus ideating capability trumped programming capability. Even things like MapReduce, column-oriented databases, and probabilistic algorithms (e.g., Bloom filters) came from hardcore algorithms research rather than some programming convention. Thus my final point: why is programming skill so overstressed? To point to a recent example, reportedly only 10% of programmers can write a binary search correctly without debugging. Isn't that a bit hypocritical, considering that real success lies in coming up with a better algorithm or a novel feature rather than getting it right the first time?
