Search Results

Search found 13249 results on 530 pages for 'performance tuning'.


  • Moving x,y position of all array objects every frame in actionscript 3?

    - by Dylan Gallardo
    I have my code set up so that a movieclip in my library, tied to a class called "block", is duplicated multiple times and added into an array like this:

      function makeblock(e:Event){
          newblock = new block();
          newblock.x = 10;
          newblock.y = 10;
          addChild(newblock);
          myarray[counter] = newblock; // adds a new block object into the array
          counter += 1;
      }

    Then I have a loop with a currently primitive way of handling my problem:

      stage.addEventListener(Event.ENTER_FRAME, gameloop);
      function gameloop(evt:Event):void {
          if (moveright == true){
              myarray[0].x += 5;
              myarray[1].x += 5;
              myarray[2].x += 5;
              // ...and so on...

    My question is: how can I change the x,y values every frame for new objects added to the array, along with the previous ones that were already there, with something more elegant than writing it out myself as array[0].x += 5, array[1], array[2], array[3], etc.? Ideally this would scale to 500 or more objects in one array, so obviously I don't want to write them out individually, haha. I also need it to perform consistently, so would using a for loop to run through the whole array and move each x += 5 work, or would that be too slow? Anyway, if anyone has any ideas that'd be great!
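
    A minimal sketch of the usual answer, not from the original question: loop over the array inside the per-frame handler. The asker's code is ActionScript 3; the sketch below is plain JavaScript using requestAnimationFrame, but the loop body is the same idea inside an ENTER_FRAME handler, and the names (blocks, moveRight, makeBlock) are illustrative:

      // Rough JavaScript sketch of the per-frame loop pattern.
      var blocks = [];
      var moveRight = true;

      function makeBlock() {
        var block = { x: 10, y: 10 };   // stand-in for the asker's movieclip instance
        blocks.push(block);             // push() avoids managing a separate counter
        return block;
      }

      function gameLoop() {
        if (moveRight) {
          // One loop handles however many blocks exist; iterating a few hundred
          // elements per frame is normally cheap compared to rendering them.
          for (var i = 0; i < blocks.length; i++) {
            blocks[i].x += 5;
          }
        }
        requestAnimationFrame(gameLoop); // in AS3 this would be the ENTER_FRAME event instead
      }
      requestAnimationFrame(gameLoop);

    The loop cost grows linearly with the number of blocks, so for ~500 objects the bottleneck is almost always drawing them, not moving them.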

    Read the article

  • Iterate over defined elements of a JS array

    - by sibidiba
    I'm using a JS array to map IDs to actual elements, i.e. as a key-value store. I would like to iterate over all elements. I tried several methods, but each has its caveats:

      for (var item in map) {...}

    iterates over all properties of the array, so it will also include functions and extensions to Array.prototype. For example, someone dropping in the Prototype library in the future will break existing code.

      var length = map.length;
      for (var i = 0; i < length; i++) {
          var item = map[i];
          ...
      }

    does work, but just like

      $.each(map, function(index, item) {...});

    it iterates over the whole range of indexes 0..max(id), which has horrible drawbacks:

      var x = [];
      x[1] = 1;
      x[10] = 10;
      $.each(x, function(i,v) { console.log(i + ": " + v); });

      0: undefined
      1: 1
      2: undefined
      3: undefined
      4: undefined
      5: undefined
      6: undefined
      7: undefined
      8: undefined
      9: undefined
      10: 10

    Of course my IDs won't resemble a continuous sequence either. Moreover, there can be huge gaps between them, so skipping undefined in the latter case is unacceptable for performance reasons. How is it possible to safely iterate over only the defined elements of an array, in a way that works in all browsers (including IE)?
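
    A sketch of one common workaround, not part of the original question: use a plain object as the ID-to-element store and guard the for...in loop with hasOwnProperty, so both prototype extensions and the gaps between sparse numeric keys are skipped. The names here are illustrative:

      var map = {};        // plain object used as an ID -> element store
      map[1] = 1;
      map[10] = 10;

      for (var id in map) {
        // hasOwnProperty filters out anything inherited from a patched prototype,
        // and for...in only visits keys that were actually assigned, so the
        // gaps between 1 and 10 are never touched.
        if (Object.prototype.hasOwnProperty.call(map, id)) {
          console.log(id + ": " + map[id]);
        }
      }

    The cost of this loop is proportional to the number of defined entries, not to the largest ID, which is the property the question is after.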

    Read the article

  • Database solution for 200million writes/day, monthly summarization queries

    - by sb
    Hello. I'm looking for help deciding on which database system to use. (I've been googling and reading for the past few hours; it now seems worthwhile to ask for help from someone with firsthand knowledge.) I need to log around 200 million rows (or more) per 8-hour workday to a database, then perform weekly/monthly/yearly summary queries on that data. The summary queries would be for collecting data for things like billing statements, e.g. "How many transactions of type A did each user run this month?" (it could be more complex, but that's the general idea).

    I can spread the database amongst several machines as necessary, but I don't think I can take old data offline. I'll definitely need to be able to query a month's worth of data, maybe a year. These queries would be for my own use, and wouldn't need to be generated in real time for an end user (they could run overnight if needed). Does anyone have any suggestions as to which databases would be a good fit?

    P.S. Cassandra looks like it would have no problem handling the writes, but what about the huge monthly table scans? Is anyone familiar with Cassandra/Hadoop MapReduce performance?

    Read the article

  • Designing for varying mobile device resolutions, e.g. iPhone 4 & iPhone 3G

    - by Josh
    As the design community moves to design applications & interfaces for mobile devices, a new problem has arisen: varying screen DPIs. Here's the situation:

    Touch:
      * iPhone 3G/S ~ 160 dpi
      * iPhone 4 ~ 300 dpi
      * iPad ~ 126 dpi
      * Android device @ 480p ~ 200 dpi

    Point / click:
      * Laptop @ 720p ~ 96 dpi
      * Desktop @ 720p ~ 72 dpi

    There is certainly a clear distinction between desktop and mobile, so having two separate front-ends to the same app is logical, especially when considering one is "touch"-based and the other is "point/click"-based. The challenge lies in designing static graphical elements that will scale between, say, 160 dpi and 300+ dpi, and get consistent and clean design across zoom levels. Any thoughts on how to approach this? Here are some scenarios, but each has drawbacks as well:

      * Design a single set of assets (high resolution), then adjust zoom levels based on detected resolution / device
        o Drawbacks: Performance caused by code layering, varying device support of Zoom
      * Develop & optimize multiple variations of image and CSS assets, then hide / show each based on device
        o Drawbacks: Extra work in design & QA.

    Anyone have thoughts or experience on how to deal with this? We should certainly be looking at methods that use / support HTML5 and CSS3.
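
    A hedged sketch of the first scenario (one high-resolution asset set, scaled per device), assuming the browser exposes window.devicePixelRatio; the breakpoints and file names are made up for illustration:

      // Pick an asset variant based on the device's pixel density.
      var ratio = window.devicePixelRatio || 1;

      var suffix;
      if (ratio >= 2) {
        suffix = "@2x";        // iPhone 4-class displays (~300 dpi)
      } else if (ratio >= 1.5) {
        suffix = "@1.5x";      // many 480p Android devices
      } else {
        suffix = "";           // standard desktop / iPhone 3G-class displays
      }

      var logo = new Image();
      logo.src = "assets/logo" + suffix + ".png";   // hypothetical asset path
      // Fixing the rendered size in CSS keeps layout identical across devices;
      // the denser variant simply has more pixels behind the same box.
      logo.style.width = "200px";
      logo.style.height = "80px";
      document.body.appendChild(logo);

    The same idea works for CSS backgrounds via a class added to the body once the ratio is known, which keeps the asset-selection logic in one place.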

    Read the article

  • Architecture of finding movable geotagged objects

    - by itsme
    I currently have a Postgres DB filled with approx. 300,000 data sets of moving vehicles all over the world. My very frequently repeated query is: give me all vehicles within a 5/10/20-mile radius. Currently I spend around 600 to 1200 ms in the DB to prepare the set of located vehicle objects. I am looking to vastly improve this time, ideally by one or two orders of magnitude if possible. I am working in a Ruby on Rails 3.0beta environment, if that is relevant.

    Any ideas how to architect the whole system to accelerate this query? Any NoSQL database able to deliver this kind of geolocation performance? I know of MongoDB working on an extension to facilitate this scenario but haven't tried it yet. Any intelligent use of Redis to achieve this? One problem with SQL DBs here seems to be that I can't really use indexes, because my vehicles are mostly moving around, meaning I would have to constantly recreate DB indexes, which by itself is probably more expensive than just doing the search without an index. Looking forward to your thoughts, thanks!

    Read the article

  • My HTML5 web app crashes and I have no clue how to debug

    - by Shouvik
    Hi all, I have written a word game using the HTML5 canvas tag and a little bit of audio. I developed the application in the Chrome web browser on a Linux system. Recently, during the testing phase, it was tried on Safari 5.0.3 on a Mac and the webpage froze. Not just the canvas element, but every interactive element on the page froze. I have at times experienced this problem in Google Chrome while developing, but since the console did not throw any error before it happened, I did not give it much credence. Now, as per requirements, I am supposed to support both Chrome and Safari, but this dismal performance on Safari has left me shocked, and I cannot see what error could be thrown that would lead to such a situation. Worse yet, the CPU usage when using this application peaks at 70-80 percent on my 2-year-old MacBook running Ubuntu. I can only pity the person who uses a Mac to run this app, which is undoubtedly a heavier OS.

    Could someone help me out with a place I can start to find out what exactly is causing this issue? I have run profiles on this web app in Google Chrome's console and noticed that the heap snapshot value increases steadily while playing the game, specifically the (root) value, which jumps up by 900 counts. Any help would be very much appreciated! Thanks

    EDIT: I don't know if this helps, but I have noticed that even on refreshing the page after the app becomes unresponsive, the page reloads and I am still not able to interact with the page elements, but the tab scroll bar continues to work and I can see my application window completely. So to summarize, the tab stops accepting any sort of user interaction inside the page.

    EDIT 2: Nope, it still doesn't work... The app crashes on double click on the canvas element. The console is not throwing any errors either! =/ I have noticed this problem is isolated to Safari!
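
    A small, purely illustrative sketch of one way to start narrowing this down: wrap the game loop so long frames and thrown errors get logged, which at least records what the app was doing right before the page stops responding. Here updateGame stands in for the app's real per-frame function and is not from the original code:

      var lastReport = Date.now();

      function instrumentedLoop() {
        var start = Date.now();
        try {
          updateGame();                      // the app's real drawing/update work
        } catch (err) {
          // Safari may freeze without ever printing the error, so capture it ourselves.
          console.log("frame threw: " + err.message);
        }
        var elapsed = Date.now() - start;
        if (elapsed > 50) {
          console.log("slow frame: " + elapsed + "ms");
        }
        // Periodically note that the loop is still alive; if these messages stop,
        // the last "slow frame" entry points at the code that hung.
        if (Date.now() - lastReport > 5000) {
          console.log("loop alive at " + new Date().toISOString());
          lastReport = Date.now();
        }
        requestAnimationFrame(instrumentedLoop);
      }
      requestAnimationFrame(instrumentedLoop);

    Combined with the observation that the heap grows steadily, comparing two heap snapshots taken a minute apart (diff view in the profiler) is usually the quickest way to see which objects are accumulating.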

    Read the article

  • Setting pixel values in Nvidia NPP ImageCPU objects?

    - by solvingPuzzles
    In the Nvidia Performance Primitives (NPP) image processing examples in the CUDA SDK distribution, images are typically stored on the CPU as ImageCPU objects, and images are stored on the GPU as ImageNPP objects. boxFilterNPP.cpp is an example from the CUDA SDK that uses these ImageCPU and ImageNPP objects. When using a filter (convolution) function like nppiFilter, it makes sense to define the filter as an ImageCPU object. However, I see no clear way of setting the values of an ImageCPU object.

      npp::ImageCPU_32f_C1 hostKernel(3,3); // allocate space for a 3x3 convolution kernel
      // want to set hostKernel to [-1 0 1; -1 0 1; -1 0 1]
      hostKernel[0][0] = -1;   // this doesn't compile
      hostKernel(0,0) = -1;    // this doesn't compile
      hostKernel.at(0,0) = -1; // this doesn't compile

    How can I manually put values into an ImageCPU object? Notes: I didn't actually use nppiFilter in the code snippet; I'm just mentioning nppiFilter as a motivating example for writing values into an ImageCPU object. The boxFilterNPP.cpp example doesn't involve writing directly to an ImageCPU object, because nppiFilterBox is a special case of nppiFilter that uses a built-in box (averaging) smoothing filter, probably something like [1 1 1; 1 1 1; 1 1 1].

    Read the article

  • Java: multi-threaded maps: how do the implementations compare?

    - by user346629
    I'm looking for a good hash map implementation. Specifically, one that's good for creating a large number of maps, most of them small, so memory is an issue. It should be thread-safe (though losing the odd put might be an OK compromise in return for better performance), and fast for both get and put. And I'd also like the moon on a stick, please, with a side order of justice. The options I know of are:

      * HashMap: disastrously un-thread-safe.
      * ConcurrentHashMap: my first choice, but this has a hefty memory footprint, about 2k per instance.
      * Collections.synchronizedMap(HashMap): working OK for me, but I'm sure there must be faster alternatives.
      * Trove or Colt: I think neither of these is thread-safe, but perhaps the code could be adapted to be thread-safe.

    Any others? Any advice on what beats what when? Any really good new hash map algorithms that Java could use an implementation of? Thanks in advance for your input!

    Read the article

  • What are good ways to implement search and search results using AJAX?

    - by Amr ElGarhy
    I have some text boxes on a page, and on the same page there will be a table ('grid'-like) for holding the search results. When the user starts editing any of the textboxes above, the search must start by sending all the textbox values to the server via AJAX, and getting back the results to fill the grid below. Notes: this grid should support paging and sorting by clicking on headers, and it will contain some controls beside the results, such as checkboxes for boolean values and links for opening details in another page. I know many ways to do this, some of which are:

      1. An UpdatePanel around all of these controls, and that's it (the fast, dirty solution).
      2. Send the search criteria in an AJAX request (using the jQuery post function, for example), get back the JSON result, and draw the grid using a template (clean, but will take time to finish and will be harder to edit later).
      3. ...

    My question is: what do you think would be the best choice to implement this scenario? I face this scenario a lot, and want to know which implementation will be better regarding performance, optimization, and time to finish. I just want to know your thoughts about this issue.
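
    A rough sketch of option 2 (the jQuery $.post route), not from the original question; the /search endpoint, field names, and row layout are all made up for illustration:

      // Send all search fields to the server and rebuild the results table from JSON.
      function runSearch() {
        var criteria = {
          name: $("#searchName").val(),
          city: $("#searchCity").val()
        };
        $.post("/search", criteria, function (results) {
          var $body = $("#resultsGrid tbody").empty();
          $.each(results, function (i, row) {
            $body.append(
              "<tr><td>" + row.name + "</td>" +
              "<td><input type='checkbox' " + (row.active ? "checked" : "") + "></td>" +
              "<td><a href='/details/" + row.id + "'>details</a></td></tr>"
            );
          });
        }, "json");
      }

      // Re-run the search whenever any criteria box changes; a short debounce
      // would normally be added so each keystroke doesn't fire a request.
      $("#searchName, #searchCity").keyup(runSearch);

    Paging and sorting then become extra fields in the criteria object rather than separate postbacks, which is the main maintainability win over the UpdatePanel approach.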

    Read the article

  • Common Properties: Consolidating Loan, Purchase, Inventory and Sale tables into one Transaction table

    - by Frank Computer
    Pawnshop application: I have separate tables for Loan, Purchase, Inventory & Sale transactions. Each table's rows are joined to their respective customer rows by:

      customer.pk [serial] = loan.fk [integer]
                           = purchase.fk [integer]
                           = inventory.fk [integer]
                           = sale.fk [integer]

    Since there are so many common properties within the four tables, I consolidated them into one table called "transaction", with a column:

      transaction.trx_type char(1)   {L=Loan, P=Purchase, I=Inventory, S=Sale}

    Scenario: a customer initially pawns merchandise, makes a couple of interest payments, then decides he wants to sell the merchandise to the pawnshop, which then places the merchandise in inventory and eventually sells it to another customer. I designed a generic transaction table where, for example, transaction.main_amount DECIMAL(7,2) holds the pawn amount in a loan transaction, the purchase price in a purchase, and the sale price in inventory and sale transactions. This is clearly a denormalized design, but it has made programming a lot easier and improved performance. Any type of transaction can now be performed from within one screen, without the need to switch to different tables.

    Read the article

  • Elegant methods for caching search results from a RESTful service?

    - by Paul
    I have a RESTful web service which I access from the browser using JavaScript. As an example, say that this web service returns a list of all the Message resources assigned to me when I send a GET request to /messages/me. For performance reasons, I'd like to cache this response so that I don't have to re-fetch it every time I visit my Manage Messages web page. The cached response would expire after 5 minutes.

    If a Message resource is created "behind my back", say by the system admin, it's possible that I won't know about it for up to 5 minutes, until the cached search response expires and is re-fetched. This is acceptable, because it creates no confusion for me. However, if I create a new Message resource which I know should be part of the search response, it becomes confusing when it doesn't appear on my Manage Messages page immediately. In general, when I knowingly create/delete/update a resource that invalidates a cached search response, I need that cached response to be expired/flushed immediately.

    The core problem which I can't figure out: I see no simple way of connecting the task of creating/deleting/updating a resource with the task of expiring the appropriate cached responses. In this example it seems simple, I could manually expire the cached search response whenever I create/delete/update a(ny) Message resource. But in a more complex system, keeping track of which search responses to expire under what circumstances will get clumsy quickly. If someone could suggest a simple solution or some clarifying thoughts, I'd appreciate it.
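
    A minimal sketch of one way to tie the two tasks together, not from the original post: route writes through a small client-side data layer that also owns the cache, so the code that creates a Message is the same code that flushes the cached /messages/me response. The URLs and helper names are assumptions:

      // Tiny client-side cache with a TTL plus explicit invalidation.
      var cache = {};
      var TTL_MS = 5 * 60 * 1000;

      function cachedGet(url, onData) {
        var entry = cache[url];
        if (entry && Date.now() - entry.time < TTL_MS) {
          onData(entry.data);               // still fresh: serve from cache
          return;
        }
        $.getJSON(url, function (data) {    // otherwise hit the service and refill
          cache[url] = { data: data, time: Date.now() };
          onData(data);
        });
      }

      function invalidate(url) {
        delete cache[url];                  // next cachedGet() refetches
      }

      // All writes go through one place, which knows which cached searches they affect.
      function createMessage(message, onDone) {
        $.post("/messages", message, function (created) {
          invalidate("/messages/me");       // my own search is stale by definition now
          onDone(created);
        }, "json");
      }

    In a larger system the invalidate() calls can be driven by a simple mapping from resource type to the cached search URLs it can affect, which keeps the bookkeeping in one table instead of scattered across the UI code.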

    Read the article

  • (PHP) How do I get XMLReader to behave like SimpleXML with XPath? (Large directory-like XML file)

    - by AESM
    So, I've got this huge XML file (10MB and up) that I want to parse, and I figured that instead of using SimpleXML, I'd better use XMLReader, since the performance should be way better, right? But XMLReader doesn't work with XPath... The XML is like this:

      <root name="bookmarks">
          <dir name="A directory">
              <link name="blablabla"/>
              <dir name="Sub directory">
                  ...
              </dir>
          </dir>
          <link name="another link"/>
      </root>

    With SimpleXML combined with XPath, I would simply do:

      $xml = simplexml_load_file('/xmlFile.xml');
      $xml->xpath('/root[@name]/dir[@name="A directory"]/dir[@name="Sub directory"]');

    which is so simple. But how do I do this using just XMLReader?

    P.S. I'm converting the resulting node to DOM/SimpleXML to get its inner contents, like this:

      $node = $xr->expand();
      $dom = new DomDocument();
      $n = $dom->importNode($node, true);
      $dom->appendChild($n);
      $selectedRoot = simplexml_import_dom($dom);

    Is this OK? ... Thanks!

    Read the article

  • MySQL Multiple Table Join

    - by hitman001
    I have 3 tables that I'm trying to join and get distinct results from.

      CREATE TABLE `car` (
        `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
        `name` varchar(255) NOT NULL DEFAULT '',
        PRIMARY KEY (`id`)
      ) ENGINE=InnoDB

      mysql> select * from car;
      +----+-------+
      | id | name  |
      +----+-------+
      |  1 | acura |
      +----+-------+

      CREATE TABLE `tires` (
        `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
        `tire_desc` varchar(255) DEFAULT NULL,
        `car_id` int(10) unsigned NOT NULL,
        PRIMARY KEY (`id`),
        KEY `new_fk_constraint` (`car_id`),
        CONSTRAINT `new_fk_constraint` FOREIGN KEY (`car_id`) REFERENCES `car` (`id`) ON DELETE CASCADE ON UPDATE CASCADE
      ) ENGINE=InnoDB

      mysql> select * from tires;
      +----+-------------+--------+
      | id | tire_desc   | car_id |
      +----+-------------+--------+
      |  1 | front_right |      1 |
      |  2 | front_left  |      1 |
      +----+-------------+--------+

      CREATE TABLE `lights` (
        `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
        `lights_desc` varchar(255) NOT NULL,
        `car_id` int(10) unsigned NOT NULL,
        PRIMARY KEY (`id`),
        KEY `new1_fk_constraint` (`car_id`),
        CONSTRAINT `new1_fk_constraint` FOREIGN KEY (`car_id`) REFERENCES `car` (`id`) ON DELETE CASCADE ON UPDATE CASCADE
      ) ENGINE=InnoDB

      mysql> select * from lights;
      +----+-------------+--------+
      | id | lights_desc | car_id |
      +----+-------------+--------+
      |  1 | right_light |      1 |
      |  2 | left_light  |      1 |
      +----+-------------+--------+

    Here is my query:

      SELECT name, group_concat(tire_desc), group_concat(lights_desc)
      FROM car
      left join tires on car.id = tires.car_id
      left join lights on car.id = car_id
      group by car.id;

      +-------+-----------------------------------------------+-----------------------------------------------+
      | name  | group_concat(tire_desc)                       | group_concat(lights_desc)                     |
      +-------+-----------------------------------------------+-----------------------------------------------+
      | acura | front_right,front_right,front_left,front_left | right_light,left_light,right_light,left_light |
      +-------+-----------------------------------------------+-----------------------------------------------+

    I get duplicate entries, and this is what I would like to get instead:

      +-------+-------------------------+---------------------------+
      | name  | group_concat(tire_desc) | group_concat(lights_desc) |
      +-------+-------------------------+---------------------------+
      | acura | front_right,front_left  | right_light,left_light    |
      +-------+-------------------------+---------------------------+

    I cannot use distinct in group_concat because I might have legitimate duplicates which I would like to keep. Is there any way to do this query using joins and not using inner selects, like the statement below?

      SELECT name,
        (select group_concat(tire_desc) from tires where car.id = tires.car_id),
        (select group_concat(lights_desc) from lights where car.id = lights.car_id)
      FROM car

    Also, if I do use inner selects, will there be any performance issues compared to joins?

    Read the article

  • Built-in background-scheduling system in .NET?

    - by Lasse V. Karlsen
    I ask though I doubt there is any such system. Basically I need to schedule tasks to execute at some point in the future (usually no more than a few seconds or possibly minutes from now), and have some way of cancelling that request unless it's too late. I.e. code that would look like this:

      var x = Scheduler.Schedule(() => SomethingSomething(), TimeSpan.FromSeconds(5));
      ...
      x.Dispose(); // cancels the request

    Is there any such system in .NET? Is there anything in the TPL that can help me? I need to run such future actions from various instances in a system here, and would rather avoid having each such class instance own its own thread to deal with this. Also note that I don't want this (or similar, for instance through Tasks):

      new Thread(new ThreadStart(() =>
      {
          Thread.Sleep(5000);
          SomethingSomething();
      })).Start();

    There will potentially be a few such tasks to execute. They don't need to be executed in any particular order, except close to their deadline, and it isn't vital that they have anything like real-time performance; I just want to avoid spinning up a separate thread for each such action.

    Read the article

  • Using a single texture image unit with multiple sampler uniforms

    - by bcrist
    I am writing a batching system which tracks currently bound textures in order to avoid unnecessary glBindTexture() calls. I'm not sure if I need to keep track of which textures have already been used by a particular batch, so that if a texture is used twice, it will be bound to a different TIU for the second sampler which requires it.

    Is it acceptable for an OpenGL application to use the same texture image unit for multiple samplers within the same shader stage? What about samplers in different shader stages? For example:

      Fragment shader:
        ...
        uniform sampler2D samp1;
        uniform sampler2D samp2;
        void main() { ... }

      Main program:
        ...
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, tex_id);
        glUniform1i(samp1_location, 0);
        glUniform1i(samp2_location, 0);
        ...

    I don't see any reason why this shouldn't work, but what if the shader program also included a vertex shader like this?

      Vertex shader:
        ...
        uniform sampler2D samp1;
        void main() { ... }

    In this case, OpenGL is supposed to treat both instances of samp1 as the same variable and exposes a single location for them. Therefore, the same texture unit is being used in the vertex and fragment shaders. I have read that using the same texture in two different shader stages counts doubly against GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, but this would seem to contradict that. In a quick test on my hardware (HD 6870), all of the following scenarios worked as expected:

      * 1 TIU used for 2 sampler uniforms in the same shader stage
      * 1 TIU used for 1 sampler uniform which is used in 2 shader stages
      * 1 TIU used for 2 sampler uniforms, each occurring in a different stage

    However, I don't know if this is behavior that I should expect on all hardware/drivers, or if there are performance implications.

    Read the article

  • Good C string library

    - by chamakits
    Hello all. I recently got inspired to start up a project I've been wanting to code for a while. I want to do it in C, because memory handling is key in this application. I was searching around for a good implementation of strings in C, since I know that doing it myself could lead to some messy buffer overflows, and I expect to be dealing with a fairly large number of strings. I found this article which gives details on each, but each option seems to have a good amount of cons going for it (don't get me wrong, the article is EXTREMELY helpful, but it still worries me that even if I were to choose one of those, I wouldn't be using the best I can get). I also don't know how up to date the article is, hence my current plea.

    What I'm looking for is something that can hold a large number of characters and simplifies the process of searching through the string. If it allows me to tokenize the string in any way, even better. Also, it should have some pretty good I/O performance. Printing, and formatted printing, isn't quite a top priority. I know I shouldn't expect a library to do all the work for me, but I was just wondering if there is a well-documented string library out there that could save me some time and some work. Any help is greatly appreciated. Thanks in advance!

    EDIT: I was asked about the license I prefer. Any sort of open source license will do, but preferably GPL (v2 or v3).

    EDIT 2: I found the betterString (bstring) library and it looks pretty good. Good documentation, a small yet versatile set of functions, and easy to mix with C strings. Anyone have any good or bad stories about it? The only downside I've read about is that it lacks Unicode support (again, I've only read about this, haven't seen it face to face just yet), but everything else seems pretty good.

    EDIT 3: Also, it's preferable that it's pure C.

    Read the article

  • Grouping by date, with 0 when count() yields no lines

    - by SCO
    I'm using PostgreSQL 9 and I'm fighting with counting and grouping when no lines are counted. Let's assume the following schema:

      create table views (
        date_event timestamp with time zone,
        event_id integer
      );

    Let's imagine the following content:

      2012-01-01 00:00:05    2
      2012-01-01 01:00:05    5
      2012-01-01 03:00:05    8
      2012-01-01 03:00:15   20

    I want to group by hour and count the number of lines. I wish I could retrieve the following:

      2012-01-01 00:00:00    1
      2012-01-01 01:00:00    1
      2012-01-01 02:00:00    0
      2012-01-01 03:00:00    2
      2012-01-01 04:00:00    0
      2012-01-01 05:00:00    0
      .
      .
      2012-01-07 23:00:00    0

    I mean that for each time-range slot, I count the number of lines in my table whose date corresponds; otherwise, I return a line with a count of zero. The following will definitely not work (it will yield only lines whose count is <> 0):

      SELECT extract(hour from date_event), count(*)
      FROM views
      WHERE date_event > '2012-01-01' and date_event < '2012-01-07'
      GROUP BY extract(hour from date_event);

    Please note I might also need to group by minute, or by hour, or by day, or by month, or by year (multiple queries are possible, of course). I can only use plain old SQL, and since my views table can be very big (100M records), I try to keep performance in mind. How can this be achieved? Thank you!

    Read the article

  • Is there an easier way to get a 3-deep Panel control from a Form in order to add a new control to it programmatically?

    - by Mark Sweetman
    I have a VB Windows program created by someone else. It was written so that anyone could add to the functionality of the program through class libraries; the program calls these class libraries (DLL files) "plugins". The plugin I am creating is a C# class library (.dll). This specific plugin adds a simple DateTime clock, in the form of a Label, and inserts it into a Panel that is nested 3 deep. The code I have works; I have tested it.

    My question is this: is there a better way to do it? For instance, I use Controls.Find 3 different times; each time I know which Panel I am looking for, and there will only ever be a single Panel in the returned Control[] array. So I'm doing a foreach over an array that only holds a single element, 3 different times. Now, like I said, the code works and does what I expected it to; it just seems overly redundant, and I'm wondering if there could be a performance issue. Here is the code:

      foreach (Control p0 in mDesigner.Controls)
          if (p0.Name == "Panel1")
          {
              Control panel1 = (Control)p0;
              Control[] controls = panel1.Controls.Find("Panel2", true);
              foreach (Control p1 in controls)
                  if (p1.Name == "Panel2")
                  {
                      Control panel2 = (Control)p1;
                      Control[] controls1 = panel2.Controls.Find("Panel3", true);
                      foreach (Control p2 in controls1)
                          if (p2.Name == "Panel3")
                          {
                              Control panel3 = (Control)p2;
                              panel3.Controls.Add(clock);
                          }
                  }
          }

    Read the article

  • How to handle product ratings in a database

    - by Mel
    Hello, I would like to know the best approach to storing product ratings in a database. I have in mind the following two (simplified, and assuming a MySQL db) scenarios:

    Scenario 1: Create two columns in the product table to store the number of votes and the sum of all votes, and use those columns to compute an average on the product display page:

      products(productID, productName, voteCount, voteSum)

    Pros: I will only need to access one table, and thus execute one query, to display product data and ratings.
    Cons: Write operations will be executed against a table whose original purpose is only to furnish product data.

    Scenario 2: Create an additional table to store ratings:

      products(productID, productName)
      ratings(productID, voteCount, voteSum)

    Pros: Isolates ratings into a separate table, leaving the products table to furnish data on available products.
    Cons: I will have to execute two separate queries on each product page request (one for data and another for ratings).

    In terms of performance, which of the two approaches is best: allow users to execute an occasional write query against a table that will handle hundreds of read requests, or execute two queries on every product page but isolate the write query to a separate table? I'm a novice at database development, and often find myself struggling with simple questions such as these. Many thanks,

    Read the article

  • Implementing search functionality with multiple optional parameters against a database table

    - by quarkX
    Hello, I would like to check if there is a preferred design pattern for implementing search functionality with multiple optional parameters against a database table, where access to the database should be only via stored procedures. The targeted platform is .NET with a SQL Server 2005/2008 backend, but I think this is a pretty generic problem. For example, we have a customer table and we want to provide search functionality in the UI for different parameters, like customer Type, customer State, customer Zip, etc. All of them are optional and can be selected in any combination; in other words, the user can search by customerType only, or by customerType and customerZip, or any other possible combination. There are several available design approaches, but all of them have some disadvantages, and I would like to ask if there is a preferred design among them, or if there is another approach.

      1. Generate the SQL where clause dynamically in the business tier, based on the search request from the UI, and pass it to a stored procedure as a parameter, something like @Where = 'where CustomerZip = 111111'. Inside the stored procedure, build a dynamic SQL statement and execute it with sp_executesql.
         Disadvantage: dynamic SQL, SQL injection.
      2. Implement a stored procedure with multiple input parameters representing the search fields from the UI, and use the following construction in the where clause to select only the records matching the requested fields:
           WHERE (CustomerType = @CustomerType OR @CustomerType is null)
             AND (CustomerZip = @CustomerZip OR @CustomerZip is null)
             AND ...
         Disadvantage: possible performance issue for the SQL.
      3. Implement a separate stored procedure for each combination of search parameters.
         Disadvantage: the number of stored procedures will increase rapidly with the number of search parameters; repeated code.

    Read the article

  • What is the best way to create a running integer id on the AppEngine data storage?

    - by Freed
    For various reasons, I need a unique running integer id for my entities stored on the Google AppEngine. The automatically generated key sort of has this behaviour, but it doesn't start from 1 (or 0) and doesn't guarantee that the generated integer part will come from a continuous sequence. What would be the best way to efficiently implement this on AppEngine? Is there any support from the storage system? To add to the complexity, I might need to do this over entities from different entity groups, meaning I can't just get the highest id right now and save an entity with the next id in a transaction. Might memcache be the way to go..?

    Edit: I haven't yet implemented this, but to clarify the memcache idea: I know memcache is unreliable, but in practice it probably won't lose data "too often" to hurt performance. Basically, I would have a memcache entry for the last used id, update it (somehow atomically) whenever I create a new entity, and use that id. If memcache has no value for this entry, I'd get the highest id so far by doing a query over my entities sorted by id, and update memcache (unless someone else had already done so). The only problem I can see with this right now would be the atomicity of the operation as a whole, if saving my new entity is also part of a transaction. Thoughts..?

    Read the article

  • Value types of variable size

    - by YellPika
    I'm trying to code a small math library in C#. I wanted to create a generic vector structure where the user could define the element type (int, long, float, double, etc.) and dimensions. My first attempt was something like this:

      public struct Vector<T>
      {
          public readonly int Dimensions;
          public readonly T[] Elements;
          // etc...
      }

    Unfortunately Elements, being an array, is also a reference type. Thus, doing this,

      Vector<int> a = ...;
      Vector<int> b = a;
      a[0] = 1;
      b[0] = 2;

    would result in both a[0] and b[0] equaling 2. My second attempt was to define an interface IVector<T>, and then use Reflection.Emit to automatically generate the appropriate type at runtime. The resulting classes would look roughly like this:

      public struct Int32Vector3 : IVector<int>
      {
          public int Element0;
          public int Element1;
          public int Element2;

          public int Dimensions { get { return 3; } }
          // etc...
      }

    This seemed fine until I found out that interfaces seem to act like references to the underlying object: if I passed an IVector to a function, any changes made to the elements inside the function would be reflected in the original vector. I think my problem here is that I need to be able to create classes with a user-specified number of fields. I can't use arrays, and I can't use inheritance. Does anyone have a solution?

    EDIT: This library is going to be used in performance-critical situations, so reference types are not an option.

    Read the article

  • What db fits me?

    - by afvasd
    Dear everyone, I am currently using MySQL and I am finding that my schema is getting incredibly complicated. I am looking for a new db that will suit my needs. Let's assume I am building a news aggregator (which collects news from multiple websites). I then run algorithms to determine if two news articles from different sites are actually referring to the same topic, and I use this to cluster news together. The relationship is depicted below:

      cluster
      \--news1
         \--word1
         \--word2
      \--news2
         \--word3
      \--news3
         \--word1
         \--word3

    Then I apply some magic to determine the importance of each word. Summing the importance of each word gives me the importance of a news article; summing the importance of each news article gives me the importance of a cluster. Note that above the cluster there are also subgroups (like splits by region, etc.) and categories (like sports, etc.) whose importance I also have to determine for a particular day.

    I have used views in the past to do this, but I realized that views are very slow, so I will normally insert into an actual table and index it for better performance. As you can see, this leads to multiple derived tables like (cluster, importance), (news, importance), (words, importance), etc., which can get pretty messy. Also, the "importance" metric will change. It has become increasingly difficult to alter tables and update the data (which I am doing with TRUNCATE TABLE followed by re-inserting from scratch).

    I am currently looking into something schemaless like MongoDB. I do not need distribution. I would very much like something that is reasonably fast (and can be indexed) and a lot more flexible than a traditional RDBMS. Also, I need something that has some kind of ORM, because I personally like ORMs a lot; I am currently using SQLAlchemy. Please help!

    Read the article

  • dm_exec_query_stats returning stale data?

    - by VoiceOfUnreason
    I've been testing my app on a SQL Server 2005 database, and am trying to establish a preliminary picture of the query performance using sys.dm_exec_query_stats.

    Problem: there's a particular query that I'm interested in, because total_elapsed_time and last_elapsed_time are both large numbers. When I tickle my app to invoke that query (it runs successfully), then refresh my view of the stats, I find that:

      1) execution_count has incremented (expected)
      2) last_execution_time has updated to now (expected)
      3) last_elapsed_time is still a large value (not expected - I anticipated a new value)
      4) total_elapsed_time is unchanged (contradiction?)

    If last_elapsed_time refers to the execution that happened at last_execution_time, then total_elapsed_time should have increased? This documentation: http://msdn.microsoft.com/en-us/library/ms189741(SQL.90).aspx tells me that last_execution_time is the last time the plan was executed, and last_elapsed_time comes from the "most recently executed plan", but doesn't tell me why those might be different. The query itself is uncomplicated (SELECT/WHERE/ORDER BY - parameters appearing in the where clause, but no clever operations), and the table has maybe 25 rows in it right now. Questions:

      1) What's the real relationship between execution_count, last_execution_time, and last_elapsed_time?
      2) Where is the documentation of this relationship (manual, third party book, blog, bug ticket, stone tablets...)?

    Read the article

  • More efficient way to update multiple elements in javascript and/or jquery?

    - by Seiverence
    Say I have 6 divs with IDs "#first", "#second" ... "#sixth", and I want to execute functions on each of those divs. I would set up an array containing the name of each div I want to update as an element of an array of strings, e.g. ["first", "second", "third"]. To apply a function, say changing the background color to red, I'd set up a for loop that iterates through each element in the array:

      function updateAllDivsInTheList() {
          for (var i = 0; i < array.length; i++) {
              $("#" + array[i]).changeCssFunction();
          }
      }

    Whenever I create a new div, I add it to the array. The issue is, if I have a large number of divs that need to get updated, say 1000 out of 1200 divs, it may be a pain/performance tank to sequentially iterate through every single element in the array. Is there some alternative, more efficient way of updating multiple divs without having to sequentially iterate through every element in an array, maybe with some other, more efficient data structure besides an array? Or is what I am doing the most efficient way to do it? If you can provide an example, that would be great.
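
    A sketch of the usual alternative, with assumed class and ID names: tag every div that should change together with a shared class, so one selector updates all of them and no hand-built ID array has to be maintained. Internally jQuery still visits each element, but a class lookup plus a single .css() call is typically faster and far easier to keep correct than iterating a list of ID strings:

      // Mark the divs once in the markup (or when they are created):
      //   <div id="first" class="highlightable">...</div>

      // One call updates every current member of the group.
      $(".highlightable").css("background-color", "red");

      // New divs only need the class when created; no array bookkeeping.
      $("<div>", { id: "seventh", "class": "highlightable" }).appendTo("body");

      // If only a subset should change, filtering is still a single statement.
      $(".highlightable").slice(0, 1000).css("background-color", "red");

    If per-element logic really is needed, iterating a cached jQuery set with .each() avoids re-running a selector lookup for every ID, which is where most of the cost of the original approach comes from.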

    Read the article
