Search Results

Search found 6104 results on 245 pages for 'fast esp'.


  • Thread management advice - Is TPL a good idea?

    - by Ian
    I'm hoping to get some advice on the use of thread management and hopefully the Task Parallel Library, because I'm not sure I've been going down the correct route. Probably best is that I give an outline of what I'm trying to do. Given a Problem I need to generate a Solution using a heuristic based algorithm. I start off by calculating a base solution; I don't think this operation can be parallelised, so we don't need to worry about it. Once the initial solution has been generated, I want to trigger n threads, which attempt to find a better solution. These threads need to do a couple of things: They need to be initialized with a different 'optimization metric'. In other words they are attempting to optimize different things, with a precedence level set within code. This means they all run slightly different calculation engines. I'm not sure if I can do this with the TPL. If one of the threads finds a better solution than the currently best known solution (which needs to be shared across all threads) then it needs to update the best solution, and force a number of other threads to restart (again this depends on precedence levels of the optimization metrics). I may also wish to combine certain calculations across threads (e.g. keep a union of probabilities for a certain approach to the problem). This is probably more optional though. The whole system obviously needs to be thread safe and I want it to run as fast as possible. I tried an implementation that involved managing my own threads and shutting them down etc., but it started getting quite complicated, and I'm now wondering if the TPL might be better. I'm wondering if anyone can offer any general guidance? Thanks...

    Read the article

  • How to seek to a specific time in a RTP stream?

    - by Cipi
    I am streaming a prerecorded H264 video that has the following structure: [I] [x] [x] [x] [I] [x] [x] [x] [I]... In between the IDRs (the I-s in my structure) I have 32 (only 3 presented here) other frames (all the other stuff that is not an IDR, like SEI, SPS, PPS... the X-es). Now, let's assume that the timing of my frames is as follows:
      TIME:  1   2   3   4   5   6   7   8   9
      FRAME: [I] [x] [x] [x] [I] [x] [x] [x] [I]...
    Now I want to seek to time 4. If I seek to that frame and send it, the picture gets messed up because the decoder needs an IDR to decode it properly, so I resorted to finding the appropriate IDR (in this case the one with time 1) and sending it as the frame with time 4. So now the picture is decoded properly, all is well... but... If my GOV is 32, and I need to send the non-IDR frame that has index 31, and if the time span between it and the corresponding IDR is 3 seconds, I actually get 3 seconds earlier than the time I want. Now, this is not precise, because I cannot seek to half of the GOV time span. Also, I can't set a smaller GOV, so I want other ideas... My other idea was to send the last known IDR, and then send all other non-IDR frames that come before the one I want, only I would set the RTP time for all of them to be the same as the corresponding IDR. In this case the picture gets decoded perfectly, but now in the above case, the 3 seconds that follow the non-IDR frame with the wanted time get fast-paced in the decoder/player (there is no instantaneous seek)... Any ideas? Or can I only seek to IDRs and not the frames in between?

    Read the article

  • Database structure for ecommerce site

    - by imanc
    Hey Guys, I have been tasked with designing an ecommerce solution. The aspect that is causing me the most problems is the database. Currently the site consists of 10+ country-based shops, each with their own database (all residing on the same MySQL instance). For the new site I'd rather all these shop databases be merged into one database so that all tables (products, orders, customers etc.) have a shop_id field. From a programming perspective this seems to make the most sense, as we won't have to manage data across multiple databases. Currently the entire site generates about 120k orders a year, but it is experiencing fairly heavy growth and we need to design a solution that will scale. In 5 years there may be more than a million orders per year and a database that contains 5 years of order history (archiving may be a solution here). The question is - do we use a single database, or do we keep the database-per-shop structure? I am currently trying to find supporting evidence for either avenue. The company I am designing the solution for prefers the per-shop database structure because they believe it will allow the sites to scale. But my argument is that the shop's database probably won't get so busy over the next few years that it exceeds the capacity of a MySQL database and a "no expenses spared" hardware set-up. I am wondering if anyone has any advice either way? Does anyone have experience with websites / ecommerce sites that have tables containing millions of records? I know there is probably not a clear answer here, but at what stage do we have too many records or too-large table files to have a fast loading site? Also, if anyone has any advice on sources of information - books, websites, etc. where I can do further research, it would be highly appreciated! Cheers, imanc

    Read the article

  • Recursive problem...

    - by Chronos
    Guys, I'm new to Java and multithreading..... and I have the following problem: I have two classes running in two different threads, class A and class B. Class A has the method "onNewEvent()". Once that method is invoked, it will ask class B to do some work. As soon as B finishes the work it invokes the onJobDone() method of class A. Now... here comes the problem... what I want is to create a new job within the onJobDone() method and send it again to B. Here is what I do (pseudo code), in the sequence of execution:
      A.onNewEvent() {
          // create job
          // ask B to do it
          B.do()
      }
      B.do() {
          // do some stuff
          A.onJobDone()
      }
      A.onJobDone() {
          B.do()  // do it again
          // print message "Thank you for doing it"
      }
    The problem is... that message "Thank you for doing it" never gets printed... in fact, when the onJobDone() method is invoked, it invokes B.do()... because B.do() is very fast, it invokes onJobDone() immediately... so the execution flow never comes to the PRINT MESSAGE part of the code... I suppose this is one of the nasty multithreading problems.... any help would be appreciated.
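
    One way out of that loop, sketched below in minimal form with names that mirror the pseudo code but are otherwise illustrative assumptions, is to stop calling back into B synchronously: B pulls jobs from a queue on its own thread, so onJobDone() merely enqueues the next job and returns, which lets the print statement run.
      import java.util.concurrent.BlockingQueue;
      import java.util.concurrent.LinkedBlockingQueue;

      class B implements Runnable {
          private final BlockingQueue<Runnable> jobs = new LinkedBlockingQueue<>();

          void submit(Runnable job) { jobs.add(job); }     // returns immediately

          public void run() {
              try {
                  while (true) jobs.take().run();          // work happens on B's thread
              } catch (InterruptedException e) {
                  Thread.currentThread().interrupt();
              }
          }
      }

      class A {
          private final B b;
          A(B b) { this.b = b; }

          void onNewEvent() { b.submit(() -> { /* do some stuff */ onJobDone(); }); }

          void onJobDone() {
              b.submit(() -> { /* do some stuff */ onJobDone(); }); // queued, not called inline
              System.out.println("Thank you for doing it");         // now always reached
          }
      }
    Wiring it up would look like: B b = new B(); new Thread(b).start(); new A(b).onNewEvent();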

    Read the article

  • How can I improve the performance of LinqToSql queries that use EntitySet properties?

    - by DanM
    I'm using LinqToSql to query a small, simple SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason, SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete. I can get the speed up to acceptable, even fast, if I just grab the two tables individually and convert them to List<Customer> and List<Order>, then join them manually with my own query, but this throws out a lot of what makes LinqToSql so appealing. So, I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance besides resorting to doing all the joins manually? Note: My database in its initial state is about 250K and I don't expect it to grow to more than 1-2 MB. So, loading the data into RAM certainly wouldn't be a problem from a memory point of view. Update: Here are the table definitions for the example I used in my question:
      create table Order (
          Id int identity(1, 1) primary key,
          ProductName ntext null
      )
      create table Customer (
          Id int identity(1, 1) primary key,
          OrderId int null references Order (Id)
      )

    Read the article

  • I get this error: "glibc detected"

    - by AKGMA
    Hi, I just wrote a piece of C++ code and compiled it using G++ on Ubuntu. When I run my code everything is fine, the code runs well and gives output, but it doesn't exit and gives this error:
      *** glibc detected *** ./a.out: free(): invalid next size (fast): 0x09f931f0 ***
      ======= Backtrace: =========
      /lib/libc.so.6(+0x6c501)[0x3de501]
      /lib/libc.so.6(+0x6dd70)[0x3dfd70]
      /lib/libc.so.6(cfree+0x6d)[0x3e2e5d]
      /usr/lib/libstdc++.so.6(_ZdlPv+0x21)[0x6e2441]
      ./a.out[0x8049ce6]
      /lib/libc.so.6(+0x2f69e)[0x3a169e]
      /lib/libc.so.6(+0x2f70f)[0x3a170f]
      /lib/libc.so.6(__libc_start_main+0xef)[0x388cef]
      ./a.out[0x8048a61]
      ======= Memory map: ========
      00219000-0021a000 r-xp 00000000 00:00 0 [vdso]
      00354000-00370000 r-xp 00000000 08:01 8781845 /lib/ld-2.12.1.so
      00370000-00371000 r--p 0001b000 08:01 8781845 /lib/ld-2.12.1.so
      00371000-00372000 rw-p 0001c000 08:01 8781845 /lib/ld-2.12.1.so
      00372000-004c9000 r-xp 00000000 08:01 8781869 /lib/libc-2.12.1.so
      004c9000-004ca000 ---p 00157000 08:01 8781869 /lib/libc-2.12.1.so
      004ca000-004cc000 r--p 00157000 08:01 8781869 /lib/libc-2.12.1.so
      004cc000-004cd000 rw-p 00159000 08:01 8781869 /lib/libc-2.12.1.so
      004cd000-004d0000 rw-p 00000000 00:00 0
      00638000-00717000 r-xp 00000000 08:01 3935829 /usr/lib/libstdc++.so.6.0.14
      00717000-0071b000 r--p 000de000 08:01 3935829 /usr/lib/libstdc++.so.6.0.14
      0071b000-0071c000 rw-p 000e2000 08:01 3935829 /usr/lib/libstdc++.so.6.0.14
      0071c000-00723000 rw-p 00000000 00:00 0
      00909000-0092d000 r-xp 00000000 08:01 8781918 /lib/libm-2.12.1.so
      0092d000-0092e000 r--p 00023000 08:01 8781918 /lib/libm-2.12.1.so
      0092e000-0092f000 rw-p 00024000 08:01 8781918 /lib/libm-2.12.1.so
      00fdb000-00ff5000 r-xp 00000000 08:01 8781903 /lib/libgcc_s.so.1
      00ff5000-00ff6000 r--p 00019000 08:01 8781903 /lib/libgcc_s.so.1
      00ff6000-00ff7000 rw-p 0001a000 08:01 8781903 /lib/libgcc_s.so.1
      08048000-0804b000 r-xp 00000000 08:01 8652645 /home/akg/Desktop/contest/a.out
      0804b000-0804c000 r--p 00002000 08:01 8652645 /home/akg/Desktop/contest/a.out
      0804c000-0804d000 rw-p 00003000 08:01 8652645 /home/akg/Desktop/contest/a.out
      09f93000-09fb4000 rw-p 00000000 00:00 0 [heap]
      b7600000-b7621000 rw-p 00000000 00:00 0
      b7621000-b7700000 ---p 00000000 00:00 0
      b7765000-b7768000 rw-p 00000000 00:00 0
      b7775000-b7779000 rw-p 00000000 00:00 0
      bf9a7000-bf9c8000 rw-p 00000000 00:00 0 [stack]
      Aborted
    What does this mean? How can I get rid of it? I'm not using malloc or free, I'm just using vector!

    Read the article

  • Fastest inline-assembly spinlock

    - by sigvardsen
    I'm writing a multithreaded application in C++, where performance is critical. I need to use a lot of locking while copying small structures between threads, and for this I have chosen to use spinlocks. I have done some research and speed testing on this and I found that most implementations are roughly equally fast:
      - Microsoft's CRITICAL_SECTION, with SpinCount set to 1000, scores about 140 time units
      - Implementing this algorithm with Microsoft's InterlockedCompareExchange scores about 95 time units
      - I've also tried to use some inline assembly with __asm {} using something like this code, and it scores about 70 time units, but I am not sure that a proper memory barrier has been created.
    Edit: The times given here are the time it takes for 2 threads to lock and unlock the spinlock 1,000,000 times. I know this isn't a lot of difference, but as a spinlock is a heavily used object, one would think that programmers would have agreed on the fastest possible way to make a spinlock. Googling it leads to many different approaches however. I would think this aforementioned method would be the fastest if implemented using inline assembly and using the instruction CMPXCHG8B instead of comparing 32-bit registers. Furthermore memory barriers must be taken into account; this could be done by LOCK CMPXCHG8B (I think?), which guarantees "exclusive rights" to the shared memory between cores. Lastly, [some suggest] that busy waits should be accompanied by NOP:REP, which would enable Hyper-threading processors to switch to another thread, but I am not sure whether this is true or not? From my performance test of different spinlocks, it is seen that there is not much difference, but for purely academic purposes I would like to know which one is fastest. However, as I have extremely limited experience with assembly language and with memory barriers, I would be happy if someone could write the assembly code for the last example I provided, with LOCK CMPXCHG8B and proper memory barriers, in the following template:
      __asm
      {
          spin_lock:
              ; locking code
          spin_unlock:
              ; unlocking code
      }

    Read the article

  • protocol parsing in c

    - by nomad.alien
    I have been playing around with trying to implement some protocol decoders, but each time I run into a "simple" problem and I feel the way I am solving it is not optimal and there must be a better way to do things. I'm using C. Currently I'm using some canned data and reading it in as a file, but later on it would be via TCP or UDP. Here's the problem. I'm currently playing with a binary protocol at work. All fields are 8 bits long. The first field (8 bits) is the packet type. So I read in the first 8 bits and, using a switch/case, I call a function to read in the rest of the packet, as I then know its size/structure. BUT... some of these packets have nested packets inside them, so when I encounter that specific packet I then have to read another 8-16 bytes and have another switch/case to see what the next packet type is, and on and on. (Luckily the packets are only nested 2 or 3 deep.) Only once I have the whole packet decoded can I hand it over to my state machine for processing. I guess this can be a more general question as well: how much data do you have to read at a time from the socket? As much as possible? As much as what is "similar" in the protocol headers? So even though this protocol is fairly basic, my code is a whole bunch of switch/case statements and I do a lot of reading from the file/socket, which I feel is not optimal. My main aim is to make this decoder as fast as possible. To the more experienced people out there, is this the way to go or is there a better way which I just haven't figured out yet? Any elegant solution to this problem?

    Read the article

  • How to write a flexible modular program with good interaction possibilities between modules?

    - by PeterK
    I went through answers on similar topics here on SO but couldn't find a satisfying answer. Since I know this is a rather large topic, I will try to be more specific. I want to write a program which processes files. The processing is nontrivial, so the best way is to split different phases into standalone modules which would then be used as necessary (since sometimes I will only be interested in the output of module A, sometimes I would need the output of five other modules, etc.). The thing is that I need the modules to cooperate, because the output of one might be the input of another. And I need it to be FAST. Moreover I want to avoid doing certain processing more than once (if module A creates some data which then needs to be processed by modules B and C, I don't want to run module A twice to create the input for modules B and C). The information the modules need to share would mostly be blocks of binary data and/or offsets into the processed files. The task of the main program would be quite simple - just parse arguments and run the required modules (and perhaps give some output, or should this be the task of the modules?). I don't need the modules to be loaded at runtime. It's perfectly fine to have libs with a .h file and recompile the program every time there is a new module or some module is updated. The idea of modules is here mainly because of code readability and maintenance, and to be able to have more people working on different modules without the need to have some predefined interface or whatever (on the other hand, some "guidelines" on how to write the modules would probably be required, I know that). We can assume that the file processing is a read-only operation, the original file is not changed. Could someone point me in a good direction on how to do this in C++? Any advice is welcome (links, tutorials, pdf books...).

    Read the article

  • java.math.BigInteger pow(exponent) question

    - by Jan Kraus
    Hi, I did some tests on the pow(exponent) method. Unfortunately, my math skills are not strong enough to handle the following problem. I'm using this code: BigInteger.valueOf(2).pow(var); Results:
      var     | time in ms
      2000000 | 11450
      2500000 | 12471
      3000000 | 22379
      3500000 | 32147
      4000000 | 46270
      4500000 | 31459
      5000000 | 49922
    See? The 2,500,000 exponent is calculated almost as fast as 2,000,000. 4,500,000 is calculated much faster than 4,000,000. Why is that? To give you some help, here's the original implementation of BigInteger.pow(exponent):
      public BigInteger pow(int exponent) {
          if (exponent < 0)
              throw new ArithmeticException("Negative exponent");
          if (signum == 0)
              return (exponent == 0 ? ONE : this);

          // Perform exponentiation using repeated squaring trick
          int newSign = (signum < 0 && (exponent & 1) == 1 ? -1 : 1);
          int[] baseToPow2 = this.mag;
          int[] result = {1};
          while (exponent != 0) {
              if ((exponent & 1) == 1) {
                  result = multiplyToLen(result, result.length,
                                         baseToPow2, baseToPow2.length, null);
                  result = trustedStripLeadingZeroInts(result);
              }
              if ((exponent >>>= 1) != 0) {
                  baseToPow2 = squareToLen(baseToPow2, baseToPow2.length, null);
                  baseToPow2 = trustedStripLeadingZeroInts(baseToPow2);
              }
          }
          return new BigInteger(result, newSign);
      }
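
    Before reading too much into single measurements, it may be worth re-running each exponent several times after a warm-up pass, since JIT compilation and GC pauses can easily distort one-off timings of this size. A minimal timing sketch along those lines (the exponent values simply mirror the table above):
      import java.math.BigInteger;

      public class PowTiming {
          public static void main(String[] args) {
              int[] exponents = {2_000_000, 2_500_000, 3_000_000, 3_500_000,
                                 4_000_000, 4_500_000, 5_000_000};
              BigInteger two = BigInteger.valueOf(2);
              for (int exp : exponents) {
                  two.pow(exp);                        // warm-up, result discarded
                  long best = Long.MAX_VALUE;
                  for (int run = 0; run < 3; run++) {  // keep the best of a few runs
                      long start = System.nanoTime();
                      two.pow(exp);
                      best = Math.min(best, System.nanoTime() - start);
                  }
                  System.out.printf("%,d -> %d ms%n", exp, best / 1_000_000);
              }
          }
      }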

    Read the article

  • Play Multiple iPod Library Songs On iPhone At The Same Time With Pitch Bending & Other Effects

    - by Dino
    Hi, I have been going at this for the past two weeks and it is driving me crazy. I asked this question a couple of days ago (Extract iPod Library raw PCM samples and play with sound effects) and whilst the answer got me half way there I am still stuck. Basically what I am trying to achieve is to load up multiple songs from the iPod library for playback with effects such as pitch bend, EQ effects etc... I have gone down the route of AVPlayer and AVAudioPlayer, which are too simple. The only framework I've seen that can play audio with these effects is OpenAL. I have tried a few Objective-C wrappers (Finch and ObjectAL). Finch does not play compressed audio, whilst ObjectAL will only convert it for me if I pass in a URL for the file (which is something I cannot do because I only have an incompatible iPod library URL). An example of an app that does what I want beautifully is Tap DJ. It can load up songs from the iPod library fast (unlike TouchDJ) and it plays them with all sorts of effects. Any help would be much appreciated.

    Read the article

  • Wrapping unmanaged C++ with C++/CLI - a proper approach.

    - by Jamie
    Hi there, as stated in the title, I want to have my old C++ library working in managed .NET. I think of two possibilities: 1) I might try to compile the library with /clr and try the "It Just Works" approach. 2) I might write a managed wrapper for the unmanaged library. First of all, I want to have my library working FAST, as it was in the unmanaged environment. Thus, I am not sure whether the first approach will cause a large decrease in performance. However, it seems to be faster to implement (not the right word :-)) (assuming it will work for me). On the other hand, I think of some problems that might appear while writing a wrapper (e.g. how to wrap some STL collection (vector for instance)?). I think of writing a wrapper residing in the same project as the unmanaged C++ - is that a reasonable approach (e.g. MyUnmanagedClass and MyManagedClass in the same project, the second wrapping the other)? What would you suggest for this problem? Which solution is going to give me better performance of the resulting code? Thank you in advance for any suggestions and clues! Cheers

    Read the article

  • Which fieldtype is best for storing PRICE values?

    - by BerggreenDK
    Hi there, I am wondering what's the best "price field" in MSSQL for a shop-like structure? Looking at this overview: http://www.teratrax.com/sql_guide/data_types/sql_server_data_types.html We have datatypes called money, smallmoney, then we have decimal/numeric and lastly float and real. Name, memory/disk usage and value ranges:
      Money: 8 bytes (values: -922,337,203,685,477.5808 to +922,337,203,685,477.5807)
      Smallmoney: 4 bytes (values: -214,748.3648 to +214,748.3647)
      Decimal: 9 [default, min. 5] bytes (values: -10^38 +1 to 10^38 -1)
      Float: 8 bytes (values: -1.79E+308 to 1.79E+308)
      Real: 4 bytes (values: -3.40E+38 to 3.40E+38)
    My question is: is it really wise to store price values in those types? What about e.g. INT?
      Int: 4 bytes (values: -2,147,483,648 to 2,147,483,647)
    Let's say a shop uses dollars, and they have cents, but I don't see prices being $49.2142342, so the use of a lot of decimals showing cents seems a waste of SQL bandwidth. Secondly, most shops wouldn't show any prices near 200.000.000 (not in normal webshops at least... unless someone is trying to sell me a famous tower in Paris). So why not go for an int? An int is fast, it's only 4 bytes, and you can easily make decimals by saving values in cents instead of dollars and then dividing when you present the values. The other approach would be to use smallmoney, which is 4 bytes too, but this will require the math part of the CPU to do the calc, whereas Int is integer power... on the downside you will need to divide every single outcome. Are there any "currency" related problems with regional settings when using smallmoney/money fields? What will these translate to in C#/.NET? Any pros/cons? Go for integer prices or smallmoney or some other? What does your experience tell you?
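
    For what it's worth, the cents-as-integer idea boils down to storing 4995 for $49.95 and only converting at the presentation layer; a minimal sketch of that conversion (shown in Java purely for illustration - the same arithmetic applies in C#/.NET, and the mapping to SQL column types is a separate decision):
      public class Price {
          // Store prices as whole cents so all arithmetic stays exact; only format on output.
          public static String format(long cents) {
              return String.format("$%d.%02d", cents / 100, Math.abs(cents % 100));
          }

          public static void main(String[] args) {
              long unitPriceCents = 4_995;              // $49.95
              long totalCents = unitPriceCents * 3;     // exact, no floating-point rounding
              System.out.println(format(totalCents));   // prints $149.85
          }
      }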

    Read the article

  • apache solr : sum of data resulting from group by

    - by Terance Dias
    Hi, We have a requirement where we need to group our records by a particular field and take the sum of a corresponding numeric field, e.g. select userid, sum(click_count) from user_action group by userid; We are trying to do this using Apache Solr and found that there were 2 ways of doing it:
      1. Using the field collapsing feature (http://blog.jteam.nl/2009/10/20/result-grouping-field-collapsing-with-solr/), but found 2 problems with this: 1.1. This is not part of a release and is available as a patch, so we are not sure if we can use it in production. 1.2. We do not get the sum back but individual counts, and we need to sum them at the client side.
      2. Using the Stats Component along with faceted search (http://wiki.apache.org/solr/StatsComponent). This meets our requirement but it is not fast enough for very large data sets.
    I just wanted to know if anybody knows of any other way to achieve this. Appreciate any help. Thanks, Terance.

    Read the article

  • Splitting a set of objects into several subsets of 'similar' objects

    - by doublep
    Suppose I have a set of objects, S. There is an algorithm f that, given a set S, builds a certain data structure D on it: f(S) = D. If S is large and/or contains vastly different objects, D becomes large, to the point of being unusable (i.e. not fitting in the allotted memory). To overcome this, I split S into several non-intersecting subsets: S = S1 + S2 + ... + Sn and build a Di for each subset. Using n structures is less efficient than using one, but at least this way I can fit into the memory constraints. Since the size of f(S) grows faster than S itself, the combined size of the Di is much less than the size of D. However, it is still desirable to reduce n, i.e. the number of subsets, or to reduce the combined size of the Di. For this, I need to split S in such a way that each Si contains "similar" objects, because then f will produce a smaller output structure if the input objects are "similar enough" to each other. The problem is that while "similarity" of objects in S and the size of f(S) do correlate, there is no way to compute the latter other than just evaluating f(S), and f is not quite fast. The algorithm I currently have is to iteratively add each next object from S into one of the Si, so that this results in the least possible (at this stage) increase in combined Di size:
      for x in S:
          i = such i that size(f(Si + {x})) - size(f(Si)) is min
          Si = Si + {x}
    This gives practically useful results, but is certainly pretty far from the optimum (i.e. the minimal possible combined size). Also, this is slow. To speed it up somewhat, I compute size(f(Si + {x})) - size(f(Si)) only for those i where x is "similar enough" to the objects already in Si. Is there any standard approach to such kinds of problems? I know of the branch and bound algorithm family, but it cannot be applied here because it would be prohibitively slow. My guess is that it is simply not possible to compute the optimal distribution of S into the Si in reasonable time. But is there some common iteratively improving algorithm?
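
    For reference, the greedy loop above is easy to state in code with the expensive cost function and the cheap "similar enough" pre-filter kept abstract; this is just a restatement of the approach already described, not a new algorithm, and the interface and method names are illustrative assumptions:
      import java.util.ArrayList;
      import java.util.List;

      class GreedyPartition {
          interface CostModel<T> {
              long sizeOfF(List<T> subset);               // = size(f(Si)), expensive to evaluate
              boolean similarEnough(List<T> subset, T x); // cheap pruning heuristic
          }

          static <T> List<List<T>> partition(Iterable<T> items, int n, CostModel<T> cost) {
              List<List<T>> subsets = new ArrayList<>();
              for (int i = 0; i < n; i++) subsets.add(new ArrayList<>());
              for (T x : items) {
                  List<T> best = null;
                  long bestIncrease = Long.MAX_VALUE;
                  for (List<T> s : subsets) {
                      if (!s.isEmpty() && !cost.similarEnough(s, x)) continue;  // prune
                      long before = cost.sizeOfF(s);
                      s.add(x);                                    // tentatively add x
                      long increase = cost.sizeOfF(s) - before;
                      s.remove(s.size() - 1);                      // undo the tentative add
                      if (increase < bestIncrease) { bestIncrease = increase; best = s; }
                  }
                  if (best == null) best = subsets.get(0);         // fall back if all were pruned
                  best.add(x);
              }
              return subsets;
          }
      }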

    Read the article

  • Select statement with multiple 'where' fields using same value without duplicating text

    - by kdbdallas
    I will start by saying that I don't think what I want can be done, but that said, I am hoping I am wrong and someone knows more than me. So here is your chance... Prove you are smarter than me :) I want to do a search against a SQLite table looking for any records that "are similar" without having to write out the query in longhand. To clarify, this is how I know I can write the query: select * from Articles where title like '%Bla%' or category like '%Bla%' or post like '%Bla%' This works and is not a huge deal if you are only checking against a couple of columns, but if you need to check against a bunch then your query can get really long and nasty looking really fast, not to mention the chance for typos (i.e. 'Bla%' instead of '%Bla%'). What I am wondering is if there is a shorthand way to do this? *This next code does not work the way I want, but it just shows kind of what I am looking for: select * from Articles where title or category or post like '%Bla%' Anyone know if there is a way to specify that multiple 'where' columns should use the same search value without listing that same search value for every column? Thanks in advance!

    Read the article

  • What is the correct stage to use for Google Guice in production in an application server?

    - by Yishai
    It seems like a strange question (the obvious answer would be Production, duh), but if you read the Java docs:
      /**
       * We want fast startup times at the expense of runtime performance and some up front error
       * checking.
       */
      DEVELOPMENT,

      /**
       * We want to catch errors as early as possible and take performance hits up front.
       */
      PRODUCTION
    Assuming a scenario where you have a stateless call to an application server, the initial receiving method (or thereabouts) creates the injector anew on every call. If all of the module bindings are not needed in a given call, then it would seem better to use the Development stage (which is the default) and not take the performance hit up front, because you may never take it at all, and here the distinction between "up front" and "runtime performance" is kind of moot, as it is one call. Of course the downside of this would appear to be that you lose the error checking, leaving potential code paths to fail by surprise. So the question boils down to: are the assumptions in the above correct? Will you save performance on a large set of modules when the given lifetime of an injector is one call?
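
    For reference, the stage is simply the first argument to Guice.createInjector, so both variants are easy to benchmark in the code path that currently builds the injector per call; a minimal sketch (the module is a placeholder), which also shows the common alternative of building one injector and reusing it, since that sidesteps the per-call start-up cost that makes the stage choice matter in the first place:
      import com.google.inject.AbstractModule;
      import com.google.inject.Guice;
      import com.google.inject.Injector;
      import com.google.inject.Stage;

      public class InjectorHolder {
          static class AppModule extends AbstractModule {
              @Override protected void configure() { /* bindings go here */ }
          }

          // Built once, reused for every call; swap Stage.PRODUCTION for Stage.DEVELOPMENT
          // to compare start-up cost against per-call cost in your own measurements.
          private static final Injector INJECTOR =
                  Guice.createInjector(Stage.PRODUCTION, new AppModule());

          public static Injector get() { return INJECTOR; }
      }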

    Read the article

  • Why do multiple sessions started on the same page get the same session id?

    - by Calmarius
    I tried the following:
      <?php
      session_name('user');
      session_start(); // sets a 'user' cookie with the session id.
      // This session stores the user's login data.
      // It's an ordinary session.
      $USERSESSION = $_SESSION; // saving this session
      $userSessId = session_id();
      session_write_close(); // closing this session

      session_name('chatroom');
      session_start(); // sets a 'chatroom' cookie but it contains the same session id and points to the same data :( .
      // This session would be used to store the chat text. This session would be
      // shared between all users joined in this chat room by setting the same session id for all members.
      // Calling session_regenerate_id would make a different session id for every user and it wouldn't be a chat room anymore.
      ?>
    So I want to do a chat-like thing with sessions. On the client side it would be done with Ajax that polls this PHP page every 5-10 seconds. Sessions may be cached in the server's memory so they can be accessed fast. I can store the chat in the database, but my service runs on a free webhost which is limited: only 4 MySQL connections are allowed at a time, which is almost nothing. I try to touch my database as few times as possible.

    Read the article

  • Is there any performance issue using Row_Number to implement table paging in Sql Server 2008?

    - by majkinetor
    I want to implement table paging using this method:
      SET @PageNum = 2;
      SET @PageSize = 10;

      WITH OrdersRN AS
      (
          SELECT ROW_NUMBER() OVER(ORDER BY OrderDate, OrderID) AS RowNum, *
          FROM dbo.Orders
      )
      SELECT *
      FROM OrdersRN
      WHERE RowNum BETWEEN (@PageNum - 1) * @PageSize + 1 AND @PageNum * @PageSize
      ORDER BY OrderDate, OrderID;
    Is there anything I should be aware of? The table has millions of records. Thx. EDIT: After using the suggested MAXROWS method for some time (which works really, really fast) I had to switch back to the ROW_NUMBER method because of its greater flexibility. I am also very happy about its speed so far (I am working with a View having more than 1M records with 10 columns). To use any kind of query I use the following modification:
      PROCEDURE [dbo].[PageSelect]
      (
          @Sql nvarchar(512),
          @OrderBy nvarchar(128) = 'Id',
          @PageNum int = 1,
          @PageSize int = 0
      )
      AS
      BEGIN
          SET NOCOUNT ON
          Declare @tsql as nvarchar(1024)
          Declare @i int, @j int

          if (@PageSize <= 0) OR (@PageSize > 10000)
              SET @PageSize = 10000 -- never return more than 10K records

          SET @i = (@PageNum - 1) * @PageSize + 1
          SET @j = @PageNum * @PageSize

          SET @tsql = 'WITH MyTableOrViewRN AS
              (
                  SELECT ROW_NUMBER() OVER(ORDER BY ' + @OrderBy + ') AS RowNum, *
                  FROM MyTableOrView
                  WHERE ' + @Sql + '
              )
              SELECT * FROM MyTableOrViewRN
              WHERE RowNum BETWEEN ' + CAST(@i as varchar) + ' AND ' + cast(@j as varchar)

          exec(@tsql)
      END
    If you use this procedure, make sure you prevent SQL injection.

    Read the article

  • Dealing with rapid tapping on Buttons

    - by Eric Burke
    I have a Button with an OnClickListener. For illustrative purposes, consider a button that shows a modal dialog:
      public class SomeActivity ... {
          protected void onCreate(Bundle state) {
              super.onCreate(state);
              findViewById(R.id.ok_button).setOnClickListener(
                  new View.OnClickListener() {
                      public void onClick(View v) {
                          // This should block input
                          new AlertDialog.Builder(SomeActivity.this)
                              .setCancelable(true)
                              .show();
                      }
                  });
          }
    Under normal usage, the alert dialog appears and blocks further input. Users must dismiss the dialog before they can tap the button again. But sometimes the button's OnClickListener is called twice before the dialog appears. You can duplicate this fairly easily by tapping really fast on the button. I generally have to try several times before it happens, but sooner or later I'll trigger multiple onClick(...) calls before the dialog blocks input. I see this behavior in Android 2.1 on the Motorola Droid phone. We've received 4 crash reports in the Market, indicating this occasionally happens to people. Depending on what our OnClickListeners do, this causes all sorts of havoc. How can we guarantee that blocking dialogs actually block input after the first tap?
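
    One common workaround, sketched below against the onCreate code above, is to debounce in the listener itself: remember when the last accepted click happened and swallow any tap that arrives within a short window (the 1000 ms threshold is an arbitrary choice, and SystemClock comes from android.os):
      findViewById(R.id.ok_button).setOnClickListener(
          new View.OnClickListener() {
              private long lastClickMillis = 0;

              public void onClick(View v) {
                  long now = SystemClock.elapsedRealtime();
                  if (now - lastClickMillis < 1000) {
                      return;                       // drop the extra taps from a fast burst
                  }
                  lastClickMillis = now;
                  new AlertDialog.Builder(SomeActivity.this)
                      .setCancelable(true)
                      .show();
              }
          });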

    Read the article

  • Continuing JavaScript "classes" - enums within

    - by espais
    From a previous question, I have the following: So I have implemented a resource class, now I'd like to continue extending it and add all my constants and enums (or as far as JS will allow...). This is what I currently have:
      var resources = {
          // images
          player  : new c_resource("res/player.png"),
          enemies : new c_resource("res/enemies.png"),
          tilemap : new c_resource("res/tilemap.png")
      };
    And this is what I would like to continue to extend it to:
      var resources = {
          // images
          player  : new c_resource("res/player.png"),
          enemies : new c_resource("res/enemies.png"),
          tilemap : new c_resource("res/tilemap.png"),

          // enums
          directions : {up:0, right:1, down:2, left:3},
          speeds     : {slow: 1, medium: 3, fast: 5}
      };

      ...

      function enemies()
      {
          this.dir = resources.directions.down; // initialize to down
      }
    When I attempt to access resources.directions.up, my JS script goes down in a flaming pile of burning code. Are enums allowed in this context, and if not, how can I properly insert them to be used outside of a normal function? I have also tried defining them as global to a similar effect. edits: fixed the comma...that was just an error in transcribing it. When I run it in Firefox and watch the console, I get an error that says resources is undefined. The resources 'class' is defined at the top of my script, and function enemies() directly follows...so from what I understand it should still be in scope...

    Read the article

  • Hang during databinding of large amount of data to WPF DataGrid

    - by nihi_l_ist
    I'm using the WPFToolkit DataGrid control and do the binding in this way:
      <WpfToolkit:DataGrid x:Name="dgGeneral"
                           SelectionMode="Single"
                           SelectionUnit="FullRow"
                           AutoGenerateColumns="False"
                           CanUserAddRows="False"
                           CanUserDeleteRows="False"
                           Grid.Row="1"
                           ItemsSource="{Binding Path=Conversations}" >

      public List<CONVERSATION> Conversations
      {
          get { return conversations; }
          set
          {
              if (conversations != value)
              {
                  conversations = value;
                  NotifyPropertyChanged("Conversations");
              }
          }
      }

      public event PropertyChangedEventHandler PropertyChanged;

      public void NotifyPropertyChanged(string propertyName)
      {
          if (PropertyChanged != null)
          {
              PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
          }
      }

      public void GenerateData()
      {
          BackgroundWorker bw = new BackgroundWorker();
          bw.WorkerSupportsCancellation = bw.WorkerReportsProgress = true;
          List<CONVERSATION> list = new List<CONVERSATION>();
          bw.DoWork += delegate { list = RefreshGeneralData(); };
          bw.RunWorkerCompleted += delegate
          {
              try
              {
                  Conversations = list;
              }
              catch (Exception ex)
              {
                  CustomException.ExceptionLogCustomMessage(ex);
              }
          };
          bw.RunWorkerAsync();
      }
    And then in the main window I call GenerateData() after setting the DataContext of the window to an instance of the class containing GenerateData(). RefreshGeneralData() returns some list of data I want and it returns it fast. Overall there are nearly 2000 records and 6 columns (I'm not posting the code I used during the grid's initialization, because I don't think it can be the reason) and the grid hangs for almost 10 secs!

    Read the article

  • What is the fastest way to filter a list of strings when making an Intellisense/Autocomplete list?

    - by user559548
    Hello everyone, I'm writing an Intellisense/Autocomplete like the one you find in Visual Studio. It's all fine up until the list contains probably 2000+ items. I'm using a simple LINQ statement for doing the filtering:
      var filterCollection = from s in listCollection
                             where s.FilterValue.IndexOf(currentWord, StringComparison.OrdinalIgnoreCase) >= 0
                             orderby s.FilterValue
                             select s;
    I then assign this collection to a WPF Listbox's ItemsSource, and that's the end of it, it works fine. Note that the Listbox is also virtualised, so there will only be at most 7-8 visual elements in memory and in the visual tree. However, the caveat right now is that when the user types extremely fast in the richtextbox, and on every key up I execute the filtering + binding, there's this semi-race condition, or out-of-sync filtering, where the first key stroke's filtering could still be doing its filtering or binding work while the fourth key stroke is also doing the same. I know I could put in a delay before applying the filter, but I'm trying to achieve a seamless filtering much like the one in Visual Studio. I'm not sure where my problem exactly lies, so I'm also attributing it to IndexOf's string operation, or perhaps my list of strings could be optimised with some kind of index that could speed up searching. Any suggestions or code samples are much welcome. Thanks.

    Read the article

  • Code Golf: Counting XML elements in file on Android

    - by CSharperWithJava
    Take a simple XML file formatted like this:
      <Lists>
        <List>
          <Note/>
          ...
          <Note/>
        </List>
        <List>
          <Note/>
          ...
          <Note/>
        </List>
      </Lists>
    Each node has some attributes that actually hold the data of the file. I need a very quick way to count the number of each type of element (List and Note). Lists is simply the root and doesn't matter. I can do this with a simple string search or something similar, but I need to make this as fast as possible. Design parameters:
      - Must be in Java (Android application).
      - Must AVOID allocating memory as much as possible.
      - Must return the total number of Note elements and the number of List elements in the file, regardless of location in the file.
      - Number of Lists will typically be small (1-4), and the number of Notes can potentially be very large (upwards of 1000, typically 100) per file.
    I look forward to your suggestions.
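
    A streaming pull parser keeps allocation close to zero because it never builds a tree; a minimal sketch using the XmlPullParser that ships with Android (it assumes the file can be handed over as a Reader, and the element names are taken from the example above):
      import java.io.IOException;
      import java.io.Reader;

      import org.xmlpull.v1.XmlPullParser;
      import org.xmlpull.v1.XmlPullParserException;

      import android.util.Xml;

      public final class NoteCounter {
          /** Returns { number of List elements, number of Note elements }. */
          public static int[] count(Reader in) throws XmlPullParserException, IOException {
              XmlPullParser parser = Xml.newPullParser();
              parser.setInput(in);
              int lists = 0, notes = 0;
              for (int event = parser.getEventType();
                   event != XmlPullParser.END_DOCUMENT;
                   event = parser.next()) {
                  if (event == XmlPullParser.START_TAG) {          // one pass, no tree built
                      String name = parser.getName();
                      if ("List".equals(name)) lists++;
                      else if ("Note".equals(name)) notes++;
                  }
              }
              return new int[] { lists, notes };
          }
      }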

    Read the article

  • Big-O for calculating all routes from GPS data

    - by HH
    A non-critical GPS module uses lists because it needs to be modifiable: new routes added, new distances calculated, continuous comparisons. Or so I thought, but my team member wrote something I find very hard to get into. His pseudo code:
      int k = 0;
      a[][] <- create mapModuleNearbyDotList -array   // CPU O(n)
      for (j = 1 to n)                                // O(n log(m))
          for (i = 1 to n)
              for (k = 1 to n)
                  if (dot is nearby)
                      adj[i][j] = min(adj[i][j], adj[i][k] + adj[k][j]);
    His ideas: transformations of lists to tables. His worst-case time complexity is O(n^3), where n is the number of elements in his so-called table. Exception to the last point with a finite structure: O(m log(n)), where n is the number of vertices and m is the number of neighbour vertices. Questions about his ideas:
      - Why waste resources transforming constantly-modified lists to a table?
      - Fast? This is the only point where I to some extent agree, but cannot understand.
      - The same upper limit n for each for-loop -- perhaps he supposed it to be circular?
      - Why does the code take O(m log(n)) to proceed in time as a finite structure? The term finite may be wrong, explicit?
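
    For comparison, the relaxation in that pseudo code resembles the Floyd-Warshall all-pairs shortest-path algorithm, which is where the O(n^3) bound comes from; note, though, that the textbook version needs the intermediate-vertex index k in the outermost loop for the result to be correct. A minimal reference sketch:
      // adj[i][j] holds the current best distance between vertices i and j;
      // Double.POSITIVE_INFINITY marks "no route known yet".
      final class AllPairs {
          static void floydWarshall(double[][] adj) {
              int n = adj.length;
              for (int k = 0; k < n; k++)           // intermediate vertex must be outermost
                  for (int i = 0; i < n; i++)
                      for (int j = 0; j < n; j++)
                          if (adj[i][k] + adj[k][j] < adj[i][j])
                              adj[i][j] = adj[i][k] + adj[k][j];
          }
      }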

    Read the article
