Search Results

Search found 48586 results on 1944 pages for 'page performance'.

  • MySQL way to make a lock in a PHP page

    - by Cris
    Hello, I have the following MySQL table:

        myTable:
            id      int auto_increment
            voucher int not null
            id_user int null

    I've populated the voucher field with values from 1 to 100000, so I have 100000 records. When a user clicks a button on a PHP page, I need to allocate a record for that user, so I do something like:

        UPDATE myTable SET id_user = XXX
        WHERE voucher = (SELECT * FROM (SELECT MIN(voucher) FROM myTable WHERE id_user IS NULL) v);

    The problem is that I don't use locks, and I should, because if two users click at the same moment I risk assigning the same voucher to two different people (two updates hit the same record, so I lose one user)... I think there must be a correct way to do this. Can you help me please? Thanks! cris
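
    One way to avoid explicit locks is to make the claim a single atomic UPDATE. Below is a minimal sketch, written in Python with the PyMySQL driver purely for illustration (the driver and connection details are assumptions; the same two SQL statements work from PHP's mysqli as well). MySQL allows ORDER BY and LIMIT in a single-table UPDATE, and the documented LAST_INSERT_ID(expr) idiom lets each connection read back which voucher it claimed:

        import pymysql

        def claim_voucher(conn, user_id):
            """Atomically claim the lowest free voucher; returns None if sold out."""
            with conn.cursor() as cur:
                # One statement = one atomic claim; MySQL row-locks the chosen
                # row, so two concurrent clicks cannot get the same voucher.
                cur.execute(
                    "UPDATE myTable "
                    "SET id_user = %s, voucher = LAST_INSERT_ID(voucher) "
                    "WHERE id_user IS NULL "
                    "ORDER BY voucher LIMIT 1",
                    (user_id,),
                )
                if cur.rowcount == 0:
                    return None  # no vouchers left
                # LAST_INSERT_ID() is per-connection, so this read is race-free.
                cur.execute("SELECT LAST_INSERT_ID()")
                voucher = cur.fetchone()[0]
            conn.commit()
            return voucher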

  • Impact of ordering of correlated subqueries within a projection

    - by Michael Petito
    I'm noticing something a bit unexpected with how SQL Server (SQL Server 2008 in this case) treats correlated subqueries within a select statement. My assumption was that a query plan should not be affected by the mere order in which subqueries (or columns, for that matter) are written within the projection clause of the select statement. However, this does not appear to be the case. Consider the following two queries, which are identical except for the ordering of the subqueries within the CTE:

        --query 1: subquery for Color is second
        WITH vw AS
        (
            SELECT p.[ID],
                (SELECT TOP(1) [FirstName] FROM [Preference]
                 WHERE p.ID = ID AND [FirstName] IS NOT NULL
                 ORDER BY [LastModified] DESC) [FirstName],
                (SELECT TOP(1) [Color] FROM [Preference]
                 WHERE p.ID = ID AND [Color] IS NOT NULL
                 ORDER BY [LastModified] DESC) [Color]
            FROM Person p
        )
        SELECT ID, Color, FirstName FROM vw WHERE Color = 'Gray';

        --query 2: subquery for Color is first
        WITH vw AS
        (
            SELECT p.[ID],
                (SELECT TOP(1) [Color] FROM [Preference]
                 WHERE p.ID = ID AND [Color] IS NOT NULL
                 ORDER BY [LastModified] DESC) [Color],
                (SELECT TOP(1) [FirstName] FROM [Preference]
                 WHERE p.ID = ID AND [FirstName] IS NOT NULL
                 ORDER BY [LastModified] DESC) [FirstName]
            FROM Person p
        )
        SELECT ID, Color, FirstName FROM vw WHERE Color = 'Gray';

    If you look at the two query plans, you'll see that an outer join is used for each subquery and that the order of the joins is the same as the order the subqueries are written. There is a filter applied to the result of the outer join for Color, to filter out rows where the color is not 'Gray'. (It's odd to me that SQL would use an outer join for the Color subquery since I have a non-null constraint on the result of the Color subquery, but OK.) Most of the rows are removed by the Color filter. The result is that query 2 is significantly cheaper than query 1, because fewer rows are involved with the second join. All reasons for constructing such a statement aside, is this expected behavior? Shouldn't SQL Server opt to move the filter as early as possible in the query plan, regardless of the order the subqueries are written?

  • Reset Java Applet on page reload

    - by tommy-susanto
    So I have a .jar applet on my website: a simulation game that spawns an army unit whenever the user clicks on the screen. However, whenever I refresh the page, the previous army is still there on the screen. I want the applet to be reset, as if we were just starting to run the application for the first time. Right now I need to quit Firefox and restart it for the applet to be refreshed, which is annoying since I'm still programming it and the class files keep changing. Am I missing some code that would let the browser refresh the applet instead of taking the one from the cache? I already tried pressing CTRL+F5, but that trick doesn't seem to work. I'd basically like to do it automatically using a script or something, so the user doesn't have to press anything on the keyboard. Any suggestions? I'd really appreciate it. Thank you.

  • Seeking to zoom in on an image, slowly, on page load, using Jquery

    - by Heath Waller
    Hello, I am looking to give the impression of zooming in from a large image to a detailed area of that image, over time, when the page loads, using JavaScript (jQuery, preferably). I have been given the following Flash site as a reference (the action happens just after the title fades in): http://www.delicatessennyc.com/ I'm not sure this is even possible. I know I could switch out the large image for a smaller one, but my boss is hell-bent on achieving this Flash-type action using JavaScript. Please note that I've searched the plugins and have not found anything like what I'm looking for, and I'm so new to this type of coding that cobbling it together from scratch seems daunting. Thank you all so much.

  • Make compiler copy characters using movsd

    - by Suma
    I would like to copy a relatively short sequence of memory (less than 1 KB, typically 2-200 bytes) in a time-critical function. The best code for this on the CPU side seems to be rep movsd. However, I somehow cannot get my compiler to generate this code. I had hoped (and I vaguely remember seeing it) that using memcpy would do this via a compiler built-in intrinsic, but based on disassembly and debugging, the compiler is calling the memcpy/memmove library implementation instead. I also hoped the compiler might be smart enough to recognize the following loop and use rep movsd on its own, but it seems it does not:

        char *dst;
        const char *src;
        // ...
        for (int r = size; --r >= 0; )
            *dst++ = *src++;

    Is there some way to make the Visual Studio compiler generate a rep movsd sequence, other than using inline assembly?

  • Bulk inserts into an SQLite db on the iPhone...

    - by akaii
    I'm inserting a batch of 100 records, each containing a dictionary with arbitrarily long HTML strings, and by god, it's slow. On the iPhone, the run loop blocks for several seconds during this transaction. Is my only recourse to use another thread? I'm already using several for acquiring data from HTTP servers, and the SQLite documentation explicitly discourages threading with the database, even though it's supposed to be thread-safe... Is there something I'm doing extremely wrong that, if fixed, would drastically reduce the time it takes to complete the whole operation?

        NSString *statement = @"BEGIN EXCLUSIVE TRANSACTION";
        sqlite3_stmt *beginStatement;
        if (sqlite3_prepare_v2(database, [statement UTF8String], -1, &beginStatement, NULL) != SQLITE_OK) {
            printf("db error: %s\n", sqlite3_errmsg(database));
            return;
        }
        if (sqlite3_step(beginStatement) != SQLITE_DONE) {
            sqlite3_finalize(beginStatement);
            printf("db error: %s\n", sqlite3_errmsg(database));
            return;
        }

        NSTimeInterval timestampB = [[NSDate date] timeIntervalSince1970];

        statement = @"INSERT OR REPLACE INTO item (hash, tag, owner, timestamp, dictionary) VALUES (?, ?, ?, ?, ?)";
        sqlite3_stmt *compiledStatement;
        if (sqlite3_prepare_v2(database, [statement UTF8String], -1, &compiledStatement, NULL) == SQLITE_OK) {
            for (int i = 0; i < [items count]; i++) {
                NSMutableDictionary *item = [items objectAtIndex:i];
                NSString *tag = [item objectForKey:@"id"];
                NSInteger hash = [[NSString stringWithFormat:@"%@%@", tag, ownerID] hash];
                NSInteger timestamp = [[item objectForKey:@"updated"] intValue];
                NSData *dictionary = [NSKeyedArchiver archivedDataWithRootObject:item];

                sqlite3_bind_int(compiledStatement, 1, hash);
                sqlite3_bind_text(compiledStatement, 2, [tag UTF8String], -1, SQLITE_TRANSIENT);
                sqlite3_bind_text(compiledStatement, 3, [ownerID UTF8String], -1, SQLITE_TRANSIENT);
                sqlite3_bind_int(compiledStatement, 4, timestamp);
                sqlite3_bind_blob(compiledStatement, 5, [dictionary bytes], [dictionary length], SQLITE_TRANSIENT);

                while (YES) {
                    NSInteger result = sqlite3_step(compiledStatement);
                    if (result == SQLITE_DONE) {
                        break;
                    } else if (result != SQLITE_BUSY) {
                        printf("db error: %s\n", sqlite3_errmsg(database));
                        break;
                    }
                }
                sqlite3_reset(compiledStatement);
            }
        }

        timestampB = [[NSDate date] timeIntervalSince1970] - timestampB;
        NSLog(@"Insert Time Taken: %f", timestampB);

        // COMMIT
        statement = @"COMMIT TRANSACTION";
        sqlite3_stmt *commitStatement;
        if (sqlite3_prepare_v2(database, [statement UTF8String], -1, &commitStatement, NULL) != SQLITE_OK) {
            printf("db error: %s\n", sqlite3_errmsg(database));
        }
        if (sqlite3_step(commitStatement) != SQLITE_DONE) {
            printf("db error: %s\n", sqlite3_errmsg(database));
        }

        sqlite3_finalize(beginStatement);
        sqlite3_finalize(compiledStatement);
        sqlite3_finalize(commitStatement);

  • Practical approach to concurrency control

    - by Industrial
    Hi everyone, I read the article below recently and am very interested in how to take a practical approach to concurrency control on a web server. The server will run CentOS + PHP + MySQL with Memcached. How would you set it up to work? http://saasinterrupted.com/2010/02/05/high-availability-principle-concurrency-control/ Thanks!
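
    One standard pattern for web-server concurrency control (whether or not it is the one the linked article settles on) is optimistic locking: store a version number with each row and make every write conditional on it. A toy sketch in Python's built-in sqlite3 module, purely for illustration; the table layout is invented, and the same compare-and-set shape carries over to MySQL, or to memcached's CAS operation:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INT, version INT)")
        conn.execute("INSERT INTO account VALUES (1, 100, 0)")

        def withdraw(amount):
            # Read the current state together with its version number.
            balance, version = conn.execute(
                "SELECT balance, version FROM account WHERE id = 1").fetchone()
            if balance < amount:
                return False
            # The UPDATE succeeds only if nobody bumped the version since we
            # read it; on failure the caller retries with fresh data.
            cur = conn.execute(
                "UPDATE account SET balance = ?, version = version + 1 "
                "WHERE id = 1 AND version = ?",
                (balance - amount, version))
            return cur.rowcount == 1

        print(withdraw(30))  # True: no concurrent writer interfered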

  • How well do zippers perform in practice, and when should they be used?

    - by Rob
    I think that the zipper is a beautiful idea; it elegantly provides a way to walk a list or tree and make what appear to be local updates in a functional way. Asymptotically, the costs appear to be reasonable. But traversing the data structure requires memory allocation at each iteration, where a normal list or tree traversal is just pointer chasing. This seems expensive (please correct me if I am wrong). Are the costs prohibitive? And under what circumstances would it be reasonable to use a zipper?
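
    To make the allocation pattern concrete, here is a toy list zipper in Python (an illustrative translation only; the structure is usually presented in Haskell or OCaml). Each move allocates exactly one new cons cell and shares everything else, which is the per-step allocation the question weighs against plain pointer chasing:

        # Persistent cons lists: None is nil, (head, tail) is a cell.
        def cons(h, t): return (h, t)

        # A zipper over a cons list: (reversed prefix, focus, suffix).
        def from_pylist(xs):
            suffix = None
            for x in reversed(xs[1:]):
                suffix = cons(x, suffix)
            return (None, xs[0], suffix)

        def right(z):
            prefix, focus, suffix = z
            head, tail = suffix
            # O(1): one new cons cell, all other cells are shared.
            return (cons(focus, prefix), head, tail)

        def set_focus(z, value):
            prefix, _, suffix = z
            return (prefix, value, suffix)  # O(1) "local" update

        def to_pylist(z):
            prefix, focus, suffix = z
            out = [focus]
            while suffix is not None:
                out.append(suffix[0]); suffix = suffix[1]
            while prefix is not None:
                out.insert(0, prefix[0]); prefix = prefix[1]
            return out

        z = from_pylist([1, 2, 3, 4])
        z = right(right(z))          # focus moves to 3
        z = set_focus(z, 99)
        print(to_pylist(z))          # [1, 2, 99, 4]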

  • Pre-generating GUIDs for use in python?

    - by rjuiaa1
    I have a python program that needs to generate several guids and hand them back with some other data to a client over the network. It may be hit with a lot of requests in a short time period and I would like the latency to be as low as reasonably possible. Ideally, rather than generating new guids on the fly as the client waits for a response, I would rather be bulk-generating a list of guids in the background that is continually replenished so that I always have pre-generated ones ready to hand out. I am using the uuid module in python on linux. I understand that this is using the uuidd daemon to get uuids. Does uuidd already take care of pre-genreating uuids so that it always has some ready? From the documentation it appears that it does not. Is there some setting in python or with uuidd to get it to do this automatically? Is there a more elegant approach then manually creating a background thread in my program that maintains a list of uuids?
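
    For comparison, here is a minimal sketch of what the manual approach would look like using only the standard library (the pool size and the choice of uuid4 are assumptions). A bounded queue.Queue does the replenishing and the blocking for free:

        import queue
        import threading
        import uuid

        class UuidPool:
            """Keeps up to `size` pre-generated UUIDs ready to hand out."""

            def __init__(self, size=10000):
                self._pool = queue.Queue(maxsize=size)
                # put() blocks while the pool is full, so the worker sleeps
                # until a request drains an entry.
                threading.Thread(target=self._fill, daemon=True).start()

            def _fill(self):
                while True:
                    self._pool.put(uuid.uuid4())

            def get(self):
                # Served from the pre-generated pool; generates inline only
                # if the pool is momentarily empty under a request burst.
                try:
                    return self._pool.get_nowait()
                except queue.Empty:
                    return uuid.uuid4()

        pool = UuidPool()
        print(pool.get())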

  • atomic operation cost

    - by osgx
    Hello. What is the cost of an atomic operation? How many cycles does it consume? Will it pause other processors on SMP or NUMA systems, or will it block memory accesses? Will it flush the reorder buffer in an out-of-order CPU? What effects will it have on the cache? Thanks.

  • Single Page Navigation w/ Javascript and Back Button

    - by Khan
    Ok, so I have a "navigation" div and a "content" div. When something in the navigation is clicked, I fade in the content div with the new data. Now I would like the old data to be restored when the user hits the "back" button in the browser, but I'm having a hard time doing this. I know I can use named anchors so the user stays on the same page with a breadcrumb trail in the URL, but that's as far as I get. Can I listen for a back-button click? Can I set content to display when a certain anchor name is reached? Thanks in advance for your help, SO.

  • How to convert *.aspx (HTML) page to Image format

    - by Swapnil Fegade
    I want to convert an *.aspx (HTML) page's user interface to an image such as JPEG. I am using the code below:

        Protected Sub btnGet_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles btnGet.Click
            saveURLToImage("http://google.co.in")
        End Sub

        Private Sub saveURLToImage(ByVal url As String)
            If Not String.IsNullOrEmpty(url) Then
                Dim content As String = ""
                Dim webRequest__1 As System.Net.WebRequest = WebRequest.Create(url)
                Dim webResponse As System.Net.WebResponse = webRequest__1.GetResponse()
                Dim sr As System.IO.StreamReader = New StreamReader(webResponse.GetResponseStream(), System.Text.Encoding.GetEncoding("UTF-8"))
                content = sr.ReadToEnd()
                'save to file
                Dim b As Byte() = Convert.FromBase64String(content)
                Dim ms As New System.IO.MemoryStream(b, 0, b.Length)
                Dim img As System.Drawing.Image = System.Drawing.Image.FromStream(ms)
                img.Save("c:\pic.jpg", System.Drawing.Imaging.ImageFormat.Jpeg)
                img.Dispose()
                ms.Close()
            End If
        End Sub

    But I am getting the error "Invalid character in a Base-64 string" at the line: Dim b As Byte() = Convert.FromBase64String(content)

  • Dynamic page with JSF

    - by Milan
    Hello everybody! I need to make a dynamic web page/form in JSF. Dynamic: at runtime I get the number of input fields I'll have, together with the buttons. I'll try to demonstrate what I need to build:

        inputField1 inputField2 inputField3 BUTTON1
        inputField4 inputField5 BUTTON2
        inputField6 inputField7 inputField8 inputField9 BUTTON3

    I hope you understand what I want to make. When I click one of the buttons, I have to collect the data from the input fields that belong to it and do something with that data: BUTTON1 collects from inputField1, inputField2, inputField3; BUTTON2 from inputField4, inputField5; and so on. Probably I should create the input fields programmatically and group them in forms. Any idea how to do it?

  • Python Sets vs Lists

    - by mvid
    In Python, which data structure is more efficient/speedy? Assuming that order is not important to me and I would be checking for duplicates anyway, is a Python set slower than a Python list?
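
    A quick way to see the difference is to measure membership tests directly. A rough micro-benchmark sketch (sizes and the probe value are arbitrary); note that building the set itself costs O(n) time and extra memory, which only pays off if you test membership more than once:

        import random
        import timeit

        items = list(range(100000))
        as_set = set(items)
        probe = random.randrange(200000)   # may or may not be present

        # Membership: average O(n) scan for a list, O(1) hash lookup for a set.
        list_time = timeit.timeit(lambda: probe in items, number=1000)
        set_time = timeit.timeit(lambda: probe in as_set, number=1000)
        print(f"list: {list_time:.4f}s   set: {set_time:.6f}s")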

  • Is there a better approach to minify html generated from aspx page

    - by Hoque
    I am using the following code to minify the HTML generated from an aspx page at runtime:

        protected override void Render(HtmlTextWriter writer)
        {
            TextWriter output = new StringWriter();
            base.Render(new HtmlTextWriter(output));
            String html = output.ToString();
            html = Regex.Replace(html, @"\n|\t", " ");
            html = Regex.Replace(html, @">\s+<", "><").Trim();
            html = Regex.Replace(html, @"\s{2,}", " ");
            writer.Write(html);
        }

    Is there a better approach to do the same? Thank you so much.

  • Of these 3 methods for reading linked lists from shared memory, why is the 3rd fastest?

    - by Joseph Garvin
    I have a 'server' program that updates many linked lists in shared memory in response to external events. I want client programs to notice an update on any of the lists as quickly as possible (lowest latency). The server marks a linked list node's state_ as FILLED once its data is filled in and its next pointer has been set to a valid location. Until then, its state_ is NOT_FILLED_YET. I am using memory barriers to make sure that clients don't see the state_ as FILLED before the data within is actually ready (and it seems to work; I never see corrupt data). Also, state_ is volatile to be sure the compiler doesn't lift the client's checking of it out of loops. Keeping the server code exactly the same, I've come up with 3 different methods for the client to scan the linked lists for changes. The question is: why is the 3rd method fastest?

    Method 1: Round-robin over all the linked lists (called 'channels') continuously, looking to see if any nodes have changed to 'FILLED':

        void method_one()
        {
            std::vector<Data*> channel_cursors;
            for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i)
            {
                Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
                channel_cursors.push_back(current_item);
            }

            while(true)
            {
                for(std::size_t i = 0; i < channel_list.size(); ++i)
                {
                    Data* current_item = channel_cursors[i];

                    ACQUIRE_MEMORY_BARRIER;
                    if(current_item->state_ == NOT_FILLED_YET) {
                        continue;
                    }

                    log_latency(current_item->tv_sec_, current_item->tv_usec_);

                    channel_cursors[i] = static_cast<Data*>(current_item->next_.get(segment));
                }
            }
        }

    Method 1 gave very low latency when the number of channels was small, but when the number of channels grew (250K+) it became very slow because of looping over all the channels. So I tried...

    Method 2: Give each linked list an ID. Keep a separate 'update list' to the side. Every time one of the linked lists is updated, push its ID onto the update list. Now we just need to monitor the single update list, and check the IDs we get from it.

        void method_two()
        {
            std::vector<Data*> channel_cursors;
            for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i)
            {
                Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
                channel_cursors.push_back(current_item);
            }

            UpdateID* update_cursor = static_cast<UpdateID*>(update_channel.tail_.get(segment));

            while(true)
            {
                if(update_cursor->state_ == NOT_FILLED_YET) {
                    continue;
                }

                ::uint32_t update_id = update_cursor->list_id_;

                Data* current_item = channel_cursors[update_id];

                if(current_item->state_ == NOT_FILLED_YET) {
                    std::cerr << "This should never print." << std::endl; // it doesn't
                    continue;
                }

                log_latency(current_item->tv_sec_, current_item->tv_usec_);

                channel_cursors[update_id] = static_cast<Data*>(current_item->next_.get(segment));
                update_cursor = static_cast<UpdateID*>(update_cursor->next_.get(segment));
            }
        }

    Method 2 gave TERRIBLE latency. Whereas Method 1 might give under 10us latency, Method 2 would inexplicably often give 8ms latency! Using gettimeofday it appears that the change in update_cursor->state_ was very slow to propagate from the server's view to the client's (I'm on a multicore box, so I assume the delay is due to cache). So I tried a hybrid approach...

    Method 3: Keep the update list, but loop over all the channels continuously, and within each iteration check if the update list has been updated. If it has, go with the ID pushed onto it. If it hasn't, check the channel we've currently iterated to.

        void method_three()
        {
            std::vector<Data*> channel_cursors;
            for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i)
            {
                Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
                channel_cursors.push_back(current_item);
            }

            UpdateID* update_cursor = static_cast<UpdateID*>(update_channel.tail_.get(segment));

            while(true)
            {
                for(std::size_t i = 0; i < channel_list.size(); ++i)
                {
                    std::size_t idx = i;

                    ACQUIRE_MEMORY_BARRIER;
                    if(update_cursor->state_ != NOT_FILLED_YET)
                    {
                        //std::cerr << "Found via update" << std::endl;
                        i--;
                        idx = update_cursor->list_id_;
                        update_cursor = static_cast<UpdateID*>(update_cursor->next_.get(segment));
                    }

                    Data* current_item = channel_cursors[idx];

                    ACQUIRE_MEMORY_BARRIER;
                    if(current_item->state_ == NOT_FILLED_YET) {
                        continue;
                    }

                    found_an_update = true;

                    log_latency(current_item->tv_sec_, current_item->tv_usec_);
                    channel_cursors[idx] = static_cast<Data*>(current_item->next_.get(segment));
                }
            }
        }

    The latency of this method was as good as Method 1, but it scaled to large numbers of channels. The problem is, I have no clue why. Just to throw a wrench in things: if I uncomment the 'Found via update' line, it prints between EVERY latency log message, which means things are only ever found on the update list! So I don't understand how this method can be faster than Method 2. The full, compilable code (requires GCC and boost-1.41) that generates random strings as test data is at: http://pastebin.com/e3HuL0nr

  • Get forum page by PostID

    - by cem
    I can't figure out how this works: how does a forum find the page number (and the records for that page) from a post ID? The first option I can think of is to declare an index/int column in the post table and increase or decrease it when adding or deleting posts, but what happens when I delete the first row and the table has one million records? Do you have any idea about this? By the way, I'm using NHibernate and SQL Server 2005. Thank you.
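
    A minimal sketch of the usual counting approach, shown in Python with the standard-library sqlite3 module just for illustration (the table and column names are assumptions; the same COUNT(*) query works on SQL Server). No stored position column is needed, so deletes cost nothing:

        import sqlite3

        PAGE_SIZE = 20

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE post (id INTEGER PRIMARY KEY, topic_id INT, body TEXT)")
        conn.executemany(
            "INSERT INTO post (topic_id, body) VALUES (1, ?)",
            (("post %d" % n,) for n in range(100)),
        )

        def page_of_post(post_id, topic_id):
            # Rank = how many posts in the topic come before this one.
            (rank,) = conn.execute(
                "SELECT COUNT(*) FROM post WHERE topic_id = ? AND id < ?",
                (topic_id, post_id),
            ).fetchone()
            return rank // PAGE_SIZE + 1   # 1-based page number

        print(page_of_post(45, 1))  # 44 earlier posts -> page 3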

  • Screen scrape a web page that uses javaScript and frames

    - by Mello
    Hi, I want to scrape data from www.marktplaats.nl and analyze the scraped description, price, date and views in Excel/Access. I tried to scrape the data with Ruby (nokogiri, scrapi) but nothing worked (on other sites it worked well). The main problem is that, for example, SelectorGadget and the Firebug add-on (Firefox) don't find any CSS selectors I can use to scrape this page. On other sites I can extract the CSS with SelectorGadget or Firebug and use it with nokogiri or scrapi. Due to lack of experience it is difficult to identify the problem, and therefore searching for a solution isn't easy. Can you tell me where to start solving this problem, and where I might find more info about a similar scraping process? Thanks in advance!
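
    When a page builds its content with JavaScript inside frames, the data never appears in the raw HTML, so CSS selectors find nothing; one common workaround is to drive a real browser and read the DOM after the scripts have run. A rough sketch in Python with Selenium (a deliberate language/tool swap for illustration; the site structure and the .listing selector are invented, not the real markup):

        from selenium import webdriver
        from selenium.webdriver.common.by import By

        driver = webdriver.Firefox()  # needs geckodriver on PATH
        try:
            driver.get("http://www.marktplaats.nl")
            # Frames must be entered explicitly before their DOM is visible.
            for frame in driver.find_elements(By.TAG_NAME, "frame"):
                driver.switch_to.frame(frame)
                # Hypothetical selector; inspect the rendered page for real ones.
                for item in driver.find_elements(By.CSS_SELECTOR, ".listing"):
                    print(item.text)
                driver.switch_to.default_content()
        finally:
            driver.quit()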

  • clarification on yslow rules

    - by ooo
    I ran YSlow and got a bad score on Expires headers. The message was:

        Grade F on Add Expires headers
        There are 45 static components without a far-future expiration date.

    I am using IIS in a hosted environment. What do I need to do to my CSS and JS files to fix this issue?

  • C++: how to truncate a double in an efficient way?

    - by Arman
    Hello, I would like to truncate a double to 4 decimal digits. Is there an efficient way to do that? My current solution is:

        double roundDBL(double d, unsigned int p = 4)
        {
            unsigned int fac = pow(10, p);   // needs <cmath>
            double facinv = 1.0 / static_cast<double>(fac);
            // scale up, truncate toward zero, scale back down
            // (the unsigned cast assumes d is non-negative)
            double x = static_cast<unsigned int>(d * fac) * facinv;
            return x;
        }

    but computing pow and the division on every call seems inefficient to me. Kind regards, Arman.
