Search Results

Search found 4580 results on 184 pages for 'faster'.


  • ASP .NET page runs slow in production

    - by Brandi
    I have created an ASP .NET page that works flawlessly and quickly from Visual Studio. It does a very large database read from a database on our network to load a gridview inside of an update panel. It displays progress in an Ajax modalpopupextender. Of course I don't expect it to be instant what with the large db reads, but it takes on the order of seconds, not on the order of minutes. This is all working great until I put it up on the server - it is very, VERY slow when I access it via the internet - takes several minutes to load the database information into the gridview. I'm baffled why it would not perform the exact same as it had from Visual Studio. (It is in release mode and I have taken off the debug flag) I have since been trying things like eliminating unneeded update panels and throwing out the ajax tool. Nothing has made it any faster on production. It is not the database as far as I know, since it has been consistently fast from my computer (from visual studio) and consistently slow from the server. I am wondering, where do I look next? Has anyone else had this problem before? Could this be caused by update panels or Ajax modalpopupextenders in different parts of the application? Why would the live behaviour differ so much from the localhost behaviour? Both the server with the ASP .NET page and the server with the database are servers on our network. I'm using Visual Studio 2008. Thank you in advance for any insight or advice.

    Read the article

  • capturing CMD batch file parameter list; write to file for later processing

    - by BobB
    I have written a batch file that is launched as a post processing utility by a program. The batch file reads ~24 parameters supplied by the calling program, stores them into variables, and then writes them to various text files. Since the max input variable in CMD is %9, it's necessary to use the 'shift' command to repeatedly read and store these individually to named variables. Because the program outputs several similar batch files, the result is opening several CMD windows sequentially, assigning variables and writing data files. This ties up the calling program for too long. It occurs to me that I could free up the calling program much faster if maybe there's a way to write a very simple batch file that can write all the command parameters to a text file, where I can process them later. Basically, just grab the parameter list, write it and done. Q: Is there some way to treat an entire series of parameter data as one big text string and write it to one big variable... and then echo the whole big thing to one text file? Then later read the string into %n variables when there's no program waiting to resume? Parameter list is something like 25 - 30 words, less than 200 characters. Sample parameter list: "First Name" "Lastname" "123 Steet Name Way" "Cityname" ST 12345 1004968 06/01/2010 "Firstname+Lastname" 101738 "On Account" 20.67 xy-1z 1 8.95 3.00 1.39 0 0 239 8.95 Items in quotes are processed as string variables. List is space delimited. Any suggestions?
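    One way to implement the "grab the whole list and parse it later" idea is to have the batch file dump %* (all arguments) into a text file with a single echo, and do the field splitting later in whatever program processes the file. A hedged Python sketch of that later step (the file name and the meaning of each field are assumptions based on the sample list, not part of the original setup):

        import shlex

        # Read the single line that the batch file wrote with, e.g.:  echo %* >> params.txt
        with open("params.txt") as f:
            raw = f.read().strip()

        # shlex honours the double quotes, so "First Name" stays one field.
        fields = shlex.split(raw)

        # Field meanings below are guesses based on the sample list in the question.
        record = {
            "first_name": fields[0],
            "last_name": fields[1],
            "street": fields[2],
            "city": fields[3],
            "state": fields[4],
            "zip": fields[5],
            "rest": fields[6:],   # everything after the address block
        }
        print(record)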

    Read the article

  • Optimize a MySQL count each duplicate Query

    - by Onema
    I have the following query that gets the city name, city ID, the region name, and a count of duplicate names for that record:

        SELECT Country_CA.City AS currentCity, Country_CA.CityID, globe_region.region_name,
            ( SELECT count(Country_CA.City)
              FROM Country_CA
              WHERE City LIKE currentCity ) as counter
        FROM Country_CA
        LEFT JOIN globe_region
            ON globe_region.region_id = Country_CA.RegionID
            AND globe_region.country_code = Country_CA.CountryCode
        ORDER BY City

    This example is for Canada, and the cities will be displayed in a dropdown list. There are a few towns in Canada, and in other countries, that share the same name, so if there is more than one town with a given name, the region name will be appended to the town name. Region names are found in the globe_region table. Country_CA and globe_region look similar to this (I have changed a few things for visualization purposes):

        CREATE TABLE IF NOT EXISTS `Country_CA` (
          `City` varchar(75) NOT NULL DEFAULT '',
          `RegionID` varchar(10) NOT NULL DEFAULT '',
          `CountryCode` varchar(10) NOT NULL DEFAULT '',
          `CityID` int(11) NOT NULL DEFAULT '0',
          PRIMARY KEY (`City`,`RegionID`),
          KEY `CityID` (`CityID`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

    and

        CREATE TABLE IF NOT EXISTS `globe_region` (
          `country_code` char(2) COLLATE utf8_unicode_ci NOT NULL,
          `region_code` char(2) COLLATE utf8_unicode_ci NOT NULL,
          `region_name` varchar(50) COLLATE utf8_unicode_ci NOT NULL,
          PRIMARY KEY (`country_code`,`region_code`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    The query at the top does exactly what I want it to do, but it takes far too long to generate a list for 5000 records. I would like to know if there is a way to optimize the sub-query in order to obtain the same results faster. The results should look like this:

        City        CityID   region_name       counter
        sheraton    2349269  British Columbia  1
        sherbrooke  2349270  Quebec            2
        sherbrooke  2349271  Nova Scotia       2
        shere       2349273  British Columbia  1
        sherridon   2349274  Manitoba          1
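    A common rewrite for this pattern is to compute the per-city counts once with GROUP BY in a derived table and join them back, instead of running the correlated sub-query for every row. A self-contained SQLite sketch of the idea (column names follow the CREATE TABLE statements above; the real schema is MySQL/MyISAM, so treat this purely as an illustration, not a tested drop-in):

        import sqlite3

        # In-memory stand-ins for Country_CA/globe_region, just to demonstrate the rewrite.
        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE Country_CA (City TEXT, RegionID TEXT, CountryCode TEXT, CityID INTEGER);
            CREATE TABLE globe_region (country_code TEXT, region_code TEXT, region_name TEXT);
            INSERT INTO Country_CA VALUES
                ('sheraton',   'BC', 'CA', 2349269),
                ('sherbrooke', 'QC', 'CA', 2349270),
                ('sherbrooke', 'NS', 'CA', 2349271),
                ('shere',      'BC', 'CA', 2349273),
                ('sherridon',  'MB', 'CA', 2349274);
            INSERT INTO globe_region VALUES
                ('CA', 'BC', 'British Columbia'),
                ('CA', 'QC', 'Quebec'),
                ('CA', 'NS', 'Nova Scotia'),
                ('CA', 'MB', 'Manitoba');
        """)

        # One GROUP BY pass computes every city's count; the join reuses it per row.
        rows = conn.execute("""
            SELECT c.City, c.CityID, r.region_name, cnt.counter
            FROM Country_CA AS c
            LEFT JOIN globe_region AS r
                ON r.region_code = c.RegionID AND r.country_code = c.CountryCode
            JOIN (SELECT City, COUNT(*) AS counter
                  FROM Country_CA GROUP BY City) AS cnt
                ON cnt.City = c.City
            ORDER BY c.City
        """).fetchall()

        for row in rows:
            print(row)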

    Read the article

  • Navigating through code with keyboard shortcuts

    - by MarceloRamires
    I'm starting to feel the need to move quickly through code with keyboard shortcuts, so I can get faster to wherever I want to make changes (avoiding the mouse, or long stretches of holding [up], [left], [right] and [down]). I'm already using some:

        [home] - first position in current line
        [end] - last position in current line
        [ctrl] + [home] - first line of the entire code
        [ctrl] + [end] - last line of the entire code
        [pageup] - same vertical position, one screen above
        [pagedown] - same vertical position, one screen below
        [ctrl] + [pageup] - first line in current screen
        [ctrl] + [pagedown] - last line in current screen
        [ctrl] + [left/right] - skip word by word

    What have you got? I use Visual Studio (but I'm open to any answer, as I may use other editors soon). Note: I've searched through Stack Overflow and didn't find a good question with this content, nor a list of keyboard shortcuts for navigating code. If it's a duplicate, I'm sorry for not finding it; I'm asking with the best intentions. This question is NOT about shortcuts in general, and not only about Visual Studio: it's about moving through code with shortcuts. Answers that suit the question so far:

        [Ctrl] + [-] - jumps to last cursor position
        [Ctrl] + [F3] - jumps to next occurrence of the word the cursor is in
        [Shift] + [F3] - same as the above, backwards
        [F12] - goes to definition of the method/variable the cursor is in
        [Ctrl] + [ ] ] - jumps to matching brace and selects

    I'll add more as answers come in.

    Read the article

  • Did I implement clock drift properly?

    - by David Titarenco
    I couldn't find any clock drift RNG code for Windows anywhere so I attempted to implement it myself. I haven't run the numbers through ent or DIEHARD yet, and I'm just wondering if this is even remotely correct...

        void QueryRDTSC(__int64* tick) {
            __asm {
                xor eax, eax
                cpuid
                rdtsc
                mov edi, dword ptr tick
                mov dword ptr [edi], eax
                mov dword ptr [edi+4], edx
            }
        }

        __int64 clockDriftRNG() {
            __int64 CPU_start, CPU_end, OS_start, OS_end;

            // get CPU ticks -- uses RDTSC on the Processor
            QueryRDTSC(&CPU_start);
            Sleep(1);
            QueryRDTSC(&CPU_end);

            // get OS ticks -- uses the Motherboard clock
            QueryPerformanceCounter((LARGE_INTEGER*)&OS_start);
            Sleep(1);
            QueryPerformanceCounter((LARGE_INTEGER*)&OS_end);

            // CPU clock is ~1000x faster than mobo clock
            // return raw
            return ((CPU_end - CPU_start)/(OS_end - OS_start));
            // or
            // return a random number from 0 to 9
            // return ((CPU_end - CPU_start)/(OS_end - OS_start)%10);
        }

    If you're wondering why I Sleep(1), it's because if I don't, OS_end - OS_start returns 0 consistently (because of the bad timer resolution, I presume). Basically, (CPU_end - CPU_start)/(OS_end - OS_start) always returns around 1000 with a slight variation based on the entropy of CPU load, maybe temperature, quartz crystal vibration imperfections, etc. Anyway, the numbers have a pretty decent distribution, but this could be totally wrong. I have no idea.
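    As a rough way to eyeball whether the ratio of two timer deltas carries any entropy at all, one can sample two clock sources around a short sleep and keep the low digit of the ratio. A heavily hedged Python sketch of that idea (it assumes time.perf_counter_ns and time.time_ns are backed by different hardware sources on the target machine, which is not guaranteed, and it is no substitute for ent or DIEHARD):

        import time

        def drift_sample():
            # Delta from the high-resolution performance counter (typically QPC on Windows).
            p0 = time.perf_counter_ns()
            # Delta from the wall clock over the same nominal interval.
            w0 = time.time_ns()
            time.sleep(0.001)
            p1 = time.perf_counter_ns()
            w1 = time.time_ns()
            # Scaled ratio of the two deltas; keep only the least significant digit.
            return ((p1 - p0) * 1000 // max(w1 - w0, 1)) % 10

        samples = [drift_sample() for _ in range(200)]
        print(samples[:20])
        # Crude frequency check; a flat-ish histogram is necessary but not sufficient.
        print({d: samples.count(d) for d in range(10)})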

    Read the article

  • Submit form with POST data in Android app

    - by datguywhowanders
    I've been searching the web for a way to do this for about a week now, and I just can't seem to figure it out. I'm trying to implement an app that my college can use to allow users to log in to various services on the campus with ease. The way it works currently is they go to an online portal, select which service they want, fill in their user name and pwd, and click login. The form data is sent via post (it includes several hidden values as well as just the user name and pwd) to the corresponding login script which then signs them in and loads the service. I've been trying to come at the problem in two ways. I first tried a WebView, but it doesn't seem to want to support all of the html that normally makes this form work. I get all of the elements I need, fields for user and pwd as well as a login button, but clicking the button doesn't do anything. I wondered if I needed to add an onclick handler for it, but I can't see how as the button is implemented in the html of the webview not using a separate android element. The other possibility was using the xml widgets to create the form in a nice relative layout, which seems to load faster and looks better on the android screen. I used EditText fields for the input, a spinner widget for the service select, and the button widget for the login. I know how to make the onclick and item select handlers for the button and spinner, respectively, but I can't figure out how to send that data via POST in an intent that would then launch a browser. I can do an intent with the action url, but can't get the POST data to feed into it. Anyone have any suggestions?

    Read the article

  • jQuery arrays - newbie needs a kick start

    - by Jonny Wood
    I've only really started using this site and already I am very impressed by the community here! This is my third question in less than three days. Hopefully I'll be able to start answering questions soon instead of just asking them! I'm fairly new to jQuery and can't find a decent tutorial on arrays. I'd like to be able to create an array that targets several IDs on my page and performs the same effect for each. For example, I have tabs set up with the following:

        $('.tabs div.tab').hide();
        $('.tabs div:first').show();
        $('.tabs ul li:first a').addClass('current');

        $('.tabs ul li a').click(function(){
            $('.tabs ul li a').removeClass('current');
            $(this).addClass('current');
            var currentTab = $(this).attr('href');
            $('.tabs div.tab').hide();
            $(currentTab).show();
            return false;
        });

    I've used the class .tabs to target the tabs, as there are several sets on the same page, but I've heard jQuery works much faster when targeting IDs. How would I add an array to the above code to target 4 different IDs? I've looked at

        var myArray = new Array('#id1', 'id2', 'id3', 'id4');

    and also

        var myValues = [ '#id1', 'id2', 'id3', 'id4' ];

    Which is correct, and how do I then use the array in the code for my tabs...?

    Read the article

  • SQL Server 2008 spatial index and CPU utilization with MapGuide Open Source 2.1

    - by Antonio de la Peña
    I have a SQL Server table with hundreds of thousands of geometry-type parcels. I have made indexes on them, trying different combinations of density and objects-per-cell settings. So far I'm settling for LOW, LOW, MEDIUM, MEDIUM and 16 objects per cell, and I made a stored procedure that sets the bounding box according to the extents of the entities in the table. There is an incredible performance boost, from queries taking almost minutes without an index to less than seconds; it gets faster when the zoom is closer, since fewer objects are displayed. Yet CPU utilization hits 100% when querying for features, even when the queries themselves are fast. I'm worried this will not fly in a production environment. I am using MapGuide Open Source 2.1 for this project, but I am positive the CPU load is caused by SQL Server. I wonder if my indexes are set up properly; I haven't found any clear documentation on how to set them up. Every article I've read basically says "it depends..." but nothing specific. Do you have any recommendations for me, including books or articles? Thank you.

    Read the article

  • 'Fixed' for loop - what is more efficient?

    - by pimvdb
    I'm creating a tic-tac-toe game, and one of the functions has to iterate through each of the 9 fields (tic-tac-toe is played on a 3x3 grid). I was wondering what is more efficient (which one is perhaps faster, or what is the preferred way of scripting in such a situation): using two nested for loops like this:

        for(var i=0; i<3; i++) {
            for(var j=0; j<3; j++) {
                checkField(i, j);
            }
        }

    or hard-coding it like this:

        checkField(0, 0);
        checkField(0, 1);
        checkField(0, 2);
        checkField(1, 0);
        checkField(1, 1);
        checkField(1, 2);
        checkField(2, 0);
        checkField(2, 1);
        checkField(2, 2);

    As there are only 9 combinations, it would perhaps be overkill to use two nested for loops, but then again the loop is clearer to read. The for loop, however, will also increment variables and check whether i and j are smaller than 3 on every iteration. In this example the time saving might be negligible, but what is the preferred way of coding in this case? Thanks.
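    Measuring usually settles this kind of question faster than reasoning about it; the question is about JavaScript, but here is a hedged Python sketch of how such a micro-benchmark can be set up with timeit (check_field is a trivial stand-in for the real game logic):

        import timeit

        def check_field(i, j):
            # Stand-in for the real per-field check; assumed cheap here.
            return i * 3 + j

        def looped():
            for i in range(3):
                for j in range(3):
                    check_field(i, j)

        def unrolled():
            check_field(0, 0); check_field(0, 1); check_field(0, 2)
            check_field(1, 0); check_field(1, 1); check_field(1, 2)
            check_field(2, 0); check_field(2, 1); check_field(2, 2)

        print("looped:  ", timeit.timeit(looped, number=100_000))
        print("unrolled:", timeit.timeit(unrolled, number=100_000))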

    Read the article

  • Am I making the right choice in choosing Yii as my PHP Framework?

    - by Bara
    I am about to begin development of a new website and have been doing research on PHP frameworks. I'm not an advanced PHP developer, but I have been developing web sites and apps (in ASP.NET) for a few years now. My website will primarily be AJAX-based (using jQuery) and making lots of calls to web services. After some research, here's what I came up with:

        CakePHP: Originally started developing in this, but found it too complex. The fact that it forces you to use and learn all this new stuff just to use it was a bit daunting, so I put it aside for the time being.

        Zend: The performance of the framework leaves me a bit skeptical, but I heard it has great support for creating web services. I also heard it was a bit complex.

        CodeIgniter: No real reason for not using this one. Based on what I've read, CodeIgniter and Yii are very similar, but Yii is a bit faster and doesn't have un-needed code for PHP4 (since I plan on developing exclusively in PHP5).

    As far as Yii, the only things that scare me about it are that it is newer than the other frameworks, so it has a smaller community. It also doesn't seem to have a ton of web service support (only SOAP, from my understanding) as opposed to Zend. So my questions come down to: Should these things worry me? (not as big of a community, poor web service support) Is there anything else I should look into? Is my choice of Yii over the other frameworks ok for a primarily AJAX-based web app? Bara

    Read the article

  • Sending files using Winsock - optimal send() data length?

    - by Meta
    I am using Winsock with non-blocking sockets to send a file to a client. The way I'm doing it right now is that I read a chunk of 8192 bytes from the file, and then loop until all of it successfully goes through send() (obviously handling WSAEWOULDBLOCK as it occurs). I then move on and read the next 8192 bytes, and so on... Although I can use any other number than 8192 when I test the transfer on my local machine, once I try it over a network, it seems like 8191 is the largest number I can use. When I try to use any number higher than 8191 (starting with 8192), the file transfer becomes extremely slow (about 5 times slower). Is there any reason why 8191 is so special? I've done some more testing and it turns out that using 8000 is slightly faster (by 0.5%). If you understand why 8191 is so special, can you tell me if there is a number better than the others (better than 8000)? I have a feeling that it has something to do with the fact that the default send buffer allocated to the socket by Winsock is 8KB, but I don't understand why. It might also have something to do with the Nagle algorithm, but again, I'm not sure how. Note that I have not modified the SO_SNDBUF option nor the TCP_NODELAY option. Or am I doing this all wrong? What's the best way of sending a file over a non-blocking socket?

    Read the article

  • Beginner Question: For extract a large subset of a table from MySQL, how does Indexing, order of tab

    - by chongman
    Sorry if this is too simple, but thanks in advance for helping. This is for MySQL but might be relevant for other RDBMSs. tblA has 5 columns: colA, colB, colC, mydata, A_id. It has about 10^9 records, with 10^3 distinct values for colA, colB, colC. tblB has 3 columns: colA, colB, B_id. It has about 10^4 records. I want all the records from tblA (except the A_id) that have a match in tblB. In other words, I want to use tblB to describe the subset that I want to extract and then extract those records from tblA. Namely:

        SELECT a.colA, a.colB, a.colC, a.mydata
        FROM tblA as a
        INNER JOIN tblB as b
            ON a.colA=b.colA
            AND a.colB=b.colB;

    It's taking a really long time (more than an hour) on a newish computer (4GB, Core2Quad, Ubuntu), and I just want to check my understanding of the following optimization steps. (Suppose this is the only query I will ever run on these tables, so ignore the need to run other queries.) Now my questions:

        1) What indexes should I create to optimize this query? I think I just need a composite (multi-column) index on (colA, colB) for both tables; I don't think I need separate indexes for colA and colB. Another Stack Overflow article (that I can't find now) mentioned that adding new indexes is slower when there are existing indexes, so that might be a reason to use the single composite index.
        2) Is INNER JOIN correct? I just want results where a match is found.
        3) Is it faster if I join tblA to tblB, or the other way around (tblB to tblA)? A previous answer says that the optimizer should take care of that.
        4) Does the order of the conditions after ON matter? A previous answer says that the optimizer also takes care of the execution order.
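    For question 1, a composite index on the two join columns of each table is the usual starting point, and EXPLAIN shows whether the optimizer picks it up. A hedged Python sketch (connection details, index names and the pymysql driver are assumptions, not from the question):

        import pymysql  # assumed driver; any DB-API connector works the same way

        conn = pymysql.connect(host="localhost", user="me", password="secret", database="mydb")
        with conn.cursor() as cur:
            # Composite indexes covering the join condition (names are illustrative).
            cur.execute("CREATE INDEX idx_tblA_colA_colB ON tblA (colA, colB)")
            cur.execute("CREATE INDEX idx_tblB_colA_colB ON tblB (colA, colB)")
            # Ask MySQL how it now plans to execute the join.
            cur.execute("""
                EXPLAIN
                SELECT a.colA, a.colB, a.colC, a.mydata
                FROM tblA AS a
                INNER JOIN tblB AS b
                    ON a.colA = b.colA AND a.colB = b.colB
            """)
            for row in cur.fetchall():
                print(row)
        conn.close()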

    Read the article

  • Python: Most efficient way to concatenate and rearrange files

    - by user300890
    Hi, I am reading from several files; each file is divided into two pieces, first a header section of a few thousand lines followed by a body of a few thousand lines. My problem is that I need to concatenate these files into one file where all the headers are at the top, followed by the bodies. Currently I am using two loops: one to pull out all the headers and write them, and a second to write the body of each file (I also use a tmp_count variable to limit the number of lines loaded into memory before dumping to file). This is pretty slow - about 6 minutes for a 13 GB file. Can anyone tell me how to optimize this, or if there is a faster way to do this in Python? Thanks! Here is my code:

        def cat_files_sam(final_file_name, work_directory_master, file_count):
            final_file = open(final_file_name, "w")
            if len(file_count) > 1:
                file_count = sort_output_files(file_count)
            # only for @ headers
            for bowtie_file in file_count:
                #print bowtie_file
                tmp_list = []
                tmp_count = 0
                for line in open(os.path.join(work_directory_master, bowtie_file)):
                    if line.startswith("@"):
                        if tmp_count == 1000000:
                            final_file.writelines(tmp_list)
                            tmp_list = []
                            tmp_count = 0
                        tmp_list.append(line)
                        tmp_count += 1
                    else:
                        final_file.writelines(tmp_list)
                        break
            for bowtie_file in file_count:
                #print bowtie_file
                tmp_list = []
                tmp_count = 0
                for line in open(os.path.join(work_directory_master, bowtie_file)):
                    if line.startswith("@"):
                        continue
                    if tmp_count == 1000000:
                        final_file.writelines(tmp_list)
                        tmp_list = []
                        tmp_count = 0
                    tmp_list.append(line)
                    tmp_count += 1
                final_file.writelines(tmp_list)
            final_file.close()
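    One hedged way to cut the I/O roughly in half is to read every input file only once: stream header lines straight into the final file and buffer body lines in a temporary file, then append that buffer at the end. A minimal sketch of the idea (function and variable names are illustrative, not from the original code, and it assumes the temp directory has room for the combined bodies):

        import os
        import shutil
        import tempfile

        def cat_files_single_pass(final_file_name, work_directory, file_names):
            # Headers go straight to the final file; bodies are buffered in a temp file.
            with open(final_file_name, "w") as final_file, \
                 tempfile.TemporaryFile("w+") as body_file:
                for name in file_names:
                    with open(os.path.join(work_directory, name)) as src:
                        for line in src:
                            if line.startswith("@"):
                                final_file.write(line)
                            else:
                                body_file.write(line)
                # Append all buffered bodies after the headers in one bulk copy.
                body_file.seek(0)
                shutil.copyfileobj(body_file, final_file)

    This trades the second read of every input file for one extra write of the body data to a temporary file, so whether it helps depends on how fast the temp storage is.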

    Read the article

  • MySQL query optimization - distinct, order by and limit

    - by Manuel Darveau
    I am trying to optimize the following query:

        select distinct this_.id as y0_
        from Rental this_
        left outer join RentalRequest rentalrequ1_
            on this_.id=rentalrequ1_.rental_id
        left outer join RentalSegment rentalsegm2_
            on rentalrequ1_.id=rentalsegm2_.rentalRequest_id
        where this_.DTYPE='B'
            and this_.id<=1848978
            and this_.billingStatus=1
            and rentalsegm2_.endDate between 1273631699529 and 1274927699529
        order by rentalsegm2_.id asc
        limit 0, 100;

    This query is run multiple times in a row for paginated processing of records (with a different limit each time). It returns the ids I need for the processing. My problem is that this query takes more than 3 seconds. I have about 2 million rows in each of the three tables. EXPLAIN gives (one block per row of the plan):

        id=1, select_type=SIMPLE, table=rentalsegm2_, type=range,
        possible_keys=index_endDate,fk_rentalRequest_id_BikeRentalSegment,
        key=index_endDate, key_len=9, ref=NULL, rows=449904,
        Extra=Using where; Using temporary; Using filesort

        id=1, select_type=SIMPLE, table=rentalrequ1_, type=eq_ref,
        possible_keys=PRIMARY,fk_rental_id_BikeRentalRequest,
        key=PRIMARY, key_len=8, ref=solscsm_main.rentalsegm2_.rentalRequest_id,
        rows=1, Extra=Using where

        id=1, select_type=SIMPLE, table=this_, type=eq_ref,
        possible_keys=PRIMARY,index_billingStatus,
        key=PRIMARY, key_len=8, ref=solscsm_main.rentalrequ1_.rental_id,
        rows=1, Extra=Using where

    I tried removing the distinct and the query ran three times faster. EXPLAIN without the distinct gives:

        id=1, select_type=SIMPLE, table=rentalsegm2_, type=range,
        possible_keys=index_endDate,fk_rentalRequest_id_BikeRentalSegment,
        key=index_endDate, key_len=9, ref=NULL, rows=451972,
        Extra=Using where; Using filesort

        id=1, select_type=SIMPLE, table=rentalrequ1_, type=eq_ref,
        possible_keys=PRIMARY,fk_rental_id_BikeRentalRequest,
        key=PRIMARY, key_len=8, ref=solscsm_main.rentalsegm2_.rentalRequest_id,
        rows=1, Extra=Using where

        id=1, select_type=SIMPLE, table=this_, type=eq_ref,
        possible_keys=PRIMARY,index_billingStatus,
        key=PRIMARY, key_len=8, ref=solscsm_main.rentalrequ1_.rental_id,
        rows=1, Extra=Using where

    As you can see, Using temporary is added when using distinct. I already have an index on all fields used in the where clause. Is there anything I can do to optimize this query? Thank you very much!
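    Separately from the DISTINCT question, since the same query is re-run with a growing offset for pagination, keyset pagination (filter on the last id seen instead of using a LIMIT offset) is a common hedge against the cost growing with each page. A toy, self-contained SQLite sketch of just that pattern (the real query is MySQL with joins; this only shows the pagination idea):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE rental_segment (id INTEGER PRIMARY KEY)")
        conn.executemany("INSERT INTO rental_segment (id) VALUES (?)",
                         [(i,) for i in range(1, 1001)])

        last_id = 0          # keyset cursor instead of LIMIT offset, 100
        batch_size = 100
        while True:
            batch = conn.execute(
                "SELECT id FROM rental_segment WHERE id > ? ORDER BY id LIMIT ?",
                (last_id, batch_size)).fetchall()
            if not batch:
                break
            # ... process the ids in this batch ...
            last_id = batch[-1][0]
            print("processed up to id", last_id)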

    Read the article

  • Searching with a UISearchbar is slow and blocking the main thread.

    - by Robert
    I have a table with over 3000 entries and searching is very slow. At the moment I am doing it just like in the 'TableSearch' example code (but without scopes):

        - (BOOL)searchDisplayController:(UISearchDisplayController *)controller shouldReloadTableForSearchString:(NSString *)searchString
        {
            [self filterContentForSearchText:searchString];
            // Return YES to cause the search result table view to be reloaded.
            return YES;
        }

    And the filterContentForSearchText method is as follows:

        - (void)filterContentForSearchText:(NSString *)searchText
        {
            // Update the filtered array based on the search text.
            // First clear the filtered array.
            [filteredListContent removeAllObjects];
            // Search the main list for items whose name matches searchText;
            // add items that match to the filtered array.
            if (fetchedResultsController.fetchedObjects)
            {
                for (id object in fetchedResultsController.fetchedObjects)
                {
                    NSString* searchTarget = [tableTypeDelegate getStringForSearchFilteringFromObject:object];
                    if ([searchTarget rangeOfString:searchText options:(NSCaseInsensitiveSearch|NSDiacriticInsensitiveSearch)].location != NSNotFound)
                    {
                        [filteredListContent addObject:object];
                    }
                }
            }
        }

    My question is twofold: 1) How can I make the searching process faster? 2) How can I stop the search from blocking the main thread, i.e. stop it preventing the user from typing more characters? For the second part, I tried "performSelector:withObject:afterDelay:" and "cancelPreviousPerformRequests..." without much success. I suspect that I will need to use threading instead, but I do not have much experience with it.

    Read the article

  • Are programming languages and methods inefficient? (assembler and C knowledge needed)

    - by b-gen-jack-o-neill
    Hi, for a long time I have been thinking about and studying the output of the C compiler in assembler form, as well as CPU architecture. I know this may seem silly to you, but it seems to me that something is very inefficient. Please don't be angry if I am wrong and there is some reason I do not see for all these principles; I will be very glad if you tell me why it is designed this way. I actually truly believe I am wrong, and I know the brilliant minds of the people who put PCs together had a reason to do it this way. What exactly am I asking about? I'll tell you right away, using C as an example:

    1. Stack-based allocation of local memory: typical local memory allocation uses the stack. Just copy esp to ebp and then allocate all the memory via ebp. OK, I would understand this if you explicitly needed to allocate RAM relative to default stack values, but if I understand it correctly, modern OSes use paging as a translation layer between the application and physical RAM, so the address you request is translated before reaching an actual RAM byte. So why not just say 0x00000000 is int a, 0x00000004 is int b, and so on, and access them simply by mov 0x00000000, #10? You won't actually touch memory blocks 0x00000000 and 0x00000004 anyway, but whichever blocks the OS's paging tables map them to. In fact, since memory access through ebp and esp uses indirect addressing, "my" way would even be faster.

    2. Duplicate variable allocation: when you run an application, the loader loads its code into RAM. When you create a variable or a string, the compiler generates code that pushes these values onto the top of the stack when they are created in main. So there is an actual instruction for doing that, and also the actual number in memory: two copies of the same value in RAM, one in the form of an instruction and the second in the form of actual bytes. But why? Why not just decide, when declaring the variable, which memory block it will live in, and then, wherever it is used, simply refer to that memory location?

    Read the article

  • Some optimization about the code (computing ranks of a vector)?

    - by user1748356
    The following code is a (performance-critical) function to compute tied ranks of a vector:

        mergeSort(x, inds, ci); // a sort function that sorts vector x of length ci and also returns the sort keys (inds) of x
        int tj = 0;
        double xi = x[0];
        for (int j = 1; j < ci; ++j) {
            if (x[j] > xi) {
                double rankvalue = 0.5 * (j - 1 + tj);
                for (int k = tj; k < j; ++k) {
                    ranks[inds[k]] = rankvalue;
                }
                tj = j;
                xi = x[j];
            }
        }
        double rankvalue = 0.5 * (ci - 1 + tj);
        for (int k = tj; k < ci; ++k) {
            ranks[inds[k]] = rankvalue;
        }

    The problem is that the supposed performance bottleneck, mergeSort(), which is O(N log N), is several times faster than the rest of the code (which is O(N)). That suggests there is room for a huge improvement in the other part of the code - any advice?
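    While optimizing, a small reference implementation is handy for checking that the output stays correct; a hedged Python sketch of the same average-rank ("tied rank") logic, using 0-based ranks to mirror the formula above (an illustration, not a drop-in replacement for the C++):

        def tied_ranks(values):
            # Sort indices by value, then give every run of equal values
            # the average of the positions it spans (0-based, like the C++ above).
            inds = sorted(range(len(values)), key=lambda i: values[i])
            ranks = [0.0] * len(values)
            start = 0
            for j in range(1, len(values) + 1):
                if j == len(values) or values[inds[j]] > values[inds[start]]:
                    avg = 0.5 * (start + j - 1)
                    for k in range(start, j):
                        ranks[inds[k]] = avg
                    start = j
            return ranks

        print(tied_ranks([10, 20, 20, 30]))   # [0.0, 1.5, 1.5, 3.0]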

    Read the article

  • SQL Server INSERT, Scope_Identity() and physical writing to disc

    - by TheBlueSky
    Hello everyone, I have a stored procedure that does, among other stuff, some inserts in different table inside a loop. See the example below for clearer understanding: INSERT INTO T1 VALUES ('something') SET @MyID = Scope_Identity() ... some stuff go here INSERT INTO T2 VALUES (@MyID, 'something else') ... The rest of the procedure These two tables (T1 and T2) have an IDENTITY(1, 1) column in each one of them, let's call them ID1 and ID2; however, after running the procedure in our production database (very busy database) and having more than 6250 records in each table, I have noticed one incident where ID1 does not match ID2! Although normally for each record inserted in T1, there is record inserted in T2 and the identity column in both is incremented consistently. The "wrong" records were something like that: ID1 Col1 ---- --------- 4709 data-4709 4710 data-4710 ID2 ID1 Col1 ---- ---- --------- 4709 4710 data-4709 4710 4709 data-4710 Note the "inverted", ID1 in the second table. Knowing not that much about SQL Server underneath operations, I have put the following "theory", maybe someone can correct me on this. What I think is that because the loop is faster than physically writing to the table, and/or maybe some other thing delayed the writing process, the records were buffered. When it comes the time to write them, they were wrote in no particular order. Is that even possible if no, how to explain the above mentioned scenario? If yes, then I have another question to rise. What if the first insert (from the code above) got delayed? Doesn't that mean I won't get the correct IDENTITY to insert into the second table? If the answer of this is also yes, what can I do to insure the insertion in the two tables will happen in sequence with the correct IDENTITY? I appreciate any comment and information that help me understand this. Thanks in advance.

    Read the article

  • What is the best way to identify which form has been submitted?

    - by Rupert
    Currently, when I design my forms, I like to keep the name of the submit button equal to the id of the form. Then, in my php, I just do if(isset($_POST['submitName'])) in order to check if a form has been submitted and which form has been submitted. Firstly, are there any security problems or design flaws with this method? One problem I have encountered is when I wish to overlay my forms with javascript in order to provide faster validation to the user. For example, whilst I obviously need to retain server side validation, it is more convenient for the user if an error message is displayed inline, upon blurring an input. Additionally, it would be good to provide entire form validation, upon clicking the submit button. Therefore, when the user clicks on the form's submit button, I am stopping the default action, doing my validation, and then attempting to renable the traditional submit functionality, if the validation passes. In order to do this, I am using the form.submit() method but, unfortunately, this doesn't send the submit button variable (as it should be as form.submit() can be called without any button being clicked). This means my PHP script fails to detect that the form has been submitted. What is the correct way to work around this? It seems like the standard solution is to add a hidden field into the form, upon passing validation, which has the name of form's id. Then when form.submit() is called, this is passed along in place of the submit button. However, this solution seems very ungraceful to me and so I am wondering whether I should: a) Use an alternative method to detect which form has been submitted which doesn't rely rely on passing the submit button. If so what alternative is there? Obviously, just having an extra hidden field from the start isn't any better. b) Use an alternative Javascript solution which allows me to retain my non-Javascript design. For example, is there an alternative to form.submit() which allows me to pass in extra data? c) Suck it up and just insert a hidden field using Javascript.

    Read the article

  • Is A Web App Feasible For A Heavy Use Data Entry System?

    - by Rob
    Looking for opinions on this, we're working on a project that is essentially a data entry system for a production line. Heavy data input by users who normally work in Excel or other thick client data systems. We've been told (as a consequence) that we have to develop this as a thick client using .NET. Our argument was to develop as a web app, as it resolves a lot of issues and would be easier to write and maintain. Their argument against the web is that (supposedly) the web is not ready yet for a heavy duty data entry system, and that the web in a browser does not offer the speed, responsiveness, and fluid experience for the end-user that a thick client can (citing things such as drag and drop, rapid auto-entry and data navigation, etc.) Personally, I think that with good form design and JQuery/AJAX, a web app could do everything a thick client does just as well, and they just don't know what they're talking about. The irony is that a thick client has to go to a lot more effort to manage the deployment and connectivity back to the central data server than a web app would need to do, so in terms of speed I would expect a web app to be faster. What are the thoughts of those out there? Are there any technologies currently in production use that modern data entry systems are being developed as web apps in? Appreciate any feedback. Regards, Rob.

    Read the article

  • Is the .NET 4.0 runtime slower than the .NET 2.0 runtime?

    - by DxCK
    After I upgraded my projects to .NET 4.0 (with VS2010), I realized that they run slower than they did on .NET 2.0 (VS2008). So I decided to benchmark a simple console application in both VS2008 and VS2010 with various target frameworks:

        using System;
        using System.Diagnostics;
        using System.Reflection;

        namespace RuntimePerfTest
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Console.WriteLine(Assembly.GetCallingAssembly().ImageRuntimeVersion);

                    Stopwatch sw = new Stopwatch();
                    while (true)
                    {
                        sw.Reset();
                        sw.Start();
                        for (int i = 0; i < 1000000000; i++)
                        {
                        }
                        TimeSpan elapsed = sw.Elapsed;
                        Console.WriteLine(elapsed);
                    }
                }
            }
        }

    Here are the results:

        VS2008
        Target Framework 2.0: ~0.25 seconds
        Target Framework 3.0: ~0.25 seconds
        Target Framework 3.5: ~0.25 seconds

        VS2010
        Target Framework 2.0: ~3.8 seconds
        Target Framework 3.0: ~3.8 seconds
        Target Framework 3.5: ~1.51 seconds
        Target Framework 3.5 Client Profile: ~3.8 seconds
        Target Framework 4.0: ~1.01 seconds
        Target Framework 4.0 Client Profile: ~1.01 seconds

    My initial conclusion is obviously that programs compiled with VS2008 run faster than programs compiled with VS2010. Can anyone explain these performance changes between VS2008 and VS2010, and between the different target frameworks within VS2010 itself?

    Read the article

  • real time stock quotes, StreamReader performance optimization

    - by sean717
    I am working on a program that extracts real-time quotes for 900+ stocks from a website. I use HttpWebRequest to send an HTTP request to the site, store the response in a stream and open it using the following code:

        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        Stream stream = response.GetResponseStream();
        StreamReader reader = new StreamReader(stream);

    The size of the received HTML is large (5000+ lines), so it takes a long time to parse it and extract the price. For 900 files, parsing and extracting takes about 6 minutes, which my boss isn't happy with; he told me he wants the whole process done in TWO minutes. I've identified that the part of the program that takes most of the time is parsing and extracting. I've tried to optimize the code to make it faster; the following is what I have now after some optimization:

        // skip lines at the top
        for (int i = 0; i < 1500; ++i)
            reader.ReadLine();

        // read the line that contains the price
        string theLine = reader.ReadLine();

        // ... extract the price from the line now

    It now takes about 4 minutes to process all the files, but there is still a significant gap to what my boss expects. So I am wondering: is there any other way to further speed up the parsing and extracting and get everything done within 2 minutes?
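    A large share of those minutes is likely spent waiting on the network rather than parsing, so fetching the 900 pages concurrently is a common first lever. The original program is C#; this hedged Python sketch only illustrates the fetch-in-parallel-then-scan idea, with made-up URLs:

        from concurrent.futures import ThreadPoolExecutor
        from urllib.request import urlopen

        # Hypothetical quote-page URLs; the real site and parsing rules are not shown in the question.
        urls = ["https://example.com/quote/%d" % i for i in range(900)]

        def fetch_price(url):
            try:
                with urlopen(url, timeout=10) as resp:
                    for lineno, raw in enumerate(resp):
                        if lineno < 1500:          # skip the boilerplate at the top, as in the original
                            continue
                        # First line after the skip is assumed to contain the price.
                        return url, raw.decode("utf-8", errors="replace").strip()
            except OSError as exc:
                return url, "error: %s" % exc
            return url, None

        with ThreadPoolExecutor(max_workers=32) as pool:
            for url, line in pool.map(fetch_price, urls):
                print(url, "->", line)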

    Read the article

  • Wait until image loads before performing function

    - by Steven
    I'm trying to create a simple portfolio page. I have a list of thumbs and an image. When you click on a thumb, the image will change. When a thumbnail is clicked, I'd like to have the image fade out, wait until the image is loaded, then fade back in. The problem I have right now is that some of the images are pretty big, so it fades out, then fades back in immediately, sometimes while the image is still loading. I'd like to avoid using setTimeout, since sometimes an image will load faster or slower than the time I set. Here's my code:

        $(function() {
            $('img#image').attr("src", $('ul#thumbs li:first img').attr("src"));

            $('ul#thumbs li img').click(function() {
                $('img#image').fadeOut(700);
                var src = $(this).attr("src");
                $('img#image').attr("src", src);
                $('img#image').fadeIn(700);
            });
        });

        <img id="image" src="" alt="" />
        <ul id="thumbs">
            <li><img src="/images/thumb1.png" /></li>
            <li><img src="/images/thumb2.png" /></li>
            <li><img src="/images/thumb3.png" /></li>
        </ul>

    Read the article

  • Is it possible to definitively identify whether a DML command was issued from a stored procedure?

    - by Ed Harper
    I have inherited a SQL Server 2008 database to which calling applications have access through stored procedures. Each table in the database has a shadow audit table into which Insert/Update/Delete operations for are logged. Performance testing on populating the audit tables showed that inserting the audit records using OUTPUT clauses was 20% or so faster than using triggers, so this has been implemented in the stored procedures. However, because this design cannot track changes made directly to the tables through DML statements issued directly against the tables, triggers have also been implemented which use the value of @@NESTLEVEL to determine whether or not to run the trigger (the assumption being that all DML run through stored procedures will have @@NESTLEVEL 1). i.e. the body of the trigger code looks something like: IF @@NESTLEVEL = 1 -- implies call is direct sql so generate history from here BEGIN ... insert into audit table This design is flawed because it won't track updates where DML statements are executed in dynamic SQL, or any other context where @@NESTLEVEL is raised above 1. Can anyone suggest a completely reliable method we can use in the triggers to execute them only if not triggered by a stored procedure? Or is this (as I suspect) not possible?

    Read the article

  • Counting entries in a list of dictionaries: for loop vs. list comprehension with map(itemgetter)

    - by Dennis Williamson
    In a Python program I'm writing I've compared using a for loop and increment variables versus list comprehension with map(itemgetter) and len() when counting entries in dictionaries which are in a list. It takes the same time using a each method. Am I doing something wrong or is there a better approach? Here is a greatly simplified and shortened data structure: list = [ {'key1': True, 'dontcare': False, 'ignoreme': False, 'key2': True, 'filenotfound': 'biscuits and gravy'}, {'key1': False, 'dontcare': False, 'ignoreme': False, 'key2': True, 'filenotfound': 'peaches and cream'}, {'key1': True, 'dontcare': False, 'ignoreme': False, 'key2': False, 'filenotfound': 'Abbott and Costello'}, {'key1': False, 'dontcare': False, 'ignoreme': True, 'key2': False, 'filenotfound': 'over and under'}, {'key1': True, 'dontcare': True, 'ignoreme': False, 'key2': True, 'filenotfound': 'Scotch and... well... neat, thanks'} ] Here is the for loop version: #!/usr/bin/env python # Python 2.6 # count the entries where key1 is True # keep a separate count for the subset that also have key2 True key1 = key2 = 0 for dictionary in list: if dictionary["key1"]: key1 += 1 if dictionary["key2"]: key2 += 1 print "Counts: key1: " + str(key1) + ", subset key2: " + str(key2) Output for the data above: Counts: key1: 3, subset key2: 2 Here is the other, perhaps more Pythonic, version: #!/usr/bin/env python # Python 2.6 # count the entries where key1 is True # keep a separate count for the subset that also have key2 True from operator import itemgetter KEY1 = 0 KEY2 = 1 getentries = itemgetter("key1", "key2") entries = map(getentries, list) key1 = len([x for x in entries if x[KEY1]]) key2 = len([x for x in entries if x[KEY1] and x[KEY2]]) print "Counts: key1: " + str(key1) + ", subset key2: " + str(key2) Output for the data above (same as before): Counts: key1: 3, subset key2: 2 I'm a tiny bit surprised these take the same amount of time. I wonder if there's something faster. I'm sure I'm overlooking something simple. One alternative I've considered is loading the data into a database and doing SQL queries, but the data doesn't need to persist and I'd have to profile the overhead of the data transfer, etc., and a database may not always be available. I have no control over the original form of the data. The code above is not going for style points.

    Read the article
