Search Results

Search found 6355 results on 255 pages for 'slow downs'.

  • PocketPC c++ windows message processing recursion problem

    - by user197350
    Hello, I am having a problem in a large-scale application that seems related to Windows message processing on the Pocket PC. The application is written in C++ and has only one standard message loop:

        while (GetMessage (&msg, NULL, 0, 0))
        {
            TranslateMessage (&msg);
            DispatchMessage (&msg);
        }

    We also have standard DlgProcs. In the switch of a DlgProc we call a proprietary third-party API, which uses a socket connection to communicate with another process. The problem I am seeing is this: whenever two of the same messages come in quickly (from the user clicking the screen twice too fast, which shouldn't happen), it seems as though recursion is created. Windows begins processing the first message, gets the API into a thread-safe (locked) state, and then jumps to process the next (identical UI) message. Since the second message also makes the API call, that call fails because the API is locked. Because of the design of this legacy system, the API stays locked until the recursion comes back out (which is also triggered by the user, so it could be locked the entire working day).

    I am struggling to figure out exactly why this is happening and what I can do about it. Is this because Windows recognizes the socket communication will take time and preempts it? Is there a way I can force this API call to complete before preemption? Is there a way I can slow down the message processing, or re-queue the message to ensure the first will execute (capturing it and doing a PostMessage back to itself didn't work)? We don't want to lock the UI down while the first call completes. Any insight is greatly appreciated. Thanks!
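
    For illustration, the guard-and-requeue idea described above, sketched in Python purely to show its shape -- call_locked_api, the busy flag and the queue are hypothetical stand-ins for the DlgProc and the third-party call, not the real API:

        import queue

        busy = False
        pending = queue.Queue()

        def call_locked_api(msg):
            # Hypothetical stand-in for the proprietary socket-based API call.
            print("processing", msg)

        def handle_click(msg):
            # Re-entrancy guard: if a call is already in flight, park the
            # message instead of letting it hit the locked API.
            global busy
            if busy:
                pending.put(msg)
                return
            busy = True
            try:
                call_locked_api(msg)
            finally:
                busy = False
            # Replay anything that arrived while we were busy.
            while not pending.empty():
                handle_click(pending.get())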

  • Any HTTP proxies with explicit, configurable support for request/response buffering and delayed conn

    - by Carlos Carrasco
    When dealing with mobile clients it is very common to have multi-second delays during the transmission of HTTP requests. If you are serving pages or services out of a prefork Apache, the child processes will be tied up for seconds serving a single mobile client, even if your app-server logic is done in 5 ms. I am looking for an HTTP server, balancer or proxy server that supports the following:

    1. A request arrives at the proxy.
    2. The proxy starts buffering the request in RAM or on disk, including headers and POST/PUT bodies. The proxy DOES NOT open a connection to the backend server. This is probably the most important part.
    3. The proxy stops buffering the request when either a size limit has been reached (say, 4 KB) or the request has been received completely, headers and body.
    4. Only now, with (part of) the request in memory, a connection is opened to the backend and the request is relayed.
    5. The backend sends back the response. Again the proxy server starts buffering it immediately (up to a more generous size, say 64 KB). Since the proxy has a big enough buffer, the backend response is stored completely in the proxy server in a matter of milliseconds, and the backend process/thread is free to process more requests.
    6. The backend connection is immediately closed.
    7. The proxy sends back the response to the mobile client, as fast or as slow as it is capable of, without having a connection to the backend tying up resources.

    I am fairly sure you can do 4-6 with Squid, and nginx appears to support 1-3 (and looks fairly unique in this respect). My question is: is there any proxy server that emphasizes these buffering and not-opening-connections-until-ready capabilities? Maybe there is just a bit of Apache config-fu that makes this buffering behaviour trivial? And is there one that is not a dinosaur like Squid and that supports a lean single-process, asynchronous, event-based execution model?

    (Side rant: I would be using nginx, but it doesn't support chunked POST bodies, making it useless for serving stuff to mobile clients. Yes, cheap $50 handsets love chunked POSTs... sigh.)
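
    To pin down the behaviour I am after, here is a minimal sketch of steps 1-7, written with Python's asyncio purely for illustration -- the addresses, size limits and the naive end-of-request check (which ignores request bodies) are all assumptions, not working proxy code:

        import asyncio

        BACKEND_HOST, BACKEND_PORT = "127.0.0.1", 8080   # hypothetical backend
        REQUEST_LIMIT = 4 * 1024      # step 3: buffer up to 4 KB of request
        RESPONSE_LIMIT = 64 * 1024    # step 5: buffer up to 64 KB of response

        async def handle_client(reader, writer):
            # Steps 1-3: buffer the request; no backend connection yet.
            request = b""
            while len(request) < REQUEST_LIMIT and b"\r\n\r\n" not in request:
                chunk = await reader.read(1024)
                if not chunk:
                    break
                request += chunk

            # Step 4: only now connect to the backend and relay the request.
            backend_reader, backend_writer = await asyncio.open_connection(
                BACKEND_HOST, BACKEND_PORT)
            backend_writer.write(request)
            await backend_writer.drain()

            # Step 5: slurp the whole response at LAN speed.
            response = b""
            while len(response) < RESPONSE_LIMIT:
                chunk = await backend_reader.read(4096)
                if not chunk:
                    break
                response += chunk

            # Step 6: free the backend immediately.
            backend_writer.close()

            # Step 7: dribble the response out to the (possibly slow) client.
            writer.write(response)
            await writer.drain()
            writer.close()

        async def main():
            server = await asyncio.start_server(handle_client, "0.0.0.0", 8000)
            async with server:
                await server.serve_forever()

        asyncio.run(main())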

  • jquery png fix on hidden div and jquerybrowser question

    - by Jared
    Hello, I have some code that, if JavaScript is available, removes a GIF image and replaces it with a PNG image. The PNG is display:none and the GIF is visible. Since IE6 and older browsers can't load the PNG, I have loaded the jQuery PNG fix, but it only seems to work if the image is already visible. The other issue is that I am trying to get the jQuery.browser check to apply to version 6 and below, and I am not having much luck.

        <script type="text/javascript">
        $(document).ready(function(){
            $("#gif").hide();
            jQuery.each(jQuery.browser, function(i, val) {
                if ($.browser.msie && jQuery.browser.version <= "6") {
                    $("#png").show();
                    $('.png').pngFix();
                } else {
                    $("#png").fadeIn("slow");
                }
            });
        });
        </script>

    HTML:

        <img class="png" id="png" src="images/main_elements/one-2-flush-it-campus-challenge.png" style="display:none;" />
        <img id="gif" src="images/main_elements/one-2-flush-it-campus-challenge.gif" />

  • iPod touch debugging: Error on install/run only if app exists on device already?

    - by Ben
    Hi all, I am using an iPod to test an app. The device is all set up with the right provisioning profiles, etc-- that's not really the issue. But every time I start the app from Xcode on the device, I get the "A signed resource has been added, modified, or deleted." error from the Organizer window. Wait, I know, you think it's a provisioning profile problem. But here's the kicker: if I just delete the app from the iPod (using the main screen) and try again, it works fine. I only get this error when the app is already installed. The other kicker is that this behavior doesn't happen on an iPhone that I have for occasional testing-- on that device, I can start/restart/restart indefinitely. But using the iPod, my compile-run-test cycle is annoyingly slow since I have to manually delete the app each time. Any ideas? I'm using Xcode 3.2.2 (prerelease) FWIW. The iPod has stock OS 3.1.2 on it. Thanks!

  • How to efficiently show many Images? (iPhone programming)

    - by Thomas
    In my application I needed something like a particle system, so I did the following. While the application initializes I load a UIImage:

        laserImage = [UIImage imageNamed:@"laser.png"];

    UIImage *laserImage is declared in the interface of my controller. Now every time I need a new particle, this code makes one:

        // add new laser image
        UIImageView *newLaser = [[UIImageView alloc] initWithImage:laserImage];
        [newLaser setTag:[model.lasers count]-9];
        [newLaser setBounds:CGRectMake(0, 0, 17, 1)];
        [newLaser setOpaque:YES];
        [self.view addSubview:newLaser];
        [newLaser release];

    Please note that the images are small, only 17px * 1px, and model.lasers is an internal array that does all the calculating separately from the graphical output. So in my main drawing loop I set all the UIImageViews' positions to the calculated positions in my model.lasers array:

        for (int i = 0; i < [model.lasers count]; i++) {
            [[self.view viewWithTag:i+10] setCenter:[[model.lasers objectAtIndex:i] pos]];
        }

    I incremented the tags by 10 because the default is 0 and I don't want to move all the views with the default tag. The animation looks fine with about 10-20 images but gets really slow when working with about 60 images. So my question is: is there any way to optimize this without starting over in OpenGL ES? Thank you very much, and sorry for my English! Greetings from Germany, Thomas

  • Good mobile oriented GWT widget library alternatives

    - by Michael Donohue
    I've been developing a travel-planning site - tripgrep.com - which is built on App Engine, GWT and SmartGWT, among other technologies. It is still early days, and the site is now working well in my development environment, which is either a Windows or Mac computer. However, I am frequently talking up the website to my friends when we are at a bar or other venue, so I am standing there while they try to access the site via an iPhone, Android or Blackberry - I've witnessed all three. It has been painfully obvious that the browser-based frontend takes a long time to download on a mobile device. I am pretty sure this is because of the JavaScript download for SmartGWT.

    So, I would like to look at alternatives to SmartGWT. What I like about SmartGWT is that it has a reasonable look and feel out of the box - I don't need to learn any design or CSS - and it has an office-application look. This is considerably better than the GWT built-in widgets, which just get a blue border. The better look and feel is why I went with SmartGWT early on. However, the slow load times are killing me in these mobile demos. So now I want a fast-loading widget alternative that has a good look and feel out of the box. The features I care about are: tabs, good form layout, Google Maps API integration, and grid data viewing. If those are all available in a library that loads quickly on a mobile device, then that's the library I want.

  • Maintaining a pool of DAO Class instances vs doing new operator

    - by Fazal
    We have been trying to benchmark our application performance in multiple ways for some time now. I always believed that object creation in Java using Class.newInstance() was not slow (at least after version 1.4 of Java), but we ran a test anyway: the newInstance method versus maintaining an object pool of 1000 objects. We did about 200K iterations of loading data from the DB using JDBC and populating these objects. I was amazed (even shocked) to see that the newInstance code was almost 10 times slower than the object-pool code. These objects represent tables with about 50 fields, all of type String.

    Can someone share their thoughts on this issue? I am now wondering whether object pooling of at least some DAO instances is the better option. The pool size, as I see it right now, should be large enough to meet the size of an average request. The flip side is that my memory footprint will go up, but I am beginning to wonder if this kind of idea makes sense, at least for the DAO entities representing tables of about 50 or more columns. Please share your ideas, and let me know if this has been tried by someone or if I am missing some point here.
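
    For illustration, the shape of that benchmark sketched in Python rather than Java -- the class, field names and constant row are made-up stand-ins for our 50-field DAO, and the real test used JDBC rows:

        import time

        class Dao:
            """Stand-in for a 50-field, all-string DAO."""
            def __init__(self):
                for i in range(50):
                    setattr(self, "field%d" % i, "")

        def fill(obj, row):
            # Simulates populating a DAO from a fetched DB row.
            for i, value in enumerate(row):
                setattr(obj, "field%d" % i, value)

        row = tuple("value%d" % i for i in range(50))
        N = 200000

        # Variant 1: fresh allocation per row.
        start = time.perf_counter()
        for _ in range(N):
            fill(Dao(), row)
        print("allocate: %.2f s" % (time.perf_counter() - start))

        # Variant 2: round-robin reuse from a fixed pool of 1000 instances.
        pool = [Dao() for _ in range(1000)]
        start = time.perf_counter()
        for i in range(N):
            fill(pool[i % len(pool)], row)
        print("pool:     %.2f s" % (time.perf_counter() - start))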

  • Swing: what to do when a JTree update takes too long and freezes other GUI elements?

    - by java.is.for.desktop
    Hello, everyone! I know that GUI code in Java Swing must be run via SwingUtilities.invokeAndWait or SwingUtilities.invokeLater; that way threading works fine. Sadly, in my situation, the GUI update is the thing that takes much longer than the background thread(s). More specifically: I update a JTree with just about 400 entries, nesting depth at most 4, so it should be nothing scary, right? But it sometimes takes a whole second! I need to ensure that the user is able to type in a JTextPane without delays. Well, guess what: the slow JTree updates do cause delays during input, and the JTextPane refreshes only once the tree has been updated. I am using NetBeans and know empirically that a Java app can update lots of information without freezing the rest of the UI. How can it be done?

    NOTE 1: All those DefaultMutableTreeNodes are prepared outside the invokeAndWait.
    NOTE 2: When I replace invokeAndWait with invokeLater, the tree doesn't get updated.
    NOTE 3: Found out that recursive tree expansion takes by far the most time.
    NOTE 4: I'm using a custom tree cell renderer; will try without it and report back.
    NOTE 4a: My tree cell renderer uses a map to cache and reuse created JTextComponents, keyed by tree node.
    CLUE 1: Wow! Without the custom cell renderer it's 10 times faster. I think I'll need a few good tutorials on writing custom tree cell renderers.

  • Optimize CUDA with Thrust in a loop

    - by macs
    Given the following piece of code, generating a kind of code dictionary with CUDA using Thrust (a C++ template library for CUDA):

        thrust::device_vector<float> dCodes(codes->begin(), codes->end());
        thrust::device_vector<int> dCounts(counts->begin(), counts->end());
        thrust::device_vector<int> newCounts(counts->size());

        for (int i = 0; i < dCodes.size(); i++) {
            float code = dCodes[i];
            int count = thrust::count(dCodes.begin(), dCodes.end(), code);
            newCounts[i] = dCounts[i] + count;

            // Had we already a count in one of the last runs?
            if (dCounts[i] > 0) {
                newCounts[i]--;
            }

            // Remove
            thrust::detail::normal_iterator<thrust::device_ptr<float> > newEnd =
                thrust::remove(dCodes.begin()+i+1, dCodes.end(), code);
            int dist = thrust::distance(dCodes.begin(), newEnd);
            dCodes.resize(dist);
            newCounts.resize(dist);
        }

        codes->resize(dCodes.size());
        counts->resize(newCounts.size());
        thrust::copy(dCodes.begin(), dCodes.end(), codes->begin());
        thrust::copy(newCounts.begin(), newCounts.end(), counts->begin());

    The problem is that I've noticed multiple copies of 4 bytes each in the CUDA Visual Profiler. IMO these are generated by:

    1. the loop counter i
    2. float code, int count and dist
    3. every access to i and the variables noted above

    This seems to slow everything down (sequential copying of 4 bytes is no fun...). So, how do I tell Thrust that these variables shall be handled on the device? Or are they already? Using thrust::device_ptr seems not sufficient to me, because I'm not sure whether the surrounding for loop runs on the host or on the device (which could also be another reason for the slowness).

  • Link checker ; how to avoid false positives

    - by Burnzy
    I'm working on a link checker / broken-link finder and I am getting many false positives. After double-checking I noticed that many links that came back as WebExceptions were actually downloadable, and in some other cases the status code is 404 yet I can access the page from the browser. So here is the code; it's pretty ugly, and I'd like to have something more, let's say, practical. All the status codes in that big if are used to filter out the ones I don't want to add to the broken links, because they are valid links (I tested them all). What I need to fix is the structure (if possible) and how to not get false 404s. Thank you!

        try
        {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
            request.Method = "Head";
            request.MaximumResponseHeadersLength = 32; // FOR IE SLOW SPEED
            request.AllowAutoRedirect = true;
            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
            {
                request.Abort();
            }
            /*
            WebClient wc = new WebClient();
            wc.DownloadString( uri );
            */
            _validlinks.Add(strUri);
        }
        catch (WebException wex)
        {
            if (!wex.Message.Contains("The remote name could not be resolved:")
                && wex.Status != WebExceptionStatus.ServerProtocolViolation)
            {
                if (wex.Status != WebExceptionStatus.Timeout)
                {
                    HttpStatusCode code = ((HttpWebResponse)wex.Response).StatusCode;
                    if (code != HttpStatusCode.OK
                        && code != HttpStatusCode.BadRequest
                        && code != HttpStatusCode.Accepted
                        && code != HttpStatusCode.InternalServerError
                        && code != HttpStatusCode.Forbidden
                        && code != HttpStatusCode.Redirect
                        && code != HttpStatusCode.Found)
                    {
                        _brokenlinks.Add(new Href(new Uri(strUri, UriKind.RelativeOrAbsolute), UrlType.External));
                    }
                    else
                        _validlinks.Add(strUri);
                }
                else
                    _brokenlinks.Add(new Href(new Uri(strUri, UriKind.RelativeOrAbsolute), UrlType.External));
            }
            else
                _validlinks.Add(strUri);
        }

  • MySQL efficiency as it relates to the database/table size

    - by mlissner
    I'm building a system using Django, Sphinx and MySQL that's very quickly becoming quite large. The database currently has about 2,000 rows, and I've written a program that's going to populate it with another 40,000 rows in a couple of days. Since the database is live right now, and since I've never had a database with this much information in it, I'm worried about some things:

    1. Is adding all these rows going to seriously degrade the efficiency of my Django app? Will I need to go back through it and optimize all my database calls so they're doing things more cleverly? Or will this make the database slow all around to the extent that I can't do anything about it at all?
    2. If you scoff at my 40k rows, then my next question is: at what point SHOULD I be concerned? I will likely be adding another couple hundred thousand soon, so I worry, and I fret.
    3. How is Sphinx going to feel about all this? Is it going to freak out when it realizes it has to index all this data? Or will it be fine? Is this normal for it? If it is, at what point should I be concerned that it's too much data for Sphinx?

    Thanks for any thoughts.

  • random data using php & mysql

    - by Prakash
    I have a MySQL table structured like below:

        CREATE TABLE test (
            id int(11) NOT NULL auto_increment,
            title text NULL,
            tags text NULL,
            PRIMARY KEY (id)
        );

    Data in the tags field is stored as comma-separated text, like "html,php,mysql,website,html" etc. Now I need to create an array that contains around 50 randomly selected tags from random records. Currently I am using RAND() to select 15 random records, holding all the tags from those 15 records in an array, and then using array_rand() to randomize the array and select only 50 random tags:

        $query = mysql_query("select * from test order by id asc, RAND() limit 15");
        $tags = "";
        while ($eachData = mysql_fetch_array($query)) {
            $additionalTags = $eachData['tags'];
            if ($tags == "") {
                $tags .= $additionalTags;
            } else {
                $tags .= "," . $additionalTags;
            }
        }
        $tags = explode(",", $tags);
        $newTags = array();
        foreach ($tags as $tag) {
            $tag = trim($tag);
            if ($tag != "") {
                if (!in_array($tag, $newTags)) {
                    $newTags[] = $tag;
                }
            }
        }
        $random_newTags = array_rand($newTags, 50);

    Now I have a huge number of records in the database, and because of that RAND() performs very slowly and sometimes doesn't work at all. Can anyone let me know how to handle this situation correctly so that my page will work normally?

  • how can I speed up insertion of many rows to a table via ADO.NET?

    - by jcollum
    I have a table that has 5 columns: AcctId (int), Address1 (varchar), Address2 (varchar), Person1 (varchar), Person2 (varchar). I'm generating random data to insert into this table via a C# console application. I tried doing this random-data generation in SQL Server and decided it was not a good solution -- SQL is not good at randomizing on a per-row basis. Generating the random data -- 975k rows of it -- takes a minimal amount of time; it's in a List of custom objects. I need to take this random data and update many rows in the database with it. I tried updating the rows one at a time, which was very slow because of the repeated searching of the List object in code. So I think the best approach is to put all the randomized data into a table in the database, then update all the other tables that use this data, i.e.:

        UPDATE t
        SET t.Address1 = d.Address1
        FROM Table1 t
        INNER JOIN RandomizedData d ON d.AcctId = t.Acct_ID

    The database is very un-normalized, so this Acct data is sprinkled all over the place, and I've got no control over the normalization. So, having decided to insert all of the randomized data into a single table, I set out to create insert scripts:

        USE TheDatabase

        INSERT tmp_RandomizedData
        SELECT 1,'4392 EIGHTH AVE','','JENNIFER CARTER','BARBARA CARTER'
        UNION ALL
        SELECT 2,'2168 MAIN ST','HNGR F','DANIEL HERNANDEZ','SUSAN MARTIN'
        -- etc., another 98 times... (FYI, this is not real data!)

    I'm building this INSERT script in batches of 100. It takes on average 175 ms to run each insert. Does this seem like a long time? It's going to take about 35 minutes to run the whole insert. The table doesn't have a primary key or any indexes; I was planning on adding those after all the data is inserted (thinking that would be faster). Is there a better way to do this?
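
    (For comparison, the one-prepared-statement, single-transaction approach is sketched below in Python with SQLite, purely to illustrate the idea -- the table and rows are made up; in ADO.NET the analogous fast path would be a bulk-load API such as SqlBulkCopy rather than hand-built INSERT scripts.)

        import sqlite3
        import time

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE RandomizedData (
            AcctId INTEGER, Address1 TEXT, Address2 TEXT,
            Person1 TEXT, Person2 TEXT)""")

        rows = [(i, "%d EIGHTH AVE" % i, "", "PERSON %d" % i, "PERSON %d" % (i + 1))
                for i in range(100000)]

        # One parameterized statement, many rows, one transaction --
        # instead of hand-built 100-row INSERT scripts.
        start = time.perf_counter()
        with conn:  # wraps the inserts in a single transaction
            conn.executemany(
                "INSERT INTO RandomizedData VALUES (?, ?, ?, ?, ?)", rows)
        print("inserted %d rows in %.2f s" % (len(rows), time.perf_counter() - start))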

  • Show hide DIVs : jQuery

    - by Muhammad Sajid
    Hi, I have two links and I want to show/hide them one at a time. My code is:

        <!DOCTYPE html>
        <html>
        <head>
        <script class="jsbin" src="http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js"></script>
        <script type="text/javascript" src="js/jquery.js"></script>
        <script type="text/javascript">
            // we will add our javascript code here
            $(document).ready(function() {
                $('#link').click(function(){
                    $('#colorDiv').slideToggle('slow');
                    return false;
                });
            });
        </script>
        <meta charset=utf-8 />
        <title>JS Bin</title>
        <!--[if IE]>
        <script src="http://html5shiv.googlecode.com/svn/trunk/html5.js"></script>
        <![endif]-->
        <style>
            #dv { width:100px; height:100px; border:1px solid; }
        </style>
        </head>
        <body>
        <table cellspacing="2">
            <tr><td><a href="#" id="link">Color</a></td><td><a href="#" id="link">Car</a></td></tr>
            <tr><td><div id="colorDiv">Red</div></td><td><div id="carDiv">PRADO</div></td></tr>
        </table>
        </body>
        </html>

    By default the first div should be shown. Thanks.

  • Count rows against to SQL server (2005) table?

    - by David.Chu.ca
    I have a simple question about two options for getting a count of rows from SQL Server 2005 (I am using VS 2005). Option one:

        SELECT id FROM Table1 WHERE dt >= startDt AND dt < endDt;

    I get the list of ids from the above call into a cache, then get the count from List.Count. The other option:

        SELECT COUNT(*) FROM Table1 WHERE dt >= startDt AND dt < endDt;

    The above call gets the count directly. The issue is that I had several cases of timeout exceptions with the second method. What I found is that Table1 is too big, with millions of rows of data. When I used the first option, it seems OK. I am confused by the fact that COUNT(*) takes more time than fetching all the rows (is that true?). I'm not sure whether the aggregate COUNT(*) causes SQL Server to create a temporary table or server-side cache, resulting in slow performance when the table is too big. What is the best way to get the count?

  • iPhone Multithreaded Search

    - by Kulpreet
    I'm sort of new to any sort of multithreading and simply can't seem to get a simple search method working properly on a background thread. Everything seems to be in order, with an NSAutoreleasePool and the UI being updated on the main thread. The app doesn't crash and does perform the search in the background, but the search results contain several of the same items repeated, depending on how fast I type. The search works properly without the multithreading (which is commented out), but is very slow because of the large amount of data I am working with. Here's the code:

        - (void)filterContentForSearchText:(NSString*)searchText {
            isSearching = YES;
            NSAutoreleasePool *apool = [[NSAutoreleasePool alloc] init];

            /* Update the filtered array based on the search text and scope. */
            //[self.filteredListContent removeAllObjects]; // First clear the filtered array.
            for (Entry *entry in appDelegate.entries) {
                NSComparisonResult result = [entry.gurmukhiEntry compare:searchText
                    options:(NSCaseInsensitiveSearch|NSDiacriticInsensitiveSearch)
                    range:NSMakeRange(0, [searchText length])];
                if (result == NSOrderedSame) {
                    [self.filteredListContent addObject:entry];
                }
            }
            [self.searchDisplayController.searchResultsTableView
                performSelectorOnMainThread:(@selector(reloadData))
                withObject:nil waitUntilDone:NO];
            //[self.searchDisplayController.searchResultsTableView reloadData];

            [apool drain];
            isSearching = NO;
        }

        - (BOOL)searchDisplayController:(UISearchDisplayController *)controller
            shouldReloadTableForSearchString:(NSString *)searchString {
            if (!isSearching) {
                [self.filteredListContent removeAllObjects]; // First clear the filtered array.
                [self performSelectorInBackground:(@selector(filterContentForSearchText:))
                    withObject:searchString];
            }
            //[self filterContentForSearchText:searchString];
            return NO; // Return YES to cause the search result table view to be reloaded.
        }

  • CakePHP - hasMany not fetching?

    - by Paolo Bergantino
    Maybe I am just having a slow day, but for the life of me I can't figure out why this is happening. I haven't done CakePHP in a while and I am trying to use the 1.3 version, but this doesn't seem to be working... I have two models:

    area.php:

        <?php
        class Area extends AppModel {
            var $name = 'Area';
            var $useTable = 'OR_AREA';
            var $primaryKey = 'A_ID';
            var $belongsTo = array(
                'Building' => array(
                    'className' => 'Building',
                    'foreignKey' => 'FK_B_ID',
                ),
                'Facility' => array(
                    'className' => 'Facility',
                    'foreignKey' => 'FK_F_ID',
                ),
                'System' => array(
                    'className' => 'System',
                    'foreignKey' => 'FK_S_ID',
                )
            );
        }
        ?>

    building.php:

        <?php
        class Building extends AppModel {
            var $name = 'Building';
            var $useTable = 'OR_BLDG';
            var $primaryKey = 'B_ID';
            var $hasMany = array(
                'Area' => array(
                    'className' => 'Area',
                    'foreignKey' => 'FK_B_ID',
                )
            );
        }
        ?>

    OR_AREA has a column titled FK_B_ID that refers to B_ID. If I run something like:

        $this->Building->find('all', array('recursive' => 2));

    I get empty [Area] arrays for all the Buildings, even though there are plenty of Areas in the OR_AREA table that are associated with Buildings. Not only that, the query table doesn't even show that CakePHP attempted to find anything but the records in OR_BLDG. All the more puzzling, if I do:

        $this->Area->find('all');

    I get all the Areas, and the [Building] arrays are populated when appropriate. What am I missing?

  • How do I stop the m2eclipse plugin interfering with command line mvn builds?

    - by locka
    I use the m2eclipse plugin in Eclipse so that I can import a Maven project. The plugin reads the pom.xml and sorts out the dependencies in the projects in an Eclipse-friendly way, so I'm not looking at a sea of broken references and errors. I use Eclipse for code development; however, I usually build the projects from the command line, e.g. "mvn clean install". Unfortunately, when I do this, m2eclipse detects disk activity and attempts to rebuild the workspace. This interferes with the command-line build and sometimes results in a race condition. For example, the command-line build might be in its clean phase but fail because it tries to delete a file or directory which is locked during the workspace rebuild.

    Aside from that, workspace rebuilding is incredibly slow, and between failed builds and wasted CPU my build process is 2-3x longer than it should be. It isn't an option to not use Eclipse (e.g. to use NetBeans), or to disable m2eclipse; it is a useful plugin except for this behaviour. So my question is: how do I stop m2eclipse from rebuilding the workspace all the time? Can I invoke a manual refresh and otherwise disable this behaviour?

  • Fetching real time data from excel

    - by Umesh Sharma
    I am seriously looking for your valuable help, first time here. If possible, please help me. I am developing a VB.NET app in which I read "real time data" from an Excel sheet using "Microsoft.Office.Interop.Excel", i.e. Excel automation. All cells in the Excel sheet fetch stock data from some local DDE server, like "=XYZ|Bid!GOLD", "=XYZ|Bid!SILVER", "=XYZ|Ask!SILVER" and so on... Some cells also have fixed values like "Symbol", "Bid Rate", "32.90" etc. Values of the DDE-mapped cells (i.e. =XYZ|xxxx!yyy) are continuously changing.

    THE PROBLEM is here: "fixed values" from Excel cells come through to my app quite OK, but all DDE-mapped cell values come through as "-2146826246" (when the data-source local DDE server is ON) or "-2146826265" (OFF). Although, if I use C#.NET it's all OK, just not with VB.NET. I want to display a range of Excel cells (A1 to J50) in a VB.NET ListView, with values changing every 200 ms (5 times every second).

    Important: is it possible to BIND "ListView items/columns values" with "Excel cells" or some local memory variables? Currently I am reading Excel "cell by cell" and trying to put the values into the .NET ListView, but CPU usage is very high and it's a toooo slow process. If yes, then how, please? I am a VFP developer but new to .NET. It's very easy in VFP, so why not in .NET? Please guide me if someone has the solution...

  • t-sql most efficient row to column? crosstab for xml path, pivot

    - by ajberry
    I am looking for the most performant way to turn rows into columns. I have a requirement to output the contents of the db (not the actual schema below, but the concept is similar) in both fixed-width and delimited formats. The FOR XML PATH query below gives me the result I want, but when dealing with anything other than small amounts of data, it can take a while:

        select orderid
            ,REPLACE((
                SELECT ' ' + CAST(ProductId as varchar)
                FROM _details d
                WHERE d.OrderId = o.OrderId
                ORDER BY d.OrderId, d.DetailId
                FOR XML PATH('')
            ),'&#x20;','') as Products
        from _orders o

    I've looked at PIVOT, but most of the examples I have found are aggregating information. I just want to combine the child rows and tack them onto the parent. I should also point out I don't need to deal with column names either, since the output of the child rows will be either a fixed-width or a delimited string. For example, given the following tables:

        OrderId     CustomerId
        ----------- -----------
        1           1
        2           2
        3           3

        DetailId    OrderId     ProductId
        ----------- ----------- -----------
        1           1           100
        2           1           158
        3           1           234
        4           2           125
        5           3           101
        6           3           105
        7           3           212
        8           3           250

    for an order I need to output:

        orderid     Products
        ----------- -----------------------
        1           100 158 234
        2           125
        3           101 105 212 250

    or:

        orderid     Products
        ----------- -----------------------
        1           100|158|234
        2           125
        3           101|105|212|250

    Thoughts or suggestions? I am using SQL Server 2005. Example setup:

        create table _orders (
            OrderId int identity(1,1) primary key nonclustered
            ,CustomerId int
        )
        create table _details (
            DetailId int identity(1,1) primary key nonclustered
            ,OrderId int
            ,ProductId int
        )

        insert into _orders (CustomerId)
        select 1 union select 2 union select 3

        insert into _details (OrderId,ProductId)
        select 1,100 union select 1,158 union select 1,234
        union select 2,125 union select 3,105 union select 3,101
        union select 3,212 union select 3,250

    The FOR XML PATH query above outputs what I want; however, it is very slow for large amounts of data. One of the child tables is over 2 million rows, pushing the processing time out to ~4 hours.
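
    To pin down the expected output, here is the same transformation over the sample rows above, sketched in Python just for clarity (not part of the T-SQL question itself):

        from itertools import groupby

        # (OrderId, ProductId) pairs, already ordered by OrderId, DetailId.
        details = [(1, 100), (1, 158), (1, 234), (2, 125),
                   (3, 101), (3, 105), (3, 212), (3, 250)]

        products = {order: " ".join(str(pid) for _, pid in grp)
                    for order, grp in groupby(details, key=lambda row: row[0])}

        print(products)  # {1: '100 158 234', 2: '125', 3: '101 105 212 250'}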

  • Index Tuning for SSIS tasks

    - by Raj More
    I am loading tables in my warehouse using SSIS. Since my SSIS packages are slow, it seemed like a great idea to build indexes on the tables. There are no primary keys (and therefore no foreign keys), indexes (clustered or otherwise), or constraints on this warehouse. In other words, it is 100% efficiency-free. We are going to add indexes based on usage, by analyzing new queries and current query performance.

    So, instead of doing it our old-fashioned sweat-and-grunt way of actually reading the SQL statements and execution plans, I thought I'd put the shiny new Database Engine Tuning Advisor to use. I turned SQL logging off in my SSIS package, ran a "Tuning" trace, saved it to a table, and analyzed the output in the Tuning Advisor. Most of the lookups are done like this:

        exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID]
            FROM [dbo].[Company]
            WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',1

    (and the same statement again with @P1 = 2, 3, 4, and so on). When analyzed, these statements get the reason "Event does not reference any tables". Huh? Does it not see the FROM [dbo].[Company]?!! What is going on here? So, I have multiple questions:

    1. How do I get it to capture the actual statement executing in my trace, not what was submitted in a batch?
    2. Are there any best practices to follow for tuning performance related to SSIS packages running against SQL Server 2008?

  • jQuery loading issues within wordpress

    - by Chase
    I am having a couple of problems trying to manually insert some jQuery features into a WordPress theme. I have a jQuery-based lightbox plugin that is working fine. If I manually load the jQuery script into WordPress, the functions seem to work, but in reverse: a slide that should be hidden is revealed, and a pop-up that should be hidden is already shown. I don't think I'm supposed to manually include jQuery in my theme, but using wp_enqueue_script('jquery'); doesn't seem to resolve my issues either.

        <script src="http://platform.twitter.com/anywhere.js?id=i5CnpkmwnlWpDdAZGVpxw&v=1" type="text/javascript"></script>
        <script type="text/javascript">
        $(document).ready(function(){
            $(".btn-slide").click(function(){
                $("#twitpanel").slideToggle("slow");
                $(this).toggleClass("active");
            });
        });
        </script>

        <div id="tweetit"><a class="btn-slide">Tell em'</a>
            <div id="twitpanel"></div>
            <script type="text/javascript">
            twttr.anywhere(function (T) {
                T("#twitpanel").tweetBox({
                    height: 100,
                    width: 225,
                    defaultContent: "Some Random Text"
                });
            });
            </script>
        </div></h2>

    Like I said, it works, but in the reverse fashion from what it should be. I think I'm just loading something in wrong? TIA, Chase

  • Quickly determine if a number is prime in Python for numbers < 1 billion

    - by Frór
    Hi, my current algorithm to check the primality of numbers in Python is way too slow for numbers between 10 million and 1 billion. I want it to be improved, knowing that I will never get numbers bigger than 1 billion. The context is that I can't get an implementation that is quick enough for solving problem 60 of Project Euler: I'm getting the answer to the problem in 75 seconds where I need it in 60 seconds. http://projecteuler.net/index.php?section=problems&id=60

    I have very little memory at my disposal, so I can't store all the prime numbers below 1 billion. I'm currently using standard trial division tuned with 6k±1. Is there anything better than this? Do I already need to get the Rabin-Miller method for numbers that are this large?

        primes_under_100 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43,
                            47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97]

        def isprime(n):
            if n <= 100:
                return n in primes_under_100
            if n % 2 == 0 or n % 3 == 0:
                return False
            for f in range(5, int(n ** .5) + 1, 6):  # + 1 so perfect squares are caught
                if n % f == 0 or n % (f + 2) == 0:
                    return False
            return True

    How can I improve this algorithm?
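
    For reference, a minimal deterministic Miller-Rabin sketch (not from the original post): for n < 3,215,031,751 -- so certainly for n below 1 billion -- testing only the witnesses 2, 3, 5 and 7 is known to be sufficient:

        def isprime_mr(n):
            """Deterministic Miller-Rabin, valid for n < 3,215,031,751."""
            if n < 2:
                return False
            for p in (2, 3, 5, 7):
                if n % p == 0:
                    return n == p
            # Write n - 1 as d * 2**s with d odd.
            d, s = n - 1, 0
            while d % 2 == 0:
                d //= 2
                s += 1
            for a in (2, 3, 5, 7):
                x = pow(a, d, n)
                if x in (1, n - 1):
                    continue
                for _ in range(s - 1):
                    x = x * x % n
                    if x == n - 1:
                        break
                else:
                    return False  # a witnesses that n is composite
            return True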

  • is mysql index useful on column 'state' when only doing bit-operations on the column?

    - by Geert-Jan
    I have a lot of domain entities (stored in MySQL) which undergo lots of different operations. Each operation is executed from a different program. I need to keep (flow) state for these entities, which I implemented as a long field 'flowstate' used as a bitset. To query MySQL for entities which have undergone a certain operation, I do something like:

        select * from entities where state >> 7 & 1 = 1

    indicating that bit 7 (corresponding to operation 7) has run. (<-- simplified) Anyway, I really didn't pay attention to the performance implications of this setup in the beginning, and I think I'm in a bit of trouble, since queries like the above run pretty slowly. What I'd like to know:

    1. Does a MySQL index on 'flowstate' help at all? After all, it's not a single value MySQL can quickly find using a binary sort or whatever.
    2. If it doesn't, are there any other things I could do to speed things up?
    3. Are there special 'mask indices' for fields with use cases like the above?

    TIA, Geert-jan

  • fastest SCM tool available for Embedded software development

    - by wrapperm
    Hi all, in my company we presently use Rational ClearCase as the software configuration management tool for our embedded software development. The software is basically for automobiles, to be specific for engines (I don't think this information really matters). But I find ClearCase to be very slow in performing activities (accessing files, branching and labelling), in addition to which there are various other limitations. We have recently decided to research some free and open-source distributed version control systems which would be able to handle our large projects with speed and efficiency. The tool should be a full-fledged repository with complete history and full revision-tracking capabilities, not dependent on network access or a central server. Branching and merging should be fast and easy to do. It should also support multi-site development. With these above-mentioned requirements, we have come up with some of the tools presently available on the market: Git, Mercurial, Bazaar, Subversion, CVS, Perforce, and Visual SourceSafe. I need everybody's help in finding an appropriate SCM tool that meets the above-mentioned requirements. Thanking you in advance, Rahamath.
