Search Results

Search found 7671 results on 307 pages for 'slow browsing'.

  • Seeking a faster $(':data(key)')

    - by PoltoS
    I'm writing an extension to jQuery that adds data to DOM elements using el.data('lalala', my_data); and then uses that data to update elements dynamically. Each time I get new data from the server I need to update all elements having el.data('lalala') != null; To get all needed elements I use an extension by James Padolsey: $(':data(lalala)').each(...); Everything was great until I came to a situation where I need to run that code 50 times - it is very slow! It takes about 8 seconds to execute on my page with 3640 DOM elements: var x, t = (new Date).getTime(); for (n=0; n < 50; n++) { jQuery(':data(lalala)').each(function() { x++; }); }; console.log(((new Date).getTime()-t)/1000); Since I don't need a RegExp as the parameter of the :data selector, I've tried to replace this with var x, t = (new Date).getTime(); for (n=0; n < 50; n++) { jQuery('*').each(function() { if ($(this).data('lalala')) x++; }); }; console.log(((new Date).getTime()-t)/1000); This code is faster (5 sec), but I want to get more. Q: Is there any faster way to get all elements with this data key? In fact, I can keep an array with all the elements I need, since I execute .data('key') in my module. Checking 100 elements having the desired .data('lalala') is better than checking 3640 :) So the solution would be like for (i in elements) { el = elements[i]; .... But sometimes elements are removed from the page (using jQuery .remove()). Both solutions described above [the $(':data(lalala)') solution and if ($(this).data('lalala'))] will skip removed items (as I need), while the solution with the array will still point to the removed element (in fact, the element would not really be deleted - it will only be deleted from the DOM tree - because my array will still hold a reference). I found that .remove() also removes data from the node, so my solution changes into var toRemove = []; for (var i in elements) { var el = elements[i]; if ($(el).data('lalala')) .... else toRemove.push(i); }; for (var ii = toRemove.length - 1; ii >= 0; ii--) elements.splice(toRemove[ii], 1); // remove from the array, highest index first so the remaining indices stay valid This solution is 100 times faster! Q: Will the garbage collector release the memory taken by DOM elements once they are deleted from that array? Remember, the elements were referenced by the DOM tree, we made a new reference in our array, then removed them with .remove() and then removed them from the array. Is there a better way to do this?
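
    A minimal sketch of that registry approach in plain jQuery - the names (registry, setTracked, eachTracked) are illustrative, not part of any plugin:

        var registry = [];

        // Remember an element whenever we attach tracked data to it.
        function setTracked(el, key, value) {
            $(el).data(key, value);
            registry.push(el);
        }

        // Visit every tracked element, pruning ones that .remove() cleaned up.
        function eachTracked(key, fn) {
            for (var i = registry.length - 1; i >= 0; i--) {  // backwards, so splice is safe
                var el = registry[i];
                if ($(el).data(key)) {
                    fn(el);
                } else {
                    registry.splice(i, 1);  // data gone => node left the DOM
                }
            }
        }

    Once an element is spliced out of the array, no JavaScript reference to the detached node remains (assuming nothing else holds one), so the garbage collector is free to reclaim it.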

    Read the article

  • practical security ramifications of increasing WCF clock skew to more than an hour

    - by Andrew Patterson
    I have written a WCF service that returns 'semi-private' data concerning people's names, addresses and phone numbers. By semi-private, I mean that there is a username and password to access the data, and the data is meant to be secured in transit. However, IMHO no one is going to expend any energy trying to obtain the data, as it is mostly available in the public phone book anyway. At some level, the security is a bit of security 'theatre' to tick some boxes imposed on us by government entities. The client end of the service is an application which is given out to registered 'users' to run within their own IT setups. We have no control over the IT of the users - and in fact they often tell us to 'go jump' if we put too many requirements on their systems. One problem we have been encountering is numerous users whose system clocks are not accurate. This can be caused either by genuinely slow/fast clocks, or more likely by a timezone or daylight-savings error (putting their machine an hour off the 'real' time). A feature of the WCF bindings we are using is that they rely on the notion of time to detect replay attacks etc. <wsHttpBinding> <binding name="normalWsBinding" maxBufferPoolSize="524288" maxReceivedMessageSize="655360"> <reliableSession enabled="false" /> <security mode="Message"> <message clientCredentialType="UserName" negotiateServiceCredential="false" algorithmSuite="Default" establishSecurityContext="false" /> </security> </binding> </wsHttpBinding> The inaccurate client clocks cause security exceptions to be thrown and unhappy users. Other than suggesting users correct their clocks, we know that we can increase the clock skew of the security bindings. http://www.danrigsby.com/blog/index.php/2008/08/26/changing-the-default-clock-skew-in-wcf/ My question is: what are the real practical security ramifications of increasing the skew to, say, 2 hours? If an attacker can perform some sort of replay attack, why would a clock-skew window of 5 minutes necessarily be safer than 2 hours? I presume performing any attack with a security mode of 'message' requires more than just capturing some data at a proxy and sending the data back in again to 'replay' the call? In a situation like mine, where data is only 'read' by the users, are there indeed any security ramifications at all to allowing 'replay' attacks?

    Read the article

  • XML Schema to Java Classes with XJC

    - by nevets1219
    I am using xjc to generate Java classes from the XML schema, and the following is an excerpt of the XSD. <xs:element name="NameInfo"> <xs:complexType> <xs:sequence> <xs:choice> <xs:element ref="UnstructuredName"/> <!-- This line --> <xs:sequence> <xs:element ref="StructuredName"/> <xs:element ref="UnstructuredName" minOccurs="0"/> <!-- and this line! --> </xs:sequence> </xs:choice> <xs:element ref="SomethingElse" minOccurs="0"/> </xs:sequence> </xs:complexType> </xs:element> For the most part the generated classes are fine, but for the above block I would get something like: public List<Object> getContent() { if (content == null) { content = new ArrayList<Object>(); } return this.content; } with the following comment above it: * You are getting this "catch-all" property because of the following reason: * The field name "UnstructuredName" is used by two different parts of a schema. See: * line XXXX of file:FILE.xsd * line XXXX of file:FILE.xsd * To get rid of this property, apply a property customization to one of both of the following declarations to change their names: * Gets the value of the content property. I have placed a comment at the end of the two lines in question. At the moment, I don't think it will be easy to change the schema, since this was decided between vendors, and I would not want to go this route (if possible) as it will slow down progress quite a bit. I searched and have found this page; is the external customization what I want to do? I have been mostly working with the generated classes, so I'm not entirely familiar with the process that generates them. A simple example of the "property customization" would be great! An alternative method of generating the Java classes would be fine as long as the schema can still be used.
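
    For reference, an external binding customization file for xjc usually looks something like the sketch below. The XPath and the new property name here are illustrative guesses and would need to be adjusted to match the real schema:

        <jaxb:bindings version="2.0"
            xmlns:jaxb="http://java.sun.com/xml/ns/jaxb"
            xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <!-- Rename one of the two conflicting UnstructuredName references -->
          <jaxb:bindings schemaLocation="FILE.xsd"
              node="//xs:element[@name='NameInfo']//xs:sequence/xs:sequence/xs:element[@ref='UnstructuredName']">
            <jaxb:property name="nestedUnstructuredName"/>
          </jaxb:bindings>
        </jaxb:bindings>

    Saved as, say, bindings.xjb, it is passed to the compiler with xjc -b bindings.xjb FILE.xsd; once the two declarations have distinct property names, xjc stops generating the catch-all getContent() list.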

    Read the article

  • Asynchronous readback from OpenGL front buffer using multiple PBOs

    - by KillianDS
    I am developing an application that needs to read back the whole frame from the front buffer of an OpenGL application. I can hijack the application's OpenGL library and insert my code on swapbuffers. At the moment I am successfully using a simple but excruciatingly slow glReadPixels command without PBOs. Now I read about using multiple PBOs to speed things up. While I think I've found enough resources to actually program that (isn't that hard), I have some operational questions left. I would do something like this: create a series (e.g. 3) of PBOs; use glReadPixels in my swapBuffers override to read data from the front buffer to a PBO (should be fast and non-blocking, right?); create a separate thread to call glMapBufferARB, once per PBO after a glReadPixels, because this will block until the pixels are in client memory; process the data from step 3. Now my main concern is of course in steps 2 and 3. I read about glReadPixels used on PBOs being non-blocking; will this be an issue if I issue new OpenGL commands after that very fast? Will those OpenGL commands block? Or will they continue (my guess), and if so, I guess only swapbuffers can be a problem - will this one stall, or will glReadPixels from the front buffer be many times faster than swapping (about every 15-30 ms), or, worst-case scenario, will swapbuffers be executed while glReadPixels is still reading data to the PBO? My current guess is this logic will do something like this: copy FRONT_BUFFER -> generic place in VRAM, copy VRAM -> RAM. But I have no idea which of those 2 is the real bottleneck, and moreover, what the influence on the normal OpenGL command stream is. Then in step 3: is it wise to do this asynchronously in a thread separated from the normal OpenGL logic? At the moment I think not; it seems you have to restore buffer operations to normal after doing this, and I can't install synchronization objects in the original code to temporarily block those. So I think my best option is to define a certain swapbuffer delay before reading them out, e.g. calling glReadPixels on PBO i%3 and glMapBufferARB on PBO (i+2)%3 in the same thread, resulting in a delay of 2 frames. Also, when I call glMapBufferARB to use data in client memory, will this be the bottleneck, or will glReadPixels (asynchronously) be the bottleneck? And finally, if you have some better ideas to speed up frame readback from the GPU in OpenGL, please tell me, because this is a painful bottleneck in my current system. I hope my question is clear enough. I know the answer will probably also be somewhere on the internet, but I mostly came up with results that used PBOs to keep buffers in video memory and do processing there. I really need to read back the front buffer to RAM, and I do not find any clear explanations about performance in that case (which I need; I cannot rely on "it's faster", I need to explain why it's faster). Thank you

    Read the article

  • Galleria jQuery plugin briefly shows all images in IE 7 & 8

    - by hollyb
    I'm using the galleria jQuery plugin on a site. When the gallery first loads, all of the images appear briefly & vertically in IE 7 & 8. This doesn't happen when I isolate the gallery, only when I put it on a somewhat heavy page. This leads me to believe that it happens when the page is a little slow to load. Does anybody know a way to fix this? I feel like an overflow: hidden should fix this, but I've applied it along with a height in every container I could think of. Anybody have any ideas? Here is my CSS: .galleria{list-style:none;width:350px; overflow:hidden; height: 70px;} .galleria li{display:block;width:50px;height:50px;overflow:hidden;float:left;margin:4px 10px 20px 0;} .galleria li a{display:none;} .galleria li div{position:absolute;display:none;top:0;left:180px;} .galleria li div img{cursor:pointer;} .galleria li.active div img,.galleria li.active div{display:block;} .galleria li img.thumb{cursor:pointer;top:auto;left:auto;display:block;width:auto;height:auto} .galleria li .caption{display: inline;padding-top:.5em; width: 300px; } * html .galleria li div span{width:350px;} /* MSIE bug */ HTML: <ul class="gallery"> <li class="active"><img src="1.jpg" cap="A great view by so and so. This is a long block of info.<br /><span style=color:#666;>Photo by: Billy D. Williams</span>" alt="Image01"></li> <li><img src="2.jpg" cap="A mountain <span style=color:#666;>Photo by: Billy D. Williams</span>" alt="Image01"></li> <li><img src="3.jpg" cap="Another witty caption <span style=color:#666;>Photo by: Billy D. Williams</span>" alt="Image01"></li> <li><img src="4.jpg" cap="<span style=color:#666;>Photo by: Billy D. Williams</span>" alt="Image01"></li> </ul>
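
    One common workaround for this kind of load-time flash, sketched below with an assumed wrapper element (the #galleryWrap id is hypothetical), is to hide the gallery with a script-added class and reveal it only once the whole page, plugin included, has finished loading:

        // In the <head>, before the gallery markup renders,
        // so no-script visitors still see the plain image list:
        document.documentElement.className += ' js';

        // Stylesheet rule that only bites when JS is on:
        // .js #galleryWrap { visibility: hidden; }

        $(window).load(function () {
            // By now galleria has finished rewriting the markup; reveal it.
            $('#galleryWrap').css('visibility', 'visible');
        });

    Using visibility rather than display means the gallery still reserves its space, so the page does not jump when it appears.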

    Read the article

  • How to quickly acquire and process real time screen output

    - by Akusete
    I am trying to write a program to play a full screen PC game for fun (as an experiment in Computer Vision and Artificial Intelligence). For this experiment I am assuming the game has no underlying API for AI players (nor is the source available), so I intend to process the visual information rendered by the game on the screen. The game runs in full screen mode on a win32 system (DirectX, I assume). Currently I am using the Win32 functions: #include <windows.h> #include <cvaux.h> class Screen { public: HWND windowHandle; HDC windowContext; HBITMAP buffer; HDC bufferContext; CvSize size; uchar* bytes; int channels; Screen () { windowHandle = GetDesktopWindow(); windowContext = GetWindowDC (windowHandle); size = cvSize (GetDeviceCaps (windowContext, HORZRES), GetDeviceCaps (windowContext, VERTRES)); buffer = CreateCompatibleBitmap (windowContext, size.width, size.height); bufferContext = CreateCompatibleDC (windowContext); SelectObject (bufferContext, buffer); channels = 4; bytes = new uchar[size.width * size.height * channels]; } ~Screen () { ReleaseDC(windowHandle, windowContext); DeleteDC(bufferContext); DeleteObject(buffer); delete[] bytes; } void CaptureScreen (IplImage* img) { BitBlt(bufferContext, 0, 0, size.width, size.height, windowContext, 0, 0, SRCCOPY); int n = size.width * size.height; int imgChannels = img->nChannels; GetBitmapBits (buffer, n * channels, bytes); uchar* src = bytes; uchar* dest = (uchar*) img->imageData; uchar* end = dest + n * imgChannels; while (dest < end) { dest[0] = src[0]; dest[1] = src[1]; dest[2] = src[2]; dest += imgChannels; src += channels; } } }; The rate at which I can process frames using this approach is much too slow. Is there a better way to acquire screen frames?

    Read the article

  • Implement looped movement animation with tap to cancel

    - by Nader
    Hi All; My app is based around a grid and an image that moves within a grid that is contained within a scrollview. I have an imageview that I am animating from one cell to another in time with a slow finger movement, recentering the scrollview. That is rather straightforward. I have also implemented the ability to detect a swipe and therefore move the image all the way to the end of the grid with the uiscrollview recentering. I have even implemented the ability to detect a subsequent tap and freeze the swiped movement. The issue with the swipe movement is that the UIScrollView will scroll all the way to the end before the image reaches the end, and so I have to wait for the image to arrive. Also, when I freeze the movement of the image, I have to re-align the image to a cell (which I can do). I have come to the realization that I have to animate the image one cell at a time for swipes, recentering the uiscrollview before moving the image to the next cell. I have attempted to implement this but I cannot come up with a solution that works, or works properly. Can anyone suggest how I go about implementing this? Even if you are able to put up code from a different example or pseudocode, it would help a lot, as I cannot work out how this should be done - should I be using selectors, a listener in delegates? I just simply lack the experience to solve this design pattern. Here is some code. Note that the sprite is a UIImageView: - (void)animateViewToPosition:(SpriteView *)sprite Position:(CGPoint)pos Duration:(CFTimeInterval)duration{ CGMutablePathRef traversePath = CGPathCreateMutable(); CGPathMoveToPoint(traversePath, NULL, sprite.center.x, sprite.center.y); CGPathAddLineToPoint(traversePath, NULL, pos.x, pos.y); CAKeyframeAnimation *traverseAnimation = [CAKeyframeAnimation animationWithKeyPath:kAnimatePosition]; traverseAnimation.duration = duration; traverseAnimation.removedOnCompletion = YES; traverseAnimation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionLinear]; traverseAnimation.delegate = sprite; traverseAnimation.path = traversePath; CGPathRelease(traversePath); [sprite.layer addAnimation:traverseAnimation forKey:kAnimatePosition]; sprite.center = pos; }

    Read the article

  • Efficiency of Java "Double Brace Initialization"?

    - by Jim Ferrans
    In Hidden Features of Java the top answer mentions Double Brace Initialization, with a very enticing syntax: Set<String> flavors = new HashSet<String>() {{ add("vanilla"); add("strawberry"); add("chocolate"); add("butter pecan"); }}; This idiom creates an anonymous inner class with just an instance initializer in it, which "can use any [...] methods in the containing scope". Main question: Is this as inefficient as it sounds? Should its use be limited to one-off initializations? (And of course showing off!) Second question: The new HashSet must be the "this" used in the instance initializer ... can anyone shed light on the mechanism? Third question: Is this idiom too obscure to use in production code? Summary: Very, very nice answers, thanks everyone. On question (3), people felt the syntax should be clear (though I'd recommend an occasional comment, especially if your code will pass on to developers who may not be familiar with it). On question (1), the generated code should run quickly. The extra .class files do cause jar file clutter, and slow program startup slightly (thanks to coobird for measuring that). Thilo pointed out that garbage collection can be affected, and the memory cost for the extra loaded classes may be a factor in some cases. Question (2) turned out to be most interesting to me. If I understand the answers, what's happening in DBI is that the anonymous inner class extends the class of the object being constructed by the new operator, and hence has a "this" value referencing the instance being constructed. Very neat. Overall, DBI strikes me as something of an intellectual curiosity. Coobird and others point out you can achieve the same effect with Arrays.asList, varargs methods, Google Collections, and the proposed Java 7 collection literals. Newer JVM languages like Scala, JRuby, and Groovy also offer concise notations for list construction and interoperate well with Java. Given that DBI clutters up the classpath, slows down class loading a bit, and makes the code a tad more obscure, I'd probably shy away from it. However, I plan to spring this on a friend who's just gotten his SCJP and loves good-natured jousts about Java semantics! ;-) Thanks everyone!

    Read the article

  • Accurately display upload progress in Silverlight upload

    - by Matt
    I'm trying to debug a file upload / download issue I'm having. I've got a Silverlight file uploader, and to transmit the files I make use of the HttpWebRequest class. So I create a connection to my file upload handler on the server and begin transmitting. While a file uploads I keep track of total bytes written to the RequestStream so I can figure out a percentage. Now, working at home I've got a rather slow connection, and I think Silverlight, or the browser, is lying to me. It seems that my upload progress logic is inaccurate. When I do multiple file uploads (24 images of 3-6 MB each in my testing), the logic reports that the files have finished uploading, but I believe that it only reflects the bytes written to the RequestStream, not the actual number of bytes uploaded. What is the most accurate way to measure upload progress? Here's the logic I'm using: public void Upload() { if( _TargetFile != null ) { Status = FileUploadStatus.Uploading; Abort = false; long diff = _TargetFile.Length - BytesUploaded; UriBuilder ub = new UriBuilder( App.siteUrl + "upload.ashx" ); bool complete = diff <= ChunkSize; ub.Query = string.Format( "{3}name={0}&StartByte={1}&Complete={2}", fileName, BytesUploaded, complete, string.IsNullOrEmpty( ub.Query ) ? "" : ub.Query.Remove( 0, 1 ) + "&" ); HttpWebRequest webrequest = ( HttpWebRequest ) WebRequest.Create( ub.Uri ); webrequest.Method = "POST"; webrequest.BeginGetRequestStream( WriteCallback, webrequest ); } } private void WriteCallback( IAsyncResult asynchronousResult ) { HttpWebRequest webrequest = ( HttpWebRequest ) asynchronousResult.AsyncState; // End the operation. Stream requestStream = webrequest.EndGetRequestStream( asynchronousResult ); byte[] buffer = new Byte[ 4096 ]; int bytesRead = 0; int tempTotal = 0; Stream fileStream = _TargetFile.OpenRead(); fileStream.Position = BytesUploaded; while( ( bytesRead = fileStream.Read( buffer, 0, buffer.Length ) ) != 0 && tempTotal + bytesRead < ChunkSize && !Abort ) { requestStream.Write( buffer, 0, bytesRead ); requestStream.Flush(); BytesUploaded += bytesRead; tempTotal += bytesRead; int percent = ( int ) ( ( BytesUploaded / ( double ) _TargetFile.Length ) * 100 ); UploadPercent = percent; if( UploadProgressChanged != null ) { UploadProgressChangedEventArgs args = new UploadProgressChangedEventArgs( percent, bytesRead, BytesUploaded, _TargetFile.Length, _TargetFile.Name ); SmartDispatcher.BeginInvoke( () => UploadProgressChanged( this, args ) ); } } // only close the stream if it came from the file, don't close resizestream so we don't have to resize it over again. fileStream.Close(); requestStream.Close(); webrequest.BeginGetResponse( ReadCallback, webrequest ); }

    Read the article

  • How to optimize Core Data query for full text search

    - by dk
    Can I optimize a Core Data query when searching for matching words in a text? (This question also pertains to the wisdom of custom SQL versus Core Data on an iPhone.) I'm working on a new (iPhone) app that is a handheld reference tool for a scientific database. The main interface is a standard searchable table view and I want as-you-type response as the user types new words. Word matches must be prefixes of words in the text. The text is composed of 100,000s of words. In my prototype I coded SQL directly. I created a separate "words" table containing every word in the text fields of the main entity. I indexed words and performed searches along the lines of SELECT id, * FROM textTable JOIN (SELECT DISTINCT textTableId FROM words WHERE word BETWEEN 'foo' AND 'fooz' ) ON id=textTableId LIMIT 50 This runs very fast. Using an IN would probably work just as well, i.e. SELECT * FROM textTable WHERE id IN (SELECT textTableId FROM words WHERE word BETWEEN 'foo' AND 'fooz' ) LIMIT 50 The LIMIT is crucial and allows me to display results quickly. I notify the user that there are too many to display if the limit is reached. This is kludgy. I've spent the last several days pondering the advantages of moving to Core Data, but I worry about the lack of control in the schema, indexing, and querying for an important query. Theoretically an NSPredicate of textField MATCHES '.*\bfoo.*' would just work, but I'm sure it will be slow. This sort of text search seems so common that I wonder what the usual attack is. Would you create a words entity as I did above and use a predicate of "word BEGINSWITH 'foo'"? Will that work as fast as my prototype? Will Core Data automatically create the right indexes? I can't find any explicit means of advising the persistent store about indexes. I see some nice advantages of Core Data in my iPhone app. The faulting and other memory considerations allow for efficient database retrievals for tableview queries without setting arbitrary limits. The object graph management allows me to easily traverse entities without writing lots of SQL. Migration features will be nice in the future. On the other hand, in a limited resource environment (iPhone) I worry that an automatically generated database will be bloated with metadata, unnecessary inverse relationships, inefficient attribute datatypes, etc. Should I dive in or proceed with caution?

    Read the article

  • Optimizing multiple dispatch notification algorithm in C#?

    - by Robert Fraser
    Sorry about the title, I couldn't think of a better way to describe the problem. Basically, I'm trying to implement a collision system in a game. I want to be able to register a "collision handler" that handles any collision of two objects (given in either order) that can be cast to particular types. So if Player : Ship : Entity and Laser : Particle : Entity, and handlers for (Ship, Particle) and (Laser, Entity) are registered, then for a collision of (Laser, Player) both handlers should be notified, with the arguments in the correct order, and a collision of (Laser, Laser) should notify only the second handler. A code snippet says a thousand words, so here's what I'm doing right now (naive method): public IObservable<Collision<T1, T2>> onCollisionsOf<T1, T2>() where T1 : Entity where T2 : Entity { Type t1 = typeof(T1); Type t2 = typeof(T2); Subject<Collision<T1, T2>> obs = new Subject<Collision<T1, T2>>(); _onCollisionInternal += delegate(Entity obj1, Entity obj2) { if (t1.IsAssignableFrom(obj1.GetType()) && t2.IsAssignableFrom(obj2.GetType())) obs.OnNext(new Collision<T1, T2>((T1) obj1, (T2) obj2)); else if (t1.IsAssignableFrom(obj2.GetType()) && t2.IsAssignableFrom(obj1.GetType())) obs.OnNext(new Collision<T1, T2>((T1) obj2, (T2) obj1)); }; return obs; } However, this method is quite slow (measurable; I lost ~2 FPS after implementing this), so I'm looking for a way to shave a few cycles/allocations off this. I thought about (as in, spent an hour implementing then slammed my head against a wall for being such an idiot) a method that put the types in an order based on their hash code, then put them into a dictionary, with each entry being a linked list of handlers for pairs of that type, with a boolean indicating whether the handler wanted the order of arguments reversed. Unfortunately, this doesn't work for derived types, since if a derived type is passed in, it won't notify a subscriber for the base type. Can anyone think of a way better than checking every type pair (twice) to see if it matches? Thanks, Robert

    Read the article

  • Updating Checked Checkboxes using CodeIgniter + MySQL

    - by Tim
    Hello, I have about 8 checkboxes that are being generated dynamically from my database. This is the code in my controller: //Start Get Processes Query $this->db->select('*'); $this->db->from('projects_processes'); $this->db->where('process_enabled', '1'); $data['getprocesses'] = $this->db->get(); //End Get Processes Query //Start Get Checked Processes Query $this->db->select('*'); $this->db->from('projects_processes_reg'); $this->db->where('project_id', $project_id); $data['getchecked'] = $this->db->get(); //End Get Processes Query This is the code in my view: <?php if($getprocesses->result_array()) { ?> <?php foreach($getprocesses->result_array() as $getprocessrow): ?> <tr> <td><input <?php if($getchecked->result_array()) { foreach($getchecked->result_array() as $getcheckedrow): if($getprocessrow['process_id'] == $getcheckedrow['process_id']) { echo 'checked'; } endforeach; }?> type="checkbox" name="progresscheck[]" value="<?php echo $getprocessrow['process_id']; ?>"><?php echo $getprocessrow['process_name']; ?><br> </td> </tr> <?php endforeach; ?> This generates the checkboxes in the form and also checks the appropriate ones as specified by the database. The problem is updating them. What I have been doing so far is simply deleting all checkbox entries for the project and then re-inserting all the values into the database. This is bad because 1. it's slow and horrible, and 2. I lose all my metadata about when the checkboxes were checked. So I guess my question is, how do I update only the checkboxes that have been changed? Thanks, Tim
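
    One way to update only what changed, as a sketch with hypothetical names, is to snapshot the checked set when the page loads and diff it at save time, so untouched rows (and their metadata) are never rewritten:

        var initialChecked = {};

        $(function () {
            // Snapshot which process ids start out checked.
            $('input[name="progresscheck[]"]').each(function () {
                initialChecked[this.value] = this.checked;
            });
        });

        // At save time, send only the differences to the server.
        function collectChanges() {
            var added = [], removed = [];
            $('input[name="progresscheck[]"]').each(function () {
                if (this.checked && !initialChecked[this.value]) added.push(this.value);
                if (!this.checked && initialChecked[this.value]) removed.push(this.value);
            });
            return { added: added, removed: removed };
        }

    The controller then INSERTs the added ids and DELETEs the removed ones for that project_id, leaving every untouched row, and its timestamps, alone.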

    Read the article

  • ASP.NET hosting equivalent of Dreamhost (pricing, features and support)

    - by Cherian
    Disclaimer: I have browsed http://stackoverflow.com/questions/tagged/asp.net+hosting and didn't find anything quite similar in value to Dreamhost. One of the biggest impediments, IMHO, to developing web applications on ASP.NET is the cost of deployment. I am not talking about building sites like Stackoverflow.com or plentyoffish.com. This is about sites that are bigger than brochureware and smaller than ones that require dedicated servers. Let me give you an example. xmec.org is an ASP.NET site I maintain for my college alumni. On average it's slated to hit around 1000-1100 views per day. At present it's hosted on GoDaddy. The service is so damn pathetic; I am using it only because of the lack of options. The site doesn't scale (no, it's not the code) and the web control panels are extremely slow. The money I pay doesn't justify the service or the performance. Every deployment push is a visit to the infuriating web control panel to set the permissions and the root directories. Had I developed it in Python, this would have been deployed on Dreamhost.com with $10/year hosting fees (they have offers running all throughout): 50 GB space, 5 MySQL databases, shell / FTP users, POP / SMTP access, unlimited domains hosting, unlimited subdomains hosting, unlimited domains forwarded/mirrored, custom DNS. (These are the only ones I could think of. More at the feature page.) With a Dreamhost shell, I even have an svn checked-out version of WordPress for my blog. Now, that's control! To my question: is there any ASP.NET (preferably .NET 3.5; Dreamhost keeps updating versions every fortnight) hosting company providing remotely similar feature sets and pricing to Dreamhost? My requirements are: less than $15-25/year; typical WISP minus PHP; .NET 3.5 SP1; Full Trust mode (I can live with medium trust, if not for the IL-emitting libraries); isolated application pool; 5-10 MySQL DBs; unlimited domain hosting; MS SQL 2005 or 2008; FTP support; at least 5 GB space; SMTP; IIS 7 log file accessibility; moderately good control panel; scripting, shell support; nominal bandwidth. Another case in point: recently I've been contemplating building a tool-website to find duplicates and weird characters in my Google contacts and fix them. With ASP.NET, the best part is that I can do this with LINQ to XML in less than 100 lines of code. What's bad is the hosting part. I don't think I stand to make any money out of this and therefore can't afford to host it on GoGrid or DiscountAsp.net. GoDaddy is not an option either. If I do this in Python, I can push this to my existing $10 Dreamhost account with another domain pointed. No extra cost. Svn exported with scripts (capability) to change the connection string! Looking at the problem holistically, I think I represent a large breed of programmers playing it cheap and experimenting with different things on a regular basis, one of which will become the next Twitter/Digg.

    Read the article

  • performance problem looping through table rows

    - by Sridhar
    Hi, I am using jQuery to loop through table rows and save the data. If the table has 200 rows it performs slowly. I am getting the JavaScript message "Stop Running this script" in IE when I call this method. Following is the code I am using to loop through the table rows. Can you please let me know if there is a better way to do this? function SaveData() { var $table = $('#' + gridid); var rows = $table.find('tbody > tr').get(); var transactions = []; var $row, empno, newTransaction, $rowChildren; $.each(rows, function(index, row) { $row = $(row); $rowChildren = $row.children("td"); if ($rowChildren.find("input[id*=hRV]").val() === '1') { empno = $rowChildren.find("input[id*=tEmpno]").val(); newTransaction = new Array(); newTransaction[0] = company; newTransaction[1] = $rowChildren.find("input[id*=tEmpno]").val(); newTransaction[2] = $rowChildren.find("input[id*=tPC]").val(); newTransaction[3] = $rowChildren.find("input[id*=hQty]").val(); newTransaction[4] = $rowChildren.find("input[id*=hPR]").val(); newTransaction[5] = $rowChildren.find("input[id*=tJC]").val(); newTransaction[6] = $rowChildren.find("input[id*=tL1]").val(); newTransaction[7] = $rowChildren.find("input[id*=tL2]").val(); newTransaction[8] = $rowChildren.find("input[id*=tL3]").val(); newTransaction[9] = $rowChildren.find("input[id*=tL4]").val(); newTransaction[10] = $rowChildren.find("input[id*=tL5]").val(); newTransaction[11] = $rowChildren.find("input[id*=tL6]").val(); newTransaction[12] = $rowChildren.find("input[id*=tL7]").val(); newTransaction[13] = $rowChildren.find("input[id*=tL8]").val(); newTransaction[14] = $rowChildren.find("input[id*=tL9]").val(); newTransaction[15] = $rowChildren.find("input[id*=tL10]").val(); newTransaction[16] = $rowChildren.find("input[id*=tSF]").val(); newTransaction[17] = $rowChildren.find("input[id*=tCG]").val(); newTransaction[18] = $rowChildren.find("input[id*=tTF]").val(); newTransaction[19] = $rowChildren.find("input[id*=tWK]").val(); newTransaction[20] = $rowChildren.find("input[id*=tAI]").val(); newTransaction[21] = $rowChildren.find("input[id*=tWC]").val(); newTransaction[22] = $rowChildren.find("input[id*=tPI]").val(); newTransaction[23] = "E"; var record = newTransaction.join(';'); transactions.push(record); } }); if (transactions.length > 0) { var strTransactions = transactions.join('|'); //send data to server //here ajax function is called to save data. } }
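
    Two changes usually help here, sketched below against the same markup assumptions as the code above (it reuses the page's gridid and company variables): read each row's inputs in a single pass instead of issuing twenty-odd find() calls per row, and process the rows in chunks via setTimeout so IE's long-running-script watchdog never trips:

        // Longer keys first, since indexOf matches substrings (tL10 before tL1).
        var FIELD_KEYS = ['hRV', 'tEmpno', 'tPC', 'hQty', 'hPR', 'tJC',
                          'tL10', 'tL1', 'tL2', 'tL3', 'tL4', 'tL5' /* ...the rest... */];

        function readRow(row) {
            // One DOM pass per row: bucket each input by the key its id contains.
            var values = {};
            $(row).find('td input').each(function () {
                for (var k = 0; k < FIELD_KEYS.length; k++) {
                    if (this.id.indexOf(FIELD_KEYS[k]) !== -1) {
                        values[FIELD_KEYS[k]] = this.value;
                        break;
                    }
                }
            });
            return values;
        }

        function saveDataChunked() {
            var rows = $('#' + gridid).find('tbody > tr').get();
            var transactions = [], i = 0;
            (function chunk() {
                for (var stop = Math.min(i + 50, rows.length); i < stop; i++) {
                    var v = readRow(rows[i]);
                    if (v.hRV === '1') {
                        transactions.push([company, v.tEmpno, v.tPC /* ... */, 'E'].join(';'));
                    }
                }
                if (i < rows.length) setTimeout(chunk, 0);   // yield so IE never complains
                else if (transactions.length) { /* ajax post of transactions.join('|') here */ }
            })();
        }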

    Read the article

  • C++ vs Matlab vs Python as a main language for Computer Vision Research

    - by Hough
    Hi all, Firstly, sorry for a somewhat long question, but I think that many people are in the same situation as me and hopefully they can also gain some benefit from this. I'll be starting my PhD very soon, which involves the fields of computer vision, pattern recognition and machine learning. Currently, I'm using the opencv (2.1) C++ interface and I especially like its powerful Mat class and the overloaded operations available for matrix and image operations and seamless transformations. I've also tried (and implemented many small vision projects) using the opencv Python interface (new bindings; opencv 2.1) and I really enjoy Python's ability to integrate opencv, numpy, scipy and matplotlib. But recently, I went back to the opencv C++ interface because I felt that the official Python new bindings were not stable enough and no overloaded operations are available for matrices and images, not to mention the lack of machine learning modules and slow speeds in certain operations. I've also used Matlab extensively in the past, and although I've used mex files and other means to speed up the program, I just felt that Matlab's performance was inadequate for real-time vision tasks, be it for fast prototyping or not. When the project becomes larger and larger, many tasks have to be re-written in C and compiled into mex files increasingly, and Matlab becomes nothing more than a glue language. Here come the sub-questions: For carrying out research in these fields (machine learning, vision, pattern recognition), what is your main or ideal programming language for rapid prototyping of ideas and testing algorithms contained in papers? For computer vision research work, can you list the pros and cons of using the following languages? C++ (with opencv + gsl + svmlib + other libraries) vs Matlab (with all its toolboxes) vs Python (with the incomplete opencv bindings + numpy + scipy + matplotlib). Are there computer vision PhD/postgrad students here who are using only C++ (with all its available libraries including opencv) without even needing to resort to Matlab or Python? In other words, given the currently existing computer vision or machine learning libraries, is C++ alone sufficient for fast prototyping of ideas? If you're currently using Java or C# for your research, can you list the reasons why they should be used and how they compare to other languages in terms of available libraries? What is the de facto vision/machine learning programming language and its associated libraries used in your research group? Thanks in advance. Edit: As suggested, I've opened the question to both academic and non-academic computer vision/machine learning/pattern recognition researchers and groups.

    Read the article

  • UIImagePickerController Memory Leak

    - by Watson
    I am seeing a huge memory leak when using UIImagePickerController in my iPhone app. I am using standard code from the Apple documentation to implement the control: UIImagePickerController* imagePickerController = [[UIImagePickerController alloc] init]; imagePickerController.delegate = self; if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) { switch (buttonIndex) { case 0: imagePickerController.sourceType = UIImagePickerControllerSourceTypeCamera; [self presentModalViewController:imagePickerController animated:YES]; break; case 1: imagePickerController.sourceType = UIImagePickerControllerSourceTypePhotoLibrary; [self presentModalViewController:imagePickerController animated:YES]; break; default: break; } } And for the cancel: -(void) imagePickerControllerDidCancel:(UIImagePickerController *)picker { [[picker parentViewController] dismissModalViewControllerAnimated: YES]; [picker release]; } The didFinishPickingMediaWithInfo callback is just as standard, although I do not even have to pick anything to cause the leak. Here is what I see in Instruments when all I do is open the UIImagePickerController, pick photo library, and press cancel, repeatedly. As you can see, the memory keeps growing, and eventually this causes my iPhone app to slow down tremendously. As you can see, I opened the image picker 24 times, and each time it malloc'd 128 KB which was never released. Basically 3 MB out of my total 6 MB is never released. This memory stays leaked no matter what I do. Even after navigating away from the current controller, it remains the same. I have also implemented the picker control as a singleton with the same results. Here is what I see when I drill down into those two lines: Any help here would be greatly appreciated! Again, I do not even have to choose an image. All I do is present the controller and press cancel. Update 1 I downloaded and ran Apple's example of using the UIImagePickerController and I see the same leak happening there when running Instruments (both in the simulator and on the phone). http://developer.apple.com/library/ios/#samplecode/PhotoPicker/Introduction/Intro.html%23//apple_ref/doc/uid/DTS40010196 All you have to do is hit the photo library button and hit cancel over and over; you'll see the memory keep growing. Any ideas? Update 2 I only see this problem when viewing the photo library. I can choose take photo, and open and close that one over and over, without a leak.

    Read the article

  • Faster Matrix Multiplication in C#

    - by Kyle Lahnakoski
    I have a small C# project that involves matrices. I am processing large amounts of data by splitting it into n-length chunks, treating the chunks as vectors, and multiplying by a Vandermonde** matrix. The problem is, depending on the conditions, the size of the chunks and the corresponding Vandermonde** matrix can vary. I have a general solution which is easy to read, but way too slow: public byte[] addBlockRedundancy(byte[] data) { if (data.Length!=numGood) D.error("Expecting data to be just "+numGood+" bytes long"); aMatrix d=aMatrix.newColumnMatrix(this.mod, data); var r=vandermonde.multiplyBy(d); return r.ToByteArray(); }//method This can process about 1/4 megabyte per second on my i5 U470 @ 1.33GHz. I can make this faster by manually inlining the matrix multiplication: int o=0; int d=0; for (d=0; d<data.Length-numGood; d+=numGood) { for (int r=0; r<numGood+numRedundant; r++) { Byte value=0; for (int c=0; c<numGood; c++) { value=mod.Add(value, mod.Multiply(vandermonde.get(r, c), data[d+c])); }//for output[r][o]=value; }//for o++; }//for This can process about 1 meg a second. (Please note the "mod" is performing operations over GF(2^8) modulo my favorite irreducible polynomial.) I know this can get a lot faster: after all, the Vandermonde** matrix is mostly zeros. I should be able to make a routine, or find a routine, that can take my matrix and return an optimized method which will effectively multiply vectors by the given matrix, but faster. Then, when I give this routine a 5x5 Vandermonde matrix (the identity matrix), there is simply no arithmetic to perform, and the original data is just copied. ** Please note: When I use the term "Vandermonde", I actually mean an identity matrix with some number of rows from the Vandermonde matrix appended (see comments). This matrix is wonderful because of all the zeros, and because if you remove enough rows (of your choosing) to make it square, it is an invertible matrix. And, of course, I would like to use this same routine to convert any one of those inverted matrices into an optimized series of instructions. How can I make this matrix multiplication faster? Thanks! (edited to correct my mistake with the Vandermonde matrix)

    Read the article

  • Non-linear regression models in PostgreSQL using R

    - by Dave Jarvis
    Background I have climate data (temperature, precipitation, snow depth) for all of Canada between 1900 and 2009. I have written a basic website and the simplest page allows users to choose category and city. They then get back a very simple report (without the parameters and calculations section): The primary purpose of the web application is to provide a simple user interface so that the general public can explore the data in meaningful ways. (A list of numbers is not meaningful to the general public, nor is a website that provides too many inputs.) The secondary purpose of the application is to provide climatologists and other scientists with deeper ways to view the data. (Using too many inputs, of course.) Tool Set The database is PostgreSQL with R (mostly) installed. The reports are written using iReport and generated using JasperReports. Poor Model Choice Currently, a linear regression model is applied against annual averages of daily data. The linear regression model is calculated within a PostgreSQL function as follows: SELECT regr_slope( amount, year_taken ), regr_intercept( amount, year_taken ), corr( amount, year_taken ) FROM temp_regression INTO STRICT slope, intercept, correlation; The results are returned to JasperReports using: SELECT year_taken, amount, year_taken * slope + intercept, slope, intercept, correlation, total_measurements INTO result; JasperReports calls into PostgreSQL using the following parameterized analysis function: SELECT year_taken, amount, measurements, regression_line, slope, intercept, correlation, total_measurements, execute_time FROM climate.analysis( $P{CityId}, $P{Elevation1}, $P{Elevation2}, $P{Radius}, $P{CategoryId}, $P{Year1}, $P{Year2} ) ORDER BY year_taken This is not an optimal solution because it gives the false impression that the climate is changing at a slow but steady rate. Questions Using functions that take two parameters (e.g., year [X] and amount [Y]), such as PostgreSQL's regr_slope: What is a better regression model to apply? What CRAN packages (for R) provide such models? (Installable, ideally, using apt-get.) How can the R functions be called within a PostgreSQL function? If no such functions exist: What parameters should I try to obtain for functions that will produce the desired fit? How would you recommend showing the best-fit curve? Keep in mind that this is a web app for use by the general public. If the only way to analyse the data is from an R shell, then the purpose has been defeated. (I know this is not the case for most R functions I have looked at so far.) Thank you!

    Read the article

  • How to bind DataTable to Chart series?

    - by user175908
    Hello, how do I bind data from a DataTable to a Chart series? I get a null reference exception. I tried binding with square brackets and that did not work either. So, how do I do the binding? Thanks. P.S: I included the DataGrid XAML and CS, which works just fine. Converting the data to List<KeyValuePair<string,int>> works, but it is kinda slow and adds unnecessary clutter to the code. I use WPFToolkit (the latest version). XAML: <Window x:Class="BindingzTest.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="MainWindow" Height="606" Width="988" xmlns:charting="clr-namespace:System.Windows.Controls.DataVisualization.Charting;assembly=System.Windows.Controls.DataVisualization.Toolkit"> <Grid Name="LayoutRoot"> <charting:Chart Title="Letters and Numbers" VerticalAlignment="Top" Height="400"> <charting:Chart.Series> <charting:ColumnSeries Name="myChartSeries" IndependentValueBinding="{Binding Letter}" DependentValueBinding="{Binding Number}" ItemsSource="{Binding}" /> </charting:Chart.Series> </charting:Chart> <DataGrid Name="myDataGrid" VerticalAlignment="Stretch" Margin="0,400,0,50" ItemsSource="{Binding}" AutoGenerateColumns="False"> <DataGrid.Columns> <DataGridTextColumn Header="Letter" Binding="{Binding Letter}"/> <DataGridTextColumn Header="Number" Binding="{Binding Number}"/> </DataGrid.Columns> </DataGrid> <Button Content="Generate" HorizontalAlignment="Left" Name="generateButton" Width="128" Click="GenerateButtonClicked" Height="52" VerticalAlignment="Bottom" /> </Grid> CS: public partial class MainWindow : Window { public MainWindow() { InitializeComponent(); } DataTable GenerateMyTable() { var myTable = new DataTable("MyTable"); myTable.Columns.Add("Letter"); myTable.Columns.Add("Number"); myTable.Rows.Add("A", 500); myTable.Rows.Add("B", 400); myTable.Rows.Add("C", 500); myTable.Rows.Add("D", 600); myTable.Rows.Add("E", 300); myTable.Rows.Add("F", 200); return myTable; } private void GenerateButtonClicked(object sender, RoutedEventArgs e) { var myGeneratedTable = GenerateMyTable(); myDataGrid.DataContext = myGeneratedTable; myChartSeries.DataContext = myGeneratedTable; // Calling this throws "Object reference not set to an instance of an object" exception } }

    Read the article

  • Cases of companies taking IP rights of your own personal projects developed outside company time

    - by GSS
    Hi, I have heard of cases where a developer working for a company is also making his own personal projects in his own time, using his own equipment, yet the company he works for tries to claim ownership of the project. I really find this annoying, and bang out of order. It should also be illegal. I am in this position (working for a company and working on my own systems - from small class libraries used to practise what I learn in my exam revision to a large commercial-scale system). While I don't know if the company will try to take ownership, all I know is they say they do not want a conflict of interest. Fair enough; my system is developed in my own time using my own equipment. They also say that work time should be for work only, which it is. Funny thing is that, as work is so boring, easy and slow, I have plenty of free time, which I wish I could spend on something productive - said system. The problem is, my company does not take hiring technical talent seriously. This is my first job, I am a junior coder (but my status/position doesn't really reflect what I can do), but I am the only developer. Likewise with the guy who controls Windows Server. As the contract does not say anything about taking ownership, I would assume they would. They would try to milk my success (I've made a good impression, so I am sure they would). How can this be allowed? Are there any examples of this happening to any fellow Stackers here? It really makes my blood boil. What I find funny is that my company hardly has the expertise and resources to even be able to successfully run a project of my size. What I do at work is an ASP.NET application consisting of five pages, and even then there are flaws in the project. If I told them that they would also have to take responsibility for flaws in the project, then they would think twice! It's exactly because of this that I save the best code for myself; at work I write rubbish code full of code smells. The company don't really care about error handling, as long as the business functionality works (i.e. a scheduled email sends, but there is no error handling). They'd think twice when they see the embarrassment and business cost of a YSOD...

    Read the article

  • Populating fields in modal form using PHP, jQuery

    - by Benjamin
    I have a form that adds links to a database, deletes them, and -- soon -- allows the user to edit details. I am using jQuery and Ajax heavily on this project and would like to keep all control in the same page. In the past, to handle editing something like details about another website (link entry), I would have sent the user to another PHP page with form fields populated with PHP from a MySQL database table. How do I accomplish this using a jQuery UI modal form and calling the details individually for that particular entry? Here is what I have so far- <?php while ($linkDetails = mysql_fetch_assoc($getLinks)) {?> <div class="linkBox ui-corner-all" id="linkID<?php echo $linkDetails['id'];?>"> <div class="linkHeader"><?php echo $linkDetails['title'];?></div> <div class="linkDescription"><p><?php echo $linkDetails['description'];?></p> <p><strong>Link:</strong><br/> <span class="link"><a href="<?php echo $linkDetails['url'];?>" target="_blank"><?php echo $linkDetails['url'];?></a></span></p></div> <p align="right"> <span class="control"> <span class="delete addButton ui-state-default">Delete</span> <span class="edit addButton ui-state-default">Edit</span> </span> </p> </div> <?php }?> And here is the jQuery that I am using to delete entries- $(".delete").click(function() { var parent = $(this).closest('div'); var id = parent.attr('id'); $("#delete-confirm").dialog({ resizable: false, modal: true, title: 'Delete Link?', buttons: { 'Delete': function() { var dataString = 'id='+ id ; $.ajax({ type: "POST", url: "../includes/forms/delete_link.php", data: dataString, cache: false, success: function() { parent.fadeOut('slow'); $("#delete-confirm").dialog('close'); } }); }, Cancel: function() { $(this).dialog('close'); } } }); return false; }); Everything is working just fine, just need to find a solution to edit. Thanks!

    Read the article

  • Adding unique objects to Core Data

    - by absolut
    I'm working on an iPhone app that gets a number of objects from a database. I'd like to store these using Core Data, but I'm having problems with my relationships. A Detail contains any number of POIs (points of interest). When I fetch a set of POIs from the server, they contain a detail ID. In order to associate the POI with the Detail (by ID), my process is as follows: query the ManagedObjectContext for the detailID; if that detail exists, add the POI to it; if it doesn't, create the detail (it has other properties that will be populated lazily). The problem with this is performance. Performing constant queries against Core Data is slow, to the point where adding a list of 150 POIs takes a minute thanks to the multiple relationships involved. In my old model, before Core Data (various NSDictionary cache objects), this process was super fast (look up a key in a dictionary, then create it if it doesn't exist). I have more relationships than just this one, but pretty much every one has to do this check (some are many-to-many, and they have a real problem). Does anyone have any suggestions for how I can improve this? I could perform fewer queries (by searching for a number of different IDs), but I'm not sure how much this will help. Some code: POI *poi = [NSEntityDescription insertNewObjectForEntityForName:@"POI" inManagedObjectContext:[(AppDelegate*)[UIApplication sharedApplication].delegate managedObjectContext]]; poi.POIid = [attributeDict objectForKey:kAttributeID]; poi.detailId = [attributeDict objectForKey:kAttributeDetailID]; Detail *detail = [self findDetailForID:poi.POIid]; if(detail == nil) { detail = [NSEntityDescription insertNewObjectForEntityForName:@"Detail" inManagedObjectContext:[(AppDelegate*)[UIApplication sharedApplication].delegate managedObjectContext]]; detail.title = poi.POIid; detail.subtitle = @""; detail.detailType = [attributeDict objectForKey:kAttributeType]; } -(Detail*)findDetailForID:(NSString*)detailID { NSManagedObjectContext *moc = [[UIApplication sharedApplication].delegate managedObjectContext]; NSEntityDescription *entityDescription = [NSEntityDescription entityForName:@"Detail" inManagedObjectContext:moc]; NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease]; [request setEntity:entityDescription]; NSPredicate *predicate = [NSPredicate predicateWithFormat: @"detailid == %@", detailID]; [request setPredicate:predicate]; NSLog(@"%@", [predicate description]); NSError *error; NSArray *array = [moc executeFetchRequest:request error:&error]; if (array == nil || [array count] != 1) { // Deal with error... return nil; } return [array objectAtIndex:0]; }

    Read the article

  • Floating point vs integer calculations on modern hardware

    - by maxpenguin
    I am doing some performance critical work in C++, and we are currently using integer calculations for problems that are inherently floating point because "it's faster". This causes a whole lot of annoying problems and adds a lot of annoying code. Now, I remember reading about how floating point calculations were so slow back circa the 386 days, when I believe (IIRC) there was an optional co-processor. But surely nowadays, with exponentially more complex and powerful CPUs, it makes no difference in "speed" whether you do floating point or integer calculation? Especially since the actual calculation time is tiny compared to something like causing a pipeline stall or fetching something from main memory? I know the correct answer is to benchmark on the target hardware; what would be a good way to test this? I wrote two tiny C++ programs and compared their run time with "time" on Linux, but the actual run time is too variable (it doesn't help that I am running on a virtual server). Short of spending my entire day running hundreds of benchmarks, making graphs, etc., is there something I can do to get a reasonable test of the relative speed? Any ideas or thoughts? Am I completely wrong? The programs I used are as follows; they are not identical by any means: #include <iostream> #include <cmath> #include <cstdlib> #include <time.h> int main( int argc, char** argv ) { int accum = 0; srand( time( NULL ) ); for( unsigned int i = 0; i < 100000000; ++i ) { accum += rand( ) % 365; } std::cout << accum << std::endl; return 0; } Program 2: #include <iostream> #include <cmath> #include <cstdlib> #include <time.h> int main( int argc, char** argv ) { float accum = 0; srand( time( NULL ) ); for( unsigned int i = 0; i < 100000000; ++i ) { accum += (float)( rand( ) % 365 ); } std::cout << accum << std::endl; return 0; } Thanks in advance!

    Read the article

  • Comparing two large sets of attributes

    - by andyashton
    Suppose you have a Django view that has two functions: The first function renders some XML using a XSLT stylesheet and produces a div with 1000 subelements like this: <div id="myText"> <p id="p1"><a class="note-p1" href="#" style="display:none" target="bot">?</a></strong>Lorem ipsum</p> <p id="p2"><a class="note-p2" href="#" style="display:none" target="bot">?</a></strong>Foo bar</p> <p id="p3"><a class="note-p3" href="#" style="display:none" target="bot">?</a></strong>Chocolate peanut butter</p> (etc for 1000 lines) <p id="p1000"><a class="note-p1000" href="#" style="display:none" target="bot">?</a></strong>Go Yankees!</p> </div> The second function renders another XML document using another stylesheet to produce a div like this: <div id="myNotes"> <p id="n1"><cite class="note-p1"><sup>1</sup><span>Trololo</span></cite></p> <p id="n2"><cite class="note-p1"><sup>2</sup><span>Trololo</span></cite></p> <p id="n3"><cite class="note-p2"><sup>3</sup><span>lololo</span></cite></p> (etc for n lines) <p id="n"><cite class="note-p885"><sup>n</sup><span>lololo</span></cite></p> </div> I need to see which elements in #myText have classes that match elements in #myNotes, and display them. I can do this using the following jQuery: $('#myText').find('a').each(function() { var $anchor = $(this); $('#myNotes').find('cite').each(function() { if($(this).attr('class') == $anchor.attr('class')) { $anchor.show(); }); }); However this is incredibly slow and inefficient for a large number of comparisons. What is the fastest/most efficient way to do this - is there a jQuery/js method that is reasonable for a large number of items? Or do I need to reengineer the Django code to do the work before passing it to the template?

    Read the article
