Search Results

Search found 16731 results on 670 pages for 'memory limit'.


  • Seeking a faster $(':data(key)')

    - by PoltoS
    I'm writing an extension to jQuery that adds data to DOM elements using el.data('lalala', my_data); and then uses that data to update elements dynamically. Each time I get new data from the server I need to update all elements having el.data('lalala') != null; To get all needed elements I use an extension by James Padolsey: $(':data(lalala)').each(...); Everything was great until I came to the situation where I need to run that code 50 times - it is very slow! It takes about 8 seconds to execute on my page with 3640 DOM elements: var x, t = (new Date).getTime(); for (n=0; n < 50; n++) { jQuery(':data(lalala)').each(function() { x++; }); }; console.log(((new Date).getTime()-t)/1000); Since I don't need a RegExp as the parameter of the :data selector, I've tried to replace this with var x, t = (new Date).getTime(); for (n=0; n < 50; n++) { jQuery('*').each(function() { if ($(this).data('lalala')) x++; }); }; console.log(((new Date).getTime()-t)/1000); This code is faster (5 sec), but I want to get more. Q Is there any faster way to get all elements with this data key? In fact, I can keep an array with all the elements I need, since I execute .data('key') in my module. Checking 100 elements having the desired .data('lalala') is better than checking 3640 :) So the solution would be something like for (i in elements) { el = elements[i]; .... But sometimes elements are removed from the page (using jQuery .remove()). Both solutions described above [the $(':data(lalala)') solution and if ($(this).data('lalala'))] will skip removed items (as I need), while the solution with the array will still point to removed elements (in fact, the element would not really be deleted - it will only be removed from the DOM tree - because my array will still hold a reference). I found that .remove() also removes data from the node, so my solution changes into var toRemove = []; for (var i in elements) { var el = elements[i]; if ($(el).data('lalala')) .... else toRemove.push(i); }; for (var ii in toRemove) elements.splice(toRemove[ii], 1); // remove element from array This solution is 100 times faster! Q Will the garbage collector release the memory taken by DOM elements when they are deleted from that array? Remember, the elements were referenced by the DOM tree, we made a new reference in our array, then removed them with .remove() and then removed them from the array. Is there a better way to do this?
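    A minimal sketch of the array-registry idea from the question, with the pruning folded into the update pass. It assumes jQuery and the 'lalala' key from the question; $.contains() is used to spot nodes that .remove() has already detached, and the function names are illustrative.

```javascript
// Registry of elements carrying the data key, so update passes never scan the whole DOM.
var taggedElements = [];

function tagElement(el, value) {
  $(el).data('lalala', value);          // same key as in the question
  taggedElements.push(el);
}

// Visit every still-attached tagged element; prune detached ones so the array
// stops referencing removed nodes and the garbage collector can reclaim them.
function eachTagged(callback) {
  for (var i = taggedElements.length - 1; i >= 0; i--) {  // backwards so splice() is safe
    var el = taggedElements[i];
    if (!$.contains(document.documentElement, el)) {
      taggedElements.splice(i, 1);      // drop our last reference to the detached node
    } else {
      callback(el, $(el).data('lalala'));
    }
  }
}
```

    On the garbage-collection question: once a node is out of the DOM tree, out of this array, and not captured by any other handler or closure, nothing keeps it reachable, so it becomes eligible for collection; .remove() also clears jQuery's own data and event references.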

  • Which OAuth library do you find works best for Objective-C/iPhone?

    - by Brennan
    I have been looking to switch to OAuth for my Twitter integration code and now that there is a deadline in less than 7 weeks (see countdown link) it is even more important to make the jump to OAuth. I have been doing Basic Authentication which is extremely easy. Unfortunately OAuth does not appear to be something that I would whip together in a couple of hours. http://www.countdowntooauth.com/ So I am looking to use a library. I have put together the following list. MPOAuth MGTwitterEngine OAuthConsumer I see that MPOAuth has some great features with a good deal of testing code in place but there is one big problem. It does not work. The sample iPhone project that is supposed to authenticate with Twitter causes an error which others have identified and logged as a bug. http://code.google.com/p/mpoauthconnection/issues/detail?id=29 The last code change was March 11 and this bug was filed on March 30. It has been over a month and this critical bug has not been fixed yet. So I have moved on to MGTwitterEngine. I pulled down the source code and loaded it up in Xcode. Immediately I find that there are a few dependencies and the README file does not have a clear list of steps to fetch those dependencies and integrate them with the project so that it builds successfully. I see this as a sign that the project is not mature enough for prime time. I see also that the project references 2 libraries for JSON when one should be enough. One is TouchJSON which has worked well for me so I am again discouraged from relying on this project for my applications. I did find that MGTwitterEngine makes use of OAuthConsumer which is one of many OAuth projects hosted by an OAuth project on Google Code. http://code.google.com/p/oauth/ http://code.google.com/p/oauthconsumer/wiki/UsingOAuthConsumer It looks like OAuthConsumer is a good choice at first glance. It is hosted with other OAuth libraries and has some nice documentation with it. I pulled down the code and it builds without errors but it does have many warnings. And when I run the new Build and Analyze feature in Xcode 3.2 I see 50 analyzer results. Many are marked as potential memory leaks which would likely lead to instability in any app which uses this library. It seems there is no clear winner and I have to go with something before the big Twitter OAuth deadline. Any suggestions?

  • MySQL Query performance - huge difference in time

    - by Damo
    I have a query that is returning in vastly different amounts of time between 2 datasets. For one set (database A) it returns in a few seconds, for the other (database B)....well I haven't waited long enough yet, but over 10 minutes. I have dumped both of these databases to my local machine where I can reproduce the issue running MySQL 5.1.37. Curiously, database B is smaller than database A. A stripped down version of the query that reproduces the problem is: SELECT * FROM po_shipment ps JOIN po_shipment_item psi USING (ship_id) JOIN po_alloc pa ON ps.ship_id = pa.ship_id AND pa.UID_items = psi.UID_items JOIN po_header ph ON pa.hdr_id = ph.hdr_id LEFT JOIN EVENT_TABLE ev0 ON ev0.TABLE_ID1 = ps.ship_id AND ev0.EVENT_TYPE = 'MAS0' LEFT JOIN EVENT_TABLE ev1 ON ev1.TABLE_ID1 = ps.ship_id AND ev1.EVENT_TYPE = 'MAS1' LEFT JOIN EVENT_TABLE ev2 ON ev2.TABLE_ID1 = ps.ship_id AND ev2.EVENT_TYPE = 'MAS2' LEFT JOIN EVENT_TABLE ev3 ON ev3.TABLE_ID1 = ps.ship_id AND ev3.EVENT_TYPE = 'MAS3' LEFT JOIN EVENT_TABLE ev4 ON ev4.TABLE_ID1 = ps.ship_id AND ev4.EVENT_TYPE = 'MAS4' LEFT JOIN EVENT_TABLE ev5 ON ev5.TABLE_ID1 = ps.ship_id AND ev5.EVENT_TYPE = 'MAS5' WHERE ps.eta >= '2010-03-22' GROUP BY ps.ship_id LIMIT 100; The EXPLAIN query plan for the first database (A) that returns in ~2 seconds is: +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+------------------------------+------+----------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+------------------------------+------+----------------------------------------------+ | 1 | SIMPLE | ps | range | PRIMARY,IX_ETA_DATE | IX_ETA_DATE | 4 | NULL | 174 | Using where; Using temporary; Using filesort | | 1 | SIMPLE | ev0 | ref | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_PROD.ps.ship_id,const | 1 | | | 1 | SIMPLE | ev1 | ref | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_PROD.ps.ship_id,const | 1 | | | 1 | SIMPLE | ev2 | ref | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_PROD.ps.ship_id,const | 1 | | | 1 | SIMPLE | ev3 | ref | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_PROD.ps.ship_id,const | 1 | | | 1 | SIMPLE | ev4 | ref | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_PROD.ps.ship_id,const | 1 | | | 1 | SIMPLE | ev5 | ref | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_PROD.ps.ship_id,const | 1 | | | 1 | SIMPLE | psi | ref | PRIMARY,IX_po_shipment_item_po_shipment1,FK_po_shipment_item_po_shipment1 | IX_po_shipment_item_po_shipment1 | 4 | UNIVIS_PROD.ps.ship_id | 1 | | | 1 | SIMPLE | pa | ref | IX_po_alloc_po_shipment_item2,IX_po_alloc_po_details_old,FK_po_alloc_po_shipment1,FK_po_alloc_po_shipment_item1,FK_po_alloc_po_header1 | FK_po_alloc_po_shipment1 | 4 | UNIVIS_PROD.psi.ship_id | 5 | Using where | | 1 | SIMPLE | ph | eq_ref | PRIMARY,IX_HDR_ID | PRIMARY | 4 | UNIVIS_PROD.pa.hdr_id | 1 | | 
+----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+------------------------------+------+----------------------------------------------+ The EXPLAIN query plan for the second database (B) that returns in 600 seconds is: +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+--------------------------------+------+----------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+--------------------------------+------+----------------------------------------------+ | 1 | SIMPLE | ps | range | PRIMARY,IX_ETA_DATE | IX_ETA_DATE | 4 | NULL | 38 | Using where; Using temporary; Using filesort | | 1 | SIMPLE | psi | ref | PRIMARY,IX_po_shipment_item_po_shipment1,FK_po_shipment_item_po_shipment1 | IX_po_shipment_item_po_shipment1 | 4 | UNIVIS_DEV01.ps.ship_id | 1 | | | 1 | SIMPLE | ev0 | ref | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_DEV01.psi.ship_id,const | 1 | | | 1 | SIMPLE | ev1 | ref | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_DEV01.psi.ship_id,const | 1 | | | 1 | SIMPLE | ev2 | ref | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_DEV01.ps.ship_id,const | 1 | | | 1 | SIMPLE | ev3 | ref | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_DEV01.psi.ship_id,const | 1 | | | 1 | SIMPLE | ev4 | ref | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_DEV01.psi.ship_id,const | 1 | | | 1 | SIMPLE | ev5 | ref | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_DEV01.ps.ship_id,const | 1 | | | 1 | SIMPLE | pa | ref | IX_po_alloc_po_shipment_item2,IX_po_alloc_po_details_old,FK_po_alloc_po_shipment1,FK_po_alloc_po_shipment_item1,FK_po_alloc_po_header1 | IX_po_alloc_po_shipment_item2 | 4 | UNIVIS_DEV01.ps.ship_id | 4 | Using where | | 1 | SIMPLE | ph | eq_ref | PRIMARY,IX_HDR_ID | PRIMARY | 4 | UNIVIS_DEV01.pa.hdr_id | 1 | | +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+--------------------------------+------+----------------------------------------------+ When database B is running I can look at the MySQL Administrator and the state remains at "Copying to tmp table" indefinitely. Database A also has this state but for only a second or so. There are no differences in the table structure, indexes, keys etc between these databases (I have done show create tables and diff'd them). The sizes of the tables are: database A: po_shipment 1776 po_shipment_item 1945 po_alloc 36298 po_header 71642 EVENT_TABLE 1608 database B: po_shipment 463 po_shipment_item 470 po_alloc 3291 po_header 56149 EVENT_TABLE 1089 Some points to note: Removing the WHERE clause makes the query return < 1 sec. Removing the GROUP BY makes the query return < 1 sec. Removing ev5, ev4, ev3 etc makes the query get faster for each one removed. 
Can anyone suggest how to resolve this issue? What have I missed? Many Thanks.
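    Two cheap experiments worth trying before restructuring anything, shown as a sketch (table names are from the question, the column list is trimmed): refresh database B's index statistics, since stale statistics are a common reason two identical schemas pick different join orders, and pin an explicit join order to see whether the plan difference alone explains the gap.

```sql
-- Refresh index statistics so the optimizer's row estimates match database B's data.
ANALYZE TABLE po_shipment, po_shipment_item, po_alloc, po_header, EVENT_TABLE;

-- Pin the join order to the written order (ps first) to test whether the slow
-- plan's different ordering is the culprit; the LEFT JOINs are omitted here.
SELECT STRAIGHT_JOIN ps.ship_id
FROM po_shipment ps
JOIN po_shipment_item psi USING (ship_id)
JOIN po_alloc pa ON ps.ship_id = pa.ship_id AND pa.UID_items = psi.UID_items
JOIN po_header ph ON pa.hdr_id = ph.hdr_id
WHERE ps.eta >= '2010-03-22'
GROUP BY ps.ship_id
LIMIT 100;
```

    If the "Copying to tmp table" step is spilling to disk, raising tmp_table_size and max_heap_table_size for the session is another quick check, since the GROUP BY forces a temporary table in both plans.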

  • Using FFmpeg in .net?

    - by daniel
    So i know its a fairly big challenge but i want to write a basic movie player/converter in c# using the FFmpeg library. However the first obstacle i need to overcome is wrapping the FFmpeg library in c#. I downloaded ffmpeg but couldn't compile it on windows, so i downloaded a precompiled version for me. Ok awesome. Then i started looking for c# wrappers. I have looked around and have found a few wrappers such as SharpFFmpeg (http://sourceforge.net/projects/sharpffmpeg/) and ffmpeg-sharp (http://code.google.com/p/ffmpeg-sharp/). First of all i wanted to use ffmpeg-sharp as its LGPL and SharpFFmpeg is GPL. However it had quite a few compile errors. Turns out it was written for the mono compiler, i tried compiling it with mon but couldn't figure out how. I then started to manually fix the compiler errors myself, but came across a few scary ones and thought i'd better leave those alone. So i gave up on ffmpeg-sharp. Then i looked at SharpFFmpeg and it looks like what i want, all the functions P/Invoked for me. However its GPL? Both the AVCodec.cs and AVFormat.cs files look like ports of avcodec.c and avformat.c which i reckon i could port myself? Then not have to worry about licencing. But i wan't to get this right before i go ahead and start coding. Should I: Write my own c++ library for interacting with ffmpeg, then have my c# program talk to the c++ library in order to play/convert videos etc. OR Port avcodec.h and avformat.h (is that all i need?) to c# by using a whole lot of DllImports and write it entirely in c#? First of all consider that i'm not great at c++ as i rarely use it but i know enough to get around. The reason i'm thinking #1 might be the better option is that most FFmpeg tutorials are in c++ and i'd also have more control over memory management than if i was to do it in c#. What do you think? Also would you happen to have any usefull links (perhaps a tutorial) for using FFmpeg? EDIT: spelling mistakes
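    A third option the question doesn't list, shown only as a sketch: for pure conversion, skip the wrapper entirely and drive the prebuilt ffmpeg.exe as a child process. That sidesteps both the licensing question and the P/Invoke surface. It assumes ffmpeg.exe is on the PATH, and the argument string is just an illustration.

```csharp
using System;
using System.Diagnostics;

class FfmpegConvert
{
    // Converts input to output by shelling out to the prebuilt ffmpeg binary.
    static void Convert(string input, string output)
    {
        ProcessStartInfo psi = new ProcessStartInfo();
        psi.FileName = "ffmpeg";                                   // assumes ffmpeg.exe on PATH
        psi.Arguments = "-i \"" + input + "\" \"" + output + "\"";
        psi.UseShellExecute = false;
        psi.RedirectStandardError = true;                          // ffmpeg reports progress on stderr
        psi.CreateNoWindow = true;

        using (Process proc = Process.Start(psi))
        {
            string log = proc.StandardError.ReadToEnd();           // read before waiting, avoids deadlock
            proc.WaitForExit();
            if (proc.ExitCode != 0)
                throw new InvalidOperationException("ffmpeg failed:\n" + log);
        }
    }

    static void Main(string[] args)
    {
        Convert(args[0], args[1]);
    }
}
```

    For actual playback (decoded frames pushed to the screen) some wrapper is still needed, and option 1 from the question -- a thin C++ shim exposing only the handful of calls you actually use -- keeps the C# side small and insulates it from avcodec/avformat API churn.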

  • C++ How to deep copy a struct with unknown datatype?

    - by Ewald Peters
    hi, i have a "data provider" which stores its output in a struct of a certain type, for instance struct DATA_TYPE1{ std::string data_string; }; then this struct has to be casted into a general datatype, i thought about void * or char *, because the "intermediate" object that copies and stores it in its binary tree should be able to store many different types of such struct data. struct BINARY_TREE_ENTRY{ void * DATA; struct BINARY_TREE_ENTRY * next; }; this void * is then later taken by another object that casts the void * back into the (struct DATA_TYPE1 *) to get the original data. so the sender and the receiver know about the datatype DATA_TYPE1 but not the copying object inbetween. but how can the intermidiate object deep copy the contents of the different structs, when it doesn't know the datatype, only void * and it has no method to copy the real contents; dynamic_cast doesn't work for void *; the "intermediate" object should do something like: void store_data(void * CASTED_DATA_STRUCT){ void * DATA_COPY = create_a_deepcopy_of(CASTED_DATA_STRUCT); push_into_bintree(DATA_COPY); } a simple solution would be that the sending object doesn't delete the sent data struct, til the receiving object got it, but the sending objects are dynamically created and deleted, before the receiver got the data from the intermediate object, for asynchronous communication, therefore i want to copy it. instead of converting it to void * i also tried converting to a superclass pointer of which the intermediate copying object knows about, and which is inherited by all the different datatypes of the structs: struct DATA_BASE_OBJECT{ public: DATA_BASE_OBJECT(){} DATA_BASE_OBJECT(DATA_BASE_OBJECT * old_ptr){ std::cout << "this should be automatically overridden!" << std::endl; } virtual ~DATA_BASE_OBJECT(){} }; struct DATA_TYPE1 : public DATA_BASE_OBJECT { public: string str; DATA_TYPE1(){} ~DATA_TYPE1(){} DATA_TYPE1(DATA_TYPE1 * old_ptr){ str = old_ptr->str; } }; and the corresponding binary tree entry would then be: struct BINARY_TREE_ENTRY{ struct DATA_BASE_OBJECT * DATA; struct BINARY_TREE_ENTRY * next; }; and to then copy the unknown datatype, i tried in the class that just gets the unknown datatype as a struct DATA_BASE_OBJECT * (before it was the void *): void * copy_data(DATA_BASE_OBJECT * data_that_i_get_in_the_sub_struct){ struct DATA_BASE_OBJECT * copy_sub = new DATA_BASE_OBJECT(data_that_i_get_in_the_sub_struct); push_into_bintree(copy_sub); } i then added a copy constructor to the DATA_BASE_OBJECT, but if the struct DATA_TYPE1 is first casted to a DATA_BASE_OBJECT and then copied, the included sub object DATA_TYPE1 is not also copied. i then thought what about finding out the size of the actual object to copy and then just memcopy it, but the bytes are not stored in one row and how do i find out the real size in memory of the struct DATA_TYPE1 which holds a std::string? Which other c++ methods are available to deepcopy an unknown datatype (and to maybe get the datatype information somehow else during runtime) thanks Ewald

  • Why might login failures cause SQL 2005 to dump and ditch?

    - by Byron Sommardahl
    Our SQL 2005 server began timing out and finally stopped responding on Oct 26th. The application logs showed a ton of 17883 events leading up to a reboot. After the reboot everything was fine but we were still scratching our heads. Fast forward 6 days... it happened again. Then again 2 days later. The last night. Today it has happened three times to far. The timeline is fairly predictable when it happens: Trans log backups. Login failure for "user2". Minidump Another minidump for the scheduler Repeated 17883 events. Server fails little by little until it won't accept any requests. Reboot is all that gets us going again (a band-aid) Interesting, though, is that the server box itself doesn't seem to have any problems. CPU usage is normal. Network connectivity is fine. We can remote in and look at logs. Management studio does eventually bog down, though. Today, for the first time, we tried stopping services instead of a reboot. All services stopped on their own except for the SQL Server service. We finally did an "end task" on that one and were able to bring everything back up. It worked fine for about 30 minutes until we started seeing timeouts and 17883's again. This time, probably because we didn't reboot all the way, we saw a bunch of 844 events mixed in with the 17883's. Our entire tech team here is scratching heads... some ideas we're kicking around: MS Cumulative Update hit around the same time as when we first had a problem. Since then, we've rolled it back. Maybe it didn't rollback all the way. The situation looks and feels like an unhandled "stack overflow" (no relation) in that it starts small and compounds over time. Problem with this is that there isn't significant CPU usage. At any rate, we're not ruling SQL 2005 bug out at all. Maybe we added one too many import processes and have reached our limit on this box. (hard to believe). Looking at SQLDUMP0151.log at the time of one of the crashes. There are some "login failures" and then there are two stack dumps. 1st a normal stack dump, 2nd for a scheduler dump. Here's a snippet: (sorry for the lack of line breaks) 2009-11-10 11:59:14.95 spid63 Using 'xpsqlbot.dll' version '2005.90.3042' to execute extended stored procedure 'xp_qv'. This is an informational message only; no user action is required. 2009-11-10 11:59:15.09 spid63 Using 'xplog70.dll' version '2005.90.3042' to execute extended stored procedure 'xp_msver'. This is an informational message only; no user action is required. 2009-11-10 12:02:33.24 Logon Error: 18456, Severity: 14, State: 16. 2009-11-10 12:02:33.24 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101] 2009-11-10 12:08:21.12 Logon Error: 18456, Severity: 14, State: 16. 2009-11-10 12:08:21.12 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101] 2009-11-10 12:13:49.38 Logon Error: 18456, Severity: 14, State: 16. 2009-11-10 12:13:49.38 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101] 2009-11-10 12:15:16.88 Logon Error: 18456, Severity: 14, State: 16. 2009-11-10 12:15:16.88 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101] 2009-11-10 12:18:24.41 Logon Error: 18456, Severity: 14, State: 16. 2009-11-10 12:18:24.41 Logon Login failed for user 'standard_user2'. 
[CLIENT: 50.36.172.101] 2009-11-10 12:18:38.88 spid111 Using 'dbghelp.dll' version '4.0.5' 2009-11-10 12:18:39.02 spid111 *Stack Dump being sent to C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\SQLDump0149.txt 2009-11-10 12:18:39.02 spid111 SqlDumpExceptionHandler: Process 111 generated fatal exception c0000005 EXCEPTION_ACCESS_VIOLATION. SQL Server is terminating this process. 2009-11-10 12:18:39.02 spid111 * ***************************************************************************** 2009-11-10 12:18:39.02 spid111 * 2009-11-10 12:18:39.02 spid111 * BEGIN STACK DUMP: 2009-11-10 12:18:39.02 spid111 * 11/10/09 12:18:39 spid 111 2009-11-10 12:18:39.02 spid111 * 2009-11-10 12:18:39.02 spid111 * 2009-11-10 12:18:39.02 spid111 * Exception Address = 0159D56F Module(sqlservr+0059D56F) 2009-11-10 12:18:39.02 spid111 * Exception Code = c0000005 EXCEPTION_ACCESS_VIOLATION 2009-11-10 12:18:39.02 spid111 * Access Violation occurred writing address 00000000 2009-11-10 12:18:39.02 spid111 * Input Buffer 138 bytes - 2009-11-10 12:18:39.02 spid111 * " N R S C _ P T A 22 00 4e 00 52 00 53 00 43 00 5f 00 50 00 54 00 41 00 2009-11-10 12:18:39.02 spid111 * C _ Q A . d b o . 43 00 5f 00 51 00 41 00 2e 00 64 00 62 00 6f 00 2e 00 2009-11-10 12:18:39.02 spid111 * U s p S e l N e x 55 00 73 00 70 00 53 00 65 00 6c 00 4e 00 65 00 78 00 2009-11-10 12:18:39.02 spid111 * t A c c o u n t 74 00 41 00 63 00 63 00 6f 00 75 00 6e 00 74 00 00 00 2009-11-10 12:18:39.02 spid111 * @ i n t F o r m I 0a 40 00 69 00 6e 00 74 00 46 00 6f 00 72 00 6d 00 49 2009-11-10 12:18:39.02 spid111 * D & 8 @ t x 00 44 00 00 26 04 04 38 00 00 00 09 40 00 74 00 78 00 2009-11-10 12:18:39.02 spid111 * t A l i a s § 74 00 41 00 6c 00 69 00 61 00 73 00 00 a7 0f 00 09 04 2009-11-10 12:18:39.02 spid111 * Ð GQE9732 d0 00 00 07 00 47 51 45 39 37 33 32 2009-11-10 12:18:39.02 spid111 * 2009-11-10 12:18:39.02 spid111 * 2009-11-10 12:18:39.02 spid111 * MODULE BASE END SIZE 2009-11-10 12:18:39.02 spid111 * sqlservr 01000000 02C09FFF 01c0a000 2009-11-10 12:18:39.02 spid111 * ntdll 7C800000 7C8C1FFF 000c2000 2009-11-10 12:18:39.02 spid111 * kernel32 77E40000 77F41FFF 00102000

  • Some questions about writing on ASP.NET response stream

    - by vtortola
    Hi, I'm making tests with ASP.NET HttpHandler for download a file writting directly on the response stream, and I'm not pretty sure about the way I'm doing it. This is a example method, in the future the file could be stored in a BLOB in the database: public void GetFile(HttpResponse response) { String fileName = "example.iso"; response.ClearHeaders(); response.ClearContent(); response.ContentType = "application/octet-stream"; response.AppendHeader("Content-Disposition", "attachment; filename=" + fileName); using (FileStream fs = new FileStream(Path.Combine(HttpContext.Current.Server.MapPath("~/App_Data"), fileName), FileMode.Open)) { Byte[] buffer = new Byte[4096]; Int32 readed = 0; while ((readed = fs.Read(buffer, 0, buffer.Length)) > 0) { response.OutputStream.Write(buffer, 0, readed); response.Flush(); } } } But, I'm not sure if this is correct or there is a better way to do it. My questions are: When I open the url with the browser, appears the "Save File" dialog... but it seems like the server has started already to push data into the stream before I click "Save", is that normal? If I remove the line"response.Flush()", when I open the url with the browser, ... I see how the web server is pushing data but the "Save File" dialog doesn't come up, (or at least not in a reasonable time fashion) why? When I open the url with a WebRequest object, I see that the HttpResponse.ContentLength is "-1", although I can read the stream and get the file. What is the meaning of -1? When is HttpResponse.ContentLength going to show the length of the response? For example, I have a method that retrieves a big xml compresed with deflate as a binary stream, but in that case... when I access it with a WebRequest, in the HttpResponse I can actually see the ContentLength with the length of the stream, why? What is the optimal length for the Byte[] array that I use as buffer for optimal performance in a web server? I've read that is between 4K and 8K... but which factors should I consider to make the correct decision. Does this method bloat the IIS or client memory usage? or is it actually buffering the transference correctly? Sorry for so many questions, I'm pretty new in web development :P Cheers.
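    On questions 3 and 4 specifically: a ContentLength of -1 just means no Content-Length header was sent, which is what happens when the handler streams chunks without ever declaring a total size (the deflate case presumably sets the header explicitly, which is why its length shows up). A sketch of the same handler with an explicit length, letting IIS do the buffering; it assumes the file stays on disk as in the example, and needs System.IO and System.Web like the original:

```csharp
public void GetFile(HttpResponse response)
{
    string fileName = "example.iso";
    string path = Path.Combine(
        HttpContext.Current.Server.MapPath("~/App_Data"), fileName);

    response.ClearHeaders();
    response.ClearContent();
    response.ContentType = "application/octet-stream";
    response.AppendHeader("Content-Disposition", "attachment; filename=" + fileName);

    // With an explicit length the client sees a real Content-Length (not -1) and
    // can show progress; the Save dialog appearing while bytes already stream is normal.
    response.AppendHeader("Content-Length", new FileInfo(path).Length.ToString());

    // Let IIS stream the file instead of copying 4 KB chunks through managed memory.
    response.TransmitFile(path);
}
```

    For the future BLOB case the read/write loop comes back, but the stored size can still be written into Content-Length up front. A 4-8 KB buffer remains a sensible default there: the size mostly trades call overhead against per-request memory, and the loop never holds the whole file in memory either way.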

  • Efficiency of Java "Double Brace Initialization"?

    - by Jim Ferrans
    In Hidden Features of Java the top answer mentions Double Brace Initialization, with a very enticing syntax: Set<String> flavors = new HashSet<String>() {{ add("vanilla"); add("strawberry"); add("chocolate"); add("butter pecan"); }}; This idiom creates an anonymous inner class with just an instance initializer in it, which "can use any [...] methods in the containing scope". Main question: Is this as inefficient as it sounds? Should its use be limited to one-off initializations? (And of course showing off!) Second question: The new HashSet must be the "this" used in the instance initializer ... can anyone shed light on the mechanism? Third question: Is this idiom too obscure to use in production code? Summary: Very, very nice answers, thanks everyone. On question (3), people felt the syntax should be clear (though I'd recommend an occasional comment, especially if your code will pass on to developers who may not be familiar with it). On question (1), The generated code should run quickly. The extra .class files do cause jar file clutter, and slow program startup slightly (thanks to coobird for measuring that). Thilo pointed out that garbage collection can be affected, and the memory cost for the extra loaded classes may be a factor in some cases. Question (2) turned out to be most interesting to me. If I understand the answers, what's happening in DBI is that the anonymous inner class extends the class of the object being constructed by the new operator, and hence has a "this" value referencing the instance being constructed. Very neat. Overall, DBI strikes me as something of an intellectual curiousity. Coobird and others point out you can achieve the same effect with Arrays.asList, varargs methods, Google Collections, and the proposed Java 7 Collection literals. Newer JVM languages like Scala, JRuby, and Groovy also offer concise notations for list construction, and interoperate well with Java. Given that DBI clutters up the classpath, slows down class loading a bit, and makes the code a tad more obscure, I'd probably shy away from it. However, I plan to spring this on a friend who's just gotten his SCJP and loves good natured jousts about Java semantics! ;-) Thanks everyone!
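    For comparison, a sketch of the Arrays.asList alternative mentioned in the summary -- the same one-liner feel, but no anonymous subclass, no extra .class file, and it can be made genuinely immutable:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class Flavors {
    // The HashSet copy constructor over Arrays.asList covers the common case...
    private static final Set<String> FLAVORS = new HashSet<String>(
            Arrays.asList("vanilla", "strawberry", "chocolate", "butter pecan"));

    // ...and the unmodifiable wrapper makes the "constant" actually constant.
    public static final Set<String> FROZEN = Collections.unmodifiableSet(FLAVORS);

    public static void main(String[] args) {
        System.out.println(FROZEN.contains("vanilla"));   // true
    }
}
```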

  • Strategy and AI for the game 'Proximity'

    - by smci
    'Proximity' is a strategy game of territorial domination similar to Othello, Go and Risk. Two players, uses a 10x12 hex grid. Game invented by Brian Cable in 2007. Seems to be a worthy game for discussing a) optimal strategy then b) how to build an AI Strategies are going to be probabilistic or heuristic-based, due to the randomness factor, and the high branching factor (starts out at 120). So it will be kind of hard to compare objectively. A compute time limit of 5s per turn seems reasonable. Game: Flash version here and many copies elsewhere on the web Rules: here Object: to have control of the most armies after all tiles have been placed. Each turn you received a randomly numbered tile (value between 1 and 20 armies) to place on any vacant board space. If this tile is adjacent to any ally tiles, it will strengthen each tile's defenses +1 (up to a max value of 20). If it is adjacent to any enemy tiles, it will take control over them if its number is higher than the number on the enemy tile. Thoughts on strategy: Here are some initial thoughts; setting the computer AI to Expert will probably teach a lot: minimizing your perimeter seems to be a good strategy, to prevent flips and minimize worst-case damage like in Go, leaving holes inside your formation is lethal, only more so with the hex grid because you can lose armies on up to 6 squares in one move low-numbered tiles are a liability, so place them away from your main territory, near the board edges and scattered. You can also use low-numbered tiles to plug holes in your formation, or make small gains along the perimeter which the opponent will not tend to bother attacking. a triangle formation of three pieces is strong since they mutually reinforce, and also reduce the perimeter Each tile can be flipped at most 6 times, i.e. when its neighbor tiles are occupied. Control of a formation can flow back and forth. Sometimes you lose part of a formation and plug any holes to render that part of the board 'dead' and lock in your territory/ prevent further losses. Low-numbered tiles are obvious-but-low-valued liabilities, but high-numbered tiles can be bigger liabilities if they get flipped (which is harder). One lucky play with a 20-army tile can cause a swing of 200 (from +100 to -100 armies). So tile placement will have both offensive and defensive considerations. Comment 1,2,4 seem to resemble a minimax strategy where we minimize the maximum expected possible loss (modified by some probabilistic consideration of the value ß the opponent can get from 1..20 i.e. a structure which can only be flipped by a ß=20 tile is 'nearly impregnable'.) I'm not clear what the implications of comments 3,5,6 are for optimal strategy. Interested in comments from Go, Chess or Othello players. (The sequel ProximityHD for XBox Live, allows 4-player -cooperative or -competitive local multiplayer increases the branching factor since you now have 5 tiles in your hand at any given time, of which you can only play one. Reinforcement of ally tiles is increased to +2 per ally.)

  • My block is not retaining some of its objects

    - by Drew Crawford
    From the Blocks documentation: In a reference-counted environment, by default when you reference an Objective-C object within a block, it is retained. This is true even if you simply reference an instance variable of the object. I am trying to implement a completion handler pattern, where a block is given to an object before the work is performed and the block is executed by the receiver after the work is performed. Since I am being a good memory citizen, the block should own the objects it references in the completion handler and then they will be released when the block goes out of scope. I know enough to know that I must copy the block to move it to the heap since the block will survive the stack scope in which it was declared. However, one of my objects is getting deallocated unexpectedly. After some playing around, it appears that certain objects are not retained when the block is copied to the heap, while other objects are. I am not sure what I am doing wrong. Here's the smallest test case I can produce: typedef void (^ActionBlock)(UIView*); In the scope of some method: NSObject *o = [[[NSObject alloc] init] autorelease]; mailViewController = [[[MFMailComposeViewController alloc] init] autorelease]; NSLog(@"o's retain count is %d",[o retainCount]); NSLog(@"mailViewController's retain count is %d",[mailViewController retainCount]); ActionBlock myBlock = ^(UIView *view) { [mailViewController setCcRecipients:[NSArray arrayWithObjects:@"[email protected]",nil]]; [o class]; }; NSLog(@"mailViewController's retain count after the block is %d",[mailViewController retainCount]); NSLog(@"o's retain count after the block is %d",[o retainCount]); Block_copy(myBlock); NSLog(@"o's retain count after the copy is %d",[o retainCount]); NSLog(@"mailViewController's retain count after the copy is %d",[mailViewController retainCount]); I expect both objects to be retained by the block at some point, and I certainly expect their retain counts to be identical. Instead, I get this output: o's retain count is 1 mailViewController's retain count is 1 mailViewController's retain count after the block is 1 o's retain count after the block is 1 o's retain count after the copy is 2 mailViewController's retain count after the copy is 1 o (subclass of NSObject) is getting retained properly and will not go out of scope. However mailViewController is not retained and will be deallocated before the block is run, causing a crash.
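    The output fits the documented capture rule: referencing an object through an instance variable retains self, not the ivar, while referencing a local variable retains the object itself. Here o is a declared local (so it is retained and its count goes to 2), whereas mailViewController is assigned without a declaration and is presumably an ivar, so only self gets retained and the autoreleased controller can still die. A sketch of the usual MRC-era workaround, routing the capture through a local; the e-mail address is a placeholder:

```objc
MFMailComposeViewController *controller =
    [[[MFMailComposeViewController alloc] init] autorelease];
mailViewController = controller;               // keep the ivar if other code needs it

ActionBlock myBlock = ^(UIView *view) {
    // 'controller' is a captured local object pointer, so the copied block
    // retains the controller itself instead of merely retaining self.
    [controller setCcRecipients:[NSArray arrayWithObjects:@"cc@example.com", nil]];
};
ActionBlock heapBlock = [[myBlock copy] autorelease];   // same move to the heap as Block_copy
```

    As an aside, retainCount is only a rough debugging signal; autorelease pools and framework internals make the absolute numbers unreliable, so the 1-versus-2 difference is best read qualitatively.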

  • Mobile App Data Synchronization

    - by Matt Rogish
    Let's say I have a mobile app that uses HTML5 SQLite DB (and/or the HTML5 key-value store). Assets (media files, PDFs, etc.) are stored locally on the mobile device. Luckily enough, the mobile device is a read-only copy of the "centralized" storage, so the mobile device won't have to propagate changes upstream. However, as the server changes assets (creates new ones, modifies existing, deletes old ones) I need to propagate those changes back to the mobile app. Assume that server changes are grouped into changesets (version number n) that contain some information (added element XYZ, deleted id = 45, etc.) and that the mobile device has limited CPU/bandwidth, so most of the processing has to take place on the server. I can think of a couple of methods to do this. All have trade-offs and at this point, I'm unsure which is the right course of action... Method 1: For change set n, store the "diff" of the current n and previous n-1. When a client with version y asks if there have been any changes, send the change sets from version y up to the current version. e.g. added item 334, contents: xxx. Deleted picture 44. Deleted PDF 11. Changed 33. added picture 99. Characteristics: Diffs take up space, although in theory would be kept small. However, all diffs must be kept around indefinitely (should a v1 app have not been updated for a year, must apply v2..v100). High latency devices (mobile apps) will incur a penalty to send lots of small files (assume cannot be zipped or tarr'd up into one file) Very few server CPU resources required, as all it does is send the client a list of files "Dumb" - if I change an item in change set 3, and change it to something else in 4, the client is going to perform both actions, even though #3 is rendered moot by #4. Or, if an asset is added in #4 and removed in #5 - the client will download a file just to delete it later. Method 2: Very similar to method 1 except on the server, do some sort of a diff between the change sets represented by the app version and server version. Package that up and send that single change set to the client. Characteristics: Client-efficient: The client only has to process one file, duplicate or irrelevant changes are stripped out. Server CPU/space intensive. The change sets must be diff'd and then written out to a file that is then sent to the client. Makes diff server scalability an issue. Possibly ways to cache the results and re-use them, but in the wild there's likely to be a lot of different versions so the diff re-use has a limit Diff algorithm is complicated. The change sets must be structured in such a way that an efficient and effective diff can be performed. Method 3: Instead of keeping diffs, write out the entire versioned asset collection to a mobile-database import file. When client requests an update, send the entire database to client and have them update their assets appropriately. Characteristics: Conceptually simple -- easy to develop and deploy Very inefficient as the client database is restored every update. If only one new thing was added, the whole database is refreshed. Server space and CPU efficient. Only the latest version DB needs kept around and the server just throws the file to the client. Others?? Thoughts? Thanks!!
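    A schematic sketch of how the changesets behind Methods 1 and 2 could be stored and queried on the server; the table and column names are illustrative (not from the question), and :client_version stands for a bound parameter.

```sql
CREATE TABLE asset_changes (
    version   INTEGER NOT NULL,        -- changeset number n
    asset_id  INTEGER NOT NULL,
    action    TEXT    NOT NULL,        -- 'add' | 'update' | 'delete'
    payload   BLOB,                    -- new content for add/update, NULL for delete
    PRIMARY KEY (version, asset_id)
);

-- Method 1: replay every change newer than the client's version.
SELECT version, asset_id, action, payload
FROM asset_changes
WHERE version > :client_version
ORDER BY version, asset_id;

-- Method 2: collapse to the latest action per asset, so the client never
-- downloads something only to delete it a changeset later.
SELECT c.asset_id, c.action, c.payload
FROM asset_changes c
JOIN (SELECT asset_id, MAX(version) AS latest
      FROM asset_changes
      WHERE version > :client_version
      GROUP BY asset_id) m
  ON m.asset_id = c.asset_id AND m.latest = c.version;
```

    The Method 2 query is the "diff between changesets" done declaratively, which keeps the server cost down to one indexed query per sync rather than a bespoke diff algorithm.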

  • Java ExecutorService Java heap space problems

    - by Sergey Aganezov jr
    I have a little bit of a problem in the multitasking Java department. I have a class, called public class ThreadWorker implements Runnable { //some code in here public void run(){ // invokes some recursion method in the ThreadWorker itself, // which will stop eventually } } all in all, a pretty simple "worker" that can work on its own. To work with threads I'm using public static int THREAD_NUMBER = 4; public static ExecutorService es = Executors.newFixedThreadPool(THREAD_NUMBER); adding instances of the ThreadWorker class happens here: public void recursiveMethod(ArrayList<Integer> elements, MyClass data){ if (elements.size() == 0 && data.qualifies()){ ThreadWorker tw = new ThreadWorker(data); es.execute(tw); return; } for (int i=0; i< elements.size(); i++){ // some code to prevent my problem MyClass data1 = new MyClass(data); MyClass data2 = new MyClass(data); ArrayList<Integer> newElements = (ArrayList<Integer>)elements.clone(); data1.update(elements.get(i)); data2.update(-1 * elements.get(i)); newElements.remove(i); recursiveMethod(newElements, data1); recursiveMethod(newElements, data2); } } and the problem is that the depth of the recursion tree is quite big, as is its width, so a lot of ThreadWorkers are added to the ExecutorService, and after some time on big input I get Exception in thread "pool-1-thread-2" java.lang.OutOfMemoryError: Java heap space which is caused, I think, by the ginormous number of ThreadWorkers I'm adding to the ExecutorService to be executed, so it runs out of memory. Every ThreadWorker takes about 40 Mb of RAM for all it needs. Is there a method to get how many threads (instances of classes implementing the Runnable interface) have been added to the ExecutorService? So I can add it in the code shown above (in the " // some code to prevent my problem"), as while ("number of threads in the ExecutorService" > 10){ Thread.sleep(10000); } so I won't go too deep or too broad with my recursion and prevent those exception-throwing situations. Sincerely, Sergey Aganezov jr.
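    On the literal question: Executors.newFixedThreadPool returns a ThreadPoolExecutor under the hood, so ((ThreadPoolExecutor) es).getQueue().size() reports how many submitted Runnables are still waiting. A cleaner way to cap memory, though, is to bound the queue itself and let the submitting thread apply back-pressure. A sketch, using the pool size from the question and an assumed limit of 10 queued workers:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPool {
    public static final int THREAD_NUMBER = 4;
    public static final int MAX_QUEUED = 10;    // at most 10 ThreadWorkers waiting

    // Fixed-size queue + CallerRunsPolicy: when all 4 workers are busy and 10 are
    // queued, the recursing thread runs the task itself instead of queueing an
    // 11th, so the backlog (and its ~40 MB per worker) stays bounded.
    public static final ThreadPoolExecutor es = new ThreadPoolExecutor(
            THREAD_NUMBER, THREAD_NUMBER, 0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<Runnable>(MAX_QUEUED),
            new ThreadPoolExecutor.CallerRunsPolicy());

    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) {
            final int id = i;
            es.execute(new Runnable() {         // stands in for ThreadWorker
                public void run() { System.out.println("ran task " + id); }
            });
            System.out.println("queued right now: " + es.getQueue().size());
        }
        es.shutdown();
    }
}
```

    This removes the need for the Thread.sleep polling loop entirely, since execute() itself slows the producer down once the limit is reached.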

  • Unable to cast lists with Reflection in C#

    - by DrLazer
    I am having a lot of trouble with Reflection in C# at the moment. The app I am writing allows the user to modify the attributes of certain objects using a config file. I want to be able to save the object model (users project) to XML. The function below is called in the middle of a foreach loop, looping through a list of objects that contain all the other objects in the project within them. The idea is, that it works recursively to translate the object model into XML. Dont worry about the call to "Unreal" that just modifes the name of the objects slightly if they contain certain words. private void ReflectToXML(object anObject, XmlElement parentElement) { Type aType = anObject.GetType(); XmlElement anXmlElement = m_xml.CreateElement(Unreal(aType.Name)); parentElement.AppendChild(anXmlElement); PropertyInfo[] pinfos = aType.GetProperties(); //loop through this objects public attributes foreach (PropertyInfo aInfo in pinfos) { //if the attribute is a list Type propertyType = aInfo.PropertyType; if ((propertyType.IsGenericType)&&(propertyType.GetGenericTypeDefinition() == typeof(List<>))) { List<object> listObjects = (aInfo.GetValue(anObject,null) as List<object>); foreach (object aListObject in listObjects) { ReflectToXML(aListObject, anXmlElement); } } //attribute is not a list else anXmlElement.SetAttribute(aInfo.Name, ""); } } If an object attributes are just strings then it should be writing them out as string attributes in the XML. If an objects attributes are lists, then it should recursively call "ReflectToXML" passing itself in as a parameter, thereby creating the nested structure I require that nicely reflect the object model in memory. The problem I have is with the line List<object> listObjects = (aInfo.GetValue(anObject,null) as List<object>); The cast doesn't work and it just returns null. While debugging I changed the line to object temp = aInfo.GetValue(anObject,null); slapped a breakpoint on it to see what "GetValue" was returning. It returns a "Generic list of objects" Surely I should be able to cast that? The annoying thing is that temp becomes a generic list of objects but because i declared temp as a singular object, I can't loop through it because it has no Enumerator. How can I loop through a list of objects when I only have it as a propertyInfo of a class? I know at this point I will only be saving a list of empty strings out anyway, but thats fine. I would be happy to see the structure save out for now. Thanks in advance
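    The cast returns null because a List<MyClass> is not a List<object> -- generic lists are not covariant -- so the as conversion fails. Every List<T> does, however, implement the non-generic IEnumerable, and that is all the loop needs. A sketch of the list branch rewritten that way, reusing the names from the question (System.Collections must be imported):

```csharp
using System.Collections;   // non-generic IEnumerable

// Inside ReflectToXML, replacing the List<object> cast:
if (propertyType.IsGenericType &&
    propertyType.GetGenericTypeDefinition() == typeof(List<>))
{
    IEnumerable listObjects = aInfo.GetValue(anObject, null) as IEnumerable;
    if (listObjects != null)
    {
        foreach (object aListObject in listObjects)
        {
            ReflectToXML(aListObject, anXmlElement);   // recurse exactly as before
        }
    }
}
```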

  • Sockets server design advice

    - by Rob
    We are writing a socket server in c# and need some advice on the design. Background: Clients (from mobile devices) connect to our server app and we leave their socket open so we can send data back down to them whenever we need to. The amount of data varies but we generally send/receive data from each client every few seconds, so it's quite intensive. The amount of simultaneous connections can range from 50-500 (and more in the future). We have already written a server app using async sockets and it works, however we've come across some stumbling blocks and we need to make sure that what we're doing is correct. We have a collection which holds our client states (we have no socket/connection pool at the moment, should we?). Each time a client connects we create a socket and then wait for them to send us some data and in receiveCallBack we add their clientstate object to our connections dictionary (once we have verified who they are). When a client object then signs off we shutdown their socket and then close it as well as remove them from our collection of clients dictionary. Presumably everything happens in the right order, everything works as expected. However, almost everyday it stops accepting connections, or so we think, either that or it connects but doesn't actually do anything past that and we can't work out why it's just stopping. There are few things that we'r'e unsure about 1) Should we be creating some kind of connection pool as opposed to just a dictionary of client sockets 2) What happens to the sockets that connect but then don't get added to our dictionary, they just linger around in memory doing nothing, should we create ANOTHER dictionary that holds the sockets as soon as they are created? 3) What's the best way of finding if clients are no longer connected? We've read some many methods but we're not sure of the best one to use, send data or read data, if so how? 4) If we loop through the connections dictonary to check for disposed clients, should we be locking the dictionary, if so how does this affect other clients objects trying to use it at the same time, will it throw an error or just wait? 5) We often get disposedSocketException within ReceiveCallBack method at random times, does this mean we are safe to remove that socket from the collection? We can't seem to find any production type examples which show any of this working. Any advice would be greatly received
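    On question 3 (detecting dead clients), a common check shown as a sketch -- it assumes each client-state object exposes its Socket; Poll reporting "readable" while zero bytes are available means the peer closed or reset the connection:

```csharp
using System.Net.Sockets;

static bool IsConnected(Socket socket)
{
    try
    {
        // Readable with nothing to read = remote side has closed or reset.
        bool closed = socket.Poll(1000 /* microseconds */, SelectMode.SelectRead)
                      && socket.Available == 0;
        return !closed;
    }
    catch (SocketException)         { return false; }
    catch (ObjectDisposedException) { return false; }
}
```

    For questions 4 and 5: enumerating the dictionary while receive callbacks add or remove entries will throw, so take the lock, copy the values into a list, release the lock, then sweep the copy -- other callbacks only block for the brief copy. An ObjectDisposedException inside ReceiveCallBack is indeed a safe signal to drop that client, and sockets that connect but never authenticate are worth holding in a separate pending collection with a timestamp so they can be closed after a timeout instead of lingering.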

  • Does anyone have database, programming language/framework suggestions for a GUI point of sale system?

    - by Jason Down
    Our company has a point of sale system with many extras, such as ordering and receiving functionality, sales and order history etc. Our main issue is that the system was not designed properly from the ground up, so it takes too long to make fixes and handle requests from our customers. Also, the current technology we are using (Progress database, Progress 4GL for the language) incurs quite a bit of licensing expenses on our customers due to mutli-user license fees for database connections etc. After a lot of discussion it is looking like we will probably start over from scratch (while maintaining the current product at least for the time being). We are looking for a couple of things: Create the system with a nice GUI front end (it is currently CHUI and the application was not built in a way that allows us to redesign the front end... no layering or separation of business logic and gui...shudder). Create the system with the ability to modularize different functionality so the product doesn't have to include all features. This would keep the cost down for our current customers that want basic functionality and a lower price tag. The bells and whistles would be available for those that would want them. Use proper design patterns to make the product easy to add or change any part at any time (i.e. change the database or change the front end without needing to rewrite the application or most of it). This is a problem today because the Progress 4GL code is directly compiled against the database. Small changes in the database requires lots of code recompiling. Our new system will be Linux based, with a possibility of a client application providing functionality from one or more windows boxes. So what I'm looking for is any suggestions on which database and/or framework or programming language(s) someone might recommend for this sort of product. Anyone that has experience in this field might be able to point us in the right direction or even have some ideas of what to avoid. We have considered .NET and SQL Express (we don't need an enterprise level DB), but that would limit us to windows (as far as I know anyway). I have heard of Mono for writing .NET code in a Linux environment, but I don't know much about it yet. We've also considered a Java and MySql based implementation. To summarize we are looking to do the following: Keep licensing costs down on the technology we will use to develop the product (Oracle, yikes! MySQL, nice.) Deliver a solution that is easily maintainable and supportable. A solution that has a component capable of running on "old" hardware through a CHUI front end. (some of our customers have 40+ terminals which would be a ton of cash in order to convert over to a PC). Suggestions would be appreciated. Thanks [UPDATE] I should note that we are currently performing a total cost analysis. This question is intended to give us a couple of "educated" options to look into to include in or analysis. Anyone who could share experiences/suggestions about client/server setups would be appreciated (not just those who have experience with point of sale systems... that would just be a bonus).

  • Mercurial over ssh client and server on Windows

    - by Ben Von Handorf
    I'm trying to configure Mercurial for use with both a windows server (freeSSHd) and client (both command line and TortoiseHG). I'm using the most recent versions of everything... all downloaded in the past few days. Using public key auth, I have been able to get connected to the server and I'm able to use plink to execute "hg version" and get a response, but when I try to clone a repository from the ssh server the command appears to hang. Running with -v yields: hg -v clone ssh://<username>@<server>//hg/repositoryA testRepositoryA running "plink.exe -i "<path to private key file>" <username>@<server> "hg -R /hg/repositoryA serve --stdio"" with nothing more forthcoming. Running the hg serve command directly on the server yields an apparently responsive Mercurial server, but the clients do not seem to make any further requests. Running "hg serve" in the repository directory and cloning over http works perfectly. What should I be looking for to help debug this? Is there something the clients (hg and TortoiseHG) aren't sending to continue the request stream? Additional Information: If I change to an invalid repository on the target machine, the appropriate error is displayed, so it does appear that the remote hg is running and correctly evaluating the path. Running with --debug and --traceback results in: sending hello command sending between command It hangs here, until I CTRL-C Traceback (most recent call last): File "mercurial\dispatch.pyo", line 46, in _runcatch File "mercurial\dispatch.pyo", line 452, in _dispatch File "mercurial\dispatch.pyo", line 320, in runcommand File "mercurial\dispatch.pyo", line 504, in _runcommand File "mercurial\dispatch.pyo", line 457, in checkargs File "mercurial\dispatch.pyo", line 451, in <lambda> File "mercurial\util.pyo", line 402, in check File "mercurial\commands.pyo", line 636, in clone File "mercurial\hg.pyo", line 187, in clone File "mercurial\hg.pyo", line 63, in repository File "mercurial\sshrepo.pyo", line 51, in __init__ File "mercurial\sshrepo.pyo", line 73, in validate_repo KeyboardInterrupt interrupted! Responding to Ryan: There does not appear to be any CPU usage or increasing memory usage on the server. It appears to be waiting for the client to send a request or something similar. 11/19/2009 : More information: The problem is definitely in the freeSSHd/server side of the equation. Connecting to bitbucket over ssh with the same keyset works fine. Still working on this.
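    For reference, the client-side settings that usually matter for ssh:// on Windows, as a sketch for mercurial.ini or the repository's .hg/hgrc (paths are placeholders; in this case the hang turned out to be on the freeSSHd side, so treat this only as a sanity check of the client half):

```ini
[ui]
; -batch makes plink abort instead of waiting on an interactive prompt,
; which otherwise shows up as exactly this kind of silent hang.
ssh = "C:\Path\To\plink.exe" -ssh -batch -i "C:\Path\To\privatekey.ppk"

[paths]
; the double slash means an absolute path on the server, as in the clone URL above
default = ssh://<username>@<server>//hg/repositoryA
```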

  • Objective-C: uncompress/inflate an NSData object which contains a gzipped string

    - by user141146
    Hi, I'm using objective-c (iPhone) to read a sqlite table that contains blob data. The blob data is actually a gzipped string. The blob data is read into memory as an NSData object I then use a method (gzipInflate) which I grabbed from here (http://www.cocoadev.com/index.pl?NSDataCategory) -- see method below. However, the gzipInflate method is returning nil. If I step through the gzipInflate method, I can see that the status returned near the bottom of the method is -3, which I assume is some type of error (Z_STREAM_END = 1 and Z_OK = 0, but I don't know precisely what a status of -3 is). Consequently, the function returns nil even though the input NSData is 841 bytes in length. Any thoughts on what I'm doing wrong? Is there something wrong with the method? Something wrong with how I'm using the method? Any thoughts on how to test this? Thanks for any help! - (NSData *)gzipInflate { // NSData *compressed_data = [self.description dataUsingEncoding: NSUTF16StringEncoding]; NSData *compressed_data = self.description; if ([compressed_data length] == 0) return compressed_data; unsigned full_length = [compressed_data length]; unsigned half_length = [compressed_data length] / 2; NSMutableData *decompressed = [NSMutableData dataWithLength: full_length + half_length]; BOOL done = NO; int status; z_stream strm; strm.next_in = (Bytef *)[compressed_data bytes]; strm.avail_in = [compressed_data length]; strm.total_out = 0; strm.zalloc = Z_NULL; strm.zfree = Z_NULL; if (inflateInit2(&strm, (15+32)) != Z_OK) return nil; while (!done) { // Make sure we have enough room and reset the lengths. if (strm.total_out >= [decompressed length]) [decompressed increaseLengthBy: half_length]; strm.next_out = [decompressed mutableBytes] + strm.total_out; strm.avail_out = [decompressed length] - strm.total_out; // Inflate another chunk. status = inflate (&strm, Z_SYNC_FLUSH); NSLog(@"zstreamend = %d", Z_STREAM_END); if (status == Z_STREAM_END) done = YES; else if (status != Z_OK) break; //status here actually is -3 } if (inflateEnd (&strm) != Z_OK) return nil; // Set real length. if (done) { [decompressed setLength: strm.total_out]; return [NSData dataWithData: decompressed]; } else return nil; }
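    For reference, zlib's status codes are Z_OK = 0, Z_STREAM_END = 1 and Z_DATA_ERROR = -3, so the -3 means "the input is not a valid deflate/gzip stream". One likely suspect in the listing above: it inflates self.description -- the NSString debug description of the data -- rather than the NSData bytes themselves (the cocoadev category this was taken from appears to operate on self directly). A sketch of the lines that would change:

```objc
// was: NSData *compressed_data = self.description;
NSData *compressed_data = self;                     // the category is on NSData itself

strm.next_in  = (Bytef *)[compressed_data bytes];   // raw gzip bytes, not a hex string
strm.avail_in = (uInt)[compressed_data length];
```

    It is also worth dumping the first two bytes of the blob coming out of sqlite: a gzip stream starts with 0x1f 0x8b, and inflateInit2 with (15+32) accepts either gzip or zlib headers, so anything else means the stored blob is not what the method expects.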

  • Mysql - Help me change this single complex query to use temporary tables

    - by sandeepan-nath
    About the system: - There are tutors who create classes and packs - A tags based search approach is being followed.Tag relations are created when new tutors register and when tutors create packs (this makes tutors and packs searcheable). For details please check the section How tags work in this system? below. Following is the concerned query Can anybody help me suggest an approach using temporary tables. We have indexed all the relevant fields and it looks like this is the least time possible with this approach:- SELECT SUM(DISTINCT( t.tag LIKE "%Dictatorship%" OR tt.tag LIKE "%Dictatorship%" OR ttt.tag LIKE "%Dictatorship%" )) AS key_1_total_matches , SUM(DISTINCT( t.tag LIKE "%democracy%" OR tt.tag LIKE "%democracy%" OR ttt.tag LIKE "%democracy%" )) AS key_2_total_matches , COUNT(DISTINCT( od.id_od )) AS tutor_popularity, CASE WHEN ( IF(( wc.id_wc > 0 ), ( wc.wc_api_status = 1 AND wc.wc_type = 0 AND wc.class_date > '2010-06-01 22:00:56' AND wccp.status = 1 AND ( wccp.country_code = 'IE' OR wccp.country_code IN ( 'INT' ) ) ), 0) ) THEN 1 ELSE 0 END AS 'classes_published' , CASE WHEN ( IF(( lp.id_lp > 0 ), ( lp.id_status = 1 AND lp.published = 1 AND lpcp.status = 1 AND ( lpcp.country_code = 'IE' OR lpcp.country_code IN ( 'INT' ) ) ), 0) ) THEN 1 ELSE 0 END AS 'packs_published', td . *, u . * FROM tutor_details AS td JOIN users AS u ON u.id_user = td.id_user LEFT JOIN learning_packs_tag_relations AS lptagrels ON td.id_tutor = lptagrels.id_tutor LEFT JOIN learning_packs AS lp ON lptagrels.id_lp = lp.id_lp LEFT JOIN learning_packs_categories AS lpc ON lpc.id_lp_cat = lp.id_lp_cat LEFT JOIN learning_packs_categories AS lpcp ON lpcp.id_lp_cat = lpc.id_parent LEFT JOIN learning_pack_content AS lpct ON ( lp.id_lp = lpct.id_lp ) LEFT JOIN webclasses_tag_relations AS wtagrels ON td.id_tutor = wtagrels.id_tutor LEFT JOIN webclasses AS wc ON wtagrels.id_wc = wc.id_wc LEFT JOIN learning_packs_categories AS wcc ON wcc.id_lp_cat = wc.id_wp_cat LEFT JOIN learning_packs_categories AS wccp ON wccp.id_lp_cat = wcc.id_parent LEFT JOIN order_details AS od ON td.id_tutor = od.id_author LEFT JOIN orders AS o ON od.id_order = o.id_order LEFT JOIN tutors_tag_relations AS ttagrels ON td.id_tutor = ttagrels.id_tutor LEFT JOIN tags AS t ON t.id_tag = ttagrels.id_tag LEFT JOIN tags AS tt ON tt.id_tag = lptagrels.id_tag LEFT JOIN tags AS ttt ON ttt.id_tag = wtagrels.id_tag WHERE ( u.country = 'IE' OR u.country IN ( 'INT' ) ) AND CASE WHEN ( ( tt.id_tag = lptagrels.id_tag ) AND ( lp.id_lp > 0 ) ) THEN lp.id_status = 1 AND lp.published = 1 AND lpcp.status = 1 AND ( lpcp.country_code = 'IE' OR lpcp.country_code IN ( 'INT' ) ) ELSE 1 END AND CASE WHEN ( ( ttt.id_tag = wtagrels.id_tag ) AND ( wc.id_wc > 0 ) ) THEN wc.wc_api_status = 1 AND wc.wc_type = 0 AND wc.class_date > '2010-06-01 22:00:56' AND wccp.status = 1 AND ( wccp.country_code = 'IE' OR wccp.country_code IN ( 'INT' ) ) ELSE 1 END AND CASE WHEN ( od.id_od > 0 ) THEN od.id_author = td.id_tutor AND o.order_status = 'paid' AND CASE WHEN ( od.id_wc > 0 ) THEN od.can_attend_class = 1 ELSE 1 END ELSE 1 END AND ( t.tag LIKE "%Dictatorship%" OR t.tag LIKE "%democracy%" OR tt.tag LIKE "%Dictatorship%" OR tt.tag LIKE "%democracy%" OR ttt.tag LIKE "%Dictatorship%" OR ttt.tag LIKE "%democracy%" ) GROUP BY td.id_tutor HAVING key_1_total_matches = 1 AND key_2_total_matches = 1 ORDER BY tutor_popularity DESC, u.surname ASC, u.name ASC LIMIT 0, 20 The problem The results returned by the above query are correct (AND logic working as per expectation), but the time taken by 
the query rises alarmingly for heavier data and for the current data I have it is like 10 seconds as against normal query timings of the order of 0.005 - 0.0002 seconds, which makes it totally unusable. Somebody suggested in my previous question to do the following:- create a temporary table and insert here all relevant data that might end up in the final result set run several updates on this table, joining the required tables one at a time instead of all of them at the same time finally perform a query on this temporary table to extract the end result All this was done in a stored procedure, the end result has passed unit tests, and is blazing fast. I have never worked with temporary tables till now. Only if I could get some hints, kind of schematic representations so that I can start with... Is there something faulty with the query? What can be the reason behind 10+ seconds of execution time? How tags work in this system? When a tutor registers, tags are entered and tag relations are created with respect to tutor's details like name, surname etc. When a Tutors create packs, again tags are entered and tag relations are created with respect to pack's details like pack name, description etc. tag relations for tutors stored in tutors_tag_relations and those for packs stored in learning_packs_tag_relations. All individual tags are stored in tags table. The explain query output:- Please see this screenshot - http://www.test.examvillage.com/Explain_query_improved.jpg
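    A schematic sketch of the temporary-table approach, with only the tutor-tag pass written out (the pack and class passes follow the same shape); the names come from the query above, but the country/status filters on packs and classes are left out to keep the outline readable:

```sql
-- 1. Seed the temp table with the small, always-needed core rows.
CREATE TEMPORARY TABLE tmp_results
SELECT td.id_tutor, u.name, u.surname,
       0 AS key_1_total_matches,
       0 AS key_2_total_matches,
       0 AS tutor_popularity
FROM tutor_details td
JOIN users u ON u.id_user = td.id_user
WHERE u.country = 'IE' OR u.country IN ('INT');

-- 2. One UPDATE per satellite table instead of one giant multi-way join.
UPDATE tmp_results tr
SET key_1_total_matches = key_1_total_matches OR EXISTS (
        SELECT 1 FROM tutors_tag_relations ttagrels
        JOIN tags t ON t.id_tag = ttagrels.id_tag
        WHERE ttagrels.id_tutor = tr.id_tutor AND t.tag LIKE '%Dictatorship%'),
    key_2_total_matches = key_2_total_matches OR EXISTS (
        SELECT 1 FROM tutors_tag_relations ttagrels
        JOIN tags t ON t.id_tag = ttagrels.id_tag
        WHERE ttagrels.id_tutor = tr.id_tutor AND t.tag LIKE '%democracy%');
-- (repeat the same pattern for learning_packs_tag_relations and
--  webclasses_tag_relations, and another UPDATE over order_details/orders
--  to fill tutor_popularity)

-- 3. The final query only touches the small temp table.
SELECT * FROM tmp_results
WHERE key_1_total_matches = 1 AND key_2_total_matches = 1
ORDER BY tutor_popularity DESC, surname ASC, name ASC
LIMIT 0, 20;
```

    Each UPDATE touches only one family of tables at a time, which avoids the many-way LEFT JOIN fan-out that the single query forces the optimizer into.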


  • Best way to handle multiple tables to replace one big table in Rails? (e.g. 'Books1', 'Books2', etc.)

    - by mikep
    Hello, I've decided to use multiple tables for an entity (e.g. Books1, Books2, Books3, etc.), instead of just one main table which could end up having a lot of rows (e.g. just Books). I'm doing this to try to avoid a potential future performance drop that could come with having too many rows in one table. With that, I'm looking for a good way to handle this in Rails, mainly by trying to avoid loading a bunch of unused associations. (I know that I could use a partition for this, but, for now, I've decided to go the 'multiple tables' route.) Each user has their books placed into a specific table. The actual book table is chosen when the user is created, and all of their books go into the same table. I'm going to split the adds across the tables. The goal is to try and keep each table pretty much even -- but that's a different issue. One thing I don't particularly want to have is a bunch of unused associations in the User class. Right now, it looks like I'd have to do the following: class User < ActiveRecord::Base has_many :books1, :books2, :books3, :books4, :books5 end class Books1 < ActiveRecord::Base belongs_to :user end class Books2 < ActiveRecord::Base belongs_to :user end class Books3 < ActiveRecord::Base belongs_to :user end I'm assuming that the main performance hit would come in terms of memory and possibly some method call overhead for each User object, since it has to load all of those associations, which in turn creates all of those nice, dynamic model accessor methods like User.find_by_. But for each specific user, only one of the book tables would be usable/applicable, since all of a user's books are stored in the same table. So, only one of the associations would be in use at any time and any other has_many :bookX association that was loaded would be a waste. For example, with a user.id of 2, I'd only need books3.find_by_author('Author'), but the way I'm thinking of setting this up, I'd still have access to Books1..n. I don't really know what Ruby/Rails does internally with all of those has_many associations though, so maybe it's not so bad. But right now I'm thinking that it's really wasteful, and that there may just be a better, more efficient way of doing this. So, a few questions: 1) Is there some sort of special Ruby/Rails methodology that could be applied to this 'multiple tables to represent one entity' scheme? Are there any 'best practices' for this? 2) Is it really bad to have so many unused has_many associations for each object? Is there a better way to do this? 3) Does anyone have any advice on how to abstract the fact that there are multiple book tables behind a single books model/class? For example, so I can call books.find_by_author('Author') instead of books3.find_by_author('Author'). Thank you!
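    On question 3, one hedged sketch (assuming Rails 3-style relations and a hypothetical books_table_number column recorded when the user is created) is to skip the five has_many declarations entirely and resolve the right model lazily:

        class User < ActiveRecord::Base
          # books_table_number (1..5) is an assumed column recording which table
          # this user's books live in; only that one Books class is ever touched.
          def books
            Object.const_get("Books#{books_table_number}").where(:user_id => id)
          end
        end

        # Usage: user.books.find_by_author('Author')
        # instead of having to know it must be Books3.find_by_author(...)

    Because the constant is resolved only when books is called, no unused association proxies are built on each User, and callers never need to know which physical table backs a given user.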


  • WPF Choppy Animation

    - by Chris Dunaway
    WPF Windows-XP SP3 I'm having a problem with a simple WPF animation. I use the following Xaml code (in XamlPad and also in a WPF project): <Page xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:sys="clr-namespace:System;assembly=mscorlib" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" > <Border Name="MyBorder" BorderThickness="10" BorderBrush="Blue" CornerRadius="10" Background="DarkRed" > <Rectangle Name="MyRectangle" Margin="10" StrokeDashArray="2.0,1.0" StrokeThickness="10" RadiusX="10" RadiusY="10" Stroke="Black" StrokeDashOffset="0"> <Rectangle.Triggers> <EventTrigger RoutedEvent="Rectangle.Loaded"> <BeginStoryboard> <Storyboard> <DoubleAnimation Storyboard.TargetName="MyRectangle" Storyboard.TargetProperty="StrokeDashOffset" From="0.0" To="3.0" Duration="0:0:1" RepeatBehavior="Forever" Timeline.DesiredFrameRate="30" /> </Storyboard> </BeginStoryboard> </EventTrigger> </Rectangle.Triggers> </Rectangle> </Border> </Page> It has the effect of causing the border to animate around the rectangle. After a fresh reboot of the machine, this animation is nice and smooth. However, I tend to leave my machine on all the time and after a period time elapses (I don't know how long), the animation starts stuttering and becomes choppy. I thought that it may be memory or resource issues, but after shutting down all other apps and any services that seem unnecessary, the stuttering still continues. However, after a system reboot, the animation is smooth again! I get the same symptoms in a WPF app or in XamlPad. In the case of the app, it doesn't seem to make any difference whether I run in the debugger or if I run the executable directly. I applied the patch at this link: http://support.microsoft.com/kb/981741 and I thought that it had taken care of the issue, but it seems not to have. I have seen some posts that might indicate that using transparency might affect animation, but as you can see, my xaml does not use transparency. Can anyone give me some suggestions on how to determine what the problem is? Are there any WPF diagnostic tools that might help? UPDATE: I have checked my video drivers and they are the latest version. (nVidia GeForce 8400 GS)
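    Since the animation is smooth after a reboot and degrades with uptime, the usual suspect is the video driver / composition path rather than the XAML itself. A small diagnostic sketch you could drop into a test window's code-behind (the class and member names are illustrative, not from your project) logs the hardware rendering tier and flags slow frames, which helps separate a WPF problem from a driver problem:

        using System;
        using System.Diagnostics;
        using System.Windows;
        using System.Windows.Media;

        public partial class DiagnosticsWindow : Window
        {
            private readonly Stopwatch _clock = Stopwatch.StartNew();
            private TimeSpan _lastFrame;

            public DiagnosticsWindow()
            {
                InitializeComponent();
                // Tier 2 = full hardware acceleration, Tier 0 = software rendering.
                Debug.WriteLine("Render tier: " + (RenderCapability.Tier >> 16));
                CompositionTarget.Rendering += OnRendering;
            }

            private void OnRendering(object sender, EventArgs e)
            {
                TimeSpan now = _clock.Elapsed;
                double ms = (now - _lastFrame).TotalMilliseconds;
                // The storyboard asks for 30 fps (~33 ms/frame); log anything much slower.
                if (ms > 50)
                    Debug.WriteLine("Slow frame: " + ms.ToString("F1") + " ms");
                _lastFrame = now;
            }
        }

    If the tier reads 0, or the slow frames only start appearing after the machine has been up for a long time, that points toward the video driver (or GDI resource exhaustion) rather than the storyboard itself.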


  • debugging JBoss 100% CPU usage

    - by NateS
    Originally posted on Server Fault, where it was suggested this question might better asked here. We are using JBoss to run two of our WARs. One is our web app, the other is our web service. The web app accesses a database on another machine and makes requests to the web service. The web service makes JMS requests to other machines, aggregates the data, and returns it. At our biggest client, about once a month the JBoss Java process takes 100% of all CPUs. The machine running JBoss has 8 CPUs. Our web app is still accessible during this time, however pages take about 3 minutes to load. Restarting JBoss restores everything to normal. The database machine and all the other machines are fine, only the machine running JBoss is affected. Memory usage is normal. Network utilization is normal. There are no suspect error messages in the JBoss logs. I have set up a test environment as close as possible to the client's production environment and I've done load testing with as much as 2x the number of concurrent users. I have not gotten my test environment to replicate the problem. Where do we go from here? How can we narrow down the problem? Currently the only plan we have is to wait until the problem occurs in production on its own, then do some debugging to determine the cause. So far people have just restarted JBoss when the problem occurred to minimize down time. Next time it happens they will get a developer to take a look. The question is, next time it happens, what can be done to determine the cause? We could setup a separate JBoss instance on the same box and install the web app separately from the web service. This way when the problem next occurs we will know which WAR has the problem (assuming it is our code). This doesn't narrow it down much though. Should I enable JMX remote? This way the next time the problem occurs I can connect with VisualVM and see which threads are taking the CPU and what the hell they are doing. However, is there a significant down side to enabling JMX remote in a production environment? Is there another way to see what threads are eating the CPU and to get a stacktrace to see what they are doing? Any other ideas? Thanks!
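    Enabling remote JMX (or just running jstack against the JBoss pid when the spike happens) is the low-effort option, but if changing JVM flags in production is a concern, a sketch like the one below can be bundled into one of the WARs behind an admin-only URL and invoked when the problem occurs. It samples per-thread CPU time via ThreadMXBean and prints the stacks of the biggest consumers; the class name and the 5-second window are arbitrary choices:

        import java.lang.management.ManagementFactory;
        import java.lang.management.ThreadInfo;
        import java.lang.management.ThreadMXBean;
        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.Comparator;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class HotThreadReport {

            /** Prints the 5 threads that burned the most CPU during a 5 second window. */
            public static void print() throws InterruptedException {
                ThreadMXBean mx = ManagementFactory.getThreadMXBean();
                Map<Long, Long> before = sample(mx);
                Thread.sleep(5000);
                Map<Long, Long> after = sample(mx);

                List<long[]> deltas = new ArrayList<long[]>();
                for (Map.Entry<Long, Long> e : after.entrySet()) {
                    Long prev = before.get(e.getKey());
                    if (prev != null && prev >= 0 && e.getValue() >= 0) {
                        deltas.add(new long[] { e.getValue() - prev, e.getKey() });
                    }
                }
                Collections.sort(deltas, new Comparator<long[]>() {
                    public int compare(long[] a, long[] b) { return Long.signum(b[0] - a[0]); }
                });

                for (int i = 0; i < Math.min(5, deltas.size()); i++) {
                    ThreadInfo info = mx.getThreadInfo(deltas.get(i)[1], 25);
                    if (info == null) continue;  // thread died between samples
                    System.out.println(info.getThreadName() + " used "
                            + (deltas.get(i)[0] / 1000000L) + " ms CPU");
                    for (StackTraceElement frame : info.getStackTrace()) {
                        System.out.println("    at " + frame);
                    }
                }
            }

            private static Map<Long, Long> sample(ThreadMXBean mx) {
                Map<Long, Long> cpu = new HashMap<Long, Long>();
                for (long id : mx.getAllThreadIds()) {
                    cpu.put(id, mx.getThreadCpuTime(id));  // nanoseconds, -1 if unsupported
                }
                return cpu;
            }
        }

    Note that this has to run inside the JBoss JVM to see its threads; from outside, jstack plus a per-thread CPU view (for example top with threads shown) answers the same question without deploying anything.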


  • FFmpeg audio doesn't work in converted videos

    - by Juddy Swaft
    NOTICE: when i convert videos via terminal and download them from ftp into pc the audio works fine. I use: if($ext == "avi" && $convert_avi == true) { $convert_source = _VIDEOS_DIR_PATH.$new_name; $conv_name = substr(md5($file['name'].rand(1,888)), 2, 10).".mp4"; $converted_file = _VIDEOS_DIR_PATH.$conv_name; $ffmpeg_command = 'ffmpeg -i '.$convert_source.' -acodec libmp3lame -vcodec libx264 -s 1280x720 -ar 44100 -async 44100 -r 29.970 -ac 2 -qscale 5 '.$converted_file; echo exec($ffmpeg_command); $sql = "UPDATE pm_temp SET url = '".$conv_name."' WHERE url = '".$new_name."' LIMIT 1"; $result = @mysql_query($sql); unlink($convert_source); } This code to convert avi to mp4 ffmpeg concole output: root@1tb:~# ffmpeg -i sample.avi -acodec libmp3lame -vcodec libx264 -s 1280x720 -ar 44100 -async 44100 -r 29.970 -ac 2 -qscale 5 goodsample.mp4 ffmpeg version 0.7.15, Copyright (c) 2000-2013 the FFmpeg developers built on Feb 22 2013 07:18:58 with gcc 4.4.5 configuration: --enable-libdc1394 --prefix=/usr --extra-cflags='-Wall -g ' --cc='ccache cc' --enable-shared --enable-libmp3lame --enable-gpl --enable-libvorbis --enable-pthreads --enable-libfaac --enable-libxvid --enable-postproc --enable-x11grab --enable-libgsm --enable-libtheora --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libx264 --enable-libspeex --enable-nonfree --disable-stripping --enable-avfilter --enable-libdirac --disable-decoder=libdirac --enable-libfreetype --enable-libschroedinger --disable-encoder=libschroedinger - s libavutil 50. 43. 0 / 50. 43. 0 libavcodec 52.123. 0 / 52.123. 0 libavformat 52.111. 0 / 52.111. 0 libavdevice 52. 5. 0 / 52. 5. 0 libavfilter 1. 80. 0 / 1. 80. 0 libswscale 0. 14. 1 / 0. 14. 1 libpostproc 51. 2. 0 / 51. 2. 0 [mp3 @ 0x191d4100] Header missing [mpeg4 @ 0x191d1dc0] Invalid and inefficient vfw-avi packed B frames detected Input #0, avi, from 'sample.avi': Metadata: encoder : VirtualDubMod 1.5.10.2 (build 2540/release) Duration: 00:01:01.81, start: 0.000000, bitrate: 1194 kb/s Stream #0.0: Video: mpeg4, yuv420p, 640x352 [PAR 1:1 DAR 20:11], 23.98 tbr, Stream #0.1: Audio: mp3, 48000 Hz, stereo, s16, 128 kb/s [buffer @ 0x191d1c80] w:640 h:352 pixfmt:yuv420p tb:1/1000000 sar:1/1 sws_param: [scale @ 0x191d6880] w:640 h:352 fmt:yuv420p -> w:1280 h:720 fmt:yuv420p flags:0 [libx264 @ 0x191ce5a0] Default settings detected, using medium profile [libx264 @ 0x191ce5a0] using SAR=45/44 [libx264 @ 0x191ce5a0] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle S [libx264 @ 0x191ce5a0] profile High, level 3.1 [libx264 @ 0x191ce5a0] 264 - core 118 - H.264/MPEG-4 AVC codec - Copyleft 2003-2 6 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_off 1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_l Output #0, mp4, to 'goodsample.mp4': Metadata: encoder : Lavf52.111.0 Stream #0.0: Video: libx264, yuv420p, 1280x720 [PAR 45:44 DAR 20:11], q=2-31 Stream #0.1: Audio: libmp3lame, 44100 Hz, stereo, s16, 64 kb/s Stream mapping: Stream #0.0 -> #0.0 Stream #0.1 -> #0.1 Press [q] to stop, [?] 
for help [mp3 @ 0x191d4100] Header missing Error while decoding stream #0.1 [mpeg4 @ 0x191d1dc0] Invalid and inefficient vfw-avi packed B frames detected [mp3 @ 0x191d4100] incomplete frame 9467kB time=00:01:00.32 bitrate=1285.5kbits/ Error while decoding stream #0.1 frame= 1852 fps= 20 q=29.0 Lsize= 9652kB time=00:01:01.72 bitrate=1280.9kbits video:9121kB audio:483kB global headers:0kB muxing overhead 0.499688% frame I:11 Avg QP:16.78 size: 51456 [libx264 @ 0x191ce5a0] frame P:784 Avg QP:20.81 size: 8954 [libx264 @ 0x191ce5a0] frame B:1057 Avg QP:26.06 size: 1659 [libx264 @ 0x191ce5a0] consecutive B-frames: 22.0% 3.1% 7.5% 67.4% [libx264 @ 0x191ce5a0] mb I I16..4: 31.1% 59.8% 9.1% [libx264 @ 0x191ce5a0] mb P I16..4: 1.8% 2.6% 0.2% P16..4: 24.3% 7.0% 4.0 [libx264 @ 0x191ce5a0] mb B I16..4: 0.1% 0.1% 0.0% B16..8: 22.7% 0.8% 0.2 [libx264 @ 0x191ce5a0] 8x8 transform intra:57.0% inter:72.6% [libx264 @ 0x191ce5a0] coded y,uvDC,uvAC intra: 44.4% 33.3% 10.3% inter: 7.6% 5. [libx264 @ 0x191ce5a0] i16 v,h,dc,p: 68% 14% 8% 10% [libx264 @ 0x191ce5a0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 21% 14% 27% 5% 7% 7% 6 [libx264 @ 0x191ce5a0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 28% 14% 14% 6% 10% 9% 7 [libx264 @ 0x191ce5a0] i8c dc,h,v,p: 67% 13% 17% 3% [libx264 @ 0x191ce5a0] Weighted P-Frames: Y:1.9% UV:0.4% [libx264 @ 0x191ce5a0] ref P L0: 62.2% 12.8% 10.3% 14.5% 0.2% [libx264 @ 0x191ce5a0] ref B L0: 88.1% 5.5% 6.4% [libx264 @ 0x191ce5a0] ref B L1: 95.7% 4.3% [libx264 @ 0x191ce5a0] kb/s:1209.03 I know there are a couple of errors there, but I don't know how to fix them. Also, I would be very thankful if someone could help reduce the video size, but that is not the main problem; the converted video still weighs about as much as the original AVI.
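    A hedged experiment for the audio errors (I can't verify against this exact 0.7.15 build, so treat the flag values as assumptions to try): -async takes a small sync-method value rather than a sample rate, so -async 44100 is almost certainly not doing what was intended, and the output MP3 track is currently falling back to 64 kb/s. Something along these lines in the PHP wrapper keeps the rest of the command intact:

        // Sketch only: same variables as above. The changes are -async 1
        // (instead of 44100) and an explicit MP3 bitrate; -b caps the video
        // bitrate so the MP4 does not end up as large as the source AVI.
        $ffmpeg_command = 'ffmpeg -i ' . $convert_source
            . ' -acodec libmp3lame -ab 128k -ar 44100 -ac 2 -async 1'
            . ' -vcodec libx264 -s 1280x720 -r 29.970 -b 1000k '
            . $converted_file;

    Running the adjusted command by hand in the terminal first, on the same sample.avi, would confirm whether the "Header missing" messages go away before wiring it back into the upload script.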


  • Exposing the AnyConnect HTTPS service to outside network

    - by Maciej Swic
    We have a Cisco ASA 5505 with firmware ASA9.0(1) and ASDM 7.0(2). It is configured with a public ip address, and when trying to reach it from the outside by HTTPS for AnyConnect VPN, we get the following log output: 6 Nov 12 2012 07:01:40 <client-ip> 51000 <asa-ip> 443 Built inbound TCP connection 2889 for outside:<client-ip>/51000 (<client-ip>/51000) to identity:<asa-ip>/443 (<asa-ip>/443) 6 Nov 12 2012 07:01:40 <client-ip> 50999 <asa-ip> 443 Built inbound TCP connection 2890 for outside:<client-ip>/50999 (<client-ip>/50999) to identity:<asa-ip>/443 (<asa-ip>/443) 6 Nov 12 2012 07:01:40 <client-ip> 51000 <asa-ip> 443 Teardown TCP connection 2889 for outside:<client-ip>/51000 to identity:<asa-ip>/443 duration 0:00:00 bytes 0 No valid adjacency 6 Nov 12 2012 07:01:40 <client-ip> 50999 <asa-ip> 443 Teardown TCP connection 2890 for outside:<client-ip>/50999 to identity:<asa-ip>/443 duration 0:00:00 bytes 0 No valid adjacency We finished the startup wizard and the anyconnect vpn wizard and here is the resulting configuration: Cryptochecksum: 12262d68 23b0d136 bb55644a 9c08f86b : Saved : Written by enable_15 at 07:08:30.519 UTC Mon Nov 12 2012 ! ASA Version 9.0(1) ! hostname vpn domain-name office.<redacted>.com enable password <redacted> encrypted passwd <redacted> encrypted names ip local pool vpn-pool 192.168.67.2-192.168.67.253 mask 255.255.255.0 ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 ! interface Ethernet0/2 ! interface Ethernet0/3 ! interface Ethernet0/4 ! interface Ethernet0/5 ! interface Ethernet0/6 ! interface Ethernet0/7 ! interface Vlan1 nameif inside security-level 100 ip address 192.168.68.250 255.255.255.0 ! interface Vlan2 nameif outside security-level 0 ip address <redacted> 255.255.255.248 ! ftp mode passive dns server-group DefaultDNS domain-name office.<redacted>.com object network obj_any subnet 0.0.0.0 0.0.0.0 pager lines 24 logging enable logging asdm informational mtu outside 1500 mtu inside 1500 icmp unreachable rate-limit 1 burst-size 1 no asdm history enable arp timeout 14400 no arp permit-nonconnected ! 
object network obj_any nat (inside,outside) dynamic interface timeout xlate 3:00:00 timeout pat-xlate 0:00:30 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 timeout floating-conn 0:00:00 dynamic-access-policy-record DfltAccessPolicy user-identity default-domain LOCAL http server enable http 192.168.68.0 255.255.255.0 inside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart warmstart crypto ipsec ikev2 ipsec-proposal DES protocol esp encryption des protocol esp integrity sha-1 md5 crypto ipsec ikev2 ipsec-proposal 3DES protocol esp encryption 3des protocol esp integrity sha-1 md5 crypto ipsec ikev2 ipsec-proposal AES protocol esp encryption aes protocol esp integrity sha-1 md5 crypto ipsec ikev2 ipsec-proposal AES192 protocol esp encryption aes-192 protocol esp integrity sha-1 md5 crypto ipsec ikev2 ipsec-proposal AES256 protocol esp encryption aes-256 protocol esp integrity sha-1 md5 crypto ipsec security-association pmtu-aging infinite crypto dynamic-map SYSTEM_DEFAULT_CRYPTO_MAP 65535 set ikev2 ipsec-proposal AES256 AES192 AES 3DES DES crypto map outside_map 65535 ipsec-isakmp dynamic SYSTEM_DEFAULT_CRYPTO_MAP crypto map outside_map interface outside crypto map inside_map 65535 ipsec-isakmp dynamic SYSTEM_DEFAULT_CRYPTO_MAP crypto map inside_map interface inside crypto ca trustpoint _SmartCallHome_ServerCA crl configure crypto ca trustpoint ASDM_TrustPoint0 enrollment self subject-name CN=vpn proxy-ldc-issuer crl configure crypto ca trustpool policy crypto ca certificate chain _SmartCallHome_ServerCA certificate ca 6ecc7aa5a7032009b8cebcf4e952d491 <redacted> quit crypto ca certificate chain ASDM_TrustPoint0 certificate f678a050 <redacted> quit crypto ikev2 policy 1 encryption aes-256 integrity sha group 5 2 prf sha lifetime seconds 86400 crypto ikev2 policy 10 encryption aes-192 integrity sha group 5 2 prf sha lifetime seconds 86400 crypto ikev2 policy 20 encryption aes integrity sha group 5 2 prf sha lifetime seconds 86400 crypto ikev2 policy 30 encryption 3des integrity sha group 5 2 prf sha lifetime seconds 86400 crypto ikev2 policy 40 encryption des integrity sha group 5 2 prf sha lifetime seconds 86400 crypto ikev2 enable outside client-services port 443 crypto ikev2 remote-access trustpoint ASDM_TrustPoint0 telnet timeout 5 ssh 192.168.68.0 255.255.255.0 inside ssh timeout 5 console timeout 0 vpn-addr-assign local reuse-delay 60 dhcpd auto_config outside ! dhcpd address 192.168.68.254-192.168.68.254 inside ! 
threat-detection basic-threat threat-detection statistics access-list no threat-detection statistics tcp-intercept ssl trust-point ASDM_TrustPoint0 inside ssl trust-point ASDM_TrustPoint0 outside webvpn enable outside enable inside anyconnect image disk0:/anyconnect-win-3.1.01065-k9.pkg 1 anyconnect image disk0:/anyconnect-linux-3.1.01065-k9.pkg 2 anyconnect image disk0:/anyconnect-macosx-i386-3.1.01065-k9.pkg 3 anyconnect profiles GM-AnyConnect_client_profile disk0:/GM-AnyConnect_client_profile.xml anyconnect enable tunnel-group-list enable group-policy GroupPolicy_GM-AnyConnect internal group-policy GroupPolicy_GM-AnyConnect attributes wins-server none dns-server value 192.168.68.254 vpn-tunnel-protocol ikev2 ssl-client default-domain value office.<redacted>.com webvpn anyconnect profiles value GM-AnyConnect_client_profile type user username <redacted> password <redacted> encrypted tunnel-group GM-AnyConnect type remote-access tunnel-group GM-AnyConnect general-attributes address-pool vpn-pool default-group-policy GroupPolicy_GM-AnyConnect tunnel-group GM-AnyConnect webvpn-attributes group-alias GM-AnyConnect enable ! class-map inspection_default match default-inspection-traffic ! ! policy-map type inspect dns preset_dns_map parameters message-length maximum client auto message-length maximum 512 policy-map global_policy class inspection_default inspect dns preset_dns_map inspect ftp inspect h323 h225 inspect h323 ras inspect rsh inspect rtsp inspect esmtp inspect sqlnet inspect skinny inspect sunrpc inspect xdmcp inspect sip inspect netbios inspect tftp inspect ip-options ! service-policy global_policy global prompt hostname context call-home reporting anonymous Cryptochecksum:12262d6823b0d136bb55644a9c08f86b : end Clearly we are missing something, but the question is, what?
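    One thing that looks absent from the posted configuration is a default route on the outside interface; without it the ASA cannot answer a client that is not on the directly connected subnet, which would match the immediate teardown with 0 bytes and "No valid adjacency". A hedged sketch (203.0.113.1 is a placeholder; use the provider's actual next hop):

        route outside 0.0.0.0 0.0.0.0 203.0.113.1 1
        ! verify afterwards
        show route
        show arp

    This is only an assumption based on what the pasted config shows; if a default route is in fact configured and simply not included above, the next things to check are the upstream device's route back to the ASA's public address and the ARP entry for it.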


  • Marking multi-level nested forms as "dirty" in Rails

    - by Charles Kihe
    I have a three-level multi-nested form in Rails. The setup is like this: Projects have many Milestones, and Milestones have many Notes. The goal is to have everything editable within the page with JavaScript, where we can add multiple new Milestones to a Project within the page, and add new Notes to new and existing Milestones. Everything works as expected, except that when I add new notes to an existing Milestone (new Milestones work fine when adding notes to them), the new notes won't save unless I edit any of the fields that actually belong to the Milestone to mark the form "dirty"/edited. Is there a way to flag the Milestone so that the new Notes that have been added will save? Edit: sorry, it's hard to paste in all of the code because there's so many parts, but here goes: Models class Project < ActiveRecord::Base has_many :notes, :dependent => :destroy has_many :milestones, :dependent => :destroy accepts_nested_attributes_for :milestones, :allow_destroy => true accepts_nested_attributes_for :notes, :allow_destroy => true, :reject_if => proc { |attributes| attributes['content'].blank? } end class Milestone < ActiveRecord::Base belongs_to :project has_many :notes, :dependent => :destroy accepts_nested_attributes_for :notes, :allow_destroy => true, :allow_destroy => true, :reject_if => proc { |attributes| attributes['content'].blank? } end class Note < ActiveRecord::Base belongs_to :milestone belongs_to :project scope :newest, lambda { |*args| order('created_at DESC').limit(*args.first || 3) } end I'm using an jQuery-based, unobtrusive version of Ryan Bates' combo helper/JS code to get this done. Application Helper def add_fields_for_association(f, association, partial) new_object = f.object.class.reflect_on_association(association).klass.new fields = f.fields_for(association, new_object, :child_index => "new_#{association}") do |builder| render(partial, :f => builder) end end I render the form for the association in a hidden div, and then use the following JavaScript to find it and add it as needed. JavaScript function addFields(link, association, content, func) { var newID = new Date().getTime(); var regexp = new RegExp("new_" + association, "g"); var form = content.replace(regexp, newID); var link = $(link).parent().next().before(form).prev(); if (func) { func.call(); } return link; } I'm guessing the only other relevant piece of code that I can think of would be the create method in the NotesController: def create respond_with(@note = @owner.notes.create(params[:note])) do |format| format.js { render :json => @owner.notes.newest(3).all.to_json } format.html { redirect_to((@milestone ? [@project, @milestone, @note] : [@project, @note]), :notice => 'Note was successfully created.') } end end The @owner ivar is created in the following before filter: def load_milestone @milestone = @project.milestones.find(params[:milestone_id]) if params[:milestone_id] end def determine_owner @owner = load_milestone @owner ||= @project end Thing is, all this seems to work fine, except when I'm adding new notes to existing milestones. The milestone has to be "touched" in order for new notes to save, or else Rails won't pay attention.
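    If the underlying issue is that an unchanged Milestone is never considered dirty, so autosave does not cascade down to its freshly built Notes, one low-tech workaround is to mark any milestone that picked up new notes as changed before saving. This is a sketch assuming a conventional ProjectsController#update that receives the whole nested form; adapt it to wherever the form is actually saved:

        # ProjectsController#update (sketch)
        def update
          @project = Project.find(params[:id])
          @project.attributes = params[:project]

          # Assigning updated_at in memory flips changed? to true, so autosave
          # walks into this milestone and saves its newly built notes as well.
          @project.milestones.each do |milestone|
            milestone.updated_at = Time.now if milestone.notes.any?(&:new_record?)
          end

          if @project.save
            redirect_to @project, :notice => 'Project was successfully updated.'
          else
            render :edit
          end
        end

    An alternative in the same spirit is to have addFields() also bump a hidden milestone attribute in the form, so the record arrives at the server already dirty.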


  • PHP: Proper way of using a PDO database connection in a class

    - by Cortopasta
    Trying to organize all my code into classes, and I can't get the database queries to work inside a class. I tested it without the class wrapper, and it worked fine. Inside the class = no dice. What about my classes is messing this up? class ac { public function dbConnect() { global $dbcon; $dbInfo['server'] = "localhost"; $dbInfo['database'] = "sn"; $dbInfo['username'] = "sn"; $dbInfo['password'] = "password"; $con = "mysql:host=" . $dbInfo['server'] . "; dbname=" . $dbInfo['database']; $dbcon = new PDO($con, $dbInfo['username'], $dbInfo['password']); $dbcon->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); $error = $dbcon->errorInfo(); if($error[0] != "") { print "<p>DATABASE CONNECTION ERROR:</p>"; print_r($error); } } public function authentication() { global $dbcon; $plain_username = $_POST['username']; $md5_password = md5($_POST['password']); $ac = new ac(); if (is_int($ac->check_credentials($plain_username, $md5_password))) { ?> <p>Welcome!</p> <!--go to account manager here--> <?php } else { ?> <p>Not a valid username and/or password. Please try again.</p> <?php unset($_POST['username']); unset($_POST['password']); $ui = new ui(); $ui->start(); } } private function check_credentials($plain_username, $md5_password) { global $dbcon; $userid = $dbcon->prepare('SELECT id FROM users WHERE username = :username AND password = :password LIMIT 1'); $userid->bindParam(':username', $plain_username); $userid->bindParam(':password', $md5_password); $userid->execute(); print_r($dbcon->errorInfo()); $id = $userid->fetch(); Return $id; } } And if it's any help, here's the class that's calling it: require_once("ac/acclass.php"); $ac = new ac(); $ac->dbconnect(); class ui { public function start() { if ((!isset($_POST['username'])) && (!isset($_POST['password']))) { $ui = new ui(); $ui->loginform(); } else { $ac = new ac(); $ac->authentication(); } } private function loginform() { ?> <form id="userlogin" action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post"> User:<input type="text" name="username"/><br/> Password:<input type="password" name="password"/><br/> <input type="submit" value="submit"/> </form> <?php } }
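    A hedged sketch of the usual pattern for the title question: hold the PDO handle in a property set by the constructor instead of a global, and let the other methods use $this->dbcon. The DSN, credentials and table names below are the ones from the code above:

        class ac {
            private $dbcon;

            public function __construct() {
                $dsn = 'mysql:host=localhost;dbname=sn';
                $this->dbcon = new PDO($dsn, 'sn', 'password');
                $this->dbcon->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
            }

            public function authentication() {
                $id = $this->check_credentials($_POST['username'], md5($_POST['password']));
                if ($id !== false) {
                    print "<p>Welcome!</p>";
                } else {
                    print "<p>Not a valid username and/or password. Please try again.</p>";
                }
            }

            private function check_credentials($plain_username, $md5_password) {
                $stmt = $this->dbcon->prepare(
                    'SELECT id FROM users WHERE username = :username AND password = :password LIMIT 1');
                $stmt->bindParam(':username', $plain_username);
                $stmt->bindParam(':password', $md5_password);
                $stmt->execute();
                return $stmt->fetchColumn(); // false when no matching row
            }
        }

    With ERRMODE_EXCEPTION set in the constructor, a failing query throws immediately instead of requiring errorInfo() checks, which usually makes the "works outside the class, fails inside" situation reveal its real error message. The ui class can then construct (or be handed) a single ac instance rather than relying on the $dbcon global.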

