Search Results

Search found 834 results on 34 pages for 'looping'.

Page 30 of 34

  • Python: Script works, but seems to deadlock after some time

    - by sberry2A
    I have the following script, which works for the most part (Link to PasteBin). The script's job is to start a number of threads, each of which starts a subprocess with Popen. The output from each subprocess looks like:

        1
        2
        3
        .
        .
        .
        n
        Done

    Basically, each subprocess transfers 10M records from tables in one database to different tables in another database, with a lot of data massaging/manipulation in between because of the different schemas. If the subprocess fails at any point in its execution (bad records, duplicate primary keys, etc.), or if it completes successfully, it outputs "Done\n". If there are no more records to select for transfer, it outputs "NO DATA\n".

    My intent was for my script, "tableTransfer.py", to spawn a number of these processes, read their output, and in turn report information such as the number of updates completed, time remaining, time elapsed, and transfers per second. I started it running last night and checked in this morning to find it had deadlocked: no subprocesses were running, there were still records to be updated, and the script had not exited. It was simply sitting there, no longer printing the status information, because no subprocess was running to update the total number complete, which is what drives updates to the output. This is running on OS X.

    I am looking for three things:

    1. I would like to get rid of the possibility of this deadlock occurring so I don't need to check in on it as frequently. Is there some issue with locking?
    2. Am I doing this in a bad way (a gThreading variable to control the loop that spawns additional threads, etc.)? I would appreciate some suggestions for improving my overall methodology.
    3. How should I handle a Ctrl-C exit? Right now I need to kill the process, but I assume I should be able to use the signal module (or something else) to catch the signal and stop the threads. Is that right?

    I am not sure whether I should be pasting my entire script here, since I usually just paste snippets. Let me know if I should paste it here as well.
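
    A common cause of this kind of hang is a child process whose stdout pipe fills up because nobody is draining it, combined with a Ctrl-C that never reaches the worker threads. Below is a minimal sketch in Python of one way to structure it, not the original script: each worker reads its child's stdout line by line, and the main thread installs a SIGINT handler that terminates any children still running. The command list and thread count are placeholders.

        import signal
        import subprocess
        import threading

        stop = threading.Event()
        procs = []
        procs_lock = threading.Lock()

        def worker(cmd):
            # Read the child's output line by line so its stdout pipe never fills up.
            proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
            with procs_lock:
                procs.append(proc)
            for line in proc.stdout:
                status = line.strip()
                if status in ("Done", "NO DATA"):   # child is about to exit on its own
                    break
                if stop.is_set():                   # Ctrl-C was pressed
                    proc.terminate()
                    break
            proc.wait()

        def handle_sigint(signum, frame):
            # Ctrl-C: flag the workers and terminate any child still running.
            stop.set()
            with procs_lock:
                for p in procs:
                    if p.poll() is None:
                        p.terminate()

        signal.signal(signal.SIGINT, handle_sigint)

        cmd = ["python", "transfer_worker.py"]      # placeholder for the real subprocess
        threads = [threading.Thread(target=worker, args=(cmd,)) for _ in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            while t.is_alive():
                t.join(timeout=1.0)                 # short joins keep the main thread responsive to SIGINT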

    Read the article

  • PHP: Condense array of similar strings into one merged array

    - by Matt Andrews
    Hi everyone. I'm working with an array of dates (opening times for a business) and I want to condense them into their briefest possible form. I started out with this structure:

        Array (
            [Mon] => 12noon-2:45pm, 5:30pm-10:30pm
            [Tue] => 12noon-2:45pm, 5:30pm-10:30pm
            [Wed] => 12noon-2:45pm, 5:30pm-10:30pm
            [Thu] => 12noon-2:45pm, 5:30pm-10:30pm
            [Fri] => 12noon-2:45pm, 5:30pm-10:30pm
            [Sat] => 12noon-11pm
            [Sun] => 12noon-9:30pm
        )

    What I want to achieve is this:

        Array (
            [Mon-Fri] => 12noon-2:45pm, 5:30pm-10:30pm
            [Sat] => 12noon-11pm
            [Sun] => 12noon-9:30pm
        )

    I've tried writing a recursive function and have managed to output this so far:

        Array (
            [Mon-Fri] => 12noon-2:45pm, 5:30pm-10:30pm
            [Tue-Fri] => 12noon-2:45pm, 5:30pm-10:30pm
            [Wed-Fri] => 12noon-2:45pm, 5:30pm-10:30pm
            [Thu-Fri] => 12noon-2:45pm, 5:30pm-10:30pm
            [Sat] => 12noon-11pm
            [Sun] => 12noon-9:30pm
        )

    Can anybody see a simple way of comparing the values and combining the keys where the values are identical? My recursive function is basically two nested foreach() loops, which isn't very elegant. Thanks, Matt

    EDIT: Here's my code so far, which produces the third array above (from the first one as input):

        $last_time = array('t' => '', 'd' => ''); // blank array for looping
        $i = 0;
        foreach ($final_times as $day => $time) {
            if ($last_time['t'] != $time) { // it's a new time
                if ($i != 0) { $print_times[] = $day . ' ' . $time; } // only print if it's not the first, otherwise we get two Mondays
            } else { // this day has the same time as the last one
                $end_day = $day;
                foreach ($final_times as $day2 => $time2) {
                    if ($time == $time2) { $end_day = $day2; }
                }
                $print_times[] = $last_time['d'] . '-' . $end_day . ' ' . $time;
            }
            $last_time = array('t' => $time, 'd' => $day);
            $i++;
        }
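
    For what it's worth, the underlying problem is just collapsing consecutive runs of equal values, which doesn't need recursion at all. Here is a rough sketch of that grouping logic in Python (using itertools.groupby); the day order and hours dict are stand-ins for the PHP array above, and the same idea translates back into a single PHP foreach.

        from itertools import groupby

        days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
        hours = {
            "Mon": "12noon-2:45pm, 5:30pm-10:30pm",
            "Tue": "12noon-2:45pm, 5:30pm-10:30pm",
            "Wed": "12noon-2:45pm, 5:30pm-10:30pm",
            "Thu": "12noon-2:45pm, 5:30pm-10:30pm",
            "Fri": "12noon-2:45pm, 5:30pm-10:30pm",
            "Sat": "12noon-11pm",
            "Sun": "12noon-9:30pm",
        }

        condensed = {}
        # Group consecutive days that share exactly the same opening hours.
        for times, run in groupby(days, key=lambda d: hours[d]):
            run = list(run)
            label = run[0] if len(run) == 1 else run[0] + "-" + run[-1]
            condensed[label] = times

        print(condensed)
        # {'Mon-Fri': '12noon-2:45pm, 5:30pm-10:30pm', 'Sat': '12noon-11pm', 'Sun': '12noon-9:30pm'}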

    Read the article

  • How do I delete a [sub]hash based off of the keys/values of another hash?

    - by Zack
    Let's assume I have two hashes. One of them contains a set of data that should only keep entries that show up in the other hash. For example:

        my %hash1 = (
            test1 => { inner1 => { more => "alpha", evenmore => "beta" } },
            test2 => { inner2 => { more => "charlie", somethingelse => "delta" } },
            test3 => { inner9999 => { ohlookmore => "golf", somethingelse => "foxtrot" } }
        );

        my %hash2 = (
            major => {
                test2 => "inner2",
                test3 => "inner3"
            }
        );

    What I would like to do is delete the whole subhash in hash1 if it does not exist as a key/value pair in hash2{major}, preferably without modules. The information contained in "innerX" does not matter; it merely must be left alone (unless the subhash is to be deleted, in which case it can go away). In the example above, after this operation is performed hash1 would look like:

        my %hash1 = (
            test2 => { inner2 => { more => "charlie", somethingelse => "delta" } },
        );

    It deletes hash1{test1} and hash1{test3} because they don't match anything in hash2. Here's what I've currently tried, but it doesn't work. Nor is it probably the safest thing to do, since I'm looping over the hash while trying to delete from it; however, I'm deleting at the each, which should be okay? This was my attempt, but Perl complains about: Can't use string ("inner1") as a HASH ref while "strict refs" in use at

        while (my ($test, $inner) = each %hash1) {
            if (exists $hash2{major}{$test}{$inner}) {
                print "$test($inner) is in exists.\n";
            } else {
                print "Looks like $test($inner) does not exist, REMOVING.\n";
                # not too sure if $inner is needed to remove the whole entry
                delete ($hash1{$test}{$inner});
            }
        }
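
    Not a Perl answer as such, but the filtering rule itself ("keep a top-level key only if hash2{major} maps it to an inner key that hash1's entry actually has") can be sketched very compactly. Here it is in Python, with plain dicts standing in for the Perl hashes, purely to make the intended logic explicit; the Perl version is the same test, comparing the value of $hash2{major}{$test} against the inner key before deleting.

        hash1 = {
            "test1": {"inner1": {"more": "alpha", "evenmore": "beta"}},
            "test2": {"inner2": {"more": "charlie", "somethingelse": "delta"}},
            "test3": {"inner9999": {"ohlookmore": "golf", "somethingelse": "foxtrot"}},
        }
        hash2 = {"major": {"test2": "inner2", "test3": "inner3"}}

        # Keep an entry only if hash2["major"] names this key AND the inner key it
        # names is really present in hash1's entry (test3 -> "inner3" fails that test).
        hash1 = {
            key: inner
            for key, inner in hash1.items()
            if hash2["major"].get(key) in inner
        }

        print(hash1)
        # {'test2': {'inner2': {'more': 'charlie', 'somethingelse': 'delta'}}}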

    Read the article

  • JSF inner datatable not respecting rendered condition of outer table.

    - by Marc
        <h:dataTable cellpadding="0" cellspacing="0" styleClass="list_table" id="OuterItems"
                value="#{valueList.values}" var="item" border="0">
            <h:column rendered="#{item.typeA}">
                <h:dataTable cellpadding="0" cellspacing="0" styleClass="list_table" id="InnerItems"
                        value="#{item.options}" var="option" border="0">
                    <h:column>
                        <h:outputText value="Option: #{option.displayValue}"/>
                    </h:column>
                </h:dataTable>
            </h:column>
            <h:column rendered="#{item.typeB}">
                <h:dataTable cellpadding="0" cellspacing="0" styleClass="list_table" id="InnerItems"
                        value="#{item.demands}" var="demand" border="0">
                    <h:column>
                        <h:outputText value="Demand: #{demand.displayValue}"/>
                    </h:column>
                </h:dataTable>
            </h:column>
        </h:dataTable>

        public class Item {
            ...
            public boolean isTypeA(){ return this instanceof TypeA; }
            public boolean isTypeB(){ return this instanceof TypeB; }
            ...
        }

        public class TypeA extends Item {
            ...
            public List getOptions(){ ... }
            ...
        }

        public class TypeB extends Item {
            ...
            public List getDemands(){ ... }
            ...
        }

    I'm having an issue with JSF. I've abstracted the problem here, and I'm hoping someone can help me understand how what I'm doing fails. I'm looping over a list of Items. These Items are actually instances of the subclasses TypeA and TypeB. For TypeA I want to display the options; for TypeB I want to display the demands. When rendering the page for the first time, this works fine. However, when I post back to the page for some action, I get an error:

        [3/26/10 12:52:32:781 EST] 0000008c SystemErr R javax.faces.FacesException: Error getting property 'options' from bean of type TypeB
            at com.sun.faces.lifecycle.ApplyRequestValuesPhase.execute(ApplyRequestValuesPhase.java:89)
            at com.sun.faces.lifecycle.LifecycleImpl.phase(LifecycleImpl.java(Compiled Code))
            at com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:91)
            at com.ibm.faces.portlet.FacesPortlet.processAction(FacesPortlet.java:193)

    My grasp of the JSF lifecycle is very rough. At this point I understand there is an error in the Apply Request Values phase, which is very early, so the previous state is restored and nothing changes. What I don't understand is that in order to fulfill the rendering condition "item.typeA", the object has to be an instance of TypeA. But here, it looks like that object passed the condition even though it was an instance of TypeB. It is as if the inner dataTable (InnerItems) is evaluated before the outer one (OuterItems). My working assumption is that I just don't understand how/when the rendered attribute is actually evaluated.

    Read the article

  • Parallel.For maintain input list order on output list

    - by romeozor
    I'd like some input on keeping the order of a list during heavy-duty operations that I decided to try doing in parallel to see if it boosts performance. (It did!) I came up with a solution, but since this was my first attempt at anything parallel, I need someone to slap my hands if I did something very stupid.

    There's a query that returns a list of card owners, sorted by name, then by date of birth. This needs to be rendered in a table on a web page (ASP.NET WebForms). The original coder decided he would construct the table cell by cell (TableCell), add the cells to rows (TableRow), then add each row to the table. So no GridView; allegedly its performance is bad, but the performance was very poor regardless :). The database query returns in no time; most of the time is spent looping through the results and adding table cells, etc. I made the following method to maintain the original order of the list:

        private TableRow[] ComposeRows(List<CardHolder> queryResult)
        {
            int queryElementsCount = queryResult.Count();

            // array with the query's size
            var rowArray = new TableRow[queryElementsCount];

            Parallel.For(0, queryElementsCount, i =>
            {
                var row = new TableRow();
                var cell = new TableCell();

                // various operations, including simple ones such as:
                cell.Text = queryResult[i].Name;
                row.Cells.Add(cell);

                // here I'm adding the current item to its original index
                // to maintain order in the output list
                rowArray[i] = row;
            });

            return rowArray;
        }

    So as you can see, because I'm returning a very different type of data (List<CardHolder> -> TableRow[]), I can't simply omit the ordering from the original query and re-sort after the operations. I also thought it would be a good idea to Dispose() the objects at the end of each loop iteration, because the query can return a huge list and letting cell and row objects pile up on the heap could impact performance.(?) How badly did I do? Does anyone have a better solution in case mine is flawed?
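
    Writing each result into a pre-sized output slot keyed by its input index, as the method above does, is the standard way to keep output order under parallelism. For comparison, here is the same index-preserving idea sketched in Python rather than .NET: concurrent.futures' map returns results in input order even though the work runs out of order. build_row is a hypothetical stand-in for the per-item work.

        from concurrent.futures import ThreadPoolExecutor

        def build_row(record):
            # Hypothetical per-item work; in the question this would be
            # constructing a TableRow from one CardHolder.
            return {"name": record["name"].upper()}

        query_result = [{"name": "alice"}, {"name": "bob"}, {"name": "carol"}]

        with ThreadPoolExecutor(max_workers=4) as pool:
            # map() yields results in the same order as its input,
            # regardless of which worker finished first.
            rows = list(pool.map(build_row, query_result))

        print([r["name"] for r in rows])   # ['ALICE', 'BOB', 'CAROL']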

    Read the article

  • STDOUT cannot return to the screen

    - by rockyurock
    Hello all, below is part of my code. The code enters the "if" block with $value = 1, and the output of the iperf.exe process goes into my_output.txt. I time the process out after the alarm period (20 seconds) and only want to capture the output of that process. Afterwards I want to continue at the command prompt, but I am not able to return to it: not only that, the script's own print statements no longer appear on the command prompt; they are written to my_output.txt instead (I am looping this "if" block through the rest of my code).

    output.txt:

        ==========
        inside value loop2
        ------------------------------------------------------------
        Server listening on UDP port 5001
        Receiving 1470 byte datagrams
        UDP buffer size: 8.00 KByte (default)
        ------------------------------------------------------------
        [160] local 10.232.62.151 port 5001 connected with 10.232.62.151 port 1505
        [ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
        [160] 0.0- 5.0 sec 2.14 MBytes 3.59 Mbits/sec 0.000 ms 0/ 1528 (0%)
        inside value loop3
        clue1
        clue2
        inside value loop4
        one iperf completed
        Transfer Transfer
        Starting: Intent { act=android.settings.APN_SETTINGS }
        ******AUTOMATION COMPLETED******

    It looks like there is some problem with reinitializing STDOUT. Even when I tried close(STDOUT), it did not return to the screen. Could somebody please help out?

    /rocky

    CODE:

        if ($value) {
            my $file = 'my_output.txt';
            use Win32::Process;
            print "inside value loop\n";

            # redirect stdout to a file
            open STDOUT, '>', $file or die "can't redirect STDOUT to <$file> $!";

            Win32::Process::Create(my $ProcessObj, "iperf.exe", "iperf.exe -u -s -p 5001",
                                   0, NORMAL_PRIORITY_CLASS, ".") || die ErrorReport();

            $alarm_time = $IPERF_RUN_TIME + 2;   # 20 sec
            print "inside value loop2\n";
            sleep $alarm_time;
            $ProcessObj->Kill(0);

            sub ErrorReport {
                print Win32::FormatMessage( Win32::GetLastError() );
            }

            print "inside value loop3\n";
            print "clue1\n";
            #close(STDOUT);
            print "clue2\n";
            print "inside value loop4\n";
            print "one iperf completed\n";
        }

        my $data_file = "my_output.txt";
        open(ROCK, $data_file) || die("Could not open file!");
        @raw_data = <ROCK>;

        @COUNT_PS = split(/ /, $raw_data[7]);
        my $LOOP_COUNT_PS_4 = $COUNT_PS[9];
        my $LOOP_COUNT_PS_5 = $COUNT_PS[10];
        print "$LOOP_COUNT_PS_4\n";
        print "$LOOP_COUNT_PS_5\n";
        my $tput_value = "$LOOP_COUNT_PS_4" . " $LOOP_COUNT_PS_5";
        print "$tput_value";
        close(ROCK);

        print FH1 "\n $count \| $tput_value \n";

    regds
    rakesh
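
    The underlying issue in scripts like this is reopening the script's own STDOUT instead of redirecting only the child's output, so every later print also lands in the file. One fix is to duplicate and later restore STDOUT around the redirection (open my $saved, '>&', \*STDOUT; ... open STDOUT, '>&', $saved;); another is to give only the child a redirected handle. As a language-neutral illustration of the second approach, here is a small Python sketch (not the Perl/Win32 code above) in which the child's stdout goes straight to the file and the parent's stdout is never touched. The 20-second figure mirrors the alarm in the question.

        import subprocess

        # Parent's stdout is untouched; only the child writes to the file.
        with open("my_output.txt", "w") as out:
            proc = subprocess.Popen(["iperf", "-u", "-s", "-p", "5001"], stdout=out)
            try:
                proc.wait(timeout=20)          # roughly the alarm time in the original
            except subprocess.TimeoutExpired:
                proc.kill()                    # stop the server after the run time
                proc.wait()

        print("one iperf completed")           # still goes to the screen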

    Read the article

  • ColdFusion's cfquery failing silently

    - by johnthexiii
    I have a query that retrieves a large amount of data:

        <cfsetting requesttimeout="9999999" >

        <cfquery name="randomething" datasource="ds" timeout="9999999" >
            SELECT col1, col2
            FROM table
        </cfquery>

        <cfdump var="#randomething.recordCount#" /> <!--- should be about 5 million rows --->

    I can successfully retrieve the data with Python's cx_Oracle, and using sys.getsizeof on the Python list returns 22621060, so about 21 megabytes. ColdFusion does not return an error on the page, and I can't find anything in any of the logs. Why is cfdump not showing the number of rows?

    Additional information

    The reason for doing it this way is that I have about 8000 smaller queries to run against the randomething query. In other words, when I run those 8000 queries against the database it takes hours for the process to complete. I suspect this is because I am competing with several other database users and the database is getting bogged down. The 8000 smaller queries get counts of col1 over a period of col2:

        SELECT count(col1) as count
        WHERE col2 < 20121109 AND col2 > 20121108

    Following Adam Cameron's suggestions: cflog suggests that the query isn't finishing. I tried changing the query's timeout both in the code and in the CFIDE administrator; apparently CF9 no longer respects the timeout attribute, and regardless of what I tried I couldn't get the query to time out. I also started playing around with the maxrows attribute to see if I could discern any information that way:

        when maxrows is set to 1300000, everything works fine
        when maxrows is 1400000 or greater, I get this error
        when maxrows is 2000000, I observe my original problem

    Update

    So this isn't a limit of cfquery. By using QueryNew and then looping over it to add data, I can get well past the 2 million mark without any problems. I also created a ThinClient datasource using the information in this question; I didn't observe any change in behavior. The messages on the database end are "SQL*Net message from client" and "SQL*Net more data to client". I just discovered that by using the thin client along with blockfactor="100" I can retrieve more rows (approx. 3,000,000).
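
    Since cx_Oracle is already in the picture, one way to sidestep both the 5-million-row fetch and the 8000 follow-up counts is to let the database do the aggregation in a single grouped query and pull back only the per-range totals. A rough sketch follows; the table and column names are copied from the question's placeholders, and the connection string is hypothetical.

        import cx_Oracle

        conn = cx_Oracle.connect("user/password@ds")   # hypothetical credentials/DSN
        cur = conn.cursor()

        # One round trip: count col1 per col2 value instead of running
        # thousands of individual range queries.
        cur.execute("""
            SELECT col2, COUNT(col1)
            FROM table
            GROUP BY col2
            ORDER BY col2
        """)

        counts = dict(cur.fetchall())   # {col2_value: count}
        cur.close()
        conn.close()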

    Read the article

  • translating specifications into query predicates

    - by Jeroen
    I'm trying to find a nice and elegant way to query database content based on DDD "specifications". In domain driven design, a specification is used to check if some object, also known as the candidate, is compliant to a (domain specific) requirement. For example, the specification 'IsTaskDone' goes like: class IsTaskDone extends Specification<Task> { boolean isSatisfiedBy(Task candidate) { return candidate.isDone(); } } The above specification can be used for many purposes, e.g. it can be used to validate if a task has been completed, or to filter all completed tasks from a collection. However, I want to re-use this, nice, domain related specification to query on the database. Of course, the easiest solution would be to retrieve all entities of our desired type from the database, and filter that list in-memory by looping and removing non-matching entities. But clearly that would not be optimal for performance, especially when the entity count in our db increases. Proposal So my idea is to create a 'ConversionManager' that translates my specification into a persistence technique specific criteria, think of the JPA predicate class. The services looks as follows: public interface JpaSpecificationConversionManager { <T> Predicate getPredicateFor(Specification<T> specification, Root<T> root, CriteriaQuery<?> cq, CriteriaBuilder cb); JpaSpecificationConversionManager registerConverter(JpaSpecificationConverter<?, ?> converter); } By using our manager, the users can register their own conversion logic, isolating the domain related specification from persistence specific logic. To minimize the configuration of our manager, I want to use annotations on my converter classes, allowing the manager to automatically register those converters. JPA repository implementations could then use my manager, via dependency injection, to offer a find by specification method. Providing a find by specification should drastically reduce the number of methods on our repository interface. In theory, this all sounds decent, but I feel like I'm missing something critical. What do you guys think of my proposal, does it comply to the DDD way of thinking? Or is there already a framework that does something identical to what I just described?
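
    The proposal above is essentially a registry that maps specification types to functions that know how to build a persistence-level predicate for them. To make the shape of that idea concrete, here is a tiny, framework-free sketch in Python rather than JPA; SpecConverterRegistry, the register decorator, and the converter function are all made-up names for illustration, not part of any existing library.

        class IsTaskDone:
            """Domain specification: satisfied when a task is done."""
            def is_satisfied_by(self, task):
                return task.is_done

        class SpecConverterRegistry:
            def __init__(self):
                self._converters = {}

            def register(self, spec_type):
                # Decorator: associate a converter function with a specification type.
                def decorator(fn):
                    self._converters[spec_type] = fn
                    return fn
                return decorator

            def to_predicate(self, spec):
                # Look up the converter for this specification and build the
                # persistence-side predicate (here: a SQL WHERE fragment).
                return self._converters[type(spec)](spec)

        registry = SpecConverterRegistry()

        @registry.register(IsTaskDone)
        def convert_is_task_done(spec):
            return "tasks.done = 1"

        where_clause = registry.to_predicate(IsTaskDone())
        print("SELECT * FROM tasks WHERE " + where_clause)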

    Read the article

  • multithreading with database

    - by Darsin
    I am looking out for a strategy to utilize multithreading (probably asynchronous delegates) to do a synchronous operation. I am new to multithreading so i will outline my scenario first. This synchronous operation right now is done for one set of data (portfolio) based on the the parameters provided. The (psudeo-code) implementation is given below: public DataSet DoTests(int fundId, DateTime portfolioDate) { // Get test results for the portfolio // Call the database adapter method, which in turn is a stored procedure, // which in turns runs a series of "rule" stored procs and fills a local temp table and returns it back. DataSet resultsDataSet = GetTestResults(fundId, portfolioDate); try { // Do some local processing on the results DoSomeProcessing(resultsDataSet); // Save the results in Test, TestResults and TestAllocations tables in a transaction. // Sets a global transaction which is provided to all the adapter methods called below // It is defined in the Base class StartTransaction("TestTransaction"); // Save Test and get a testId int testId = UpdateTest(resultsDataSet); // Adapter method, uses the same transaction // Update testId in the other tables in the dataset UpdateTestId(resultsDataSet, testId); // Update TestResults UpdateTestResults(resultsDataSet); // Adapter method, uses the same transaction // Update TestAllocations UpdateTestAllocations(resultsDataSet); // Adapter method, uses the same transaction // It is defined in the base class CommitTransaction("TestTransaction"); } catch { RollbackTransaction("TestTransaction"); } return resultsDataSet; } Now the requirement is to do it for multiple set of data. One way would be to call the above DoTests() method in a loop and get the data. I would prefer doing it in parallel. But there are certain catches: StartTransaction() method creates a connection (and transaction) every time it is called. All the underlying database tables, procedures are the same for each call of DoTests(). (obviously). Thus my question are: Will using multithreading anyway improve performance? What are the chances of deadlock especially when new TestId's are being created and the Tests, TestResults and TestAllocations are being saved? How can these deadlocked be handled? Is there any other more efficient way of doing the above operation apart from looping over the DoTests() method repeatedly?
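
    On the general question of whether parallelism helps here: the usual pattern is to treat each (fundId, portfolioDate) unit as an independent job with its own connection and its own short-lived transaction, and to cap the degree of parallelism so the jobs don't simply queue up on the same tables. A hedged sketch of that orchestration in Python, not .NET; do_tests is a hypothetical stand-in for the DoTests method above and is assumed to open its own connection internally.

        from concurrent.futures import ThreadPoolExecutor, as_completed

        def do_tests(fund_id, portfolio_date):
            # Hypothetical: open a dedicated connection, run the stored procedures,
            # commit or roll back, and return the result set for this portfolio.
            ...

        jobs = [(101, "2010-05-31"), (102, "2010-05-31"), (103, "2010-05-31")]

        results, errors = {}, {}
        # A small worker count keeps lock contention on the Tests/TestResults tables manageable.
        with ThreadPoolExecutor(max_workers=4) as pool:
            futures = {pool.submit(do_tests, f, d): (f, d) for f, d in jobs}
            for fut in as_completed(futures):
                key = futures[fut]
                try:
                    results[key] = fut.result()
                except Exception as exc:       # a deadlock victim can simply be retried
                    errors[key] = exc

        print(len(results), "succeeded;", len(errors), "failed")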

    Read the article

  • how to wait multiple function processing to finish

    - by user351412
    I have a problem about multiple function processing , listed as below code, the main function is btnEvalClick, I have try to use alter native 1and 2 to wait the function not move to next record before theprocessed function finish, but it does not work //private function btnEvalClick(event:Event):void { // var i:int; // for(i= 0; i < (dataArr1.length); i++) { // dispatchEvent( new FlexEvent('test') ); // callfunc1('cydatGMX'); //call function 1 // callfun2('cydatGMO'); //call function 1 // editSave(); //save record (HTTP) //## Alternative 1 //if (String(event) == 'SAVEOK') { // RecMov('next'); //move record if save = OK //} //## Alternative 2 //while (waitfc == '') // if waitfc not 'OK' continue looping //{ // z = z + 1; //} // RecMov('next'); //Move to next record to process //} //private function callfunc1(tasal:String):void { // var mySO :SharedObject; // var myDP: Array; // var i:int; // var prm:Array; // try // { // mySO = SharedObject.getLocal(tasal,'/'); // prm = mySO.data.txt.split('?'); // for(i=0; i < (prm.length - 1); i++) { // myDP = prm[i].toString().split('^'); // if ( myDP[0].toString() == String(dataArr1[dg].MatrixCDCol)){ // myDPX = myDP; // break; // } // } // } // catch (err:Error) { // Alert.show('Limit object creation fail (' + tasal + '), please retry ); // } //} //private function editSave():void //{ // var parameters:* = // { // 'CertID': CertIDCol.text, 'AssetID': AssetIDCol.text, 'CertDate': cdt, //'Ccatat': CcatatCol.text, 'CertBy': CertByCol.text, 'StatusID': StatusIDCol.text, //'UpdDate': lele, 'UpdUsr': ApplicationState.instance.luNm }; // doRequest('Update', parameters, saveItemHandler); //} //private function doRequest(method_name:String, parameters:Object, callback:Function):void // { // add the method to the parameters list // parameters['method'] = (method_name + 'ASC'); // gateway.request = parameters; // var call:AsyncToken = gateway.send(); // call.request_params = gateway.request; // call.handler = callback; // } //private function saveItemHandler(e:Object):void // { // if (e.isError) // { // Alert.show('Error: ' + e.data.error); // } // else // { // Alert.show('Record Saved..'); // waitfc = 'OK'; // dispatchEvent( new FlexEvent('SAVEOK') ); // } // }
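
    The busy-wait alternatives in the code above fight the event model; the usual fix is to let the "record saved" callback (the place where SAVEOK is dispatched) be the thing that advances to the next record, so nothing ever spins on a flag. As a neutral illustration of that "advance only when the previous save completes" flow, here is a small Python asyncio sketch; save_record stands in for the HTTP save and is entirely hypothetical.

        import asyncio

        async def save_record(record):
            # Hypothetical stand-in for the HTTP save; resolves when the server replies.
            await asyncio.sleep(0.1)
            return "SAVEOK"

        async def process_all(records):
            for record in records:
                status = await save_record(record)   # do not advance until this completes
                if status != "SAVEOK":
                    break                            # stop (or retry) on failure
                print("saved", record, "- moving to next record")

        asyncio.run(process_all(["rec1", "rec2", "rec3"]))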

    Read the article

  • Best practices for displaying large number of images as thumbnails in c#

    - by andySF
    I've reached a point where it's very difficult to get answers by debugging and tracing objects, so I need some help.

    What I'm trying to do: a history form for my screen-capture pet project. The history must list all images as thumbnails (e.g. Picasa).

    What I've done: I created a HistoryItem : UserControl. This history item has a few buttons, a check box, a label and a picture box. The buttons are for delete/edit/copy image, the check box is used for selecting one or more images, and the label shows some info text. The picture box gets the image from a public property that holds a path, and a method creates a proportional thumbnail to display when the control has loaded. This user control has two public events: one for deleting the image and one for bubbling the mouse-enter and mouse-leave events through all controls, for which I use EventBroadcastProvider. The bubbling is useful because wherever I move the mouse over the control, the buttons appear. The Dispose method has been extended and I manually remove the events. All images are loaded by looping over an XML file that contains the path of every image. For each image in this XML I create a new HistoryItem that is added (after a little code to sort and limit the number of images loaded) to a FlowLayoutPanel.

    The problem: when I launch the history form and the flow layout panel is populated with my HistoryItem custom controls, memory usage increases drastically, from 14 MB to around 100 MB with 100 images loaded. After closing the history form, disposing whatever I could dispose, and even trying to call GC.Collect(), the memory increase remains. I searched for any object that might not be disposed properly, like an image or an event, but everywhere I used them they are disposed. The problem seems to come from multiple sources: one is that the events used for bubbling are not being disposed properly, and the other is the picture box itself. I could see all of this by commenting the code down to a limited version in which only the custom control is loaded, without any image processing or even the events. Without the events, memory consumption is reduced by approximately 20%.

    So my real question is whether this logic, flow layout panels and custom controls with picture boxes, is the best solution for displaying large numbers of images as thumbnails. Thank you!
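
    Whatever the container, the part that usually dominates memory is keeping a decoded full-size image alive per item instead of a small pre-scaled thumbnail. As a minimal illustration of "decode once, keep only the thumbnail", here is a Python/Pillow sketch rather than WinForms code; the 128x128 size and the folder path are arbitrary. The WinForms equivalent is to hand the PictureBox a small Bitmap you created yourself and Dispose the full-size image immediately.

        from pathlib import Path
        from PIL import Image

        THUMB_SIZE = (128, 128)
        thumbs = {}

        for path in Path("captures").glob("*.png"):
            # Decode the full-size capture only briefly; keep just the small copy.
            with Image.open(path) as full:
                full.thumbnail(THUMB_SIZE)       # in-place, preserves aspect ratio
                thumbs[path.name] = full.copy()  # detached small image outlives the `with`

        print(len(thumbs), "thumbnails kept in memory")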

    Read the article

  • string s; &s+1; Legal? UB?

    - by John Dibling
    Consider the following code: #include <cstdlib> #include <iostream> #include <string> #include <vector> #include <algorithm> using namespace std; int main() { string myAry[] = { "Mary", "had", "a", "Little", "Lamb" }; const size_t numStrs = sizeof(myStr)/sizeof(myAry[0]); vector<string> myVec(&myAry[0], &myAry[numStrs]); copy( myVec.begin(), myVec.end(), ostream_iterator<string>(cout, " ")); return 0; } Of interest here is &myAry[numStrs]: numStrs is equal to 5, so &myAry[numStrs] points to something that doesn't exist; the sixth element in the array. There is another example of this in the above code: myVec.end(), which points to one-past-the-end of the vector myVec. It's perfecly legal to take the address of this element that doesn't exist. We know the size of string, so we know where the address of the 6th element of a C-style array of strings must point to. So long as we only evaluate this pointer and never dereference it, we're fine. We can even compare it to other pointers for equality. The STL does this all the time in algorithms that act on a range of iterators. The end() iterator points past the end, and the loops keep looping while a counter != end(). So now consider this: #include <cstdlib> #include <iostream> #include <string> #include <vector> #include <algorithm> using namespace std; int main() { string myStr = "Mary"; string* myPtr = &myStr; vector<string> myVec2(myPtr, &myPtr[1]); copy( myVec2.begin(), myVec2.end(), ostream_iterator<string>(cout, " ")); return 0; } Is this code legal and well-defined? It is legal and well-defined to take the address of an array element past the end, as in &myAry[numStrs], so should it be legal and well-defined to pretend that myPtr is also an array?

    Read the article

  • Jquery help : How to implement a Previous/Next & Play/Pause toggle for this script

    - by rameshelamathi
    (function($){ // Creating the sweetPages jQuery plugin: $.fn.sweetPages = function(opts){ // If no options were passed, create an empty opts object if(!opts) opts = {}; var resultsPerPage = opts.perPage || 3; var swDiv = opts.getSwDiv || "swControls"; // The plugin works best for unordered lists, althugh ols would do just as well: var ul = this; var li = ul.find('li'); li.each(function(){ // Calculating the height of each li element, and storing it with the data method: var el = $(this); el.data('height',el.outerHeight(true)); }); // Calculating the total number of pages: var pagesNumber = Math.ceil(li.length/resultsPerPage); // If the pages are less than two, do nothing: if(pagesNumber<2) return this; // Creating the controls div: //var swControls = $('<div class="swControls">'); var swControls = $('<div class='+swDiv+'>'); for(var i=0;i<pagesNumber;i++) { // Slice a portion of the lis, and wrap it in a swPage div: li.slice(i*resultsPerPage,(i+1)*resultsPerPage).wrapAll('<div class="swPage" />'); // Adding a link to the swControls div: swControls.append('<a href="" class="swShowPage">'+(i+1)+'</a>'); } ul.append(swControls); var maxHeight = 0; var totalWidth = 0; var swPage = ul.find('.swPage'); swPage.each(function(){ // Looping through all the newly created pages: var elem = $(this); var tmpHeight = 0; elem.find('li').each(function(){tmpHeight+=$(this).data('height');}); if(tmpHeight>maxHeight) maxHeight = tmpHeight; totalWidth+=elem.outerWidth(); elem.css('float','left').width(ul.width()); }); swPage.wrapAll('<div class="swSlider" />'); // Setting the height of the ul to the height of the tallest page: //ul.height(maxHeight); var swSlider = ul.find('.swSlider'); swSlider.append('<div class="clear" />').width(totalWidth); var hyperLinks = ul.find('a.swShowPage'); hyperLinks.click(function(e){ // If one of the control links is clicked, slide the swSlider div // (which contains all the pages) and mark it as active: $(this).addClass('active').siblings().removeClass('active'); swSlider.stop().animate({'margin-left':-(parseInt($(this).text())-1)*ul.width()},'slow'); e.preventDefault(); }); // Mark the first link as active the first time this code runs: hyperLinks.eq(0).addClass('active'); // Center the control div: swControls.css({ 'right':'10%', 'margin-left':-swControls.width()/2 }); return this; }})(jQuery);

    Read the article

  • A program where the user enters a string and the program counts the instances of the letters

    - by user1865183
    This is the first C++ program I have ever written and I'm having trouble understanding the order in which operands must be put. This is for a class, but it looks like I'm not supposed to use the homework tag. Sorry if I'm doing this wrong.

    This is my input:

        // Get DNA string
        string st;
        cout << "Enter the DNA sequence to be analysed: ";
        cin >> st;

    This seems to work ok, but I thought I would include it in case this is what I'm doing wrong. This is what I have so far to check that the input is exclusively C, T, A, or G. It runs through the program and simply prints "Please enter a valid sequence1, please enter a valid sequence2, ..." etc. I'm sure I'm doing something very stupid, I just can't figure it out.

        // Check that the sequence is all C, T, A, G
        while (i <= st.size()){
            if (st[i] != 'c' && st[i] != 'C' && st[i] != 'g' && st[i] != 'G' &&
                st[i] != 't' && st[i] != 'T' && st[i] != 'a' && st[i] != 'A');
                cout << "Please enter a valid sequence" << i++;
            else if (st[i] == c,C,G,t,T,a,A)
                i++;

    The second half of my program is to count the number of Cs and Gs in the sequence:

        for (i < st.size() ; i++ ;);
        for (loop <= st.size() ; loop++;)
            if (st[loop] == 'c') {
                count_c++;
            }
            else if (st[loop] == C) {
                count_c++;
            }
            else if (st[loop] == g) {
                count_g++;
            }
            else if (st[loop] == G); {
                count_g++;
            }
        cout << "Number of instances of C = " << count_c;
        cout << "Number of instances of G = " << count_g;

    It seems like it's not looping; it will count 1 of one of the letters. How do I make it loop? I can't seem to put endl; in anywhere without getting an error back, although I know I'll need it somewhere. Any help or tips to point me in the right direction would be greatly appreciated; I've been working on this code for two days (this is embarrassing to admit).
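
    To show the validate-then-count logic in a compact form without handing over a line-for-line C++ answer, here is the same task sketched in Python: reject anything that isn't one of the four bases, then count C and G case-insensitively. The points that carry over to the C++ loops above are to compare against quoted character literals, not to put a semicolon straight after an if or for, and to let the loop body run once per character.

        from collections import Counter

        seq = input("Enter the DNA sequence to be analysed: ").strip()

        # Validation: every character must be one of the four bases (either case).
        if not seq or any(ch not in "ACGTacgt" for ch in seq):
            print("Please enter a valid sequence")
        else:
            counts = Counter(seq.upper())   # counts each base once per occurrence
            print("Number of instances of C =", counts["C"])
            print("Number of instances of G =", counts["G"])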

    Read the article

  • c++ recursion with arrays

    - by Sam
    I have this project that I'm working on for a class, and I'm not looking for the answer, but maybe just a few tips, since I don't feel like I'm using true recursion. The problem involves solving the game of Hi-Q, or "peg solitaire", where you want the end result to be one peg remaining (it's played with a triangular board and one space is open at the start). I've represented the board with a simple array, each index being a spot with a value of 1 if it has a peg and 0 if it doesn't, and the set of valid moves with a 2-dimensional array that is 36 x 3 (each move set contains 3 numbers: the peg you're moving, the peg it hops over, and the destination index).

    So my problem is that in my recursive function I'm using a lot of iteration to determine things like which space is open (i.e. has a value of 0) and which move to use based on which space is open, by looping through the arrays. Secondly, I don't understand how you would then backtrack with recursion, in the event that an incorrect move was made and you wanted to take that move back in order to choose a different one.

    This is what I have so far; the "a" array represents the board, the "b" array represents the moves, and the "c" array was my idea of a reminder as to which moves I've used already. The functions used are helper functions that basically just loop through the arrays to find an empty space and a corresponding move:

        void solveGame(int a[15], int b[36][3], int c[15][3]){
            int empSpace;
            int moveIndex;
            int count = 0;

            if(pegCount(a) < 2){
                return;
            }
            else{
                empSpace = findEmpty(a);
                moveIndex = chooseMove( a, b, empSpace);

                a[b[moveIndex][0]] = 0;
                a[b[moveIndex][1]] = 0;
                a[b[moveIndex][2]] = 1;

                c[count][0] = b[moveIndex][0];
                c[count][1] = b[moveIndex][1];
                c[count][2] = b[moveIndex][2];

                solveGame(a,b,c);
            }
        }
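
    The backtracking step is the piece the function above is missing: instead of remembering moves in a side array, try a move, recurse, and if the recursion reports failure, undo that exact move and try the next candidate. Here is the general shape of that in Python, a generic sketch with a flat board list and (from, over, to) move triples rather than the specific Hi-Q move table.

        def solve(board, moves):
            # Success: exactly one peg left on the board.
            if sum(board) == 1:
                return []

            for src, over, dst in moves:
                # A move is legal if src and over hold pegs and dst is empty.
                if board[src] == 1 and board[over] == 1 and board[dst] == 0:
                    board[src], board[over], board[dst] = 0, 0, 1   # apply the move

                    rest = solve(board, moves)          # recurse on the new position
                    if rest is not None:
                        return [(src, over, dst)] + rest

                    board[src], board[over], board[dst] = 1, 1, 0   # undo (backtrack)

            return None   # no move from this position leads to a solution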

    Read the article

  • show tweets inside div from an asynchronous loop

    - by ak_47
    Am trying to laod tweets into a div after looping them from yahoo placemaker. They are loading on the div but the information shown by them is placemaker's last result. This is the code.. function getLocation(user, date, profile_img, text,url) { var templates = []; templates[0] = '<div><div></div><h2 class="firstHeading">'+user+'</h2><div>'+text+'</div><div><p><a href="' + url + '"target="_blank">'+url+'</a></p></div><p>Date Posted- '+date+'</p></div>'; templates[1] = '<table width="320" border="0"><tr><td class="user" colspan="2" rowspan="1">'+user+'</td></tr><tr><td width="45"><a href="'+profile_img+'"><img src="'+profile_img+'" width="55" height="50"/></a></td><td width="186">'+text+'<p><a href="' + url + '"target="_blank">'+url+'</a></p></td></tr></table><hr>'; templates[2] = '<div><div></div><h2 class="firstHeading">'+user+'</h2><div>'+text+'</div><div><p><a href="' + url + '"target="_blank">'+url+'</a></p></div><p>Date Posted- '+date+'</p></div>'; templates[3] = '<table width="320" border="0"><tr><td class="user" colspan="2" rowspan="1">'+user+'</td></tr><tr><td width="45"><a href="'+profile_img+'"><img src="'+profile_img+'" width="55" height="50"/></a></td><td width="186">'+text+'<p><a href="' + url + '"target="_blank">'+url+'</a></p></td></tr></table><hr>'; var geocoder = new google.maps.Geocoder(); Placemaker.getPlaces(text, function (o) { console.log(o); if (!$.isArray(o.match)) { var latitude = o.match.place.centroid.latitude; var longitude = o.match.place.centroid.longitude; var myLatLng = new google.maps.LatLng(latitude, longitude); var marker = new google.maps.Marker({ icon: profile_img, title: user, map: map, position: myLatLng }); var infowindow = new google.maps.InfoWindow({ content: templates[0].replace('user',user).replace('text',text).replace('url',url).replace('date',date) }); var $tweet = $(templates[1].replace('%user',user).replace(/%profile_img/g,profile_img).replace('%text',text).replace('%url',url)); $('#user-banner').css("visibility","visible");$('#news-banner').css("visibility","visible"); $('#news-tweets').css("overflow","scroll").append($tweet); function openInfoWindow() { infowindow.open(map, marker); } google.maps.event.addListener(marker, 'click', openInfoWindow); $tweet.find(".user").on('click', openInfoWindow); bounds.extend(myLatLng); } }); }

    Read the article

  • Finding open contiguous blocks of time for every day of a month, fast

    - by Chris
    I am working on a booking availability system for a group of several venues, and am having a hard time generating the availability of time blocks for days in a given month. This is happening server-side in PHP, but the concept itself is language agnostic -- I could be doing this in JS or anything else. Given a venue_id, month, and year (6/2012 for example), I have a list of all events occurring in that range at that venue, represented as unix timestamps start and end. This data comes from the database. I need to establish what, if any, contiguous block of time of a minimum length (different per venue) exist on each day. For example, on 6/1 I have an event between 2:00pm and 7:00pm. The minimum time is 5 hours, so there's a block open there from 9am - 2pm and another between 7pm and 12pm. This would continue for the 2nd, 3rd, etc... every day of June. Some (most) of the days have nothing happening at all, some have 1 - 3 events. The solution I came up with works, but it also takes waaaay too long to generate the data. Basically, I loop every day of the month and create an array of timestamps for each 15 minutes of that day. Then, I loop the time spans of events from that day by 15 minutes, marking any "taken" timeslot as false. Remaining, I have an array that contains timestamp of free time vs. taken time: //one day's array after processing through loops (not real timestamps) array( 12345678=>12345678, // <--- avail 12345878=>12345878, 12346078=>12346078, 12346278=>false, // <--- not avail 12346478=>false, 12346678=>false, 12346878=>false, 12347078=>12347078, // <--- avail 12347278=>12347278 ) Now I would need to loop THIS array to find continuous time blocks, then check to see if they are long enough (each venue has a minimum), and if so then establish the descriptive text for their start and end (i.e. 9am - 2pm). WHEW! By the time all this looping is done, the user has grown bored and wandered off to Youtube to watch videos of puppies; it takes ages to so examine 30 or so days. Is there a faster way to solve this issue? To summarize the problem, given time ranges t1 and t2 on day d, how can I determine the remaining time left in d that is longer than the minimum time block m. This data is assembled on demand via AJAX as the user moves between calendar months. Results are cached per-page-load, so if the user goes to July a second time, the data that was generated the first time would be reused. Any other details that would help, let me know. Edit Per request, the database structure (or the part that is relevant here) *events* id (bigint) title (varchar) *event_times* id (bigint) event_id (bigint) venue_id (bigint) start (bigint) end (bigint) *venues* id (bigint) name (varchar) min_block (int) min_start (varchar) max_start (varchar)
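
    The per-day work doesn't need a 15-minute grid at all: sort that day's events, clamp them to the venue's opening window, and the free blocks are simply the gaps between consecutive busy intervals (plus the lead-in before the first event and the tail after the last), filtered by the venue's minimum length. Here is a sketch of that gap computation in Python, using Unix timestamps like the question does; the opening window and 5-hour minimum are illustrative values, not taken from the real schema.

        def free_blocks(day_start, day_end, events, min_block):
            """events: list of (start, end) unix timestamps; returns open (start, end) gaps."""
            busy = sorted((max(s, day_start), min(e, day_end)) for s, e in events)
            gaps, cursor = [], day_start
            for start, end in busy:
                if start - cursor >= min_block:
                    gaps.append((cursor, start))
                cursor = max(cursor, end)
            if day_end - cursor >= min_block:
                gaps.append((cursor, day_end))
            return gaps

        # Example day: a 15-hour opening window, one event from +5h to +10h, 5-hour minimum block.
        DAY_START, HOUR = 1338534000, 3600
        DAY_END = DAY_START + 15 * HOUR
        print(free_blocks(DAY_START, DAY_END,
                          [(DAY_START + 5 * HOUR, DAY_START + 10 * HOUR)],
                          5 * HOUR))
        # two gaps come back: the block before the event and the block after it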

    Read the article

  • Node.js Cron Job Messing with Date Object

    - by PazoozaTest Pazman
    I'm trying to schedule several cron jobs to generate serial numbers for different entities within my web app. However I am running into this problem, when I'm looping each table, it says it has something to do with date.js. I'm not doing anything with a date object ? Not at this stage anyway. A couple of guesses is that the cron object is doing a date thing in its code and is referencing date.js. I'm using date.js to get access to things like ISO date. for (t in config.generatorTables) { console.log("t = " + config.generatorTables[t] + "\n\n"); var ts3 = azure.createTableService(); var jobSerialNumbers = new cronJob({ //cronTime: '*/' + rndNumber + ' * * * * *', cronTime: '*/1 * * * * *', onTick: function () { //console.log(new Date() + " calling topUpSerialNumbers \n\n"); manageSerialNumbers.topUpSerialNumbers(config.generatorTables[t], function () { }); }, start: false, timeZone: "America/Los_Angeles" }); ts3.createTableIfNotExists(config.generatorTables[t], function (error) { if (error === null) { var query = azure.TableQuery .select() .from(config.generatorTables[t]) .where('PartitionKey eq ?', '0') ts3.queryEntities(query, function (error, serialNumberEntities) { if (error === null && serialNumberEntities.length == 0) { manageSerialNumbers.generateNewNumbers(config.maxNumber, config.serialNumberSize, config.generatorTables[t], function () { jobSerialNumbers.start(); }); } else jobSerialNumbers.start(); }); } }); } And this is the error message I'm getting when I examine the server.js.logs\0.txt file: C:\node\w\WebRole1\public\javascripts\date.js:56 onsole.log('isDST'); return this.toString().match(/(E|C|M|P)(S|D)T/)[2] == "D" ^ TypeError: Cannot read property '2' of null at Date.isDST (C:\node\w\WebRole1\public\javascripts\date.js:56:110) at Date.getTimezone (C:\node\w\WebRole1\public\javascripts\date.js:56:228) at Object._getNextDateFrom (C:\node\w\WebRole1\node_modules\cron\lib\cron.js:88:30) at Object.sendAt (C:\node\w\WebRole1\node_modules\cron\lib\cron.js:51:17) at Object.getTimeout (C:\node\w\WebRole1\node_modules\cron\lib\cron.js:58:30) at Object.start (C:\node\w\WebRole1\node_modules\cron\lib\cron.js:279:33) at C:\node\w\WebRole1\server.js:169:46 at Object.generateNewNumbers (C:\node\w\WebRole1\utils\manageSerialNumbers.js:106:5) at C:\node\w\WebRole1\server.js:168:45 at C:\node\w\WebRole1\node_modules\azure\lib\services\table\tableservice.js:485:7 I am using this line in my database.js file: require('../public/javascripts/date'); is that correct that I only have to do this once, because date.js is global? I.e. it has a bunch of prototypes (extensions) for the inbuilt date object. Within manageSerialNumbers.js I am just doing a callback, their is no code executing as I've commented it all out, but still receiving this error. Any help or advice would be appreciated.

    Read the article

  • Large transactions causing "System.Data.SqlClient.SqlException: Timeout expired" error?

    - by Michael
    My application requires a user to log in and allows them to edit a list of things. However, it seems that if the same user always logs in and out and edits the list, this user will run into a "System.Data.SqlClient.SqlException: Timeout expired." error. I've read comments about increasing the timeout period but I've also read a comment about it possibly caused by uncommitted transactions. And I do have one going in the application. I'll provide the code I'm working with and there is an IF statement in there that I was a little iffy about but it seemed like a reasonable thing to do. I'll just go over what's going on here, there is a list of objects to update or add into the database. New objects created in the application are given an ID of 0 while existing objects have their own ID's generated from the DB. If the user chooses to delete some objects, their IDs are stored in a separate list of Integers. Once the user is ready to save their changes, the two lists are passed into this method. By use of the IF statement, objects with ID of 0 are added (using the Add stored procedure) and those objects with non-zero IDs are updated (using the Update stored procedure). After all this, a FOR loop goes through all the integers in the "removal" list and uses the Delete stored procedure to remove them. A transaction is used for all this. Public Shared Sub UpdateSomethings(ByVal SomethingList As List(Of Something), ByVal RemovalList As List(Of Integer)) Using DBConnection As New SqlConnection(conn) DBConnection.Open() Dim MyTransaction As SqlTransaction MyTransaction = DBConnection.BeginTransaction() Try For Each SomethingItem As Something In SomethingList Using MyCommand As New SqlCommand() MyCommand.Connection = DBConnection If SomethingItem.ID > 0 Then MyCommand.CommandText = "UpdateSomething" Else MyCommand.CommandText = "AddSomething" End If MyCommand.Transaction = MyTransaction MyCommand.CommandType = CommandType.StoredProcedure With MyCommand.Parameters If MyCommand.CommandText = "UpdateSomething" Then .Add("@id", SqlDbType.Int).Value = SomethingItem.ID End If .Add("@stuff", SqlDbType.Varchar).Value = SomethingItem.Stuff End With MyCommand.ExecuteNonQuery() End Using Next For Each ID As Integer In RemovalList Using MyCommand As New SqlCommand("DeleteSomething", DBConnection) MyCommand.Transaction = MyTransaction MyCommand.CommandType = CommandType.StoredProcedure With MyCommand.Parameters .Add("@id", SqlDbType.Int).Value = ID End With MyCommand.ExecuteNonQuery() End Using Next MyTransaction.Commit() Catch ex As Exception MyTransaction.Rollback() 'Exception handling goes here End Try End Using End Sub There are three stored procedures used here as well as some looping so I can see how something can be holding everything up if the list is large enough. Other users can log in to the system at the same time just fine though. I'm using Visual Studio 2008 to debug and am using SQL Server 2000 for the DB.

    Read the article

  • recursion resulting in extra unwanted data

    - by spacerace
    I'm writing a module to handle dice rolling. Given x die of y sides, I'm trying to come up with a list of all potential roll combinations. This code assumes 3 die, each with 3 sides labeled 1, 2, and 3. (I realize I'm using "magic numbers" but this is just an attempt to simplify and get the base code working.) int[] set = { 1, 1, 1 }; list = diceroll.recurse(0,0, list, set); ... public ArrayList<Integer> recurse(int index, int i, ArrayList<Integer> list, int[] set){ if(index < 3){ // System.out.print("\n(looping on "+index+")\n"); for(int k=1;k<=3;k++){ // System.out.print("setting i"+index+" to "+k+" "); set[index] = k; dump(set); recurse(index+1, i, list, set); } } return list; } (dump() is a simple method to just display the contents of list[]. The variable i is not used at the moment.) What I'm attempting to do is increment a list[index] by one, stepping through the entire length of the list and incrementing as I go. This is my "best attempt" code. Here is the output: Bold output is what I'm looking for. I can't figure out how to get rid of the rest. (This is assuming three dice, each with 3 sides. Using recursion so I can scale it up to any x dice with y sides.) [1][1][1] [1][1][1] [1][1][1] [1][1][2] [1][1][3] [1][2][3] [1][2][1] [1][2][2] [1][2][3] [1][3][3] [1][3][1] [1][3][2] [1][3][3] [2][3][3] [2][1][3] [2][1][1] [2][1][2] [2][1][3] [2][2][3] [2][2][1] [2][2][2] [2][2][3] [2][3][3] [2][3][1] [2][3][2] [2][3][3] [3][3][3] [3][1][3] [3][1][1] [3][1][2] [3][1][3] [3][2][3] [3][2][1] [3][2][2] [3][2][3] [3][3][3] [3][3][1] [3][3][2] [3][3][3] I apologize for the formatting, best I could come up with. Any help would be greatly appreciated. (This method was actually stemmed to use the data for something quite trivial, but has turned into a personal challenge. :) edit: If there is another approach to solving this problem I'd be all ears, but I'd also like to solve my current problem and successfully use recursion for something useful.

    Read the article

  • Why does Cacti keep waiting for dead poller processes?

    - by Oliver Salzburg
    sorry for the length I am currently setting up a new Debian (6.0.5) server. I put Cacti (0.8.7g) on it yesterday and have been battling with it ever since. Initial issue The initial issue I was observing, was that my graphs weren't updating. So I checked my cacti.log and found this concerning message: POLLER: Poller[0] Maximum runtime of 298 seconds exceeded. Exiting. That can't be good, right? So I went checking and started poller.php myself (via sudo -u www-data php poller.php --force). It will pump out a lot of message (which all look like what I would expect) and then hang for a minute. After that 1 minute, it will loop the following message: Waiting on 1 of 1 pollers. This goes on for 4 more minutes until the process is forcefully ended for running longer than 300s. So far so good I went on for a good hour trying to determine what poller might still be running, until I got to the conclusion that there simply is no running poller. Debugging I checked poller.php to see how that warning is issued and why. On line 368, Cacti will retrieve the number of finished processes from the database and use that value to calculate how many processes are still running. So, let's see that value! I added the following debug code into poller.php: print "Finished: " . $finished_processes . " - Started: " . $started_processes . "\n"; Result This will print the following within seconds of starting poller.php: Finished: 0 - Started: 1 Waiting on 1 of 1 pollers. Finished: 1 - Started: 1 So the values are being read and are valid. Until we get to the part where it keeps looping: Finished: - Started: 1 Waiting on 1 of 1 pollers. Suddenly, the value is gone. Why? Putting var_dump() in there confirms the issue: NULL Finished: - Started: 1 Waiting on 1 of 1 pollers. The return value is NULL. How can that be when querying SELECT COUNT()...? (SELECT COUNT() should always return one result row, shouldn't it?) More debugging So I went into lib\database.php and had a look at that db_fetch_cell(). A bit of testing confirmed, that the result set is actually empty. So I added my own database query code in there to see what that would do: $finished_processes = db_fetch_cell("SELECT count(*) FROM poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00'"); print "Finished: " . $finished_processes . " - Started: " . $started_processes . "\n"; $mysqli = new mysqli("localhost","cacti","cacti","cacti"); $result = $mysqli->query("SELECT COUNT(*) FROM poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00';"); $row = $result->fetch_assoc(); var_dump( $row ); This will output Finished: - Started: 1 array(1) { ["COUNT(*)"]=> string(1) "2" } Waiting on 1 of 1 pollers. So, the data is there and can be accessed without any problems, just not with the method Cacti is using? Double-check that! I enabled MySQL logging to make sure I'm not imagining things. Sure enough, when the error message is looped, the cacti.log reads as if it was querying like mad: 06/29/2012 08:44:00 PM - CMDPHP: Poller[0] DEVEL: SQL Cell: "SELECT count(*) FROM cacti.poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00'" 06/29/2012 08:44:01 PM - CMDPHP: Poller[0] DEVEL: SQL Cell: "SELECT count(*) FROM cacti.poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00'" 06/29/2012 08:44:02 PM - CMDPHP: Poller[0] DEVEL: SQL Cell: "SELECT count(*) FROM cacti.poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00'" But none of these queries are logged my MySQL. 
Yet, when I add my own database query code, it shows up just fine. What the heck is going on here?

    Read the article

  • MySQL 5.5 (Percona) assertion failure log.. what would cause this?

    - by Tom Geee
    256GB, 64 Core , AMD running Ubuntu 12.04 with Percona MySQL 5.5.28. Below is the assertion failure. We just had a second assertion failure (different "in file", position, etc) while running a large set of inserts. After the first failure, MySQL restarted after a reboot only - after continuously looping on the same error after trying to recover. I decided to do a mysqlcheck with -o for optimize. Since these are all Innodb tables (very large tables, 60+GB) this would do an alter table on all tables. In the middle of this , the below assertion failure happened again: 121115 22:30:31 InnoDB: Assertion failure in thread 140086589445888 in file btr0pcur.c line 452 InnoDB: Failing assertion: btr_page_get_prev(next_page, mtr) == buf_block_get_page_no(btr_pcur_get_block(cursor)) InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html InnoDB: about forcing recovery. 03:30:31 UTC - mysqld got signal 6 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. Please help us make Percona Server better by reporting any bugs at http://bugs.percona.com/ key_buffer_size=536870912 read_buffer_size=131072 max_used_connections=404 max_threads=500 thread_count=90 connection_count=90 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1618416 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. Thread pointer: 0x14edeb710 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 
stack_bottom = 7f687366ce80 thread_stack 0x30000 /usr/sbin/mysqld(my_print_stacktrace+0x2e)[0x7b52ee] /usr/sbin/mysqld(handle_fatal_signal+0x484)[0x68f024] /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f9cbb23fcb0] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35)[0x7f9cbaea6425] /lib/x86_64-linux-gnu/libc.so.6(abort+0x17b)[0x7f9cbaea9b8b] /usr/sbin/mysqld[0x858463] /usr/sbin/mysqld[0x804513] /usr/sbin/mysqld[0x808432] /usr/sbin/mysqld[0x7db8bf] /usr/sbin/mysqld(_Z13rr_sequentialP11READ_RECORD+0x1d)[0x755aed] /usr/sbin/mysqld(_Z17mysql_alter_tableP3THDPcS1_P24st_ha_create_informationP10TABLE_LISTP10Alter_infojP8st_orderb+0x216b)[0x60399b] /usr/sbin/mysqld(_Z20mysql_recreate_tableP3THDP10TABLE_LIST+0x166)[0x604bd6] /usr/sbin/mysqld[0x647da1] /usr/sbin/mysqld(_ZN24Optimize_table_statement7executeEP3THD+0xde)[0x64891e] /usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0x1168)[0x59b558] /usr/sbin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x30c)[0x5a132c] /usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1620)[0x5a2a00] /usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x14f)[0x63ce6f] /usr/sbin/mysqld(handle_one_connection+0x51)[0x63cf31] /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f9cbb237e9a] /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f9cbaf63cbd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort. Query (7f6300004b60): is an invalid pointer Connection ID (thread ID): 876 Status: NOT_KILLED You may download the Percona Server operations manual by visiting http://www.percona.com/software/percona-server/. You may find information in the manual which will help you identify the cause of the crash. 121115 22:31:07 [Note] Plugin 'FEDERATED' is disabled. 121115 22:31:07 InnoDB: The InnoDB memory heap is disabled 121115 22:31:07 InnoDB: Mutexes and rw_locks use GCC atomic builtins .. Then it recovered , without a reboot this time. from the log, what would cause this? I am currently running a dump to see if the problem resurfaces. edit: data partition is all in / since this is a hosted, defaulted file system unfortunately: Filesystem Size Used Avail Use% Mounted on /dev/vda3 742G 445G 260G 64% / udev 121G 4.0K 121G 1% /dev tmpfs 49G 248K 49G 1% /run none 5.0M 0 5.0M 0% /run/lock none 121G 0 121G 0% /run/shm /dev/vda1 99M 54M 40M 58% /boot my.cnf: [client] port = 3306 socket = /var/run/mysqld/mysqld.sock [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] skip-name-resolve innodb_file_per_table default_storage_engine=InnoDB user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /data/mysql tmpdir = /tmp skip-external-locking key_buffer = 512M max_allowed_packet = 128M thread_stack = 192K thread_cache_size = 64 myisam-recover = BACKUP max_connections = 500 table_cache = 812 table_definition_cache = 812 #query_cache_limit = 4M #query_cache_size = 512M join_buffer_size = 512K innodb_additional_mem_pool_size = 20M innodb_buffer_pool_size = 196G #innodb_file_io_threads = 4 #innodb_thread_concurrency = 12 innodb_flush_log_at_trx_commit = 1 innodb_log_buffer_size = 8M innodb_log_file_size = 1024M innodb_log_files_in_group = 2 innodb_max_dirty_pages_pct = 90 innodb_lock_wait_timeout = 120 log_error = /var/log/mysql/error.log long_query_time = 5 slow_query_log = 1 slow_query_log_file = /var/log/mysql/slowlog.log [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] [isamchk] key_buffer = 16M

    Read the article

  • CodePlex Daily Summary for Sunday, June 06, 2010

    CodePlex Daily Summary for Sunday, June 06, 2010

    New Projects
    Active Worlds Dot Net Wrapper (Based on AwSdk): Active Worlds Dot Net Wrapper (Based on AwSdk)
    Combina: Smart calculator for large combinatorial calculations.
    Concurrent Cache: ConcurrentCache is a smart output cache library extending OutputCacheProvider. It consists of in memory, cache files and compressed files modes and...
    Decay: Personal use. For learning
    FazTalk: FazTalk is a suite of tools and products that are designed to improve collaboration and workflow interactions. FazTalk takes an innovative approach...
    grouped: A peer to peer text editor, written in C# [update] I wrote this little thing a while back and even forgot about it, I stopped coding for more tha...
    HitchARide MVC 2 Sample: An MVC 2 sample written as part of the Microsoft 2010 London Web Camp based on the wireframes at http://schematics.earthware.co.uk/hitcharide. Not...
    Inspiration.Web: Description: A simple (but entertaining) ASP.NET MVC (C#) project to suggest random code names for projects. Intended audience: People who ne...
    NetFileBrowser - TinyMCE: tinyMCE file plugin with asp.net
    Oil Slick Live Feeds: All live feeds from BP's Remotely Operated Vehicles
    Particle Lexer: Parser and Tokenizer library
    Pdf Form Tool: Pdf Form Tool demonstrates how the iTextSharp library could be used to fill PDF forms. The input data is provided as a csv file. The application ...
    Planning Poker Windows Mobile 7: This project is a Planning Poker application for Windows Mobile 7 (and later?).
    RandomRat: RandomRat is a program for generating random sets that meet specific criteria
    Science.NET: A scientific library written in managed code. It supports advanced mathematics (algebra system, sequences, statistics, combinatorics...), data stru...
    Spider Compiler: Spider Compiler parses the input of a spider programming source file and compiles it (with help of csc.exe; the C#-Compiler) to an exe-file. This p...
    Sununpro: sunun's project for study by team foundation server.
    TFS Buddy: An application that manipulates your I-Buddy whenever something happens in your Team Foundation Server
    ValveSoft: ValveSys
    WiiMote Physics: WiiMote Physics is an application that allows you to retrieve data from your WiiMote or Balance Board and display it in real-time. It has a number...
    WinGet: WinGet is a download manager for Windows. You can drag links onto the WinGet Widget and it will download a file on the selected folder. It is dev...
    XProject.NET: A project management and team collaboration platform

    New Releases
    .NET DiscUtils: Version 0.9 Preview: This release is still under development. New features available in this release: Support for accessing short file names stored in WIM files Incr...
    Active Worlds Dot Net Wrapper (Based on AwSdk): Active World Dot Net Wrapper (0.0.1.85): Based on AwSdk 85 AwSdk UnOfficial Wrapper Howto Use: C# using AwWrapper; VB.Net Import AwWrapper
    AjaxControlToolkit additional extenders: ZhecheAjaxControls for .NET3.5: Used AJAX Control Toolkit Release Notes - April 12th 2010 Release Version 40412. Fixed deadlock in long operation canceling Some other fixes
    AnyCAD: AnyCAD.v1.2.ENU.Install: http://www.anycad.net Parametric Modeling *3D: Sphere, Box, Cylinder, Cone •2D: Line, Rectangle, Arc, Arch, Circle, Spline, Polygon •Feature: Extr...
    Community Forums NNTP bridge: Community Forums NNTP Bridge V29: Release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP bridge. This release has ad...
    Concurrent Cache: 1.0: This is the first release for the ConcurrentCache library.
    Configuration Section Designer: 2.0.0: This is the first Beta Release for VS 2010 support
    Doxygen Browser Addin for VS: Doxygen Browser Addin - v0.1.4 Beta: Support for Visual Studio 2010 improved the logging of errors (Event Logs) Fixed some issues/bugs Hot key for navigation "Control + F1, Contr...
    Folder Bookmarks: Folder Bookmarks 1.6.2: The latest version of Folder Bookmarks (1.6.2), with new UI changes. Once you have extracted the file, do not delete any files/folders. They are n...
    HERB.IQ: Beta 0.1 Source code release 5: Beta 0.1 Source code release 5
    Inspiration.Web: Initial release (deployment package): Initial release (deployment package)
    NetFileBrowser - TinyMCE: Demo Project: Demo Project
    NetFileBrowser - TinyMCE: NetFileBrowser: NetImageBrowser
    NLog - Advanced .NET Logging: Nightly Build 2010.06.05.001: Changes since the last build: 2010-06-04 23:29:42 Jarek Kowalski Massive update to documentation generator. 2010-05-28 15:41:42 Jarek Kowalski upda...
    Oil Slick Live Feeds: Oil Slick Live Feeds 0.1: The first release, with feeds from the MS Skandi, Boa Deep C, Enterprise and Q4000. They are live streams from the ROVs monitoring the damaged...
    Pcap.Net: Pcap.Net 0.7.0 (46671): Pcap.Net - June 2010 Release. Pcap.Net is a .NET wrapper for WinPcap written in C++/CLI and C#. It features almost all WinPcap features and includes...
    sqwarea: Sqwarea 0.0.289.0 (alpha): API support
    TFS Buddy: TFS Buddy First release (Beta 1): This is the first release of the TFS Buddy.
    Visual Studio DSite: Looping Animation (Visual C++ 2008): A soldier firing a bullet that loops and displays an explosion every time it hits the edge of the form.
    WiiMote Physics: WiiMote Physics v4.0: v4.0.0.1 Recovered from existing compiled assembly after hard drive failure Now requires .NET 4.0 (it seems to make it run faster) Added new c...
    WinGet: Alpha 1: First Alpha of WinGet. It includes all the planned features but it contains many bugs. Packaged using 7-Zip and ClickOnce.

    Most Popular Projects
    WBFS Manager
    Rawr
    AJAX Control Toolkit
    Microsoft SQL Server Product Samples: Database
    Silverlight Toolkit
    Windows Presentation Foundation (WPF)
    PHPExcel
    patterns & practices – Enterprise Library
    Microsoft SQL Server Community & Samples
    ASP.NET

    Most Active Projects
    Community Forums NNTP bridge
    Rawr
    patterns & practices – Enterprise Library
    GMap.NET - Great Maps for Windows Forms & Presentation
    N2 CMS
    Ionics Isapi Rewrite Filter
    StyleCop
    smark C# Library
    Farseer Physics Engine
    patterns & practices: Composite WPF and Silverlight

    Read the article

  • Using SQL Execution Plans to discover the Swedish alphabet

    - by Rob Farley
    SQL Server is quite remarkable in a bunch of ways. In this post, I'm using the way that the Query Optimizer handles LIKE to keep it SARGable, the Execution Plans that result, Collations, and PowerShell to come up with the Swedish alphabet.

    SARGability is the ability to seek for items in an index according to a particular set of criteria. If you don't have SARGability in play, you need to scan the whole index (or the table, if you don't have an index). For example, I can find myself in the phonebook easily, because it's sorted by LastName and I can find Farley in there by moving to the Fs, and so on. I can't find everyone in my suburb easily, because the phonebook isn't sorted that way. I can't even find people who have six letters in their last name, because, again, the book is sorted by LastName, not by LEN(LastName). This is all stuff I've looked at before, including in the talk I gave at SQLBits in October 2010.

    If I try to find everyone whose names start with F, I can do that using a query a bit like:

        SELECT LastName FROM dbo.PhoneBook WHERE LEFT(LastName,1) = 'F';

    Unfortunately, the Query Optimizer doesn't realise that all the entries that satisfy LEFT(LastName,1) = 'F' will be together, and it has to scan the whole table to find them. But if I write:

        SELECT LastName FROM dbo.PhoneBook WHERE LastName LIKE 'F%';

    then SQL is smart enough to understand this, and performs an Index Seek instead. To see why, I look further into the plan, in particular at the properties of the Index Seek operator. The ToolTip shows me what I'm after: you'll see that it does a Seek to find any entries that are at least F, but not yet G. There's an extra Predicate in there (a Residual Predicate, if you like), which checks that each LastName really is LIKE 'F%' – I suppose it doesn't consider the Seek Predicate to be quite enough – but most of the benefit comes from its working out the Seek Predicate, filtering to just the "at least F but not yet G" section of the data.

    This got me curious, though, particularly about where the G comes from, and whether I could leverage it to create the Swedish alphabet. I know that in the Swedish language there are three extra letters that appear at the end of the alphabet. One of them is ä, which appears in the word Västerås. It turns out that Västerås is quite hard to find in an index when you're looking it up on a Swedish map. I talked about this briefly in my five-minute talk on Collation from SQLPASS (the one which was slightly less than serious).

    So by looking at the plan, I can work out what the next letter is in the alphabet of the collation used by the column. In other words, if my alphabet were Swedish, I'd be able to tell what the next letter after F is – just in case it's not G. It turns out it is... Yes, the Swedish letter after F is G. But I worked this out by using a copy of my PhoneBook table that used the Finnish_Swedish_CI_AI collation. I couldn't find how the Query Optimizer calculates the G, and my friend Paul White (@SQL_Kiwi) tells me that it's frustratingly internal to the QO. He's particularly smart, even if he is from New Zealand.

    To investigate further, I decided to do some PowerShell, leveraging the Get-SqlPlan function that I blogged about recently (make sure you also have the SqlServerCmdletSnapin100 snap-in added). I started by indicating that I was going to use Finnish_Swedish_CI_AI as my collation of choice, and that I'd start with whichever letter came straight after the number 9.
    I figure that this is a cheat's way of guessing the first letter of the alphabet (but it doesn't actually work in Unicode – luckily I'm using varchar, not nvarchar. Actually, there are a few aspects of this code that only work using ASCII, so apologies if you were wanting to apply it to Greek, Japanese, etc.). I also initialised my $alphabet variable.

        $collation = 'Finnish_Swedish_CI_AI';
        $firstletter = '9';
        $alphabet = '';

    Now I created the table for my test. A single field would do, and putting a Clustered Index on it would suffice for the Seeks.

        Invoke-Sqlcmd -server . -data tempdb -query "create table dbo.collation_test (col varchar(10) collate $collation primary key);"

    Now I get into the looping.

        $c = $firstletter;
        $stillgoing = $true;
        while ($stillgoing)
        {

    I construct the query I want, seeking for entries which start with whatever $c has reached, and get the plan for it:

        $query = "select col from dbo.collation_test where col like '$($c)%';";
        [xml] $pl = get-sqlplan $query "." "tempdb";

    At this point, my $pl variable is a scary piece of XML representing the execution plan. A bit of hunting through it showed me that the EndRange element contained what I was after, and that if it contained NULL, then I was done.

        $stillgoing = ($pl.ShowPlanXML.BatchSequence.Batch.Statements.StmtSimple.QueryPlan.RelOp.IndexScan.SeekPredicates.SeekPredicateNew.SeekKeys.EndRange -ne $null);

    Now I could grab the value out of it (which came with apostrophes that needed stripping), and append that to my $alphabet variable.

        if ($stillgoing)
        {
            $c = $pl.ShowPlanXML.BatchSequence.Batch.Statements.StmtSimple.QueryPlan.RelOp.IndexScan.SeekPredicates.SeekPredicateNew.SeekKeys.EndRange.RangeExpressions.ScalarOperator.ScalarString.Replace("'","");
            $alphabet += $c;
        }

    Finally, finishing the loop, dropping the table, and showing my alphabet!

        }
        Invoke-Sqlcmd -server . -data tempdb -query "drop table dbo.collation_test;";
        $alphabet;

    When I run all this, I see that the Swedish alphabet is ABCDEFGHIJKLMNOPQRSTUVXYZÅÄÖ, which matches what I see at Wikipedia. Interesting to see that the letters on the end are still there, even with Case Insensitivity. It turns out they're not just "letters with accents", they're letters in their own right.

    I'm sure you gave up reading long ago, and really aren't that fazed about the idea of doing this using PowerShell. I chose PowerShell because I'd already come up with an easy way of grabbing the estimated plan for a query, and PowerShell does allow for easy navigation of XML. I find the most interesting aspect of this to be the fact that the Query Optimizer uses the next letter of the alphabet to maintain the SARGability of LIKE. I'm hoping they do something similar for a whole bunch of operations. Oh, and the fact that you now know how to find stuff in the IKEA catalogue.

    Footnote: If you are interested in whether this works in other languages, you might want to consider the following screenshot, which shows that in principle, it should work with Japanese. It might be a bit harder to run this in PowerShell though, as I'm not sure how it translates. In Hiragana, the Japanese alphabet starts あ, い, う, え, お, ...
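    For reference, the PowerShell fragments above stitch together into one script. The following is only a sketch of how the pieces fit: it assumes the Get-SqlPlan function from Rob's earlier post and the SqlServerCmdletSnapin100 snap-in are already loaded, and that a default local instance is reachable as "." with tempdb available.

        # Collation to probe, and the starting character; '9' is the post's cheat
        # for landing just before the first letter of the alphabet.
        $collation = 'Finnish_Swedish_CI_AI';
        $firstletter = '9';
        $alphabet = '';

        # A single indexed column using the collation under test is enough for the Seeks.
        Invoke-Sqlcmd -server . -data tempdb -query "create table dbo.collation_test (col varchar(10) collate $collation primary key);"

        $c = $firstletter;
        $stillgoing = $true;
        while ($stillgoing)
        {
            # Get the estimated plan for a LIKE seek starting at the current letter.
            $query = "select col from dbo.collation_test where col like '$($c)%';";
            [xml] $pl = get-sqlplan $query "." "tempdb";

            # The EndRange of the Seek Predicate holds the next letter of the collation's
            # alphabet; when it is missing, the loop is done.
            $endrange = $pl.ShowPlanXML.BatchSequence.Batch.Statements.StmtSimple.QueryPlan.RelOp.IndexScan.SeekPredicates.SeekPredicateNew.SeekKeys.EndRange;
            $stillgoing = ($endrange -ne $null);
            if ($stillgoing)
            {
                # Strip the surrounding apostrophes and append the letter.
                $c = $endrange.RangeExpressions.ScalarOperator.ScalarString.Replace("'","");
                $alphabet += $c;
            }
        }

        # Clean up and show the result.
        Invoke-Sqlcmd -server . -data tempdb -query "drop table dbo.collation_test;";
        $alphabet;

    Swapping $collation for another installed collation should, in principle, spell out that collation's alphabet instead.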

    Read the article

  • Using Lightbox with _Screen

    Although I have to admit that I discovered Bernard Bout's ideas and concepts about implementing a lightbox in Visual FoxPro quite a while ago, there was no "spare" time in active projects that allowed me to have a closer look into his solution(s). Luckily, these days I received a request to focus a little bit more on this. This article describes how to integrate and make use of Bernard's lightbox class in combination with _Screen in Visual FoxPro. The requirement in this project was to be able to visually lock the whole application (the _Screen area) and guide the user to a piece of information that should not be easily ignored. Depending on its importance, any current user activity should be interrupted and focus put onto the notification.

    Getting the "meat", eh, source code

    Please check out Bernard's blog on Foxite directly in order to get the latest and greatest version. At the time of writing this article I am using version 6.0, as described in this blog entry: The Fastest Lightbox Ever. The Lightbox class is sub-classed from the imgCanvas class of the GdiPlusX project on VFPx, and therefore you need the source code of GdiPlusX as well and have to integrate it into your development environment. The version I use is available here: Release GDIPlusX 1.20. As soon as you open the bbGdiLightbox class for the first time, VFP might ask you to update the reference to gdiplusx.vcx. As we have the sources, that is no problem, and you have access to Bernard's code. The class itself is pretty easy to understand: a few properties that you do not need to change, and three methods: Setup(), ShowLightbox() and BeforeDraw().

    The challenge - _Screen or not?

    In his article about the fastest lightbox ever, Bernard states the following: "The class will only work on a form. It will not support any other containers." Really? And what about _Screen? Isn't that a form class, too? Yes, of course it is, but nonetheless trying to use _Screen directly will fail. Well, let's have a look at the code to see why:

        WITH This
            .Left = 0
            .Top = 0
            .Height = ThisForm.Height
            .Width = ThisForm.Width
            .ZOrder(0)
            .Visible = .F.
        ENDWITH

    During the setup of the lightbox, as well as while capturing the image as a replacement for your forms and controls, the object reference Thisform is used, which is a little bit restrictive in my opinion, but let's continue. The second issue lies in the method ShowLightbox() and is introduced by the call to .Bitmap.FromScreen():

        Lparameters tlVisiblilty
        * tlVisiblilty - show or hide (T/F)
        * grab a screen dump with controls
        IF tlVisiblilty
            Local loCaptureBmp As xfcBitmap
            Local lnTitleHeight, lnLeftBorder, lnTopBorder, lcImage, loImage
            lnTitleHeight = IIF(ThisForm.TitleBar = 1,Sysmetric(9),0)
            lnLeftBorder = IIF(ThisForm.BorderStyle < 2,0,Sysmetric(3))
            lnTopBorder = IIF(ThisForm.BorderStyle < 2,0,Sysmetric(4))
            With _Screen.System.Drawing
                loCaptureBmp = .Bitmap.FromScreen(ThisForm.HWnd,;
                    lnLeftBorder,;
                    lnTopBorder+lnTitleHeight,;
                    ThisForm.Width,;
                    ThisForm.Height)
            ENDWITH
            * save it to a property
            This.capturebmp = loCaptureBmp
            ThisForm.SetAll("Visible",.F.)
            This.Draw()
            This.Visible = .T.
        ELSE
            ThisForm.SetAll("Visible",.T.)
            This.Visible = .F.
        ENDIF

    My first trials in using the class ended in an exception - GdiPlusError:OutOfMemory - thrown by the Bitmap object. Frankly speaking, this happened mainly because of my lack of knowledge about GdiPlusX. After reading some documentation, especially about the FromScreen() method, I experimented a little bit.
    Capturing the visible area of _Screen actually was not the real problem; the dimensions I specified for the bitmap were.

    The modifications - step by step

    First of all, get rid of the restrictive object references to Thisform and change them either into This.Parent or, more generically, into This.oForm (even better: This.oControl). The Lightbox.Setup() method now sets the necessary object reference like so:

        *====================================================================
        * Initial setup
        * Default value: This.oControl = "This.Parent"
        * Alternative:   This.oControl = "_Screen"
        *====================================================================
        With This
            .oControl = Evaluate(.oControl)
            If Vartype(.oControl) == T_OBJECT
                .Anchor = 0
                .Left = 0
                .Top = 0
                .Width = .oControl.Width
                .Height = .oControl.Height
                .Anchor = 15
                .ZOrder(0)
                .Visible = .F.
            EndIf
        Endwith

    Also, based on other developers' comments on Bernard's articles about his lightbox concept and its evolution, I wrote the code to handle the differences between a form and _Screen; it goes into Lightbox.ShowLightbox() like this:

        *====================================================================
        * tlVisibility - show or hide (T/F)
        * grab a screen dump with controls
        *====================================================================
        Lparameters tlVisibility
        Local loControl
        m.loControl = This.oControl
        If m.tlVisibility
            Local loCaptureBmp As xfcBitmap
            Local lnTitleHeight, lnLeftBorder, lnTopBorder, lcImage, loImage
            lnTitleHeight = Iif(m.loControl.TitleBar = 1,Sysmetric(9),0)
            lnLeftBorder = Iif(m.loControl.BorderStyle < 2,0,Sysmetric(3))
            lnTopBorder = Iif(m.loControl.BorderStyle < 2,0,Sysmetric(4))
            With _Screen.System.Drawing
                If Upper(m.loControl.Name) == Upper("Screen")
                    loCaptureBmp = .Bitmap.FromScreen(m.loControl.HWnd)
                Else
                    loCaptureBmp = .Bitmap.FromScreen(m.loControl.HWnd,;
                        lnLeftBorder,;
                        lnTopBorder+lnTitleHeight,;
                        m.loControl.Width,;
                        m.loControl.Height)
                EndIf
            Endwith
            * save it to a property
            This.CaptureBmp = loCaptureBmp
            m.loControl.SetAll("Visible",.F.)
            This.Draw()
            This.Visible = .T.
        Else
            This.CaptureBmp = .Null.
            m.loControl.SetAll("Visible",.T.)
            This.Visible = .F.
        Endif

    Are we done? Almost...

    Bernard says it clearly in his article: "Just drop the class on a form and call it as shown." That did not come to my mind in the first place when working with _Screen but, yeah, he is right. Dropping the class on a form provides a permanent link between those two classes; it creates a valid This.Parent object reference. Bearing in mind that the lightbox class cannot be "dropped" on _Screen, we have to create the same type of binding during runtime execution, like so:

        *====================================================================
        * Create global lightbox component
        *====================================================================
        Local llOk, loException As Exception
        m.llOk = .F.
        m.loException = .Null.
        If Not Vartype(_Screen.Lightbox) == "O"
            Try
                _Screen.AddObject("Lightbox", "bbGdiLightbox")
            Catch To m.loException
                Assert .F. Message m.loException.Message
            EndTry
        EndIf
        m.llOk = (Vartype(_Screen.Lightbox) == "O")
        Return m.llOk

    Through runtime instantiation we create a valid binding to This.Parent in the lightbox object, and the code works as expected with _Screen.

    Ease your life: Use properties instead of constants

    Having a closer look at the BeforeDraw() method might whet your appetite to simplify the code a little bit. Looking at the sample screenshots in Bernard's article, you see several forms in different colors.
    This got me to modify the code like so:

        *====================================================================
        * Apply the actual lightbox effect on the captured bitmap.
        *====================================================================
        If Vartype(This.CaptureBmp) == T_OBJECT
            Local loGfx As xfcGraphics
            loGfx = This.oGfx
            With _Screen.System.Drawing
                loGfx.DrawImage(This.CaptureBmp,This.Rectangle,This.Rectangle,.GraphicsUnit.Pixel)
                * change the colours as needed here
                * possible colours are (220,128,0,0), (220,0,0,128) etc.
                loBrush = .SolidBrush.New(.Color.FromArgb( ;
                    This.Opacity, .Color.FromRGB(This.BorderColor)))
                loGfx.FillRectangle(loBrush,This.Rectangle)
            Endwith
        Endif

    Create an additional property Opacity to specify the degree of translucency you would like, without the need to change the code in each instance of the class. This way you only need to change the values of Opacity and BorderColor to tweak the appearance of your lightbox. This could be quite helpful to signal different levels of importance (i.e. green, yellow, orange, red, etc.) of notifications to the users of the application.

    Final thoughts

    Using the lightbox concept in combination with _Screen instead of forms is possible. Jim Wiggins already comments on Bernard's article, suggesting to loop through the _Screen.Forms collection in order to cascade the lightbox visibility to all active forms. Good idea. But honestly, I believe that instead of looping over all forms, one could use _Screen.SetAll("ShowLightbox", .T./.F., "Form") together with a Form.ShowLightbox_Access method to gain more speed. The modifications described above might provide even more features to your applications while consuming fewer resources. Additionally, the restriction of capturing only forms does not exist anymore. Using _Screen you are able to capture and cover anything. The captured area of _Screen does not include any toolbars, docked windows, or menus. Therefore, it is advisable to take this concept to a higher level and to combine it with additional classes that handle the state of toolbars, docked windows and menus - which I did for the customer's project.

    Read the article

< Previous Page | 26 27 28 29 30 31 32 33 34  | Next Page >