Search Results

Search found 9311 results on 373 pages for 'loop counter'.

  • directory traversal in Java using different regular and enhanced for loops

    - by user3245621
    I have this code to print out all directories and files. I tried to use a recursive method call in a for loop. With the enhanced for loop, the code prints all the directories and files correctly, but with the regular for loop it does not work. I am puzzled by the difference between the regular and enhanced for loops.

        public class FileCopy {
            private File[] childFiles = null;

            public static void main(String[] args) {
                FileCopy fileCopy = new FileCopy();
                File srcFile = new File("c:\\temp");
                fileCopy.copyTree(srcFile);
            }

            public void copyTree(File file) {
                if (file.isDirectory()) {
                    System.out.println(file + " is a directory. ");
                    childFiles = file.listFiles();
                    /*for (int j = 0; j < childFiles.length; j++) {
                        copyTree(childFiles[j]);
                    } This part is not working*/
                    for (File a : childFiles) {
                        copyTree(a);
                    }
                    return;
                } else {
                    System.out.println(file + " is a file. ");
                    return;
                }
            }
        }
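
    The indexed loop most likely fails because childFiles is an instance field that every recursive call overwrites, while the enhanced for loop captures the array reference once at loop entry and is therefore unaffected by the reassignment. A minimal sketch of a fix using a local variable (the import and the null check are additions):

        import java.io.File;

        public class FileCopy {
            public static void main(String[] args) {
                new FileCopy().copyTree(new File("c:\\temp"));
            }

            public void copyTree(File file) {
                if (file.isDirectory()) {
                    System.out.println(file + " is a directory.");
                    File[] children = file.listFiles(); // local: recursion cannot overwrite it
                    if (children != null) {             // listFiles() returns null on I/O error
                        for (int j = 0; j < children.length; j++) {
                            copyTree(children[j]);      // the indexed loop now works too
                        }
                    }
                } else {
                    System.out.println(file + " is a file.");
                }
            }
        }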

  • Sinatra: rendering snippets (partials)

    - by Michael
    I'm following along with an O'Reilly book that's building a Twitter clone with Sinatra. As Sinatra doesn't have 'partials' (in the way that Rails does), the author creates his own 'snippets' that work like partials. I understand that this is fairly common in Sinatra. Anyway, inside one of his snippets (see the first one below) he calls another snippet, text_limiter_js (copied below), which is basically a JavaScript function. If you look at the function in text_limiter_js, you'll notice that it takes two parameters. I don't understand where these parameters are coming from, because they're not getting passed in when text_limiter_js is rendered inside the other snippet. I'm not sure if I've given enough information/code for someone to help me understand this, but if you can, please explain.

        =snippet :'/snippets/text_limiter_js'
        %h2.comic What are you doing?
        %form{:method => 'post', :action => '/update'}
          %textarea.update.span-15#update{:name => 'status', :rows => 2,
            :onKeyDown => "text_limiter($('#update'), $('#counter'))"}
          .span-6
            %span#counter 140 characters left
          .prepend-12
            %input#button{:type => 'submit', :value => 'update'}

    text_limiter_js.haml:

        :javascript
          function text_limiter(field, counter_field) {
            limit = 139;
            if (field.val().length > limit)
              field.val(field.val().substring(0, limit));
            else
              counter_field.text(limit - field.val().length);
          }
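
    The short answer: the snippet only defines text_limiter; nothing calls it at render time. The two arguments are supplied in the browser when the onKeyDown handler declared on the textarea fires, and that attribute is the call site. A minimal sketch of the same binding done in plain jQuery (an equivalent rewrite, not the book's code):

        // equivalent of :onKeyDown => "text_limiter($('#update'), $('#counter'))"
        $('#update').keydown(function () {
            text_limiter($('#update'), $('#counter'));
        });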

  • table in drupal with edit link

    - by user544079
    I have a table created in Drupal with an edit link pointing to the input form. The problem is that it only displays the last row's values in the $email and $comment variables. Can anyone suggest how to modify the table display so the edit link points to the corresponding record?

        function _MYMODULE_sql_to_table($sql) {
          $html = "";
          // execute sql
          $resource = db_query($sql);
          // fetch database results in an array
          $results = array();
          while ($row = db_fetch_array($resource)) {
            $results[] = $row;
            $email = $row['email'];
            $comment = $row['comment'];
            drupal_set_message('Email: ' . $email . ' comment: ' . $comment);
          }
          // ensure results exist
          if (!count($results)) {
            $html .= "Sorry, no results could be found.";
            return $html;
          }
          // create an array to contain all table rows
          $rows = array();
          // get a list of column headers
          $columnNames = array_keys($results[0]);
          // loop through results and create table rows
          foreach ($results as $key => $data) {
            // create row data
            $row = array(
              'edit' => l(t('Edit'), "admin/content/test/$email/$comment/ContactUs", $options = array()),
            );
            // loop through column names
            foreach ($columnNames as $c) {
              $row[] = array(
                'data' => $data[$c],
                'class' => strtolower(str_replace(' ', '-', $c)),
              );
            }
            // add row to rows array
            $rows[] = $row;
          }
          // loop through column names and create headers
          $header = array();
          foreach ($columnNames as $c) {
            $header[] = array(
              'data' => $c,
              'class' => strtolower(str_replace(' ', '-', $c)),
            );
          }
          // generate table html
          $html .= theme('table', $header, $rows);
          return $html;
        }

        // then you can call it in your code...
        function _MYMODULE_some_page_callback() {
          $html = "";
          $sql = "select * from {contactus}";
          $html .= _MYMODULE_sql_to_table($sql);
          return $html;
        }
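
    The last-row problem comes from building the link with $email and $comment, which are leftovers from the earlier while loop and therefore always hold the final row's values. A sketch of the fix, using the current row inside the foreach instead:

        // inside the foreach ($results as $key => $data) loop:
        $row = array(
          'edit' => l(t('Edit'), "admin/content/test/{$data['email']}/{$data['comment']}/ContactUs"),
        );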

  • set countdown correctly, as3

    - by VideoDnd
    How can I set my countdown correctly? I'm counting from 33,000.00 to zero. It works after a fashion, but the minus operator appears in the textfield.

        //Countdown from 33,000.00 to zero
        var timer:Timer = new Timer(10);
        var count:int = -3300000;
        var fcount:int = 0;
        timer.addEventListener(TimerEvent.TIMER, incrementCounter);
        timer.start();

        function incrementCounter(event:TimerEvent) {
            count++;
            fcount = int(count);
            mytext.text = formatCount(fcount);
        }

        function formatCount(i:int):String {
            var fraction:int = i % 100;
            var whole:int = i / 100;
            return ("0000000" + whole).substr(-7, 7) + "." + (fraction < 10 ? "0" + fraction : fraction);
        }

    EXAMPLE: I need something I can update with XML, so it can be an up-counter or a down-counter depending on the variables.

        //Count up from 33,000.00
        var countValue:int = 3300000;
        count = countValue;

        //Count down from 33,000.00
        var countValue:int = -3300000;
        count = countValue;
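
    One way to keep the signed counter but lose the minus sign in the display is to format the magnitude only, and step toward zero regardless of direction. A hedged sketch (variable names follow the question):

        // step toward zero whether counting up or down
        function incrementCounter(event:TimerEvent):void {
            if (count < 0) count++;
            else if (count > 0) count--;
            mytext.text = formatCount(count);
        }

        // format the absolute value so no "-" ever reaches the textfield
        function formatCount(i:int):String {
            var v:int = (i < 0) ? -i : i;
            var fraction:int = v % 100;
            var whole:int = v / 100;
            return ("0000000" + whole).substr(-7, 7) + "."
                + (fraction < 10 ? "0" + fraction : String(fraction));
        }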

  • Setting default values for object properties in AS3

    - by Paul T
    I'm an ActionScript newbie, so please bear with me. Below is a function, and I am curious how to set default property values for objects that are being created in a loop. In the example below, these properties are the same for each object created in the loop: titleTextField.selectable, titleTextField.wordWrap, titleTextField.x. If you pull these properties out of the loop, they are null because the TextField objects have not been created, but it seems silly to have to set them each time. What is the correct way to do this? Thanks!

        var titleTextFormat:TextFormat = new TextFormat();
        titleTextFormat.size = 10;
        titleTextFormat.font = "Arial";
        titleTextFormat.color = 0xfff200;

        for (var i=0; i<arrThumbPicList.length; i++) {
            var yPos = 55 * i;
            var titleTextField:TextField = new TextField();
            titleTextField.selectable = false;
            titleTextField.wordWrap = true;
            titleTextField.text = arrThumbTitles[i];
            titleTextField.x = 106;
            titleTextField.y = 331 + yPos;
            container.addChild(titleTextField);
            titleTextField.setTextFormat(titleTextFormat);
        }
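
    A common answer is to move the shared defaults into a small factory function, so the loop only sets what varies per item. A sketch (the function name is illustrative):

        function makeTitleField():TextField {
            var tf:TextField = new TextField();
            tf.selectable = false;
            tf.wordWrap = true;
            tf.x = 106;
            return tf;
        }

        for (var i:int = 0; i < arrThumbPicList.length; i++) {
            var titleTextField:TextField = makeTitleField();
            titleTextField.text = arrThumbTitles[i];
            titleTextField.y = 331 + 55 * i;
            container.addChild(titleTextField);
            titleTextField.setTextFormat(titleTextFormat);
        }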

  • How do I reset my pointer to a specific array location?

    - by ohtanya
    I am a brand new programming student, so please forgive my ignorance. My assignment states: Write a program that declares an array of 10 integers. Write a loop that accepts 10 values from the keyboard and write another loop that displays the 10 values. Do not use any subscripts within the two loops; use pointers only. Here is my code:

        #include "stdafx.h"
        #include <iostream>
        using namespace std;

        int main()
        {
            const int NUM = 10;
            int values[NUM];
            int *p = &values[0];
            int x;
            for(x = 0; x < NUM; ++x, ++p)
            {
                cout << "Enter a value: ";
                cin >> *p;
            }
            for(x = 0; x < NUM; ++x, ++p)
            {
                cout << *p << " ";
            }
            return 0;
        }

    I think I know where my problem is. After my first loop, my pointer is at values[10], but I need to get it back to values[0] to display them. How can I do that?
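
    Exactly right: after the first loop p points one past the end of the array, so reading *p there is undefined behavior. Assign it back to the start before the second loop. A minimal sketch of the change:

        p = values;        // same as p = &values[0]; rewind to the first element
        for (x = 0; x < NUM; ++x, ++p)
        {
            cout << *p << " ";
        }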

  • Differentiate Between UITableView Editing States?

    - by Josh Kahane
    I have been looking at trying to differentiate between editing states in my UITableView. I need to call a method only when in editing mode after tapping the edit button (when the cells slide in and you see the little circular delete icons), but NOT when the user swipes to delete. Is there any way I can differentiate between the two? Thanks.

    EDIT: Solution, thanks to Rodrigo. Both each cell and the entire table view have an 'editing' BOOL value, so I loop through all the cells: if more than one of them is editing, then we know the whole table is (the user tapped the edit button); if only one is editing, then we know the user has swiped a cell, editing that individual one. This lets me deal with each editing state individually!

        - (void)setEditing:(BOOL)editing animated:(BOOL)animated {
            [super setEditing:editing animated:animated];
            int i = 0;
            // When editing, loop through the cells and hide the status image so it
            // doesn't block the delete controls. Fade back in when done editing.
            for (customGuestCell *cell in self.tableView.visibleCells) {
                if (cell.isEditing) {
                    i += 1;
                }
            }
            if (i > 1) {
                for (customGuestCell *cell in self.tableView.visibleCells) {
                    if (editing) {
                        // loop through the visible cells and animate their imageViews
                        [UIView beginAnimations:nil context:nil];
                        [UIView setAnimationDuration:0.4];
                        cell.statusImg.alpha = 0;
                        [UIView commitAnimations];
                    }
                }
            }
            else if (!editing) {
                for (customGuestCell *cell in self.tableView.visibleCells) {
                    [UIView beginAnimations:nil context:nil];
                    [UIView setAnimationDuration:0.4];
                    cell.statusImg.alpha = 1.0;
                    [UIView commitAnimations];
                }
            }
        }
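
    For reference, UIKit also exposes this distinction directly: the delegate method below is documented as firing when editing begins from a swipe, not from the edit button, so it can serve as a cleaner differentiator. A hedged sketch:

        // called when a swipe begins editing a single row; the edit-button
        // path goes through setEditing:animated: instead
        - (void)tableView:(UITableView *)tableView willBeginEditingRowAtIndexPath:(NSIndexPath *)indexPath {
            // handle the swipe-to-delete case here
        }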

  • Runtime of optimized Primehunter

    - by Setton
    OK, so I need some serious runtime help here! This method takes in an int value, checks its primality, and returns true if the number is indeed a prime. I understand why the loop only needs to run while i * i <= n; I understand that the worst case scenario is the one in which the number is prime (or a multiple of a prime). But I don't understand how to quantify the actual runtime. I have traced the loop by hand to try to understand the pattern or correlation between the number n and how many iterations occur, but I literally feel like I keep falling into the same trap every time. I need a new way of thinking about this! I have a hint: "Think about the SIZE of the integer", which makes me want to relate the literal number of digits in the number to how many iterations the for loop performs (floor(log(n)) + 1). BUT IT'S NOT WORKING?! I KNOW it isn't square root n, obviously. I'm asking for Big O notation.

        public class PrimeHunter {
            public static boolean isPrime(int n) {
                boolean answer = (n > 1) ? true : false; //runtime = linear runtime
                for (int i = 2; i * i <= n; i++)         //runtime = ?????
                {
                    if (n % i == 0) //doesn't occur if it is a prime
                    {
                        answer = false;
                        break;
                    }
                }
                return answer; //runtime = linear runtime
            }
        }
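
    Counting iterations directly: the loop runs while i * i <= n, so it executes at most about sqrt(n) times, which is O(sqrt(n)) measured in the value n. The hint about the SIZE of the integer points at measuring the input by its length instead: with b = floor(log2(n)) + 1 bits, sqrt(n) is roughly 2^(b/2), i.e. exponential in the number of digits. A throwaway harness to check the sqrt(n) growth empirically (hypothetical, not part of the assignment):

        public class TrialDivisionCount {
            // returns how many loop iterations isPrime(n) would perform
            static int iterations(int n) {
                int count = 0;
                for (int i = 2; i * i <= n; i++) {
                    count++;
                    if (n % i == 0) break;
                }
                return count;
            }

            public static void main(String[] args) {
                for (int n : new int[] {101, 10007, 1000003}) {
                    System.out.println(n + " -> " + iterations(n)
                        + " iterations (sqrt = " + (int) Math.sqrt(n) + ")");
                }
            }
        }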

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 3

    - by Tarun Arora
    Welcome back once again. In Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing an application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies. In Part 2 I discussed the details of web performance and load tests, as well as why it's important to follow a goal-based pattern while performance testing your application. In Part 3 I'll be discussing test result analysis, test result drill-through, test report generation, test run comparison, the ASP.NET Profiler and some closing thoughts.

    Test Results – I see some creepy worms!

    In Part 2 we put together a web performance test and a load test; let's run the load test to see how the Web site responds to the load simulation. While the load test is running you will be able to see close to real-time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways:

    Monitor a running load test - A condensed set of the performance counter data is maintained in memory. To prevent the results memory requirements from growing unbounded, up to 200 samples for each performance counter are maintained. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.

    After the load test run is completed - The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. Below you can see a screen shot of the summary view, which provides key results in a format that is compact and easy to read. You can also print the load test summary; this is generated after the test has completed or been stopped.

    Analyse the load test results of a previously run load test - We'll see this in the section where I discuss comparison between two test runs.

    The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, and to drill down to the user activity chart where you can hover over to see more details of the test run.

    Generate Report => Test Run Comparisons

    The level of reporting you can generate using the Load Test Analyser is astonishing. You have the option to create Excel reports and conduct side-by-side analysis of two test results or to track trend analysis. The tool also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET Profiler report to conduct further analysis as well. View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.

    Compare results

    This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screen shots from the MSDN documentation; I would highly recommend exploring the wealth of knowledge available on MSDN.
    Leaving Thoughts

    While load testing the application with an excessive load for a longer duration of time, I managed to bring IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that IIS had run out of threads, as all the threads were busy processing existing requests. One easy way of fixing this is to increase the default number of allocated threads, but that might escalate the problem; the better suggestion is to try and drill down to the actual root cause. Whenever garbage collection runs it stops processing any pages, so all requests that come in during that period are queued up; realistically, though, garbage collection completes in a fraction of a second. To understand this better, let's look at the .NET heap. It is divided into the large object heap and the small object heap: anything greater than 85 KB in size will be allocated on the large object heap. The large object heap is non-compacting, and remember, large objects are expensive to move around, so if you are allocating something on the large object heap, make sure that you really need it! The small object heap, on the other hand, is divided into generations: all objects that are supposed to be short-lived live in Gen-0, and the long-living objects eventually move to Gen-2 as garbage collection goes through. As you can see in the picture below, all objects under 85 KB are first assigned to Gen-0. When Gen-0 fills up and a new object comes in and finds Gen-0 full, the garbage collection process starts: it checks for all the dead objects, marks them as valid candidates for deletion to free up memory, and promotes all the remaining objects in Gen-0 to Gen-1. So in the future, whenever you clean up Gen-1 you have to clean up Gen-0 as well. When you fill up Gen-0 again, all of Gen-1's dead objects are collected and the rest are moved to Gen-2, while Gen-0 objects are moved to Gen-1 to free up Gen-0; but by this time your garbage collection process has started to take much more time than it usually takes. Now, as I mentioned earlier, when garbage collection is being run all page requests that come in during that period are queued up. Does this explain why page requests are possibly getting queued up? Apart from this, it could also be the case that you are waiting for a long-running database process to complete.

    Let's explore the heap a bit more... What is really a case of crisis is when objects live long enough to make it to Gen-2 and then die; this is definitely a high-cost operation. But sometimes you need objects in memory: for example, when you cache data you hold on to the objects because you need to use them right across the user session, which is acceptable. But if you want to see what extreme caching can do to your server, write a simple application that chucks a lot of data into cache and run a load test over it for about 10-15 minutes, forcing a lot of data into memory and causing the heap to run out of memory. If you get to such a state where you start running out of memory, IIS, as a mode of recovery, restarts the worker process. It is a great way to free up all your memory in the heap, but it also clears the cache. The problem with this is that if a customer had 10 items in their shopping basket and that data was stored in the application cache, the user's basket will now be empty, forcing them either to get frustrated and go to a competitor's website or, if the customer is really patient, to give it another try!
    How can you address this? Well, there are two ways:

    1. Workaround – An x86 processor only allows a maximum of 4 GB of RAM, which means the machine effectively has around 3.4 GB of RAM available. The OS needs about 1.5 GB of RAM to run efficiently, and IIS and the .NET framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because team builds by default build your application in 'Any CPU' mode, the application is built such that it will run in x86 mode if run on an x86 processor and in x64 mode if run on an x64 processor. The problem with this is that not all applications are really x64 compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compile your application in x86 mode by changing the 'Any CPU' selection to x86 in the team build, you will be able to run your application on an x64 machine in x86 mode (WOW – by running Windows on Windows), and what that means is you could use 8 GB+ worth of RAM. If you take away everything else, your application will roughly get a heap size of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB, you have either built software for NASA or there is something fundamentally wrong in your application.

    2. Solution – Now that you have put a workaround in place, IIS will not restart the worker process that regularly, which means you can take a breather and start working to get to the root cause of this memory leak. But this begs a question: "How do I identify possible memory leaks in my application?" Well, I won't say that there is one single tool that can tell you where the memory leak is, but trust me, performance profiling is a great starting point; it definitely gets you started in the right direction. Let's have a look at how.

    Performance Wizard - Start the Performance Wizard and select Instrumentation; this lets you measure function call counts and timings. Before running the performance session, right-click the performance session settings and choose Properties from the context menu to bring up the performance session properties page and, as shown in the screen shot below, check the check boxes in the group '.NET memory profiling collection', namely 'Collect .NET object allocation information' and 'Also collect the .NET Object lifetime information'.

    Now if you fire off the profiling session on your pages you will notice that the results allow you to view 'Object Lifetime', which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, the large object heap, etc. Another great feature of the profiler is that if your application has more than 5% of cases where objects die right after making it to Gen-2 storage, a threshold alert is generated to warn you. Since you also have the option to view the most expensive methods, and by capturing the IntelliTrace data you can drill in, you can narrow down to the line of code that is the root cause of the problem. Well, now we have seen how crucial memory management is and how easy Visual Studio Ultimate 2010 makes it for us to identify and reproduce the problem with best-of-breed tools in the product.

    Caching

    One of the main ways to improve performance is caching, which basically means you tell the web server that instead of going to the database for each request, you keep the data on the web server and serve it from there when the user asks for it. BUT that can have consequences!
    Let's look at some code; trust me, caching code is not very intuitive. I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time I get the data from the database, and the second time the data is served from the cache, a significant performance improvement, EXCEPT when two users try to do the same operation and run into each other. But it is easy to handle this by adding a lock, as you can see in the snippet below. So, as long as a user comes in and finds that the cache is empty, the user takes the lock and starts to build the cache; no more concurrency issues. But let's say you are processing 10 requests per second: by the time I have locked the operation to get the results from the database, 9 other users came in and found that the cache key is null, so after I have come out and populated the cache they will still go in to get the results again. The application will still be faster, because the next set of 10 users and so on will get data from the cache. BUT if we added another null check after taking the lock, before the actual call to the database, then the 9 users who follow me would not make the extra trip to the database at all, and that would really increase the performance. But didn't I say that the code won't be very intuitive? Maybe you should leave a comment, or another developer will come along, wonder why you are checking the cache key for null twice, and "fix" it. (A sketch of this double-check pattern follows at the end of this section.)

    The downside of caching is that you are storing the data outside of the database, and the cached data can be wrong: updates applied to the database will leave the data cached at the web server out of sync. So, how do you invalidate the cache? Well, if you only had one way of updating the data, say a single entry point for data updates, you could write some logic to set the cache object to null every time new data is entered. But this approach stops working as soon as you have several ways of feeding data into the system, or your system is scaled out across a farm of web servers. The practical solution to this is micro-caching, which means you cache the query results for a set time duration and invalidate the cache after that duration. The advantage is that every time the user queries for that data within the time span for which you have cached the results, no calls are made to the database and the data is served right from the server, which makes the response immensely quick. Figuring out the appropriate time span for which you micro-cache the query results really depends on the application. Let's say your website gets 10 requests per second: if you retain the cached results for even 1 minute you will see immense performance gains, reducing the search hits to the database by 90% or more. Ever wondered why, when you go to ebookers.com or expedia.com or yatra.com to book a flight and you click on the book button because the fare seems too exciting, you get an error message telling you that the fare is no longer valid? Yes, exactly: that is a cache failure! These travel sites and price-comparison engines are not going to hit the database every time you hit the compare button; instead the results are served from the cache, because the query results are micro-cached. It's a perfect trade-off: by micro-caching the results the site gains 100% of the performance benefits, but every once in a while annoys a customer because the fare has expired.
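
    A sketch of the double-check micro-caching pattern described above; the names here are illustrative, not the post's actual code:

        // hedged sketch: lock + second null check + 1-minute micro-cache
        // (assumes System.Web and System.Web.Caching are referenced)
        private static readonly object CacheLock = new object();

        public IList<SearchResult> GetResults(string cacheKey)
        {
            var results = HttpRuntime.Cache[cacheKey] as IList<SearchResult>;
            if (results == null)
            {
                lock (CacheLock)
                {
                    // second check: users queued on the lock skip the database trip
                    results = HttpRuntime.Cache[cacheKey] as IList<SearchResult>;
                    if (results == null)
                    {
                        results = QueryDatabase(cacheKey);      // hypothetical helper
                        HttpRuntime.Cache.Insert(cacheKey, results, null,
                            DateTime.UtcNow.AddMinutes(1),      // micro-cache window
                            System.Web.Caching.Cache.NoSlidingExpiration);
                    }
                }
            }
            return results;
        }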
    But the trade-off works in the favour of these sites, as they are still able to process 30+ page requests per second and cater to the site traffic, while maybe losing one customer every once in a while to a competitor who is probably using a similar caching technique; what are the odds that the user will not come back to their site sooner or later?

    Recap

    Resources

    Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug Web Performance Tests. Some community test extensions and plug-ins available on CodePlex might also be of interest to you.

    The Road Ahead

    Thank you for taking the time out to read this blog post; you may also want to read Part I and Part II if you haven't so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions/feedback/suggestions, etc., please leave a comment. Next up, 'Load Testing in the cloud': I'll be working on exploring the possibilities of running Test Controller/Agents in the Cloud. See you on the other side! Thank You!

  • A jQuery Plug-in to monitor Html Element CSS Changes

    - by Rick Strahl
    Here's a scenario I've run into on a few occasions: I need to be able to monitor certain CSS properties on an HTML element and know when that CSS element changes. The need for this arose out of wanting to build generic components that could 'attach' themselves to other objects and monitor changes on the 'parent' object so the dependent object can adjust itself accordingly. What I wanted to create is a jQuery plug-in that allows me to specify a list of CSS properties to monitor and have a function fire in response to any change to any of those CSS properties. The result is the .watch() and .unwatch() jQuery plug-ins. Here's a simple example page of this plug-in that demonstrates tracking changes to an element being moved with draggable and closable behavior:

    http://www.west-wind.com/WestWindWebToolkit/samples/Ajax/jQueryPluginSamples/WatcherPlugin.htm

    Try it with different browsers – IE and FireFox use the DOM event handlers, and Chrome, Safari and Opera use setInterval handlers to manage this behavior. It should work in all of them, but all but IE and FireFox will show a bit of lag between the changes in the main element and the shadow. The relevant HTML for this example is this fragment of a main <div> (#notebox) and an element that is to mimic a shadow (#shadow):

        <div class="containercontent">
            <div id="notebox" style="width: 200px; height: 150px; position: absolute;
                                     z-index: 20; padding: 20px; background-color: lightsteelblue;">
                Go ahead drag me around and close me!
            </div>
            <div id="shadow" style="background-color: Gray; z-index: 19;
                                    position: absolute; display: none;">
            </div>
        </div>

    The watcher plug-in is then applied to the main <div> and shadow in sync with the following plug-in code:

        <script type="text/javascript">
            $(document).ready(function () {
                var counter = 0;
                $("#notebox").watch("top,left,height,width,display,opacity",
                    function (data, i) {
                        var el = $(this);
                        var sh = $("#shadow");
                        var propChanged = data.props[i];
                        var valChanged = data.vals[i];
                        counter++;
                        showStatus("Prop: " + propChanged + " value: " + valChanged + " " + counter);

                        var pos = el.position();
                        var w = el.outerWidth();
                        var h = el.outerHeight();
                        sh.css({ width: w, height: h,
                                 left: pos.left + 5, top: pos.top + 5,
                                 display: el.css("display"),
                                 opacity: el.css("opacity") });
                    })
                    .draggable()
                    .closable()
                    .css("left", 10);
            });
        </script>

    When you run this page, as you drag the #notebox element the #shadow element will stay pinned underneath it, effectively keeping the shadow attached to the main element. Likewise, if you hide or fadeOut() the #notebox element the shadow will also go away; show the #notebox element and the shadow re-appears, because we are assigning the display property from the parent to the shadow. Note we're attaching the .watch() plug-in to the #notebox element and having it fire whenever the top, left, height, width, opacity or display CSS properties change. The passed data element contains a props[] and a vals[] array that hold the properties monitored and their current values. An index passed as the second parameter tells you which property has changed and what its current value is (propChanged/valChanged in the code above). The rest of the watcher handler code then deals with figuring out the main element's position and recalculating and setting the shadow's position using the jQuery .css() function. Note that this is just an example to demonstrate the watch() behavior; this is not the best way to create a shadow.
    If you're interested in a more efficient and cleaner way to handle shadows with a plug-in, check out the .shadow() plug-in in ww.jquery.js (code search for fn.shadow), which uses native CSS features when available but falls back to a tracked shadow element on browsers that don't support them; that fallback is how this watch() plug-in came about in the first place :-)

    How does it work?

    The plug-in works by letting the user specify a list of properties to monitor as a comma-delimited string and a handler function:

        el.watch("top,left,height,width,display,opacity", function (data, i) {}, 100, id)

    You can also specify an interval (used if DOM event monitoring isn't available in the browser) and an ID that identifies the event handler uniquely. The watch plug-in works by hooking up to DOMAttrModified in FireFox, to onPropertyChanged in Internet Explorer, or by using a timer with setInterval to handle the detection of changes for other browsers. Unfortunately WebKit doesn't support DOMAttrModified consistently at the moment, so Safari and Chrome currently have to use the slower setInterval mechanism.

    In response to a changed property (or a setInterval timer hit) a JavaScript handler is fired which then runs through all the properties monitored and determines if and which one has changed. The DOM events fire on all property/style changes, so the intermediate plug-in handler filters only those hits we're interested in. If one of our monitored properties has changed, the specified event handler function is called along with a data object and an index that identifies the property that's changed in the data.props/data.vals arrays. The jQuery plug-in to implement this functionality looks like this:

        (function ($) {
            $.fn.watch = function (props, func, interval, id) {
                /// <summary>
                /// Allows you to monitor changes in a specific
                /// CSS property of an element by polling the value.
                /// when the value changes a function is called.
                /// The function called is called in the context
                /// of the selected element (ie. this)
                /// </summary>
                /// <param name="prop" type="String">CSS Properties to watch sep. by commas</param>
                /// <param name="func" type="Function">
                /// Function called when the value has changed.
                /// </param>
                /// <param name="interval" type="Number">
                /// Optional interval for browsers that don't support DOMAttrModified or propertychange events.
                /// Determines the interval used for setInterval calls.
                /// </param>
                /// <param name="id" type="String">A unique ID that identifies this watch instance on this element</param>
                /// <returns type="jQuery" />
                if (!interval)
                    interval = 100;
                if (!id)
                    id = "_watcher";

                return this.each(function () {
                    var _t = this;
                    var el$ = $(this);
                    var fnc = function () { __watcher.call(_t, id); };
                    var data = {
                        id: id,
                        props: props.split(","),
                        vals: [props.split(",").length],
                        func: func,
                        fnc: fnc,
                        origProps: props,
                        interval: interval,
                        intervalId: null
                    };
                    // store initial props and values
                    $.each(data.props, function (i) { data.vals[i] = el$.css(data.props[i]); });

                    el$.data(id, data);
                    hookChange(el$, id, data);
                });

                function hookChange(el$, id, data) {
                    el$.each(function () {
                        var el = $(this);
                        if (typeof (el.get(0).onpropertychange) == "object")
                            el.bind("propertychange." + id, data.fnc);
                        else if ($.browser.mozilla)
                            el.bind("DOMAttrModified." + id, data.fnc);
                        else
                            data.intervalId = setInterval(data.fnc, interval);
                    });
                }

                function __watcher(id) {
                    var el$ = $(this);
                    var w = el$.data(id);
                    if (!w) return;
                    var _t = this;

                    if (!w.func)
                        return;

                    // must unbind or else unwanted recursion may occur
                    el$.unwatch(id);

                    var changed = false;
                    var i = 0;
                    for (i; i < w.props.length; i++) {
                        var newVal = el$.css(w.props[i]);
                        if (w.vals[i] != newVal) {
                            w.vals[i] = newVal;
                            changed = true;
                            break;
                        }
                    }
                    if (changed)
                        w.func.call(_t, w, i);

                    // rebind event
                    hookChange(el$, id, w);
                }
            }

            $.fn.unwatch = function (id) {
                this.each(function () {
                    var el = $(this);
                    var data = el.data(id);
                    try {
                        if (typeof (this.onpropertychange) == "object")
                            el.unbind("propertychange." + id, data.fnc);
                        else if ($.browser.mozilla)
                            el.unbind("DOMAttrModified." + id, data.fnc);
                        else
                            clearInterval(data.intervalId);
                    }
                    // ignore if element was already unbound
                    catch (e) { }
                });
                return this;
            }
        })(jQuery);

    Note that there's a corresponding .unwatch() plug-in that can be used to stop monitoring properties. The ID parameter is optional both on watch() and unwatch(); a standard name is used if you don't specify one, but it's a good idea to use unique names for each element watched to avoid overlap in event ids, especially if you're monitoring many elements. The syntax is:

        $.fn.watch = function(props, func, interval, id)

    props
        A comma-delimited list of CSS style properties that are to be watched for changes. If any of the specified properties changes, the function specified in the second parameter is fired.

    func
        The function fired in response to a changed style. Receives this as the element changed and an object parameter that represents the watched properties and their respective values. The first parameter is passed in this structure:

            { id: watcherId, props: [], vals: [], func: thisFunc, fnc: internalHandler, origProps: strPropertyListOnWatcher };

        A second parameter is the index of the changed property, so data.props[i] or data.vals[i] gets the property and changed value.

    interval
        The interval for setInterval() for those browsers that don't support property watching in the DOM. In milliseconds.

    id
        An optional id that identifies this watcher. Required only if multiple watchers might be hooked up to the same element. The default is _watcher if not specified.

    It's been a Journey

    I started building this plug-in about two years ago and had to make many modifications to it in response to changes in jQuery and also in browser behaviors. I think the latest round of changes should make this plug-in fairly future-proof going forward (although I hope there will be better cross-browser change event notifications in the future). One of the big problems I ran into had to do with recursive change notifications: it looks like starting with jQuery 1.4.4 and later, jQuery internally modifies element properties on some calls to some .css() property retrievals and things like outerHeight/Width(). In IE this would cause nasty lock-up issues at times. In response to this I changed the code to unbind the events when the handler function is called and then rebind when it exits. This also makes user code less prone to stack-overflow recursion, as you can actually change properties on the base element.
    It also means, though, that if you change one of the monitored properties in the handler, the watch() handler won't fire in response; you need to resort to a setTimeout() call instead to force the code to run outside of the handler:

        $("#notebox")
            .watch("top,left,height,width,display,opacity", function (data, i) {
                var el = $(this);
                …
                // this makes el changes work
                setTimeout(function () { el.css("top", 10); }, 10);
            });

    Since I've built this component I've had a lot of good uses for it. The .shadow() fallback functionality is one of them.

    Resources

    The watch() plug-in is part of ww.jquery.js and the West Wind Web Toolkit. You're free to use this code here or the code from the toolkit.

    West Wind Web Toolkit
    Latest version of ww.jquery.js (search for fn.watch)
    watch plug-in documentation

    © Rick Strahl, West Wind Technologies, 2005-2011
    Posted in ASP.NET, JavaScript, jQuery

  • NSClient++ FAIL on Windows 2008 R2 -- PDHCollector.cpp(215) Failed to query performance counters

    - by John DaCosta
    I am attempting to monitor Windows Server 2008 R2 x64 Enterprise with Nagios. When I test/install NSClient++ I get the following error:

        PDHCollector.cpp(215) Failed to query performance counters:
        \Processor(_total)\% Processor Time: PdhGetFormattedCounterValue failed:
        A counter with a negative denominator value was detected. (800007D6)

    Has anyone else encountered the same issue and/or resolved it, or found a workaround?
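
    A negative-denominator error from PDH usually points at corrupted performance counter registrations rather than at NSClient++ itself. A hedged first step is to rebuild the counter registry from the backup store and resynchronize the WMI performance data, then retest:

        rem run from an elevated command prompt on the monitored server
        lodctr /R
        winmgmt /resyncperf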

  • How to enable boot timeout in Grub2?

    - by Casey
    I am trying to enable the boot timeout selection in GRUB 2 on Ubuntu 9.10. I modified /etc/default/grub:

        GRUB_DEFAULT=0
        #GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=false
        GRUB_TIMEOUT=2
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT="splash quiet"
        GRUB_CMDLINE_LINUX=""

    and ran update-grub, but I still do not have a boot timeout counter. What else can you do to enable this?
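
    A hedged suggestion: on a single-OS install, GRUB 2 on 9.10 tends to hide the menu entirely; holding Shift during boot shows it, and the usual permanent fix is to comment out both hidden-timeout lines (not just the first), then re-run update-grub:

        GRUB_DEFAULT=0
        #GRUB_HIDDEN_TIMEOUT=0
        #GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=2

        # then apply the change:
        sudo update-grub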

  • ASPNET WMI class not available

    - by Nexus
    I need to extract the ASPNET\Requests Queued performance counter from some IIS servers via WMI. The WMI class for this sort of thing appears to be contained in Win32_PerfFormattedData_ASPNET_ASPNET. I've queried all available classes in root\cimv2 on my Win 2003/IIS6 servers, and it's not listed. It is, however, available on an unrelated Win2008/IIS7 box (which is interesting but doesn't really help me much) What gives? Why is this WMI class not available on my Windows 2003 servers?
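
    A hedged repair path: the Win32_PerfFormattedData_ASPNET_* classes only exist once the ASP.NET performance counters are registered and the WMI performance adapter has mirrored them into root\cimv2, so re-registering the counters on the Windows 2003 box and resyncing often restores the class:

        rem run in the .NET Framework directory that matches your app pools, e.g.:
        cd %windir%\Microsoft.NET\Framework\v2.0.50727
        lodctr aspnet_perf.ini
        winmgmt /resyncperf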

  • Snow Leopard on Core Duo

    - by Brendan Foote
    The first run of MacBook Pros have Core Duo processors, whereas all the ones after that have Core 2 Duos. Apple says Snow Leopard only requires an Intel processor, but will a first-gen MacBook Pro get enough of the improvements to be worth upgrading? This is similar to the question about Snow Leopard on an old iBook, but it differs because this processor is supported by Apple, but seems counter to the 64-bit theme of the upgrade.

  • Setting Timeouts: SQL Server 2008/IIS 7.5

    - by Julie
    We have recently migrated from a Win 2003/SQL Server 2000 system to Win 2008 64-bit R2, SQL Server 2008 R2. Our websites are in classic ASP, and this can't be changed to another scripting language at this time. On the old server, if I got stuck in some kind of endless loop, the page would throw an error. On the new server, I have a page with some sort of looping problem that pegs SQL Server and therefore locks all of our websites, even though the SQL stored procedure is called only once (and runs fine as a query on the server). I'll get my code figured out, no biggie, but I need to make sure the server times out when this happens. (The page I'm working on runs fine with certain instances of the query, and locks with others using a different query variable. I can't have something like that sneak up on me on a page I haven't touched for three years.) I can't figure out how an SP that runs once on the server, from an ASP page, is tying up SQL Server this way. It's obviously some sort of a timeout issue, but I can't figure out where, or which timeout values to change. I actually have to remote desktop to the server and kill the process in SQL Server. I'm afraid I'm a generalist, and server management is not my thing, even though it's my responsibility, so I am almost certain to have questions about any answer that I receive. How can I track this down? What settings do I need to change?

    More info: it's not SQL Server. On our test site, I created an ASP file that just did an endless loop (do while 1=1) and had the same problem (the other websites wouldn't load) without SQL Server being involved. So I think the reason the process was hanging is that the page wasn't timing out as it should, and so the connection to SQL was never closed. Killing the process in SQL Server would reset the page somehow. For my intentional endless loop, I had to refresh the app pool to get rid of it. This points more to either IIS or the ASP settings. The ASP timeouts are set to whatever the defaults were when the server was first loaded. I still can't figure out why one file is locking up all the websites, though. Again, that didn't happen on the old server.
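
    A hedged pointer: the setting that governs how long a classic ASP page may run before being terminated is the ASP script timeout (in IIS 7.5 it lives in the system.webServer/asp section as scriptTimeout, default 90 seconds). It can be inspected and capped with appcmd, for example:

        rem view the current ASP settings
        %windir%\system32\inetsrv\appcmd list config /section:asp

        rem cap classic ASP script execution at 90 seconds
        %windir%\system32\inetsrv\appcmd set config /section:asp /scriptTimeout:00:01:30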

  • How to set per user mail quota for postfix using policyd v2?

    - by ACHAL
    I have configured cluebringer 2.0.7, MySQL and httpd, and all services are running well. Now I want to set a per-user mail quota for outgoing mail, restricted to a fixed number of messages. I have tried to set up a quota for my host r10.4reseller.org, but it is not working.

        Quota List:
            Policy:   Default Outbound
            Name:     Default Outbound
            Track:    Sender:user@domain
            Period:   60
            Verdict:  REJECT
            Data:     (empty)
            Disabled: no

        Quota Limits:
            Type:          MessageCount
            Counter Limit: 1
            Disabled:      no

    Do I need to do any more settings for quota?

  • Configuring service restart with 'restart service after' parameter

    - by Tim Brigham
    It appears that sc.exe isn't capable of setting the 'restart service after' parameter, and PowerShell isn't capable of setting up service restarts at all. My intended configuration is failure1/restart, failure2/restart, failure3/nothing, with a five-minute counter between each restart. The five-minute timer is extremely important. Is there anything else I can look at, other than some registry hackery, to configure this?
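
    For what it's worth, sc.exe does appear to encode both pieces: the per-action delay is the millisecond value after each action, and the fail-count reset window is the reset= value in seconds. A hedged sketch matching the configuration described (the service name is a placeholder):

        rem restart after the 1st and 2nd failures with a 5-minute (300000 ms) delay,
        rem do nothing on the 3rd, and reset the failure counter after 300 seconds
        sc failure "MyService" reset= 300 actions= restart/300000/restart/300000//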

  • Partimage and autocheck problem when restoring Windows XP from image

    - by Xolstice
    I'm trying to create an image of Windows XP and clone it to several partitions on the same hard drive using Partimage. I run into a problem when I restore the image onto another partition: when I boot the OS from the partition I just restored, it brings up the message "autochk program not found - skipping autocheck" during the boot sequence, and then the OS reboots the PC and the whole process repeats itself in an infinite loop. Some Google searching suggests that this loop is caused by the partition being hidden or by a missing mountmgr.sys file; I checked my configuration and verified that neither is the case. I'm just wondering: Has anyone else experienced this, and is there a solution for it? Is this what happens when you try to restore an image to a different partition on the same hard disk, or is Partimage itself the problem? Should I be trying a different partition cloning tool?
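
    A hedged thing to check: on XP, "autochk program not found" at boot is also a classic symptom of the restored copy's boot.ini still pointing at the original partition, so the ARC path mounts the wrong volume as the system drive. Adjusting the partition() index for the restored copy may break the loop (entry below is illustrative, for a restore to the second partition):

        [operating systems]
        multi(0)disk(0)rdisk(0)partition(2)\WINDOWS="Windows XP Professional (restored)" /noexecute=optin /fastdetect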

  • File open fails initially when trying to open a file located on a win2k8 share but eventually can succeed

    - by Ruddy Douglass
    Core of the problem: I receive "(0x80070002) The system cannot find the file specified" for roughly 8 to 9 seconds before the file can be opened successfully.

    In a nutshell, we have two COM components. Component A calls into Component B and asks for a UNC filename to write to; the filename returned doesn't exist yet, but the path does. It then does its work, creates and populates the file, and tells Component B it's done by making another COM call. At that point Component B calls MoveFile to rename the file to its "official" name. This code has worked for (literally) years. It works fine on win2k3. It works fine when running on win2k8 and pointing to a share on a win2k3 server. It also runs fine if the share is actually located on the same win2k8 machine that the code is running on. But if you run it on win2k8 and point it to a share on a different win2k8 server, it fails. Both Component A and Component B live in their own Windows Service running under a domain admin account. The shares are configured for "Everyone/Full control" in all test environments, as are the underlying folders that the shares point to. All machines are in the same domain.

    During debugging I realized the file actually does exist by the time I check for it manually. After several iterations it occurred to me that the file doesn't "show up" until some delay passes, so I put the loop below into Component B:

        int nCounter = 0;
        while (true)
        {
            CFileStatus fs;
            if (CFile::GetStatus(tempname, fs))
                break;
            SleepEx(100, FALSE);
            nCounter++;
        }

    This code does, in fact, exit, and nCounter is generally between 80 and 90 iterations when it does, indicating the file "appears" approximately 8 to 9 seconds later. Once that loop exits, the code can successfully rename the file and all further processing appears to work. I put a CFile::GetStatus in Component A immediately before it calls into Component B, and that call succeeds: it can see the file and get its true size, yet the call into Component B made immediately afterwards cannot see the file until the above delay passes. I have verified the pathnames are precisely the same, even though they would clearly have to be for the calls to eventually succeed after a pause of 8 to 9 seconds. When something like this occurs I always assume there is a bug in my code until proven otherwise, but given that (a) this code has executed properly for a very long time and (other than my diagnostic loop) has not changed, and (b) it works in all environments except the win2k8-to-win2k8 share, I'm guessing there is some OS issue here that I do not understand. Any insight would be helpful. Thanks!
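
    A hedged lead worth checking: the win2k8-to-win2k8 case is exactly where SMB 2.x is negotiated, and the SMB2 client keeps short-lived caches of directory listings and not-found results, which can make a freshly created remote file invisible for several seconds to a process that asked about it just before creation. The cache lifetimes are registry-tunable on the client side (the machine running Component B); setting them to 0 disables them for testing:

        rem documented LanmanWorkstation parameters; values are seconds, 0 = off
        reg add HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters /v FileNotFoundCacheLifetime /t REG_DWORD /d 0 /f
        reg add HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters /v FileInfoCacheLifetime /t REG_DWORD /d 0 /f
        reg add HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters /v DirectoryCacheLifetime /t REG_DWORD /d 0 /f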

  • AutoHotKey - change mouse position and click a specific key every time window comes up

    - by John
    I'm a newbie to AHK, learning the basics. I'm using Notepad as an example: I want to hit the Cancel key every time I close Notepad and it asks if I want to Save/Don't Save/Cancel. I've come up with the script below and it works once, but then I have to reload the script to do it again. I want it to hit Cancel every time the question box comes up. I thought Loop would work, but I am doing something wrong. Any ideas why, anyone?

        Loop
        IfWinExist Notepad,
        {
            WinWait Notepad
            WinActivate
            Click 312, 109
            return
        }
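
    A hedged guess at the fix: the return ends the script's auto-execute section after the first pass, so the loop never runs again. Waiting for the dialog inside an endless loop, and blocking until it closes before waiting for the next one, behaves the way the question asks for. The window text and the button name are assumptions (on XP the prompt's third button is usually Cancel):

        Loop
        {
            WinWait, Notepad, Do you want to save          ; block until the save prompt appears
            WinActivate
            ControlClick, Button3, Notepad, Do you want to save   ; Button3 is usually Cancel
            WinWaitClose, Notepad, Do you want to save     ; don't re-click the same dialog
        }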

  • Redirect subdomain to hidden folder using mod_rewrite

    - by radious
    Hello! I want to keep my blog in the subfolder domain_com/htdocs/blog and access it as blog.domain.com. I can do this using Apache's mod_rewrite:

        RewriteEngine On
        RewriteBase /
        RewriteCond %{HTTP_HOST} ^blog\.domain\.com
        RewriteCond %{HTTP_HOST} !^/blog
        RewriteRule ^(.*)$ /blog/$1 [L]

    But I also want to redirect hxxp://domain.com/blog to hxxp://blog.domain.com (simply because I want to hide it from users). A simple redirection like:

        RewriteCond %{HTTP_HOST} ^wojtyniak\.com$
        RewriteRule %{REQUEST_URI} ^/foo
        RewriteRule ^(.*)$ http://foo.wojtyniak.com [L,R=301]

    causes a redirection loop. Is there any way to make such a redirection without the loop? Big thanks!

    PS. Sorry for the hxxp thing, but serverfault thinks these are links and doesn't allow me to post more than one.
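
    A hedged sketch of a loop-free version: key the external redirect on both the bare host and the /blog path (the second block above puts a variable in RewriteRule's match position, which mod_rewrite won't evaluate as intended). Host names follow the domain.com example from the question:

        # external redirect: only for domain.com/blog..., then stop
        RewriteCond %{HTTP_HOST} ^(www\.)?domain\.com$
        RewriteRule ^blog/?(.*)$ http://blog.domain.com/$1 [L,R=301]

        # internal mapping: only for the blog host, and never re-prefix /blog
        RewriteCond %{HTTP_HOST} ^blog\.domain\.com$
        RewriteCond %{REQUEST_URI} !^/blog
        RewriteRule ^(.*)$ /blog/$1 [L]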

  • Policyd quotas setup

    - by sasap
    I have installed and configured policyd 2.0.8 according to the INSTALL document in the installation directory. My web UI works fine, and no problems are reported in the logs. I then set up some quotas on the Default policy:

        Quota:
            Policy:   Default
            Name:     Sending quota
            Track:    SenderIP:/24
            Period:   3600
            Verdict:  REJECT
            Data:     (empty)
            Disabled: no

        Limits:
            Type:          Message count
            Counter limit: 2
            Disabled:      no

    But the problem is that I'm still able to send as many messages as I want. Perhaps I am missing something?

  • Windows XP Pro SP3 stop error code 0x000000F4

    - by Andrew Ford
    I have a system running Windows XP Pro SP3. My wife is the primary user of this system, and for the last several days, when she tries to log in, the system gives her a BSOD with stop error code 0x000000F4. The rest of the error codes change every time she tries to log on, but that one stays constant. The error it gives me is: "A process or thread crucial to system operation has unexpectedly exited or been terminated." The machine also gets stuck in a boot loop, where it keeps showing the screen saying that Windows did not start successfully and asking how to start Windows. I have tried Safe Mode, Last Known Good Configuration, and starting normally, and all have resulted in either a return to the boot loop or a BSOD. What might this mean? How can I fix it without doing a clean install?
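
    For what it's worth, stop 0x000000F4 (CRITICAL_OBJECT_TERMINATION) very often traces back to storage trouble: a failing drive or cable can kill a critical user-mode process such as csrss.exe during logon. A hedged first step before reinstalling is a surface scan from the XP Recovery Console:

        rem boot from the XP CD, press R for the Recovery Console, then:
        chkdsk /r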
