Search Results

Search found 11665 results on 467 pages for 'ajax push'.


  • [jquery] Different function for same class on 'click' / 'dblclick'

    - by Shishant
    Hello, these are my two functions. On a single click everything works fine, but on a double click both handlers execute. Any idea? I tried using live instead of delegate, but both functions still execute on dblclick.

        // Change status on click
        $(".todoBox").delegate("li", "click", function() {
            var id = $(this).attr("id");
            $.ajax({ /* ajax stuff */ });
            return false;
        });

        // Double click to delete
        $(".todoBox").delegate("li", "dblclick", function() {
            var id = $(this).attr("id");
            $.ajax({ /* ajax stuff */ });
            return false;
        });
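
    A common workaround (a sketch, not from the original post; the 250ms delay and handler bodies are illustrative) is to defer the single-click action with setTimeout and cancel it from the dblclick handler, since the browser always fires click before dblclick:

        var clickTimer = null;

        $(".todoBox").delegate("li", "click", function() {
            var id = $(this).attr("id");
            // defer the single-click work so a double click can cancel it
            clickTimer = setTimeout(function() {
                $.ajax({ /* status-change ajax here */ });
            }, 250); // window in which a second click counts as a double click
            return false;
        });

        $(".todoBox").delegate("li", "dblclick", function() {
            clearTimeout(clickTimer); // cancel the pending single-click action
            var id = $(this).attr("id");
            $.ajax({ /* delete ajax here */ });
            return false;
        });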

    Read the article

  • Google Analytics and Whos.amung.us in realtime visitors, why such an enormous discrepancy?

    - by jacouh
    For years I have used both Google Analytics and Whos.amung.us on the same site; both tracking scripts are inserted in the same pages in the tracked part of the site. In real-time visitors, why is there such an enormous discrepancy? For example, at the moment Google Analytics gives me 9 visitors while whos.amung.us indicates 59, a ratio of roughly six. Why is whos.amung.us six times more optimistic than Google Analytics in terms of real-time visitors? My questions are: does whos.amung.us fail to detect robots while Google does? Does GA ignore visitors from some countries while whos.amung.us does not? Do some robots/bots execute the whos.amung.us tracking JavaScript, while none can execute the tracking JavaScript provided by Google Analytics? To facilitate your analysis, I copy the JS code used below.

    Google Analytics:

        <script type="text/javascript">
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'MyGaAccountNo']);
        _gaq.push(['_trackPageview']);
        (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
        })();
        </script>

    Whos.amung.us:

        <script>
        var _wau = _wau || [];
        _wau.push(["tab", "MyWAUAccountNo", "c6x", "right-upper"]);
        (function() {
            var s = document.createElement("script");
            s.async = true;
            s.src = "http://widgets.amung.us/tab.js";
            document.getElementsByTagName("head")[0].appendChild(s);
        })();
        </script>

    I already signaled this to the WAU staff some time ago, with no reply. I haven't raised it with Google, as they don't handle this kind of feedback. Thank you for your explanations.

    Read the article

  • Metro: Promises

    - by Stephen.Walther
    The goal of this blog entry is to describe the Promise class in the WinJS library. You can use promises whenever you need to perform an asynchronous operation such as retrieving data from a remote website or a file from the file system. Promises are used extensively in the WinJS library.

    Asynchronous Programming

    Some code executes immediately; some code requires time to complete or might never complete at all. For example, retrieving the value of a local variable is an immediate operation, while retrieving data from a remote website takes longer or might not complete at all. When an operation might take a long time to complete, you should write your code so that it executes asynchronously: instead of waiting for the operation to complete, you start the operation and then do something else until you receive a signal that it is complete.

    An analogy: some telephone customer service lines require you to wait on hold – listening to really bad music – until a customer service representative is available. This is synchronous programming, and it wastes your time. Newer customer service lines let you enter your telephone number so that a representative can call you back when one becomes available. This approach wastes much less of your time because you can do useful things while waiting for the callback.

    There are several patterns that you can use to write code which executes asynchronously. The most popular pattern in JavaScript is the callback pattern. When you call a function which might take a long time to return a result, you pass a callback function to it. For example, the following code (which uses jQuery) includes a function named getFlickrPhotos which returns photos from the Flickr website matching a set of tags (such as "dog" and "funny"):

        function getFlickrPhotos(tags, callback) {
            $.getJSON(
                "http://api.flickr.com/services/feeds/photos_public.gne?jsoncallback=?",
                { tags: tags, tagmode: "all", format: "json" },
                function (data) {
                    if (callback) {
                        callback(data.items);
                    }
                }
            );
        }

        getFlickrPhotos("funny, dogs", function (data) {
            $.each(data, function (index, item) {
                console.log(item);
            });
        });

    The getFlickrPhotos() function includes a callback parameter. When you call getFlickrPhotos(), you pass a function to the callback parameter, and that function gets executed when getFlickrPhotos() finishes retrieving the list of photos from the Flickr web service. In the code above, the callback simply iterates through the results and writes each one to the console. Using callbacks is a natural way to perform asynchronous programming with JavaScript: instead of sitting there listening to really bad music while an operation completes, you get a callback when the operation is done.

    Using Promises

    The CommonJS website defines a promise like this (http://wiki.commonjs.org/wiki/Promises): "Promises provide a well-defined interface for interacting with an object that represents the result of an action that is performed asynchronously, and may or may not be finished at any given point in time. By utilizing a standard interface, different components can return promises for asynchronous actions and consumers can utilize the promises in a predictable manner."

    A promise provides a standard pattern for specifying callbacks. In the WinJS library, when you create a promise, you can specify three callbacks: a complete callback, a failure callback, and a progress callback.
    Promises are used extensively in the WinJS library. The methods in the animation library, the control library, and the binding library all use promises. For example, the xhr() method included in the WinJS base library returns a promise; it wraps calls to the standard XmlHttpRequest object in a promise. The following code illustrates how you can use the xhr() method to perform an Ajax request which retrieves a file named photos.txt:

        var options = { url: "/data/photos.txt" };
        WinJS.xhr(options).then(
            function (xmlHttpRequest) {
                console.log("success");
                var data = JSON.parse(xmlHttpRequest.responseText);
                console.log(data);
            },
            function (xmlHttpRequest) {
                console.log("fail");
            },
            function (xmlHttpRequest) {
                console.log("progress");
            }
        );

    The WinJS.xhr() method returns a promise. The Promise class includes a then() method which accepts three callback functions: a complete callback, an error callback, and a progress callback:

        Promise.then(completeCallback, errorCallback, progressCallback)

    In the code above, three anonymous functions are passed to the then() method. The three callbacks simply write a message to the JavaScript Console; the complete callback also dumps all of the data retrieved from the photos.txt file.

    Creating Promises

    You can create your own promises by creating a new instance of the Promise class. The constructor for the Promise class requires a function which accepts three parameters: a complete, an error, and a progress function parameter. For example, the code below illustrates how you can create a method named wait10Seconds() which returns a promise. The progress function is called every second, and the complete function is not called until 10 seconds have passed:

        (function () {
            "use strict";

            var app = WinJS.Application;

            function wait10Seconds() {
                return new WinJS.Promise(function (complete, error, progress) {
                    var seconds = 0;
                    var intervalId = window.setInterval(function () {
                        seconds++;
                        progress(seconds);
                        if (seconds > 9) {
                            window.clearInterval(intervalId);
                            complete();
                        }
                    }, 1000);
                });
            }

            app.onactivated = function (eventObject) {
                if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) {
                    wait10Seconds().then(
                        function () { console.log("complete") },
                        function () { console.log("error") },
                        function (seconds) { console.log("progress:" + seconds) }
                    );
                }
            }

            app.start();
        })();

    All of the work happens in the constructor function for the promise. The window.setInterval() method is used to execute code every second; each second, the progress() callback is called. Once 10 seconds have passed, the complete() callback is called and clearInterval() stops the timer. When you execute the code above, you can see the output in the Visual Studio JavaScript Console.

    Creating a Timeout Promise

    In the previous section, we created a custom promise which uses the window.setInterval() method to complete the promise after 10 seconds. We really did not need to create a custom promise, because the Promise class already includes a static method for returning promises which complete after a certain interval. The code below illustrates how you can use the timeout() method, which returns a promise that completes after a certain number of milliseconds:

        WinJS.Promise.timeout(3000).then(
            function () { console.log("complete") },
            function () { console.log("error") },
            function () { console.log("progress") }
        );

    In the code above, the promise completes after 3 seconds (3,000 milliseconds).
    The promise returned by the timeout() method does not support progress events, so the only message written to the console is "complete", after 3 seconds.

    Canceling Promises

    Some promises, but not all, support cancellation. When you cancel a promise, the promise's error callback is executed. For example, the following code uses the WinJS.xhr() method to perform an Ajax request; however, immediately after the Ajax request is made, it is cancelled:

        // Specify Ajax request options
        var options = { url: "/data/photos.txt" };

        // Make the Ajax request
        var request = WinJS.xhr(options).then(
            function (xmlHttpRequest) { console.log("success"); },
            function (xmlHttpRequest) { console.log("fail"); },
            function (xmlHttpRequest) { console.log("progress"); }
        );

        // Cancel the Ajax request
        request.cancel();

    When you run the code above, the message "fail" is written to the Visual Studio JavaScript Console.

    Composing Promises

    You can build promises out of other promises; in other words, you can compose promises. There are two static methods of the Promise class which you can use to compose promises: join() and any(). When you join promises, the composite promise completes when all of the joined promises complete. When you use the any() method, the composite promise completes when any one of the promises completes.

    The following code illustrates how to use the join() method. A new promise is created out of two timeout promises, and it does not complete until both of the timeout promises complete:

        WinJS.Promise.join([WinJS.Promise.timeout(1000), WinJS.Promise.timeout(5000)])
            .then(function () { console.log("complete"); });

    The message "complete" is not written to the JavaScript Console until both promises passed to the join() method complete, which takes 5 seconds (5,000 milliseconds).

    The any() method completes when any promise passed to it completes:

        WinJS.Promise.any([WinJS.Promise.timeout(1000), WinJS.Promise.timeout(5000)])
            .then(function () { console.log("complete"); });

    The code above writes the message "complete" to the JavaScript Console after 1 second (1,000 milliseconds) – immediately after the first promise completes and before the second promise completes.

    Summary

    The goal of this blog entry was to describe WinJS promises. First, we discussed how promises enable you to easily write code which performs asynchronous actions, and you learned how to use a promise when performing an Ajax request. Next, we discussed how you can create your own promises; you learned how to create a new promise by passing a constructor function with complete, error, and progress parameters. Finally, you learned about several advanced methods of promises: the timeout() method for creating promises which complete after an interval of time, canceling promises, and composing promises from other promises.

    Read the article

  • Referral traffic not appearing properly in Google Analytics

    - by Crashalot
    We have a partnership arrangement with another site where we pay them for users sent to us. However, they claim our referral numbers for them are lower than theirs by 50%. They are tracking clicks in Google Analytics (using events) while we are counting visits in Google Analytics. Are we doing something wrong with our Google Analytics installation?

        <!-- Google Analytics BEGIN -->
        <script type="text/javascript">
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-12345678-1']);
        _gaq.push(['_setDomainName', 'example.com']);
        _gaq.push(['_trackPageview']);
        (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
        })();
        </script>
        <!-- Google Analytics END -->

    Read the article

  • OpenVPN, Server 12.04, connect to machines in home LAN behind VPN server

    - by inexion
    Problem: I've set up a working OpenVPN server, and I'm able to connect to it from anywhere using my Mac laptop and Tunnelblick. When I connect, I'm assigned an IP address of 10.8.0.x (the server is 10.8.0.1), so I have no problem SSHing into it. Once SSHed in, I can even ping other machines on my home network (192.168.1.x).

    Desired outcome: When I connect to the VPN server, instead of getting a 10.8.0.x address I want a 192.168.1.x address on my home network. I can't figure out how to talk to the OTHER machines on my home network WITHOUT being SSHed into the VPN server; I'd like to just connect to my VPN server and then be a part of my home network.

    Attempted solutions: I've read that I need to set up routes and/or enable IP forwarding. I enabled IP forwarding using

        sudo sysctl -w net.ipv4.ip_forward=1

    and that doesn't seem to have done anything. I've also uncommented a line in the OpenVPN server.conf file:

        # Push routes to the client to allow it
        # to reach other private subnets behind
        # the server. Remember that these
        # private subnets will also need
        # to know to route the OpenVPN client
        # address pool (10.8.0.0/255.255.255.0)
        # back to the OpenVPN server.
        push "route 192.168.1.0 255.255.255.0"
        ;push "route 192.168.20.0 255.255.255.0"

    But still no luck – I still get a 10.8.0.x address. I've also read I may have to add routes to the router itself, but haven't tried that. Any help appreciated, thanks!
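
    For what it's worth, a hedged sketch rather than a confirmed fix: handing clients an address on the 192.168.1.x LAN itself is normally a bridged (tap) OpenVPN setup rather than the routed (tun) one above. The relevant server.conf directives would look something like this – the address pool is an assumption, and the host must already have a bridge (e.g. br0) containing its LAN interface:

        # bridged mode: clients get a LAN address instead of 10.8.0.x
        dev tap0
        # gateway, netmask, then a pool of LAN addresses reserved for VPN clients
        # (192.168.1.200-220 here is assumed free on the LAN)
        server-bridge 192.168.1.1 255.255.255.0 192.168.1.200 192.168.1.220

    Alternatively, keeping the routed 10.8.0.x setup, the pushed route plus IP forwarding should be enough once the LAN machines (or the router) know to route 10.8.0.0/24 back to the VPN host, exactly as the comment block in server.conf says.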

    Read the article

  • Analytics: Test events not showing up - how to troubleshoot?

    - by David Parks
    I've got three profiles: Master, Raw Data, and Test; on the Test profile I have no filters configured. I want to test using some events, so I created a local HTML file, shown below, to generate some test data that I could play with in Analytics. But the events never showed up in Analytics. I wonder what I might be doing wrong? Is the lack of a domain an issue, maybe?

        <html><head></head><body>Login_popup_complete_Facebook
        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-28554309-1']);
          _gaq.push(['_trackPageview']);
          _gaq.push(['_trackEvent', 'Login popup completed', 'Facebook']);

          (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
          })();
        </script>
        </body></html>
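
    One thing worth ruling out, offered as an assumption rather than a confirmed diagnosis: ga.js stores its visitor cookies against the page's domain, and a page opened from the local filesystem has no domain, so hits from a file:// URL can be dropped. The classic ga.js tweak for local testing is to disable domain-based cookies before the tracking calls:

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-28554309-1']);
        // don't tie the tracking cookies to a domain (needed for file:// pages)
        _gaq.push(['_setDomainName', 'none']);
        _gaq.push(['_trackPageview']);
        _gaq.push(['_trackEvent', 'Login popup completed', 'Facebook']);

    Note also that events can take several hours to appear in the standard (non-real-time) Analytics reports.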

    Read the article

  • Using SPServices & jQuery to Find My Stuff from Multi-Select Person/Group Field

    - by Mark Rackley
    Okay… quick blog post for all you SPServices fans out there. I needed to quickly write a script that would return all the tasks currently assigned to me. I also wanted it to return any task that was assigned to a group I belong to. This can actually be done with a CAML query, so no big deal, right? The rub is that the "Assigned To" field is a multi-select person or group field, and as far as I know (and I actually know so little) you cannot just write a CAML query to return this information. If you can, please leave a comment below and disregard the rest of this blog post…

    So… what's a hacker to do? As always, I break things down to their most simple components (I really love the KISS principle and would get it tattooed on my back if people wouldn't think it meant "Knights In Satan's Service". You really gotta be an old fart to get that reference). Here's what we're going to do:

    1. Get the currently logged-in user's name as it is stored in a person field
    2. Find all the SharePoint groups the current user belongs to
    3. Retrieve a set of assigned tasks from the task list and then find those that are assigned to the current user or a group the current user belongs to

    Nothing too hairy… so let's get started.

    Some caveats before I continue: there are some obvious performance implications with this solution, as I make a total of four SPServices calls and there's a lot of looping going on. Also, the CAML query in this blog has NOT been optimized. If you move forward with this code, tweak it so that it returns a further subset of data, or you will see horrible performance if you have a few hundred entries in your task list. Add a date range to the CAML or something – find some way to limit the results as much as possible. Lastly, if you DO have a better solution, I would like you to share. Iron sharpens iron and all…

    Alright, let's really get started.

    Get the currently logged-in user's name as it is stored in a person field

    First we need to understand how a person or group looks in the XML returned from a SharePoint Web Service call. It turns out it's stored like any other multi-select item in SharePoint, which is <id>;#<value>, and when you assign a person to that field the <value> equals the person's name ("Mark Rackley" in my case). This is for Windows Authentication; I would expect this to be different in FBA, but I'm not using FBA. If you want to know what it looks like with FBA, you can use the code in this blog and strategically place an alert to see the value. Anyway… I need to find the name of the currently logged-in user as it is stored in the person field. This turns out to be one SPServices call:

        var userName = $().SPServices.SPGetCurrentUser({
            fieldName: "Title",
            debug: false
        });

    As you can see, the "Title" field has the information we need. I suspect (although again, I haven't tried it) that the Title field also contains the user's name as we need it if I were using FBA. Okay… the last thing we need to do here is store our user's name in an array for processing later:

        myGroups = new Array();
        myGroups.push(userName);

    Find all the SharePoint groups the current user belongs to

    Now for the groups. How are groups returned in that XML stream? Same as the person: <ID>;#<Group Name>, and if it's a multi-select it's all returned in one big long string: "<ID>;#<Group Name>;#<ID>;#<Group Name>;#<ID>;#<Group Name>…". So, how do we find all the groups the current user belongs to?
    This is also a simple SPServices call. Using the "GetGroupCollectionFromUser" operation, we can find all the groups a user belongs to. So, let's execute this method and store all our groups:

        $().SPServices({
            operation: "GetGroupCollectionFromUser",
            userLoginName: $().SPServices.SPGetCurrentUser(),
            async: false,
            completefunc: function(xData, Status) {
                $(xData.responseXML).find("[nodeName=Group]").each(function() {
                    myGroups.push($(this).attr("Name"));
                });
            }
        });

    All we did in the above code was execute the "GetGroupCollectionFromUser" operation, look at each "Group" node (row), and store the name of each group in the array we put the user's name in previously (myGroups). Now we have an array that contains the current user's name, as it will appear in the person field XML, and all the groups the current user belongs to.

    The Rest

    Now comes the easy part for all of you familiar with SPServices. We are going to retrieve our tasks from the Task list using "GetListItems" and look at each entry to see if it belongs to this person. If it does, we store it for later processing. That code looks something like this:

        // get list of assigned tasks that aren't closed... *modify the CAML to perform better!*
        $().SPServices({
            operation: "GetListItems",
            async: false,
            listName: "Tasks",
            CAMLViewFields: "<ViewFields>" +
                "<FieldRef Name='AssignedTo' />" +
                "<FieldRef Name='Title' />" +
                "<FieldRef Name='StartDate' />" +
                "<FieldRef Name='EndDate' />" +
                "<FieldRef Name='Status' />" +
                "</ViewFields>",
            CAMLQuery: "<Query><Where><And><IsNotNull><FieldRef Name='AssignedTo'/></IsNotNull><Neq><FieldRef Name='Status'/><Value Type='Text'>Completed</Value></Neq></And></Where></Query>",
            completefunc: function (xData, Status) {
                var aDataSet = new Array();

                // loop through each returned Task
                $(xData.responseXML).find("[nodeName=z:row]").each(function() {
                    // store the multi-select string of who the task is assigned to
                    var assignedToString = $(this).attr("ows_AssignedTo");
                    found = false;

                    // loop through the person's name and all the groups they belong to
                    for (var i = 0; i < myGroups.length; i++) {
                        // if the person's name or group exists in the assigned-to string
                        // then the task is assigned to them
                        if (assignedToString.indexOf(myGroups[i]) >= 0) {
                            found = true;
                            break;
                        }
                    }

                    // if the task belongs to this person then store or display it
                    // (I'm storing it in an array)
                    if (found) {
                        var thisName = $(this).attr("ows_Title");
                        var thisStartDate = $(this).attr("ows_StartDate");
                        var thisEndDate = $(this).attr("ows_EndDate");
                        var thisStatus = $(this).attr("ows_Status");

                        var aDataRow = new Array(
                            thisName,
                            thisStartDate,
                            thisEndDate,
                            thisStatus);
                        aDataSet.push(aDataRow);
                    }
                });

                SomeFunctionToDisplayData(aDataSet);
            }
        });

    Some notes on why I did certain things, and additional caveats. You will notice in my code that I'm doing an assignedToString.indexOf(groupName) to see if the task belongs to the person. This could possibly return bad results if you have SharePoint group names that are named in such a way that the indexOf returns a false positive. For example, if you have a group called "My Users" and a group called "My Users – SuperUsers", then if a user belonged to "My Users" it would return a false positive on executing "My Users – SuperUsers".indexOf("My Users"). Make sense? Just be aware of this when naming groups; we don't have this problem. This is also where some fine-tuning can probably be done by those smarter than me: this is a pretty inefficient method to determine whether a task belongs to a user – I mean, what if a user belongs to 20 groups? That's a LOT of looping. See all the opportunities I give you guys to do something fun?

    Also, why am I storing my values in an array instead of just writing them out to a div? Well… I want to pass my data to a jQuery library to format it all nice and pretty, and an array is a great way to do that.

    When all is said and done and we put all the code together, it looks like:

        $(document).ready(function() {
            var userName = $().SPServices.SPGetCurrentUser({
                fieldName: "Title",
                debug: false
            });

            myGroups = new Array();
            myGroups.push(userName);

            $().SPServices({
                operation: "GetGroupCollectionFromUser",
                userLoginName: $().SPServices.SPGetCurrentUser(),
                async: false,
                completefunc: function(xData, Status) {
                    $(xData.responseXML).find("[nodeName=Group]").each(function() {
                        myGroups.push($(this).attr("Name"));
                    });

                    // get list of assigned tasks that aren't closed... *modify this CAML to perform better!*
                    $().SPServices({
                        operation: "GetListItems",
                        async: false,
                        listName: "Tasks",
                        CAMLViewFields: "<ViewFields>" +
                            "<FieldRef Name='AssignedTo' />" +
                            "<FieldRef Name='Title' />" +
                            "<FieldRef Name='StartDate' />" +
                            "<FieldRef Name='EndDate' />" +
                            "<FieldRef Name='Status' />" +
                            "</ViewFields>",
                        CAMLQuery: "<Query><Where><And><IsNotNull><FieldRef Name='AssignedTo'/></IsNotNull><Neq><FieldRef Name='Status'/><Value Type='Text'>Completed</Value></Neq></And></Where></Query>",
                        completefunc: function (xData, Status) {
                            var aDataSet = new Array();

                            // loop through each returned Task
                            $(xData.responseXML).find("[nodeName=z:row]").each(function() {
                                // store the multi-select string of who the task is assigned to
                                var assignedToString = $(this).attr("ows_AssignedTo");
                                found = false;

                                // loop through the person's name and all the groups they belong to
                                for (var i = 0; i < myGroups.length; i++) {
                                    // if the person's name or group exists in the assigned-to string
                                    // then the task is assigned to them
                                    if (assignedToString.indexOf(myGroups[i]) >= 0) {
                                        found = true;
                                        break;
                                    }
                                }

                                // if the task belongs to this person then store or display it
                                // (I'm storing it in an array)
                                if (found) {
                                    var thisName = $(this).attr("ows_Title");
                                    var thisStartDate = $(this).attr("ows_StartDate");
                                    var thisEndDate = $(this).attr("ows_EndDate");
                                    var thisStatus = $(this).attr("ows_Status");

                                    var aDataRow = new Array(
                                        thisName,
                                        thisStartDate,
                                        thisEndDate,
                                        thisStatus);
                                    aDataSet.push(aDataRow);
                                }
                            });

                            SomeFunctionToDisplayData(aDataSet);
                        }
                    });
                }
            });
        });

    Final Thoughts

    So, there you have it. Take it and run with it – make it something cool (and tell me how you did it). Another possible way to improve performance in this scenario is to use a DVWP to display the tasks, and use jQuery and the "myGroups" array from this blog post to hide all the rows that don't belong to the current user. I haven't tried it, but it does move some of the processing off to the server (generating the view), so it may perform better. As always, thanks for stopping by… hope you have a Merry Christmas…

    Read the article

  • Find an element in a JavaScript array

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/08/22/find-an-element-in-a-javascript-array.aspx

    I needed a C# Dictionary-like data structure in JavaScript and then a way to find an object in it by a key. I had forgotten how to do this, so I did some searching, talked to a colleague, and came up with this jsFiddle. See the code in my jsFiddle or below:

        var processingProgressTimeoutIds = [];

        var file = { name: 'test', timeId: 1 };
        var file2 = { name: 'test2', timeId: 2 };
        var file3 = { name: 'test3', timeId: 3 };

        processingProgressTimeoutIds.push({ name: file.name, timerId: file.timeId });
        processingProgressTimeoutIds.push({ name: file2.name, timerId: file2.timeId });
        processingProgressTimeoutIds.push({ name: file3.name, timerId: file3.timeId });

        console.log(JSON.stringify(processingProgressTimeoutIds));

        var keyName = 'test';
        var match = processingProgressTimeoutIds.filter(function (item) {
            return item.name === keyName;
        })[0];
        console.log(JSON.stringify(match));

        // returns true if any element matches, rather than the element itself
        var match2 = processingProgressTimeoutIds.some(function (element, index, array) {
            return element.name === keyName;
        });
        console.log(JSON.stringify(match2));

        // if you have the full object (note: this only works when the array holds
        // the same object reference, not a wrapper object built from it)
        var match3 = processingProgressTimeoutIds.indexOf(file);
        console.log(JSON.stringify(match3));

        // http://jsperf.com/array-find-equal – from Dave
        // indexOf is faster, but I need to find it by the key, so I can't use it here

        // ES6 will rock though – array comprehension! (also from Dave)
        // var ys = [x of xs if x == 3];
        // var y = ys[0];

    Here's a good blog post on array comprehension.
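
    As an aside not in the original post: on ES2015+ engines the same key lookup can be written with Array.prototype.find, which returns the first matching element (or undefined) without building an intermediate array:

        // ES2015+: first element whose name matches, or undefined
        var match = processingProgressTimeoutIds.find(function (item) {
            return item.name === keyName;
        });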

    Read the article

  • Google Analytics Visitors drop-off for certain region of site only

    - by crmpicco
    I have an issue with the tracking on my site: I have seen a dramatic drop-off of visitors from a certain region. I have four regions on my site at the moment – UK, EU, US and RoW (Rest of the World). The UK, EU and US regions are unaffected; only the RoW region suffers this drop-off. I have included a screen shot from my GA account which shows this effect. My GA code, which is included on every page on the site, is below (I have changed the UA account number intentionally for this example). There have been no changes made to the GA account or the tracking code in the live environment for some considerable time, but for some reason I am seeing the drop-off for this region only. In the code below I am not tracking page views on certain pages, as I have event tracking set up for those pages.

        <script type="text/javascript">
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-18721873-5']);
        _gaq.push(['_setCookiePath', '/row/']);

        if (typeof(p_page) != 'undefined') {
            // do nothing if user is on above pages
            // N.B. there are a series of conditions in this if statement checking
            // that we are not on a particular page
        } else {
            _gaq.push(['_trackPageview']);
        }
        </script>

    Read the article

  • Tracking Outgoing Links With Google Analytics Events

    - by the_archer
    I've been trying to track clicks on external links on my website using the events tracking method. I've got my Google Analytics code set up before body ends, as shown below (note: quotes had been turned into HTML entities by Blogger, but it works fine):

        <script type='text/javascript'>
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXXXX-X']);
        _gaq.push(['_trackPageview']);

        (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
        })();
        </script>

    Now I wanted to track a link on the addthis.com follow widget. So there is a link of the type below, to which, following instructions from here, I added the onclick event:

        <a addthis:url='http://feeds.feedburner.com/myfeedburnerlurl'
           onClick="_gaq.push(['_trackEvent', 'Subscription Clicks', 'RSS']);"
           class='addthis_button_rss_follow'/>

    I clicked on it a couple of times and have left it for over a day now, but nothing shows up in Google Analytics events – it just says zero events. Here's a screenshot of the events page on GA. Could anybody help me? Am I doing anything wrong?
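
    One frequent cause of zero-count outbound events, offered as a guess rather than a confirmed diagnosis: if the click immediately navigates away, the browser can cancel the _trackEvent request before it is sent. The commonly suggested ga.js pattern is to push the event, delay navigation briefly, and then follow the link yourself (the href and 150ms delay below are illustrative):

        <a href="http://example.com/feed" onclick="
            _gaq.push(['_trackEvent', 'Subscription Clicks', 'RSS']);
            var href = this.href;
            // give the tracking request ~150ms to fire before leaving the page
            setTimeout(function() { document.location = href; }, 150);
            return false;">Follow via RSS</a>

    That said, the AddThis tab widget may render its button in its own iframe, in which case the onClick on the anchor may never fire at all – worth verifying with a console.log first.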

    Read the article

  • Performing a Depth First Search iteratively using async/parallel processing?

    - by Prabhu
    Here is a method that does a DFS search and returns a list of all items, given a top-level item id. How could I modify this to take advantage of parallel processing? Currently, the call to get the sub-items is made one by one for each item in the stack. It would be nice if I could get the sub-items for multiple items in the stack at the same time and populate my return list faster. How could I do this (using async/await, the TPL, or anything else) in a thread-safe manner?

        private async Task<IList<Item>> GetItemsAsync(string topItemId)
        {
            var items = new List<Item>();
            var topItem = await GetItemAsync(topItemId);

            Stack<Item> stack = new Stack<Item>();
            stack.Push(topItem);
            while (stack.Count > 0)
            {
                var item = stack.Pop();
                items.Add(item);
                var subItems = await GetSubItemsAsync(item.SubId);
                foreach (var subItem in subItems)
                {
                    stack.Push(subItem);
                }
            }

            return items;
        }

    I was thinking of something along these lines, but it's not coming together:

        var tasks = stack.Select(async item =>
        {
            items.Add(item);
            var subItems = await GetSubItemsAsync(item.SubId);
            foreach (var subItem in subItems)
            {
                stack.Push(subItem);
            }
        }).ToList();

        if (tasks.Any()) await Task.WhenAll(tasks);

    The language I'm using is C#.
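
    One way to get the parallel fetches without sharing a stack across threads (a sketch, reusing the question's GetItemAsync/GetSubItemsAsync and Item type plus System.Linq; note it visits items breadth-first rather than in strict DFS order) is to expand one level at a time and await all of that level's fetches together:

        private async Task<IList<Item>> GetItemsAsync(string topItemId)
        {
            var items = new List<Item>();
            var root = await GetItemAsync(topItemId);
            var currentLevel = new List<Item> { root };

            while (currentLevel.Count > 0)
            {
                items.AddRange(currentLevel);

                // start every fetch for this level at once, then await them together;
                // no collection is mutated from multiple threads
                var fetches = currentLevel.Select(i => GetSubItemsAsync(i.SubId));
                var results = await Task.WhenAll(fetches);

                currentLevel = results.SelectMany(subItems => subItems).ToList();
            }

            return items;
        }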

    Read the article

  • Is Moving Entity Framework objects over a webservice really the best way?

    - by aceinthehole
    I've inherited a .NET project that has close to two thousand clients out in the field that need to push data periodically up to a central repository. The clients wake up and attempt to push the data up via a series of WCF web services, passing each Entity Framework entity as a parameter. Once the service receives such an object, it performs some business logic on the data and then turns around and sticks it in its own database, which mirrors the database on the client machines.

    The trick is that this data is being transmitted over a metered connection, which is very expensive, so optimizing the data is a serious priority. We are using a custom encoder that compresses the data (and decompresses it on the other end) while it is being transmitted, and this is reducing the data footprint. However, the amount of data that the clients are using seems ridiculously large, given the amount of information that is actually being transmitted. It seems to me that Entity Framework itself may be to blame: I suspect that the objects are very large when serialized to be sent over the wire, with a lot of context information and who knows what else, when what we really need is just the 'new' inserts.

    Is using Entity Framework and WCF services as we have done so far the correct way, architecturally, of approaching this n-tiered, asynchronous, push-only problem? Or is there a different approach that could optimize the data use?
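
    One commonly suggested alternative (a sketch, not from the original question; the Order fields are invented for illustration) is to map each entity to a slim data-transfer object before it crosses the wire, using [DataContract]/[DataMember] from System.Runtime.Serialization, so only the fields the central repository actually needs are serialized:

        using System;
        using System.Runtime.Serialization;

        // the DTO carries only what the server needs to perform the insert
        [DataContract]
        public class OrderDto
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public DateTime CreatedUtc { get; set; }
            [DataMember] public decimal Amount { get; set; }
        }

        // mapping at the client just before the WCF call
        static OrderDto ToDto(Order entity)
        {
            return new OrderDto
            {
                Id = entity.Id,
                CreatedUtc = entity.CreatedUtc,
                Amount = entity.Amount
            };
        }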

    Read the article

  • Making mercurial subrepositories behave like subversion externals

    - by Emily Dickinson
    Hi guys, the FAQ and hginit.com have been really useful for helping me make the transition from svn to hg. However, when it comes to using Hg's subrepository feature in the manner of Subversion's externals, I've tried everything and cannot replicate the nice behavior of svn externals. Here's the simplest example of what I want to do:

    1. Init a "lib" repository. This repository is never to be used standalone; it's always included by main repositories as a subrepository.
    2. Init one or more including repositories. To keep the example simple, I'll init a repository called "main".
    3. Have "main" include "lib" as a subrepository. Importantly – AND HERE'S WHAT I CAN'T GET TO WORK – when I modify a file inside of "main/lib" and push the modification, that change should get pushed to the "lib" repository, NOT to a copy inside of "main".

    Command lines speak louder than words. I've tried so many variations on this theme, but here's the gist. If someone can reply in command lines, I'll be forever grateful!

    1. Init the "lib" repository:

        $ cd /home/moi/hgrepos   # where I'm storing my hg repositories, on my main server
        $ hg init lib
        $ echo "foo" > lib/lib.txt
        $ hg add lib
        $ hg ci -A -m "Init lib" lib

    2. Init the "main" repository, and include "lib" as a subrepo:

        $ cd /home/moi/hgrepos
        $ hg init main
        $ echo "foo" > main/main.txt
        $ hg add main
        $ cd main
        $ hg clone ../lib lib
        $ echo "lib = lib" > .hgsub
        $ hg ci -A -m "Init main" .

    This all works fine, but when I make a clone of the "main" repository, make local modifications to files in "main/lib", and push them, the changes get pushed to "main/lib", NOT to "lib". IN COMMAND-LINE-ESE, THIS IS THE PROBLEM:

        $ cd /home/moi/hg-test
        $ hg clone ssh://[email protected]/hgrepos/lib lib
        $ hg clone ssh://[email protected]/hgrepos/main main
        $ cd main
        $ echo foo >> lib/lib.txt
        $ hg st
        M lib.txt
        $ hg com -m "Modified lib.txt, from inside the main repos" lib.txt
        $ hg push
        pushing to ssh://[email protected]/hgrepos/main/lib

    That last line of output from hg shows the problem: I've made a modification to a COPY of a file in lib, NOT to a file in the lib repository. If this were working as I'd like it to, the push would go to hgrepos/lib, NOT to hgrepos/main/lib. I.e., I would see:

        $ hg push
        pushing to ssh://[email protected]/hgrepos/lib

    IF YOU CAN ANSWER THIS IN TERMS OF COMMAND LINES RATHER THAN IN ENGLISH, I WILL BE ETERNALLY GRATEFUL! Thank you in advance!

    Emily in Portland
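
    In case it helps, and hedged as a sketch rather than a definitive answer: the left side of an .hgsub entry is the checkout path, and the right side is the source the subrepo pulls from and pushes to. Pointing the right side at the central lib repository (the URL is assumed to match the setup above) should make pushes from main/lib land in hgrepos/lib instead of main/lib:

        $ cd /home/moi/hgrepos/main
        $ echo "lib = ssh://[email protected]/hgrepos/lib" > .hgsub
        $ hg ci -m "Point lib subrepo at the central lib repository"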

    Read the article

  • Uploadify Hanging at random on 100%

    - by Matty
    I am using Uploadify (http://www.uploadify.com/) to let my users upload images via my web application. The problem I am having is that every now and then (at what appears to be random), when the progress bar reaches 100% it 'hangs' and does nothing. I was wondering if any developers familiar with Uploadify may have any idea how to solve this? I am in desperate need of some help. Here is my front-end code:

        jQuery(document).ready(function() {
            jQuery("#uploadify").uploadify({
                'uploader'  : 'javascripts/uploadify.swf',
                'script'    : 'upload-file2.php',
                'cancelImg' : 'css/images/cancel.png',
                'folder'    : 'uploads/personal_images/' + profileOwner,
                'queueID'   : 'fileQueue',
                'auto'      : true,
                'multi'     : true,
                'fileDesc'  : 'Image files',
                'fileExt'   : '.jpg;.jpeg;.gif;.png',
                'sizeLimit' : '2097152',
                'onComplete': function(event, queueID, fileObj, response, data) {
                    processPersonalImage(fileObj.name);
                    arrImgNames.push(fileObj.name);
                    showUploadedImages(true);
                    document.getElementById("photos").style.backgroundImage = "url('css/images/minicam.png')";
                },
                'onAllComplete' : function() {
                    completionMessage(arrFailedNames);
                    document.getElementById("displayImageButton").style.display = "inline";
                    document.getElementById("photos").style.backgroundImage = "url('css/images/minicam.png')";
                },
                'onCancel' : function() {
                    arrImgNames.push(fileObj.name);
                    arrFailedNames.push(fileObj.name);
                    showUploadedImages(false);
                },
                'onError' : function() {
                    arrImgNames.push(fileObj.name);
                    arrFailedNames.push(fileObj.name);
                    showUploadedImages(false);
                }
            });
        });

    And server side:

        if (!empty($_FILES)) {
            // Get user ID from the file path for use later..
            $userID = getIdFromFilePath($_REQUEST['folder'], 3);

            $row = mysql_fetch_assoc(getRecentAlbum($userID, "photo_album_personal"));
            $subFolderName = $row['pk'];

            // Prepare target path / file..
            $tempFile = $_FILES['Filedata']['tmp_name'];
            $targetPath = $_SERVER['DOCUMENT_ROOT'] . $_REQUEST['folder'] . '/' . $subFolderName . '/';
            $targetFile = str_replace('//', '/', $targetPath) . $_FILES['Filedata']['name'];

            // Move uploaded file from temp directory to new folder
            move_uploaded_file($tempFile, $targetFile);

            // Now add a record to DB to reflect this personal image..
            if (file_exists($targetFile)) {
                // add photo record to DB
                $directFilePath = $_REQUEST['folder'] . '/' . $subFolderName . '/' . $_FILES['Filedata']['name'];
                addPersonalPhotoRecordToDb($directFilePath, $row['pk']);
            }

            echo "1";
            die(true);
        }

    Thanks for any help!!

    Read the article

  • Printf in assembler doesn't print

    - by Gaim
    Hi there, I have a homework assignment to hack a program using a buffer overflow (with disassembling; the program was written in C++ and I don't have the source code). I have already managed it, but I have a problem: I have to print a message on the screen, so I found the address of the printf function, pushed the address of "HACKED" and the address of "%s" onto the stack (in this order) and called that function. The injected code executes fine, but nothing is printed. I have tried to simulate the environment as it is elsewhere in the program, but something must be wrong. Do you have any idea what I am doing wrong that produces no output? Thanks a lot.

    EDIT: This program is running on Windows XP SP3 32-bit, written in C++, Intel asm. Here is the "hack" code:

        CPU Disasm
        Address    Hex dump        Command                    Comments
        0012F9A3   90              NOP                        ; hack begins
        0012F9A4   90              NOP
        0012F9A5   90              NOP
        0012F9A6   89E5            MOV EBP,ESP
        0012F9A8   83EC 7F         SUB ESP,7F                 ; create a place for working data
        0012F9AB   83EC 7F         SUB ESP,7F
        0012F9AE   31C0            XOR EAX,EAX
        0012F9B0   50              PUSH EAX
        0012F9B1   50              PUSH EAX
        0012F9B2   50              PUSH EAX
        0012F9B3   89E8            MOV EAX,EBP
        0012F9B5   83E8 09         SUB EAX,9
        0012F9B8   BA 1406EDFF     MOV EDX,FFED0614           ; address to jump to; negative because there mustn't be 00 bytes
        0012F9BD   F7DA            NOT EDX
        0012F9BF   FFE2            JMP EDX                    ; jump over values overwritten by the program
        0012F9C1   90              NOP
        ; 0012F9C2-0012F9ED: embedded string data, shown by the disassembler as
        ; bogus instructions (INS/OUTS/IMUL etc.), followed by NOP padding
        0012F9EE   31DB            XOR EBX,EBX                ; hack continues
        0012F9F0   8818            MOV BYTE PTR DS:[EAX],BL   ; write a 00 terminator after the word "HACKED"
        0012F9F2   83E8 06         SUB EAX,6
        0012F9F5   50              PUSH EAX                   ; address of "HACKED"
        0012F9F6   B8 3B8CBEFF     MOV EAX,FFBE8C3B
        0012F9FB   F7D0            NOT EAX
        0012F9FD   50              PUSH EAX                   ; address of "%s"
        0012F9FE   B8 FFE4BFFF     MOV EAX,FFBFE4FF
        0012FA03   F7D0            NOT EAX
        0012FA05   FFD0            CALL EAX                   ; address of printf
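
    For reference, a bare-bones cdecl printf call looks like the sketch below (MASM-style syntax; the fmt and msg labels are invented). The format string is pushed last, so it ends up on top of the stack, and the caller removes the arguments afterwards. If a call like this still prints nothing, one usual suspect is the C runtime buffering stdout; printing a string that ends in \n, or calling fflush, forces the output out:

        push offset msg     ; second argument: the string for %s
        push offset fmt     ; first argument: "%s" (pushed last, so on top)
        call printf
        add  esp, 8         ; cdecl: the caller pops the 8 bytes of arguments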

    Read the article

  • Go - Using a container/heap to implement a priority queue

    - by Seth Hoenig
    In the big picture, I'm trying to implement Dijkstra's algorithm using a priority queue. According to members of golang-nuts, the idiomatic way to do this in Go is to use the heap interface with a custom underlying data structure. So I have created Node.go and PQueue.go like so:

        // Node.go
        package pqueue

        type Node struct {
            row    int
            col    int
            myVal  int
            sumVal int
        }

        func (n *Node) Init(r, c, mv, sv int) {
            n.row = r
            n.col = c
            n.myVal = mv
            n.sumVal = sv
        }

        func (n *Node) Equals(o *Node) bool {
            return n.row == o.row && n.col == o.col
        }

    And PQueue.go:

        // PQueue.go
        package pqueue

        import "container/vector"
        import "container/heap"

        type PQueue struct {
            data vector.Vector
            size int
        }

        func (pq *PQueue) Init() { heap.Init(pq) }

        func (pq *PQueue) IsEmpty() bool { return pq.size == 0 }

        func (pq *PQueue) Push(i interface{}) {
            heap.Push(pq, i)
            pq.size++
        }

        func (pq *PQueue) Pop() interface{} {
            pq.size--
            return heap.Pop(pq)
        }

        func (pq *PQueue) Len() int { return pq.size }

        func (pq *PQueue) Less(i, j int) bool {
            I := pq.data.At(i).(Node)
            J := pq.data.At(j).(Node)
            return (I.sumVal + I.myVal) < (J.sumVal + J.myVal)
        }

        func (pq *PQueue) Swap(i, j int) {
            temp := pq.data.At(i).(Node)
            pq.data.Set(i, pq.data.At(j).(Node))
            pq.data.Set(j, temp)
        }

    And main.go (the action is in SolveMatrix):

        // Euler 81
        package main

        import "fmt"
        import "io/ioutil"
        import "strings"
        import "strconv"
        import "./pqueue"

        const MATSIZE = 5
        const MATNAME = "matrix_small.txt"

        func main() {
            var matrix [MATSIZE][MATSIZE]int
            contents, err := ioutil.ReadFile(MATNAME)
            if err != nil {
                panic("FILE IO ERROR!")
            }
            inFileStr := string(contents)
            byrows := strings.Split(inFileStr, "\n", -1)

            for row := 0; row < MATSIZE; row++ {
                byrows[row] = (byrows[row])[0 : len(byrows[row])-1]
                bycols := strings.Split(byrows[row], ",", -1)
                for col := 0; col < MATSIZE; col++ {
                    matrix[row][col], _ = strconv.Atoi(bycols[col])
                }
            }

            PrintMatrix(matrix)
            sum, len := SolveMatrix(matrix)
            fmt.Printf("len: %d, sum: %d\n", len, sum)
        }

        func PrintMatrix(mat [MATSIZE][MATSIZE]int) {
            for r := 0; r < MATSIZE; r++ {
                for c := 0; c < MATSIZE; c++ {
                    fmt.Printf("%d ", mat[r][c])
                }
                fmt.Print("\n")
            }
        }

        func SolveMatrix(mat [MATSIZE][MATSIZE]int) (int, int) {
            var PQ pqueue.PQueue
            var firstNode pqueue.Node
            var endNode pqueue.Node
            msm1 := MATSIZE - 1

            firstNode.Init(0, 0, mat[0][0], 0)
            endNode.Init(msm1, msm1, mat[msm1][msm1], 0)

            if PQ.IsEmpty() { // make compiler stfu about unused variable
                fmt.Print("empty")
            }

            PQ.Push(firstNode) // problem

            return 0, 0
        }

    The problem is, upon compiling I get this error message:

        [~/Code/Euler/81] $ make
        6g -o pqueue.6 Node.go PQueue.go
        6g main.go
        main.go:58: implicit assignment of unexported field 'row' of pqueue.Node in function argument
        make: *** [all] Error 1

    Commenting out the line PQ.Push(firstNode) does satisfy the compiler. But I don't understand why I'm getting the error message in the first place – Push doesn't modify the argument in any way.
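
    For what it's worth, one reading of that error (my observation, not from the original thread): PQ.Push(firstNode) passes the Node by value, and copying a struct with unexported fields from outside the pqueue package counts as the "implicit assignment" the compiler forbids. Pushing a pointer avoids the copy:

        // no struct copy crosses the package boundary with a pointer;
        // Less and Swap would then type-assert *pqueue.Node instead of Node
        PQ.Push(&firstNode)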

    Read the article

  • Messing with the stack in assembly and c++

    - by user246100
    I want to do the following: I have a function that is not mine (it really doesn't matter here, just to say I don't have control over it) that I want to patch so that it calls a function of mine, preserving the argument list (jumping is not an option). What I'm trying to do is put the stack pointer as it was before that function was called, and then call mine (like going back and doing the same thing again, but with a different function). This doesn't work straight away because the stack becomes messed up; I believe the call replaces the return address. So I added a step that preserves the return address by saving it in a global variable, and that works, but it's not acceptable because I want the patch to survive recursion, and you know what I mean. Anyway, I'm a newbie in assembly, so that's why I'm here. Please don't point me to ready-made patching software; I want to do things my way. Of course, this code has to be compiler- and optimization-independent. My code (if it is bigger than what is acceptable, please tell me how to post it):

        // A function that is not mine but to which I have access and want to patch
        // so that it calls a function of mine with its original arguments
        void real(int a, int b, int c, int d)
        {
        }

        // A function that I want to be called, receiving the original arguments
        void receiver(int a, int b, int c, int d)
        {
            printf("Arguments %d %d %d %d\n", a, b, c, d);
        }

        long helper;

        // A patch to apply to the "real" function, from which I will call "receiver"
        // with the same arguments that "real" received
        __declspec(naked) void patch()
        {
            _asm {
                // These first two instructions save the return address in a global
                // variable; if I don't save and restore it, the program won't work
                // correctly. I want to do this without a global variable.
                mov eax, [ebp+4]
                mov helper, eax

                push ebp
                mov ebp, esp

                // Make the stack as it was before the real function was called
                add esp, 8

                // Call our receiver
                call receiver

                mov esp, ebp
                pop ebp

                // Restore the previously saved return address
                mov eax, helper
                mov [ebp+4], eax

                ret
            }
        }

        int _tmain(int argc, _TCHAR* argv[])
        {
            FlushInstructionCache(GetCurrentProcess(), &real, 5);

            DWORD oldProtection;
            VirtualProtect(&real, 5, PAGE_EXECUTE_READWRITE, &oldProtection);

            // Patch the real function to jump to my patch
            ((unsigned char*)real)[0] = 0xE9;
            *((long*)((long)(real) + sizeof(unsigned char))) = (char*)patch - (char*)real - 5;

            // Call the real function. (I'm calling it with inline assembly because
            // otherwise it seems to work as if it were unpatched; that is strange
            // but irrelevant here.)
            _asm {
                push 666
                push 1337
                push 69
                push 100
                call real
                add esp, 16
            }

            return 0;
        }

    Read the article

  • C#/.NET Little Wonders: The Concurrent Collections (1 of 3)

    - by James Michael Hare
    Once again we consider some of the lesser known classes and keywords of C#. In the next few weeks, we will discuss the concurrent collections and how they have changed the face of concurrent programming. This week's post begins with a general introduction and discusses the ConcurrentStack<T> and ConcurrentQueue<T>. The following post will discuss the ConcurrentDictionary<TKey,TValue> and ConcurrentBag<T>, and we shall close in the third post with a discussion of the BlockingCollection<T>. For more of the "Little Wonders" posts, see the index here.

    A brief history of collections

    In the beginning was the .NET 1.0 Framework. And out of this framework emerged the System.Collections namespace, and it was good. It contained all the basic things a growing programming language needs, like the ArrayList and Hashtable collections. The main problem with these original collections, of course, is that they held items of type object, which means you had to be disciplined enough to use them correctly or you could end up with runtime errors if you got an object of a type you weren't expecting.

    Then came .NET 2.0 and generics, and our world changed forever! With generics the C# language finally got an equivalent of the very powerful C++ templates. As such, System.Collections.Generic was born, and we got type-safe versions of all our favorite collections: the List<T> succeeded the ArrayList, the Dictionary<TKey,TValue> succeeded the Hashtable, and so on. The new versions of the library were not only safer because they checked types at compile time; in many cases they were more performant as well. So much so that it's Microsoft's recommendation that the original System.Collections collections be used only for backwards compatibility.

    So we as developers came to know and love the generic collections and took them into our hearts and embraced them. The problem is, thread safety in both the original collections and the generic collections can be problematic, for very different reasons.

    Now, if you are only doing single-threaded development you may not care – after all, no locking is required. Even if you do have multiple threads, if a collection is "load-once, read-many" you don't need to do anything to protect that container from multi-threaded access, as illustrated below:

        public static class OrderTypeTranslator
        {
            // because this dictionary is loaded once before it is ever accessed,
            // we don't need to synchronize multi-threaded read access
            private static readonly Dictionary<string, char> _translator = new Dictionary<string, char>
            {
                {"New", 'N'},
                {"Update", 'U'},
                {"Cancel", 'X'}
            };

            // the only public interface into the dictionary is for reading,
            // so it is inherently thread-safe
            public static char? Translate(string orderType)
            {
                char charValue;
                if (_translator.TryGetValue(orderType, out charValue))
                {
                    return charValue;
                }

                return null;
            }
        }

    Unfortunately, most of our computer science problems cannot get by with just single-threaded applications or with multi-threading in a load-once manner. Looking at today's trends, it's clear that computers are not so much getting faster because of faster processor speeds – we've nearly reached the limits we can push through with today's technologies – but more because we're adding more cores to the boxes.
    With this new hardware paradigm, it is even more important to use multi-threaded applications to take full advantage of parallel processing and achieve higher application speeds. So let's look at how to use collections in a thread-safe manner.

    Using historical collections in a concurrent fashion

    The early .NET collections (System.Collections) had a Synchronized() static method that could be used to wrap them and make them completely thread-safe. This paradigm was dropped in the generic collections (System.Collections.Generic) because a synchronized wrapper results in atomic locks for all operations, which can prove overkill in many multithreading situations. Thus the paradigm shifted to having the user of the collection specify their own locking, usually with an external object:

        public class OrderAggregator
        {
            private static readonly Dictionary<string, List<Order>> _orders = new Dictionary<string, List<Order>>();
            private static readonly object _orderLock = new object();

            public void Add(string accountNumber, Order newOrder)
            {
                List<Order> ordersForAccount;

                // a complex operation like this should all be protected
                lock (_orderLock)
                {
                    if (!_orders.TryGetValue(accountNumber, out ordersForAccount))
                    {
                        _orders.Add(accountNumber, ordersForAccount = new List<Order>());
                    }

                    ordersForAccount.Add(newOrder);
                }
            }
        }

    Notice how we're performing several operations on the dictionary under one lock. With the Synchronized() static methods of the early collections, you wouldn't be able to specify this macro level of locking. So in the generic collections, it was decided that users needing synchronization could implement their own locking scheme, providing exactly as much synchronization as needed.

    The need for better concurrent access to collections

    Here's the problem: it's relatively easy to write a collection that locks itself down completely for every access, but anything more complex than that can be difficult and error-prone to write, much less to make perform efficiently! For example, what if you have a Dictionary that has frequent reads but infrequent updates? Do you want to lock down the entire Dictionary for every access? That would be overkill and would prevent concurrent reads. In such cases you could use something like a ReaderWriterLockSlim, which allows multiple readers in a lock; once a writer grabs the lock, it blocks all further readers until the writer is done (in a nutshell). This is all very complex stuff to consider (see the sketch at the end of this section).

    Fortunately, this is where the concurrent collections come in. The Parallel Computing Platform team at Microsoft went through great pains to determine how to make a set of concurrent collections with the best performance characteristics for general-case multi-threaded use.

    Now, as in all things involving threading, you should always evaluate all your container options based on the particular usage scenario and the degree of parallelism you wish to achieve. This article should not be taken to mean that these collections are always superior to the generic collections. Each fills a particular need for a particular situation. Understanding what each container is optimized for is key to the success of your application, whether it be single-threaded or multi-threaded.
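
    As a side note, a minimal sketch of the ReaderWriterLockSlim pattern mentioned above (the cache fields and types are invented for illustration):

        // many readers may hold the read lock at once; a writer waits for
        // exclusive access and blocks new readers until it finishes
        private static readonly ReaderWriterLockSlim _cacheLock = new ReaderWriterLockSlim();
        private static readonly Dictionary<string, decimal> _priceCache = new Dictionary<string, decimal>();

        public static bool TryGetPrice(string symbol, out decimal price)
        {
            _cacheLock.EnterReadLock();
            try
            {
                return _priceCache.TryGetValue(symbol, out price);
            }
            finally
            {
                _cacheLock.ExitReadLock();
            }
        }

        public static void SetPrice(string symbol, decimal price)
        {
            _cacheLock.EnterWriteLock();
            try
            {
                _priceCache[symbol] = price;
            }
            finally
            {
                _cacheLock.ExitWriteLock();
            }
        }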
General points to consider with the concurrent collections

The MSDN documentation points out that the concurrent collections all support the ICollection interface. However, since the collections are already synchronized, the IsSynchronized property always returns false, and SyncRoot always returns null. Thus you should not attempt to use these properties for synchronization purposes.

Note also that the concurrent collections may have different operations than the traditional data structures you are used to. You may ask why, but it was done out of necessity to keep operations safe and atomic. For example, in order to do a Pop() on a stack you have to know the stack is non-empty, but between the time you check the stack’s IsEmpty property and then do the Pop() another thread may have come in and made the stack empty! This is why some of the traditional operations have been changed to make them safe for concurrent use.

In addition, some properties and methods in the concurrent collections achieve concurrency by creating a snapshot of the collection, which means that some operations that were traditionally O(1) may now be O(n) in the concurrent models. I’ll try to point these out as we talk about each collection so you can be aware of any potential performance impacts. Finally, all the concurrent containers are safe for enumeration even while being modified, but some of the containers support this in different ways (snapshot vs. dirty iteration). Once again I’ll highlight how thread-safe enumeration works for each collection.

ConcurrentStack<T>: The thread-safe LIFO container

The ConcurrentStack<T> is the thread-safe counterpart to the System.Collections.Generic.Stack<T>, which as you may remember is your standard last-in-first-out container. If you think of algorithms that favor stack usage (for example, depth-first searches of graphs and trees) then you can see how using a thread-safe stack would be of benefit.

The ConcurrentStack<T> achieves thread-safe access by using System.Threading.Interlocked operations. This means that multi-threaded access to the stack requires no traditional locking and is very, very fast!

For the most part, the ConcurrentStack<T> behaves like its Stack<T> counterpart with a few differences:

- Pop() was removed in favor of TryPop(), which returns true if an item existed and was popped, and false if empty.
- PushRange() and TryPopRange() were added, which allow you to push multiple items and pop multiple items atomically.
- Count takes a snapshot of the stack and then counts the items. This means it is an O(n) operation; if you just want to check for an empty stack, call IsEmpty instead, which is O(1).
- ToArray() and GetEnumerator() both also take snapshots. This means that iteration over a stack will give you a static view at the time of the call and will not reflect updates.

Pushing on a ConcurrentStack<T> works just like you’d expect, except for the aforementioned PushRange() method that was added to allow you to push a range of items atomically:

var stack = new ConcurrentStack<string>();

// adding to stack is much the same as before
stack.Push("First");

// but you can also push multiple items in one atomic operation (no interleaves)
stack.PushRange(new [] { "Second", "Third", "Fourth" });

For looking at the top item of the stack (without removing it), the Peek() method has been removed in favor of a TryPeek().
This is because in order to do a peek the stack must be non-empty, but between the time you check for empty and the time you execute the peek the stack contents may have changed. Thus TryPeek() was created to be an atomic check for empty, and then peek if not empty:

// to look at top item of stack without removing it, can use TryPeek.
// Note that there is no Peek(), this is because you need to check for empty first. TryPeek does.
string item;
if (stack.TryPeek(out item))
{
    Console.WriteLine("Top item was " + item);
}
else
{
    Console.WriteLine("Stack was empty.");
}

Finally, to remove items from the stack, we have TryPop() for single items and TryPopRange() for multiple items. Just like TryPeek(), these operations replace Pop() since we need to ensure atomically that the stack is non-empty before we pop from it:

// to remove items, use TryPop or TryPopRange to get multiple items atomically (no interleaves)
if (stack.TryPop(out item))
{
    Console.WriteLine("Popped " + item);
}

// TryPopRange will only pop up to the number of spaces in the array, the actual number popped is returned.
var poppedItems = new string[2];
int numPopped = stack.TryPopRange(poppedItems);

foreach (var theItem in poppedItems.Take(numPopped))
{
    Console.WriteLine("Popped " + theItem);
}

Finally, note that as stated before, GetEnumerator() and ToArray() get a snapshot of the data at the time of the call, so enumerating the stack gives you a view of it as it was when the enumerator was requested. This is illustrated below:

var stack = new ConcurrentStack<string>();

// adding to stack is much the same as before
stack.Push("First");

var results = stack.GetEnumerator();

// but you can also push multiple items in one atomic operation (no interleaves)
stack.PushRange(new [] { "Second", "Third", "Fourth" });

while (results.MoveNext())
{
    Console.WriteLine("Stack only has: " + results.Current);
}

The only item that will be printed out in the above code is "First" because the snapshot was taken before the other items were added. This may sound like an issue, but it’s really for safety and is more correct. You don’t want to enumerate a stack and have half a view of the stack before an update and half a view of the stack after an update, after all. In addition, note that this is still thread-safe, whereas modifying one of the old, non-concurrent collections while iterating over it would cause an exception.

ConcurrentQueue<T>: The thread-safe FIFO container

The ConcurrentQueue<T> is the thread-safe counterpart of the System.Collections.Generic.Queue<T> class. The concurrent queue uses an underlying list of small arrays and lock-free System.Threading.Interlocked operations on the head and tail arrays. Once again, this allows us to do thread-safe operations without the need for heavy locks!

The ConcurrentQueue<T> (like the ConcurrentStack<T>) has some departures from its non-concurrent counterpart. Most notably:

- Dequeue() was removed in favor of TryDequeue(), which returns true if an item existed and was dequeued, and false if empty.
- Count does not take a snapshot; it subtracts the head and tail index to get the count. This results overall in an O(1) complexity, which is quite good. It’s still recommended, however, that for empty checks you call IsEmpty instead of comparing Count to zero.
- ToArray() and GetEnumerator() both take snapshots.
This means that iteration over a queue will give you a static view at the time of the call and will not reflect updates.

The Enqueue() method on the ConcurrentQueue<T> works much the same as on the generic Queue<T>:

var queue = new ConcurrentQueue<string>();

// adding to queue is much the same as before
queue.Enqueue("First");
queue.Enqueue("Second");
queue.Enqueue("Third");

For front item access, the TryPeek() method must be used to attempt to see the first item of the queue. There is no Peek() method since, as you’ll remember, we can only peek on a non-empty queue, so we must have an atomic TryPeek() that checks for empty and then returns the first item if the queue is non-empty:

// to look at first item in queue without removing it, can use TryPeek.
// Note that there is no Peek(), this is because you need to check for empty first. TryPeek does.
string item;
if (queue.TryPeek(out item))
{
    Console.WriteLine("First item was " + item);
}
else
{
    Console.WriteLine("Queue was empty.");
}

Then, to remove items you use TryDequeue(). Once again this is for the same reason we have TryPeek() and not Peek():

// to remove items, use TryDequeue. If queue is empty returns false.
if (queue.TryDequeue(out item))
{
    Console.WriteLine("Dequeued first item " + item);
}

Just like the concurrent stack, the ConcurrentQueue<T> takes a snapshot when you call ToArray() or GetEnumerator(), which means that subsequent updates to the queue will not be seen when you iterate over the results. Thus once again the code below will only show the first item, since the other items were added after the snapshot:

var queue = new ConcurrentQueue<string>();

// adding to queue is much the same as before
queue.Enqueue("First");

var iterator = queue.GetEnumerator();

queue.Enqueue("Second");
queue.Enqueue("Third");

// only shows First
while (iterator.MoveNext())
{
    Console.WriteLine("Dequeued item " + iterator.Current);
}

Using collections concurrently

You’ll notice in the examples above I stuck to single-threaded examples so as to make them deterministic and the results obvious. Of course, if we used these collections in a truly multi-threaded way the results would be less deterministic, but would still be thread-safe and with no locking on your part required!

For example, say you have an order processor that takes an IEnumerable<Order>, handles each order in a multi-threaded fashion, and then groups the responses together in a concurrent collection for aggregation. This can be done easily with the TPL’s Parallel.ForEach():

public static IEnumerable<OrderResult> ProcessOrders(IEnumerable<Order> orderList)
{
    var proxy = new OrderProxy();
    var results = new ConcurrentQueue<OrderResult>();

    // notice that we can process all these in parallel and put the results
    // into our concurrent collection without needing any external locking!
    Parallel.ForEach(orderList,
        order =>
        {
            var result = proxy.PlaceOrder(order);

            results.Enqueue(result);
        });

    return results;
}

Summary

Obviously, if you do not need multi-threaded safety, you don’t need to use these collections, but when you do need multi-threaded collections these are just the ticket! The plethora of features (I always think of the movie The Three Amigos when I say plethora) built into these containers and the amazing way they achieve thread-safe access in an efficient manner is wonderful to behold.
Stay tuned next week where we’ll continue our discussion with the ConcurrentBag<T> and the ConcurrentDictionary<TKey,TValue>. For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this wonderful whitepaper by the Microsoft Parallel Computing Platform team here.

Technorati Tags: C#,.NET,Concurrent Collections,Collections,Multi-Threading,Little Wonders,BlackRabbitCoder,James Michael Hare

    Read the article

  • Adding Client-Side events to DevExpress ASP.Net controls

    - by nikolaosk
I have been involved in an ASP.NET project recently and I have implemented it using the awesome DevExpress ASP.NET controls. In this post I would like to show you how to use the client-side events that can make the user experience of your web application much better for the end user. We avoid unnecessary page flickering and postbacks. All this functionality is possible through the magic of Ajax and JavaScript. I am not going to cover Ajax and JavaScript in this post. With the DevExpress ASP.NET controls...(read more)

    Read the article

  • Rendering ASP.NET Script References into the Html Header

    - by Rick Strahl
One thing that I’ve come to appreciate in ASP.NET control development involving JavaScript is the ability to have more control over script and script include placement than ASP.NET provides natively. Specifically, in ASP.NET you can use either the ClientScriptManager or ScriptManager to embed scripts and script references into pages via code. This works reasonably well, but the script references that get generated are rendered into the HTML body and there’s very little operational control over the placement of scripts. If you have multiple controls, or several instances of the same control, that need to place the same scripts onto the page, it’s not difficult to end up with scripts that render in the wrong order and stop working correctly. This is especially critical if you load script libraries with dependencies, either via resources or even if you are rendering references to CDN resources.

Natively, ASP.NET provides a host of methods that help with embedding scripts into the page via either Page.ClientScript or the ASP.NET ScriptManager control (both with slightly different syntax):

- RegisterClientScriptBlock: Renders a script block at the top of the HTML body and should be used for embedding callable functions/classes.
- RegisterStartupScript: Renders a script block just prior to the </form> tag and should be used for embedding code that should execute when the page is first loaded. Not recommended – use jQuery.ready() or equivalent load time routines.
- RegisterClientScriptInclude: Embeds a reference to a script from a url into the page.
- RegisterClientScriptResource: Embeds a reference to a script from a resource file, generating a long resource url string.

All four of these methods render their <script> tags into the HTML body. The script block methods give you a little bit of control by having a ‘top’ and ‘bottom’ of the document location, which gives you some flexibility over script placement and precedence. Script includes and resource urls unfortunately do not even get that much control – references are simply rendered into the page in the order of declaration. The ASP.NET ScriptManager control facilitates this task a little bit with the ability to specify scripts in code and to programmatically check which scripts have already been registered, but it doesn’t provide any more control over the script rendering process itself. Further, the ScriptManager is a bear to deal with generically because generic code always has to check whether it is actually present.

Some time ago I posted a ClientScriptProxy class that helps with managing the latter process of sending script references either to ClientScript or to ScriptManager if it’s available. Since I last posted about this there have been a number of improvements in this API, one of which is the ability to control placement of scripts and script includes in the page, which I think is rather important and a missing feature in the ASP.NET native functionality.
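For reference, here is a minimal sketch of the native registration pattern described above; both calls use standard Page.ClientScript methods, while the script url and key names are hypothetical:

// Native ASP.NET registration: both references render into the HTML body,
// in declaration order, with no way to target the header.
protected void Page_Load(object sender, EventArgs e)
{
    // embeds <script src="..."></script> into the body
    Page.ClientScript.RegisterClientScriptInclude(typeof(Page), "jquery",
        ResolveUrl("~/scripts/jquery.min.js"));

    // renders a script block near the top of the body
    Page.ClientScript.RegisterClientScriptBlock(typeof(Page), "hello_function",
        "function helloWorld() { alert('hello'); }", true);
}

This is the baseline behavior that the ScriptRenderModes extension below improves upon.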
Handling ScriptRenderModes

One of the big enhancements that I’ve come to rely on is the ability of the various script rendering functions described above to support rendering in multiple locations:

/// <summary>
/// Determines how scripts are included into the page
/// </summary>
public enum ScriptRenderModes
{
    /// <summary>
    /// Inherits the setting from the control or from the ClientScript.DefaultScriptRenderMode
    /// </summary>
    Inherit,
    /// <summary>
    /// Renders the script include at the location of the control
    /// </summary>
    Inline,
    /// <summary>
    /// Renders the script include into the bottom of the header of the page
    /// </summary>
    Header,
    /// <summary>
    /// Renders the script include into the top of the header of the page
    /// </summary>
    HeaderTop,
    /// <summary>
    /// Uses ClientScript or ScriptManager to embed the script include to
    /// provide standard ASP.NET style rendering in the HTML body.
    /// </summary>
    Script,
    /// <summary>
    /// Renders script at the bottom of the page before the last Page.Controls
    /// literal control. Note this may result in unexpected behavior
    /// if /body and /html are not the last thing in the markup page.
    /// </summary>
    BottomOfPage
}

This enum is then applied to the various Register functions to allow more control over where scripts actually show up. Why is this useful? For me, I often render scripts out of control resources, and these scripts often include things like a JavaScript library (jQuery) and a few plug-ins. The order in which these load is critical so that jQuery.js always loads before any plug-in, for example. Typically I end up with a general script layout like this:

- Core libraries: HeaderTop
- Plug-ins: Header
- Script blocks: Header or Script, depending on other dependencies

There’s also an option to render scripts and CSS at the very bottom of the page before the last Page control on the page, which can be useful for speeding up page load when lots of scripts are loaded.

The API syntax of the ClientScriptProxy methods is closely compatible with ScriptManager’s, using static methods and control references to gain access to the page and embed scripts.
For example, to render some script into the current page in the header:

// Create script block in header
ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
    "hello_function",
    "function helloWorld() { alert('hello'); }",
    true, ScriptRenderModes.Header);

// Same again - shouldn't be rendered because it's the same id
ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
    "hello_function",
    "function helloWorld() { alert('hello'); }",
    true, ScriptRenderModes.Header);

// Create a second script block in header
ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
    "hello_function2",
    "function helloWorld2() { alert('hello2'); }",
    true, ScriptRenderModes.Header);

// This just calls ClientScript and renders into bottom of document
ClientScriptProxy.Current.RegisterStartupScript(this, typeof(ControlResources),
    "call_hello",
    "helloWorld();helloWorld2();",
    true);

which generates:

<html xmlns="http://www.w3.org/1999/xhtml" >
<head><title>
</title>
<script type="text/javascript">
function helloWorld() { alert('hello'); }
</script>
<script type="text/javascript">
function helloWorld2() { alert('hello2'); }
</script>
</head>
<body>
…
<script type="text/javascript">
//<![CDATA[
helloWorld();helloWorld2();//]]>
</script>
</form>
</body>
</html>

Note that the scripts are generated into the header rather than the body, except for the last script block, which is the call to RegisterStartupScript. In general I wouldn’t recommend using RegisterStartupScript – ever. It’s a much better practice to use a script-based load event to handle ‘startup’ code that should fire when the page first loads. So instead of the code above I’d actually recommend doing:

ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
    "call_hello",
    "$().ready( function() { alert('hello2'); });",
    true, ScriptRenderModes.Header);

assuming you’re using jQuery on the page.

For script includes from a url, the following demonstrates how to embed scripts into the header.
This example injects a jQuery and a jQuery.UI script reference from the Google CDN, then checks each with a script block to ensure that it has loaded and, if not, loads it from a server local location:

// load jquery from CDN
ClientScriptProxy.Current.RegisterClientScriptInclude(this, typeof(ControlResources),
    "http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js",
    ScriptRenderModes.HeaderTop);

// check if jquery loaded - if it didn't we're not online
string scriptCheck =
    @"if (typeof jQuery != 'object')
          document.write(unescape(""%3Cscript src='{0}' type='text/javascript'%3E%3C/script%3E""));";

string jQueryUrl = ClientScriptProxy.Current.GetWebResourceUrl(this, typeof(ControlResources),
    ControlResources.JQUERY_SCRIPT_RESOURCE);

ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
    "jquery_register", string.Format(scriptCheck, jQueryUrl), true,
    ScriptRenderModes.HeaderTop);

// Load jquery-ui from cdn
ClientScriptProxy.Current.RegisterClientScriptInclude(this, typeof(ControlResources),
    "http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/jquery-ui.min.js",
    ScriptRenderModes.Header);

// check if we need to load from local
string jQueryUiUrl = ResolveUrl("~/scripts/jquery-ui-custom.min.js");

ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
    "jqueryui_register", string.Format(scriptCheck, jQueryUiUrl), true,
    ScriptRenderModes.Header);

// Create script block in header
ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
    "hello_function", "$().ready( function() { alert('hello'); });", true,
    ScriptRenderModes.Header);

which in turn generates this HTML:

<html xmlns="http://www.w3.org/1999/xhtml" >
<head>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js" type="text/javascript"></script>
<script type="text/javascript">
if (typeof jQuery != 'object')
    document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/WebResource.axd?d=DIykvYhJ_oXCr-TA_dr35i4AayJoV1mgnQAQGPaZsoPM2LCdvoD3cIsRRitHKlKJfV5K_jQvylK7tsqO3lQIFw2&t=633979863959332352' type='text/javascript'%3E%3C/script%3E"));
</script>
<title>
</title>
<script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/jquery-ui.min.js" type="text/javascript"></script>
<script type="text/javascript">
if (typeof jQuery != 'object')
    document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/scripts/jquery-ui-custom.min.js' type='text/javascript'%3E%3C/script%3E"));
</script>
<script type="text/javascript">
$().ready(function() { alert('hello'); });
</script>
</head>
<body>
…
</body>
</html>

As you can see, there’s a bit more control in this process, as you can inject both script includes and script blocks into the document at the top or bottom of the header, plus if necessary at the usual body locations. This is quite useful, especially if you create custom server controls that interoperate with script and have certain dependencies. The above is also a good example of a useful switchable routine where you can switch where scripts load from by default – the above pulls from the Google CDN, but a configuration switch might automatically pull from the local development copies if you’re doing development, for example.

How does it work?

As mentioned, the ClientScriptProxy object mimics many of the ScriptManager script related methods and so provides close API compatibility with it, although it contains many additional overloads that enhance functionality.
It does, however, work against ScriptManager if it’s available on the page, or Page.ClientScript if it’s not, so it provides a single unified front end for script access. There are, however, many overloads of the original ScriptManager methods, like the above, that provide additional functionality.

The implementation of script header rendering is pretty straightforward – as long as a server header is available (i.e. it has to have runat="server" set). Otherwise these routines fall back to using the default document level insertions of ScriptManager/ClientScript. Given that there is a server header, it’s relatively easy to generate the script tags and code and append them to the header, either at the top or bottom. I suspect Microsoft didn’t provide header rendering functionality precisely because a runat="server" header is not required by ASP.NET, so behavior would be slightly unpredictable. That’s not really a problem for a custom implementation, however.

Here’s the RegisterClientScriptBlock implementation that takes a ScriptRenderModes parameter to allow header rendering:

/// <summary>
/// Renders client script block with the option of rendering the script block in
/// the Html header
///
/// For this to work Header must be defined as runat="server"
/// </summary>
/// <param name="control">any control instance, typically page</param>
/// <param name="type">Type that identifies this rendering</param>
/// <param name="key">unique script block id</param>
/// <param name="script">The script code to render</param>
/// <param name="addScriptTags">Ignored for header rendering, used for all other insertions</param>
/// <param name="renderMode">Where the block is rendered</param>
public void RegisterClientScriptBlock(Control control, Type type, string key, string script,
                                      bool addScriptTags, ScriptRenderModes renderMode)
{
    if (renderMode == ScriptRenderModes.Inherit)
        renderMode = DefaultScriptRenderMode;

    if (control.Page.Header == null ||
        renderMode != ScriptRenderModes.HeaderTop &&
        renderMode != ScriptRenderModes.Header &&
        renderMode != ScriptRenderModes.BottomOfPage)
    {
        RegisterClientScriptBlock(control, type, key, script, addScriptTags);
        return;
    }

    // No dupes - ref script include only once
    const string identifier = "scriptblock_";
    if (HttpContext.Current.Items.Contains(identifier + key))
        return;

    HttpContext.Current.Items.Add(identifier + key, string.Empty);

    StringBuilder sb = new StringBuilder();

    // Embed in header
    sb.AppendLine("\r\n<script type=\"text/javascript\">");
    sb.AppendLine(script);
    sb.AppendLine("</script>");

    int? index = HttpContext.Current.Items["__ScriptResourceIndex"] as int?;
    if (index == null)
        index = 0;

    if (renderMode == ScriptRenderModes.HeaderTop)
    {
        control.Page.Header.Controls.AddAt(index.Value, new LiteralControl(sb.ToString()));
        index++;
    }
    else if (renderMode == ScriptRenderModes.Header)
        control.Page.Header.Controls.Add(new LiteralControl(sb.ToString()));
    else if (renderMode == ScriptRenderModes.BottomOfPage)
        control.Page.Controls.AddAt(control.Page.Controls.Count - 1, new LiteralControl(sb.ToString()));

    HttpContext.Current.Items["__ScriptResourceIndex"] = index;
}

Note that the routine has to keep track of items inserted by id so that if the same item is added again with the same key it won’t generate two script entries. Additionally, the code has to keep track of how many insertions have been made at the top of the document so that entries are added in the proper order.
The RegisterScriptInclude method is similar, but there’s some additional logic here to deal with script file references and ClientScriptProxy’s (optional) custom resource handler that provides script compression:

/// <summary>
/// Registers a client script reference into the page with the option to specify
/// the script location in the page
/// </summary>
/// <param name="control">Any control instance - typically page</param>
/// <param name="type">Type that acts as qualifier (uniqueness)</param>
/// <param name="url">the Url to the script resource</param>
/// <param name="renderMode">Determines where the script is rendered</param>
public void RegisterClientScriptInclude(Control control, Type type, string url, ScriptRenderModes renderMode)
{
    const string STR_ScriptResourceIndex = "__ScriptResourceIndex";

    if (string.IsNullOrEmpty(url))
        return;

    if (renderMode == ScriptRenderModes.Inherit)
        renderMode = DefaultScriptRenderMode;

    // Extract just the script filename
    string fileId = null;

    // Check resource IDs and try to match to mapped file resources
    // Used to allow scripts not to be loaded more than once whether
    // embedded manually (script tag) or via resources with ClientScriptProxy
    if (url.Contains(".axd?r="))
    {
        string res = HttpUtility.UrlDecode(StringUtils.ExtractString(url, "?r=", "&", false, true));
        foreach (ScriptResourceAlias item in ScriptResourceAliases)
        {
            if (item.Resource == res)
            {
                fileId = item.Alias + ".js";
                break;
            }
        }
        if (fileId == null)
            fileId = url.ToLower();
    }
    else
        fileId = Path.GetFileName(url).ToLower();

    // No dupes - ref script include only once
    const string identifier = "script_";
    if (HttpContext.Current.Items.Contains(identifier + fileId))
        return;

    HttpContext.Current.Items.Add(identifier + fileId, string.Empty);

    // just use script manager or ClientScriptManager
    if (control.Page.Header == null ||
        renderMode == ScriptRenderModes.Script ||
        renderMode == ScriptRenderModes.Inline)
    {
        RegisterClientScriptInclude(control, type, url, url);
        return;
    }

    // Retrieve script index in header
    int? index = HttpContext.Current.Items[STR_ScriptResourceIndex] as int?;
    if (index == null)
        index = 0;

    StringBuilder sb = new StringBuilder(256);

    url = WebUtils.ResolveUrl(url);

    // Embed in header
    sb.AppendLine("\r\n<script src=\"" + url + "\" type=\"text/javascript\"></script>");

    if (renderMode == ScriptRenderModes.HeaderTop)
    {
        control.Page.Header.Controls.AddAt(index.Value, new LiteralControl(sb.ToString()));
        index++;
    }
    else if (renderMode == ScriptRenderModes.Header)
        control.Page.Header.Controls.Add(new LiteralControl(sb.ToString()));
    else if (renderMode == ScriptRenderModes.BottomOfPage)
        control.Page.Controls.AddAt(control.Page.Controls.Count - 1, new LiteralControl(sb.ToString()));

    HttpContext.Current.Items[STR_ScriptResourceIndex] = index;
}

There’s a little more code here that deals with cleaning up the passed-in url, and also some custom handling of script resources that run through the ScriptCompressionModule – any script resources loaded in this fashion are automatically cached based on the resource id. Raw urls extract just the filename from the url and cache based on that. All of this is to avoid doubling up of scripts if called multiple times, by multiple instances of the same control for example, or by several controls that all load the same resources/includes.
Finally, RegisterClientScriptResource utilizes the previous method to wrap the WebResourceUrl, as well as some custom functionality for the resource compression module:

/// <summary>
/// Returns a WebResource or ScriptResource URL for script resources that are to be
/// embedded as script includes.
/// </summary>
/// <param name="control">Any control</param>
/// <param name="type">A type in the assembly where resources are located</param>
/// <param name="resourceName">Name of the resource to load</param>
/// <param name="renderMode">Determines where in the document the link is rendered</param>
public void RegisterClientScriptResource(Control control, Type type,
                                         string resourceName, ScriptRenderModes renderMode)
{
    string resourceUrl = GetClientScriptResourceUrl(control, type, resourceName);
    RegisterClientScriptInclude(control, type, resourceUrl, renderMode);
}

/// <summary>
/// Works like GetWebResourceUrl but can be used with javascript resources
/// to allow use of resource compression (if the module is loaded).
/// </summary>
/// <param name="control"></param>
/// <param name="type"></param>
/// <param name="resourceName"></param>
/// <returns></returns>
public string GetClientScriptResourceUrl(Control control, Type type, string resourceName)
{
#if IncludeScriptCompressionModuleSupport
    // If the wwScriptCompression Module is loaded through Web.config, use it to compress
    // script resources by using the wcSC.axd Url the module intercepts
    if (ScriptCompressionModule.ScriptCompressionModuleActive)
    {
        string url = "~/wwSC.axd?r=" + HttpUtility.UrlEncode(resourceName);
        if (type.Assembly != GetType().Assembly)
            url += "&t=" + HttpUtility.UrlEncode(type.FullName);

        return WebUtils.ResolveUrl(url);
    }
#endif
    return control.Page.ClientScript.GetWebResourceUrl(type, resourceName);
}

This code merely retrieves the resource url and then simply calls back to RegisterClientScriptInclude with the url to be embedded, which means there’s nothing specific to deal with other than the custom compression module logic – which is nice and easy.

What else is there in ClientScriptProxy?

ClientScriptProxy also provides a few other useful services beyond what I’ve already covered here:

- Transparent ScriptManager and ClientScript calls: ClientScriptProxy includes a host of routines that help figure out whether a script manager is available or not, and all functions in this class call the appropriate object – ScriptManager or ClientScript – that is available in the current page, to ensure that scripts get embedded into pages properly. This is especially useful for control development, where controls have no control over the scripting environment in place on the page.
- RegisterCssLink and RegisterCssResource: Much like the script embedding functions, these two methods allow embedding of CSS links. CSS links are appended to the header or to a form declared with runat="server".
- LoadControlScript: A high level resource loading routine that can be used to easily switch between different script linking modes. It supports loading from a WebResource, from a url, or not loading anything at all. This is very useful if you build controls that deal with specification of resource urls/ids in a standard way.

Check out the full code

You can check out the full code to the ClientScriptProxy class here:

ClientScriptProxy.cs
ClientScriptProxy Documentation (class reference)

Note that the ClientScriptProxy has a few dependencies in the West Wind Web Toolkit, of which it is a part.
ControlResources holds a few standard constants and script resource links, and the ScriptCompressionModule is referenced in a few of the script inclusion methods. There’s also a useful ScriptContainer companion control to the ClientScriptProxy that allows scripts to be placed within the page’s markup, including the ability to specify the script location and script minification options. You can find all the dependencies in the West Wind Web Toolkit repository:

West Wind Web Toolkit Repository
West Wind Web Toolkit Home Page

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET  JavaScript

    Read the article

  • DevConnections jQuery Session Slides and Samples posted

    - by Rick Strahl
I’ve posted all of my slides and samples from the DevConnections VS 2010 launch event last week in Vegas. All three sessions are contained in a single zip file which contains all slide decks and samples in one place: www.west-wind.com/files/conferences/jquery.zip

There were 3 separate sessions:

Using jQuery with ASP.NET
Starting with an overview of jQuery client features via many short and fun examples, you'll find out about core features like the power of selectors to select document elements, manipulating these elements with jQuery's wrapped set methods in a browser independent way, hooking up and handling events easily, and generally applying concepts of unobtrusive JavaScript principles to client scripting. The session also covers AJAX interaction between jQuery and the .NET server side code using several different approaches, including sending HTML and JSON data and how to avoid user interface duplication by using client side templating. This session relies heavily on live examples and walk-throughs.

jQuery Extensibility and Integration with ASP.NET Server Controls
One of the great strengths of the jQuery JavaScript framework is its simple, yet powerful extensibility model that has resulted in an explosion of plug-ins available for jQuery. You need it – chances are there's a plug-in for it! In this session we'll look at a few plug-ins to demonstrate the power of the jQuery plug-in model before diving in and creating our own custom jQuery plug-ins. We'll look at how to create a plug-in from scratch as well as discussing when it makes sense to do so. Once you have a plug-in it can also be useful to integrate it more seamlessly with ASP.NET by creating server controls that coordinate both server side and jQuery client side behavior. I'll demonstrate a host of custom components that utilize a combination of client side jQuery functionality and server side ASP.NET server controls that provide smooth integration in the user interface development process. This topic focuses on component development, both for pure client side plug-ins and mixed mode controls.

jQuery Tips and Tricks
This session was kind of a last minute substitution for an ASP.NET AJAX talk. Nothing too radical here :-), but I focused on things that have been most productive for myself. Look at the slide deck for individual points and some of the specific samples.

It was interesting to see that, unlike at previous conferences, this time around all the sessions were fairly packed – interest in jQuery is definitely getting more pronounced, especially with Microsoft's recent announcement of focusing on jQuery integration rather than continuing on the path of ASP.NET AJAX – which is a welcome change.

Most of the samples also use the West Wind Web & Ajax Toolkit and the support tools contained within it – a snapshot version of the toolkit is included in the samples download. Specifically, a number of the samples use functionality in the ww.jquery.js support file, which contains a fairly large set of plug-ins and helper functionality – most of these pieces, while contained in the single file, are self-contained and can be lifted out of this file (several people asked). Hopefully you'll find something useful in these slides and samples.

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET  jQuery

    Read the article

  • Google CDN - using http vs https

    - by HorusKol
All the examples of accessing Google's CDN use https:// in the URL (including on Google itself) – but this has caused a problem when testing in Safari (a certificate problem and also a different domain).

<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>

I have switched to calling it over http instead, but I'm just wondering whether this is a mistake or a security issue?

    Read the article

  • Create a Remote Git Repository from an Existing XCode Repository

    - by codeWithoutFear
Introduction

Distributed version control systems (VCS's), like Git, provide a rich set of features for managing source code. Many development tools, including XCode, provide built-in support for various VCS's. These tools provide simple configuration with limited customization to get you up and running quickly while still providing the safety net of basic version control.

I hate losing (and re-doing) work. I have OCD when it comes to saving and versioning source code. Save early, save often, and commit to the VCS often. I also hate merging code. Smaller and more frequent commits enable me to minimize merge time and effort as well. The work flow I prefer even for personal exploratory projects is:

1. Make small local changes to the codebase to create an incrementally improved (and working) system.
2. Commit these changes to the local repository. Local repositories are quick to access, function even while offline, and provide the confidence to continue making bold changes to the system. After all, I can easily recover to a recent working state.
3. Repeat 1 & 2 until the codebase contains "significant" functionality and I have connectivity to the remote repository.
4. Push the accumulated changes to the remote repository. The smaller the change set, the less likely extensive merging will be required. Smaller is better, IMHO.

The remote repository typically has a greater degree of fault tolerance and active management dedicated to it. This can be as simple as a network share that is backed up nightly or as complex as dedicated hardware with specialized server-side processing and significant administrative monitoring.

XCode's out-of-the-box Git integration enables steps 1 and 2 above. Time Machine backups of the local repository add an additional degree of fault tolerance, but do not support collaboration or take advantage of managed infrastructure such as on-premises or cloud-based storage.

Creating a Remote Repository

These are the steps I use to enable the full workflow identified above. For simplicity the "remote" repository is created on the local file system. This location could easily be on a mounted network volume.

Create a Test Project

My project is called HelloGit and is located at /Users/Don/Dev/HelloGit. Be sure to commit all outstanding changes. XCode always leaves a single changed file for me after the project is created and the initial commit is submitted.

Clone the Local Repository

We want to clone the XCode-created Git repository to the location where the remote repository will reside. In this case it will be /Users/Don/Dev/RemoteHelloGit.

1. Open the Terminal application.
2. Clone the local repository to the remote repository location:
   git clone /Users/Don/Dev/HelloGit /Users/Don/Dev/RemoteHelloGit

Convert the Remote Repository to a Bare Repository

The remote repository only needs to contain the Git database. It does not need a checked out branch or local files.

1. Go to the remote repository folder:
   cd /Users/Don/Dev/RemoteHelloGit
2. Indicate the repository is "bare":
   git config --bool core.bare true
3. Remove the files, leaving the .git folder:
   rm -R *
4. Remove the "origin" remote:
   git remote rm origin

Configure the Local Repository

The local repository should reference the remote repository. The remote name "origin" is used by convention to indicate the originating repository. This is set automatically when a repository is cloned. We will use the "origin" name here to reflect that relationship.
1. Go to the local repository folder:
   cd /Users/Don/Dev/HelloGit
2. Add the remote:
   git remote add origin /Users/Don/Dev/RemoteHelloGit

Test Connectivity

Any changes made to the local Git repository can be pushed to the remote repository, subject to the merging rules Git enforces.

1. Create a new local file:
   date > date.txt
2. Add the new file to the local index:
   git add date.txt
3. Commit the change to the local repository:
   git commit -m "New file: date.txt"
4. Push the change to the remote repository:
   git push origin master

Now you can save, commit, and push/pull to your OCD heart's content! Code without fear!

--Don

    Read the article

  • Visual WebGui's XAML based programming for web developers

    - by Webgui
While ASP.NET provides an event-based approach, it is completely dismissed when working with AJAX: the richness of the server is lost, replaced with JavaScript programming and coupled with a very high security risk. Visual WebGui reinstates the power of the server in AJAX development and provides a stateful yet scalable, server-centric architecture that delivers the benefits and user productivity of AJAX with the security and developer productivity we had before AJAX stormed into our lives.

"When I first came up with the concept of Visual WebGui, I was frustrated by the fragile and complex nature of developing web applications. The contrast in productivity between working in a fully OOP compiled environment vs. scripting, even today with jQuery, Dojo and such, is still huge. Even today the greatest sponsor of JavaScript programming, Google, is offering a framework to avoid JavaScript using Java that compiles to JavaScript (GWT). So I decided to find a way to abstract the complexity, or rather delegate the complex job, to enable developers to concentrate on the "What" instead of the "How", and embraced the form-based approach," said Guy Peled, the inventor of Visual WebGui.

Although traditional OOP development still rules the enterprise, the differences between web sites and web applications have blurred, and so have the differences between classic developers and web developers. As a result, we now see declarative languages in desktop / backend development environments (WPF / WF) and we see OOP gaining more and more power in web development (ASP.NET MVC / ASP.NET DOM). However, what has not changed is the enterprise need for security, development ROI, reach, highly responsive and interactive UIs, and scalability.

The advantages that declarative languages and 'on demand' compilation provide over classic development are mostly the flexibility and the more readable initialize component they offer, which is what Gizmox is aspiring to provide by replacing the designer initialize component with XAML code. The code in this new project template will be compiled on demand using the build provider mechanism ASP.NET has. This means that the performance hit is only on the first request; after that the performance is the same as a prebuilt solution. This allows the flexibility of dynamically updated sites and the power of fully blown enterprise applications over the web. You can also use the prebuilt features available in ASP.NET to enjoy both worlds in production.

The VWG XAML implementation (VWG Sites) will be the first truly compilable XAML implementation, as Microsoft implemented Silverlight and WPF as runtime markup interpretation, as opposed to the ASP.NET markup implementation, which is compiled to CLR code once. We have chosen to implement the VWG Sites parser as a different way to create CLR code because it provides greater performance than the reflection alternative. VWG Sites will also be the first server side XAML UI engine which, while giving the power of XAML, will not require any plug-ins or installations on the client side.

Short demo video of VWG Sites markup. There is also a live sample available here.

    Read the article

  • Twitter like character counter - jQuery version

    - by bipinjoshi
My recent article titled "Displaying a Character Counter for Multiline Textboxes" shows you how to create a Twitter-like character counter for multiline textboxes. The article does so using an ASP.NET AJAX client behavior. Here is a jQuery version of the code that does a similar job. Note, however, that unlike the ASP.NET AJAX client behavior illustrated in the article, the following code takes a "function" based approach to quickly implement similar functionality.
http://www.bipinjoshi.net/articles/84e691b2-0306-4911-87bb-875806ba981b.aspx

    Read the article

< Previous Page | 209 210 211 212 213 214 215 216 217 218 219 220  | Next Page >