Search Results

Search found 8040 results on 322 pages for 'live messenger'.

  • How to get a 12 on the Joel Test with a small team of 3-4 on a PHP website?

    - by keisimone
    Hi, I read the Joel Test and was inspired by it. I am asking for specific help to achieve a 12 for my current project. I am working in a team of 3-4 on a PHP project based on CakePHP. I have a dedicated server running Linux on which I intend to host the live website, and a paid Assembla plan whose SVN repository I am using. That's it. I would like to hear one major, impactful step towards answering each point raised by the Joel Test - by impactful I mean that doing just that one thing would raise my project to scoring (or close to scoring) in that area. Let's begin:

    1) Do you have a source control system? I am very proud to say that learning how to use SVN, even though we know nothing about branch/release policies, made the biggest impact on our programming lives. The SVN repository is on a paid Assembla plan. Feel free to add anything if anyone thinks we can do more in this area.

    2) Can you make a build in one step? I think the issue is how I define a build. We are going to define it as: if tomorrow my dedicated server crashed, we found another server from an ordinary hosting provider, and all my team's machines were destroyed, how would we get the website up again? My code is in SVN on Assembla. One step means as close to one button to push as possible.

    3) Do you make daily builds? I know nothing about this; please help. I googled and came across phpUnderControl, but I am not sure we can get that to work with Assembla. Are there easier ways?

    4) Do you have a bug database? We have not used Assembla's bug-tracking features, ashamed to say. I think I will sort this out myself.

    5) Do you fix bugs before writing new code? Policy issue; I will sort it out myself.

    6) Do you have an up-to-date schedule? Working on it, same as above. Estimates have historically been overly optimistic. Having spent too much time using all sorts of project management tools, I think this time I am going to use just paper and pen. Please don't tell me Scrum - I need to keep things even simpler than that.

    7) Do you have a spec? We do, but it's on paper. What would be a good template?

    8) Do programmers have quiet working conditions? Well, we work from home in a distributed manner, so...

    9) Do you use the best tools money can buy? We use cheap tools; we are not big.

    10) Do you have testers? No testers. Since we have a team of 3, I think I should get one tester, even on a part-time basis. How should this one part-time tester work to extract maximum effect? Should I get him to write out the test scenarios and expected outcomes and then run them, or should I write the test scenarios and ask him to execute them? We will be writing the test cases ourselves using SimpleTest. I also came across Selenium - how useful is that?

    11) Do new candidates write code during their interview? Not applicable, but I will do it next time I try to hire anyone, hires and contractors alike.

    12) Do you do hallway usability testing? Will do so on a per-month or per-milestone basis. I will grab my friends who are not net-savvy; they will be the best testers of this type.

    Thank you.

  • How can I make a jQuery UI 'draggable()' div draggable for touchscreen?

    - by artlung
    I have a jQuery UI draggable() that works in Firefox and Chrome. The user interface concept is basically click to create a "post-it" type item. I click or tap on div#everything (100% high and wide), which listens for clicks, and an input textarea displays. You add text, and when you're done it saves it. You can then drag this element around. That works in normal browsers, but on an iPad I can test with, I can't drag the items around. If I touch to select (the item then dims slightly), I can't then drag it. It won't drag left or right at all. I can drag up or down, but then I'm not dragging the individual div, I'm dragging the whole webpage.

    So here's the code I use to capture clicks:

        $('#everything').bind('click', function(e){
          var elem = document.createElement('DIV');
          STATE.top = e.pageY;
          STATE.left = e.pageX;
          var e = $(elem).css({ top: STATE.top, left: STATE.left })
            .html('<textarea></textarea>')
            .addClass('instance')
            .bind('click', function(event){ return false; });
          $(this).append(e);
        });

    And here's the code I use to "save" the note and turn the input div into just a display div:

        $('textarea').live('mouseleave', function(){
          var val = jQuery.trim($(this).val());
          STATE.content = val;
          if (val == '') {
            $(this).parent().remove();
          } else {
            var div = $(this).parent();
            div.text(val).css({ height: '30px' });
            STATE.height = 30;
            if ( div.width() !== div[0].clientWidth || div.height() !== div[0].clientHeight ) {
              while (div.width() !== div[0].clientWidth || div.height() !== div[0].clientHeight) {
                var h = div.height() + 10;
                STATE.height = h;
                div.css({ height: (h) + 'px' }); // element just got scrollbars
              }
            }
            STATE.guid = uniqueID();
            div.addClass('savedNote').attr('id', STATE.guid).draggable({
              stop: function() {
                var offset = $(this).offset();
                STATE.guid = $(this).attr('id');
                STATE.top = offset.top;
                STATE.left = offset.left;
                STATE.content = $(this).text();
                STATE.height = $(this).height();
                STATE.save();
              }
            });
            STATE.save();
            $(this).remove();
          }
        });

    And I have this code when I load the page for saved notes:

        $('.savedNote').draggable({
          stop: function() {
            STATE.guid = $(this).attr('id');
            var offset = $(this).offset();
            STATE.top = offset.top;
            STATE.left = offset.left;
            STATE.content = $(this).text();
            STATE.height = $(this).height();
            STATE.save();
          }
        });

    My STATE object handles saving the notes. On load, this is the whole HTML body:

        <body>
          <div id="everything"></div>
          <div class="instance savedNote" id="iddd1b0969-c634-8876-75a9-b274ff87186b" style="top:134px;left:715px;height:30px;">Whatever dude</div>
          <div class="instance savedNote" id="id8a129f06-7d0c-3cb3-9212-0f38a8445700" style="top:131px;left:347px;height:30px;">Appointment 11:45am</div>
          <div class="instance savedNote" id="ide92e3d13-afe8-79d7-bc03-818d4c7a471f" style="top:144px;left:65px;height:80px;">What do you think of a board where you can add writing as much as possible?</div>
          <div class="instance savedNote" id="idef7fe420-4c19-cfec-36b6-272f1e9b5df5" style="top:301px;left:534px;height:30px;">This was submitted</div>
          <div class="instance savedNote" id="id93b3b56f-5e23-1bd1-ddc1-9be41f1efb44" style="top:390px;left:217px;height:30px;">Hello world from iPad.</div>
        </body>

    So my question is really: how can I make this work better on iPad? I'm not set on jQuery UI; I'm wondering whether this is something I'm doing wrong with jQuery UI or jQuery, or whether there are better frameworks for cross-platform, backward-compatible draggable() elements that work with touchscreen UIs.
More general comments about how to write UI components like this would be welcome as well. Thanks!
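
    One approach that is often used to make jQuery UI's mouse-driven interactions usable on the iPad is to translate single-finger touch events into the synthetic mouse events the widgets already listen for (this is the idea behind plugins such as jQuery UI Touch Punch). The snippet below is only a minimal sketch of that idea, not the poster's code: the .savedNote selector is reused from the markup above, and the assumption that only single-finger drags matter is mine.

        // Sketch: forward single-finger touches on notes as mouse events so that
        // jQuery UI draggable (which listens for mousedown/mousemove/mouseup) reacts.
        function forwardTouchAsMouse(event) {
          if (!$(event.target).closest('.savedNote').length) return; // only saved notes
          if (event.changedTouches.length > 1) return;               // ignore multi-touch gestures
          var touch = event.changedTouches[0];
          var map = { touchstart: 'mousedown', touchmove: 'mousemove', touchend: 'mouseup' };
          var simulated = document.createEvent('MouseEvents');
          simulated.initMouseEvent(map[event.type], true, true, window, 1,
            touch.screenX, touch.screenY, touch.clientX, touch.clientY,
            false, false, false, false, 0, null);
          touch.target.dispatchEvent(simulated);
          event.preventDefault(); // keep the page itself from panning while dragging a note
        }
        document.addEventListener('touchstart', forwardTouchAsMouse, true);
        document.addEventListener('touchmove', forwardTouchAsMouse, true);
        document.addEventListener('touchend', forwardTouchAsMouse, true);

    Whether to attach the listeners to document or only to the notes' container is a design choice; scoping them more tightly avoids interfering with normal page scrolling elsewhere.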

  • Big smart ViewModels, dumb Views, and any model, the best MVVM approach?

    - by Edward Tanguay
    The following code is a refactoring of my previous MVVM approach (Fat Models, skinny ViewModels and dumb Views, the best MVVM approach?) in which I moved the logic and the INotifyPropertyChanged implementation from the model back up into the ViewModel. This makes more sense since, as was pointed out, you often have to use models that you either can't change or don't want to change, and so your MVVM approach should be able to work with any model class as it happens to exist. This example still allows you to view the live data from your model in design mode in Visual Studio and Expression Blend, which I think is significant, since you could have a mock data store that the designer connects to, containing e.g. the smallest and largest strings the UI can possibly encounter, so that he can adjust the design based on those extremes.

    Questions: I'm a bit surprised that I even have to put a timer in my ViewModel, since that seems like a function of INotifyPropertyChanged; it feels redundant, but it was the only way I could get the XAML UI to constantly (once per second) reflect the state of my model. So it would be interesting to hear from anyone who has taken this approach whether you encountered any disadvantages down the road, e.g. with threading or performance. The following code will work if you just copy the XAML and code-behind into a new WPF project.

    XAML:

        <Window x:Class="TestMvvm73892.Window1"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:local="clr-namespace:TestMvvm73892"
            Title="Window1" Height="300" Width="300">
          <Window.Resources>
            <ObjectDataProvider x:Key="DataSourceCustomer" ObjectType="{x:Type local:CustomerViewModel}" MethodName="GetCustomerViewModel"/>
          </Window.Resources>
          <DockPanel DataContext="{StaticResource DataSourceCustomer}">
            <StackPanel DockPanel.Dock="Top" Orientation="Horizontal">
              <TextBlock Text="{Binding Path=FirstName}"/>
              <TextBlock Text=" "/>
              <TextBlock Text="{Binding Path=LastName}"/>
            </StackPanel>
            <StackPanel DockPanel.Dock="Top" Orientation="Horizontal">
              <TextBlock Text="{Binding Path=TimeOfMostRecentActivity}"/>
            </StackPanel>
          </DockPanel>
        </Window>

    Code-behind:

        using System;
        using System.Windows;
        using System.ComponentModel;
        using System.Threading;

        namespace TestMvvm73892
        {
          public partial class Window1 : Window
          {
            public Window1()
            {
              InitializeComponent();
            }
          }

          //view model
          public class CustomerViewModel : INotifyPropertyChanged
          {
            private string _firstName;
            private string _lastName;
            private DateTime _timeOfMostRecentActivity;
            private Timer _timer;

            public string FirstName
            {
              get { return _firstName; }
              set { _firstName = value; this.RaisePropertyChanged("FirstName"); }
            }

            public string LastName
            {
              get { return _lastName; }
              set { _lastName = value; this.RaisePropertyChanged("LastName"); }
            }

            public DateTime TimeOfMostRecentActivity
            {
              get { return _timeOfMostRecentActivity; }
              set { _timeOfMostRecentActivity = value; this.RaisePropertyChanged("TimeOfMostRecentActivity"); }
            }

            public CustomerViewModel()
            {
              _timer = new Timer(CheckForChangesInModel, null, 0, 1000);
            }

            private void CheckForChangesInModel(object state)
            {
              Customer currentCustomer = CustomerViewModel.GetCurrentCustomer();
              MapFieldsFromModeltoViewModel(currentCustomer, this);
            }

            public static CustomerViewModel GetCustomerViewModel()
            {
              CustomerViewModel customerViewModel = new CustomerViewModel();
              Customer currentCustomer = CustomerViewModel.GetCurrentCustomer();
              MapFieldsFromModeltoViewModel(currentCustomer, customerViewModel);
              return customerViewModel;
            }

            public static void MapFieldsFromModeltoViewModel(Customer model, CustomerViewModel viewModel)
            {
              viewModel.FirstName = model.FirstName;
              viewModel.LastName = model.LastName;
              viewModel.TimeOfMostRecentActivity = model.TimeOfMostRecentActivity;
            }

            public static Customer GetCurrentCustomer()
            {
              return Customer.GetCurrentCustomer();
            }

            //INotifyPropertyChanged implementation
            public event PropertyChangedEventHandler PropertyChanged;

            private void RaisePropertyChanged(string property)
            {
              if (PropertyChanged != null)
              {
                PropertyChanged(this, new PropertyChangedEventArgs(property));
              }
            }
          }

          //model
          public class Customer
          {
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public DateTime TimeOfMostRecentActivity { get; set; }

            public static Customer GetCurrentCustomer()
            {
              return new Customer { FirstName = "Jim", LastName = "Smith", TimeOfMostRecentActivity = DateTime.Now };
            }
          }
        }

  • Java Runtime 6 with SOCKS v5 proxy - possible?

    - by rwired
    I have written an application that (amongst other things) runs a local service in Windows that acts as a SOCKS v5 proxy for Firefox. I'm in the debugging phase right now and have found certain websites that don't work correctly. For example, the Java applet for picture uploading on Facebook.com fails because it is unable to look up domains. My app overrides a hidden FF config setting, network.proxy.socks_remote_dns, setting it to true. The whole purpose of the app is to allow access to websites when behind a firewall (e.g. if the user is in China), so this setting is essential to ensure that domains are also resolved remotely (and not just HTTP requests). In the JRE6 settings (documented here) there isn't an equivalent setting, and since remote DNS resolution is a feature of SOCKS v5 and not v4, as the documentation seems to imply, I'm worried that it's just not possible. How can I programmatically make sure the JRE uses a SOCKS v5 proxy for all requests (including DNS)?

    UPDATE: Steps to reproduce the problem:

    1. Make sure you are behind a firewall that blocks (or redirects) internet access, including DNS.
    2. Install PuTTY and add a dynamic SSH tunnel on some port number of your choice (e.g. 9870). Then log in to a remote server that has full access to the internet.
    3. Launch Firefox and you will not be able to browse the web.
    4. In FF network settings, set the SOCKS v5 proxy to localhost:9870.
    5. In FF, go to about:config and change network.proxy.socks_remote_dns to true.
    6. You will now be able to browse the web.
    7. Go to facebook.com, log in, go to your profile and attempt to use the picture uploader Java applet to add some pictures.
    8. It will fail with a series of class-not-found errors looking similar to: load: class com.facebook.facebookphotouploader5.FacebookPhotoUploader5.class not found.

    I believe this is failing because the JRE is unable to resolve the domain that the class resides on. I'm basing this belief on the fact that the documentation (http://java.sun.com/javase/6/docs/technotes/guides/deployment/deployment-guide/properties.html) talks only about SOCKS v4 (which, as far as I know, does not support remote DNS). My deployment.properties file is located in %APPDATA%\Sun\Java\Deployment. I can confirm that modifications I make in the Java Control Panel get written into that file. If, instead of "Use browser settings", I override the network settings for Java and attempt to use the SOCKS proxy settings manually, I still have the issue. There does not seem to be an easy way to force the JRE to do DNS remotely through the proxy.

    UPDATE 2: Without the SOCKS proxy, from my local client, www.facebook.com resolves to 203.161.230.171 and upload.facebook.com resolves to 64.33.88.161. Neither host is reachable (because of the firewall). If I log in to the remote server, I get www.facebook.com 69.63.187.17 and upload.facebook.com 69.63.178.32. Both of these IPs change after a few minutes, as it seems Facebook uses round-robin DNS and other load-balancing. With the proxy settings set in Firefox, I can navigate to www.facebook.com without any difficulty (since DNS is being resolved remotely on the proxy). When I go to the page with the Java applet, it fails with the stack-trace messages I've already reported. However, if I edit Windows\System32\drivers\etc\hosts, adding the correct IP for upload.facebook.com, I can get the applet to load and work correctly (a restart of FF is sometimes necessary). This evidence seems to support my theory that the Java runtime is not resolving DNS on the proxy, but instead just routing traffic through it.

    My application is for mass deployment and needs to work with Java applets on other sites (not just Facebook). I really need a work-around for this problem.

    UPDATE 3: Stack-trace dump as requested by ZZ Coder:

        load: class com.facebook.facebookphotouploader5.FacebookPhotoUploader5.class not found.
        java.lang.ClassNotFoundException: com.facebook.facebookphotouploader5.FacebookPhotoUploader5.class
            at sun.plugin2.applet.Applet2ClassLoader.findClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at sun.plugin2.applet.Plugin2ClassLoader.loadCode(Unknown Source)
            at sun.plugin2.applet.Plugin2Manager.createApplet(Unknown Source)
            at sun.plugin2.applet.Plugin2Manager$AppletExecutionRunnable.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)
        Caused by: java.net.SocketException: Connection reset
            at java.net.SocketInputStream.read(Unknown Source)
            at java.io.BufferedInputStream.fill(Unknown Source)
            at java.io.BufferedInputStream.read1(Unknown Source)
            at java.io.BufferedInputStream.read(Unknown Source)
            at sun.net.www.http.HttpClient.parseHTTPHeader(Unknown Source)
            at sun.net.www.http.HttpClient.parseHTTP(Unknown Source)
            at sun.net.www.http.HttpClient.parseHTTP(Unknown Source)
            at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
            at java.net.HttpURLConnection.getResponseCode(Unknown Source)
            at sun.plugin2.applet.Applet2ClassLoader.getBytes(Unknown Source)
            at sun.plugin2.applet.Applet2ClassLoader.access$000(Unknown Source)
            at sun.plugin2.applet.Applet2ClassLoader$1.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            ... 7 more
        Exception: java.lang.ClassNotFoundException: com.facebook.facebookphotouploader5.FacebookPhotoUploader5.class
        Dumping class loader cache...
        Live entry: key=http://upload.facebook.com/controls/2008.10.10_v5.5.8/,FacebookPhotoUploader5.jar,FacebookPhotoUploader5.jar, refCount=1, threadGroup=sun.plugin2.applet.Applet2ThreadGroup[name=http://upload.facebook.com/controls/2008.10.10_v5.5.8/-threadGroup,maxpri=4]
        Done.

  • HttpModule Init method is called several times - why?

    - by MartinF
    I was creating an HTTP module, and while debugging I noticed something that at first seemed like weird behaviour. When I set a breakpoint in the Init method of the module, I can see that Init is being called several times, even though I have only started the website for debugging and made one single request (sometimes it is hit only once, other times as many as 10 times).

    I know that I should expect several instances of the HttpApplication to be running, and that the HTTP modules will be created for each of them, but a single page request should be handled by a single HttpApplication object and therefore only fire the associated events once. Still, it fires the events several times for each request, which makes no sense - other than that the handler must have been added several times within that HttpApplication - which means it is the same module's Init method being called every time, and not a new HttpApplication being created each time it hits my breakpoint (see my code example at the bottom).

    What could be going wrong here? Is it because I am debugging and set a breakpoint in the HTTP module? I have noticed that if I start up the website for debugging and quickly step over the breakpoint in the module, Init is hit only once, and the same goes for the event handler. If I instead let it hang at the breakpoint for a few seconds, Init is called several times (it seems to depend on how long I wait before stepping over the breakpoint). Maybe this is some built-in feature to make sure that the module is initialised and the HttpApplication can serve requests, but it also seems like something that could have catastrophic consequences. It could seem logical, as it might be trying to finish the request, and since I have set the breakpoint it thinks something has gone wrong and calls Init again so it can handle the request? But is this what is actually happening, and is everything fine (I am just guessing), or is it a real problem?

    What I am especially concerned about is that if something makes the site hang on the production/live server for a few seconds, a lot of event handlers are added through Init, and each request to the page then suddenly fires the event handler several times. This behaviour could quickly bring any site down. I have looked at the "original" .NET code used by the HTTP modules for FormsAuthentication and the RoleManagerModule etc., but my code isn't any different from what those modules do.

    My code looks like this:

        public void Init(HttpApplication app)
        {
          if (CommunityAuthenticationIntegration.IsEnabled)
          {
            FormsAuthenticationModule formsAuthModule = (FormsAuthenticationModule) app.Modules["FormsAuthentication"];
            formsAuthModule.Authenticate += new FormsAuthenticationEventHandler(this.OnAuthenticate);
          }
        }

    Here is an example of how it is done in the RoleManagerModule from the .NET Framework:

        public void Init(HttpApplication app)
        {
          if (Roles.Enabled)
          {
            app.PostAuthenticateRequest += new EventHandler(this.OnEnter);
            app.EndRequest += new EventHandler(this.OnLeave);
          }
        }

    Does anyone know what is going on? (I just hope someone out there can tell me why this is happening and assure me that everything is perfectly fine.) :)

    UPDATE: I have tried to narrow down the problem, and so far I have found that the Init method being called is always on a new object of my HTTP module (contrary to what I thought before). It seems that for the first request (when starting up the site) all of the HttpApplication objects being created, and their modules, try to serve that first request and therefore all hit the event handler that was added. I can't really figure out why this is happening. If I request another page, all the HttpApplications created (and their modules) will again try to serve the request, causing the event handler to be hit multiple times. But it also seems that if I then jump back to the first page (or another one), only one HttpApplication will take care of the request and everything is as expected - as long as I don't let it hang at a breakpoint. If I let it hang at a breakpoint, it begins to create new HttpApplication objects and starts adding HttpApplications (more than one) to serve/handle the request (which is already in the process of being served by the HttpApplication currently stopped at the breakpoint). I guess, or hope, that it might be some intelligent behind-the-scenes way of helping to distribute and handle load and/or errors, but I have no clue. I hope someone out there can assure me that it is perfectly fine and how it is supposed to be.

  • jQuery show hide divs on radio buttons

    - by RyanP13
    I have a group of radio buttons that, when clicked, need to display a corresponding unordered list. I have got as far as showing the correct list according to which radio button is clicked, but cannot figure out how to hide the other lists that need to be hidden. So if I click the second benefits radio, the second 'spend' ul should display and the other 'spend' uls should hide.

    This is my jQuery:

        // hide all except first ul
        $('div.chevron ul').not(':first').hide();

        $('div.chevron > ul > li > input[name=benefits]').live('click', function() {
          // work out which radio has been clicked
          var $radioClick = $(this).index('div.chevron > ul > li > input[name=benefits]');
          var $radioClick = $radioClick + 1;
          $('div.chevron > ul').eq($radioClick).show();
        });

        // trigger click event to show spend list
        var $defaultRadio = $('div.chevron > ul > li > input:first[name=benefits]')
        $defaultRadio.trigger('click');

    And this is my HTML:

        <div class="divider chevron">
          <ul>
            <li><strong>What benefit are you most interested in?</strong></li>
            <li>
              <input type="radio" class="inlineSpace" name="benefits" id="firstBenefit" value="Dental" checked="checked" />
              <label for="firstBenefit">Dental</label>
            </li>
            <li>
              <input type="radio" class="inlineSpace" name="benefits" id="secondBenefit" value="Optical" />
              <label for="secondBenefit">Optical</label>
            </li>
            <li>
              <input type="radio" class="inlineSpace" name="benefits" id="thirdBenefit" value="Physiotherapy" />
              <label for="thirdBenefit">Physiotherapy, osteopathy, chiropractic, acupuncture</label>
            </li>
          </ul>
          <ul>
            <li><strong>How much do you spend a year on Dental?</strong></li>
            <li>
              <input type="radio" class="inlineSpace" name="spend" id="rate1a" value="£50" />
              <label for="rate1a">&pound;50</label>
            </li>
            <li>
              <input type="radio" class="inlineSpace" name="spend" id="rate2a" value="£100" />
              <label for="rate2a">&pound;100</label>
            </li>
            <li>
              <input type="radio" class="inlineSpace" name="spend" id="rate3a" value="£150" />
              <label for="rate3a">&pound;150</label>
            </li>
          </ul>
          <ul>
            <li><strong>How much do you spend a year on Optical?</strong></li>
            <li>
              <input type="radio" class="inlineSpace" name="spend" id="rate1b" value="£50" />
              <label for="rate1a">&pound;50</label>
            </li>
            <li>
              <input type="radio" class="inlineSpace" name="spend" id="rate2b" value="£100" />
              <label for="rate2a">&pound;100</label>
            </li>
            <li>
              <input type="radio" class="inlineSpace" name="spend" id="rate3b" value="£150" />
              <label for="rate3a">&pound;150</label>
            </li>
          </ul>
          <ul>
            <li><strong>How much do you spend a year on Physiotherapy, osteopathy, chiropractic, acupuncture?</strong></li>
            <li>
              <input type="radio" class="inlineSpace" name="spend" id="rate1c" value="£50" />
              <label for="rate1a">&pound;50</label>
            </li>
            <li>
              <input type="radio" class="inlineSpace" name="spend" id="rate2c" value="£100" />
              <label for="rate2a">&pound;100</label>
            </li>
            <li>
              <input type="radio" class="inlineSpace" name="spend" id="rate3c" value="£150" />
              <label for="rate3a">&pound;150</label>
            </li>
          </ul>
          <label class="button"><input type="submit" value="View your quote" class="submit" /></label>
        </div>

    I tried using:

        $('div.chevron > ul').not($radioClick).hide();

    But that yielded a bad result where all the lists were hidden. Thanks in advance.
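
    For the hiding part, one option is to hide every 'spend' list first and then show only the one whose position matches the clicked radio. The snippet below is a minimal sketch reusing the selectors from the markup above (it assumes the benefits list stays first and that there is exactly one spend list per benefit radio); it is not the original code.

        $('div.chevron > ul > li > input[name=benefits]').live('click', function() {
          // 0-based position of the clicked radio among the benefit radios
          var i = $('div.chevron > ul > li > input[name=benefits]').index(this);
          var $lists = $('div.chevron > ul');
          $lists.not(':first').hide();  // hide all "spend" lists (the first ul holds the radios)
          $lists.eq(i + 1).show();      // show only the list matching the clicked radio
        });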

  • Need to capture and store receiver's details via IPN by using Paypal Mass Pay API

    - by Devner
    Hi all, this is a question about the PayPal MassPay IPN. My platform is PHP & MySQL. All over the PayPal support website I have found IPN documentation only for payments received. I need an IPN along similar lines for MassPay but could not find it. I also tried experimenting with the existing MassPay NVP code, but that did not work either. What I am trying to do is this: for all the recipients to whom payment has been successfully sent via MassPay, I want to record their email, amount and unique_id in my own database table. If possible, I also want to capture the payment status, whether it was a success or a failure, and based on that I need to do some in-house processing. The existing MassPay code is below:

        <?php
        $environment = 'sandbox';   // or 'beta-sandbox' or 'live'

        /**
         * Send HTTP POST Request
         *
         * @param string The API method name
         * @param string The POST Message fields in &name=value pair format
         * @return array Parsed HTTP Response body
         */
        function PPHttpPost($methodName_, $nvpStr_) {
          global $environment;

          // Set up your API credentials, PayPal end point, and API version.
          $API_UserName = urlencode('my_api_username');
          $API_Password = urlencode('my_api_password');
          $API_Signature = urlencode('my_api_signature');
          $API_Endpoint = "https://api-3t.paypal.com/nvp";
          if("sandbox" === $environment || "beta-sandbox" === $environment) {
            $API_Endpoint = "https://api-3t.$environment.paypal.com/nvp";
          }
          $version = urlencode('51.0');

          // Set the curl parameters.
          $ch = curl_init();
          curl_setopt($ch, CURLOPT_URL, $API_Endpoint);
          curl_setopt($ch, CURLOPT_VERBOSE, 1);

          // Turn off the server and peer verification (TrustManager Concept).
          curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
          curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, FALSE);
          curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
          curl_setopt($ch, CURLOPT_POST, 1);

          // Set the API operation, version, and API signature in the request.
          $nvpreq = "METHOD=$methodName_&VERSION=$version&PWD=$API_Password&USER=$API_UserName&SIGNATURE=$API_Signature$nvpStr_";

          // Set the request as a POST FIELD for curl.
          curl_setopt($ch, CURLOPT_POSTFIELDS, $nvpreq);

          // Get response from the server.
          $httpResponse = curl_exec($ch);
          if(!$httpResponse) {
            exit("$methodName_ failed: ".curl_error($ch).'('.curl_errno($ch).')');
          }

          // Extract the response details.
          $httpResponseAr = explode("&", $httpResponse);
          $httpParsedResponseAr = array();
          foreach ($httpResponseAr as $i => $value) {
            $tmpAr = explode("=", $value);
            if(sizeof($tmpAr) > 1) {
              $httpParsedResponseAr[$tmpAr[0]] = $tmpAr[1];
            }
          }

          if((0 == sizeof($httpParsedResponseAr)) || !array_key_exists('ACK', $httpParsedResponseAr)) {
            exit("Invalid HTTP Response for POST request($nvpreq) to $API_Endpoint.");
          }

          return $httpParsedResponseAr;
        }

        // Set request-specific fields.
        $emailSubject = urlencode('example_email_subject');
        $receiverType = urlencode('EmailAddress');
        $currency = urlencode('USD'); // or other currency ('GBP', 'EUR', 'JPY', 'CAD', 'AUD')

        // Add request-specific fields to the request string.
        $nvpStr = "&EMAILSUBJECT=$emailSubject&RECEIVERTYPE=$receiverType&CURRENCYCODE=$currency";

        $receiversArray = array();
        for($i = 0; $i < 3; $i++) {
          $receiverData = array(
            'receiverEmail' => "[email protected]",
            'amount' => "example_amount",
            'uniqueID' => "example_unique_id",
            'note' => "example_note");
          $receiversArray[$i] = $receiverData;
        }

        foreach($receiversArray as $i => $receiverData) {
          $receiverEmail = urlencode($receiverData['receiverEmail']);
          $amount = urlencode($receiverData['amount']);
          $uniqueID = urlencode($receiverData['uniqueID']);
          $note = urlencode($receiverData['note']);
          $nvpStr .= "&L_EMAIL$i=$receiverEmail&L_Amt$i=$amount&L_UNIQUEID$i=$uniqueID&L_NOTE$i=$note";
        }

        // Execute the API operation; see the PPHttpPost function above.
        $httpParsedResponseAr = PPHttpPost('MassPay', $nvpStr);

        if("SUCCESS" == strtoupper($httpParsedResponseAr["ACK"]) || "SUCCESSWITHWARNING" == strtoupper($httpParsedResponseAr["ACK"])) {
          exit('MassPay Completed Successfully: '.print_r($httpParsedResponseAr, true));
        } else {
          exit('MassPay failed: ' . print_r($httpParsedResponseAr, true));
        }
        ?>

    In the code above, how and where do I add code to capture the fields that I requested above? Any code indicating the solution is highly appreciated. Thank you very much.

  • What add-in/workbench framework is the best .NET alternative to Eclipse RCP?

    - by Winston Fassett
    I'm looking for a plugin-based application framework that is comparable to the Eclipse Plugin Framework, which to my simple mind consists of:

    1. A core plugin management framework (Equinox/OSGi), which provides the ability to declare extension endpoints and then discover and load plugins that service those endpoints. (This is different from Dependency Injection, but admittedly the difference is subtle - configuration is highly de-centralized, there are versioning concerns, it might involve an online plugin repository, and most importantly to me, it should be easy for the user to add plugins without needing to know anything about the underlying architecture/config files.)

    2. Many layers of plugins that provide a basic workbench shell with concurrency support, commands, preference sheets, menus, toolbars, key bindings, etc.

    That is just scratching the surface of the RCP, which itself is meant to serve as the foundation of your application, which you build by writing/assembling even more plugins.

    Here's what I've gleaned from the internet in the past couple of days. As far as I can tell, there is nothing in the .NET world that remotely approaches the robustness and maturity of the Eclipse RCP for Java, but there are several contenders that do either #1 or #2 pretty well. (I should also mention that I have not made a final decision on WinForms vs WPF, so I'm also trying to understand the level of UI coupling in any candidate framework. I'm also wondering about platform coupling and source-code licensing.) I must say that the open-source stuff is generally less documented but easier to understand, while the MS stuff typically has more documentation but is less accessible, so with many of the MS technologies I'm left wondering what they actually do in a practical sense. These are the libraries I have found:

    SharpDevelop: The first thing I looked at was SharpDevelop, which does both #1 and also #2 in a basic way (no insult to SharpDevelop, which is admirable - I just mean more basic than Eclipse RCP). However, SharpDevelop is an application more than a framework, and there are basic assumptions and limitations there (i.e. being somewhat coupled to WinForms). Still, there are some articles on CodeProject explaining how to use it as the foundation for an application.

    System.AddIns: It appears that System.AddIns is meant to provide a robust add-in loading framework, with some sophisticated options for loading assemblies with varying levels of trust and even running them out of process. It appears to be primarily code-based and pretty code-heavy, with lots of assemblies that serve to insulate against versioning issues, using Guidance Automation to generate a good deal of code. So far I haven't found many System.AddIns articles that illustrate how it could be used to build something like an Eclipse RCP, and many people seem to be wringing their hands about its complexity.

    Mono.Addins: It appears that Mono.Addins was influenced by System.AddIns, SharpDevelop, and MonoDevelop. It seems to provide the basics from System.AddIns, with less sophisticated options for plugin loading but more simplicity, with attribute-based registration, XML manifests, and the infrastructure for online plugin repositories. It has a pretty good FAQ and documentation, as well as a fairly robust set of examples that really help paint a picture of how to develop an architecture like that of SharpDevelop or Eclipse. The examples use GTK for UI, but the framework itself is not coupled to GTK. So it appears to do #1 (add-in loading) pretty well and points the way to #2 (workbench framework). It appears that Mono.Addins was derived from MonoDevelop, but I haven't actually looked at whether MonoDevelop provides a good core workbench framework.

    Managed Extensibility Framework: This is what everyone's talking about at the moment, and it's slowly getting clearer what it does, but I'm still pretty fuzzy even after reading several posts on SO. The official word is that it "can live side-by-side" with System.AddIns. However, it doesn't reference it and it appears to reproduce some of its functionality. It seems to me, then, that it is a simpler, more accessible alternative to System.AddIns. It appears to be more like Mono.Addins in that it provides attribute-based wiring. It provides "catalogs" that can be attribute-based or directory-based. It does not seem to provide any XML or manifest-based wiring. So far I haven't found much documentation, and the examples seem kind of "magical" and more reminiscent of attribute-based DI, despite the clarifications that MEF is not a DI container. Its license just got opened up, but it does reference WindowsBase - not sure if that means it's coupled to Windows.

    Acropolis: I'm not sure what this is. Is it MEF, or something that is still coming?

    Composite Application Blocks: There are WPF and WinForms Composite Application Blocks that seem to provide much more of a workbench framework. I have very little experience with these, but they appear to rely on Guidance Automation quite a bit and are obviously coupled with the UI layers. There are a few examples of combining MEF with these application blocks.

    I've done the best I could to answer my own question here, but I'm really only scratching the surface, and I don't have experience with any of these frameworks. Hopefully some of you can add more detail about the frameworks you have experience with. It would be great if we could end up with some sort of comparison matrix.

  • How to set up custom DNS with Azure Websites Preview?

    - by husainnz
    I created a new Azure Website, using Umbraco as the CMS. I got a page up and going, and I already have a .co.nz domain with www.domains4less.com. There's a whole lot of stuff on the internet about pointing URLs to Azure, but that seems to be more of a redirection service than anything (i.e. my URLs still use azurewebsites.net once I land on my site). Has anybody had any luck getting it to go? Here's the error I get when I try adding the DNS entry to Azure (I'm in reserved mode, reemdairy is the name of the website):

        There was an error processing your request. Please try again in a few moments.

        Browser: 5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.56 Safari/536.5
        User language: undefined
        Portal Version: 6.0.6002.18488 (rd_auxportal_stable.120609-0259)
        Subscriptions: 3aabe358-d178-4790-a97b-ffba902b2851
        User email address: [email protected]

        Last 10 Requests

        message: Failure: Ajax call to: Websites/UpdateConfig. failed with status: error (500) in 2.57 seconds. x-ms-client-request-id was: 38834edf-c9f3-46bb-a1f7-b2839c692bcf-2012-06-12 22:25:14Z dateTime: Wed Jun 13 2012 10:25:17 GMT+1200 (New Zealand Standard Time) durationSeconds: 2.57 url: Websites/UpdateConfig status: 500 textStatus: error clientMsRequestId: 38834edf-c9f3-46bb-a1f7-b2839c692bcf-2012-06-12 22:25:14Z sessionId: 09c72263-6ce7-422b-84d7-4c21acded759 referrer: https://manage.windowsazure.com/#Workspaces/WebsiteExtension/Website/reemdairy/configure host: manage.windowsazure.com response: {"message":"Try again. Contact support if the problem persists.","ErrorMessage":"Try again. Contact support if the problem persists.","httpStatusCode":"InternalServerError","operationTrackingId":"","stackTrace":null}

        message: Complete: Ajax call to: Websites/GetConfig. completed with status: success (200) in 1.021 seconds. x-ms-client-request-id was: a0cdcced-13d0-44e2-866d-e0b061b9461b-2012-06-12 22:24:43Z dateTime: Wed Jun 13 2012 10:24:44 GMT+1200 (New Zealand Standard Time) durationSeconds: 1.021 url: Websites/GetConfig status: 200 textStatus: success clientMsRequestId: a0cdcced-13d0-44e2-866d-e0b061b9461b-2012-06-12 22:24:43Z sessionId: 09c72263-6ce7-422b-84d7-4c21acded759 referrer: https://manage.windowsazure.com/#Workspaces/WebsiteExtension/Website/reemdairy/configure host: manage.windowsazure.com

        message: Complete: Ajax call to: https://manage.windowsazure.com/Service/OperationTracking?subscriptionId=3aabe358-d178-4790-a97b-ffba902b2851. completed with status: success (200) in 1.887 seconds. x-ms-client-request-id was: a7689fe9-b9f9-4d6c-8926-734ec9a0b515-2012-06-12 22:24:40Z dateTime: Wed Jun 13 2012 10:24:42 GMT+1200 (New Zealand Standard Time) durationSeconds: 1.887 url: https://manage.windowsazure.com/Service/OperationTracking?subscriptionId=3aabe358-d178-4790-a97b-ffba902b2851 status: 200 textStatus: success clientMsRequestId: a7689fe9-b9f9-4d6c-8926-734ec9a0b515-2012-06-12 22:24:40Z sessionId: 09c72263-6ce7-422b-84d7-4c21acded759 referrer: https://manage.windowsazure.com/#Workspaces/WebsiteExtension/Website/reemdairy/configure host: manage.windowsazure.com

        message: Complete: Ajax call to: /Service/GetUserSettings. completed with status: success (200) in 0.941 seconds. x-ms-client-request-id was: 805e554d-1e2e-4214-afd5-be87c0f255d1-2012-06-12 22:24:40Z dateTime: Wed Jun 13 2012 10:24:40 GMT+1200 (New Zealand Standard Time) durationSeconds: 0.941 url: /Service/GetUserSettings status: 200 textStatus: success clientMsRequestId: 805e554d-1e2e-4214-afd5-be87c0f255d1-2012-06-12 22:24:40Z sessionId: 09c72263-6ce7-422b-84d7-4c21acded759 referrer: https://manage.windowsazure.com/#Workspaces/WebsiteExtension/Website/reemdairy/configure host: manage.windowsazure.com

        message: Complete: Ajax call to: Extensions/ApplicationsExtension/SqlAzure/ClusterSuffix. completed with status: success (200) in 0.483 seconds. x-ms-client-request-id was: 85157ceb-c538-40ca-8c1e-5cc07c57240f-2012-06-12 22:24:39Z dateTime: Wed Jun 13 2012 10:24:40 GMT+1200 (New Zealand Standard Time) durationSeconds: 0.483 url: Extensions/ApplicationsExtension/SqlAzure/ClusterSuffix status: 200 textStatus: success clientMsRequestId: 85157ceb-c538-40ca-8c1e-5cc07c57240f-2012-06-12 22:24:39Z sessionId: 09c72263-6ce7-422b-84d7-4c21acded759 referrer: https://manage.windowsazure.com/#Workspaces/WebsiteExtension/Website/reemdairy/configure host: manage.windowsazure.com

        message: Complete: Ajax call to: Extensions/ApplicationsExtension/SqlAzure/GetClientIp. completed with status: success (200) in 0.309 seconds. x-ms-client-request-id was: 2eb194b6-66ca-49e2-9016-e0f89164314c-2012-06-12 22:24:39Z dateTime: Wed Jun 13 2012 10:24:40 GMT+1200 (New Zealand Standard Time) durationSeconds: 0.309 url: Extensions/ApplicationsExtension/SqlAzure/GetClientIp status: 200 textStatus: success clientMsRequestId: 2eb194b6-66ca-49e2-9016-e0f89164314c-2012-06-12 22:24:39Z sessionId: 09c72263-6ce7-422b-84d7-4c21acded759 referrer: https://manage.windowsazure.com/#Workspaces/WebsiteExtension/Website/reemdairy/configure host: manage.windowsazure.com

        message: Complete: Ajax call to: Extensions/ApplicationsExtension/SqlAzure/DefaultServerLocation. completed with status: success (200) in 0.309 seconds. x-ms-client-request-id was: 1bc165ef-2081-48f2-baed-16c6edf8ea67-2012-06-12 22:24:39Z dateTime: Wed Jun 13 2012 10:24:40 GMT+1200 (New Zealand Standard Time) durationSeconds: 0.309 url: Extensions/ApplicationsExtension/SqlAzure/DefaultServerLocation status: 200 textStatus: success clientMsRequestId: 1bc165ef-2081-48f2-baed-16c6edf8ea67-2012-06-12 22:24:39Z sessionId: 09c72263-6ce7-422b-84d7-4c21acded759 referrer: https://manage.windowsazure.com/#Workspaces/WebsiteExtension/Website/reemdairy/configure host: manage.windowsazure.com

        message: Complete: Ajax call to: Extensions/ApplicationsExtension/SqlAzure/ServerLocations. completed with status: success (200) in 0.309 seconds. x-ms-client-request-id was: e1fba7df-6a12-47f8-9434-bf17ca7d93f4-2012-06-12 22:24:39Z dateTime: Wed Jun 13 2012 10:24:40 GMT+1200 (New Zealand Standard Time) durationSeconds: 0.309 url: Extensions/ApplicationsExtension/SqlAzure/ServerLocations status: 200 textStatus: success clientMsRequestId: e1fba7df-6a12-47f8-9434-bf17ca7d93f4-2012-06-12 22:24:39Z sessionId: 09c72263-6ce7-422b-84d7-4c21acded759 referrer: https://manage.windowsazure.com/#Workspaces/WebsiteExtension/Website/reemdairy/configure host: manage.windowsazure.com

  • Why can't I implement this simple CSS?

    - by nXqd
        <!DOCTYPE html>
        <html lang="en">
        <head>
          <title>Enjoy BluePrint</title>
          <link rel="stylesheet" href="css/blueprint/screen.css" type="text/css" media="screen, projection">
          <link rel="stylesheet" href="css/blueprint/print.css" type="text/css" media="print">
          <!--[if lt IE 8]><link rel="stylesheet" href="css/blueprint/ie.css" type="text/css" media="screen, projection"><![endif]-->
          <!-- <link rel="stylesheet" href="global.css" type="text/css" media="screen"> -->
          <script type="text/css">
            h1.logo {
              width:181px;
              height:181px;
              background: url("img/logo.png");
              text-indent: -9999px;
            }
          </script>
        </head>
        <body>
          <div class="container">
            <!-- Header -->
            <div id="header" class="span-24">
              <div id="logo" class="span-6">
                <h1 class="logo">This is my site</h1>
              </div>
              <div id="script" class="span-10">
                <p>Frank Chimero is a graphic designer, illustrator, teacher, maker, writer, thinker-at-large in Portland, Oregon.</p>
              </div>
              <div id="contact" class="span-8 last">
                contact
              </div>
            </div>
            <!-- Content -->
            <div id="main-content" class="span-12">
              <h3>DISCOVERY</h3>
              <p>My fascination with the creative process, curiosity, and visual experience informs all of my work in some way. Each piece is the part of an exploration in finding wit, surprise, honesty, and joy in the world around us, then, trying to document those things with all deliberate speed before they vanish.</p><br/>
              <p>Our creative output can have a myriad intended outcomes: to inform, to persuade or sell, or delight. There are many other creative people who do well in servicing the needs to inform or persuade, but there are not many out there who have taken up the mantle of delighting people. I’ll try my best.</p><br/>
              <p>It’s not about pretty; it is about beauty. Beauty in form, sure, but also beauty in the fit of a bespoke idea that transcends not only the tasks outlined, but also fulfilling the objectives that caused the work to be produced in the first place.</p><br/>
              <p>The best creative work connects us by speaking to what we share. From that, we hope to make things that will last. Work made without staying power and lasting relevance leads to audiences that are fickle, strung along on a diet of crumbs.</p><br/>
              <p>The work should be nourishing in some way, both while a creative person is making it, but also while someone consumes it. When I think of all my favorite books, movies, art and albums, they all make me a little less alone and a little more sentient. Perhaps that is what making is for: to document the things that make us feel most alive.</p>
            </div>
            <!-- Side -->
            <div id="award" class="span-4">
              Awards
            </div>
            <div id="right-sidebar" class="span-8 last">
              Right sidebar
            </div>
          </div>
        </body>
        </html>

    I'm 100% sure the rest of the code works, yet I can't replace the image at h1.logo. When I apply the same rule in a live-editing CSS tool it works fine. Thanks for reading :)

  • Fetch a specific tag from Rally in order to compute a value in another field

    - by 4jas
    I'm extremely new to Rally development, so my question may sound dumb (but I couldn't find how to do it from Rally's help or from previous posts here) :)

    I've started from the Rally freeform grid example. My purpose is to implement a Business Value calculator: I fill the score field with a 5-digit figure where each number is a score in the 1-5 range. Then I compute a business value as the result of a calculation where each number is weighted by a preset weight. I can sort my stories by Business Value to help me prioritize my backlog: that's the first step, and it works.

    Now what I want to do is make my freeform grid editable: I am extracting each of my digits as a separate column, but those columns are display-only. How can I turn them into something editable? What I want to do, of course, is update the score field based on the values input in each custom column.

    Here's an example: I have a record with score "15254", which means Business Value criterion 1 scores 1 out of 5, Business Value criterion 2 scores 5 out of 5, and so on... In the end my Business Value is computed as "1*1 + 5*2 + 2*3 + 5*4 + 4*5 = 57". So far this is the part that works. Now let's say I found that the third criterion should not score 2 but 3: I want to be able to edit the value in the corresponding column and have my score field updated to "15354", and my Business Value to display 60 instead of 57.

    Here is my current code; I'll be really grateful if you can help me with turning that grid into something editable :)

        <!--Include SDK-->
        <script type="text/javascript" src="https://rally1.rallydev.com/apps/2.0p2/sdk-debug.js"></script>

        <!--App code-->
        <script type="text/javascript">
        Rally.onReady(function() {
          Ext.define('BVApp', {
            extend: 'Rally.app.App',
            componentCls: 'app',

            launch: function() {
              Ext.create('Rally.data.WsapiDataStore', {
                model: 'UserStory',
                autoLoad: true,
                listeners: {
                  load: this._onDataLoaded,
                  scope: this
                }
              });
            },

            _onDataLoaded: function(store, data) {
              var records = [];
              var li_score;
              var li_bv1, li_bv2, li_bv3, li_bv4, li_bv5, li_bvtotal;
              var weights = new Array(1, 2, 3, 4, 5);
              Ext.Array.each(data, function(record) {
                // Let's fetch score and compute the business values...
                li_score = record.get('Score');
                if (li_score) {
                  li_bv1 = li_score.toString().substring(0,1);
                  li_bv2 = li_score.toString().substring(1,2);
                  li_bv3 = li_score.toString().substring(2,3);
                  li_bv4 = li_score.toString().substring(3,4);
                  li_bv5 = li_score.toString().substring(4,5);
                  li_bvtotal = li_bv1*weights[0] + li_bv2*weights[1] + li_bv3*weights[2] + li_bv4*weights[3] + li_bv5*weights[4];
                }
                records.push({
                  FormattedID: record.get('FormattedID'),
                  ref: record.get('_ref'),
                  Name: record.get('Name'),
                  Score: record.get('Score'),
                  Bv1: li_bv1,
                  Bv2: li_bv2,
                  Bv3: li_bv3,
                  Bv4: li_bv4,
                  Bv5: li_bv5,
                  BvTotal: li_bvtotal
                });
              });
              this.add({
                xtype: 'rallygrid',
                store: Ext.create('Rally.data.custom.Store', {
                  data: records,
                  pageSize: 5
                }),
                columnCfgs: [
                  { text: 'FormattedID', dataIndex: 'FormattedID' },
                  { text: 'ref', dataIndex: 'ref' },
                  { text: 'Name', dataIndex: 'Name', flex: 1 },
                  { text: 'Score', dataIndex: 'Score' },
                  { text: 'BusVal 1', dataIndex: 'Bv1' },
                  { text: 'BusVal 2', dataIndex: 'Bv2' },
                  { text: 'BusVal 3', dataIndex: 'Bv3' },
                  { text: 'BusVal 4', dataIndex: 'Bv4' },
                  { text: 'BusVal 5', dataIndex: 'Bv5' },
                  { text: 'BusVal Total', dataIndex: 'BvTotal' }
                ]
              });
            }
          });

          Rally.launchApp('BVApp', { name: 'Business Values App' });

          var exampleHtml = '<div id="example-intro"><h1>Business Values App</h1>' +
            '<div>Own sample app for Business Values</div>' +
            '</div>';

          // Default app viewport uses layout: 'fit',
          // so we need to insert a container into the viewport
          var viewport = Ext.ComponentQuery.query('viewport')[0];
          var appComponent = viewport.items.getAt(0);
          var viewportContainerItems = [{ html: exampleHtml, border: 0 }];

          // hide advanced cardboard live previews in examples for now
          viewportContainerItems.push({ xtype: 'container', items: [appComponent] });
          viewport.remove(appComponent, false);
          viewport.add({
            xtype: 'container',
            layout: 'vbox',
            items: viewportContainerItems
          });
        });
        </script>

        <!--App styles-->
        <style type="text/css">
        .app {
          /* Add app styles here */
        }
        </style>
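
    As background for the "update the score from an edited digit" step discussed above, here is a small standalone sketch of the recomputation logic in plain JavaScript. It deliberately uses no Rally SDK calls; the helper names and the weights array are illustrative assumptions, not part of the Rally API.

        // Hypothetical helpers: replace one digit of a 5-digit score string and
        // recompute the weighted Business Value from it.
        var WEIGHTS = [1, 2, 3, 4, 5];

        function updateScore(score, digitIndex, newValue) {
          // e.g. updateScore('15254', 2, 3) -> '15354'
          return score.substring(0, digitIndex) + String(newValue) + score.substring(digitIndex + 1);
        }

        function businessValue(score) {
          var total = 0;
          for (var i = 0; i < WEIGHTS.length; i++) {
            total += parseInt(score.charAt(i), 10) * WEIGHTS[i];
          }
          return total;
        }

        // businessValue('15254') === 57
        // businessValue(updateScore('15254', 2, 3)) === 60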

  • Search function fails because it refers to the wrong controller action?

    - by Christoffer
    My Sunspot search function (sunspot_rails gem) works just fine in my index view, but when I duplicate it to my show view my search breaks...

    views/supplierproducts/show.html.erb:

        <%= form_tag supplierproducts_path, :method => :get, :id => "supplierproducts_search" do %>
          <p>
            <%= text_field_tag :search, params[:search], placeholder: "Search by SKU, product name & EAN number..." %>
          </p>
          <div id="supplierproducts"><%= render 'supplierproducts' %></div>
        <% end %>

    assets/javascripts/application.js:

        $(function () {
          $('#supplierproducts th a').live('click', function () {
            $.getScript(this.href);
            return false;
          });
          $('#supplierproducts_search input').keyup(function () {
            $.get($("#supplierproducts_search").attr("action"), $("#supplierproducts_search").serialize(), null, 'script');
            return false;
          });
        });

    views/supplierproducts/show.js.erb:

        $('#supplierproducts').html('<%= escape_javascript(render("supplierproducts")) %>');

    views/supplierproducts/_supplierproducts.html.erb:

        <%= hidden_field_tag :direction, params[:direction] %>
        <%= hidden_field_tag :sort, params[:sort] %>
        <table class="table table-bordered">
          <thead>
            <tr>
              <th><%= sortable "sku", "SKU" %></th>
              <th><%= sortable "name", "Product name" %></th>
              <th><%= sortable "stock", "Stock" %></th>
              <th><%= sortable "price", "Price" %></th>
              <th><%= sortable "ean", "EAN number" %></th>
            </tr>
          </thead>
          <% for supplierproduct in @supplier.supplierproducts %>
            <tbody>
              <tr>
                <td><%= supplierproduct.sku %></td>
                <td><%= supplierproduct.name %></td>
                <td><%= supplierproduct.stock %></td>
                <td><%= supplierproduct.price %></td>
                <td><%= supplierproduct.ean %></td>
              </tr>
            </tbody>
          <% end %>
        </table>

    controllers/supplierproducts_controller.rb:

        class SupplierproductsController < ApplicationController
          helper_method :sort_column, :sort_direction

          def show
            @supplier = Supplier.find(params[:id])
            @search = @supplier.supplierproducts.search do
              fulltext params[:search]
            end
            @supplierproducts = @search.results
          end
        end

        private

        def sort_column
          Supplierproduct.column_names.include?(params[:sort]) ? params[:sort] : "name"
        end

        def sort_direction
          %w[asc desc].include?(params[:direction]) ? params[:direction] : "asc"
        end

    models/supplierproduct.rb:

        class Supplierproduct < ActiveRecord::Base
          attr_accessible :ean, :name, :price, :sku, :stock, :supplier_id
          belongs_to :supplier
          validates :supplier_id, presence: true

          searchable do
            text :ean, :name, :sku
          end
        end

    Visiting show.html.erb works just fine. The log shows:

        Started GET "/supplierproducts/2" for 127.0.0.1 at 2012-06-24 13:44:52 +0200
        Processing by SupplierproductsController#show as HTML
        Parameters: {"id"=>"2"}
        Supplier Load (0.1ms) SELECT "suppliers".* FROM "suppliers" WHERE "suppliers"."id" = ? LIMIT 1 [["id", "2"]]
        SOLR Request (252.9ms) [ path=#<RSolr::Client:0x007fa5880b8e68> parameters={data: fq=type%3ASupplierproduct&start=0&rows=30&q=%2A%3A%2A, method: post, params: {:wt=>:ruby}, query: wt=ruby, headers: {"Content-Type"=>"application/x-www-form-urlencoded; charset=UTF-8"}, path: select, uri: http://localhost:8982/solr/select?wt=ruby, open_timeout: , read_timeout: } ]
        Supplierproduct Load (0.2ms) SELECT "supplierproducts".* FROM "supplierproducts" WHERE "supplierproducts"."id" IN (1)
        Supplierproduct Load (0.1ms) SELECT "supplierproducts".* FROM "supplierproducts" WHERE "supplierproducts"."supplier_id" = 2
        Rendered supplierproducts/_supplierproducts.html.erb (2.2ms)
        Rendered supplierproducts/show.html.erb within layouts/application (3.3ms)
        Rendered layouts/_shim.html.erb (0.0ms)
        User Load (0.1ms) SELECT "users".* FROM "users" WHERE "users"."remember_token" = 'zMrtTbDun2MjMHRApSthCQ' LIMIT 1
        Rendered layouts/_header.html.erb (2.1ms)
        Rendered layouts/_footer.html.erb (0.2ms)
        Completed 200 OK in 278ms (Views: 20.6ms | ActiveRecord: 0.6ms | Solr: 252.9ms)

    But it breaks when I type in a search. The log shows:

        Started GET "/supplierproducts?utf8=%E2%9C%93&search=a&direction=&sort=&_=1340538830635" for 127.0.0.1 at 2012-06-24 13:53:50 +0200
        Processing by SupplierproductsController#index as JS
        Parameters: {"utf8"=>"?", "search"=>"a", "direction"=>"", "sort"=>"", "_"=>"1340538830635"}
        Rendered supplierproducts/_supplierproducts.html.erb (2.4ms)
        Rendered supplierproducts/index.js.erb (2.9ms)
        Completed 500 Internal Server Error in 6ms

        ActionView::Template::Error (undefined method `supplierproducts' for nil:NilClass):
            10: <th><%= sortable "ean", "EAN number" %></th>
            11: </tr>
            12: </thead>
            13: <% for supplierproduct in @supplier.supplierproducts %>
            14: <tbody>
            15: <tr>
            16: <td><%= supplierproduct.sku %></td>
          app/views/supplierproducts/_supplierproducts.html.erb:13:in `_app_views_supplierproducts__supplierproducts_html_erb___2251600857885474606_70174444831200'
          app/views/supplierproducts/index.js.erb:1:in `_app_views_supplierproducts_index_js_erb___1613906916161905600_70174464073480'
        Rendered /Users/Computer/.rvm/gems/ruby-1.9.3-p194@myapp/gems/actionpack-3.2.3/lib/action_dispatch/middleware/templates/rescues/_trace.erb (33.3ms)
        Rendered /Users/Computer/.rvm/gems/ruby-1.9.3-p194@myapp/gems/actionpack-3.2.3/lib/action_dispatch/middleware/templates/rescues/_request_and_response.erb (0.9ms)
        Rendered /Users/Computer/.rvm/gems/ruby-1.9.3-p194@myapp/gems/actionpack-3.2.3/lib/action_dispatch/middleware/templates/rescues/template_error.erb within rescues/layout (39.7ms)
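
    The second log above shows the AJAX request being routed to SupplierproductsController#index (where @supplier is never set), because the form's action is the hard-coded index path. One purely client-side idea worth illustrating is to aim the keyup request at the page you are already on instead. This is only a sketch of that idea in plain jQuery, not a full fix, and it assumes the show action is set up to render the js response:

        $('#supplierproducts_search input').keyup(function () {
          // Send the search to the current page (e.g. /supplierproducts/2)
          // rather than to the form's index action.
          $.get(window.location.pathname,
                $('#supplierproducts_search').serialize(),
                null,
                'script');
          return false;
        });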

  • NullReferenceException makes me want to shoot myself

    - by rockinthesixstring
    Ok, so once upon a time, my code worked. Since then I did some refactoring (basic stuff so I thought) but now I'm getting a null reference exception, and I can't bloody well figure out why. Here's where it starts. note, this is the logout method, but all of my ActivityLogs sections are throwing this error Function LogOut(ByVal go As String) As ActionResult ActivityLogService.AddActivity(AuthenticationHelper.RetrieveAuthUser.ID, _ IActivityLogService.LogType.UserLogout, _ HttpContext.Request.UserHostAddress) ActivityLogService.SubmitChanges() ''# more stuff happens after this End Function The service is pretty straight forward (notice the ERROR THROWN HERE) Public Sub AddActivity(ByVal userid As Integer, ByVal activity As Integer, ByVal ip As String) Implements IActivityLogService.AddActivity Dim _activity As ActivityLog = New ActivityLog With {.Activity = activity, .UserID = userid, .UserIP = ip.IPAddressToNumber, .ActivityDate = DateTime.UtcNow} ActivityLogRepository.Create(_activity) ''#ERROR THROWN HERE End Sub And the Interface that the Service uses looks like this Public Interface IActivityLogService Sub AddActivity(ByVal userid As Integer, ByVal activity As Integer, ByVal ip As String) Function GetUsersLastActivity(ByVal UserID As Integer) As ActivityLog Sub SubmitChanges() ''' <summary> ''' The type of activity done by the user ''' </summary> ''' <remarks>Each int refers to an activity. ''' There can be no duplicates or modifications ''' after this application goes live</remarks> Enum LogType As Integer ''' <summary> ''' A new session started ''' </summary> SessionStarted = 1 ''' <summary> ''' A new user is Added/Created ''' </summary> UserAdded = 2 ''' <summary> ''' User has updated their profile ''' </summary> UserUpdated = 3 ''' <summary> ''' User has logged into they system ''' </summary> UserLogin = 4 ''' <summary> ''' User has logged out of the system ''' </summary> UserLogout = 5 ''' <summary> ''' A new event has been added ''' </summary> EventAdded = 6 ''' <summary> ''' An event has been updated ''' </summary> EventUpdated = 7 ''' <summary> ''' An event has been deleted ''' </summary> EventDeleted = 8 ''' <summary> ''' An event has received a Up/Down vote ''' </summary> EventVoted = 9 ''' <summary> ''' An event has been closed ''' </summary> EventCloseVoted = 10 ''' <summary> ''' A comment has been added to an event ''' </summary> CommentAdded = 11 ''' <summary> ''' A comment has been updated ''' </summary> CommentUpdated = 12 ''' <summary> ''' A comment has been deleted ''' </summary> CommentDeleted = 13 ''' <summary> ''' An event or comment has been reported as spam ''' </summary> SpamReported = 14 End Enum End Interface And the repository is equally straight forward Public Sub Create(ByVal activity As ActivityLog) Implements IActivityLogRepository.Create dc.ActivityLogs.InsertOnSubmit(activity) End Sub The stack trace is as follows [NullReferenceException: Object reference not set to an instance of an object.] 
MyApp.Core.Domain.ActivityLogService.AddActivity(Int32 userid, Int32 activity, String ip) in E:\Projects\MyApp\MyApp.Core\Domain\Services\ActivityLogService.vb:49 MyApp.Core.Controllers.UsersController.Authenticate(String go) in E:\Projects\MyApp\MyApp.Core\Controllers\UsersController.vb:258 lambda_method(Closure , ControllerBase , Object[] ) +163 System.Web.Mvc.ActionMethodDispatcher.Execute(ControllerBase controller, Object[] parameters) +51 System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary2 parameters) +409 System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary2 parameters) +52 System.Web.Mvc.<c_DisplayClass15.b_12() +127 System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func1 continuation) +436 System.Web.Mvc.<>c__DisplayClass17.<InvokeActionMethodWithFilters>b__14() +61 System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodWithFilters(ControllerContext controllerContext, IList1 filters, ActionDescriptor actionDescriptor, IDictionary2 parameters) +305 System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +830 System.Web.Mvc.Controller.ExecuteCore() +136 System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext) +232 System.Web.Mvc.ControllerBase.System.Web.Mvc.IController.Execute(RequestContext requestContext) +39 System.Web.Mvc.<>c__DisplayClassb.<BeginProcessRequest>b__5() +68 System.Web.Mvc.Async.<>c__DisplayClass1.<MakeVoidDelegate>b__0() +44 System.Web.Mvc.Async.<>c__DisplayClass81.b__7(IAsyncResult ) +42 System.Web.Mvc.Async.WrappedAsyncResult`1.End() +141 System.Web.Mvc.Async.AsyncResultWrapper.End(IAsyncResult asyncResult, Object tag) +54 System.Web.Mvc.Async.AsyncResultWrapper.End(IAsyncResult asyncResult, Object tag) +40 System.Web.Mvc.<c_DisplayClasse.b_d() +61 System.Web.Mvc.SecurityUtil.b_0(Action f) +31 System.Web.Mvc.SecurityUtil.ProcessInApplicationTrust(Action action) +56 System.Web.Mvc.MvcHandler.EndProcessRequest(IAsyncResult asyncResult) +110 System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.EndProcessRequest(IAsyncResult result) +38 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +690 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +194 here's an image of the error when attached to the debugger. And here's an image of the DB schema in question Can anyone shed some light on what I might be missing here?
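    The stack trace stops on the ActivityLogService.AddActivity line that calls ActivityLogRepository.Create, so the most likely nulls are ActivityLogRepository itself (never supplied after the refactoring) or the repository's data context dc. A hedged sketch of a fail-fast guard plus constructor injection for the service; it assumes ActivityLogRepository is an assignable member of the service class:

        ' Sketch only: surface the missing dependency at construction time
        ' instead of as a NullReferenceException deep inside AddActivity.
        Public Sub New(ByVal repository As IActivityLogRepository)
            If repository Is Nothing Then
                Throw New ArgumentNullException("repository")
            End If
            ActivityLogRepository = repository
        End Sub

        Public Sub AddActivity(ByVal userid As Integer, ByVal activity As Integer, ByVal ip As String) Implements IActivityLogService.AddActivity
            ' Guard makes the failure mode explicit if the service was built without its repository
            If ActivityLogRepository Is Nothing Then
                Throw New InvalidOperationException("ActivityLogRepository was never injected")
            End If

            Dim _activity As ActivityLog = New ActivityLog With {.Activity = activity, .UserID = userid, .UserIP = ip.IPAddressToNumber, .ActivityDate = DateTime.UtcNow}
            ActivityLogRepository.Create(_activity)
        End Sub

    If the guard fires, the real fix lives in whatever wires UsersController to its ActivityLogService (the IoC registration or manual construction), not in this method.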

    Read the article

  • How to use TinyMCE (rich text editor) in a Google Maps info window

    - by zjm1126
    this is the demo rar file:http://omploader.org/vM3U1bA when i drag the red block to the google-maps ,it will be changed to a marker, and it will has TinyMCE when you click the info window, but my program is : it can not be written when i click it the second time, the first time: the second time(can not be written): and my code is : <!DOCTYPE html PUBLIC "-//WAPFORUM//DTD XHTML Mobile 1.0//EN" "http://www.wapforum.org/DTD/xhtml-mobile10.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <meta name="viewport" content="width=device-width,minimum-scale=0.3,maximum-scale=5.0,user-scalable=yes"> </head> <body onload="initialize()" onunload="GUnload()"> <style type="text/css"> *{ margin:0; padding:0; } </style> <!--<div style="width:100px;height:100px;background:blue;"> </div>--> <div id="map_canvas" style="width: 500px; height: 300px;"></div> <div class=b style="width: 20px; height: 20px;background:red;position:absolute;left:700px;top:200px;"></div> <div class=b style="width: 20px; height: 20px;background:red;position:absolute;left:700px;top:200px;"></div> <script src="jquery-1.4.2.js" type="text/javascript"></script> <script type="text/javascript" src="tiny_mce.js"></script> <script src="jquery-ui-1.8rc3.custom.min.js" type="text/javascript"></script> <script src="http://maps.google.com/maps?file=api&amp;v=2&amp;key=ABQIAAAA-7cuV3vqp7w6zUNiN_F4uBRi_j0U6kJrkFvY4-OX2XYmEAa76BSNz0ifabgugotzJgrxyodPDmheRA&sensor=false"type="text/javascript"></script> <script type="text/javascript"> var aFn; //********** function initialize() { if (GBrowserIsCompatible()) { var map = new GMap2(document.getElementById("map_canvas")); var center=new GLatLng(39.9493, 116.3975); map.setCenter(center, 13); aFn=function(x,y){ var point =new GPoint(x,y) point = map.fromContainerPixelToLatLng(point); //console.log(point.x+" "+point.y) var marker = new GMarker(point,{draggable:true}); var a=$( '<form method="post" action="" style="height:100px;overflow:hidden;width:220px;">'+ '<textarea id="" class="mce" name="content" cols="22" rows="5" style="border:none">sss</textarea>'+ '</form>') a.click(function(){ // }) GEvent.addListener(marker, "click", function() { marker.openInfoWindowHtml(a[0]); }); /****************** GEvent.addListener(marker, 'click', function() { marker.openInfoWindowHtml('<div contentEditable="true" ' + 'style="height: 100px; overflow: auto;">' + 'wwww</div>'); }); ***************/ map.addOverlay(marker); /********** var marker = new GMarker(point, {draggable: true}); GEvent.addListener(marker, "dragstart", function() { map.closeInfoWindow(); }); GEvent.addListener(marker, "dragend", function() { marker.openInfoWindowHtml("????..."); }); map.addOverlay(marker); //*/ } $(".b").draggable({ revert: true, revertDuration: 0 }); $("#map_canvas").droppable({ drop: function(event,ui) { //console.log(ui.offset.left+' '+ui.offset.top) aFn(event.pageX-$("#map_canvas").offset().left,event.pageY-$("#map_canvas").offset().top); } }); } } //********** $(".mce").live("click", function(){ var once=0; mce(); }); function mce(once){ if(once)return; tinyMCE.init({ // General options mode : "textareas", theme : "advanced", plugins : "safari,pagebreak,style,layer,table,save,advhr,advimage,advlink,emotions,iespell,inlinepopups,insertdatetime,preview,media,searchreplace,print,contextmenu,paste,directionality,fullscreen,noneditable,visualchars,nonbreaking,xhtmlxtras,template", // Theme options theme_advanced_buttons1 : 
"bold,forecolor,|,justifyleft,justifycenter,justifyright,|,fontsizeselect", theme_advanced_buttons2 : "", theme_advanced_buttons3 : "", theme_advanced_buttons4 : "", theme_advanced_toolbar_location : "top", theme_advanced_toolbar_align : "left", theme_advanced_statusbar_location : "bottom", theme_advanced_resizing : true, // Example content CSS (should be your site CSS) content_css : "css/example.css", // Drop lists for link/image/media/template dialogs template_external_list_url : "js/template_list.js", external_link_list_url : "js/link_list.js", external_image_list_url : "js/image_list.js", media_external_list_url : "js/media_list.js", // Replace values for the template plugin template_replace_values : { username : "Some User", staffid : "991234" } }); once=1; } //********** </script> </body> </html>

    Read the article

  • Designing an API with a compile-time option to remove the first parameter from most functions and use a global

    - by tomlogic
    I'm trying to design a portable API in ANSI C89/ISO C90 to access a wireless networking device on a serial interface. The library will have multiple network layers, and various versions need to run on embedded devices as small as an 8-bit micro with 32K of code and 2K of data, on up to embedded devices with a megabyte or more of code and data. In most cases, the target processor will have a single network interface and I'll want to use a single global structure with all state information for that device. I don't want to pass a pointer to that structure through the network layers. In a few cases (e.g., device with more resources that needs to live on two networks) I will interface to multiple devices, each with their own global state, and will need to pass a pointer to that state (or an index to a state array) through the layers. I came up with two possible solutions, but neither one is particularly pretty. Keep in mind that the full driver will potentially be 20,000 lines or more, cover multiple files, and contain hundreds of functions. The first solution requires a macro that discards the first parameter for every function that needs to access the global state: // network.h typedef struct dev_t { int var; long othervar; char name[20]; } dev_t; #ifdef IF_MULTI #define foo_function( x, a, b, c) _foo_function( x, a, b, c) #define bar_function( x) _bar_function( x) #else extern dev_t DEV; #define IFACE (&DEV) #define foo_function( x, a, b, c) _foo_function( a, b, c) #define bar_function( x) _bar_function( ) #endif int bar_function( dev_t *IFACE); int foo_function( dev_t *IFACE, int a, long b, char *c); // network.c #ifndef IF_MULTI dev_t DEV; #endif int bar_function( dev_t *IFACE) { memset( IFACE, 0, sizeof *IFACE); return 0; } int foo_function( dev_t *IFACE, int a, long b, char *c) { bar_function( IFACE); IFACE->var = a; IFACE->othervar = b; strcpy( IFACE->name, c); return 0; } The second solution defines macros to use in the function declarations: // network.h typedef struct dev_t { int var; long othervar; char name[20]; } dev_t; #ifdef IF_MULTI #define DEV_PARAM_ONLY dev_t *IFACE #define DEV_PARAM DEV_PARAM_ONLY, #else extern dev_t DEV; #define IFACE (&DEV) #define DEV_PARAM_ONLY void #define DEV_PARAM #endif int bar_function( DEV_PARAM_ONLY); // I don't like the missing comma between DEV_PARAM and arg2... int foo_function( DEV_PARAM int a, long b, char *c); // network.c #ifndef IF_MULTI dev_t DEV; #endif int bar_function( DEV_PARAM_ONLY) { memset( IFACE, 0, sizeof *IFACE); return 0; } int foo_function( DEV_PARAM int a, long b, char *c) { bar_function( IFACE); IFACE->var = a; IFACE->othervar = b; strcpy( IFACE->name, c); return 0; } The C code to access either method remains the same: // multi.c - example of multiple interfaces #define IF_MULTI #include "network.h" dev_t if0, if1; int main() { foo_function( &if0, -1, 3.1415926, "public"); foo_function( &if1, 42, 3.1415926, "private"); return 0; } // single.c - example of a single interface #include "network.h" int main() { foo_function( 11, 1.0, "network"); return 0; } Is there a cleaner method that I haven't figured out? I lean toward the second since it should be easier to maintain, and it's clearer that there's some macro magic in the parameters to the function. Also, the first method requires prefixing the function names with "_" when I want to use them as function pointers. 
I really do want to remove the parameter in the "single interface" case to eliminate unnecessary code to push the parameter onto the stack, and to allow the function to access the first "real" parameter in a register instead of loading it from the stack. And, if at all possible, I don't want to have to maintain two separate codebases. Thoughts? Ideas? Examples of something similar in existing code? (Note that using C++ isn't an option, since some of the planned targets don't have a C++ compiler available.)
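    For what it is worth, a third variation keeps the call sites identical in both builds as well, by hiding the device argument (and its trailing comma) behind call-site macros. This is only a sketch of the idea; the macro names are illustrative:

        /* network.h -- sketch; C89-safe, no variadic macros needed */
        #ifdef IF_MULTI
        #define DEV_PARAM           dev_t *IFACE,      /* declaration: arg + comma */
        #define DEV_PARAM_ONLY      dev_t *IFACE
        #define DEV_ARG(dev)        (dev),             /* call site: arg + comma   */
        #define DEV_ARG_ONLY(dev)   (dev)
        #else
        extern dev_t DEV;
        #define IFACE               (&DEV)
        #define DEV_PARAM                              /* expands to nothing       */
        #define DEV_PARAM_ONLY      void
        #define DEV_ARG(dev)
        #define DEV_ARG_ONLY(dev)
        #endif

        int bar_function(DEV_PARAM_ONLY);
        int foo_function(DEV_PARAM int a, long b, char *c);

        /* single.c and multi.c can now share the same shape:     */
        /*     foo_function(DEV_ARG(&if0) -1, 3L, "public");      */
        /*     bar_function(DEV_ARG_ONLY(&if0));                  */

    The dangling comma inside DEV_PARAM and DEV_ARG is still a little odd to read, but it removes the need to maintain two sets of application code, and the preprocessor output is a plain direct call in both configurations.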

    Read the article

  • How would you use jQuery's .each() to apply the same script to each element with the same class?

    - by derekmx271
    I have a with multiple cart items listed. I have a "x-men logo" looking remove button that I want to fade-in next to the item when the customer hovers over a cart item. I had no issue getting this to work when there is only one item in the list. However, when there are multiple items in the cart, the jQuery operates funky. It still does the fade in, but only when I hover over the last item in the cart, and of course all of the "remove X" images become visible. Argh... So i searched around and think the .each() is my savior. I have been trying to get it to work, with no luck. My script just breaks when I attempt to implement it. Anyone have any pointers on this *.each() thing and how to implement it into my script?* I have tried putting a cartItem.each(function(){ around the mouseEnter/mouseLeave events (and used some $(this) selectors to make it "make sense") and that didn't do anything. Tried some other things as well with no luck... Here is the HTML (Sorry, there's a lot): <ul id="head-cart-items"> <!-- Item #1 --> <li> <!-- Item #1 Wrap --> <div class="head-cart-item"> <div class="head-cart-img" style='background-image:url("/viewimageresize.asp?mh=50&amp;mw=50&amp;p=AFE&amp;f=Air_Intakes_Magnum_FORCE_Stage-1_PRO_5R")'> </div> <div class="head-cart-desc"> <h3> <a href="/partdetails/AFE/Intakes/Air_Intakes/Magnum_FORCE_Stage-1_PRO_5R/19029">AFE Magnum FORCE Stage-1 PRO 5R Air Intakes</a> </h3> <span class="head-cart-qty">Qty: 1</span> <span class="head-cart-price">$195.00</span> <!-- Here is my Remove-X... --> <a class="remove-x" href='/cart//7806887'> <img src="/images/misc/remove-x.png"> </a> </div> </div> </li> <!-- Item #2 --> <li> <!-- Item #2 Wrap --> <div class="head-cart-item"> <div class="head-cart-img" style='background-image:url("/viewimageresize.asp?mh=50&amp;mw=50&amp;p=Exedy&amp;f=Clutch_Kits_Carbon-R")'> </div> <div class="head-cart-desc"> <h3> <a href="/partdetails/Exedy/Clutch/Clutch_Kits/Carbon-R/19684">Exedy Carbon-R Clutch Kits</a> </h3> <span class="head-cart-qty">Qty: 1</span> <span class="head-cart-price">$2,880.00</span> <!-- Here is my other Remove-X... --> <a class="remove-x" href='/cart//7806888'> <img src="/images/misc/remove-x.png"> </a> </div> </div> </li> </ul> And here is the jQuery... $(document).ready(function(){ var removeX = $(".remove-x"); var cartItem = $(".head-cart-item"); // Start with an invisible X removeX.fadeTo(0,0); // When hovering over Cart Item cartItem.mouseenter(function(){ // Fade the X to 100% removeX.fadeTo("normal",1); // On mouseout, fade it back to 0% $(this).mouseleave(function(){ removeX.fadeTo("fast",0); }); }); }); If you didn't see it, here is the "X" I am trying to fade... <!-- Here is my Remove-X... --> <a class="remove-x" href='/cart//7806887'> <img src="/images/misc/remove-x.png"> </a> Thanks for the help in advance. You guys always rock my world on here. I need ya (can't go home til this is live... :(
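    Before reaching for .each(), note that the underlying problem is that removeX matches every .remove-x on the page, so any hover fades all of them (and re-binding mouseleave inside mouseenter stacks up handlers). Scoping the lookup to the hovered item with $(this).find() handles one item or many. A sketch using hover():

        $(document).ready(function () {
            // start with every X invisible
            $(".remove-x").fadeTo(0, 0);

            // one handler pair per cart item; $(this) keeps the fade
            // scoped to the X inside the hovered item only
            $(".head-cart-item").hover(
                function () {
                    $(this).find(".remove-x").stop(true).fadeTo("normal", 1);
                },
                function () {
                    $(this).find(".remove-x").stop(true).fadeTo("fast", 0);
                }
            );
        });

    If cart items are injected via AJAX after page load, the delegated .live()/.delegate() forms of mouseenter/mouseleave would be needed instead of hover().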

    Read the article

  • Allowing pop-ups and downloads with Flex

    - by ShadowVariable
    I'm building an flex app that is basically a wrapper for a few websites. One of them is a google docs website, and I'm trying to get flex to allow downloads or popups or something that will allow me to do it. I've tried a whole bunch of solutions online and none of them have worked out. Here's the code so far: <?xml version="1.0" encoding="utf-8"?> <s:WindowedApplication xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" width="100%" height="100%" creationComplete="onCreationComplete()"> <s:layout> <s:HorizontalLayout/> </s:layout> <fx:Style source="style.css"/> <fx:Script> <![CDATA[ include "CustomHTMLLoader.as"; private function onCreationComplete():void { // ... other stuff ... var custom:object; custom.htmlHost = new MyHTMLHost(); } ]]> </fx:Script> <fx:Declarations> <!-- Place non-visual elements (e.g., services, value objects) here --> </fx:Declarations> <s:BorderContainer width="100%" height="100%" backgroundColor="#87BED0" styleName="container"> <s:Panel x="188" y="17" width="826" height="112" borderAlpha="0.15" chromeColor="#0C5A74" color="#FFFFFF" cornerRadius="20" dropShadowVisible="false" enabled="true" title="Customer Service Control Panel"> <s:controlBarContent/> <s:Button id="home" x="13" y="10" height="44" label="Phones" click="myViewStack.selectedChild=Home;" enabled="true" icon="@Embed('assets/iconmonstr-mobile-phone-6-icon-32.png')"/> <s:Button id="liveagent" x="131" y="10" height="44" label="Live Agent" click="myViewStack.selectedChild=live_agent;" icon="@Embed('assets/iconmonstr-speech-bubble-11-icon-32.png')"/> <s:Button id="bigcommerce" x="260" y="10" width="158" height="44" label="Big Commerce" click="myViewStack.selectedChild=bigcommerce_home;" icon="@Embed('assets/iconmonstr-coin-6-icon-48.png')"/> <s:Button id="faq" x="436" y="10" width="88" height="44" label="FAQ" click="myViewStack.selectedChild=freqaskquestions;" fontFamily="Arial" icon="@Embed('assets/iconmonstr-help-4-icon-32.png')"/> <s:Button id="call" x="540" y="10" width="131" height="44" label="Google Docs" click="myViewStack.selectedChild=call_notes;" icon="@Embed('assets/iconmonstr-text-file-4-icon-32.png')"/> <s:Button id="hoot" x="684" y="10" width="122" height="44" label="HootSuite" click="myViewStack.selectedChild=hoot_suite;" icon="@Embed('assets/iconmonstr-facebook-icon-32.png')"/> </s:Panel> <mx:ViewStack id="myViewStack" x="0" y="140" width="100%" height="100%" borderStyle="solid"> <s:NavigatorContent id="Home"> <s:BorderContainer width="100%" height="100%"> <mx:HTML x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" location="http://mbvphone.mtbakervapor.org/vbx/messages/inbox" /> </s:BorderContainer> </s:NavigatorContent> <s:NavigatorContent id="bigcommerce_home"> <s:BorderContainer width="100%" height="100%"> <mx:HTML x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" location="http://www.mtbakervapor.com/admin" /> </s:BorderContainer> </s:NavigatorContent> <s:NavigatorContent id="live_agent"> <s:BorderContainer width="100%" height="100%"> <mx:HTML x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" location="http://mbvphone.mtbakervapor.org/liveagent/agent/#login" /> </s:BorderContainer> </s:NavigatorContent> <s:NavigatorContent id="freqaskquestions"> <s:BorderContainer width="100%" height="100%"> <mx:HTML x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" 
location="http://mbvphone.mtbakervapor.org/liveagent/" /> </s:BorderContainer> </s:NavigatorContent> <s:NavigatorContent id="call_notes"> <s:BorderContainer width="100%" height="100%"> <mx:HTML id="html" x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" location="https://drive.google.com/a/mtbakervapor.com/" htmlHost="{new CustomHost()}" /> </s:BorderContainer> </s:NavigatorContent> <s:NavigatorContent id="hoot_suite"> <s:BorderContainer width="100%" height="100%"> <mx:HTML x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" location="https://hootsuite.com/login" /> </s:BorderContainer> </s:NavigatorContent> </mx:ViewStack> <s:Image x="0" y="0" width="180" height="140" scaleMode="letterbox" smooth="false" source="assets/mbvlogo_black.png"/> </s:BorderContainer> </s:WindowedApplication> and the custom class code: package { import flash.html.HTMLHost; import flash.html.HTMLWindowCreateOptions; import flash.html.HTMLLoader; public class MyHTMLHost extends HTMLHost { public function MyHTMLHost(defaultBehaviors:Boolean=true) { super(defaultBehaviors); } override public function createWindow(windowCreateOptions:HTMLWindowCreateOptions):HTMLLoader { // all JS calls and HREFs to open a new window should use the existing window return HTMLLoader.createRootWindow(); } } } any help would be appreciated.

    Read the article

  • In a SQL XDL File, how do I read the waitresource attribute on process nodes which are deadlocking?

    - by skimania
    On SQL Server 2005, I'm getting a deadlock when updating two different keys in the same table. note from below that these two waitresources have the same beginning part, but different ending parts. waitresource="KEY: 6:72057594090487808 (d900ed5a6cc6)" and waitresource="KEY: 6:72057594090487808 (d900fb5261bb)" These two keys are locking, and I need to figure out why. The question: If the values in parenthesis are different, why are the first half of the key's the same? <deadlock-list> <deadlock victim="processffffffff8f5863e8"> <process-list> <process id="processaf02f8" taskpriority="0" logused="0" waitresource="KEY: 6:72057594090487808 (d900fb5261bb)" waittime="2281" ownerId="1370264705" transactionname="user_transaction" lasttranstarted="2010-06-17T00:35:25.483" XDES="0x69453a70" lockMode="U" schedulerid="3" kpid="7624" status="suspended" spid="339" sbid="0" ecid="0" priority="0" transcount="2" lastbatchstarted="2010-06-17T00:35:25.483" lastbatchcompleted="2010-06-17T00:35:25.483" clientapp=".Net SqlClient Data Provider" hostname="RISKBBG_VM" hostpid="5848" loginname="RiskOpt" isolationlevel="read committed (2)" xactid="1370264705" currentdb="6" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056"> <executionStack> <frame procname="MKP_RISKDB.dbo.MarketDataCurrentRtUpload" line="14" stmtstart="840" stmtend="1220" sqlhandle="0x03000600005f9d24c8878f00849d00000100000000000000"> UPDATE c WITH (ROWLOCK) SET LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source FROM MarketDataCurrent c INNER JOIN #TEMPTABLE2 t ON c.MDID = t.mdid; -- Insert new MDID </frame> <frame procname="adhoc" line="1" sqlhandle="0x010006004a58132228bf8d73000000000000000000000000"> MarketDataCurrentBlbgRtUpload </frame> </executionStack> <inputbuf> MarketDataCurrentBlbgRtUpload </inputbuf> </process> <process id="processffffffff8f5863e8" taskpriority="0" logused="0" waitresource="KEY: 6:72057594090487808 (d900ed5a6cc6)" waittime="2281" ownerId="1370264646" transactionname="user_transaction" lasttranstarted="2010-06-17T00:35:25.450" XDES="0x1cb72be8" lockMode="U" schedulerid="5" kpid="1880" status="suspended" spid="287" sbid="0" ecid="0" priority="0" transcount="2" lastbatchstarted="2010-06-17T00:35:25.450" lastbatchcompleted="2010-06-17T00:35:25.450" clientapp=".Net SqlClient Data Provider" hostname="RISKAPPS_VM" hostpid="1424" loginname="RiskOpt" isolationlevel="read committed (2)" xactid="1370264646" currentdb="6" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056"> <executionStack> <frame procname="MKP_RISKDB.dbo.MarketDataCurrent_BulkUpload" line="28" stmtstart="1062" stmtend="1720" sqlhandle="0x03000600a28e5e4ef4fd8e00849d00000100000000000000"> UPDATE c WITH (ROWLOCK) SET LastUpdate = getdate(), Value = t.Value, Source = @source FROM MarketDataCurrent c INNER JOIN #MDTUP t ON c.MDID = t.mdid WHERE c.lastUpdate &lt; @updateTime and c.mdid not in (select mdid from MarketData where BloombergTicker is not null and PriceSource like &apos;Live.%&apos;) and c.value &lt;&gt; t.value </frame> <frame procname="adhoc" line="1" stmtstart="88" sqlhandle="0x01000600c1653d0598706ca7000000000000000000000000"> exec MarketDataCurrent_BulkUpload @clearBefore, @source </frame> <frame procname="unknown" line="1" sqlhandle="0x000000000000000000000000000000000000000000000000"> unknown </frame> </executionStack> <inputbuf> (@clearBefore datetime,@source nvarchar(10))exec MarketDataCurrent_BulkUpload @clearBefore, @source </inputbuf> </process> </process-list> <resource-list> <keylock 
hobtid="72057594090487808" dbid="6" objectname="MKP_RISKDB.dbo.MarketDataCurrent" indexname="PK_MarketDataCurrent" id="lock64ac7940" mode="U" associatedObjectId="72057594090487808"> <owner-list> <owner id="processffffffff8f5863e8" mode="U"/> </owner-list> <waiter-list> <waiter id="processaf02f8" mode="U" requestType="wait"/> </waiter-list> </keylock> <keylock hobtid="72057594090487808" dbid="6" objectname="MKP_RISKDB.dbo.MarketDataCurrent" indexname="PK_MarketDataCurrent" id="lockffffffffb8d2dd40" mode="U" associatedObjectId="72057594090487808"> <owner-list> <owner id="processaf02f8" mode="U"/> </owner-list> <waiter-list> <waiter id="processffffffff8f5863e8" mode="U" requestType="wait"/> </waiter-list> </keylock> </resource-list> </deadlock> </deadlock-list>

    Read the article

  • ActiveMQ - "Cannot send, channel has already failed" every 2 seconds?

    - by quanta
    ActiveMQ 5.7.0 In the activemq.log, I'm seeing this exception every 2 seconds: 2013-11-05 13:00:52,374 | DEBUG | Transport Connection to: tcp://127.0.0.1:37501 failed: org.apache.activemq.transport.InactivityIOException: Cannot send, channel has already failed: tcp://127.0.0.1:37501 | org.apache.activemq.broker.TransportConnection.Transport | Async Exception Handler org.apache.activemq.transport.InactivityIOException: Cannot send, channel has already failed: tcp://127.0.0.1:37501 at org.apache.activemq.transport.AbstractInactivityMonitor.doOnewaySend(AbstractInactivityMonitor.java:282) at org.apache.activemq.transport.AbstractInactivityMonitor.oneway(AbstractInactivityMonitor.java:271) at org.apache.activemq.transport.TransportFilter.oneway(TransportFilter.java:85) at org.apache.activemq.transport.WireFormatNegotiator.oneway(WireFormatNegotiator.java:104) at org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68) at org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1312) at org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:838) at org.apache.activemq.broker.TransportConnection.iterate(TransportConnection.java:873) at org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:129) at org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:47) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) Due to this keyword InactivityIOException, the first thing comes to my mind is InactivityMonitor, but the strange thing is MaxInactivityDuration=30000: 2013-11-05 13:11:02,672 | DEBUG | Sending: WireFormatInfo { version=9, properties={MaxFrameSize=9223372036854775807, CacheSize=1024, CacheEnabled=true, SizePrefixDisabled=false, MaxInactivityDurationInitalDelay=10000, TcpNoDelayEnabled=true, MaxInactivityDuration=30000, TightEncodingEnabled=true, StackTraceEnabled=true}, magic=[A,c,t,i,v,e,M,Q]} | org.apache.activemq.transport.WireFormatNegotiator | ActiveMQ BrokerService[localhost] Task-2 Moreover, I also didn't see something like this: No message received since last read check for ... or: Channel was inactive for too (30000) long Do a netstat, I see these connections in TIME_WAIT state: tcp 0 0 127.0.0.1:38545 127.0.0.1:61616 TIME_WAIT - tcp 0 0 127.0.0.1:38544 127.0.0.1:61616 TIME_WAIT - tcp 0 0 127.0.0.1:38522 127.0.0.1:61616 TIME_WAIT - Here're the output when running tcpdump: Internet Protocol Version 4, Src: 127.0.0.1 (127.0.0.1), Dst: 127.0.0.1 (127.0.0.1) Version: 4 Header length: 20 bytes Differentiated Services Field: 0x00 (DSCP 0x00: Default; ECN: 0x00: Not-ECT (Not ECN-Capable Transport)) 0000 00.. = Differentiated Services Codepoint: Default (0x00) .... ..00 = Explicit Congestion Notification: Not-ECT (Not ECN-Capable Transport) (0x00) Total Length: 296 Identification: 0x7b6a (31594) Flags: 0x02 (Don't Fragment) 0... .... = Reserved bit: Not set .1.. .... = Don't fragment: Set ..0. .... 
= More fragments: Not set Fragment offset: 0 Time to live: 64 Protocol: TCP (6) Header checksum: 0xc063 [correct] [Good: True] [Bad: False] Source: 127.0.0.1 (127.0.0.1) Destination: 127.0.0.1 (127.0.0.1) Transmission Control Protocol, Src Port: 61616 (61616), Dst Port: 54669 (54669), Seq: 1, Ack: 2, Len: 244 Source port: 61616 (61616) Destination port: 54669 (54669) [Stream index: 11] Sequence number: 1 (relative sequence number) [Next sequence number: 245 (relative sequence number)] Acknowledgement number: 2 (relative ack number) Header length: 32 bytes Flags: 0x018 (PSH, ACK) 000. .... .... = Reserved: Not set ...0 .... .... = Nonce: Not set .... 0... .... = Congestion Window Reduced (CWR): Not set .... .0.. .... = ECN-Echo: Not set .... ..0. .... = Urgent: Not set .... ...1 .... = Acknowledgement: Set .... .... 1... = Push: Set .... .... .0.. = Reset: Not set .... .... ..0. = Syn: Not set .... .... ...0 = Fin: Not set Window size value: 256 [Calculated window size: 32768] [Window size scaling factor: 128] Checksum: 0xff1c [validation disabled] [Good Checksum: False] [Bad Checksum: False] Options: (12 bytes) No-Operation (NOP) No-Operation (NOP) Timestamps: TSval 2304161892, TSecr 2304161891 Kind: Timestamp (8) Length: 10 Timestamp value: 2304161892 Timestamp echo reply: 2304161891 [SEQ/ACK analysis] [Bytes in flight: 244] Constrained Application Protocol, TID: 240, Length: 244 00.. .... = Version: 0 ..00 .... = Type: Confirmable (0) .... 0000 = Option Count: 0 Code: Unknown (0) Transaction ID: 240 Payload Content-Type: text/plain (default), Length: 240, offset: 4 Line-based text data: text/plain [truncated] \001ActiveMQ\000\000\000\t\001\000\000\000<DE>\000\000\000\t\000\fMaxFrameSize\006\177<FF><FF><FF><FF> <FF><FF><FF>\000\tCacheSize\005\000\000\004\000\000\fCacheEnabled\001\001\000\022SizePrefixDisabled\001\000\000 MaxInactivityDurationInitalDelay\006\ It is very likely a tcp port check. 
This is what I see when trying telnet from another host: 2013-11-05 16:12:41,071 | DEBUG | Transport Connection to: tcp://10.8.20.9:46775 failed: java.io.EOFException | org.apache.activemq.broker.TransportConnection.Transport | ActiveMQ Transport: tcp:///10.8.20.9:46775@61616 java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:375) at org.apache.activemq.openwire.OpenWireFormat.unmarshal(OpenWireFormat.java:275) at org.apache.activemq.transport.tcp.TcpTransport.readCommand(TcpTransport.java:229) at org.apache.activemq.transport.tcp.TcpTransport.doRun(TcpTransport.java:221) at org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:204) at java.lang.Thread.run(Thread.java:662) 2013-11-05 16:12:41,071 | DEBUG | Transport Connection to: tcp://10.8.20.9:46775 failed: org.apache.activemq.transport.InactivityIOException: Cannot send, channel has already failed: tcp://10.8.20.9:46775 | org.apache.activemq.broker.TransportConnection.Transport | Async Exception Handler org.apache.activemq.transport.InactivityIOException: Cannot send, channel has already failed: tcp://10.8.20.9:46775 at org.apache.activemq.transport.AbstractInactivityMonitor.doOnewaySend(AbstractInactivityMonitor.java:282) at org.apache.activemq.transport.AbstractInactivityMonitor.oneway(AbstractInactivityMonitor.java:271) at org.apache.activemq.transport.TransportFilter.oneway(TransportFilter.java:85) at org.apache.activemq.transport.WireFormatNegotiator.oneway(WireFormatNegotiator.java:104) at org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68) at org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1312) at org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:838) at org.apache.activemq.broker.TransportConnection.iterate(TransportConnection.java:873) at org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:129) at org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:47) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) 2013-11-05 16:12:41,071 | DEBUG | Unregistering MBean org.apache.activemq:BrokerName=localhost,Type=Connection,ConnectorName=ope nwire,ViewType=address,Name=tcp_//10.8.20.9_46775 | org.apache.activemq.broker.jmx.ManagementContext | ActiveMQ Transport: tcp:/ //10.8.20.9:46775@61616 2013-11-05 16:12:41,073 | DEBUG | Stopping connection: tcp://10.8.20.9:46775 | org.apache.activemq.broker.TransportConnection | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,073 | DEBUG | Stopping transport tcp:///10.8.20.9:46775@61616 | org.apache.activemq.transport.tcp.TcpTranspo rt | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,073 | DEBUG | Initialized TaskRunnerFactory[ActiveMQ Task] using ExecutorService: java.util.concurrent.Threa dPoolExecutor@23cc2a28 | org.apache.activemq.thread.TaskRunnerFactory | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,074 | DEBUG | Closed socket Socket[addr=/10.8.20.9,port=46775,localport=61616] | org.apache.activemq.transpo rt.tcp.TcpTransport | ActiveMQ Task-1 2013-11-05 16:12:41,074 | DEBUG | Forcing shutdown of ExecutorService: java.util.concurrent.ThreadPoolExecutor@23cc2a28 | org.apache.activemq.util.ThreadPoolUtils | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,074 | DEBUG | Stopped transport: tcp://10.8.20.9:46775 | 
org.apache.activemq.broker.TransportConnection | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,074 | DEBUG | Connection Stopped: tcp://10.8.20.9:46775 | org.apache.activemq.broker.TransportConnection | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,902 | DEBUG | Sending: WireFormatInfo { version=9, properties={MaxFrameSize=9223372036854775807, CacheSize=1024, CacheEnabled=true, SizePrefixDisabled=false, MaxInactivityDurationInitalDelay=10000, TcpNoDelayEnabled=true, MaxInactivityDuration=30000, TightEncodingEnabled=true, StackTraceEnabled=true}, magic=[A,c,t,i,v,e,M,Q]} | org.apache.activemq.transport.WireFormatNegotiator | ActiveMQ BrokerService[localhost] Task-5 So the question is: how can I find out the process that is trying to connect to my ActiveMQ (from localhost) every 2 seconds?
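    Given that the connects come from 127.0.0.1, die within a couple of seconds, and never complete the OpenWire handshake, this looks like a local monitoring or health-check job doing a bare TCP probe of port 61616. One way to catch the owning process is to sample connections to that port faster than they close — a rough sketch assuming a Linux host with lsof installed (run as root so other users' PIDs are visible):

        #!/bin/sh
        # Poll once per second and log whichever local process currently holds
        # an established connection to the broker port (TIME_WAIT entries have no PID).
        while true; do
            lsof -nP -iTCP:61616 -sTCP:ESTABLISHED >> /tmp/amq-port-61616.log
            sleep 1
        done

    Grep the log for lines whose peer address is 127.0.0.1 but whose command is not the broker's own java process; that command/PID is the prober. If polling keeps missing the window, auditd socket auditing is an alternative.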

    Read the article

  • Introducing the Earthquake Locator – A Bing Maps Silverlight Application, part 1

    - by Bobby Diaz
    Update: Live demo and source code now available!  The recent wave of earthquakes (no pun intended) being reported in the news got me wondering about the frequency and severity of earthquakes around the world. Since I’ve been doing a lot of Silverlight development lately, I decided to scratch my curiosity with a nice little Bing Maps application that will show the location and relative strength of recent seismic activity. Here is a list of technologies this application will utilize, so be sure to have everything downloaded and installed if you plan on following along. Silverlight 3 WCF RIA Services Bing Maps Silverlight Control * Managed Extensibility Framework (optional) MVVM Light Toolkit (optional) log4net (optional) * If you are new to Bing Maps or have not signed up for a Developer Account, you will need to visit www.bingmapsportal.com to request a Bing Maps key for your application. Getting Started We start out by creating a new Silverlight Application called EarthquakeLocator and specify that we want to automatically create the Web Application Project with RIA Services enabled. I cleaned up the web app by removing the Default.aspx and EarthquakeLocatorTestPage.html. Then I renamed the EarthquakeLocatorTestPage.aspx to Default.aspx and set it as my start page. I also set the development server to use a specific port, as shown below. RIA Services Next, I created a Services folder in the EarthquakeLocator.Web project and added a new Domain Service Class called EarthquakeService.cs. This is the RIA Services Domain Service that will provide earthquake data for our client application. I am not using LINQ to SQL or Entity Framework, so I will use the <empty domain service class> option. We will be pulling data from an external Atom feed, but this example could just as easily pull data from a database or another web service. This is an important distinction to point out because each scenario I just mentioned could potentially use a different Domain Service base class (i.e. LinqToSqlDomainService<TDataContext>). Now we can start adding Query methods to our EarthquakeService that pull data from the USGS web site. Here is the complete code for our service class: using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.ServiceModel.Syndication; using System.Web.DomainServices; using System.Web.Ria; using System.Xml; using log4net; using EarthquakeLocator.Web.Model;   namespace EarthquakeLocator.Web.Services {     /// <summary>     /// Provides earthquake data to client applications.     /// </summary>     [EnableClientAccess()]     public class EarthquakeService : DomainService     {         private static readonly ILog log = LogManager.GetLogger(typeof(EarthquakeService));           // USGS Data Feeds: http://earthquake.usgs.gov/earthquakes/catalogs/         private const string FeedForPreviousDay =             "http://earthquake.usgs.gov/earthquakes/catalogs/1day-M2.5.xml";         private const string FeedForPreviousWeek =             "http://earthquake.usgs.gov/earthquakes/catalogs/7day-M2.5.xml";           /// <summary>         /// Gets the earthquake data for the previous week.         
/// </summary>         /// <returns>A queryable collection of <see cref="Earthquake"/> objects.</returns>         public IQueryable<Earthquake> GetEarthquakes()         {             var feed = GetFeed(FeedForPreviousWeek);             var list = new List<Earthquake>();               if ( feed != null )             {                 foreach ( var entry in feed.Items )                 {                     var quake = CreateEarthquake(entry);                     if ( quake != null )                     {                         list.Add(quake);                     }                 }             }               return list.AsQueryable();         }           /// <summary>         /// Creates an <see cref="Earthquake"/> object for each entry in the Atom feed.         /// </summary>         /// <param name="entry">The Atom entry.</param>         /// <returns></returns>         private Earthquake CreateEarthquake(SyndicationItem entry)         {             Earthquake quake = null;             string title = entry.Title.Text;             string summary = entry.Summary.Text;             string point = GetElementValue<String>(entry, "point");             string depth = GetElementValue<String>(entry, "elev");             string utcTime = null;             string localTime = null;             string depthDesc = null;             double? magnitude = null;             double? latitude = null;             double? longitude = null;             double? depthKm = null;               if ( !String.IsNullOrEmpty(title) && title.StartsWith("M") )             {                 title = title.Substring(2, title.IndexOf(',')-3).Trim();                 magnitude = TryParse(title);             }             if ( !String.IsNullOrEmpty(point) )             {                 var values = point.Split(' ');                 if ( values.Length == 2 )                 {                     latitude = TryParse(values[0]);                     longitude = TryParse(values[1]);                 }             }             if ( !String.IsNullOrEmpty(depth) )             {                 depthKm = TryParse(depth);                 if ( depthKm != null )                 {                     depthKm = Math.Round((-1 * depthKm.Value) / 100, 2);                 }             }             if ( !String.IsNullOrEmpty(summary) )             {                 summary = summary.Replace("</p>", "");                 var values = summary.Split(                     new string[] { "<p>" },                     StringSplitOptions.RemoveEmptyEntries);                   if ( values.Length == 3 )                 {                     var times = values[1].Split(                         new string[] { "<br>" },                         StringSplitOptions.RemoveEmptyEntries);                       if ( times.Length > 0 )                     {                         utcTime = times[0];                     }                     if ( times.Length > 1 )                     {                         localTime = times[1];                     }                       depthDesc = values[2];                     depthDesc = "Depth: " + depthDesc.Substring(depthDesc.IndexOf(":") + 2);                 }             }               if ( latitude != null && longitude != null )             {                 quake = new Earthquake()                 {                     Id = entry.Id,                     Title = entry.Title.Text,                     Summary = entry.Summary.Text,                     Date = entry.LastUpdatedTime.DateTime,                     Url = 
entry.Links.Select(l => Path.Combine(l.BaseUri.OriginalString,                         l.Uri.OriginalString)).FirstOrDefault(),                     Age = entry.Categories.Where(c => c.Label == "Age")                         .Select(c => c.Name).FirstOrDefault(),                     Magnitude = magnitude.GetValueOrDefault(),                     Latitude = latitude.GetValueOrDefault(),                     Longitude = longitude.GetValueOrDefault(),                     DepthInKm = depthKm.GetValueOrDefault(),                     DepthDesc = depthDesc,                     UtcTime = utcTime,                     LocalTime = localTime                 };             }               return quake;         }           private T GetElementValue<T>(SyndicationItem entry, String name)         {             var el = entry.ElementExtensions.Where(e => e.OuterName == name).FirstOrDefault();             T value = default(T);               if ( el != null )             {                 value = el.GetObject<T>();             }               return value;         }           private double? TryParse(String value)         {             double d;             if ( Double.TryParse(value, out d) )             {                 return d;             }             return null;         }           /// <summary>         /// Gets the feed at the specified URL.         /// </summary>         /// <param name="url">The URL.</param>         /// <returns>A <see cref="SyndicationFeed"/> object.</returns>         public static SyndicationFeed GetFeed(String url)         {             SyndicationFeed feed = null;               try             {                 log.Debug("Loading RSS feed: " + url);                   using ( var reader = XmlReader.Create(url) )                 {                     feed = SyndicationFeed.Load(reader);                 }             }             catch ( Exception ex )             {                 log.Error("Error occurred while loading RSS feed: " + url, ex);             }               return feed;         }     } }   The only method that will be generated in the client side proxy class, EarthquakeContext, will be the GetEarthquakes() method. The reason being that it is the only public instance method and it returns an IQueryable<Earthquake> collection that can be consumed by the client application. GetEarthquakes() calls the static GetFeed(String) method, which utilizes the built in SyndicationFeed API to load the external data feed. You will need to add a reference to the System.ServiceModel.Web library in order to take advantage of the RSS/Atom reader. The API will also allow you to create your own feeds to serve up in your applications. Model I have also created a Model folder and added a new class, Earthquake.cs. The Earthquake object will hold the various properties returned from the Atom feed. Here is a sample of the code for that class. Notice the [Key] attribute on the Id property, which is required by RIA Services to uniquely identify the entity. using System; using System.Collections.Generic; using System.Linq; using System.Runtime.Serialization; using System.ComponentModel.DataAnnotations;   namespace EarthquakeLocator.Web.Model {     /// <summary>     /// Represents an earthquake occurrence and related information.     /// </summary>     [DataContract]     public class Earthquake     {         /// <summary>         /// Gets or sets the id.         
/// </summary>         /// <value>The id.</value>         [Key]         [DataMember]         public string Id { get; set; }           /// <summary>         /// Gets or sets the title.         /// </summary>         /// <value>The title.</value>         [DataMember]         public string Title { get; set; }           /// <summary>         /// Gets or sets the summary.         /// </summary>         /// <value>The summary.</value>         [DataMember]         public string Summary { get; set; }           // additional properties omitted     } }   View Model The recent trend to use the MVVM pattern for WPF and Silverlight provides a great way to separate the data and behavior logic out of the user interface layer of your client applications. I have chosen to use the MVVM Light Toolkit for the Earthquake Locator, but there are other options out there if you prefer another library. That said, I went ahead and created a ViewModel folder in the Silverlight project and added a EarthquakeViewModel class that derives from ViewModelBase. Here is the code: using System; using System.Collections.ObjectModel; using System.ComponentModel.Composition; using System.ComponentModel.Composition.Hosting; using Microsoft.Maps.MapControl; using GalaSoft.MvvmLight; using EarthquakeLocator.Web.Model; using EarthquakeLocator.Web.Services;   namespace EarthquakeLocator.ViewModel {     /// <summary>     /// Provides data for views displaying earthquake information.     /// </summary>     public class EarthquakeViewModel : ViewModelBase     {         [Import]         public EarthquakeContext Context;           /// <summary>         /// Initializes a new instance of the <see cref="EarthquakeViewModel"/> class.         /// </summary>         public EarthquakeViewModel()         {             var catalog = new AssemblyCatalog(GetType().Assembly);             var container = new CompositionContainer(catalog);             container.ComposeParts(this);             Initialize();         }           /// <summary>         /// Initializes a new instance of the <see cref="EarthquakeViewModel"/> class.         /// </summary>         /// <param name="context">The context.</param>         public EarthquakeViewModel(EarthquakeContext context)         {             Context = context;             Initialize();         }           private void Initialize()         {             MapCenter = new Location(20, -170);             ZoomLevel = 2;         }           #region Private Methods           private void OnAutoLoadDataChanged()         {             LoadEarthquakes();         }           private void LoadEarthquakes()         {             var query = Context.GetEarthquakesQuery();             Context.Earthquakes.Clear();               Context.Load(query, (op) =>             {                 if ( !op.HasError )                 {                     foreach ( var item in op.Entities )                     {                         Earthquakes.Add(item);                     }                 }             }, null);         }           #endregion Private Methods           #region Properties           private bool autoLoadData;         /// <summary>         /// Gets or sets a value indicating whether to auto load data.         
/// </summary>         /// <value><c>true</c> if auto loading data; otherwise, <c>false</c>.</value>         public bool AutoLoadData         {             get { return autoLoadData; }             set             {                 if ( autoLoadData != value )                 {                     autoLoadData = value;                     RaisePropertyChanged("AutoLoadData");                     OnAutoLoadDataChanged();                 }             }         }           private ObservableCollection<Earthquake> earthquakes;         /// <summary>         /// Gets the collection of earthquakes to display.         /// </summary>         /// <value>The collection of earthquakes.</value>         public ObservableCollection<Earthquake> Earthquakes         {             get             {                 if ( earthquakes == null )                 {                     earthquakes = new ObservableCollection<Earthquake>();                 }                   return earthquakes;             }         }           private Location mapCenter;         /// <summary>         /// Gets or sets the map center.         /// </summary>         /// <value>The map center.</value>         public Location MapCenter         {             get { return mapCenter; }             set             {                 if ( mapCenter != value )                 {                     mapCenter = value;                     RaisePropertyChanged("MapCenter");                 }             }         }           private double zoomLevel;         /// <summary>         /// Gets or sets the zoom level.         /// </summary>         /// <value>The zoom level.</value>         public double ZoomLevel         {             get { return zoomLevel; }             set             {                 if ( zoomLevel != value )                 {                     zoomLevel = value;                     RaisePropertyChanged("ZoomLevel");                 }             }         }           #endregion Properties     } }   The EarthquakeViewModel class contains all of the properties that will be bound to by the various controls in our views. Be sure to read through the LoadEarthquakes() method, which handles calling the GetEarthquakes() method in our EarthquakeService via the EarthquakeContext proxy, and also transfers the loaded entities into the view model’s Earthquakes collection. Another thing to notice is what’s going on in the default constructor. I chose to use the Managed Extensibility Framework (MEF) for my composition needs, but you can use any dependency injection library or none at all. To allow the EarthquakeContext class to be discoverable by MEF, I added the following partial class so that I could supply the appropriate [Export] attribute: using System; using System.ComponentModel.Composition;   namespace EarthquakeLocator.Web.Services {     /// <summary>     /// The client side proxy for the EarthquakeService class.     /// </summary>     [Export]     public partial class EarthquakeContext     {     } }   One last piece I wanted to point out before moving on to the user interface, I added a client side partial class for the Earthquake entity that contains helper properties that we will bind to later: using System;   namespace EarthquakeLocator.Web.Model {     /// <summary>     /// Represents an earthquake occurrence and related information.     /// </summary>     public partial class Earthquake     {         /// <summary>         /// Gets the location based on the current Latitude/Longitude.         
/// </summary>         /// <value>The location.</value>         public string Location         {             get { return String.Format("{0},{1}", Latitude, Longitude); }         }           /// <summary>         /// Gets the size based on the Magnitude.         /// </summary>         /// <value>The size.</value>         public double Size         {             get { return (Magnitude * 3); }         }     } }   View Now the fun part! Usually, I would create a Views folder to place all of my View controls in, but I took the easy way out and added the following XAML code to the default MainPage.xaml file. Be sure to add the bing prefix associating the Microsoft.Maps.MapControl namespace after adding the assembly reference to your project. The MVVM Light Toolkit project templates come with a ViewModelLocator class that you can use via a static resource, but I am instantiating the EarthquakeViewModel directly in my user control. I am setting the AutoLoadData property to true as a way to trigger the LoadEarthquakes() method call. The MapItemsControl found within the <bing:Map> control binds its ItemsSource property to the Earthquakes collection of the view model, and since it is an ObservableCollection<T>, we get the automatic two way data binding via the INotifyCollectionChanged interface. <UserControl x:Class="EarthquakeLocator.MainPage"     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"     xmlns:bing="clr-namespace:Microsoft.Maps.MapControl;assembly=Microsoft.Maps.MapControl"     xmlns:vm="clr-namespace:EarthquakeLocator.ViewModel"     mc:Ignorable="d" d:DesignWidth="640" d:DesignHeight="480" >     <UserControl.Resources>         <DataTemplate x:Key="EarthquakeTemplate">             <Ellipse Fill="Red" Stroke="Black" StrokeThickness="1"                      Width="{Binding Size}" Height="{Binding Size}"                      bing:MapLayer.Position="{Binding Location}"                      bing:MapLayer.PositionOrigin="Center">                 <ToolTipService.ToolTip>                     <StackPanel>                         <TextBlock Text="{Binding Title}" FontSize="14" FontWeight="Bold" />                         <TextBlock Text="{Binding UtcTime}" />                         <TextBlock Text="{Binding LocalTime}" />                         <TextBlock Text="{Binding DepthDesc}" />                     </StackPanel>                 </ToolTipService.ToolTip>             </Ellipse>         </DataTemplate>     </UserControl.Resources>       <UserControl.DataContext>         <vm:EarthquakeViewModel AutoLoadData="True" />     </UserControl.DataContext>       <Grid x:Name="LayoutRoot">           <bing:Map x:Name="map" CredentialsProvider="--Your-Bing-Maps-Key--"                   Center="{Binding MapCenter, Mode=TwoWay}"                   ZoomLevel="{Binding ZoomLevel, Mode=TwoWay}">             <bing:MapItemsControl ItemsSource="{Binding Earthquakes}"                                   ItemTemplate="{StaticResource EarthquakeTemplate}" />         </bing:Map>       </Grid> </UserControl>   The EarthquakeTemplate defines the Ellipse that will represent each earthquake, the Width and Height that are determined by the Magnitude, the Position on the map, and also the tooltip that will appear when we mouse over each data point. 
Running the application will give us the following result (shown with a tooltip example): That concludes this portion of our show but I plan on implementing additional functionality in later blog posts. Be sure to come back soon to see the next installments in this series. Enjoy!   Additional Resources USGS Earthquake Data Feeds Brad Abrams shows how RIA Services and MVVM can work together

    Read the article

  • Announcing release of ASP.NET MVC 3, IIS Express, SQL CE 4, Web Farm Framework, Orchard, WebMatrix

    - by ScottGu
    I’m excited to announce the release today of several products: ASP.NET MVC 3 NuGet IIS Express 7.5 SQL Server Compact Edition 4 Web Deploy and Web Farm Framework 2.0 Orchard 1.0 WebMatrix 1.0 The above products are all free. They build upon the .NET 4 and VS 2010 release, and add a ton of additional value to ASP.NET (both Web Forms and MVC) and the Microsoft Web Server stack. ASP.NET MVC 3 Today we are shipping the final release of ASP.NET MVC 3.  You can download and install ASP.NET MVC 3 here.  The ASP.NET MVC 3 source code (released under an OSI-compliant open source license) can also optionally be downloaded here. ASP.NET MVC 3 is a significant update that brings with it a bunch of great features.  Some of the improvements include: Razor ASP.NET MVC 3 ships with a new view-engine option called “Razor” (in addition to continuing to support/enhance the existing .aspx view engine).  Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow. Unlike most template syntaxes, with Razor you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type.  You can learn more about Razor from some of the blog posts I’ve done about it over the last 6 months Introducing Razor New @model keyword in Razor Layouts with Razor Server-Side Comments with Razor Razor’s @: and <text> syntax Implicit and Explicit code nuggets with Razor Layouts and Sections with Razor Today’s release supports full code intellisense support for Razor (both VB and C#) with Visual Studio 2010 and the free Visual Web Developer 2010 Express. JavaScript Improvements ASP.NET MVC 3 enables richer JavaScript scenarios and takes advantage of emerging HTML5 capabilities. The AJAX and Validation helpers in ASP.NET MVC 3 now use an Unobtrusive JavaScript based approach.  Unobtrusive JavaScript avoids injecting inline JavaScript into HTML, and enables cleaner separation of behavior using the new HTML 5 “data-“ attribute convention (which conveniently works on older browsers as well – including IE6). This keeps your HTML tight and clean, and makes it easier to optionally swap out or customize JS libraries.  ASP.NET MVC 3 now includes built-in support for posting JSON-based parameters from client-side JavaScript to action methods on the server.  This makes it easier to exchange data across the client and server, and build rich JavaScript front-ends.  We think this capability will be particularly useful going forward with scenarios involving client templates and data binding (including the jQuery plugins the ASP.NET team recently contributed to the jQuery project).  Previous releases of ASP.NET MVC included the core jQuery library.  ASP.NET MVC 3 also now ships the jQuery Validate plugin (which our validation helpers use for client-side validation scenarios).  We are also now shipping and including jQuery UI by default as well (which provides a rich set of client-side JavaScript UI widgets for you to use within projects). Improved Validation ASP.NET MVC 3 includes a bunch of validation enhancements that make it even easier to work with data. Client-side validation is now enabled by default with ASP.NET MVC 3 (using an onbtrusive javascript implementation).  
Today’s release also includes built-in support for Remote Validation - which enables you to annotate a model class with a validation attribute that causes ASP.NET MVC to perform a remote validation call to a server method when validating input on the client. The validation features introduced within .NET 4’s System.ComponentModel.DataAnnotations namespace are now supported by ASP.NET MVC 3.  This includes support for the new IValidatableObject interface – which enables you to perform model-level validation, and allows you to provide validation error messages specific to the state of the overall model, or between two properties within the model.  ASP.NET MVC 3 also supports the improvements made to the ValidationAttribute class in .NET 4.  ValidationAttribute now supports a new IsValid overload that provides more information about the current validation context, such as what object is being validated.  This enables richer scenarios where you can validate the current value based on another property of the model.  We’ve shipped a built-in [Compare] validation attribute  with ASP.NET MVC 3 that uses this support and makes it easy out of the box to compare and validate two property values. You can use any data access API or technology with ASP.NET MVC.  This past year, though, we’ve worked closely with the .NET data team to ensure that the new EF Code First library works really well for ASP.NET MVC applications.  These two posts of mine cover the latest EF Code First preview and demonstrates how to use it with ASP.NET MVC 3 to enable easy editing of data (with end to end client+server validation support).  The final release of EF Code First will ship in the next few weeks. Today we are also publishing the first preview of a new MvcScaffolding project.  It enables you to easily scaffold ASP.NET MVC 3 Controllers and Views, and works great with EF Code-First (and is pluggable to support other data providers).  You can learn more about it – and install it via NuGet today - from Steve Sanderson’s MvcScaffolding blog post. Output Caching Previous releases of ASP.NET MVC supported output caching content at a URL or action-method level. With ASP.NET MVC V3 we are also enabling support for partial page output caching – which allows you to easily output cache regions or fragments of a response as opposed to the entire thing.  This ends up being super useful in a lot of scenarios, and enables you to dramatically reduce the work your application does on the server.  The new partial page output caching support in ASP.NET MVC 3 enables you to easily re-use cached sub-regions/fragments of a page across multiple URLs on a site.  It supports the ability to cache the content either on the web-server, or optionally cache it within a distributed cache server like Windows Server AppFabric or memcached. I’ll post some tutorials on my blog that show how to take advantage of ASP.NET MVC 3’s new output caching support for partial page scenarios in the future. Better Dependency Injection ASP.NET MVC 3 provides better support for applying Dependency Injection (DI) and integrating with Dependency Injection/IOC containers. With ASP.NET MVC 3 you no longer need to author custom ControllerFactory classes in order to enable DI with Controllers.  
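    Before continuing with the dependency injection details below, here is a hedged sketch of the richer ValidationAttribute support mentioned above. It uses the .NET 4 IsValid overload that receives a ValidationContext, the same mechanism the built-in [Compare] attribute builds on; the attribute and property names are assumptions for the example.

    // Sketch of a cross-property validation attribute. This is not the built-in
    // [Compare] attribute, just an illustration of the ValidationContext-based overload.
    using System.ComponentModel.DataAnnotations;

    public class GreaterThanAttribute : ValidationAttribute
    {
        private readonly string _otherProperty;

        public GreaterThanAttribute(string otherProperty)
        {
            _otherProperty = otherProperty;
        }

        protected override ValidationResult IsValid(object value, ValidationContext validationContext)
        {
            // Look up the other property on the same model instance.
            var otherProperty = validationContext.ObjectType.GetProperty(_otherProperty);
            var otherValue = (int)otherProperty.GetValue(validationContext.ObjectInstance, null);

            if (value is int && (int)value > otherValue)
                return ValidationResult.Success;

            return new ValidationResult(string.Format(
                "{0} must be greater than {1}.", validationContext.DisplayName, _otherProperty));
        }
    }

    // Usage on a hypothetical model:
    //   public int MinimumBid { get; set; }
    //   [GreaterThan("MinimumBid")]
    //   public int Bid { get; set; }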
You can instead just register a Dependency Injection framework with ASP.NET MVC 3 and it will resolve dependencies not only for Controllers, but also for Views, Action Filters, Model Binders, Value Providers, Validation Providers, and Model Metadata Providers that you use within your application. This makes it much easier to cleanly integrate dependency injection within your projects. Other Goodies ASP.NET MVC 3 includes dozens of other nice improvements that help to both reduce the amount of code you write, and make the code you do write cleaner.  Here are just a few examples: Improved New Project dialog that makes it easy to start new ASP.NET MVC 3 projects from templates. Improved Add->View Scaffolding support that enables the generation of even cleaner view templates. New ViewBag property that uses .NET 4’s dynamic support to make it easy to pass late-bound data from Controllers to Views. Global Filters support that allows specifying cross-cutting filter attributes (like [HandleError]) across all Controllers within an app. New [AllowHtml] attribute that allows for more granular request validation when binding form posted data to models. Sessionless controller support that allows fine grained control over whether SessionState is enabled on a Controller. New ActionResult types like HttpNotFoundResult and RedirectPermanent for common HTTP scenarios. New Html.Raw() helper to indicate that output should not be HTML encoded. New Crypto helpers for salting and hashing passwords. And much, much more… Learn More about ASP.NET MVC 3 We will be posting lots of tutorials and samples on the http://asp.net/mvc site in the weeks ahead.  Below are two good ASP.NET MVC 3 tutorials available on the site today: Build your First ASP.NET MVC 3 Application: VB and C# Building the ASP.NET MVC 3 Music Store We’ll post additional ASP.NET MVC 3 tutorials and videos on the http://asp.net/mvc site in the future. Visit it regularly to find new tutorials as they are published. How to Upgrade Existing Projects ASP.NET MVC 3 is compatible with ASP.NET MVC 2 – which means it should be easy to update existing MVC projects to ASP.NET MVC 3.  The new features in ASP.NET MVC 3 build on top of the foundational work we’ve already done with the MVC 1 and MVC 2 releases – which means that the skills, knowledge, libraries, and books you’ve acquired are all directly applicable with the MVC 3 release.  MVC 3 adds new features and capabilities – it doesn’t obsolete existing ones. You can upgrade existing ASP.NET MVC 2 projects by following the manual upgrade steps in the release notes.  Alternatively, you can use this automated ASP.NET MVC 3 upgrade tool to easily update your  existing projects. Localized Builds Today’s ASP.NET MVC 3 release is available in English.  We will be releasing localized versions of ASP.NET MVC 3 (in 9 languages) in a few days.  I’ll blog pointers to the localized downloads once they are available. NuGet Today we are also shipping NuGet – a free, open source, package manager that makes it easy for you to find, install, and use open source libraries in your projects. It works with all .NET project types (including ASP.NET Web Forms, ASP.NET MVC, WPF, WinForms, Silverlight, and Class Libraries).  You can download and install it here. NuGet enables developers who maintain open source projects (for example, .NET projects like Moq, NHibernate, Ninject, StructureMap, NUnit, Windsor, Raven, Elmah, etc) to package up their libraries and register them with an online gallery/catalog that is searchable.  
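    Before getting into the NuGet tooling details below, here is a minimal sketch of the dependency resolver hook described above. The tiny dictionary-based container and the IMessageService/EmailMessageService names are placeholders; in practice you would delegate to Ninject, StructureMap, Unity or a similar container.

    // Minimal IDependencyResolver sketch. MVC 3 asks the resolver for everything it
    // creates (controllers, filters, view engines...), so returning null for unknown
    // types matters: MVC then falls back to its built-in defaults.
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web.Mvc;

    public class SimpleDependencyResolver : IDependencyResolver
    {
        private readonly Dictionary<Type, Func<object>> _registrations =
            new Dictionary<Type, Func<object>>();

        public void Register<T>(Func<object> factory)
        {
            _registrations[typeof(T)] = factory;
        }

        public object GetService(Type serviceType)
        {
            Func<object> factory;
            return _registrations.TryGetValue(serviceType, out factory) ? factory() : null;
        }

        public IEnumerable<object> GetServices(Type serviceType)
        {
            var service = GetService(serviceType);
            return service == null ? Enumerable.Empty<object>() : new[] { service };
        }
    }

    // In Application_Start (the service names below are hypothetical):
    //   var resolver = new SimpleDependencyResolver();
    //   resolver.Register<IMessageService>(() => new EmailMessageService());
    //   DependencyResolver.SetResolver(resolver);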
The client-side NuGet tools – which include full Visual Studio integration – make it trivial for any .NET developer who wants to use one of these libraries to easily find and install it within the project they are working on. NuGet handles dependency management between libraries (for example: library1 depends on library2). It also makes it easy to update (and optionally remove) libraries from your projects later. It supports updating web.config files (if a package needs configuration settings). It also allows packages to add PowerShell scripts to a project (for example: scaffold commands). Importantly, NuGet is transparent and clean – and does not install anything at the system level. Instead it is focused on making it easy to manage libraries you use with your projects. Our goal with NuGet is to make it as simple as possible to integrate open source libraries within .NET projects.  NuGet Gallery This week we also launched a beta version of the http://nuget.org web-site – which allows anyone to easily search and browse an online gallery of open source packages available via NuGet.  The site also now allows developers to optionally submit new packages that they wish to share with others.  You can learn more about how to create and share a package here. There are hundreds of open-source .NET projects already within the NuGet Gallery today.  We hope to have thousands there in the future. IIS Express 7.5 Today we are also shipping IIS Express 7.5.  IIS Express is a free version of IIS 7.5 that is optimized for developer scenarios.  It works for both ASP.NET Web Forms and ASP.NET MVC project types. We think IIS Express combines the ease of use of the ASP.NET Web Server (aka Cassini) currently built-into Visual Studio today with the full power of IIS.  Specifically: It’s lightweight and easy to install (less than 5Mb download and a quick install) It does not require an administrator account to run/debug applications from Visual Studio It enables a full web-server feature set – including SSL, URL Rewrite, and other IIS 7.x modules It supports and enables the same extensibility model and web.config file settings that IIS 7.x support It can be installed side-by-side with the full IIS web server as well as the ASP.NET Development Server (they do not conflict at all) It works on Windows XP and higher operating systems – giving you a full IIS 7.x developer feature-set on all Windows OS platforms IIS Express (like the ASP.NET Development Server) can be quickly launched to run a site from a directory on disk.  It does not require any registration/configuration steps. This makes it really easy to launch and run for development scenarios.  You can also optionally redistribute IIS Express with your own applications if you want a lightweight web-server.  The standard IIS Express EULA now includes redistributable rights. Visual Studio 2010 SP1 adds support for IIS Express.  Read my VS 2010 SP1 and IIS Express blog post to learn more about what it enables.  SQL Server Compact Edition 4 Today we are also shipping SQL Server Compact Edition 4 (aka SQL CE 4).  SQL CE is a free, embedded, database engine that enables easy database storage. No Database Installation Required SQL CE does not require you to run a setup or install a database server in order to use it.  You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine.  No setup or extra security permissions are required for it to run. 
You do not need to have an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment. SQL CE runs in-memory within your ASP.NET application and will start-up when you first access a SQL CE database, and will automatically shutdown when your application is unloaded.  SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET Applications. Works with Existing Data APIs SQL CE 4 works with existing .NET-based data APIs, and supports a SQL Server compatible query syntax.  This means you can use existing data APIs like ADO.NET, as well as use higher-level ORMs like Entity Framework and NHibernate with SQL CE.  This enables you to use the same data programming skills and data APIs you know today. Supports Development, Testing and Production Scenarios SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios.  With the SQL CE 4 release we’ve done the engineering work to ensure that SQL CE won’t crash or deadlock when used in a multi-threaded server scenario (like ASP.NET).  This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and which explicitly blocked running in web-server environments.  Starting with SQL CE 4 you can use it in a web-server as well. There are no license restrictions with SQL CE.  It is also totally free. Tooling Support with VS 2010 SP1 Visual Studio 2010 SP1 adds support for SQL CE 4 and ASP.NET Projects.  Read my VS 2010 SP1 and SQL CE 4 blog post to learn more about what it enables.  Web Deploy and Web Farm Framework 2.0 Today we are also releasing Microsoft Web Deploy V2 and Microsoft Web Farm Framework V2.  These services provide a flexible and powerful way to deploy ASP.NET applications onto either a single server, or across a web farm of machines. You can learn more about these capabilities from my previous blog posts on them: Introducing the Microsoft Web Farm Framework Automating Deployment with Microsoft Web Deploy Visit the http://iis.net website to learn more and install them. Both are free. Orchard 1.0 Today we are also releasing Orchard v1.0.  Orchard is a free, open source, community based project.  It provides Content Management System (CMS) and Blogging System support out of the box, and makes it possible to easily create and manage web-sites without having to write code (site owners can customize a site through the browser-based editing tools built-into Orchard).  Read these tutorials to learn more about how you can setup and manage your own Orchard site. Orchard itself is built as an ASP.NET MVC 3 application using Razor view templates (and by default uses SQL CE 4 for data storage).  Developers wishing to extend an Orchard site with custom functionality can open and edit it as a Visual Studio project – and add new ASP.NET MVC Controllers/Views to it.  WebMatrix 1.0 WebMatrix is a new, free, web development tool from Microsoft that provides a suite of technologies that make it easier to enable website development.  It enables a developer to start a new site by browsing and downloading an app template from an online gallery of web applications (which includes popular apps like Umbraco, DotNetNuke, Orchard, WordPress, Drupal and Joomla).  Alternatively it also enables developers to create and code web sites from scratch. WebMatrix is task focused and helps guide developers as they work on sites.  
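    Before the WebMatrix discussion continues below, here is a short, hedged example of the "existing data APIs" point above: plain ADO.NET against a SQL CE 4 database file in App_Data. The .sdf file name and the Comments table are assumptions made for this sketch.

    // Plain ADO.NET against SQL CE 4. |DataDirectory| resolves to the site's
    // App_Data folder, so the .sdf file travels with the application.
    using System.Data.SqlServerCe;

    public static class CommentStore
    {
        private const string ConnectionString = @"Data Source=|DataDirectory|\Site.sdf";

        public static void AddComment(string author, string body)
        {
            using (var connection = new SqlCeConnection(ConnectionString))
            using (var command = new SqlCeCommand(
                "INSERT INTO Comments (Author, Body) VALUES (@Author, @Body)", connection))
            {
                command.Parameters.AddWithValue("@Author", author);
                command.Parameters.AddWithValue("@Body", body);
                connection.Open();
                command.ExecuteNonQuery();
            }
        }
    }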
WebMatrix includes IIS Express, SQL CE 4, and ASP.NET - providing an integrated web-server, database and programming framework combination.  It also includes built-in web publishing support which makes it easy to find and deploy sites to web hosting providers. You can learn more about WebMatrix from my Introducing WebMatrix blog post this summer.  Visit http://microsoft.com/web to download and install it today. Summary I’m really excited about today’s releases – they provide a bunch of additional value that makes web development with ASP.NET, Visual Studio and the Microsoft Web Server a lot better.  A lot of folks worked hard to share this with you today. On behalf of my whole team – we hope you enjoy them! Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
    A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sort of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate this error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up. It’s been a while since I’d looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn’t really find anything that was a good fit. A lot of tools were either a pain to use, didn’t have the basic features I needed, or are extravagantly expensive. In  the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end and eventually turned into a full blown Web load testing tool that is now called West Wind WebSurge. I got seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is to just put an application under heavy enough load to find a scalability problem in code, or to simply try and push an application to its limits on the hardware it’s running I shouldn’t have to have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that the testing can be set up quickly so that it can be done on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool. If you’re in a hurry and you want to check it out, you can find more information and download links here: West Wind WebSurge Product Page Walk through Video Download link (zip) Install from Chocolatey Source on GitHub For a more detailed discussion of the why’s and how’s and some background continue reading. How did I get here? When I started out on this path, I wasn’t planning on building a tool like this myself – but I got frustrated enough looking at what’s out there to think that I can do better than what’s available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking around what’s available for Web load testing solutions that would work for our whole team which consisted of a few developers and a couple of IT guys both of which needed to be able to run the tests. It had been a while since I looked at tools and I figured that by now there should be some good solutions out there, but as it turns out I didn’t really find anything that fit our relatively simple needs without costing an arm and a leg… I spent the better part of a day installing and trying various load testing tools and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used some terminology I couldn’t even parse, or were extremely expensive (and I mean in the ‘sell your liver’ range of expensive). Pick your poison. There are also a number of online solutions for load testing and they actually looked more promising, but those wouldn’t work well for our scenario as the application is running inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey as well – presumably because of the bandwidth required to test over the open Web can be enormous. 
When I asked around on Twitter what people were using– I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses though with people asking to let them know what I found – apparently I’m not alone when it comes to finding load testing tools that are effective and easy to use. As to Visual Studio, the higher end skus of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with that: First it’s tied to Visual Studio so it’s not very portable – you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it’s complicated enough that there’s even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need to have one of the high end Visual Studio Skus, and those are mucho Dinero ($$$) – just for the load testing that’s rarely an option. Some of the tools are ultra extensive and let you run analysis tools on the target serves which is useful, but in most cases – just plain overkill and only distracts from what I tend to be ultimately interested in: Reproducing problems that occur at high load, and finding the upper limits and ‘what if’ scenarios as load is ramped up increasingly against a site. Yes it’s useful to have Web app instrumentation, but often that’s not what you’re interested in. I still fondly remember early days of Web testing when Microsoft had the WAST (Web Application Stress Tool) tool, which was rather simple – and also somewhat limited – but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn’t work with SSL),  but the idea behind it was excellent: Create tests quickly and easily and provide a decent engine to run it locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death as so many of Microsoft’s tools that probably were built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tools was no longer downloadable and now it simply doesn’t work anymore on higher end hardware. West Wind Web Surge – Making Load Testing Quick and Easy So I ended up creating West Wind WebSurge out of rebellious frustration… The goal of WebSurge is to make it drop dead simple to create load tests. It’s super easy to capture sessions either using the built in capture tool (big props to Eric Lawrence, Telerik and FiddlerCore which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers to create requests. I’ve been using this tool for 4 months now on a regular basis on various projects as a reality check for performance and scalability and it’s worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, as it allows me to quickly enter a URL plus headers and content and test that URL and its results along with the ability to easily save one or more of those URLs. A few weeks back I made a walk through video that goes over most of the features of WebSurge in some detail: Note that the UI has slightly changed since then, so there are some UI improvements. 
Most notably the test results screen has been updated recently to a different layout and to provide more information about each URL in a session at a glance. The video and the main WebSurge site has a lot of info of basic operations. For the rest of this post I’ll talk about a few deeper aspects that may be of interest while also giving a glance at how WebSurge works. Session Capturing As you would expect, WebSurge works with Sessions of Urls that are played back under load. Here’s what the main Session View looks like: You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web Browsers, Windows Desktop applications that call services, your own applications using the built in Capture tool. With this tool you can capture anything HTTP -SSL requests and content from Web pages, AJAX calls, SOAP or REST services – again anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore so basically anything you can capture with Fiddler you can also capture with Web Surge Session capture tool. Alternately you can actually use Fiddler as well, and then export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture session without having to actually install WebSurge or for your customers to provide an exact playback scenario for a given set of URLs that cause a problem perhaps. Note that not all applications work with Fiddler’s proxy unless you configure a proxy. For example, .NET Web applications that make HTTP calls usually don’t show up in Fiddler by default. For those .NET applications you can explicitly override proxy settings to capture those requests to service calls. The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don’t want to include in your requests. For example, if your pages include links to CDNs, or Google Analytics or social links you typically don’t want to include those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally you can provide url filters in the configuration file – filters allow to provide filter strings that if contained in a url will cause requests to be ignored. Again this is useful if you don’t filter by domain but you want to filter out things like static image, css and script files etc. Often you’re not interested in the load characteristics of these static and usually cached resources as they just add noise to tests and often skew the overall url performance results. In my testing I tend to care only about my dynamic requests. SSL Captures require Fiddler Note, that in order to capture SSL requests you’ll have to install the Fiddler’s SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There’s a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler and therefore with WebSurge. Session Storage A group of URLs entered or captured make up a Session. Sessions can be saved and restored easily as they use a very simple text format that simply stored on disk. The format is slightly customized HTTP header traces separated by a separator line. 
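    Since the capture tool above is built on FiddlerCore, the underlying idea looks roughly like the sketch below. This is not WebSurge's actual implementation, just an illustration of proxy-based capture with a domain filter; the port number and domain are arbitrary choices.

    // Conceptual FiddlerCore capture loop (not WebSurge's code). Completed requests
    // for a single domain are written out; everything else is ignored.
    using System;
    using Fiddler;

    class CaptureSketch
    {
        static void Main()
        {
            FiddlerApplication.AfterSessionComplete += session =>
            {
                if (session.hostname.EndsWith("west-wind.com", StringComparison.OrdinalIgnoreCase))
                    Console.WriteLine("{0} {1}", session.RequestMethod, session.fullUrl);
            };

            FiddlerApplication.Startup(8877, FiddlerCoreStartupFlags.Default);
            Console.WriteLine("Capturing on port 8877 - press Enter to stop.");
            Console.ReadLine();
            FiddlerApplication.Shutdown();
        }
    }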
The headers are standard HTTP headers except that the full URL instead of just the domain relative path is stored as part of the 1st HTTP header line for easier parsing. Because it’s just text and uses the same format that Fiddler uses for exports, it’s super easy to create Sessions by hand manually or under program control writing out to a simple text file. You can see what this format looks like in the Capture window figure above – the raw captured format is also what’s stored to disk and what WebSurge parses from. The only ‘custom’ part of these headers is that 1st line contains the full URL instead of the domain relative path and Host: header. The rest of each header are just plain standard HTTP headers with each individual URL isolated by a separator line. The format used here also uses what Fiddler produces for exports, so it’s easy to exchange or view data either in Fiddler or WebSurge. Urls can also be edited interactively so you can modify the headers easily as well: Again – it’s just plain HTTP headers so anything you can do with HTTP can be added here. Use it for single URL Testing Incidentally I’ve also found this form as an excellent way to test and replay individual URLs for simple non-load testing purposes. Because you can capture a single or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, and fire them one at a time or as a session and see results immediately. It’s actually an easy way for REST presentations and I find the simple UI flow actually easier than using Fiddler natively. Finally you can save one or more URLs as a session for later retrieval. I’m using this more and more for simple URL checks. Overriding Cookies and Domains Speaking of HTTP headers – you can also overwrite cookies used as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization. These cookies tend to expire at some point which would invalidate a test. Using the Options dialog you can actually override the cookie: which replaces the cookie for all requests with the cookie value specified here. You can capture a valid cookie from a manual HTTP request in your browser and then paste into the cookie field, to replace the existing Cookie with the new one that is now valid. Likewise you can easily replace the domain so if you captured urls on west-wind.com and now you want to test on localhost you can do that easily easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store which would also work. Running Load Tests Once you’ve created a Session you can specify the length of the test in seconds, and specify the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One option in the options list above is that you can also randomize the URLs so each thread runs requests in a different order. This avoids bunching up URLs initially when tests start as all threads run the same requests simultaneously which can sometimes skew the results of the first few minutes of a test. While sessions run some progress information is displayed: By default there’s a live view of requests displayed in a Console-like window. On the bottom of the window there’s a running total summary that displays where you’re at in the test, how many requests have been processed and what the requests per second count is currently for all requests. 
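    The "under program control" option mentioned above can be as simple as writing plain text. The sketch below generates one request entry in that spirit, with the full URL on the first header line; the exact separator line WebSurge expects is an assumption here, so check a captured session file for the real one before relying on it.

    // Writes a single, illustrative session entry: full URL on the first line,
    // ordinary HTTP headers, then a separator line (separator text is assumed).
    using System.IO;
    using System.Text;

    class SessionFileSketch
    {
        static void Main()
        {
            var sb = new StringBuilder();
            sb.AppendLine("GET http://localhost/store/products HTTP/1.1");
            sb.AppendLine("Accept: application/json");
            sb.AppendLine("User-Agent: WebSurge-Sketch");
            sb.AppendLine();
            sb.AppendLine("----------------------------------------------");

            File.WriteAllText("MySession.txt", sb.ToString(), Encoding.UTF8);
        }
    }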
Note that for tests that run over a thousand requests a second it’s a good idea to turn off the console display. While the console display is nice to see that something is happening and also gives you slight idea what’s happening with actual requests, once a lot of requests are processed, this UI updating actually adds a lot of CPU overhead to the application which may cause the actual load generated to be reduced. If you are running a 1000 requests a second there’s not much to see anyway as requests roll by way too fast to see individual lines anyway. If you look on the options panel, there is a NoProgressEvents option that disables the console display. Note that the summary display is still updated approximately once a second so you can always tell that the test is still running. Test Results When the test is done you get a simple Results display: On the right you get an overall summary as well as breakdown by each URL in the session. Both success and failures are highlighted so it’s easy to see what’s breaking in your load test. The report can be printed or you can also open the HTML document in your default Web Browser for printing to PDF or saving the HTML document to disk. The list on the right shows you a partial list of the URLs that were fired so you can look in detail at the request and response data. The list can be filtered by success and failure requests. Each list is partial only (at the moment) and limited to a max of 1000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data: This particularly useful for errors so you can quickly see and copy what request data was used and in the case of a GET request you can also just click the link to quickly jump to the page. For non-GET requests you can find the URL in the Session list, and use the context menu to Test the URL as configured including any HTTP content data to send. You get to see the full HTTP request and response as well as a link in the Request header to go visit the actual page. Not so useful for a POST as above, but definitely useful for GET requests. Finally you can also get a few charts. The most useful one is probably the Request per Second chart which can be accessed from the Charts menu or shortcut. Here’s what it looks like:   Results can also be exported to JSON, XML and HTML. Keep in mind that these files can get very large rather quickly though, so exports can end up taking a while to complete. Command Line Interface WebSurge runs with a small core load engine and this engine is plugged into the front end application I’ve shown so far. There’s also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to AB.exe for example) or a full Session file. By default when it runs WebSurgeCli shows progress every second showing total request count, failures and the requests per second for the entire test. A silent option can turn off this progress display and display only the results. The command line interface can be useful for build integration which allows checking for failures perhaps or hitting a specific requests per second count etc. It’s also nice to use this as quick and dirty URL test facility similar to the way you’d use Apache Bench (ab.exe). Unlike ab.exe though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests using either manual editing or the WebSurge UI. 
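    To make the thread/duration model described above concrete, here is a deliberately tiny load loop: each of N workers replays the URL list until the time is up while a shared counter tracks throughput. This is a sketch of the concept, not WebSurge's engine, and the URLs and numbers are placeholders.

    // Toy load runner: N parallel workers replay a URL list for a fixed duration.
    // Illustrates the idea only - no warm-up, randomization, or per-URL statistics.
    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Net;
    using System.Threading;
    using System.Threading.Tasks;

    class TinyLoadRunner
    {
        static void Main()
        {
            var urls = new List<string> { "http://localhost/store", "http://localhost/store/products" };
            const int threads = 8;
            var duration = TimeSpan.FromSeconds(30);
            long requests = 0, failures = 0;

            // Without this, .NET throttles outgoing connections per host.
            ServicePointManager.DefaultConnectionLimit = 100;

            Parallel.For(0, threads, _ =>
            {
                var watch = Stopwatch.StartNew();
                while (watch.Elapsed < duration)
                {
                    foreach (var url in urls)
                    {
                        try
                        {
                            using (var client = new WebClient())
                                client.DownloadString(url);
                            Interlocked.Increment(ref requests);
                        }
                        catch (WebException)
                        {
                            Interlocked.Increment(ref failures);
                        }
                    }
                }
            });

            Console.WriteLine("Requests: {0}  Failures: {1}  Req/sec: {2:n1}",
                requests, failures, requests / duration.TotalSeconds);
        }
    }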
Current Status Currently West Wind WebSurge is still in Beta status. I’m still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress. I plan on open-sourcing this product, but it won’t be free. There’s a free version available that provides a limited number of threads and request URLs to run. A relatively low cost license removes the thread and request limitations. Pricing info can be found on the Web site – there’s an introductory price which is $99 at the moment which I think is reasonable compared to most other for pay solutions out there that are exorbitant by comparison… The reason code is not available yet is – well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface originally that later got a lot more complex – yeah, that never happens, right? Unless there’s a lot of interest I don’t foresee re-writing the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0. I’m very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so – it already has provided a ton of value for me and the work I do that made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section Microsoft MVPs and Insiders get a free License If you’re a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I’ll send you a not-for resale license. Send any messages to [email protected]. Resources For more info on WebSurge and to download it to try it out, use the following links. West Wind WebSurge Home Download West Wind WebSurge Getting Started with West Wind WebSurge Video © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET

    Read the article

  • SQL SERVER – Securing TRUNCATE Permissions in SQL Server

    - by pinaldave
    Download the Script of this article from here. On December 11, 2010, Vinod Kumar, a Databases & BI technology evangelist from Microsoft Corporation, graced Ahmedabad by spending some time with the Community during the Community Tech Days (CTD) event. As he was running through a few demos, Vinod asked the audience one of the most fundamental and common interview questions – “What is the difference between a DELETE and TRUNCATE?“ Ahmedabad SQL Server User Group Expert Nakul Vachhrajani has come up with an excellent solution to the same. I must congratulate Nakul for this excellent solution and, as an encouragement to User Group members, I am publishing the same article over here. Nakul Vachhrajani is a Software Specialist and systems development professional with Patni Computer Systems Limited. He has functional experience spanning legacy code deprecation, system design, documentation, development, implementation, testing, maintenance and support of complex systems, providing business intelligence solutions, database administration, performance tuning, optimization, product management, release engineering, process definition and implementation. He has a comprehensive grasp of Database Administration, Development and Implementation with MS SQL Server and C, C++, Visual C++/C#. He has about 6 years of total experience in information technology. Nakul is a member of the Ahmedabad and Gandhinagar SQL Server User Groups, and contributes to the community by actively participating in multiple forums and websites like SQLAuthority.com, BeyondRelational.com, SQLServerCentral.com and many others. Please note: The opinions expressed herein are Nakul’s own personal opinions and do not represent his employer’s view in any way. All data from everywhere here on Earth goes through a series of four distinct operations, identified by the words: CREATE, READ, UPDATE and DELETE, or simply, CRUD. Put in Microsoft SQL Server terms, the process goes like this: INSERT, SELECT, UPDATE and DELETE/TRUNCATE. Quite a few interesting responses were received and evaluated live during the session. To summarize them, the most important similarity that came out was that both DELETE and TRUNCATE participate in transactions. The major differences (not all) that came out of the exercise were: DELETE: DELETE supports a WHERE clause DELETE removes rows from a table, row-by-row Because DELETE moves row-by-row, it acquires a row-level lock Depending upon the recovery model of the database, DELETE is a fully-logged operation. Because DELETE moves row-by-row, it can fire off triggers TRUNCATE: TRUNCATE does not support a WHERE clause TRUNCATE works by directly removing the individual data pages of a table TRUNCATE directly acquires a table-level lock. (Because a lock is acquired, and because TRUNCATE can also participate in a transaction, it has to be a logged operation) TRUNCATE is, therefore, a minimally-logged operation; again, this depends upon the recovery model of the database Triggers are not fired when TRUNCATE is used (because individual row deletions are not logged) Finally, Vinod popped the big homework question that must be critically analyzed: “We know that we can restrict a DELETE operation to a particular user, but how can we restrict the TRUNCATE operation to a particular user?” After returning home and having a nice cup of coffee, I noticed that my gray cells immediately started to work. Below is the result of my research. As is always said, the devil is in the details.
Upon looking at the Permissions section for the TRUNCATE statement in Books On Line, the following jumps right out: “The minimum permission required is ALTER on table_name. TRUNCATE TABLE permissions default to the table owner, members of the sysadmin fixed server role, and the db_owner and db_ddladmin fixed database roles, and are not transferable. However, you can incorporate the TRUNCATE TABLE statement within a module, such as a stored procedure, and grant appropriate permissions to the module using the EXECUTE AS clause.“ Now, what does this mean? Unlike DELETE, one cannot directly assign permissions to a user/set of users allowing or revoking TRUNCATE rights. However, there is a way to circumvent this. It is important to recall that in Microsoft SQL Server, database engine security surrounds the concept of a “securable”, which is any object like a table, stored procedure, trigger, etc. Rights are assigned to a principal on a securable. Refer to the image below (taken from the SQL Server Books On Line). SETTING UP THE ENVIRONMENT – (01A_Truncate Table Permissions.sql) Script Provided at the end of the article. By the end of this demo, one will be able to do all the CRUD operations, except the TRUNCATE, and the other will only be able to execute the TRUNCATE. All you will need for this test is any edition of SQL Server 2008. (With minor changes, these scripts can be made to work with SQL 2005.) We begin by creating the following: 1. A test database 2. Two database roles: associated logins and users 3. Switch over to the test database and create a test table. Then, add some data into it. I am using row constructors, which is new to SQL 2008. Creating the modules that will be used to enforce permissions 1. We have already created one of the modules that we will be assigning permissions to. That module is the table: TruncatePermissionsTest 2. We will now create two stored procedures; one is for the DELETE operation and the other for the TRUNCATE operation. Please note that for all practical purposes, the end result is the same – all data from the table TruncatePermissionsTest is removed Assigning the permissions Now comes the most important part of the demonstration – assigning permissions. A permissions matrix can be worked out as under: To apply the security rights, we use the GRANT and DENY clauses, as under: That’s it! We are now ready for our big test! THE TEST (01B_Truncate Table Test Queries.sql) Script Provided at the end of the article. I will now need two separate SSMS connections, one with the login AllowedTruncate and the other with the login RestrictedTruncate. Running the test is simple; all that’s required is to run through the script – 01B_Truncate Table Test Queries.sql. What I will demonstrate here via screen-shots is the behavior of SQL Server when logged in as the AllowedTruncate user. There are a few other combinations than what are highlighted here. I will leave the reader the right to explore the behavior of the RestrictedTruncate user and these additional scenarios, as a form of self-study. 1. Testing SELECT permissions 2. Testing TRUNCATE permissions (Remember, “deny by default”?) 3. 
Trying to circumvent security by trying to TRUNCATE the table using the stored procedure Hence, we have now proved that a user can indeed be assigned permissions to specifically assign TRUNCATE permissions. I also hope that the above has sparked curiosity towards putting some security around the probably “destructive” operations of DELETE and TRUNCATE. I would like to wish each and every one of the readers a very happy and secure time with Microsoft SQL Server. (Please find the scripts – 01A_Truncate Table Permissions.sql and 01B_Truncate Table Test Queries.sql that have been used in this demonstration. Please note that these scripts contain purely test-level code only. These scripts must not, at any cost, be used in the reader’s production environments). 01A_Truncate Table Permissions.sql /* ***************************************************************************************************************** Developed By          : Nakul Vachhrajani Functionality         : This demo is focused on how to allow only TRUNCATE permissions to a particular user How to Use            : 1. Run through, step-by-step through the sequence till Step 08 to create a test database 2. Switch over to the "Truncate Table Test Queries.sql" and execute it step-by-step in two different SSMS windows, one where you have logged in as 'RestrictedTruncate', and the other as 'AllowedTruncate' 3. Come back to "Truncate Table Permissions.sql" 4. Execute Step 10 to cleanup! Modifications         : December 13, 2010 - NAV - Updated to add a security matrix and improve code readability when applying security December 12, 2010 - NAV - Created ***************************************************************************************************************** */ -- Step 01: Create a new test database CREATE DATABASE TruncateTestDB GO USE TruncateTestDB GO -- Step 02: Add roles and users to demonstrate the security of the Truncate operation -- 2a. Create the new roles CREATE ROLE AllowedTruncateRole; GO CREATE ROLE RestrictedTruncateRole; GO -- 2b. Create new logins CREATE LOGIN AllowedTruncate WITH PASSWORD = 'truncate@2010', CHECK_POLICY = ON GO CREATE LOGIN RestrictedTruncate WITH PASSWORD = 'truncate@2010', CHECK_POLICY = ON GO -- 2c. Create new Users using the roles and logins created aboave CREATE USER TruncateUser FOR LOGIN AllowedTruncate WITH DEFAULT_SCHEMA = dbo GO CREATE USER NoTruncateUser FOR LOGIN RestrictedTruncate WITH DEFAULT_SCHEMA = dbo GO -- 2d. 
Add the newly created login to the newly created role sp_addrolemember 'AllowedTruncateRole','TruncateUser' GO sp_addrolemember 'RestrictedTruncateRole','NoTruncateUser' GO -- Step 03: Change over to the test database USE TruncateTestDB GO -- Step 04: Create a test table within the test databse CREATE TABLE TruncatePermissionsTest (Id INT IDENTITY(1,1), Name NVARCHAR(50)) GO -- Step 05: Populate the required data INSERT INTO TruncatePermissionsTest VALUES (N'Delhi'), (N'Mumbai'), (N'Ahmedabad') GO -- Step 06: Encapsulate the DELETE within another module CREATE PROCEDURE proc_DeleteMyTable WITH EXECUTE AS SELF AS DELETE FROM TruncateTestDB..TruncatePermissionsTest GO -- Step 07: Encapsulate the TRUNCATE within another module CREATE PROCEDURE proc_TruncateMyTable WITH EXECUTE AS SELF AS TRUNCATE TABLE TruncateTestDB..TruncatePermissionsTest GO -- Step 08: Apply Security /* *****************************SECURITY MATRIX*************************************** =================================================================================== Object                   | Permissions |                 Login |             | AllowedTruncate   |   RestrictedTruncate |             |User:NoTruncateUser|   User:TruncateUser =================================================================================== TruncatePermissionsTest  | SELECT,     |      GRANT        |      (Default) | INSERT,     |                   | | UPDATE,     |                   | | DELETE      |                   | -------------------------+-------------+-------------------+----------------------- TruncatePermissionsTest  | ALTER       |      DENY         |      (Default) -------------------------+-------------+----*/----------------+----------------------- proc_DeleteMyTable | EXECUTE | GRANT | DENY -------------------------+-------------+-------------------+----------------------- proc_TruncateMyTable | EXECUTE | DENY | GRANT -------------------------+-------------+-------------------+----------------------- *****************************SECURITY MATRIX*************************************** */ /* Table: TruncatePermissionsTest*/ GRANT SELECT, INSERT, UPDATE, DELETE ON TruncateTestDB..TruncatePermissionsTest TO NoTruncateUser GO DENY ALTER ON TruncateTestDB..TruncatePermissionsTest TO NoTruncateUser GO /* Procedure: proc_DeleteMyTable*/ GRANT EXECUTE ON TruncateTestDB..proc_DeleteMyTable TO NoTruncateUser GO DENY EXECUTE ON TruncateTestDB..proc_DeleteMyTable TO TruncateUser GO /* Procedure: proc_TruncateMyTable*/ DENY EXECUTE ON TruncateTestDB..proc_TruncateMyTable TO NoTruncateUser GO GRANT EXECUTE ON TruncateTestDB..proc_TruncateMyTable TO TruncateUser GO -- Step 09: Test --Switch over to the "Truncate Table Test Queries.sql" and execute it step-by-step in two different SSMS windows: --    1. one where you have logged in as 'RestrictedTruncate', and --    2. 
the other as 'AllowedTruncate' -- Step 10: Cleanup sp_droprolemember 'AllowedTruncateRole','TruncateUser' GO sp_droprolemember 'RestrictedTruncateRole','NoTruncateUser' GO DROP USER TruncateUser GO DROP USER NoTruncateUser GO DROP LOGIN AllowedTruncate GO DROP LOGIN RestrictedTruncate GO DROP ROLE AllowedTruncateRole GO DROP ROLE RestrictedTruncateRole GO USE MASTER GO DROP DATABASE TruncateTestDB GO 01B_Truncate Table Test Queries.sql /* ***************************************************************************************************************** Developed By          : Nakul Vachhrajani Functionality         : This demo is focused on how to allow only TRUNCATE permissions to a particular user How to Use            : 1. Switch over to this from "Truncate Table Permissions.sql", Step #09 2. Execute this step-by-step in two different SSMS windows a. One where you have logged in as 'RestrictedTruncate', and b. The other as 'AllowedTruncate' 3. Return back to "Truncate Table Permissions.sql" 4. Execute Step 10 to cleanup! Modifications         : December 12, 2010 - NAV - Created ***************************************************************************************************************** */ -- Step 09A: Switch to the test database USE TruncateTestDB GO -- Step 09B: Ensure that we have valid data SELECT * FROM TruncatePermissionsTest GO -- (Expected: Following error will occur if logged in as "AllowedTruncate") -- Msg 229, Level 14, State 5, Line 1 -- The SELECT permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'. --Step 09C: Attempt to Truncate Data from the table without using the stored procedure TRUNCATE TABLE TruncatePermissionsTest GO -- (Expected: Following error will occur) --  Msg 1088, Level 16, State 7, Line 2 --  Cannot find the object "TruncatePermissionsTest" because it does not exist or you do not have permissions. -- Step 09D:Regenerate Test Data INSERT INTO TruncatePermissionsTest VALUES (N'London'), (N'Paris'), (N'Berlin') GO -- (Expected: Following error will occur if logged in as "AllowedTruncate") -- Msg 229, Level 14, State 5, Line 1 -- The INSERT permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'. --Step 09E: Attempt to Truncate Data from the table using the stored procedure EXEC proc_TruncateMyTable GO -- (Expected: Will execute successfully with 'AllowedTruncate' user, will error out as under with 'RestrictedTruncate') -- Msg 229, Level 14, State 5, Procedure proc_TruncateMyTable, Line 1 -- The EXECUTE permission was denied on the object 'proc_TruncateMyTable', database 'TruncateTestDB', schema 'dbo'. -- Step 09F:Regenerate Test Data INSERT INTO TruncatePermissionsTest VALUES (N'Madrid'), (N'Rome'), (N'Athens') GO --Step 09G: Attempt to Delete Data from the table without using the stored procedure DELETE FROM TruncatePermissionsTest GO -- (Expected: Following error will occur if logged in as "AllowedTruncate") -- Msg 229, Level 14, State 5, Line 2 -- The DELETE permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'. 
-- Step 09H:Regenerate Test Data INSERT INTO TruncatePermissionsTest VALUES (N'Spain'), (N'Italy'), (N'Greece') GO --Step 09I: Attempt to Delete Data from the table using the stored procedure EXEC proc_DeleteMyTable GO -- (Expected: Following error will occur if logged in as "AllowedTruncate") -- Msg 229, Level 14, State 5, Procedure proc_DeleteMyTable, Line 1 -- The EXECUTE permission was denied on the object 'proc_DeleteMyTable', database 'TruncateTestDB', schema 'dbo'. --Step 09J: Close this SSMS window and return back to "Truncate Table Permissions.sql" Thank you Nakul to take up the challenge and prove that Ahmedabad and Gandhinagar SQL Server User Group has talent to solve difficult problems. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Pinal Dave, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology
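    As a small companion to the scripts above, here is a hedged C# sketch that exercises the same permissions from an application: connected as the AllowedTruncate login created in 01A_Truncate Table Permissions.sql, a direct TRUNCATE fails while the wrapped stored procedure succeeds. The server name is an assumption, and like the scripts this is test-only code.

    // Test-only sketch using the logins and objects created by the scripts above.
    // "Data Source=." assumes a local default SQL Server instance.
    using System;
    using System.Data;
    using System.Data.SqlClient;

    class TruncatePermissionDemo
    {
        static void Main()
        {
            const string connectionString =
                "Data Source=.;Initial Catalog=TruncateTestDB;" +
                "User ID=AllowedTruncate;Password=truncate@2010";

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                try
                {
                    using (var direct = new SqlCommand("TRUNCATE TABLE TruncatePermissionsTest", connection))
                        direct.ExecuteNonQuery();
                }
                catch (SqlException ex)
                {
                    // Expected: ALTER was never granted to this login.
                    Console.WriteLine("Direct TRUNCATE failed: " + ex.Message);
                }

                // Allowed: the procedure runs under EXECUTE AS, so it may truncate.
                using (var viaProc = new SqlCommand("proc_TruncateMyTable", connection))
                {
                    viaProc.CommandType = CommandType.StoredProcedure;
                    viaProc.ExecuteNonQuery();
                    Console.WriteLine("TRUNCATE via proc_TruncateMyTable succeeded.");
                }
            }
        }
    }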

    Read the article

  • Windows Phone 7 development: reading RSS feeds

    - by DigiMortal
    One limitation on Windows Phone 7 is related to System.Net namespace classes. There is no convenient way to read data from web. There is no WebClient class. There is no GetResponse() method – we have to do it all asynchronously because compact framework has limited set of classes we can use in our applications to communicate with internet. In this posting I will show you how to read RSS-feeds on Windows Phone 7. NB! This is my draft code and it may contain some design flaws and some questionable solutions. This code is intended to use as test-drive for Windows Phone 7 CTP developer tools and I don’t suppose you are going to use this code in production environment. Current state of my RSS-reader Currently my RSS-reader for Windows Phone 7 is very simple, primitive and uses almost all defaults that come out-of-box with Windows Phone 7 CTP developer tools. My first goal before going on with nicer user interface design was making RSS-reading work because instead of convenient classes from .NET Framework we have to use very limited classes from .NET Framework CE. This is why I took the reading of RSS-feeds as my first task. There are currently more things to solve regarding user-interface. As I am pretty new to all this Silverlight stuff I am not very sure if I can modify default controls easily or should I write my own controls that have better look and that work faster. The image on right shows you how my RSS-reader looks like right now. Upper side of screen is filled with list that shows headlines from this blog. The bottom part of screen is used to show description of selected posting. You can click on the image to see it in original size. In my next posting I will show you some improvements of my RSS-reader user interface that make it look nicer. But currently it is nice enough to make sure that RSS-feeds are read correctly. FeedItem class As this is most straight-forward part of the following code I will show you RSS-feed items class first. I think we have to stop on it because it is simple one. public class FeedItem {     public string Title { get; set; }     public string Description { get; set; }     public DateTime PublishDate { get; set; }     public List<string> Categories { get; set; }     public string Link { get; set; }       public FeedItem()     {         Categories = new List<string>();     } } RssClient RssClient takes feed URL and when asked it loads all items from feed and gives them back to caller through ItemsReceived event. Why it works this way? Because we can make responses only using asynchronous methods. I will show you in next section how to use this class. Although the code here is not very good but it works like expected. I will refactor this code later because it needs some more efforts and investigating. But let’s hope I find excellent solution. 
:) public class RssClient {     private readonly string _rssUrl;       public delegate void ItemsReceivedDelegate(RssClient client, IList<FeedItem> items);     public event ItemsReceivedDelegate ItemsReceived;       public RssClient(string rssUrl)     {         _rssUrl = rssUrl;     }       public void LoadItems()     {         var request = (HttpWebRequest)WebRequest.Create(_rssUrl);         var result = (IAsyncResult)request.BeginGetResponse(ResponseCallback, request);     }       void ResponseCallback(IAsyncResult result)     {         var request = (HttpWebRequest)result.AsyncState;         var response = request.EndGetResponse(result);           var stream = response.GetResponseStream();         var reader = XmlReader.Create(stream);         var items = new List<FeedItem>(50);           FeedItem item = null;         var pointerMoved = false;           while (!reader.EOF)         {             if (pointerMoved)             {                 pointerMoved = false;             }             else             {                 if (!reader.Read())                     break;             }               var nodeName = reader.Name;             var nodeType = reader.NodeType;               if (nodeName == "item")             {                 if (nodeType == XmlNodeType.Element)                     item = new FeedItem();                 else if (nodeType == XmlNodeType.EndElement)                     if (item != null)                     {                         items.Add(item);                         item = null;                     }                   continue;             }               if (nodeType != XmlNodeType.Element)                 continue;               if (item == null)                 continue;               reader.MoveToContent();             var nodeValue = reader.ReadElementContentAsString();             // we just moved internal pointer             pointerMoved = true;               if (nodeName == "title")                 item.Title = nodeValue;             else if (nodeName == "description")                 item.Description =  Regex.Replace(nodeValue,@"<(.|\n)*?>",string.Empty);             else if (nodeName == "feedburner:origLink")                 item.Link = nodeValue;             else if (nodeName == "pubDate")             {                 if (!string.IsNullOrEmpty(nodeValue))                     item.PublishDate = DateTime.Parse(nodeValue);             }             else if (nodeName == "category")                 item.Categories.Add(nodeValue);         }           if (ItemsReceived != null)             ItemsReceived(this, items);     } } This method is pretty long but it works. Now let’s try to use it in Windows Phone 7 application. Using RssClient And this is the fragment of code behing the main page of my application start screen. You can see how RssClient is initialized and how items are bound to list that shows them. 
public MainPage()
{
    InitializeComponent();

    SupportedOrientations = SupportedPageOrientation.Portrait | SupportedPageOrientation.Landscape;
    listBox1.Width = Width;

    var rssClient = new RssClient("http://feedproxy.google.com/gunnarpeipman");
    rssClient.ItemsReceived += new RssClient.ItemsReceivedDelegate(rssClient_ItemsReceived);
    rssClient.LoadItems();
}

void rssClient_ItemsReceived(RssClient client, IList<FeedItem> items)
{
    // The event is raised on a background thread, so marshal the
    // data binding back to the UI thread.
    Dispatcher.BeginInvoke(delegate()
    {
        listBox1.ItemsSource = items;
    });
}

Conclusion

As you can see, it was not a very hard task to read an RSS feed and populate a list with its entries. Although we cannot use the more powerful classes that are part of the full .NET Framework, we can still live with the limited set of classes that .NET Framework CE provides.
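One more note on the draft code: item.PublishDate = DateTime.Parse(nodeValue) parses the feed's pubDate using the phone's current culture, and a typical RFC 822 date like "Wed, 12 May 2010 07:20:00 GMT" may not parse under every regional setting. Below is a minimal sketch of a culture-invariant helper; RssDates and ParsePubDate are names invented for this example and are not part of the code above.

using System;
using System.Globalization;

static class RssDates
{
    // Parse an RSS pubDate with the invariant culture so the result does not
    // depend on the phone's regional settings; return DateTime.MinValue
    // instead of throwing inside the async callback.
    public static DateTime ParsePubDate(string value)
    {
        DateTime result;
        if (DateTime.TryParse(value, CultureInfo.InvariantCulture,
                              DateTimeStyles.AdjustToUniversal, out result))
        {
            return result;
        }
        return DateTime.MinValue;
    }
}

With a helper like this, the pubDate branch in ResponseCallback would simply read item.PublishDate = RssDates.ParsePubDate(nodeValue).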

    Read the article

  • Where do I find scripts generated by SharePoint MCMS Migration Profiles

    - by HipCzeck
    I am attempting to migrate data from a Microsoft Content Management Server (MCMS) 2002 instance into a new Microsoft Office SharePoint Server (MOSS) 2007 installation using the Manage Microsoft Content Management Server Migration Profiles tool in the Operations space of MOSS Central Administration. When analyzing the profile, I receive 4 warnings, all of which may be safely ignored, but when I actually execute the migration profile, I get the same warnings and an additional error with a description of:

Line 6: Incorrect syntax near ';'.

I have seen this error numerous times when mucking about in SQL Server and recognize it as a Transact-SQL error message, but I can't find the actual SQL statement that is being executed so that I may determine the source of the error.

EDIT: After enabling verbose logging on the MCMS 2002 Migration category and poring through the Unified Logging Service (ULS) logs, I received a more complete stack trace at the point of the error, plus a couple more anomalies, listed below.

Anomalies: The following is an abbreviated listing from the ULS logs around the time of the pre-migration analysis.

01 MCMS 2002 Migration Verbose Start ConnectionCheck
02 MCMS 2002 Migration Verbose End ConnectionCheck
03 MCMS 2002 Migration Verbose Start DatabaseCheck
04 MCMS 2002 Migration High Extra table SiteDeployLock will not be migrated
05 MCMS 2002 Migration High Analysis: Extra index PK__SiteDeployLock__05D8E0BE
06 MCMS 2002 Migration Verbose End DatabaseCheck
07 MCMS 2002 Migration Medium Pre-migration analysis: RootCheckTask is skipped because database check is blocked.
08 MCMS 2002 Migration Medium Pre-migration analysis: RightsGroupNameCheckTask is skipped because database check is blocked.
09 MCMS 2002 Migration Medium Pre-migration analysis: InvalidNameCheckTask is skipped because database check is blocked.
10 MCMS 2002 Migration Medium Pre-migration analysis: LeafNameCheckTask is skipped because database check is blocked.
11 MCMS 2002 Migration Medium Pre-migration analysis: LeafLengthCheckTask is skipped because database check is blocked.
12 MCMS 2002 Migration Medium Pre-migration analysis: TemplateNameCheckTask is skipped because database check is blocked.
13 MCMS 2002 Migration Medium Pre-migration analysis: TemplateCollisionCheckTask is skipped because database check is blocked.
14 MCMS 2002 Migration Medium Pre-migration analysis: PlaceholderCheckTask is skipped because database check is blocked.
15 MCMS 2002 Migration Medium Pre-migration analysis: CheckedOutItemsCheckTask is skipped because database check is blocked.
16 MCMS 2002 Migration Medium Pre-migration analysis: SubmittedItemsCheckTask is skipped because database check is blocked.
17 MCMS 2002 Migration Medium Pre-migration analysis: DeletedItemsCheckTask is skipped because database check is blocked.
18 MCMS 2002 Migration Medium Pre-migration analysis: UserCheckTask is skipped because database check is blocked.
19 MCMS 2002 Migration Medium Pre-migration analysis: FileSizeCheckTask is skipped because database check is blocked.
20 MCMS 2002 Migration Medium Pre-migration analysis: HostHeaderMapCheckTask is skipped because database check is blocked.
21 MCMS 2002 Migration Verbose Start Server check
22 MCMS 2002 Migration Verbose End Server check
23 MCMS 2002 Migration Verbose Start Server emptyness check
24 MCMS 2002 Migration Verbose End Server emptyness check
25 MCMS 2002 Migration Medium PreMigrationAnalyzer: Dry run starts
26 MCMS 2002 Migration Verbose CleanLockProcedure: start.
27 MCMS 2002 Migration High CleanLockProcedure: connection system lock is null
28 MCMS 2002 Migration Verbose Finished all tasks
29 MCMS 2002 Migration High PreMigrationAnalyzer ends with True
30 MCMS 2002 Migration Verbose Migration profile status is changed to AnalysisPassed

Specifically, the two High level alerts on lines 4 and 5 are reflected in the migration report as warnings when running pre-migration analysis or running the migration profile. In addition, two other warnings appear in the migration report indicating two tables (LayoutProperty and NodeLayout) that contain data but should be empty. According to the documentation, warnings are not sufficient cause to stop migration from occurring.

Other anomalies appear on lines 7-20, which show a series of tests being skipped because the database check is blocked. The ULS log gives no additional warnings to indicate that the database check was blocked or exited under exceptional circumstances. After switching the profile from pre-migration analysis to exporting, there is one Medium level warning that LastChangeTime is not set or incorrect (null). As with all of the skipped test names and the SQL table names from the warnings, the major search engines are unable to find any reference to these objects or tests (with the exception of LayoutProperty).

Finally, the section of the log covering the actual live migration attempt is appended below:

01 MCMS 2002 Migration Medium LastChangeTime is not set or incorrect. (null)
02 MCMS 2002 Migration Verbose Set export lock
03 MCMS 2002 Migration Verbose CleanLockProcedure: start.
04 MCMS 2002 Migration Verbose CleanLockProcedure: end.
05 MCMS 2002 Migration Verbose Prepare for export
06 MCMS 2002 Migration Verbose Open connection...
07 MCMS 2002 Migration Verbose Create temporary stored procedures
08 MCMS 2002 Migration Verbose Create temporary tables...
09 MCMS 2002 Migration Verbose Initialize temporary tables...
10 MCMS 2002 Migration Verbose InitializeTemporaryTables: start
11 MCMS 2002 Migration Verbose Initialize export table...
12 MCMS 2002 Migration Verbose InitializeExportTable: start
13 MCMS 2002 Migration Verbose CleanLockProcedure: start.
14 MCMS 2002 Migration Verbose CleanLockProcedure: end.
15 MCMS 2002 Migration High Migration throws exception: Line 6: Incorrect syntax near ';'.. Stacktrace: at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async) at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe) at System.Data.SqlClient.SqlCommand.ExecuteNonQuery() at Microsoft.SharePoint.Publishing.Internal.Administration...
16 MCMS 2002 Migration High ....MigrationBatchCommand.ExecuteImmediate(String command) at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationBatchCommand.ExecuteWaitingCommands() at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDBSerializer.SerializeSelectedExportObject(StringCollection objectAttribs) at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeExportTable(ScopeType scopeType) at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeTemporaryTables(DateTime lastChangeTime) at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeDatabase(DateTime lastChangeTime, Boolean isAnalysis, SqlConnection connection) at Microsoft.SharePoint.Publishing.Internal.Admin...
17 MCMS 2002 Migration High ...stration.MigrationDataAccess.InitializeDatabase(DateTime lastChangeTime, Boolean isAnalysis) at Microsoft.SharePoint.Publishing.Administration.ContentMigration.Export(MigrationDataAccess dataAccess) at Microsoft.SharePoint.Publishing.Administration.ContentMigration.MigrateInternal().
18 MCMS 2002 Migration Verbose MigrationProfile: GetInstance. Start.
19 MCMS 2002 Migration Verbose MigrationProfile: GetInstance. End.
20 MCMS 2002 Migration Verbose Migration profile status is changed to Failed

The stack trace of the failed parsing of the SQL command appears on lines 15-17. A cleaner version of the stack trace is appended below.

Full Stack Trace:

Migration throws exception: Line 6: Incorrect syntax near ';'..
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async)
at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe)
at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationBatchCommand.ExecuteImmediate(String command)
at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationBatchCommand.ExecuteWaitingCommands()
at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDBSerializer.SerializeSelectedExportObject(StringCollection objectAttribs)
at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeExportTable(ScopeType scopeType)
at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeTemporaryTables(DateTime lastChangeTime)
at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeDatabase(DateTime lastChangeTime, Boolean isAnalysis, SqlConnection connection)
at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeDatabase(DateTime lastChangeTime, Boolean isAnalysis)
at Microsoft.SharePoint.Publishing.Administration.ContentMigration.Export(MigrationDataAccess dataAccess)
at Microsoft.SharePoint.Publishing.Administration.ContentMigration.MigrateInternal().

None of this log information indicates the SQL command that is failing the parser check.
I've checked the SQL servers hosting the source and destination databases for a trace of the query, but neither seems to have triggered the parse failure condition. That appears to have happened on the SharePoint server. Are there any other locations I should investigate that might tell me where to find the source of the error?
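For reference, here is a minimal stand-alone sketch of the kind of detail SqlException exposes when the T-SQL parser rejects a batch (error 102) via SqlCommand.ExecuteNonQuery, which is the same call that appears in the stack trace above. The connection string and the deliberately broken statement are placeholders, not anything taken from the migration itself.

using System;
using System.Data.SqlClient;

class ParseErrorRepro
{
    static void Main()
    {
        // Placeholder connection string: point it at any reachable SQL Server instance.
        using (var connection = new SqlConnection("Data Source=.;Integrated Security=SSPI"))
        using (var command = new SqlCommand("SELECT 1\nFROM;", connection))
        {
            connection.Open();
            try
            {
                command.ExecuteNonQuery();
            }
            catch (SqlException ex)
            {
                // Error 102 is the T-SQL parser's "Incorrect syntax near ..." error.
                // LineNumber is the line within the submitted batch, Server is the
                // instance that rejected it, and Procedure is set if the error came
                // from inside a stored procedure.
                Console.WriteLine("Number={0}, Line={1}, Server={2}, Procedure={3}",
                    ex.Number, ex.LineNumber, ex.Server, ex.Procedure);
            }
        }
    }
}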

    Read the article
