Search Results


  • Windows Phone 7: Building a simple dictionary web client

    - by TechTwaddle
    Like I mentioned in this post a while back, I came across a dictionary web service called Aonaware that serves up word definitions from various dictionaries and is really easy to use. The services page on their website, http://services.aonaware.com/DictService/DictService.asmx, lists all the operations that are supported by the dictionary service. Here they are:

    Word Dictionary Web Service: the following operations are supported (for a formal definition, please review the Service Description).
    Define: Define given word, returning definitions from all dictionaries
    DefineInDict: Define given word, returning definitions from specified dictionary
    DictionaryInfo: Show information about the specified dictionary
    DictionaryList: Returns a list of available dictionaries
    DictionaryListExtended: Returns a list of advanced dictionaries (e.g. translating dictionaries)
    Match: Look for matching words in all dictionaries using the given strategy
    MatchInDict: Look for matching words in the specified dictionary using the given strategy
    ServerInfo: Show remote server information
    StrategyList: Return list of all available strategies on the server

    Follow the links above to get more information on each API. In this post we will be building a simple Windows Phone 7 client which uses this service to get word definitions for words entered by the user. The application will also allow the user to select a dictionary from all the available ones and look up the word definition in that dictionary. So of all the APIs above we will be using only two: DictionaryList() to get a list of all supported dictionaries, and DefineInDict() to get the word definition from a particular dictionary.

    Before we get started, a note to you all; I would have liked to implement this application using concepts from data binding, item templates, data templates etc. I have a basic understanding of what they are but, being a beginner, I am not very comfortable with those topics yet, so I didn't use them. I thought I'd get this version out of the way and maybe in the next version I could give those a try.

    A somewhat scary mock-up of what the final application will look like: Select Dictionary is a list picker control from the Silverlight Toolkit (you need to download and install the toolkit if you haven't already). Below it is a textbox where the user can enter words to look up, and a button beside it to fetch the word definition when clicked. Finally we have a textblock which occupies the remaining area and displays the word definition from the selected dictionary.

    Create a Silverlight application for Windows Phone 7, AonawareDictionaryClient, and add references to the Silverlight Toolkit and the web service. From the Solution Explorer, right-click on References and select Microsoft.Phone.Controls.Toolkit from under the .NET tab. Next, add a reference to the web service. Again right-click on References and this time select Add Service Reference. In the resulting dialog paste the service URL in the Address field and press Go (url –> http://services.aonaware.com/DictService/DictService.asmx). Once the service is discovered, provide a name for the namespace; in this case I've called it AonawareDictionaryService. Press OK. You can now use the classes and functions that are generated in the AonawareDictionaryClient.AonawareDictionaryService namespace. Let's get the UI done now. 
In MainPage.xaml add a namespace declaration to use the toolkit controls, xmlns:toolkit="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone.Controls.Toolkit" the content of LayoutRoot is changed as follows, (sorry, no syntax highlighting in this post) <StackPanel x:Name="TitlePanel" Grid.Row="0" Margin="12,5,0,5">     <TextBlock x:Name="ApplicationTitle" Text="AONAWARE DICTIONARY CLIENT" Style="{StaticResource PhoneTextNormalStyle}"/>     <!--<TextBlock x:Name="PageTitle" Text="page name" Margin="9,-7,0,0" Style="{StaticResource PhoneTextTitle1Style}"/>--> </StackPanel> <!--ContentPanel - place additional content here--> <Grid x:Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0">     <Grid.RowDefinitions>         <RowDefinition Height="Auto"/>         <RowDefinition Height="Auto"/>         <RowDefinition Height="*"/>     </Grid.RowDefinitions>     <toolkit:ListPicker Grid.Column="1" x:Name="listPickerDictionaryList"                         Header="Select Dictionary :">     </toolkit:ListPicker>     <Grid Grid.Row="1" Margin="0,5,0,0">         <Grid.ColumnDefinitions>             <ColumnDefinition Width="*"/>             <ColumnDefinition Width="Auto" />         </Grid.ColumnDefinitions>         <TextBox x:Name="txtboxInputWord" Grid.Column="0" GotFocus="OnTextboxInputWordGotFocus" />         <Button x:Name="btnGo" Grid.Column="1" Click="OnButtonGoClick" >             <Button.Content>                 <Image Source="/images/button-go.png"/>             </Button.Content>         </Button>     </Grid>     <ScrollViewer Grid.Row="2" x:Name="scrollViewer">         <TextBlock  Margin="12,5,12,5"  x:Name="txtBlockWordMeaning" HorizontalAlignment="Stretch"                    VerticalAlignment="Stretch" TextWrapping="Wrap"                    FontSize="26" />     </ScrollViewer> </Grid> I have commented out the PageTitle as it occupies too much valuable space, and the ContentPanel is changed to contain three rows. First row contains the list picker control, second row contains the textbox and the button, and the third row contains a textblock within a scroll viewer. The designer will now be showing the final ui, Now go to MainPage.xaml.cs, and add the following namespace declarations, using Microsoft.Phone.Controls; using AonawareDictionaryClient.AonawareDictionaryService; using System.IO.IsolatedStorage; A class called DictServiceSoapClient would have been created for you in the background when you added a reference to the web service. This class functions as a wrapper to the services exported by the web service. All the web service functions that we saw at the start can be access through this class, or more precisely through an object of this class. Create a data member of type DictServiceSoapClient in the Mainpage class, and a function which initializes it, DictServiceSoapClient DictSvcClient = null; private DictServiceSoapClient GetDictServiceSoapClient() {     if (null == DictSvcClient)     {         DictSvcClient = new DictServiceSoapClient();     }     return DictSvcClient; } We have two major tasks remaining. First, when the application loads we need to populate the list picker with all the supported dictionaries and second, when the user enters a word and clicks on the arrow button we need to fetch the word’s meaning. Populating the List Picker In the OnNavigatingTo event of the MainPage, we call the DictionaryList() api. This can also be done in the OnLoading event handler of the MainPage; not sure if one has an advantage over the other. 
Here’s the code for OnNavigatedTo, protected override void OnNavigatedTo(System.Windows.Navigation.NavigationEventArgs e) {     DictServiceSoapClient client = GetDictServiceSoapClient();     client.DictionaryListCompleted += new EventHandler<DictionaryListCompletedEventArgs>(OnGetDictionaryListCompleted);     client.DictionaryListAsync();     base.OnNavigatedTo(e); } Windows Phone 7 supports only async calls to web services. When we added a reference to the dictionary service, asynchronous versions of all the functions were generated automatically. So in the above function we register a handler to the DictionaryListCompleted event which will occur when the call to DictionaryList() gets a response from the server. Then we call the DictionaryListAsynch() function which is the async version of the DictionaryList() api. The result of this api will be sent to the handler OnGetDictionaryListCompleted(), void OnGetDictionaryListCompleted(object sender, DictionaryListCompletedEventArgs e) {     IsolatedStorageSettings settings = IsolatedStorageSettings.ApplicationSettings;     Dictionary[] listOfDictionaries;     if (e.Error == null)     {         listOfDictionaries = e.Result;         PopulateListPicker(listOfDictionaries, settings);     }     else if (settings.Contains("SavedDictionaryList"))     {         listOfDictionaries = settings["SavedDictionaryList"] as Dictionary[];         PopulateListPicker(listOfDictionaries, settings);     }     else     {         MessageBoxResult res = MessageBox.Show("An error occured while retrieving dictionary list, do you want to try again?", "Error", MessageBoxButton.OKCancel);         if (MessageBoxResult.OK == res)         {             GetDictServiceSoapClient().DictionaryListAsync();         }     }     settings.Save(); } I have used IsolatedStorageSettings to store a few things; the entire dictionary list and the dictionary that is selected when the user exits the application, so that the next time when the user starts the application the current dictionary is set to the last selected value. First we check if the api returned any error, if the error object is null e.Result will contain the list (actually array) of Dictionary type objects. If there was an error, we check the isolated storage settings to see if there is a dictionary list stored from a previous instance of the application and if so, we populate the list picker based on this saved list. Note that in this case there are chances that the dictionary list might be out of date if there have been changes on the server. Finally, if none of these cases are true, we display an error message to the user and try to fetch the list again. 
PopulateListPicker() is passed the array of Dictionary objects and the settings object as well, void PopulateListPicker(Dictionary[] listOfDictionaries, IsolatedStorageSettings settings) {     listPickerDictionaryList.Items.Clear();     foreach (Dictionary dictionary in listOfDictionaries)     {         listPickerDictionaryList.Items.Add(dictionary.Name);     }     settings["SavedDictionaryList"] = listOfDictionaries;     string savedDictionaryName;     if (settings.Contains("SavedDictionary"))     {         savedDictionaryName = settings["SavedDictionary"] as string;     }     else     {         savedDictionaryName = "WordNet (r) 2.0"; //default dictionary, wordnet     }     foreach (string dictName in listPickerDictionaryList.Items)     {         if (dictName == savedDictionaryName)         {             listPickerDictionaryList.SelectedItem = dictName;             break;         }     }     settings["SavedDictionary"] = listPickerDictionaryList.SelectedItem as string; } We first clear all the items from the list picker, add the dictionary names from the array and then create a key in the settings called SavedDictionaryList and store the dictionary list in it. We then check if there is saved dictionary available from a previous instance, if there is, we set it as the selected item in the list picker. And if not, we set “WordNet ® 2.0” as the default dictionary. Before returning, we save the selected dictionary in the “SavedDictionary” key of the isolated storage settings. Fetching word definitions Getting this part done is very similar to the above code. We get the input word from the textbox, call into DefineInDictAsync() to fetch the definition and when DefineInDictAsync completes, we get the result and display it in the textblock. Here is the handler for the button click, private void OnButtonGoClick(object sender, RoutedEventArgs e) {     txtBlockWordMeaning.Text = "Please wait..";     IsolatedStorageSettings settings = IsolatedStorageSettings.ApplicationSettings;     if (txtboxInputWord.Text.Trim().Length <= 0)     {         MessageBox.Show("Please enter a word in the textbox and press 'Go'");     }     else     {         Dictionary[] listOfDictionaries = settings["SavedDictionaryList"] as Dictionary[];         string selectedDictionary = listPickerDictionaryList.SelectedItem.ToString();         string dictId = "wn"; //default dictionary is wordnet (wn is the dict id)         foreach (Dictionary dict in listOfDictionaries)         {             if (dict.Name == selectedDictionary)             {                 dictId = dict.Id;                 break;             }         }         DictServiceSoapClient client = GetDictServiceSoapClient();         client.DefineInDictCompleted += new EventHandler<DefineInDictCompletedEventArgs>(OnDefineInDictCompleted);         client.DefineInDictAsync(dictId, txtboxInputWord.Text.Trim());     } } We validate the input and then select the dictionary id based on the currently selected dictionary. We need the dictionary id because the api DefineInDict() expects the dictionary identifier and not the dictionary name. We could very well have stored the dictionary id in isolated storage settings too. Again, same as before, we register a event handler for the DefineInDictCompleted event and call the DefineInDictAsync() method passing in the dictionary id and the input word. 
void OnDefineInDictCompleted(object sender, DefineInDictCompletedEventArgs e) {     WordDefinition wd = e.Result;     scrollViewer.ScrollToVerticalOffset(0.0f);     if (wd.Definitions.Length == 0)     {         txtBlockWordMeaning.Text = String.Format("No definitions were found for '{0}' in '{1}'", txtboxInputWord.Text.Trim(), listPickerDictionaryList.SelectedItem.ToString().Trim());     }     else     {         foreach (Definition def in wd.Definitions)         {             string str = def.WordDefinition;             str = str.Replace("  ", " "); //some formatting             txtBlockWordMeaning.Text = str;         }     } } When the api completes, e.Result will contain a WordDefnition object. This class is also generated in the background while adding the service reference. We check the word definitions within this class to see if any results were returned, if not, we display a message to the user in the textblock. If a definition was found the text on the textblock is set to display the definition of the word. Adding final touches, we now need to save the current dictionary when the application exits. A small but useful thing is selecting the entire word in the input textbox when the user selects it. This makes sure that if the user has looked up a definition for a really long word, he doesn’t have to press ‘clear’ too many times to enter the next word, protected override void OnNavigatingFrom(System.Windows.Navigation.NavigatingCancelEventArgs e) {     IsolatedStorageSettings settings = IsolatedStorageSettings.ApplicationSettings;     settings["SavedDictionary"] = listPickerDictionaryList.SelectedItem as string;     settings.Save();     base.OnNavigatingFrom(e); } private void OnTextboxInputWordGotFocus(object sender, RoutedEventArgs e) {     TextBox txtbox = sender as TextBox;     if (txtbox.Text.Trim().Length > 0)     {         txtbox.SelectionStart = 0;         txtbox.SelectionLength = txtbox.Text.Length;     } } OnNavigatingFrom() is called whenever you navigate away from the MainPage, since our application contains only one page that would mean that it is exiting. I leave you with a short video of the application in action, but before that if you have any suggestions on how to make the code better and improve it please do leave a comment. Until next time…
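One small tweak worth considering for the code above (a sketch only, using the generated proxy types already referenced in this post): attach the two *Completed event handlers once, when the DictServiceSoapClient is first created, instead of in OnNavigatedTo and OnButtonGoClick. That way repeated button presses do not register the DefineInDictCompleted handler again and again on the same cached client.

DictServiceSoapClient DictSvcClient = null;

private DictServiceSoapClient GetDictServiceSoapClient()
{
    if (null == DictSvcClient)
    {
        DictSvcClient = new DictServiceSoapClient();

        // Hook the completed events a single time, when the client is created.
        DictSvcClient.DictionaryListCompleted +=
            new EventHandler<DictionaryListCompletedEventArgs>(OnGetDictionaryListCompleted);
        DictSvcClient.DefineInDictCompleted +=
            new EventHandler<DefineInDictCompletedEventArgs>(OnDefineInDictCompleted);
    }
    return DictSvcClient;
}

OnNavigatedTo and OnButtonGoClick would then only need to call DictionaryListAsync() and DefineInDictAsync() respectively.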


  • A Communication System for XAML Applications

    - by psheriff
    In any application, you want to keep the coupling between any two or more objects as loose as possible. Coupling happens when one class contains a property that is used in another class, or uses another class in one of its methods. If you have this situation, then this is called strong or tight coupling. One popular design pattern to help with keeping objects loosely coupled is called the Mediator design pattern. The basics of this pattern are very simple; avoid one object directly talking to another object, and instead use another class to mediate between the two. As with most of my blog posts, the purpose is to introduce you to a simple approach to using a message broker, not all of the fine details. IPDSAMessageBroker Interface As with most implementations of a design pattern, you typically start with an interface or an abstract base class. In this particular instance, an Interface will work just fine. The interface for our Message Broker class just contains a single method “SendMessage” and one event “MessageReceived”. public delegate void MessageReceivedEventHandler( object sender, PDSAMessageBrokerEventArgs e); public interface IPDSAMessageBroker{  void SendMessage(PDSAMessageBrokerMessage msg);   event MessageReceivedEventHandler MessageReceived;} PDSAMessageBrokerMessage Class As you can see in the interface, the SendMessage method requires a type of PDSAMessageBrokerMessage to be passed to it. This class simply has a MessageName which is a ‘string’ type and a MessageBody property which is of the type ‘object’ so you can pass whatever you want in the body. You might pass a string in the body, or a complete Customer object. The MessageName property will help the receiver of the message know what is in the MessageBody property. public class PDSAMessageBrokerMessage{  public PDSAMessageBrokerMessage()  {  }   public PDSAMessageBrokerMessage(string name, object body)  {    MessageName = name;    MessageBody = body;  }   public string MessageName { get; set; }   public object MessageBody { get; set; }} PDSAMessageBrokerEventArgs Class As our message broker class will be raising an event that others can respond to, it is a good idea to create your own event argument class. This class will inherit from the System.EventArgs class and add a couple of additional properties. The properties are the MessageName and Message. The MessageName property is simply a string value. The Message property is a type of a PDSAMessageBrokerMessage class. public class PDSAMessageBrokerEventArgs : EventArgs{  public PDSAMessageBrokerEventArgs()  {  }   public PDSAMessageBrokerEventArgs(string name,     PDSAMessageBrokerMessage msg)  {    MessageName = name;    Message = msg;  }   public string MessageName { get; set; }   public PDSAMessageBrokerMessage Message { get; set; }} PDSAMessageBroker Class Now that you have an interface class and a class to pass a message through an event, it is time to create your actual PDSAMessageBroker class. This class implements the SendMessage method and will also create the event handler for the delegate created in your Interface. 
public class PDSAMessageBroker : IPDSAMessageBroker{  public void SendMessage(PDSAMessageBrokerMessage msg)  {    PDSAMessageBrokerEventArgs args;     args = new PDSAMessageBrokerEventArgs(      msg.MessageName, msg);     RaiseMessageReceived(args);  }   public event MessageReceivedEventHandler MessageReceived;   protected void RaiseMessageReceived(    PDSAMessageBrokerEventArgs e)  {    if (null != MessageReceived)      MessageReceived(this, e);  }} The SendMessage method will take a PDSAMessageBrokerMessage object as an argument. It then creates an instance of a PDSAMessageBrokerEventArgs class, passing to the constructor two items: the MessageName from the PDSAMessageBrokerMessage object and also the object itself. It may seem a little redundant to pass in the message name when that same message name is part of the message, but it does make consuming the event and checking for the message name a little cleaner – as you will see in the next section. Create a Global Message Broker In your WPF application, create an instance of this message broker class in the App class located in the App.xaml file. Create a public property in the App class and create a new instance of that class in the OnStartUp event procedure as shown in the following code: public partial class App : Application{  public PDSAMessageBroker MessageBroker { get; set; }   protected override void OnStartup(StartupEventArgs e)  {    base.OnStartup(e);     MessageBroker = new PDSAMessageBroker();  }} Sending and Receiving Messages Let’s assume you have a user control that you load into a control on your main window and you want to send a message from that user control to the main window. You might have the main window display a message box, or put a string into a status bar as shown in Figure 1. Figure 1: The main window can receive and send messages The first thing you do in the main window is to hook up an event procedure to the MessageReceived event of the global message broker. This is done in the constructor of the main window: public MainWindow(){  InitializeComponent();   (Application.Current as App).MessageBroker.     MessageReceived += new MessageReceivedEventHandler(       MessageBroker_MessageReceived);} One piece of code you might not be familiar with is accessing a property defined in the App class of your XAML application. Within the App.Xaml file is a class named App that inherits from the Application object. You access the global instance of this App class by using Application.Current. You cast Application.Current to ‘App’ prior to accessing any of the public properties or methods you defined in the App class. Thus, the code (Application.Current as App).MessageBroker, allows you to get at the MessageBroker property defined in the App class. In the MessageReceived event procedure in the main window (shown below) you can now check to see if the MessageName property of the PDSAMessageBrokerEventArgs is equal to “StatusBar” and if it is, then display the message body into the status bar text block control. void MessageBroker_MessageReceived(object sender,   PDSAMessageBrokerEventArgs e){  switch (e.MessageName)  {    case "StatusBar":      tbStatus.Text = e.Message.MessageBody.ToString();      break;  }} In the Page 1 user control’s Loaded event procedure you will send the message “StatusBar” through the global message broker to any listener using the following code: private void UserControl_Loaded(object sender,  RoutedEventArgs e){  // Send Status Message  (Application.Current as App).MessageBroker.    
SendMessage(new PDSAMessageBrokerMessage("StatusBar",      "This is Page 1"));} Since the main window is listening for the message ‘StatusBar’, it will display the value “This is Page 1” in the status bar at the bottom of the main window. Sending a Message to a User Control The previous example sent a message from the user control to the main window. You can also send messages from the main window to any listener as well. Remember that the global message broker is really just a broadcaster to anyone who has hooked into the MessageReceived event. In the constructor of the user control named ucPage1 you can hook into the global message broker’s MessageReceived event. You can then listen for any messages that are sent to this control by using a similar switch-case structure like that in the main window. public ucPage1(){  InitializeComponent();   // Hook to the Global Message Broker  (Application.Current as App).MessageBroker.    MessageReceived += new MessageReceivedEventHandler(      MessageBroker_MessageReceived);} void MessageBroker_MessageReceived(object sender,  PDSAMessageBrokerEventArgs e){  // Look for messages intended for Page 1  switch (e.MessageName)  {    case "ForPage1":      MessageBox.Show(e.Message.MessageBody.ToString());      break;  }} Once the ucPage1 user control has been loaded into the main window you can then send a message using the following code: private void btnSendToPage1_Click(object sender,  RoutedEventArgs e){  PDSAMessageBrokerMessage arg =     new PDSAMessageBrokerMessage();   arg.MessageName = "ForPage1";  arg.MessageBody = "Message For Page 1";   // Send a message to Page 1  (Application.Current as App).MessageBroker.SendMessage(arg);} Since the MessageName matches what is in the ucPage1 MessageReceived event procedure, ucPage1 can do anything in response to that event. It is important to note that when the message gets sent it is sent to all MessageReceived event procedures, not just the one that is looking for a message called “ForPage1”. If the user control ucPage1 is not loaded and this message is broadcast, but no other code is listening for it, then it is simply ignored. Remove Event Handler In each class where you add an event handler to the MessageReceived event you need to make sure to remove those event handlers when you are done. Failure to do so can cause a strong reference to the class and thus not allow that object to be garbage collected. In each of your user control’s make sure in the Unloaded event to remove the event handler. private void UserControl_Unloaded(object sender, RoutedEventArgs e){  if (_MessageBroker != null)    _MessageBroker.MessageReceived -=         _MessageBroker_MessageReceived;} Problems with Message Brokering As with most “global” classes or classes that hook up events to other classes, garbage collection is something you need to consider. Just the simple act of hooking up an event procedure to a global event handler creates a reference between your user control and the message broker in the App class. This means that even when your user control is removed from your UI, the class will still be in memory because of the reference to the message broker. This can cause messages to still being handled even though the UI is not being displayed. It is up to you to make sure you remove those event handlers as discussed in the previous section. If you don’t, then the garbage collector cannot release those objects. 
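One detail worth spelling out: the Unloaded handler above uses a _MessageBroker field. Here is a minimal sketch of how the user control might declare and populate that field (assumed here, since the original listing only shows the constructor wiring through Application.Current):

public partial class ucPage1 : UserControl
{
    // Field used by the Unloaded handler shown above.
    private PDSAMessageBroker _MessageBroker;

    public ucPage1()
    {
        InitializeComponent();

        // Cache the global broker so that subscribe and unsubscribe
        // operate on the same instance.
        _MessageBroker = (Application.Current as App).MessageBroker;
        _MessageBroker.MessageReceived +=
            new MessageReceivedEventHandler(MessageBroker_MessageReceived);
    }
}

The same applies to the main window: it hooks MessageBroker_MessageReceived in its constructor, so it should detach that handler (for example when the window closes) for the same reason.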
Instead of using events to send messages from one object to another, you might consider registering your objects with a central message broker. The message broker then becomes a collection class into which you pass an object along with the messages that object wishes to receive. You end up with the same problem, however: you have to un-register your objects, otherwise they still stay in memory. To alleviate this problem you can look into using the WeakReference class as a way to store your objects so they can be garbage collected if need be. Discussing weak references is beyond the scope of this post, but you can look the topic up on the web.

Summary
In this blog post you learned how to create a simple message broker system that allows you to send messages from one object to another without having to reference objects directly. This reduces the coupling between objects in your application. You do need to remember to get rid of any event handlers before your objects go out of scope, or you run the risk of memory leaks and of events being called even though you can no longer access the object that is responding to that event.

NOTE: You can download the sample code for this article by visiting my website at http://www.pdsa.com/downloads. Select "Tips & Tricks", then select "A Communication System for XAML Applications" from the drop down list.
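For anyone wanting to explore the WeakReference approach mentioned above, the following is a rough, hypothetical sketch (not part of the downloadable sample): a broker that keeps its listeners in a list of WeakReference objects, so registering with the broker does not by itself keep a listener alive. The IPDSAMessageListener interface and the Register method are invented here purely for illustration.

using System;
using System.Collections.Generic;

// Hypothetical listener interface the broker calls back on.
public interface IPDSAMessageListener
{
    void OnMessageReceived(PDSAMessageBrokerMessage msg);
}

public class PDSAWeakMessageBroker
{
    // Listeners are held through WeakReference, so the broker alone
    // never prevents a listener from being garbage collected.
    private readonly List<WeakReference> _listeners = new List<WeakReference>();

    public void Register(IPDSAMessageListener listener)
    {
        _listeners.Add(new WeakReference(listener));
    }

    public void SendMessage(PDSAMessageBrokerMessage msg)
    {
        foreach (WeakReference reference in _listeners.ToArray())
        {
            IPDSAMessageListener listener = reference.Target as IPDSAMessageListener;
            if (listener != null)
                listener.OnMessageReceived(msg);   // still alive: deliver the message
            else
                _listeners.Remove(reference);      // collected: prune the dead entry
        }
    }
}

Listeners registered this way still receive every message while they are alive, but a user control that has been unloaded and is otherwise unreferenced can be collected without having to remember to un-register.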


  • MSCC: Global Windows Azure Bootcamp

    Mauritius participated and contributed to the Global Windows Azure Bootcamp 2014 (GWAB). Again! And this time stronger than ever, and together with 137 other locations in 56 countries world-wide. We had 62 named registrations, 7 guest additions and approximately 10 offline participants prior to the event day. Most interestingly the organisation of the GWAB through the MSCC helped to increased the number of craftsmen. The Mauritius Software Craftsmanship Community has currently 138 registered members - in less than one year! Only with those numbers we can proudly say that all the preparations and hard work towards this event already paid off. Personally, I'm really grateful that we had this kind of response and the feedback from some attendees confirmed that the MSCC is on the right track here on Cyber Island Mauritius. Inspired and motivated by the success of this event, rest assured that there will be more public events like the GWAB. This time it took some time to reflect on our meetup, following my first impression right on spot: "Wow, what an experience to organise and participate in this global event. Overall, I've been very pleased with the preparations and the event itself. Surely, there have been some nicks that we have to address and to improve for future activities like this. Quite frankly, we are not professional event organisers (not yet) but we learned a lot over the past couple of days. A big Thank You to our event sponsors, namely Microsoft Indian Ocean Islands & French Pacific, Ceridian Mauritius and Emtel. Without them this event wouldn't have happened - Thank You! And to the cool team members of Microsoft Student Partners (MSPs). You geeks did a great job! Thanks!" So, how many attendees did we actually have? 61! - Awesome - 61 cloud computing instances to help on the research of diabetes. During Saturday afternoon there was even an online publication on L'Express: Les développeurs mauriciens se joignent au combat contre le diabète Reactions of other attendees Don't take my word for granted... Here are some impressions and feedback from our participants: "Awesome event, really appreciated the presentations :-)" -- Kevin on event comments "very interesting and enriching." -- Diana on event comments "#gwab #gwabmru 2014 great success. Looking forward for gwab 2015" -- Wasiim on Twitter "Was there till the end. Awesome Event. I'll surely join upcoming meetup sessions :)" -- Luchmun on event comments "#gwabmru was not that cool. left early" -- Mohammad on Twitter The overall feedback is positive but we are absolutely aware that there quite a number of problems we had to face. We are already looking into that and ideas / action plans on how we will be able to improve it for future events. The sessions We started the day with welcoming speeches by Thierry Coret, Sr. Marketing Manager of Microsoft Indian Ocean Islands & French Pacific and Vidia Mooneegan, Managing Director and Sr. Vice President of Ceridian Mauritius. The clear emphasis was on the endless possibilities of cloud computing and how it can enable any kind of sectors here in the country. Then it was about time to set up the cloud computing services in order to contribute each attendees cloud computing resources to the global research of diabetes, a step by step guide presented by Arnaud Meslier, Technical Evangelist at Microsoft. Given a rendering package and a configuration file it was very interesting to follow the single steps in Windows Azure. 
Also, during the day we were not sure whether the set up had been correctly, as Mauritius didn't show up on the results board - which should have been the case after approximately 20 to 30 minutes. Anyways, let the minions work... Next, Arnaud gave a brief overview of the variety of services Windows Azure has to offer. Whether you need a development environment for your websites or mobiles app, running a virtual machine with your existing applications or simply putting a SQL database online. No worries, Windows Azure has the right packages available and the online management portal is really easy t handle. After this, we got a little bit more business oriented while Wasiim Hosenbocus, employee at Ceridian, took the attendees through the inerts of a real-life application, and demoed a couple of the existing features. He did a great job by showing how the different services of Windows Azure can be created and pulled together. After the lunch break it is always tough to keep the audience awake... And it was my turn. I gave a brief overview on operating and managing a SQL database on Windows Azure. Well, there are actually two options available and depending on your individual requirements you should be aware of both. The simpler version is called SQL Database and while provisioning only takes a couple of seconds, you should take into consideration that SQL Database has a number of constraints, like limitations on the actual database size - up to 5 GB as web edition and up to 150 GB maximum as business edition -, among others. Next, it was Chervine Bhiwoo's session on Windows Azure Mobile Services. It was absolutely amazing to see that the mobiles services directly offers you various project templates, like Windows 8 Store App, Android app, iOS app, and even Xamarin cross-platform app development. So, within a couple of minutes you can have your first mobile app active and running on Windows Azure. Furthermore, Chervine showed the attendees that adding another user interface, like Web Sites running on ASP.NET MVC 4 only takes a couple of minutes, too. And last but not least, we rounded up the day with Windows Azure Websites and hosting of Virtual Machines presented by some members of the local Microsoft Students Partners programme. Surely, one of the big advantages using Windows Azure is the availability of pre-defined installation packages of known web applications, like WordPress, Joomla!, or Ghost. Compared to running your own web site with a traditional web hoster it is surely en par, and depending on your personal level of expertise, Windows Azure provides you more liberty in terms of configuration than maybe a shared hosting environment. Running a pre-defined web application is one thing but in case that you would like to have more control over your hosting environment it is highly advised to opt for a virtual machine. Provisioning of an Ubuntu 12.04 LTS system was very simple to do but it takes some more minutes than you might expect. So, please be patient and take your time while Windows Azure gets everything in place for you. Afterwards, you can use a SecureShell (ssh) client like Putty in case of a Linux-based machine, or Remote Desktop Services when operating a Windows Server system to log in into your virtual machine. At the end of the day we had a great Q&A session and we finalised the event with our raffle of goodies. Participation in the raffle was bound to submission of the session survey and most gratefully we had a give-away for everyone. 
What a nice coincidence to finish of the day. Note: All session slides (and demo codes) will be made available on the MSCC event page. Please, check the Files section over there. (Some) Visual impressions from the event Just to give you an idea about what has happened during the GWAB 2014 at Ebene... Speakers and Microsoft Student Partners are getting ready for the Global Windows Azure Bootcamp 2014 GWAB 2014 attendees are fully integrated into the hands-on-labs and setting up their individuals cloud computing services 60 attendees at the GWAB 2014. Despite some technical difficulties we had a great time running the event GWAB 2014: Using the lunch break for networking and exchange of ideas - Great conversations and topics amongst attendees There are more pictures on the original event page: Questions & Answers Following are a couple of questions which have been asked and didn't get an answer during the event: Q: Is it possible to upload static pages via FTP? A: Yes, you can. Have a look at the right side column on the dashboard of your website. There you'll find information about the FTP and SFTP host names. You can use any FTP client, like ie. FileZilla to log in. FTP also gives you access to your log files. Q: What are the size limitations on SQL Database? A: 5 GB on the Web Edition, and 150 GB on the business edition. A maximum 150 databases (inclusing 'master') per SQL Database server. More details here: General Guidelines and Limitations (Azure SQL Database) Q: What's the maximum size of a SQL Server running in a Virtual Machine? A: The maximum Windows Azure VM has currently 8 CPU cores, 14 or 56 GB of RAM and 16x 1 TB hard drives. More details here: Virtual Machine and Cloud Service Sizes for Azure Q: How can we register for Windows Azure? A: Mauritius is currently not listed for phone verification. Please get in touch with Arnaud Meslier at Microsoft IOI & FP Q: Can I use my own domain name for Windows Azure websites? A: Yes, you can. But this might require to upscale your account to Standard. In case that I missed a question and answer, please use the comment section at the end of the article. Thanks! Final results Every participant was instructed during the hands-on-lab session on how to set up a cloud computing service in their account. Of course, I won't keep the results from you... Global Azure Lab GWAB 2014: Our cloud computing contribution to the research of diabetes And I would say Mauritius did a good job! Upcoming Events What are the upcoming events here in Mauritius? So far, we have the following ones (incomplete list as usual) in chronological order: Launch of Microsoft SQL Server 2014 (15.4.2014) Corsair Hackers Reboot (19.4.2014) WebCup (TBA ~ June 2014) Developers Conference (TBA ~ July 2014) Linuxfest 2014 (TBA ~ November 2014) Hopefully, there will be more announcements during the next couple of weeks and months. If you know about any other event, like a bootcamp, a code challenge or hackathon here in Mauritius, please drop me a note in the comment section below this article. Thanks! Networking and job/project opportunities Despite having technical presentations on Windows Azure an event like this always offers a great bunch of possibilities and opportunities to get in touch with new people in IT, have an exchange of experience with other like-minded people. 
As I already wrote about Communities - The importance of exchange and discussion - I had a great conversation with representatives of the University des Mascareignes which are currently embracing cloud infrastructure and cloud computing for their various campuses in the Indian Ocean. As for the MSCC it would be a great experience to stay in touch with them, and to work on upcoming, common activities. Furthermore, I had a very good conversation with Thierry and Ludovic of Microsoft IOI & FP on the necessity of user groups and IT communities here on the island. It's great to see that the MSCC is currently on a run and that local companies are sharing our thoughts on promoting IT careers and exchange of IT knowledge in such an open way. I'm also looking forward to be able to participate and to contribute on more events in the near future. My resume of the day We learned a lot today and there is always room for improvement! It was an awesome event and quite frankly it was a pleasure to spend the day with so many enthuastic IT people in the same room. It was a great experience to organise such event locally and participate on a global scale to support the GlyQ-IQ Technology in their research on diabetes. I was so pleased to see the involvement of new MSCC members in taking the opportunity to share and learn about the power of cloud computing. The Mauritius Software Craftsmanship Community is on the right way and this year's bootcamp on Windows Azure only marked the beginning of our journey. Thank you to our sponsors and my kudos to the MSPs! Update: Media coverage The event has been reported in local media, too. Following are some resources: Orange - Local - Business: Le cloud, pour des recherches approfondies sur le diabète Maurice Info.mu: Le cloud, pour des recherches approfondies sur le diabète Le Quotidien Pg 2: Global Windows Azure Bootcamp 2014 - Le cloud pour des recherches approfondies sur le diabète The Observer Pg 12: Le cloud, pour des recherches approfondies sur le diabète


  • Windows Azure Service Bus Splitter and Aggregator

    - by Alan Smith
    This article will cover basic implementations of the Splitter and Aggregator patterns using the Windows Azure Service Bus. The content will be included in the next release of the “Windows Azure Service Bus Developer Guide”, along with some other patterns I am working on. I’ve taken the pattern descriptions from the book “Enterprise Integration Patterns” by Gregor Hohpe. I bought a copy of the book in 2004, and recently dusted it off when I started to look at implementing the patterns on the Windows Azure Service Bus. Gregor also presented a session in 2011, “Enterprise Integration Patterns: Past, Present and Future”, which is well worth a look. I’ll be covering more patterns in the coming weeks; I’m currently working on Wire-Tap and Scatter-Gather. There will no doubt be a section on implementing these patterns in my “SOA, Connectivity and Integration using the Windows Azure Service Bus” course.

    There are a number of scenarios where a message needs to be divided into a number of sub messages, and also where a number of sub messages need to be combined to form one message. The splitter and aggregator patterns provide a definition of how this can be achieved. This section will focus on the implementation of basic splitter and aggregator patterns using the Windows Azure Service Bus direct programming model. In BizTalk Server, receive pipelines are typically used to implement the splitter pattern, with sequential convoy orchestrations often used to aggregate messages. In the current release of the Service Bus there is no functionality in the direct programming model that implements these patterns, so it is up to the developer to implement them in the applications that send and receive messages.

    Splitter
    A message splitter takes a message and splits it into a number of sub messages. As there are different scenarios for how a message can be split into sub messages, message splitters are implemented using different algorithms. The Enterprise Integration Patterns book describes the Splitter pattern as follows: How can we process a message if it contains multiple elements, each of which may have to be processed in a different way? Use a Splitter to break out the composite message into a series of individual messages, each containing data related to one item. The Enterprise Integration Patterns website provides a description of the Splitter pattern here. In some scenarios a batch message could be split into the sub messages that are contained in the batch. The splitting of a message could be based on the message type of the sub message, or on the trading partner that the sub message is to be sent to.

    Aggregator
    An aggregator takes a stream of related messages and combines them together to form one message. The Enterprise Integration Patterns book describes the Aggregator pattern as follows: How do we combine the results of individual, but related messages so that they can be processed as a whole? Use a stateful filter, an Aggregator, to collect and store individual messages until a complete set of related messages has been received. Then, the Aggregator publishes a single message distilled from the individual messages. The Enterprise Integration Patterns website provides a description of the Aggregator pattern here. A common example of the need for an aggregator is in scenarios where a stream of messages needs to be combined into a daily batch to be sent to a legacy line-of-business application. 
The BizTalk Server EDI functionality provides support for batching messages in this way using a sequential convoy orchestration. Scenario The scenario for this implementation of the splitter and aggregator patterns is the sending and receiving of large messages using a Service Bus queue. In the current release, the Windows Azure Service Bus currently supports a maximum message size of 256 KB, with a maximum header size of 64 KB. This leaves a safe maximum body size of 192 KB. The BrokeredMessage class will support messages larger than 256 KB; in fact the Size property is of type long, implying that very large messages may be supported at some point in the future. The 256 KB size restriction is set in the service bus components that are deployed in the Windows Azure data centers. One of the ways of working around this size restriction is to split large messages into a sequence of smaller sub messages in the sending application, send them via a queue, and then reassemble them in the receiving application. This scenario will be used to demonstrate the pattern implementations. Implementation The splitter and aggregator will be used to provide functionality to send and receive large messages over the Windows Azure Service Bus. In order to make the implementations generic and reusable they will be implemented as a class library. The splitter will be implemented in the LargeMessageSender class and the aggregator in the LargeMessageReceiver class. A class diagram showing the two classes is shown below. Implementing the Splitter The splitter will take a large brokered message, and split the messages into a sequence of smaller sub-messages that can be transmitted over the service bus messaging entities. The LargeMessageSender class provides a Send method that takes a large brokered message as a parameter. The implementation of the class is shown below; console output has been added to provide details of the splitting operation. public class LargeMessageSender {     private static int SubMessageBodySize = 192 * 1024;     private QueueClient m_QueueClient;       public LargeMessageSender(QueueClient queueClient)     {         m_QueueClient = queueClient;     }       public void Send(BrokeredMessage message)     {         // Calculate the number of sub messages required.         long messageBodySize = message.Size;         int nrSubMessages = (int)(messageBodySize / SubMessageBodySize);         if (messageBodySize % SubMessageBodySize != 0)         {             nrSubMessages++;         }           // Create a unique session Id.         string sessionId = Guid.NewGuid().ToString();         Console.WriteLine("Message session Id: " + sessionId);         Console.Write("Sending {0} sub-messages", nrSubMessages);           Stream bodyStream = message.GetBody<Stream>();         for (int streamOffest = 0; streamOffest < messageBodySize;             streamOffest += SubMessageBodySize)         {                                     // Get the stream chunk from the large message             long arraySize = (messageBodySize - streamOffest) > SubMessageBodySize                 ? 
SubMessageBodySize : messageBodySize - streamOffest;             byte[] subMessageBytes = new byte[arraySize];             int result = bodyStream.Read(subMessageBytes, 0, (int)arraySize);             MemoryStream subMessageStream = new MemoryStream(subMessageBytes);               // Create a new message             BrokeredMessage subMessage = new BrokeredMessage(subMessageStream, true);             subMessage.SessionId = sessionId;               // Send the message             m_QueueClient.Send(subMessage);             Console.Write(".");         }         Console.WriteLine("Done!");     }} The LargeMessageSender class is initialized with a QueueClient that is created by the sending application. When the large message is sent, the number of sub messages is calculated based on the size of the body of the large message. A unique session Id is created to allow the sub messages to be sent as a message session, this session Id will be used for correlation in the aggregator. A for loop in then used to create the sequence of sub messages by creating chunks of data from the stream of the large message. The sub messages are then sent to the queue using the QueueClient. As sessions are used to correlate the messages, the queue used for message exchange must be created with the RequiresSession property set to true. Implementing the Aggregator The aggregator will receive the sub messages in the message session that was created by the splitter, and combine them to form a single, large message. The aggregator is implemented in the LargeMessageReceiver class, with a Receive method that returns a BrokeredMessage. The implementation of the class is shown below; console output has been added to provide details of the splitting operation.   public class LargeMessageReceiver {     private QueueClient m_QueueClient;       public LargeMessageReceiver(QueueClient queueClient)     {         m_QueueClient = queueClient;     }       public BrokeredMessage Receive()     {         // Create a memory stream to store the large message body.         MemoryStream largeMessageStream = new MemoryStream();           // Accept a message session from the queue.         MessageSession session = m_QueueClient.AcceptMessageSession();         Console.WriteLine("Message session Id: " + session.SessionId);         Console.Write("Receiving sub messages");           while (true)         {             // Receive a sub message             BrokeredMessage subMessage = session.Receive(TimeSpan.FromSeconds(5));               if (subMessage != null)             {                 // Copy the sub message body to the large message stream.                 Stream subMessageStream = subMessage.GetBody<Stream>();                 subMessageStream.CopyTo(largeMessageStream);                   // Mark the message as complete.                 subMessage.Complete();                 Console.Write(".");             }             else             {                 // The last message in the sequence is our completeness criteria.                 Console.WriteLine("Done!");                 break;             }         }                     // Create an aggregated message from the large message stream.         BrokeredMessage largeMessage = new BrokeredMessage(largeMessageStream, true);         return largeMessage;     } }   The LargeMessageReceiver initialized using a QueueClient that is created by the receiving application. The receive method creates a memory stream that will be used to aggregate the large message body. 
The AcceptMessageSession method on the QueueClient is then called, which will wait for the first message in a message session to become available on the queue. As the AcceptMessageSession can throw a timeout exception if no message is available on the queue after 60 seconds, a real-world implementation should handle this accordingly. Once the message session as accepted, the sub messages in the session are received, and their message body streams copied to the memory stream. Once all the messages have been received, the memory stream is used to create a large message, that is then returned to the receiving application. Testing the Implementation The splitter and aggregator are tested by creating a message sender and message receiver application. The payload for the large message will be one of the webcast video files from http://www.cloudcasts.net/, the file size is 9,697 KB, well over the 256 KB threshold imposed by the Service Bus. As the splitter and aggregator are implemented in a separate class library, the code used in the sender and receiver console is fairly basic. The implementation of the main method of the sending application is shown below.   static void Main(string[] args) {     // Create a token provider with the relevant credentials.     TokenProvider credentials =         TokenProvider.CreateSharedSecretTokenProvider         (AccountDetails.Name, AccountDetails.Key);       // Create a URI for the serivce bus.     Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri         ("sb", AccountDetails.Namespace, string.Empty);       // Create the MessagingFactory     MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);       // Use the MessagingFactory to create a queue client     QueueClient queueClient = factory.CreateQueueClient(AccountDetails.QueueName);       // Open the input file.     FileStream fileStream = new FileStream(AccountDetails.TestFile, FileMode.Open);       // Create a BrokeredMessage for the file.     BrokeredMessage largeMessage = new BrokeredMessage(fileStream, true);       Console.WriteLine("Sending: " + AccountDetails.TestFile);     Console.WriteLine("Message body size: " + largeMessage.Size);     Console.WriteLine();         // Send the message with a LargeMessageSender     LargeMessageSender sender = new LargeMessageSender(queueClient);     sender.Send(largeMessage);       // Close the messaging facory.     factory.Close();  } The implementation of the main method of the receiving application is shown below. static void Main(string[] args) {       // Create a token provider with the relevant credentials.     TokenProvider credentials =         TokenProvider.CreateSharedSecretTokenProvider         (AccountDetails.Name, AccountDetails.Key);       // Create a URI for the serivce bus.     Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri         ("sb", AccountDetails.Namespace, string.Empty);       // Create the MessagingFactory     MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);       // Use the MessagingFactory to create a queue client     QueueClient queueClient = factory.CreateQueueClient(AccountDetails.QueueName);       // Create a LargeMessageReceiver and receive the message.     
LargeMessageReceiver receiver = new LargeMessageReceiver(queueClient);     BrokeredMessage largeMessage = receiver.Receive();       Console.WriteLine("Received message");     Console.WriteLine("Message body size: " + largeMessage.Size);       string testFile = AccountDetails.TestFile.Replace(@"\In\", @"\Out\");     Console.WriteLine("Saving file: " + testFile);       // Save the message body as a file.     Stream largeMessageStream = largeMessage.GetBody<Stream>();     largeMessageStream.Seek(0, SeekOrigin.Begin);     FileStream fileOut = new FileStream(testFile, FileMode.Create);     largeMessageStream.CopyTo(fileOut);     fileOut.Close();       Console.WriteLine("Done!"); } In order to test the application, the sending application is executed, which will use the LargeMessageSender class to split the message and place it on the queue. The output of the sender console is shown below. The console shows that the body size of the large message was 9,929,365 bytes, and the message was sent as a sequence of 51 sub messages. When the receiving application is executed the results are shown below. The console application shows that the aggregator has received the 51 messages from the message sequence that was creating in the sending application. The messages have been aggregated to form a massage with a body of 9,929,365 bytes, which is the same as the original large message. The message body is then saved as a file. Improvements to the Implementation The splitter and aggregator patterns in this implementation were created in order to show the usage of the patterns in a demo, which they do quite well. When implementing these patterns in a real-world scenario there are a number of improvements that could be made to the design. Copying Message Header Properties When sending a large message using these classes, it would be great if the message header properties in the message that was received were copied from the message that was sent. The sending application may well add information to the message context that will be required in the receiving application. When the sub messages are created in the splitter, the header properties in the first message could be set to the values in the original large message. The aggregator could then used the values from this first sub message to set the properties in the message header of the large message during the aggregation process. Using Asynchronous Methods The current implementation uses the synchronous send and receive methods of the QueueClient class. It would be much more performant to use the asynchronous methods, however doing so may well affect the sequence in which the sub messages are enqueued, which would require the implementation of a resequencer in the aggregator to restore the correct message sequence. Handling Exceptions In order to keep the code readable no exception handling was added to the implementations. In a real-world scenario exceptions should be handled accordingly.
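As a rough sketch of the first improvement above (copying message header properties), a small helper could copy the custom properties of one BrokeredMessage onto another; the helper name and the places it is called from are my own illustration and are not part of the developer guide sample.

using System.Collections.Generic;
using Microsoft.ServiceBus.Messaging;

public static class MessageHeaderCopier
{
    // Copies the custom header properties from one brokered message to another.
    public static void CopyProperties(BrokeredMessage source, BrokeredMessage target)
    {
        foreach (KeyValuePair<string, object> property in source.Properties)
        {
            target.Properties[property.Key] = property.Value;
        }
    }
}

The splitter could call CopyProperties(message, subMessage) when it creates the first sub message (streamOffest == 0), and the aggregator could call CopyProperties(firstSubMessage, largeMessage) on the first sub message it receives before returning the aggregated message.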


  • Pain Comes Instantly

    - by user701213
    When I look back at recent blog entries – many of which are not all that current (more on where my available writing time is going later) – I am struck by how many of them focus on public policy or legislative issues instead of, say, the latest nefarious cyberattack or exploit (or everyone’s favorite new pastime: coining terms for the Coming Cyberpocalypse: “digital Pearl Harbor” is so 1941). Speaking of which, I personally hope evil hackers from Malefactoria will someday hack into my bathroom scale – which in a future time will be connected to the Internet because, gosh, wouldn’t it be great to have absolutely everything in your life Internet-enabled? – and recalibrate it so I’m 10 pounds thinner. The horror. In part, my focus on public policy is due to an admitted limitation of my skill set. I enjoy reading technical articles about exploits and cybersecurity trends, but writing a blog entry on those topics would take more research than I have time for and, quite honestly, doesn’t play to my strengths. The first rule of writing is “write what you know.” The bigger contributing factor to my recent paucity of blog entries is that more and more of my waking hours are spent engaging in “thrust and parry” activity involving emerging regulations of some sort or other. I’ve opined in earlier blogs about what constitutes good and reasonable public policy so nobody can accuse me of being reflexively anti-regulation. That said, you have so many cycles in the day, and most of us would rather spend it slaying actual dragons than participating in focus groups on whether dragons are really a problem, whether lassoing them (with organic, sustainable and recyclable lassos) is preferable to slaying them – after all, dragons are people, too - and whether we need lasso compliance auditors to make sure lassos are being used correctly and humanely. (A point that seems to evade many rule makers: slaying dragons actually accomplishes something, whereas talking about “approved dragon slaying procedures and requirements” wastes the time of those who are competent to dispatch actual dragons and who were doing so very well without the input of “dragon-slaying theorists.”) Unfortunately for so many of us who would just get on with doing our day jobs, cybersecurity is rapidly devolving into the “focus groups on dragon dispatching” realm, which actual dragons slayers have little choice but to participate in. The general trend in cybersecurity is that powers-that-be – which encompasses groups other than just legislators – are often increasingly concerned and therefore feel they need to Do Something About Cybersecurity. Many seem to believe that if only we had the right amount of regulation and oversight, there would be no data breaches: a breach simply must mean Someone Is At Fault and Needs Supervision. (Leaving aside the fact that we have lots of home invasions despite a) guard dogs b) liberal carry permits c) alarm systems d) etc.) Also note that many well-managed and security-aware organizations, like the US Department of Defense, still get hacked. More specifically, many powers-that-be feel they must direct industry in a multiplicity of ways, up to and including how we actually build and deploy information technology systems. The more prescriptive the requirement, the more regulators or overseers a) can be seen to be doing something b) feel as if they are doing something regardless of whether they are actually doing something useful or cost effective. 
Note: an unfortunate concomitant of Doing Something is that often the cure is worse than the ailment. That is, doing what overseers want creates unfortunate byproducts that they either didn’t foresee or worse, don’t care about. After all, the logic goes, we Did Something. Prescriptive practice in the IT industry is problematic for a number of reasons. For a start, prescriptive guidance is really only appropriate if: • It is cost effective• It is “current” (meaning, the guidance doesn’t require the use of the technical equivalent of buggy whips long after horse-drawn transportation has become passé)*• It is practical (that is, pragmatic, proven and effective in the real world, not theoretical and unproven)• It solves the right problem With the above in mind, heading up the list of “you must be joking” regulations are recent disturbing developments in the Payment Card Industry (PCI) world. I’d like to give PCI kahunas the benefit of the doubt about their intentions, except that efforts by Oracle among others to make them aware of “unfortunate side effects of your requirements” – which is as tactful I can be for reasons that I believe will become obvious below - have gone, to-date, unanswered and more importantly, unchanged. A little background on PCI before I get too wound up. In 2008, the Payment Card Industry (PCI) Security Standards Council (SSC) introduced the Payment Application Data Security Standard (PA-DSS). That standard requires vendors of payment applications to ensure that their products implement specific requirements and undergo security assessment procedures. In order to have an application listed as a Validated Payment Application (VPA) and available for use by merchants, software vendors are required to execute the PCI Payment Application Vendor Release Agreement (VRA). (Are you still with me through all the acronyms?) Beginning in August 2010, the VRA imposed new obligations on vendors that are extraordinary and extraordinarily bad, short-sighted and unworkable. Specifically, PCI requires vendors to disclose (dare we say “tell all?”) to PCI any known security vulnerabilities and associated security breaches involving VPAs. ASAP. Think about the impact of that. PCI is asking a vendor to disclose to them: • Specific details of security vulnerabilities • Including exploit information or technical details of the vulnerability • Whether or not there is any mitigation available (as in a patch) PCI, in turn, has the right to blab about any and all of the above – specifically, to distribute all the gory details of what is disclosed - to the PCI SSC, qualified security assessors (QSAs), and any affiliate or agent or adviser of those entities, who are in turn permitted to share it with their respective affiliates, agents, employees, contractors, merchants, processors, service providers and other business partners. This assorted crew can’t be more than, oh, hundreds of thousands of entities. Does anybody believe that several hundred thousand people can keep a secret? Or that several hundred thousand people are all equally trustworthy? Or that not one of the people getting all that information would blab vulnerability details to a bad guy, even by accident? Or be a bad guy who uses the information to break into systems? (Wait, was that the Easter Bunny that just hopped by? Bringing world peace, no doubt.) Sarcasm aside, common sense tells us that telling lots of people a secret is guaranteed to “unsecret” the secret. 
Notably, being provided details of a vulnerability (without a patch) is of little or no use to companies running the affected application. Few users have the technological sophistication to create a workaround, and even if they do, most workarounds break some other functionality in the application or surrounding environment. Also, given the differences among corporate implementations of any application, it is highly unlikely that a single workaround is going to work for all corporate users. So until a patch is developed by the vendor, users remain at risk of exploit: even more so if the details of vulnerability have been widely shared. Sharing that information widely before a patch is available therefore does not help users, and instead helps only those wanting to exploit known security bugs. There’s a shocker for you. Furthermore, we already know that insider information about security vulnerabilities inevitably leaks, which is why most vendors closely hold such information and limit dissemination until a patch is available (and frequently limit dissemination of technical details even with the release of a patch). That’s the industry norm, not that PCI seems to realize or acknowledge that. Why would anybody release a bunch of highly technical exploit information to a cast of thousands, whose only “vetting” is that they are members of a PCI consortium? Oracle has had personal experience with this problem, which is one reason why information on security vulnerabilities at Oracle is “need to know” (we use our own row level access control to limit access to security bugs in our bug database, and thus less than 1% of development has access to this information), and we don’t provide some customers with more information than others or with vulnerability information and/or patches earlier than others. Failure to remember “insider information always leaks” creates problems in the general case, and has created problems for us specifically. A number of years ago, one of the UK intelligence agencies had information about a non-public security vulnerability in an Oracle product that they circulated among other UK and Commonwealth defense and intelligence entities. Nobody, it should be pointed out, bothered to report the problem to Oracle, even though only Oracle could produce a patch. The vulnerability was finally reported to Oracle by (drum roll) a US-based commercial company, to whom the information had leaked. (Note: every time I tell this story, the MI-whatever agency that created the problem gets a bit shirty with us. I know they meant well and have improved their vulnerability handling/sharing processes but, dudes, next time you find an Oracle vulnerability, try reporting it to us first before blabbing to lots of people who can’t actually fix the problem. Thank you!) Getting back to PCI: clearly, these new disclosure obligations increase the risk of exploitation of a vulnerability in a VPA and thus, of misappropriation of payment card data and customer information that a VPA processes, stores or transmits. It stands to reason that VRA’s current requirement for the widespread distribution of security vulnerability exploit details -- at any time, but particularly before a vendor can issue a patch or a workaround -- is very poor public policy. It effectively publicizes information of great value to potential attackers while not providing compensating benefits - actually, any benefits - to payment card merchants or consumers. In fact, it magnifies the risk to payment card merchants and consumers. 
The risk is most prominent in the time before a patch has been released, since customers often have little option but to continue using an application or system despite the risks. However, the risk is not limited to the time before a patch is issued: customers often need days, or weeks, to apply patches to systems, based upon the complexity of the issue and dependence on surrounding programs. Rather than decreasing the available window of exploit, this requirement increases the available window of exploit, both as to time available to exploit a vulnerability and the ease with which it can be exploited. Also, why would hackers focus on finding new vulnerabilities to exploit if they can get “EZHack” handed to them in such a manner: a) a vulnerability b) in a payment application c) with exploit code: the “Hacking Trifecta!” It’s fair to say that this is probably the exact opposite of what PCI – or any of us – would want. Established industry practice concerning vulnerability handling avoids the risks created by the VRA’s vulnerability disclosure requirements. Specifically, the norm is not to release information about a security bug until the associated patch (or a pretty darn good workaround) has been issued. Once a patch is available, the notice to the user community is a high-level communication discussing the product at issue, the level of risk associated with the vulnerability, and how to apply the patch. The notices do not include either the specific customers affected by the vulnerability or forensic reports with maps of the exploit (both of which are required by the current VRA). In this way, customers have the tools they need to prioritize patching and to help prevent an attack, and the information released does not increase the risk of exploit. Furthermore, many vendors already use industry standards for vulnerability description: Common Vulnerabilities and Exposures (CVE) and the Common Vulnerability Scoring System (CVSS). CVE helps ensure that customers know which particular issues a patch addresses and CVSS helps customers determine how severe a vulnerability is on a relative scale. Industry already provides the tools customers need to know what the patch contains and how bad the problem is that the patch remediates. So, what’s a poor vendor to do? Oracle is reaching out to other vendors subject to PCI and attempting to enlist them in a broad effort to engage PCI in rethinking (that is, eradicating) these requirements. I would therefore urge all who care about this issue, but especially those in the vendor community whose applications are subject to PCI and who may not have known they were being asked to tell-all to PCI and put their customers at risk, to do one of the following: • Contact PCI with your concerns • Contact Oracle (we are looking for vendors to sign our statement of concern) • And make sure you tell your customers that you have to rat them out to PCI if there is a breach involving the payment application. I like to be charitable and say “PCI meant well,” but in as important a public policy issue as what you disclose about vulnerabilities, to whom and when, meaning well isn’t enough. We need to do well. PCI, as regards this particular issue, has not done well, and has compounded the error by thus far being nonresponsive to those of us who have labored mightily to try to explain why they might want to rethink telling the entire planet about security problems with no solutions. 
By Way of Explanation… Non-related to PCI whatsoever, and the explanation for why I have not been blogging a lot recently, I have been working on Other Writing Venues with my sister Diane (who has also worked in the tech sector, inflicting upgrades on unsuspecting and largely ungrateful end users). I am pleased to note that we have recently (self-)published the first in the Miss Information Technology Murder Mystery series, Outsourcing Murder. The genre might best be described as “chick lit meets geek scene.” Our sisterly nom de plume is Maddi Davidson and (shameless plug follows): you can order the paper version of the book on Amazon, or the Kindle or Nook versions on www.amazon.com or www.bn.com, respectively. From our book jacket: Emma Jones, a 20-something IT consultant, is working on an outsourcing project at Tahiti Tacos, a restaurant chain offering Polynexican cuisine: refried poi, anyone? Emma despises her boss Padmanabh, a brilliant but arrogant partner in GD Consulting. When Emma discovers His-Royal-Padness’s body (verdict: death by cricket bat), she becomes a suspect.With her overprotective family and her best friend Stacey providing endless support and advice, Emma stumbles her way through an investigation of Padmanabh’s murder, bolstered by fusion food feeding frenzies, endless cups of frou-frou coffee and serious surfing sessions. While Stacey knows a PI who owes her a favor, landlady Magda urges Emma to tart up her underwear drawer before the next cute cop with a search warrant arrives. Emma’s mother offers to fix her up with a PhD student at Berkeley and showers her with self-defense gizmos while her old lover Keoni beckons from Hawai’i. And everyone, even Shaun the barista, knows a good lawyer. Book 2, Denial of Service, is coming out this summer. * Given the rate of change in technology, today’s “thou shalts” are easily next year’s “buggy whip guidance.”

    Read the article

  • TFS API Change WorkItem CreatedDate And ChangedDate To Historic Dates

    - by Tarun Arora
    There may be times when you need to modify the value of the fields “System.CreatedDate” and “System.ChangedDate” on a work item. Richard Hundhausen has a great blog with ample reasons why you may or may not need to set the values of these fields to historic dates. In this blog post I’ll show you how to: Create a PBI WorkItem linked to a Task work item by pre-setting the value of the field ‘System.ChangedDate’ to a historic date Change the value of the field ‘System.Created’ to a historic date Simulate the historic burn down of a task type work item in a sprint Explain the impact of updating values of the fields CreatedDate and ChangedDate on the Sprint burn down chart Rules of Play      1. You need to be a member of the Project Collection Service Accounts              2. You need to use ‘WorkItemStoreFlags.BypassRules’ when you instantiate the WorkItemStore service // Instantiate Work Item Store with the BypassRules flag _wis = new WorkItemStore(_tfs, WorkItemStoreFlags.BypassRules);      3. You cannot set the ChangedDate         - Less than the changed date of previous revision         - Greater than current date Walkthrough The walkthrough contains 5 parts 00 – Required References 01 – Connect to TFS Programmatically 02 – Create a Work Item Programmatically 03 – Set the values of fields ‘System.ChangedDate’ and ‘System.CreatedDate’ to historic dates 04 – Results of our experiment Let’s get started………………………………………………… 00 – Required References Microsoft.TeamFoundation.dll Microsoft.TeamFoundation.Client.dll Microsoft.TeamFoundation.Common.dll Microsoft.TeamFoundation.WorkItemTracking.Client.dll 01 – Connect to TFS Programmatically I have an in-depth blog post on how to connect to TFS programmatically in case you are interested. However, the code snippet below will enable you to connect to TFS using the Team Project Picker. // Services I need access to globally private static TfsTeamProjectCollection _tfs; private static ProjectInfo _selectedTeamProject; private static WorkItemStore _wis; // Connect to TFS Using Team Project Picker public static bool ConnectToTfs() { var isSelected = false; // The user is allowed to select only one project var tfsPp = new TeamProjectPicker(TeamProjectPickerMode.SingleProject, false); tfsPp.ShowDialog(); // The TFS project collection _tfs = tfsPp.SelectedTeamProjectCollection; if (tfsPp.SelectedProjects.Any()) { // The selected Team Project _selectedTeamProject = tfsPp.SelectedProjects[0]; isSelected = true; } return isSelected; } 02 – Create a Work Item Programmatically In the code snippet below I have created a Product Backlog Item and a Task type work item and then linked them together as parent and child. Note – You will have to set the ChangedDate to a historic date when you create the work item. Remember, if you try and set the ChangedDate to a value earlier than last assigned you will receive the following exception… TF26212: Team Foundation Server could not save your changes. There may be problems with the work item type definition. Try again or contact your Team Foundation Server administrator. If you notice below, I have added a few seconds each time I have modified the ‘ChangedDate’ just to avoid running into the exception listed above. 
// Create Linked Work Items and return Ids private static List<int> CreateWorkItemsProgrammatically() { // Instantiate Work Item Store with the ByPassRules flag _wis = new WorkItemStore(_tfs, WorkItemStoreFlags.BypassRules); // List of work items to return var listOfWorkItems = new List<int>(); // Create a new Product Backlog Item var p = new WorkItem(_wis.Projects[_selectedTeamProject.Name].WorkItemTypes["Product Backlog Item"]); p.Title = "This is a new PBI"; p.Description = "Description"; p.IterationPath = string.Format("{0}\\Release 1\\Sprint 1", _selectedTeamProject.Name); p.AreaPath = _selectedTeamProject.Name; p["Effort"] = 10; // Just double checking that ByPassRules is set to true if (_wis.BypassRules) { p.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01"); } if (p.Validate().Count == 0) { p.Save(); listOfWorkItems.Add(p.Id); } else { Console.WriteLine(">> Following exception(s) encountered during work item save: "); foreach (var e in p.Validate()) { Console.WriteLine(" - '{0}' ", e); } } var t = new WorkItem(_wis.Projects[_selectedTeamProject.Name].WorkItemTypes["Task"]); t.Title = "This is a task"; t.Description = "Task Description"; t.IterationPath = string.Format("{0}\\Release 1\\Sprint 1", _selectedTeamProject.Name); t.AreaPath = _selectedTeamProject.Name; t["Remaining Work"] = 10; if (_wis.BypassRules) { t.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01"); } if (t.Validate().Count == 0) { t.Save(); listOfWorkItems.Add(t.Id); } else { Console.WriteLine(">> Following exception(s) encountered during work item save: "); foreach (var e in t.Validate()) { Console.WriteLine(" - '{0}' ", e); } } var linkTypEnd = _wis.WorkItemLinkTypes.LinkTypeEnds["Child"]; p.Links.Add(new WorkItemLink(linkTypEnd, t.Id) {ChangedDate = Convert.ToDateTime("2012-01-01").AddSeconds(20)}); if (_wis.BypassRules) { p.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01").AddSeconds(20); } if (p.Validate().Count == 0) { p.Save(); } else { Console.WriteLine(">> Following exception(s) encountered during work item save: "); foreach (var e in p.Validate()) { Console.WriteLine(" - '{0}' ", e); } } return listOfWorkItems; } 03 – Set the value of “Created Date” and Change the value of “Changed Date” to Historic Dates The CreatedDate can only be changed after a work item has been created. If you try and set the CreatedDate to a historic date at the time of creation of a work item, it will not work. 
// Lets do a work item effort burn down simulation by updating the ChangedDate & CreatedDate to historic Values private static void WorkItemChangeSimulation(IEnumerable<int> listOfWorkItems) { foreach (var id in listOfWorkItems) { var wi = _wis.GetWorkItem(id); switch (wi.Type.Name) { case "ProductBacklogItem": if (wi.State.ToLower() == "new") wi.State = "Approved"; // Advance the changed date by few seconds wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10); // Set the CreatedDate to Changed Date wi.Fields["System.CreatedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10); wi.Save(); break; case "Task": // Advance the changed date by few seconds wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10); // Set the CreatedDate to Changed date wi.Fields["System.CreatedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10); wi.Save(); break; } } // A mock sprint start date var sprintStart = DateTime.Today.AddDays(-5); // A mock sprint end date var sprintEnd = DateTime.Today.AddDays(5); // What is the total Sprint duration var totalSprintDuration = (sprintEnd - sprintStart).Days; // How much of the sprint have we already covered var noOfDaysIntoSprint = (DateTime.Today - sprintStart).Days; // Get the effort assigned to our tasks var totalEffortRemaining = QueryTaskTotalEfforRemaining(listOfWorkItems); // Defining how much effort to burn every day decimal dailyBurnRate = totalEffortRemaining / totalSprintDuration < 1 ? 1 : totalEffortRemaining / totalSprintDuration; // we have just created one task var totalNoOfTasks = 1; var simulation = sprintStart; var currentDate = DateTime.Today.Date; // Carry on till effort has been burned down from sprint start to today while (simulation.Date != currentDate.Date) { var dailyBurnRate1 = dailyBurnRate; // A fixed amount needs to be burned down each day while (dailyBurnRate1 > 0) { // burn down bit by bit from all unfinished task type work items foreach (var id in listOfWorkItems) { var wi = _wis.GetWorkItem(id); var isDirty = false; // Set the status to in progress if (wi.State.ToLower() == "to do") { wi.State = "In Progress"; isDirty = true; } // Ensure that there is enough effort remaining in tasks to burn down the daily burn rate if (QueryTaskTotalEfforRemaining(listOfWorkItems) > dailyBurnRate1) { // If there is less than 1 unit of effort left in the task, burn it all if (Convert.ToDecimal(wi["Remaining Work"]) <= 1) { wi["Remaining Work"] = 0; dailyBurnRate1 = dailyBurnRate1 - Convert.ToDecimal(wi["Remaining Work"]); isDirty = true; } else { // How much to burn from each task? var toBurn = (dailyBurnRate / totalNoOfTasks) < 1 ? 
1 : (dailyBurnRate / totalNoOfTasks); // Check that the task has enough effort to allow burnForTask effort if (Convert.ToDecimal(wi["Remaining Work"]) >= toBurn) { wi["Remaining Work"] = Convert.ToDecimal(wi["Remaining Work"]) - toBurn; dailyBurnRate1 = dailyBurnRate1 - toBurn; isDirty = true; } else { wi["Remaining Work"] = 0; dailyBurnRate1 = dailyBurnRate1 - Convert.ToDecimal(wi["Remaining Work"]); isDirty = true; } } } else { dailyBurnRate1 = 0; } if (isDirty) { if (Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).Date == simulation.Date) { wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(20); } else { wi.Fields["System.ChangedDate"].Value = simulation.AddSeconds(20); } wi.Save(); } } } // Increase date by 1 to perform daily burn down by day simulation = Convert.ToDateTime(simulation).AddDays(1); } } // Get the Total effort remaining in the current sprint private static decimal QueryTaskTotalEfforRemaining(List<int> listOfWorkItems) { var unfinishedWorkInCurrentSprint = _wis.GetQueryDefinition( new Guid(QueryAndGuid.FirstOrDefault(c => c.Key == "Unfinished Work").Value)); var parameters = new Dictionary<string, object> { { "project", _selectedTeamProject.Name } }; var q = new Query(_wis, unfinishedWorkInCurrentSprint.QueryText, parameters); var results = q.RunLinkQuery(); var wis = new List<WorkItem>(); foreach (var result in results) { var _wi = _wis.GetWorkItem(result.TargetId); if (_wi.Type.Name == "Task" && listOfWorkItems.Contains(_wi.Id)) wis.Add(_wi); } return wis.Sum(r => Convert.ToDecimal(r["Remaining Work"])); }   04 – The Results If you are still reading, the results are beautiful! Image 1 – Create work item with Changed Date pre-set to historic date Image 2 – Set the CreatedDate to historic date (Same as the ChangedDate) Image 3 – Simulate of effort burn down on a task via the TFS API   Image 4 – The history of changes on the Task. So, essentially this task has burned 1 hour per day Sprint Burn Down Chart – What’s not possible? The Sprint burn down chart is calculated from the System.AuthorizedDate and not the System.ChangedDate/System.CreatedDate. So, though you can change the System.ChangedDate and System.CreatedDate to historic dates you will not be able to synthesize the sprint burn down chart. Image 1 – By changing the Created Date and Changed Date to ‘18/Oct/2012’ you would have expected the burn down to have been impacted, but it won’t be, because the sprint burn down chart uses the value of field ‘System.AuthorizedDate’ to calculate the unfinished work points. The AsOf queries that are used to calculate the unfinished work points use the value of the field ‘System.AuthorizedDate’. Image 2 – Using the above code I burned down 1 hour effort per day over 5 days from the task work item, I would have expected the sprint burn down to show a constant burn down, instead the burn down shows the effort exhausted on the 24th itself. Simply because the burn down is calculated using the ‘System.AuthorizedDate’. Now you would ask… “Can I change the value of the field System.AuthorizedDate to a historic date” Unfortunately that’s not possible! 
You will run into the exception ValidationException –  “TF26194: The value for field ‘Authorized Date’ cannot be changed.” Conclusion - You need to be a member of the Project Collection Service Accounts group in order to set the fields ‘System.ChangedDate’ and ‘System.CreatedDate’ to historic dates - You need to instantiate the WorkItemStore using the flag WorkItemStoreFlags.BypassRules - The System.ChangedDate needs to be set to a historic date at the time of work item creation. You cannot reset the ChangedDate to a date earlier than the existing ChangedDate and you cannot reset the ChangedDate to a date greater than the current date time. - The System.CreatedDate can only be reset after a work item has been created. You cannot set the CreatedDate at the time of work item creation. The CreatedDate cannot be greater than the current date. You can however reset the CreatedDate to a date earlier than the existing value. - You will not be able to synthesize the Sprint burn down chart by changing the value of System.ChangedDate and System.CreatedDate to historic dates, since the burn down chart uses AsOf queries to calculate the unfinished work points, which internally use the System.AuthorizedDate and NOT the System.ChangedDate & System.CreatedDate - System.AuthorizedDate cannot be set to a historic date using the TFS API Read other posts on using the TFS API here… Enjoy!

    Read the article

  • Reading Excel using OpenXML

    public DataTable ReadDataFromExcel()        {         string filePath = @"c:/temp/temp.xlsx";            using (SpreadsheetDocument LobjDocument = SpreadsheetDocument.Open(filePath, false))            {                            WorkbookPart LobjWorkbookPart = LobjDocument.WorkbookPart;                Sheet LobjSheetToImport = LobjWorkbookPart.Workbook.Descendants<Sheet>().First<Sheet>();                WorksheetPart LobjWorksheetPart = (WorksheetPart)(LobjWorkbookPart.GetPartById(LobjSheetToImport.Id));                SheetData LobjSheetData = LobjWorksheetPart.Worksheet.Elements<SheetData>().First();                //Read only the data rows and skip all the header rows.                int LiRowIterator = 1;                //  for progress bar                int LiTotal = LobjSheetData.Elements<Row>().Count() - MobjImportMapper.HeaderRowIndex;                // =================                foreach (Row LobjRowItem in LobjSheetData.Elements<Row>().Skip(6))                {                    DataRow LdrDataRow = LdtExcelData.NewRow();                    int LiColumnIndex = 0;                    int LiHasData = 0;                    LdrDataRow[LiColumnIndex] = LobjRowItem.RowIndex; //LiRowIterator;                    LiColumnIndex++;                    //TODO: handle restriction of column range.                    foreach (Cell LobjCellItem in LobjRowItem.Elements<Cell>().Where(PobjCell                        => ImportHelper.GetColumnIndexFromExcelColumnName(ImportHelper.GetColumnName(PobjCell.CellReference))                        <= MobjImportMapper.LastColumnIndex))                    {                                             // Gets the column index of the cell with data                        int LiCellColumnIndex = 10;                        if (LiColumnIndex < LiCellColumnIndex)                        {                            do                            {                                LdrDataRow[LiColumnIndex] = string.Empty;                                LiColumnIndex++;                            }                            while (LiColumnIndex < LiCellColumnIndex);                        }                        string LstrCellValue = LobjCellItem.InnerText;                        if (LobjCellItem.DataType != null)                        {                            switch (LobjCellItem.DataType.Value)                            {                                case CellValues.SharedString:                                    var LobjStringTable = LobjWorkbookPart.GetPartsOfType<SharedStringTablePart>().FirstOrDefault();                                    DocumentFormat.OpenXml.OpenXmlElement LXMLElment = null;                                    string LstrXMLString = String.Empty;                                    if (LobjStringTable != null)                                    {                                        LstrXMLString =                                            LobjStringTable.SharedStringTable.ElementAt(int.Parse(LstrCellValue, CultureInfo.InvariantCulture)).InnerXml;                                        if (LstrXMLString.IndexOf("<x:rPh", StringComparison.CurrentCulture) != -1)                                        {                                            LXMLElment = LobjStringTable.SharedStringTable.ElementAt(int.Parse(LstrCellValue, CultureInfo.InvariantCulture)).FirstChild;                                            LstrCellValue = LXMLElment.InnerText;                                        }                                        
else { LstrCellValue = LobjStringTable.SharedStringTable.ElementAt(int.Parse(LstrCellValue, CultureInfo.InvariantCulture)).InnerText; } } break; default: break; } } LdrDataRow[LiColumnIndex] = LstrCellValue.Trim(); if (!string.IsNullOrEmpty(LstrCellValue)) LiHasData++; LiColumnIndex++; } if (LiHasData > 0) { LiRowIterator++; LdtExcelData.Rows.Add(LdrDataRow); } } } return LdtExcelData; }
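For comparison with the snippet above, here is a minimal, self-contained C# sketch (not from the original post; the class name and the "ColumnN" naming are illustrative) that reads the first worksheet into a DataTable and resolves SharedString cells. It deliberately skips the CellReference-to-column-index mapping, so a sparse row will shift values left; the fuller handling above covers that case.

using System.Data;
using System.Linq;
using DocumentFormat.OpenXml.Packaging;
using DocumentFormat.OpenXml.Spreadsheet;

static class ExcelReaderSketch
{
    // Reads the first worksheet of an .xlsx file into a DataTable of strings,
    // resolving SharedString cells through the workbook's SharedStringTable.
    public static DataTable Read(string filePath)
    {
        var table = new DataTable();
        using (var doc = SpreadsheetDocument.Open(filePath, false))
        {
            WorkbookPart workbook = doc.WorkbookPart;
            Sheet sheet = workbook.Workbook.Descendants<Sheet>().First();
            var worksheet = (WorksheetPart)workbook.GetPartById(sheet.Id);
            SharedStringTable sharedStrings = workbook
                .GetPartsOfType<SharedStringTablePart>()
                .Select(p => p.SharedStringTable)
                .FirstOrDefault();

            foreach (Row row in worksheet.Worksheet.Elements<SheetData>().First().Elements<Row>())
            {
                var cells = row.Elements<Cell>().ToList();

                // Grow the column set as wider rows appear.
                while (table.Columns.Count < cells.Count)
                {
                    table.Columns.Add("Column" + (table.Columns.Count + 1));
                }

                DataRow dataRow = table.NewRow();
                for (int i = 0; i < cells.Count; i++)
                {
                    string value = cells[i].InnerText;
                    if (cells[i].DataType != null
                        && cells[i].DataType.Value == CellValues.SharedString
                        && sharedStrings != null)
                    {
                        // SharedString cells store an index into the shared string table.
                        value = sharedStrings.ElementAt(int.Parse(value)).InnerText;
                    }
                    dataRow[i] = value;
                }
                table.Rows.Add(dataRow);
            }
        }
        return table;
    }
}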

    Read the article

  • .Net DataGridView "Index 0 does not have a value."

    - by Ben
    I am having trouble with a DataGridView. I have a collection of 3 items bound to the grid, and the problem occurs when trying to delete one of the items and reload the grid. I have code of If (dlg.ShowDialog() = DialogResult.OK) Then 'Show dialog with grid on it End If On the opened dialog, I delete an item from the grid (which should, in turn, delete the item from the collection and re-load the grid), and it returns to the "If (dlg.Show..." line with the error "A first chance exception of type 'System.IndexOutOfRangeException' occurred in System.Windows.Forms.dll Additional information: Index 2 does not have a value." (I have break into debugger turned on for common language runtime errors.) I could understand this error if I were trying to access any cells, rows or columns, but I'm not, and I would expect the exception to stop on the line of code that is trying to access the grid data, not the "If (dlg.ShowDialog()..." line. Any ideas? Cheers
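A common cause of "Index N does not have a value" in this situation is that the grid is bound to a plain collection that raises no change notifications, so the grid keeps a stale row count after the delete. The original question is VB.NET; the C# sketch below (class and member names are illustrative, not from the question) shows the idea of binding through a BindingList and BindingSource so that removing an item notifies the grid.

using System.Collections.Generic;
using System.ComponentModel;
using System.Windows.Forms;

// Illustrative item type standing in for whatever the grid shows.
public class Item
{
    public string Name { get; set; }
}

public static class GridBindingSketch
{
    // Bind the grid through a BindingList + BindingSource so that removing an
    // item raises ListChanged and the grid drops the row itself, instead of
    // holding on to a stale row count and later asking for "Index 2".
    public static BindingList<Item> Bind(DataGridView grid, IList<Item> items)
    {
        var bindingList = new BindingList<Item>(items);
        grid.DataSource = new BindingSource { DataSource = bindingList };
        return bindingList;
    }
}

// Usage: delete through the returned BindingList (bindingList.RemoveAt(i))
// rather than clearing and re-assigning grid.DataSource after the dialog closes.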

    Read the article

  • Mysterious dbboon folder with proxy.php file on my godaddy account

    - by Paul
    When doing some web maintenance today, I noticed a strange new folder on my GoDaddy hosting account at the root level named "dbboon", with a single file inside, called proxy.php. It's code is listed below, and seems to be some sort of proxy function. I was kind of troubled because I didn't put it there. I googled all this to learn more, but didn't find anything, except for the proxy file happened to be also stored at pastebin.com: http://pastebin.com/PQsSPbCr I called GoDaddy and they confirmed that it belonged to them, said it was put there by their advanced hosting group for testing purposes but didn't have any more information. I thought this was all really weird: why would they put something in my folder without giving me a heads-up, and why would they need to do something like this? anybody know anything about this? <?php $version = '1.2'; if(isset($_GET['dbboon_version'])) { echo '{"version":"' . $version . '"}'; exit; } function dbboon_parseHeaders($subject) { global $version; $subject = trim($subject); $parsed = Array(); $len = strlen($subject); $position = $field = 0; $position = strpos($subject, "\r\n") + 2; while(isset($subject[$position])) { $nextC = strpos($subject, ':', $position); $fieldName = substr($subject, $position, ($nextC-$position)); $position += strlen($fieldName) + 1; $fieldValue = NULL; while(1) { $nextCrlf = strpos($subject, "\r\n", $position - 1); if(FALSE === $nextCrlf) { $t = substr($subject, $position); $position = $len; } else { $t = substr($subject, $position, $nextCrlf-$position); $position += strlen($t) + 2; } $fieldValue .= $t; if(!isset($subject[$position]) || (' ' != $subject[$position] && "\t" != $subject[$position])) { break; } } $parsed[strtolower($fieldName)] = trim($fieldValue); if($position > $len) { echo '{"result":false,"error":{"code":4,"message":"Communication error, unable to contact proxy service.","version":"' . $version . '"}}'; exit; } } return $parsed; } if(!function_exists('http_build_query')) { function http_build_query($data, $prefix = '', $sep = '', $key = '') { $ret = Array(); foreach((array) $data as $k => $v) { if(is_int($k) && NULL != $prefix) { $k = urlencode($prefix . $k); } if(!empty($key) || $key === 0) { $k = $key . '[' . urlencode($k) . ']'; } if(is_array($v) || is_object($v)) { array_push($ret, http_build_query($v, '', $sep, $k)); } else { array_push($ret, $k . '=' . urlencode($v)); } } if(empty($sep)) { $sep = '&'; } return implode($sep, $ret); } } $host = 'dbexternalsubscriber.secureserver.net'; $get = http_build_query($_GET); $post = http_build_query($_POST); $url = $get ? "?$get" : ''; $fp = fsockopen($host, 80, $errno, $errstr); if($fp) { $payload = "POST /embed/$url HTTP/1.1\r\n"; $payload .= "Host: $host\r\n"; $payload .= "Content-Length: " . strlen($post) . 
"\r\n"; $payload .= "Content-Type: application/x-www-form-urlencoded\r\n"; $payload .= "Connection: Close\r\n\r\n"; $payload .= $post; fwrite($fp, $payload); $httpCode = NULL; $response = NULL; $timeout = time() + 15; do { while($line = fgets($fp)) { $response .= $line; if(!trim($line)) { break; } } } while($timeout > time() && NULL === $response); $headers = dbboon_parseHeaders($response); if(isset($headers['transfer-encoding']) && 'chunked' === $headers['transfer-encoding']) { do { $cSize = $read = hexdec(trim(fgets($fp))); while($read > 0) { $buff = fread($fp, $read); $read -= strlen($buff); $response .= $buff; } $response .= fgets($fp); } while($cSize > 0); } else { preg_match('/Content-Length:\s([0-9]+)\r\n/msi', $response, $match); if(!isset($match[1])) { echo '{"result":false,"error":{"code":3,"message":"Communication error, unable to contact proxy service.","version":"' . $version . '"}}'; exit; } else { while($match[1] > 0) { $buff = fread($fp, $match[1]); $match[1] -= strlen($buff); $response .= $buff; } } } fclose($fp); if(!$pos = strpos($response, "\r\n\r\n")) { echo '{"result":false,"error":{"code":2,"message":"Communication error, unable to contact proxy service.","version":"' . $version . '"}}'; exit; } echo substr($response, $pos + 4); } else { echo '{"result":false,"error":{"code":1,"message":"Communication error, unable to contact proxy service.","version":"' . $version . '"}}'; exit; }

    Read the article

  • AJAX Return Problem from data sent via jQuery.ajax

    - by Anthony Garand
    I am trying to receive a json object back from php after sending data to the php file from the js file. All I get is undefined. Here are the contents of the php and js file. data.php <?php $action = $_GET['user']; $data = array( "first_name" = "Anthony", "last_name" = "Garand", "email" = "[email protected]", "password" = "changeme"); switch ($action) { case '[email protected]': echo $_GET['callback'] . '('. json_encode($data) . ');'; break; } ? core.js $(document).ready(function(){ $.ajax({ url: "data.php", data: {"user":"[email protected]"}, context: document.body, data: "jsonp", success: function(data){renderData(data);} }); }); function renderData(data) { document.write(data.first_name); }

    Read the article

  • NSLock deadlock

    - by twinkle
    I have an instance variable in class Foo @property (nonatomic, retain) NSLock *mLock; initialized as: self.mLock=[NSLock new]; Foo also has -(void)getLock { while (![self.mLock tryLock]) { NSLog(@"Trying to lock... sleep(1)"); sleep(1); } NSLog(@">>>>>>> Acquiring LOCK"); [self.mLock lock]; NSLog(@">>>>>>> LOCK acquired"); } From another method in the Foo class, I call [Foo getLock]. This immediately results in a deadlock. Log below: 2010-03-18 07:06:01.660 test[9816:207] >>>>>>> Acquiring LOCK 2010-03-18 07:06:01.665 test[9816:207] *** -[NSLock lock]: deadlock (<NSLock: 0x3c0f820> '(null)') 2010-03-18 07:06:01.666 test[9816:207] *** Break on _NSLockError() to debug. Thanks!

    Read the article

  • Extra blank page when converting HTML to PDF using abcPDF.

    - by ProfK
    I have an HTML report, with each print page contained by a <div class="page">. The page class is defined as width: 180mm; height: 250mm; page-break-after: always; background-position: centre top; background-image: url(Images/MainBanner.png); background-repeat: no-repeat; padding-top: 30mm; After making a few changes to my report content, when I call abcPDF to convert the report to PDF, suddenly I'm getting a blank page inserted after every real report page. I don't want to roll back the changes I've just made to remove this problem, so I'm hoping someone may know why the extra pages are being inserted.

    Read the article

  • vs2010: trying to debug javascript using Chrome: this is not a valid location for a breakpoint

    - by George
    Every time I try to set a breakpoint in JavaScript, either in Design mode or while running, I get the error: "this is not a valid location for a breakpoint". When I go to VS2010's Options screen under Debugging > Just-In-Time, I see that Managed, Native & Script are selected. I also placed the line "debugger;" in the first line of a JavaScript function that is called, but the break is never hit. In the Web.Config (although this is probably for compiled code): <compilation debug="true I'm reliving this problem on a new machine... Can you help? Edit: I left out a huge detail: Google Chrome is my default browser. (I am trying to debug a Chrome-only error.) Must I resort to debug tools other than VS2010? I am thinking that it should work. Too hopeful, eh?

    Read the article

  • Java invalid stream header Problem

    - by David zsl
    Hi all, im writen a client-server app, and now i´m facing a problem that I dont know how to solve: This is the client: try { Socket socket = new Socket(ip, port); ObjectOutputStream ooos = new ObjectOutputStream(socket .getOutputStream()); SendMessage message = new SendMessage(); message.numDoc = value.numDoc; message.docFreq = value.docFreq; message.queryTerms = query; message.startIndex = startIndex; message.count = count; message.multiple = false; message.ips = null; message.ports = null; message.value = true; message.docFreq = value.docFreq; message.numDoc = value.numDoc; ooos.writeObject(message); ObjectInputStream ois = new ObjectInputStream(socket .getInputStream()); ComConstants mensajeRecibido; Object mensajeAux; String mensa = null; byte[] by = null; do { mensajeAux = ois.readObject(); if (mensajeAux instanceof ComConstants) { System.out.println("Thread by Thread has Search Results"); String test; ByteArrayOutputStream testo = new ByteArrayOutputStream(); mensajeRecibido = (ComConstants) mensajeAux; byte[] wag; testo.write( mensajeRecibido.fileContent, 0, mensajeRecibido.okBytes); wag = testo.toByteArray(); if (by == null) { by = wag; } else { int size = wag.length; System.arraycopy(wag, 0, by, 0, size); } } else { System.err.println("Mensaje no esperado " + mensajeAux.getClass().getName()); break; } } while (!mensajeRecibido.lastMessage); //ByteArrayInputStream bs = new ByteArrayInputStream(by.toByteArray()); // bytes es el byte[] ByteArrayInputStream bs = new ByteArrayInputStream(by); ObjectInputStream is = new ObjectInputStream(bs); QueryWithResult[] unObjetoSerializable = (QueryWithResult[])is.readObject(); is.close(); //AQUI TOCARIA METER EL QUICKSORT XmlConverter xce = new XmlConverter(unObjetoSerializable, startIndex, count); String serializedd = xce.runConverter(); tempFinal = serializedd; ois.close(); socket.close(); } catch (Exception e) { e.printStackTrace(); } i++; } And this is the sender: try { QueryWithResult[] outputLine; Operations op = new Operations(); boolean enviadoUltimo=false; ComConstants mensaje = new ComConstants(); mensaje.queryTerms = query; outputLine = op.processInput(query, value); //String c = new String(); //c = outputLine.toString(); //StringBuffer swa = sw.getBuffer(); ByteArrayOutputStream bs= new ByteArrayOutputStream(); ObjectOutputStream os = new ObjectOutputStream (bs); os.writeObject(outputLine); os.close(); byte[] mybytearray = bs.toByteArray(); ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(mybytearray); BufferedInputStream bis = new BufferedInputStream(byteArrayInputStream); int readed = bis.read(mensaje.fileContent,0,4000); while (readed > -1) { mensaje.okBytes = readed; if (readed < ComConstants.MAX_LENGTH) { mensaje.lastMessage = true; enviadoUltimo=true; } else mensaje.lastMessage = false; oos.writeObject(mensaje); if (mensaje.lastMessage) break; mensaje = new ComConstants(); mensaje.queryTerms = query; readed = bis.read(mensaje.fileContent); } if (enviadoUltimo==false) { mensaje.lastMessage=true; mensaje.okBytes=0; oos.writeObject(mensaje); } oos.close(); } catch (Exception e) { e.printStackTrace(); } } And this is the error log: Thread by Thread has Search Results java.io.StreamCorruptedException: invalid stream header: 20646520 at java.io.ObjectInputStream.readStreamHeader(Unknown Source) at java.io.ObjectInputStream.<init>(Unknown Source) at org.tockit.comunication.ServerThread.enviaFicheroMultiple(ServerThread.java:747) at org.tockit.comunication.ServerThread.run(ServerThread.java:129) at 
java.lang.Thread.run(Unknown Source) Where at org.tockit.comunication.ServerThread.enviaFicheroMultiple(ServerThread.java:747) is this line ObjectInputStream is = new ObjectInputStream(bs); on the 1st code just after while (!mensajeRecibido.lastMessage); Any ideas?

    Read the article

  • Python: divisors of a number [closed]

    - by kame
    Possible Duplicate: What is the best way to get all the divisors of a number? Most of this code was written by another programmer, but I can't run his code. Please show me where the mistake is; I have been searching for a long time. I get the error 'NoneType' object is not iterable (in divisorGen(n)). from __future__ import division #calculate the divisors #this is fast-working-code from: #http://stackoverflow.com/questions/171765/what-is-the-best-way-to-get-all-the-divisors-of-a-number def factorGenerator(n): for x in range(1,n): n = n * 1.0 r = n / x if r % 1 == 0: yield x # edited def divisorGen(n): factors = list(factorGenerator(n)) nfactors = len(factors) f = [0] * nfactors while True: yield reduce(lambda x, y: x*y, [factors[x][0]**f[x] for x in range(nfactors)], 1) i = 0 while True: f[i] += 1 if f[i] <= factors[i][1]: break f[i] = 0 i += 1 if i >= nfactors: return for n in range(100): for i in divisorGen(n): print i

    Read the article

  • jQuery to add vertical scroll bar for a fixed height div container

    - by Michael Mao
    Hi all: This is uni homework, so sorry about my password protected url. Please enter username as "uts" and password as "10479475", without the wrapper quotes. It is almost completed; please follow these simple steps to see where my problem dwells: step1: click on any continent highlighted on the world map; step2: click on any city marked on the continent map; step3: enter a quantity ranging from 1 to 6 into the upper-right table; step4: there you go, in the table below, once there are 3 or more rows, the size of the table starts to "overlap" on top of the footer section and continues to grow, which is not what I want. I want to achieve the effect that the table has a limited max-height, so once the actual height exceeds the threshold, I can have a vertical scroll bar to "stack up" my table and not break the overall structure. It turns out that my lazy initial design has now got me stuck in a problem... Any suggestion or hint on how to get this?

    Read the article

  • Python XLWT attempt to overwrite cell workaround

    - by PPTim
    Hi, Using the python module xlwt, writing to the same cell twice throws an error: Message File Name Line Position Traceback <module> S:\******** write C:\Python26\lib\site-packages\xlwt\Worksheet.py 1003 write C:\Python26\lib\site-packages\xlwt\Row.py 231 insert_cell C:\Python26\lib\site-packages\xlwt\Row.py 150 Exception: Attempt to overwrite cell: sheetname=u'Sheet 1' rowx=1 colx=12 with the code snippet def insert_cell(self, col_index, cell_obj): if col_index in self.__cells: if not self.__parent._cell_overwrite_ok: msg = "Attempt to overwrite cell: sheetname=%r rowx=%d colx=%d" \ % (self.__parent.name, self.__idx, col_index) raise Exception(msg) #row 150 prev_cell_obj = self.__cells[col_index] sst_idx = getattr(prev_cell_obj, 'sst_idx', None) if sst_idx is not None: self.__parent_wb.del_str(sst_idx) self.__cells[col_index] = cell_obj Looks like the code 'raise'es an exception which halts the entire process. Is removing the 'raise' term enough to allow for overwriting cells? I appreciate xlwt's warning, but i thought the pythonic way is to assume "we know what we're doing". I don't want to break anything else in touching the module.

    Read the article

  • Multiple Silverlight Unit Test Projects in Solution

    - by IUnknown
    I am building out a number of Silverlight 4.0 libraries that are part of the same solution. I like to break them into separate projects and have a Unit Test project for each: SolutionX -LibraryProject1 ---Class1.cs ---Class2.cs -LibraryProject1.Test ---Tests1.cs ---Tests2.cs -LibraryProject2 ---Class1.cs ---Class2.cs ---Class3.cs -LibraryProject2.Test ---Tests1.cs ---Tests2.cs ---Tests3.cs -LibraryProject3 ---Class1.cs -LibraryProject3.Test ---Tests1.cs This works great when using VS regular test projects and infrastructure because I can create and execute lists of tests that are aggregated from each Test project. But with the Silverlight Unit Test Framework, since the Silverlight Unit Test Project must be the "start up project", I cannot figure out how to run a collection of tests from each test project in one go. I have to run each separately and then switch the starting project each time. I would prefer to avoid creating complex build scripts or build definitions - is there a way to run all the tests at once? -Thanks
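One approach that avoids switching the startup project is to keep a single Silverlight test host and register every test assembly with it, so one run executes all of them. The sketch below is an assumption-laden illustration: it assumes the Microsoft.Silverlight.Testing build in use exposes a TestAssemblies collection on UnitTestSettings (worth verifying against your toolkit version), and the referenced test class names are taken from the project layout above.

// App.xaml.cs of a single Silverlight test host project (a sketch, not a
// verified drop-in; see the assumptions noted above).
using System.Windows;
using Microsoft.Silverlight.Testing;

public partial class App : Application
{
    public App()
    {
        Startup += Application_Startup;
        InitializeComponent();
    }

    private void Application_Startup(object sender, StartupEventArgs e)
    {
        var settings = UnitTestSystem.CreateDefaultSettings();

        // One entry per test assembly from the solution layout above.
        settings.TestAssemblies.Add(typeof(LibraryProject1.Test.Tests1).Assembly);
        settings.TestAssemblies.Add(typeof(LibraryProject2.Test.Tests1).Assembly);
        settings.TestAssemblies.Add(typeof(LibraryProject3.Test.Tests1).Assembly);

        RootVisual = UnitTestSystem.CreateTestPage(settings);
    }
}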

    Read the article

  • c#, asp.net, dynamicdata Validation Exception Message Caught in JavaScript, not DynamicValidator (Dy

    - by Perplexed
    I have a page here with a few list views on it that are all bound to Linq data sources and they seem to be working just fine. I want to add validation such that when a checkbox (IsVoid on the object) is checked, comments must be entered (VoidedComments on the object). Here's the bound object's OnValidate method: partial void OnValidate(ChangeAction action) { if (action == ChangeAction.Update) { if (_IsVoid) { string comments = this.VoidedComments; if (string.IsNullOrEmpty(this._VoidedComments)) { throw new ValidationException("Voided Comments are Required to Void an Error"); } } } } Despite there being a DynamicValidator on the page referencing the same ValidationGroup as the dynamic control, when the exception fires, it's caught in JavaScript and the debugger wants to break in. The message is never delivered to the UI as expected. Any thoughts as to what's going on?

    Read the article

  • Ruby syntax checking in vim

    - by roybotnik
    Hi everyone, I use vim with various plugins for editing ruby code. I have proper syntax highlighting set up but I was wondering if anyone knew of a way to get ruby syntax checking, similar to what you might see in an IDE such as visual studio, radrails, etc? Something that would simply turn stuff red or break highlighting when I'm missing an 'end' or have an improperly constructed line of code would be sweet. I googled and came across this plugin, http://github.com/scrooloose/syntastic/tree/master but I was wondering if anyone had any better suggestions.

    Read the article

  • Weighted random selection using Walker's Alias Method (c# implementation)

    - by Chuck Norris
    I was looking for this algorithm (an algorithm which will randomly select from a list of elements where each element has a different probability of being picked (weight)) and found only Python and C implementations. After I did a C# one, a bit different (but I think simpler), I thought I should share it and ask your opinion. This is it: using System; using System.Collections.Generic; using System.Linq; namespace ChuckNorris { class Program { static void Main(string[] args) { var oo = new Dictionary<string, int> { {"A",7}, {"B",1}, {"C",9}, {"D",8}, {"E",11}, }; var rnd = new Random(); var pick = rnd.Next(oo.Values.Sum()); var sum = 0; var res = ""; foreach (var o in oo) { sum += o.Value; if(sum >= pick) { res = o.Key; break; } } Console.WriteLine("result is "+ res); } } } If anyone can remake it in F#, please post your code
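Since the title mentions Walker's Alias Method but the snippet above does a linear cumulative-sum scan, here is a hedged C# sketch of an actual alias table (Vose's variant, with illustrative class and member names): setup is O(n), and every subsequent pick is O(1). It is a sketch of the technique under the same Dictionary<string, int> weights, not a drop-in replacement for the code above.

using System;
using System.Collections.Generic;
using System.Linq;

class AliasSampler
{
    private readonly double[] _prob;
    private readonly int[] _alias;
    private readonly string[] _keys;
    private readonly Random _rnd = new Random();

    public AliasSampler(IDictionary<string, int> weights)
    {
        _keys = weights.Keys.ToArray();
        int n = _keys.Length;
        _prob = new double[n];
        _alias = new int[n];

        // Scale each weight so the average column height is exactly 1.
        double total = weights.Values.Sum();
        double[] scaled = _keys.Select(k => weights[k] * n / total).ToArray();

        var small = new Stack<int>();
        var large = new Stack<int>();
        for (int i = 0; i < n; i++)
            (scaled[i] < 1.0 ? small : large).Push(i);

        // Pair each under-full column with an over-full one (Vose's method).
        while (small.Count > 0 && large.Count > 0)
        {
            int s = small.Pop();
            int l = large.Pop();
            _prob[s] = scaled[s];
            _alias[s] = l;
            scaled[l] = scaled[l] + scaled[s] - 1.0;   // donate the slack
            (scaled[l] < 1.0 ? small : large).Push(l);
        }
        // Leftovers are full columns (the loops also absorb rounding error).
        while (large.Count > 0) _prob[large.Pop()] = 1.0;
        while (small.Count > 0) _prob[small.Pop()] = 1.0;
    }

    public string Next()
    {
        int column = _rnd.Next(_prob.Length);
        return _rnd.NextDouble() < _prob[column] ? _keys[column] : _keys[_alias[column]];
    }
}

// Usage with the dictionary from the post above:
//   var sampler = new AliasSampler(oo);
//   Console.WriteLine("result is " + sampler.Next());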

    Read the article

  • Android 2.2 deprecates restartPackage but adds another headache...

    - by mob1lejunkie
    Android 2.2 release notes have just been released. The ActivityManager.restartPackage method has been deprecated and the description is: the previous behavior here is no longer available to applications because it allows them to break other applications by removing their alarms, stopping their services, etc. Instead, 2.2 has given another tool to pesky "task killer" apps by introducing the new ActivityManager.killBackgroundProcesses method. More Info Can someone explain whether ActivityManager.killBackgroundProcesses will kill our scheduled alarms? If so, deprecating ActivityManager.restartPackage was pointless, as "task killers" will now abuse ActivityManager.killBackgroundProcesses instead.

    Read the article

  • jQuery: shorten string length to fit a set width.

    - by Marius
    Hello there, I have a table, and in each cell I want to place strings, but they are much wider than the cell width. To prevent a line break, I would like to shorten the strings to fit the cell and append '...' at the end to indicate that the string is much longer. The table has about 40 rows and this has to be done to each cell, so it's important that it's quick. Should I use JS/jQuery for this? How would I do it? Thank you for your time. Kind regards, Marius

    Read the article

  • build error, warning MSB3258

    - by Steed
    I have recently moved my solution from my main dev machine using VS2010 Pro SP1 to a new machine. The setup is supposed to be the same, except it's failing to build. It's giving errors like c:\Windows\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets(1360,9): warning MSB3258: The primary reference "C:\rep\hms\trunk\ikassystemv3\ikasDAL\bin\Debug\ikasDAL.dll" could not be resolved because it has an indirect dependency on the .NET Framework assembly "mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" which has a higher version "4.0.0.0" than the version "2.0.0.0" in the current target framework. However, all the libraries in question are set to use the .NET 2 framework, and I need it this way or else it will break stuff that uses them. For some reason it seems to think that my .NET 2 libs are somehow referencing .NET 4 stuff. All the referenced libs are .NET 2. You can see my build output here: http://tinyurl.com/bnugru4

    Read the article

  • How to debug nHibernate/RhinoMocks TypeInitializer exception

    - by Joe Future
    Pulling my hair out trying to debug this one. Earlier this morning, this code was working fine, and I can't see what I've changed to break it. Now, whenever I try to open an nHibernate session, I'm getting the following error: Test method BCMS.Tests.Repositories.BlogBlogRepositoryTests.can_get_recent_blog_posts threw exception: System.TypeInitializationException: The type initializer for 'NHibernate.Cfg.Environment' threw an exception. --- System.Runtime.Serialization.SerializationException: Type is not resolved for member 'Castle.DynamicProxy.Serialization.ProxyObjectReference,Rhino.Mocks, Version=3.5.0.1337, Culture=neutral, PublicKeyToken=0b3305902db7183f'.. Any thoughts on how to debug what's going on here?

    Read the article

< Previous Page | 117 118 119 120 121 122 123 124 125 126 127 128  | Next Page >