Search Results

Search found 4879 results on 196 pages for 'geeks'.


  • Visual Studio 2010: Pro Power Tools

    - by Enrique Lima
    A set of Pro Power Tools for Visual Studio 2010 was released yesterday.  Check them out here. Brian Harry posted a feature summary on his blog.  Check it out here: http://blogs.msdn.com/b/bharry/archive/2010/06/07/announcing-the-first-visual-studio-pro-power-tools.aspx

    Read the article

  • What did you learn today?

    - by Rajesh Pillai
    What did you learn today? Every day teaches you something, some lesson or other. Some days you learn a new language, a new skill or a new hobby; you visit a new place, learn music, have a different dining experience, learn to swim, make some good friends, or get back in touch with an old friend. Each of these things teaches you something… So, what did you learn today? Some of the learnings from my past weeks are outlined below… Respect others and don’t underestimate them (though I never consciously do so). Be careful with your words, because words take on different meanings if the context is not clear. Spend some time on your personal stuff and allow others to do the same. Every individual is different: their skills, their thoughts and their perceptions are all different, so be polite. Time management (this is a tough skill to master) - at the end of the day I keep looking for more time, and maybe you do too. So, again: what did you learn today? This reflection is important because if you don’t know what you are learning at every stage of your life, then you are not learning and not growing. In short, you are not living. Learning is not memorization; it is self-realization… Happy learning!

    Read the article

  • SharePoint Saturday DC 2010 Slides, Demo Scripts, and Pictures

    - by Brian Jackett
    Wow! This past weekend I attended SharePoint Saturday Washington DC (SPSDC), which was quite an event to say the least.  For those unfamiliar, SharePoint Saturday is a community-driven event where various speakers gather to present at a FREE conference on all topics related to SharePoint.  This was my fifth SharePoint Saturday attended and the fourth I’ve spoken at.  SPSDC was a bit different than most SharePoint Saturdays, mostly due to the scale of it.  We had almost 950 attendees, over 80 speakers presenting close to 90 sessions, and dozens of sponsors.  A big thanks goes out to the organizers of this event.  They put in a lot of hard work and time to pull this event off and should be very proud of the end result.  For SPSDC I presented “The Power of PowerShell + SharePoint 2007”.  I want to thank all of the attendees of my session for coming and asking some great questions.  Below you can find the slides and demo scripts for this session.  I also took some photos throughout the day (not as many as usual since there was so much going on) so check them out.  If you have any follow-up questions feel free to drop me a line in the comments or the contact link at the top of the site.

    Slides and Scripts

    Click here for the demo scripts and slides posted on my SkyDrive.

    VERY IMPORTANT NOTE: One thing I forgot to mention in my presentation.  In order to run code against the SharePoint API you need to load the Microsoft.SharePoint.dll assembly first.  Run the command below at the PowerShell console to do that:

    [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")

    Photos

    Facebook album -or- My album on Windows Live site (higher res shots). View Full Album

    -Frog Out

    Read the article

  • MVP Nomination

    - by Nick Harrison
    I have debated whether or not to post this. My initial thought was not to blog about it, thinking that I would spare myself the embarrassment if I wasn't awarded. A little paranoid, I know, but these are paranoid times. After more reflection, I realize that there is no embarrassment in not winning. There is great honor in being nominated. Instead of worrying about not winning in the end, I need to enjoy the moment and enjoy being nominated. This is an extreme honor. I would love to hear your stories of being nominated. What was the process like? What was your reaction? Hopefully, I will have some good news to share here soon. If not, being nominated truly is an honor.

    Read the article

  • mongoDB Management Studio

    - by Liam McLennan
    This weekend I have been in Sydney at the MS Web Camp, learning about web application development. At the end of the first day we came up with application ideas and pitched them. My idea was to build a web management application for mongoDB.

    mongoDB

    I pitched my idea, put down the microphone, and then someone asked, “what’s mongo?”. Good question. MongoDB is a document database that stores JSON-style documents. This is a JSON document for a tweet from twitter:

    db.tweets.find()[0]
    {
      "_id" : ObjectId("4bfe4946cfbfb01420000001"),
      "created_at" : "Thu, 27 May 2010 10:25:46 +0000",
      "profile_image_url" : "http://a3.twimg.com/profile_images/600304197/Snapshot_2009-07-26_13-12-43_normal.jpg",
      "from_user" : "drearyclocks",
      "text" : "Does anyone know who has better coverage, Optus or Vodafone? Telstra is still too expensive.",
      "to_user_id" : null,
      "metadata" : { "result_type" : "recent" },
      "id" : { "floatApprox" : 14825648892 },
      "geo" : null,
      "from_user_id" : 6825770,
      "search_term" : "telstra",
      "iso_language_code" : "en",
      "source" : "&lt;a href=&quot;http://www.tweetdeck.com&quot; rel=&quot;nofollow&quot;&gt;TweetDeck&lt;/a&gt;"
    }

    A mongoDB server can have many databases, each database has many collections (instead of tables) and a collection has many documents (instead of rows).

    Development

    Day 2 of the Sydney MS Web Camp was allocated to building our applications. First thing in the morning I identified the stories that I wanted to implement:

    Scenario: View databases
    Scenario: View Collections in a database
    Scenario: View Documents in a Collection
    Scenario: Delete a Collection
    Scenario: Delete a Database
    Scenario: Delete Documents

    Over the course of the day the team (3.5 developers) implemented all of the planned stories (except ‘delete a database’) and also implemented the following:

    Scenario: Create Database
    Scenario: Create Collection

    Lessons Learned

    I’m new to MongoDB and in the past I have only accessed it from Ruby (for my hare-brained scheme). When it came to implementing our MongoDB management studio we discovered that there is no official MongoDB driver for .NET. We chose to use NoRM, honestly just because it was the only one I had heard of. NoRM was a challenge. I think it is a fine library, but it is focused on mapping strongly typed objects to MongoDB. For our application we had no prior knowledge of the types that would be in the MongoDB database, so NoRM was probably a poor choice. Here are some screens (click to enlarge):
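
    The post's management UI used NoRM, but as a rough sketch of the three "view" stories above (databases, collections, documents), here is what they look like with the official MongoDB .NET driver (MongoDB.Driver), which was released after this post was written. This is illustrative only; the connection string and the limit of 3 documents are assumptions.

    // Sketch only: enumerate databases, collections and a few documents using the
    // official MongoDB .NET driver (MongoDB.Driver NuGet package). This is not the
    // NoRM code from the post; the localhost connection string is an assumption.
    using System;
    using MongoDB.Bson;
    using MongoDB.Driver;

    class MongoExplorer
    {
        static void Main()
        {
            var client = new MongoClient("mongodb://localhost:27017");

            // Scenario: View databases
            foreach (var dbName in client.ListDatabaseNames().ToList())
            {
                Console.WriteLine("Database: " + dbName);
                var db = client.GetDatabase(dbName);

                // Scenario: View Collections in a database
                foreach (var collName in db.ListCollectionNames().ToList())
                {
                    Console.WriteLine("  Collection: " + collName);

                    // Scenario: View Documents in a Collection (first few only)
                    var coll = db.GetCollection<BsonDocument>(collName);
                    foreach (var doc in coll.Find(FilterDefinition<BsonDocument>.Empty).Limit(3).ToList())
                    {
                        Console.WriteLine("    " + doc.ToJson());
                    }
                }
            }
        }
    }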

    Read the article

  • Silverlight Goes Mobile!

    - by PeterTweed
    The most exciting announcements from Mix 2010 last week for me were the release of the Windows Phone 7 Series SDK and the news that the platform utilizes Silverlight for the application development technology. From the press and exposure that the platform is being given and the experience that is promised it looks like the Windows Phone 7 Series could eventually compete with the iPhone. For me this is exciting as Silverlight can now be used to develop RIA apps, easily deployed desktop apps and mobile apps. As someone who delivers enterprise technology solutions this equates to a whole bunch of opportunity knocking at the door and asking to join the party. Watch this space for future posts on developing apps on the Windows Phone 7 Series platform!

    Read the article

  • Dev Lop

    - by Jason Franks
    Back in the early 90s, before I was a professional geek--much less a geek with a blog--I saw this old chop socky movie. I don't remember what it was called, or who was in it... all I remember is that, in one scene, the venerable sensei tells the hero: "You must develop your nunchaku technique." This became a bit of a catchphrase amongst my high school mates. Well folks, I am developing my technique. This blog has been renamed and the old posts removed--I could go into my reasons for this, but that would defeat the point of the exercise. Sorry if you liked 'em. It has been a good couple of years since I wrote anything here, so I doubt that I am putting any regular readers out. Will I be posting here more often, now that I've renamed and rethemed the place? I don't know. In the meantime, check it out: Bruce Lee playing ping pong with nunchaku. --JF

    Read the article

  • Conscience and unconscience from an AI/Robotics POV

    - by Tim Huffam
    Just pondering the workings of the human mind, from an AI/robotics point of view (either of which I know little about)... If conscience is when you're thinking about it (processing it in realtime), and unconscience is when you're not thinking about it (e.g. it's autonomous behaviour), would it be fair to say then that conscience is software and unconscience is hardware? Considering that human learning is attributed to the number of neural connections made, and repetition is the key (the more the connections, the better one understands the subject, until it becomes a 'known'), could this be likened to forming hard connections? E.g. maybe learning would progress from an MCU to FPGAs, thereby offloading realtime processing to the hardware (an FPGA or some such device)?

    Read the article

  • Launching "Script of the Day" - Learn an amazing IT script sample every 24 hours

    - by Jialiang
    Every day is an opportunity to learn something or discover something new.  Learn one IT script sample every day; be an IT master in a year! Microsoft All-In-One Script Framework offers "Script of the Day".  "Script of the Day" introduces one amazing script sample every 24 hours that demonstrates a frequently asked IT task.  If you are curious and passionate about learning something new, follow the "Script of the Day" RSS feed or visit the "Script of the Day" homepage, and share your feedback with us at [email protected].

    Subscribe to the RSS feed: http://blogs.technet.com/b/onescript/rss.aspx?tags=ScriptOfTheDay
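
    If you want to follow the feed programmatically rather than in a reader, a minimal sketch using .NET's built-in SyndicationFeed type is shown below; the console output format is just an illustration.

    // Minimal sketch: read the "Script of the Day" RSS feed with the built-in
    // syndication types (requires a reference to System.ServiceModel on .NET Framework).
    using System;
    using System.ServiceModel.Syndication;
    using System.Xml;

    class ScriptOfTheDayReader
    {
        static void Main()
        {
            var url = "http://blogs.technet.com/b/onescript/rss.aspx?tags=ScriptOfTheDay";
            using (var reader = XmlReader.Create(url))
            {
                SyndicationFeed feed = SyndicationFeed.Load(reader);
                foreach (SyndicationItem item in feed.Items)
                {
                    Console.WriteLine("{0:d}  {1}", item.PublishDate, item.Title.Text);
                }
            }
        }
    }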

    Read the article

  • Hosting and consuming WCF services without configuration files

    - by martinsj
    In this post, I'll demonstrate how to configure both the host and the client in code, without the need for configuring services in the <system.serviceModel> section of the config file. In fact, you don't need a <system.serviceModel> section at all. What you will need (and want) sometimes is the Uri of the service in the configuration file. Configuring the Uri of the service is actually only needed for the client or when self-hosting, not when hosting in IIS. So, what exactly do we need to configure?

    The binding type and the binding constraints
    The metadata behavior
    The debug behavior

    You can of course configure even more if you want to; WCF is, after all, the king of configuration… As an example I'll be hosting and consuming a service that removes most of the default constraints for WCF services, using a BasicHttpBinding. Of course, with regard to security, it is probably better to have some constraints on the server, but this is only a demonstration. The ServiceConfig class in the code beneath is a static helper class that will be used in the examples. In this post, I'll be using this helper class for all configuration, for both the server and the client. In WCF, the client and the server each have their own WCF configuration. With this piece of code, they will be sharing the same configuration.

    public static class ServiceConfig
    {
        public static Binding DefaultBinding
        {
            get
            {
                var binding = new BasicHttpBinding();
                Configure(binding);
                return binding;
            }
        }

        public static void Configure(HttpBindingBase binding)
        {
            if (binding == null)
            {
                throw new ArgumentException("Argument 'binding' cannot be null. Cannot configure binding.");
            }

            binding.SendTimeout = new TimeSpan(0, 0, 30, 0); // 30 minute timeout
            binding.MaxBufferSize = Int32.MaxValue;
            binding.MaxBufferPoolSize = 2147483647;
            binding.MaxReceivedMessageSize = Int32.MaxValue;
            binding.ReaderQuotas.MaxArrayLength = Int32.MaxValue;
            binding.ReaderQuotas.MaxBytesPerRead = Int32.MaxValue;
            binding.ReaderQuotas.MaxDepth = Int32.MaxValue;
            binding.ReaderQuotas.MaxNameTableCharCount = Int32.MaxValue;
            binding.ReaderQuotas.MaxStringContentLength = Int32.MaxValue;
        }

        public static ServiceMetadataBehavior ServiceMetadataBehavior
        {
            get
            {
                return new ServiceMetadataBehavior
                {
                    HttpGetEnabled = true,
                    MetadataExporter = { PolicyVersion = PolicyVersion.Policy15 }
                };
            }
        }

        public static ServiceDebugBehavior ServiceDebugBehavior
        {
            get
            {
                var smb = new ServiceDebugBehavior();
                Configure(smb);
                return smb;
            }
        }

        public static void Configure(ServiceDebugBehavior behavior)
        {
            if (behavior == null)
            {
                throw new ArgumentException("Argument 'behavior' cannot be null. Cannot configure debug behavior.");
            }

            behavior.IncludeExceptionDetailInFaults = true;
        }
    }

    Configuring the server

    There are basically two ways to host a WCF service: in IIS, and self-hosting. When hosting a WCF service in a production environment using an SOA architecture, you'll most likely be hosting it in IIS. When testing the service in integration tests, it's very handy to be able to self-host services in the unit tests. In fact, you can share the WCF configuration for self-hosted services and services hosted in IIS. And that is exactly what you want to do: testing the same configuration for test and production environments.
    Configuring when self-hosting

    When self-hosting, in order to start the service you'll have to instantiate the ServiceHost class, configure the service and open it.

    // Create the service host.
    var host = new ServiceHost(typeof(MyService), endpoint);

    // Configure the binding
    host.AddServiceEndpoint(typeof(IMyService), ServiceConfig.DefaultBinding, endpoint);

    // Configure metadata behavior
    host.Description.Behaviors.Add(ServiceConfig.ServiceMetadataBehavior);

    // Configure debug behavior
    ServiceConfig.Configure((ServiceDebugBehavior)host.Description.Behaviors[typeof(ServiceDebugBehavior)]);

    // Start listening to the service
    host.Open();

    Configuring when hosting in IIS

    When you create a WCF service application with the wizard in Visual Studio, you'll end up with bits and pieces of code in order to get the service running:

    A svc-file with code-behind
    An interface to the service
    Web.config

    In order to get rid of the configuration in the <system.serviceModel> section, which the wizard has generated for us, we must tell the service that we have a factory that will create the service for us. We do this by changing the markup for the svc-file:

    <%@ ServiceHost Language="C#" Debug="true" Service="Namespace.MyService" Factory="Namespace.ServiceHostFactory" %>

    The markup tells IIS that we have a factory called ServiceHostFactory for this service. The service factory has a method we can override which will be called when someone asks IIS for the service. There are two overloads we can override:

    System.ServiceModel.ServiceHostBase CreateServiceHost(string constructorString, Uri[] baseAddresses)
    System.ServiceModel.ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)

    In this example, we'll be using the last one, so our implementation looks like this:

    public class ServiceHostFactory : System.ServiceModel.Activation.ServiceHostFactory
    {
        protected override System.ServiceModel.ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
        {
            var host = base.CreateServiceHost(serviceType, baseAddresses);
            host.Description.Behaviors.Add(ServiceConfig.ServiceMetadataBehavior);
            ServiceConfig.Configure((ServiceDebugBehavior)host.Description.Behaviors[typeof(ServiceDebugBehavior)]);
            return host;
        }
    }

    As you can see, we are using the same configuration helper we used when self-hosting. Now that you have a factory, the <system.serviceModel> section of the configuration can be removed, because the section will be ignored when the service has a custom factory. If you want to configure something else in the config file, you can still do so in some other section.

    Configuring the client

    Microsoft has helpfully created a ChannelFactory class in order to create a proxy client. When using this approach, you don't have to generate those awful proxy classes for the client. If you share the contracts with the server in their own assembly, as in the layer diagram below, you can share the same piece of code.
    The contracts in WCF are the interface to the service and, if any, the data contracts (custom types) the service depends on. Using the ChannelFactory with our configuration helper class is very simple:

    var identity = EndpointIdentity.CreateDnsIdentity("localhost");
    var endpointAddress = new EndpointAddress(endPoint, identity);
    var factory = new ChannelFactory<IMyService>(ServiceConfig.DefaultBinding, endpointAddress);
    var myService = factory.CreateChannel();
    myService.Hello();
    factory.Close();

    Happy configuration!

    Read the article

  • Visual Tree Enumeration

    - by codingbloke
    I feel compelled to post this blog because I find I'm repeatedly posting this same code in Silverlight and Windows Phone 7 answers on Stack Overflow. One common task that we feel we need to do is burrow into the visual tree in a Silverlight or Windows Phone 7 application (actually, more recently I found myself doing this in WPF as well).  This allows access to details that aren't exposed directly by some controls.  A good example of this sort of requirement is found in the “Restoring exact scroll position of a listbox in Windows Phone 7” question on Stack Overflow.  This required that the scroll position of the scroll viewer internal to a listbox be accessed.

    A caveat

    One caveat here is that we should seriously challenge the need for this burrowing, since it may indicate that there is a design problem.  Burrowing into the visual tree, or indeed burrowing out to containing ancestors, could represent significant coupling across module boundaries, and that generally isn't a good idea.  So why isn't this idea simply cast aside as a no-no?  Well, the whole concept of a “Templated Control”, which is in extensive use in these applications, opens up coupling between the content of the visual tree and the internal code of a control.   For example, I can completely change the appearance and positioning of the elements that make up a ComboBox.  The ComboBox control relies on specific template parts, elements with set names and of specified types, being present in my template.  Rightly or wrongly, this does kind of give license to writing code that has similar coupling.

    Hasn't this been done already?

    Yes it has.  There are a number of blogs already out there with similar solutions.  In fact, if you are using the Silverlight Toolkit, the VisualTreeExtensions class already provides this feature.  However, I prefer my specific code because of the simplicity principle I hold to: only write the minimum code necessary to give all the features needed.  In this case I add just two extension methods, Ancestors and Descendents (note I don't bother with “Get” or “Visual” prefixes).  Also, I haven't added Parent or Children methods, nor additional “AndSelf” methods, because all but Children are achievable with the addition of some other Linq methods.  I decided to give Descendents an additional overload for depth, hence a depth of 1 is equivalent to Children, but this overload is a little more flexible than a simple Children.
    So here is the code:-

    VisualTreeEnumeration

    public static class VisualTreeEnumeration
    {
        public static IEnumerable<DependencyObject> Descendents(this DependencyObject root, int depth)
        {
            int count = VisualTreeHelper.GetChildrenCount(root);
            for (int i = 0; i < count; i++)
            {
                var child = VisualTreeHelper.GetChild(root, i);
                yield return child;
                if (depth > 0)
                {
                    foreach (var descendent in Descendents(child, --depth))
                        yield return descendent;
                }
            }
        }

        public static IEnumerable<DependencyObject> Descendents(this DependencyObject root)
        {
            return Descendents(root, Int32.MaxValue);
        }

        public static IEnumerable<DependencyObject> Ancestors(this DependencyObject root)
        {
            DependencyObject current = VisualTreeHelper.GetParent(root);
            while (current != null)
            {
                yield return current;
                current = VisualTreeHelper.GetParent(current);
            }
        }
    }

    Usage examples

    The following are some examples of how to combine the above extension methods with Linq to generate the other axis scenarios that tree traversal code might require.

    Missing Axis Scenarios

    var parent = control.Ancestors().Take(1).FirstOrDefault();
    var children = control.Descendents(1);
    var previousSiblings = control.Ancestors().Take(1)
        .SelectMany(p => p.Descendents(1).TakeWhile(c => c != control));
    var followingSiblings = control.Ancestors().Take(1)
        .SelectMany(p => p.Descendents(1).SkipWhile(c => c != control).Skip(1));
    var ancestorsAndSelf = Enumerable.Repeat((DependencyObject)control, 1)
        .Concat(control.Ancestors());
    var descendentsAndSelf = Enumerable.Repeat((DependencyObject)control, 1)
        .Concat(control.Descendents());

    You might ask why I don't just include these in the VisualTreeEnumerator.  I don't on the principle of only including code that is actually needed.  If you find that one or more of the above is needed in your code then go ahead and create additional methods.  One of the downsides to extension methods is that they can make finding the method you actually want in IntelliSense harder.

    Here are some real world usage scenarios for these methods:-

    Real World Scenarios

    // Get the internal scroll viewer of a ListBox:-
    ScrollViewer sv = someListBox.Descendents().OfType<ScrollViewer>().FirstOrDefault();

    // Get all text boxes in the current UserControl:-
    var textBoxes = this.Descendents().OfType<TextBox>();

    // All UIElement direct children of the layout root grid:-
    var topLevelElements = LayoutRoot.Descendents(0).OfType<UIElement>();

    // Find the containing ListBoxItem for a UIElement:-
    var container = elem.Ancestors().OfType<ListBoxItem>().FirstOrDefault();

    // Seek a button with the name "PinkElephants" even if outside of the current Namescope:-
    var pinkElephantsButton = this.Descendents()
        .OfType<Button>()
        .FirstOrDefault(b => b.Name == "PinkElephants");

    // Clear all checkboxes with the name "Selector" in a TreeView:-
    foreach (CheckBox checkBox in elem.Descendents()
        .OfType<CheckBox>().Where(c => c.Name == "Selector"))
    {
        checkBox.IsChecked = false;
    }

    The last couple of examples above demonstrate a common requirement of finding controls that have a specific name.  FindName will often not find these controls because they exist in a different namescope.  Hope you find this useful; if not, I'm just glad to be able to link to this blog in future Stack Overflow answers.

    Read the article

  • The provider did not return a ProviderManifestToken string Entity Framework

    - by PearlFactory
    I moved from home to work, went to fire up my project, and after a long pause got "The provider did not return a ProviderManifestToken string", or the even more obscure ProviderIncompatibleException. After 20 minutes of chasing my tail through different versions of Entity Framework (4.1 vs 4.2, blah blah blah), I looked inside at the inner exception: "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible." DOH!!!! The clean translation is that it can't find SQL Server - it is offline, not running, or unreachable. So check that the power is on and the service is running, or, as in my case, edit web.config and change the connection back to the work SQL box. Hope you don't run into this pain, because the default errors in the Entity Framework 4.x releases are pretty unhelpful at the moment. Cheers
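
    Since the real cause is almost always connectivity rather than Entity Framework itself, a quick sanity check is to open the same connection string your context uses before blaming EF versions. A minimal sketch is below; the connection string name "MyContext" is an assumption - substitute whatever your DbContext actually points at.

    // Sketch: verify the connection string EF will use actually reaches SQL Server.
    // "MyContext" is an assumed name - use the one from your web.config/app.config.
    using System;
    using System.Configuration;
    using System.Data.SqlClient;

    class ConnectionCheck
    {
        static void Main()
        {
            string cs = ConfigurationManager.ConnectionStrings["MyContext"].ConnectionString;
            try
            {
                using (var conn = new SqlConnection(cs))
                {
                    conn.Open();
                    Console.WriteLine("Connected to: " + conn.DataSource);
                }
            }
            catch (SqlException ex)
            {
                // If this throws, the ProviderManifestToken error is really a
                // connectivity problem: wrong server name, SQL not running, firewall, etc.
                Console.WriteLine("Cannot reach SQL Server: " + ex.Message);
            }
        }
    }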

    Read the article

  • Business Choices and Evony

    - by Robert May
    Recently, I’ve been playing a game called Evony, and I finally decided to quit the game and thought I should warn others who might be tempted.  I also find a lot of insight with this game as an example.  A few of the companies that I’ve worked with or worked for have been like this and they are NOT good places to be. Evony is a joke designed to milk as much money out of people as possible.  As a professional software developer who mentors teams on how to build better software, here's what I see: They obviously offshore all development and have little oversight over that offshore development, and they probably have a small team at that.  Evidenced by the poor grammar throughout the game. They're seeking to maximize revenue and pushing to do as little development as possible, which would mean a small team. They're horribly understaffed in the customer support department as evidenced by never replying to this forum and never responding to bug reports or help requests (I've had one open with no response AT ALL for over a month . . .) They have way inadequate testing, no CI, and probably no automated unit tests.  You can see this by the poor grammar throughout the game and the type of bugs that show up. They aren't following a formal development process (no Agile, Waterfall, or anything else) as evidenced by their lack of predictable release cycle and lack of visibility. I'm guessing that the internal code base is terrible, otherwise, there wouldn't be an "Age II" that had nothing more than a new visual interface and a few rule tweaks.  This is also evidenced by the itty bitty scope of bug fixes and their inability to really fix bugs. Their Architect sucks.  Really, 42k user is all you can handle on a single server?  Could you REALLY not come up with a better way to scale to handle users?  They've built isolated worlds, instead of a single continuous world. Back to milking people for money--to really progress, you have to spend money. All of this adds up to knowing, deliberate actions on the part of management.  They CHOOSE to do this (like AOL choosing to send more discs instead of improve quality). So, what can we learn? This game will never really improve, since the bosses don't care, they're only in it for the money. The game will never have good support.  Again, the owners don't care. Giving them money only perpetuates this scam (and yes, I've given them money, way too much money. :() They don't care if you quit.  There's a new sucker born every day. Don't EVER go to work for them.  I've worked both with and for people like this and the culture is NEVER good. Ah well. Technorati Tags: Evony

    Read the article

  • Windows Azure Horror Story: The Phantom App that Ate my Data

    - by jasont
    Originally posted on: http://geekswithblogs.net/jasont/archive/2013/10/31/windows-azure-horror-story-the-phantom-app-that-ate-my.aspxI’ve posted in the past about how my company, Stackify, makes some great tools that enhance the Windows Azure experience. By just using the Azure Management Portal, you don’t get a lot of insight into what is happening inside your instances, which are just Windows VMs. This morning, I noticed something quite scary, and fittingly enough, it’s Halloween. I deleted a deployment after doing a new deployment and VIP swap. But, after a few minutes, the instance still appeared in my Stackify dashboard. I thought this odd, as we monitor these operations and remove devices that have been deleted. Digging in showed that it wasn’t a problem with Stackify. This instance was still there, and still sending data. You’ll notice though that the “Status” check shows that the instance no longer exists. And it gets scarier. I quickly checked the running services, and sure enough, the Windows Azure Guest Agent (which runs worker roles) was still running: “Surely, this can’t be” I’m thinking at this point. Knowing that this worker role is in our QA environment, it has debug logging turned on. And, using our log file tailing feature, I can see that, yes, indeed, this deleted deployment is still executing code. Pulling messages off queues. Sending alerts. Changing data. Fortunately, our tools give me the ability to manually disable the Windows service that runs this code. In the meantime, I have contacted everyone I know (and support) at Microsoft to address this issue. Needless to say, this is concerning. When you delete a deployment, it needs to actually delete. Not continue to eat your data. As soon as I learn more, I’ll be posting an update.
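
    As a stopgap of the kind described above (manually disabling the service that keeps the phantom role running), something like the following could be run on the instance itself. This is a sketch only: the service name below is an assumption, so check services.msc for the exact name first, and it obviously does not fix the underlying platform issue.

    // Stopgap sketch: stop the role host service on the phantom instance.
    // The service name is an assumption - verify it on the machine before relying on it.
    using System;
    using System.ServiceProcess;   // add a reference to System.ServiceProcess.dll

    class StopGuestAgent
    {
        static void Main()
        {
            using (var svc = new ServiceController("WindowsAzureGuestAgent"))
            {
                if (svc.Status == ServiceControllerStatus.Running)
                {
                    svc.Stop();
                    svc.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromSeconds(30));
                    Console.WriteLine("Guest agent stopped.");
                }
            }
        }
    }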

    Read the article

  • Why would you dual-run an app on Azure and AWS?

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/11/10/why-would-you-dual-run-an-app-on-azure-and-aws.aspxI had this question from a viewer of my Pluralsight course, Implementing the Reactive Manifesto with Azure and AWS, and thought I’d publish the response. So why would you dual-run your cloud app by hosting it on Azure and AWS? Sounds like a lot of extra development and management overhead. Well the most compelling reasons are reliability and portability. In 2012 I was working for a client who was making a big investment in the cloud, and at the end of the year we published their first external API for business partners. It was hosted in Azure and used some really nice features to route back into existing on-premise services. We were able to publish a clean, simple API to partners, and hide away the underlying complexity of the internal services while still leveraging them to do all the work. Two days after we went live, we were hit by the Azure SSL certificate expiry outage, and our API was unavailable for the best part of 3 days. Fortunately we had planned a gradual roll-out to partners, so the impact was minimal, but we’d been intending to ramp up quickly, and if the outage had happened a week or two later we would have been in a very bad place. Not least because our app could only run on Azure, we couldn’t package it up for another service without going back and reworking the code. More recently AWS had an issue with a networking device in one of their data centres which caused an outage that took the best part of a day to resolve. In both scenarios the SLAs are worthless, as you’ll get back a small percentage of your cloud expenditure, which is going to be negligible compared to your costs in dealing with the outage. And if your app is built specifically for AWS or Azure then if there’s an extended outage you can’t just deploy it onto a new set of kit from a different supplier. And the chances are pretty good there will be another extended outage, both for Microsoft and for Amazon. But the chances are small that it will happen to both at the same time. So my basic guidance has been: ignore the SLAs, go for better uptime by using two clouds. As soon as you need to scale beyond a single instance, start by scaling out to another cloud. Then scale out to different data centres in both clouds. Then you’ve got dual-cloud, quadruple-datacentre redundancy, so any more scaling you need can be left to the clouds to auto-scale themselves. By running in both clouds, you’ve made your app portable, so in the highly unlikely event that both AWS and Azure go down in multiple regions, you’ll have a deployment package which will let you spin up a new stack on yet another cloud, without having to rework your solution.
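
    To make the dual-cloud idea a little more concrete, here is a minimal client-side sketch that tries one deployment and falls back to the other. It is illustrative only: the URLs are placeholders, and a production setup would more likely use DNS-level traffic management or a health-checking load balancer rather than this naive loop.

    // Illustrative sketch: call the same API deployed to two clouds, falling back
    // when the first is unreachable. The base URLs are placeholders, not real endpoints.
    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class DualCloudClient
    {
        private static readonly string[] BaseUrls =
        {
            "https://myapi-azure.example.com",   // assumed Azure deployment
            "https://myapi-aws.example.com"      // assumed AWS deployment
        };

        static async Task<string> GetAsync(string path)
        {
            using (var http = new HttpClient { Timeout = TimeSpan.FromSeconds(5) })
            {
                foreach (var baseUrl in BaseUrls)
                {
                    try
                    {
                        var response = await http.GetAsync(baseUrl + path);
                        if (response.IsSuccessStatusCode)
                            return await response.Content.ReadAsStringAsync();
                    }
                    catch (HttpRequestException) { /* unreachable - try the next cloud */ }
                    catch (TaskCanceledException) { /* timed out - try the next cloud */ }
                }
                throw new InvalidOperationException("All cloud deployments are unavailable.");
            }
        }
    }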

    Read the article

  • ImageResizer - AzureReader2 with Azure SDK 2.2

    - by Chris Skardon
    Originally posted on: http://geekswithblogs.net/cskardon/archive/2013/10/29/imageresizer---azurereader2-with-azure-sdk-2.2.aspx

    So Azure SDK 2.2 came out recently, which means I can open my Azure projects in VS 2013 (yay), so I decided to do an upgrade of my MVC4 project to MVC5. I followed this link on how to do the upgrade, and generally things went OK. I fired up my app and ran into a ‘binding’ issue: AzureReader2 was trying to use Microsoft.WindowsAzure.Storage, Version=2.1.0.0, but alas, it couldn’t find it. I am not the only one (see stackoverflow), and one solution is to run ‘Add-BindingRedirect’ from the Package Manager Console, but that didn’t solve the problem for me, as it didn’t pick up on the Azure stuff, so I resorted to adding the redirect manually. So, in short, to get AzureReader2 to work with Azure SDK 2.2, you need to add the following to your web.config:

    <runtime>
      <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
        <!-- Other bindings here! -->
        <dependentAssembly>
          <assemblyIdentity name="Microsoft.WindowsAzure.Storage" publicKeyToken="31bf3856ad364e35" culture="neutral"/>
          <bindingRedirect oldVersion="0.0.0.0-2.1.0.3" newVersion="2.1.0.3"/>
        </dependentAssembly>
      </assemblyBinding>
    </runtime>
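
    If you want to confirm that the redirect actually took effect at runtime, a small check like the one below prints which Microsoft.WindowsAzure.Storage assembly version was really loaded; this is just a diagnostic sketch to run from the affected app (e.g. a test page or unit test).

    // Diagnostic sketch: report which Storage client assembly the binding redirect resolved to.
    using System;
    using Microsoft.WindowsAzure.Storage;

    class StorageVersionCheck
    {
        static void Main()
        {
            var asm = typeof(CloudStorageAccount).Assembly.GetName();
            Console.WriteLine("{0} {1}", asm.Name, asm.Version);   // expect 2.1.0.x after the redirect
        }
    }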

    Read the article

  • Free HTML5 & CSS3 Fundamentals course

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/10/13/free-html5--css3-fundamentals-course.aspx

    At http://www.microsoftvirtualacademy.com/training-courses/html5-css3-fundamentals-development-for-absolute-beginners there is a free course on HTML5 & CSS3 Fundamentals. This is not a course on pretty web design but on writing good, standards-compliant HTML. Please note that to get the work files for the course you need to go to http://channel9.msdn.com/Series/HTML5-CSS3-Fundamentals-Development-for-Absolute-Beginners/Series-Introduction-01 as the Microsoft Academy downloads do not seem to work! The course is presented by Bob Tabor, who runs http://www.learnvisualstudio.net

    Read the article

  • A starting point for Use Cases and User Stories

    - by Mike Benkovich
    Originally posted on: http://geekswithblogs.net/benko/archive/2013/07/23/a-starting-point-for-use-cases-and-user-stories.aspx

    Software is a challenging business and is rife with opportunities to go wrong. Over the years a number of methodologies have evolved to help make sure that things go right. In an effort to contribute to this I've created a list of user stories that I think should be included and are sometimes just assumed. Note this is a work in progress, so I'm looking for your feedback. I'm curious what you would add to or change in my list.

    · As a DBA I am working with a normalized data model that reflects an agreed-upon logical model for the system
    · As a DBA I am using consistent names for my fields which match the naming standards of my organization
    · As a DBA my model supports simple CRUD operations against all the entities
    · As an Application Architect the UI has been validated against the business requirements and a complete set of user stories has been created
    · As an Application Architect the database model has been validated against the UI
    · As an Application Architect we have a logical business model that describes all the known and/or expected usage of the system during the software's expected lifecycle
    · As an Application Architect we have a deployment diagram that describes how the application components will be deployed
    · As an Application Architect we have a navigation diagram that describes the typical application flow
    · As an Application Architect we have identified points of interaction which describe how the UI interacts with the services and the data storage
    · As an Application Architect we have identified external systems which may now or in the future use the data of this application and have adapted the logical model to include these interactions
    · As an Application Architect we have identified existing systems and tools that can be extended and/or reused to help this application achieve its business goals
    · As a Project Manager all team members understand the goals of each release and iteration as they are planned
    · As a Project Manager all team members understand their role and the roles of others
    · As a Project Manager we have the support of the business to do the right thing even if it is not the expedient thing
    · As a Test/QA Analyst we have created a simulation environment for testing the system which does not use sensitive data and accurately reflects the scenarios of all the data that will be supported by the system
    · As a Test/QA Analyst we have identified the matrix of supported clients used to access the system, including the likely browsers, mobile devices and other interfaces to work with the application
    · As a Test/QA Analyst we have created exit criteria for each user story that match the requirements of the business story that was used to create them
    · As a Test/QA Analyst we have access to a test environment that is isolated from the production and staging environments
    · As a Test/QA Analyst we have a way to reset the environment so we can rerun tests when a new version of the software becomes available
    · As a Test/QA Analyst I am able to automate portions of the test process

    Thoughts? -mike

    Read the article

  • Trying to Organise a Software Craftsman Pilgrimage

    - by Liam McLennan
    As I have previously written, I am trying to organise a software craftsman pilgrimage. The idea is to donate some time working with quality developers so that we learn from each other. To be honest I am also trying to be the worst. “Always be the worst guy in every band you’re in.” Pat Metheny I ended up posting a message to both the software craftsmanship group and the Seattle Alt.NET group and I got a good response from both. I have had discussions with people based in: Seattle, New York, Long Island, Austin and Chicago. Over the next week I have to juggle my schedule and confirm the company(s) I will be spending time with, but the good news is it seems that I will not be left hanging.

    Read the article

  • Existential CAML - does an item exist?

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved More CAML and existence. In “SharePoint List Issues” and “Passing the CAML thru the EY of the NEEDL we saw how to use CAML to return a subset of a list and also how to check the existence of lists, fields, defaults, and values.   Here is a general function that may be used to get a subset of a list by comparing a “text” type field to a given value.  The function is pretty smart. It can be used to check existence or to return a collection of items that may be further processed. It handles non existing fields and replaces them with the ubiquitous “Title”, but only once!  /// Build an SPQuery that returns a selected set of columns from a List /// titleField must be a "Text" type field /// When the titleField parameter is empty ("") "Title" is assumed /// When the title parameter is empty ("") All is assumed /// When the columnNames parameter is null, the query returns all the fields /// When the rowLimit parameter is 0, the query return all the items. /// with a non-zero, the query returns at most rowLimits /// /// usage: to check if an item titled "Blah" exists in your list, do: /// colNames = {"Title"} /// col = GetListItemColumnByTitle(myList, "", "Blah", colNames, 1) /// Check the col.Count. if > 0 the item exists and is in the collection private static SPListItemCollection GetListItemColumnByTitle(SPList list, string titleField, string title, string[] columnNames, uint rowLimit) {   try   {     char QT = Convert.ToChar((int)34);     SPQuery query = new SPQuery();     if (title != "")     {       string tf = titleField;       if (titleField == "") tf = "Title";       tf = CAMLThisName(list, tf, "Title");        StringBuilder titleQuery = new StringBuilder  ("<Where><Eq><FieldRef Name=");       titleQuery.Append(QT);       titleQuery.Append(tf);       titleQuery.Append(QT);       titleQuery.Append("/><Value Type=");       titleQuery.Append(QT);       titleQuery.Append("Text");       titleQuery.Append(QT);       titleQuery.Append(">");       titleQuery.Append(title);       titleQuery.Append("</Value></Eq></Where>");       query.Query = titleQuery.ToString();     }     if (columnNames.Length != 0)     {       StringBuilder sb = new StringBuilder("");       bool TitleAlreadyIncluded = false;       foreach (string columnName in columnNames)       {         string tst = CAMLThisName(list, columnName, "Title");         //Allow Title only once         if (tst != "Title" || !TitleAlreadyIncluded)         {           sb.Append("<FieldRef Name=");           sb.Append(QT);           sb.Append(tst);           sb.Append(QT);           sb.Append("/>");           if (tst == "Title") TitleAlreadyIncluded = true;         }       }       query.ViewFields = sb.ToString();     }     if (rowLimit > 0)     {        query.RowLimit = rowLimit;     }     SPListItemCollection col = list.GetItems(query);     return col;   }   catch (Exception ex)   {     //Console.WriteLine("GetListItemColumnByTitle" + ex.ToString());     //sw.WriteLine("GetListItemColumnByTitle" + ex.ToString());     return null;   } } Here I called it for a list in which “Author” (it is the internal name for “Created”) and “Blah” do not exist. The list of column names is:  string[] columnNames = {"Test Column1", "Title", "Author", "Allow Multiple Ratings", "Blah"};  So if I use this call, I get all the items for which “01-STD MIL_some” has the value of 1. the fields returned are: “Test Column1”, “Title”, and “Allow Multiple Ratings”. 
    Because “Title” was already included and the default for non-existing fields is “Title”, it was not replicated for the 2 non-existing fields.

    SPListItemCollection col = GetListItemColumnByTitle(masterList, "01-STD MIL_some", "1", columnNames, 0);

    The following call checks if there are any items where “01-STD MIL_some” has the value of “1”. Note that I limited the number of returned items to 1.

    SPListItemCollection col = GetListItemColumnByTitle(masterList, "01-STD MIL_some", "1", columnNames, 1);

    The code also uses the CAMLThisName function, which checks for the existence of a field and returns its InternalName. This is yet another useful function that I use again and again.

    /// <summary>
    /// Return a field's internal name (CAMLName)
    /// or the "default" name that you passed.
    /// To check existence pass "" or some funny name like "mud in your eye"
    /// </summary>
    public static string CAMLThisName(SPList list, string name, string def)
    {
      String CAMLName = def;
      SPField fld = GetFieldByName(list, name);
      if (fld != null)
      {
        CAMLName = fld.InternalName;
      }
      return CAMLName;
    }

    That’s all folks?!

    Read the article

  • Silverlight: Creating great UIs

    - by xamlnotes
    I was always told I was left-brained and could not draw, and I bought into that view. Somewhere down the road, years ago, I did learn to play guitar, and to play by ear at that.  Now that's not all left-brained, so my right brain must be working.  About a year ago, my good friend Billy Hollis turned me on to a book by Betty Edwards (http://www.drawright.com/).  I started reading this and soon I found myself drawing on napkins in restaurants while we were waiting on food, and at many other times too.  Dang'd if I could not draw! Check out my UI article at Dev Pro Connections (Great UIs article) on some of my experiences. Here are a few more links that are really cool too. Cool color combinations web site. Simply painting is awesome; saw this guy on TV. This site has some great tools for color contrasting

    Read the article

  • Microsoft BUILD 2013&ndash;Day 1 Summary

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/27/microsoft-build-2013ndashday-1-summary.aspx

    I'm happy to be at BUILD this week, mainly because my flights finally got me here late on Tuesday.  My biggest complaints so far are the flights and the hotel.  It seems that almost every flight into San Francisco was delayed by multiple hours.  The Sequester so lovingly forced on America by Congress means that the airport was short of controllers.  That, along with poor weather and airport construction, meant most people were 2-3 hours late arriving.  Add on top of that the fact that the hotel I picked during registration is absolutely horrid.  It looks like something out of a ghost hunters show and smells like it too.  I think if Microsoft is going to select a hotel they need to make sure that it is adequate. Rant over! So what happened the first day?  Steve Ballmer started off the keynote along with Julie Larson-Green and a cast of others.  We finally found out that there were around six thousand people attending BUILD and that the focus this year would be Windows 8, Windows Phone 8 and Azure.  For the rest of the keynote I am going to have a separate post. You can't have a Microsoft conference without some fun.  This year they have a hunt for pins that represent different gestures in Windows 8.  I got all of mine.  Now they just need to pull my name. The sessions I attended were really good. They covered live tiles, what's new in XAML and building Windows Phone UIs, presented by Kraig Brockschmidt, Tim Heuer and Shawn Oster respectively.  These will also be covered in separate posts. The exhibit area was interesting, but somewhat disappointing.  TechEd 2012 I think was better organized and better staffed by the vendors.  It also seemed that the Microsoft teams' booths were in need of some organization and staffing. Overall it was a really fun day, capped off by all six thousand attendees standing in line to get their Acer 8” tablets and Surface Pros.  What a day!  Stay tuned for follow-up posts. del.icio.us Tags: BUILD 2013,Windows 8.1,Windows Phone,XAML,Keynote

    Read the article

  • F# Objects &ndash; Integrating with the other .Net Languages &ndash; Part 1

    - by MarkPearl
    In the next few blog posts I am going to explore objects in F#. Up to now, my dabbling in F# has really been a few liners, and while I haven't reached the point where F# is my language of preference, I am already seeing the benefits of the language when solving certain types of programming problems. I believe that the F# language will be used in a silo-like architecture, and that the real benefit of having F# under your belt is that you can solve the problems F# lends itself towards and then interact with other .NET languages for the rest. When I was still very new to the F# language I did the following post covering how to get F# & C# to talk to each other. Today I am going to use a similar approach to demonstrate the structure of F# objects when inter-operating with other languages. Let's start with an empty F# class…

    type Person() = class end

    Very simple, and all we really have done is declare an object that has nothing in it. We could, if we wanted to, make an object that takes a constructor with parameters… the code for this would look something like this…

    type Person =
        {
            Firstname : string
            Lastname : string
        }

    What's interesting about this syntax is that when you try to interop with this object from another .NET language like C#, you would see the following… Not only has a constructor been created that accepts two parameters, but Firstname and Lastname are also exposed on the object. Now it's important to keep in mind that value holders in F# are immutable by default, so you would not be able to change the value of Firstname after the construction of the object; in C# terms it has been set to readonly. One could however explicitly state that the value holders are mutable, which would then allow you to change the values after the actual creation of the object.

    type Person =
        {
            mutable Firstname : string
            mutable Lastname : string
        }

    Something that bugged me for a while was: what if I wanted an F# object that requires values in its constructor, but does not expose them as part of the object? After bashing my head for a few moments I came up with the following syntax, which achieves this result.

    type Person(Firstname : string, Lastname : string) =
        member v.Fullname = Firstname + " " + Lastname

    What I haven't figured out yet is what the difference is between the () & {} brackets when declaring an object.
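
    For completeness, this is roughly what consuming that F# record from C# looks like, matching the interop behaviour described above. It assumes the F# type is compiled into a referenced assembly; the namespace FSharpLib and the sample values are assumptions for the sketch.

    // Sketch: consuming the F# record from C#. "FSharpLib" is an assumed namespace.
    using System;
    using FSharpLib;

    class Program
    {
        static void Main()
        {
            // The record surfaces as a constructor taking both fields...
            var person = new Person("Ada", "Lovelace");

            // ...and as properties: read-only for the immutable record,
            // settable here as well if the fields are declared 'mutable' in F#.
            Console.WriteLine(person.Firstname + " " + person.Lastname);
        }
    }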

    Read the article

  • Introduction to Microsoft SQL Azure: Free self-paced Microsoft class

    - by Jim Duffy
    Here is a wonderful opportunity to take advantage of some FREE Microsoft Learning content on SQL Azure. This self-paced 2 hour class is broken down into 4 segments each with a self test at the end. Class Segments 1) Understanding the SQL Azure Platform 2) Designing Applications for SQL Azure 3) Migrating Applications to SQL Azure 4) Achieving Scale with SQL Azure If you’re getting started with Windows Azure or have been working with it for a while and need to take advantage of the storage capabilities offered by SQL Azure this is going to be a great place for you to start learning. Have a day. :-|

    Read the article
