Search Results

Search found 4879 results on 196 pages for 'geeks'.

  • Silverlight Cream for February 10, 2011 -- #1045

    - by Dave Campbell
    In this Issue: Mark Monster, Jaime Rodriguez, Mark Hopkins, WindowsPhoneGeek, David Anson, Jesse Liberty, Jeremy Likness, Martin Krüger(-2-), Beth Massi, Joost van Schaik, Laurent Bugnion, and Arik Poznanski.

    Above the Fold:
    - Silverlight: "Parsing the Visual Tree with LINQ" – Jeremy Likness
    - WP7: "Silverlight-ready PNG encoder implementation shows one way to use .NET IEnumerables effectively" – David Anson
    - LightSwitch: "How to Send Automated Appointments from a LightSwitch Application" – Beth Massi

    Shoutouts: Be sure to visit SilverlightShow... check out their top hits last week: SilverlightShow for Jan 31 – Feb 06, 2011. Jaime Rodriguez has a post up that all the WP7 folks will be interested in: FAQ about copy paste functionality in upcoming release.

    From SilverlightCream.com:
    - Make use of WCF FaultContracts in Silverlight clients – Mark Monster takes a shot at answering the "The remote server returned an error: NotFound" problem we all see while connecting to a WCF Service.
    - Communication between HTML in WebBrowser and Silverlight app – Jaime Rodriguez responds to questions he received with this post about bi-directional communication between the control and HTML.
    - WP7 - Real Apps, Real Code – Mark Hopkins has a post up about some WP7 starter kits that you can get all the source for, and you can download the app from the Marketplace first to see if it interests you!
    - WP7 AboutPrompt in depth – WindowsPhoneGeek has this cool post up covering the AboutPrompt from the Coding4Fun toolkit in detail... great diagrams showing where all the elements are, and code examples with images.
    - Silverlight-ready PNG encoder implementation shows one way to use .NET IEnumerables effectively – David Anson describes why he took it upon himself to write his own PNG encoder for Silverlight... and we all thank him for doing so and providing us with the code!
    - Navigation 101 – Cancelling Navigation – Jesse Liberty's latest WP7 From Scratch episode is up (number 32), and he's talking about Navigation and how to cancel it if you need to.
    - Parsing the Visual Tree with LINQ – Jeremy Likness demonstrates using LINQ to rat out information in the visual tree of your XAML. To quote Jeremy: "you can easily check for intersections between elements and find any type of element no matter how deep within the tree it is".
    - SpriteAnimationBehavior – Martin Krüger has a couple more fun things in the Expression Gallery that I haven't discussed. First up is a behavior that animates up to 999 images and lets you control the FramesPerSecond... great demo on the Expression Gallery to play with.
    - Second alternative: Storyboard should not start before the Silverlight application is loaded – Martin Krüger's latest is a way to programmatically wait for the Loaded event so that you know you can let your animations fly.
    - How to Send Automated Appointments from a LightSwitch Application – Beth Massi's latest LightSwitch post follows up her Outlook automation one with sending appointments using the standard iCalendar format... all the code included, of course.
    - The case for the Bindable Application Bar for Windows Phone 7 – Joost van Schaik posts about a bindable Application Bar for your WP7 apps... grab the code and don't leave home without it :)
    - MVVM Light V4 preview (BL0014) release notes – Laurent Bugnion posted an update to MVVM Light on CodePlex a couple days ago. This is an early preview of what he plans on having in version 4, so check out the post for what's new and fun.
    - Search Digg on Windows Phone 7 – Arik Poznanski followed up his RSS post from last week with this one on searching Digg on WP7... he's discussing, and providing, a utility class for doing it.

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream. Join me @ SilverlightCream | Phoenix Silverlight User Group. Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

  • Performance Optimization – It Is Faster When You Can Measure It

    - by Alois Kraus
    Performance optimization in bigger systems is hard, because the measured numbers can vary greatly depending on the measurement method of your choice. To measure the execution timing of specific methods in your application, you usually use one of the following.

    Time measurement methods and their potential pitfalls:

    - Stopwatch: The most accurate method on recent processors. Internally it uses the RDTSC instruction. Since the counter is processor specific, you can get greatly different values when your thread is scheduled to another core or the core goes into a power saving mode. But things do change, luckily. Intel's Designer's vol. 3b, section 16.11.1 states: "Invariant TSC: The time stamp counter in newer processors may support an enhancement, referred to as invariant TSC. Processor's support for invariant TSC is indicated by CPUID.80000007H:EDX[8]. The invariant TSC will run at a constant rate in all ACPI P-, C-, and T-states. This is the architectural behavior moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource."
    - DateTime.Now: Good, but it has a resolution of only about 16 ms, which may not be enough if you need more accuracy.

    Reporting methods and their potential pitfalls:

    - Console.WriteLine: OK if not called too often.
    - Debug.Print: Are you really measuring performance with debug builds? Shame on you.
    - Trace.WriteLine: Better, but you need to plug in a good output listener, like a trace file. Be aware that the first time you call this method it reads your app.config and deserializes your system.diagnostics section, which also takes time.

    In general it is a good idea to use a tracing library that measures the timing for you, so that you only need to decorate some methods with tracing and can later verify whether something has changed for the better or worse.

    In my previous article I compared measuring performance to quantum mechanics. The analogy works surprisingly well. When you measure a quantum system, there is a lower limit to how accurately you can measure. The Heisenberg uncertainty relation tells us that you cannot measure both the momentum and the location of a particle at the same time with infinite accuracy. For programmers, the two variables are execution time and memory allocations. If you try to measure the timings of all methods in your application, you need to store them somewhere. The fastest storage besides the CPU cache is main memory, but if your timing values consume all available memory, there is no memory left for the actual application to run. On the other hand, if you try to record all memory allocations of your application, you also need to store the data somewhere, which costs you memory and execution time. These constraints are always there, and regardless of how good the marketing of performance and memory profiler vendors is: any measurement will disturb the system in a non-predictable way. Commercial tool vendors will tell you they calculate this overhead and subtract it from the measured values to give you the most accurate numbers, but in reality that is not entirely true.
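    As a minimal illustration of the decorate-with-tracing approach mentioned above, a disposable timing scope built on Stopwatch could look like the sketch below (the class and method names are hypothetical, not from the original article):

        using System;
        using System.Diagnostics;

        // Minimal sketch of a timing decoration: wrap a method call in a scope
        // that reports its duration on Dispose. Names are illustrative assumptions.
        sealed class TimingScope : IDisposable
        {
            private readonly string _name;
            private readonly Stopwatch _watch = Stopwatch.StartNew();

            public TimingScope(string name) { _name = name; }

            public void Dispose()
            {
                _watch.Stop();
                // Report via Trace so a listener (e.g. a trace file) can pick it up.
                Trace.WriteLine(string.Format("{0} took {1:N0} ms", _name, _watch.Elapsed.TotalMilliseconds));
            }
        }

        class Program
        {
            static void Main()
            {
                using (new TimingScope("Work"))
                {
                    Work();
                }
            }

            static void Work()
            {
                System.Threading.Thread.Sleep(50); // placeholder for the method under test
            }
        }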
    After falling into the trap of trusting the profiler timings several times, I have gotten into the habit of:

    1. Measure with a profiler to get an idea where potential bottlenecks are.
    2. Measure again, tracing only the specific methods, to check if a method is really worth optimizing.
    3. Optimize it.
    4. Measure again.
    5. Be surprised that your optimization has made things worse.
    6. Think harder.
    7. Implement something that really works.
    8. Measure again.
    9. Finished! Or look for the next bottleneck.

    Recently I have looked into issues with serialization performance. DataContractSerializer was used for serialization, and I was not sure whether XML is really the most optimal wire format. After looking around I found protobuf-net, which uses Google's Protocol Buffers format, a compact binary serialization format. What is good for Google should be good for us. A small sample app to check out performance was a matter of minutes:

        using ProtoBuf;
        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Reflection;
        using System.Runtime.Serialization;

        [DataContract, Serializable]
        class Data
        {
            [DataMember(Order = 1)] public int IntValue { get; set; }
            [DataMember(Order = 2)] public string StringValue { get; set; }
            [DataMember(Order = 3)] public bool IsActivated { get; set; }
            [DataMember(Order = 4)] public BindingFlags Flags { get; set; }
        }

        class Program
        {
            static MemoryStream _Stream = new MemoryStream();

            static MemoryStream Stream
            {
                get
                {
                    _Stream.Position = 0;
                    _Stream.SetLength(0);
                    return _Stream;
                }
            }

            static void Main(string[] args)
            {
                DataContractSerializer ser = new DataContractSerializer(typeof(Data));
                Data data = new Data
                {
                    IntValue = 100,
                    IsActivated = true,
                    StringValue = "Hi this is a small string value to check if serialization does work as expected"
                };

                var sw = Stopwatch.StartNew();
                int Runs = 1000 * 1000;
                for (int i = 0; i < Runs; i++)
                {
                    //ser.WriteObject(Stream, data);
                    Serializer.Serialize<Data>(Stream, data);
                }
                sw.Stop();
                Console.WriteLine("Did take {0:N0}ms for {1:N0} objects", sw.Elapsed.TotalMilliseconds, Runs);
                Console.ReadLine();
            }
        }

    The results are indeed promising:

        Serializer     Time in ms    N objects
        protobuf-net          807    1,000,000
        DataContract        4,402    1,000,000

    Nearly a factor of 5 faster, and a much more compact wire format. Let's use it! After switching over to protobuf-net, the transferred wire data dropped by a factor of two (good), but the performance worsened by nearly a factor of two. How is that possible? We measured it! Protobuf-net is much faster! As it turns out, protobuf-net is faster, but it has a cost: the first time a type is de/serialized it uses some very smart code-gen, which does not come for free. Let's try to measure this one by setting the Runs value of our performance test app not to one million but to 1:

        Serializer     Time in ms    N objects
        protobuf-net           85            1
        DataContract           24            1

    The code-gen overhead is significant and can take up to 200 ms for more complex types. The break-even point, where the code-gen cost is amortized by the faster serialization performance, is (assuming small objects) somewhere between 20,000 and 40,000 serialized objects. As it turned out, my specific scenario involved about 100 types and 1,000 serializations in total. That explains why the good old DataContractSerializer is not so easy to take out of business.
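    As a rough sanity check of that break-even estimate, using only the numbers measured above (this helper is illustrative arithmetic, not code from the original post):

        using System;

        class BreakEvenCheck
        {
            static void Main()
            {
                // Per-object saving measured above: (4402 ms - 807 ms) / 1,000,000 objects.
                double savingPerObjectMs = (4402.0 - 807.0) / 1000000.0; // ~0.0036 ms per object
                // One-time code-gen cost: ~85 ms for the simple type above,
                // up to ~200 ms for complex types.
                Console.WriteLine(85.0 / savingPerObjectMs);  // ~23,600 objects
                Console.WriteLine(200.0 / savingPerObjectMs); // ~55,600 objects
            }
        }

    The simple type measured here breaks even at roughly 24,000 objects, consistent with the 20,000-40,000 range quoted above for small objects; heavier code-gen pushes the break-even point higher.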
    The final approach I ended up with was to reduce the number of types and to serialize primitive types directly via BinaryWriter, which turned out to be a pretty good alternative. It sounded good until I measured again and found that my optimizations so far did not help much. After looking more deeply at the profiling data, I found that one of the 1,000 calls took 50% of the time. So how do I find out which call it was?

    Normal profilers fall short at this discipline. A (totally undeservedly) relatively unknown profiler is SpeedTrace, which, unlike normal profilers, creates traces of your application by instrumenting your IL code at runtime. This way you can look at the full call stack of the one slow serializer call to find out whether that stack was something special. Unfortunately the call stack showed nothing special. But luckily I have my own tracing as well, and I could see that the slow serializer call happened during the serialization of a bool value. When, after much analysis, you encounter something unreasonable that you cannot explain, the chances are good that your thread was suspended by the garbage collector. Whether there is a problem with excessive GCs remains to be investigated, but so far the serialization performance seems to be mostly OK.

    When you profile a complex system with many interconnected processes, you can never be sure that the timings you just measured are accurate at all. Some process might be hitting the disk, slowing things down for all other processes for some seconds. There is a big difference between warm and cold startup. If you restart all processes, you can basically forget the first run, because the OS disk cache, JIT and GCs make the measured timings very flexible. When you are in need of a random number generator, you should measure cold startup times of a sufficiently complex system. After the first run you can try again, getting different and much lower numbers. Now try again at least two more times to get some feeling for how stable the numbers are. Oh, and try to do the same thing the next day. It might be that the bottleneck you found yesterday is gone today. Thanks to the GC and other random stuff, it can become pretty hard to find things worth optimizing once no big bottlenecks, except bloatloads of code, are left anymore.

    When I have found a spot worth optimizing, I make the code changes and measure again to check if something has changed. If it has gotten slower, and I am certain that my change should have made it faster, I can blame the GC again. The thing is: if you optimize and allocate fewer objects, the GC times will shift to some other location. If you are unlucky, this will make your faster code slower, because you now see GCs at times where none were before. This is where things get really tricky. A safe escape hatch is to create a repro of the slow code in an isolated application, so you can change things quickly in a reliable manner. Then the normal profilers also start working again. As Vance Morrison points out, it is much more complex to profile a system against the wall clock than to optimize for CPU time. The reason is that for wall clock time analysis you need to understand how your system works and which threads (if you have not one but perhaps 20) cause a visible delay to the end user, and which threads can wait a long time without affecting the user experience at all.

    Next time: Commercial profiler shootout.

  • Cross platform application revolution

    - by anirudha
    Every developer knows that a Windows application runs only on Windows. That limitation makes developers think small: this application is only for Windows, that one only for Mac. It is also a big reason behind the success of the web, because nobody buys a new operating system just to use an application on another platform; why would they purchase one when they cannot even try the application first? The web is better in that respect: whether it is IE 6 through IE 8, Chrome or Chrome 8, Firefox or Firefox 3.6.13 (even a beta), a good website shows up the same in every browser, with perhaps some minor differences. Thinking in cross-platform application development is much bigger than making an application that is only for some audience, where the audience is defined by the OS they use: if they use a Mac they can't use this, if they use Windows they can't use that. The web is for everyone, from a child to a grandfather, male and female; everyone can use the internet without worrying about what they have, whether Windows or Mac, or any browser, even one as silly as IE 6.

    Cross-platform applications have one great thing going for them: people. Everyone can use them without the problems desktop software brings, like the Windows messages "some component is missing, click here to get it", or "you can't use this software because you have Windows SP1/SP2/SP3", or "you need to install this first". This stupidity comes mainly from Microsoft software. Last year I found an issue in the Web Platform Installer (WPI): it forced users to install another product first. For example, you had to install Visual Studio 2008 before installing Visual Studio 2010 Express. Can anyone tell me why a user should get the old 2008 version when they are asking for the latest Express version? I never tried again to check whether the issue was solved. Another thing: you can't get IE 9 on Windows XP. In that case, don't worry about it, because Firefox and Chrome are much better anyway. The stupidity from Microsoft is too much: they never tell you about Firebug, and when they discuss their damaged tool in IE they call it "developer tools", because they are Microsoft and they only think about how to market their products. You need to install many things without any reason, such as SQL Server components even when you use another RDBMS. You can't say no to them, because you need a tool, and the tool claims to require a useless SQL Server component. I have never found a website that forces me to install this and that before it lets me in. That's another good thing about the web: nothing is required. You don't need to install the .NET Framework 4 before enjoying Facebook or Twitter. Maybe you have found that Microsoft's failed project Window Planet forces you to get Silverlight before going there; I had never heard of it until a friend mentioned it to me a month ago, and I found nothing better there. What would users do if Facebook forced them to install Silverlight, Adobe Flash, or maybe the .NET Framework 4? If you don't install it, Facebook tells you bye-bye, never come back until you have installed it. The door opens for you only after installing it, not before. The story is the same as a mother telling her child, "say sorry before you come into the house". The web never forces you to do something for it. Sometimes a site even lets you use an account from another website, which is a very fast login for you, because they know the importance of your time.

  • C#/.NET Little Wonders: Of LINQ and Lambdas - A Presentation

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past Little Wonders posts can be found here. Today I'm giving a brief beginner's guide to LINQ and lambdas at the St. Louis .NET User's Group, so I thought I'd post the presentation here as well. I updated the presentation a bit and also added some notes on the query syntax. Enjoy! The C#/.NET Fundamentals: Of Lambdas and LINQ Presentation. Of Lambdas and LINQ - View more presentations from BlackRabbitCoder. Technorati Tags: C#, CSharp, .NET, Little Wonders, LINQ, Lambdas

  • Scaling-out Your Services by Message Bus based WCF Transport Extension – Part 1 – Background

    - by Shaun
    Cloud computing gives us more flexibility with computing resources: we can provision and deploy an application or service with multiple instances over multiple machines. But as the number of service instances grows, how to balance the incoming messages and workload becomes a new challenge. Currently there are two approaches we can use to pass incoming messages to the service instances; I would like to call them dispatcher mode and pulling mode.

    Dispatcher Mode

    The dispatcher mode introduces a role that is responsible for finding the best service instance to process each request. The image below describes the shape of this mode. There are four clients communicating with the service through the underlying transport. For example, if we are using HTTP, the clients might all connect to the same service URL. On the server side there is a dispatcher listening on this URL that tries to retrieve all messages. When a message comes in, the dispatcher finds a proper service instance to process it. There are three mechanisms for finding the instance:

    - Round-robin: The dispatcher always sends the message to the next instance. For example, if the dispatcher sent a message to instance 2, the next message will be sent to instance 3, regardless of whether instance 3 is busy at that moment.
    - Random: The dispatcher picks a service instance randomly and, as with round-robin, regardless of whether the instance is busy.
    - Sticky: The dispatcher sends all related messages to the same service instance. This approach is typically used when the service methods are stateful or session-ful.

    But as you can see, none of these approaches is really load balanced. The clients send messages at any time, and each message might take a different processing duration on the server side. This means that in some cases some of the service instances are very busy while others are almost idle. For example, with round-robin it could happen that most of the simple task messages are passed to instance 1 while the complex ones are sent to instance 3, even though instance 1 is idle. This brings some problems to our architecture. The first is that the response to the clients might be longer than it should be: as shown in the figure above, messages 6 and 9 could be processed by instance 1 or instance 2, but in reality they were dispatched to the busy instance 3 because of the round-robin dispatching. Second, if many requests come from the clients in a very short period, the service instances might fill up with tons of pending tasks, and some instances might crash. Third, if we are using a cloud platform to host our service instances, for example Windows Azure, the computing resource is billed by service deployment period instead of actual CPU usage. This means that any idle service instance is wasting our money! Last, the dispatcher becomes the bottleneck of our system, since all incoming messages must be routed through it. If we are using HTTP or TCP as the transport, the dispatcher would be a network load balancer; if we want more capacity, we have to scale up, or buy a hardware load balancer, which is very expensive, in addition to scaling out the service instances.
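    To make the round-robin weakness concrete, here is a minimal sketch of such a dispatcher (the IServiceInstance type and message shape are illustrative assumptions, not from the original post):

        using System;
        using System.Collections.Generic;
        using System.Threading;

        // Minimal round-robin dispatcher sketch: hands each incoming message to
        // the next instance in order, busy or not - exactly the problem above.
        interface IServiceInstance
        {
            void Process(string message);
        }

        class RoundRobinDispatcher
        {
            private readonly IList<IServiceInstance> _instances;
            private int _next = -1;

            public RoundRobinDispatcher(IList<IServiceInstance> instances)
            {
                _instances = instances;
            }

            public void Dispatch(string message)
            {
                // Unsigned arithmetic keeps the index valid even after the
                // counter overflows.
                int index = (int)((uint)Interlocked.Increment(ref _next) % (uint)_instances.Count);
                _instances[index].Process(message);
            }
        }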
    Pulling Mode

    Pulling mode doesn't need a dispatcher to route the messages. All service instances listen to the same transport and, when idle, try to retrieve the next suitable message to process. Since there is no dispatcher in pulling mode, it requires some features from the transport:

    - The transport must support multiple client connections and multiple server listeners. HTTP and TCP don't allow multiple servers to listen on the same address and port, so they cannot be used in pulling mode directly.
    - All messages in the transport must be FIFO, which means an older message must be received before a newer one.
    - Message selection is a plus: both service and client can specify selection criteria and receive only certain kinds of messages. This feature is not mandatory, but it is very useful when implementing the request-reply and duplex WCF channel modes; otherwise we must keep an in-memory dictionary to store the reply messages. I will explain more about this in the following articles.

    A message bus, or a message queue, is the best candidate transport for pulling mode. First, it allows multiple applications to listen on the same queue, and it is FIFO. Some message buses also support message selection, such as TIBCO EMS and RabbitMQ. Others provide an in-memory dictionary that can store the reply messages, for example Redis. The principle of pulling mode is to let the service instances be self-managed: each instance retrieves the next pending incoming message as soon as it has finished its current task. This gives us more benefits and solves the problems we met in dispatcher mode:

    - Each incoming message is received by the best instance available to process it, so the load is very well balanced. It cannot happen that some instances are busy while others are idle, since an idle instance will simply retrieve more tasks to keep itself busy.
    - Since all instances try their best to stay busy, we can use fewer instances than in dispatcher mode, which is more cost-effective.
    - Since there is no dispatcher in the system, there is no bottleneck.
    - When we introduce more service instances, in dispatcher mode we have to change something so that the dispatcher knows about the new instances; in pulling mode all service instances are self-managed, so no extra change is needed at all.
    - If many messages come in, the message bus queues them in the transport, so the service instances will not crash.

    All of the above are benefits of pulling mode, but it introduces some problems as well:

    - Process tracking and debugging become more difficult. Since the service instances are self-managed, we cannot know in advance which instance will process a message, so we need more information to support debugging and tracing.
    - Real-time response may not be supported. Each service instance processes the next message only after the current one is done; if we have real-time requirements, this may not be a good solution.

    Comparing the pros and cons above, pulling mode is the better solution for a distributed system architecture, because what we need most is scalability, cost-effectiveness and self-management. A minimal sketch of such a self-managed worker loop follows below.
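    This sketch shows the pulling idea in isolation: every instance runs the same loop against a shared queue, and an idle instance automatically pulls more work. The queue and handler are illustrative assumptions, not code from this series:

        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        // Minimal pulling-mode worker sketch; each service instance runs Run()
        // against the same shared queue.
        class PullingWorker
        {
            private readonly BlockingCollection<string> _queue;
            private readonly string _name;

            public PullingWorker(BlockingCollection<string> queue, string name)
            {
                _queue = queue;
                _name = name;
            }

            public void Run(CancellationToken token)
            {
                try
                {
                    while (true)
                    {
                        // Block until the next pending message is available,
                        // then process it and immediately pull the next one.
                        string message = _queue.Take(token);
                        Console.WriteLine("{0} processing {1}", _name, message);
                    }
                }
                catch (OperationCanceledException)
                {
                    // Shutdown requested.
                }
            }
        }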
    WCF and WCF Transport Extensibility

    Windows Communication Foundation (WCF) is a framework for building service-oriented applications; in the .NET world, WCF is the best way to implement a service. In this series I'm going to demonstrate how to implement pulling mode on top of a message bus by extending WCF. I don't want to dive into every related area of WCF, but I will highlight its transport extensibility.

    When we implement an RPC foundation there are many aspects to deal with, for example message encoding, encryption, authentication, and message sending and receiving. In WCF, each aspect is represented by a channel. A message is passed through all necessary channels and finally sent over the underlying transport; on the other side, the message is received from the transport and passed up through the same channels to the business logic. This model is called the "channel stack" in WCF, and the last channel in the stack must always be a transport channel, which is responsible for sending and receiving messages. As we are going to implement WCF over a message bus and build the pulling-mode scale-out solution, we need to create our own transport channel so that the client and service can exchange messages over our bus.

    Before we dive into the transport channel, let's have a look at the message exchange patterns that WCF defines. A message exchange pattern (MEP) defines how client and service exchange messages over the transport. WCF defines three basic MEPs: datagram, request-reply and duplex.

    - Datagram: Also known as one-way, or fire-and-forget. The message is sent from the client to the service, and no reply from the service is needed; the client doesn't care about the result at all.
    - Request-Reply: The most commonly used pattern. The client sends a request message to the service and waits until the reply message comes back from the service.
    - Duplex: The client sends a message to the service, and while processing it the service can call back to the client. During the callback the service acts like a client, and the client acts like a service.

    In WCF, each MEP has associated channel interfaces:

        MEP              Channels
        Datagram         IInputChannel, IOutputChannel
        Request-Reply    IRequestChannel, IReplyChannel
        Duplex           IDuplexChannel

    The channels are created by a ChannelListener on the server side and a ChannelFactory on the client side. The ChannelListener and ChannelFactory are created by the TransportBindingElement, and the TransportBindingElement is created by the Binding, which can be defined as a new binding or from a custom binding. For more information about the transport channel model, please refer to the MSDN documentation. The figures below (one per MEP) show the transport channel objects for the request-reply, datagram and duplex MEPs.

    After investigating the WCF transport architecture, channel model and MEPs, we can finally identify what we have to build for our message bus based transport layer (a skeleton sketch follows after this list):

    - Binding: (Optional) Defines the channel elements in the channel stack, with our transport binding element added at the bottom of the stack. We could also use the built-in CustomBinding.
    - TransportBindingElement: Defines which MEPs are supported by our transport and creates the related ChannelListener and ChannelFactory. It also defines the URI scheme of endpoints using this transport.
    - ChannelListener: Creates the server-side channel for its MEP. We could have one ChannelListener create channels for all supported MEPs, or one ChannelListener per MEP; in this series I will use the second approach.
    - ChannelFactory: Creates the client-side channel for its MEP. Again, we could have one ChannelFactory for all supported MEPs, or one per MEP; in this series I will use the second approach.
    - Channels: For each MEP we want to support, we need to implement the channels accordingly. For example, if we want our transport to support request-reply, we should implement IRequestChannel and IReplyChannel. In this series I will implement all three MEPs listed above, one by one.
    - Scaffold: To make our transport extension work, we also need to implement some scaffolding: classes to send and receive messages through our message bus, code to read and write WCF messages, and so on. These are not extensibility points themselves, but they will be very useful in our example.
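    As a rough skeleton of the TransportBindingElement piece (a sketch assuming a request-reply-only transport; the class name and "msgbus" scheme are placeholders for this series, while the overridden members are the standard WCF extensibility points):

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Channels;

        // Skeleton sketch of a custom transport binding element; only the
        // request-reply MEP is declared here.
        public class MessageBusTransportBindingElement : TransportBindingElement
        {
            public override string Scheme
            {
                get { return "msgbus"; } // hypothetical endpoint scheme
            }

            public override bool CanBuildChannelFactory<TChannel>(BindingContext context)
            {
                return typeof(TChannel) == typeof(IRequestChannel);
            }

            public override bool CanBuildChannelListener<TChannel>(BindingContext context)
            {
                return typeof(TChannel) == typeof(IReplyChannel);
            }

            public override IChannelFactory<TChannel> BuildChannelFactory<TChannel>(BindingContext context)
            {
                // Would return the ChannelFactory creating client-side IRequestChannels.
                throw new NotImplementedException("Covered later in this series.");
            }

            public override IChannelListener<TChannel> BuildChannelListener<TChannel>(BindingContext context)
            {
                // Would return the ChannelListener creating server-side IReplyChannels.
                throw new NotImplementedException("Covered later in this series.");
            }

            public override T GetProperty<T>(BindingContext context)
            {
                return context.GetInnerProperty<T>();
            }

            public override BindingElement Clone()
            {
                return new MessageBusTransportBindingElement();
            }
        }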
    Message Bus

    There is only one thing remaining before we can begin to implement our scale-out-capable WCF transport: the message bus. As I mentioned above, the message bus must have certain features to support all the WCF MEPs. In my company we will be using TIBCO EMS, an enterprise message bus product. And as I said before, we can use any message bus product that satisfies our requirements. Here I would like to introduce an interface that separates the message bus from WCF. This allows us to implement the bus operations with whatever kind of bus we are going to use. The interface looks like this:

        public interface IBus : IDisposable
        {
            string SendRequest(string message, bool fromClient, string from, string to = null);

            void SendReply(string message, bool fromClient, string replyTo);

            BusMessage Receive(bool fromClient, string replyTo);
        }

    There are only three methods on the bus interface. Let me explain them one by one. The SendRequest method is responsible for sending a request message onto the bus. Its parameters are:

    - message: The WCF message content.
    - fromClient: Indicates whether this message came from the client side.
    - from: The ID of the channel this message was sent from. The channel ID is generated whenever a channel is created, which will be explained in the following articles.
    - to: The ID of the channel that should receive this message. In the request-reply and duplex MEPs this is necessary, since the reply message must be received by the channel that sent the related request message.

    The SendReply method is responsible for sending a reply message. It is very similar to the previous one, but has no "from" parameter, because there is never a need to reply to a reply message in any MEP. The Receive method is responsible for waiting for an incoming message, either a request message or a specific reply message. It returns a BusMessage object, which contains some channel information along with the content. The code of the BusMessage class is:

        public class BusMessage
        {
            public string MessageID { get; private set; }
            public string From { get; private set; }
            public string ReplyTo { get; private set; }
            public string Content { get; private set; }

            public BusMessage(string messageId, string fromChannelId, string replyToChannelId, string content)
            {
                MessageID = messageId;
                From = fromChannelId;
                ReplyTo = replyToChannelId;
                Content = content;
            }
        }
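    To see how these three methods are meant to work together, here is a minimal request-reply walkthrough against the interface (the "client-1" channel ID and message bodies are made up for illustration; the bus implementation is the one shown below):

        // Hypothetical request-reply flow over an IBus implementation.
        IBus bus = new InProcMessageBus();

        // Client side: send a request, tagging it with this channel's ID so
        // the reply can find its way back.
        bus.SendRequest("<request body>", fromClient: true, from: "client-1");

        // Service side: pull the next pending client request off the bus...
        BusMessage request = bus.Receive(fromClient: true, replyTo: null);

        // ...process it, then reply directly to the channel that sent it.
        bus.SendReply("<reply body>", fromClient: false, replyTo: request.From);

        // Client side: wait for the reply addressed to this channel.
        BusMessage reply = bus.Receive(fromClient: false, replyTo: "client-1");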
    Now let's implement a message bus based on the IBus interface. Since I don't want you to have to buy and install TIBCO EMS or any other message bus product, I will implement an in-process memory bus. This bus is only for test and sample purposes; it can only be used when the service and client are in the same process. Very straightforward:

        public class InProcMessageBus : IBus
        {
            private readonly ConcurrentDictionary<Guid, InProcMessageEntity> _queue;
            private readonly object _lock;

            public InProcMessageBus()
            {
                _queue = new ConcurrentDictionary<Guid, InProcMessageEntity>();
                _lock = new object();
            }

            public string SendRequest(string message, bool fromClient, string from, string to = null)
            {
                var entity = new InProcMessageEntity(message, fromClient, from, to);
                _queue.TryAdd(entity.ID, entity);
                return entity.ID.ToString();
            }

            public void SendReply(string message, bool fromClient, string replyTo)
            {
                var entity = new InProcMessageEntity(message, fromClient, null, replyTo);
                _queue.TryAdd(entity.ID, entity);
            }

            public BusMessage Receive(bool fromClient, string replyTo)
            {
                InProcMessageEntity e = null;
                while (true)
                {
                    lock (_lock)
                    {
                        var entity = _queue
                            .Where(kvp => kvp.Value.FromClient == fromClient &&
                                          (kvp.Value.To == replyTo || string.IsNullOrWhiteSpace(kvp.Value.To)))
                            .FirstOrDefault();
                        if (entity.Key != Guid.Empty && entity.Value != null)
                        {
                            _queue.TryRemove(entity.Key, out e);
                        }
                    }
                    if (e == null)
                    {
                        Thread.Sleep(100);
                    }
                    else
                    {
                        return new BusMessage(e.ID.ToString(), e.From, e.To, e.Content);
                    }
                }
            }

            public void Dispose()
            {
            }
        }

    The InProcMessageBus stores the messages as InProcMessageEntity objects, which carry some extra information besides the WCF message itself:

        public class InProcMessageEntity
        {
            public Guid ID { get; set; }
            public string Content { get; set; }
            public bool FromClient { get; set; }
            public string From { get; set; }
            public string To { get; set; }

            public InProcMessageEntity()
                : this(string.Empty, false, string.Empty, string.Empty)
            {
            }

            public InProcMessageEntity(string content, bool fromClient, string from, string to)
            {
                ID = Guid.NewGuid();
                Content = content;
                FromClient = fromClient;
                From = from;
                To = to;
            }
        }

    Summary

    OK, now we have all the necessary pieces ready; the next step is to implement our WCF message bus transport extension. In this post I described two scale-out approaches on the service side, especially relevant if we are using a cloud platform: dispatcher mode and pulling mode, and I compared their pros and cons. Then I introduced the WCF channel stack, channel model and transport extension points, and identified what we have to do to create our own WCF transport extension that lets our WCF services use pulling mode on top of a message bus. Finally, I provided some classes that will be used in the future posts, working against an in-process memory message bus, for demonstration purposes only. In the next post I will begin to implement the transport extension step by step.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

  • Generate Strongly Typed Observable Events for the Reactive Extensions for .NET (Rx)

    - by Bobby Diaz
    I must have tried reading through the various explanations and introductions to the new Reactive Extensions for .NET before the concepts finally started sinking in. The article that gave me the ah-ha moment was over on SilverlightShow.net, titled Using Reactive Extensions in Silverlight. The author did a good job comparing the "normal" way of handling events vs. the new "reactive" methods. Admittedly, I still have more to learn about the Rx Framework, but I wanted to put together a sample project so I could start playing with the new Observable and IObservable<T> constructs. I decided to throw together a whiteboard application in Silverlight based on the Drawing with Rx example in the aforementioned article. At the very least, I figured I would learn a thing or two about a new technology, but my real goal is to create a fun application that I can share with the kids since they love drawing and coloring so much!

    Here is the code sample that I borrowed from the article:

        var mouseMoveEvent = Observable.FromEvent<MouseEventArgs>(this, "MouseMove");
        var mouseLeftButtonDown = Observable.FromEvent<MouseButtonEventArgs>(this, "MouseLeftButtonDown");
        var mouseLeftButtonUp = Observable.FromEvent<MouseButtonEventArgs>(this, "MouseLeftButtonUp");

        var draggingEvents = from pos in mouseMoveEvent
                                 .SkipUntil(mouseLeftButtonDown)
                                 .TakeUntil(mouseLeftButtonUp)
                                 .Let(mm => mm.Zip(mm.Skip(1), (prev, cur) =>
                                     new
                                     {
                                         X2 = cur.EventArgs.GetPosition(this).X,
                                         X1 = prev.EventArgs.GetPosition(this).X,
                                         Y2 = cur.EventArgs.GetPosition(this).Y,
                                         Y1 = prev.EventArgs.GetPosition(this).Y
                                     })).Repeat()
                             select pos;

        draggingEvents.Subscribe(p =>
        {
            Line line = new Line();
            line.Stroke = new SolidColorBrush(Colors.Black);
            line.StrokeEndLineCap = PenLineCap.Round;
            line.StrokeLineJoin = PenLineJoin.Round;
            line.StrokeThickness = 5;
            line.X1 = p.X1;
            line.Y1 = p.Y1;
            line.X2 = p.X2;
            line.Y2 = p.Y2;
            this.LayoutRoot.Children.Add(line);
        });

    One thing that was nagging at the back of my mind was having to deal with the event names as strings, as well as the verbose syntax of the Observable.FromEvent<TEventArgs>() method. I came up with a couple of static/helper classes that resolve both issues, and also created a T4 template to auto-generate these helpers for any .NET type.
    Take the following code from the above example:

        var mouseMoveEvent = Observable.FromEvent<MouseEventArgs>(this, "MouseMove");
        var mouseLeftButtonDown = Observable.FromEvent<MouseButtonEventArgs>(this, "MouseLeftButtonDown");
        var mouseLeftButtonUp = Observable.FromEvent<MouseButtonEventArgs>(this, "MouseLeftButtonUp");

    It turns into this with the new static Events class:

        var mouseMoveEvent = Events.Mouse.Move.On(this);
        var mouseLeftButtonDown = Events.Mouse.LeftButtonDown.On(this);
        var mouseLeftButtonUp = Events.Mouse.LeftButtonUp.On(this);

    Or better yet, just remove the variable declarations altogether:

        var draggingEvents = from pos in Events.Mouse.Move.On(this)
                                 .SkipUntil(Events.Mouse.LeftButtonDown.On(this))
                                 .TakeUntil(Events.Mouse.LeftButtonUp.On(this))
                                 .Let(mm => mm.Zip(mm.Skip(1), (prev, cur) =>
                                     new
                                     {
                                         X2 = cur.EventArgs.GetPosition(this).X,
                                         X1 = prev.EventArgs.GetPosition(this).X,
                                         Y2 = cur.EventArgs.GetPosition(this).Y,
                                         Y1 = prev.EventArgs.GetPosition(this).Y
                                     })).Repeat()
                             select pos;

    The Move, LeftButtonDown and LeftButtonUp members of the Events.Mouse class are readonly instances of the ObservableEvent<TTarget, TEventArgs> class, which provide type-safe access to the events via the On() method. Here is the code for the class:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        namespace System.Linq
        {
            /// <summary>
            /// Represents an event that can be managed via the <see cref="Observable"/> API.
            /// </summary>
            /// <typeparam name="TTarget">The type of the target.</typeparam>
            /// <typeparam name="TEventArgs">The type of the event args.</typeparam>
            public class ObservableEvent<TTarget, TEventArgs> where TEventArgs : EventArgs
            {
                /// <summary>
                /// Initializes a new instance of the <see cref="ObservableEvent"/> class.
                /// </summary>
                /// <param name="eventName">Name of the event.</param>
                protected ObservableEvent(String eventName)
                {
                    EventName = eventName;
                }

                /// <summary>
                /// Registers the specified event name.
                /// </summary>
                /// <param name="eventName">Name of the event.</param>
                /// <returns></returns>
                public static ObservableEvent<TTarget, TEventArgs> Register(String eventName)
                {
                    return new ObservableEvent<TTarget, TEventArgs>(eventName);
                }

                /// <summary>
                /// Creates an enumerable sequence of event values for the specified target.
                /// </summary>
                /// <param name="target">The target.</param>
                /// <returns></returns>
                public IObservable<IEvent<TEventArgs>> On(TTarget target)
                {
                    return Observable.FromEvent<TEventArgs>(target, EventName);
                }

                /// <summary>
                /// Gets or sets the name of the event.
                /// </summary>
                /// <value>The name of the event.</value>
                public string EventName { get; private set; }
            }
        }

    And this is how it's used:
        /// <summary>
        /// Categorizes <see cref="ObservableEvents"/> by class and/or functionality.
        /// </summary>
        public static partial class Events
        {
            /// <summary>
            /// Implements a set of predefined <see cref="ObservableEvent"/>s
            /// for the <see cref="System.Windows.UIElement"/> class
            /// that represent mouse related events.
            /// </summary>
            public static partial class Mouse
            {
                /// <summary>Represents the MouseMove event.</summary>
                public static readonly ObservableEvent<UIElement, MouseEventArgs> Move =
                    ObservableEvent<UIElement, MouseEventArgs>.Register("MouseMove");

                // additional members omitted...
            }
        }

    The source code contains a static Events class with predefined members for various categories (Key, Mouse, etc.). There is also an Events.tt template that you can customize to generate additional event categories for any .NET type. All you should have to do is add the name of your class to the types collection near the top of the template:

        types = new Dictionary<String, Type>()
        {
            //{ "Microsoft.Maps.MapControl.Map, Microsoft.Maps.MapControl", null }
            { "System.Windows.FrameworkElement, System.Windows", null },
            { "Whiteboard.MainPage, Whiteboard", null }
        };

    The template is also a bit rough at this point, but at least it generates code that *should* compile. Please let me know if you run into any issues with it. Some people have reported errors when trying to use T4 templates within a Silverlight project, but I was able to get it to work with a little black magic... You can download the source code for this project or play around with the live demo. Just be warned that it is at a very early stage, so don't expect to find much today. I plan on adding a lot more options like pen colors and sizes, saving, printing, etc. as time permits. HINT: hold down the ESC key to erase! Enjoy!

    Additional Resources
    - Using Reactive Extensions in Silverlight
    - DevLabs: Reactive Extensions for .NET (Rx)
    - Rx Framework Part III - LINQ to Events - Generating GetEventName() Wrapper Methods using T4

  • SharePoint 2010 Hosting :: How to Customize SharePoint 2010 Global Navigation

    - by mbridge
    Requirements:
    - SharePoint Foundation or SharePoint Server 2010 site
    - SharePoint Designer 2010

    Steps:

    1. The first step in my process was to download a starter master page from CodePlex: http://startermasterpages.codeplex.com/.

    2. Once you have downloaded the starter master page, open up your SharePoint site in SharePoint Designer 2010. On the left, in the "Site Objects" area, click on the folder "All Files" and drill down to _catalogs >> masterpage. Once you are in the master page folder, copy and paste the _starter.master into this folder.

    3. The first step in the customization process is to create your custom style sheet. To create it, click on the "All Files" folder and then on "Style Library." Right-click in the Style Library section and choose Style Sheet. Once the style sheet is created, rename it style.css, then open the style sheet you created in SharePoint Designer.

    4. In this next step you will copy and paste the SharePoint core styles for the global navigation into your custom style sheet. Copy and paste the CSS below into the style sheet and save the file:

        .s4-tn {
            padding: 0px;
            margin: 0px;
        }

        .s4-tn ul.static {
            white-space: nowrap;
        }

        .s4-tn li.static > .menu-item {
            /* [ReplaceColor(themeColor:"Dark2")] */
            color: #3b4f65;
            white-space: nowrap;
            border: 1px solid transparent;
            padding: 4px 10px;
            display: inline-block;
            height: 15px;
            vertical-align: middle;
        }

        .s4-tn ul.dynamic {
            /* [ReplaceColor(themeColor:"Light2")] */
            background-color: white;
            /* [ReplaceColor(themeColor:"Dark2-Lighter")] */
            border: 1px solid #D9D9D9;
        }

        .s4-tn li.dynamic > .menu-item {
            display: block;
            padding: 3px 10px;
            white-space: nowrap;
            font-weight: normal;
        }

        .s4-tn li.dynamic > a:hover {
            font-weight: normal;
            /* [ReplaceColor(themeColor:"Light2-Lighter")] */
            background-color: #D9D9D9;
        }

        .s4-tn li.static > a:hover {
            /* [ReplaceColor(themeColor:"Accent1")] */
            color: #44aff6;
            text-decoration: underline;
        }

    5. Once you have created the style sheet, go back to the master page folder, open the _starter.master file and, in the Customization category, click Edit File.

    6. Next, when the file opens, make sure you view it in split view. Now you are going to search for the style sheet reference in the code. Make sure you are scrolled to the top of the code section and press Ctrl+F on the keyboard to bring up the Find and Replace tool, then locate the existing CSS registration and click Find Next.

    7. Now replace that reference in the code so it points to your custom style sheet (style.css). You have now referenced your custom style sheet in your master page.

    8. The next step is to locate your global navigation control. Make sure you are scrolled to the top of the code section and press Ctrl+F on the keyboard to bring up the Find and Replace tool. In the "Find what" field, copy and paste ID="TopNavigationMenuV4" and then click Find Next. Once you find ID="TopNavigationMenuV4", you should see the following block of code, which is the global navigation control:

        ID="TopNavigationMenuV4"
        Runat="server"
        EnableViewState="false"
        DataSourceID="topSiteMap"
        AccessKey=""
        UseSimpleRendering="true"
        UseSeparateCss="false"
        Orientation="Horizontal"
        StaticDisplayLevels="1"
        MaximumDynamicDisplayLevels="1"
        SkipLinkText=""
        CssClass="s4-tn"

    9. In the global navigation code above you should see CssClass="s4-tn". As an additional step, you can replace "s4-tn" with your own custom name, like CssClass="MyNav".
    If you change the name of the CSS class, make sure you update your custom style sheet with the new name. For example:

        .MyNav {
            padding: 0px;
            margin: 0px;
        }

        .MyNav ul.static {
            white-space: nowrap;
        }

    10. At this point you are ready to brand your global navigation. The next step is to modify your style.css with your customizations to the default SharePoint styles. Have fun styling, and make sure you save your work often. Hope it helps!

  • Free CodeSmith License!

    - by Randy Walker
    The catch? Attend the Ozarks .NET User Group meeting on April 1st. Here's a list of the other prizes for the event:

    GRAND PRIZE
    - 1 - iPad (Wi-Fi 16GB)

    THIRD PARTY COMPONENTS
    - 6 - Telerik Premium Collection
    - 5 - Infragistics NetAdvantage for .NET
    - 1 - Nevron Chart for .NET Lite
    - DevExpress
    - Xceed

    PRODUCTIVITY
    - 2 - CodeRush with Refactor! Pro
    - 2 - ReSharper
    - CodeSmith

    GAMES
    - 3 - Halo 3: ODST (Xbox 360)
    - 3 - Forza Motorsport (Xbox 360)

    OTHER SOFTWARE
    - 3 - Windows 7 Ultimate
    - 2 - Microsoft Office Standard 2007

    HARDWARE
    - 2 - Microsoft Arc Mouse

    BOOKS
    - 12 - O'Reilly ebooks
    - 12 - Microsoft Press books
    - 5 - Apress books
    - 3 - Addison-Wesley books
    - 2 - Manning books
    - 2 - Sams books

    The Info: "Be a Professional Developer and Write Clean Code!" by Claudio Lassala on April 1, 2010

    PRESENTATION TOPIC
    "Be a Professional Developer and Write Clean Code!" - by Claudio Lassala

    Poorly written code can be created quickly, but it comes at the cost of high maintenance. Most of the time, code can be improved easily by following some simple practices. Professional developers should know these practices and tools and apply them to their work every day. This session will cover the importance of writing clean code, the kind of attitude all developers should have towards the code they produce, as well as the practices and tools that can be used to aid you in becoming a better developer.

    BIOGRAPHY
    Claudio Lassala is a Senior Developer at EPS Software Corp. He has presented several lectures at Microsoft events such as PDC Brazil and various other Microsoft seminars, as well as several conferences and user groups across North America and Brazil. He is a multiple winner of the Microsoft MVP Award since 2001 (for Visual FoxPro in 2001-2002, and for C# ever since), an INETA speaker, and also holds the MCSD for .NET certification. He has articles published in several magazines, such as MSDN Brazil Magazine, CoDe Magazine, UTMag, Developers Magazine, and FoxPro Advisor. More detailed information regarding his presentations and articles can be found in his MVP Profile. You can also read more about Claudio on his blog or on Twitter.

    Schedule
    - 5:30 PM - 6:30 PM: Social Networking
    - 6:30 PM - 7:00 PM: Prizes
    - 7:00 PM - 8:30 PM: Presentation: "Be a Professional Developer and Write Clean Code!" by Claudio Lassala
    - 8:30 PM - 9:00 PM: Wrap-Up

  • Creating Wildcard Certificates with makecert.exe

    - by Shawn Cicoria
    It would be nice to be able to make wildcard certificates for use in development with makecert - turns out, it's real easy. Just ensure that your CN= value is the wildcard string to use. The following sequence generates a CA cert, then the public/private key pair for a wildcard certificate issued by that CA:

        REM make the CA (reconstructed: a typical self-signed development CA;
        REM the original command was garbled in this copy of the post)
        makecert -pe -n "CN=ContosoTest Dev CA" -a sha1 -len 2048 -r -cy authority -sky signature -sv CA.pvk CA.cer

        REM now make the server wildcard cert, issued by the CA above
        makecert -pe -n "CN=*.contosotest.com" -a sha1 -len 2048 -sky exchange -eku 1.3.6.1.5.5.7.3.1 -ic CA.cer -iv CA.pvk -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 -sv wildcard.pvk wildcard.cer

        REM bundle the wildcard key pair into a PFX
        pvk2pfx -pvk wildcard.pvk -spc wildcard.cer -pfx wildcard.pfx

  • Expert F# – Pattern Matching with Adam and Eve

    - by MarkPearl
    So I am loving my Expert F# book. I wish I had more time with it, but the little time I get I really enjoy. However, today I was completely stumped by what the book was trying to get across with regards to pattern matching. On page 38, chapter 3, it briefly describes F# option values. On this page it gives a code snippet along the lines of the code below, and then goes on to speak briefly about pattern matching...

        open System

        type 'a option =
            | None
            | Some of 'a

        let people =
            [ ("Adam", None);
              ("Eve", None);
              ("Cain", Some("Adam", "Eve"));
              ("Abel", Some("Adam", "Eve")) ]

        let showParents(name, parents) =
            match parents with
            | Some(dad, mum) -> printfn "%s has father %s, mother %s" name dad mum
            | None -> printfn "%s has no parents!" name

        Console.WriteLine(showParents("Adam", None))

    Originally when I read this code I think I misunderstood the purpose of the example. For some reason I thought that the showParents function would magically parse the people list, look for a match on the name, and then show the parents. But obviously it cannot do this, since there is no reference to the people list in the showParents method. After rereading the page I realized that I had simply combined the two segments of code, possibly incorrectly, and that a better example would have been a snippet like the following:

        let showParents(name, parents) =
            match parents with
            | Some(dad, mum) -> printfn "%s has father %s, mother %s" name dad mum
            | None -> printfn "%s has no parents!" name

        Console.WriteLine(showParents("Adam", None))
        Console.WriteLine(showParents("Cain", Some("Adam", "Eve")))
        Console.ReadLine()

    However, what if I wanted a function that was passed a list of people and a name, and would then show the parents of that name if there were any, and if not, would show that the person has no parents... that doesn't seem too difficult, does it? Let's look at my very unoptimized noob F# code that tries to achieve this:

        open System

        let people =
            [ ("Adam", None);
              ("Eve", None);
              ("Cain", Some("Adam", "Eve"));
              ("Abel", Some("Adam", "Eve")) ]

        // Returns the name of the person
        let showName(person : string * (string * string) option) =
            let name = fst(person)
            name

        // Returns a string with the parents' details, or not
        let showParents(itemData : string * (string * string) option) =
            let name = fst(itemData)
            let parents = snd(itemData)
            match parents with
            | Some(dad, mum) -> "Father " + dad + " and Mother " + mum
            | None -> "Has no parents!"

        // Prints the details
        let showDetails(person : string * (string * string) option) =
            Console.WriteLine(showName(person))
            Console.WriteLine(showParents(person))

        // Checks whether the name matches the first portion of person;
        // if so, returns true, else returns false
        let nameMatch(name : string, person : string * (string * string) option) =
            match name with
            | x when x = fst(person) -> true
            | _ -> false

        // Searches a list of people and looks for a match of names
        let findPerson(name : string, people : (string * (string * string) option) list) =
            let o = Seq.tryFind(fun x -> nameMatch(name, x)) people
            if Option.isSome o then o
            else Option.None

        // Try to find a person; if found, show their details,
        // else show no match
        let FoundPerson = findPerson("Cain", people)
        match FoundPerson with
        | None -> Console.WriteLine("Not found")
        | Some(x) -> showDetails(x)

        Console.ReadLine()

    So, my code isn't the cleanest, but it did teach me a bit more F#. The area that I learnt about was the option keyword.
    The challenge was that if a match on the name isn't found, or if a name is found but the person doesn't have parents, it should react accordingly. I'm pretty sure I can optimize this code quite a bit more, and I think I may come back to it sometime in the future and have another look, but for now at least I was able to achieve what I wanted... and my brain has gone just that wee little bit more functional.

  • Running Windows Phone Developer Tools CTP under VMWare Player - Yes you can! - But do you want to?

    - by Liam Westley
    This blog is the result of a quick investigation of running the Windows Phone Developer Tools CTP under VMWare Player. The release notes for the Windows Phone Developer Tools CTP mention that it is not supported under Virtual PC or Hyper-V. Some developers have policies where no non-production code can be installed on their development workstation, so the only way they can use a CTP like this is in a virtual machine. The dilemma here is that the emulator for Windows Phone is itself a virtual machine, and running a virtual machine within another virtual machine is normally frowned upon. Even worse, previous Windows Mobile emulators detected they were in a virtual machine and refused to run.

    Why VMWare?

    I selected VMWare as a possible solution, as it is possible to run VMWare ESXi under VMWare Workstation by manually setting configuration options in the VMX configuration file so that it does not detect the presence of a virtual environment. I actually found that I could use VMWare Player (the free version, which can now create VM images) and that there was no need for any editing of the configuration file (I tried various switches, none of which made any difference to performance). So you can run the CTP under VMWare Player - that's the good news. The bad news is that it is incredibly slow, bordering on unusable. However, if it's the only way you can use the CTP, at least this is an option.

    VMWare Player configuration

    I used the latest VMWare Player, 3.0, running under Windows x64 on my HP 6910p laptop with an Intel T7500 Dual Core CPU running at 2.2 GHz, 4 GB of memory and a separate drive for the virtual machines. I created a machine in VMWare Player with a single CPU and 1536 MB of memory, and installed Windows 7 x64 from an ISO image. I then performed a Windows Update, installed VMWare Tools, and finally the Windows Phone Developer Tools CTP. After a few warnings about performance, I configured Windows 7 to run with the Windows 7 Basic theme rather than use Aero (which is available under VMWare Player, as it has a WDDM driver).

    Timings

    As a first test, I launched Microsoft Visual Studio 2010 Express for Windows Phone and created a default Windows Phone Application project. I then clicked the run button, which starts the emulator and then loads the default application onto the emulator. For the second test I left the emulator running, stopped the default application, added a single button to change the page title, and redeployed to the already running emulator by clicking the run button.

                             Test 1 (1st run)    Test 2 (emulator already running)
        VMWare Player        10 minutes          1 minute
        Windows x64 native   1 minute            < 10 seconds

    Conclusion

    You can run the Windows Phone Developer Tools CTP under VMWare Player, but it's really, really slow, and you would have to have very good reasons to try this approach. If you need to keep a development system free of non-production code, and the two systems aren't required to run simultaneously, then I'd consider a boot-from-VHD option. That way you can completely isolate the Windows Phone Developer Tools CTP and development environment in a single VHD, separate from your main development system.

    Read the article

  • The Inkremental Architect's Napkin - #4 - Make increments tangible

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/12/the-inkremental-architectacutes-napkin---4---make-increments-tangible.aspx

    The drivers of software development are increments: small increments, tiny increments. An increment is a slice of the overall requirement scope thin enough to implement and get feedback from a product owner on within 2 days max. Such an increment might concern Functionality or Quality.[1] To make such high-frequency delivery of increments possible, the transition from talking to coding needs to be as easy as possible. A user story or some other documentation of what's supposed to get implemented by tomorrow evening at the latest is one side of the medal. The other is where to put the logic in all of the code base.

    To implement an increment, only logic statements are needed. Functionality and Quality alike are just about expressions and control flow statements. Think of Assembler code without the CALL/RET instructions. That's all that is needed. Forget about functions, forget about classes. To make a user happy none of that is really needed. It's just about the right expressions and conditional execution paths plus some memory allocation. The automatic function inlining done by compilers makes it clear how unimportant functions are for delivering value to users at runtime.

    But why then are there functions? Because they were invented for optimization purposes. We need them for better Evolvability and Production Efficiency. Nothing more, nothing less. No software has become faster, more secure, more scalable, more functional because we gathered logic under the roof of a function or two or a thousand. Functions make logic easier to understand. Functions make us faster in producing logic. Functions make it easier to keep logic consistent. Functions help to conserve memory.

    That said, functions are important. They are even the pivotal element of software development. We can't code without them, whether you write a function yourself or not, because there's always at least one function in play: the Entry Point of a program. In Ruby the simplest program looks like this:

        puts "Hello, world!"

    In C# more is necessary:

        class Program
        {
            public static void Main()
            {
                System.Console.Write("Hello, world!");
            }
        }

    C# makes the Entry Point function explicit; Ruby does not. But still it's there. So you can think of logic as always running in some function.

    Which brings me back to increments: in order to make the transition from talking to code as easy as possible, it has to be crystal clear into which function you should put the logic. Product owners might be content once there is a sticky note (a user story) on the Scrum or Kanban board. But developers need an idea of what that sticky note means in terms of functions. Because with a function in hand, with a signature to run tests against, they have something to focus on. All's well once there is a function behind whose signature logic can be piled up. Then testing frameworks can be used to check if the logic is correct. Then practices like TDD can help to drive the implementation. That's why most code katas define exactly how the API of a solution should look. It's a function, maybe two or three, not more. A requirement like "Write a function f which takes this as parameters and produces such and such output by doing x" makes a developer comfortable. Yes, there are all kinds of details to think about, like which algorithm or technology to use, or what kind of state and side effects to consider.
    Even a single function must deliver not only on Functionality, but also on Quality and Evolvability. Nevertheless, once it's clear which function to put the logic in, you have a tangible starting point. So, yes, what I'm suggesting is to find a single function to put all the logic in that's necessary to deliver on the requirements of an increment. Or to put it the other way around: slice requirements in a way that each increment's logic can be located under the roof of a single function.

    Entry points

    Of course, the logic of a software system will always be spread across many, many functions. But there's always an Entry Point. That's the most important function for each increment, because that's the root to put integration or even acceptance tests on. A batch program like the above hello-world application has only a single Entry Point. All logic is reached from there, regardless of how deeply it's nested in classes. But a program with a user interface, like the one pictured in the original post, has at least two Entry Points: one is the main function called upon startup; the other is the button click event handler for "Show my score". But maybe there are even more, like another Entry Point being a handler for the event fired when one of the choices gets selected, because then some logic could check if the button should be enabled because all questions got answered. Or another Entry Point for the logic to be executed when the program is closed, because then the choices made should be persisted.

    You see, an Entry Point to me is a function which gets triggered by the user of a software. With batch programs that's the main function. With GUI programs on the desktop that's event handlers. With web programs that's handlers for URL routes. And my basic suggestion to help you with slicing requirements for Spinning is: slice them in a way so that each increment is related to only one Entry Point function.[2]

    Entry Points are the "outer functions" of a program. That's where the environment triggers behavior. That's where hardware meets software. Entry Points always get called because something happened to hardware state, e.g. a key was pressed, a mouse button was clicked, the system timer ticked, data arrived over a wire.[3]

    Viewed from the outside, software is just a collection of Entry Point functions made accessible via buttons to press, menu items to click, gestures, URLs to open, keys to enter.

    Collections of batch processors

    I'd thus say we haven't moved forward since the early days of software development. We're still writing batch programs. Forget about "event-driven programming" with its fancy GUI applications. Software is just a collection of batch processors. Earlier it was just one per program; today it's hundreds that we bundle up into applications. Each batch processor is represented by an Entry Point as its root, and it works on a number of resources from which it reads data to process and to which it writes results. These resources can be the keyboard or main memory or a hard disk or a communication line or a display. Together many batch processors, large and small, form the applications the user perceives as a single whole. Software development that way becomes quite simple: just implement one batch processor after another. Well, at least in principle ;-)

    Features

    Each batch processor entered through an Entry Point delivers value to the user. It's an increment. Sometimes its logic is trivial, sometimes it's very complex. Regardless, each Entry Point represents an increment. An Entry Point implemented thus is a step forward in terms of Agility.
    At the same time it's a tangible unit for developers. Therefore, identifying the more or less numerous batch processors in a software system is a rewarding task for product owners and developers alike. That's where user stories meet code. In this example the user story translates to the Entry Point triggered by clicking the login button on a login dialog (pictured in the original post). The batch then retrieves what has been entered via keyboard, loads data from a user store, and finally outputs some kind of response on the screen, e.g. by displaying an error message or showing the next dialog.

    This is all very simple, but as you see, there is not just one thing happening, but several:

    1. Get input (email address, password)
    2. Load user for email address
    2.1. If user not found, report error
    3. Check password
    3.1. Hash password
    3.2. Compare hash to hash stored in user
    4. Show next dialog

    Viewed from 10,000 feet it's all done by the Entry Point function. And of course that's technically possible. It's just a bunch of logic and calling a couple of API functions. However, I suggest taking these steps as distinct aspects of the overall requirement described by the user story. Such aspects of requirements I call Features. Features too are increments. Each provides some (small) value of its own to the user. Each can be checked individually by a product owner. Instead of implementing all the logic behind the Login() entry point at once you can move forward increment by increment, e.g.:

    - First implement the dialog, let the user enter any credentials, and log him/her in without any checks. Features 1 and 4.
    - Then hard-code a single user and check the email address. Features 2 and 2.1.
    - Then check the password without hashing it (or use a very simple hash like the length of the password). Features 3 and 3.2.
    - Replace the hard-coded user with a persistent user directory, but a very simple one, e.g. a CSV file. Refinement of feature 2.
    - Calculate the real hash for the password. Feature 3.1.
    - Switch to the final user directory technology.

    Each feature provides an opportunity to deliver results in a short amount of time and get feedback. If you're in doubt whether you can implement the whole entry point function by tomorrow night, then just go for a couple of features or even just one. That's also why I think you should strive for wrapping feature logic into a function of its own. It's a matter of Evolvability and Production Efficiency. A function per feature makes the code more readable, since the language of requirements analysis and design is carried over into implementation. It makes it easier to apply changes to features because it's clear where their logic is located. And finally, of course, it lets you re-use features in different contexts (read: increments). Feature functions make it easier for you to think of features as Spinning increments, to implement them independently, and to let the product owner check them for acceptance individually.

    Increments consist of features; entry point functions consist of feature functions. So you can view software as a hierarchy of requirements from broad to thin which maps to a hierarchy of functions, with entry points at the top. I like this image of software as a self-similar structure on many levels of abstraction where requirements and code match each other. That to me is true agile design: the core tenet of Agility, to move forward in increments, is carried over into implementation. Increments on paper are retained in code. This way developers can easily relate to product owners, and requirements that would otherwise stay elusive and fuzzy become tangible.
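    To make that mapping concrete, here is a minimal C# sketch of my own (it is not Ralf's code; all names, the hard-coded user and the trivial length-based hash are hypothetical stand-ins for the early increments described above):

        // Hypothetical sketch: a Login entry point composed of one function per feature.
        static class LoginBatch
        {
            // Entry Point: triggered by the login button; it just wires the features together.
            public static void Login(string email, string password)
            {
                string user = LoadUser(email);                                            // Feature 2
                if (user == null) { Report("Unknown user"); return; }                     // Feature 2.1
                if (!CheckPassword(user, password)) { Report("Wrong password"); return; } // Features 3, 3.1, 3.2
                Report("Logged in - the next dialog would open here");                    // Feature 4
            }

            // Feature 2: hard-coded single user; a later increment swaps in a CSV file.
            static string LoadUser(string email)
            {
                return email == "user@example.com" ? "user@example.com;8" : null;
            }

            // Feature 3: check the password by comparing hashes (3.1 + 3.2).
            static bool CheckPassword(string user, string password)
            {
                int storedHash = int.Parse(user.Split(';')[1]);
                return storedHash == Hash(password);
            }

            // Feature 3.1: a very simple hash for now - just the password length.
            static int Hash(string password)
            {
                return password.Length;
            }

            static void Report(string message)
            {
                System.Console.WriteLine(message);
            }
        }

    Each of these feature functions can be replaced in a later increment without touching the Entry Point, which is exactly the point of the mapping.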
    Software production moves forward through requirements one increment at a time, and one function at a time.

    In closing

    Product owners and developers are different, but they need to work together towards a shared goal: working software. So their notions of software need to be made compatible; they need to be connected. The increments of the product owner, user stories and features, need to be mapped straightforwardly to something which is relevant to developers. To me that's functions. Yes, functions, not classes nor components nor micro services. We're talking about behavior, actions, activities, processes. Their natural representation is a function. Something has to be done. Logic has to be executed. That's the purpose of functions. Later, classes and other containers are needed to stay on top of a growing amount of logic. But to connect developers and product owners, functions are the appropriate glue. Functions which represent increments.

    Can such a small increment always be found to deliver by tomorrow evening? I boldly say yes. Yes, it's always possible. But maybe you have to start thinking differently. Maybe the product owner needs to start thinking differently. Completion is not the goal anymore. Neither is checking the delivery of an increment through the user interface of a software. Product owners need to become comfortable using test beds for certain features. If it's hard to slice requirements thin enough for Spinning, the reason is too little knowledge of something. Maybe you don't yet understand the problem domain well enough? Maybe you don't yet feel comfortable with some tool or technology? Then it's time to acknowledge this fact. Be honest about your not knowing. And instead of trying to deliver as a craftsman, officially become a researcher. Research and check back with the product owner every day, until your understanding has grown to a level where you are able to define the next Spinning increment.

    [2] Sometimes even thin requirement slices will cover several Entry Points, like "Add validation of email addresses to all relevant dialogs." Validation logic then will be put into a dozen functions. Still, though, it's important to determine which Entry Points exactly get affected. That's much easier if you strive for keeping the number of Entry Points per increment to 1.

    [3] If you like, call Entry Point functions event handlers, because that's what they are. They all handle events of some kind, whether that's palpable in your code or not. A public void btnSave_Click(object sender, EventArgs e) {…} might look like an event handler to you, but public static void Main() {…} is one also: for the event "program started".

    Read the article

  • Get to Know a Candidate (17 of 25): James Harris - Socialist Workers Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about “Romney” or “Obama”. This is not a post about whom I am voting for. Information sourced from Wikipedia. Harris (born 1948) is an African American communist politician and member of the National Committee of the Socialist Workers Party. He was the party's candidate for President of the United States in 1996, receiving 8,463 votes, and again in 2000, when his ticket received 7,378 votes. Harris also served as an alternate candidate for Róger Calero in 2004 and 2008 in states where Calero could not qualify for the ballot (due to being born in Nicaragua). In 2004 he received 7,102 of the party's 10,791 votes. In 2008 he received 2,424 votes. More recently Harris was the SWP candidate in the 2009 Los Angeles mayoral election, receiving 2,057 votes for 0.89% of the vote. Harris served for a time as the national organization secretary of the SWP. He was a staff writer for the socialist newsweekly The Militant in New York. He wrote about the internal resistance to South African apartheid and in 1994 traveled to South Africa to attend the Congress of South African Trade Unions convention. The Socialist Workers Party is a far-left political organization in the United States. The group places a priority on "solidarity work" to aid strikes and is strongly supportive of Cuba. The SWP publishes The Militant, a weekly newspaper that dates back to 1928, and maintains Pathfinder Press. Harris has ballot access in: CO, IA, LA, MN, NJ, WA (write-in access: NY). Learn more about James Harris and the Socialist Workers Party on Wikipedia.

    Read the article

  • LI .Net User Group June 3rd, 2010 Meeting with Sam Abraham

    - by Sam Abraham
    It was a pleasure seeing old friends and meeting new ones at the LI .Net User Group meeting on Thursday, June 3rd, 2010. I was very impressed that more than 35 developers were present, which highlights the buzz MVC is creating with its latest release. We covered an introduction to MVC and then went on to discuss new features in MVC2. I enjoyed the good dialogue among the group as we discussed how MVC can fit side by side with an existing WebForms paradigm and how MVC's support for TDD can dramatically shift architecture practices as we know them. Looking forward to meeting you all next time I am on the Island. Below are some photos of the event. --Sam Abraham, Site Director - West Palm Beach .Net User Group

    Read the article

  • Silverlight Cream for December 09, 2010 -- #1006

    - by Dave Campbell
    In this Issue: Adam Kinney, Jonathan van de Veen, René Schulte(-2-), Vikas, Chad Campbell, Chris Koenig, John Papa, and Martin Krüger. Above the Fold: Silverlight: "Silverlight TV #54: Introducing 11 Brand New Labs" John Papa WP7: "Gestures in Windows Phone 7" Chris Koenig Training: "New Windows Phone 7 tutorials for Designers on toolbox!" Adam Kinney Shoutouts: Jesse Liberty posted ways to get help when you get stuck: Top 10 Tips To Getting Help With Silverlight From SilverlightCream.com: New Windows Phone 7 tutorials for Designers on toolbox! Adam Kinney posted about some WP7 design goodness he's had the opportunity to take part in putting together that is now available for all of us on the Microsoft design Toolbox site.... detailed info about what's there. Adventures while building a Silverlight Enterprise application part #39 Jonathan van de Veen has his latest Silverlight coding adventure detailed on his blog... in the final throes of releasing, he came across some issues surrounding CRUD operations... Windows Phone Unplugged - How to Detect the Zune Software René Schulte has a post up about my two favorite devices: Zune and WP7 ... and how to detect if the Zune software is running when the device is connected to the PC. Issue with the WP7 PictureDecoder and Workaround René Schulte has a second post up today about strange behavior with the PictureDecoder DecodeJpeg method... he describes the problem and a workaround for it. Performance Wizard for Silverlight Vikas reports some Silverlight goodness in the VS2010 SP1 beta that's out ... a Performance Wizard... and he's ratted out it's use and sharing that info... Submitting an App to the Windows Phone Marketplace Chad Campbell details the user experience of getting an app through the marketplace to users... from the standpoint of someone that's been there. Gestures in Windows Phone 7 Chris Koenig is talking about Gestures in WP7, documenting how he used some XNA to get some side-to-side image scrolling going on... and gave us the source! Silverlight TV #54: Introducing 11 Brand New Labs John Papa has his latest Silverlight TV up and he's talking to two great guys: Dan Wahlin and Corey Schuman who have produced the labs you've seen referenced... awesome stuff guys... WP7: precisely position the text cursor when writing text Martin Krüger has a quick WP7 usage tip up for precisely positioning the text cursor in a textbox ... I could have used that today when "Nick's Frame Shop" came up as "Nix Frame Shop" in a search.

    Read the article

  • Windows Azure Evolution - TFS Integration (WAWS Part 2)

    - by Shaun
    So this is the fourth blog post about the new features of Windows Azure and the second part about Windows Azure Web Sites. But it is not focused only on WAWS, since the function I'm going to introduce is available in both Windows Azure Web Sites and Windows Azure Cloud Services (a.k.a. hosted services). In the previous post I talked about Windows Azure Web Sites and how to use its gallery to build a WordPress personal blog without coding. Besides the gallery, we can create an empty web site and upload our website via various approaches. And one of the highlighted features here is that we can integrate our web site with a source control service, such as TFS or Git, so that it will be deployed automatically once a new commit or build is available.

    Create New Empty Web Site

    In the developer portal, when creating a new web site, we can select the QUICK CREATE item. This will create an empty web site with only one shared instance and without any database associated. Let's specify the URL, region and subscription and click OK. After a few seconds our website will be ready, and now we can click the BROWSE button to open this empty website. As you can see, there is a welcome page available on my website even though I didn't upload or deploy anything. This means the website will be charged even before anything is deployed, similar to a cloud service (hosted service). That is because once we create a website, the Windows Azure platform arranges a hosting process (w3wp.exe) in its group of virtual machines.

    Create a Project in the TFS Preview Service and Set Up the Link

    Currently Windows Azure Web Sites can integrate with TFS and Git as deployment sources, and for now it only supports the Microsoft TFS Preview Service. I will not go deep into how to use the TFS Preview Service in this post, but once we click into the website we just created and then click "Set up TFS publishing", a dialog helps us connect to this service. If you don't have an account you can click the link shown below to request one. Assuming we already have an account for the TFS service, we first need to create a new project. Go to your TFS service website and create a new project, giving the project name, description and process template. Then, back in the developer portal, click the "Set up TFS publishing" link. In the window that pops up, provide the TFS service URL and click the "Authorize now" link, then click the "Accept" button to allow Windows Azure to connect to the TFS service. The portal then lists all projects in the account; just select the one we created and click OK. The web site is now linked to the TFS project I specified, and the portal confirms that the link succeeded.

    Work with the TFS Preview Service in VS2010

    In the figure above there are some links to guide us in connecting to the TFS server through Visual Studio 2010 and 2012 RC. If you are using Visual Studio 2012 RC, you don't need any extension. But if you are using Visual Studio 2010 you must have SP1 and KB2581206 installed. To connect to my TFS service I just open Visual Studio and, in the Team Explorer, add a new TFS server, pasting the URL of my TFS service from the developer portal. Then I select the project I had just created, and it is listed in my Team Explorer. Now let's start to build our website.
    Since the website we are going to build will be deployed to WAWS, it's NOT a cloud service, NOT a web role. So in this case we need to create a normal ASP.NET web application, for example an ASP.NET MVC 3 web application. Next, right-click on the solution, select "Add Solution to Source Control", and select the project I had just created. Then check the code in. Once the check-in finishes we can see that there is a build running on the TFS server. And if we go back to the developer portal, we will see that on our web site's deployment page there's a deployment running. In fact, once we linked our web site to TFS, it created a new build definition in our TFS project. The build is triggered by each check-in and deploys to the linked web site automatically, so when our code has been compiled it is published to our web site from our TFS server. Once the build and deployment finish, we can see the deployment is now active in the developer portal, and the web site that was created in my Visual Studio has been deployed by my TFS.

    Continuous Deployment through VS and TFS

    A big benefit of using TFS publishing is continuous deployment. Now if I change some code in Visual Studio, for example update some text on the home page, and check in my changes, a new build is triggered and deployed to my WAWS automatically. And even more: if we want to roll back to a previous version, we can just select an existing deployment listed in the portal and click REDEPLOY at the bottom.

    Q&A: Can a Web Site Use Storage and Work with a Worker Role?

    Stacy asked a question in my previous post: "can a web site use Windows Azure Storage and furthermore work with a worker role?" Since the web site is deployed on Windows Azure virtual machines in the data center, it must be able to use all Windows Azure features such as storage, SQL databases, CDN, etc. But since with a web site we normally have a standard ASP.NET web application, PHP website or NodeJS, the Windows Azure SDK is not referenced by default. We can add the references ourselves, though. In our sample project, let's right-click on the MVC project and click "Manage NuGet packages". In the dialog, search for Windows Azure packages and select "Windows Azure Storage" to install. Then we have the assemblies to access Windows Azure storage: tables, queues and blobs. Since I already have a storage account, let's have a quick demo, just to list all blobs in a container. The code would be like this:
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Mvc;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        namespace WAASTFSDemo.Controllers
        {
            public class HomeController : Controller
            {
                public ActionResult Index()
                {
                    ViewBag.Message = "Welcome to Windows Azure!";

                    var credentials = new StorageCredentialsAccountAndKey("[STORAGE_ACCOUNT]", "[STORAGE_KEY]");
                    var account = new CloudStorageAccount(credentials, false);
                    var client = account.CreateCloudBlobClient();
                    var container = client.GetContainerReference("shared");
                    ViewBag.Blobs = container.ListBlobs().Select(b => b.Uri.AbsoluteUri);

                    return View();
                }

                public ActionResult About()
                {
                    return View();
                }
            }
        }

    And the corresponding view:

        @{
            ViewBag.Title = "Home Page";
        }

        <h2>@ViewBag.Message</h2>
        <p>
            To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website">http://asp.net/mvc</a>.
        </p>
        <div>
            <ul>
            @foreach (var blob in ViewBag.Blobs)
            {
                <li>@blob</li>
            }
            </ul>
        </div>

    Then just check in the code and it will be deployed to my web site. Finally we can see the blobs from my storage account rendered on the page.

    This is just an example, but it proves that web sites can connect to storage: tables, blobs and queues as well. So the answer to Stacy should be "yes": the web site can use queue storage to work with a worker role (see the sketch at the end of this post).

    Summary

    In this post I demonstrated how to integrate Windows Azure Web Sites with TFS. Our website can be built, uploaded and deployed automatically by the TFS service; all we need to do is provide the TFS name and select the project. And it's not only Windows Azure Web Sites: in this upgrade, Windows Azure Cloud Services (hosted services) can be published through TFS as well, in a very similar way. Currently only the Microsoft TFS Preview Service can be integrated with Windows Azure, but I think in the future we will be able to link the TFS in our enterprise, and some 3rd-party TFS hosts such as CodePlex, to Windows Azure.

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
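    Postscript (my addition, not part of Shaun's original post): to complete the answer to Stacy's question, here is a minimal sketch of a web site handing work to a worker role through queue storage, using the same 1.x StorageClient API as the blob demo above. The queue name and message text are hypothetical:

        // In the web site (e.g. inside a controller action): enqueue a work item.
        // 'account' is the same CloudStorageAccount built in the controller above.
        var queueClient = account.CreateCloudQueueClient();
        var queue = queueClient.GetQueueReference("tasks");
        queue.CreateIfNotExist();
        queue.AddMessage(new CloudQueueMessage("resize-image:42"));

        // In the worker role's Run() loop: pick the work item up and process it.
        var message = queue.GetMessage();
        if (message != null)
        {
            // ... process message.AsString here ...
            queue.DeleteMessage(message);
        }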

    Read the article

  • Book Review: Inside Windows Communication Foundation by Justin Smith

    - by Sam Abraham
    In gearing up for a new major project, I have taken it upon myself to research and review various aspects of our Microsoft stack of choice, seeking new creative ways for us to leverage in our upcoming state-of-the-art solution, projected to position us ahead of the competition. While I am a big supporter of search engines and online articles as a quick and usually reliable source of information, I have opted in my investigative quest to actually "hit the books". I have also made it a habit to provide quick reviews of material I go over, hoping this can be of help to someone who may be looking for items others have had success using for reference. I started a few months ago by investigating better ways of implementing, profiling and troubleshooting SQL Server 2008. My reference of choice was Itzik Ben-Gan et al's "Inside Microsoft SQL Server 2008" series. While it has been a month since my last book review, this by no means meant that I had been sitting idle. It has been pretty challenging to balance research with the continuous flow of projects and deadlines, all while balancing that with my family duties which, of course, always come first. In this post, I will be providing a quick review of my latest reading: Inside Windows Communication Foundation by Justin Smith. This book has been on my reading list for a very long time and I am proud to have finally tackled it. Justin's book presents great coverage of WCF internals. His simple, concise and well-worded style has simplified the relatively complex internals of WCF and made them comprehensible. Justin opted to organize the book into three parts: an introduction to WCF, coverage of the Channel Layer, and a look at WCF internals at the ServiceModel layer. Part I introduced the concepts and made the case behind WCF while covering a simplified version of WCF's message patterns, endpoints and contracts. In Part II, Justin provided thorough coverage of the internals of Messages, Channels and Channel Managers. Part III concluded this nice reading with coverage of Bindings, Contracts, Dispatchers and Clients. While one would not likely need to extend WCF at that low a level of the API, an understanding of the inner workings of WCF is a must to avoid pitfalls mainly caused by misinformation or erroneous assumptions. Problems can quickly arise in high-traffic hosted solutions, but most can be easily avoided with some minimal time investment and education. My next goal is to take a closer look at WCF from the programmer's API perspective now that I have acquired a better understanding of its inner workings.   Many thanks to the O'Reilly User Group Program and its support of our West Palm Beach Developers' Group.   Stay tuned for more… All the best, --Sam

    Read the article

  • Sync custom AD properties to SharePoint Profile

    - by KunaalKapoor
    Here are some step-by-step instructions for configuring SharePoint to sync with custom AD attributes:

    1. Add the custom attribute in Active Directory. This part will have to be your doing; here is some documentation regarding creating custom attributes in AD:
       http://msdn.microsoft.com/en-us/library/ms675085(VS.85).aspx
       http://technet.microsoft.com/en-us/magazine/2008.05.schema.aspx
       http://blogs.technet.com/b/isingh/archive/2007/02/18/adding-custom-attributes-in-active-directory.aspx
    2. Open up miisclient.exe (C:\Program Files\Microsoft Office Servers\14.0\Synchronization Service\UIShell\miisclient.exe).
       a. This will have to be opened with the farm admin account.
    3. Click on "Management Agents" in the ribbon.
    4. Right-click the Active Directory Management Agent ("MOSS-<name of sync connection>") and click "Refresh Schema".
       a. When prompted, enter the credentials for the farm account.
    5. Once complete, close out of miisclient.exe.
    6. Go into Central Admin --> Application Management --> Manage Service Applications --> go into the User Profile Service Application.
    7. Click on "Manage User Properties".
    8. Click on "New Property".
    9. Put in the correct information regarding the attribute that was created.
    10. At the bottom of this page, under the "Source Data Connection" drop-down, select the AD synchronization connection you have already configured.
    11. For the "Attribute" drop-down, select the new attribute you have created.
    12. For the "Direction" drop-down, select "Import".
    13. Click "OK".
    14. Run a full synchronization for the user profile service application, and the custom property will get synced (as long as the attribute is set in Active Directory for the desired users).
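    Once the full sync has run, you can verify the value from code. Here is a quick sketch (my own addition, not part of the original steps; the site URL, account name and property name are placeholders) using the server object model:

        using System;
        using Microsoft.Office.Server.UserProfiles;
        using Microsoft.SharePoint;

        // Sketch: read the synced custom property for one user.
        using (SPSite site = new SPSite("http://sharepoint"))
        {
            SPServiceContext context = SPServiceContext.GetContext(site);
            UserProfileManager profileManager = new UserProfileManager(context);
            UserProfile profile = profileManager.GetUserProfile(@"DOMAIN\jdoe");

            // The property name is whatever you entered in step 9.
            object value = profile["MyCustomProperty"].Value;
            Console.WriteLine(value);
        }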

    Read the article

  • Share Files and Folders and Internet between Guest OS and the Host in Hyper-V

    - by Manesh Karunakaran
    Those who are familiar with the VirtualPC, VMWare and VirtualBox environments will be quite irritated to find out that there is no direct way to share files from the host machine to the virtualized guest environment. This is a good thing from a CIO perspective, because it gives the virtualized environments excellent isolation, but for developer junkies like us it is an irritant, especially for those who have nuked their Windows 7 OS and installed Windows Server 2008 R2 for all the SharePoint friendliness it offers. Here's a quick 5-minute how-to on enabling shared folders and Internet access for Hyper-V images, for those who are still struggling with this.

    Step 1: Add a Virtual Network Adapter to your Guest OS. For this, shut down the guest machine, go to its settings and add a Virtual Network Adapter as shown in the images below.

    Step 2: Enable Virtual Networking in Hyper-V. Setting this up is very easy. In the Hyper-V Manager, under Actions (right panel), click the Virtual Network Manager. In the Virtual Network Manager, in the Create virtual network panel, select Internal and click the Add button. At this point, if you open Control Panel\Network and Internet\Network Connections you will be able to see the new network adapter. Now name it something meaningful other than Network Adapter X. You can now add this network to each of your virtual machines, but at this point, unless you assign an IP address in each connection, you won't be able to do much.

    Step 3: Enable Internet Connection Sharing so that guest OSes can also connect to the Internet. To enable ICS follow these steps: click on the network icon in the tray of your host machine and select Network and Sharing Center. From there click Manage network connections. Select the network adapter that you use to access the Internet, right-click it and select Properties. In the properties dialog select the Sharing tab. On this tab check the box that says "Allow other network users..." and then set the Home networking connection to be the network adapter that was created above (now you see why I said to rename it to something useful). Now your virtual machines that have this network connection will automatically get an IP address and will be able to connect to the Internet (provided your Internet connection is working). Because each adapter also gets an automatic address, you can now share files and folders between your host and your virtual machines, which is important since you can't just drag-and-drop files like you can with Virtual PC.

    Step 4: Create a Shared Folder on the host machine and use it in the guest machine. Right-click on the folder that you want to share, select 'Share with\Specific People' and specify who can access the share. Open the guest OS from Hyper-V, navigate to Start > Run and type in the address of the share (or map a drive to the share). Bingo! The share opens!! :)

    Now you can share as many files and folders as you want between the host and the guest, and you also have Internet access inside the virtual machines. Hope that helps.
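    One extra tip, not from the original post: if browsing to the share by machine name does not resolve from the guest, mapping it by the host's ICS address usually works. A hedged example from a command prompt in the guest (192.168.137.1 is the usual ICS host address on Windows 7 / Server 2008 R2, but verify yours with ipconfig on the host; the share name is a placeholder):

        net use Z: \\192.168.137.1\SharedFolder /persistent:yes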

    Read the article

  • O'Reilly Deal of the Day 10/June/2014 - AngularJS Directives

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/06/10/orsquoreilly-deal-of-the-day-10june2014---angularjs-directives.aspx

    Today's half-price E-Book offer from O'Reilly at http://shop.oreilly.com/product/9781783280339.do is AngularJS Directives. "AngularJS, propelled by Google, is quickly becoming one of the most popular JavaScript MVC frameworks available, working to invert the development paradigm and bring data-driven modularity to the web frontend. Directives serve as the core building blocks in AngularJS and enable you to create reusable models that mold around your data structures and breathe new life into the intersection of HTML and JavaScript."

    Read the article

  • Refactor This (Ugly Code)!

    - by Alois Kraus
    Ayende has posted on his blog some ugly code to refactor. First and foremost, it is nearly impossible to reason about other people's code without knowing the driving forces behind it. It is certainly possible to make it much cleaner when potential sources of errors cannot happen in the first place due to good design. I can see what the intention of the code is, but I do not know every brittle detail, e.g. whether I am allowed to reorder things here and there to simplify it. So I decided to make it much simpler by identifying the different responsibilities of the method and encapsulating them in different classes.

    The code we need to refactor seems to deal with a handler that runs after a message has been sent to a message queue. The handler completes the current transaction if there is one and handles any errors happening there. If errors occur during the completion of the transaction, the transaction is at least disposed. We can enter the handler already in a faulty state, in which case we try to deliver the completion event anyway, signal a failure event, and try to resend the message to the queue if it was not inside a transaction. All of this is decorated with many try/catch blocks, duplicated code and some state variables to route the program flow. It is hard to understand and difficult to reason about. In other words: this code is a mess, and it could have been written by me if I were under pressure. Here comes the code we want to refactor:

        private void HandleMessageCompletion(
            Message message,
            TransactionScope tx,
            OpenedQueue messageQueue,
            Exception exception,
            Action<CurrentMessageInformation, Exception> messageCompleted,
            Action<CurrentMessageInformation> beforeTransactionCommit)
        {
            var txDisposed = false;
            if (exception == null)
            {
                try
                {
                    if (tx != null)
                    {
                        if (beforeTransactionCommit != null)
                            beforeTransactionCommit(currentMessageInformation);
                        tx.Complete();
                        tx.Dispose();
                        txDisposed = true;
                    }
                    try
                    {
                        if (messageCompleted != null)
                            messageCompleted(currentMessageInformation, exception);
                    }
                    catch (Exception e)
                    {
                        Trace.TraceError("An error occured when raising the MessageCompleted event, the error will NOT affect the message processing" + e);
                    }
                    return;
                }
                catch (Exception e)
                {
                    Trace.TraceWarning("Failed to complete transaction, moving to error mode" + e);
                    exception = e;
                }
            }
            try
            {
                if (txDisposed == false && tx != null)
                {
                    Trace.TraceWarning("Disposing transaction in error mode");
                    tx.Dispose();
                }
            }
            catch (Exception e)
            {
                Trace.TraceWarning("Failed to dispose of transaction in error mode." + e);
            }
            if (message == null)
                return;
            try
            {
                if (messageCompleted != null)
                    messageCompleted(currentMessageInformation, exception);
            }
            catch (Exception e)
            {
                Trace.TraceError("An error occured when raising the MessageCompleted event, the error will NOT affect the message processing" + e);
            }
            try
            {
                var copy = MessageProcessingFailure;
                if (copy != null)
                    copy(currentMessageInformation, exception);
            }
            catch (Exception moduleException)
            {
                Trace.TraceError("Module failed to process message failure: " + exception.Message + moduleException);
            }
            if (messageQueue.IsTransactional == false) // put the item back in the queue
            {
                messageQueue.Send(message);
            }
        }

    You can see quite some processing and handling going on there. Yes, this looks like real-world code someone put together to make things work, written by someone who does not trust his callbacks. I guess these are event handlers which are optional, and the delegates were extracted from an event to call them back later when necessary. Let's see what the author of this code intended:

        private void HandleMessageCompletion(
            TransactionHandler transactionHandler,
            MessageCompletionHandler handler,
            CurrentMessageInformation messageInfo,
            ErrorCollector errors)
        {
            // commit current pending transaction
            transactionHandler.CallHandlerAndCommit(messageInfo, errors);

            // We have an error for a null message, do not send completion event
            if (messageInfo.CurrentMessage == null)
                return;

            // Send completion event in any case, regardless of errors
            handler.OnMessageCompleted(messageInfo, errors);

            // put message back if queue is not transactional
            transactionHandler.ResendMessageOnError(messageInfo.CurrentMessage, errors);
        }

    I did not bother to write the intention here again since the code should be pretty self-explanatory by now. I have used comments to explain the still nontrivial procedure step by step, revealing the real intention behind all this complex program flow. The original complexity of the problem domain does not go away, but by applying the Single Responsibility Principle (SRP) and some functional style we can tuck the necessary complexity away in useful abstractions which make it much easier to reason about. Since most of the method seems to deal with errors, I thought it was a good idea to encapsulate the error state of the current message in an ErrorCollector object which stores all exceptions in a list, along with a description of what each error was about, in the exception itself. We can log it later or not, depending on the log level or whatever. It is really just a simple list that encapsulates the current error state.
        class ErrorCollector
        {
            List<Exception> _Errors = new List<Exception>();

            public void Add(Exception ex, string description)
            {
                ex.Data["Description"] = description;
                _Errors.Add(ex);
            }

            public Exception Last
            {
                get { return _Errors.LastOrDefault(); }
            }

            public bool HasError
            {
                get { return _Errors.Count > 0; }
            }
        }

    Since the error state is global we have two choices: store a reference in the other helper objects (TransactionHandler and MessageCompletionHandler) or pass it to the method calls when necessary. I chose the latter because a second argument does not hurt, and it makes it easier to reason about the overall state while the helper objects remain stateless and immutable, which makes them much easier to understand and, as a bonus, thread safe as well. This does not mean that the stored member variables are stateless or thread safe, but at least our helper classes are.

    Most of the complexity is located in the transaction handling, which I consider a separate responsibility and delegate to the TransactionHandler. It does nothing if there is no transaction; otherwise it will:

    - Call the Before Commit handler
    - Commit the transaction
    - Dispose the transaction if the commit did throw

    In fact it has a second responsibility: to resend the message if the transaction failed. I saw a good fit there since it deals with transaction failures.

        class TransactionHandler
        {
            TransactionScope _Tx;
            Action<CurrentMessageInformation> _BeforeCommit;
            OpenedQueue _MessageQueue;

            public TransactionHandler(TransactionScope tx, Action<CurrentMessageInformation> beforeCommit, OpenedQueue messageQueue)
            {
                _Tx = tx;
                _BeforeCommit = beforeCommit;
                _MessageQueue = messageQueue;
            }

            public void CallHandlerAndCommit(CurrentMessageInformation currentMessageInfo, ErrorCollector errors)
            {
                if (_Tx != null && !errors.HasError)
                {
                    try
                    {
                        if (_BeforeCommit != null)
                        {
                            _BeforeCommit(currentMessageInfo);
                        }

                        _Tx.Complete();
                        _Tx.Dispose();
                    }
                    catch (Exception ex)
                    {
                        errors.Add(ex, "Failed to complete transaction, moving to error mode");
                        Trace.TraceWarning("Disposing transaction in error mode");
                        try
                        {
                            _Tx.Dispose();
                        }
                        catch (Exception ex2)
                        {
                            errors.Add(ex2, "Failed to dispose of transaction in error mode.");
                        }
                    }
                }
            }

            public void ResendMessageOnError(Message message, ErrorCollector errors)
            {
                if (errors.HasError && !_MessageQueue.IsTransactional)
                {
                    _MessageQueue.Send(message);
                }
            }
        }

    If we need to change the handling in the future, we will have a much easier time reasoning about our application flow than before. After we have completed our transaction and called our callback, we can call the completion handler, which is the main purpose of the HandleMessageCompletion method after all. The responsibility of the MessageCompletionHandler is to call the completion callback, and the failure callback when some error has occurred.

        class MessageCompletionHandler
        {
            Action<CurrentMessageInformation, Exception> _MessageCompletedHandler;
            Action<CurrentMessageInformation, Exception> _MessageProcessingFailure;

            public MessageCompletionHandler(Action<CurrentMessageInformation, Exception> messageCompletedHandler,
                                            Action<CurrentMessageInformation, Exception> messageProcessingFailure)
            {
                _MessageCompletedHandler = messageCompletedHandler;
                _MessageProcessingFailure = messageProcessingFailure;
            }

            public void OnMessageCompleted(CurrentMessageInformation currentMessageInfo, ErrorCollector errors)
            {
                try
                {
                    if (_MessageCompletedHandler != null)
                    {
                        _MessageCompletedHandler(currentMessageInfo, errors.Last);
                    }
                }
                catch (Exception ex)
                {
                    errors.Add(ex, "An error occured when raising the MessageCompleted event, the error will NOT affect the message processing");
                }

                if (errors.HasError)
                {
                    SignalFailedMessage(currentMessageInfo, errors);
                }
            }

            void SignalFailedMessage(CurrentMessageInformation currentMessageInfo, ErrorCollector errors)
            {
                try
                {
                    if (_MessageProcessingFailure != null)
                        _MessageProcessingFailure(currentMessageInfo, errors.Last);
                }
                catch (Exception moduleException)
                {
                    errors.Add(moduleException, "Module failed to process message failure");
                }
            }
        }

    If for some reason I screwed up the logic and we need to call the completion handler from our TransactionHandler, we can simply add a third argument to the CallHandlerAndCommit method for the MessageCompletionHandler and we are fine again. If the logic becomes even more complex and we need to ensure that the completed event is triggered only once, we now have one place, the completion handler, to capture that state. During this refactoring I simply put things together that belong together and came up with useful abstractions.
    If you look at the original argument list of the HandleMessageCompletion method, I have put many things together:

    - Message message became CurrentMessageInformation messageInfo (which encapsulates the Message).
    - TransactionScope tx, Action<CurrentMessageInformation> beforeTransactionCommit and OpenedQueue messageQueue became TransactionHandler transactionHandler (which encapsulates all three).
    - Exception exception became ErrorCollector errors.
    - Action<CurrentMessageInformation, Exception> messageCompleted became MessageCompletionHandler handler (which encapsulates both the messageCompleted and the messageProcessingFailure callbacks).

    The reason is simple: put the things that have relationships together and you will almost automatically find useful abstractions. I hope this makes sense to you. If you see a way to make it even simpler, you can show Ayende your improved version as well.
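    For completeness, here is a short sketch of how the call site might wire the helpers together (my own addition, assuming the fields and arguments of the original method are in scope):

        // Hypothetical call-site wiring for the refactored HandleMessageCompletion.
        var errors = new ErrorCollector();
        var transactionHandler = new TransactionHandler(tx, beforeTransactionCommit, messageQueue);
        var completionHandler = new MessageCompletionHandler(messageCompleted, MessageProcessingFailure);

        HandleMessageCompletion(transactionHandler, completionHandler, currentMessageInformation, errors);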

    Read the article

  • Updating a SharePoint master page via a solution (WSP)

    - by Kelly Jones
    In my last blog post, I wrote about how to deploy a SharePoint theme using Features and a solution package. As promised in that post, here is how to update an already deployed master page.

    There are several ways to update a master page in SharePoint. You could upload a new version to the master page gallery, or you could upload a new master page to the gallery and then set the site to use this new page. Manually uploading your master page to the master page gallery might be the best option, depending on your environment. For my client, I did these steps in code, which is what they preferred. (Image courtesy of: http://www.joiningdots.net/blog/2007/08/sharepoint-and-quick-launch.html )

    Before you decide which method you need to use, take a look at your existing pages. Are they using the SharePoint dynamic token or the static token for the master page reference? The wha, huh? So, there are four ways to tell an .aspx page hosted in SharePoint which master page it should use:

    - "~masterurl/default.master" – tells the page to use the default.master property of the site
    - "~masterurl/custom.master" – tells the page to use the custom.master property of the site
    - "~site/default.master" – tells the page to use the file named "default.master" in the site's master page gallery
    - "~sitecollection/default.master" – tells the page to use the file named "default.master" in the site collection's master page gallery

    For more information about these tokens, take a look at this article on MSDN. Once you determine which token your existing pages point to, you know which file you need to update. So, if the ~masterurl tokens are used, then you upload a new master page, either replacing the existing one or adding another one to the gallery. If you've uploaded a new file with a new name, you'll just need to set it as the master page, either through the UI (MOSS only) or through code (MOSS or WSS Feature receiver code – or using SharePoint Designer).

    If the ~site or ~sitecollection tokens were used, then you're limited to either replacing the existing master page or editing all of your existing pages to point to another master page. In most cases, it probably makes sense to just replace the master page. For my project, I'm working with WSS and the existing pages are set to the ~sitecollection token. Based on this, I decided to just upload a new version of the existing master page (and not modify the dozens of existing pages). Also, since my client prefers Features and solutions, I created a master page Feature and a corresponding Feature Receiver. For information on creating the elements and feature files, check out this post: http://sharepointmagazine.net/technical/development/deploying-the-master-page .

    This works fine, unless you are overwriting an existing master page, which was my case. You'll run into errors because the master page file needs to be checked out, replaced, and then checked in. I wrote code in my Feature Activated event handler to accomplish these steps.
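    As an aside, if you go the new-file-with-a-new-name route, setting the master page from code is just a property assignment. A minimal sketch (my own, not this post's code; the site URL and file name are hypothetical, and it assumes your pages use the ~masterurl tokens):

        // Point a site at a newly uploaded master page (hypothetical names).
        using (SPSite site = new SPSite("http://myserver/sites/mysite"))
        using (SPWeb web = site.OpenWeb())
        {
            string masterPath = web.ServerRelativeUrl.TrimEnd('/') + "/_catalogs/masterpage/myNewMaster.master";
            web.MasterUrl = masterPath;       // used by ~masterurl/default.master
            web.CustomMasterUrl = masterPath; // used by ~masterurl/custom.master
            web.Update();
        }

    In my case, though, I was overwriting the existing master page, so my Feature Receiver checks the file out, replaces it, and checks it back in instead.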
    Here are the steps necessary in code:

    1. Get the file name from the elements file of the Feature
    2. Check out the file from the master page gallery
    3. Upload the file to the master page gallery
    4. Check in the file to the master page gallery

    Here's the code in my Feature Receiver:

        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            try
            {
                SPElementDefinitionCollection col = properties.Definition.GetElementDefinitions(System.Globalization.CultureInfo.CurrentCulture);

                using (SPWeb curweb = GetCurWeb(properties))
                {
                    foreach (SPElementDefinition ele in col)
                    {
                        if (string.Compare(ele.ElementType, "Module", true) == 0)
                        {
                            // <Module Name="DefaultMasterPage" List="116" Url="_catalogs/masterpage" RootWebOnly="FALSE">
                            //   <File Url="myMaster.master" Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE"
                            //         Path="MasterPages/myMaster.master" />
                            // </Module>
                            string Url = ele.XmlDefinition.Attributes["Url"].Value;
                            foreach (System.Xml.XmlNode file in ele.XmlDefinition.ChildNodes)
                            {
                                string Url2 = file.Attributes["Url"].Value;
                                string Path = file.Attributes["Path"].Value;
                                string fileType = file.Attributes["Type"].Value;

                                if (string.Compare(fileType, "GhostableInLibrary", true) == 0)
                                {
                                    // Check out file in document library
                                    SPFile existingFile = curweb.GetFile(Url + "/" + Url2);

                                    if (existingFile != null)
                                    {
                                        if (existingFile.CheckOutStatus != SPFile.SPCheckOutStatus.None)
                                        {
                                            throw new Exception("The master page file is already checked out. Please make sure the master page file is checked in, before activating this feature.");
                                        }
                                        else
                                        {
                                            existingFile.CheckOut();
                                            existingFile.Update();
                                        }
                                    }

                                    // Upload file to document library
                                    string filePath = System.IO.Path.Combine(properties.Definition.RootDirectory, Path);
                                    string fileName = System.IO.Path.GetFileName(filePath);
                                    char slash = Convert.ToChar("/");
                                    string[] folders = existingFile.ParentFolder.Url.Split(slash);

                                    if (folders.Length > 2)
                                    {
                                        Logger.logMessage("More than two folders were detected in the library path for the master page. Only two are supported.",
                                            Logger.LogEntryType.Information); // custom logging component
                                    }

                                    SPFolder myLibrary = curweb.Folders[folders[0]].SubFolders[folders[1]];

                                    FileStream fs = File.OpenRead(filePath);

                                    SPFile newFile = myLibrary.Files.Add(fileName, fs, true);

                                    myLibrary.Update();
                                    newFile.CheckIn("Updated by Feature", SPCheckinType.MajorCheckIn);
                                    newFile.Update();
                                }
                            }
                        }
                    }
                }
            }
            catch (Exception ex)
            {
                string msg = "Error occurred during feature activation";
                Logger.logException(ex, msg, "");
            }
        }

        /// <summary>
        /// Using a Feature's properties, get a reference to the Current Web
        /// </summary>
        /// <param name="properties"></param>
        public SPWeb GetCurWeb(SPFeatureReceiverProperties properties)
        {
            SPWeb curweb;

            // Check if the parent of the web is a site or a web
            if (properties != null && properties.Feature.Parent.GetType().ToString() == "Microsoft.SharePoint.SPWeb")
            {
                // Get web from parent
                curweb = (SPWeb)properties.Feature.Parent;
            }
            else
            {
                // Get web from Site
                using (SPSite cursite = (SPSite)properties.Feature.Parent)
                {
                    curweb = (SPWeb)cursite.OpenWeb();
                }
            }

            return curweb;
        }

    This did the trick.
    It allowed me to update my existing master page through an easily repeatable process (which is great when you are working with more than one environment and want to do things like TEST it!). I did run into what I would classify as a strange issue with one of my subsites, but that's the topic for another blog post.

    Read the article

  • More Retro Games

    - by Matt Christian
    Last week I made two stops at my local game stores and spent a load of cash on a bunch of new retro games for my collection. Here are the recent additions:

    NES
    - Mega Man 2
    - The Adventures of Bayou Billy
    - Ducktales
    - Metal Gear
    - Super Mario Bros / Duck Hunt
    - Firestorm
    - Dragon's Lair
    - Bartman Meets Radioactive Man

    N64
    - Superman 64
    - Zelda: Ocarina of Time (in original box, box is in poor condition)

    Atari
    - Superman
    - Adventure
    - Donkey Kong
    - Raiders of the Lost Ark

    Dreamcast
    - Memory card with view screen
    - Space Channel 5

    Genesis (all in case)
    - Jurassic Park
    - Sonic Spinball
    - Sonic the Hedgehog 3 (missing manual)
    - Spiderman (also called Spiderman vs. The Kingpin)

    GameGear
    - Bart vs The Space Mutants

    Quite a large haul given it was all purchased in two days. Metal Gear I got for a great deal, and I almost considered buying their other copy simply to resell for more, though I decided against it to let another lucky soul find it. I may need to run over there again because I think they had TMNT 2 (NES) for around $6 and it usually sells for more than that. I could have sworn I grabbed it and bought it, but my receipt tells me differently. I also found my copy of Super Mario 3 and added that to my collection. Unfortunately one of the corners of the label has begun to peel up pretty badly, which sucks, although it's still a good item for the collection.

    In other retro news, this weekend was Easter, and while at my grandparents' the cousins wanted to play on their NES, which was not working. Me being the retro NES nerd I am, I grabbed a screwdriver, some Windex, a few toothpicks, and a few cotton swabs and had it up and running in under an hour (that includes eating dinner!). The NES holds the games tighter, has a better connection, and works almost instantly. I should do THAT for a living!

    Read the article

  • Error installing Visual Studio 2008 on Windows 7

    - by José Marcos García Espinosa
    Post/Install() failed in ISetupManager::InternalInstallManager() with HRESULT -2147023293. - That is the message that appears in the error description or in the log files. Some say that if you dig deeper you will find the real problem. After retrying the installation about 8 times, uninstalling more and more things each time, you discover that the solution is simple: uninstall Office! I don't know why, but I believe Microsoft assumes that if you are a developer, you will install VS2008 first and Office afterwards; if you do it the other way around, compatibility problems arise (since VS2008 installs interoperability/development/connection tools for Office, namely Visual Studio Tools for Office) and the installation sometimes does not even get past the .NET Framework. At the end of the day, I think this one is a fairly simple fix.

    Read the article

< Previous Page | 32 33 34 35 36 37 38 39 40 41 42 43  | Next Page >