Search Results

Search found 7338 results on 294 pages for 'useful'.


  • When Less is More

    - by aditya.agarkar
    How do you reconcile the fact that while overall warehouse volume is down, you still need more workers in the warehouse to ship all the orders? A WMS customer recently pointed out this seemingly perplexing fact at a customer conference. So what is going on? Didn't we tell you before that for a warehouse the customer is really the "king"? In this case customers are merely responding to low overall demand and uncertainty. They do not want to hold inventory, and one of the ways to do that is to decrease the order size and order more frequently. The overall impact to the warehouse? Two words: "More work!!" This is not all. Smaller order sizes also mean challenges from a transportation perspective, including a rise in costlier parcel or LTL shipments instead of cheaper TL shipments.

    Here is a hypothetical scenario where a customer reduces the order size by 10% and increases the order frequency by 10%. As the following table shows, the overall volume declines by 1% (1.10 x 0.90 = 0.99) but the warehouse has to ship roughly 10% more lines.

    Order Frequency (Line Count) | Order Size (Units) | Total Volume | Change (%)
    100                          | 100                | 10,000       | -
    110                          | 90                 | 9,900        | -1%

    Even though the volume is down, there is going to be more work in the warehouse in terms of the number of lines shipped. The operators need to pick more discrete orders, pack them into more shipping containers and ship more deliveries. What do you do differently if you are facing this situation? Here are some obvious steps to take:

    Uno: Change your pick methods. If you are used to doing order picks, that needs to go out the door. You need to evaluate batch picking and grouping techniques. Go for cluster picking, go for zone picking, pick and pass... anything that improves your picker productivity. More than anything, cluster picking works like a charm and, above all, it's simple and very effective.

    Dos: Are you minimizing "touch" points in your pick process? Consider doing one-step pick, pack and confirm, i.e. pick and pack items directly into shipping cartons. Done correctly, the container will not require any more "touch" points all the way to trailer loading. Use cartonization!

    Tres: Are the items being picked from an optimized pick face? Are the items slotted correctly? This needs to be looked into. Consider automated "pull" or "push" replenishment into your pick face, and also make sure that high-demand items occupy the golden zones.

    Cuatro: Are you tracking labor productivity? If not, there needs to be a concerted push for having labor standards in place.

    Hope you found these ideas useful.

    Read the article

  • Java Space on Parleys

    - by Yolande Poirier
    Now available! A great selection of JavaOne 2010 and JVM Language Summit 2010 sessions, as well as Oracle Technology Network TechCasts, on the new Java Space on Parleys website. Oracle partnered with Stephan Janssen, founder of Parleys, to make this happen. The Parleys website offers a user-friendly experience for viewing online content. You can download some of the talks to your desktop or watch them on the go on mobile devices. The current selection is a wealth of expertise from top Java luminaries and Oracle experts.

    JavaOne 2010 sessions:
    · Best practices for signing code by Sean Mullan
    · Building software using rich client platforms by Rickard Thulin
    · Developing beyond the component libraries by Ryan Cuprak
    · Java API for keyhole markup language by Florian Bachmann
    · Avoiding common user experience anti-patterns by Burk Hufnagel
    · Accelerating Java workloads via GPUs by Gary Frost

    JVM Language Summit 2010 sessions:
    · Mixed language project compilation in Eclipse by Andy Clement
    · Gathering the threads by John Rose
    · LINQ: language features for concurrency by Neal Gafter
    · Improvements in OpenJDK useful for JVM languages by Eric Caspole
    · Symmetric Multilanguage - VM Architecture by Oleg Pliss

    Special interviews with Oracle experts on product innovations:
    · Ludovic Champenois, Java EE architect, on GlassFish 3.1 and Java EE
    · John Jullion-Ceccarelli and Martin Ryzl on NetBeans IDE 6.9

    You can choose to listen to a section of a talk using the agenda view and search for related content while watching a presentation. Enjoy the Java content and vote on it!

    Read the article

  • StreamInsight and Reactive Framework Challenge

    In his blog post, Roman from the StreamInsight team asked if we could create a Reactive Framework version of what he had done in the post using StreamInsight. For those who don't know, the Reactive Framework - or Rx to its friends - is a library for composing asynchronous and event-based programs using observable collections in the .NET Framework. Yes, there is some overlap between StreamInsight and the Reactive Extensions, but StreamInsight has more flexibility and power in its temporal algebra (windowing, alteration of event headers). Well, here are two alternate ways of doing what Roman did. The first example is a mix of StreamInsight and Rx:

        var rnd = new Random();
        var RandomValue = 0;
        var interval = Observable.Interval(TimeSpan.FromMilliseconds((Int32)rnd.Next(500, 3000)))
            .Select(i =>
            {
                RandomValue = rnd.Next(300);
                return RandomValue;
            });

        Server s = Server.Create("Default");
        Microsoft.ComplexEventProcessing.Application a = s.CreateApplication("Rx SI Mischung");

        var inputStream = interval.ToPointStream(
            a,
            evt => PointEvent.CreateInsert(
                System.DateTime.Now.ToLocalTime(),
                new { RandomValue = evt }),
            AdvanceTimeSettings.IncreasingStartTime,
            "Rx Sample");

        var r = from evt in inputStream
                select new { runningVal = evt.RandomValue };

        foreach (var x in r.ToPointEnumerable().Where(e => e.EventKind != EventKind.Cti))
        {
            Console.WriteLine(x.Payload.ToString());
        }

    This next version, though, uses the Reactive Extensions only:

        var rnd = new Random();
        var RandomValue = 0;
        Observable.Interval(TimeSpan.FromMilliseconds((Int32)rnd.Next(500, 3000)))
            .Select(i =>
            {
                RandomValue = rnd.Next(300);
                return RandomValue;
            })
            .Subscribe(Console.WriteLine, () => Console.WriteLine("Completed"));

        Console.ReadKey();

    These are very simple examples, but both technologies allow us to do a lot more. The ICEPObservable() design pattern was reintroduced in StreamInsight 1.1, and the more I use it the more I like it. It is a very useful pattern when wanting to show StreamInsight samples, as is the IEnumerable() pattern.

    Read the article

  • Silverlight Cream for January 14, 2011 -- #1027

    - by Dave Campbell
    In this Issue: Sigurd Snørteland, Yochay Kiriaty, WindowsPhoneGeek(-2-), Jesse Liberty(-2-), Kunal Chowdhury, Martin Krüger(-2-), Jonathan Cardy.

    Above the Fold:
    Silverlight: "Image Viewer using a GridSplitter control" Martin Krüger
    WP7: "Implementing WP7 ToggleImageControl from the ground up: Part1" WindowsPhoneGeek
    VS2010 Templates: "MVVM Project Templates for Visual Studio 2010" Jonathan Cardy

    From SilverlightCream.com:

    BabySmash7 - a WP7 children's game (source code included)
    Sigurd Snørteland not only brings Scott Hanselman's Baby Smash to WP7, but he's also delivering the source to us, along with a discussion of the app.

    Windows Push Notification Server Side Helper Library
    Yochay Kiriaty has a tutorial up on Push Notification... not explaining it, but a discussion of a WP7 Push Recipe that provides an easy way to send all 3 kinds of push notification messages currently supported.

    Implementing WP7 ToggleImageControl from the ground up: Part1
    WindowsPhoneGeek has a great 2-part series up on building a useful WP7 custom control -- a ToggleImage control... this Part 1 covers the definition, deciding on visual states, etc... buckle up... this is good stuff.

    Implementing WP7 ToggleImageControl from the ground up: Part2
    Part 2 in WindowsPhoneGeek's series is also up, and it's where the real fun lives -- implementing the behavior of the control... the source is available at the end of the post.

    The Full Stack #5 – Entity Framework Code First
    Jesse Liberty has episode 5 of the "Full Stack" series he and Jon Galloway are doing; they are discussing Entity Framework Code First.

    Windows Phone From Scratch #18 – MVVM Light Toolkit Soup To Nuts 3
    Jesse Liberty also has Part 3 of his MVVM Light and WP7 posts up, digging into messaging in this one... for example, view <--> ViewModel communication.

    Exploring Ribbon Control for Silverlight (Part - 1)
    Kunal Chowdhury has Part 1 of a series up on using the Silverlight Ribbon Control from DevComponents... lots of information and a great intro to a great control.

    Image Viewer using a GridSplitter control
    Martin Krüger has a very nice picture viewer up as a demo, with code available, that simply uses the GridSplitter to implement the aperture... check it out.

    How to: Gentle animation of a magnify effect
    Martin Krüger's latest is a take-off on a prior post he links to called 'just for fun', in which he smoothly animates a magnify effect... just very cool animation... explanation and source.

    MVVM Project Templates for Visual Studio 2010
    Jonathan Cardy has a couple of resources you probably wanna grab... two MVVM project templates for VS2010, one WPF and one Silverlight.

    Read the article

  • Manage Your Twitter Account from the Sidebar in Firefox

    - by Asian Angel
    Are you a Twitter addict who needs an easy way to manage your account in Firefox? Now you can access Twitter in your sidebar or as a separate window with the TwitKit+ extension for Firefox.

    Accessing TwitKit+
    There are three ways that you can access TwitKit+ after installing the extension. The first is by adding the "Toolbar Button" to your browser's UI; the second and third methods are through the View and Tools menus.

    TwitKit+ in Action
    When you open TwitKit+ for the first time you will see Twitter's "Public Tweet Stream". To get started, log in to your account. Note: If you do not care for the "brown theme" you can select a different one in "Preferences".

    Here is a closer look at the top area and the commands available. Notice the "blue arrow symbol" in the upper left corner... very useful if you want to separate TwitKit+ from your main browser window for a bit.

    Secure Mode, Undock, Preferences, Login/Logout
    Google Search, Twitter Search, Copy Selection To Status Box, Shorten Selected URL
    Public, User, Friends, Followers, @ Messages, Direct Messages, Profile

    Note: To use Google or Twitter search, enter your term in the "Status Area" and click on the appropriate service icon.

    Here is the regular timeline for our account... the "clickable tab buttons" make everything easy to view and work with. You can perform actions such as replying, retweeting, marking as a favorite, etc. using the set of "management buttons" at the bottom of each tweet. To add a new tweet to your timeline, enter your text and press "Enter".

    A look at the "Following List" for our account. Having a more defined and separate set of "view categories" makes this better than directly accessing the Twitter website.

    Preferences
    The preferences can be quickly sorted out... choose how often the timeline is updated, name display, favorite URL-shortening service, theme, and font size. Note: The default connection setting is "Secure Access".

    Conclusion
    TwitKit+ makes a nice addition to Firefox for anyone who loves keeping up with Twitter throughout the day. It's there when you want it and out of your way the rest of the time.

    Links
    Download the TwitKit+ extension (Mozilla Add-ons)

    Read the article

  • Scaling-out Your Services by Message Bus based WCF Transport Extension – Part 1 – Background

    - by Shaun
    Cloud computing gives us more flexibility with computing resources; we can provision and deploy an application or service with multiple instances over multiple machines. As the number of service instances grows, balancing the incoming messages and workload becomes a new challenge. Currently there are two approaches we can use to pass incoming messages to the service instances; I would like to call them dispatcher mode and pulling mode.

    Dispatcher Mode
    The dispatcher mode introduces a role that is responsible for finding the best service instance to process each request. Imagine four clients communicating with the service through the underlying transport. For example, if we are using HTTP, the clients might connect to the same service URL. On the server side there's a dispatcher listening on this URL which tries to retrieve all messages. When a message comes in, the dispatcher finds a proper service instance to process it. There are three mechanisms for finding the instance:

    Round-robin: The dispatcher always sends the message to the next instance. For example, if the dispatcher sent a message to instance 2, the next message will be sent to instance 3, regardless of whether instance 3 is busy at that moment.

    Random: The dispatcher picks a service instance randomly and, as with round-robin, regardless of whether the instance is busy.

    Sticky: The dispatcher sends all related messages to the same service instance. This approach is used when the service methods are stateful or sessionful.

    As you can see, none of these approaches is really load balanced. The clients send messages at any time, and each message might take a different processing duration on the server side. This means that in some cases some of the service instances are very busy while others are almost idle. For example, if we were using round-robin mode, it could happen that most of the simple task messages are passed to instance 1 while the complex ones are sent to instance 3, even though instance 1 should be idle.

    This brings some problems to our architecture. The first one is that the response to the clients might take longer than it should: messages that could be processed by an idle instance may still be dispatched to a busy one under round-robin mode. Secondly, if many requests come from the clients in a very short period, service instances might be filled with tons of pending tasks and some instances might crash. Third, if we are using a cloud platform to host our service instances, for example Windows Azure, the computing resource is billed by service deployment period instead of actual CPU usage. This means that if any service instance is idle, it is wasting our money! Last, the dispatcher would be the bottleneck of our system, since all incoming messages must be routed by the dispatcher. If we are using HTTP or TCP as the transport, the dispatcher would be a network load balancer. If we want more capacity, we have to scale up, or buy a hardware load balancer which is very expensive, as well as scaling out the service instances.
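    To make the round-robin weakness concrete, here is a minimal C# sketch of the dispatcher role (the IServiceInstance type and its Process method are illustrative placeholders, not code from this series):

        // A minimal round-robin dispatcher sketch. It always forwards to the
        // next instance, whether or not that instance is busy -- exactly the
        // weakness described above.
        public interface IServiceInstance
        {
            void Process(string message);
        }

        public class RoundRobinDispatcher
        {
            private readonly IServiceInstance[] _instances;
            private int _next;

            public RoundRobinDispatcher(IServiceInstance[] instances)
            {
                _instances = instances;
            }

            public void Dispatch(string message)
            {
                IServiceInstance target = _instances[_next];
                _next = (_next + 1) % _instances.Length;

                // No check of the target's current workload is made here,
                // so a slow instance can accumulate a long backlog.
                target.Process(message);
            }
        }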
    Pulling Mode
    Pulling mode doesn't need a dispatcher to route the messages. All service instances listen to the same transport and, when idle, try to retrieve the next proper message to process. Since there is no dispatcher in pulling mode, it requires some features of the transport. The transport must support multiple connected clients and multiple listening servers; HTTP and TCP don't allow multiple processes to listen on the same address and port, so they cannot be used in pulling mode directly. All messages in the transport must be FIFO, which means the older message must be received before the newer one. Message selection would be a plus: both service and client can then specify selection criteria and receive only the specified kinds of messages. This feature is not mandatory, but it is very useful when implementing the request-reply and duplex WCF channel modes; otherwise we must have an in-memory dictionary to store the reply messages. I will explain more about this in the following articles.

    A message bus, or message queue, is the best candidate transport for the pulling mode. First, it allows multiple applications to listen on the same queue, and it's FIFO. Some message buses also support message selection, such as TIBCO EMS and RabbitMQ. Others provide an in-memory dictionary which can store the reply messages, for example Redis.

    The principle of pulling mode is to let the service instances be self-managed: each instance tries to retrieve the next pending incoming message when it has finished its current task. This gives us several benefits and solves the problems we met in the dispatcher mode. Each incoming message is received by the best instance to process it, which keeps the load very balanced; it will not happen that some instances are busy while others are idle, since an idle instance will retrieve more tasks to keep itself busy. Because all instances try their best to stay busy, we can use fewer instances than in dispatcher mode, which is more cost-effective. Since there's no dispatcher in the system, there is no bottleneck. When we introduce more service instances, in dispatcher mode we have to change something so the dispatcher knows about the new instances, but in pulling mode, since all service instances are self-managed, no extra change is needed at all. And if there are many incoming messages, the message bus can queue them in the transport, so the service instances will not crash.

    All the above are benefits of the pulling mode, but it introduces some problems as well. Process tracking and debugging become more difficult: since the service instances are self-managed, we cannot know in advance which instance will process a message, so we need more information to support debugging and tracking. Real-time response may also suffer: each service instance processes the next message only after the current one is done, so if we have real-time requirements this may not be a good solution.

    Comparing the pros and cons above, the pulling mode is the better solution for a distributed system architecture, because what we need most is scalability, cost-effectiveness and self-management.
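    Expressed as code, each service instance is just a loop against the bus. Here is a sketch of such a self-managed worker, written against the IBus interface defined later in this post (the ProcessMessage body is an illustrative placeholder):

        // A self-managed pulling worker: no dispatcher involved. Every
        // instance runs this same loop, so busy instances simply pull fewer
        // messages and idle instances pull more.
        public class PullingWorker
        {
            private readonly IBus _bus;

            public PullingWorker(IBus bus)
            {
                _bus = bus;
            }

            public void Run()
            {
                while (true)
                {
                    // Block until the next client request is available.
                    BusMessage request = _bus.Receive(fromClient: true, replyTo: null);

                    string reply = ProcessMessage(request.Content);

                    // Route the reply back to the channel that sent the request.
                    _bus.SendReply(reply, fromClient: false, replyTo: request.From);
                }
            }

            private string ProcessMessage(string content)
            {
                // Business logic placeholder.
                return content;
            }
        }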
    WCF and WCF Transport Extensibility
    Windows Communication Foundation (WCF) is a framework for building service-oriented applications, and in the .NET world WCF is the standard way to implement a service. In this series I'm going to demonstrate how to implement the pulling mode on top of a message bus by extending WCF. I don't want to go deep into every related area of WCF, but I will highlight its transport extensibility.

    When we implement an RPC foundation there are many aspects to deal with, for example message encoding, encryption, authentication, and message sending and receiving. In WCF, each aspect is represented by a channel. A message is passed through all necessary channels and finally sent over the underlying transport; on the other side, the message is received from the transport and passed back through the same channels until it reaches the business logic. This model is called the "channel stack" in WCF, and the last channel in the channel stack must always be a transport channel, which is responsible for sending and receiving the messages. As we are going to implement WCF over a message bus, and implement the pulling-mode scaling-out solution, we need to create our own transport channel so that the client and service can exchange messages over our bus.

    Before we go deep into the transport channel, let's look at the message exchange patterns that WCF defines. A message exchange pattern (MEP) defines how client and service exchange messages over the transport. WCF defines three basic MEPs: datagram, request-reply and duplex.

    Datagram: Also known as one-way, or fire-and-forget mode. The message is sent from the client to the service and no reply is needed; the client doesn't care about the result of the message at all.

    Request-Reply: A very commonly used pattern. The client sends a request message to the service and waits until the reply message comes back from the service.

    Duplex: The client sends a message to the service, and while the service is processing the message it can call back to the client. During the callback the service acts like a client, while the client acts like a service.

    In WCF, each MEP has associated channel interfaces:

    MEP           | Channels
    Datagram      | IInputChannel, IOutputChannel
    Request-Reply | IRequestChannel, IReplyChannel
    Duplex        | IDuplexChannel

    The channels are created by a ChannelListener on the server side and a ChannelFactory on the client side. The ChannelListener and ChannelFactory are created by the TransportBindingElement, and the TransportBindingElement is created by the Binding, which can be defined as a new binding or composed from a custom binding. For more information about the transport channel model, please refer to the MSDN documentation. (The original post illustrates the transport channel objects for the request-reply, datagram and duplex MEPs with diagrams.)

    Having investigated the WCF transport architecture, channel model and MEPs, we can finally identify what we should build for our message bus based transport layer:

    Binding: (Optional) Defines the channel elements in the channel stack, with our transport binding element at the bottom of the stack. We can use the built-in CustomBinding instead.

    TransportBindingElement: Defines which MEPs are supported in our transport and creates the related ChannelListener and ChannelFactory. This also defines the scheme of the endpoints using this transport.

    ChannelListener: Creates the server-side channel for its MEP. We can have one ChannelListener that creates channels for all supported MEPs, or one ChannelListener per MEP. In this series I will use the second approach.

    ChannelFactory: Creates the client-side channel for its MEP. We can have one ChannelFactory that creates channels for all supported MEPs, or one ChannelFactory per MEP. In this series I will use the second approach.

    Channels: Based on the MEPs we want to support, we need to implement the channels accordingly. For example, if we want our transport to support request-reply mode, we should implement IRequestChannel and IReplyChannel. In this series I will implement all three MEPs listed above, one by one.

    Scaffold: In order to make our transport extension work we also need some scaffolding, for example classes to send and receive messages through our message bus, and code to read and write the WCF message, etc. These are not strictly necessary but will be very useful in our example.
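    As a taste of where the series is heading, here is a skeleton of such a transport binding element (a sketch only; the "MessageBus" names and the "msgbus" scheme are placeholders, not the actual code of this series):

        using System;
        using System.ServiceModel.Channels;

        // Skeleton of a message-bus transport binding element.
        public class MessageBusTransportBindingElement : TransportBindingElement
        {
            // Endpoint addresses using this transport carry a custom scheme,
            // e.g. "msgbus://..." (placeholder scheme).
            public override string Scheme
            {
                get { return "msgbus"; }
            }

            public override BindingElement Clone()
            {
                return new MessageBusTransportBindingElement();
            }

            // Advertise which MEP channel shapes the transport can build on
            // the client side; the server side mirrors this with
            // CanBuildChannelListener/BuildChannelListener.
            public override bool CanBuildChannelFactory<TChannel>(BindingContext context)
            {
                return typeof(TChannel) == typeof(IOutputChannel)   // datagram
                    || typeof(TChannel) == typeof(IRequestChannel)  // request-reply
                    || typeof(TChannel) == typeof(IDuplexChannel);  // duplex
            }

            public override IChannelFactory<TChannel> BuildChannelFactory<TChannel>(BindingContext context)
            {
                // One factory per MEP, as described above; the real
                // implementations come in later parts of the series.
                throw new NotImplementedException();
            }
        }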
    Message Bus
    There is only one thing remaining before we can begin to implement our scaling-out WCF transport: the message bus. As I mentioned above, the message bus must have certain features to fulfill all the WCF MEPs. In my company we will be using TIBCO EMS, an enterprise message bus product, but as I said before, we can use any message bus product as long as it satisfies our requirements. Here I would like to introduce an interface to separate the message bus from WCF; this allows us to implement the bus operations with whatever kind of bus we are going to use. The interface looks like this:

        public interface IBus : IDisposable
        {
            string SendRequest(string message, bool fromClient, string from, string to = null);

            void SendReply(string message, bool fromClient, string replyTo);

            BusMessage Receive(bool fromClient, string replyTo);
        }

    There are only three methods on the bus interface. Let me explain them one by one. The SendRequest method is responsible for sending a request message onto the bus. The parameters are:

    message: The WCF message content.
    fromClient: Indicates whether this message came from the client.
    from: The channel ID this message was sent from. The channel ID is generated when any kind of channel is created, which will be explained in the following articles.
    to: The channel ID this message should be received by. In the request-reply and duplex MEPs this is necessary, since the reply message must be received by the channel which sent the related request message.

    The SendReply method is responsible for sending a reply message. It's very similar to the previous one but has no "from" parameter, because no MEP ever needs to reply to a reply message.

    The Receive method is responsible for waiting for an incoming message, including request messages and specified reply messages. It returns a BusMessage object, which carries the channel information:

        public class BusMessage
        {
            public string MessageID { get; private set; }
            public string From { get; private set; }
            public string ReplyTo { get; private set; }
            public string Content { get; private set; }

            public BusMessage(string messageId, string fromChannelId, string replyToChannelId, string content)
            {
                MessageID = messageId;
                From = fromChannelId;
                ReplyTo = replyToChannelId;
                Content = content;
            }
        }
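    To see how the interface is meant to be used, here is a client-side round trip sketched against IBus (illustrative only, not from the original post):

        using System;

        // Illustrative client-side round trip over an IBus instance.
        public static class BusClientSample
        {
            public static void SendAndWait(IBus bus)
            {
                string clientChannelId = Guid.NewGuid().ToString();

                // fromClient = true marks a message travelling client -> service.
                bus.SendRequest("<s:Envelope>...</s:Envelope>", fromClient: true, from: clientChannelId);

                // Wait for the reply addressed to this channel; fromClient = false
                // because the reply travels service -> client.
                BusMessage reply = bus.Receive(fromClient: false, replyTo: clientChannelId);
                Console.WriteLine(reply.Content);
            }
        }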
    Now let's implement a message bus based on the IBus interface. Since I don't want you to have to buy and install TIBCO EMS or any other message bus product, I will implement an in-process memory bus. This bus is only for test and sample purposes, and can only be used if the service and client are in the same process. Very straightforward:

        using System;
        using System.Collections.Concurrent;
        using System.Linq;
        using System.Threading;

        public class InProcMessageBus : IBus
        {
            private readonly ConcurrentDictionary<Guid, InProcMessageEntity> _queue;
            private readonly object _lock;

            public InProcMessageBus()
            {
                _queue = new ConcurrentDictionary<Guid, InProcMessageEntity>();
                _lock = new object();
            }

            public string SendRequest(string message, bool fromClient, string from, string to = null)
            {
                var entity = new InProcMessageEntity(message, fromClient, from, to);
                _queue.TryAdd(entity.ID, entity);
                return entity.ID.ToString();
            }

            public void SendReply(string message, bool fromClient, string replyTo)
            {
                var entity = new InProcMessageEntity(message, fromClient, null, replyTo);
                _queue.TryAdd(entity.ID, entity);
            }

            public BusMessage Receive(bool fromClient, string replyTo)
            {
                InProcMessageEntity e = null;
                while (true)
                {
                    lock (_lock)
                    {
                        var entity = _queue
                            .Where(kvp => kvp.Value.FromClient == fromClient &&
                                          (kvp.Value.To == replyTo || string.IsNullOrWhiteSpace(kvp.Value.To)))
                            .FirstOrDefault();
                        if (entity.Key != Guid.Empty && entity.Value != null)
                        {
                            _queue.TryRemove(entity.Key, out e);
                        }
                    }
                    if (e == null)
                    {
                        Thread.Sleep(100);
                    }
                    else
                    {
                        return new BusMessage(e.ID.ToString(), e.From, e.To, e.Content);
                    }
                }
            }

            public void Dispose()
            {
            }
        }

    The InProcMessageBus stores messages as InProcMessageEntity objects, which carry some extra information besides the WCF message itself:

        public class InProcMessageEntity
        {
            public Guid ID { get; set; }
            public string Content { get; set; }
            public bool FromClient { get; set; }
            public string From { get; set; }
            public string To { get; set; }

            public InProcMessageEntity()
                : this(string.Empty, false, string.Empty, string.Empty)
            {
            }

            public InProcMessageEntity(string content, bool fromClient, string from, string to)
            {
                ID = Guid.NewGuid();
                Content = content;
                FromClient = fromClient;
                From = from;
                To = to;
            }
        }

    Summary
    Now all the necessary pieces are ready; the next step is to implement our WCF message bus transport extension. In this post I described two approaches for scaling out the service side, especially when using a cloud platform - dispatcher mode and pulling mode - and compared their pros and cons. Then I introduced the WCF channel stack, channel model and the transport extension points, and identified what we should build to create our own WCF transport extension and let our WCF services use pulling mode on top of a message bus. Finally, I provided some classes that will be used in future posts, working against an in-process memory message bus, for demonstration purposes only. In the next post I will begin to implement the transport extension step by step.

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • SQL SERVER – 2011 – Introduction to SEQUENCE – Simple Example of SEQUENCE

    - by pinaldave
    SQL Server 2011 will contain a very interesting feature called SEQUENCE. I have waited for this feature for a really long time, and I am glad it is finally here. SEQUENCE allows you to define a single repository where SQL Server maintains an in-memory counter. Let us create a sequence; we can specify various values like the start value and increment value, as well as the max value.

        USE AdventureWorks2008R2
        GO
        CREATE SEQUENCE [Seq]
        AS [int]
        START WITH 1
        INCREMENT BY 1
        MAXVALUE 20000
        GO

    SEQUENCE is a very interesting concept and I will write a few blog posts on this subject in the future. Today we will see only a working example. Once the sequence is defined, it can be fetched using the following method. Every single time, a new incremented value is provided, irrespective of sessions.

        -- First Run
        SELECT NEXT VALUE FOR Seq, c.CustomerID
        FROM Sales.Customer c
        GO

        -- Second Run
        SELECT NEXT VALUE FOR Seq, c.AccountNumber
        FROM Sales.Customer c
        GO

    The sequence will generate values up to the max value specified. Once the max value is reached, the query will stop and return an error message:

        Msg 11728, Level 16, State 1, Line 2
        The sequence object 'Seq' has reached its minimum or maximum value. Restart the sequence object to allow new values to be generated.

    We can restart the sequence from any particular value and it will work fine.

        -- Restart the Sequence
        ALTER SEQUENCE [Seq]
        RESTART WITH 1
        GO

        -- Sequence Restarted
        SELECT NEXT VALUE FOR Seq, c.CustomerID
        FROM Sales.Customer c
        GO

    Let us do the final clean-up.

        -- Clean Up
        DROP SEQUENCE [Seq]
        GO

    There are lots of things one can find useful about this feature. We will see those in future posts. Here is the complete code for easy reference.

        USE AdventureWorks2008R2
        GO
        CREATE SEQUENCE [Seq]
        AS [int]
        START WITH 1
        INCREMENT BY 1
        MAXVALUE 20000
        GO
        -- First Run
        SELECT NEXT VALUE FOR Seq, c.CustomerID
        FROM Sales.Customer c
        GO
        -- Second Run
        SELECT NEXT VALUE FOR Seq, c.AccountNumber
        FROM Sales.Customer c
        GO
        -- Restart the Sequence
        ALTER SEQUENCE [Seq]
        RESTART WITH 1
        GO
        -- Sequence Restarted
        SELECT NEXT VALUE FOR Seq, c.CustomerID
        FROM Sales.Customer c
        GO
        -- Clean Up
        DROP SEQUENCE [Seq]
        GO

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Add "My Dropbox" to Your Windows 7 Start Menu

    - by The Geek
    Over here at How-To Geek, we're huge fans of Dropbox, the amazingly fast online file sync utility, but we'd be even happier if we could natively add it to the Windows 7 Start Menu, where it belongs. And today, that's what we'll do. Yep, that's right: you can add it to the Start Menu... using a silly hack to the Libraries feature that renames the Recorded TV library to a different name. It's not a perfect solution, but you can access your Dropbox folder this way, and it just seems to belong there.

    First things first, head into the Customize Start Menu panel by right-clicking on the Start menu and choosing Properties, then make sure that Recorded TV is set to "Display as a link". Next, right-click on Recorded TV, choose Rename, and then change it to something else, like My Dropbox.

    Now you'll want to right-click on that button again and choose Properties, where you'll see the library locations in the list... the general idea is that you want to remove Recorded TV and then add your Dropbox folder. Oh, and you'll probably want to make sure to set "Optimize this library for" to "General Items".

    At this point, you can just click on My Dropbox, and you'll see, well, your Dropbox! (No surprise there.) Yeah, I know, it's totally a hack. But it's a very useful one! Also, if you aren't already using Dropbox, you should really check it out: 2 GB for free, accessible via the web from anywhere, and you can sync to multiple desktops.

    Read the article

  • Using Dependency Walker

    - by Valter Minute
    Dependency Walker is a very useful tool that can be used to find the dependencies of a Portable Executable module. The PE format is also used on Windows CE, which means that Dependency Walker can be used to analyze Windows CE/Windows Embedded Compact modules as well. On Win32 it can also be used to monitor the modules loaded by an application at runtime; this feature is not supported on CE. You can download Dependency Walker for free here: http://dependencywalker.com/.

    To analyze the dependencies of a Windows CE/Windows Embedded Compact 7 module you can just open it using Dependency Walker. If you want to check whether a specific module can run on a Windows CE/Windows Embedded Compact 7 OS image, you can copy the executable into the same directory that contains your OS binaries (FLATRELEASEDIR). In this way Dependency Walker will highlight missing DLLs or missing entry points inside existing DLLs.

    Let's do a quick sample. You need to check if myapp.exe (an application from a third party) can run on an image generated with your Test01 OS design. Copy myapp.exe to the flat release directory of your OS design. Launch depends.exe and use the File\Open option of its main menu to open the application executable file you just copied. You may receive an error if some of the modules required by your application are missing.

    Before you analyze the module dependencies, it is important to configure Dependency Walker to check DLLs in the same folder where your application file is stored. This is needed because some Windows CE DLLs have the same names as Win32 system DLLs but different entry points. To configure the DLL search path, select "Options\Configure Module Search Order..." from Dependency Walker's main menu. Select "The application directory" in the "Current Search Order" list and move it to the top of the list using the "Move Up" button. The system will ask to refresh the window contents to reflect your configuration change; click "Yes" to proceed.

    Now you can inspect myapp.exe's dependencies. Some DLLs are missing (XAMLRUNTIME.DLL and TILEENGINE.DLL), and OLE32.DLL exists but does not export the "CoInitialize" entry point that is required by myapp.exe. The bad news is that myapp.exe will not run on your OS image; the good news is that now you know what's missing, and you can add the required modules to your OS design and fix the problem!

    Read the article

  • Fair Comments

    - by Tony Davis
    To what extent is good code self-documenting? In one of the most entertaining sessions I saw at the recent PASS summit, Jeremiah Peschka (blog | twitter) got a laugh out of a sleepy post-lunch audience with the following remark: "Some developers say good code is self-documenting; I say, get off my team."

    I silently applauded the sentiment. It's not that all comments are useful, but that I mistrust the basic premise that "my code is so clearly written, it doesn't need any comments". I've read many pieces describing the road to self-documenting code, and my problem with most of them is that they feed the myth that comments in code are a sign of weakness. They aren't; in fact, used correctly I'd say they are essential. Regardless of how far intelligent naming can get you in describing what the code does, or how well any accompanying unit tests can explain to your fellow developers why it works that way, it's no excuse not to document fully the public interfaces to your code. Maybe I just mixed with the wrong crowd while learning my favorite language, but when I open a stored procedure I lose the will even to read it unless I see a big Phil Factor- or Jeff Moden-style header summarizing in plain English what the code does, how it fits into the broader application, and a usage example. This public interface describes the high-level process and should explain the role of the code, clearly, for fellow developers, language non-experts, and even any non-technical stakeholders in the project.

    When you step into the body of the code, the low-level details, then I agree that the rules are somewhat different, especially when code is subject to frequent refactoring that can quickly render comments redundant or misleading. At their worst, inline comments are sticking plaster to cover up the scars caused by poor naming conventions, failure in clarity when mapping a complex domain into code, or just by not entirely understanding the problem (// this is the clever part). If you design and refactor your code carefully so that it is as simple as possible, your functions do one thing only, you avoid having two completely different algorithms in the same piece of code, and your functions, classes and variables are intelligently named, then, yes, the need for inline comments should be minimal. And yet, even given this, I'd still argue that many languages (T-SQL certainly being one) just don't lend themselves to readability when performing even moderately complex tasks. If the algorithm is complex, I still like to see the occasional helpful comment.

    Please, therefore, be as liberal as you see fit in the detail of the comments you apply to this editorial, for like code it is bound to increase its clarity and usefulness.

    Cheers, Tony.

    Read the article

  • Defining Discovery: Core Concepts

    - by Joe Lamantia
    Discovery tools have had a referenceable working definition since at least 2001, when Ben Shneiderman published "Inventing Discovery Tools: Combining Information Visualization with Data Mining". Dr. Shneiderman suggested the combination of the two distinct fields of data mining and information visualization could manifest as a new category of tools for discovery, an understanding that remains essentially unaltered over ten years later. An industry analyst report titled "Visual Discovery Tools: Market Segmentation and Product Positioning" from March of this year, for example, reads, "Visual discovery tools are designed for visual data exploration, analysis and lightweight data mining."

    Tools should follow from the activities people undertake (a foundational tenet of activity-centered design), however, and Dr. Shneiderman does not in fact describe or define discovery activity or capability. As I read it, discovery is assumed to be the implied sum of the separate fields of visualization and data mining as they were then understood. As a working definition that catalyzes a field of product prototyping, it's adequate in the short term. In the long term, it makes the boundaries of discovery both derived and temporary, and it leaves a substantial gap in the landscape of core concepts around discovery, making consensus on the nature of most aspects of discovery difficult or impossible to reach. I think this definitional gap is a major reason that discovery is still an ambiguous product landscape.

    To help close that gap, I'm suggesting definitions of four core aspects of discovery. These come out of our sustained research into discovery needs and practices, and have the goal of clarifying the relationship between discovery and other analytical categories. They are suggestions, but should be internally coherent and consistent.

    Discovery activity is: "Purposeful sense-making activity that intends to arrive at new insights and understanding through exploration and analysis (and for these we have specific definitions as well) of all types and sources of data."

    Discovery capability is: "The ability of people and organizations to purposefully realize valuable insights that address the full spectrum of business questions and problems by engaging effectively with all types and sources of data."

    Discovery tools: "Enhance individual and organizational ability to realize novel insights by augmenting and accelerating human sense-making to allow engagement with all types of data at all useful scales."

    Discovery environments: "Enable organizations to undertake effective discovery efforts for all business purposes and perspectives, in an empirical and cooperative fashion."

    Note: applicability to a world of Big Data is assumed - thus the references to all scales, types and sources - rather than stated explicitly. I like that Big Data doesn't have to be written into this core set of definitions, because I think it's a transitional label - the new version of Web 2.0 - that goes away over time.

    References and Resources:
    Inventing Discovery Tools
    Visual Discovery Tools: Market Segmentation and Product Positioning
    Logic versus usage: the case for activity-centered design
    A Taxonomy of Enterprise Search and Discovery

    Read the article

  • Ubuntu 11.10 - can't adjust brightness on my laptop

    - by Danny
    I've tried every method possible and I'm still unable to change my laptop's brightness; it's stuck on maximum. Using the slider in the "Screen" window in the control panel doesn't do anything, and using the Fn keys doesn't do anything either.

    Some info about my system: the laptop is an MSI VR420 running Ubuntu 11.10; the video card is an integrated Intel card. Brightness control used to work when I ran Ubuntu 10.10 and earlier versions (not sure if it worked out of the box or if I inadvertently fixed it in previous versions while installing lots of other packages). The brightness slider in the "Screen" window doesn't do anything, and "dim when on battery power" doesn't do anything. When I use the Fn+F4/F5 keys to adjust brightness there is a popup showing that the input is received, and the reported value changes between 0 and maximum brightness, but the actual brightness does not change. When attempting to change the brightness with Fn+F4/F5, my dmesg log reports "ACPI: Failed to switch the brightness".

    Here are some outputs from some terminal commands, not sure if any of this is useful or not:
    lspci - http://pastebin.com/EimZSGs3
    "ls /sys/class/backlight/*/brightness" outputs "/sys/class/backlight/intel_backlight/brightness"
    "cat /sys/class/backlight/intel_backlight/brightness" = 0 (when I use Fn+F4/F5 this changes between 0 and 1, but the actual brightness doesn't change)
    "cat /sys/class/backlight/intel_backlight/max_brightness" = 1
    "lsmod | grep ^i915" = i915 505108 3

    Here is the list of things I've tried through searching Google, all with no luck:
    Edit /etc/default/grub GRUB_CMDLINE_LINUX_DEFAULT: acpi_osi=Linux, acpi_backlight=vendor, nomodeset (as well as several different combinations of these settings being on or off)
    Edit /etc/X11/xorg.conf (file doesn't exist on my system)
    Edit /proc/acpi/video/VGA/LCD/brightness (file doesn't exist)
    sudo setpci -s 00:02.0 F4.B=XX (does nothing)
    xbacklight -set XX (does nothing)

    The only thing I haven't tried is adding the PPA that someone suggested here: Unable to change brightness in a Lenovo laptop. However, according to the notes on the PPA, all of the changes in it are now part of 11.10 and the PPA is only for people with 11.04. Does anyone have any ideas for me?

    Edit: by setting acpi=off in my /etc/default/grub file I was able to get my Fn+F4/F5 keys to work, and "dim display to save power" now makes my laptop dim when on battery power. The brightness slider, however, still doesn't do anything, and xbacklight and the other methods still don't work either. The thing I don't get is why setting acpi=off makes my Fn+F4/F5 keys work. Isn't ACPI supposed to be what enables the backlight to be dimmed? Does anyone know what Ubuntu decides to do behind the scenes when acpi=off, and what, if any, features I might be losing with it off?

    Read the article

  • Deploying Socket.IO App to Windows Azure Web Site with Azure CLI

    - by shiju
    In this blog post, I will demonstrate how to deploy a Socket.IO app to a Windows Azure Web Site using the Windows Azure Cross-Platform Command-Line Interface, which leverages the Windows Azure Web Sites' new support for Web Sockets. Recently Windows Azure announced a lot of enhancements, including support for Web Sockets in Windows Azure Web Sites, which lets Node.js developers deploy Socket.IO apps to Windows Azure Web Sites. In this blog post, I am using the Windows Azure CLI to create and deploy a Windows Azure Web Site.

    Install Windows Azure CLI
    The Windows Azure CLI is available as an NPM module, so you can install it using NPM (npm install azure-cli -g). After installing azure-cli, just enter the command "azure", which will show the useful commands provided by the Azure CLI.

    Import Windows Azure Subscription Account
    In order to import our Azure subscription account, we need to download the Windows Azure subscription profile. The Azure CLI command "azure account download" lets you download the Windows Azure subscription profile: it redirects you to log in to the Windows Azure portal and allows you to download the Windows Azure publish settings file. The "azure account import" command lets you import the downloaded publish settings file so that you can create and manage Websites, Cloud Services, Virtual Machines and Mobile Services in Windows Azure.

    Create Windows Azure Website and Enable Web Sockets
    In this post, we are going to deploy a Socket.IO app to a Windows Azure Website using the Web Socket support provided by Windows Azure. Let's create a Website named "socketiochatapp" using the Azure CLI. The site create command will create a Windows Azure Website and also initialize a Git repository with a remote named azure. We can see the newly created Website in the Azure portal. By default, Web Sockets will be disabled, so let's enable them by navigating to the Configure tab of the Website, selecting "ON" for the Web Sockets option, and saving the configuration changes.

    Deploy a Node.js Socket.IO App to Windows Azure
    Now our Windows Azure Website supports Web Sockets, so we can easily deploy a Socket.IO app to it. Let's add a Node.js chat app which leverages the Socket.IO module. Please note that you have to add the npm module dependencies in the package.json file so that Windows Azure can install them when deploying the app. Let's add the Node.js app files to the Git repository, commit the changes to the local repository, and push them to the Windows Azure production environment. The successful deployment can be seen in the Windows Azure portal by navigating to the Deployments tab of the selected Windows Azure Website, where our chat app shows as running successfully.

    You can follow me on Twitter @shijucv

    Read the article

  • Who Are the BI Users in Your Neighborhood?

    - by Brian Dayton
    Forrester's Boris Evelson recently wrote a blog post titled "Who are the BI Personas?" that I enjoyed for a number of reasons. It's a quick read, easy to grasp, and (refreshingly) focuses on the users of technology vs. the technology itself. As Evelson admits, he meant to keep the reference chart at a high level because there are too many different permutations and additional sub-categories to make such a chart useful.

    For me, I wouldn't head into the technical permutations but rather the contextual use of BI and the issues that users experience. My thoughts brought up more questions than answers, such as:

    Context:
    - HOW: With the exception of the "Power User" persona - likely some sort of business or operations analyst?
    - WHEN: Are they using the information to make real-time decisions on the front lines (a customer service manager or shipping/logistics VP), or are they using this information for cumulative analysis and business planning? Or both?
    - WHERE: What areas of the business are more or less likely to rely on BI across an organization? Human Resources, Operations, Facilities, Finance - and why are some more prone to use data-driven analysis than others?

    Issues:
    - DELAYS & DRAG ON IT: One of the persona characteristics Evelson calls out is a reliance on IT. Every persona except the "Power User" has a heavy reliance on IT for support. What business issues or delays does that cause for users? What is the drag on IT resources, who could potentially be creating instead of reporting?
    - HOW MANY CLICKS: If BI is being used within the context of a transaction (a sales manager looking for upsell opportunities, for example), is that person getting the information within the context of that action or transaction? Or are they minimizing screens, logging into another application or reporting tool, running queries, etc.?

    Who are the BI users in your neighborhood or line of business? Do Evelson's personas resonate, and do the tools that he calls out (he refers to it as "BI Style") match what your personas have or need? Finally, I'm very interested in whether BI use is viewed as a bolt-on... or an integrated part of your daily enterprise processes.

    Read the article

  • Unity DontDestroyOnLoad causing scenes to stay open

    - by jkrebsbach
    Originally posted on: http://geekswithblogs.net/jkrebsbach/archive/2014/08/11/unity-dontdestroyonload-causing-scenes-to-stay-open.aspx

    My Unity project has a class (ClientSettings) where most of the game state & management properties are stored, along with some utility functions; the class derives from MonoBehaviour. Between every scene, however, this object was getting recreated and I was losing all sorts of useful data. I learned that with DontDestroyOnLoad I can persist this entity between scenes. Super.

    Persisting information between scenes
    The problem with adding DontDestroyOnLoad to my ClientSettings was that suddenly my previous scene would stay alive and continue to execute its update routines. An important part of the documentation helps shed light on my issue:

    "If the object is a component or game object then its entire transform hierarchy will not be destroyed either."

    My ClientSettings script was attached to the Main Camera of my first scene. Because of this, the Main Camera was part of the component's transform hierarchy, and therefore was also unable to be destroyed when switching scenes. Now the first scene's Main Camera Update routine continued to execute after the second scene was running - causing some very nasty bugs.

    Suddenly I wasn't sure how I should be creating a persistent entity - so I created a new sandbox project and tested different approaches until I found one that works.

    In the main scene: create an empty Game Object, "GameManager", and attach the ClientSettings script to this game object. Set any properties of the ClientSettings script as appropriate. Create a prefab using the GameManager, then remove the Game Object from the main scene.

    On the Main Camera, I created a script, MainScript. This is my primary script for the main scene.

    <code>
    public GameObject[] prefabs;
    private ClientSettings _clientSettings;

    // Use this for initialization
    void Start () {
        // Instantiate the GameManager prefab at runtime; once DontDestroyOnLoad
        // is applied it lives outside the scene's own hierarchy.
        GameObject res = (GameObject)Instantiate(prefabs[0]);
    }
    </code>

    Now go back to the scene view and add the new GameManager prefab to the prefabs collection of MainScript. When the main scene loads, the GameManager is set up, but it is not part of the main scene's hierarchy, so the two are no longer tied together.

    Now in our second scene we have a script - SecondScript - and we can get a reference to the ClientSettings we created in the previous scene like so:

    <code>
    private ClientSettings _clientSettings;

    // Use this for initialization
    void Start () {
        // Find the persistent ClientSettings instance that survived the scene change.
        _clientSettings = FindObjectOfType<ClientSettings> ();
    }
    </code>

    And the scenes can start and finish without creating strange long-running scene side effects.
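    For reference, the usual way to make such a persistent object immune to duplicates (for example, if the original scene is ever loaded again) is a small singleton guard in Awake. This is a generic sketch of the standard pattern, not code from the post:

    <code>
    using UnityEngine;

    public class ClientSettings : MonoBehaviour
    {
        private static ClientSettings _instance;

        void Awake()
        {
            // If a ClientSettings already survived from an earlier scene,
            // destroy this duplicate and keep the original.
            if (_instance != null && _instance != this)
            {
                Destroy(gameObject);
                return;
            }

            _instance = this;
            DontDestroyOnLoad(gameObject);
        }
    }
    </code>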

    Read the article

  • Sites To Download Free eBooks For Kindle

    - by Gopinath
Amazon Kindle is the top selling gadget of this holiday season and many of you would have received one as a gift. For those who got an Amazon Kindle, here are a few websites that offer free eBooks to satisfy your reading appetite at no cost.

1. Free Kindle Books – Amazon Website – This page on Amazon lists a nice collection of free books available for Kindle, including Serial by Jack Kilborn, The Wild's Call by Jeri Smith-Ready, Star Wars by John Jackson Miller and several other books from a list of 40.

2. Project Gutenberg: This site has 33,000+ free books that you can read not only on Kindle but also on iPad, PCs and smart phones. This site is very popular for free ebooks.

3. Google eBookstore: Google's eBookstore has thousands of free ebooks for Kindle in its free books section.

4. Internet Archive: Here you'll find millions of rare print works that are especially useful for academic research. Books in multiple languages are also available for Kindle.

5. Open Library: This site is a sort of Wikipedia for eBooks, with over 20 million user-contributed books and magazines. They are all Kindle friendly.

6. ManyBooks.net: Nearly 30,000 titles, many of which have been pulled from Project Gutenberg. It has a good collection of little-known Creative Commons works.

7. FreeBooks.com: The public domain section of this site contains many free ebooks that are perfect for your Kindle.

8. freecomputerbooks.com, freetechbooks.com and onlinecomputerbooks.com: If you are a geek looking for technology books, these are the sites to visit to grab free books.

Image credit: bike/flickr

This article titled, Sites To Download Free eBooks For Kindle, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • Exploding maps in Reporting Services 2008 R2

    - by Rob Farley
Kaboom! Well, that was the imagery that secretly appeared in my mind when I saw "USA By State Exploded" in the list of installed maps in Report Builder 3.0 – part of the spatial offering of SQL Server Reporting Services 2008 R2. Alas, it just means that the borders are bigger, as clicking on it showed me.

Unfortunately, I'm not interested in maps of the US. None of my clients are there (at least, not yet – feel free to get in touch if you want to change this 'feature' of my company). So instead, I've recently been getting hold of some data for Australian areas. I've just bought some postcode shapes for South Australia, and will use them in demos for conferences and for showing clients how this kind of report can really impact their reporting.

One of the companies I was talking to about getting shape files sent me a sample. So I chose the "ESRI shapefile" option you see above, and browsed to my file. It appeared in the window like this:

Australians will immediately recognise this as the area around Wollongong, just south of Sydney. Well, apart from me. I didn't. I had to put a Bing Maps layer behind it to work that out, but that's not for this post.

The thing that I discovered was that if I selected the Exploded USA option (but without clicking Next), and then chose my shape file, then my area around Wollongong would be exploded too! Huh! I think this is actually a bug, but a potentially useful one!

Some further investigation (involving creating two identical reports, one with this exploded view, one without) showed that the exploded view is done by reducing the ScaleFactor property of the PolygonLayer in the map control. The exploded version has it below 1. If you set it above 1, your shapes overlap. I discovered this by accident… I guess I hadn't looked through all the PolygonLayer options to work out what they all do.

And because this post is about Reporting, it can qualify for this month's T-SQL Tuesday, hosted by Aaron Nelson (@sqlvariant).

    Read the article

  • Visual Studio 10 crashed when trying to open one of my solutions

    - by Michael Freidgeim
Visual Studio 10 crashed when I tried to open one of my solutions. Closing Visual Studio and rebooting the machine didn't help. The error message that was logged (see below) didn't give any useful ideas. Finally, it was fixed after I deleted the MySolution.suo file, which was quite big, and also the Resharper folders.

Log Name:      Application
Source:        Application Error
Event ID:      1000
Task Category: (100)
Level:         Error
Keywords:      Classic
User:          N/A
Description:
Faulting application name: devenv.exe, version: 10.0.40219.1, time stamp: 0x4d5f2a73
Faulting module name: msenv.dll, version: 10.0.40219.1, time stamp: 0x4d5f2d48
Exception code: 0xc0000005
Fault offset: 0x00355770
Faulting process id: 0x1dc0
Faulting application start time: 0x01cd1836888599f4
Faulting application path: C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe
Faulting module path: c:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\msenv.dll
Report Id: 9924b2f9-844e-11e1-bc19-782bcba513ea

Event Xml:
<Event>
  <System>
    <Provider Name="Application Error" />
    <EventID Qualifiers="0">1000</EventID>
    <Level>2</Level>
    <Task>100</Task>
    <Keywords>0x80000000000000</Keywords>
    <TimeCreated SystemTime="2012-04-12T03:21:31.000000000Z" />
    <EventRecordID>401998</EventRecordID>
    <Channel>Application</Channel>
    <Security />
  </System>
  <EventData>
    <Data>devenv.exe</Data>
    <Data>10.0.40219.1</Data>
    <Data>4d5f2a73</Data>
    <Data>msenv.dll</Data>
    <Data>10.0.40219.1</Data>
    <Data>4d5f2d48</Data>
    <Data>c0000005</Data>
    <Data>00355770</Data>
    <Data>1dc0</Data>
    <Data>01cd1836888599f4</Data>
    <Data>C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe</Data>
    <Data>c:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\msenv.dll</Data>
    <Data>9924b2f9-844e-11e1-bc19-782bcba513ea</Data>
  </EventData>
</Event>

    Read the article

  • Build Mobile App for E-Business Suite Using SOA Suite and ADF Mobile

    - by Michelle Kimihira
With the upcoming release of Oracle ADF Mobile, I caught up with Srikant Subramaniam, Senior Principal Product Manager, Oracle Fusion Middleware, after OpenWorld to learn about the cool hands-on lab there. For those of you who missed it, you will want to keep reading...

Author: Srikant Subramaniam, Senior Principal Product Manager, Oracle Fusion Middleware

Oracle ADF Mobile enables rapid and declarative development of native on-device mobile applications. These native applications provide a richer experience for smart-device users running Apple iOS or other mobile platforms. Oracle ADF Mobile protects Oracle customers from technology shifts by adopting a metadata-based development framework that enables developers to develop one app (using Oracle JDeveloper) and deploy it to multiple device platforms (starting with iOS and Android).

Oracle ADF Mobile also enables IT organizations to leverage existing expertise in web-based and Java development by adopting a hybrid application architecture that brings together HTML5, Java, and a device-native container:

- HTML5 allows developers to deliver device-native user experiences while maintaining portability across different platforms
- Java allows developers to create modules to support business logic and data services
- The native container provides integration with device services such as camera, contacts, etc.

All these technologies are packaged into a development framework that supports declarative application development through Oracle JDeveloper. ADF Mobile also provides out-of-the-box integration with key Fusion Middleware components, such as SOA Suite and Business Process Management (BPM). Oracle Fusion Middleware provides the necessary infrastructure to extend business processes and services to the mobile device, enabling the mobile user to participate in human tasks without an additional "mobile middleware" layer. When coupled with Oracle SOA Suite, this combination can execute business transactions on Oracle E-Business Suite (or any Oracle Application).

Demo Use Case: Mobile E-Business Suite (iExpense) Approvals

Using an employee expense approval scenario, we illustrate how to use Oracle Fusion Middleware and Oracle ADF Mobile to build application extensions that integrate intelligently with Oracle Applications (for example, E-Business Suite). Building these extensions using Oracle Fusion Middleware and ADF makes modifications simple, quick to implement, and easy to maintain and upgrade. As described earlier, this approach also extends Fusion Middleware to mobile users without an additional "mobile middleware" layer.

The approver is presented with a list of expense reports that have been submitted for approval. These expense reports are retrieved from the backend E-Business Suite and displayed on the mobile device. Approval (or rejection) of the expense report kicks off the workflow in E-Business Suite and takes it to completion. The demo also shows how to integrate with native device services such as email, contacts and BI dashboards, as well as a prebuilt PDF viewer (this is especially useful in the expense approval scenario, as the approver often needs to access the submitted receipts).

Summary

Oracle recommends Fusion Middleware as the application integration platform to deliver critical enterprise data and processes to mobile applications. Pre-built connectors between Fusion Middleware and Applications greatly accelerate the integration process. Instead of building individual integration points between mobile applications and individual enterprise applications, Oracle Fusion Middleware enables IT organizations to leverage a common platform to support both desktop and mobile applications.

Additional Information
Product Information on Oracle.com: Oracle Fusion Middleware
Follow us on Twitter and Facebook
Subscribe to our regular Fusion Middleware Newsletter

    Read the article

  • Issues with timed-out downloads via Tomcat?

    - by Ira Baxter
We get, in our opinion, a lot of failed download attempts and want to understand why. We offer downloads via an email link (typical):

http://www.semanticdesigns.com/deliverEval/<productname>

This is processed by Tomcat on Linux via a jsp file, with the following code:

    response.addHeader( "Content-Disposition", "attachment; filename=" + fileTail );
    response.addHeader( "Content-Type", "application/x-msdos-program" );

    byte[] buf = new byte[8192];
    int read;
    try {
        java.io.FileInputStream input = new java.io.FileInputStream( filename );
        java.io.OutputStream o = response.getOutputStream();
        while( ( read = input.read( buf, 0, 8192 ) ) != -1 ) {
            o.write( buf, 0, read );
        }
        o.flush();
    } catch( Exception e ) {
        util.fatalError( request.getRequestURI(), "Error sending file '" + filename + "' to client", e );
        throw e;
    }

We get a lot of reported errors (about a 50% error rate):

URI: /deliverEval/download.jsp
Code Message: Error sending file '/home/sd/ShippingMasters/DMS/Domains/C/GCC3/Tools/TestCoverage/SD_C~GCC3_TestCoverage.1.6.12.exe' to client

Stack Trace:
null
    at org.apache.coyote.tomcat5.OutputBuffer.realWriteBytes(byte[], int, int) (Unknown Source)
    at org.apache.tomcat.util.buf.ByteChunk.append(byte[], int, int) (Unknown Source)
    at org.apache.coyote.tomcat5.OutputBuffer.writeBytes(byte[], int, int) (Unknown Source)
    at org.apache.coyote.tomcat5.OutputBuffer.write(byte[], int, int) (Unknown Source)
    at org.apache.coyote.tomcat5.CoyoteOutputStream.write(byte[], int, int) (Unknown Source)
    at org.apache.jsp.deliverEval.download_jsp._jspService(javax.servlet.http.HttpServletRequest, javax.servlet.http.HttpServletResponse) (Unknown Source)
    at org.apache.jasper.runtime.HttpJspBase.service(javax.servlet.http.HttpServletRequest, javax.servlet.http.HttpServletResponse) (Unknown Source)
    at javax.servlet.http.HttpServlet.service(javax.servlet.ServletRequest, javax.servlet.ServletResponse) (Unknown Source)
    at org.apache.jasper.servlet.JspServletWrapper.service(javax.servlet.http.HttpServletRequest, javax.servlet.http.HttpServletResponse, boolean) (Unknown Source)
    at org.apache.jasper.servlet.JspServlet.serviceJspFile(javax.servlet.http.HttpServletRequest, javax.servlet.http.HttpServletResponse, java.lang.String, java.lang.Throwable, boolean) (Unknown Source)
    at org.apache.jasper.servlet.JspServlet.service(javax.servlet.http.HttpServletRequest, javax.servlet.http.HttpServletResponse) (Unknown Source)
    at javax.servlet.http.HttpServlet.service(javax.servlet.ServletRequest, javax.servlet.ServletResponse) (Unknown Source)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(javax.servlet.ServletRequest, javax.servlet.ServletResponse) (Unknown Source)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(javax.servlet.ServletRequest, javax.servlet.ServletResponse) (Unknown Source)
    at org.apache.catalina.core.StandardWrapperValve.invoke(org.apache.catalina.Request, org.apache.catalina.Response, org.apache.catalina.ValveContext) (Unknown Source)

We don't understand why this rate should be so high. Is there any way to get more information about the cause of the error?

It is useful to know that these are pretty big documents, 3-50 megabytes. They reside on the Linux server, so reading them is just a local disk read and is unlikely to contribute to the problem. But sheer size might be an issue for the recipient's browser? Is this kind of error rate typical for downloads?
My personal experience downloading others' documents suggests no; our internal attempts show this to be very reliable, but we're operating on our internal network for such experiments, so we're missing the complexity of the intervening internet.
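For what it's worth, a stack trace that dies inside Tomcat's OutputBuffer during a write is typically what you see when the client (or an intermediate proxy) drops the connection mid-download, which is common for multi-megabyte files. Below is a minimal, hedged sketch of how the download loop could advertise a Content-Length (so clients can detect truncated transfers) and log client aborts separately from genuine server faults; the logClientAbort helper is a hypothetical placeholder, not part of the original code:

    java.io.File f = new java.io.File( filename );
    response.addHeader( "Content-Disposition", "attachment; filename=" + fileTail );
    response.setContentType( "application/x-msdos-program" );
    response.setContentLength( (int) f.length() );  // lets the client detect truncation

    byte[] buf = new byte[8192];
    int read;
    java.io.FileInputStream input = null;
    try {
        input = new java.io.FileInputStream( f );
        java.io.OutputStream o = response.getOutputStream();
        while( ( read = input.read( buf ) ) != -1 ) {
            o.write( buf, 0, read );
        }
        o.flush();
    } catch( java.io.IOException e ) {
        // A write failure here almost always means the client went away;
        // record it as an abort rather than a fatal server error.
        logClientAbort( request.getRequestURI(), filename, e );  // hypothetical helper
    } finally {
        if( input != null ) input.close();
    }

Separating aborts from real faults in the logs would at least tell you whether the 50% figure reflects impatient users and flaky connections or something on the server side.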

    Read the article

  • Windows Azure Use Case: High-Performance Computing (HPC)

    - by BuckWoody
This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

Description:

High-Performance Computing (also called Technical Computing) at its most simplistic is a layout of computer workloads where a "head node" accepts work requests and parses them out to "worker nodes". This is useful in cases such as scientific simulations, drug research, MatLab work and other large compute loads. It's not the immediate-result type of computing many are used to; instead, a "job" or group of work requests is sent to a cluster of computers, and the worker nodes work on individual parts of the calculations and return the work to the scheduler or head node for the requestor in a batch-request fashion. This is typical of the way that many mainframe computing use-cases work.

You can use commodity-based computers to create an HPC cluster, such as with the Linux application called Beowulf, and Microsoft has a server product for HPC using standard computers, called the Windows Compute Cluster, that you can read more about here.

The issue with HPC (from any vendor) that some organizations have is the number of compute nodes they need. Having too many results in excess infrastructure, including computers, buildings, storage, heat and so on. Having too few means that the work is slower and takes longer to return a result to the calling application. Unless there is a consistent level of work requested, predicting the number of nodes is problematic.

Implementation:

Recently, Microsoft announced an internal partnership between the HPC group (now called the Technical Computing Group) and Windows Azure. You now have two options for implementing an HPC environment using Windows. You can extend the current infrastructure you have for HPC by adding Compute Nodes in Windows Azure, using a "Broker Node". You can then purchase time for adding machines, and stop paying for them when the work is completed. This is a common pattern in groups that have a constant need for HPC but need to "burst" that load count under certain conditions. The second option is to install only a Head Node and a Broker Node onsite, and host all Compute Nodes in Windows Azure. This is often the pattern for organizations that need HPC on a scheduled and periodic basis, such as financial analysis or actuarial table calculations.

References:

Blog entry on Hybrid HPC with Windows Azure: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/12/13/high-performance-computing-on-premise-and-in-the-windows-azure-cloud.aspx

Links for further research on HPC, including Windows Azure information: http://blogs.msdn.com/b/ncdevguy/archive/2011/02/16/handy-links-for-hpc-and-azure.aspx
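The scatter/gather shape described above is easy to picture in code. The toy Java sketch below is not Windows HPC or Azure API code (the real products handle scheduling, brokering and node provisioning); it only illustrates, with an in-process thread pool standing in for worker nodes, how a head node parses a job into parts and gathers the results in batch fashion:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Toy head node: splits a job into parts, dispatches them to workers,
    // and gathers the partial results batch-style for the requestor.
    public class ToyHeadNode {
        public static void main(String[] args) throws Exception {
            ExecutorService workers = Executors.newFixedThreadPool(4); // "worker nodes"

            List<Future<Long>> partials = new ArrayList<Future<Long>>();
            for (int part = 0; part < 16; part++) {        // parse the job into work requests
                final int p = part;
                Callable<Long> workItem = new Callable<Long>() {
                    public Long call() { return simulate(p); }
                };
                partials.add(workers.submit(workItem));    // hand each part to a worker
            }

            long total = 0;
            for (Future<Long> f : partials) {              // batch gather: block until each
                total += f.get();                          // worker returns its piece
            }
            System.out.println("Job result: " + total);
            workers.shutdown();
        }

        // Stand-in for an expensive calculation on one slice of the job.
        static long simulate(int part) {
            long acc = 0;
            for (long i = 0; i < 1000000L; i++) acc += i % (part + 2);
            return acc;
        }
    }

The capacity question in the post then becomes concrete: the thread pool size is the knob that on-premise clusters fix in hardware, and that the Azure "burst" pattern lets you grow and shrink on demand.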

    Read the article

  • SQL SERVER – DMV sys.dm_os_sys_info Column Name Changed in SQL Server 2012

    - by pinaldave
Have you ever faced a situation where something does not work, and when you go to fix it you end up liking the fix and appreciating the breaking change? Well, that is exactly how I felt yesterday. Before I begin my story, I want to state candidly that I do not encourage anybody to use * in the SELECT statement. One of my DBA friends, who always uses my performance tuning scripts, sent me an email yesterday asking the following question:

"Every time I want to retrieve OS related information in SQL Server, I use the DMV sys.dm_os_sys_info. I just upgraded my SQL Server edition from 2008 R2 to SQL Server 2012 RC0 and it suddenly stopped working. Well, this is not the production server, so the issue is not big yet, but eventually I need to resolve this error. Any suggestions?"

The funny thing was that the original email was very long, but it did not say what the exact error was, besides that the query was not working. I think this is the disadvantage of being too friendly on email sometimes. Nevertheless, I quickly looked at the DMV on my SQL Server 2008 R2 and SQL Server 2012 RC0 instances. To my surprise, I found that a few columns were renamed in SQL Server 2012 RC0. Usually when people see breaking changes they do not like them, but when I saw these changes I was happy, as the new names are meaningful and the values, now reported in KB rather than bytes, are much more practical and useful. Here are the previous and new column names:

Previous Column Name        New Column Name
physical_memory_in_bytes    physical_memory_kb
bpool_commit_target         committed_target_kb
bpool_visible               visible_target_kb
virtual_memory_in_bytes     virtual_memory_kb
bpool_committed             committed_kb

If you read the table carefully, you will notice that the new columns report several values in KB, whereas the earlier results were in bytes. When I see results in bytes I always get confused, as I cannot easily guess what they convert to. I like to see results in KB, and I am glad that the new columns display them that way. I sent the details of the new columns to my friend and asked him to check the columns used in his application. From my comments he quickly realized why he was facing the error and fixed it immediately. Overall, all was well at the end and I learned something new.

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
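A quick way to sanity-check the rename is to select the new columns explicitly (the author's advice about avoiding * applies here). Below is a minimal JDBC sketch, assuming a reachable SQL Server 2012 instance and the Microsoft JDBC driver on the classpath; the connection string is a placeholder to adjust for your environment:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SysInfoCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder URL/credentials - adjust for your environment.
            String url = "jdbc:sqlserver://localhost;databaseName=master;integratedSecurity=true";

            // Explicit column list using the SQL Server 2012 names; on 2008 R2
            // this query fails, which is exactly the breaking change described above.
            String sql = "SELECT physical_memory_kb, virtual_memory_kb, committed_kb, "
                       + "committed_target_kb, visible_target_kb FROM sys.dm_os_sys_info";

            Connection con = DriverManager.getConnection(url);
            try {
                Statement st = con.createStatement();
                ResultSet rs = st.executeQuery(sql);
                while (rs.next()) {
                    System.out.println("physical_memory_kb  = " + rs.getLong(1));
                    System.out.println("virtual_memory_kb   = " + rs.getLong(2));
                    System.out.println("committed_kb        = " + rs.getLong(3));
                    System.out.println("committed_target_kb = " + rs.getLong(4));
                    System.out.println("visible_target_kb   = " + rs.getLong(5));
                }
            } finally {
                con.close();
            }
        }
    }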

    Read the article

  • BAM design pointers

    - by Kavitha Srinivasan
In working recently with a large Oracle customer on SOA and BAM, I discovered that some BAM best practices are not quite as well known as I had always assumed! There is a doc bug out to formally incorporate those learnings, but here are a few notes.

EMS-DO parity

When using an EMS (Enterprise Message Source) as a BAM feed, the best practice is to use one EMS to write to one Data Object. There is a possibility of collisions and duplicates when multiple EMS write to the same row of a DO at the same time. This customer had 17 EMS writing to one DO at the same time. Every sensor in their BPEL process wrote to one topic, but the topic was read by one EMS corresponding to each sensor. They then used XSL within BAM to transform the payload into the BAM DO format. Hence, for a given BPEL instance, 17 sensors fired, populated 1 JMS topic, which was consumed by 17 EMS, which in turn wrote to 1 Data Object. (You can imagine what would happen in later versions of the application that need to send more information to BAM!) We modified their design to use one master XSL, keyed on sensor name, for all sensors relating to a DO (say, Data Object 'Orders') and were thus able to reduce the 17 EMS to 1. For those of you wondering how squeaky clean this design is, you are right: it is indeed not squeaky clean, and that brings us to yet another 'inferred' best practice. (I try very hard not to state the obvious in my blogs, in the hope that every blog is very useful, but this one is an exception.)

Transformations and calculations

It is optimal to do transformations within an engine like BPEL. Not only does this provide modelling ease with a nice GUI XSL mapper in JDeveloper, the XSL engine in BPEL is quite efficient at runtime as well. So doing XSL transformations in BAM is not prudent. The same is true for any non-trivial calculations: it is best to do all transformations and calculations, and to sanitize the data, in a BPEL or similar layer, and then send the result to BAM (via JMS, WS etc.). This delegates to the Oracle BAM reporting tool only the function it is best suited for: report rendering and the mechanics of real-time reporting.

All nulls are not created equal

Here is yet another possibly known fact, reiterated here. For an EMS with an Upsert operation:

a) If empty tags, or tags with no value, are sent (like <Tag1/> or <Tag1></Tag1>), the DO field will be overwritten with null.
b) If empty tags are suppressed, i.e. not generated at all, the corresponding DO field will NOT be overwritten. The field keeps whatever value existed previously.

For an EMS with an Insert operation, both tags with an empty value and missing tags result in null being written to the DO.

Hope this helps. Happy 4th!
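The upsert behavior in (a) and (b) means the payload producer, not BAM, decides whether a field is cleared or preserved. As a generic illustration only (the Order and Discount element names are hypothetical, and this is plain Java rather than BPEL/XSL), here is the shape of a payload builder that suppresses empty tags so an upsert leaves the existing DO value untouched:

    // Builds an EMS upsert payload. Emitting an empty <Discount/> would null out
    // the Data Object field; omitting the tag entirely preserves the current value.
    public class EmsPayloads {
        static String buildOrderPayload(String orderId, String discount) {
            StringBuilder xml = new StringBuilder("<Order>");
            xml.append("<OrderId>").append(orderId).append("</OrderId>");
            if (discount != null && discount.length() > 0) {
                // Only generate the tag when there is a real value to write.
                xml.append("<Discount>").append(discount).append("</Discount>");
            }
            xml.append("</Order>");
            return xml.toString();
        }

        public static void main(String[] args) {
            System.out.println(buildOrderPayload("O-1001", null));  // no tag: field preserved
            System.out.println(buildOrderPayload("O-1002", "5%")); // tag emitted: field updated
        }
    }

In XSL the equivalent move is wrapping the element in a conditional so the tag is simply not generated when the source value is empty.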

    Read the article

  • ScrollyFox Provides Automated Page Scrolling in Firefox

    - by Asian Angel
Do you read a large amount of content on the web each day but get tired of manually scrolling through everything? Now you can set up relaxed-pace auto-scrolling in Firefox with the ScrollyFox extension. Note: You may occasionally encounter a website where the extension will not work. This may be due to the particular website's coding.

Using ScrollyFox

Once you have the extension installed, you may want to have a quick look at the preferences. The default scroll speed is set at "50" and the reverse-scrolling setting is enabled. You can easily adjust the speed setting to suit your needs. Note: for our examples we left the reverse-scrolling setting enabled. By default the extension is disabled at first, and the status bar button has a faded coloration. You can see what the button looks like once activated… notice the small arrow-type buttons on the right side. In our first example you can see the webpage auto-scrolling in a downward direction. Having reached the bottom, it automatically started scrolling back towards the top. Visiting the How-To Geek website, you can see that the extension was already working as the page finished loading. Going up!

Conclusion

While this extension may not be for everyone, it can be useful for those who have heavy reading and/or very long articles to read.

Links

Download the ScrollyFox extension (Mozilla Add-ons)

    Read the article

< Previous Page | 191 192 193 194 195 196 197 198 199 200 201 202  | Next Page >