Search Results

Search found 2708 results on 109 pages for 'michael williamson'.


  • Access Insurance Company Wins 2010 Technology Innovation Award at IASA

    - by [email protected]
    Helen Pitts, senior product marketing manager for Oracle Insurance, is blogging from the 2010 IASA Annual Conference and Business Show this week. For the second time in two weeks an Oracle Insurance customer has earned recognition at an insurance industry event for its innovative use of technology to transform its business. Access Insurance Company received the 2010 Technology Innovation Award during the 2010 IASA Annual Conference and Business Show this week in Grapevine, Texas. The company earned the recognition for its "Instant Access" application, which executes all the business rules and processes needed to provide a quote, bind, and issue a policy. CIO Andy Dunn and Tim Reynolds stopped by the Oracle Insurance booth at IASA to visit with the team, show their award, and share how the platform has provided a strategic advantage to the company and helped it increase revenue by penetrating new markets, increasing market share and improving customer retention. Since implementing Instant Access in 2009 - a platform that leverages both Oracle Insurance Insbridge Rating and Underwriting and Oracle Documaker - the carrier has:
    - Increased policies in force by 22%, from 140,185 to more than 270,000
    - Grown market share by 4.6%
    - Increased 2009 revenue by 26.5%
    - Increased the ratio of policyholders per CSR by 30%
    - Increased its appointed independent producers by 43%
    Now that's true innovation! You can learn more about the company's formula for success by reading Access Insurance Holdings CEO and president Michael McMenamin's interview with Insurance & Technology, Data Mastery Drives Access Insurance's 'Instant Access' Business Technology Platform. Congratulations to Michael, Andy, Tim and the entire team at Access Insurance on this well-deserved honor - and for your role as a technology leader for the industry. Helen Pitts is senior product marketing manager for Oracle Insurance.

    Read the article

  • AppFabric Caching Feedback

    - by Michael Stephenson
    We are running a survey to collect feedback around scenarios where people were experimenting with Velocity on Windows 2003 in the CTP but are now in a position where the beta requires Windows 2008. We would like to understand how important this scenario is perceived to be. If you are in the Connected Systems Community and would like to provide feedback, please complete the following survey: http://www.surveymonkey.com/s/N3VKZWN

    Read the article

  • Who IS Brian Solis?

    - by Michael Snow
    Q: Brian, Welcome to the WebCenter Blog. Can you tell our readers your current role and what career path brought you here?
    A: I’m proudly serving as a principal analyst at Altimeter Group, a research-based advisory firm in Silicon Valley. My career path, well, let’s just say it’s a long and winding road. As a kid, I was fascinated with technology. I learned programming at an early age and found myself naturally drawn to all things tech. I started my career as a database programmer at a technology marketing agency in Southern California. When I saw the chance to work with tech companies and help them better market their capabilities to businesses and consumers, I switched focus from programming to marketing and advertising. As a technologist, my approach to marketing was different. I didn’t believe in hype, fluff or buzz words. I believed in translating features into benefits and specifications and capabilities into solutions for real-world problems and opportunities. In the mid-90s I experimented with direct-to-consumer/customer engagement in dedicated technology forums and boards. I quickly realized that the entire approach to do so would need to change. Therefore, I learned and developed new methods for a more social and informed way of engaging people in ways that helped them, marketed the company, and also tied to tangible benefits for the company. This work would lead me to start an agency in 1999 dedicated to interactive marketing. As I continued to experiment with interactive platforms, I developed interesting methods for converting one-to-many forms of media into one-to-one-to-many programs. I ran that company until joining Altimeter Group. Along the way, in the early 2000s, I realized that everything was changing and that there were others like me finding success in what would become a more social form of media. I dedicated a significant amount of my time to sharing everything that I learned in the form of articles, blogs, and eventually books. My mission became to share my experience with anyone who’d listen. It would later become much bigger than marketing; this would lead to a decade of work, which still continues, in business transformation. Then and now, I find myself always assuming the role of a student.
    Q: As an industry analyst & technology change evangelist, what are you primarily focused on these days?
    A: As a digital analyst, I study how disruptive technology impacts business. As an aspiring social scientist, I study how technology affects human behavior. I explore both horizons professionally and personally to better understand the future of popular culture and also the opportunities that exist for organizations to improve relationships and experiences with customers and the people that are important to them.
    Q: People cite that the line between work and life is getting more and more blurred. Do you see your personal life influencing your professional work?
    A: The line between work and life isn’t blurred; it’s been overtly crossed and erased. We live in an always-on society. The digital lifestyle keeps us connected to one another; it keeps us connected all the time. Whether you’re sending or checking email, trying to catch up, or simply trying to get ahead, people are spending the equivalent of an extra day at work in the time they spend out of work…working. That’s absurd. It’s a matter of survival. It’s also a matter of unintended, subconscious self-causation. We brought this on ourselves and continue to do so. Think about your day.
    You’re in meetings for the better part of each day. You probably spend evenings and weekends catching up on email and actually doing the work you couldn’t get to during the day. And, your co-workers and executives are doing the same thing. So if you try to slow down, you find yourself at a disadvantage as you’re willfully pulling yourself out of an unfortunate culture of whenever-wherever business dynamics. If you’re unresponsive or unreachable, someone within your organization or on your team is accessible. Over time, this could contribute to unfavorable impressions. I choose to steer my life balance in ways that complement one another. But, I don’t pretend to have this figured out by any means. In fact, I find myself swimming upstream like those around me. It’s essentially a competition for relevance and at some point I’ll learn how to earn attention and relevance while redrawing the line between work and life.
    Q: How can people keep up with what you’re working on?
    A: The easy answer is that people can keep up with me at briansolis.com. But, I also try to reach people where their attention is focused. Whether it’s Facebook (facebook.com/briansolis), Twitter (@briansolis), Google+ (+briansolis), YouTube (briansolis.tv) or through books and conferences, people can usually find me in a place of their choosing.
    Q: Recently, you’ve been working with us here at Oracle on something exciting coming up later this week. What’s on the horizon?
    A: I spent some time with the Oracle team reviewing the idea of Digital Darwinism and how technology and society are evolving faster than many organizations can adapt.
    Digital Darwinism: How Brands Can Survive the Rapid Evolution of Society and Technology
    Thursday, December 13, 2012, 10 a.m. PT / 1 p.m. ET
    Q: You’ve been very actively pursued for media interviews and conference and company speaking engagements – anything you’d like to share to give us a sneak peek of what to expect on Thursday’s webcast?
    A: We’re inviting guests to join us online as we dive into the future of business and how the convergence of technology and connected consumerism will ultimately impact how business is done. It’ll be an exciting and revealing conversation that explores just how much everything is changing. We’ll also review the importance of adapting to emergent trends and how to compete for the future. It’s important to recognize that change is not happening to us; it’s happening because of us. We are part of the revolution and therefore we need to help organizations adapt from the inside out. Watch the Entire Oracle Social Business Thought Leaders Webcast Series On-Demand and Stay Tuned for More to Come in 2013!

    Read the article

  • Revisiting ANTS Performance Profiler 7.4

    - by James Michael Hare
    Last year, I did a small review on the ANTS Performance Profiler 6.3, now that it’s a year later and a major version number higher, I thought I’d revisit the review and revise my last post. This post will take the same examples as the original post and update them to show what’s new in version 7.4 of the profiler. Background A performance profiler’s main job is to keep track of how much time is typically spent in each unit of code. This helps when we have a program that is not running at the performance we expect, and we want to know where the program is experiencing issues. There are many profilers out there of varying capabilities. Red Gate’s typically seem to be the very easy to “jump in” and get started with very little training required. So let’s dig into the Performance Profiler. I’ve constructed a very crude program with some obvious inefficiencies. It’s a simple program that generates random order numbers (or really could be any unique identifier), adds it to a list, sorts the list, then finds the max and min number in the list. Ignore the fact it’s very contrived and obviously inefficient, we just want to use it as an example to show off the tool: 1: // our test program 2: public static class Program 3: { 4: // the number of iterations to perform 5: private static int _iterations = 1000000; 6: 7: // The main method that controls it all 8: public static void Main() 9: { 10: var list = new List<string>(); 11: 12: for (int i = 0; i < _iterations; i++) 13: { 14: var x = GetNextId(); 15: 16: AddToList(list, x); 17: 18: var highLow = GetHighLow(list); 19: 20: if ((i % 1000) == 0) 21: { 22: Console.WriteLine("{0} - High: {1}, Low: {2}", i, highLow.Item1, highLow.Item2); 23: Console.Out.Flush(); 24: } 25: } 26: } 27: 28: // gets the next order id to process (random for us) 29: public static string GetNextId() 30: { 31: var random = new Random(); 32: var num = random.Next(1000000, 9999999); 33: return num.ToString(); 34: } 35: 36: // add it to our list - very inefficiently! 37: public static void AddToList(List<string> list, string item) 38: { 39: list.Add(item); 40: list.Sort(); 41: } 42: 43: // get high and low of order id range - very inefficiently! 44: public static Tuple<int,int> GetHighLow(List<string> list) 45: { 46: return Tuple.Create(list.Max(s => Convert.ToInt32(s)), list.Min(s => Convert.ToInt32(s))); 47: } 48: } So let’s run it through the profiler and see what happens! Visual Studio Integration First, let’s look at how the ANTS profilers integrate with Visual Studio’s menu system. Once you install the ANTS profilers, you will get an ANTS menu item with several options: Notice that you can either Profile Performance or Launch ANTS Performance Profiler. These sound similar but achieve two slightly different actions: Profile Performance: this immediately launches the profiler with all defaults selected to profile the active project in Visual Studio. Launch ANTS Performance Profiler: this launches the profiler much the same way as starting it from the Start Menu. The profiler will pre-populate the application and path information, but allow you to change the settings before beginning the profile run. So really, the main difference is that Profile Performance immediately begins profiling with the default selections, where Launch ANTS Performance Profiler allows you to change the defaults and attach to an already-running application. Let’s Fire it Up! 
So when you fire up ANTS either via Start Menu or Launch ANTS Performance Profiler menu in Visual Studio, you are presented with a very simple dialog to get you started: Notice you can choose from many different options for application type. You can profile executables, services, web applications, or just attach to a running process. In fact, in version 7.4 we see two new options added: ASP.NET Web Application (IIS Express) SharePoint web application (IIS) So this gives us an additional way to profile ASP.NET applications and the ability to profile SharePoint applications as well. You can also choose your level of detail in the Profiling Mode drop down. If you choose Line-Level and method-level timings detail, you will get a lot more detail on the method durations, but this will also slow down profiling somewhat. If you really need the profiler to be as unintrusive as possible, you can change it to Sample method-level timings. This is performing very light profiling, where basically the profiler collects timings of a method by examining the call-stack at given intervals. Which method you choose depends a lot on how much detail you need to find the issue and how sensitive your program issues are to timing. So for our example, let’s just go with the line and method timing detail. So, we check that all the options are correct (if you launch from VS2010, the executable and path are filled in already), and fire it up by clicking the [Start Profiling] button. Profiling the Application Once you start profiling the application, you will see a real-time graph of CPU usage that will indicate how much your application is using the CPU(s) on your system. During this time, you can select segments of the graph and bookmark them, giving them mnemonic names. This can be useful if you want to compare performance in one part of the run to another part of the run. Notice that once you select a block, it will give you the call tree breakdown for that selection only, and the relative performance of those calls. Once you feel you have collected enough information, you can click [Stop Profiling] to stop the application run and information collection and begin a more thorough analysis. Analyzing Method Timings So now that we’ve halted the run, we can look around the GUI and see what we can see. By default, the times are shown in terms of percentage of time of the total run of the application, though you can change it in the View menu item to milliseconds, ticks, or seconds as well. This won’t affect the percentages of methods, it only affects what units the times are shown. Notice also that the major hotspot seems to be in a method without source, ANTS Profiler will filter these out by default, but you can right-click on the line and remove the filter to see more detail. This proves especially handy when a bottleneck is due to a method in the BCL. So now that we’ve removed the filter, we see a bit more detail: In addition, ANTS Performance Profiler gives you the ability to decompile the methods without source so that you can dive even deeper, though typically this isn’t necessary for our purposes. When looking at timings, there are generally two types of timings for each method call: Time: This is the time spent ONLY in this method, not including calls this method makes to other methods. Time With Children: This is the total of time spent in both this method AND including calls this method makes to other methods. 
In other words, the Time tells you how much work is being done exclusively in this method, and the Time With Children tells you how much work is being done inclusively in this method and everything it calls. You can also choose to display the methods in a tree or in a grid. The tree view is the default and it shows the method calls arranged in terms of the tree representing all method calls and the parent method that called them, etc. This is useful for when you find a hot-spot method, you can see who is calling it to determine if the problem is the method itself, or if it is being called too many times. The grid method represents each method only once with its totals and is useful for quickly seeing what method is the trouble spot. In addition, you can choose to display Methods with source which are generally the methods you wrote (as opposed to native or BCL code), or Any Method which shows not only your methods, but also native calls, JIT overhead, synchronization waits, etc. So these are just two ways of viewing the same data, and you’re free to choose the organization that best suits what information you are after. Analyzing Method Source If we look at the timings above, we see that our AddToList() method (and in particular, it’s call to the List<T>.Sort() method in the BCL) is the hot-spot in this analysis. If ANTS sees a method that is consuming the most time, it will flag it as a hot-spot to help call out potential areas of concern. This doesn’t mean the other statistics aren’t meaningful, but that the hot-spot is most likely going to be your biggest bang-for-the-buck to concentrate on. So let’s select the AddToList() method, and see what it shows in the source window below: Notice the source breakout in the bottom pane when you select a method (from either tree or grid view). This shows you the timings in this method per line of code. This gives you a major indicator of where the trouble-spot in this method is. So in this case, we see that performing a Sort() on the List<T> after every Add() is killing our performance! Of course, this was a very contrived, duh moment, but you’d be surprised how many performance issues become duh moments. Note that this one line is taking up 86% of the execution time of this application! If we eliminate this bottleneck, we should see drastic improvement in the performance. So to fix this, if we still wanted to maintain the List<T> we’d have many options, including: delay Sort() until after all Add() methods, using a SortedSet, SortedList, or SortedDictionary depending on which is most appropriate, or forgoing the sorting all together and using a Dictionary. Rinse, Repeat! So let’s just change all instances of List<string> to SortedSet<string> and run this again through the profiler: Now we see the AddToList() method is no longer our hot-spot, but now the Max() and Min() calls are! This is good because we’ve eliminated one hot-spot and now we can try to correct this one as well. As before, we can then optimize this part of the code (possibly by taking advantage of the fact the list is now sorted and returning the first and last elements). We can then rinse and repeat this process until we have eliminated as many bottlenecks as possible. Calls by Web Request Another feature that was added recently is the ability to view .NET methods grouped by the HTTP requests that caused them to run. This can be helpful in determining which pages, web services, etc. are causing hot spots in your web applications. 
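    To make the "rinse and repeat" step concrete, here is a small sketch of how the contrived example might look after the two fixes discussed above: swapping the List<string> for a SortedSet<string>, and reading the high/low values from the ends of the already-sorted collection instead of rescanning with Max()/Min(). This is only an illustration of the optimization being described, not code taken from the article.

    // Sketch of the optimized version discussed above (same contrived program assumed).
    using System;
    using System.Collections.Generic;

    public static class OptimizedProgram
    {
        // SortedSet keeps items ordered on insert, so there is no Sort() after every Add().
        public static void AddToList(SortedSet<string> set, string item)
        {
            set.Add(item);
        }

        // The set is already ordered, so high and low are simply the last and first elements,
        // avoiding the full Max()/Min() scans that became the second hot-spot.
        // (Safe here because every generated id has the same number of digits.)
        public static Tuple<int, int> GetHighLow(SortedSet<string> set)
        {
            return Tuple.Create(Convert.ToInt32(set.Max), Convert.ToInt32(set.Min));
        }
    }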
Summary If you like the other ANTS tools, you’ll like the ANTS Performance Profiler as well. It is extremely easy to use with very little product knowledge required to get up and running. There are profilers built into the higher product lines of Visual Studio, of course, which are also powerful and easy to use. But for quickly jumping in and finding hot spots rapidly, Red Gate’s Performance Profiler 7.4 is an excellent choice. Technorati Tags: Influencers,ANTS,Performance Profiler,Profiler

    Read the article

  • SEO question, getting info about websites

    - by michael
    I don't know much about SEO. I have a CSV file with 200,000 links to websites. I want to add another field (or maybe a couple of them) to each link in the CSV file with the page ranking of each link, and maybe other interesting metrics and info about the link. I saw a free API from http://apiwiki.seomoz.org/ that I can maybe use to build a simple script, but it's limited to 3 links per second, which will take roughly 1,100 minutes or about 18 hours to run. Any other ideas how to get these kinds of simple metrics for each link? Thanks!
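    Purely as an illustration of the rate-limit math mentioned in the question, here is a rough C# sketch of that kind of script: it walks the CSV, calls a metrics API at no more than 3 requests per second, and appends the result as an extra column. FetchMetrics and its endpoint are hypothetical placeholders rather than the actual SEOmoz API; you would substitute whichever service you end up using.

    // Hypothetical sketch: throttle CSV lookups to roughly 3 requests per second.
    // FetchMetrics and its endpoint are placeholders, not the actual SEOmoz API.
    using System;
    using System.IO;
    using System.Net;
    using System.Threading;

    class LinkMetricsEnricher
    {
        static void Main()
        {
            var client = new WebClient();
            var delayPerRequest = TimeSpan.FromMilliseconds(1000.0 / 3); // 3 per second

            using (var writer = new StreamWriter("links_with_metrics.csv"))
            {
                foreach (var line in File.ReadLines("links.csv"))
                {
                    string url = line.Split(',')[0];
                    string metrics = FetchMetrics(client, url); // one lookup per link

                    writer.WriteLine(line + "," + metrics);
                    Thread.Sleep(delayPerRequest);              // stay under the rate limit
                }
            }
        }

        // Placeholder for whichever metrics service is used; returns the raw response.
        static string FetchMetrics(WebClient client, string url)
        {
            return client.DownloadString(
                "https://example-metrics-api.test/lookup?url=" + Uri.EscapeDataString(url));
        }
    }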

    Read the article

  • Parsing Extended Events xml_deadlock_report

    - by Michael Zilberstein
    Jonathan Kehayias and Paul Randall posted more than a year ago great articles on how to monitor historical deadlocks using Extended Events system_health default trace. Both tried to fix on the fly the bug in xml output that caused failures in xml validation. Today I've found out that their version isn't bulletproof either. So here is the fixed one: SELECT CAST ( xest.target_data as XML ) xml_data , * INTO #ring_buffer_data FROM     sys.dm_xe_session_targets xest    INNER...(read more)

    Read the article

  • Cut Caseload Costs, Speed Service Delivery For Social Services

    - by michael.seback
    Lower Caseload Costs, Speedier Service Delivery with New Oracle Social Services Solution Oracle has just introduced a new solution for social services agencies that's designed to help case workers address the challenges of rising workloads and growing demands by citizens for additional services. In the past, IT departments developed custom software in an effort to meet program outcomes. "Because this capability is out of the box with the Oracle solution, there's less complexity for organizations and an overall lower total cost of ownership," says Kimberly Ellison-Taylor, Oracle's executive director of health and human services. "Self service brings costs down to just pennies per interaction and makes it possible for clients to receive government services more quickly," Ellison-Taylor says. read more

    Read the article

  • Using Web Services from an XNA 4.0 WP7 Game

    - by Michael Cummings
    Now that the Windows Phone 7 development tools have been out for a while, let's talk about how you can use them. Windows Phone 7 (WP7) has two application types that you can create, either Silverlight or XNA, and you can't really mix the two together. The development environment for WP7 is a special edition of Visual Studio 2010 called Visual Studio 2010 Express for Windows Phone. This edition will be installed with the WP7 tools, even if you have a full edition of VS2010 already installed. While you can use your full edition of VS2010 to do WP7 development, this astute developer has noticed that there are a few things that you can only do in the Express for Windows Phone edition. So let's start by discussing WP7 networking. On the WP7 platform the only networking available is through web services using WCF or, if you're really masochistic, you'll use the WebClient to do HTTP. In Silverlight, it's fairly easy to wire up a WCF proxy to call a web service and get some data. In the XNA projects, not so much.
    Create WCF Service
    First, we'll create our service that will return some information that we need in our game. Open Visual Studio 2010, and create a new WCF Web Service project. We'll use the default implementation as we only need to see how to use a service; we are not interested in creating a really cool service at this point. However, you may want to follow the instructions in the comments of Service1.svc.cs to change the name to something better; I used DataService, and IDataService for the interface. You should now be able to run the project and the WCF Test Client will load and properly enumerate your service. At this point we have a functional service that can be consumed by our XNA game.
    Consume the WCF Service
    Open Visual Studio 2010 Express for Windows Phone and create a new XNA Game Studio 4.0 Windows Phone Game project. Now if you try to add a service reference to the project, you'll notice that the option is not available. However, if you add a Silverlight application to your solution, you'll notice that you can create a service reference there. So using the Silverlight project, we can create the service reference. Unfortunately you can't reference the Silverlight project from the XNA Game project, so using Windows Explorer copy the Service References folder from the Silverlight project directory to the XNA Game project directory, then add the folder to your XNA Game project. You'll need to set the Build Action property to None for all the files, except for Reference.cs, which should be Compile. Truly, we only need Reference.cs, but I find it easier to copy the whole folder. If you try to compile at this point, you'll notice that we are missing a couple of references: System.Runtime.Serialization, System.Net and System.ServiceModel. Add these to the XNA Game project and you should build successfully. You'll also need to copy the ServiceReferences.ClientConfig file and add it to your project. The WCF infrastructure looks for this file and will complain if it can't find it. You'll need to set its Copy to Output Directory property to Copy if Newer. We now need to add the code to call the service and display the results on the screen. Go ahead and add a SpriteFont resource to the Content project and load it in the Game project. There's nothing here that's changed much from 3.1 other than your Content project is now under the Solution node and not the Project node. While you're at it, add a string field to store the result of the service call, and initialize it to string.Empty.
    Then in the Draw method, write the string out to the screen, but only if it does not equal string.Empty. Now to wrap this up, let's create a new field of the type DataServiceClient. In the Initialize method, create a new instance of this type using its default constructor; then in LoadContent we can call the service. Since we can only call the GetData method of our service asynchronously, we need to set up a Completed event handler first. Thankfully, Visual Studio helps out a lot here; just create, using the Tab key, whatever VS suggests. In the GetDataCompleted event handler, assign the service result (e.Result) to your string field. If you run your game, you should get something like this: Enjoy!
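    As a rough illustration of the wiring described in this walkthrough, here is a hedged sketch of what the relevant parts of the game class might look like. It assumes the generated proxy is called DataServiceClient with a GetData operation, as in the post; the exact generated member names (for example the completed event) may differ in your project, and the font asset name is made up.

    // Sketch only: assumes a generated proxy named DataServiceClient with a GetData operation.
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        SpriteFont font;

        DataServiceClient client;            // WCF proxy copied in via the Service References folder
        string serviceResult = string.Empty; // what we draw once the call completes

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
        }

        protected override void Initialize()
        {
            client = new DataServiceClient(); // default constructor, as described above
            base.Initialize();
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            font = Content.Load<SpriteFont>("DefaultFont"); // made-up asset name

            // The service can only be called asynchronously, so hook the completed event first.
            client.GetDataCompleted += (s, e) => serviceResult = e.Result;
            client.GetDataAsync(42);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            if (serviceResult != string.Empty)
            {
                spriteBatch.Begin();
                spriteBatch.DrawString(font, serviceResult, new Vector2(10, 10), Color.White);
                spriteBatch.End();
            }

            base.Draw(gameTime);
        }
    }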

    Read the article

  • Guest Post: Christian Finn: Is Facebook About to Become a Victim of its Own Success?

    - by Michael Snow
    Since we have a number of new members of the WebCenter Evangelist team - I thought it would be appropriate to close the week with the newest hire and leader of the global WebCenter Evangelists, Christian Finn, who has just joined the Red team after many years with the small technology company up in Redmond, WA. He gave an intro to himself in an earlier post this morning but his post below is a great example of how customer engagement takes on a life of its own in our global online connected and social digital ecosystem. Is Facebook About to Become a Victim of its Own Success? What if I told you that your brand could advertise so successfully, you wouldn’t have to pay for the ads? A recent campaign by Ford Motor Company for the Ford Focus featuring Doug the spokespuppet (I am not making this up) did just that—and it raises some interesting issues for marketers and social media alike in the brave new world of customer engagement that is the Social Web. Allow me to elaborate. An article in the Wall Street Journal last week—“Big Brands Like Facebook, But They Don’t Like to Pay” tells the story of Ford’s recently concluded online campaign for the 2012 Ford Focus. (Ford, by the way, under the leadership of people such as Scott Monty, has been a pioneer of effective social campaigns.) The centerpiece of the campaign was the aforementioned Doug, who appeared as a character on Facebook in videos and via chat. (If you are not familiar with Doug, you can see him in action here, and read the WSJ story here.) You may be thinking puppet ads are a sign of Internet Bubble 2.0 and want to stop now, but bear with me. The Journal reported that Ford spent about $95M on its overall Ford Focus campaign, with TV accounting for over $60M of that spend. The Internet buy for the campaign was just over $10M, which included ad buys to drive traffic to Facebook for people to meet and ‘Like’ Doug and some amount on Facebook ads, too, to promote Doug and by extension, the Ford Focus. So far, a fairly straightforward consumer marketing story in the Internet Era. Yet here’s the curious thing: once Doug reached 10,000 fans on Facebook, Ford stopped paying for Facebook ads. Doug had gone viral with people sharing his videos with one another; once critical mass was reached there was no need to buy more ads on Facebook. Doug went on to be Liked by over 43,000 people, and 61% of his fans said they would be more likely to consider buying a Focus. According to the article, Ford says Focus sales are up this year—and increasing sales is every marketer’s goal. And so in effect, Ford found its Facebook campaign so successful that it could stop paying for it, instead letting its target consumers communicate its messages for fun—and for free. Not only did they get a 3X increase in fans beyond their paid campaign, they had thousands of customers sharing their messages in video form for months.
    Since free advertising is the Holy Grail of marketing both old and new, and it appears social networks have an advantage in generating that buzz, it seems reasonable to ask: what would happen to brands’ advertising strategies, and the media they use to engage customers, if this success were repeated at scale? It seems logical to conclude that, at least initially, more ad dollars would be spent with social networks like Facebook as brands attempt to replicate Ford’s success. Certainly Facebook ad revenues are on the rise—eMarketer expects Facebook’s ad revenues to quintuple by 2012 compared with 2009 levels, to nearly $2.9B. That’s bad news for TV and the already battered print media and good news for Facebook. But perhaps not so over the longer run. With TV buys, you have to keep paying to generate impressions. If Doug the spokespuppet is any guide, however, that may not be true for social media campaigns. After an initial outlay, if a social campaign takes off, the audience will generate more impressions on its own. Thus a social medium like Facebook could be the victim of its own success when it comes to ad revenue. It may be that there is an inherent limiting factor in the ad spend they can capture, as exemplified by Ford’s experience with Doug and the Focus. And brands may spend much less overall on advertising, with as good or better results, than they ever have in the past. How will these trends evolve? Can brands create social campaigns that repeat Ford’s formula for the Focus with effective results? Can social networks find ways to capture more spend and overcome their potential tendency to make further spend unnecessary? And will consumers become tired of and insulated from social campaigns, much as they have with traditional advertising channels? These are the questions CMOs and Facebook execs alike will be asking themselves in the brave new world of customer engagement. As always, your thoughts and comments are most welcome.

    Read the article

  • The Evolution of an Era: Customer Experience in Retail

    - by Michael Hylton
    Two New Studies Point to the Direction Retailers are Taking in their CX Initiatives. Is it the Right Direction? The sheer velocity of change in retailing and customer behavior is forcing retailers to reinvigorate, expand and sharpen their vital Customer Experience (CX) strategies. Customers are becoming increasingly dynamic as they race to embrace the newest digital channels; shop in new ways on mobile devices, including smartphones and tablets, on the Web and in the store; share experiences socially; and interact with their preferred brands in new ways. Retailers are stepping up to their customers as they and their competitors create new modes of customer interaction. Underpinning these changes are vast quantities of customer data as customers flood digital channels and the social sphere. The informed retailer must now understand what their priorities are and what they should be for the future. To better understand this, Tata Consultancy Services (TCS) and Oracle independently launched CX-focused surveys to uncover what retailing leadership found important today. By comparing the results of these two studies together, we can further discover new insights about the industry. Click here to download this informative white paper.

    Read the article

  • Oracle Delivers Oracle Social Services Suite

    - by michael.seback
    Oracle Delivers Oracle Social Services Suite with New Releases of Siebel CRM Public Sector 8.2 and Oracle Policy Automation 10 Continuing its leadership and commitment to provide key innovations specifically created for social services agencies, Oracle today released the new Oracle Social Services Suite that includes updated versions of Oracle's Siebel CRM Public Sector 8.2 and Oracle Policy Automation 10. "Oracle's commitment to our social services customers is indisputable with the introduction of Oracle Social Services Suite and the latest innovations from Oracle's Siebel CRM Public Sector 8.2 and Oracle Policy Automation 10," said Anthony Lye, Senior Vice President of CRM, Oracle. "Social service agencies have not only many of the most complex jobs to perform with limited time and funding, but also some of the most important for our society, especially when children are involved. The technology advances Oracle provides will help these agencies increase their own efficiency and save costs, while helping to improve the outcome for their clients." read more

    Read the article

  • C#: Adding Functionality to 3rd Party Libraries With Extension Methods

    - by James Michael Hare
    Ever have one of those third party libraries that you love but it's missing that one feature or one piece of syntactical candy that would make it so much more useful?  This, I truly think, is one of the best uses of extension methods.  I began discussing extension methods in my last post (which you find here) where I expounded upon what I thought were some rules of thumb for using extension methods correctly.  As long as you keep in line with those (or similar) rules, they can often be useful for adding that little extra functionality or syntactical simplification for a library that you have little or no control over. Oh sure, you could take an open source project, download the source and add the methods you want, but then every time the library is updated you have to re-add your changes, which can be cumbersome and error prone.  And yes, you could possibly extend a class in a third party library and override features, but that's only if the class is not sealed, static, or constructed via factories. This is the perfect place to use an extension method!  And the best part is, you and your development team don't need to change anything!  Simply add the using for the namespace the extensions are in! So let's consider this example.  I love log4net!  Of all the logging libraries I've played with, it, to me, is one of the most flexible and configurable logging libraries and it performs great.  But this isn't about log4net, well, not directly.  So why would I want to add functionality?  Well, it's missing one thing I really want in the ILog interface: ability to specify logging level at runtime. For example, let's say I declare my ILog instance like so:     using log4net;     public class LoggingTest     {         private static readonly ILog _log = LogManager.GetLogger(typeof(LoggingTest));         ...     }     If you don't know log4net, the details aren't important, just to show that the field _log is the logger I have gotten from log4net. So now that I have that, I can log to it like so:     _log.Debug("This is the lowest level of logging and just for debugging output.");     _log.Info("This is an informational message.  Usual normal operation events.");     _log.Warn("This is a warning, something suspect but not necessarily wrong.");     _log.Error("This is an error, some sort of processing problem has happened.");     _log.Fatal("Fatals usually indicate the program is dying hideously."); And there's many flavors of each of these to log using string formatting, to log exceptions, etc.  But one thing there isn't: the ability to easily choose the logging level at runtime.  Notice, the logging levels above are chosen at compile time.  Of course, you could do some fun stuff with lambdas and wrap it, but that would obscure the simplicity of the interface.  And yes there is a Logger property you can dive down into where you can specify a Level, but the Level properties don't really match the ILog interface exactly and then you have to manually build a LogEvent and... well, it gets messy.  I want something simple and sexy so I can say:     _log.Log(someLevel, "This will be logged at whatever level I choose at runtime!");     Now, some purists out there might say you should always know what level you want to log at, and for the most part I agree with them.  For the most party the ILog interface satisfies 99% of my needs.  
In fact, for most application logging yes you do always know the level you will be logging at, but when writing a utility class, you may not always know what level your user wants. I'll tell you, one of my favorite things is to write reusable components.  If I had my druthers I'd write framework libraries and shared components all day!  And being able to easily log at a runtime-chosen level is a big need for me.  After all, if I want my code to really be re-usable, I shouldn't force a user to deal with the logging level I choose. One of my favorite uses for this is in Interceptors -- I'll describe Interceptors in my next post and some of my favorites -- for now just know that an Interceptor wraps a class and allows you to add functionality to an existing method without changing it's signature.  At the risk of over-simplifying, it's a very generic implementation of the Decorator design pattern. So, say for example that you were writing an Interceptor that would time method calls and emit a log message if the method call execution time took beyond a certain threshold of time.  For instance, maybe if your database calls take more than 5,000 ms, you want to log a warning.  Or if a web method call takes over 1,000 ms, you want to log an informational message.  This would be an excellent use of logging at a generic level. So here was my personal wish-list of requirements for my task: Be able to determine if a runtime-specified logging level is enabled. Be able to log generically at a runtime-specified logging level. Have the same look-and-feel of the existing Debug, Info, Warn, Error, and Fatal calls.    Having the ability to also determine if logging for a level is on at runtime is also important so you don't spend time building a potentially expensive logging message if that level is off.  Consider an Interceptor that may log parameters on entrance to the method.  If you choose to log those parameter at DEBUG level and if DEBUG is not on, you don't want to spend the time serializing those parameters. Now, mine may not be the most elegant solution, but it performs really well since the enum I provide all uses contiguous values -- while it's never guaranteed, contiguous switch values usually get compiled into a jump table in IL which is VERY performant - O(1) - but even if it doesn't, it's still so fast you'd never need to worry about it. So first, I need a way to let users pass in logging levels.  Sure, log4net has a Level class, but it's a class with static members and plus it provides way too many options compared to ILog interface itself -- and wouldn't perform as well in my level-check -- so I define an enum like below.     namespace Shared.Logging.Extensions     {         // enum to specify available logging levels.         public enum LoggingLevel         {             Debug,             Informational,             Warning,             Error,             Fatal         }     } Now, once I have this, writing the extension methods I need is trivial.  Once again, I would typically /// comment fully, but I'm eliminating for blogging brevity:     namespace Shared.Logging.Extensions     {         // the extension methods to add functionality to the ILog interface         public static class LogExtensions         {             // Determines if logging is enabled at a given level.             
public static bool IsLogEnabled(this ILog logger, LoggingLevel level)             {                 switch (level)                 {                     case LoggingLevel.Debug:                         return logger.IsDebugEnabled;                     case LoggingLevel.Informational:                         return logger.IsInfoEnabled;                     case LoggingLevel.Warning:                         return logger.IsWarnEnabled;                     case LoggingLevel.Error:                         return logger.IsErrorEnabled;                     case LoggingLevel.Fatal:                         return logger.IsFatalEnabled;                 }                                 return false;             }             // Logs a simple message - uses same signature except adds LoggingLevel             public static void Log(this ILog logger, LoggingLevel level, object message)             {                 switch (level)                 {                     case LoggingLevel.Debug:                         logger.Debug(message);                         break;                     case LoggingLevel.Informational:                         logger.Info(message);                         break;                     case LoggingLevel.Warning:                         logger.Warn(message);                         break;                     case LoggingLevel.Error:                         logger.Error(message);                         break;                     case LoggingLevel.Fatal:                         logger.Fatal(message);                         break;                 }             }             // Logs a message and exception to the log at specified level.             public static void Log(this ILog logger, LoggingLevel level, object message, Exception exception)             {                 switch (level)                 {                     case LoggingLevel.Debug:                         logger.Debug(message, exception);                         break;                     case LoggingLevel.Informational:                         logger.Info(message, exception);                         break;                     case LoggingLevel.Warning:                         logger.Warn(message, exception);                         break;                     case LoggingLevel.Error:                         logger.Error(message, exception);                         break;                     case LoggingLevel.Fatal:                         logger.Fatal(message, exception);                         break;                 }             }             // Logs a formatted message to the log at the specified level.              
public static void LogFormat(this ILog logger, LoggingLevel level, string format,                                          params object[] args)             {                 switch (level)                 {                     case LoggingLevel.Debug:                         logger.DebugFormat(format, args);                         break;                     case LoggingLevel.Informational:                         logger.InfoFormat(format, args);                         break;                     case LoggingLevel.Warning:                         logger.WarnFormat(format, args);                         break;                     case LoggingLevel.Error:                         logger.ErrorFormat(format, args);                         break;                     case LoggingLevel.Fatal:                         logger.FatalFormat(format, args);                         break;                 }             }         }     } So there it is!  I didn't have to modify the log4net source code, so if a new version comes out, i can just add the new assembly with no changes.  I didn't have to subclass and worry about developers not calling my sub-class instead of the original.  I simply provide the extension methods and it's as if the long lost extension methods were always a part of the ILog interface! Consider a very contrived example using the original interface:     // using the original ILog interface     public class DatabaseUtility     {         private static readonly ILog _log = LogManager.Create(typeof(DatabaseUtility));                 // some theoretical method to time         IDataReader Execute(string statement)         {             var timer = new System.Diagnostics.Stopwatch();                         // do DB magic                                    // this is hard-coded to warn, if want to change at runtime tough luck!             if (timer.ElapsedMilliseconds > 5000 && _log.IsWarnEnabled)             {                 _log.WarnFormat("Statement {0} took too long to execute.", statement);             }             ...         }     }     Now consider this alternate call where the logging level could be perhaps a property of the class          // using the original ILog interface     public class DatabaseUtility     {         private static readonly ILog _log = LogManager.Create(typeof(DatabaseUtility));                 // allow logging level to be specified by user of class instead         public LoggingLevel ThresholdLogLevel { get; set; }                 // some theoretical method to time         IDataReader Execute(string statement)         {             var timer = new System.Diagnostics.Stopwatch();                         // do DB magic                                    // this is hard-coded to warn, if want to change at runtime tough luck!             if (timer.ElapsedMilliseconds > 5000 && _log.IsLogEnabled(ThresholdLogLevel))             {                 _log.LogFormat(ThresholdLogLevel, "Statement {0} took too long to execute.",                     statement);             }             ...         }     } Next time, I'll show one of my favorite uses for these extension methods in an Interceptor.
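    In the meantime, as a short usage sketch of where this pays off, here is an example along the lines of the timing scenario described earlier: the caller decides both the threshold and the logging level, and the wrapper relies on the IsLogEnabled()/LogFormat() extensions above so it never hard-codes a level. The MethodTimer class and its member names are illustrative only, not from the original post.

    // Illustrative only: a small timing wrapper that logs at a caller-chosen level
    // using the IsLogEnabled/LogFormat extension methods defined above.
    using System;
    using System.Diagnostics;
    using log4net;
    using Shared.Logging.Extensions;

    public class MethodTimer
    {
        private static readonly ILog _log = LogManager.GetLogger(typeof(MethodTimer));

        // The user of the wrapper picks the threshold and the level, not the wrapper itself.
        public LoggingLevel ThresholdLogLevel { get; set; }
        public long ThresholdMilliseconds { get; set; }

        public T Time<T>(string name, Func<T> call)
        {
            var timer = Stopwatch.StartNew();
            try
            {
                return call();
            }
            finally
            {
                timer.Stop();

                // Cheap level check first so we don't build the message when the level is off.
                if (timer.ElapsedMilliseconds > ThresholdMilliseconds
                    && _log.IsLogEnabled(ThresholdLogLevel))
                {
                    _log.LogFormat(ThresholdLogLevel, "{0} took {1} ms to execute.",
                        name, timer.ElapsedMilliseconds);
                }
            }
        }
    }

    // Usage: var reader = timer.Time("GetOrders", () => ExecuteQuery("SELECT ..."));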

    Read the article

  • Silverlight Cream for May 19, 2010 -- #865

    - by Dave Campbell
    In this Issue: Michael Washington, Mike Snow(-2-), Justin Angel(-2-), Jeremy Likness, and David Kelley.
    Shoutout: Erik Mork and crew have their latest up: Silverlight Week – Silverlight Android?
    From SilverlightCream.com:
    - Simple Silverlight 4 Example Using oData and RX Extensions: Michael Washington has a follow-on tutorial up on ViewModel, Rx, and lashed up to OData... good detailed tutorial with external links for more information.
    - Silverlight Tip of the Day #21 – Animation Easing: Mike Snow has a couple of new tips up -- this first one is about easing... great diagrams to help visualize and a cool demo application to boot.
    - Silverlight Tip of the Day #22 – Data Validation: Mike Snow's second tip (#22) is about validation and again he has a great demo app on the post.
    - Windows Phone 7 - Emulator Automation: Justin Angel has a WP7 post up about automating the emulator... and in the process, loading the emulator from something other than VS2010... lots of good information.
    - TFS2010 WP7 Continuous Integration: Justin Angel hinted at continuous integration for WP7 in the last post, and he pays off with this one... even without all the bits installed on the build server.
    - Making the ScrollViewer Talk in Silverlight 4: Jeremy Likness tried to respond to a user query about knowing when a user scrolled to the bottom of a ScrollViewer... Jeremy resolved it by listening to the right property.
    - MEF (Managed Extensibility Framework) made simple (ish): David Kelley is discussing MEF and using a real-world example while doing so. Good discussion and code available in his code browser app... check the comments.
    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream | Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone    MIX10

    Read the article

  • C# tip: do not use “is” type, if you will need cast “as” later

    - by Michael Freidgeim
    We had a debate with one of my colleagues: is it good style to check whether an object is of a particular type, and then cast it as that type? The perfect answer from Jon Skeet and the answers in "Cast then check or check then cast?" confirmed my point.
    // good
    var coke = cola as CocaCola;
    if (coke != null)
    {
        // some unique coca-cola only code
    }
    // worse
    if (cola is CocaCola)
    {
        var coke = cola as CocaCola;
        // some unique coca-cola only code here.
    }
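    For readers who want to try it, here is a small self-contained version of the preferred pattern, plus the C# 7+ pattern-matching form that folds the check and the cast into a single step; Cola and CocaCola are stand-in types for the example.

    // Stand-in types for the example.
    public class Cola { }
    public class CocaCola : Cola { public void OpenHappiness() { } }

    public static class CastExamples
    {
        public static void Drink(Cola cola)
        {
            // Preferred: one cast with "as", then a null check.
            var coke = cola as CocaCola;
            if (coke != null)
            {
                coke.OpenHappiness();   // CocaCola-only code
            }

            // On C# 7 and later, pattern matching combines the test and the cast.
            if (cola is CocaCola coke2)
            {
                coke2.OpenHappiness();
            }
        }
    }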

    Read the article

  • NServiceBus Generic Host and mqsvc.exe high CPU

    - by Michael Stephenson
    We have been doing some work with NServiceBus recently and observed some unusual behaviour which was caused by our mistake and seemed worthy of a small post.
    The Scenario
    In our solution we were doing some standard NServiceBus stuff by pushing a message to a queue using NServiceBus. We had a direct send/receive scenario rather than a publish/subscribe one. The background process which was meant to collect the message and then process it was a normal NServiceBus message handler. We would run NServiceBus.Host.exe, which would find the handler and then do the usual NServiceBus magic.
    The Problem
    In this solution we were creating some automated tests around this module of the integration process to ensure that it would work well. We had two tests.
    Test 1: This test would start NServiceBus.Host.exe using the Process object, then seed a message to the queue via our web service façade sitting above the queue which wrapped NServiceBus. The background process would then process the message and the test would check the message had been processed fine. If all was well then the NServiceBus.Host.exe process was stopped.
    Test 2: In test 2 we would do a very similar thing, except that instead of starting the process, the test would install NServiceBus.Host.exe as a Windows service and start the service before the test; once the test was executed it would stop the service.
    The Results of the Tests
    Test 1 worked really well; in test 2, however, we found that it didn't really work at all. Instead of doing the background processing, we found that between mqsvc.exe and NServiceBus.Host.exe the CPU on the machine was maxed and nothing was really happening.
    The Solution
    After trying a few things we found that the permissions on the queue were not set correctly. Once this was resolved it all worked fine; CPU was not excessive and the service ran just like the console application.
    I think the couple of takeaways from this are:
    - Make sure you set the Windows service for the NServiceBus Generic Host to the right credentials. When you install the generic host as a Windows service then by default it will use the default Windows credentials. For any production-like scenario you should be using a domain account to run the process via the Windows service.
    - Make sure you have the queue set with the right permissions. For the credentials you have used to configure the generic host as a Windows service, you should ensure that this user has the appropriate permissions for any queues it will interact with.
    - Make sure you turn on the right logging configuration in NServiceBus. When this wasn't working correctly we didn't know there was an issue; we were just experiencing the high CPU condition. I am a little surprised that there wasn't something logged and that the process didn't crash. I guess this could be by design, bearing in mind that the process could be monitoring many queues. At this point I'm just saying that originally we didn't have all of the log4net logging which is available from NServiceBus turned on. It's probably a good idea to have this turned on and configured until you are happy your solution is working fine.
    Thanks to Ahmed Hashmi on my team who got this working in the end.
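    For reference, here is a hedged sketch of granting the host's service account rights on its input queue from code (the same thing can be done through the Computer Management > Message Queuing UI). The queue path and account name are placeholders for whatever your generic host is configured to use.

    // Sketch: grant the NServiceBus host's service account access to its input queue.
    // Queue path and account are placeholders - substitute your own.
    using System.Messaging;

    class QueuePermissionSetup
    {
        static void Main()
        {
            const string queuePath = @".\private$\myapp.inputqueue";
            const string serviceAccount = @"MYDOMAIN\svc-nservicebus";

            if (!MessageQueue.Exists(queuePath))
            {
                MessageQueue.Create(queuePath, true); // create as a transactional queue
            }

            using (var queue = new MessageQueue(queuePath))
            {
                // Without the right permissions, the host running under the domain account
                // cannot read its own queue, which matches the high-CPU symptom described above.
                queue.SetPermissions(serviceAccount, MessageQueueAccessRights.FullControl,
                    AccessControlEntryType.Allow);
            }
        }
    }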

    Read the article

  • Using NServiceBus behind a custom web service

    - by Michael Stephenson
    In this post I'd like to talk about an architecture scenario we had recently and how we were able to utilise NServiceBus to help us address this problem.
    Scenario
    Cognos is a reporting system used by one of my clients. A while back we developed a web service façade to allow line-of-business applications to access reports from Cognos to support their various functions. The service was intended to provide access to quick-running reports or pre-generated reports which could be accessed real-time on demand. One of the key aims of the web service was to provide a simple generic interface to allow applications to get any report without needing to worry about the complex .NET SDK for Cognos. The web service also supported multi-hop Kerberos delegation so that report data could be accessed under the context of the end user. This service was working well for a period of time.
    The Problem
    The problem we encountered was that reports were now also required to be available to batch processes. The original design was optimised for low latency so users would enjoy a positive experience; however, when the batch processes started to request 250+ concurrent reports over an extended period of time, you can begin to imagine the sorts of problems that come into play. The key problems this new scenario caused are:
    - Users may be affected, and the latency of on-demand reports was significantly slower
    - The Cognos infrastructure was not scaled sufficiently to be able to cope with these long peaks of load
    - From a cost perspective it just isn't feasible to scale the Cognos infrastructure to handle a load that only occurs for a couple-of-hours window each night
    We really needed to introduce a second pattern for accessing this service which would support high-throughput scenarios. We also had little control over the batch process in terms of being able to throttle its load. We could however make some changes to the way it accessed the reports.
    The Approach
    My idea was to introduce a throttling mechanism between the web service façade and Cognos. This would allow the batch processes to push report requests hard at the web service, which we were confident the web service could handle. The web service would then queue these requests, process them behind the scenes, and make a call back to the batch application to provide the report once it had been accessed. In terms of technology we had some limitations because we were not able to use WCF or IIS7, where the MSMQ-activated WCF services could have helped, but we did have MSMQ as an option and I thought NServiceBus could do just the job to help us here.
    The flow of how this would work was as follows:
    1. The batch applications would send a request for a report to the web service
    2. The web service uses NServiceBus to send the message to a queue
    3. The NServiceBus Generic Host is running as a Windows service with a message handler which subscribes to these messages
    4. The message handler gets the message and accesses the report from Cognos
    5. The message handler calls back to the original batch application; this is decoupled because the calling application provides a callback URL
    6. The report gets into the batch application and is processed as normal
    This approach looks something like the below diagram. The key points are that an application wanting to take advantage of the batch-driven reports needs to do the following:
    - Implement our callback contract
    - Make a call to the service providing a callback URL
    - Provide a correlation ID so it knows how to tie each response back to its request
    What does NServiceBus offer in this solution?
    So this scenario is not the typical messaging service bus type of solution people implement with NServiceBus, but it did offer the following:
    - Simplified interaction with MSMQ
    - The ability to configure the number of processes working through the queue, so we could find a balance between load on Cognos versus the application's end-to-end processing time
    - Retries and a way to manage failed messages
    - A high-availability setup
    The simple thing is that NServiceBus gave us the platform to build the solution on. We just implemented a message handler which functionally processed a message, and we could rely on NServiceBus to do all of the hard work around managing the queues and all of the lower-level things that would have taken ages to write to any kind of robust level.
    Conclusion
    With this approach we were able to deal with a fairly significant performance issue without too much rework. Hopefully this write-up gives people some insight into ideas on how to leverage the excellent NServiceBus framework to help solve integration and high-throughput scenarios.
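    To make the handler side of this concrete, here is a hedged sketch of roughly what the message handler could look like. ReportRequest, ReportGateway, the header name and the callback mechanics are illustrative names based on the description above, not the actual project code.

    // Illustrative sketch of the queue-side handler described above.
    // ReportRequest, ReportGateway and the callback details are made-up names.
    using System.Net;
    using NServiceBus;

    public class ReportRequest : IMessage
    {
        public string CorrelationId { get; set; }
        public string ReportName { get; set; }
        public string CallbackUrl { get; set; }
    }

    public class ReportRequestHandler : IHandleMessages<ReportRequest>
    {
        public void Handle(ReportRequest message)
        {
            // 1. Run the (potentially slow) report against Cognos.
            byte[] report = ReportGateway.GetReport(message.ReportName);

            // 2. Call back to the batch application that asked for it, tagging the
            //    response with its correlation ID so it can tie response to request.
            using (var client = new WebClient())
            {
                client.Headers["x-correlation-id"] = message.CorrelationId;
                client.UploadData(message.CallbackUrl, report);
            }
        }
    }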

    Read the article

  • C#/.NET Little Wonders: The Joy of Anonymous Types

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. In C# 3.0 (the .NET Framework 3.5), Microsoft introduced the concept of anonymous types, which provide a way to create quick, compiler-generated types at the point of instantiation.  These may seem trivial, but are very handy for concisely creating lightweight, strongly-typed objects containing only read-only properties that can be used within a given scope.
Creating an Anonymous Type
In short, an anonymous type is a reference type that derives directly from object and is defined by its set of properties based on their names, number, types, and order given at initialization.  In addition to just holding these properties, it is also given appropriate overridden implementations for Equals() and GetHashCode() that take into account all of the properties to correctly perform property comparisons and hashing.  Also overridden is an implementation of ToString() which makes it easy to display the contents of an anonymous type instance in a fairly concise manner. To construct an anonymous type instance, you use basically the same initialization syntax as with a regular type.  So, for example, if we wanted to create an anonymous type to represent a particular point, we could do this: 1: var point = new { X = 13, Y = 7 }; Note the similarity between anonymous type initialization and regular initialization.  The main difference is that the compiler generates the type name and the properties (as readonly) based on the names and order provided, inferring their types from the expressions they are assigned to. It is key to remember that all of those factors (number, names, types, order of properties) determine the anonymous type.  This is important, because while these two instances share the same anonymous type: 1: // same names, types, and order 2: var point1 = new { X = 13, Y = 7 }; 3: var point2 = new { X = 5, Y = 0 }; These similar ones do not: 1: var point3 = new { Y = 3, X = 5 }; // different order 2: var point4 = new { X = 3, Y = 5.0 }; // different type for Y 3: var point5 = new { MyX = 3, MyY = 5 }; // different names 4: var point6 = new { X = 1, Y = 2, Z = 3 }; // different count
Limitations on Property Initialization Expressions
The expression for a property in an anonymous type initialization cannot be the null literal (though it can evaluate to null) or an anonymous function.  For example, the following are illegal: 1: // Null can't be used directly. Null reference of what type? 2: var cantUseNull = new { Value = null }; 3:  4: // Anonymous methods cannot be used. 5: var cantUseAnonymousFxn = new { Value = () => Console.WriteLine("Can't.") }; Note that the restriction on null is just that you can't use it directly as the expression, because otherwise how would it be able to determine the type?  You can, however, use it indirectly by assigning a null expression such as a typed variable with the value null, or by casting null to a specific type: 1: string str = null; 2: var fineIndirectly = new { Value = str }; 3: var fineCast = new { Value = (string)null }; All of the examples above name the properties explicitly, but you can also implicitly name properties if they are being set from a property, field, or variable.  
In these cases, when a field, property, or variable is used alone, and you don’t specify a property name assigned to it, the new property will have the same name.  For example: 1: int variable = 42; 2:  3: // creates two properties named variable and Now 4: var implicitProperties = new { variable, DateTime.Now }; Is the same type as: 1: var explicitProperties = new { variable = variable, Now = DateTime.Now }; But this only works if you are using an existing field, variable, or property directly as the expression.  If you use a more complex expression then the name cannot be inferred: 1: // can't infer the name variable from variable * 2, must name explicitly 2: var wontWork = new { variable * 2, DateTime.Now }; In the example above, since we typed variable * 2, it is no longer just a variable and thus we would have to assign the property a name explicitly.
ToString() on Anonymous Types
One of the more trivial overrides that an anonymous type provides you is a ToString() method that prints the value of the anonymous type instance in much the same format as it was initialized (except with actual values instead of expressions, as appropriate of course). For example, if you had: 1: var point = new { X = 13, Y = 42 }; And then print it out: 1: Console.WriteLine(point.ToString()); You will get: 1: { X = 13, Y = 42 } While this isn’t necessarily the most stunning feature of anonymous types, it can be handy for debugging or logging values in a fairly easy to read format.
Comparing Anonymous Type Instances
Because anonymous types automatically create appropriate overrides of Equals() and GetHashCode() based on the underlying properties, we can reliably compare two instances or get hash codes.  For example, if we had the following 3 points: 1: var point1 = new { X = 1, Y = 2 }; 2: var point2 = new { X = 1, Y = 2 }; 3: var point3 = new { Y = 2, X = 1 }; If we compare point1 and point2 we’ll see that Equals() returns true because the overridden version of Equals() sees that the types are the same (same number, names, types, and order of properties) and that the values are the same.   In addition, because all equal objects should have the same hash code, we’ll see that the hash codes evaluate to the same as well: 1: // true, same type, same values 2: Console.WriteLine(point1.Equals(point2)); 3:  4: // true, equal anonymous type instances always have same hash code 5: Console.WriteLine(point1.GetHashCode() == point2.GetHashCode()); However, if we compare point2 and point3 we get false.  Even though the names, types, and values of the properties are the same, the order is not, thus they are two different types and cannot be compared (and thus return false).  And, since they are not equal objects (even though they have the same value) there is a good chance their hash codes are different as well (though not guaranteed): 1: // false, different types 2: Console.WriteLine(point2.Equals(point3)); 3:  4: // quite possibly false (was false on my machine) 5: Console.WriteLine(point2.GetHashCode() == point3.GetHashCode());
Using Anonymous Types
Now that we’ve created instances of anonymous types, let’s actually use them.  The property names (whether implicit or explicit) are used to access the individual properties of the anonymous type.  
The main thing, once again, to keep in mind is that the properties are readonly, so you cannot assign the properties a new value (note: this does not mean that instances referred to by a property are immutable – for more information check out C#/.NET Fundamentals: Returning Data Immutably in a Mutable World). Thus, if we have the following anonymous type instance: 1: var point = new { X = 13, Y = 42 }; We can get the properties as you’d expect: 1: Console.WriteLine("The point is: ({0},{1})", point.X, point.Y); But we cannot alter the property values: 1: // compiler error, properties are readonly 2: point.X = 99; Further, since the anonymous type name is only known by the compiler, there is no easy way to pass anonymous type instances outside of a given scope.  The only real choices are to pass them as object or dynamic.  But really that is not the intention of using anonymous types.  If you find yourself needing to pass an anonymous type outside of a given scope, you should really consider making a POCO (Plain Old CLR Object – i.e. a class that contains just properties to hold data with little/no business logic) instead. Given that, why use them at all?  Couldn’t you always just create a POCO to represent every anonymous type you needed?  Sure you could, but then you might litter your solution with many small POCO classes that have very localized uses. It turns out this is the key to when to use anonymous types to your advantage: when you just need a lightweight type in a local context to store intermediate results, consider an anonymous type – but when that result is more long-lived and used outside of the current scope, consider a POCO instead. So what do we mean by intermediate results in a local context?  Well, a classic example would be filtering down results from a LINQ expression.  For example, let’s say we had a List<Transaction>, where Transaction is defined something like: 1: public class Transaction 2: { 3: public string UserId { get; set; } 4: public DateTime At { get; set; } 5: public decimal Amount { get; set; } 6: // … 7: } And let’s say we had this data in our List<Transaction>: 1: var transactions = new List<Transaction> 2: { 3: new Transaction { UserId = "Jim", At = DateTime.Now, Amount = 2200.00m }, 4: new Transaction { UserId = "Jim", At = DateTime.Now, Amount = -1100.00m }, 5: new Transaction { UserId = "Jim", At = DateTime.Now.AddDays(-1), Amount = 900.00m }, 6: new Transaction { UserId = "John", At = DateTime.Now.AddDays(-2), Amount = 300.00m }, 7: new Transaction { UserId = "John", At = DateTime.Now, Amount = -10.00m }, 8: new Transaction { UserId = "Jane", At = DateTime.Now, Amount = 200.00m }, 9: new Transaction { UserId = "Jane", At = DateTime.Now, Amount = -50.00m }, 10: new Transaction { UserId = "Jaime", At = DateTime.Now.AddDays(-3), Amount = -100.00m }, 11: new Transaction { UserId = "Jaime", At = DateTime.Now.AddDays(-3), Amount = 300.00m }, 12: }; So let’s say we wanted to get the transactions for each day for each user.  That is, for each day we’d want to see the transactions each user performed.  We could do this very simply with a nice LINQ expression, without the need of creating any POCOs: 1: // group the transactions based on an anonymous type with properties UserId and Date: 2: var byUserAndDay = transactions 3: .GroupBy(tx => new { tx.UserId, tx.At.Date }) 4: .OrderBy(grp => grp.Key.Date) 5: .ThenBy(grp => grp.Key.UserId); Now, those of you who have attempted to use custom classes as a grouping type before (such as GroupBy(), Distinct(), etc.) 
may have discovered the hard way that LINQ gets a lot of its speed by utilizing not only Equals(), but also GetHashCode() on the type you are grouping by.  Thus, when you use custom types for these purposes, you generally end up having to write custom Equals() and GetHashCode() implementations or you won’t get the results you were expecting (the default implementations of Equals() and GetHashCode() are reference equality and reference identity based respectively). As we said before, it turns out that anonymous types already do these critical overrides for you.  This makes them even more convenient to use!  Instead of creating a small POCO to handle this grouping, and then having to implement a custom Equals() and GetHashCode() every time, we can just take advantage of the fact that anonymous types automatically override these methods with appropriate implementations that take into account the values of all of the properties. Now, we can look at our results: 1: foreach (var group in byUserAndDay) 2: { 3: // the group’s Key is an instance of our anonymous type 4: Console.WriteLine("{0} on {1:MM/dd/yyyy} did:", group.Key.UserId, group.Key.Date); 5:  6: // each grouping contains a sequence of the items. 7: foreach (var tx in group) 8: { 9: Console.WriteLine("\t{0}", tx.Amount); 10: } 11: } And see: 1: Jaime on 06/18/2012 did: 2: -100.00 3: 300.00 4:  5: John on 06/19/2012 did: 6: 300.00 7:  8: Jim on 06/20/2012 did: 9: 900.00 10:  11: Jane on 06/21/2012 did: 12: 200.00 13: -50.00 14:  15: Jim on 06/21/2012 did: 16: 2200.00 17: -1100.00 18:  19: John on 06/21/2012 did: 20: -10.00 Again, sure we could have just built a POCO to do this, given it an appropriate Equals() and GetHashCode() method, but that would have bloated our code with so many extra lines and been more difficult to maintain if the properties change.
Summary
Anonymous types are one of those Little Wonders of the .NET language that are perfect at exactly that time when you need a temporary type to hold a set of properties together for an intermediate result.  While they are not very useful beyond the scope in which they are defined, they are excellent in LINQ expressions as a way to create and use intermediate values for further expressions and analysis. Anonymous types are defined by the compiler based on the number, type, names, and order of properties created, and they automatically implement appropriate Equals() and GetHashCode() overrides (as well as ToString()) which makes them ideal for LINQ expressions where you need to create a set of properties to group, evaluate, etc.  When such a result needs to live beyond the local scope, promote it to a POCO instead; a small sketch of that promotion follows below. Technorati Tags: C#,CSharp,.NET,Little Wonders,Anonymous Types,LINQ
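As a quick, hedged illustration of that last point (this is not from the original post: the DailyActivity name and the summing logic are illustrative assumptions; the Transaction class is the one defined above), here is one way the anonymous grouping could be promoted to a POCO once the result needs to leave the local scope:

using System;
using System.Collections.Generic;
using System.Linq;

// A small POCO for when the grouped result must outlive the local scope
public class DailyActivity
{
    public string UserId { get; set; }
    public DateTime Date { get; set; }
    public decimal Total { get; set; }
}

public static class TransactionSummaries
{
    // Group with an anonymous type locally, then project into the POCO to return it
    public static List<DailyActivity> SummarizeByUserAndDay(IEnumerable<Transaction> transactions)
    {
        return transactions
            .GroupBy(tx => new { tx.UserId, tx.At.Date })
            .Select(grp => new DailyActivity
            {
                UserId = grp.Key.UserId,
                Date = grp.Key.Date,
                Total = grp.Sum(tx => tx.Amount)
            })
            .OrderBy(a => a.Date).ThenBy(a => a.UserId)
            .ToList();
    }
}

The anonymous type never escapes the method; only the POCO does, which keeps the compiler-generated type purely an intermediate detail.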

    Read the article

  • Ubuntu Software Center does not allow changes to software sources

    - by Michael Goldshteyn
    The checkboxes that appear to be changeable under the Edit / Software Sources dialog box cannot be changed. I click on them and they just turn gray and stay at their current setting.
Update: When I run software-center from a terminal window and try to change one of the checkbox settings, I get:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/softwareproperties/gtk/SoftwarePropertiesGtk.py", line 649, in on_isv_source_toggled
    self.backend.ToggleSourceUse(str(source_entry))
  File "/usr/lib/python2.7/dist-packages/dbus/proxies.py", line 143, in __call__
    **keywords)
  File "/usr/lib/python2.7/dist-packages/dbus/connection.py", line 630, in call_blocking
    message, timeout)
dbus.exceptions.DBusException: com.ubuntu.SoftwareProperties.PermissionDeniedByPolicy: com.ubuntu.softwareproperties.applychanges
These things happen instead of it properly prompting me for a password (for root privs).

    Read the article

  • C#/.NET Little Pitfalls: The Dangers of Casting Boxed Values

    - by James Michael Hare
    Starting a new series to parallel the Little Wonders series.  In this series, I will examine some of the small pitfalls that can occasionally trip up developers.
Introduction: Of Casts and Conversions
What happens when we try to assign between an int and a double and vice-versa? 1: double pi = 3.14; 2: int theAnswer = 42; 3:  4: // implicit widening conversion, compiles! 5: double doubleAnswer = theAnswer; 6:  7: // implicit narrowing conversion, compiler error! 8: int intPi = pi; As you can see from the comments above, a conversion from a value type where there is no potential data loss can be done with an implicit conversion.  However, when a conversion from one value type to another may result in a loss of data, you must make the conversion explicit so the compiler knows you accept this risk.  That is why the conversion from double to int will not compile with an implicit conversion; we can make the conversion explicit by adding a cast: 1: // explicit narrowing conversion using a cast, compiler 2: // succeeds, but results may have data loss: 3: int intPi = (int)pi; So for value types, the conversions (implicit and explicit) both convert the original value to a new value of the given type.  With widening and narrowing of references, however, this is not the case.  Converting reference types is a bit different from converting value types.  First of all, when you perform a widening or narrowing you don’t really convert the instance of the object, you just convert the reference itself to the wider or narrower reference type, but the original and new references both refer back to the same object. Secondly, widening and narrowing for reference types refers to going down and up the class hierarchy instead of referring to precision as in value types.  That is, a narrowing conversion for a reference type means you are going down the class hierarchy (for example from Shape to Square) whereas a widening conversion means you are going up the class hierarchy (from Square to Shape).  1: var square = new Square(); 2:  3: // implicitly converts because all squares are shapes 4: // (that is, all subclasses can be referenced by a superclass reference) 5: Shape myShape = square; 6:  7: // implicit conversion not possible, not all shapes are squares! 8: // (that is, not all superclasses can be referenced by a subclass reference) 9: Square mySquare = (Square) myShape; So we had to cast the Shape back to Square because at that point the compiler has no way of knowing until runtime whether the Shape in question is truly a Square.  But, because the compiler knows that it’s possible for a Shape to be a Square, it will compile.  However, if the object referenced by myShape is not truly a Square at runtime, you will get an invalid cast exception. Of course, there are other forms of conversions as well, such as user-specified conversions and helper class conversions, which are beyond the scope of this post.  The main thing we want to focus on is this seemingly innocuous casting method of widening and narrowing conversions that we come to depend on every day and, in some cases, can bite us if we don’t fully understand what is going on!
The Pitfall: Conversions on Boxed Value Types Can Fail
What if you saw the following code and – knowing nothing else – you were asked if it was legal or not, what would you think: 1: // assuming x is defined above this and this 2: // assignment is syntactically legal. 3: x = 3.14; 4:  5: // convert 3.14 to int. 
6: int truncated = (int)x; You may think that x is obviously a double (it can’t be a float) because 3.14 is a double literal, but that assumption is not safe.  Our x could also be dynamic and this would work as well, or there could be user-defined conversions in play.  But there is another, even simpler option that can often bite us: what if x is object? 1: object x; 2:  3: x = 3.14; 4:  5: int truncated = (int) x; On the surface, this seems fine.  We have a double and we place it into an object, which can be done implicitly through boxing (no cast) because all types inherit from object.  Then we cast it to int.  This theoretically should be possible because we know we can explicitly convert a double to an int through a conversion process which involves truncation. But here’s the pitfall: when casting an object to another type, we are casting a reference type, not a value type!  This means that it will attempt to see at runtime if the value boxed and referred to by x is of type int or derived from type int.  Since it obviously isn’t (it’s a double after all) we get an invalid cast exception! Now, you may say this looks awfully contrived, but in truth we can run into this a lot if we’re not careful.  Consider using an IDataReader to read from a database, and then attempting to select a result row of a particular column type: 1: using (var connection = new SqlConnection("some connection string")) 2: using (var command = new SqlCommand("select * from employee", connection)) 3: using (var reader = command.ExecuteReader()) 4: { 5: while (reader.Read()) 6: { 7: // if the salary is not an int32 in the SQL database, this is an error! 8: // doesn't matter if short, long, double, float, reader [] returns object! 9: total += (int) reader["annual_salary"]; 10: } 11: } Notice that since the reader indexer returns object, if we attempt to convert using a cast to a type, we have to make darn sure we use the true, actual type or this will fail!  If the SQL database column is a double, float, short, etc. this will fail at runtime with an invalid cast exception because it attempts to convert the object reference! So, how do you get around this?  There are two ways: you could first cast the object to its actual type (double), and then do a narrowing cast on the value to int.  Or you could use a helper class like Convert, which analyzes the actual run-time type and will perform a conversion as long as the type implements IConvertible. 1: object x; 2:  3: x = 3.14; 4:  5: // if you want to cast, you must cast out of object to double, then 6: // narrow with a cast. 7: int truncated = (int)(double) x; 8:  9: // or you can call a helper class like Convert which examines the runtime 10: // type of the value being converted 11: int anotherTruncated = Convert.ToInt32(x);
Summary
You should always be careful when performing a conversion cast from values boxed in object that you are actually casting to the true type (or a sub-type). Since casting from object down to a specific type is a narrowing of the reference, be careful that you either know the exact, explicit type you expect to be held in the object, or instead avoid the cast and use a helper class to perform a safe conversion to the type you desire. A short sketch applying this to the data reader example above follows below. Technorati Tags: C#,.NET,Pitfalls,Little Pitfalls,BlackRabbitCoder
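As a quick application of that advice to the data reader example above, here is a minimal sketch (the annual_salary column and the totaling logic are the same hypothetical details used in the post, not a real schema):

using System;
using System.Data;

public static class SalaryTotaler
{
    // Sums the annual_salary column without assuming its exact SQL type.
    // Convert examines the runtime type of the boxed value, so this works
    // whether the column comes back boxed as a short, int, long, float,
    // double, or decimal (anything implementing IConvertible).
    public static int TotalSalaries(IDataReader reader)
    {
        int total = 0;
        while (reader.Read())
        {
            total += Convert.ToInt32(reader["annual_salary"]);

            // The cast-based alternative only works if you know the true boxed type,
            // e.g. if the column is really a SQL float (a .NET double):
            // total += (int)(double)reader["annual_salary"];
        }
        return total;
    }
}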

    Read the article

  • Silverlight Cream for November 21, 2011 -- #1171

    - by Dave Campbell
    In this Issue: Colin Eberhardt, Sumit Dutta, Morten Nielsen, Jesse Liberty, Jeff Blankenburg(-2-), Brian Noyes, and Tony Champion. Above the Fold: Silverlight: "PV Basics : Client-side Collections" Tony Champion WP7: "Pushpin Clustering with the Windows Phone 7 Bing Map control" Colin Eberhardt Shoutouts: Michael Palermo's latest Desert Mountain Developers is up. Michael Washington's latest Visual Studio #LightSwitch Daily is up. From SilverlightCream.com: Pushpin Clustering with the Windows Phone 7 Bing Map control Colin Eberhardt is back discussing Pushpins for a BingMaps app on WP7 and provides a utility class that clusters pushpins, allowing you to render 1000s of pins on an app ... all the explanation and all the code Part 22 - Windows Phone 7 - Tile Push Notification Part 22 in Sumit Dutta's WP7 series is about Tile Push Notification... nice tutorial with all the code listed Correctly displaying your current location Morten Nielsen demonstrates formatting the information from the GPS on your WP7 into something intelligible and useful Spiking the Pomodoro Timer Jesse Liberty put up a quick and dirty version of a Pomodoro timer for WP7.1 to explore the technical challenges of the Full Stack Phase 2 he's cranking up 31 Days of Mango | Day #11: Network Jeff Blankenburg's latest in his 31 Days quest of WP7.1 is about the NetworkInformation namespace which gives you all sorts of info on the user's device network connection availability, type, etc. 31 Days of Mango | Day #11: Live Tiles Jeff Blankenburg takes off on Live Tiles for Day 11... big topic for a 1 day post, but he takes off on it... updating and Live Tiles too Working with Prism 4 Part 2: MVVM Basics and Commands Brian Noyes has part 2 of his Prism/MVVM series up at SilverlightShow... very nice tutorial on the basics of getting a view and viewmodel up, and setting up an ICommand to launch an Edit View... plus the code to peruse. PV Basics : Client-side Collections Tony Champion is starting a series on Silverlight 5 and Pivot Viewer... First up is some basics in dealing with the control in SL5 and talking about Client-side Collections... great informative tutorial and all the code Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • C#/.NET Fundamentals: Choosing the Right Collection Class

    - by James Michael Hare
    The .NET Base Class Library (BCL) has a wide array of collection classes at your disposal which make it easy to manage collections of objects. While it's great to have so many classes available, it can be daunting to choose the right collection to use for any given situation. As hard as it may be, choosing the right collection can be absolutely key to the performance and maintainability of your application! This post will look at breaking down any confusion between each collection and the situations in which they excel. We will be spending most of our time looking at the System.Collections.Generic namespace, which is the recommended set of collections.
The Generic Collections: System.Collections.Generic namespace
The generic collections were introduced in .NET 2.0 in the System.Collections.Generic namespace. This is the main body of collections you should tend to focus on first, as they will tend to suit 99% of your needs right up front. It is important to note that the generic collections are unsynchronized. This decision was made for performance reasons because, depending on how you are using the collections, it's completely possible that synchronization may not be required or may be needed on a higher level than simple method-level synchronization. Furthermore, concurrent read access (all writes done at beginning and never again) is always safe, but for concurrent mixed access you should either synchronize the collection or use one of the concurrent collections. So let's look at each of the collections in turn and its various pros and cons; at the end we'll summarize with a table to help make it easier to compare and contrast the different collections.
The Associative Collection Classes
Associative collections store a value in the collection by providing a key that is used to add/remove/lookup the item. Hence, the container associates the value with the key. These collections are most useful when you need to lookup/manipulate a collection using a key value. For example, if you wanted to look up an order in a collection of orders by an order id, you might have an associative collection where the key is the order id and the value is the order. The Dictionary<TKey,TValue> is probably the most used associative container class. The Dictionary<TKey,TValue> is the fastest class for associative lookups/inserts/deletes because it uses a hash table under the covers. Because the keys are hashed, the key type should correctly implement GetHashCode() and Equals() appropriately or you should provide an external IEqualityComparer to the dictionary on construction. The insert/delete/lookup time of items in the dictionary is amortized constant time - O(1) - which means no matter how big the dictionary gets, the time it takes to find something remains relatively constant. This is highly desirable for high-speed lookups. The only downside is that the dictionary, by nature of using a hash table, is unordered, so you cannot easily traverse the items in a Dictionary in order. The SortedDictionary<TKey,TValue> is similar to the Dictionary<TKey,TValue> in usage but very different in implementation. The SortedDictionary<TKey,TValue> uses a binary tree under the covers to maintain the items in order by the key. As a consequence of sorting, the type used for the key must correctly implement IComparable<TKey> so that the keys can be correctly sorted. 
The sorted dictionary trades a little bit of lookup time for the ability to maintain the items in order, thus insert/delete/lookup times in a sorted dictionary are logarithmic - O(log n). Generally speaking, with logarithmic time, you can double the size of the collection and it only has to perform one extra comparison to find the item. Use the SortedDictionary<TKey,TValue> when you want fast lookups but also want to be able to maintain the collection in order by the key. The SortedList<TKey,TValue> is the other ordered associative container class in the generic containers. Once again SortedList<TKey,TValue>, like SortedDictionary<TKey,TValue>, uses a key to sort key-value pairs. Unlike SortedDictionary, however, items in a SortedList are stored as an ordered array of items. This means that insertions and deletions are linear - O(n) - because deleting or adding an item may involve shifting all items up or down in the list. Lookup time, however, is O(log n) because the SortedList can use a binary search to find any item in the list by its key. So why would you ever want to do this? Well, the answer is that if you are going to load the SortedList up-front, the insertions will be slower, but because array indexing is faster than following object links, lookups are marginally faster than a SortedDictionary. Once again I'd use this in situations where you want fast lookups and want to maintain the collection in order by the key, and where insertions and deletions are rare.
The Non-Associative Containers
The other container classes are non-associative. They don't use keys to manipulate the collection but rely on the object itself being stored or some other means (such as index) to manipulate the collection. The List<T> is a basic contiguous storage container. Some people may call this a vector or dynamic array. Essentially it is an array of items that grows once its current capacity is exceeded. Because the items are stored contiguously as an array, you can access items in the List<T> by index very quickly. However, inserting and removing at the beginning or middle of the List<T> is very costly because you must shift all the items up or down as you delete or insert respectively. However, adding and removing at the end of a List<T> is an amortized constant operation - O(1). Typically List<T> is the standard go-to collection when you don't have any other constraints, and we typically favor a List<T> even over arrays unless we are sure the size will remain absolutely fixed. The LinkedList<T> is a basic implementation of a doubly-linked list. This means that you can add or remove items in the middle of a linked list very quickly (because there are no items to move up or down in contiguous memory), but you also lose the ability to index items by position quickly. Most of the time we tend to favor List<T> over LinkedList<T> unless you are doing a lot of adding and removing from the collection, in which case a LinkedList<T> may make more sense. The HashSet<T> is an unordered collection of unique items. This means that the collection cannot have duplicates and no order is maintained. Logically, this is very similar to having a Dictionary<TKey,TValue> where the TKey and TValue both refer to the same object. This collection is very useful for maintaining a collection of items you wish to check membership against. For example, if you receive an order for a given vendor code, you may want to check to make sure the vendor code belongs to the set of vendor codes you handle. 
In these cases a HashSet<T> is useful for super-quick lookups where order is not important. Once again, like in Dictionary, the type T should have a valid implementation of GetHashCode() and Equals(), or you should provide an appropriate IEqualityComparer<T> to the HashSet<T> on construction. The SortedSet<T> is to HashSet<T> what the SortedDictionary<TKey,TValue> is to Dictionary<TKey,TValue>. That is, the SortedSet<T> is a binary tree where the key and value are the same object. This once again means that adding/removing/lookups are logarithmic - O(log n) - but you gain the ability to iterate over the items in order. For this collection to be effective, type T must implement IComparable<T> or you need to supply an external IComparer<T>. Finally, the Stack<T> and Queue<T> are two very specific collections that allow you to handle a sequential collection of objects in very specific ways. The Stack<T> is a last-in-first-out (LIFO) container where items are added and removed from the top of the stack. Typically this is useful in situations where you want to stack actions and then be able to undo those actions in reverse order as needed. The Queue<T> on the other hand is a first-in-first-out container which adds items at the end of the queue and removes items from the front. This is useful for situations where you need to process items in the order in which they came, such as a print spooler or waiting lines. So that's the basic collections. Let's summarize what we've learned in a quick reference table.

Collection | Ordered? | Contiguous Storage? | Direct Access? | Lookup Efficiency | Manipulate Efficiency | Notes
Dictionary | No | Yes | Via Key | Key: O(1) | O(1) | Best for high performance lookups.
SortedDictionary | Yes | No | Via Key | Key: O(log n) | O(log n) | Compromise of Dictionary speed and ordering, uses binary search tree.
SortedList | Yes | Yes | Via Key | Key: O(log n) | O(n) | Very similar to SortedDictionary, except tree is implemented in an array, so has faster lookup on preloaded data, but slower loads.
List | No | Yes | Via Index | Index: O(1), Value: O(n) | O(n) | Best for smaller lists where direct access required and no ordering.
LinkedList | No | No | No | Value: O(n) | O(1) | Best for lists where inserting/deleting in middle is common and no direct access required.
HashSet | No | Yes | Via Key | Key: O(1) | O(1) | Unique unordered collection, like a Dictionary except key and value are same object.
SortedSet | Yes | No | Via Key | Key: O(log n) | O(log n) | Unique ordered collection, like SortedDictionary except key and value are same object.
Stack | No | Yes | Only Top | Top: O(1) | O(1)* | Essentially same as List<T> except only processed as LIFO.
Queue | No | Yes | Only Front | Front: O(1) | O(1) | Essentially same as List<T> except only processed as FIFO.

The Original Collections: System.Collections namespace
The original collection classes are largely considered deprecated by developers and by Microsoft itself. In fact they indicate that for the most part you should always favor the generic or concurrent collections, and only use the original collections when you are dealing with legacy .NET code. Because these collections are out of vogue, let's just briefly mention the original collections and their generic equivalents:
ArrayList - A dynamic, contiguous collection of objects. Favor the generic collection List<T> instead.
Hashtable - Associative, unordered collection of key-value pairs of objects. Favor the generic collection Dictionary<TKey,TValue> instead.
Queue - First-in-first-out (FIFO) collection of objects. Favor the generic collection Queue<T> instead.
SortedList - Associative, ordered collection of key-value pairs of objects. Favor the generic collection SortedList<TKey,TValue> instead.
Stack - Last-in-first-out (LIFO) collection of objects. Favor the generic collection Stack<T> instead.
In general, the older collections are non-type-safe and in some cases less performant than their generic counterparts. Once again, the only reason to fall back on these older collections is backward compatibility with legacy code and libraries.
The Concurrent Collections: System.Collections.Concurrent namespace
The concurrent collections are new as of .NET 4.0 and are included in the System.Collections.Concurrent namespace. These collections are optimized for use in situations where multi-threaded read and write access of a collection is desired. The concurrent queue, stack, and dictionary work much as you'd expect. The bag and blocking collection are more unique. Below is a summary of each with a link to a blog post I did on each of them.
ConcurrentQueue - Thread-safe version of a queue (FIFO). For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue
ConcurrentStack - Thread-safe version of a stack (LIFO). For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue
ConcurrentBag - Thread-safe unordered collection of objects. Optimized for situations where a thread may be both reader and writer. For more information see: C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection
ConcurrentDictionary - Thread-safe version of a dictionary. Optimized for multiple readers (allows multiple readers under same lock). For more information see C#/.NET Little Wonders: The ConcurrentDictionary
BlockingCollection - Wrapper collection that implements the producer/consumer paradigm. Readers can block until items are available to read. Writers can block until space is available to write (if bounded). For more information see C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection
Summary
The .NET BCL has lots of collections built in to help you store and manipulate collections of data. Understanding how these collections work and knowing in which situations each container is best is one of the key skills necessary to build more performant code. Choosing the wrong collection for the job can make your code much slower or even harder to maintain if you choose one that doesn't perform as well or otherwise doesn't exactly fit the situation. Remember to avoid the original collections and stick with the generic collections. If you need concurrent access, you can use the generic collections if the data is read-only, or consider the concurrent collections for mixed access if you are running on .NET 4.0 or higher. A small sketch contrasting a few of these choices follows below. Technorati Tags: C#,.NET,Collections,Generic,Concurrent,Dictionary,List,Stack,Queue,SortedList,SortedDictionary,HashSet,SortedSet
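To make a couple of those trade-offs concrete, here is a small, hedged sketch (the vendor-code scenario mirrors the example in the post; the specific values and names are illustrative only):

using System;
using System.Collections.Generic;

public static class CollectionChoiceDemo
{
    public static void Main()
    {
        // Dictionary<TKey,TValue>: O(1) lookups by key, but no ordering guarantees
        var ordersById = new Dictionary<int, string>
        {
            { 1001, "Widget order" },
            { 1002, "Gadget order" }
        };
        Console.WriteLine(ordersById[1002]);                     // fast lookup by order id

        // SortedDictionary<TKey,TValue>: O(log n) operations, iteration is in key order
        var totalsByDate = new SortedDictionary<DateTime, decimal>
        {
            { new DateTime(2012, 6, 21), 2200.00m },
            { new DateTime(2012, 6, 18), -100.00m }
        };
        foreach (var pair in totalsByDate)
            Console.WriteLine("{0:d}: {1}", pair.Key, pair.Value);   // earliest date first

        // HashSet<T>: unordered membership checks, like the vendor-code example above
        var knownVendorCodes = new HashSet<string> { "ACME", "GLOBEX", "INITECH" };
        Console.WriteLine(knownVendorCodes.Contains("ACME"));        // True, O(1) check
    }
}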

    Read the article

  • Do NOT Change "Copy Local" project references to false, unless you understand the consequences.

    - by Michael Freidgeim
    To optimize the performance of a Visual Studio build, I've found multiple recommendations to change the CopyLocal property for dependent dlls to false, e.g. from http://stackoverflow.com/questions/690033/best-practices-for-large-solutions-in-visual-studio-2008 "CopyLocal? For sure turn this off", from http://stackoverflow.com/questions/280751/what-is-the-best-practice-for-copy-local-and-with-project-references "Always set the Copy Local property to false and enforce this via a custom msbuild step", and from http://codebetter.com/patricksmacchia/2007/06/20/benefit-from-the-c-and-vb-net-compilers-perf/ "My advice is to always set 'Copy Local' to false".
Some time ago we tried to change the setting to false, and found that it caused problems for deployment of top-level projects. Recently I followed the suggestion and changed the settings for middle-level projects. It didn't cause immediate issues, but I was warned by Readify consultant Colin Savage about possible errors during deployments. I didn't undo the changes immediately and we found a few issues during testing. There are many scenarios when you need to leave 'Copy Local' set to True. The concerns are highlighted in some Stack Overflow answers, but they have a small number of votes.
Top-level projects: set Copy Local = true. First of all, it doesn't work correctly for top-level projects, i.e. executables or web sites. As pointed out in the answer http://stackoverflow.com/a/6529461/52277, for all the references in the one at the top set Copy Local = true. Alternatively you have to change the output directory, as described in http://www.simple-talk.com/dotnet/.net-framework/partitioning-your-code-base-through-.net-assemblies-and-visual-studio-projects/: if you set 'Copy Local = false', VS will, unless you tell it otherwise, place each assembly alone in its own .\bin\Debug directory. Because of this, you will need to configure VS to place assemblies together in the same directory. To do so, for each VS project, go to VS > Project Properties > Build tab > Output path, and set the Output path to ..\bin\Debug for the debug configuration, and ..\bin\Release for the release configuration.
Second-level dependencies: set Copy Local = true. Another example where CopyLocal=false fails at run time is when the top-level assembly doesn't directly reference one of its indirect dependencies. E.g. top-level assembly A has a reference to assembly B with CopyLocal=true, but assembly B has a reference to assembly C with CopyLocal=false. Most likely assembly C will be missing at runtime and will cause errors. E.g. http://stackoverflow.com/questions/602765/when-should-copy-local-be-set-to-true-and-when-should-it-not?lq=1: "Copy local is important for deployment scenarios and tools. As a general rule you should use CopyLocal=True." Unfortunately there are some quirks and CopyLocal won't necessarily work as expected for assembly references in secondary assemblies structured as shown below: MainApp.exe, MyLibrary.dll, ThirdPartyLibrary.dll (if it is in the GAC, CopyLocal won't copy it to the MainApp bin folder). This makes xcopy deployments difficult.
Reflection-loaded DLL dependencies: set Copy Local = true. E.g. a user can see the error "System.Reflection.ReflectionTypeLoadException: Unable to load one or more of the requested types. 
Retrieve the LoaderExceptions property for more information." The fix for the issue is recommended in http://stackoverflow.com/a/6200173/52277: "I solved this issue by setting the Copy Local attribute of my project's references to true." In general, the problems of investigating deployment issues may outweigh the benefits of reduced build time. Setting Copy Local to false without considering deployment issues is not a good idea.

    Read the article

  • I want to hit Apex SQL with a big stick

    - by Michael Stephenson
    <Whinge> Thought I'd just have a little whinge about this product which caused me a load of grief the other day..... So the background was that my development machine had a completely full hard disk which I needed to sort out.  Upon investigation I found the issue was that the msdb database had managed to get very large. This was caused because a long time ago (and I can't even remember why) I tried out Apex SQL.  After a few days I decided to uninstall it and thought nothing more of it.  What I didn't realise was that uninstalling it doesn't actually uninstall it (and it doesn't inform you about this); there were still some assemblies left on my machine.  Every time SQL Server was running it was starting the Apex SQL Connection monitor, which was then running in the background and regularly recording information in the msdb database.  Over time it had recorded enough to fill the disk. The below article advises how to sort this out by removing this fully, so if you're having a problem then try this out: http://knowledgebase.apexsql.com/2007/08/how-to-uninstall-apexsqlconnectionmonit_09.htm Once this was sorted out it's interesting to read the above article, because I just don't think the approach used by the vendor of this software is a very good one.  So for the Apex team I just wanted to pass on a thought: if I want to uninstall your product you should tell me if stuff is left on the machine, especially if a process will be running which is going to fill my machine with useless data. </Whinge>

    Read the article

  • Silverlight Cream for April 25, 2010 -- #847

    - by Dave Campbell
    In this Issue: Michael Washington, David Poll, Andrea Boschin, Kunal Chowdhury(-2-), Lee, and Chad Campbell. Shoutout: Not at all Silverlight, but Kirupa has a great article up about Elastic Collisions From SilverlightCream.com: Easily decouple your MVVM ViewModel from your Model using RX Extensions Michael Washington continues with his Simplified MVVM and uses Rx to allow him to put web service methods in the model and call them from the ViewModel. A “refreshing” Authentication/Authorization experience with Silverlight 4 David Poll expands on his previous post and demonstrates returning the user to the page prior to the login, just the way you'd like it. Using a PollingDuplex service to handle long running activities Andrea Boschin is back discussing PollingDuplex again, this time discussing getting updates from a long-running process. Introduction to Silverlight - Silverlight tutorials Chapter 1 Kunal Chowdhury has an intro to Silverlight tutorial that looks good... if you're just getting started, here's a place to look. Introduction to Silverlight Application Development - Silverlight tutorial Chapter 2 In his 2nd introductory tutorial, Kunal Chowdhury gets into code and discusses User Controls. Drag and Drop grouping in Datagrid Lee has a post up where he's taken a DataGrid and produced some of the features of 3rd-party controls, specifically dragging column headers to a place above the grid to sort the grid. Silverlight – HTTP request to [Url] was aborted. …local channel being closed while the request was still in progress Chad Campbell is addressing timeout problems you may hit when connecting to your WCF service. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • How to run 'apt-get install' to install all dependencies?

    - by michael
    I am running this on an Ubuntu Server installation: sudo apt-get install git-core gnupg flex bison gperf build-essential \ zip curl libc6-dev libncurses5-dev:i386 x11proto-core-dev \ libx11-dev:i386 libreadline6-dev:i386 libgl1-mesa-glx:i386 \ libgl1-mesa-dev g++-multilib mingw32 openjdk-6-jdk tofrodos \ python-markdown libxml2-utils xsltproc zlib1g-dev:i386 but I am getting this:
Reading package lists... Building dependency tree... Reading state information...
curl is already the newest version.
gnupg is already the newest version.
Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:
The following packages have unmet dependencies:
build-essential : Depends: gcc (>= 4:4.4.3) but it is not going to be installed
                  Depends: g++ (>= 4:4.4.3) but it is not going to be installed
g++-multilib : Depends: cpp (>= 4:4.7.2-1ubuntu2) but it is not going to be installed
               Depends: gcc-multilib (>= 4:4.7.2-1ubuntu2) but it is not going to be installed
               Depends: g++ (>= 4:4.7.2-1ubuntu2) but it is not going to be installed
               Depends: g++-4.7-multilib (>= 4.7.2-1~) but it is not going to be installed
How can I fix this?

    Read the article

< Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >