Search Results



  • Time management and self improvement

    - by Filip
    I hope I can open a discussion on this topic, as this is not a specific problem. It's a topic where I hope to get some ideas on how people in a similar situation to mine manage their time. OK, I've been the single developer on a software project for the last 6-8 months. The project I'm working on uses several technologies, mainly .NET stuff: WPF, WF, NHibernate, WCF, MySQL and other third-party SDKs relevant to the nature of the project. My experience and knowledge vary; for example, I have a lot of experience in WPF but much less in WCF. I work full time on the project and I'm curious how other programmers who need to multitask across many areas manage their time. I'm a very hands-on type of person and prefer to code instead of doing research. I feel that doing research "might" slow down the progress of the project, while I recognize that research and learning more in areas where I'm not so strong will ultimately make me more productive. How would you split up your daily time between productive coding and time to experiment, read blogs, go through tutorials, etc.? I would say that I'm coding about 90%+ of my day and devoting some, but very little, time to research and acquiring new knowledge. Thanks for your replies. I think I will adopt a gradual transition to Dominic's block parts. I kinda knew that coding was taking up way too much of my time, but it feels good having a first version of the project completed and ready. With a few months of focused hard work behind me I hope to get more time to experiment and expand my knowledge. Now I only hope my boss will cut me some slack and stop pressuring me for features...

    Read the article

  • Interacting with clients using project management systems

    - by Keyo
    I work in web development that involves a lot of smaller custom projects rather than one large product. Requirements and specifications are always coming from outside the company. We've set up a ticket-tracking system (Active Collab, which is rubbish compared to Redmine, btw) and given access to clients so they can submit issues, the idea being that less time is taken up with long phone conversations and emails. I think it can work really well if done right. However I'm not so sure it's always a good thing. Feature requests have gone up a lot on some projects. The system also needs to be friendly to non-developers while having the many features that developers use. Developers' tickets do not always map 1-to-1 with the tickets clients will create, so the requirements and broader tickets need to be separated from the more specific developer (specification) related tickets. Perhaps we could use two systems: one for clients to submit their requirements or describe a bug, and one for developers to create tickets like "implement method x in class y". Maybe this can be achieved by structuring tickets into more appropriate categories or creating sub-tickets under a feature request ticket. I've briefly looked into Pivotal Tracker and it has a fundamentally different workflow. I would like to know how others are communicating with clients and keeping the technical workflow separate from the non-technical workflow. What tools do you use and how do you use them?

    Read the article

  • .Net Rocks Visual Studio 2010 Road Trip coming to Raleigh, NC May 6th

    - by Jim Duffy
    Listen up .NET developers within 50 miles of Research Triangle Park, NC!  Take out that red, blue, green, black or any other color Sharpie marker you fancy and circle May 6th! Fellow Microsoft Regional Directors Carl Franklin and Richard Campbell are going to be bringing the .Net Rocks Visual Studio 2010 Road Trip to town. What’s that you say, you’ve never been to a .Net Rocks Road Trip event and don’t know what to expect? Let me help with that. I stol… uhhh… I mean I was “inspired” by some content I found on the event information page. “Carl and Richard are loading up the DotNetMobile (a 30 foot RV) and driving to your town again to show off their favorite bits of Visual Studio 2010 and .NET 4.0! Richard talks about Web load testing and Carl talks about Silverlight 4.0 and multimedia. And to make the night even more fun, we’re going to bring a mystery rock star from the Visual Studio world to the event and interview them for a special .NET Rocks Road Trip show series. Along the way we’ll be giving away some great prizes, showing off some awesome technology and having a ton of laughs. So come out to the most fun you can have in a geeky evening – and learn a few things along the way about web load testing and Silverlight 4!“   I know I’ll be there so what are you waiting for? Head over to the event registration page and sign up today! Have a day. :-|

    Read the article

  • A Closable jQuery Plug-in

    - by Rick Strahl
    In my client side development I deal a lot with content that pops over the main page, be it data entry ‘windows’, dialogs or simple pop up notes. In most cases this behavior goes with draggable windows, but sometimes it’s also useful to have closable behavior on static page content that the user can choose to hide or otherwise make invisible or fade out. Here’s a small jQuery plug-in that provides .closable() behavior to most elements by using either an image that you provide or – more appropriately – a CSS class to define the picture box layout.

        /*
         * Closable
         *
         * Makes selected DOM elements closable by making them
         * invisible when the close icon is clicked
         *
         * Version 1.01
         * @requires jQuery v1.3 or later
         *
         * Copyright (c) 2007-2010 Rick Strahl
         * http://www.west-wind.com/
         *
         * Licensed under the MIT license:
         * http://www.opensource.org/licenses/mit-license.php
         *
         * Support CSS:
         *   .closebox {
         *       position: absolute; right: 4px; top: 4px;
         *       background-image: url(images/close.gif); background-repeat: no-repeat;
         *       width: 14px; height: 14px;
         *       cursor: pointer;
         *       opacity: 0.60; filter: alpha(opacity="80");
         *   }
         *   .closebox:hover {
         *       opacity: 0.95; filter: alpha(opacity="100");
         *   }
         *
         * Options:
         *   handle        Element to place closebox into (like say a header). Use if main
         *                 element and closebox container are two different elements.
         *   closeHandler  Function called when the close box is clicked. Return true to
         *                 close the box, return false to keep it visible.
         *   cssClass      The CSS class to apply to the close box DIV or IMG tag.
         *   imageUrl      Allows you to specify an explicit IMG url that displays the
         *                 close icon. If used, bypasses CSS image styling.
         *   fadeOut       Optional fadeOut speed. Default: no fade out occurs.
         */
        (function ($) {
            $.fn.closable = function (options) {
                var opt = {
                    handle: null,
                    closeHandler: null,
                    cssClass: "closebox",
                    imageUrl: null,
                    fadeOut: null
                };
                $.extend(opt, options);

                return this.each(function (i) {
                    var el = $(this);

                    // closable elements need positioning so the close box can be absolutely placed
                    var pos = el.css("position");
                    if (!pos || pos == "static")
                        el.css("position", "relative");

                    // the close box goes into the handle element if one was provided
                    var h = opt.handle ? $(opt.handle).css({ position: "relative" }) : el;

                    // either an explicit image or a plain DIV styled via cssClass
                    var div = opt.imageUrl ?
                        $("<img>").attr("src", opt.imageUrl).css("cursor", "pointer") :
                        $("<div>");

                    div.addClass(opt.cssClass)
                       .click(function (e) {
                           if (opt.closeHandler)
                               if (!opt.closeHandler.call(this, e))
                                   return;
                           if (opt.fadeOut)
                               $(el).fadeOut(opt.fadeOut);
                           else
                               $(el).hide();
                       });

                    if (opt.imageUrl)
                        div.css("background-image", "none");

                    h.append(div);
                });
            };
        })(jQuery);

    The plugin can be applied against any selector that is a container (typically a div tag). The close image or close box is typically provided by way of a CSS class – .closebox by default – which supplies the image as part of the CSS styling. The default styling for the box looks something like this:

        .closebox {
            position: absolute;
            right: 4px;
            top: 4px;
            background-image: url(images/close.gif);
            background-repeat: no-repeat;
            width: 14px;
            height: 14px;
            cursor: pointer;
            opacity: 0.60;
            filter: alpha(opacity="80");
        }
        .closebox:hover {
            opacity: 0.95;
            filter: alpha(opacity="100");
        }

    Alternately you can also supply an image URL which overrides the background image in the style sheet.

    I use this plug-in mostly on pop up windows that can be closed, but it’s also quite handy for remove/delete behavior in list displays. You can find the sample here if you want to play along: http://www.west-wind.com/WestwindWebToolkit/Samples/Ajax/AmazonBooks/BooksAdmin.aspx

    For closable windows it’s nice to have something reusable because in my client framework there are lots of different kinds of windows that can be created: Draggables, Modal Dialogs, HoverPanels etc., and they all use the client .closable plug-in to provide the closable operation in the same way with a few options. Plug-ins are great for this sort of thing because they can also be aggregated, so different components can pick and choose the behavior they want. The window here is a draggable that’s closable and has shadow behavior, and the server control can simply generate the appropriate plug-ins to apply to the main <div> tag:

        $().ready(function() {
            $('#ctl00_MainContent_panEditBook')
                .closable({ handle: $('#divEditBook_Header') })
                .draggable({ dragDelay: 100, handle: '#divEditBook_Header' })
                .shadow({ opacity: 0.25, offset: 6 });
        });

    The window is using the default .closebox style and has its handle set to the header bar (Book Information). The window is just closable to go away so no event handler is applied. Actually I cheated – the actual page’s .closable is a bit more ugly in the sample as it uses an image from a resources file:

        .closable({
            imageUrl: '/WestWindWebToolkit/Samples/WebResource.axd?d=TooLongAndNastyToPrint',
            handle: $('#divEditBook_Header')
        })

    so you can see how to apply a custom image, which in this case is generated by the server control wrapping the client DragPanel.

    More interesting maybe is to apply the .closable behavior to list scenarios. For example, each of the individual items in the list display is also .closable using this plug-in. Rather than having to define each item with HTML for an image, event handler and link, when the client template is rendered the closable behavior is attached to the list. Here I’m using client-templating and the code that this is done with looks like this:

        function loadBooks() {
            showProgress();

            // Clear the content
            $("#divBookListWrapper").empty();

            var filter = $("#" + scriptVars.lstFiltersId).val();

            Proxy.GetBooks(filter, function(books) {
                $(books).each(function(i) {
                    updateBook(this);
                    showProgress(true);
                });
            }, onPageError);
        }

        function updateBook(book, highlight) {
            // try to retrieve the single item in the list by tag attribute id
            var item = $(".bookitem[tag=" + book.Pk + "]");

            // grab and evaluate the template
            var html = parseTemplate(template, book);

            var newItem = $(html)
                .attr("tag", book.Pk.toString())
                .click(function() {
                    var pk = $(this).attr("tag");
                    editBook(this, parseInt(pk));
                })
                .closable({
                    closeHandler: function(e) {
                        removeBook(this, e);
                    },
                    imageUrl: "../../images/remove.gif"
                });

            if (item.length > 0)
                item.after(newItem).remove();
            else
                newItem.appendTo($("#divBookListWrapper"));

            if (highlight) {
                newItem
                    .addClass("pulse")
                    .effect("bounce", { distance: 15, times: 3 }, 400);
                setTimeout(function() { newItem.removeClass("pulse"); }, 1200);
            }
        }

    Here the closable behavior is applied to each of the items along with an event handler, which is nice and easy compared to having to embed the right HTML and click handling into each item in the list individually via markup.

    Ideally though (and these posts make me realize this often a little late) I probably should set up a custom cssClass to handle the rendering – maybe a CSS class called .removebox that only changes the image from the default box image. This example also hooks up an event handler that is fired in response to the close. In the list I need to know when the remove button is clicked so I can fire off a service call to the server to actually remove the item from the database. The handler code can also optionally return false to indicate that the window should not be closed; returning true will close the window. You can find more information about the .closable class behavior and options here: .closable Documentation

    Plug-ins make Server Control JavaScript much easier

    I find this plug-in immensely useful, especially as part of server control code, because it simplifies the code that has to be generated server side tremendously. This is true of plug-ins in general, which make it so much easier to create simple server code that only generates plug-in options, rather than full blocks of JavaScript code. For example, here’s the relevant code from the DragPanel server control which generates the .closable() behavior:

        if (this.Closable && !string.IsNullOrEmpty(DragHandleID))
        {
            string imageUrl = this.CloseBoxImage;
            if (imageUrl == "WebResource")
                imageUrl = ScriptProxy.GetWebResourceUrl(this, this.GetType(), ControlResources.CLOSE_ICON_RESOURCE);

            StringBuilder closableOptions = new StringBuilder("imageUrl: '" + imageUrl + "'");

            if (!string.IsNullOrEmpty(this.DragHandleID))
                closableOptions.Append(",handle: $('#" + this.DragHandleID + "')");

            if (!string.IsNullOrEmpty(this.ClientDialogHandler))
                closableOptions.Append(",handler: " + this.ClientDialogHandler);

            if (this.FadeOnClose)
                closableOptions.Append(",fadeOut: 'slow'");

            startupScript.Append(@" .closable({ " + closableOptions + "})");
        }

    The same sort of block is then used for .draggable and .shadow, which simply set options. Compared to the code I used to have in pre-jQuery versions of my JavaScript toolkit this is a walk in the park. In those days there was a bunch of JS generation which was ugly to say the least. I know a lot of folks frown on using server controls, especially when the UI is client centric as this example is. However, I do feel that server controls can greatly simplify the process of getting the right behavior attached more easily and with the help of IntelliSense. Often the script markup is easier, especially if you are dealing with complex, multiple plug-in associations that often express more easily with property values on a control. Regardless of whether server controls are your thing or not, this plug-in can be useful in many scenarios. Even in simple client-only scenarios using a plug-in with a few simple parameters is nicer and more consistent than creating the HTML markup over and over again. I hope some of you find this even a small bit as useful as I have.

    Related Links:
    Download jquery.closable
    West Wind Web Toolkit
    jQuery Plug-ins

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in jQuery, ASP.NET, JavaScript

    Read the article

  • Leaving Microsoft

    - by Stephen Walther
    After two and a half years working with the ASP.NET team, I’ve decided that this is the right time to leave Microsoft and, with the help of some friends, re-launch my ASP.NET training and consulting company. The company has the modest name Superexpert. While working on my Ph.D. at MIT, I was surrounded by professors and students who were passionate about knowledge. During the Internet boom, I was lucky enough to work side-by-side with some very smart and hard-working people to create several successful startups. However, the people I worked with at Microsoft were among the smartest and hardest working. Microsoft hires a small number of people and gives them huge responsibilities. It continues to amaze me that so few people work on the ASP.NET team when you consider how much the team produces. I had the opportunity to work with a number of inspiring people at Microsoft. I’ll miss working with Scott Hunter, Dave Reed, Boris Moore, Eilon Lipton, Scott Guthrie, James Senior, Jim Wang, Phil Haack, Damian Edwards, Vishal Joshi, Mike Pope, Jon Young, Dmitry Robsman, Simon Calvert, Stefan Schackow, and many others. I’m proud of what we accomplished while I was working at Microsoft. We reached out to the jQuery team and changed direction from Microsoft Ajax to jQuery. We successfully contributed several important new features to the open-source jQuery project including jQuery Templates, jQuery Data-Linking, jQuery Globalization, and (as John Resig announced at the last jQuery conference) jQuery Require. I’m looking forward to returning to training and consulting. We want to focus on providing consulting on the “right way” of building ASP.NET websites, which we call Modern ASP.NET applications. By Modern ASP.NET applications, I mean applications built with ASP.NET MVC, jQuery, HTML5, and Visual Studio ALM. Additionally, we want to help companies that have existing ASP.NET Web Forms applications migrate to ASP.NET MVC. If you are interested in having us provide training for your company or you need help building a custom ASP.NET application then please contact us at [email protected] or visit our website at Superexpert.com.

    Read the article

  • Java crashes on lubuntu but not Ubuntu

    - by Echogene
    I have lubuntu and Ubuntu partitions on my drive. I've been having an interesting time with the new lubuntu partition. I've encountered strange things with the game Minecraft, Java and graphics drivers on the lubuntu partition. Firstly, I'll say that Minecraft runs fine at about 60fps on the Ubuntu partition with the latest drivers. (This is lower than it should be as it's a pretty decent graphics card [Radeon HD 5700].) When I first started lubuntu, I tried to see if I could get Minecraft running on Java. Java crashed when loading the main game graphics on both Sun and OpenJDK without proprietary drivers. Java also crashed on both Javas with proprietary drivers after the necessary restart. However, after disabling (with 'remove' button) the proprietary drivers with jockey-gtk in the session after the restart to install the drivers, Minecraft ran very well at ~120fps. This didn't continue after another restart, when it ran at 9fps. After failing thereafter on lubuntu to get it working at 15fps, I tried reinstalling lubuntu and installed the exact same driver (the latest one, not the one appearing on jockey) and Java versions as on Ubuntu. That is, now Ubuntu and lubuntu have the same graphics driver and Java version. Minecraft still crashes in the same way on lubuntu but works fine on Ubuntu. I would appreciate any explanation for any of these events. What differences between lubuntu and Ubuntu could cause this? Edit: After installing the 32bit driver version on lubuntu (seeing as lubuntu is 32bit), I have Java "working" for Minecraft. However, it is at <15fps again and it can't log in to servers as it takes too long.

    Read the article

  • problem installing mysql on ubuntu server 10.10 machine

    - by badperson
    Hi, I tried installing mysql a couple of times and I'm having problems. First of all, when I install it gives me a message that it's setting up and it just hangs. I can't Ctrl+C out of it, so I reboot the server and try to log into the db with:

        sudo mysql -u root -p

    I enter my password and then get:

        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    I restart the server:

        sudo /etc/init.d/mysql start
        Rather than invoking init scripts through /etc/init.d, use the service(8) utility, e.g. service mysql start
        Since the script you are attempting to invoke has been converted to an Upstart job, you may also use the start(8) utility, e.g. start mysql

    I try this:

        aptitude search mysql | grep ^i
        i A libdbd-mysql-perl       - Perl5 database interface to the MySQL data
        i   libmysql-java           - Java database (JDBC) driver for MySQL
        i A libmysqlclient16        - MySQL database client library
        i   mysql-client-5.1        - MySQL database client binaries
        i A mysql-client-core-5.1   - MySQL database core client binaries
        i   mysql-common            - MySQL database common files, e.g. /etc/mys
        i   mysql-embedded          - MySQL - embedded library
        i   mysql-server-core-5.1   - MySQL database server binaries

    When I navigate to the folder to see if the *.sock file ('/var/run/mysqld/mysqld.sock') exists, it does not. I also try this:

        service mysql status
        status: Unable to connect to system bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory

    Any ideas? On my other machines installing mysql has been a snap, not sure what the problem is here. bp

    Read the article

  • ODI 11g – How to override SQL at runtime?

    - by David Allan
    Following on from the posting some time back entitled ‘ODI 11g – Simple, Powerful, Flexible’, here we push the envelope even further. Rather than just having the SQL we override defined statically in the interface design, we will have it configurable via a variable… at runtime. Imagine you have a well defined interface shape that you want to be fulfilled, and that shape can be satisfied from a number of different sources – that is what this allows: the ability for one interface to consume data from many different places using variables. The cool thing about ODI’s reference API and this approach is that it can be fantastically flexible and useful.

    When I use the variable as the option value and I execute the top level scenario that uses this temporary interface, I get prompted (or can get prompted, to be correct) for the value of the variable. Note I am using the <@=odiRef.getObjectName("L","EMP", "SCOTT","D")@> notation for the table reference; since this is done at runtime, the context will resolve to the correct table name etc. Each time I execute, I could use a different source provider (obviously some dependencies on KMs/technologies here). For example, the following groovy snippet first executes with the query using EMP from the SCOTT model; the next time it is the OTHERS datastore from the BOB model.

        m = new Properties();
        m.put("DEMO.SQLSTR", 'select empno, deptno from <@=odiRef.getObjectName("L","EMP", "SCOTT","D")@>');
        s = new StartupParams(m);
        runtimeAgent.startScenario("TOP", null, s, null, "GLOBAL", 5, null, true);

        m2 = new Properties();
        m2.put("DEMO.SQLSTR", 'select empno, deptno from <@=odiRef.getObjectName("L","OTHERS", "BOB","D")@>');
        s2 = new StartupParams(m2);
        runtimeAgent.startScenario("TOP", null, s2, null, "GLOBAL", 5, null, true);

    You’ll need a patch to 11.1.1.6 for this type of capability – thanks to my ole buddy Ron Gonzalez from the Enterprise Management group for help pushing the envelope!

    Read the article

  • Silverlight Cream for June 17, 2010 -- #885

    - by Dave Campbell
    In this Issue: Zoltan Arvai, Antoni Dol, Jeff Prosise, David Anson, and John Papa. Shoutouts: Rob Davis has a World Cup Football Stadium tour in Silverlight, Azure, and Bing Maps up: The World Cup Map... cruise around this... tons of features. The Silverlight Team Blog reports that NBC sports is streaming the US Open in Silverlight Adam Kinney announced Expression Studio 4 Launch keynote videos are available From SilverlightCream.com: Data Driven Applications with MVVM Part III: Validation, Bringing the UI Closer Zoltan Arvai's 3rd (and final) part of the Data-Driven MVVM apps is up at SilverlightShow. In this final section he is focusing on validation, and discussion of closer integration to the view. Focus on FocusVisualElement in Silverlight buttons Antoni Dol has a cool post up about the FocusVisualElement, and uses a button to demonstrate how it can be used. Dynamic XAP Discovery with Silverlight MEF Jeff Prosise is discussing Silverlight and MEF ... but better than the normal loading XAP files ... he's doing dynamic discovery of XAP files ... and makes it look easy! Updated analysis of two ways to create a full-size Popup in Silverlight David Anson revisits a prior post with an eye toward Silverlight 4. The feature he's discussing is that you can now hook the Resized event without having browser zoom disabled... and he demonstrates it's use in the code from the old post. Silverlight as a Transmedia Platform (Silverlight TV #33) Jesse Liberty joins John Papa this week with Silverlight TV #33, discussing Transmedia and Silverlight as a Transmedia Platform. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • What is the Xbox360's D3DRS_VIEWPORTENABLE equivalent on WinXP D3D9?

    - by Jim Buck
    I posted this on StackOverflow, but of course it should be posted here. I am maintaining a multiplatform codebase for Xbox360 and WinXP. I am seeing an issue on the XP side that appears to be related to D3DRS_VIEWPORTENABLE on the Xbox360 version not having an equivalent on WinXP D3D9. This article had an interesting idea, but the only way to construct an identity matrix is to supply negative numbers to D3DVIEWPORT9::X and D3DVIEWPORT9::Height, but they are unsigned numbers. (I tried to put in negative numbers anyway, but nothing interesting happened.) So, how does one emulate the behavior of D3DRS_VIEWPORTENABLE under WinXP/D3D9? (For clarity, the result I'm seeing is that a 2d screen-aligned quad works fine on Xbox360 but is offset/stretched on WinXP. In fact, the (0, 0) starts in the center of the screen on WinXP instead of in the lower-left corner like on the Xbox360 as a result of applying the viewport transform.) Update: I didn't have an Xbox360 devkit at the time I wrote up this question, but I've since gotten one. I commented out the disabling of the D3DRS_VIEWPORTENABLE state, and the exact same behavior resulted on the Xbox360 as on the WinXP build. So, there must be some DirectX magic to bridge the gap here for emulating D3DRS_VIEWPORTENABLE being turned off on WinXP.

    Read the article

  • SharePoint Apps and Windows Azure

    - by ScottGu
    Last Monday I had an opportunity to present as part of the keynote of this year’s SharePoint Conference.  My segment of the keynote covered the new SharePoint Cloud App Model we are introducing as part of the upcoming SharePoint 2013 and Office 365 releases.  This new app model for SharePoint is additive to the full trust solutions developers write today, and is built around three core tenets:

    - Simplifying the development model and making it consistent between the on-premises version of SharePoint and SharePoint Online provided with Office 365.
    - Making the execution model loosely coupled – and enabling developers to build apps and write code that can run outside of the core SharePoint service. This makes it easy to deploy SharePoint apps using Windows Azure, and avoids having to worry about breaking SharePoint and the apps within it when something is upgraded.  This new loosely coupled model also enables developers to write SharePoint applications that can leverage the full capabilities of the .NET Framework – including ASP.NET Web Forms 4.5, ASP.NET MVC 4, ASP.NET Web API, EF 5, Async, and more.
    - Implementing this loosely coupled model using standard web protocols – like OAuth, JSON, and REST APIs – that enable developers to re-use skills and tools, and easily integrate SharePoint with Web and Mobile application architectures.

    A video of my talk + demos is now available to watch online. In the talk I walked through building an app from scratch – it showed off how easy it is to build solutions using the new SharePoint app model, and highlighted a web + workflow + mobile scenario that integrates SharePoint with code hosted on Windows Azure (all built using Visual Studio 2012 and ASP.NET 4.5 – including MVC and Web API). The new SharePoint Cloud App Model is something that I think is pretty exciting, and it is going to make it a lot easier to build SharePoint apps using the full power of both Windows Azure and the .NET Framework.  Using Windows Azure to easily extend SaaS based solutions like Office 365 is also a really natural fit and one that is going to offer a bunch of great developer opportunities.  Hope this helps, Scott  P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
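    To make the "standard web protocols" point concrete, here is a minimal, hypothetical C# sketch of a remote component (for example, one hosted in a Windows Azure web role) reading list items through the SharePoint 2013 REST API with an OAuth bearer token. The site URL, list title, and the way the token is obtained are illustrative placeholders, not anything taken from the keynote.

        using System;
        using System.Net.Http;
        using System.Net.Http.Headers;

        // Minimal sketch: call the SharePoint 2013 REST API from code running outside
        // SharePoint. The site URL, list name, and access token below are hypothetical
        // placeholders - in a real app the token would come from an OAuth flow.
        class SharePointRestSketch
        {
            static void Main()
            {
                var siteUrl = "https://contoso.sharepoint.com/sites/dev";      // placeholder site
                var accessToken = "<OAuth access token acquired elsewhere>";   // placeholder token

                using (var client = new HttpClient())
                {
                    // REST endpoint for the items of a list called "Tasks"
                    var url = siteUrl + "/_api/web/lists/getbytitle('Tasks')/items";

                    client.DefaultRequestHeaders.Authorization =
                        new AuthenticationHeaderValue("Bearer", accessToken);
                    client.DefaultRequestHeaders.Accept.Add(
                        MediaTypeWithQualityHeaderValue.Parse("application/json;odata=verbose"));

                    // The response is plain JSON, so any web or mobile client can consume it
                    // the same way - no SharePoint assemblies required on this side.
                    string json = client.GetStringAsync(url).Result;
                    Console.WriteLine(json);
                }
            }
        }

    Because the moving parts are just HTTP, OAuth and JSON, the same call works equally well from an ASP.NET MVC controller, a Web API service, or a mobile client – which is exactly the loose coupling described above.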

    Read the article

  • Comparison between Cocos2d and Corona

    - by dontangg
    I'm having a really hard time deciding which way to go on this. I'm about to start developing a game and I haven't been able to find many good comparisons between these approaches. I don't have many requirements for the game yet, but here is what I do know:

    - Needs to work on iPhone
    - I don't have much money ($400 for Unity for iPhone is probably too much. I can probably afford $99 for Corona.)
    - Graphics will be 2D
    - Physics support is not needed
    - Ability to use particles would be nice
    - Game Center support would be nice (Corona is planning to support it soon)
    - It would be nice to be able to support Android as well if it isn't much effort.

    I have done my own research, so I know basic things about them. I know Corona uses Lua and Cocos2D uses Objective C. I know that Corona allows deployment to iPhone and Android, but how easy is it? Cocos2D is free, and so many people talk about how easy it is to use Corona, but I don't like being restricted to the features Corona supports, or the price tag. I feel so torn here.

    Read the article

  • Tuning Red Gate: #5 of Multiple

    - by Grant Fritchey
    In the Tuning Red Gate series I've shown you how to look at a current load on the system and how to drill down to look at historical analysis of the system. I've also shown how you can see the top queries and other information from the current status of the system. I have one more thing I can show you before we need to start fixing things and showing how that affects the data collected: historical moments in time. For example, back in Post #3 I was looking at some spikes in some of the monitored resources that were taking place a couple of weeks back in time. Once I identify a moment in time that I'm interested in, I can go back to the first page of Monitor, the Global Overview, and click on the icon. From this you can select the date and time you're interested in. For example, I saw some serious CPU queues last week. This then rolls back the time for all the information that's available to the Global Overview and the drill down to the server and the SQL Server instance there. This then allows me to look at the Top Queries running at this point, sort them by CPU and identify what was potentially the query that was causing the problem right when I saw the CPU queuing. This ability to correlate a moment in time with the information available to you in the Analysis window makes for an excellent tool to investigate your systems going backwards in time. It really makes a huge difference in your knowledge. It's not enough to know that something happened at a particular time. You need to know what it was that was occurring. Remember, the key to tuning your systems is having enough knowledge about them. I'll post more on Tuning Red Gate as soon as I can get some queries rewritten. I'm working on that.

    Read the article

  • Investigation: Can different combinations of components effect Dataflow performance?

    - by jamiet
    Introduction The Dataflow task is one of the core components (if not the core component) of SQL Server Integration Services (SSIS) and often the most misunderstood. This is not surprising, its an incredibly complicated beast and we’re abstracted away from that complexity via some boxes that go yellow red or green and that have some lines drawn between them. Example dataflow In this blog post I intend to look under that facade and get into some of the nuts and bolts of the Dataflow Task by investigating how the decisions we make when building our packages can affect performance. I will do this by comparing the performance of three dataflows that all have the same input, all produce the same output, but which all operate slightly differently by way of having different transformation components. I also want to use this blog post to challenge a common held opinion that I see perpetuated over and over again on the SSIS forum. That is, that people assume adding components to a dataflow will be detrimental to overall performance. Its not surprising that people think this –it is intuitive to think that more components means more work- however this is not a view that I share. I have always been of the opinion that there are many factors affecting dataflow duration and the number of components is actually one of the less important ones; having said that I have never proven that assertion and that is one reason for this investigation. I have actually seen evidence that some people think dataflow duration is simply a function of number of rows and number of components. I’ll happily call that one out as a myth even without any investigation!  The Setup I have a 2GB datafile which is a list of 4731904 (~4.7million) customer records with various attributes against them and it contains 2 columns that I am going to use for categorisation: [YearlyIncome] [BirthDate] The data file is a SSIS raw format file which I chose to use because it is the quickest way of getting data into a dataflow and given that I am testing the transformations, not the source or destination adapters, I want to minimise external influences as much as possible. In the test I will split the customers according to month of birth (12 of those) and whether or not their yearly income is above or below 50000 (2 of those); in other words I will be splitting them into 24 discrete categories and in order to do it I shall be using different combinations of SSIS’ Conditional Split and Derived Column transformation components. The 24 datapaths that occur will each input to a rowcount component, again because this is the least resource intensive means of terminating a datapath. The test is being carried out on a Dell XPS Studio laptop with a quad core (8 logical Procs) Intel Core i7 at 1.73GHz and Samsung SSD hard drive. Its running SQL Server 2008 R2 on Windows 7. The Variables Here are the three combinations of components that I am going to test:     One Conditional Split - A single Conditional Split component CSPL Split by Month of Birth and income category that will use expressions on [YearlyIncome] & [BirthDate] to send each row to one of 24 outputs. This next screenshot displays the expression logic in use: Derived Column & Conditional Split - A Derived Column component DER Income Category that adds a new column [IncomeCategory] which will contain one of two possible text values {“LessThan50000”,”GreaterThan50000”} and uses [YearlyIncome] to determine which value each row should get. 
A Conditional Split component CSPL Split by Month of Birth and Income Category then uses that new column in conjunction with [BirthDate] to determine which of the same 24 outputs to send each row to. Put more simply, I am separating the Conditional Split of #1 into a Derived Column and a Conditional Split. The next screenshots display the expression logic in use: DER Income Category         CSPL Split by Month of Birth and Income Category       Three Conditional Splits - A Conditional Split component that produces two outputs based on [YearlyIncome], one for each Income Category. Each of those outputs will go to a further Conditional Split that splits the input into 12 outputs, one for each month of birth (identical logic in each). In this case then I am separating the single Conditional Split of #1 into three Conditional Split components. The next screenshots display the expression logic in use: CSPL Split by Income Category         CSPL Split by Month of Birth 1& 2       Each of these combinations will provide an input to one of the 24 rowcount components, just the same as before. For illustration here is a screenshot of the dataflow containing three Conditional Split components: As you can these dataflows have a fair bit of work to do and remember that they’re doing that work for 4.7million rows. I will execute each dataflow 10 times and use the average for comparison. I foresee three possible outcomes: The dataflow containing just one Conditional Split (i.e. #1) will be quicker There is no significant difference between any of them One of the two dataflows containing multiple transformation components will be quicker Regardless of which of those outcomes come to pass we will have learnt something and that makes this an interesting test to carry out. Note that I will be executing the dataflows using dtexec.exe rather than hitting F5 within BIDS. The Results and Analysis The table below shows all of the executions, 10 for each dataflow. It also shows the average for each along with a standard deviation. All durations are in seconds. I’m pasting a screenshot because I frankly can’t be bothered with the faffing about needed to make a presentable HTML table. It is plain to see from the average that the dataflow containing three conditional splits is significantly faster, the other two taking 43% and 52% longer respectively. This seems strange though, right? Why does the dataflow containing the most components outperform the other two by such a big margin? The answer is actually quite logical when you put some thought into it and I’ll explain that below. Before progressing, a side note. The standard deviation for the “Three Conditional Splits” dataflow is orders of magnitude smaller – indicating that performance for this dataflow can be predicted with much greater confidence too. The Explanation I refer you to the screenshot above that shows how CSPL Split by Month of Birth and salary category in the first dataflow is setup. Observe that there is a case for each combination of Month Of Date and Income Category – 24 in total. These expressions get evaluated in the order that they appear and hence if we assume that Month of Date and Income Category are uniformly distributed in the dataset we can deduce that the expected number of expression evaluations for each row is 12.5 i.e. 1 (the minimum) + 24 (the maximum) divided by 2 = 12.5. Now take a look at the screenshots for the second dataflow. 
We are doing one expression evaluation in DER Income Category and we have the same 24 cases in CSPL Split by Month of Birth and Income Category as we had before, only the expression differs slightly. In this case then we have 1 + 12.5 = 13.5 expected evaluations for each row – that would account for the slightly longer average execution time for this dataflow. Now onto the third dataflow, the quick one. CSPL Split by Income Category does a maximum of 2 expression evaluations thus the expected number of evaluations per row is 1.5. CSPL Split by Month of Birth 1 & CSPL Split by Month of Birth 2 both have less work to do than the previous Conditional Split components because they only have 12 cases to test for thus the expected number of expression evaluations is 6.5 There are two of them so total expected number of expression evaluations for this dataflow is 6.5 + 6.5 + 1.5 = 14.5. 14.5 is still more than 12.5 & 13.5 though so why is the third dataflow so much quicker? Simple, the conditional expressions in the first two dataflows have two boolean predicates to evaluate – one for Income Category and one for Month of Birth; the expressions in the Conditional Split in the third dataflow however only have one predicate thus they are doing a lot less work. To sum up, the difference in execution times can be attributed to the difference between: MONTH(BirthDate) == 1 && YearlyIncome <= 50000 and MONTH(BirthDate) == 1 In the first two dataflows YearlyIncome <= 50000 gets evaluated an average of 12.5 times for every row whereas in the third dataflow it is evaluated once and once only. Multiply those 11.5 extra operations by 4.7million rows and you get a significant amount of extra CPU cycles – that’s where our duration difference comes from. The Wrap-up The obvious point here is that adding new components to a dataflow isn’t necessarily going to make it go any slower, moreover you may be able to achieve significant improvements by splitting logic over multiple components rather than one. Performance tuning is all about reducing the amount of work that needs to be done and that doesn’t necessarily mean use less components, indeed sometimes you may be able to reduce workload in ways that aren’t immediately obvious as I think I have proven here. Of course there are many variables in play here and your mileage will most definitely vary. I encourage you to download the package and see if you get similar results – let me know in the comments. The package contains all three dataflows plus a fourth dataflow that will create the 2GB raw file for you (you will also need the [AdventureWorksDW2008] sample database from which to source the data); simply disable all dataflows except the one you want to test before executing the package and remember, execute using dtexec, not within BIDS. If you want to explore dataflow performance tuning in more detail then here are some links you might want to check out: Inequality joins, Asynchronous transformations and Lookups Destination Adapter Comparison Don’t turn the dataflow into a cursor SSIS Dataflow – Designing for performance (webinar) Any comments? Let me know! @Jamiet
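    To make the expected-evaluation arithmetic above easy to replay, here is a small, purely illustrative C# sketch (not part of the original post) that reproduces the per-row numbers for the three designs under the same uniform-distribution assumption used above; the class and method names are my own.

        using System;

        // Illustrative only: replays the expected-evaluation arithmetic from the post,
        // assuming Month of Birth and Income Category are uniformly distributed.
        class ExpectedEvaluations
        {
            // A Conditional Split with n uniformly-hit cases evaluates, on average,
            // (1 + n) / 2 of its case expressions per row (minimum 1, maximum n).
            static double ExpectedCaseEvaluations(int caseCount)
            {
                return (1 + caseCount) / 2.0;
            }

            static void Main()
            {
                // #1: one Conditional Split with 24 cases.
                double oneConditionalSplit = ExpectedCaseEvaluations(24);                  // 12.5

                // #2: Derived Column (1 evaluation) + the same 24-case Conditional Split.
                double derivedColumnPlusSplit = 1 + ExpectedCaseEvaluations(24);           // 13.5

                // #3: a 2-case split plus two 12-case splits (summed as in the post).
                double threeConditionalSplits = ExpectedCaseEvaluations(2)
                                              + ExpectedCaseEvaluations(12)
                                              + ExpectedCaseEvaluations(12);               // 14.5

                Console.WriteLine("One Conditional Split:    " + oneConditionalSplit);
                Console.WriteLine("Derived Column + Split:   " + derivedColumnPlusSplit);
                Console.WriteLine("Three Conditional Splits: " + threeConditionalSplits);

                // The third design still wins because each of its expressions tests a single
                // predicate (e.g. MONTH(BirthDate) == 1), whereas the expressions in the
                // first two designs test two predicates per evaluation.
            }
        }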

    Read the article

  • Should a developer create test cases and then run through test cases

    - by Eben Roux
    I work for a company where the development manager expects a developer to create test cases before writing any code. These test cases then have to be maintained by the developers. Every so often a developer will be expected to run through the test cases. From this you should be able to gather that the company in question is rather small and there are no testers. Coming from a Software Architect position and having to write / execute test cases wearing my 'tester' hat is somewhat of a shock to the system. I do it anyway, but it does seem to be a rather expensive exercise :) EDIT: I seem to need to elaborate here: I am not talking about unit testing, TDD, etc. :) I am talking about that bit of testing a tester does. Once I have developed a system (with my unit tests / TDD / etc.) the software goes through a testing phase. Should a developer be that tester and develop those test cases? I think the misunderstanding may stem from the fact that developers, typically, are not involved with this type of testing and, therefore, assume I am referring to that testing we do do: unit testing. But alas, no. I hope that clears it up.

    Read the article

  • C#/.NET Little Wonders: Fun With Enum Methods

    - by James Michael Hare
    Once again let’s dive into the Little Wonders of .NET, those small things in the .NET languages and BCL classes that make development easier by increasing readability, maintainability, and/or performance. Probably every one of us has used an enumerated type at one time or another in a C# program.  The enumerated types we create are a great way to represent that a value can be one of a set of discrete values (or a combination of those values in the case of bit flags). But the power of enum types goes far beyond simple assignment and comparison; there are many methods in the Enum class (that all enum types “inherit” from) that can give you even more power when dealing with them.

    IsDefined() – check if a given value exists in the enum

    Are you reading a value for an enum from a data source, but are unsure if it is actually a valid value or not?  Casting won’t tell you this, and Parse() isn’t guaranteed to balk either if you give it an int or a combination of flags.  So what can we do? Let’s assume we have a small enum like this for result codes we want to return back from our business logic layer:

        public enum ResultCode
        {
            Success,
            Warning,
            Error
        }

    In this enum, Success will be zero (unless given another value explicitly), Warning will be one, and Error will be two. So what happens if we have code like this where perhaps we’re getting the result code from another data source (could be database, could be web service, etc)?

        public ResultCode PerformAction()
        {
            // set up and call some method that returns an int.
            int result = ResultCodeFromDataSource();

            // this will succeed even if result is < 0 or > 2.
            return (ResultCode) result;
        }

    So what happens if result is –1 or 4?  Well, the cast does not fail, so what we end up with would be an instance of a ResultCode that would have a value that’s outside of the bounds of the enum constants we defined. This means if you had a block of code like:

        switch (result)
        {
            case ResultCode.Success:
                // do success stuff
                break;

            case ResultCode.Warning:
                // do warning stuff
                break;

            case ResultCode.Error:
                // do error stuff
                break;
        }

    you would hit none of these blocks (which is a good argument for always having a default in a switch, by the way). So what can you do?  Well, there is a handy static method called IsDefined() on the Enum class which will tell you if an enum value is defined:

        public ResultCode PerformAction()
        {
            int result = ResultCodeFromDataSource();

            if (!Enum.IsDefined(typeof(ResultCode), result))
            {
                throw new InvalidOperationException("Enum out of range.");
            }

            return (ResultCode) result;
        }

    In fact, this is often recommended after you Parse() or cast a value to an enum, as there are ways for values that may not be defined to get past these methods. If you don’t like the syntax of passing in the type of the enum, you could clean it up a bit by creating an extension method instead that would allow you to call IsDefined() off any instance of the enum:

        public static class EnumExtensions
        {
            // helper method that tells you if an enum value is defined for its enumeration
            public static bool IsDefined(this Enum value)
            {
                return Enum.IsDefined(value.GetType(), value);
            }
        }

    HasFlag() – an easier way to see if a bit (or bits) are set

    Most of us who came from the land of C programming have had to deal extensively with bit flags many times in our lives. As such, using bit flags may be almost second nature (for a quick refresher on bit flags in enum types see one of my old posts here). However, in higher-level languages like C#, the need to manipulate individual bit flags is somewhat diminished, and the code to check for bit flag enum values may be obvious to an advanced developer but cryptic to a novice developer. For example, let’s say you have an enum for a messaging platform that contains bit flags:

        // usually, we pluralize flags enum type names
        [Flags]
        public enum MessagingOptions
        {
            None = 0,
            Buffered = 0x01,
            Persistent = 0x02,
            Durable = 0x04,
            Broadcast = 0x08
        }

    We can combine these bit flags using the bitwise OR operator (the ‘|’ pipe character):

        // combine bit flags using the | operator
        var myMessenger = new Messenger(MessagingOptions.Buffered | MessagingOptions.Broadcast);

    Now, if we wanted to check the flags, we’d have to test them using the bit-wise AND operator (the ‘&’ character):

        if ((options & MessagingOptions.Buffered) == MessagingOptions.Buffered)
        {
            // do code to set up buffering...
            // ...
        }

    While the ‘|’ for combining flags is easy enough to read for advanced developers, the ‘&’ test tends to be easy for novice developers to get wrong.  First of all you have to AND the flag combination with the value, and then typically you should test against the flag combination itself (and not just for a non-zero)!  This is because the flag combination you are testing with may combine multiple bits, in which case if only one bit is set, the result will be non-zero but not necessarily all desired bits! Thank goodness in .NET 4.0 they gave us the HasFlag() method.  This method can be called from an enum instance to test to see if a flag is set, and best of all you can avoid writing the bit-wise logic yourself.  Not to mention it will be more readable to a novice developer as well:

        if (options.HasFlag(MessagingOptions.Buffered))
        {
            // do code to set up buffering...
            // ...
        }

    It is much more concise and unambiguous, thus increasing your maintainability and readability. It would be nice to have a corresponding SetFlag() method, but unfortunately generic types don’t allow you to specialize on Enum, which makes it a bit more difficult.  It can be done but you have to do some conversions to numeric and then back to the enum, which makes it less of a payoff than having the HasFlag() method.  But if you want to create it for symmetry, it would look something like this:

        public static T SetFlag<T>(this Enum value, T flags)
        {
            if (!value.GetType().IsEquivalentTo(typeof(T)))
            {
                throw new ArgumentException("Enum value and flags types don't match.");
            }

            // yes this is ugly, but unfortunately we need to use an intermediate boxing cast
            return (T)Enum.ToObject(typeof(T), Convert.ToUInt64(value) | Convert.ToUInt64(flags));
        }

    Note that since the enum types are value types, we need to assign the result to something (much like string.Trim()).  Also, you could chain several SetFlag() operations together or create one that takes a variable arg list if desired.

    Parse() and ToString() – transitioning from string to enum and back

    Sometimes, you may want to be able to parse an enum from a string or convert it to a string – Enum has methods built in to let you do this.  Now, many may already know this, but may not appreciate how much power is in these two methods. For example, if you want to parse a string as an enum, it’s easy and works just like you’d expect from the numeric types:

        string optionsString = "Persistent";

        // can use Enum.Parse, which throws if it finds something it doesn't like...
        var result = (MessagingOptions)Enum.Parse(typeof(MessagingOptions), optionsString);

        if (result == MessagingOptions.Persistent)
        {
            Console.WriteLine("It worked!");
        }

    Note that Enum.Parse() will throw if it finds a value it doesn’t like.  But the values it likes are fairly flexible!  You can pass in a single value, or a comma separated list of values for flags and it will parse them all and set all bits:

        // for string values, can have one, or comma separated.
        string optionsString = "Persistent, Buffered";

        var result = (MessagingOptions)Enum.Parse(typeof(MessagingOptions), optionsString);

        if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered))
        {
            Console.WriteLine("It worked!");
        }

    Or you can parse in a string containing a number that represents a single value or combination of values to set:

        // 3 is the combination of Buffered (0x01) and Persistent (0x02)
        var optionsString = "3";

        var result = (MessagingOptions) Enum.Parse(typeof(MessagingOptions), optionsString);

        if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered))
        {
            Console.WriteLine("It worked again!");
        }

    And, if you really aren’t sure if the parse will work, and don’t want to handle an exception, you can use TryParse() instead:

        string optionsString = "Persistent, Buffered";
        MessagingOptions result;

        // TryParse returns true if successful, and takes an out parm for the result
        if (Enum.TryParse(optionsString, out result))
        {
            if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered))
            {
                Console.WriteLine("It worked!");
            }
        }

    So we covered parsing a string to an enum; what about reversing that and converting an enum to a string?  The ToString() method is the obvious and most basic choice for most of us, but did you know you can pass a format string for enum types that dictates how they are written as a string?

        MessagingOptions value = MessagingOptions.Buffered | MessagingOptions.Persistent;

        // general format, which is the default
        Console.WriteLine("Default    : " + value);
        Console.WriteLine("G (default): " + value.ToString("G"));

        // Flags format, even if type does not have Flags attribute.
        Console.WriteLine("F (flags)  : " + value.ToString("F"));

        // integer format, value as number.
        Console.WriteLine("D (num)    : " + value.ToString("D"));

        // hex format, value as hex
        Console.WriteLine("X (hex)    : " + value.ToString("X"));

    Which displays:

        Default    : Buffered, Persistent
        G (default): Buffered, Persistent
        F (flags)  : Buffered, Persistent
        D (num)    : 3
        X (hex)    : 00000003

    Now, you may not really see a difference here between G and F because I used a [Flags] enum; the difference is that the “F” option treats the enum as if it were flags even if the [Flags] attribute is not present.  Let’s take a non-flags enum like the ResultCode used earlier:

        // yes, we can do this even if it is not a [Flags] enum.
        ResultCode value = ResultCode.Warning | ResultCode.Error;

    And if we run that through the same formats again we get:

        Default    : 3
        G (default): 3
        F (flags)  : Warning, Error
        D (num)    : 3
        X (hex)    : 00000003

    Notice that since we had multiple values combined, but it was not a [Flags] marked enum, the G and default format gave us a number instead of a value name.  This is because the value was not a valid single-value constant of the enum.  However, using the F flags format string, it broke out the value into its component flags even though it wasn’t marked [Flags]. So, if you want to get an enum to display appropriately for whether or not it has the [Flags] attribute, use G which is the default.  If you always want it to attempt to break down the flags, use F.  For numeric output, obviously D or X are the best choice depending on whether you want decimal or hex.

    Summary

    Hopefully, you learned a couple of new tricks with using the Enum class today!  I’ll add more little wonders as I think of them and thanks for all the invaluable input!

    Technorati Tags: C#, .NET, Little Wonders, Enum, BlackRabbitCoder

    Read the article

  • UEFI/GPT Win 7 Load Failure in Dual Boot and no GRUB2 [Ubuntu 12.04]

    - by cristian_jordache
    Configuration: MBB: ASRock X79 Extreme6 Win 7 installed on a INTEL 40GB SSD (GPT partitioned) Ubuntu 14.04 on a CORSAIR 30GB SSD (Ext4 and SWAP) I had Windows 7 installed previously in UEFI mode, using 3 partitions (GPT) and works fine if left alone. In UEFI BIOS settings I can see sometimes a "Windows Boot Manager" and other times (?) a "UEFI Intel" entry for INTEL HDD and Windows will boot properly selecting the one available at that time. I installed Ubuntu 14.04 after Win 7 w/o changing any UEFI BIOS settings and it works fine only if the BIOS is set w/ the Ubuntu partition as the first drive to boot, in AHCI mode. If both SSD drives are connected, the Win7 Intel boot drive can be chosen as first boot device but only as an "AHCI Intel drive" (No "Windows Boot Manager" nor "UEFI Intel device" options available in BIOS Boot menu) and Win7 will not load properly as long as the Ubuntu Crucial SSD is NOT PHYSICALLY DISCONNECTED. Windows will try, start booting for few seconds but will fail replacing Win7 logo and that startup animation with w/ the "old" white progress bar and then and will notify that there is a issue and prompt the user to try to Load Win 7 in Normal Mode again or try a Recovery Mode to fix it. If I let Windows INTEL HDD boot via BIOS/UEFI - Windows Boot manager selection, I may see the purple screen of Grub2 loaded for a while, but there's no selection for Ubuntu or Windows and/or then machine is not booting, showing a black screen and a small command prompt cursor blinking on top. So far the only option I see to have Ubuntu boot side by side w/ Win 7 is to reformat the Win7 SDD and set it boot in legacy BIOS mode with a MBR instead of GPT. Per my understanding this is a quite complex issue to fix (Rod Smith's answer was pretty helpful: UEFI boot on my Asus k52f) but any other suggestions are welcome. I find a bit odd that I can boot properly Windows7 SSD or an Ubuntu DVD using a DVD drive set in UEFI-BIOS in "AHCI mode" and w/ using "UEFI/Windows Boot Manager" booting option but I cannot boot a secondary SSD-HDD w/ Ubuntu having the same BIOS/UEFI Boot configuration. Looks like plugging the second SSD [the Ubuntu partition] is interfering with boot options in UEFI-BIOS.

    Read the article

  • Dilemma for growing a project: Open source volunteer developers VS closed source paid / revshare developers? [closed]

    - by giorgio79
    I am trying to grow my project, and I am vacillating between some examples. Some options seem to be:

    1. Open sourcing the project to draw volunteer developers.
       Pros: This would mean anyone can try and make some money off the code, which would motivate them to contribute back and grow the project.
       Cons: Existing, bigger players could easily copy and paste my work so far. They can also replicate it without having access to the code, but that would take more time. I also thought of using the AGPL license, but again, code can still be copied without redistribution. After all, enforcing a license costs a lot of money, and I cannot just say to a possible copycat "it seems you copied my code, show me what you got."

    2. Keep the project closed source, but create some kind of a developer program where they get revshare.
       Pros: I keep the main rights to the project, but still generate interest by creating a developer program. No one can copy the code easily, only with some considerable effort, while contributions stay easy as a breeze. I am also seeing many companies open source only a part of their projects: Acquia does not open source its multisite setup, and GitHub does not open source some of its core business.
       Cons: Less attention from open-source-committed devs.

    Conclusion: So option 2 seems the most secure, but I would love some feedback.

    Read the article

  • "Has Oracle written the script for CRM success?" - Anthony Lye on Customer Experience at BAFTA

    - by Richard Lefebvre
    Anthony Lye showcased Oracle Fusion CRM at a BAFTA gathering, and MyCustomer.com covered the story under the title of "Has Oracle written the script for CRM success?" According to MyCustomer.com, "Oracle's SVP of CRM Anthony Lye set the scene for the event, suggesting products are becoming commoditized, so that the only way to differentiate is through the relationship with the customer. But he warned that "customers are more and more in control of that relationship, so you have to provide great experiences for them." "The quickest win within your organization to create a single view is to connect your marketing organization with your selling organization, align goals, processes, people and technology," Anthony explained. "And this is a transition that is already happening - "VPs of marketing have started turning up in the same meetings as VPs of sales, we have started to see that they want to work together" - but this convergence needs nurturing." "In Fusion there are capabilities to align the organisation - we enable marketing on the same platform to build campaigns connected to sales stages. It can affect leads and opportunities at the top end of the funnel. And the selling organisation can take advantage of marketing content - the materials that are exclusively within marketing can now be used by sales. Your sales teams have been campaigning forever, but it's usually by email, it isn't aligned with the corporate message and it's being sent to people it shouldn't. By aligning them we can increase output and the quality of that output." Anthony concluded: "Operating in a disconnected fashion having two distinct systems will cost you time and money. So we feel there's a material advantage in a solution like this." Enjoy the full story at http://www.mycustomer.com/topic/marketing/has-oracle-written-script-crm-success/139958

    Read the article

  • Twin Cities Code Camp 8 Retrospective

    - by Lee Brandt
    I just got back (a few hours ago) from Minneapolis, where I was speaking at the Twin Cities Code Camp 8. I'd never been to a Twin Cities Code Camp, and I have always heard such great things, so I submitted and got accepted to speak. The conference (what I got to see) was great. My talk was pretty light on attendance, but there are many reasons for that. First, I spoke opposite Donn Felker (speaking about developing for Android) and Keith Dahlby (speaking about Dynamic .NET). So of course my talk was going to be empty. How could I compete with that? Plus, my talk was about software process improvement, specifically about how our process has evolved. Maybe not the smartest idea to submit a talk about software process at a developers' conference. The people who DID attend, however, seemed to really enjoy the talk. There was good interaction and good, thoughtful questions. So the attendees seemed engaged. I actually did get a chance to go to one session. I went and saw Javier Lozano talk about open source tools for ASP.NET MVC. I am hip-deep in MVC stuff right now and getting up to speed on MVC 2 as well. I learned about MVC Turbine, Javier's open source project. I will definitely be adding it to my MVC arsenal. Thanks Javier! I did forget my AC adapter for my laptop and got a little lost in Minneapolis on my way to get one from MicroCenter Saturday morning, but other than that, it was a great trip. It's a long drive, but seeing all the guys and getting two Nut & Honey rolls from Roly Poly in Eden Prairie for lunch on Saturday made the trip totally worth it. I look forward to seeing what Jason & Chris come up with for next year! Thanks for having me, guys!

    Read the article

  • Building Interactive User Interfaces with Microsoft ASP.NET AJAX: Refreshing An UpdatePanel With JavaScript

    The ASP.NET AJAX UpdatePanel provides a quick and easy way to implement a snappier, AJAX-based user interface in an ASP.NET WebForm. In a nutshell, UpdatePanels allow page developers to refresh selected parts of the page (instead of refreshing the entire page). Typically, an UpdatePanel contains user interface elements that would normally trigger a full page postback - controls like Buttons or DropDownLists that have their AutoPostBack property set to True. Such controls, when placed inside an UpdatePanel, cause a partial page postback to occur. On a partial page postback only the contents of the UpdatePanel are refreshed, avoiding the "flash" of having the entire page reloaded. (For a more in-depth look at the UpdatePanel control, refer back to the Using the UpdatePanel installment in this article series.) Triggering a partial page postback refreshes the contents within an UpdatePanel, but what if you want to refresh an UpdatePanel's contents via JavaScript? Ideally, the UpdatePanel would have a client-side function named something like Refresh that could be called from script to perform a partial page postback and refresh the UpdatePanel. Unfortunately, no such function exists. Instead, you have to write script that triggers a partial page postback for the UpdatePanel you want to refresh. This article looks at how to accomplish this using just a single line of markup/script and includes a working demo you can download and try out for yourself. Read on to learn more!
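    As a rough sketch of the kind of markup/script involved (the control names, the hidden LinkButton trigger, and the label content below are assumptions for illustration, not necessarily the article's exact approach), one well-known way to force that partial page postback from script is to give the UpdatePanel a hidden child control and call __doPostBack with that control's UniqueID:

        <%-- assumes the usual WebForm plumbing: an .aspx page with a <form runat="server"> --%>
        <script runat="server">
            protected void RefreshTrigger_Click(object sender, EventArgs e)
            {
                // runs during the partial page postback; only the UpdatePanel re-renders
                TimeLabel.Text = DateTime.Now.ToLongTimeString();
            }
        </script>

        <asp:ScriptManager ID="ScriptManager1" runat="server" />

        <asp:UpdatePanel ID="TimePanel" runat="server" UpdateMode="Conditional" ChildrenAsTriggers="true">
            <ContentTemplate>
                <%-- hidden trigger inside the panel; a LinkButton guarantees __doPostBack is emitted --%>
                <asp:LinkButton ID="RefreshTrigger" runat="server" style="display:none"
                                OnClick="RefreshTrigger_Click" />
                <asp:Label ID="TimeLabel" runat="server" Text="(not refreshed yet)" />
            </ContentTemplate>
        </asp:UpdatePanel>

        <%-- the "single line of script": refresh the panel without a full page reload --%>
        <input type="button" value="Refresh panel"
               onclick="__doPostBack('<%= RefreshTrigger.UniqueID %>', '');" />

    Clicking the plain HTML button posts back as if the hidden LinkButton had been clicked, so the ScriptManager performs a partial page postback and only the UpdatePanel's contents are refreshed.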

    Read the article

  • LINQ and conversion operators

    - by vik20000in
    LINQ has a habit of returning results as IEnumerable. But we have all been working with so many other list formats (arrays, IList, Dictionary, etc.) that most of the time, once we have the result set, we want to convert it to one of those familiar formats. For this reason LINQ provides helper methods that can convert the result set into the desired format. Below is an example:

        var sortedDoubles =
            from d in doubles
            orderby d descending
            select d;
        var doublesArray = sortedDoubles.ToArray();

    In the same way we can transfer the data to IList and Dictionary objects. Now let's say we have an array of objects that contains different types of data (double, int, null, string, etc.) and we want only one type of data back; for that we can use the helper method OfType. Below is an example:

        object[] numbers = { null, 1.0, "two", 3, "four", 5, "six", 7.0 };
        var doubles = numbers.OfType<double>();

    Vikram
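    The post mentions IList and Dictionary but only shows ToArray and OfType in code; as a small self-contained sketch (the sample data and names here are made up for illustration, they are not from the original post), ToList and ToDictionary work the same way:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class ConversionDemo
        {
            static void Main()
            {
                double[] doubles = { 1.7, 2.3, 3.9, 4.1, 5.9 };

                // ToList materializes the query into a List<double>
                List<double> sortedList = (from d in doubles
                                           orderby d descending
                                           select d).ToList();

                // ToDictionary builds a Dictionary; here each value is keyed by its
                // integer part (keys must be unique, which holds for this sample data)
                Dictionary<int, double> byFloor =
                    doubles.ToDictionary(d => (int)Math.Floor(d), d => d);

                Console.WriteLine(string.Join(", ", sortedList)); // 5.9, 4.1, 3.9, 2.3, 1.7
                Console.WriteLine(byFloor[4]);                    // 4.1
            }
        }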

    Read the article

  • NRF Big Show 2011 -- Part 3

    - by David Dorf
    I'm back from the NRF show, having been one of the lucky people whose flight was not canceled. The show was very crowded, with a reported 20% increase in attendance, and everyone seemed in high spirits. After two years of sluggish retail sales, things are really picking up, and it was reflected in everyone's mood. The pop-up Disney Store in the Oracle booth was great and attracted lots of interest in their mobile POS. I know many attendees visited the Disney Store in Times Square to see the entire operation. It's an impressive two-story store that keeps kids engaged. The POS demonstration station, where most of our innovations were demoed, was always crowded. Unfortunately most of the demos used WiFi, and the signals from other booths prevented anything from working reliably. Nevertheless, the demo team did an excellent job walking people through the scenarios and explaining how shopping is being impacted by mobile, analytics, and RFID.
    Big Show Links: Disney uncovers its store magic; Top 10 Things You Missed at the NRF Big Show 2011; Oracle Retail Stores Innovation Station at NRF Big Show 2011 (video)
    The buzz of the show was again around mobile solutions. Several companies are creating mobile POS using the iPod Touch, including integrations to Oracle POS for the following retailers: Disney Stores with InfoGain; Victoria's Secret with InfoGain; Urban Outfitters with Starmount; The Gap with Global Bay.
    Keeping with the mobile theme, the NRF released a revised version of its Mobile Blueprint at the show. It will be posted to the NRF site very soon. The alternate payments section had a major rewrite that provides a great overview of proximity and remote payment technologies.
    NRF Mobile Blueprint Links: New mobile blueprint provides fresh insights; NRF Mobile Blueprint 2011 (slides)
    I hope to do some posts on some of the interesting companies I spoke with in the coming weeks.

    Read the article

  • Why does integrity check fail for the 12.04.1 Alternate ISO?

    - by mghg
    I have followed various recommendations from the Ubuntu Documentation to create a bootable Ubuntu USB flash drive using the 12.04.1 Alternate install ISO file for 64-bit PC, but the integrity test of the USB stick has failed and I do not see why. These are the steps I took:
    1. Downloaded the 12.04.1 Alternate install ISO file for 64-bit PC (ubuntu-12.04.1-alternate-amd64.iso) from http://releases.ubuntu.com/12.04.1/, as well as the MD5, SHA-1 and SHA-256 hash files and the related PGP signatures.
    2. Verified the data integrity of the ISO file using the MD5, SHA-1 and SHA-256 hash files, after having verified the hash files themselves using the related PGP signature files (see e.g. https://help.ubuntu.com/community/HowToSHA256SUM and https://help.ubuntu.com/community/VerifyIsoHowto).
    3. Created a bootable USB stick using Ubuntu's Startup Disk Creator program (see http://www.ubuntu.com/download/help/create-a-usb-stick-on-ubuntu).
    4. Booted my computer using the newly made 12.04.1 Alternate install USB stick.
    5. Selected the option "Check disc for defects" (see https://help.ubuntu.com/community/Installation/CDIntegrityCheck).
    Steps 1, 2, 3 and 4 went through without any problems or error messages. However, step 5 ended with an error message entitled "Integrity test failed" and with the following content: "The ./install/netboot/ubuntu-installer/amd64/pxelinux.cfg/default file failed the MD5 checksum verification. Your CD-ROM or this file may have been corrupted." I have experienced the same error message (it might only be similar, since I have no exact notes) in previous attempts using the 12.04 (i.e. not the maintenance release) Alternate install ISO file. In those cases I tried to install anyway and have so far not experienced any problems that I am aware of. Is the failed integrity check described above a serious error? What is the solution? Or can it be ignored without further problems?

    Read the article

  • Intel graphics driver installer, now the CPU fan is rarely quiet

    - by Space monkey
    I have an Optimus setup: Intel HD 4000 graphics (i7-3635QM CPU) and a GeForce 640M. I don't care about the NVIDIA card, so I didn't try to install any proprietary drivers for it. So: I was having a choppy, high-CPU experience with gnome-shell on Ubuntu 14.04; it only happened when I tried moving a window around quickly. I used the Intel graphics installer hoping that it would fix the problem. It did fix it, and there is no choppiness or high CPU usage now when I move windows around. However, there is a new problem: the fan is rarely quiet, and doing barely anything at all causes it to go into loud mode quickly. That happens despite CPU usage sitting at just around 4%. This wasn't the case before installing the Intel drivers; it would normally only do that if, for example, I was installing packages or doing something that puts some stress on the CPU. I set all CPU cores to "powersave" using cpufreq-set, but nothing changed. On Windows, by contrast, the fans are really quiet when I'm in powersave mode; I believe they shut off completely most of the time. I remember the installer giving me a report at the end of which packages it installed. Unfortunately, I didn't save the report and I don't know where it would have saved it if it did. Any ideas or similar experiences?

    Read the article

< Previous Page | 446 447 448 449 450 451 452 453 454 455 456 457  | Next Page >