Search Results

Search found 29317 results on 1173 pages for 'device control'.


  • How SQL Server 2014 impacts Red Gate’s SQL Compare

    - by Michelle Taylor
    SQL Compare 10.7 successfully connects to SQL Server 2014, but it doesn’t yet cover the SQL Server 2014 features that would require major changes to SQL Compare to support. In this post I’m going to talk about the SQL Server 2014 features we’ve already begun supporting, and which ones we’re working on for the next release of SQL Compare (v11).

    From SQL Compare’s perspective, the new memory-optimized table functionality (some might know it as ‘Hekaton’) has been the most important change. It isn’t a single new object type; the functionality is split across two existing object types (three if you count indexes), as it also comes with native stored procedures and inline indexes. Along with connectivity support, the SQL Compare team has already implemented the first part of the puzzle – inline specification of indexes. These are essential for memory-optimized tables because it’s not possible to alter a memory-optimized table’s structure, so indexes can’t be added after the fact without dropping the table. Books Online shows this in more detail in the table_index and column_index clauses of http://msdn.microsoft.com/en-us/library/ms174979(v=sql.120).aspx. SQL Compare 10.7 currently supports reading the new inline index specification from script folders and source control repositories, and will write out inline indexes where it’s necessary to do so (i.e. in UDDTs or when attempting to write projects compatible with the SSDT database project format). However, memory-optimized tables themselves are not yet supported in 10.7. The team is actively working on making them available in the v11 release with full support later in the year, and in a beta version before that. Fortunately, SQL Compare already has some ways of handling tables that have to be dropped and created rather than altered, and these are being adapted to handle this new kind of table. Because it’s one of the largest new database engine features, there’s an equally large Books Online section on memory-optimized tables, but for us the most important parts of the documentation are the normal table features that are changed or unsupported and the new syntax found in the T-SQL reference pages.

    We are treating SQL Compare’s support of Natively Compiled Stored Procedures as a separate unit of work, which will be available in a subsequent beta and also feed into the v11 release. This new type of stored procedure is designed to work with memory-optimized tables to maintain the performance improvements gained by them – but you can still also access memory-optimized tables from normal stored procedures and ad-hoc queries. To us, they’re essentially limited-syntax stored procedures with a few extra options in the CREATE statement, embodied in the updated CREATE PROCEDURE documentation along with the detailed limitations. They should be easier to handle than memory-optimized tables, simply because the handling of stored procedures is less sensitive to dropping the object than the handling of tables. However, both share an incompatibility with DDL triggers and Event Notifications, which means we’ll need to temporarily disable these during the specific deployment operations that involve them – don’t worry, we’ll supply a warning if this is the case so that you can check that your auditing arrangements can handle the situation.

    There are also a handful of other improvements in SQL Server 2014 which affect SQL Compare and SQL Data Compare that are not connected to memory-optimized tables. The largest of these are the improvements to columnstore indexes, with the capability to create clustered columnstore indexes and update columnstore tables through them – for more detail, take a look at the new syntax reference. There’s also a new index option for better compression of columnstores (COLUMNSTORE_ARCHIVE) and a new statistics option for incremental per-partition statistics, plus the 90 compatibility level is being retired. We’re planning to finish up these small clean-up features last, and be ready to release SQL Compare 11 with full SQL 2014 support early in Q3 this year. For a more thorough overview of what’s new in SQL Server 2014, Books Online’s What’s New section is a good place to start (although almost all the changes in this version are in the Database Engine).
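    To make the inline index syntax concrete, here is a minimal sketch of a SQL Server 2014 memory-optimized table with its indexes declared at creation time (the table and column names are invented purely for illustration):

        -- Hypothetical example: the indexes must be declared inline, because a
        -- memory-optimized table cannot be altered once it has been created.
        CREATE TABLE dbo.SessionCache
        (
            SessionId INT NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            UserName  NVARCHAR(100) NOT NULL,
            LastSeen  DATETIME2 NOT NULL,
            INDEX IX_LastSeen NONCLUSTERED (LastSeen)  -- inline range index
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    Changing the indexes later means dropping and recreating the table, which is exactly why a comparison tool has to treat these tables differently from ordinary ALTER-able ones.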

    Read the article

  • More SQL Smells

    - by Nick Harrison
    Let's continue exploring some of the SQL Smells from the list Phil has been putting together.

    Datatype mismatches in predicates that rely on implicit conversion (Plamen Ratchev). This is a great example poking holes in the whole theory of "if it works, it's not broken". Queries like this will generally work and give the correct response. In fact, without careful analysis, you may be completely oblivious that there is even a problem. This subtle little problem will needlessly complicate queries and slow them down regardless of the indexes applied. Consider this example: CREATE TABLE [dbo].[Page]( [PageId] [int] IDENTITY(1,1) NOT NULL, [Title] [varchar](75) NOT NULL, [Sequence] [int] NOT NULL, [ThemeId] [int] NOT NULL, [CustomCss] [text] NOT NULL, [CustomScript] [text] NOT NULL, [PageGroupId] [int] NOT NULL );  CREATE PROCEDURE PageSelectBySequence ( @sequenceMin smallint , @sequenceMax smallint ) AS BEGIN SELECT [PageId] , [Title] , [Sequence] , [ThemeId] , [CustomCss] , [CustomScript] , [PageGroupId] FROM [CMS].[dbo].[Page] WHERE Sequence BETWEEN @sequenceMin AND @sequenceMax END  Note that the Sequence column is defined as int while the sequence parameters are defined as smallint. The problem is that the database may have to do a lot of type conversions to evaluate the query. In some cases, this may even negate the indexes that you have in place.

    Using correlated subqueries instead of a join (Dave_Levy/Plamen Ratchev). There are two main problems here. The first is a little subjective: since this is a non-standard way of expressing the query, it is harder to understand. The other problem is much more objective and potentially problematic: you are taking much of the control away from the optimizer. Written properly, such a query may well outperform a corresponding query written with traditional joins. More likely than not, performance will degrade. Whenever you assume that you know better than the optimizer, you will most likely be wrong. This is the fundamental problem with any hint. Consider a query like this:  SELECT Page.Title , Page.Sequence , Page.ThemeId , Page.CustomCss , Page.CustomScript , PageEffectParam.Name , PageEffectParam.Value , ( SELECT EffectName FROM dbo.Effect WHERE EffectId = PageEffect.EffectId ) AS EffectName FROM Page INNER JOIN PageEffect ON Page.PageId = PageEffect.PageId INNER JOIN PageEffectParam ON PageEffect.PageEffectId = PageEffectParam.PageEffectId  This can and should be written as:  SELECT Page.Title , Page.Sequence , Page.ThemeId , Page.CustomCss , Page.CustomScript , PageEffectParam.Name , PageEffectParam.Value , EffectName FROM Page INNER JOIN PageEffect ON Page.PageId = PageEffect.PageId INNER JOIN PageEffectParam ON PageEffect.PageEffectId = PageEffectParam.PageEffectId INNER JOIN dbo.Effect ON dbo.Effect.EffectId = PageEffect.EffectId  The correlated query may just as easily show up in the WHERE clause. It's not a good idea in the SELECT clause or the WHERE clause.

    Few or no comments. This one is a bit more complicated and controversial. All comments are not created equal. Some comments are helpful and need to be included. Other comments are not necessary and may indicate a problem. I tend to follow the rule of thumb that comments that explain why are good, and comments that explain how are bad. Many people may be shocked to hear the idea of a bad comment, but hear me out. If a comment is needed to explain what is going on or how it works, the logic is too complex and needs to be simplified. Comments that explain why the SQL is needed are good. Comments that explain where the SQL is used are good. Comments that explain how tables are related should not be needed if the SQL is well written. If they are needed, you need to consider reworking the SQL or simplifying your data model.

    Use of functions in a WHERE clause (Anil Das). Calling a function in the WHERE clause will often negate the indexing strategy. The function will be called for every record considered. This will often force a full table scan on the tables affected. Calling a function does not guarantee a full table scan, but there is a good chance that it will. If you find that you often need to write queries using a particular function, you may need to add a column to the table that has the function already applied. A couple of quick sketches of these fixes follow below.
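    To make the datatype-mismatch fix concrete, a minimal sketch of the corrected procedure simply declares the parameters with the same type as the column (the table and procedure are the ones from the example above; nothing else is implied):

        -- Sketch: the parameters now match the int Sequence column, so no implicit
        -- conversion is needed and any index on Sequence remains fully usable.
        CREATE PROCEDURE PageSelectBySequence
            @sequenceMin int,
            @sequenceMax int
        AS
        BEGIN
            SELECT PageId, Title, Sequence, ThemeId, CustomCss, CustomScript, PageGroupId
            FROM dbo.[Page]
            WHERE Sequence BETWEEN @sequenceMin AND @sequenceMax;
        END

    For the function-in-the-WHERE-clause smell, one common way to "add a column that has the function already applied" is an indexed, persisted computed column (the Orders table and OrderDate column here are hypothetical, used only to illustrate the pattern):

        -- Sketch: store the function result once, index it, and filter on the column
        -- instead of calling the function for every row in the WHERE clause.
        ALTER TABLE dbo.Orders
            ADD OrderYear AS YEAR(OrderDate) PERSISTED;

        CREATE INDEX IX_Orders_OrderYear
            ON dbo.Orders (OrderYear);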

    Read the article

  • How to Restore the Real Internet Explorer Desktop Icon in Windows 7

    - by The Geek
    Remember how previous versions of Windows had an Internet Explorer icon on the desktop, and you could right-click it to quickly access the Internet Options screen? It’s completely gone in Windows 7, but a geeky hack can bring it back. Microsoft removed this feature to comply with all those murky legal battles they’ve had, and their alternate suggestion is to create a standard shortcut to iexplore.exe on the Desktop, but it’s not the same thing. We’ve got a registry hack to bring it back. This guest article was written by Ramesh from the WinHelpOnline blog, where he’s got loads of really geeky registry hacks.

    Bring Back the Internet Explorer Namespace Icon in Windows 7 the Easy Way: If you just want the IE icon back, all you need to do is download the RealInternetExplorerIcon.zip file, extract the contents, and then double-click on the w7_ie_icon_restore.reg file. That’s all you have to do. There’s also an undo registry file there if you want to get rid of it. Download the Real Internet Explorer Icon Registry Hack.

    Manual Registry Hack: If you prefer doing things the manual way, or just really want to understand how this hack works, you can follow the manual steps below to learn how it was done, but we’ll have to warn you that it’s a lot of steps.
    Launch Regedit.exe using the Start Menu search box, and then navigate to the following location: HKEY_CLASSES_ROOT \ CLSID \ {871C5380-42A0-1069-A2EA-08002B30309D}
    Right-click on the key in the left-hand pane, choose Export, and save it to a .REG file (say, ie-guid.reg).
    Open up the REG file using Notepad. From the Edit menu, click Replace, and replace every occurrence of the following GUID string {871C5380-42A0-1069-A2EA-08002B30309D} with a custom GUID string, such as: {871C5380-42A0-1069-A2EA-08002B30301D}
    Save the REG file and close Notepad, and then double-click on the file to merge the contents into the registry.
    Either re-open the registry editor, or use the F5 key to reload everything with the new changes (this step is important).
    Now navigate down to the following registry key: HKEY_CLASSES_ROOT \ CLSID \ {871C5380-42A0-1069-A2EA-08002B30301D} \ Shellex \ ContextMenuHandlers \ ieframe
    Double-click on the (default) key in the right-hand pane and set its data to: {871C5380-42A0-1069-A2EA-08002B30309D}
    With this done, press F5 on the desktop and you’ll see the Internet Explorer icon. The icon appears incomplete without the Properties command in the right-click menu, so keep reading.

    Final Registry Hack Adjustments: Click on the following key, which should still be viewable in your Registry editor window from the last step: HKEY_CLASSES_ROOT\CLSID\{871C5380-42A0-1069-A2EA-08002B30301D}
    Double-click LocalizedString in the right-hand pane and type the following data to rename the icon: Internet Explorer
    Select the following key: HKEY_CLASSES_ROOT\CLSID\{871C5380-42A0-1069-A2EA-08002B30301D}\shell
    Add a subkey and name it Properties, then select the Properties key, double-click the (default) value and type the following: P&roperties
    Create a String value named Position, and type the following data: bottom
    Under Properties, create a subkey and name it Command, and then set its (default) value as follows: control.exe inetcpl.cpl
    Navigate down to the following key, and then delete the value named LegacyDisable: HKEY_CLASSES_ROOT \ CLSID \ {871C5380-42A0-1069-A2EA-08002B30301D} \ shell \ OpenHomePage
    Now head to this key: HKEY_LOCAL_MACHINE \ SOFTWARE \ Microsoft \ Windows \ CurrentVersion \ Explorer \ Desktop \ NameSpace
    Create a subkey named {871C5380-42A0-1069-A2EA-08002B30301D} (which is the custom GUID that we used earlier in this article).
    Press F5 to refresh the Desktop, and the Internet Explorer icon finally appears.

    That’s it! It only took 24 steps, but you made it through to the end – of course, you could just download the registry hack and get the icon back with a double-click.

    Read the article

  • Beyond Chatting: What ‘Social’ Means for CRM

    - by Divya Malik
    A guest post by Steve Diamond, Senior Director, Outbound Product Management, Oracle In a recent post on the Oracle Applications blog, my colleague Steve Boese asked three questions related to the widespread popularity and incredibly rapid growth of Facebook, Pinterest, and LinkedIn. Steve then addressed the many applications for collaborative solutions in the area of Human Capital Management. So, in turning to a conversation about Customer Relationship Management (CRM) and Sales Force Automation (SFA), let me ask you one simple question. How many sales people, particularly at business-to-business companies, consistently meet or beat their quotas in their roles by working alone, with no collaboration among fellow sales people, sales executives, employees in product groups, in service, in Legal, third-party partners, etc.? Hello? Is anybody out there? What’s that cricket noise I hear? That’s correct. Nobody! When it comes to Sales, introverts arguably have a distinct disadvantage. While it’s certainly a truism that “success” in most professional endeavors requires working with people, it’s a mandatory success factor in Sales. This fact became abundantly clear to me one early morning in the late 1990s when I joined the former Hyperion Solutions (now part of Oracle) and attended a Sales Award Ceremony. The Head of Sales at that time gave out dozens of awards – none of them to individuals and all of them to TEAMS of individuals. That’s how it works in Sales. Your colleagues help provide you with product intelligence and competitive intelligence. They help you build the best presentations, pitches, and proposals. They help you develop the most killer RFPs. They align you with the best product people to ensure you’re matching the best products for the opportunity and join you in critical meetings. They help knock the socks of your prospects in “bake off” demo’s. They bring in the best partners to either add complementary products to your opportunity or help you implement a solution. They work with you as a collective team. And so how is all this collaboration STILL typically done today? Through email. And yet we all silently or not so silently grimace about email. It’s relatively siloed. It’s painful to search. It’s difficult to align by topic. And it’s nearly impossible to re-trace meaningful and helpful conversations that occurred among a group or a team at some point in history. This is where social networking for Sales comes into play. It’s about PURPOSEFUL social networking versus chattering. What is purposeful social networking? It’s collaboration that’s built around opportunities, accounts, and contacts. It’s collaboration that delivers valuable context – on the target company, and on key competitors – just to name two examples. It’s collaboration that can scale to provide coaching for larger numbers of sales representatives, both for general purposes, and as we’ve largely discussed here, for specific ‘deals.’ And it’s collaboration that allows a team of people to collectively edit and iterate on a document like an RFP or a soon-to-be killer presentation that is maintained in a central repository, with no time wasted searching for it or worrying about version control. But lest we get carried away, let’s remember that collaboration “happens” among sales people whether there is specialized software to support it or not. The human practice of sales has not changed much in the last 80 to 90 years. Collaboration has been a mainstay during this entire time. 
But what social networking in general, and Oracle Social Networking in particular delivers, is the opportunity for sales teams to dramatically increase their effectiveness and efficiency – to identify and close more high quality and lucrative opportunities more quickly. For most sales organizations, this is how the game is won. To learn more please visit Oracle Social Network and Oracle Fusion Customer Relationship Management on oracle.com

    Read the article

  • Rendering Linear Gradients using the HTML5 Canvas

    - by dwahlin
    Related HTML5 Canvas Posts: Getting Started with the HTML5 Canvas Rendering Text with the HTML5 Canvas Creating a Line Chart using the HTML5 Canvas New Pluralsight Course: HTML5 Canvas Fundamentals Gradients are everywhere. They’re used to enhance toolbars or buttons and help add additional flare to a web page when used appropriately. In the past we’ve always had to rely on images to render gradients which works well, but isn’t necessarily the most efficient (although 1 pixel wide images do work well). CSS3 provides a great way to render gradients in modern browsers (see http://www.colorzilla.com/gradient-editor for a nice online gradient generator tool) but it’s not the only option. If you’re working with charts, games, multimedia or other HTML5 Canvas applications you can also use gradients and render them on the client-side without relying on images. In this post I’ll introduce how to use linear gradients and discuss the different functions that can be used to create them.   Creating Linear Gradients Linear gradients can be created using the 2D context’s createLinearGradient function. The function takes the starting x,y coordinates and ending x,y coordinates of the gradient:   createLinearGradient(x1, y1, x2, y2);   By changing the start and end coordinates you can control the direction that the gradient renders. For example, adding the following coordinates causes the gradient to render from left to right since the y value stays at 0 for both points while the x value changes from 0 to 200. var lgrad = ctx.createLinearGradient(0, 0, 200, 0); Here’s an example of how changing the coordinates affects the gradient direction:   Once a linear gradient object has been created you can set color stops using the addColorStop() function. It takes the location where the color should appear in the gradient with 0 being the beginning and 1 being at the end (0.5 would be in the middle) as well as the color to display in the gradient. lgrad.addColorStop(0, 'white'); lgrad.addColorStop(1, 'gray');   An example of combining createLinearGradient() with addColorStop() is shown next:   Using createLinearGradient() var canvas = document.getElementById('myCanvas'); var ctx = canvas.getContext('2d'); var lgrad = ctx.createLinearGradient(0, 0, 200, 0); lgrad.addColorStop(0, 'white'); lgrad.addColorStop(1, 'gray'); ctx.fillStyle = lgrad; ctx.fillRect(0, 0, 200, 200); ctx.strokeRect(0, 0, 200, 200); This code renders a white to gray gradient as shown next: A live example of using createLinearGradient() is shown next. Click the Result tab to see the code in action.   In the next post on the HTML5 Canvas I’ll take a look at radial gradients and how they can be used. In the meantime, if you’re interested in learning more about the HTML5 Canvas and how it can be used in your Web or Windows 8 applications, check out my HTML5 Canvas Fundamentals course from Pluralsight. It has over 4 1/2 hours of canvas goodness packed in it.

    Read the article

  • Routes for IIS Classic and Integrated Mode

    - by imran_ku07
    Introduction: The ASP.NET MVC routing feature makes it very easy to provide clean URLs. You just need to configure routes in the global.asax file to create an application with clean URLs. In most cases the routes you define will work in IIS 6 and in IIS 7 (or IIS 7.5) Classic and Integrated modes. But in some cases your routes may only work in IIS 7 Integrated mode, as in the case of using extensionless URLs in IIS 6 without a wildcard extension map. So in this article I will show you how to create different routes which work in IIS 6 and in IIS 7 Classic and Integrated modes.

    Description: Let's say that you need to create an application which must work in both Classic and Integrated mode. Also, you have no ability to set up a wildcard extension map in IIS. So you need to create two routes: one with an extensionless URL for Integrated mode and one with a URL with an extension for Classic mode.   routes.MapRoute( "DefaultClassic", // Route name "{controller}.aspx/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults ); routes.MapRoute( "DefaultIntegrated", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults );

    Now you have set up two routes, one for Integrated mode and one for Classic mode. You only need to ensure that the Integrated mode route matches only if the application is running in Integrated mode, and the Classic mode route matches only if the application is running in Classic mode. To make this work you need to create two custom constraints, one for Integrated and one for Classic mode. So replace the above routes with these routes,     routes.MapRoute( "DefaultClassic", // Route name "{controller}.aspx/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = UrlParameter.Optional }, // Parameter defaults new { mode = new ClassicModeConstraint() }// Constraints ); routes.MapRoute( "DefaultIntegrated", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = UrlParameter.Optional }, // Parameter defaults new { mode = new IntegratedModeConstraint() }// Constraints );

    The first route, which is for Classic mode, adds a ClassicModeConstraint, and the second route, which is for Integrated mode, adds an IntegratedModeConstraint. Next you need to add the implementation of these constraint classes:     public class ClassicModeConstraint : IRouteConstraint { public bool Match(HttpContextBase httpContext, Route route, string parameterName, RouteValueDictionary values, RouteDirection routeDirection) { return !HttpRuntime.UsingIntegratedPipeline; } } public class IntegratedModeConstraint : IRouteConstraint { public bool Match(HttpContextBase httpContext, Route route, string parameterName, RouteValueDictionary values, RouteDirection routeDirection) { return HttpRuntime.UsingIntegratedPipeline; } }

    HttpRuntime.UsingIntegratedPipeline returns true if the application is running in Integrated mode; otherwise, it returns false. So routes for Integrated mode are only matched when the application is running in Integrated mode, and routes for Classic mode are only matched when the application is not running in Integrated mode.

    Summary: When developing an application, developers are sometimes not sure whether it will be hosted on IIS 6 or on IIS 7 (or IIS 7.5) in Integrated or Classic mode. So it's a good idea to create separate routes for both Classic and Integrated mode, so that your application will use extensionless URLs where possible and URLs with an extension where extensionless URLs are not possible. In this article I showed you how to create separate routes for IIS Integrated and Classic mode. Hope you enjoy this article too.

    Read the article

  • Silverlight Cream for May 15, 2010 -- #862

    - by Dave Campbell
    In this Issue: Victor Gaudioso, Antoni Dol(-2-), Brian Genisio, Shawn Wildermuth, Mike Snow, Phil Middlemiss, Pete Brown, Kirupa, Dan Wahlin, Glenn Block, Jeff Prosise, Anoop Madhusudanan, and Adam Kinney. Shoutouts: Victor Gaudioso would like you to Checkout my Interview with Microsoft’s Murray Gordon at MIX 10 Pete Brown announced: Connected Show Podcast #29 With … Me! From SilverlightCream.com: New Silverlight Video Tutorial: How to Create Fast Forward for the MediaElement Victor Gaudioso's latest video tutorial is on creating the ability to fast-forward a MediaElement... check it out in the tutorial player itself! Overlapping TabItems with the Silverlight Toolkit TabControl Antoni Dol has a very cool tutorial up on the Toolkit TabItems control... not only is he overlapping them quite nicely but this is a very cool tutorial... QuoteFloat: Animating TextBlock PlaneProjections for a spiraling effect in Silverlight Antoni Dol also has a Blend tutorial up on animating TextBlock items... run the demo and you'll want to read the rest :) Adventures in MVVM – My ViewModel Base – Silverlight Support! Brian Genisio continues his MVVM tutorials with this update on his ViewModel base using some new C# 4.0 features, and fully supports Silverlight and WPF My Thoughts on the Windows Phone 7 Shawn Wildermuth gives his take on WP7. He included a port of his XBoxGames app to WP7 ... thanks Shawn! Silverlight Tip of the Day #20 – Using Tooltips in Silverlight I figured Mike Snow was going to overrun me with tips since I have missed a couple days, but there's only one! ... and it's on Tooltips. Animating the Silverlight opacity mask Phil Middlemiss has an article at SilverZine describing a Behavior he wrote (and is sharing) that turns a FrameworkElement into an opacity mask for it's parent container... cool demo on the page too. Breaking Apart the Margin Property in Xaml for better Binding Pete Brown dug in on a Twitter message and put some thoughts down about breaking a Margin apart to see about binding to the individual elements. Building a Simple Windows Phone App Kirupa has a 6-part tutorial up on building not-your-typical first WP7 application... all good stuff! Integrating HTML into Silverlight Applications Dan Wahlin has a post up discussing three ways to display HTML inside a Silverlight app. Hello MEF in Silverlight 4 and VB! (with an MVVM Light cameo) Glenn Block has a post up discussing MEF, MVVM, and it's in VB this time... and it's actually a great tutorial top to bottom... all source included of course :) Understanding Input Scope in Silverlight for Windows Phone Jeff Prosise has a good post up on the WP7 SIP and how to set the proper InputScope to get the SIP you want. Thinking about Silverlight ‘desktop’ apps – Creating a Standalone Installer for offline installation (no browser) Anoop Madhusudanan is discussing something that's been floating around for a while... installing Silverlight from, say, a CD or DVD when someone installs your app. He's got some good code, but be sure to read Tim Heuer and Scott Guthrie's comments, and consider digging deeper into that part. Using FluidMoveBehavior to animate grid coordinates in Silverlight Adam Kinney has a cool post up on animating an object using the FluidMotionBehavior of Blend 4... looks great moving across a checkerboard... check out the demo, then grab the code. Stay in the 'Light! 

    Read the article

  • Silverlight Cream for June 08, 2010 -- #877

    - by Dave Campbell
    In this Issue: Miroslav Miroslavov, Chris Klug, Beau, Christian Schormann(-2-), Dan Wahlin, Pete Brown, Michael S. Scherotter, Philipp Sumi, Andy Wigley, and Phil Middlemiss. Shoutouts: Mark Tucker set about learning Caliburn, and in the process is writing a Caliburn Book: Chapters 1-3 Jesse Liberty has a great link-laden post up about why we should all be learning/using Blend: Why Developers Should, Must, Do Care About The New Expression Blend be sure to read what he says about WP7 development, however! Charlie Kindel announced an Install problem with the Developer Tools CTP Refresh and the WP7 tools... check this out if you're having problems. John Papa has a good post up on the happenings yesterday: Expression Studio 4 Launch of Blend, SketchFlow, Encoder and More! Erik Mork & Company's latest "This Week in Silverlight" is titled First Drop: Prism v4 – First Drop is Available From SilverlightCream.com: Animated navigation between Pages Miroslav Miroslavov has Part 8 of his "Silverlight in Action" series up, detailing cool things from the CompleteIT site... this one is on Animated navigation between pages. Subtitling videos Chris Klug got a gig adding subtitles to videos for Microsoft (sweet) ... and no, not *that* kind of subtitles... read how he approached the final solution. Silverlight Watermark TextBox I'm not sure we can have too many Watermark TextBoxes, and neither does Beau, who sent me a link to this one... give it a dance and decide. Blend 4: Collaborative SketchFlow Feedback with SharePoint With the new Blend release, Christian Schormann has a post up describing the lashup to SharePoint for sharing SketchFlow and getting feedback. New Utility, Links, and Tutorials for Path-Based Layout Christian Schormann also has a collection of resources for Path-Based Layouts, including a utility "that lets you apply a whole bunch of position-specific effects without having to write any code"... lots of links to resources here. Tales from the Trenches – Building a Real-World Silverlight Line of Business Application Dan Wahlin draws on his recent experience and lays out some of the fun and pitfalls of building LOB apps in Silverlight... WCF, MVVM, slides, and code included WPF (and Silverlight): Choose your Fonts and Text Rendering Options Wisely Pete Brown has a great post up on using fonts wisely across multiple platforms... lots of info and good discussion in the comments as well. Ball Watch USA Remember the awesome watch Michael S. Scherotter did in Silverlight 1 and then converted to Updated Ball Trainmaster Cannonball Watch to Silverlight 2? Well... there's now a contest underfoot and 8 videos to help you get started... all good stuff, and good luck! ... Michael has a post up about the contest: Enter to Win a Ball Watch by Creating One in Silverlight Announcing Sketchables – Rapid Mockup Creation with SketchFlow By way of Jesse Liberty (http://jesseliberty.com/2010/06/08/why-developers-should-must-do-care-about-the-new-expression-blend/), this is a cool production by Philipp Sumi about a simple mockup framework he's created. Perst - a database for Windows Phone 7 Silverlight I think one of my first comments to Michael Washington back at the MVP Summit 2010 was that we'd need a database engine, and too cool, but we've got one, Andy Wigley discusses Perst in this post... to save you some time, here's the Perst site A Chrome and Glass Theme - Part 7 Phil Middlemiss has part 7 of his great theme-building series up... this time he's giving the accordion control a once-over.
Stay in the 'Light!

    Read the article

  • SQL SERVER – MSQL_XP – Wait Type – Day 20 of 28

    - by pinaldave
    In this blog post, I am going to discuss something from my field experience. While consulting, I have seen various wait types, but one of my customers, who has been using SQL Server for all his operations, had an interesting issue with a particular wait type. Our customer had more than 100 SQL Server instances running, and MSQL_XP was the top wait type across the whole environment. While running sp_who2 and other diagnostic queries, I could not immediately figure out what the issue was because the query with that kind of wait type was nowhere to be found. After a day of research, I was relieved that the solution was very easy to figure out. Let us continue discussing this wait type.

    From Books Online: MSQL_XP occurs when a task is waiting for an extended stored procedure to end. SQL Server uses this wait state to detect potential MARS application deadlocks. The wait stops when the extended stored procedure call ends.

    MSQL_XP Explanation: This wait type is created because of an extended stored procedure. Extended stored procedures are executed within SQL Server; however, SQL Server has no control over them. Unless you know what the code for the extended stored procedure is and what it is doing, it is impossible to understand why this wait type is coming up.

    Reducing MSQL_XP wait: As discussed, it is hard to understand an extended stored procedure if the code for it is not available. In the scenario described at the beginning of this post, our client was using a third-party backup tool. The third-party backup tool was using an extended stored procedure. After we learned that this wait type was coming from the extended stored procedure of the backup tool they were using, we contacted the tech team of its vendor. The vendor admitted that the code was not optimal in some places, and within that day they had provided a patch. Once the updated version was installed, the issue with this wait type disappeared. As viewed in the wait statistics of all the 100+ SQL Server instances, the MSQL_XP wait type was no longer found. In simpler terms, you must first identify which extended stored procedure is creating the MSQL_XP wait type and see if you can get in touch with the creator of the SP so you can help them optimize the code. If you have encountered this MSQL_XP wait type, I encourage all of you to write how you managed it. Please do not mention the name of the vendor in your comment as I will not approve it. The focus of this blog post is to understand the wait types, not to talk about others.

    Read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Books Online for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
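    If you want to check how significant this wait is on your own server, a quick way is to query the cumulative wait statistics DMV (a minimal sketch; the counters accumulate since the last instance restart or manual clear):

        -- How much time has this instance spent waiting on extended stored procedure calls?
        SELECT wait_type,
               waiting_tasks_count,
               wait_time_ms,
               signal_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type = 'MSQL_XP';

    A large and steadily growing wait_time_ms here is the hint to start looking at whatever extended stored procedures (backup tools, monitoring agents, and so on) are running on the instance.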

    Read the article

  • CRMIT’s HIGH VALUE CRM++ PLUGINS FOR CRM On DEMAND

    - by Soumo Das
    With customer satisfaction and experience being two of the most important considerations, businesses these days are on the lookout for automation tools that are world class, agile and keep quality at their core. CRMIT has developed such tools using cutting-edge technologies and abstracting industry best practices and R&D.

    Self Service Portal: With customers being so meticulous about regular updates and reliable access to their data, administrators just cannot afford to walk a thin line. Surviving without a resource that keeps track of customer requirements for services, available 24x7, can severely affect productivity. In such a scenario, CRMIT’s Self Service Portal (SSP) is the best solution. This not only tracks the required customer data, but also allows companies to stay in tune with their employees, vendors and stakeholders. One can directly sign up to become a CRMOD contact and SSP user. One need not use the database, as operations and interactions are handled at run time. This is a fully configurable solution that tracks results periodically, thus making it easy for end users. It also offers better security and data visibility that enables users to progress smoothly.

    Quote and Order Management: When dealing with quotes, contracts and orders becomes complicated, only Quote & Order Management can work as a one-stop solution. CRMIT offers this great tool for managing all this information and for taking care of customer orders and service requirements. This CRM On Demand plug-in allows one to create a new quote or copy an existing one. Products can be added directly from the product list of CRMOD and the pricing is calculated automatically. Quotes can be generated and mailed to external users in PDF, HTML and XLS formats. This not only allows management of quotes in an enhanced manner, but also supports various billing and tax calculation features that make work effortless.

    Report Scheduler: When it comes to analyzing and providing statistics on the various business processes currently running in an organization, one cannot depend on manual updates, which may sometimes be inaccurate or even delayed. CRMIT provides a powerful SaaS-based solution - Report Scheduler - that allows CRM users to schedule reports at the desired frequencies and then receive them as email attachments at the scheduled time. With this powerful tool, administrators can control the report scheduler to assign specific reports to specific users. After that, users can log in and schedule any assigned report for viewing at particular intervals on a monthly, weekly or daily basis. Additionally, users can also copy the mail to external users and can choose the preferred format. The best part is that sharing business data with third parties becomes easy with this, and for viewing reports, users need not log into their CRMOD account.

    CRM On Demand Offline Solution: CRM On Demand Offline is another great CRM++ extension that allows one to work in both online and offline modes. Synchronizing the two modes is absolutely easy and makes for effortless working. CRM OD Offline works as an automation tool that not only improves efficiency, but also works as a backup in most cases. It is readily available as a Windows application installer and requires users to be online only while validating and synchronizing. The best part is that work done in offline mode also serves as a backup.

    Read the article

  • upgraded "Compiz' and "Unity", now no Unity 3D on screen

    - by user18432
    Today when I logged in to my Ubuntu 12.04, the update manager told me of some upgrades. Compiz and Unity were in those upgrades. After I installed the upgrades, I can no longer get the Unity panel on the left side of screen or the systray at the top of screen. I now have to run Ubuntu 12.04 with Unity 2D. My laptop is a HP Pavilion dv9000 with Nvidia GeForce Go 7600 video. I tried to run "unity --reset" but it says there are serious issues with compiz. I have cut & pasted the read out from the terminal below. [09:35:02] xxxxxxx@L01U1204:~$ unity --reset unity-panel-service: no process found Checking if settings need to be migrated ...no Checking if internal files need to be migrated ...no Backend : gconf Integration : true Profile : unity Adding plugins Initializing core options...done compiz (core) - Warn: failed to receive ConfigureNotify event on 0x2e00004 compiz (core) - Warn: failed to receive ConfigureNotify event on 0x580005a compiz (core) - Warn: failed to receive ConfigureNotify event on 0x3600006 compiz (core) - Warn: failed to receive ConfigureNotify event on 0x3200255 compiz (core) - Warn: failed to receive ConfigureNotify event on 0x1600002 compiz (core) - Warn: failed to receive ConfigureNotify event on 0x1400002 Initializing composite options...done Initializing opengl options...done Initializing decor options...done Initializing vpswitch options...done Initializing snap options...done Initializing mousepoll options...done Initializing resize options...done Initializing place options...done Initializing move options...done Initializing wall options...done Initializing grid options...done I/O warning : failed to load external entity "/home/brwright/.compiz/session/10afaca1703486b216133648409481824100000130110002" Initializing session options...done Initializing gnomecompat options...done Initializing animation options...done Initializing fade options...done Initializing unitymtgrabhandles options...done Initializing workarounds options...done Initializing scale options...done compiz (expo) - Warn: failed to bind image to texture Initializing expo options...done Initializing ezoom options...done compiz (core) - Error: Couldn't load plugin '/usr/lib/compiz/libunityshell.so' : /usr/lib/compiz/libunityshell.so: undefined symbol: _ZNK5unity4dash10Controller6windowEv compiz (core) - Error: Couldn't load plugin 'unityshell' compiz (core) - Warn: unhandled ConfigureNotify on 0x7000090! compiz (core) - Warn: this should never happen. you should probably file a bug about this. compiz (core) - Warn: unhandled ConfigureNotify on 0x7000093! compiz (core) - Warn: this should never happen. you should probably file a bug about this. compiz (core) - Warn: unhandled ConfigureNotify on 0x7000096! compiz (core) - Warn: this should never happen. you should probably file a bug about this. compiz (core) - Warn: unhandled ConfigureNotify on 0x7000099! compiz (core) - Warn: this should never happen. you should probably file a bug about this. compiz (core) - Warn: unhandled ConfigureNotify on 0x700009c! compiz (core) - Warn: this should never happen. you should probably file a bug about this. compiz (core) - Warn: unhandled ConfigureNotify on 0x700009f! compiz (core) - Warn: this should never happen. you should probably file a bug about this. 
Initializing annotate options...done Initializing blur options...done Initializing clone options...done Initializing colorfilter options...done Initializing commands options...done Initializing cube options...done Initializing imgjpeg options...done Initializing kdecompat options...done Initializing mag options...done Initializing neg options...done Initializing obs options...done Initializing opacify options...done Initializing put options...done Initializing resizeinfo options...done Initializing ring options...done Initializing rotate options...done Initializing scaleaddon options...done Initializing screenshot options...done Initializing shift options...done Initializing staticswitcher options...done Initializing switcher options...done Initializing thumbnail options...done Initializing unityshell options...done Initializing water options...done Initializing winrules options...done Initializing wobbly options...done Setting Update "main_menu_key" Setting Update "run_key" Starting gtk-window-decorator As you can see the terminal never comes back to the CI prompt. I must do a control C to get to the CI prompt, but then the OS is frozen. I have to reboot and run Unity 2D in able to do anything on my laptop. I hope I have explained this enough and provided some useful info. I am at a loss to understand what the problem is, or what exactly what is causing the problem. Is it Unity or Compiz? Can anyone help?

    Read the article

  • SQL SERVER – Weekend Project – Experimenting with ACID Transactions, SQL Compliant, Elastically Scalable Database

    - by pinaldave
    Database technology is a huge world. I always like to explore beyond what I know and share what I learn. The weekend is the best time, when I sit around downloading random software on my machine, which I like to call a lab machine (it is a pretty old laptop, hardly qualifying as a lab machine), and experiment with it. There are so many free betas available for download that it’s hard to keep track and even harder to find the time to play with very many of them.  This blog is about one you shouldn’t miss if you are interested in learning about various relational databases. NuoDB just released their Beta 7.  I had already downloaded their Beta 6 and yesterday did the same for 7.  My impression is that they are onto something very, very interesting.  In fact, it might be something really promising in terms of database elasticity, scale and operational cost reduction.

    The folks at NuoDB say they are working on the world’s first “emergent” database, which they tout as a brand new transactional database that is intended to dramatically change what’s possible with OLTP.  It is SQL compliant, guarantees ACID transactions, yet scales elastically on heterogeneous and decentralized cloud-based resources. An interesting note for sure, making me explore more. Based on what I’ve seen so far, they are solving the architectural challenge that exists between elastic, cloud-based compute infrastructures designed to scale out in response to workload requirements versus the traditional relational database management system’s architecture of central control.

    Here’s my experience with the NuoDB Beta 6 so far: First, they pretty much threw away all the features you’d associate with existing RDBMS architectures except the SQL and ACID transactions, which they were smart to keep.  It looks like they have incorporated a number of the big ideas from various algorithms, systems and techniques to achieve maximum DB scalability. From a user’s perspective, the NuoDB Beta software behaves like any other traditional SQL database and seems to offer all the benefits users have come to expect from standards-based SQL solutions. One of the interesting features is that one can run a transactional node and a storage node on my Windows laptop as well as on other platforms – indeed interesting for sure. It’s quite amazing to see a database elastically scale across machine boundaries. So, one of the basic NuoDB concepts is that as you need to scale out, you can easily use more inexpensive hardware when/where you need it.  This is unlike what we have traditionally done to scale a database for an application – we replace the hardware with something more powerful (faster CPU and disks). This is where I started to feel like NuoDB is on to something that has the potential to elastically scale on commodity hardware while reducing operational expense for a big OLTP database to a degree we’ve never seen before. NuoDB is able to fully leverage the cloud in an asynchronous and highly decentralized manner – while providing both SQL compliance and ACID transactions. Basically, what NuoDB is doing is so new that it is all hard to believe until you’ve experienced it in action.  I will keep you up to date as I test the NuoDB Beta 7, but if you are developing a web-scale application or have an on-premise app you are thinking of moving to the cloud, testing this beta is worth your time. If you do try it, let me know what you think.
    Before I say anything more, I am going to do more experiments and more tests on this product and compare it with other existing similar products. For me it was a weekend well spent on learning something new. I encourage you to download the Beta 7 version and share your opinions here. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Windows Azure Use Case: Agility

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: Agility in this context is defined as the ability to quickly develop and deploy an application. In theory, the speed at which your organization can develop and deploy an application on available hardware is identical to what you could deploy in a distributed environment. But in practice, this is not always the case. Having an option to use a distributed environment can be much faster for the deployment and even the development process. Implementation: When an organization designs code, they are essentially becoming a Software-as-a-Service (SaaS) provider to their own organization. To do that, the IT operations team becomes the Infrastructure-as-a-Service (IaaS) to the development teams. From there, the software is developed and deployed using an Application Lifecycle Management (ALM) process. A simplified view of an ALM process is as follows: Requirements Analysis Design and Development Implementation Testing Deployment to Production Maintenance In an on-premise environment, this often equates to the following process map: Requirements Business requirements formed by Business Analysts, Developers and Data Professionals. Analysis Feasibility studies, including physical plant, security, manpower and other resources. Request is placed on the work task list if approved. Design and Development Code written according to organization’s chosen methodology, either on-premise or to multiple development teams on and off premise. Implementation Code checked into main branch. Code forked as needed. Testing Code deployed to on-premise Testing servers. If no server capacity available, more resources procured through standard budgeting and ordering processes. Manual and automated functional, load, security, etc. performed. Deployment to Production Server team involved to select platform and environments with available capacity. If no server capacity available, standard budgeting and procurement process followed. If no server capacity available, systems built, configured and put under standard organizational IT control. Systems configured for proper operating systems, patches, security and virus scans. System maintenance, HA/DR, backups and recovery plans configured and put into place. Maintenance Code changes evaluated and altered according to need. In a distributed computing environment like Windows Azure, the process maps a bit differently: Requirements Business requirements formed by Business Analysts, Developers and Data Professionals. Analysis Feasibility studies, including budget, security, manpower and other resources. Request is placed on the work task list if approved. Design and Development Code written according to organization’s chosen methodology, either on-premise or to multiple development teams on and off premise. Implementation Code checked into main branch. Code forked as needed. Testing Code deployed to Azure. Manual and automated functional, load, security, etc. performed. Deployment to Production Code deployed to Azure. Point in time backup and recovery plans configured and put into place.(HA/DR and automated backups already present in Azure fabric) Maintenance Code changes evaluated and altered according to need. This means that several steps can be removed or expedited. 
It also means that the business function requesting the application can be held directly responsible for the funding of that request, speeding the process further since the IT budgeting process may not be involved in the Azure scenario. An additional benefit is the “Azure Marketplace”, In effect this becomes an app store for Enterprises to select pre-defined code and data applications to mesh or bolt-in to their current code, possibly saving development time. Resources: Whitepaper download- What is ALM?  http://go.microsoft.com/?linkid=9743693  Whitepaper download - ALM and Business Strategy: http://go.microsoft.com/?linkid=9743690  LiveMeeting Recording on ALM and Windows Azure (registration required, but free): http://www.microsoft.com/uk/msdn/visualstudio/contact-us.aspx?sbj=Developing with Windows Azure (ALM perspective) - 10:00-11:00 - 19th Jan 2011

    Read the article

  • Integrating Windows Form Click Once Application into SharePoint 2007 &ndash; Part 1 of 2

    - by Kelly Jones
    Last year, I had the opportunity to build a solution that involved integrating a Windows Form application into a SharePoint 2007 (WSS version 3.0). In this post, I’ll layout our architecture thinking and in part two, I’ll describe the technical details. Business Case Our challenge was this: we needed an easy way for a small group of our users to upload documents, in batches.  They also needed to quickly set the meta data values, as well as set security on individual files. Using the out of the box uploads just didn’t fit.  The single file upload allows set the meta data, but our users would be uploading dozens of files.  The multiple upload would allow our users to upload batches of files, but it doesn’t allow them to set the meta data during upload.  Also, neither upload method allows the users to set the permissions on the file. Our Solution We looked into building a web control of some kind, but ruled that out due to security complexities (if I remember correctly).  Another option would have been using a technology like Silverlight (or Flash?), but our team didn’t have the skills necessary to build with these. So, after looking at what was technically possible, and also what skills our team had, we settled on a Windows Form application.  We also decided to deliver it to the clients via Click Once, so we would have the ability to easily update the application in the future. Lessons Learned After deploying our solution, we’ve learned a few lessons.  First, you’ll need to have the .Net Framework installed on the client computers.  We knew this, but we still ran into issues making sure our users had the proper framework version installed.  Second, we had issues with authentication.  Our issues were due to our testing domain being a separate Active Directory domain from the domain that our end users and their workstations were members of.  (See my earlier post about Clearing Saved Passwords for the fix to our problem). Our third issue was how we dealt with uploading files that were named the same.  Our application would replace the existing file with the new file, which is the way we expected it to work.  However, our users wanted to upload weekly reports, named the same as the previous week.  We solved this by using folders within the document library to keep the sets of reports separate from previous weeks. One last thing to consider before implementing a solution like this, is what browsers and platforms your users will be working from.  We only needed to support IE and Windows, which works fine.  However, if you need to support Firefox, there are add-ons that allow Click Once to work with Firefox.  This is still a Windows only solution though.  In order to support Macs, you’d have to focus on either browser techniques (AJAX?) or Silverlight/Flash. Summary Our users are happy with the Click Once app.  It allowed them to move all of their content to our SharePoint site in under a couple hours, which they were thrilled with.  We’re happy because we can easily deploy updates, our development time was small, and we met all of our business requirements.

    Read the article

  • How to prepare for an interview presentation!

    - by Tim Koekkoek
    During an interview process you might be asked to prepare a presentation as one of the steps in the recruitment process. Below are some tips to help you prepare for what can feel like a daunting part of that process.

    Main purpose of the presentation: Always keep in mind what the presentation is meant to convey. Generally speaking, an interview presentation lets the company check whether you can represent and sell the organization (and yourself) to the internal and external stakeholders of the position you are applying for. It is also often used to check whether you can structure and explain your experience and thoughts in a convincing manner. If you are unsure about the purpose of the presentation, feel free to ask your recruiter for more information.

    Preparation: As with every task, preparation is key, and an interview presentation is no exception. Know who your audience is and adapt the presentation to them, so that you present the facts they would want to hear. Practice the presentation beforehand; this will make you more confident in your presenting skills. Also estimate its length, as presentations or pitches during a recruitment process are often capped to a time limit.

    Structure: Every presentation should have a beginning, a middle and an end. Give an overview of your presentation and tell the audience what they can expect. The presentation should have a logical order and a clear message; build up to your key message with strong arguments and evidence. Convey your points with conviction and make sure you believe the message you bring forward. If you don't believe it yourself, the audience definitely won't!

    Delivery: When you think back on successful presentations you have seen, the presenter was most likely standing up. If asked to do a presentation, follow this example: standing up shows confidence and control. Also relax, and speak slowly and with enough volume for the audience to hear you. Speaking slowly gives the audience time to absorb the information you are providing.

    Visual: Using PowerPoint, or an equivalent, makes it easy to produce a visually attractive presentation. Remember, however, that the visual aids are there to support you, not for you to read word for word from the slides!

    Questions: When starting your presentation, it is a good idea to tell your audience that you will deal with any questions at the end. This keeps your train of thought and the flow of the presentation intact, and answering questions at the end may give you additional opportunities to expand on your topic. Some people might play the "devil's advocate" role and confront you with opposing opinions; if this happens, take your time to reiterate your points and remain professional in your response. A good way to deal with this is to involve other members of the audience and ask for their opinions, so it becomes a group discussion. This also shows leadership, as you are open to discussion and to other opinions.

    If you are interested in more tips and tricks for your interview process, have a look at Competency Based Interview Tips and How to prepare for a Telephone Interview. For more information about Oracle and our vacancies, please visit our Facebook community and our careers website.

    Read the article

  • Easy and Rapid Deployment of Application Workloads with Oracle VM

    - by Antoinette O'Sullivan
    Oracle VM is designed for easy and rapid deployment of application workloads. In addition to allowing for rapid deployment of an entire application stack, Oracle VM now gives administrators more fine-grained control of the application payloads inside the virtual machine. To get started on Oracle VM Server for x86 or Oracle VM Server for SPARC, what better solution than to take the corresponding training course. You can take this training from your own desk by choosing from a selection of live-virtual events already on the schedule on the Oracle University Portal. Alternatively, you can travel to an education center to take these courses. Below is a selection of in-class events already on the schedule for each course.

    Oracle VM Administration: Oracle VM Server for x86 (location: date, delivery language):
    - Paris, France: 11 December 2013, French
    - Rome, Italy: 22 April 2014, Italian
    - Budapest, Hungary: 4 November 2013, Hungarian
    - Riga, Latvia: 3 February 2014, Latvian
    - Oslo, Norway: 9 December 2013, English
    - Warsaw, Poland: 12 February 2014, Polish
    - Ljubljana, Slovenia: 25 November 2013, Slovenian
    - Barcelona, Spain: 29 October 2013, Spanish
    - Istanbul, Turkey: 23 December 2013, Turkish
    - Cairo, Egypt: 1 December 2013, Arabic
    - Johannesburg, South Africa: 9 December 2013, English
    - Melbourne, Australia: 12 February 2014, English
    - Sydney, Australia: 25 November 2013, English
    - Singapore: 27 November 2013, English
    - Montreal, Canada: 18 February 2014, English
    - Ottawa, Canada: 18 February 2014, English
    - Toronto, Canada: 18 February 2014, English
    - Phoenix, AZ, United States: 18 February 2014, English
    - Sacramento, CA, United States: 18 February 2014, English
    - San Francisco, CA, United States: 18 February 2014, English
    - San Jose, CA, United States: 18 February 2014, English
    - Denver, CO, United States: 22 January 2014, English
    - Roseville, MN, United States: 10 February 2014, English
    - Edison, NJ, United States: 18 February 2014, English
    - King of Prussia, PA, United States: 18 February 2014, English
    - Reston, VA, United States: 26 March 2014, English

    Oracle VM Server for SPARC: Installation and Configuration (location: date, delivery language):
    - Prague, Czech Republic: 2 December 2013, Czech
    - Paris, France: 9 December 2013, French
    - Utrecht, Netherlands: 9 December 2013, Dutch
    - Madrid, Spain: 28 November 2013, Spanish
    - Dubai, United Arab Emirates: 5 February 2014, English
    - Melbourne, Australia: 31 October 2013, English
    - Sydney, Australia: 10 February 2014, English
    - Tokyo, Japan: 6 February 2014, Japanese
    - Petaling Jaya, Malaysia: 23 December 2013, English
    - Auckland, New Zealand: 21 November 2013, English
    - Singapore: 7 November 2013, English
    - Toronto, Canada: 25 November 2013, English
    - Sacramento, CA, United States: 2 December 2013, English
    - San Francisco, CA, United States: 2 December 2013, English
    - San Jose, CA, United States: 2 December 2013, English
    - Caracas, Venezuela: 5 November 2013, Spanish

    Read the article

  • Best Practices for Handing over Legacy Code

    - by PersonalNexus
    In a couple of months a colleague will be moving on to a new project and I will be inheriting one of his projects. To prepare, I have already ordered Michael Feathers' Working Effectively with Legacy Code. But this book, as well as most questions on legacy code I have found so far, is concerned with inheriting code as-is. In this case I actually have access to the original developer and we do have some time for an orderly hand-over.

    Some background on the piece of code I will be inheriting:
    - It's functioning: There are no known bugs, but as performance requirements keep going up, some optimizations will become necessary in the not too distant future.
    - Undocumented: There is pretty much zero documentation at the method and class level. What the code is supposed to do at a higher level, though, is well understood, because I have been writing against its API (as a black box) for years.
    - Only higher-level integration tests: There are only integration tests that check proper interaction with other components via the API (again, black box).
    - Very low-level, optimized for speed: Because this code is central to an entire system of applications, a lot of it has been optimized several times over the years and is extremely low-level (one part has its own memory manager for certain structs/records).
    - Concurrent and lock-free: While I am very familiar with concurrent and lock-free programming and have actually contributed a few pieces to this code, it adds another layer of complexity.
    - Large codebase: This particular project is more than ten thousand lines of code, so there is no way I will be able to have everything explained to me.
    - Written in Delphi: I'm just going to put this out there, although I don't believe the language to be germane to the question; this type of problem is language-agnostic.

    I was wondering how the time until his departure would best be spent. Here are a couple of ideas:
    - Get everything to build on my machine: Even though everything should be checked into source control, who hasn't forgotten to check in a file once in a while? This should probably be the first order of business.
    - More tests: I would like more class-level unit tests so that any bugs I introduce when making changes can be caught early on, but the code as it is now is not testable (huge classes, long methods, too many mutual dependencies).
    - What to document: For starters it would be best to focus documentation on those areas of the code that would otherwise be difficult to understand, e.g. because of their low-level/highly optimized nature. I am afraid there are a couple of things in there that might look ugly and in need of refactoring/rewriting but are actually optimizations that were put in there for a good reason I might miss (cf. Joel Spolsky, Things You Should Never Do, Part I).
    - How to document: I think some class diagrams of the architecture and sequence diagrams of critical functions, accompanied by some prose, would be best.
    - Who to document: I was wondering whether it would be better to have him write the documentation or have him explain it to me so that I can write it. I am afraid that things which are obvious to him but not to me would otherwise not be covered properly.
    - Refactoring using pair programming: This might not be possible due to time constraints, but maybe I could refactor some of his code to make it more maintainable while he is still around to provide input on why things are the way they are.

    Please comment on and add to this. Since there isn't enough time to do all of this, I am particularly interested in how you would prioritize.

    Read the article

  • Using the StopWatch class to calculate the execution time of a block of code

    - by vik20000in
      Many times, while doing performance tuning of some class, web page, component, or control, we first measure the time currently taken to execute that code. This helps in locating the part of the code that is actually causing the performance issue and in measuring the amount of improvement gained by making changes. The measurement is important because it helps us understand the problem in the code and helps us write better code next time (as we have already learnt what kind of improvement can be made with different code).

      Normally developers create two objects of the DateTime class. The exact time is captured before and after the code whose performance needs to be measured, and the difference between the two objects tells us the time spent in the code being measured. Below is an example of this approach:

            DateTime dt1, dt2;
            dt1 = DateTime.Now;
            for (int i = 0; i < 1000000; i++)
            {
                string str = "string";
            }
            dt2 = DateTime.Now;
            TimeSpan ts = dt2.Subtract(dt1);
            Console.WriteLine("Time Spent : " + ts.TotalMilliseconds.ToString());

      The above code works, but the .NET Framework also provides another way to capture the time spent in the code without as much effort (creating two DateTime objects, a TimeSpan object, and so on). We can use the built-in Stopwatch class to get the exact time spent. Below is an example of the same work done with the help of the Stopwatch class:

            Stopwatch sw = Stopwatch.StartNew();
            for (int i = 0; i < 1000000; i++)
            {
                string str = "string";
            }
            sw.Stop();
            Console.WriteLine("Time Spent : " + sw.Elapsed.TotalMilliseconds.ToString());

      [Note: the Stopwatch class resides in the System.Diagnostics namespace.] If you use the Stopwatch class, measuring the time spent takes very little effort and the timing is more precise. Vikram
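      As an aside (not from the original post), the same pattern can be wrapped in a small reusable helper so the Stopwatch plumbing is written only once. The PerfTimer class and Measure method below are illustrative names, not an existing API:

            using System;
            using System.Diagnostics;

            static class PerfTimer
            {
                // Runs the supplied action once and returns the elapsed time in milliseconds.
                public static double Measure(Action action)
                {
                    Stopwatch sw = Stopwatch.StartNew();
                    action();
                    sw.Stop();
                    return sw.Elapsed.TotalMilliseconds;
                }
            }

      Used this way, the loop from the example above could be measured with a single call:

            double ms = PerfTimer.Measure(() => { for (int i = 0; i < 1000000; i++) { string str = "string"; } });
            Console.WriteLine("Time Spent : " + ms);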

    Read the article

  • Silverlight Cream for March 30, 2010 -- #825

    - by Dave Campbell
    In this Issue: Jeremy Likness, Tim Greenfield, Tim Heuer, ondrejsv, XAML Ninja, Nikhil Kothari, Sergey Barskiy, Shawn Oster, smartyP, Christian Schormann(-2-), and John Papa and Glenn Block.

    Shoutouts:
    - Victor Gaudioso produced a RefCard for DZone: Getting Started with Silverlight and Expression Blend. Way to go Victor... it looks great!
    - Gavin Wignall announced Metia's launch of a FourSquare and Bing maps mash-up, called Near.me
    - Cheryl Simmons talks about VS2010 and the design surface: Changing Templates with the Silverlight Designer (and seeing the changes immediately)
    - Michael S. Scherotter posted that the New York Times Silverlight Kit was Updated for Windows Phone 7 Series
    - Jaime Rodriguez posted about 2 free chapters in his new book (with Yochay Kiriaty): A Journey Into Silverlight On Windows Phone, via Learning Windows Phone Programming
    - Did you know there was "MSDN Radio"? Tim Heuer posted follow-up answers to this morning's show: MSDN Radio follow-up answers: Prism for Silverlight, DomainServices and relationships
    - Michael Klucher posted a great set of links for WP7 game development this morning: Great Game Development Tutorials for Windows Phone
    - Zhiming Xue has 3 pages of synopsis and links for everything Windows Phone at MIX. This is the 1st, but at the top of the pages are links to the other two: Windows Phone 7 Content From MIX10 – Part I

    From SilverlightCream.com:
    - Using WriteableBitmap to Simplify Animations with Clones: Jeremy Likness takes a break from his LOB posts to demonstrate a page flip animation, using WriteableBitmap and clones to simplify the animation.
    - SAX-like Xml parsing: Want some experience or fun with Rx? Tim Greenfield has a post up on building an observable XmlReader.
    - Installing Silverlight applications without the browser involved: Last night I blogged Mike Taulty's take on the "Silent Install" for an OOB app; tonight I'm posting Tim Heuer's insight on the topic.
    - How to: Create computed/custom properties for sample data in Blend/SketchFlow: ondrejsv posted an example of digging into the files that control the sample data for Blend to get what you really want.
    - PathListBox Adventures – radial layout: Check out the radial layout XAML Ninja did using the PathListBox... and all the code is available.
    - RIA Services and Validation: Nikhil Kothari has a great (duh!) post up that follows his Silverlight TV episode on the same subject: RIA Services and validation... lots of good external links also.
    - Windows Phone 7 Application with OData: Sergey Barskiy built an OData WP7 app using the feed from MIX10. You can see a list of sessions, and click on one to see details.
    - Getting Blur And DropShadow to work in the Windows Phone Emulator: Shawn Oster responds to some forum questions about Blur and DropShadow effects not showing up in the WP7 emulator, and gives the code trick we have to use for now.
    - Metro Icons for Windows Phone 7: We all got the other icon set for WP7 from MSDN, but smartyP pulled the Metro icons from the PPT deck of the MIX10 presentations... good job!
    - Fonts in SketchFlow: Christian Schormann talks about fonts in SketchFlow, where they live on your machine, and how you can use them.
    - Blend 4: About Path Layout, Part III: Christian Schormann also has Part III of his epic tutorial up on Path Layout and Blend. This one is on dynamic resizing layouts, and he has links back to the other two if you missed them... or you can find them with a search at SilverlightCream... :)
    - Simple ViewModel Locator for MVVM: The Patients Have Left the Asylum: John Papa and Glenn Block teamed up to solve the View First model without the maintenance involved with the ViewModel locator, by using MEF. It only took these guys an hour... sigh... :)

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Is there a low carbon future for the retail industry?

    - by user801960
    Recently Oracle published a report in conjunction with The Future Laboratory and a global panel of experts to highlight the issue of energy use in modern industry and the serious need to reduce carbon emissions radically by 2050: emissions must be cut by 80-95% relative to 1990 levels. What can the retail industry do to keep up with this? There are three key areas of the retail industry where carbon emissions can be cut: manufacturing, transport and IT.

    Manufacturing: Naturally, manufacturing is going to be a big area where businesses across all industries will be forced to make considerable savings in carbon emissions as well as other forms of pollution. Many retailers of all sizes use third-party factories and have little control over each factory's specific environmental impact, but retailers can reduce that impact by managing orders more efficiently: better planning for stock requirements means economies of scale both financially and environmentally. The John Lewis Partnership has made detailed commitments to reducing manufacturing and packaging waste on both its own-brand products and products it sources from third-party suppliers. It aims to divert 95 percent of its operational waste from landfill by 2013, which is a huge logistics challenge. The John Lewis Partnership's website provides a large amount of information on its responsibilities towards the environment.

    Transport: Similarly to manufacturing, tightening up logistical planning for stock distribution will save carbon emissions from haulage. More accurate supply and demand analysis means less stock re-allocation after initial distribution, and better warehouse management means more efficient stock distribution. UK grocery retailer Morrisons has introduced double-decked trailers to its haulage fleet and adjusted distribution logistics accordingly to reduce the number of kilometres travelled by the fleet. Morrisons measures route-planning efficiency in terms of cases moved per kilometre and has, over the last two years, increased the number of cases per kilometre by 12.7%. See the Morrisons Corporate Responsibility report for more information.

    IT: IT infrastructure is often overlooked when businesses first consider environmental efficiency. Datacentres and web servers often need to run 24/7 to handle both consumer orders and internal logistics, which requires a lot of energy and puts out a lot of heat. Many businesses are lowering environmental impact by reducing IT system fragmentation in their offices, while an increasing number are outsourcing their datacentres to cloud-based services. Using centralised datacentres reduces power usage at smaller offices, and using cloud-based services means the datacentres can be based in a more environmentally friendly location. For example, Facebook is opening a massive datacentre in Sweden, close to the Arctic Circle, to reduce the need for artificial cooling. In addition, moving to a cloud-based solution makes IT services more easily scalable, reducing redundant IT systems that would otherwise still use energy. In store, the UK's Carbon Trust reports that, on average, lighting accounts for 25% of a retailer's electricity costs, and for grocery retailers up to 50% of the electricity bill comes from refrigeration units. On a smaller scale, retailers can invest in greener technologies in store and in their offices.

    The report concludes that the widely shared objectives of energy security, reduced emissions and continued economic growth depend on the development of a smart grid capable of delivering energy efficiency and demand response, as well as integrating renewable and variable sources of energy. The report is available to download from http://emeapressoffice.oracle.com/imagelibrary/detail.aspx?MediaDetailsID=1766. I'd be interested to hear your thoughts on the report.

    Read the article

  • OSB and Coherence Integration

    - by mark.ms.smith
    Anyone who has tried to manage Coherence nodes or to cache results in OSB will appreciate the new functionality now available. As of WebLogic Server 10.3.4, you can use the WebLogic Administration Server, via the Administration Console or WLST, and the Java-based Node Manager to manage and monitor the life cycle of stand-alone Coherence cache servers. This is a great step forward, as the previous options mainly involved writing your own scripts. You can find an excellent description of how this works at James Bayer's blog, and the WebLogic documentation here.

    As of Oracle Service Bus 11gR1 (11.1.1.3.0), OSB supports service result caching for Business Services with Coherence. If you have Business Services that return somewhat static results that do not change often, you can configure them to cache results. For Business Services that use result caching, you can control the time to live for the cached result. After the cached result expires, the next Business Service call invokes the back-end service to get the result; this result is then stored in the cache for future requests. I'm thinking that this caching functionality would be perfect for cross-reference data that is refreshed nightly by batch. You can find the OSB Business Service documentation here.

    Result caching in a dedicated JVM: This example demonstrates the new features by configuring an OSB Business Service to cache results in a separate Coherence JVM managed by WebLogic. The reason you may want a separate, dedicated JVM is that the result cache data could potentially be quite large and you may want to protect your OSB Java heap. In this example, the client calls an OSB Proxy Service to get Employee data based on an Employee Id. Using a Business Service, OSB calls an external system. The results are automatically cached, and when the service is called again the respective results are retrieved from the cache rather than from the external system.

    Step 1 - Set up your Coherence Server: Via the OSB Administration Server Console, create the Coherence Server to be used as the results cache. Here are the configured Coherence Server arguments from the Server Start tab (note that I'm using the default Cache Config and Override files in the domain):

    -Xms256m -Xmx512m -XX:PermSize=128m -XX:MaxPermSize=256m -Dtangosol.coherence.override=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-override.xml -Dtangosol.coherence.cluster=OSB-cluster -Dtangosol.coherence.cacheconfig=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-cache-config.xml -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dcom.sun.management.jmxremote

    Just in case you need it, here is my Coherence Server classpath:

    /app/middleware/jdev_11.1.1.4/oracle_common/modules/oracle.coherence_3.6/coherence.jar:/app/middleware/jdev_11.1.1.4/modules/features/weblogic.server.modules.coherence.server_10.3.4.0.jar:/app/middleware/jdev_11.1.1.4/oracle_osb/lib/osb-coherence-client.jar

    By default, OSB will try to create a local result cache instance. You need to disable this by adding the following JVM parameters to each of the OSB Managed Servers:

    -Dtangosol.coherence.distributed.localstorage=false -DOSB.coherence.cluster=OSB-cluster

    If you need more information on configuring a remote result cache, have a look at the configuration documentation under the heading Using an Out-of-Process Coherence Cache Server.

    Step 2 - Configure your Business Service: Under the respective Business Service Message Handling Configuration (Advanced Properties), enable "Result Caching". You also need to determine what the cache data will be keyed on. In the example below, I'm keying it on the unique Employee Id.

    The results: As this test was run on my laptop, the actual timings are just an indication that there is a benefit to caching results. Using my test harness, I sent 10,000 requests to OSB, all with the same Employee Id, with result caching disabled. You can see that this caused the back-end Business Service (BS_GetEmployeeData) to be called for each request. After enabling result caching, I sent the same number of identical requests. You can now see that the Business Service was only invoked once, on the first request; all subsequent requests used the results cache.
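    To make the result-caching behaviour easier to picture, here is a minimal sketch of the time-to-live, cache-aside pattern that the Business Service result cache applies, keyed on the Employee Id. This is an illustration only (written in C# for consistency with the other code samples on this page), not OSB or Coherence API code, and the class and method names are assumptions:

        using System;
        using System.Collections.Generic;

        class ResultCache
        {
            private readonly Dictionary<string, Tuple<string, DateTime>> cache =
                new Dictionary<string, Tuple<string, DateTime>>();
            private readonly TimeSpan timeToLive;
            private readonly Func<string, string> backEndCall; // stands in for the back-end Business Service

            public ResultCache(TimeSpan ttl, Func<string, string> backEnd)
            {
                timeToLive = ttl;
                backEndCall = backEnd;
            }

            // Returns the cached result if present and not expired; otherwise calls the
            // back end, stores the result with a new expiry time, and returns it.
            public string GetEmployeeData(string employeeId)
            {
                Tuple<string, DateTime> entry;
                if (cache.TryGetValue(employeeId, out entry) && entry.Item2 > DateTime.UtcNow)
                {
                    return entry.Item1; // cache hit: the back end is not invoked
                }
                string result = backEndCall(employeeId);
                cache[employeeId] = Tuple.Create(result, DateTime.UtcNow + timeToLive);
                return result;
            }
        }

    With a time to live of, say, one day, the first request for a given Employee Id would hit the external system and every later request that day would be answered from the cache, which is the behaviour observed in the test above.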

    Read the article

  • Building vs. Buying a Master Data Management Solution

    - by david.butler(at)oracle.com
    Many organizations prefer to build their own MDM solutions. The argument is that they know their data quality issues and their data better than anyone, and that a focused solution will cost less in the long run than a vendor-supplied general-purpose product. This is not unreasonable if you think of MDM as a point solution for a particular data quality problem, but the approach carries significant risk. We now know that organizations achieve significant competitive advantages when they deploy MDM as a strategic, enterprise-wide solution, with the most common best practice being to deploy a tactical MDM solution and grow it into a full information architecture. A build-your-own approach most certainly will not scale to a larger architecture unless it is done correctly with the larger solution in mind.

    It is possible to build a home-grown point MDM solution in such a way that it will dovetail into broader MDM architectures. A very good place to start is to use the same basic technologies that Oracle uses to build its own MDM solutions. Start with the Oracle 11g database to create a flexible, extensible and open data model to hold the master data and all needed attributes. The Oracle database is the most flexible, highly available and scalable database system on the market; with its Real Application Clusters (RAC) it can even support the mixed OLTP and BI workloads that represent typical MDM data access profiles. Use Oracle Data Integrator (ODI) for batch data movement between applications, MDM data stores, and the BI layer. Use Oracle GoldenGate for more real-time data movement. Use Oracle's SOA Suite for application integration, with its BPEL Process Manager to orchestrate MDM connections to business processes, Identity Management for managing users, WS Manager for managing web services, Business Intelligence Enterprise Edition for analytics, and JDeveloper for creating or extending the MDM management application. Oracle utilizes these technologies to build its MDM Hubs. Customers who build their own MDM solution using these components will easily migrate to Oracle-provided MDM solutions when the home-grown solution runs out of gas.

    But even with a full stack of open, flexible MDM technologies, creating a robust MDM application can be a daunting task. For example, a basic MDM solution will need:
    - a set of data access methods that support master data as a service, direct real-time access, and batch loads and extracts;
    - a data migration service for initial loads and periodic updates;
    - a metadata management capability for items such as business entity matrixed relationships and hierarchies;
    - a source system management capability to fully cross-reference business objects and to satisfy seemingly conflicting data ownership requirements;
    - a data quality function that can find and eliminate duplicate data while ensuring correct data attribute survivorship;
    - a set of data quality functions that can manage structured and unstructured data;
    - a data quality interface to help prevent new errors from entering the system even when data entry happens outside the MDM application itself;
    - a continuing data cleansing function to keep the data up to date;
    - an internal triggering mechanism to create and deploy change information to all connected systems;
    - a comprehensive role-based data security system to control and monitor data access, update rights, and maintain change history;
    - a flexible business rules engine for managing master data processes such as privacy and data movement;
    - a user interface to support casual users and data stewards;
    - a business intelligence structure to support profiling, compliance, and business performance indicators;
    - and an analytical foundation for directly analyzing master data.

    Oracle's pre-built MDM Hub solutions are full-featured, 3-tier Internet applications designed to participate in the full Oracle technology stack or to run independently in other open IT SOA environments. Building MDM solutions from scratch can take years; Oracle's pre-built MDM solutions can bring quality data to the enterprise in a matter of months. But if you must build, at least build with the world's best technology stack, in a way that simplifies the eventual upgrade to Oracle MDM and to the full enterprise-wide information architecture that it enables.

    Read the article

  • WEB203 – Jump into Silverlight!… and Become Effective Immediately with Tim Huckaby, Fou

    - by Robert Burger
    Getting ready for the good stuff. Definitely wish there were more Silverlight and WCF RIA sessions, but this is a start. Was lucky to get a coveted power-enabled seat. Luckily, thanks to my trusty (slow) Verizon data card, I can get these notes out amidst a total Internet outage here. This is the second breakout session of the day, and it is by far standing-room only. I stepped out before the session started to get a cool Diet Coke and wouldn't have gotten back in if I didn't already have a seat.

    Tim says this is an intro session and that he's been begging for intro sessions at TechEd for years; looking at this audience, he thinks the demand is there. Admittedly, I didn't know this was an intro session, or I might have gone elsewhere. But it was the very first Silverlight session, so I had to be here. Tim says he will be providing a very good, comprehensive reference application at the end of the presentation. He has just demoed it, and it is a full CRUD-based Sales Manager application based on... AdventureWorks!

    Session Agenda: What it is / How to get started; Declarative Programming; Layout and Controls, Events and Commands; Working with Data; Adding Style to Your Application.

    Silverlight... "WPF Light": Why is the download 4.2 MB? Because the direct competitor is a 4.2 MB download. There is no technical reason it is not the entire framework; it is purely to "be competitive".

    Getting Started: Get all of the following downloads from www.silverlight.net/getstarted. Install VS2010 or Visual Web Developer Express 2010, install Silverlight 4 Tools for VS2010, install Expression Blend 4, and install the Silverlight 4 Toolkit.

    Reference Application Features: Uses the MVVM pattern, a way to move data access code that would normally be inline in the UI into separate data access libraries. Images are loaded dynamically from the database, converting GIF to PNG because Silverlight does not support GIF. LINQ to SQL is the data access model. WCF is the data provider and is using binary message encoding.

    Declarative Programming: XAML replaces code for UI representation; attributes control layout and style; event handlers are wired up in XAML; declarative data binding.

    Layout Overview: Content rendering flows inside the parent. Fixed positioning (Canvas) is seldom used. Panels are used to house content. Margins and padding are preferred over fixed sizes.

    Panels: StackPanel arranges child elements into a single line oriented horizontally or vertically. Grid is a flexible grid area that consists of rows and columns. Canvas is an area where positions are explicitly fixed. WrapPanel (in the Toolkit) positions child elements sequentially left to right and top to bottom. DockPanel (in the Toolkit) positions child controls within a dockable area.

    Positioning: Horizontal and vertical alignment. Margin separates an element from neighboring elements; Padding enlarges the effective size of an element by a thickness.

    Controls Overview: Not all controls are created equal. Silverlight is a subset of WPF, so many WPF controls do not exist in the core Silverlight release. The Silverlight Toolkit continues to add controls, but they are released in different quality bands. Plenty of good 3rd-party controls fill the gaps. Windows Phone 7 is to have 95% of controls available in the Silverlight core and Toolkit.

    Events and Commands: Standard .NET events; routed events; commands, based on the ICommand interface, a logical action that can be invoked in several ways.

    Adding Style to Your Application: Resource dictionaries contain a hash table of key/value pairs; Silverlight can only use static resources, whereas WPF can also use dynamic resources. Visual State Manager. Silverlight 4 supports implicit styles. ResourceDictionary.MergedDictionaries combines many different file-based resources.

    Downloads
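    Picking up on the Events and Commands notes above: the session only notes that commands are based on the ICommand interface, so as an illustration (not code from the session) here is a minimal hand-rolled command that forwards Execute and CanExecute to delegates. The RelayCommand name is an assumption borrowed from the common MVVM pattern:

        using System;
        using System.Windows.Input;

        // Minimal ICommand implementation that forwards Execute/CanExecute to delegates,
        // so a view model can expose logic for buttons and other command sources to bind to.
        public class RelayCommand : ICommand
        {
            private readonly Action<object> execute;
            private readonly Func<object, bool> canExecute;

            public RelayCommand(Action<object> execute, Func<object, bool> canExecute = null)
            {
                if (execute == null) throw new ArgumentNullException("execute");
                this.execute = execute;
                this.canExecute = canExecute;
            }

            public event EventHandler CanExecuteChanged;

            public bool CanExecute(object parameter)
            {
                return canExecute == null || canExecute(parameter);
            }

            public void Execute(object parameter)
            {
                execute(parameter);
            }

            // Call this when the result of CanExecute may have changed so bound controls re-query it.
            public void RaiseCanExecuteChanged()
            {
                EventHandler handler = CanExecuteChanged;
                if (handler != null) handler(this, EventArgs.Empty);
            }
        }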

    Read the article

  • Share Files and Folders and Internet between Guest OS and the Host in Hyper-V

    - by Manesh Karunakaran
    Those who are familiar with Virtual PC, VMware and VirtualBox environments will be quite irritated to find out that there is no direct way to share files from the host machine to the virtualized guest environment. This is a good thing from a CIO perspective, because it gives excellent isolation for the virtualized environments, but for developer junkies like us it is an irritant, especially for those who have nuked their Windows 7 OS and installed Windows Server 2008 R2 for all the SharePoint friendliness it offers. Here's a quick five-minute how-to on enabling shared folders and Internet access for Hyper-V images, for those who are still struggling with this.

    Step 1: Add a virtual network adapter to your guest OS. Shut down the guest machine, go to its settings and add a virtual network adapter as shown in the images below.

    Step 2: Enable virtual networking in Hyper-V. Setting this up is very easy. In the Hyper-V Manager, under Actions (right panel), click the Virtual Network Manager. In the Virtual Network Manager, in the Create virtual network panel, select Internal and click the Add button. At this point, if you open Control Panel\Network and Internet\Network Connections you will be able to see the new network adapter. Now rename it to something more meaningful than Network Adapter X. You can add this network to each of your virtual machines, but at this point, unless you assign an IP address in each connection, you won't be able to do much.

    Step 3: Enable Internet Connection Sharing so that guest OSes can also connect to the Internet. To enable ICS, follow these steps: Click on the network icon in the tray of your host machine and select Network and Sharing Center. From there click Manage network connections. Select the network adapter that you use to access the Internet, right-click it and select Properties. In the properties dialog select the Sharing tab. On this tab check the box that says "Allow other network users..." and set the Home networking connection to be the network adapter that was created above (now you see why I said to rename it to something useful). Your virtual machines that have this network connection will now automatically get an IP address and will be able to connect to the Internet (provided your Internet connection is working). Because each adapter also gets an automatic address, you can now share files and folders between your host and your virtual machines, which is important since you can't just drag and drop files like you can with Virtual PC.

    Step 4: Create a shared folder on the host machine and use it in the guest machine. Right-click on the folder that you want to share, select 'Share with\Specific People' and specify who can access the share. In the guest OS, navigate to Start > Run and type in the address of the share (or map a drive to it). Bingo! The share opens!! :)

    Now you can share as many files and folders as you want between the host and the guest, and you also have Internet access inside the virtual machines. Hope that helps.

    Technorati Tags: Shared folder, Hyper-V, Share Files, Share files and folders between guest and host, Hyper-V Networking, Share Internet Access in Hyper-V, Internet, Files, Shared folders in Hyper-V
    Read the article

  • Create Custom Windows Key Keyboard Shortcuts in Windows

    - by Asian Angel
    Nearly everyone uses keyboard shortcuts of some sort on their Windows system, but what if you could create new ones for your favorite apps or folders? You might be amazed at how simple it can be, with just a few clicks and no programming, using WinKey.

    WinKey in Action: During the installation process you will see a window that gives you a good basic idea of just what can be accomplished with this wonderful little app. As soon as installation has finished you will see the "Main App Window". It provides a simple, straightforward listing of all the keyboard shortcuts that it is currently managing. Note: WinKey will automatically add an entry to the "Startup Listing" in your "Start Menu" during installation. To see the regular built-in Windows keyboard shortcuts that it is managing, click "Standard Shortcuts" to select it and then click on "Properties". For those who are curious, WinKey does have a "System Tray Icon" that can be disabled if desired.

    Now on to creating those new keyboard shortcuts. For our example we decided to create a keyboard shortcut for an app rather than a folder. To create a shortcut for an app, click on the small "Paper Icon" as shown here. Once you have done that, browse to the appropriate folder and select the exe file. The second step is choosing which keyboard shortcut you would like to associate with that particular app; you can use the drop-down list to choose from a listing of available keyboard combinations. For our example we chose "Windows Key + A". The final step is choosing the "Run Mode". There are three options available in the drop-down list; choose the one that best suits your needs. Here is what our example looked like once finished. All that is left to do at this point is click "OK" to finish the process. And just like that, your new keyboard shortcut is now listed in the "Main App Window". Time to try out your new keyboard shortcut! One quick use of it and Iron Browser opened right up. WinKey really does make creating new keyboard shortcuts as simple as possible.

    Conclusion: If you have been wanting to create new keyboard shortcuts for your favorite apps and folders, then it really does not get any simpler than with WinKey. This is definitely a recommended app for anyone who loves "get it done" software.

    Links: Download WinKey at Softpedia
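    For readers who would rather see what a tool like WinKey is doing under the hood, here is a minimal, hedged sketch of registering a Windows Key + A global hotkey yourself with the Win32 RegisterHotKey API from C#. This illustrates the general mechanism only, not WinKey's actual implementation, and the launched application path is just an example:

        using System;
        using System.Diagnostics;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;

        // Hidden form that registers Win+A as a global hotkey and launches an app when it is pressed.
        class HotkeyForm : Form
        {
            [DllImport("user32.dll")]
            private static extern bool RegisterHotKey(IntPtr hWnd, int id, uint fsModifiers, uint vk);

            [DllImport("user32.dll")]
            private static extern bool UnregisterHotKey(IntPtr hWnd, int id);

            private const int WM_HOTKEY = 0x0312;
            private const uint MOD_WIN = 0x0008;
            private const int HOTKEY_ID = 1;

            public HotkeyForm()
            {
                // Keep the window out of sight; we only need its message loop.
                ShowInTaskbar = false;
                WindowState = FormWindowState.Minimized;
                RegisterHotKey(Handle, HOTKEY_ID, MOD_WIN, (uint)Keys.A);
            }

            protected override void WndProc(ref Message m)
            {
                if (m.Msg == WM_HOTKEY && (int)m.WParam == HOTKEY_ID)
                {
                    Process.Start("notepad.exe"); // example target; substitute your favorite app
                }
                base.WndProc(ref m);
            }

            protected override void Dispose(bool disposing)
            {
                UnregisterHotKey(Handle, HOTKEY_ID);
                base.Dispose(disposing);
            }

            [STAThread]
            static void Main()
            {
                Application.Run(new HotkeyForm());
            }
        }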

    Read the article
