Search Results

Search found 17976 results on 720 pages for 'old versions'.

  • Move Window Buttons Back to the Right in Ubuntu 10.04

    - by Trevor Bekolay
    One of the more controversial changes in the Ubuntu 10.04 beta is the Mac OS-inspired change to have window buttons on the left side. We’ll show you how to move the buttons back to the right.

    Before: While the change may or may not persist through to the April 29 release of Ubuntu 10.04, in the beta version the maximize, minimize, and close buttons appear in the top left of a window.

    How to move the window buttons: The window button locations are dictated by a configuration file. We’ll use the graphical program gconf-editor to change this configuration file. Press Alt+F2 to bring up the Run Application dialog box, enter “gconf-editor” in the text field, and click on Run. The Configuration Editor should pop up.

    The key that we want to edit is in apps/metacity/general. Click on the + button next to the “apps” folder, then beside “metacity” in the list of folders expanded for apps, and then click on the “general” folder. The button layout can be changed by changing the “button_layout” key. Double-click button_layout to edit it. Change the text in the Value text field to:

        menu:maximize,minimize,close

    Click OK and the change will occur immediately, changing the location of the window buttons in the Configuration Editor.

    Note that this ordering of the window buttons is slightly different than the typical order; in previous versions of Ubuntu and in Windows, the minimize button is to the left of the maximize button. You can change the button_layout string to reflect that ordering, but using the default Ubuntu 10.04 theme, it looks a bit strange. If you plan to change the theme, or even just the graphics used for the window buttons, then this ordering may be more natural to you.

    After: After this change, all of your windows will have the maximize, minimize, and close buttons on the right.

    What do you think of Ubuntu 10.04’s visual change? Let us know in the comments!
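
    The same key can also be set from a terminal instead of the Configuration Editor. A minimal sketch using gconftool-2 (the command-line front end to GConf), assuming the stock Metacity window manager, might look like this:

        # Move the window buttons back to the right, in the article's suggested order
        gconftool-2 --set /apps/metacity/general/button_layout --type string "menu:maximize,minimize,close"

        # Or use the more familiar minimize/maximize ordering mentioned above
        gconftool-2 --set /apps/metacity/general/button_layout --type string "menu:minimize,maximize,close"

    As with the graphical route, the change should take effect immediately in open windows.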

    Read the article

  • Isometric screen to 3D world coordinates efficiently

    - by Justin
    Been having a difficult time transforming 2D screen coordinates to 3D isometric space. This is the situation where I am working in 3D but I have an orthographic camera. My camera is positioned at (100, 200, 100), where the xz plane is flat and y is up and down.

    I've been able to get a sort of working solution, but I feel like there must be a better way. Here's what I'm doing. With my camera at (0, 1, 0) I can translate my screen coordinates directly to 3D coordinates by doing:

        mouse2D.z = (( event.clientX / window.innerWidth ) * 2 - 1) * -(window.innerWidth /2);
        mouse2D.x = (( event.clientY / window.innerHeight) * 2 + 1) * -(window.innerHeight);
        mouse2D.y = 0;

    Everything is okay so far. Now when I change my camera back to (100, 200, 100), my 3D space has been rotated 45 degrees around the y axis and then rotated about 54 degrees around a vector Q that runs along the xz plane at a 45 degree angle between the positive z axis and the negative x axis.

    So what I do to find the point is first rotate my point by 45 degrees using a matrix around the y axis. Now I'm close. Then I rotate my point around the vector Q. But my point is closer to the origin than it should be, since the Y value is not 0 anymore. What I want is that after the rotation my Y value is 0. So now I exchange the X and Z coordinates of my rotated vector with the X and Z coordinates of my non-rotated vector. So basically I have my old vector, but its y value is at an appropriately rotated amount. Now I use another matrix to rotate my point around the vector Q in the opposite direction, and I end up with the point where I clicked.

    Is there a better way? I feel like I must be missing something. Also my method isn't completely accurate. I feel like it's within 5-10 coordinates of where I click, maybe because of rounding from many calculations. Sorry for such a long question.
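
    A common alternative to chaining manual rotations is to unproject the mouse position as a ray from the orthographic camera and intersect that ray with the ground plane y = 0. The following is only a sketch of the idea in plain JavaScript; the camera position, look-at target and view half-extents passed in are illustrative assumptions, not values taken from the question:

        // Minimal vector helpers
        function sub(a, b)   { return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z }; }
        function add(a, b)   { return { x: a.x + b.x, y: a.y + b.y, z: a.z + b.z }; }
        function scale(a, s) { return { x: a.x * s, y: a.y * s, z: a.z * s }; }
        function cross(a, b) {
          return { x: a.y * b.z - a.z * b.y,
                   y: a.z * b.x - a.x * b.z,
                   z: a.x * b.y - a.y * b.x };
        }
        function normalize(a) {
          var len = Math.sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
          return scale(a, 1 / len);
        }

        // Screen pixel -> point on the y = 0 plane, for an orthographic camera.
        // camPos, camTarget, halfWidth and halfHeight are illustrative assumptions.
        function screenToGround(clientX, clientY, viewW, viewH,
                                camPos, camTarget, halfWidth, halfHeight) {
          var forward = normalize(sub(camTarget, camPos));
          var right   = normalize(cross(forward, { x: 0, y: 1, z: 0 }));
          var up      = cross(right, forward);

          // Normalized device coordinates in [-1, 1]
          var nx = (clientX / viewW) * 2 - 1;
          var ny = -((clientY / viewH) * 2 - 1);

          // For an orthographic camera the ray origin slides across the view plane
          // and the ray direction is constant (the camera's forward vector).
          var origin = add(camPos, add(scale(right, nx * halfWidth),
                                       scale(up,    ny * halfHeight)));
          var t = -origin.y / forward.y;   // intersect the ray with the plane y = 0
          return add(origin, scale(forward, t));
        }

        // Example call for a camera at (100, 200, 100) looking at the origin:
        // var p = screenToGround(event.clientX, event.clientY,
        //                        window.innerWidth, window.innerHeight,
        //                        { x: 100, y: 200, z: 100 }, { x: 0, y: 0, z: 0 },
        //                        window.innerWidth / 2, window.innerHeight / 2);

    Because no intermediate rotations are involved, this approach avoids the accumulated rounding error described in the question.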

    Read the article

  • Going Metro

    - by Tony Davis
    When it was announced, I confess I was somewhat surprised by the striking new "Metro" User Interface for Windows 8, based on Swiss typography, Bauhaus design, tiles, touches and gestures, and the new Windows Runtime (WinRT) API on which Metro apps were to be built. It all seemed to have come out of nowhere, like field mushrooms in the night, and seemed quite out of character for a company like Microsoft, which has hung on determinedly for over twenty years to its quaint Windowing system.

    Many were initially puzzled by the lack of support for plug-ins in the "Metro" version of IE10, which ships with Win8, and the apparent demise of Silverlight, Microsoft's previous 'radical new framework'. Win8 signals the end of the road for Silverlight apps in the browser, but then its importance here has been waning for some time anyway, now that HTML5 has usurped its most compelling use case, streaming video. As Shawn Wildermuth and others have noted, if you're doing enterprise, desktop development with Silverlight then nothing much changes immediately, though it seems clear that ultimately Silverlight will die off in favor of a single WPF/XAML framework that supports those technologies that were pioneered on the phones and tablets.

    There is a mystery here. Is Silverlight dead, or merely repurposed? The more you look at Metro, the more it seems to resemble Silverlight. A lot of the philosophies underpinning Silverlight applications, such as the fundamentally asynchronous nature of the design, have moved wholesale into Metro, along with most of the Microsoft Silverlight dev team. As Simon Cooper points out, "Silverlight developers, already used to all the principles of sandboxing and separation, will have a much easier time writing Metro apps than desktop developers".

    Metro has certainly given the framework formerly known as Silverlight a new purpose. It has enabled Microsoft to bestow on Windows 8 a new "duality", as both a traditional desktop OS supporting 'legacy' Windows applications, and an OS that supports a new breed of application that can share functionality such as search, that understands, and can react to, the full range of gestures and screen-sizes, and has location-awareness. It's clear that Win8 is developed in the knowledge that the 'desktop computer' will soon be a very large, tilted, touch-screen monitor. Windows owes its new-found versatility to the lessons learned from Windows Phone, but it's developed for the big screen, and with full support for familiar .NET desktop apps as well as the new Metro apps. But the old mouse-driven Windows applications will soon look very passé, just as MSDOS character-mode applications did in the nineties.

    Cheers, Tony.

    Read the article

  • Roll your own free .NET technical conference

    - by Brian Schroer
    If you can’t get to a conference, let the conference come to you! There are a ton of free recorded conference presentations online…

    Microsoft TechEd
    Let’s start with the proverbial 800 pound gorilla. Recent TechEds have recorded the majority of presentations and made them available online the next day. Check out presentations from last month’s TechEd North America 2012 or last week’s TechEd Europe 2012. If you start at http://channel9.msdn.com/Events/TechEd, you can also drill down to presentations from prior years or from other regional TechEds (Australia, New Zealand, etc.) The top presentations from my “View Queue”:
    • Damian Edwards: Microsoft ASP.NET and the Realtime Web (SignalR)
    • Jennifer Smith: Design for Non-Designers
    • Scott Hunter: ASP.NET Roadmap: One ASP.NET – Web Forms, MVC, Web API, and more
    • Daniel Roth: Building HTTP Services with ASP.NET Web API
    • Benjamin Day: Scrum Under a Waterfall

    NDC
    The Norwegian Developer Conference site has the most interesting presentations, in my opinion. You can find the videos from the June 2012 conference at that link. The 2011 and 2010 pages have a lot of presentations that are still relevant also. My View Queue Top 5:
    • Shay Friedman: Roslyn... hmmmm... what?
    • Hadi Hariri: Just ‘cause it’s JavaScript, doesn’t give you a license to write rubbish
    • Paul Betts: Introduction to Rx
    • Greg Young: How to get productive in a project in 24 hours
    • Michael Feathers: Deep Design Lessons

    ØREDEV
    Travelling on from Norway to Sweden... I don’t know why, but the Scandinavians seem to have this conference thing figured out. ØREDEV happens each November, and you can find videos here and here. My View Queue Top 5:
    • Marc Gravell: Web Performance Triage
    • Robby Ingebretsen: Fonts, Form and Function: A Primer on Digital Typography
    • Jon Skeet: Async 101
    • Chris Patterson: Hacking Developer Productivity
    • Gary Short: .NET Collections Deep Dive

    aspConf - The Virtual ASP.NET Conference
    Formerly known as “mvcConf”, this one’s a little different. It’s a conference that takes place completely on the web. The next one’s happening July 17-18, and it’s not too late to register (It’s free!). Check out the recordings from February 2011 and July 2010. It’s two years old and talks about ASP.NET MVC2, but most of it is still applicable, and Jimmy Bogard’s Put Your Controllers On a Diet presentation is the most useful technical talk I have ever seen.

    CodeStock
    Videos from the 2011 edition of this Tennessee conference are available. Presentations from last month’s 2012 conference should be available soon here. I’m looking forward to watching Matt Honeycutt’s Build Your Own Application Framework with ASP.NET MVC 3.

    UserGroup.tv
    User Group.tv was founded in January of 2011 by Shawn Weisfeld, with the mission of providing User Group content online for free. You can search by date, group, speaker and category tags. My View Queue Top 5:
    • Sergey Rathon & Ian Henehan: UI Test Automation with Selenium
    • Rob Vettor: The Repository Pattern
    • Latish Seghal: The .NET Ninja’s Toolbelt
    • Amir Rajan: Get Things Done With Dynamic ASP.NET MVC
    • Jeffrey Richter: .NET Nuggets – Houston TechFest Keynote

    Read the article

  • How can I fix HDMI HDTV overscan when my TV has no aspect ratio setting?

    - by Colin Dean
    I have a 32" Vizio HDTV. It's a few years old, but running well. I just connected a new nettop to it using HDMI out. It's the Intel 3x00 graphics chipset. I'm seeing overscan, where the resolution in Ubuntu is set to 1280x720, but the TV itself is 1366x768. When I go into the Monitors control applet, I cannot change the resolution to anything other than the current or 640x480.

    A user had a similar overscan problem, but fixed the overscan by adjusting his TV's aspect ratio settings. I do not have that luxury. Is there a way I can do this without having to delve into xorg.conf or other command-line craziness? I'm more than comfortable doing so, but there must be a cleaner way. I'm running Ubuntu Natty, keeping up with updates and such.

    Here's the output of lspci:

        colin@bricktop:~$ lspci
        00:00.0 Host bridge: Intel Corporation Core Processor DRAM Controller (rev 12)
        00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 12)
        00:04.0 Signal processing controller: Intel Corporation Core Processor Thermal Management Controller (rev 12)
        00:16.0 Communication controller: Intel Corporation 5 Series/3400 Series Chipset HECI Controller (rev 06)
        00:1a.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 06)
        00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 06)
        00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 06)
        00:1c.1 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 2 (rev 06)
        00:1c.2 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 3 (rev 06)
        00:1d.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 06)
        00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev a6)
        00:1f.0 ISA bridge: Intel Corporation Mobile 5 Series Chipset LPC Interface Controller (rev 06)
        00:1f.2 SATA controller: Intel Corporation 5 Series/3400 Series Chipset 4 port SATA AHCI Controller (rev 06)
        00:1f.3 SMBus: Intel Corporation 5 Series/3400 Series Chipset SMBus Controller (rev 06)
        00:1f.6 Signal processing controller: Intel Corporation 5 Series/3400 Series Chipset Thermal Subsystem (rev 06)
        01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06)
        02:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03)
        03:00.0 Network controller: Atheros Communications Inc. AR9287 Wireless Network Adapter (PCI-Express) (rev 01)
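
    If a command-line fix turns out to be acceptable after all, one common workaround is to register the TV's native 1366x768 mode with xrandr. The sketch below is illustrative only: the output name HDMI1 is an assumption (use whatever name xrandr actually reports), and the modeline numbers should be regenerated locally with cvt rather than copied blindly.

        # List outputs and the modes the driver currently offers
        xrandr

        # Ask cvt for a modeline matching the TV's native resolution
        # (cvt rounds 1366 up to 1368, which is normal)
        cvt 1366 768 60

        # Register the modeline cvt printed, attach it to the HDMI output, and switch to it
        xrandr --newmode "1368x768_60.00" 85.25 1368 1440 1576 1784 768 771 781 798 -hsync +vsync
        xrandr --addmode HDMI1 "1368x768_60.00"
        xrandr --output HDMI1 --mode "1368x768_60.00"

    Whether the TV then fills the panel without overscan still depends on how the set treats a 1366x768 HDMI signal.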

    Read the article

  • Computer Visionaries 2014 Kinect Hackathon

    - by T
    Originally posted on: http://geekswithblogs.net/tburger/archive/2014/08/08/computer-visionaries-2014-kinect-hackathon.aspx

    A big thank you to Computer Vision Dallas and Microsoft for putting together the Computer Visionaries 2014 Kinect Hackathon that took place July 18th and 19th, 2014. Our team had a great time and learned a lot from the Kinect MVPs and the Microsoft team. The Dallas Entrepreneur Center was a fantastic venue. In total, 114 people showed up to form 15 teams.

    Burger ITS & Friends team members with Ben Lower: Shawn Weisfeld, Teresa Burger, Robert Burger, Harold Pulcher, Taylor Woolley, Cori Drew (not pictured), and Katlyn Drew (not pictured).

    We arrived Friday after a long day of work/driving. Originally, our idea was to make a learning game for kids. It was intended to have multiple simultaneous players dragging and dropping tiles into a canvas area, aimed at kids around 5 years old. We quickly learned that we were limited to two simultaneous players. After working on the game for the rest of the evening and into the next morning, we decided that a fast multi-player game with hand gestures was not going to happen without going beyond what was provided with the API. If we were going to have something to show, it was time to switch gears.

    The next idea on the table was the Photo Anywhere Kiosk. The user can use voice and hand gestures to pick a place they would like to be. After the user says a place (or anything they want) and then the word "search", the app uses Bing to display a bunch of images for him/her to choose from. With the use of hand gestures (grab and slide to move back and forth, and push/pull to select an image) the user can get the perfect image to pose with. I couldn't get a snippet with the hand, but when the app is in use, a hand shows up to cue the user to use their hand to control its movement. Once they choose an image, we use the Kinect background removal feature to superimpose the user on that image. When they are in the perfect position, they say "save" to save the image. Currently, the image is saved in the images folder on the user's account, but there are many possibilities such as emailing it, posting to social media, etc.

    The competition was great and we were honored to be recognized for third place. Other related posts: http://jasongfox.com/computer-visionaries-2014-incredible-success/

    A couple of us are continuing to work on the kids' game and are going to make it a Windows 8 multi-player game without Kinect functionality. Stay tuned for more updates.

    Read the article

  • ASP.NET 4.0 - Menu control enhancement.

    - by Jalpesh P. Vadgama
    Up until ASP.NET 3.5, the ASP.NET Menu control was rendered as a table, and we all know that it is very hard to apply CSS to tables. For a professional-looking website, CSS is a must. In ASP.NET 4.0 the Menu control is table-less: it is rendered with UL and LI tags, which are much easier to manage through CSS. Another problem with tables is that they produce a lot of HTML, which increases your page size and decreases performance. With UL and LI tags the markup is much shorter, so your page size goes down as well.

    Let's take a simple example and create a menu control in ASP.NET with four menu items, like the following:

        <asp:Menu ID="myCustomMenu" runat="server">
          <Items>
            <asp:MenuItem Text="Menu1" Value="Menu1"></asp:MenuItem>
            <asp:MenuItem Text="Menu2" Value="Menu2"></asp:MenuItem>
            <asp:MenuItem Text="Menu3" Value="Menu3"></asp:MenuItem>
            <asp:MenuItem Text="Menu4" Value="Menu4"></asp:MenuItem>
          </Items>
        </asp:Menu>

    The menu renders the same way in the browser, but if the control is rendered with tables you can see the bulky HTML via View Page Source. In ASP.NET 4.0 the menu is rendered with UL and LI tags instead, and the page source is much smaller than it was earlier. Isn't that a great performance enhancement? It's very cool.

    If you still prefer the old table-based rendering, ASP.NET 4.0 provides a property called 'RenderingMode'. Set RenderingMode="Table" and the menu control will render with a table; otherwise it renders with UL and LI tags. That's it. Stay tuned for more. Happy programming.

    Technorati Tags: Menu, ASP.NET 4.0
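
    As a quick illustration, here is roughly what the two rendering choices might look like in markup. This is a sketch rather than markup from the original post; "Table" and "List" are the documented MenuRenderingMode values, and the control IDs are placeholders:

        <%-- Force the pre-4.0, table-based rendering --%>
        <asp:Menu ID="tableMenu" runat="server" RenderingMode="Table">
          <Items>
            <asp:MenuItem Text="Menu1" Value="Menu1"></asp:MenuItem>
          </Items>
        </asp:Menu>

        <%-- List-based (UL/LI) rendering, the ASP.NET 4.0 default --%>
        <asp:Menu ID="listMenu" runat="server" RenderingMode="List">
          <Items>
            <asp:MenuItem Text="Menu1" Value="Menu1"></asp:MenuItem>
          </Items>
        </asp:Menu>

    Treat the two controls as alternatives; in a real page you would pick one and compare the generated HTML in View Page Source.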

    Read the article

  • Download Internet Explorer 9 RTM

    - by Harish Ranganathan
    The much anticipated RTM release of Internet Explorer 9 (IE9) happened today. The IE9 preview release was first showcased at MIX 2010, and after that there were 7-8 Platform Preview releases. IE9 Beta came out in September 2010 with close to 10 million downloads within a month. More recently, the RC version came out with much improved performance. Today marks the launch of IE9 RTM. What this means is that, within a year, the IE team has shipped the stable product, much faster than the earlier cycles for IE8 and IE7. I wanted to clarify a few things (myths) that commonly arise:

    1. I am already using Chrome and it's faster for me, why would I need IE9?
    IE9 uses 100% hardware acceleration, which means you are going to get the best performance compared to any other browser that shipped/will ship in future. With native Windows support, IE9 will outperform all other browsers in terms of performance.

    2. What about standards and security?
    Agreed, IE6 didn't have the best standards support, but why would someone compare against IE6, which was released almost 10 years back? Later, we shipped IE7 and IE8, which had the best standards support of their timeframes, but one would agree that standards and specifications keep getting updated and it's hard for older browsers to keep pace. For example, HTML5 support is not there in IE8 but it is very much there in IE9. IE9 supports most of the stable HTML5 standards and will provide preview releases for the work-in-progress standards.

    3. IE doesn't keep pace with other browsers.
    Agreed! We don't force/release updates on major versions in very short time periods. What we do is provide Windows Update, which delivers security patches and other critical updates not just for IE but for the whole Windows operating system.

    4. I am running Windows XP, what do I do?
    This is the trickiest part. Windows XP isn't a supported operating system for IE9, and there are various reasons for that. The recommended operating systems are Windows Vista and Windows 7. In the interest of technology and its pace, we had to discontinue Windows XP both from a retail selling perspective and for IE9 support. But PCs/laptops shipped in the last two years have only come with Windows Vista or Windows 7, so it shouldn't affect them.

    5. Where do I verify IE9's performance/standards support and other information?
    http://samples.msdn.microsoft.com/ietestcenter/ Clearly IE9 outperforms all other browsers and will continue to outperform them in future. You can download IE9 from www.beautyoftheweb.com

    Cheers!!!

    Read the article

  • Simple tips to design a Customer Journey Map

    - by Isabel F. Peñuelas
    “A model can abstract to a level that is comprehensible to humans, without getting lost in details.” -The Unified Modeling Language Reference Manual.

    Inception using Post-its, storyboards, Lego or mind-mapping techniques
    The first step in a Customer Experience project is to describe customer interactions by creating a customer journey map. Modeling is never easy, so to succeed in this effort it is very convenient that your CX team has some “abstract thinking” skills. It is also very helpful to consult a Business Service Design offered by an interactive agency to lead your inception process. Initially, you may start with a free discussion using Post-it cards, storyboards, Lego or any other brainstorming technique you like. This will help you to get your mind into the path followed by the customer to purchase your product or to consume any business service you actually offer to your customers, or plan to offer in the near future (from www.servicedesigntools.org).

    Colorful mind maps are very useful to document and share meeting ideas. Some mind-mapping software providers such as ThinkBuzan provide trial versions, and you will find more mind-mapping options in this post by Mashable. Finally, to produce a quick one, I recommend Wise, an entirely online mind-mapping service. In my view, the best results in terms of communication will always come from an artistic hand-made drawing.

    Customer Experience Mind Map Example

    Making your first Customer Journey Map
    To add some more formalization to your thoughts, there is a wide range of offerings for designing Customer Journey Maps. A customer map can be represented as an oriented graph in which each step is followed by another. The simplest customer journey you can draw is nothing more than a couple of pictures, numbers and lines to show the sequence of customer steps in the purchase process.

    Very simple Customer Journey for Social Mobile Shopping

    There are a lot of much more sophisticated Customer Journey templates available on the web in a variety of styles, for example this one with a focus on underlining the emotional experience, or this other worksheet template. Representing the different interaction devices on the vertical axis, and touchpoints / requirements and existing gaps horizontally, is today's most common format for customer journeys.

    From Customer Journey Maps to CX Technology Adoption Plans
    Once you have your map ready, you can start to identify the IT infrastructure requirements for your CX project. By analyzing customer problems and improvement opportunities with maps, you will then identify the technology gaps and the new investment requirements in your IT infrastructure. Moving step by step from the more abstract to the more concrete is the best guarantee of making the right IT investment decisions. Remember to keep your initial customer journey map in your pocket in every one of your CX project meetings; that's your map to success!

    Read the article

  • Using SQL Source Control and Vault Professional Part 4

    - by Ajarn Mark Caldwell
    Two weeks ago I upgraded our installation of Fortress to the latest version, which is now named Vault Professional.  This is the version of Vault (i.e. Vault Standard 5.1 / Vault Professional 5.1) that will be officially supported with Red-Gate SQL Source Control 2.1.  While the folks at Red-Gate did a fantastic job of working with me to get SQL Source Control to work with the older Fortress version, we weren’t going to just sit on that.  There are a couple of things that Vault Professional cleaned up for us, such as improved integration with Visual Studio 2010, so it was a win all around. Shortly after that upgrade, I received notice from Red-Gate that they had a new Early Access version of SQL Source Control available that included the ability to source control static data.  The idea here is that you probably have a few fairly static lookup tables in your system, and those data values are similar in concept to source code, and should be versioned in your source control management system also.  I agree with this, but please be wise…somebody out there is bound to try to use this feature as their disaster recovery for their entire database, and that is NOT the purpose.  First off, you should never have your PROD (or LIVE, whatever you call it) system attached to source control.  Source Control is for development, not for PROD systems.  Second, use the features that are intended for this purpose, such as BACKUP and RESTORE. Laying that tangent aside, it is great that now you can include these critical values in your repository and make them part of a deployment process.  As you would guess, SQL Source Control uses SQL Data Compare to create the data change scripts just like it uses SQL Compare to create the schema change scripts.  Once again, they did a very good job with the integration to their other products.  At this point we are really starting to see some good payback on our investment in the full SQL Developer Bundle.  Those products were worth the investment back when we only used them sporadically for troubleshooting and DBA analysis, but now with SQL Source Control, they are becoming everyday-use products for the development team. I like this software (SQL Source Control) so much that I am about to break my own rules and distribute it to my team to use even though it is still in beta.  This is the first time that I have approved the use of any beta software in a production scenario (actively building our next versions of internal software) but I predict that the usability and productivity gain of using SQL Source Control over manual scripting is worth the risk.  Of course, I have also put this beta software through its paces pretty well to be comfortable with it, and Red-Gate has proven their responsiveness to issues that came up in my early beta testing, and so I am willing to bet on their continued support.  Likewise, SourceGear, the maker of Vault Professional, has proven itself to me as well, and so the combination of SQL Source Control with Vault Professional is the new standard for my development team.

    Read the article

  • SBUG Session: The Enterprise Cache

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman]

    I did a session on "The Enterprise Cache" at the UK SOA/BPM User Group yesterday which generated some useful discussion. The proposal was for a dedicated caching layer which all app servers and service providers can hook into, sharing resources and common data. The architecture might end up like this: I'll update this post with a link to the slide deck once it's available. The next session will have Udi Dahan walking through nServiceBus, register on EventBrite if you want to come along.

    Synopsis
    • Looked at the benefits and drawbacks of app-centric isolated caches, compared to an enterprise-wide shared cache running on dedicated nodes;
    • Suggested issues and risks around caching including staleness of data, resource usage, performance and testing;
    • Walked through a generic service cache implemented as a WCF behaviour – suitable for IIS- or BizTalk-hosted services - which I'll be releasing on CodePlex shortly;
    • Listed common options for cache providers and their offerings.

    Discussion
    • Cache usage. Different value propositions for utilising the cache: improved performance, isolation from underlying systems (e.g. service output caching can have a TTL large enough to cover downtime), reduced resource impact – CPU, memory, SQL and cost (e.g. caching results of paid-for services).
    • Dedicated cache nodes. Preferred over in-host caching provided latency is acceptable. Depending on cache provider, can offer easy scalability and global replication so cache clients always use local nodes. Restriction of AppFabric Caching to Windows Server 2008 not viewed as a concern.
    • Security. Limited security model in most cache providers. Options for securing cache content suggested as custom implementations. Obfuscating keys and serialized values may mean additional security is not needed. Depending on security requirements and architecture, can ensure cache servers only accessible to cache clients via IPsec.
    • Staleness. Generally thought to be an overrated problem. Thinking in line with eventual consistency, that serving up stale data may not be a significant issue. Good technical arguments support this, although I suspect business users will be harder to persuade.
    • Providers. Positive feedback for AppFabric Caching – speed, configurability and richness of the distributed model making it a good enterprise choice. .NET port of memcached well thought of for performance but lack of replication makes it less suitable for these shared scenarios. Replicated fork – repcached – untried and less active than memcached. NCache also well thought of, but Express version too limited for enterprise scenarios, and commercial versions look costly compared to AppFabric.

    Read the article

  • Ghost team foundation build controllers

    - by Martin Hinshelwood
    Quite often after an upgrade there are things left over. Most of the time they are easy to delete, but sometimes it takes a little effort. Even rarer are those times when something just will not go away no matter how much you try. We have had a ghost team build controller hanging around for a while now, and it had defeated my best efforts to get rid of it. The build controller was from our old TFS server from before our TFS 2010 beta 2 upgrade and was really starting to annoy me. Every time I try to delete it I get the message:

        "Controller cannot be deleted because there are build in progress" - Manage Build Controller dialog

    Figure: Deleting a ghost controller does not always work.

    I ended up checking all of our 172 Team Projects for the build that was queued, but did not find anything. Jim Lamb pointed me to the "tbl_BuildQueue" table in the Team Project Collection database and sure enough there was the nasty little beggar.

    Figure: The ghost build was easily spotted.

    Adam Cogan asked me: "Why did you suspect this one?" Well, there are a number of things that led me to suspect it:
    • QueueId is very low: look at the other items, they are in the thousands, not single digits
    • ControllerId: I know there is only one legitimate controller, and I am assuming that 6 relates to "zzUnicorn"
    • DefinitionId: this is a very low number and I looked it up in "tbl_BuildDefinition" and it did not exist
    • QueueTime: as we did not upgrade to TFS 2010 until late 2009, a date of 2008 for a queued build is very suspect
    • Status: a status of 2 means that it is still queued

    This build must have been queued long ago when we were using TFS 2008, probably a beta, and it never got cleaned up. As controllers are new in TFS 2010, it would have created the "zzUnicorn" controller to handle any build servers that already existed. I had previously deleted the agent, but leaving the controller just looks untidy. Now that the ghost build has been identified there are two options:
    • Delete the row. I would not recommend ever deleting anything from the database to achieve something in TFS. It is really not supported.
    • Set the Status to cancelled (recommended). This is the best option, as TFS will then clean it up itself.

    So I set the Status of this build to cancelled and sure enough it disappeared after a couple of minutes, and I was then able to delete the "zzUnicorn" controller.

    Figure: Almost completely clean.

    Now all I have to do is get rid of that untidy "zzBunyip" agent, but that will require rewriting one of our build scripts which will have to wait for now.

    Technorati Tags: ALM,TFBS,TFS 2010
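
    For reference, the change described above amounts to a single UPDATE against the collection database. The sketch below is hypothetical: the QueueId and the numeric value used for "cancelled" are placeholders, not values confirmed by the post, so verify the status values for your TFS version first, and remember that editing TFS databases directly is unsupported.

        -- Hypothetical example: mark the ghost queued build as cancelled so TFS cleans it up
        DECLARE @GhostQueueId    INT = 2;    -- assumption: the single-digit QueueId spotted above
        DECLARE @CancelledStatus INT = 16;   -- assumption: the value your TFS version uses for 'Cancelled'

        UPDATE dbo.tbl_BuildQueue
        SET    [Status] = @CancelledStatus
        WHERE  QueueId  = @GhostQueueId;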

    Read the article

  • Oracle VM Templates for EBS 12.1.3 for Exalogic Now Available

    - by Elke Phelps (Oracle Development)
    Oracle VM Templates for Oracle E-Business Suite 12.1.3 for the x86 Exalogic Platform (64 bit) are now available on the Oracle Software Delivery Cloud. The templates contain all the required elements to create an Oracle E-Business Suite R12 demonstration system on an Exalogic server. You can use these templates to quickly build an EBS 12.1.3 demonstration environment, bypassing the operating system and the software install (via the EBS Rapid Install).

    The Oracle E-Business Suite Release 12.1.3 (64 bit) template for the Exalogic platform is an Oracle Virtual Server Guest template that contains a complete Oracle E-Business Suite Release 12.1.3 Database Tier and Application Tier installation. For additional details, please refer to the following My Oracle Support Note: Oracle E-Business Suite Release 12.1.3 Database Tier and Application Tier Template for Oracle Exalogic Platform (Note 1499132.1).

    The Oracle E-Business Suite system is installed on top of Oracle Linux Version 5 Update 6. The templates have been optimized for performance, including OS kernel settings and E-Business Suite configuration settings tuned specifically for the Exalogic platform. The configuration delivered with this template for a mid-tier running on Exalogic will support hundreds of concurrent users. Please refer to Section 2: Performance Analysis in My Oracle Support Note 1499132.1 for additional details.

    Additional Information
    The Oracle E-Business Suite VM templates for the Exalogic platform contain the following software versions:
    • Operating System: Oracle Linux Version 5 Update 6
    • Oracle E-Business Suite 12.1.3 (Database Tier)
    • Oracle E-Business Suite 12.1.3 (Application Tier)

    The following considerations were made when the Oracle E-Business Suite VM template for the Exalogic platform was designed:
    • Templates use the hardware-virtualized architecture, supporting hardware with the virtualization feature.
    • Database Tier Template is configured to use: 16 GB RAM, 4 VCPUs, 250 GB of disk space for application installation.
    • Application Tier Template is configured to use: 16 GB RAM, 4 VCPUs, 50 GB of disk space for application installation.

    References
    • Oracle E-Business Suite Release 12.1.3 Database Tier and Application Tier Template for Oracle Exalogic Platform (Note 1499132.1)

    Related Articles
    • Part 1: E-Business Suite 12.1.1 Templates for Oracle VM Now Available
    • Part 2: Using Oracle VM with Oracle E-Business Suite Virtualization Kit
    • Part 3: On Clouds and Virtualization in EBS Environments (OpenWorld 2009 Recap)
    • Part 4: Deploying E-Business Suite on Amazon Web Services Elastic Compute Cloud
    • Part 5: Live Migration of EBS Services Using Oracle VM
    • Support Policies for Virtualization Technologies and Oracle E-Business Suite
    • Virtualization and the E-Business Suite, Redux
    • Virtualization and E-Business Suite

    Read the article

  • C# Domain-Driven Design Sample Released

    - by Artur Trosin
    In this post I want to announce that the NDDD Sample application(s) have been released and share the work with you. You can access it here: http://code.google.com/p/ndddsample.

    NDDDSample, from a functionality perspective, matches DDDSample 1.1.0, which is based on Java and is a joint effort by Eric Evans' company Domain Language and the Swedish software consulting company Citerus. Because NDDDSample is based on .NET technologies, the two implementations cannot be matched directly. However, concepts, practices, values and patterns - especially DDD - are cross-language and cross-platform :). Implementing the .NET version of the application was an interesting journey, because now, as a .NET developer, I better understand the differences, positive and negative, between these two platforms. Even though those differences exist, they can be overcome; in many cases it was not so hard to match a Java lib/framework with .NET during the implementation.

    Here is the technology stack:
    1. .NET 3.5 - framework
    2. VS.NET 2008 - IDE
    3. ASP.NET MVC 2.0 - for administration and tracking UI
    4. WCF - communication mechanism
    5. NHibernate - ORM
    6. Rhino Commons - NHibernate session management, base classes for in-memory unit tests
    7. SQLite - database
    8. Windsor - inversion of control container
    9. Windsor WCF facility - for better integration with NHibernate
    10. MvcContrib - in particular its Castle WindsorControllerFactory, to enable IoC for controllers
    11. WPF - for the incident logging application
    12. Moq - mocking lib used for unit tests
    13. NUnit - unit testing framework
    14. Log4net - logging framework
    15. Cloud based on Azure SDK

    These are not the latest technologies, tools and libs at the moment, but if anyone thinks it would be useful to migrate the sample to the latest current technologies and versions, please comment. The cloud version of the application is based on the Azure emulated environment provided by the SDK, so it hasn't been tested in a 'real' Azure scenario (we just do not have access to it).

    Thanks to the participants: Eugen Gorgan, who was involved directly in development; Ruslan Rusu and Victor Lungu, who spent their free time discussing .NET-specific decisions; and Eugen Navitaniuc, who helped with Java-related questions. Also, a big thank you to Cornel Cretu, who designed a nice logo and helped with some browser incompatibility issues.

    Any review and feedback are welcome! Thank you, Artur Trosin

    Read the article

  • on coding style

    - by user12607414
    I vastly prefer coding to discussing coding style, just as I would prefer to write poetry instead of talking about how it should be written. Sometimes the topic cannot be put off, either because some individual coder is messing up a shared code base and needs to be corrected, or (worse) because some officious soul has decided, "what we really need around here are some strongly enforced style rules!" Neither is the case at the moment, and yet I will venture a post on the subject. The following are not rules, but suggested etiquette. The idea is to allow a coherent style of coding to flourish safely and sanely, as a humane, inductive, social process. Maxim M1: Observe, respect, and imitate the largest-scale precedents available. (Preserve styles of whitespace, capitalization, punctuation, abbreviation, name choice, code block size, factorization, type of comments, class organization, file naming, etc., etc., etc.) Maxim M2: Don't add weight to small-scale variations. (Realize that Maxim M1 has been broken many times, but don't take that as license to create further irregularities.) Maxim M3: Listen to and rely on your reviewers to help you perceive your own coding quirks. (When you review, help the coder do this.) Maxim M4: When you touch some code, try to leave it more readable than you found it. (When you review such changes, thank the coder for the cleanup. When you plan changes, plan for cleanups.) On the Hotspot project, which is almost 1.5 decades old, we have often practiced and benefited from such etiquette. The process is, and should be, inductive, not prescriptive. An ounce of neighborliness is better than a pound of police-work. Reality check: If you actually look at (or live in) the Hotspot code base, you will find we have accumulated many annoying irregularities in our source base. I suppose this is the normal condition of a lived-in space. Unless you want to spend all your time polishing and tidying, you can't live without some smudge and clutter, can you? Final digression: Grammars and dictionaries and other prescriptive rule books are sometimes useful, but we humans learn and maintain our language by example not grammar. The same applies to style rules. Actually, I think the process of maintaining a clean and pleasant working code base is an instance of a community maintaining its common linguistic identity. BTW, I've been reading and listening to John McWhorter lately with great pleasure. (If you end with a digression, is it a tail-digression?)

    Read the article

  • JavaScript Sucks.

    - by Matt Watson
    JavaScript sucks. Yes, I said it. Microsoft's announcement of TypeScript got me thinking today. Is this a step in the right direction? It sounds like it fixes a lot of problems with JavaScript development. But is it really just duct tape and super glue for a programming model that needs to be replaced?

    I have had a love-hate relationship with JavaScript, like most developers who would prefer avoiding client side code. I started doing web development over 10 years ago and I have done some pretty cool stuff with JavaScript. It has come a long way and is the universal standard these days for client side scripting in the web browser. Over the years the browsers have become much faster at processing JavaScript. Now people are even trying to use it on the server side via node.js. OK, so why do I think JavaScript sucks?

    Well first off, as an enterprise web application developer, I don't like any scripting or dynamic languages. I like code that compiles, for lots of obvious reasons. JavaScript is messy to code with and lacks all kinds of modern programming features. We spend a lot of time trying to hack it to do things it was never really designed for.

    Ever try to use different jQuery based plugins that require conflicting jQuery versions? Yeah, that sucks. How about trying to figure out how to make 20 JavaScript include files load quicker as one request? Yeah, that sucks too. Performance? Let me just point to the old Facebook mobile app made with JS & HTML5. It sucked. Enough said. How about unit testing JavaScript? I've never tried it, but it sure sounds like fun.

    My biggest problem with JavaScript is code security. If I make some awesome product, there is no way to protect my code. How can we expect game makers to write apps in 100% JavaScript and HTML5 if they can't protect their intellectual property?

    There are compiling tools like Closure, unit test frameworks, minify, CoffeeScript, TypeScript and a bunch of other tools. But to me, they all try to make up for the weaknesses and problems with JavaScript. JavaScript is a mess and we spend a lot of time trying to work around all of its problems. It is possible to program in Silverlight, Java or Flash and run that in the browser instead of JavaScript, but they all have their own problems and lack universal mobile support. I believe Microsoft's new TypeScript is a step forward for JavaScript, but I think we need to start planning to go a whole different direction. We need a new universal client side programming model, because JavaScript sucks.

    Read the article

  • Ubuntu 11.10 running in windows 7 (wubi) AND on a separate partition

    - by Pareen
    I am in a very strange situation and need some help. I installed Ubuntu 11.10 through Wubi a while back so that I can use it alongside Windows 7. I was running out of space on my disk when trying to install applications. Without understanding how Wubi worked, I partitioned my C drive (creating a new 90 GB partition) in Windows, booted from the Ubuntu 11.10 install/live disk, and used the "something else" option to create an ext4 partition (setting the mount point to root) and a swap space partition (/sda5 and /sda6).

    After the install, my computer no longer boots with the previous Wubi menu and is now using the Linux GRUB. The options I have are /sda2, which boots Windows 7; /sda1, which doesn't do anything and reloads the same menu; and the run Linux options. So I now have Ubuntu running on a separate partition, as well as the original Wubi install. I want to delete the separate partition and go back to running Ubuntu on Wubi... if I remove the partition, will I need the Windows 7 disk to restore the boot loader? I don't have the Windows 7 disk on me, so what is the best way to clean this up so I get rid of the separate partition?

    UPDATE:

    Thank you so much for your response. Actually, it would be fantastic if I could migrate my Wubi install into the new partition, because I had downloaded the AOSP on the Wubi install (as well as other files) and would love to preserve them. If I can do that and work on the new partition with the old files, that would be great, and I can worry about wiping out the partition completely later on, i.e. when I have the Windows disk or something. Can you tell me how to do this migration?

    So when I select /sda2, it loads up my Windows. If I click on the Linux option, it loads up the newly installed Linux fine (my files that were on the Wubi install aren't there). If I click on /sda1 (SYSTEM_DRIVE... this is what Wubi was using to boot the menu that let me select Windows 7 or Ubuntu), it fails and just reloads the original menu. Here is the link to my boot info script: http://pastebin.com/dMrY0NL3

    Read the article

  • ATI Radeon HD 4650 AGP Video card not recognized properly

    - by PastorLarry
    I have an ASUS ATI Radeon HD 4650 AGP in this system (yeah, I know how old it is). I've been on Ubuntu since 10.04, and the system has never properly recognized the card. I have always had the VESA drivers installed. Now that I have the time to address the problem: 12.04 was listing the card as "Unknown" under the System Settings. Meanwhile, Sysinfo recognizes the card as:

        Advanced Micro Devices [AMD] nee ATI RV730 Pro AGP [Radeon HD 4600 Series] (prog-if 00 [VGA controller])
        Subsystem: ASUSTeK Computer Inc. Device 0028

    So I know that this card should be using the radeon driver (or even the radeonhd driver). However, when I installed the mesa-utils package, the card is suddenly reported as:

        Gallium 0.4 on llvmpipe (LLVM 0x300)

    So now, I'm completely at a loss. It seems that the llvmpipe stuff has to do with OpenGL, but it still appears that I don't have the proper video driver installed. That being said, does anyone know what I can do to force the system to recognize the card and use the radeon driver?

    [EDIT 05.28] I did look at some other information, including glxinfo and a couple of other commands (it was REALLY late, so I don't remember the other commands) and I got these:

        glxinfo | grep vendor:
        server glx vendor string: SGI
        client glx vendor string: Mesa Project and SGI
        OpenGL vendor string: X.org

        glxinfo | grep renderer:
        OpenGL renderer string: Gallium 0.4 on AMD RV730

    One of the other commands gave a whole lot of info and near the end stated that the activation string for the radeon driver was "modprobe radeon". I've tried that from sudo and as root, but it doesn't seem to change anything. I'm at a complete loss. I've even added the xorg-edgers PPA to my Software Sources and updated and rebooted the system, but nothing has changed. Most of all, I can't seem to find any documentation on this issue, as it seems to be assumed that the radeon driver will install automatically, no questions asked. I feel like such a newbie. Does anyone have any ideas on this?

    [EDIT 05.28] Results of lsmod | grep radeon (in a more readable format than the comment below):

        radeon          733693  3
        ttm              65344  1  radeon
        drm_kms_helper   45466  1  radeon
        drm             197692  5  radeon,ttm,drm_kms_helper
        i2c_algo_bit     13199  1  radeon

    [EDIT 05.29] This is my /etc/X11/xorg.conf:

        Section "ServerLayout"
            Identifier     "aticonfig Layout"
            Screen      0  "aticonfig-Screen[0]-0" 0 0
        EndSection

        Section "Module"
        EndSection

        Section "Monitor"
            Identifier   "aticonfig-Monitor[0]-0"
            Option       "VendorName" "ATI Proprietary Driver"
            Option       "ModelName" "Generic Autodetecting Monitor"
            Option       "DPMS" "true"
        EndSection

        Section "Device"
            Identifier  "aticonfig-Device[0]-0"
            Driver      "fglrx"
            BusID       "PCI:1:0:0"
        EndSection

        Section "Screen"
            Identifier "aticonfig-Screen[0]-0"

    So here is my question. Can I simply change the name of the driver in the device section to "radeon" instead of "fglrx" and have the radeon driver work? Or is there a way to use this as a template, change the appropriate lines, and activate the radeon driver through this file?
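
    For what it's worth, a minimal sketch of the edited Device section the question describes might look like the following. The Identifier and BusID are copied from the config above; whether the open-source driver actually binds to this AGP card after the change is something only a reboot will tell, so backing up the original xorg.conf first would be prudent:

        Section "Device"
            # Swap the proprietary fglrx driver for the open-source radeon driver
            Identifier  "aticonfig-Device[0]-0"
            Driver      "radeon"
            BusID       "PCI:1:0:0"
        EndSection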

    Read the article

  • Handling Coding Standards at Work (I'm not the boss)

    - by Josh Johnson
    I work on a small team, around 10 devs. We have no coding standards at all. There are certain things that have become the norm but some ways of doing things are completely disparate. My big one is indentation. Some use tabs, some use spaces, some use a different number of spaces, which creates a huge problem. I often end up with conflicts when I merge because someone used their IDE to auto format and they use a different character to indent than I do. I don't care which we use I just want us all to use the same one. Or else I'll open a file and some lines have curly brackets on the same line as the condition while others have them on the next line. Again, I don't mind which one so long as they are all the same. I've brought up the issue of standards to my direct manager, one on one and in group meetings, and he is not overly concerned about it (there are several others who share the same view as myself). I brought up my specific concern about indentation characters and he thought a better solution would be to, "create some kind of script that could convert all that when we push/pull from the repo." I suspect that he doesn't want to change and this solution seems overly complicated and prone to maintenance issues down the road (also, this addresses only one manifestation of a larger issue). Have any of you run into a similar situation at work? If so, how did you handle it? What would be some good points to help sell my boss on standards? Would starting a grass roots movement to create coding standards, among those of us who are interested, be a good idea? Am I being too particular, should I just let it go? Thank you all for your time. Note: Thanks everyone for the great feedback so far! To be clear, I don't want to dictate One Style To Rule Them All. I'm willing to concede my preferred way of doing something in favor of what suits everyone the best. I want consistency and I want this to be a democracy. I want it to be a group decision that everyone agrees on. True, not everyone will get their way, but I'm hoping that everyone will be mature enough to compromise for the betterment of the group. Note 2: Some people are getting caught up in the two examples I gave above. I'm more after the heart of the matter. It manifests itself with many examples: naming conventions, huge functions that should be broken up, should something go in a util or service, should something be a constant or injected, should we all use different versions of a dependency or the same, should an interface be used for this case, how should unit tests be set up, what should be unit tested, (Java specific) should we use annotations or external config. I could go on.
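
    If it helps the discussion: one lightweight way to take the tabs-versus-spaces question off the table, without waiting for a full standards document, is a shared EditorConfig file checked into the repository, which most IDEs respect natively or via a plugin. This is only a sketch of the idea, not a recommendation from the question; the specific values are illustrative:

        # .editorconfig at the repository root (values are examples only)
        root = true

        [*]
        indent_style = space
        indent_size = 4
        end_of_line = lf
        insert_final_newline = true
        trim_trailing_whitespace = true

    Because the file travels with the code, everyone's editor applies the same indentation automatically, which sidesteps the merge-conflict problem described above.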

    Read the article

  • SQL SERVER – Preserve Leading Zero While Coping to Excel from SSMS

    - by pinaldave
    Earlier I wrote two articles about how to efficiently copy data from SSMS to Excel. Since I wrote those posts there has been plenty of interest in this subject, and there are a few questions I keep getting about it. One of the questions is how to preserve leading zeros while copying data from SSMS to Excel. Well, it is handled almost the same way as in my earlier post SQL SERVER – Excel Losing Decimal Values When Value Pasted from SSMS ResultSet. The key here is in Excel and not in SQL Server. The step is to change the format of the Excel cells from Number to Text, and that will preserve values with leading or trailing zeros in Excel.

    However, I assume this is done for display purposes only, because once you convert a column to Text you may find it difficult to do numeric operations over the column, for example aggregation or averages. If you need to do that, you should either convert the column back to Numeric in Excel or do the processing in the database and export the processed value along with the original.

    However, I have seen requirements in the real world where the user has to have a numeric value with leading zeros in it for display purposes. Here is my suggestion: instead of manipulating the numeric value in the database and converting it to a character value, the ideal thing to do is to store it as a numeric value only in the database. Whatever changes you want to make for display purposes should be handled at display time using the format functions of SQL or the application language. Honestly, the database is the data layer and presentation is the presentation layer – they are two different things and if possible they should not be mixed.

    If for any reason you cannot follow the above advice and what you need is to append leading zeros in the database itself, here are two of my previous articles I suggest you refer to. I am open to learning new tricks as these articles are almost three years old. Please share your opinions and suggestions in the comments area.

    SQL SERVER – Pad Ride Side of Number with 0 – Fixed Width Number Display
    SQL SERVER – UDF – Pad Ride Side of Number with 0 – Fixed Width Number Display

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Excel
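
    For the in-database approach the linked articles cover, the usual T-SQL pattern is to pad the value with zeros to a fixed width at query time while leaving the stored column numeric. A small sketch (the variable name and the width of 6 are made up for illustration):

        -- Hypothetical example: render an INT as a fixed-width, zero-padded string
        DECLARE @OrderNumber INT = 42;

        SELECT RIGHT(REPLICATE('0', 6) + CAST(@OrderNumber AS VARCHAR(6)), 6) AS PaddedOrderNumber;
        -- Returns: 000042

    The underlying value stays numeric, so aggregation and arithmetic still work against the original column.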

    Read the article

  • ASP.Net 4.5 Garbage Collection Improvement

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2013/06/24/asp.net-4.5-garbage-collection-improvement.aspx

    I just read Five Great .NET Framework 4.5 Features on CodeProject by Shivprasad Koirala. Feature 5 in his article mentions the GC background cleanup and has a good explanation of the work the GC has to do for ASP.NET on the server:

    "Garbage collector is one real heavy task in a .NET application. And it becomes heavier when it is an ASP.NET application. ASP.NET applications run on the server and a lot of clients send requests to the server thus creating loads of objects, making the GC really work hard for cleaning up unwanted objects."

    "To overcome the above problem, server GC was introduced. In server GC there is one more thread created which runs in the background. This thread works in the background and keeps cleaning… objects thus minimizing the load on the main GC thread. Due to double GC threads running, the main application threads are less suspended, thus increasing application throughput. To enable server GC, we need to use the gcServer XML tag and enable it to true."

        <configuration>
          <runtime>
            <gcServer enabled="true"/>
          </runtime>
        </configuration>

    This is not done by default. The MSDN information page says: "There are only two garbage collection options, workstation or server. For single-processor computers, the default workstation garbage collection should be the fastest option. Either workstation or server can be used for two-processor computers. Server garbage collection should be the fastest option for more than two processors. Use the GCSettings.IsServerGC property to determine if server garbage collection is enabled."

    "In the .NET Framework 4 and earlier versions, concurrent garbage collection is not available when server garbage collection is enabled. Starting with the .NET Framework 4.5, server garbage collection is concurrent. To use non-concurrent server garbage collection, set the <gcServer> element to true and the <gcConcurrent> element to false."

    So if you're using ASP.NET 4.5 and have a multi-core server, you should try turning on Server Garbage Collection and do some profiling to see if it improves the performance of your site.
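
    For completeness, a sketch of the non-concurrent server GC configuration that the quoted MSDN paragraph describes (gcServer on, gcConcurrent off); as with the setting above, it is worth profiling rather than assuming it helps your workload:

        <configuration>
          <runtime>
            <!-- Server GC without concurrent/background collection (.NET 4.5) -->
            <gcServer enabled="true"/>
            <gcConcurrent enabled="false"/>
          </runtime>
        </configuration>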

    Read the article

  • Installing Forms and Reports on a development system

    - by Duncan Mills
    By popular demand I've resurrected / updated one of the old blog postings from Jan Carlin's Blog on GroundSide here. A recent (lengthy) post on the Forms forums chronicles the problems some of you have had installing F&R on a development machine. See the link in the headline of this post for the main one. When installing, here are some points to bear in mind:
    • Download and install WebLogic Server first: http://www.oracle.com/technology/software/products/middleware/index.html
    • Find the Forms and Reports (and Disco and Portal) zip files here. Download them to the desktop (or some other temporary directory of your choosing). Unzip both of the two zip files into the same new directory (maybe called 'stage') and check that you have 4 directories in the stage dir when you are finished unzipping: 'Disk1', 'Disk2', 'Disk3' and 'Disk4'. These folders are specified in the zip file structure and must be preserved for the setup executable to work. If you use WinZip and have a right click menu option that says "Extract to here", use that by right click-dragging the zip file onto the newly created directory. Don't use the "Extract to folder %HOME%\Desktop\ofm_pfrd...disk_1of2" option. That will get you into the trouble that was reported early in this thread.
    • Free up as much memory as you can. Stop services and background processes and virus scanners and databases (you don't need a DB to install Forms) and other things lurking about on your machine. You can restart them when the install is done. Around 1.5 GB free real memory should do it. If it doesn't, free up more if you can. Don't change the swap space unless you know what you are doing. Let Windows handle it. A 1 GB machine will likely not be enough. You will likely need at least 2 GB of RAM.
    • Start the install with setup.exe from the 'Disk1' directory.
    • Choose the Install and Configure option unless you have a good reason not to.
    • Choose a unique instance name even if you deinstalled and removed the last install. I suggest using 'asinst_20090722_1' (today's date in ISO format with a running incremented number at the end if you install more than two times on a particular day).
    • Unselect Portal and Discoverer and select the Builders you want.
    • Unselect WebCache.
    • Unselect OHS.
    • Unselect the single sign-on option.
    • Check for any failures and choose the retry option if any occur. If that doesn't fix the problem, call Oracle Customer Support.

    Read the article

  • Before the Summit of 2012

    - by Ajarn Mark Caldwell
    Today, Monday, was the first day of the PASS Summit Preconference training events, but instead I spent the day at the free SQL in the City event put on by Red Gate. For me this was not a financial decision (pre-con sessions cost extra above the general Summit registration) but rather a matter of interest.  I had already included money for pre-cons in this year’s training budget, but none of them really stood out to me, so even if the Red Gate event were not going on at the same time, I probably would not have gone to any pre-cons this year.  However, the topics being presented at the SQL in the City event were of great interest to me.  There promised to be good information on Continuous Integration and automated deployment of database changes, which lately has been a real hot topic at my work.  And indeed, Red Gate announced the release of a new tool (still in Early Access Program…a.k.a. Beta) called the Deployment Manager.  Since we are in the middle of a TFS implementation project, it will be interesting to see how this plays out and compares to what we put together with the automated builds in TFS.  But, as I understand it, the primary focus of Deployment Manager is not the build process (Red Gate uses JetBrains’ TeamCity for that in their shop) but rather to aid in the deployment of those build packages, as well as providing easy rollback and a good visualization of which versions of software are in which environments.  It looks promising and I’ve already downloaded the installer package to play with it later.

    Overall, I was quite impressed with the SQL in the City event.  Having heard many current and past members of the PASS Board of Directors describe the challenges of putting on a large conference, and the growing pains that the PASS Summit has gone through, I am even more impressed that the Red Gate event ran as smoothly as it did.  And the amount of money that Red Gate must have spent is quite impressive, given that this was a no-charge event to attend, with a very nice hot lunch and an after-event drinks celebration.  Well done, folks!

    Of course it was great to hear from a variety of speakers.  Today I listened to some folks from Red Gate like Grant Fritchey (blog | @GFritchey) and David Atkinson (Product Manager for SQL Source Control and now the Deployment Manager tool set); and also Brent Ozar (blog | @BrentO) and Buck Woody (blog | @BuckWoody).  By the way, if you have never seen either Brent or Buck speak, you really should.  Different styles, but both are very entertaining and educational at the same time.  I love Buck’s sense of humor (here’s a tip…don’t be late to Buck’s session or you’ll become part of the presentation) and I praise Brent’s slides.  Brent’s style very much reminds me of that espoused by Garr Reynolds on his Presentation Zen blog (and book) and I am impressed that he can make a technical presentation so engaging.

    It was a great day, a great way to kick off the week, and I am excited to get into the full Summit!

    Read the article

  • Freshen the RTS genre

    - by William Michael Thorley
    This isn't really a question, but a request for feedback. A demo version of RPS (Rock, Paper, Scissors), an RTS (Real Time Strategy) game, is out.

    The game is simple. It is an RTS. Why has it been made? Many if not most RTS's are about economy and large numbers of unit types. The genre hasn't actually developed its gameplay drastically from the very first RTS's produced; some lessons have been learned, but the games are really very similar to how they have always been. RPS brings new gameplay to the RTS genre through three means:
    • New combat mechanics: RPS has two unique modes (as well as the old favourite) of resolving weapon fire. These change how combat happens, and make application of the correct units vital to success. From this comes the requirement to run intel on your enemies.
    • Fixed resource economy: Each player has a fixed amount of energy, so there is a definite end to the game. You can wear your enemy down through attrition and try to outlast them, or try to outspend your opponent and destroy them. The generation of construction blocks limits how fast ships can be built, but energy is the hard limit on the economy.
    • Game modes: Game modes add victory conditions and new game pieces. The game is overseen by a controller which literally runs the game. Games are no longer "line them up, gun them down." New tactics must be found, keeping skirmish games fresh without adding huge numbers of new unit types to learn.

    I've produced RPS from the ground up. I will be running a Kickstarter in the near future, but right now I want feedback and input from the game development community: on the concepts, where RPS is going, the game modes, the combat mechanics, and how it plays. RPS will give fresh gameplay to the genre, so it must be right. It works over the internet or a LAN and supports single-player games. Get it. Play it. Tell me about your games. Thank you.
    Demo: https://dl.dropbox.com/u/51850113/RPS%20Playtest.zip
    Tutorials: https://dl.dropbox.com/u/51850113/RPSGamePlay.zip

    Read the article

  • “Our Users are Doing Something Surprising”… but what?

    - by antonio romero
    I’ve just started a discussion on the OWB LinkedIn Group based on a blog post from Laura Klein’s “Users Know” blog, entitled “Your Users are Doing Something Surprising”… As a PM I found the post thought-provoking and a good reminder to learn from our customers:

    ...You may have written user stories and work flows... But you know who didn’t read your user stories? That’s right: your users. The result? Somewhere out there, a whole lot of your users are doing something totally unexpected with your product.... Your customers want to do something with your product so badly that they’re going out of their way to come up with clever ways to do it on their own. There are three excellent reasons for you to know what your customers are actually doing with your product:
    • So you know if you are missing an opportunity to pivot your product or marketing
    • So you know if you are missing an important feature
    • So you don’t accidentally destroy a commonly used workaround or "unplanned feature"

    Truer words were rarely blogged. In fact, just in the last few weeks I have had several "users" (some customers, and some internal to Oracle, in fact) turn up having built unexpected but powerful things around OWB, because it has such extensibility mechanisms built into it: OMB*Plus, the old Java APIs back before 10.2, and now the code template/knowledge module framework OWB shares with ODI. Some of our external users show astounding knowledge of how to make OWB really sing. (We hope to feature case studies from several of them over the course of the year on the OWB blog.)

    My question to all of you: can you identify things you have done or are doing with OWB, or that you depend on in it, that you think would come as a surprise to us? This could be either some development so advanced as to leave us all gob-smacked, or just some common (to you) thing that you use it for that you find enormously valuable but that you think is a bit off the theoretical "main line" use case of loading data warehouses. I invite the readers of this blog to come visit the OWB and ODI LinkedIn group and share their unusual applications of OWB or the very ordinary-looking features that you don’t want us to forget or would like us to extend. Your anecdotes will impress the crowd and will also help shape future data integration products from Oracle... Come on, surprise us. :)

    Read the article
