Search Results

Search found 22122 results on 885 pages for 'uses feature'.


  • Unit testing ASP.NET Web API controllers that rely on the UrlHelper

    - by cibrax
    UrlHelper is the class you can use in ASP.NET Web API to automatically infer links from the routing table without hardcoding anything. For example, the following code uses the helper to infer the location URL for a new resource:

        public HttpResponseMessage Post(User model)
        {
            var response = Request.CreateResponse(HttpStatusCode.Created, model);
            var link = Url.Link("DefaultApi", new { id = model.Id, controller = "Users" });
            response.Headers.Location = new Uri(link);
            return response;
        }

    That code uses a previously defined route, "DefaultApi", which you might configure in the HttpConfiguration object (this is the route generated by default when you create a new Web API project). The problem with UrlHelper is that it requires some initialization before you can invoke it from a unit test (for testing the Post method in this example). If you don't initialize the HttpConfiguration and Request instances associated with the controller from the unit test, it will fail miserably. After digging into the ASP.NET Web API source code a little bit, I could figure out what the requirements for using the UrlHelper are. It relies on the routing table configuration and a few properties you need to add to the HttpRequestMessage. The following code illustrates what's needed:

        var controller = new UserController();
        controller.Configuration = new HttpConfiguration();
        var route = controller.Configuration.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );
        var routeData = new HttpRouteData(route,
            new HttpRouteValueDictionary { { "id", "1" }, { "controller", "Users" } });

        controller.Request = new HttpRequestMessage(HttpMethod.Post, "http://localhost:9091/");
        controller.Request.Properties.Add(HttpPropertyKeys.HttpConfigurationKey, controller.Configuration);
        controller.Request.Properties.Add(HttpPropertyKeys.HttpRouteDataKey, routeData);

    The HttpRouteData instance should be initialized with the route values you will use in the controller method ("id" and "controller" in this example). Once you have correctly set up all those properties, you shouldn't have any problem using the UrlHelper. There is no need to mock anything else. Enjoy!
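
    With that setup in place, a test can call Post directly and assert on the inferred link. Below is a hedged sketch of what the complete test might look like; the MSTest attributes, the User.Id property, and the expected URL are illustrative assumptions, not part of the original post:

        [TestMethod]
        public void Post_SetsLocationHeaderFromRoute()
        {
            // ... the controller/route/request setup shown above goes here ...

            // Assumes the sample User type exposes an Id property.
            var response = controller.Post(new User { Id = 1 });

            Assert.AreEqual(HttpStatusCode.Created, response.StatusCode);

            // "api/Users/1" follows from the "api/{controller}/{id}" template
            // and the http://localhost:9091/ request URI used in the setup.
            Assert.AreEqual("http://localhost:9091/api/Users/1",
                            response.Headers.Location.ToString());
        }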

    Read the article

  • Migrating a blog from Orchard 0.5 to 0.9

    - by Bertrand Le Roy
    My personal blog still runs on Orchard 0.5, because the theme that I used to build it is not yet available for more recent versions, but it is still very important for me to know that I can migrate all my content and comments to a new version at any time. Fortunately, Nick Mayne has been consistently shipping a BlogML module a few days after each of the Orchard versions shipped. Because the module gallery for each version is behind a different URL and is kept alive even after a new one ships, it is very easy to install the module for both versions.

    Step 0: Setting up the migration environment. In order to do the migration, I made a local copy of the production site on my laptop (data included: I'm using SQL CE) and I also created a new local site with a fresh install of Orchard 0.9.

    Step 1: Enable the gallery feature on both versions. From the admin UI, go to Features and locate the Gallery feature under "Packaging". Enable it. You may now click on "Browse Gallery" on the 0.5 instance and "Modules" under "Gallery" for 0.9.

    Step 2: Install the BlogML module on both versions. From the gallery page, locate the BlogML module and install it. Do it on both versions. Then go to Features and enable BlogML under "Content Publishing". Do it on both versions.

    Step 3: Export from the 0.5 version. Click on "Manage Blog", then on "Export using BlogML" from the 0.5 version. The module then informs you of the path of the saved file.

    Step 4: Import into the 0.9 version. From the 0.9 version, click "Import" under "Blogs". Click the button to browse to the file that you just saved from 0.5, then click "Upload file and Import".

    Step 5: Copy the 0.5 media folder into 0.9. Copy the contents of the 0.5 version's media folder into the media folder of the 0.9 version. Once that is done, you can delete the "Default/Blog Exports" subfolder.

    Step 6: Configure the target blog. Click "Manage Blog", then "Blog Properties", and restore any properties you had on the source blog. For me, that was the title and URL, as well as setting the blog as the home page and showing it on the main menu.

    Step 7: Republish the new site to the production server. Once this is done and everything works locally, you are ready to publish to the production site. I use FTP.

    Note: this should work just as well for any pair of versions for which the BlogML module exists, not just for 0.5 and 0.9.

    Read the article

  • UIView vs CCLayer and Making gestures work in Cocos2d

    - by Lewis
    I've been searching for an answer to this question for at least 3 days now. I've tried many cocos2d forums, including the official one, and have heard nothing back. I've found a project which uses custom gestures: https://github.com/melle/OneFingerRotationGestureDemo, explained here: http://blog.mellenthin.de/archives/2012/02/13/an-one-finger-rotation-gesture-recognizer/. Now I want to implement that behaviour on a sprite in a cocos2d application. I've tried to do this but it fails to work. It uses a view controller which inherits like this: @interface OneFingerRotationGestureViewController : UIViewController <OneFingerRotationGestureRecognizerDelegate>. My question is: how would I implement the OneFingerRotationGesture behaviour on a CCSprite in cocos2d 2.0? Interfaces in cocos2d look like this: @interface HelloWorldLayer : CCLayer. I have asked a similar question on Stack Overflow, and a user directed me to this link: https://github.com/krzysztofzablocki/CCNode-SFGestureRecognizers, which I believe makes use of gestures (like the first linked project) but not custom gestures. I lack the Obj-C skills to work out and implement the functionality into my game, so I would appreciate it if someone could explain the differences between CCLayer and UIViewController, and help me implement the OneFingerRotation gesture into a cocos2d 2.0 project. Regards, Lewis.

    Read the article

  • Architecture advice for converting biz app from old school to new school?

    - by Aaron Anodide
    I've got a WinForms business application that has evolved over the past few years. It's forms-over-data with a number of custom UI experiences tailored to the business, so I don't think it's a candidate to port to something like SharePoint or rewrite in LightSwitch (at least not without significant investment). When I started it in 2009 I was new to this type of development (coming from more low-level programming, and my RDBMS knowledge was just slightly greater than what I got from school). Thus, when I was confronted with a business model that operates on a strict monthly accounting cycle, I made the unfortunate decision to create a separate database for each accounting period. Also, when I started I knew DataSets, then I learned Linq2Sql, then I learned EntityFramework. The screens are a mix and match of those. Now, after a few years developing this thing by myself, I've finally got a small team. Ultimately, I want a web front end (for remote access to more straight-up screens with grids of data) and a thick client (for the highly customized interfaces). My question is: can you offer me some broad-strokes architecture advice that will help me formulate a battle plan to convert over to a single database and lay the foundations for my future goals at the same time? Here's a screenshot showing how an older screen uses DataSets and a newer screen uses EF (I'm thinking this might make it more real for someone reading the question - I'm willing to add any amount of detail if someone is willing to help).
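
    To make the single-database target concrete, here is a minimal Entity Framework sketch of the usual consolidation approach: period-scoped rows carry a foreign key to an accounting-period table instead of living in per-period databases. All names in it (AccountingPeriod, LedgerEntry, BizContext) are hypothetical illustrations, not taken from the question:

        using System;
        using System.Data.Entity;
        using System.Linq;

        public class AccountingPeriod
        {
            public int Id { get; set; }
            public DateTime StartDate { get; set; }
            public DateTime EndDate { get; set; }
        }

        public class LedgerEntry
        {
            public int Id { get; set; }
            public int PeriodId { get; set; }              // FK replaces the per-period database
            public virtual AccountingPeriod Period { get; set; }
            public decimal Amount { get; set; }
        }

        public class BizContext : DbContext
        {
            public DbSet<AccountingPeriod> Periods { get; set; }
            public DbSet<LedgerEntry> Entries { get; set; }
        }

    Queries then filter by period rather than switching connection strings, e.g. ctx.Entries.Where(e => e.PeriodId == periodId), which also gives the future web front end and the thick client one schema to share.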

    Read the article

  • An XEvent a Day (28 of 31) – Tracking Page Compression Operations

    - by Jonathan Kehayias
    The Database Compression feature in SQL Server 2008 Enterprise Edition can provide some significant reductions in storage requirements for SQL Server databases and, in the right implementations and scenarios, performance improvements as well. There isn't really a whole lot of information about the operations of database compression documented as being available in the DMVs or SQL Trace. Paul Randal pointed out on Twitter today that sys.dm_db_index_operational_stats() provides...(read more)

    Read the article

  • Why is Quicksort called "Quicksort"?

    - by Darrel Hoffman
    The point of this question is not to debate the merits of this over any other sorting algorithm - certainly there are many other questions that do this. This question is about the name. Why is Quicksort called "Quicksort"? Sure, it's "quick", most of the time, but not always. The possibility of degenerating to O(n^2) is well known. There are various modifications to Quicksort that mitigate this problem, but the ones which bring the worst case down to a guaranteed O(n log n) aren't generally called Quicksort anymore (e.g. Introsort). I just wonder why, of all the well-known sorting algorithms, this is the only one deserving of the name "quick", which describes not how the algorithm works, but how fast it (usually) is. Mergesort is called that because it merges the data. Heapsort is called that because it uses a heap. Introsort gets its name from "introspective", since it monitors its own performance to decide when to switch from Quicksort to Heapsort. Similarly for all the slower ones - Bubblesort, Insertion sort, Selection sort, etc. They're all named for how they work. The only other exception I can think of is "Bogosort", which is really just a joke that nobody ever actually uses in practice. Why isn't Quicksort called something more descriptive, like "Partition sort" or "Pivot sort", which describe what it actually does? It's not even a case of "got here first": Mergesort was developed 15 years before Quicksort (1945 and 1960 respectively, according to Wikipedia). I guess this is really more of a history question than a programming one. I'm just curious how it got the name - was it just good marketing?
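
    Since the question turns on what the algorithm actually does, here is a short sketch of the partition step that names like "Partition sort" or "Pivot sort" would be describing. This is the classic Lomuto partitioning scheme, shown purely for illustration:

        // Partition a[lo..hi] around a pivot; smaller elements end up on the left.
        static int Partition(int[] a, int lo, int hi)
        {
            int pivot = a[hi];                      // take the last element as the pivot
            int i = lo;
            for (int j = lo; j < hi; j++)
            {
                if (a[j] < pivot)
                {
                    int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
                    i++;
                }
            }
            int t = a[i]; a[i] = a[hi]; a[hi] = t;  // pivot lands in its final slot
            return i;
        }

        static void Quicksort(int[] a, int lo, int hi)
        {
            if (lo >= hi) return;
            int p = Partition(a, lo, hi);
            Quicksort(a, lo, p - 1);                // sort everything left of the pivot
            Quicksort(a, p + 1, hi);                // sort everything right of it
        }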

    Read the article

  • Tagged: 5 things SQL Server should drop

    - by AaronBertrand
    I was tagged by Paul Randal (blog | twitter) last night in his latest blog post, entitled "What 5 things should SQL Server get rid of?" His top 5 pretty much coincide with my top 5, so I'll have to dig a little deeper. In no particular order: Syntax inconsistencies. This isn't really a specific thing that Microsoft should get rid of, but rather an attitude and overall approach to SQL Server's long-term development. Every time they add a feature or option to SQL Server, it seems to be implemented...(read more)

    Read the article

  • Telephone.com

    - by jrice
    Check out Telephone.com, our new website built on .NET Framework 3.5. You can now add your own twist to telephone.com and personalize your messaging style by writing your own SMS applications, implementing any feature you would like to add to your messaging experience using our WCF REST API. Regards

    Read the article

  • Using polygons instead of quads on Cocos2d

    - by rraallvv
    I've been looking under the hood of Cocos2d, and I think (please correct me if I'm wrong) that although working with quads is a key feature of the engine, it shouldn't be difficult to make it work with arbitrary arrays of vertices (i.e. polygons) instead of quads, a quad being just a special case of an array of four vertices. By the way, does anyone have any code that makes cocos2d render a texture-filled polygon inside a batch node? The code posted here (http://www.cocos2d-iphone.org/forum/topic/8142/page/2#post-89393) does a nice job rendering a texture-filled polygon, but the class doesn't work with batch nodes.

    Read the article

  • Which tools should be used for data migration between environments?

    - by Paula Speranza-Hadley
    With the Oracle Utilities Application Framework based products, there are a number of tools provided that can be used to transfer data from one environment to another. There are three main tools that implementations use:

    - ConfigLab - A configurable copy facility that is metadata aware: it understands the relationships between objects and, by invoking the relevant maintenance objects, validates the data copied. This utility uses the object validation to help ensure data integrity. Basically it is a set of configuration tables and a set of batch jobs to perform the migration of data.

    - Bundling - A configurable release management tool that allows exporting of Advanced Configuration Environment based objects (business services, business objects, UI Maps etc.) from one environment to another.

    - Blueprint - An Oracle Utilities Software Development Kit (SDK) based tool to import metadata from the development environment to your initial testing environment. The utility is command line based and basically uses a text based configuration file to drive the utility on the source and target sides.

    Each tool has a role in an implementation, but you must be careful to use the right tool for the right job. The suggestions are as follows:

    - Only use the Blueprint tool for migrating data from your development platform to your initial test environment. The Blueprint tool is not designed to move large amounts of data, is risky if not used correctly, and can potentially break the integrity of your data.

    - The SDK should only be used for the configuration data it is designed for (mainly metadata). It should not be extended beyond that: while it can perform data migration on any data, it is inefficient and risky for certain types of configuration data.

    Additional information can be found in the following whitepaper: Oracle Utilities Application Framework - Release Management - Software Configuration Management, on MyOracle.com.

    Read the article

  • How to Never Use iTunes With Your iPhone, iPad, or iPod Touch

    - by Chris Hoffman
    iTunes isn't an amazing program on Windows. There was a time when Apple device users had to plug their devices into their PCs or Macs and use iTunes for device activation, updates, and syncing, but iTunes is no longer necessary. Apple still allows you to use iTunes for these things, but you don't have to. Your iOS device can function independently from iTunes, so you should never be forced to plug it into a PC or Mac.

    Device Activation: When the iPad first came out, it was touted as a device that could replace full PCs and Macs for people who only needed to perform light computing tasks. Yet, to set up a new iPad, users had to plug it into a PC or Mac running iTunes and use iTunes to activate the device. This is no longer necessary. With new iPads, iPhones, and iPod Touches, you can simply go through the setup process after turning on your new device without ever having to plug it into iTunes. Just connect to a Wi-Fi or cellular data network and log in with your Apple ID when prompted. You'll still see an option that allows you to activate the device via iTunes, but this should only be necessary if you don't have a wireless Internet connection available for your device.

    Operating System Updates: You no longer have to use Apple's iTunes software to update to a new version of Apple's iOS operating system, either. Just open the Settings app on your device, select the General category, and tap Software Update. You'll be able to update right from your device without ever opening iTunes.

    Purchased iTunes Media: Apple allows you to easily access content you've purchased from the iTunes Store on any device. You don't have to connect your device to your computer and sync via iTunes. For example, you can purchase a movie from the iTunes Store. Then, without any syncing, you can open the iTunes Store app on any of your iOS devices, tap the Purchased section, and see stuff you've downloaded. You can download the content right from the store to your device. This also works for apps - apps you purchase from the App Store can be accessed in the Purchased section of the App Store on your device later. You don't have to sync apps from iTunes to your device, although iTunes still allows you to. You can even set up automatic downloads from the iTunes & App Store settings screen. This would allow you to purchase content on one device and have it automatically download to your other devices without any hassle.

    Music: Apple allows you to re-download purchased music from the iTunes Store in the same way. However, there's a good chance you have your own music you didn't purchase from iTunes. Maybe you spent time ripping it all from your old CDs and you've been syncing it to your devices via iTunes ever since. Apple's solution for this is named iTunes Match. This feature isn't free, but it's not a bad deal at all. For $25 per year, Apple allows you to upload all your music to your iCloud account. You can then access all your music from any iPhone, iPad, or iPod Touch. You can stream all your music - perfect if you have a huge library and little storage on your device - and choose which songs you want to download to your device for offline use. When you add additional music to your computer, iTunes will notice it and upload it using iTunes Match, making it available for streaming and downloading directly from your iOS devices without any syncing. This feature is named iTunes Match because it doesn't just upload music - if Apple already has a song you upload, it will "match" your song with Apple's copy. This means you may get higher-quality versions of your songs if you ripped them from CD at a lower bitrate.

    Podcasts: You don't have to use iTunes to subscribe to podcasts and sync them to your devices. Even if you have a lowly iPod Touch, you can install Apple's Podcasts app from the App Store. Use it to subscribe to podcasts and configure them to automatically download directly to your device. You can use other podcast apps for this, too.

    Backups: You can continue backing up your device's data through iTunes, generating local backups that are stored on your computer. However, new iOS devices are configured to automatically back up their data to iCloud. This happens automatically in the background without you even having to think about it, and you can restore such backups when setting up a device simply by logging in with your Apple ID.

    Personal Data: In the days of PalmPilots, people would use desktop programs like iTunes to sync their email, contacts, and calendar events with their mobile devices. You probably shouldn't have to sync this data from your computer. Just sign into your email account - for example, a Gmail account - on your device and iOS will automatically pull your email, contacts, and calendar events from your associated account.

    Photos: Rather than connecting your iOS device to your computer and syncing photos from it, you can use an app that automatically uploads your photos to a web service. Dropbox, Google+, and even Flickr all have this feature in their apps. You'll be able to access your photos from any computer and have a backup copy without any syncing required.

    You may still need to use iTunes if you want to sync local music without paying for iTunes Match or copy local video files to your device. Copying large local files over is the only real scenario where you'd need iTunes. If you don't need to copy such files over, you can go ahead and uninstall iTunes from your Windows PC if you like. You shouldn't need it.

    Read the article

  • How to choose an agile methodology?

    - by Christophe Debove
    I'm working in a small firm of about 10 developers. We work in a kind of agile way, but without real knowledge or formalism. I think that being aware of what the agile methods are and what they can offer us could make our product development more productive. However, there are a lot of agile methods; which would be the simplest to "learn"?

    - Rapid Application Development
    - Dynamic Systems Development Method
    - Scrum
    - Feature Driven Development
    - Extreme Programming
    - Adaptive Software Development
    - Test Driven Development
    - Crystal Clear

    Read the article

  • MySQL Enterprise Backup 3.8.2 - Overview

    - by Priya Jayakumar
    MySQL Enterprise Backup (MEB) is the ideal solution for backing up MySQL databases. MEB 3.8.2 was released in June 2013. The main goal of the MySQL Enterprise Backup 3.8.2 release is to improve usability. With this release, users can track the progress of a backup both in terms of size and as a percentage of the total. This release also offers options to manage the behavior of MEB in case the space on the secondary storage is completely exhausted during backup. The progress indicator is a (short) string that indicates how far the execution of a time-consuming MEB command has progressed. It consists of one or more "meters" that measure the progress of the command. Two options are introduced to control the progress reporting function of the mysqlbackup command: (1) --show-progress and (2) --progress-interval. The user can control the progress indicator by using the --show-progress option in any of the MEB operations. This option instructs MEB to output periodic short reports on the progress of time-consuming commands. The argument of this option indicates where the output should be sent: for example stderr, stdout, file, fifo or table. With the --show-progress option, both the total size of the backup to be copied and the size that's already copied will be shown. Along with this, the state of the operation - for example, data or metadata being copied, or tables being locked - will also be reported. This gives the DBA clearer information on the progress of the backup. The interval between progress reports, in seconds, is controlled by the --progress-interval option. For more information, please refer to progress-report-options. MEB can also be accessed through a GUI in MySQL Workbench's next version, which can serve as the front-end interface for MEB users to perform backup operations at the click of a button. This feature was highly requested by DBAs and will be very useful. Refer to http://insidemysql.com/mysql-workbench-6-0-a-sneak-preview/ for info on the upcoming Workbench release. Along with the progress report feature, some other important issues are also addressed in MEB 3.8.2. A new command line option, --on-disk-full, is introduced to abort or warn the user when a backup process encounters a full disk condition; when no argument is given, it aborts by default. A few issues related to incremental backup are also addressed in this release. Please refer to the 3.8.2 documentation for more details. It would be good for MEB users to move to 3.8.2 to take incremental backups. Overall, the added usability and the important defects fixed in this release make MySQL Enterprise Backup 3.8.2 a promising release.
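
    To make the new options concrete, an invocation using them might look like the sketch below. This is illustrative only: the backup directory is a placeholder, and the option values are simply the ones described above (stderr as the progress-report destination, a 30-second interval, and warn rather than the default abort on a full disk):

        mysqlbackup --backup-dir=/backups/full \
                    --show-progress=stderr \
                    --progress-interval=30 \
                    --on-disk-full=warn \
                    backup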

    Read the article

  • Answers to Your Common Oracle Database Lifecycle Management Questions

    - by Scott McNeil
    We recently ran a live webcast on Strategies for Managing Oracle Database's Lifecycle. There were tons of questions from our audience that we simply could not get to during the hour-long presentation. Below are some of those questions along with their answers. Enjoy!

    Question: In the webcast the presenter talked about "gold" configuration standards. For those who want to use this technique, could you recommend a best practice to consider or follow? How do I get started?

    Answer: Gold configuration standardization is a quick and easy way to improve availability through consistency. Start by choosing a reference database and saving the configuration to the Oracle Enterprise Manager repository using the Save Configuration feature. Next, create a comparison template using the Oracle provided template as a starting point and modify the ignored properties to eliminate expected differences in your environment. Finally, create a comparison specification using the comparison template you created plus your saved gold configuration, and schedule it to run on a regular basis. Don't forget to fill in the email addresses of those you want to notify upon drift detection. Watch the database configuration management demo to learn more.

    Question: Can Oracle Lifecycle Management Pack for Database help with patching an Oracle Real Application Clusters (RAC) environment?

    Answer: Yes, Oracle Enterprise Manager supports both parallel and rolling patch application on Oracle Real Application Clusters. The use of rolling patching is recommended as there is no downtime involved. For more details, watch this demo.

    Question: What are some of the things administrators can do to control configuration drift? Why is it important?

    Answer: Configuration drift is one of the main causes of instability and downtime of applications. Oracle Enterprise Manager makes it easy to manage and control drift using scheduled configuration comparisons combined with comparison templates.

    Question: Does Oracle Enterprise Manager 12c Release 2 offer an incremental update feature for "gold" images? For instance, if the source binary has a higher PSU level, what is the best approach to update the existing "gold" image in the software library? Do you have to create a new image or can you just update the original one?

    Answer: Provisioning Profiles ("gold" images) can contain the installation files and database configuration templates. Although it is possible to make some changes to the profile after creation (mainly to configuration), it is normally recommended to simply create a new profile after applying a patch to your reference database.

    Question: The webcast talked about enforcing in-house standards. Does Oracle Enterprise Manager 12c offer verification of your databases and systems against those standards? For example, the initial "gold" image has been massively deployed over time, and there may be some changes to it. How can you do regular checks from Enterprise Manager to ensure the in-house standards are being enforced?

    Answer: There are really two methods to validate conformity to standards. The first method is to use gold standards against which you compare other databases to report unwanted differences. This method uses a new comparison template technology which allows users to ignore known differences (i.e. SID, start time, etc.), resulting in a report showing only important or non-conformant differences. This method is quick to set up and configure and is recommended for those who want to get started validating compliance quickly. The second method leverages the new compliance framework, which allows the creation of specific and robust validations. These compliance rules are grouped into standards which can be assigned to databases quickly and easily. Compliance rules allow for targeted and more sophisticated validation beyond the basic equals operation available in the comparison method. The compliance framework can be used to implement just about any internal or industry standard. The compliance results will track current and historic compliance scores at the overall and individual database targets. When an issue is resolved, the score is automatically affected. The compliance framework is the recommended long-term solution for validating compliance using Oracle Enterprise Manager 12c. Check out this demo on database compliance to learn more.

    Question: If you are using the integration between Oracle Enterprise Manager and My Oracle Support in an "offline" mode, how do you know if you have the latest My Oracle Support metadata?

    Answer: In Oracle Enterprise Manager 12c Release 2, you now only need to download one zip file containing all of the metadata XML files. There is no indication that the metadata has changed, but you could run a checksum on the file and compare it to the previously downloaded version to see if it has changed.

    Question: What happens if a patch fails while administrators are applying it to a database or system?

    Answer: A large portion of Oracle Enterprise Manager's patch automation is the pre-requisite checks that happen to ensure the highest level of confidence that the patch will successfully apply. It is recommended you test the patch in a non-production environment and save the patch plan as a template once successful, so you can create new plans using the saved template. If you are using the recommended "out of place" patching methodology, there is no urgency because the database is still running while the cloned Oracle home is being patched. Users can address the issue and restart the patch procedure at the point it left off. If you are using the "in place" method, you can address the issue and continue where the procedure left off.

    Question: Can Oracle Enterprise Manager 12c R2 compare configurations between more than one target at the same time?

    Answer: Oracle Enterprise Manager 12c can compare any number of target configurations at one time. This is the basis of many important use cases, including configuration drift management. These comparisons can also be scheduled on a regular basis and email notifications sent should any differences appear. To learn more about configuration search and compare, watch this demo.

    Question: How is data comparison done, since changes are taking place in a live production system?

    Answer: There are many things to keep in mind when using the data comparison feature (part of the Change Management ability to compare table data). It was primarily intended for maintaining consistency of important but relatively static data, for example application seed data and application setup configuration. This data does not change often but is critical when testing an application to ensure results are consistent with production. It is not recommended to use data comparison on highly dynamic data like transactional tables or very large tables.

    Question: Which versions of Oracle Database can be monitored through Oracle Enterprise Manager 12c?

    Answer: Oracle Database versions 9.2.0.8, 10.1.0.5, 10.2.0.4, 10.2.0.5, 11.1.0.7, 11.2.0.1, 11.2.0.2, and 11.2.0.3.
    Watch the On-Demand Webcast.

    Read the article

  • Submit Nominations for Duke's Choice Awards Latin America

    - by Tori Wieldt
    The Duke's Choice Awards are nominated by members of the Java community and recognize compelling uses of Java technology or community involvement.  The first of the regional Duke's Choice Awards will be in December in Latin America. Three winners will be announced on stage during JavaOne Latin America December 4th to 6th and in the Jan/Feb issue of Java Magazine.   Nominations are accepted from anyone in the Java community for compelling uses of Java technology or community involvement.   Duke's Choice Awards LAD judges include community members Yara Senger (Brazil) and Alexis Lopez (Colombia). In keeping with the 10 year tradition of the Duke's Choice Award program, the most important ingredient is innovation. Let's recognize and celebrate the innovation that Java delivers within Latin America! Submit your nominations now!  Nominations close 7 November. www.java.net/dukeschoiceLAD As announced at JavaOne San Francisco, the Duke's Choice Award program has been expanded to include regional awards in conjunction with each international JavaOne conference.  The expanded Duke's Choice Award program celebrates Java innovation happening within specific regions and provides an opportunity to recognize winners locally. Regions include Latin America (LAD), Europe Africa Middle East (EMEA), and Asia.  The global program will continue in association with the flagship JavaOne conference.  

    Read the article

  • Windows Desktop Virtualization Gets Easier

    - by andrewbrust
    This past Thursday, Microsoft announced that Windows (7) Virtual PC (WVPC) and its XP Mode feature would no longer require hardware-assisted virtualization (HAV). That means any PC running Windows 7 Pro, or higher, can now run this software. And that's a great thing because, as I noted in a post almost five months ago, determining whether a given PC you might be planning to buy actually offers HAV can be extremely difficult. That meant even dedicated, sophisticated PC users, with a budget for new hardware, might be blocked from using this technology. And that was just plain silly. One of the features offered by WVPC, and utilized heavily by XP Mode, is the concept of virtual applications: apps within a guest VM that can actually run within the host's desktop environment. I find this feature so powerful that my February Redmond Review column entertained the notion of a future version of Windows that runs all applications in this manner. The elimination of the HAV requirement for XP Mode and WVPC was just one of many virtualization-related announcements Microsoft made on Thursday. And, interestingly, most of the others were also desktop-related, rather than server-related. This is a welcome change from the multi-year period in which Microsoft enhanced its server virtualization lineup (in Hyper-V) and let the desktop platform fester. Microsoft now seems to understand desktop virtualization is in high demand and strengthens the Windows franchise. As I explained in the column, even cloud computing can have a desktop spin if desktop virtualization is part of the equation. One company that knows this well is Citrix, and a closer alliance between Microsoft and Citrix was one of the many announcements from Thursday. In fact, there's a whole Web site dedicated to the alliance at http://www.citrixandmicrosoft.com/. I'd love to see virtual applications and entire virtual desktops offered as Azure-branded services. This could allow me to run, for example, the full Office client on a variety of desktops I might use, and for large organizations it could easily reduce the expense, burden and duration of the deployment cycle for new versions of Office. Business Intelligence providers, including my own firm, twentysix New York, would find great relief in enabling their customers to run the newest version of Excel, with the latest BI capabilities, instead of having to wait the requisite two to three years it takes for many Fortune 500 customers to upgrade. Microsoft should do more, and faster. WVPC still does not support 64-bit guest images, even on 64-bit hosts. That needs to be fixed. File access from the guest to the host needs to be improved (right now, it's done through Terminal Services/Remote Desktop file sharing, and it's slow) and VM load times need to be significantly reduced before virtualized apps can become the norm. (I suppose the advance of solid state drive technology will help there.) I do think these improvements will come, because Microsoft is focused on the virtual desktop now. And that's a smart focus to have.

    Read the article

  • Set Time Limits in Windows Parental Controls

    So you decided that Windows 7's Parental Controls feature could help you with monitoring your child's activities on your computer. You already learned how to enable Parental Controls on your PC. While its default settings will help your monitoring efforts, setting your own rules provides more of a hands-on monitoring experience...

    Read the article

  • Eclipse: always keep files updated

    - by AK01
    I keep lots of files/editors open in Eclipse. I also love using git stash and other git commands that essentially change the contents of my open files. Is there an Eclipse feature or plugin that will always keep the contents of my open files up to date and live? Currently, if I put focus in an out-of-sync editor, I get an awkwardly worded dialog that I have to parse carefully every time. I wish it would just keep me synced, like TextMate does.

    Read the article

  • Packing jar files into library jar files

    - by Hillel
    Firstly, this question is not about packing a simple jar file (e.g. lwjgl) into a runnable jar file. I know how to do this using JarSplice. So if I have a game which uses JInput, I will pack my game jar and jinput.jar using JarSplice and enter the natives in the process. The problem arises when I want to create a custom library that uses JInput, and then pack that into my games. See, the whole idea of writing a game library is that I don't ever have to copy code like the wrapper I wrote for JInput Controller, and I always have a definitive version inside a library jar. Basically, what I want to do is create a jar file of my library, pack jinput.jar into it using JarSplice, possibly with the natives as well, and then when I want to export a jar of my game, I either export it automatically through Eclipse with the library jar, or, if that doesn't work, use JarSplice. I've tried several solutions, and nothing works. When I try to pack the game jar and the library jar using JarSplice, I get an error saying that there are duplicate .project or .classpath files. When I try to export my game through Eclipse with the library jar, it won't run (which is to be expected), but then, if I try to attach the natives with JarSplice, it doesn't give me any errors but the jar doesn't run. I'm not expecting anyone to solve this, but if anyone has an idea, something that will allow me to never look at the Gamepad code ever again, that would be awesome. I don't care if I have to package my library jar using JarSplice 5 times, and then do the same with the game jar, as long as it works. Otherwise I'll just have to copy the Gamepad class into every project alongside the library jar. :(

    Read the article

  • Connect Spotlight: Rename Instance Name

    - by Lara Rubbelke
    Every now and then customers ask me how they can suggest feature or behavior changes to SQL Server. Many of us are aware of Connect, where you can add feature recommendations and vote on other people's suggestions. There are a LOT of recommendations, and I know Microsoft values your feedback and suggestions. Sometimes these recommendations are grand, and others are small - in either case, your votes do make a difference in how Microsoft prioritizes features and changes in future releases. Recently,...(read more)

    Read the article

  • Article Sharing - Windows Azure Memcached Plugin

    - by Shaun
    I just found that David Aiken, a Windows Azure developer and evangelist, wrote a cool article about how to use Memcached in Windows Azure through the new Azure plugin feature: http://www.davidaiken.com/2011/01/11/windows-azure-memcached-plugin/ I think the best solution for distributed cache in Azure would be Windows Azure AppFabric Caching, but since that is only in CTP and available only in the US data center, David's solution would be the best. The only thing I'm concerned about is the stability of the Windows version of Memcached.
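
    For a feel of what the client side looks like, here is a minimal sketch using the Enyim Memcached client for .NET. The endpoint is a placeholder, and the exact configuration calls can differ between client versions, so treat this as an assumption-laden illustration rather than the approach from David's article:

        using System;
        using Enyim.Caching;
        using Enyim.Caching.Configuration;
        using Enyim.Caching.Memcached;

        class CacheDemo
        {
            static void Main()
            {
                // Placeholder endpoint: point this at wherever the plugin
                // hosts Memcached inside your role instances.
                var config = new MemcachedClientConfiguration();
                config.AddServer("127.0.0.1", 11211);

                using (var client = new MemcachedClient(config))
                {
                    // Store a value, then read it back.
                    client.Store(StoreMode.Set, "greeting", "hello from the cache");
                    Console.WriteLine(client.Get<string>("greeting"));
                }
            }
        }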

    Read the article
