Search Results

Search found 22122 results on 885 pages for 'uses feature'.


  • Scott Guthrie in Glasgow

    - by Martin Hinshelwood
    Last week Scott Guthrie was in Glasgow for his new Guathon tour, which was a roaring success. Scott did talks on the new features in Visual Studio 2010, Silverlight 4, ASP.NET MVC 2 and Windows Phone 7. Scott talked from 10am till 4pm, so this can only contain what I remember, and I am sure lots of things he discussed just went in one ear and out the other; however, I have tried to capture at least all of my Oohs and Ahhs. Visual Studio 2010 Right now you can download and install the Visual Studio 2010 Release Candidate, but soon we will have the final product in our hands. With it there are some amazing improvements, and not just in the IDE. New versions of VB and C# come out of the box, as well as Silverlight 4 and SharePoint 2010 integration. The new Intellisense features allow inline support for Types and Dictionaries, as well as being able to type just part of a name and have the list filter accordingly. Even better, and my personal favourite, is one that Scott did not mention: it is not case sensitive, so I can actually find things in C# with its reasonless case sensitivity (Scott, can we please have an option to turn that off?). Another nice feature is that the Routing engine that was created for ASP.NET MVC is now available for WebForms, which is good news for all those that just imported the MVC DLLs to get at it anyway. Another fantastic feature that will need some exploring is the ability to add validation rules to your entities and have them validated automatically on the front end. This removes the need to add your own validators and means that you can control an object's validation rules from a single location: the object. A simple command, "GridView.EnableDynamicData(GetType(Product))", will enable this feature on controls. What was not clear was whether there would be support for this in WPF and WinForms as well. If there is, we can write our validation rules once and use them everywhere. I was disappointed to hear that there would be no inbuilt support for the Dynamic Language Runtime (DLR) in VS2010, but I think it will be there for .vNext. Because I have been concentrating on the Visual Studio ALM enhancements in VS2010, I found this section invaluable, as I now know at least some of what I missed. Silverlight 4 I am not a big fan of Silverlight. There, I said it, and I will probably get lynched for it. My big problem with Silverlight is that most of the really useful things I learned from WPF do not work. I am only going to mention one thing, and that is "x:Type". If you are a WPF developer you will know how much power these six little letters provide, the ability to target templates at object types being the most magical and useful. But, and this is a massive but, if you are developing applications that MUST run on platforms other than Windows, then Silverlight is your only choice (well, that and Flash, but let's just not go there). And Silverlight has a huge install base as well: 60% of all internet-connected devices have Silverlight. Can Adobe say that? Even though I am not a fan of it, my current project is a Silverlight one. If you start your XAML experience with Silverlight you will not be disappointed, and neither will the users of the applications you build. Scott showed us a fantastic application called "Silverface" that is a Silverlight 4 Out of Browser application. I have looked for a link and can't find one, but true to form, here is a fantastic WPF version called Fish Bowl from Microsoft. 
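    A hedged C# sketch of the dynamic data validation feature described above follows; the Product entity, its validation attributes and the grid name are my own illustrative assumptions, not code Scott showed (the VB call quoted above is the equivalent entry point):

```csharp
// Illustrative sketch only: validation rules are declared once on the
// entity as data annotations, and EnableDynamicData lets a bound control
// pick them up and validate automatically on the front end.
using System;
using System.ComponentModel.DataAnnotations;
using System.Web.UI;   // DataControlExtensions.EnableDynamicData lives here

public class Product   // hypothetical entity
{
    [Required]
    [StringLength(50)]
    public string Name { get; set; }

    [Range(0, 10000)]
    public decimal Price { get; set; }
}

public partial class ProductsPage : Page
{
    protected void Page_Init(object sender, EventArgs e)
    {
        // C# equivalent of the quoted VB call; ProductsGridView is assumed
        // to be a GridView declared in the page markup.
        ProductsGridView.EnableDynamicData(typeof(Product));
    }
}
```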
    ASP.NET MVC 2 ASP.NET MVC is something I have played with but never used in anger. It is definitely the way forward, but WebForms is not dead yet; there are still circumstances when WebForms are better. If you are starting from greenfield and you are using TDD, then MVC is ultimately the only way you can go. New in version 2 are Dynamic Scaffolding helpers that let you control how data is presented in the UI from the Entities. Adding validation rules and other options that make sense there can help improve the overall ease of developing the UI. Also, the Microsoft team have heard the cries for help from the larger site builders and provided "Areas", which allow a level of categorisation for your Controllers and Views. These work just like add-ins and have their own folder, but also have sub Controllers and Views. Areas are totally pluggable and can be dropped onto existing sites, giving the ability to have boxed products in MVC, although what you do with all of those views is anyone's guess. They have been listening to everyone again with the new option to encapsulate UI using Html.Action or Html.RenderAction. This uses the existing .ascx functionality in ASP.NET to render partial views to the screen in certain areas. While this was possible before, it makes the method official, thereby opening it up to the masses and making it a standard. At the end of the session Scott pulled out some IIS goodies, including the IIS SEO Toolkit, which can be used to verify your own site is "good" for search engine consumption. Better yet, he suggested that you run it against your friends' sites and shame them with how bad they are. Note: make sure you have fixed yours first. Windows Phone 7 Series I had already seen the new UI for WP7 and heard about the developer story, but Scott brought that home by building a Twitter application in about 20 minutes using the emulator. Scott's only mistake was loading @plip's tweets into the app… And guess what, it was written in Silverlight. When Windows Phone 7 launches you will be able to reuse about 90% of the codebase of your existing Silverlight application on the phone! There are two downsides to the new WP7 architecture: No, your existing application WILL NOT work without being converted to either a Silverlight or XNA UI. No, you will not be able to get your applications onto the phone any other way but through the Marketplace. Do I think these are problems? No, not even slightly. This phone is aimed at consumers who have probably never tried to install an application directly onto a device. There will be support for enterprise apps in the future, but for now enterprises should stay on Windows Phone 6.5.x devices. Post Event drinks At the after-event drinks gathering Scott was checking out my HTC HD2 (released in the US this month on T-Mobile) and liked the Windows Phone 6.5.5 build I have on it. We discussed why Microsoft were not going to allow Windows Phone 7 Series onto it, with my understanding being that it had 5 buttons and not 3, while Scott was sure that there was more to it from a hardware standpoint. I think he is right, and although the HTC HD2 has a DX9-compatible processor, it was never built with WP7 in mind. However, as if by magic, Saturday brought fantastic news for all those that have already bought an HD2: Yes, this appears to be Windows Phone 7 running on an HTC HD2. The HD2 itself won't be getting an official upgrade to Windows Phone 7 Series, so all eyes are on the ROM chefs at the moment. 
The rather massive photos have been posted by Tom Codon on HTCPedia, and they've apparently got WiFi, GPS, Bluetooth and other bits working. The ROM isn't online yet, but according to the post there's a beta version coming soon. Leigh Geary - http://www.coolsmartphone.com/news5648.html What was Scott working on on his flight back to the US? Technorati Tags: VS2010,MVC2,WP7S,WP7 Follow: @CAMURPHY, @ColinMackay, @plip and of course @ScottGu

    Read the article

  • Azure News at TechEd 2010

    - by guybarrette
    The Azure team made a few announcements today at TechEd US 2010: the June 2010 Azure Tools SDK with support for VS 2010 RTM and the .NET Framework 4, the production launch of the Azure CDN, and the OS auto-upgrade feature. Read all about it here.

    Read the article

  • MVC Portable Areas – Web Application Projects

    - by Steve Michelotti
    This is the first post in a series related to build and deployment considerations as I've been exploring MVC Portable Areas: #1 – Using Web Application Project to build portable areas #2 – Conventions for deploying portable area static files #3 – Portable area static files as embedded resources Portable Areas is a relatively new feature available in MvcContrib that builds upon the new feature called Areas that was introduced in MVC 2. In short, portable areas provide a way to distribute MVC binary components as simple .NET assemblies rather than an assembly along with all the physical files for the views. At the heart of portable areas is a custom view engine that delivers the *.aspx pages by pulling them from embedded resources rather than from the physical file system. A portable area can be anything from a tiny snippet of html that eventually gets rendered on a page to something as large as an entire MVC web application. You should read this 4-part series to get up to speed on what portable areas are. Web Application Project In most of the posts to date, portable areas are shown being created with a simple C# class library. This is cool and it serves as an effective way to illustrate the simplicity of portable areas. However, the problem with that is that the developer loses out on the normal developer experience with the various tooling/scaffolding options that we've come to expect in Visual Studio, like the ability to easily add controllers, views, etc. I've had good results just using a normal web application project (rather than a class library) to develop portable areas and get the normal VS.NET benefits. However, one gotcha that comes as a result is that it's easy to forget to set the file to "Embedded Resource" every time you add a new aspx page. To mitigate this, simply add the MSBuild snippet shown below to your *.csproj file and all *.aspx and *.ascx files will automatically be set as embedded resources when your project compiles: <Target Name="BeforeBuild"> <ItemGroup> <EmbeddedResource Include="**\*.aspx;**\*.ascx" /> </ItemGroup> </Target> Also, you should remove the Global.asax from this web application as it is not the host. Being able to have the normal tooling experience we've come to expect from Visual Studio makes creating portable areas quite simple. This even allows us to do things like creating a project template such as "MVC Portable Area Web Application" that would come pre-configured with routes set up in the PortableAreaRegistration and no Global.asax file.
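    For context, here is a minimal sketch of a plain MVC 2 AreaRegistration, the framework mechanism that portable areas build upon. Note this is the standard System.Web.Mvc type, not MvcContrib's PortableAreaRegistration, and the area name and route are illustrative assumptions:

```csharp
// A standard MVC 2 area registration; MvcContrib's PortableAreaRegistration
// extends this idea so the views can come from embedded resources.
using System.Web.Mvc;

public class WidgetsAreaRegistration : AreaRegistration
{
    public override string AreaName
    {
        get { return "Widgets"; }
    }

    public override void RegisterArea(AreaRegistrationContext context)
    {
        context.MapRoute(
            "Widgets_default",
            "Widgets/{controller}/{action}/{id}",
            new { action = "Index", id = UrlParameter.Optional });
    }
}
```

    Registrations like this are picked up by the AreaRegistration.RegisterAllAreas() call that normally lives in the host's Global.asax, which is also why the portable area project itself should not carry a Global.asax.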

    Read the article

  • Would you re-design completely under .Net?

    - by dboarman
    A very extensive application began as an Access-based system (for database storage). Forms were written in VB5 and/or VB6. As .Net became a fixture in the development community, certain modules have been rewritten. This seems very awkward and potentially costly to maintain because of the cross-technologies and the extra work to keep the two technologies happy with each other. Of course, the application uses a mix of ODBC and MySql. Would you think spending the time and resources to completely re-develop the application under .Net would be more cost effective? In an effort to develop a more stable application, wouldn't it make sense to use .Net? Or continue chasing Access bugs, adding new features in .Net (which may or may not create new bugs between .Net and Access), and rewriting old Access modules into .Net modules under time constraints that prevent proper design and development? Update The application uses OleDb and MySql - I corrected my previous statement. Also, to lend further support to rewriting: I have since found out that when the "porting" to .Net began, the VBA/VB6 code that existed was basically translated to the .Net equivalent. From my understanding, nothing was done to improve performance or take advantage of new libraries or technologies. In my opinion, this creates a very fragile and unstable application. With every new update, this becomes more and more visible. As a help desk technician, I have noticed an increase in problems reported. The customers using the software have noticed an increase in problems and are commenting on it.

    Read the article

  • ASP.NET Web Forms Extensibility: Providers

    - by Ricardo Peres
    Introduction This will be the first of a number of posts on ASP.NET extensibility. At this moment I don't know exactly how many there will be, and I only know a couple of subjects that I want to talk about, so more will come in the next days. I have the sensation that the providers offered by ASP.NET are not widely known; although everyone uses, for example, sessions, they may not be aware of the extensibility points that Microsoft included. This post won't go into the details of how to configure and extend each of the providers, but will hopefully give some pointers in that direction. Canonical These are the most widely known and used providers. Coming from ASP.NET 1, chances are you have used them already. They have good support for being invoked client side, either from a .NET application or from JavaScript, and lots of server-side controls use them, such as the Login control for example. Membership The Membership provider is responsible for managing registered users, including creating new ones, authenticating them, changing passwords, etc. ASP.NET comes with two implementations, one that uses a SQL Server database and another that uses Active Directory. The base class is MembershipProvider, and new providers are registered in the membership section of the Web.config file, where you can also set parameters specifying minimum password lengths, complexity requirements, maximum password age, etc. One reason for creating a custom provider would be, for example, storing membership information in a different database engine. <membership defaultProvider="MyProvider"> <providers> <add name="MyProvider" type="MyClass, MyAssembly"/> </providers> </membership> Role The Role provider assigns roles to authenticated users. The base class is RoleProvider and there are three out of the box implementations: XML-based, SQL Server and Windows-based. It is also registered in Web.config, through the roleManager section, where you can also say whether your roles should be cached in a cookie. If you want your roles to come from a different place, implement a custom provider. <roleManager defaultProvider="MyProvider"> <providers> <add name="MyProvider" type="MyClass, MyAssembly" /> </providers> </roleManager> Profile The Profile provider allows defining a set of properties that will be tied to and made available to authenticated users, or even anonymous ones, who must be tracked by using anonymous authentication. The base class is ProfileProvider and the only included implementation stores these settings in a SQL Server database. It is configured through the profile section, where you also specify the properties to make available; a custom provider would allow storing these properties in different locations. <profile defaultProvider="MyProvider"> <providers> <add name="MyProvider" type="MyClass, MyAssembly"/> </providers> </profile> Basic OK, I didn't know what to call these, so Basic is probably as good a name as anything else. These are not supported client-side (it doesn't even make sense). Session The Session provider allows storing data tied to the current "session", which is normally created when a user first accesses the site, even when not yet authenticated, and remains all the way. 
The base class is SessionStateStoreProviderBase, and the included implementation is capable of storing data in one of three locations: in the process memory (the default, not suitable for web farms or increased reliability); a SQL Server database (best for reliability and clustering); or the ASP.NET State Service, which is a Windows Service that is installed with the .NET Framework (OK for clustering). The configuration is made through the sessionState section. By adding a custom Session provider, you can store the data in different locations – think, for example, of a distributed cache. <sessionState customProvider="MyProvider"> <providers> <add name="MyProvider" type="MyClass, MyAssembly" /> </providers> </sessionState> Resource A not so well known provider, it allows you to change the origin of localized resource elements. By default, these come from RESX files and are used whenever you use the Resources expression builder or the GetGlobalResourceObject and GetLocalResourceObject methods, but if you implement a custom provider, you can have these elements come from somewhere else, such as a database. The base class is ResourceProviderFactory and there's only one internal implementation, which uses these RESX files. Configuration is through the globalization section. <globalization resourceProviderFactoryType="MyClass, MyAssembly" /> Health Monitoring Health Monitoring is also probably not so well known, and actually not a good name for it. First, in order to understand what it does, you have to know that ASP.NET fires "events" at specific times and when specific things happen, such as when a user logs in or an exception is raised. These are not user interface events; you can create your own and fire them, and nothing will happen, but the Health Monitoring provider will detect them. You can configure it to do things when certain conditions are met, such as a number of events being fired in a certain amount of time. You define these rules and route them to a specific provider, which must inherit from WebEventProvider. Out of the box implementations include sending mails, logging to a SQL Server database, writing to the Windows Event Log, Windows Management Instrumentation, the IIS 7 Trace infrastructure or the debugger Trace. Its configuration is achieved through the healthMonitoring section, and a reason for implementing a custom provider would be, for example, locking down a web application in the event of a significant number of failed login attempts occurring in a small period of time. <healthMonitoring> <providers> <add name="MyProvider" type="MyClass, MyAssembly"/> </providers> </healthMonitoring> Sitemap The Sitemap provider allows defining the site's navigation structure and the associated required permissions for each node, in a tree-like fashion. Usually this is statically defined, and the included provider allows it, by supplying this structure in a Web.sitemap XML file. The base class is SiteMapProvider and you can extend it in order to supply your own source for the site's structure, which may even be dynamic. Its configuration must be done through the siteMap section. <siteMap defaultProvider="MyProvider"> <providers> <add name="MyProvider" type="MyClass, MyAssembly" /> </providers> </siteMap> Web Part Personalization Web Parts are better known by SharePoint users, but since ASP.NET 2.0 they are included in the core Framework. 
Web Parts are server-side controls that offer certain possibilities of configuration by clients visiting the page where they are located. The infrastructure handles this configuration per user or globally for all users, and this provider is responsible for just that. The base class is PersonalizationProvider and the only included implementation stores settings in SQL Server. Add new providers through the personalization section. <webParts> <personalization defaultProvider="MyProvider"> <providers> <add name="MyProvider" type="MyClass, MyAssembly"/> </providers> </personalization> </webParts> Build The Build provider is responsible for compiling whatever files are present in your web folder. There's a base class, BuildProvider, and, as can be expected, internal implementations for building pages (ASPX), master pages (Master), user web controls (ASCX), handlers (ASHX), themes (Skin), XML Schemas (XSD), web services (ASMX, SVC), resources (RESX), browser capabilities files (Browser) and so on. You would write a build provider if you wanted to generate code from any kind of non-code file so that you have strong typing at development time. Configuration goes in the buildProviders section and it is per extension. <buildProviders> <add extension=".ext" type="MyClass, MyAssembly" /> </buildProviders> New in ASP.NET 4 Not exactly new, since they have existed since 2010, but in ASP.NET terms, still new. Output Cache The Output Cache for ASPX pages and ASCX user controls is now extensible through the Output Cache provider, which means you can implement a custom mechanism for storing and retrieving cached data, for example, in a distributed fashion. The base class is OutputCacheProvider and the only included implementation is private. Configuration goes in the outputCache section, and on each page and web user control you can choose the provider you want to use. <caching> <outputCache defaultProvider="MyProvider"> <providers> <add name="MyProvider" type="MyClass, MyAssembly"/> </providers> </outputCache> </caching> Request Validation A big change introduced in ASP.NET 4 (and refined in 4.5, by the way) is the introduction of extensible request validation, by means of a Request Validation provider. This means we are not limited to either enabling or disabling request validation for all pages or for a specific page; we now have fine control over each of the elements of the request, including cookies, headers, query string and form values. The base provider class is RequestValidator and the configuration goes in the httpRuntime section. <httpRuntime requestValidationType="MyClass, MyAssembly" /> Browser Capabilities The Browser Capabilities provider is new in ASP.NET 4, although the concept exists from ASP.NET 2. The idea is to map a browser brand and version to its supported capabilities, such as JavaScript version, Flash support, ActiveX support, and so on. Previously, this was all hardcoded in .Browser files located in %WINDIR%\Microsoft.NET\Framework(64)\vXXXXX\Config\Browsers, but now you can have a class inherit from HttpCapabilitiesProvider and implement your own mechanism. Register it in the browserCaps section. <browserCaps provider="MyClass, MyAssembly" /> Encoder The Encoder provider is responsible for encoding every string that is sent to the browser on a page or header. This includes, for example, converting special characters to their standard codes, and is implemented by the base class HttpEncoder. 
Another implementation takes care of Anti Cross Site Scripting (XSS) attacks. Build your own by inheriting from one of these classes if you want to add some additional processing to these strings. The configuration goes in the httpRuntime section. <httpRuntime encoderType="MyClass, MyAssembly" /> Conclusion That's about it for ASP.NET providers. It was by no means a thorough description, but I hope I managed to raise your interest in this subject. There are lots of pointers on the Internet, so I only included direct references to the Framework classes and configuration sections. Stay tuned for more extensibility!
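To make one of these extensibility points concrete, here is a minimal sketch (my own illustration, not code from the Framework) of a custom Output Cache provider backed by an in-process dictionary. A real custom provider would typically talk to a distributed cache instead, and it would be registered exactly like the outputCache snippet above:

```csharp
// Minimal custom Output Cache provider (illustrative only). It keeps
// entries in an in-process ConcurrentDictionary with absolute expirations.
using System;
using System.Collections.Concurrent;
using System.Web.Caching;

public class DictionaryOutputCacheProvider : OutputCacheProvider
{
    private class CacheEntry { public object Value; public DateTime UtcExpiry; }

    private static readonly ConcurrentDictionary<string, CacheEntry> store =
        new ConcurrentDictionary<string, CacheEntry>();

    public override object Get(string key)
    {
        CacheEntry entry;
        if (store.TryGetValue(key, out entry))
        {
            if (entry.UtcExpiry > DateTime.UtcNow)
                return entry.Value;
            store.TryRemove(key, out entry); // evict the expired entry
        }
        return null;
    }

    public override object Add(string key, object entry, DateTime utcExpiry)
    {
        // Per the contract, Add returns the entry already cached under this
        // key if one exists; otherwise it stores and returns the new one.
        CacheEntry stored = store.GetOrAdd(key,
            _ => new CacheEntry { Value = entry, UtcExpiry = utcExpiry });
        return stored.Value;
    }

    public override void Set(string key, object entry, DateTime utcExpiry)
    {
        store[key] = new CacheEntry { Value = entry, UtcExpiry = utcExpiry };
    }

    public override void Remove(string key)
    {
        CacheEntry removed;
        store.TryRemove(key, out removed);
    }
}
```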

    Read the article

  • Translating with Google Translate without API and C# Code

    - by Rick Strahl
    Some time back I created a database-driven ASP.NET Resource Provider along with some tools that make it easy to edit ASP.NET resources interactively in a Web application. One of the small helper features of the interactive resource admin tool is the ability to do simple translations using both Google Translate and Babelfish. Here's what this looks like in the resource administration form: When a resource is displayed, the user can click a Translate button and it will show the current resource text and then let you set the source and target languages to translate. The Go button fires the translation for both Google and Babelfish and displays them - pressing Use then changes the language of the resource to the target language and sets the resource value to the newly translated value. It's a nice and quick way to get a translation going. Ch… Ch… Changes Originally, both implementations basically did some screen scraping of the interactive Web sites and retrieved the translated text out of the result HTML. Screen scraping is always kind of an iffy proposition as content can be changed easily, but surprisingly that code worked for many years without fail. Recently, however, Google at least changed their input pages to use AJAX callbacks and the page updates no longer worked the same way. End result: the Google translate code was broken. Now, Google does have an official API that you can access, but the API is being deprecated and you actually need to have an API key. Since I have public samples that people can download, the API key is an issue if I want people to have the samples work out of the box - the only way I could even do this is by sharing my API key (not allowed). However, after a bit of spelunking and playing around with the public site, I found that Google's interactive translate page actually makes callbacks using plain public access without an API key. By intercepting some of those AJAX calls and calling them directly from code, I was able to get translation back up and working with minimal fuss, by parsing out the JSON these AJAX calls return. Warning: this is hacky code, but after a fair bit of testing I found it to work very well with all sorts of languages and accented and escaped text etc., as long as you stick to small blocks of translated text. I thought I'd share it in case anybody else had been relying on a screen scraping mechanism like I did and needed a non-API based replacement. Here's the code: /// <summary> /// Translates a string into another language using Google's translate API JSON calls. /// <seealso>Class TranslationServices</seealso> /// </summary> /// <param name="Text">Text to translate. 
Should be a single word or sentence.</param> /// <param name="FromCulture"> /// Two letter culture (en of en-us, fr of fr-ca, de of de-ch) /// </param> /// <param name="ToCulture"> /// Two letter culture (as for FromCulture) /// </param> public string TranslateGoogle(string text, string fromCulture, string toCulture) { fromCulture = fromCulture.ToLower(); toCulture = toCulture.ToLower(); // normalize the culture in case something like en-us was passed // retrieve only en since Google doesn't support sub-locales string[] tokens = fromCulture.Split('-'); if (tokens.Length > 1) fromCulture = tokens[0]; // normalize ToCulture tokens = toCulture.Split('-'); if (tokens.Length > 1) toCulture = tokens[0]; string url = string.Format(@"http://translate.google.com/translate_a/t?client=j&text={0}&hl=en&sl={1}&tl={2}", HttpUtility.UrlEncode(text), fromCulture, toCulture); // Retrieve Translation with HTTP GET call string html = null; try { WebClient web = new WebClient(); // MUST add a known browser user agent or else response encoding doesn't return UTF-8 (WTF Google?) web.Headers.Add(HttpRequestHeader.UserAgent, "Mozilla/5.0"); web.Headers.Add(HttpRequestHeader.AcceptCharset, "UTF-8"); // Make sure we have response encoding to UTF-8 web.Encoding = Encoding.UTF8; html = web.DownloadString(url); } catch (Exception ex) { this.ErrorMessage = Westwind.Globalization.Resources.Resources.ConnectionFailed + ": " + ex.GetBaseException().Message; return null; } // Extract out trans":"...[Extracted]...","from the JSON string string result = Regex.Match(html, "trans\":(\".*?\"),\"", RegexOptions.IgnoreCase).Groups[1].Value; if (string.IsNullOrEmpty(result)) { this.ErrorMessage = Westwind.Globalization.Resources.Resources.InvalidSearchResult; return null; } //return WebUtils.DecodeJsString(result); // Result is a JavaScript string so we need to deserialize it properly JavaScriptSerializer ser = new JavaScriptSerializer(); return ser.Deserialize(result, typeof(string)) as string; } To use the code is straightforward enough - simply provide a string to translate and a pair of two letter source and target languages: string result = service.TranslateGoogle("Life is great and one is spoiled when it goes on and on and on", "en", "de"); TestContext.WriteLine(result); How it works The code to translate is fairly straightforward. It basically uses the URL I snagged from the Google Translate Web Page, slightly changed to return a JSON result (&client=j) instead of the funky nested PHP style JSON array that the default returns. The JSON result returned looks like this: {"sentences":[{"trans":"Das Leben ist großartig und man wird verwöhnt, wenn es weiter und weiter und weiter geht","orig":"Life is great and one is spoiled when it goes on and on and on","translit":"","src_translit":""}],"src":"en","server_time":24} I use WebClient to make an HTTP GET call to retrieve the JSON data and strip out the part of the full JSON response that contains the actual translated text. Since this is a JSON response I need to deserialize the JSON string in case it's encoded (for upper/lower ASCII chars or quotes etc.). Couple of odd things to note in this code: First, note that a valid user agent string must be passed (or at least one starting with a common browser identification - I use Mozilla/5.0). Without this Google doesn't encode the result with UTF-8, but instead uses an ISO encoding that .NET can't easily decode. Google seems to ignore the character set header and use the user agent instead, which is - odd to say the least. 
The other is that the code returns a full JSON response. Rather than use the full response and decode it into a custom type that matches Google's result object, I just strip out the translated text. Yeah, I know that's hacky, but it avoids an extra type and firing up the JavaScript deserializer. My internal version uses a small DecodeJsString() method to decode the JavaScript without the overhead of a full JSON parser. It's obviously not rocket science, but as mentioned above what's nice about it is that it works without a Google API key. I can't vouch for how many translates you can do before there are cutoffs, but in my limited testing, running a few stress tests on a Web server under load, I didn't run into any problems. Limitations There are some restrictions with this: It only works on single words or single sentences - multiple sentences (delimited by .) are cut off at the ".". There is also a length limitation, which appears to happen at around 220 characters or so. While that may not sound like much, for typical word or phrase translations this is plenty of length. Use with a grain of salt - Google seems to be trying to limit their exposure to usage of the Translate APIs, so this code might break in the future, but for now at least it works. FWIW, I also found that Google's translation is not as good as Babelfish, especially for contextual content like sentences. Google is faster, but Babelfish tends to give better translations. This is why in my translation tool I show both the Google and Babelfish values retrieved. You can check out the code for this in the West Wind Web Toolkit's TranslationService.cs file, which contains both the Google and Babelfish translation code pieces. Ironically, the Babelfish code has been working forever using screen scraping and continues to work just fine today. I think it's a good idea to have multiple translation providers in case one is down or changes its format, hence the dual display in my translation form above. I hope this has been helpful to some of you - I've actually had many small uses for this code in a number of applications and it's sweet to have a simple routine that performs these operations for me easily. Resources Live Localization Sample Localization Resource Provider Administration form that includes options to translate text using Google and Babelfish interactively. TranslationService.cs The full source code in the West Wind Web Toolkit's Globalization library that contains the translation code. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in CSharp, HTTP

    Read the article

  • How to add event receiver to SharePoint2010 content type programmatically

    - by ybbest
    Today, I'd like to show how to add an event receiver to a SharePoint 2010 content type programmatically. 1. Create an empty SharePoint project, add a class called ItemContentTypeEventReceiver, make it inherit from SPItemEventReceiver and implement your logic as below: public class ItemContentTypeEventReceiver : SPItemEventReceiver { private bool eventFiringEnabledStatus; public override void ItemAdded(SPItemEventProperties properties) { base.ItemAdded(properties); UpdateTitle(properties); } private void UpdateTitle(SPItemEventProperties properties) { SPListItem addedItem = properties.ListItem; string enteredTitle = addedItem["Title"] as string; addedItem["Title"] = enteredTitle + " Updated"; DisableItemEventsScope(); addedItem.Update(); EnableItemEventsScope(); } public override void ItemUpdated(SPItemEventProperties properties) { base.ItemUpdated(properties); UpdateTitle(properties); } private void DisableItemEventsScope() { eventFiringEnabledStatus = EventFiringEnabled; EventFiringEnabled = false; } private void EnableItemEventsScope() { // restore the firing state saved before the scope was disabled EventFiringEnabled = eventFiringEnabledStatus; } } 2. Create a Site or Web scoped feature (depending on your requirements) and implement your feature event handler as below: public override void FeatureActivated(SPFeatureReceiverProperties properties) { SPWeb web = GetFeatureWeb(properties); //http://karinebosch.wordpress.com/walkthroughs/event-receivers-theory/ string assemblyName = System.Reflection.Assembly.GetExecutingAssembly().FullName; const string className = "YBBEST.AddEventReceiverToContentType.ItemContentTypeEventReceiver"; SPContentType contentType = web.ContentTypes["Item"]; AddEventReceiverToContentType(className, contentType, assemblyName, SPEventReceiverType.ItemAdded, SPEventReceiverSynchronization.Asynchronous); AddEventReceiverToContentType(className, contentType, assemblyName, SPEventReceiverType.ItemUpdated, SPEventReceiverSynchronization.Asynchronous); contentType.Update(); } protected static void AddEventReceiverToContentType(string className, SPContentType contentType, string assemblyName, SPEventReceiverType eventReceiverType, SPEventReceiverSynchronization eventReceiverSynchronization) { if (className == null) throw new ArgumentNullException("className"); if (contentType == null) throw new ArgumentNullException("contentType"); if (assemblyName == null) throw new ArgumentNullException("assemblyName"); SPEventReceiverDefinition eventReceiver = contentType.EventReceivers.Add(); eventReceiver.Synchronization = eventReceiverSynchronization; eventReceiver.Type = eventReceiverType; eventReceiver.Assembly = assemblyName; eventReceiver.Class = className; eventReceiver.Update(); } 3. Deploy your solution and you now have an event receiver attached to the Item content type. You can download the complete source code here. You can also check how to add an event receiver to a list using the SharePoint event receiver item in Visual Studio 2010 in my previous blog.

    Read the article

  • C# in Depth, Third Edition by Jon Skeet, Manning Publications Co. Book Review

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2013/10/24/c-in-depth-third-edition-by-jon-skeet-manning-publications.aspx I started reading this ebook on September 28, 2013, the same day it was sent my way by Manning Publications Co. for review, while it was still fresh off the press. So first thing – thanks to Manning for this opportunity and for a free copy of this must-have-on-every-C#-developer's-desk book! Several hours ago I finished reading this book (well, except for a large portion of its quite lengthy appendix). I jumped into writing this review right away, while still full of emotions and impressions from reading it thoroughly and running the code examples. Before I go any further, I would like to say that I used to program on various platforms using various languages, starting with the Mainframe and ending on Windows, and I gradually shifted toward dealing with databases more than anything; however, I happened to program in C# 1 a lot when it was first released, and then some C# 2, with a big leap in between to C# 5. So my perception and experience reading this book may differ from yours. Also, what I want to tell is somewhat funny: back then, knowing some Java and seeing C# 1 released, I initially drew the parallel that it was a copycat language. How wrong was I… Interestingly, Jon programs in Java full time, but how little that was mentioned in the book! So, more on the book: be informed, this is not a typical "Recipes" or "Cookbook" title or any set of ready solutions; it is rather targeting mature, advanced developers who not only know how to use a number of features but are willing to understand how the language operates "under the hood". I must state immediately that, at the same time, I am glad the author did not go into the murky depths of the MSIL, so this is a very welcome decision on covering a modern language like C#. Thank you, Jon! Frankly, not all was that rosy regarding the tone and structure of the book; especially through the first half or so, several negative and positive emotions kept overpowering each other. To expand more on that, some statements in the book appeared biased to me, or filled with prejudice; it started to look like it had some PR in it, but thankfully this was all gone toward the end of the first third of the book. Specifically, the mention of the C# language's popularity: Java is the #1 language as per https://sites.google.com/site/pydatalog/pypl/PyPL-PopularitY-of-Programming-Language (many other sources put C at the top, which I highly doubt); also, many interesting functional languages such as Clojure and Groovy have appeared and gained huge traction, and they run on top of Java/JVM, whereas C# does not enjoy such a situation. If we want to discuss popularity in general and say how fast a developer can find a new job that pays well, it would indeed be the very Java, C++ or PHP, never C#. Or that phrase on language preference being a personal issue? We choose where to work, or we are chosen because of a technology used at a given software shop, not vice versa. The book is technically very accurate, with valid code and concise examples, but I wish the author would give more concrete, real-life examples of where each feature should be used, not just how. 
Another point to realize before you get the book: it is almost a live book, which started to be written when even C# 3 wasn't around, so a lot of ground is covered (nearly half of the book) on the pre-C# 3 feature releases. If you already have a solid background in the previous releases and do not plan to upgrade, perhaps half of the book can be skipped; otherwise this book is surely highly recommended. Alas, for me it was a hard read, most of it. It was not boring (well, only maybe two times); it was just hard to grasp some of the concepts. But do not get me wrong, it did make me pause, on several occasions, and made me read and re-read a page or two. At times I even wondered if I have any IQ at all (LOL). Be prepared to read A LOT on generics. Not that they are widely used in the field (I happen to work as a consultant and have been through a lot of code at many places); my impression is that developers today, in the best case, program using examples found at OpenStack.com. Also, unlike the Java world, where having the most recent version is nearly mandated by the OSS, most companies on the Microsoft platform are almost never tempted to upgrade the .Net version very soon or very often. As a side note, I was glad to see code recently that included a nullable variable (the myvariable? notation) and this made me smile; besides, I recommended this book to that person to expand her knowledge. The good things about this book are that Jon maintains an active forum, prepared code snippets and even a small program (Snippy) that is happy to run the sample code, saving you from writing any plumbing code. A tad now on the C# language itself – it sure enjoyed a wonderful road toward perfection and very high adoption, especially for ASP development. But to me all the recent features that made this statically typed language more dynamic look strange. Don't we have F#? Which is supposed to be the dynamic language? Why do we need to have a hybrid language? Now developers live their lives in a dualism of static and dynamic variables! And LINQ to SQL: it is covered in depth, but wasn't it supposed to be dropped? Also, it seems that very little is being added, and at a slower pace; e.g. Roslyn will come in late 2014 perhaps, and will probably be the only main feature. Again, it is quite hard to read this book as various chapters mention various C# versions every so often – if only I could remember what was covered exactly where! Given that it has so many jumps/links back and forth, I recommend the ebook format to make the navigation easier, and I do recommend using software that allows bookmarking. Also make sure you have access to plenty of coffee and pizza (hey, you probably know this joke – who a programmer is)! In terms of closing, if you are stuck at the C# 1 or 2 level, it is time to embrace the power of C# 5! Finally, to compliment Manning: this book, unlike any from other publishers so far, was the only one equally readable (properly formatted) on my tablet as in Adobe Reader on a laptop.

    Read the article

  • Cloud Based Load Testing Using TF Service & VS 2013

    - by Tarun Arora [Microsoft MVP]
    Originally posted on: http://geekswithblogs.net/TarunArora/archive/2013/06/30/cloud-based-load-testing-using-tf-service-amp-vs-2013.aspx One of the new features announced as part of the Visual Studio 2013 Ultimate Preview is "Cloud Based Load Testing". In this blog post I'll walk you through: What is Cloud Based Load Testing? How have I been using this feature? – a success story! Where can you find more resources on this feature? What is Cloud Based Load Testing? It goes without saying that performance testing your application not only gives you the confidence that the application will work under heavy levels of stress, but also gives you the ability to test how scalable the architecture of your application is. It is important to know how much is too much for your application! Working with various clients in the industry, I have realized that the biggest barriers to Load Testing & Performance Testing adoption are: the high infrastructure and administration cost that comes with this phase of testing; the time taken to procure and set up the test infrastructure; and finding a use for this infrastructure investment after completion of testing. Is cloud the answer? It is 100% Visual Studio compatible, scalable and realistic, lets you start testing in < 2 minutes, is intuitive, lets you pay only for what you need, and lets you use existing on-premise tests on the cloud. There are a lot of vendors out there offering Cloud Based Load Testing; to name a few: Load Storm, Soasta, Blaze Meter, Blitz, and others… The question you may want to ask is, why should you go with Microsoft's Cloud based Load Test offering? If you are a Microsoft shop or already have investments in Microsoft technologies, you'll see great benefit in the natural integration this offers with existing Microsoft products such as Visual Studio and Windows Azure. For example, your existing Web tests authored in Visual Studio 2010 or Visual Studio 2012 will run on the cloud without requiring any modifications whatsoever. Microsoft's cloud test rig also supports API based testing; for example, if you are building a WPF application which consumes WCF services, you can write unit tests to invoke the WCF service, and these tests can be run on the cloud test rig and loaded with 'N' concurrent users for performance testing. If you have your assets already hosted in Azure, and possibly in the same data centre as the cloud test rig, your Azure app will not incur a usage cost because of the generated traffic, since the traffic is coming from the same data centre. The licensing or pricing information on Microsoft's cloud based Load test service is yet to be announced, but I would expect this to be priced attractively to match the market competition. The only additional configuration required for running load tests on Microsoft's Cloud based Load Test service is to select the test run location as "Run tests using Visual Studio Team Foundation Service". How have I been using Microsoft's Cloud based Load Test Service? I have been part of the Microsoft Cloud Based Load Test Service advisory council for the last 7 months. This gave me the opportunity to see the product shape up from concept to working solution. I was also the first person outside of Microsoft to try this offering out. This gave me the opportunity to test real world applications at various clients using the Microsoft Load Test Service and provide real world feedback to the Microsoft product team. One of the most recent systems I tested using the Load Test Service has been an insurance quote generation engine. 
This insurance quote generation engine is hosted in Windows Azure, expected to get quote requests from across the globe, and expected to handle 5 million quote requests in a day (it was not clear how this load would be distributed across the day). There was no way I could simulate that kind of load from on premise without standing up additional hardware, but Microsoft's Cloud based Load Test service allowed me to test my key performance testing scenarios: simulating expected load, endurance testing, threshold testing and testing for latency. Simulating expected load: approach to devising a load pattern My approach to devising a load test pattern has been to run the test scenario with 1 user to figure out the response time, then work out how many users are required to reach the target load. So, for example, generating 1 quote from the quote engine software takes 0.5 seconds. Now if you do the math: 1 quote request by 1 user = 0.5 seconds; quotes generated by 1 user in 24 hours = 1 * (((2 * 60) * 60) * 24) = 172,800; quotes generated by 30 users in 24 hours = 172,800 * 30 = 5,184,000. This was a very simple example; if your application requires more concurrent users to test scenarios such as caching, etc., then you can devise your own load pattern – some examples of load test patterns can be found here. Endurance Testing To test for endurance, I loaded the quote generation engine with an expected fixed user load and ran the test for a very long duration, such as over 48 hours, and observed the effect of the long running test on the Azure infrastructure. Currently the Microsoft Load Test service does not support metrics from the machine under test. I used Azure diagnostics to begin with, but later started using Cerebrata Azure Diagnostics Manager to capture the metrics of the machine under test. Threshold Testing To figure out how much user load the application could cope with before falling on its belly, I opted to step-load the quote generation engine, incrementing the user load with different variations of incremental user load per minute until the application crashed out and forced an IIS reset. Testing for Latency Currently the Microsoft Load Test service does not support generating geographically distributed load; I did, however, deploy the insurance quote generation engine in different Azure data centres and ran the same set of performance tests to measure latency. Because I could compare load test results from different runs by exporting the results to Excel (this feature is provided out of the box right from Visual Studio 2010), I could see the difference in response times. More resources on Microsoft Cloud based Load Test Service A few important links to get you started: Download Visual Studio Ultimate 2013 Preview; Getting started guide for load testing using Team Foundation Service; Troubleshooting guide for FAQs and known issues; Team Foundation Service forum for questions and support; Detailed demo and presentation (link to Tech-Ed session recording); Detailed demo and presentation (link to Build session recording). There are a few limits on the usage of Microsoft Cloud based Load Test service that you can read about here. If you have any feedback on Microsoft Cloud based Load Test service, feel free to share it with the product team via the Visual Studio User Voice forum. I hope you found this useful. Thank you for taking the time out to read this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!
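As a sanity check, the load pattern arithmetic above can be reproduced in a few lines; this is purely illustrative back-of-the-envelope code, not part of the Load Test service API:

```csharp
// Back-of-the-envelope check of the load pattern worked out above.
using System;

class LoadPatternMath
{
    static void Main()
    {
        double secondsPerQuote = 0.5;         // measured response time with a single user
        double quotesPerUserPerDay = (1.0 / secondsPerQuote) * 60 * 60 * 24; // 172,800
        double targetQuotesPerDay = 5000000;  // the 5 million quotes/day requirement

        double usersNeeded = Math.Ceiling(targetQuotesPerDay / quotesPerUserPerDay);

        Console.WriteLine("Quotes per user per day: {0:N0}", quotesPerUserPerDay);
        Console.WriteLine("Concurrent users needed: {0}", usersNeeded); // 29, i.e. ~30 as used above
    }
}
```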

    Read the article

  • mediatek 7630e 802.11 wifi bgn adapter failed in hp probook G1

    - by user257026
    id: network description: Network controller product: MT7630e 802.11bgn Wireless Network Adapter vendor: MEDIATEK Corp. physical id: 0 bus info: pci@0000:04:00.0 version: 00 width: 32 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: latency=0 resources: memory: b0600000-b06fffff This is the WiFi adapter in my notebook PC. I recently installed Ubuntu 14.04 LTS, and all of my hardware works properly except the WiFi adapter (in Windows the WiFi also worked properly). From the HP driver center I downloaded the Linux kernel driver package. Those packages were RPM packages, so I converted them to .deb files using alien, but with no result. Previously released Ubuntu versions (such as 12.04 LTS) had the same issue; those versions have the same bugs. After googling the web I found a few results but no reliable way to solve my problem (WiFi issue). As I am a new Ubuntu user, I am not able to solve the problem like a pro (superuser). Here are two links about my issue, but I am confused about what I can do: https://answers.launchpad.net/ubuntu/+question/243203 and "How do I get a Mediatek MT7630E 802.11bgn Wi-Fi Adapter working?" Is there anyone who can help me with this issue? My notebook model is HP ProBook 450 G1.

    Read the article

  • Windows - Use Local Service and/or Network Service account for a windows service

    - by user19185
    I've created a Windows service that monitors files in a specific directory on our Windows OS. When a file is detected, the service does some file I/O, reads the files, creates sub-directories, etc. This service also uses database connectivity to connect to another server. My plan is to have the service run as the default "Local Service" account. Since I need to allow read/write privileges, which apparently the "Local Service" account does not have by default, I'm going to explicitly set "Full Control" privileges for the "Local Service" account on the folder that I'm reading from and writing to. I believe the above is a good approach. My question is: for the folder that I'm reading and writing to, do I need to set up a "Network Service" role with full control access? I'm wondering, since my service uses database connectivity to another server, if I'll need the "Network Service" account set up. I may be misunderstanding what the "Network Service" account does.
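    For what it's worth, granting the folder permission can also be scripted. The sketch below shows one way to give "Local Service" full control over a folder from .NET; the folder path is a hypothetical placeholder, and doing this at install time rather than by hand is just one option:

```csharp
// Hedged sketch: grant NT AUTHORITY\LOCAL SERVICE full control over a
// folder, with inheritance so new files and sub-directories are covered.
using System.IO;
using System.Security.AccessControl;
using System.Security.Principal;

class GrantFolderAccess
{
    static void Main()
    {
        var dir = new DirectoryInfo(@"C:\ServiceDropFolder"); // hypothetical path
        DirectorySecurity security = dir.GetAccessControl();

        // Resolve the Local Service account by well-known SID rather than by
        // name, so this also works on localized versions of Windows.
        var localService = new SecurityIdentifier(WellKnownSidType.LocalServiceSid, null);

        security.AddAccessRule(new FileSystemAccessRule(
            localService,
            FileSystemRights.FullControl,
            InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
            PropagationFlags.None,
            AccessControlType.Allow));

        dir.SetAccessControl(security);
    }
}
```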

    Read the article

  • Non-Blocking I/O Made Possible in Java

    The Java SE 7 "Dolphin" release is nearing and we're chomping at the bit. So let's dig in and review non-blocking I/O, a feature of the java.nio (New I/O) package that is part of Java v1.4, v1.5 and v1.6, and we'll also take a peek at the java.nio.file (NIO.2) package.

    Read the article

  • Amazon CloudFront Cache Invalidation – Fill out the Survey!

    - by joelvarty
    Amazon have come up with a survey regarding how the cache can be invalidated for objects stored on their CloudFront servers. http://survey.amazonwebservices.com/survey/s?s=1369 This is a key feature for Agility CMS, and for a lot of other applications. If it's important to you, I suggest you spend a few minutes and fill it out. more later - joel

    Read the article

  • Exposing BL as WCF service

    - by Oren Schwartz
    I'm working on a middle-tier project which encapsulates the business logic (uses a DAL layer, and serves a web application server [ASP.net]) of a product deployed in a LAN. The BL serves as a bunch of services and data objects that are invoked upon user action. At present, the DAL acts as a separate application, whereas the BL uses it but is consumed by the web application as a DLL. The DAL and the web application are deployed on different servers inside the organization, and since the BL DLL is consumed by the web application, it resides on the same server. The worst thing about exposing the BL as a DLL is that we lose track of what we expose. Deployment is not such a big issue since, mostly, product versions are deployed together. Would you recommend migrating from a DLL to a WCF service? If so, why? Do you know anyone who has had a similar experience?
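    For illustration, here is a minimal sketch of what exposing one slice of such a BL as a WCF service could look like; the service name, operation and DTO below are hypothetical, not taken from the project described:

```csharp
// Hypothetical WCF contract wrapping an existing business-logic operation.
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class OrderDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string CustomerName { get; set; }
}

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    OrderDto GetOrder(int orderId);
}

// The existing BL class would implement the contract; the web application
// would then call it through a generated proxy instead of a DLL reference.
public class OrderService : IOrderService
{
    public OrderDto GetOrder(int orderId)
    {
        // delegate to the existing business logic / DAL here
        return new OrderDto { Id = orderId, CustomerName = "sample" };
    }
}
```

    One practical upside of the service boundary is that the contract makes explicit exactly what is exposed, which addresses the "we lose track of what we expose" concern.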

    Read the article

  • Ray Bradbury’s Predictions about Future Technology that have been Fulfilled

    - by Asian Angel
    Ray Bradbury wrote about many wonderful items of technology in his stories of the future, but you may be surprised to see just how many of them have become reality. Note: Visit the blog post linked below to see the full-size version of the chart. Ray Bradbury Predictions Fulfilled [via Geeks are Sexy]

    Read the article

  • Show raw Text Code from a URL with CodePaste.NET

    - by Rick Strahl
    I introduced CodePaste.NET more than 2 years ago. In case you haven't checked it out, it's a code-sharing site where you can post some code, assign a title and syntax scheme to it, and then share it with others via a short URL. The idea is super simple and it's not the first time this has been done, but it's focused on Microsoft languages and caters to that crowd. Show your own code from the Web There's another feature, which I tweeted about recently, that's been there for some time but is not used very much: CodePaste.NET has the ability to show raw text-based code from a URL on the Web in syntax-colored format for any of the formats provided. I use this all the time with code links to my Subversion repository, which only displays code as plain text. Using CodePaste.NET allows me to show syntax-colored versions of the same code. For example, I can go from this URL: http://www.west-wind.com:8080/svn/WestwindWebToolkit/trunk/Westwind.Utilities/SupportClasses/PropertyBag.cs To a nicely colored source code view at this URL: http://codepaste.net/ShowUrl?url=http%3A%2F%2Fwww.west-wind.com%3A8080%2Fsvn%2FWestwindWebToolkit%2Ftrunk%2FWestwind.Utilities%2FSupportClasses%2FPropertyBag.cs&Language=C%23 which looks like this: Use the Form or access URLs directly To get there, navigate to the Web Code icon on the CodePaste.NET site, paste your original URL and select a language to display: The form creates a link shown above which has two query string parameters: url - the URL for the raw text on the Web; language - the code language used for syntax highlighting. Note that parameters must be URL encoded to work, especially the # in C#, because otherwise the # will be interpreted by the browser as a hash tag to jump to in the target URL. The URL must be Web accessible so that CodePaste can download it and then apply the syntax coloring. It doesn't work with localhost URLs, for example. The code returned must be plain text - HTML based text doesn't work. Hope some of you find this a useful feature. Enjoy… © Rick Strahl, West Wind Technologies, 2005-2011. Posted in .NET
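    Building such a link from code is straightforward; the sketch below is my own illustration using the two query string parameters named in the post:

```csharp
// Build a CodePaste.NET ShowUrl link; both parameters must be URL-encoded,
// otherwise the # in "C#" would be read by the browser as a fragment marker.
using System;
using System.Web;

class CodePasteLink
{
    static void Main()
    {
        string sourceUrl = "http://www.west-wind.com:8080/svn/WestwindWebToolkit/trunk/" +
                           "Westwind.Utilities/SupportClasses/PropertyBag.cs";
        string language = "C#";

        string link = "http://codepaste.net/ShowUrl" +
                      "?url=" + HttpUtility.UrlEncode(sourceUrl) +
                      "&Language=" + HttpUtility.UrlEncode(language);

        Console.WriteLine(link);
    }
}
```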

    Read the article

  • Homebrew development for 7th gen home consoles

    - by Brian McKenna
    I'm looking to do some homebrew development for the Wii, Xbox 360 or PS3. I'll be developing from a Linux system. The programming language doesn't matter. Wii - devkitPPC and libogc look fairly easy and complete. Xbox 360 - Mono.XNA looks interesting but not very feature complete. PS3 - psl1ght seems interesting but I haven't been able to find out much. How homebrew-friendly is each of these consoles? Is someone able to give a comparison of each of these scenes?

    Read the article

  • Bad Screen Flicker from video recording of recordmydesktop

    - by Tarun
    I have Ubuntu 11.10 and I installed recordmydesktop. Video recordings from recordmydesktop always result in screen flicker. In the recording I see half of the screen moving forward while the other half stays stuck. I checked the settings and "Frames per Second" is set to 15. One such recording is available here - http://www.youtube.com/watch?v=QafF44m2Ttk&feature=youtu.be I am quite new to Ubuntu and not sure what is wrong.

    Read the article

  • Installing drivers for switchable graphics

    - by Anonymous
    I recently bought a laptop that came with Windows 7 64-bit installed. I have some older (16-bit and 32-bit) software that doesn't work with 64-bit Windows but works just fine with 32-bit. Since I also wanted to get rid of all of the pre-installed spam, I decided to wipe the hard drive and install a fresh copy of Windows 7 32-bit. I can't get the graphics cards working. This laptop uses switchable graphics: an Intel card and a Radeon card. I first tried installing this driver from Intel, which works for the Intel card. Of course, the Radeon card doesn't work with this driver, and I need it for some of the newer games I have. I also tried this driver. Windows's device manager will recognize the Radeon card, but it will still use the Intel card. Also, even though that package says it contains the Intel driver, the Intel card still isn't properly recognized by Windows (leaving me with a nasty 800x600 resolution). On top of that, the Catalyst Control Center won't open (saying "The Catalyst Control Center is not supported by the driver version of your enabled graphics adapter"). I tried installing HP's driver and then installing Intel's driver on top of it. Device manager will then recognize both graphics cards properly. However, the laptop still uses the Intel card. The CCC still won't start (saying the same thing as before) and I can't find any way of 'switching' graphics cards. Before formatting, I could right-click the desktop and click "Configure Switchable Graphics". This option hasn't been in the context menu regardless of what driver(s) I've installed. After some research, I found out that this menu entry runs the command "cli.exe Start PowerXpressHybrid". I've tried manually running this command, but I get the same unsupported message from CCC. So, does anyone know how I can get this working? I would like to be able to switch between the Intel and the Radeon. But if there's some way to disable the Intel and use only the Radeon, that would be fine. I dual-boot with Linux (the framebuffer uses the Intel; haven't even tried getting X set up yet). Here's the output of lspci: # lspci -v | grep VGA 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) (prog-if 00 [VGA controller]) 01:00.0 VGA compatible controller: ATI Technologies Inc NI Seymour [AMD Radeon HD 6470M] (prog-if 00 [VGA controller]) The laptop is an HP Pavilion g6t-1d00. HP doesn't support installing anything but Windows 7 64-bit, so calling tech support isn't an option. Thanks for any help. UPDATE: I finally got it working. After a fresh install of Windows 7, I installed the HP driver (the one linked above). Then there's an optional Windows update I installed (don't remember the exact name, but it'll stick out). After that, graphics switching works just like it's supposed to. Moab, thanks anyway for your help.

    Read the article

  • GAC locking problem when running deployment

    - by Kieran
    We have a NAnt script that uses MSBuild to compile our Visual Studio solutions and deploys the DLLs into the GAC. This works well on our integration/test servers as part of continuous integration: CruiseControl uses the NAnt scripts, and every time the DLLs are put into the GAC without problem. On our local development machines, where we use Subversion/VS.NET etc. for development, certain DLLs frequently do not make it to the GAC when we run the build. We think we have narrowed this down to Visual Studio and/or a plug-in locking the GAC or the DLLs for some reason. Strangely, if we run the build a second time, all the DLLs make it to the GAC. We have added various iisreset calls to the NAnt script in the hope of releasing the lock, but to no avail. Can anyone suggest a good approach to attack this problem? All the best

    Read the article
