Search Results

Search found 640 results on 26 pages for 'activation'.

Page 14 of 26

  • Hosting and consuming WCF services without configuration files

    - by martinsj
    In this post, I'll demonstrate how to configure both the host and the client in code, without the need for configuring services in the <system.serviceModel> section of the config file. In fact, you don't need a <system.serviceModel> section at all. What you do need (and want) sometimes is the Uri of the service in the configuration file. Configuring the Uri of the service is actually only needed for the client or when self-hosting, not when hosting in IIS. So, what exactly do we need to configure?

      • The binding type and the binding constraints
      • The metadata behavior
      • The debug behavior

    You can of course configure even more if you want to; WCF is, after all, the king of configuration… As an example, I'll be hosting and consuming a service that removes most of the default constraints for WCF services, using a BasicHttpBinding. Of course, with regard to security it is probably better to have some constraints on the server, but this is only a demonstration. The ServiceConfig class in the code beneath is a static helper class that will be used in the examples, for all configuration of both the server and the client. In WCF, the client and the server each have their own configuration; with this piece of code, they will be sharing the same one.

    ```csharp
    public static class ServiceConfig
    {
        public static Binding DefaultBinding
        {
            get
            {
                var binding = new BasicHttpBinding();
                Configure(binding);
                return binding;
            }
        }

        public static void Configure(HttpBindingBase binding)
        {
            if (binding == null)
            {
                throw new ArgumentException("Argument 'binding' cannot be null. Cannot configure binding.");
            }

            binding.SendTimeout = new TimeSpan(0, 0, 30, 0); // 30 minute timeout
            binding.MaxBufferSize = Int32.MaxValue;
            binding.MaxBufferPoolSize = Int32.MaxValue;
            binding.MaxReceivedMessageSize = Int32.MaxValue;
            binding.ReaderQuotas.MaxArrayLength = Int32.MaxValue;
            binding.ReaderQuotas.MaxBytesPerRead = Int32.MaxValue;
            binding.ReaderQuotas.MaxDepth = Int32.MaxValue;
            binding.ReaderQuotas.MaxNameTableCharCount = Int32.MaxValue;
            binding.ReaderQuotas.MaxStringContentLength = Int32.MaxValue;
        }

        public static ServiceMetadataBehavior ServiceMetadataBehavior
        {
            get
            {
                return new ServiceMetadataBehavior
                {
                    HttpGetEnabled = true,
                    MetadataExporter = { PolicyVersion = PolicyVersion.Policy15 }
                };
            }
        }

        public static ServiceDebugBehavior ServiceDebugBehavior
        {
            get
            {
                var behavior = new ServiceDebugBehavior();
                Configure(behavior);
                return behavior;
            }
        }

        public static void Configure(ServiceDebugBehavior behavior)
        {
            if (behavior == null)
            {
                throw new ArgumentException("Argument 'behavior' cannot be null. Cannot configure debug behavior.");
            }

            behavior.IncludeExceptionDetailInFaults = true;
        }
    }
    ```

    Configuring the server
    There are basically two ways to host a WCF service: in IIS, or self-hosting. When hosting a WCF service in a production environment using an SOA architecture, you'll most likely be hosting it in IIS. When testing the service in integration tests, it's very handy to be able to self-host services in the unit tests. In fact, you can share the WCF configuration between self-hosted services and services hosted in IIS - and that is exactly what you want, so that you test the same configuration in the test and production environments.
    Configuring when self-hosting
    When self-hosting, in order to start the service you'll have to instantiate the ServiceHost class, configure the service, and open it:

    ```csharp
    // Create the service host.
    var host = new ServiceHost(typeof(MyService), endpoint);

    // Configure the binding.
    host.AddServiceEndpoint(typeof(IMyService), ServiceConfig.DefaultBinding, endpoint);

    // Configure the metadata behavior.
    host.Description.Behaviors.Add(ServiceConfig.ServiceMetadataBehavior);

    // Configure the debug behavior.
    ServiceConfig.Configure((ServiceDebugBehavior)host.Description.Behaviors[typeof(ServiceDebugBehavior)]);

    // Start listening to the service.
    host.Open();
    ```

    Configuring when hosting in IIS
    When you create a WCF service application with the wizard in Visual Studio, you'll end up with bits and pieces of code in order to get the service running: an svc-file with code-behind, an interface for the service, and a Web.config. In order to get rid of the configuration in the <system.serviceModel> section, which the wizard has generated for us, we must tell the service that we have a factory that will create the service for us. We do this by changing the markup of the svc-file:

    ```
    <%@ ServiceHost Language="C#" Debug="true" Service="Namespace.MyService" Factory="Namespace.ServiceHostFactory" %>
    ```

    The markup tells IIS that we have a factory called ServiceHostFactory for this service. The service factory has a method we can override which will be called when someone asks IIS for the service. There are two overloads we can override:

    ```csharp
    System.ServiceModel.ServiceHostBase CreateServiceHost(string constructorString, Uri[] baseAddresses)
    System.ServiceModel.ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
    ```

    In this example we'll be using the last one, so our implementation looks like this:

    ```csharp
    public class ServiceHostFactory : System.ServiceModel.Activation.ServiceHostFactory
    {
        protected override System.ServiceModel.ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
        {
            var host = base.CreateServiceHost(serviceType, baseAddresses);
            host.Description.Behaviors.Add(ServiceConfig.ServiceMetadataBehavior);
            ServiceConfig.Configure((ServiceDebugBehavior)host.Description.Behaviors[typeof(ServiceDebugBehavior)]);
            return host;
        }
    }
    ```

    As you can see, we are using the same configuration helper we used when self-hosting. Now that you have a factory, the <system.serviceModel> section of the configuration can be removed, because the section will be ignored when the service has a custom factory. If you want to configure something else in the config file, you can do so in some other section.

    Configuring the client
    Microsoft has helpfully created a ChannelFactory class in order to create a proxy client. When using this approach, you don't have to generate those awful proxy classes for the client. If you put the contracts you share with the server in their own assembly, you can share the same piece of code between both sides.
    The contracts in WCF are the interface to the service and, if any, the data contracts (custom types) the service depends on. Using the ChannelFactory with our configuration helper class is very simple:

    ```csharp
    var identity = EndpointIdentity.CreateDnsIdentity("localhost");
    var endpointAddress = new EndpointAddress(endpoint, identity);
    var factory = new ChannelFactory<IMyService>(ServiceConfig.DefaultBinding, endpointAddress);
    var myService = factory.CreateChannel();
    myService.Hello();
    ((IClientChannel)myService).Close();
    factory.Close();
    ```

    Happy configuration!
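    One caveat worth adding alongside this sample (my note, not from the original post): Close() throws if the channel has faulted, so robust client code typically falls back to Abort(). A minimal sketch, reusing the IMyService, ServiceConfig and endpointAddress names from above:

    ```csharp
    // Hedged sketch: if the channel or factory has faulted, Close() throws,
    // so we abort instead of letting the cleanup itself blow up.
    var factory = new ChannelFactory<IMyService>(ServiceConfig.DefaultBinding, endpointAddress);
    var myService = factory.CreateChannel();
    try
    {
        myService.Hello();
        ((IClientChannel)myService).Close();
        factory.Close();
    }
    catch (CommunicationException)
    {
        ((IClientChannel)myService).Abort();
        factory.Abort();
    }
    catch (TimeoutException)
    {
        ((IClientChannel)myService).Abort();
        factory.Abort();
    }
    ```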

    Read the article

  • SharePoint logging to a list

    - by Norgean
    I recently worked in an environment with several servers. Locating the correct SharePoint log file for error messages, or development trace calls, is cumbersome. And once the solution hit the cloud, it got even worse, as we had no access to the log files at all. Obviously we are not the only ones with this problem, and the current trend seems to be to log to a list. This had become an off-hour project, so rather than do the sensible thing and find a ready-made solution, I decided to do it the hard way. So! Fire up Visual Studio, create yet another empty SharePoint solution, and start to think of some requirements.

      • Easy on/off: I want to be able to turn list-logging on and off.
      • Easy logging: for me, this means being able to use string.Format.
      • Easy filtering: let's have the possibility to add some filtering columns; category and severity, where severity can be "verbose", "warning" or "error".

    Easy on/off
    Well, that's easy. Create a new web feature. Add an event receiver, and create the list on activation of the feature. Tear the list down on de-activation. I chose not to create a new content type; I did not feel that it would give me anything extra. I based the list on the generic list - I think a better choice would have been the announcement type. Approximately:

    ```csharp
    public void CreateLog(SPWeb web)
    {
        var list = web.Lists.TryGetList(LogListName);
        if (list == null)
        {
            var listGuid = web.Lists.Add(LogListName, "Logging for the masses", SPListTemplateType.GenericList);
            list = web.Lists[listGuid];
            list.Title = LogListTitle;
            list.Update();

            list.Fields.Add(Category, SPFieldType.Text, false);
            var stringColl = new StringCollection();
            stringColl.AddRange(new[] { Error, Information, Verbose });
            list.Fields.Add(Severity, SPFieldType.Choice, true, false, stringColl);

            ModifyDefaultView(list);
        }
    }
    ```

    Should be self-explanatory, but: only create the list if it does not already exist (d'oh). Best practice: create it with a Url-friendly name and, if necessary, give it a better title - because otherwise you'll have to look for a list with a name like "Simple_x0020_Log". I've added a couple of fields: a field for category, and a 'severity'. Both make it easier to find relevant log messages. Notice that I don't have to call list.Update() after adding the fields - that would cause a nasty error (something along the lines of "List locked by another user"). The function for deleting the log is exactly as onerous as you'd expect:

    ```csharp
    public void DeleteLog(SPWeb web)
    {
        var list = web.Lists.TryGetList(LogListTitle);
        if (list != null)
        {
            list.Delete();
        }
    }
    ```

    So! "All" that remains is to log - also known as adding items to a list. Lots of different methods with different signatures end up calling the same function. For example, LogVerbose(web, message) calls LogVerbose(web, null, message), which again calls another method, which calls:

    ```csharp
    private static void Log(SPWeb web, string category, string severity, string textformat, params object[] texts)
    {
        if (web != null)
        {
            var list = web.Lists.TryGetList(LogListTitle);
            if (list != null)
            {
                var item = list.AddItem(); // NOTE! NOT list.Items.Add… just don't, mkay?
                var text = string.Format(textformat, texts);
                if (text.Length > 255) // because the title field only holds so many chars. Sigh.
                    text = text.Substring(0, 254);
                item[SPBuiltInFieldId.Title] = text;
                item[Severity] = severity;
                item[Category] = category;
                item.Update();
            }
        }
        // Omitted: also log to the SharePoint log.
    }
    ```

    By adding a params parameter I can call it as if I were doing a Console.WriteLine: LogVerbose(web, "demo", "{0} {1}{2}", "hello", "world", '!'); Ok, that was a silly example; a better one might be: LogError(web, LogCategory, "Exception caught when updating {0}. Exception: {1}", listItem.Title, ex); For performance reasons I use list.AddItem rather than list.Items.Add. For completeness' sake, let us include the ModifyDefaultView function that I deliberately skipped earlier:

    ```csharp
    private void ModifyDefaultView(SPList list)
    {
        // Add the fields to the default view.
        var defaultView = list.DefaultView;
        var exists = defaultView.ViewFields.Cast<string>().Any(field => String.CompareOrdinal(field, Severity) == 0);

        if (!exists)
        {
            var field = list.Fields.GetFieldByInternalName(Severity);
            if (field != null)
                defaultView.ViewFields.Add(field);
            field = list.Fields.GetFieldByInternalName(Category);
            if (field != null)
                defaultView.ViewFields.Add(field);
            defaultView.Update();

            // Build the sort order: by Modified and ID, both descending.
            var sortDoc = new XmlDocument();
            sortDoc.LoadXml(string.Format("<Query>{0}</Query>", defaultView.Query));
            var orderBy = (XmlElement)sortDoc.SelectSingleNode("//OrderBy");
            if (orderBy != null && sortDoc.DocumentElement != null)
                sortDoc.DocumentElement.RemoveChild(orderBy);
            orderBy = sortDoc.CreateElement("OrderBy");
            sortDoc.DocumentElement.AppendChild(orderBy);

            field = list.Fields[SPBuiltInFieldId.Modified];
            var fieldRef = sortDoc.CreateElement("FieldRef");
            fieldRef.SetAttribute("Name", field.InternalName);
            fieldRef.SetAttribute("Ascending", "FALSE");
            orderBy.AppendChild(fieldRef);

            fieldRef = sortDoc.CreateElement("FieldRef");
            field = list.Fields[SPBuiltInFieldId.ID];
            fieldRef.SetAttribute("Name", field.InternalName);
            fieldRef.SetAttribute("Ascending", "FALSE");
            orderBy.AppendChild(fieldRef);

            defaultView.Query = sortDoc.DocumentElement.InnerXml;
            // The CAML this produces:
            // <OrderBy><FieldRef Name='Modified' Ascending='FALSE' /><FieldRef Name='ID' Ascending='FALSE' /></OrderBy>
            defaultView.Update();
        }
    }
    ```

    The first two lines are easy - see if the default view includes the "Severity" column. If it does - quit; our job here is done. Adding "Severity" and "Category" to the view is not exactly rocket science. But then? Then we build the sort-order query, through XML. The lines are numerous, but boring - all to achieve the CAML query shown in the comment at the end. The major benefit of using the DOM to build XML is that you may get compile-time errors for spelling mistakes. I say 'may', because although the compiler will not let you forget to close a tag, it will cheerfully let you spell "Name" as "Naem". Whichever you prefer, at the end of the day the view will sort by modified date and ID, both descending. I added the ID as there may be several items with the same time stamp. So! Simple logging to a list, with a sensible view, and with the normal functionality for creating your own filterings. I should probably have added some more views in code, ready-filtered for "only errors", "errors and warnings" etc. And it would be nice to be able to block verbose logging completely, but I'm not happy with the alternatives (yet another feature or an admin page seem like overkill - perhaps just removing it as one of the choices, and not logging if it isn't there? A sketch of that idea follows below). Before you comment - yes, try-catches have been removed for clarity. There is nothing worse than a logging function that breaks your site!
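    As a follow-up to that last idea, here is a minimal sketch (my addition, not part of the original post) that treats the presence of a choice on the Severity field as the on/off switch. Severity and Verbose are the same constants used in the code above:

    ```csharp
    // Hedged sketch: skip a message when its severity has been removed from the
    // Severity choice column by an administrator.
    private static bool IsSeverityEnabled(SPList list, string severity)
    {
        var field = list.Fields.GetFieldByInternalName(Severity) as SPFieldChoice;
        // If the choice is gone, treat that severity as switched off.
        return field != null && field.Choices.Contains(severity);
    }
    ```

    A single guard at the top of Log() - if (!IsSeverityEnabled(list, severity)) return; - would then let an administrator turn verbose logging off simply by editing the column settings.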

    Read the article

  • Windows Update error code 80244004

    - by Hamidreza
    I am using Windows 7 and ESET Smart Security 5. Today I wanted to update my computer using Windows Update, but it gives me the error:

        Error(s) found: Code 80244004 - Windows Update encountered an unknown error.

    My system info: Sony Vaio EA2gfx, RAM: 4 GB DDR2, CPU: Intel Core i5. I checked out these links, but they didn't help:

    http://answers.microsoft.com/en-us/windows/forum/windows_7-windows_update/while-updating-i-am-getting-the-error-code/0b9b756c-5b6e-4571-838e-f90c48a4e00c
    https://www.calguns.net/calgunforum/showthread.php?t=583860
    http://www.sevenforums.com/windows-updates-activation/235807-windows-update-error-80244004-a.html

    Please help me, thanks.

    Read the article

  • Windows 8 / IIS 8 Concurrent Requests Limit

    - by OWScott
    IIS 8 on Windows Server 2012 doesn't have any fixed concurrent request limit, apart from whatever limit is reached when resources are maxed out. However, the client version of IIS 8, which ships with Windows 8, does have a concurrent connection request limit, to discourage high-traffic production use on a client edition of Windows. Starting with IIS 7 (Windows Vista), the behavior changed from previous versions. In earlier client versions of IIS, excess requests would throw a 403.9 error message (Access Forbidden: Too many users are connected.). Instead, Windows Vista, 7 and 8 queue excess requests so that they are handled gracefully, although there is a maximum number of requests that will be processed simultaneously. Thomas Deml provided a concurrent request chart for Windows Vista many years ago, but I have been unable to find an equivalent chart for Windows 8, so I asked Wade Hilmo from the IIS team what the limits are. Since this is controlled not by the IIS team itself but rather by the Windows licensing team, he asked around and found the authoritative answer, which I'll provide below.

    Windows 8 – IIS 8 concurrent requests limit:
      • Windows 8: 3
      • Windows 8 Professional: 10
      • Windows RT: N/A, since IIS does not run on Windows RT

    Windows 7 – IIS 7.5 concurrent requests limit:
      • Windows 7 Home Starter: 1
      • Windows 7 Basic: 1
      • Windows 7 Premium: 3
      • Windows 7 Ultimate, Professional, Enterprise: 10

    Windows Vista – IIS 7 concurrent requests limit:
      • Windows Vista Home Basic (IIS process activation and HTTP processing only): 3
      • Windows Vista Home Premium: 3
      • Windows Vista Ultimate, Professional: 10

    Windows Server 2003, Windows Server 2008, Windows Server 2008 R2 and Windows Server 2012 allow an unlimited number of simultaneous requests.

    Read the article

  • How to Upgrade Your Netbook to Windows 7 Home Premium

    - by Matthew Guay
    Would you like more features and flash in Windows on your netbook? Here's how you can easily upgrade your netbook to Windows 7 Home Premium. Most new netbooks today ship with Windows 7 Starter, which is the cheapest edition of Windows 7. It is fine for many computing tasks, and will run all your favorite programs great, but it lacks many customization, multimedia, and business features found in higher editions. Here we'll show you how you can quickly upgrade your netbook to a more full-featured edition of Windows 7 using Windows Anytime Upgrade. Also, if you want to upgrade your laptop or desktop to another edition of Windows 7, say Professional, you can follow these same steps to upgrade it, too. Please note: this is only for computers already running Windows 7. If your netbook is running XP or Vista, you will have to run a traditional upgrade to install Windows 7.

    Upgrade Advisor
    First, let's make sure your netbook can support the extra features, such as Aero Glass, in Windows 7 Home Premium. Most modern netbooks that ship with Windows 7 Starter can run the advanced features in Windows 7 Home Premium, but let's check just in case. Download the Windows 7 Upgrade Advisor (link below), and install as normal. Once it's installed, run it and click Start Check. Make sure you're connected to the internet before you run the check, or otherwise you may see an error message; if you do, click Ok, connect to the internet, and start the check again. It will now scan all of your programs and hardware to make sure they're compatible with Windows 7. Since you're already running Windows 7 Starter, it will also tell you if your computer will support the features in other editions of Windows 7. After a few moments, the Upgrade Advisor will show you what it found. Here we see that our netbook, a Samsung N150, can be upgraded to Windows 7 Home Premium, Professional, or Ultimate. We also see that we had one issue, but this was because a driver we had installed was not recognized. Click "See all system requirements" to see what your netbook can do with the new edition. This shows you which of the requirements, including support for Windows Aero, your netbook meets. Here our netbook supports Aero, so we're ready to upgrade. For more, check out our article on how to make sure your computer can run Windows 7 with Upgrade Advisor.

    Upgrade with Anytime Upgrade
    Now we're ready to upgrade our netbook to Windows 7 Home Premium. Enter "Anytime Upgrade" in the Start menu search, and select Windows Anytime Upgrade. Windows Anytime Upgrade lets you upgrade using a product key you already have or one you purchase during the upgrade process. And it installs without any downloads or Windows discs, so it works great even for netbooks without DVD drives. Anytime Upgrades are cheaper than a standard upgrade, and for a limited time, select retailers in the US are offering Anytime Upgrades to Windows 7 Home Premium for only $49.99 if purchased with a new netbook. If you already have a netbook running Windows 7 Starter, you can either purchase an Anytime Upgrade package at a retail store or purchase a key online during the upgrade process for $79.95. Or, if you have a standard Windows 7 product key (full or upgrade), you can use it in Anytime Upgrade. This is especially nice if you can purchase Windows 7 cheaper through your school, university, or office.
    Purchase an upgrade online
    To purchase an upgrade online, click "Go online to choose the edition of Windows 7 that's best for you". Here you can see a comparison of the features of each edition of Windows 7. Note that you can upgrade to either Home Premium, Professional, or Ultimate. We chose Home Premium because it has most of the features that home users want, including Media Center and Aero Glass effects. Also note that the price of each upgrade is cheaper than the respective upgrade from Windows XP or Vista. Click Buy under the edition you want. Enter your billing information, then your payment information. Once you confirm your purchase, you will be taken directly to the Upgrade screen. Make sure to save your receipt, as you will need the product key if you ever need to reinstall Windows on your computer.

    Upgrade with an existing product key
    If you purchased an Anytime Upgrade kit from a retailer, or already have a full or upgrade key for another edition of Windows 7, choose "Enter an upgrade key". Enter your product key, and click Next. If you purchased an Anytime Upgrade kit, the product key will be located on the inside of the case on a yellow sticker. The key will be verified as a valid key, and Anytime Upgrade will automatically choose the correct edition of Windows 7 based on your product key. Click Next when this is finished.

    Continuing the upgrade process
    Whether you entered a key or purchased one online, the process is the same from here on. Click "I accept" to accept the license agreement. Now you're ready to install your upgrade. Make sure to save all open files and close any programs, and then click Upgrade. The upgrade only takes about 10 minutes in our experience, but your mileage may vary. Any available Microsoft updates, including ones for Office, Security Essentials, and other products, will be installed before the upgrade takes place. After a couple of minutes, your computer will automatically reboot and finish the installation. It will then reboot once more, and your computer will be ready to use! Welcome to your new edition of Windows 7!

    Here's a before-and-after shot of our desktop. When you do an Anytime Upgrade, all of your programs, files, and settings will be just as they were before you upgraded. The only change we noticed was that our pinned taskbar icons were slightly rearranged to the default order of Internet Explorer, Explorer, and Media Player. In the shot of our desktop before the upgrade, all of our pinned programs and desktop icons are still there, as well as our taskbar customization (we are using small icons on the taskbar instead of the default large icons). Before: the Windows 7 Starter background and the Aero Basic theme. After: Aero Glass and the more colorful default Windows 7 background. All of the features of Windows 7 Home Premium are now ready to use. The Aero theme was activated by default, but you can now customize your netbook theme, background, and more with the Personalization pane. To open it, right-click on your desktop and select Personalize. You can also now use Windows Media Center, and can play back DVD movies using an external drive. One of our favorite tools, the Snipping Tool, is also now available for easy screenshots and clips.

    Activating your new edition of Windows 7
    You will still need to activate your new edition of Windows 7. To do this right away, open the Start menu, right-click on Computer, and select Properties.
    Scroll to the bottom, and click "Activate Windows Now". Make sure you're connected to the internet, and then select "Activate Windows online now". Activation may take a few minutes, depending on your internet connection speed. When it is done, the Activation wizard will let you know that Windows is activated and genuine. Your upgrade is all finished!

    Conclusion
    Windows Anytime Upgrade makes it easy, and somewhat cheaper, to upgrade to another edition of Windows 7. It's useful for desktop and laptop owners who want to upgrade to Professional or Ultimate, but many more netbook owners will want to upgrade from Starter to Home Premium or another edition.

    Links
    Download the Windows 7 Upgrade Advisor
    Windows Team Blog: Anytime Upgrade Special with new PC purchase

    Read the article

  • SQL SERVER – Installing AdventureWorks for SQL Server 2011

    - by pinaldave
    I just began with SQL Server 2011 (Denali) CTP1. The very first thing I realized is that there is no AdventureWorks sample database available for Denali. I quickly searched online and reached the Microsoft documentation, which explains how to install (restore) AdventureWorks for SQL Server 2011 Denali. Download AdventureWorks from here, then run the following script (replace the path to the mdf file with your own):

    ```sql
    CREATE DATABASE AdventureWorks2008R2 ON
    (FILENAME = 'C:\SQL 11 CTP1\CTP1\AdventureWorks2008R2_Data.mdf')
    FOR ATTACH_REBUILD_LOG;
    ```

    When you run the above script it will give you the following messages, and you are DONE!

        File activation failure. The physical file name "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdventureWorks2008R2_Log.ldf" may be incorrect. New log file 'C:\SQL 11 CTP1\CTP1\AdventureWorks2008R2_log.ldf' was created. Converting database 'AdventureWorks2008R2' from version 679 to the current version 684. Database 'AdventureWorks2008R2' running the upgrade step from version 679 to version 680. Database 'AdventureWorks2008R2' running the upgrade step from version 680 to version 681. Database 'AdventureWorks2008R2' running the upgrade step from version 681 to version 682. Database 'AdventureWorks2008R2' running the upgrade step from version 682 to version 683. Database 'AdventureWorks2008R2' running the upgrade step from version 683 to version 684.

    I will soon write about my experience with Denali. However, SQL Server Management Studio has started to look more like Visual Studio. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • How to change the firmware of my NETGEAR WGT624 v2 to DD-WRT

    - by Lirik
    In reference to my previous question: I can't find the appropriate firmware for my NETGEAR WGT624 v2 router. I went to the DD-WRT web site and read the wiki, but I didn't see any files or instructions on how to change the firmware. The router database lists my router, but as I said: no files. In addition, the "Supported" column lists "wip" - what is wip?

    Router Database - 4 routers found:
      Manufacturer | Model  | Revision | Supported | Activation required
      Netgear      | WGT624 | v1       | wip       | no
      Netgear      | WGT624 | v2       | wip       | no
      ...

    The only thing listed for my router on the dd-wrt web site is:

    Router details
      Chipset: AR2312A
      RAM: 16 MB
      FLASH: 4 MB
    Additional information
      • OpenWRT Wiki: Netgear WGT624
      • Unbrick procedure

    Is the router supported? Can I change the firmware? If yes, then how can I change it?

    Read the article

  • Communication Between Your PC and Azure VM via Windows Azure Connect

    - by Shaun
    With the new release of the Windows Azure platform there are a lot of new features available. In my previous post I introduced one of them a little bit: remote desktop access to an Azure virtual machine. Now I would like to talk about another cool piece - Windows Azure Connect.

    What's Windows Azure Connect
    I would like to quote the definition of Windows Azure Connect from MSDN:

        With Windows Azure Connect, you can use a simple user interface to configure IP-sec protected connections between computers or virtual machines (VMs) in your organization's network, and roles running in Windows Azure. IP-sec protects communications over Internet Protocol (IP) networks through the use of cryptographic security services.

    There's an image available on MSDN as well that I would like to forward here. As we can see, using Windows Azure Connect, Worker Role 1 and Web Role 1 are connected with the development machines and database servers, some of which are inside the organization and some not. With Windows Azure Connect, the roles deployed in the cloud can consume resources located inside our intranet or anywhere in the world. That means the roles can connect to the local database, access local shared resources such as shared files, folders and printers, etc.

    Difference between Windows Azure Connect and AppFabric
    It may seem that Windows Azure Connect duplicates Windows Azure AppFabric. Both of them aim to solve the problem of how to communicate between resources in the cloud and inside the local network. The table below lists the differences as I understand them.

      Category     | Windows Azure Connect                                        | Windows Azure AppFabric
      Purpose      | An IP-sec connection between local machines and Azure roles. | An application service running in the cloud.
      Connectivity | IP-sec, domain-joined                                        | NetTcp, Http, Https
      Components   | Windows Azure Connect Driver                                 | Service Bus, Access Control, Caching
      Usage        | Azure roles connect to a local database server; Azure roles use local shared files, folders, printers, etc.; Azure roles join the local AD. | Expose a local service to the Internet; move the authorization process to the cloud; integrate existing identities such as Live ID, Google ID, etc. with existing local services; utilize the distributed cache.

    And also some scenarios showing which of them should be used:

      Scenario                                                                                     | Connect | AppFabric
      I have a service deployed on the intranet and I want people to be able to use it from the Internet. |         | Y
      I have a website deployed on Azure which needs to use a database deployed inside the company, and I don't want to expose the database to the Internet. | Y       |
      I have a service deployed on the intranet using AD authorization, and a website deployed on Azure which needs to use this service. | Y       |
      I have a service deployed on the intranet that some people on the Internet can use, but they need to be authorized and authenticated. |         | Y
      I have a service on the intranet and a website deployed on Azure; the service can be used from the Internet, and the website should be able to use it as well, with AD authorization for more functionality. | Y       | Y

    How to enable Windows Azure Connect
    OK, we've talked a lot about Windows Azure Connect and its differences from Windows Azure AppFabric. Now let's see how to enable and use Windows Azure Connect. First of all, since this feature is in the CTP stage, we have to apply before using it. On the Windows Azure Portal we can see our CTP feature status under the Home, Beta Program page.
    You can apply to join the Beta Program from this page. After a few days, Microsoft will send an email (to the address of your Live ID) when it's available. In my case, Windows Azure Connect had been activated by Microsoft, and then we can click the Connect button on top, or click the Virtual Network item in the left navigation bar. The first thing we need to do, if it's our first time entering the Connect page, is to enable Windows Azure Connect. After that we can see our Windows Azure Connect information on this page.

    Add a local machine to Azure Connect
    As explained above, Windows Azure Connect makes an IP-sec connection between local machines and Azure role instances, so first we add a local machine to our Connect. To do this, click the Install Local Endpoint button on top; the portal will give us a URL. Copy this URL to the machine we want to add and it will download the software. This software is installed on the local machines that will join the Connect. After installation, a tray icon appears to indicate that this machine has joined our Connect. The local agent refreshes against the Windows Azure platform every 5 minutes, but we can click the Refresh button to make it retrieve the latest status at once. Currently my local machine is ready for Connect, and we can see it in the Windows Azure Portal when we switch back to the portal and select the Activated Endpoints node.

    Add a Windows Azure role to Azure Connect
    Let's create a very simple Azure project with a basic ASP.NET web role inside. To make it available to Windows Azure Connect, open the role's properties from the Solution Explorer in Visual Studio, select the Virtual Network tab, and check Activate Windows Azure Connect. The next step is to get the activation token from the Windows Azure Portal. On the same page there is a button named Get Activation Token. Click this button and the portal will display the token. Copy this token, paste it into the box on the Visual Studio tab, and deploy the application to Azure. After the deployment completes we can see the role instance listed in the Windows Azure Portal, Virtual Network section.

    Establish the Connect group
    The final task is to create a Connect group which contains the machines and role instances that need to be connected to each other. This can be done in the portal very easily. The machines and instances will NOT be connected until we create the group for them; machines and instances can be used in one or more groups. In the Virtual Network section, click the Groups and Roles node in the left navigation bar and click the Create Group button on top. This brings up a dialog. What we need to do is specify a group name and description, and then select the local computers and Azure role instances for this group. After the Azure fabric updates the group settings we can see the groups and the endpoints on the page. And if we switch back to the local machine we can see that the tray icon has changed and the status has turned to Connected. Windows Azure Connect updates the group information every 5 minutes. If you find the status still Disconnected, right-click the tray icon and select the Refresh menu to retrieve the latest group policy and make it connect.
    Test the Azure Connect between the local machine and the Azure role instance
    Now our local machine and the Azure role instance are connected, which means each of them can communicate with the other at the IP level. For example, we can open the SQL Server port so that our Azure role can connect to it using the machine name or the IP address (a small connection sketch is included at the end of this post). Windows Azure Connect uses IPv6 to connect the local machines and role instances; you can get the IP address from the Windows Azure Portal Virtual Network section when you select an endpoint. I don't want to walk through a full example of how to use Connect, but let's do two very simple tests. The first one is PING. When a local machine and a role instance are connected through Windows Azure Connect, we can PING either of them if we open the ICMP protocol in the firewall settings. To do this we need to run a command before testing. Open a command window on the local machine and on the role instance, and execute:

    ```
    netsh advfirewall firewall add rule name="ICMPv6" dir=in action=allow enable=yes protocol=icmpv6
    ```

    Thanks to Jason Chen, Patriek van Dorp, Anton Staykov and Steve Marx, who helped me enable the ICMPv6 setting; for the full discussion we had, please visit here. You can use the Remote Desktop Access feature to log on to the Azure role instance (please refer to my previous blog post on how to use Remote Desktop Access in Windows Azure). Then we can PING the machine or the role instance by specifying its name: I PINGed my local machine from my Azure instance. We can use the IPv6 address to PING each other as well: I PINGed my role instance from my local machine through the IPv6 address. Another example I would like to demonstrate here is folder sharing. I shared a folder on my local machine, and after logging on to the role instance we can see the folder's content from the file explorer window.

    Summary
    In this blog post I introduced another new feature - Windows Azure Connect. With this feature our local resources and role instances (virtual machines) can be connected to each other. In this way our Azure application can use local resources such as database servers, printers, etc. without exposing them to the Internet.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
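    Appendix: to make the SQL Server example above concrete, here is a minimal sketch (my illustration, not from the original post) of code in the Azure role connecting to an on-premises database once both endpoints are in the same Connect group. The machine name MYDEVBOX and the database, user and table names are placeholders:

    ```csharp
    // Hedged sketch: from code running in the Azure role, the on-premises machine is
    // addressable by name once Connect links them; all names here are placeholders.
    using System;
    using System.Data.SqlClient;

    class ConnectSqlTest
    {
        static void Main()
        {
            const string connectionString =
                "Server=MYDEVBOX;Database=MyLocalDb;User ID=azureApp;Password=<password>;";

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open(); // traffic flows over the IP-sec tunnel set up by Azure Connect
                using (var command = new SqlCommand("SELECT COUNT(*) FROM Customers", connection))
                {
                    Console.WriteLine("Rows: {0}", command.ExecuteScalar());
                }
            }
        }
    }
    ```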

    Read the article

  • Strange mouse pointer problem...

    - by Paska
    Hi all, I have the latest Ubuntu installed (10.10), but after a year of updates - video driver updates and hundreds of tricks - the mouse pointer is shown as an UGLY square. These are the screenshots: First, Second. I have no idea what to do to solve this problem. Does anyone have an idea how to solve it?

    Edit: this problem has been present since 8.10!

    Edit 2, video card specifications:

    ```
    paska@ubuntu:~$ hwinfo --gfxcard
    35: PCI 100.0: 0300 VGA compatible controller (VGA)
      [Created at pci.318]
      UDI: /org/freedesktop/Hal/devices/pci_1106_3230
      Unique ID: VCu0.QX54AGQKWeE
      Parent ID: vSkL.CP+qXDDqow8
      SysFS ID: /devices/pci0000:00/0000:00:01.0/0000:01:00.0
      SysFS BusID: 0000:01:00.0
      Hardware Class: graphics card
      Model: "VIA K8M890CE/K8N890CE [Chrome 9]"
      Vendor: pci 0x1106 "VIA Technologies, Inc."
      Device: pci 0x3230 "K8M890CE/K8N890CE [Chrome 9]"
      SubVendor: pci 0x1043 "ASUSTeK Computer Inc."
      SubDevice: pci 0x81b5
      Revision: 0x11
      Memory Range: 0xd0000000-0xdfffffff (rw,prefetchable)
      Memory Range: 0xfa000000-0xfaffffff (rw,non-prefetchable)
      Memory Range: 0xfbcf0000-0xfbcfffff (ro,prefetchable,disabled)
      IRQ: 16 (10026 events)
      I/O Ports: 0x3c0-0x3df (rw)
      Module Alias: "pci:v00001106d00003230sv00001043sd000081B5bc03sc00i00"
      Driver Info #0:
        Driver Status: viafb is not active
        Driver Activation Cmd: "modprobe viafb"
      Config Status: cfg=new, avail=yes, need=no, active=unknown
      Attached to: #17 (PCI bridge)
    Primary display adapter: #35
    paska@ubuntu:~$
    ```

    Thanks, A

    Read the article

  • Streaming Netflix Media with My Wii

    - by Ben Griswold
    Late last year, I wrote about streaming media with my Sony Blu-ray Disc player. I am still digging the Blu-ray player setup, but guess what showed up in the mail yesterday? That's right! A free Netflix disc which now lets me instantly watch TV episodes and movies via my Wii console. I popped the disc into the console, and in less than 2 minutes the brain-numbingly simple activation was complete. (Full disclosure: I already had my Wi-Fi connection configured, but I'm confident that the Netflix installation disc would have helpfully walked me through this additional step if need be.) As it turns out, the Wii Netflix UI offers far more options than what one gets with the Blu-ray setup. Not only can I view my Instant Queue, but there's a list of recently watched movies, a list of recommended titles by category, the star rating system, movie information, and nearly everything you find on the web. I reread Steve Krug's Don't Make Me Think: A Common Sense Approach to Web Usability on a flight back from Orlando on Wednesday, so my current view of the world may be a little skewed, but the brilliance of the Wii's Netflix user interface is undeniable. It's not like the Blu-ray navigation is complicated, but the Wii navigation feels familiar and intuitive. How intuitive? Well, you won't find a single bit of help text on any of the Wii screens - just a simple and obvious point-and-click navigation system. And the UI is really pretty (which is still very important if you ask me) and so easy it became fun. Did I mention the media streaming works? Yep, we watched 2 half-hour kid videos yesterday without any streaming issues at all. If you have a Netflix account and a Wii, order your disc and give it a go. It's good stuff.

    Read the article

  • Oracle Service Bus JMS Deployments Utility by Mike Muller

    - by JuergenKress
    For proxy services utilizing the JMS transport, OSB receives messages from destinations by using an MDB. These MDBs are generated and deployed during activation of the service configuration. OSB creates a random, unique name for the J2EE application that gets deployed to WLS. The name starts with "_ALSB_" and ends in a unique series of digits. The EAR files are written to the sbgen subdirectory of the domain home directory. You will see these applications on the WLS console page for "Deployments". For various operational reasons, there are times when the application name for a given proxy service needs to be determined. Since the generated name of the application doesn't reflect the name of the service, it becomes difficult to determine the relationship between the service and its EAR file. In fact, it cannot be discerned from either the OSB or the WLS console. Read the full article here.

    SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • defunct dbus-daemon zombie freezes login for 30 seconds

    - by oldenburgb
    I'm running Ubuntu 12.04.2 LTS (Precise Pangolin) and noticed a 30-second delay whenever I log into my server via ssh or perform any kind of login via sudo on that machine. I can provoke immediate execution by killing the defunct dbus-daemon showing up during the delay. Output of ps fax | grep dbus:

    ```
    19222 ?  Ss  0:00 dbus-daemon --system --fork --activation=upstart
    19752 ?  Z   0:00  \_ [dbus-daemon] <defunct>
    ```

    Tapping into the bus using dbus-monitor --system, I'm getting this on each login:

    ```
    signal sender=org.freedesktop.DBus -> dest=(null destination) serial=7
      path=/org/freedesktop/DBus; interface=org.freedesktop.DBus; member=NameOwnerChanged
      string ":1.4"
      string ""
      string ":1.4"
    ```

    Stopping the dbus service eliminates this problem, but probably causes many others... I'm not running xorg on the machine, but the packages are present for X11 forwarding. I've ruled out the common motd script delay and the ssh "UseDNS no" fixes one finds when looking up login delay issues. Many thanks in advance for any help with this; it's been driving me crazy ;-)

    Read the article

  • Davicom Semiconductor, Inc. 21x4x DEC-Tulip not detected by Wireshark but IP operational

    - by deepsix86
    I recently flipped to Ubuntu 11.10 on a Dell 4300 (Intel). I'm getting an IP address and have no issues (ping/surf), but Wireshark is unable to detect the eth0 interface. I see references in forums to blacklisting tulip, but it looks like I am running dmfe. Not sure if the blacklist is required and where to go from here - maybe a driver update? I got a little lost looking in that area. Some hardware details below (IP/MAC/HOSTNAME removed):

    ```
    Linux xxxxxx 3.0.0-17-generic #30-Ubuntu SMP Thu Mar 8 17:34:21 UTC 2012 i686 i686 i386 GNU/Linux
    ```

    network-admin (Hosts tab) does not list eth0, only loopback and a bunch of IPv6 interfaces.

    ifconfig:

    ```
    eth0  Link encap:Ethernet  HWaddr xxxxxxxx
          inet addr:192.168.x.xx  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: xxxxxxxxxxx 64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:36662 errors:0 dropped:1 overruns:0 frame:0
          TX packets:24975 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:42115779 (42.1 MB)  TX bytes:3056435 (3.0 MB)
          Interrupt:18 Base address:0xe800
    ```

    lspci:

    ```
    02:09.0 Ethernet controller: Davicom Semiconductor, Inc. 21x4x DEC-Tulip compatible 10/100 Ethernet (rev 31)
        Subsystem: Device 4554:434e
        Flags: bus master, medium devsel, latency 64, IRQ 18
        I/O ports at e800 [size=256]
        Memory at fe1ffc00 (32-bit, non-prefetchable) [size=256]
        Expansion ROM at fe200000 [disabled] [size=256K]
        Capabilities: [50] Power Management version 2
        Kernel driver in use: dmfe
        Kernel modules: dmfe
    ```

    hwinfo --netcard:

    ```
    20: PCI 209.0: 0200 Ethernet controller
      [Created at pci.318]
      Unique ID: rBUF.0NgK5ZS9c0D
      Parent ID: 6NW+.siohrLUzzI4
      SysFS ID: /devices/pci0000:00/0000:00:1e.0/0000:02:09.0
      SysFS BusID: 0000:02:09.0
      Hardware Class: network
      Model: "Davicom 21x4x DEC-Tulip compatible 10/100 Ethernet"
      Vendor: pci 0x1282 "Davicom Semiconductor, Inc."
      Device: pci 0x9102 "21x4x DEC-Tulip compatible 10/100 Ethernet"
      SubVendor: pci 0x4554
      SubDevice: pci 0x434e
      Revision: 0x31
      Driver: "dmfe"
      Driver Modules: "dmfe"
      Device File: eth0
      I/O Ports: 0xe800-0xe8ff (rw)
      Memory Range: 0xfe1ffc00-0xfe1ffcff (rw,non-prefetchable)
      Memory Range: 0xfe200000-0xfe23ffff (ro,non-prefetchable,disabled)
      IRQ: 18 (61379 events)
      HW Address: 00:08:a1:01:35:70
      Link detected: yes
      Module Alias: "pci:v00001282d00009102sv00004554sd0000434Ebc02sc00i00"
      Driver Info #0:
        Driver Status: dmfe is active
        Driver Activation Cmd: "modprobe dmfe"
      Config Status: cfg=new, avail=yes, need=no, active=unknown
      Attached to: #11 (PCI bridge)
    ```

    Read the article

  • How Do I Implement parameterMaps for ADF Regions and Dynamic Regions?

    - by david.giammona
    parameterMap objects defined by managed beans can help reduce the number of child <parameter> elements listed under an ADF region or dynamic region page definition task flow binding. But more importantly, the parameterMap approach also allows greater flexibility in determining what input parameters are passed to an ADF region or dynamic region. This can be especially helpful when using dynamic regions, where each task flow utilized can provide an entirely different set of input parameters. The parameterMap is specified within an ADF region or dynamic region page definition task flow binding as shown below:

    ```xml
    <taskFlow id="checkoutflow1"
              taskFlowId="/WEB-INF/checkout-flow.xml#checkout-flow"
              activation="deferred"
              xmlns="http://xmlns.oracle.com/adf/controller/binding"
              parametersMap="#{pageFlowScope.userInfoBean.parameterMap}"/>
    ```

    The parameter map object must implement the java.util.Map interface. The keys it specifies match the names of the input parameters defined by the task flows utilized within the task flow binding. An example parameterMap object class is shown below:

    ```java
    import java.util.HashMap;
    import java.util.Map;

    public class UserInfoBean {
        private Map<String, Object> parameterMap = new HashMap<String, Object>();

        public Map getParameterMap() {
            parameterMap.put("isLoggedIn", getSecurity().isAuthenticated());
            parameterMap.put("principalName", getSecurity().getPrincipalName());
            return parameterMap;
        }
    }
    ```

    Read the article

  • Cannot configure NAP DCOM security.

    - by mattdwen
    I've just added a new 2K8 domain controller to an existing domain as part of a transition from 2K3. I am getting a lot of DCOM 10016 errors, indicating launch security permission problems with a specific CLSID, which turns out to be the NAP Agent service. I've dealt with this before by granting the Network Service account local launch and local activation permissions, but the security options are all disabled for this component in the Component Services snap-in. The NAP Agent service is not running, and its startup is set to Manual. Any ideas on how to remove the errors for the unneeded NAP agent?

    Read the article

  • Sudo and startup script

    - by Pitto
    Hello my friends. I have a new Asus 1215N and I need to type commands to enable multitouch. No problem: I've made a script. Since this netbook also needs manual activation of the wifi driver, the complete script is:

    ```bash
    #!/bin/bash
    #
    # list of synaptics device properties http://www.x.org/archive/X11R7.5/doc/man/man4/synaptics.4.html#sect4
    # list current synaptics device properties:
    xinput list-props '"SynPS/2 Synaptics TouchPad"'
    # sleep 5 # added delay...
    xinput set-int-prop "SynPS/2 Synaptics TouchPad" "Device Enabled" 8 1
    xinput --set-prop --type=int --format=32 "SynPS/2 Synaptics TouchPad" "Synaptics Two-Finger Pressure" 4
    xinput --set-prop --type=int --format=32 "SynPS/2 Synaptics TouchPad" "Synaptics Two-Finger Width" 9
    # Below width: 1-finger touch; above width: simulate 2-finger touch. - value=pad-pixels
    xinput --set-prop --type=int --format=8 "SynPS/2 Synaptics TouchPad" "Synaptics Edge Scrolling" 1 1 0
    # vertical, horizontal, corner - values: 0=disable 1=enable
    xinput --set-prop --type=int --format=32 "SynPS/2 Synaptics TouchPad" "Synaptics Jumpy Cursor Threshold" 250
    # stabilize 2-finger actions - value=pad-pixels
    #xinput --set-prop --type=int --format=8 "SynPS/2 Synaptics TouchPad" "Synaptics Tap Action" 0 0 0 0 1 2 3
    # pad corners rt rb lt lb, tap fingers 1 2 3 (can't simulate more than 2 tap fingers AFAIK) - values: 0=disable 1=left 2=middle 3=right etc. (in FF 8=back 9=forward)
    xinput --set-prop --type=int --format=8 "SynPS/2 Synaptics TouchPad" "Synaptics Two-Finger Scrolling" 1 0
    # vertical scrolling, horizontal scrolling - values: 0=disable 1=enable
    #xinput --set-prop --type=int --format=8 "SynPS/2 Synaptics TouchPad" "Synaptics Circular Scrolling" 1
    #xinput --set-prop --type=int --format=8 "SynPS/2 Synaptics TouchPad" "Synaptics Circular Scrolling Trigger" 3
    sudo modprobe lib80211
    sudo insmod /home/pitto/Drivers/broadcom/wl.ko
    exit
    ```

    I saved the script, put it in my home directory, ran chmod +x scriptname, and then added it to the startup applications. Then I did sudo visudo and added this row:

    ```
    myusername ALL=(ALL) NOPASSWD: /home/scriptname
    ```

    I rebooted and... multitouch works but wifi does not. When I manually launch the script it asks for the sudo password, so I thought it was because of the modprobe and insmod commands, and I added those commands to sudo visudo. Nothing. What am I doing wrong?

    Read the article

  • Cannot install any Feature/Role on Windows Server 2008 R2 Standard

    - by Parsa
    I was trying to install the Exchange 2010 prerequisites when I encountered some errors, which all look the same. Like this one:

        Error: Installation of [Windows Process Activation Service] Configuration APIs failed. The server needs to be restarted to undo the changes.

    My server is running Windows Server 2008 R2 Standard Edition.

    UPDATE: I tried installing the prerequisites one by one using PowerShell. Now I have errors on the RPC over HTTP proxy:

        Installation of [Web Server (IIS)] Tracing failed. Attempt to install tracing failed with error code 0x80070643. Fatal error during installation.

    Searching for the error code doesn't tell me much more than that something went wrong when trying to update Windows. Installing HTTP Tracing alone also doesn't make any difference.

    Read the article

  • Best way to fix security problems caused by Windows updates?

    - by Chris Lively
    I have a laptop running Windows 7 32-bit. Last night's security updates caused my Logitech mouse to stop working (specifically, they caused several USB ports to stop working altogether). After reviewing the system event log I found that the IPBusEnum component was failing due to an activation security error. A little more research showed that this was caused by the TrustedInstaller replacing the security permissions on those keys and generally mucking them up. To fix this I had to open regedit, take ownership of ALL the keys related to IPBusEnum, and force them to use the inherited permissions from the tree. Is there a better way to fix this when MS screws up the updates? I would hate to have to walk around to a number of machines and manually fix the registry key security settings.
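    For reference, the manual fix described above can at least be scripted. Here is a hedged C# sketch (an illustration added here, not part of the original question): it re-enables permission inheritance on a key, the programmatic equivalent of the "inherit from parent" fix in regedit. The key path is a hypothetical example; the real IPBusEnum-related keys must be taken from the event log entry, and ownership changes are left out, so run it as an account that can already change permissions.

    ```csharp
    // Hedged sketch: re-enable ACL inheritance on a registry key. Hypothetical key path.
    using Microsoft.Win32;
    using System.Security.AccessControl;

    class FixKeyPermissions
    {
        static void Main()
        {
            using (var key = Registry.LocalMachine.OpenSubKey(
                @"SYSTEM\CurrentControlSet\Enum\IPBusEnum",              // hypothetical path
                RegistryKeyPermissionCheck.ReadWriteSubTree,
                RegistryRights.ChangePermissions | RegistryRights.ReadPermissions))
            {
                if (key == null) return; // key not present on this machine

                RegistrySecurity security = key.GetAccessControl();
                // Let the key inherit its ACL from the parent again.
                security.SetAccessRuleProtection(isProtected: false, preserveInheritance: false);
                key.SetAccessControl(security);
            }
        }
    }
    ```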

    Read the article

  • No endpoint listening at.........

    - by Michael Stephenson
    I was having some very frustrating behaviour on our build server, and while I found a number of articles online with similar error messages, none of them helped me. I thought I would explain this here in case it helps me or anyone else in the future. The error message we were getting is:

        There was no endpoint listening at http://localhost/Stubs.ExternalApplication/SampleService.svc that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details.

    Our scenario is as follows: we have a solution where a WCF service application hosting the WCF routing service is listening on the Windows Azure Service Bus relay. We have an acceptance test project in the solution which sends a message to the service bus; the message is received by the WCF routing service and routed to SampleService.svc, which is hosted in another IIS application on the same box. A response then flows back to the test. The tests cover 5 scenarios: one simulating a successful message, and several error conditions. On my developer machine it was working absolutely fine every time, and a clean build on my developer machine worked fine. On the build server, however, one or more of the tests would fail each time with the above error message, and there didn't seem to be any pattern to which test would fail. The solution was building on a Windows 2008 R2 machine with IIS 7 and AppFabric Server installed, with auto-start configured for the IIS application listening to the service bus. After lots of searching online and looking at logs, it turned out to have a simple solution: just restart the WAS service (Windows Process Activation Service) and the services it advises you to restart with it (a small scripted version is sketched below). Hope this helps someone else.
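    For completeness, the same restart can be scripted. This is a minimal C# sketch (my addition, not from the original post) using ServiceController; "WAS" is the standard service name of the Windows Process Activation Service, and the code assumes it runs elevated:

    ```csharp
    // Hedged sketch: restart WAS and whatever services depend on it (typically W3SVC).
    // Run elevated; requires a reference to System.ServiceProcess.dll.
    using System.Collections.Generic;
    using System.ServiceProcess;

    class RestartWas
    {
        static void Main()
        {
            var was = new ServiceController("WAS"); // Windows Process Activation Service
            var stopped = new List<ServiceController>();

            // Stop running dependents first, remembering them so we can restart them.
            foreach (var dependent in was.DependentServices)
            {
                if (dependent.Status == ServiceControllerStatus.Running)
                {
                    dependent.Stop();
                    dependent.WaitForStatus(ServiceControllerStatus.Stopped);
                    stopped.Add(dependent);
                }
            }

            was.Stop();
            was.WaitForStatus(ServiceControllerStatus.Stopped);
            was.Start();
            was.WaitForStatus(ServiceControllerStatus.Running);

            // Bring the dependents back up.
            foreach (var dependent in stopped)
            {
                dependent.Start();
                dependent.WaitForStatus(ServiceControllerStatus.Running);
            }
        }
    }
    ```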

    Read the article

  • SQL Server 2008 R2 Developer install failed

    - by obulay
    I am trying to install SQL Server 2008 R2 Developer on Windows Server 2008 Standard. It previously had SQL Server 2008 Enterprise installed, which I first uninstalled. I think the uninstall still leaves some leftovers in the registry and in Program Files. Installation fails in the last step of the process, a couple of minutes in, with the error message "Installation failed". (I can't insert a picture for you because serverfault does not allow me to do so.) I re-installed SQL Server 2008 Enterprise on this box, and we are not going to go to R2 on older servers that previously had SQL 2005 or SQL 2008 on them. Looking at the Windows log:

        Activation context generation failed for "C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\SQLServer2008R2\x64\Microsoft.SqlServer.Configuration.SqlServer_ConfigExtension.dll". Dependent Assembly Microsoft.VC80.CRT,processorArchitecture="amd64",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="8.0.50727.4027" could not be found. Please use sxstrace.exe for detailed diagnosis.

    Read the article

  • SharePoint 2010 BDC Model Deployment Issue: “The default web application could not be determined.”

    - by Jan Tielens
    Yesterday I tried to deploy a Business Data Connectivity Model project created in Visual Studio 2010 to my SharePoint 2010 test server (all RTM versions), but during the deployment of the solution SharePoint threw the following error:

    Add Solution: Adding solution 'BCSDemo2.wsp'... Deploying solution 'BCSDemo2.wsp'... Error occurred in deployment step 'Add Solution': The default web application could not be determined. Set the SiteUrl property in feature BCSDemo2_Feature1 to the URL of the desired site and retry activation. Parameter name: properties

    A little bit of searching on the internet taught me that I was not the only one having this issue; Paul Andrew describes how to solve it in this post. Although Paul describes what to do, his explanation is not, let's say, very elaborate. :-) So let's describe the steps in a little more detail. Create a new Business Data Connectivity Model project in Visual Studio 2010 and (optionally) implement all your code, change the model, etc. When you try to deploy, you get the error mentioned above. To fix it, in the Solution Explorer navigate to and open the Feature1.Template.xml file (the name could be different if you decided to give your feature a different name, of course). Add the following XML inside the Feature element that's already there (replace the Value with the URL of your site, of course):

    <Properties>
      <Property Key='SiteUrl' Value='http://spf.u2ucourse.com'/>
    </Properties>

    The resulting XML should look like:

    <?xml version="1.0" encoding="utf-8" ?>
    <Feature xmlns="http://schemas.microsoft.com/sharepoint/">
      <Properties>
        <Property Key='SiteUrl' Value='http://spf.u2ucourse.com'/>
      </Properties>
    </Feature>

    Deploy the solution, now without any issues. :-) What happens is that when Visual Studio creates the SharePoint solution (the WSP file), it uses the feature template XML to generate the feature manifest, which will now include the missing property.

    Read the article

  • Cannot install any Feature/Role on Windows Server 2008 R2 Standard

    - by Parsa
    I was trying to install the Exchange 2010 prerequisites when I encountered some errors, all of which looked much the same, like this one: Error: Installation of [Windows Process Activation Service] Configuration APIs failed. The server needs to be restarted to undo the changes. My server is running Windows Server 2008 R2 Standard Edition. UPDATE: I tried installing the prerequisites one by one using PowerShell (roughly as in the sketch below). Now I get errors on the RPC over HTTP proxy: Installation of [Web Server (IIS)] Tracing failed. Attempt to install tracing failed with error code 0x80070643: fatal error during installation. Searching for the error code doesn't tell me much more than that something went wrong while trying to update Windows. Installing HTTP Tracing alone doesn't make any difference either.
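
    For context, the one-by-one PowerShell route looks roughly like this on Server 2008 R2, using the ServerManager module; the feature list is a typical Exchange 2010 prerequisite subset, not necessarily complete for every role:

        Import-Module ServerManager
        # -Restart reboots automatically if a feature requires it, which matters
        # here since the error says the server needs to be restarted
        Add-WindowsFeature NET-Framework, RSAT-ADDS, Web-Server, Web-Basic-Auth,
            Web-Windows-Auth, Web-Metabase, Web-Net-Ext, Web-Lgcy-Mgmt-Console,
            WAS-Process-Model, RPC-over-HTTP-proxy -Restart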

    Read the article

  • How to enable connection security for WMI firewall rules when using VAMT 2.0?

    - by Ondrej Tucny
    I want to use VAMT 2.0 to install product keys and activate software on remote machines. Everything works fine as long as the ASync-In, DCOM-In, and WMI-In Windows Firewall rules are enabled and the action is set to Allow the connection. However, when I try using Allow the connection if it is secure (regardless of the connection security option chosen), VAMT won't connect to the remote machine. I tried using wbemtest and the error is always "The RPC server is unavailable", error code 0x800706ba. How do I set up at least some level of connection security for remote WMI access so that VAMT works? I googled for the correct VAMT setup and read the Volume Activation 2.0 Step-by-Step guide, but had no luck finding anything about connection security.
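
    In case it helps: the Allow the connection if it is secure option only matches traffic protected by an IPsec connection security rule, so a matching rule has to exist on both the VAMT host and the remote machine. A minimal sketch (the rule name is arbitrary, and computerkerb authentication assumes both machines are domain-joined):

        netsh advfirewall consec add rule name="WMI IPsec" endpoint1=any endpoint2=any action=requireinrequestout auth1=computerkerb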

    Read the article

  • Is It Possible To Recover A Partial LVM Logical Volume?

    - by Terry Wang
    Background: It is an Ubuntu 12.04 VirtualBox VM with 5 virtual HDDs (VDI). NOTE this is just a test VM, so not well planned ahead:

    - ubuntu.vdi for / (/dev/mapper/ubuntu-root AKA /dev/ubuntu/root) and /home (/dev/mapper/ubuntu-home)
    - weblogic.vdi - /dev/sdb (mounted on /bea for weblogic and other stuff)
    - btrfs1.vdi - /dev/sdc (part of a btrfs -m raid1 -d raid1 configuration)
    - btrfs2.vdi - /dev/sdd (part of the same btrfs -m raid1 -d raid1 configuration)
    - more.vdi - /dev/sde (added because / ran out of inodes and it wasn't easy to figure out what to delete to free them up, so I just added a new virtual HDD, created a PV, added it to the existing volume group ubuntu, and grew the root logical volume to work around the inode issue -_-)

    What happened? Last Friday, before finishing up, I wanted to free up some disk space on that box. For some reason I thought more.vdi was useless and tried to detach it from the VM; I then clicked delete by mistake when detaching (should have clicked keep files, damn!). Unfortunately I didn't have a backup of it. All too late.

    What I have tried: I tried to undelete the VDI file (using testdisk and photorec), but it took too long and recovered heaps of .vdi files that I didn't want (huge, they filled the disk, damn!), so I finally gave up. Fortunately most of the data is on a separate ext4 partition and the btrfs volumes. Out of curiosity, I still tried to mount the logical volumes to see if it is possible to at least recover /var and /etc. I used a system rescue CD to boot and activate the volume groups, and got:

    Couldn't find device with uuid xxxx. Refusing activation of the partial LV root. Use --partial to override. 1 logical volume(s) in volume group "ubuntu" now active.

    I was able to mount the home LV but not the root LV. I am wondering if it is possible to access the root LV any more. Under the bonnet, data on LV root (/) was striped onto more.vdi (the deleted PV), so I know it's almost impossible to recover. But I am still curious how system administrators/DevOps guys deal with this sort of situation ;-) Thanks in advance.
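
    For the curious, the activation the error message hints at looks roughly like this (a sketch only; with a PV missing, the missing stripes are mapped to an error target, so only data that happened to live on the surviving disks is readable):

        # Activate the volume group even though a PV is missing
        vgchange -ay --partial ubuntu
        # See which physical volumes each logical volume was striped across
        lvs -a -o +devices ubuntu
        # Try a read-only mount to salvage whatever is still readable
        mkdir -p /mnt/root && mount -o ro /dev/ubuntu/root /mnt/root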

    Read the article

  • Does HP 8530w support Wifi 802.11n?

    - by FoxyBOA
    According to the vendor site, the HP 8530w supports 802.11n draft mode. Windows 7 tells me that my network card is an Intel 5100 ABG and knows nothing about N mode (guys on the Intel forum mentioned that 802.11n-capable model names must end with the letter N, e.g. 5100 ABN). Not sure if it matters, but my laptop model id is #FU462EA. How can I check whether my WiFi card is compliant with 802.11n? (I suspect that "draft N" could have two "special" meanings: it doesn't work with 802.11n at all, or it requires special activation of the mode.) Any hints are welcome.
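
    One quick check from within Windows 7 itself: the wireless driver reports the radio types it supports, so running this in a command prompt and looking for the "Radio types supported" line will show whether 802.11n is among them:

        netsh wlan show drivers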

    Read the article
