Search Results

Search found 3353 results on 135 pages for 'volume activation'.


  • Store product data in session variable or access db every time?

    - by JakeTheSnake
    I have my database storing lots of information about products (year, name, release_date, volume, etc.). I currently load all of the products once and store them in a session variable - right now there are only 8 products, but the list will grow as time progresses. The reason I did this was to (perhaps foolishly) save HDD reads every time the products page was accessed. Am I shooting myself in the foot by storing this information in the session?
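
    A minimal sketch of the alternative, assuming an ASP.NET application (the question doesn't name the platform): cache the product list once per application in the ASP.NET Cache rather than per user in Session, with an expiration so edits eventually show up. The Product and ProductRepository types and the timeout are illustrative assumptions, not from the question.

    using System;
    using System.Collections.Generic;
    using System.Web;
    using System.Web.Caching;

    public static class ProductCatalog
    {
        private const string CacheKey = "AllProducts";

        // Product and ProductRepository are assumed application types.
        public static List<Product> GetProducts()
        {
            var cache = HttpRuntime.Cache;
            var products = cache[CacheKey] as List<Product>;
            if (products == null)
            {
                products = ProductRepository.LoadAll();        // one database hit per expiry
                cache.Insert(CacheKey, products, null,
                    DateTime.UtcNow.AddMinutes(10),             // assumed absolute expiration
                    Cache.NoSlidingExpiration);
            }
            return products;
        }
    }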

    Read the article

  • Will any USB DVD reader work with Apple Macs?

    - by Kev
    Following on from this question: Can Apple Macintosh computers boot from a USB volume? Can I use any USB 2.0 DVD drive to boot a Mac from? I had a look around my local PC World today and only the more expensive drives explicitly state on the packaging that they are Mac compatible. Is this just because these products have been through Apple's hardware approval/testing programme, or do Macs have some special requirement?

    Read the article

  • Changing default playback device on Windows 8

    - by emartel
    Previously, on Vista and Windows 7, changing the default playback device would take effect instantly. For example, with audio coming out of my speakers, I right-click the Volume Control, click Playback Devices, select another device and click Set Default - the audio would be transferred immediately. Unfortunately, now, with Windows 8, I need to kill whatever process was outputting sound and restart it for the change to take effect. Is there something that can be done about it so that changes are taken into account immediately?

    Read the article

  • Where can I get a Windows 8 side-loading product key?

    - by Earlz
    I have Windows 8 available through MSDN and, as such, access to a lot of things such as volume licensing, though for now I'm just using the regular single-license Windows 8 Enterprise. I've tried to get side-loading to work without having a developer license, but I can't. Some things I've read on the internet seem to indicate that you need "a side-loading product key". Where can I get such a thing?

    Read the article

  • Running resize2fs on /

    - by user42363
    I'm trying to resize an ext4 filesystem on a Fedora 11 box. Using fdisk and LVM, I was able to grow the partition and the logical volume containing the filesystem. When I try to run resize2fs on the device containing the filesystem (/dev/sda2 in this case), I get: "Device or resource busy while trying to open /dev/sda2, Couldn't find valid filesystem superblock". I've tried this from a rescue disk that doesn't have the filesystem mounted - no joy. Maybe resize2fs doesn't know about ext4?

    Read the article

  • Windows Update error code: 80244004

    - by Hamidreza
    I am using Windows 7 and ESET Smart Security 5. Today I wanted to update my computer using Windows Update, but it gives me this error: Error(s) found: Code 80244004 - Windows Update encountered an unknown error. My system info: Sony Vaio EA2gfx, RAM: 4 GB DDR2, CPU: Intel Core i5. I checked out these links, but they didn't help:
    http://answers.microsoft.com/en-us/windows/forum/windows_7-windows_update/while-updating-i-am-getting-the-error-code/0b9b756c-5b6e-4571-838e-f90c48a4e00c
    https://www.calguns.net/calgunforum/showthread.php?t=583860
    http://www.sevenforums.com/windows-updates-activation/235807-windows-update-error-80244004-a.html
    Please help me, thanks.

    Read the article

  • Windows 8 / IIS 8 Concurrent Requests Limit

    - by OWScott
    IIS 8 on Windows Server 2012 doesn't have any fixed concurrent request limit, apart from whatever limit is reached when resources are maxed out. However, the client version of IIS 8, which ships with Windows 8, does have a concurrent request limit to discourage high-traffic production use on a client edition of Windows. Starting with IIS 7 (Windows Vista), the behavior changed from previous versions. In previous client versions of IIS, excess requests would throw a 403.9 error message (Access Forbidden: Too many users are connected.). Instead, Windows Vista, 7 and 8 queue excess requests so that they are handled gracefully, although there is a maximum number of requests that will be processed simultaneously. Thomas Deml provided a concurrent request chart for Windows Vista many years ago, but I have been unable to find an equivalent chart for Windows 8, so I asked Wade Hilmo from the IIS team what the limits are. Since this is controlled not by the IIS team itself but rather by the Windows licensing team, he asked around and found the authoritative answer, which I'll provide below.

    Windows 8 – IIS 8 Concurrent Requests Limit
    Windows 8: 3
    Windows 8 Professional: 10
    Windows RT: N/A (IIS does not run on Windows RT)

    Windows 7 – IIS 7.5 Concurrent Requests Limit
    Windows 7 Home Starter: 1
    Windows 7 Basic: 1
    Windows 7 Premium: 3
    Windows 7 Ultimate, Professional, Enterprise: 10

    Windows Vista – IIS 7 Concurrent Requests Limit
    Windows Vista Home Basic (IIS process activation and HTTP processing only): 3
    Windows Vista Home Premium: 3
    Windows Vista Ultimate, Professional: 10

    Windows Server 2003, Windows Server 2008, Windows Server 2008 R2 and Windows Server 2012 allow an unlimited number of simultaneous requests.
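
    One way to see the queuing in action (not part of the original post) is to throw a burst of slow requests at a site on a client SKU and watch the completion times stagger once the limit is hit. The URL and the slow test page below are assumptions; adjust them for your own site.

    using System;
    using System.Diagnostics;
    using System.Net;
    using System.Threading.Tasks;

    class ConcurrencyProbe
    {
        static void Main()
        {
            // Raise the client-side connection limit so the client isn't the bottleneck.
            ServicePointManager.DefaultConnectionLimit = 50;
            const string url = "http://localhost/slow-page.aspx";   // assumed page that sleeps ~2 seconds

            var options = new ParallelOptions { MaxDegreeOfParallelism = 20 };
            Parallel.For(0, 20, options, i =>
            {
                var watch = Stopwatch.StartNew();
                using (var client = new WebClient())
                {
                    client.DownloadString(url);
                }
                Console.WriteLine("Request {0} finished after {1} ms", i, watch.ElapsedMilliseconds);
            });
        }
    }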

    Read the article

  • How to Upgrade Your Netbook to Windows 7 Home Premium

    - by Matthew Guay
    Would you like more features and flash in Windows on your netbook?  Here's how you can upgrade your netbook to Windows 7 Home Premium the easy way.

    Most new netbooks today ship with Windows 7 Starter, which is the cheapest edition of Windows 7.  It is fine for many computing tasks, and will run all your favorite programs great, but it lacks many customization, multimedia, and business features found in higher editions.  Here we'll show you how you can quickly upgrade your netbook to a more full-featured edition of Windows 7 using Windows Anytime Upgrade.  Also, if you want to upgrade your laptop or desktop to another edition of Windows 7, say Professional, you can follow these same steps to upgrade it, too.

    Please note: This is only for computers already running Windows 7.  If your netbook is running XP or Vista, you will have to run a traditional upgrade to install Windows 7.

    Upgrade Advisor

    First, let's make sure your netbook can support the extra features, such as Aero Glass, in Windows 7 Home Premium.  Most modern netbooks that ship with Windows 7 Starter can run the advanced features in Windows 7 Home Premium, but let's check just in case.  Download the Windows 7 Upgrade Advisor (link below), and install as normal.  Once it's installed, run it and click Start Check.  Make sure you're connected to the internet before you run the check, or otherwise you may see this error message.  If you see it, click Ok, then connect to the internet and start the check again.

    It will now scan all of your programs and hardware to make sure they're compatible with Windows 7.  Since you're already running Windows 7 Starter, it will also tell you if your computer will support the features in other editions of Windows 7.  After a few moments, the Upgrade Advisor will show you what it found.  Here we see that our netbook, a Samsung N150, can be upgraded to Windows 7 Home Premium, Professional, or Ultimate.  We also see that we had one issue, but this was because a driver we had installed was not recognized.  Click "See all system requirements" to see what your netbook can do with the new edition.  This shows you which of the requirements, including support for Windows Aero, your netbook meets.  Here our netbook supports Aero, so we're ready to upgrade.  For more, check out our article on how to make sure your computer can run Windows 7 with Upgrade Advisor.

    Upgrade with Anytime Upgrade

    Now, we're ready to upgrade our netbook to Windows 7 Home Premium.  Enter "Anytime Upgrade" in the Start menu search, and select Windows Anytime Upgrade.  Windows Anytime Upgrade lets you upgrade using a product key you already have or one you purchase during the upgrade process.  And it installs without any downloads or Windows disks, so it works great even for netbooks without DVD drives.

    Anytime Upgrades are cheaper than a standard upgrade, and for a limited time, select retailers in the US are offering Anytime Upgrades to Windows 7 Home Premium for only $49.99 if purchased with a new netbook.  If you already have a netbook running Windows 7 Starter, you can either purchase an Anytime Upgrade package at a retail store or purchase a key online during the upgrade process for $79.95.  Or, if you have a standard Windows 7 product key (full or upgrade), you can use it in Anytime Upgrade.  This is especially nice if you can purchase Windows 7 cheaper through your school, university, or office.
    Purchase an upgrade online

    To purchase an upgrade online, click "Go online to choose the edition of Windows 7 that's best for you".  Here you can see a comparison of the features of each edition of Windows 7.  Note that you can upgrade to either Home Premium, Professional, or Ultimate.  We chose Home Premium because it has most of the features that home users want, including Media Center and Aero Glass effects.  Also note that the price of each upgrade is cheaper than the respective upgrade from Windows XP or Vista.  Click Buy under the edition you want.  Enter your billing information, then your payment information.  Once you confirm your purchase, you will be taken directly to the Upgrade screen.  Make sure to save your receipt, as you will need the product key if you ever need to reinstall Windows on your computer.

    Upgrade with an existing product key

    If you purchased an Anytime Upgrade kit from a retailer, or already have a Full or Upgrade key for another edition of Windows 7, choose "Enter an upgrade key".  Enter your product key, and click Next.  If you purchased an Anytime Upgrade kit, the product key will be located on the inside of the case on a yellow sticker.  The key will be verified as a valid key, and Anytime Upgrade will automatically choose the correct edition of Windows 7 based on your product key.  Click Next when this is finished.

    Continuing the Upgrade process

    Whether you entered a key or purchased a key online, the process is the same from here on.  Click "I accept" to accept the license agreement.  Now, you're ready to install your upgrade.  Make sure to save all open files and close any programs, and then click Upgrade.  The upgrade only takes about 10 minutes in our experience, but your mileage may vary.  Any available Microsoft updates, including ones for Office, Security Essentials, and other products, will be installed before the upgrade takes place.  After a couple of minutes, your computer will automatically reboot and finish the installation.  It will then reboot once more, and your computer will be ready to use!  Welcome to your new edition of Windows 7!

    Here's a before and after shot of our desktop.  When you do an Anytime Upgrade, all of your programs, files, and settings will be just as they were before you upgraded.  The only change we noticed was that our pinned taskbar icons were slightly rearranged to the default order of Internet Explorer, Explorer, and Media Player.  Here's a shot of our desktop before the upgrade.  Notice that all of our pinned programs and desktop icons are still there, as well as our taskbar customization (we are using small icons on the taskbar instead of the default large icons).  Before, with the Windows 7 Starter background and the Aero Basic theme; and after, with Aero Glass and the more colorful default Windows 7 background.

    All of the features of Windows 7 Home Premium are now ready to use.  The Aero theme was activated by default, but you can now customize your netbook theme, background, and more with the Personalization pane.  To open it, right-click on your desktop and select Personalize.  You can also now use Windows Media Center, and can play back DVD movies using an external drive.  One of our favorite tools, the Snipping Tool, is also now available for easy screenshots and clips.

    Activating your new edition of Windows 7

    You will still need to activate your new edition of Windows 7.  To do this right away, open the Start menu, right-click on Computer, and select Properties.
    Scroll to the bottom, and click "Activate Windows Now".  Make sure you're connected to the internet, and then select "Activate Windows online now".  Activation may take a few minutes, depending on your internet connection speed.  When it is done, the Activation wizard will let you know that Windows is activated and genuine.  Your upgrade is all finished!

    Conclusion

    Windows Anytime Upgrade makes it easy, and somewhat cheaper, to upgrade to another edition of Windows 7.  It's useful for desktop and laptop owners who want to upgrade to Professional or Ultimate, but many more netbook owners will want to upgrade from Starter to Home Premium or another edition.

    Links

    Download the Windows 7 Upgrade Advisor
    Windows Team Blog: Anytime Upgrade Special with new PC purchase

    Read the article

  • SQL SERVER – Installing AdventureWorks for SQL Server 2011

    - by pinaldave
    I just began with SQL Server 2011 Denali CTP1. The very first thing I realized is that there is no AdventureWorks sample database available for Denali. I quickly searched online and found the Microsoft documentation that explains how to install (restore) AdventureWorks for SQL Server 2011 Denali. Download AdventureWorks from here. Run the following script (replace the path to the mdf file with your own):

    CREATE DATABASE AdventureWorks2008R2
    ON (FILENAME = 'C:\SQL 11 CTP1\CTP1\AdventureWorks2008R2_Data.mdf')
    FOR ATTACH_REBUILD_LOG;

    When you run the above script it will give you the following messages and you are DONE!

    File activation failure. The physical file name "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdventureWorks2008R2_Log.ldf" may be incorrect.
    New log file 'C:\SQL 11 CTP1\CTP1\AdventureWorks2008R2_log.ldf' was created.
    Converting database 'AdventureWorks2008R2' from version 679 to the current version 684.
    Database 'AdventureWorks2008R2' running the upgrade step from version 679 to version 680.
    Database 'AdventureWorks2008R2' running the upgrade step from version 680 to version 681.
    Database 'AdventureWorks2008R2' running the upgrade step from version 681 to version 682.
    Database 'AdventureWorks2008R2' running the upgrade step from version 682 to version 683.
    Database 'AdventureWorks2008R2' running the upgrade step from version 683 to version 684.

    I will soon write about my experience with Denali. However, SQL Server Management Studio has started to look more like Visual Studio. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology
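
    As a side note (not from the original post), the same attach can be run from C# with a plain SqlCommand against the master database; the connection string below is an assumption you would replace with your own instance.

    using System;
    using System.Data.SqlClient;

    class AttachAdventureWorks
    {
        static void Main()
        {
            const string connectionString =
                "Server=localhost;Database=master;Integrated Security=true";   // assumed instance
            const string attachSql =
                @"CREATE DATABASE AdventureWorks2008R2
                  ON (FILENAME = 'C:\SQL 11 CTP1\CTP1\AdventureWorks2008R2_Data.mdf')
                  FOR ATTACH_REBUILD_LOG;";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(attachSql, connection))
            {
                connection.Open();
                command.ExecuteNonQuery();   // rebuilds the log and runs the version upgrade steps
                Console.WriteLine("AdventureWorks2008R2 attached.");
            }
        }
    }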

    Read the article

  • Creating STA COM compatible ASP.NET Applications

    - by Rick Strahl
    When building ASP.NET applications that interface with old school COM objects like those created with VB6 or Visual FoxPro (MTDLL), it's extremely important that the threads that are serving requests use Single Threaded Apartment Threading. STA is a COM built-in technology that allows essentially single threaded components to operate reliably in a multi-threaded environment. STAs guarantee that COM objects instantiated on a specific thread stay on that specific thread and any access to a COM object from another thread automatically marshals that thread to the STA thread. The end effect is that you can have multiple threads, but a COM object instance lives on a fixed, never-changing thread.

    ASP.NET by default uses MTA (multi-threaded apartment) threads which are truly free spinning threads that pay no heed to COM object marshaling. This is vastly more efficient than STA threading, which has a bit of overhead in determining whether it's OK to run code on a given thread or whether some sort of thread/COM marshaling needs to occur. MTA COM components can be very efficient, but STA COM components in a multi-threaded environment always tend to have a fair amount of overhead.

    It's amazing how much COM Interop I still see today, so while it seems really old school to be talking about this topic, it's actually quite apropos for me as I have many customers using legacy COM systems that need to interface with other .NET applications. In this post I'm consolidating some of the hacks I've used to integrate with various ASP.NET technologies when using STA COM components.

    STA in ASP.NET

    Support for STA threading in the ASP.NET framework is fairly limited. Specifically, only the original ASP.NET WebForms technology supports STA threading directly via its STA Page Handler implementation or what you might know as ASPCOMPAT mode. For WebForms, running STA components is as easy as specifying the ASPCOMPAT attribute in the @Page tag:

    <%@ Page Language="C#" AspCompat="true" %>

    which runs the page in STA mode. Removing it runs in MTA mode. Simple. Unfortunately all other ASP.NET technologies built on top of the core ASP.NET engine do not support STA natively. So if you want to use STA COM components in MVC or with classic ASMX Web Services, there's no automatic way like the ASPCOMPAT keyword available.

    So what happens when you run an STA COM component in an MTA application? In low volume environments - nothing much will happen. The COM objects will appear to work just fine as there are no simultaneous thread interactions and the COM component will happily run on a single thread or multiple single threads one at a time. So for testing, running components in MTA environments may appear to work just fine. However, as load increases and threads get re-used by ASP.NET, COM objects will end up getting created on multiple different threads. This can result in crashes or hangs, or data corruption in the STA components, which store their state in thread local storage on the STA thread. If threads overlap this global store can easily get corrupted, which in turn causes problems. STA ensures that any COM object instance loaded always stays on the same thread it was instantiated on.

    What about COM+?

    COM+ is supposed to address the problem of STA in MTA applications by providing an abstraction with its own thread pool manager for COM objects. It steps into the COM instantiation pipeline and hands out COM instances from its own internally maintained STA thread pool.
    This guarantees that the COM instantiation threads are STA threads if using STA components. COM+ works, but in my experience the technology is very, very slow for STA components. It adds a ton of overhead and reduces COM performance noticeably in load tests in IIS. COM+ can make sense in some situations, but for Web apps with STA components it falls short. In addition there's also the need to ensure that COM+ is set up and configured on the target machine and the fact that components have to be registered in COM+. COM+ also keeps components up at all times, so if a component needs to be replaced the COM+ package needs to be unloaded (the same is true for IIS hosted components but it's more common to manage that). COM+ is an option for well-established components, but native STA support tends to provide better performance and more consistent usability, IMHO.

    STA for non-supporting ASP.NET technologies

    As mentioned above, only WebForms supports STA natively. However, by utilizing the WebForms ASP.NET Page handler internally it's actually possible to trick various other ASP.NET technologies and let them work with STA components. This is ugly, but I've used each of these in various applications and I've had minimal problems making them work with FoxPro STA COM components, which is about as difficult as it gets for COM Interop in .NET. In this post I summarize several STA workarounds that enable you to use STA threading with these ASP.NET technologies:

    - ASMX Web Services
    - ASP.NET MVC
    - WCF Web Services
    - ASP.NET Web API

    ASMX Web Services

    I start with classic ASP.NET ASMX Web Services because it's the easiest mechanism that allows for STA modification. It also clearly demonstrates how the WebForms STA Page Handler is the key technology to enable the various other solutions to create STA components. Essentially the way this works is to override the WebForms Page class and hijack its init functionality for processing requests. Here's what this looks like for Web Services:

    namespace FoxProAspNet
    {
        public class WebServiceStaHandler : System.Web.UI.Page, IHttpAsyncHandler
        {
            protected override void OnInit(EventArgs e)
            {
                IHttpHandler handler = new WebServiceHandlerFactory().GetHandler(
                    this.Context,
                    this.Context.Request.HttpMethod,
                    this.Context.Request.FilePath,
                    this.Context.Request.PhysicalPath);
                handler.ProcessRequest(this.Context);
                this.Context.ApplicationInstance.CompleteRequest();
            }

            public IAsyncResult BeginProcessRequest(
                HttpContext context, AsyncCallback cb, object extraData)
            {
                return this.AspCompatBeginProcessRequest(context, cb, extraData);
            }

            public void EndProcessRequest(IAsyncResult result)
            {
                this.AspCompatEndProcessRequest(result);
            }
        }

        public class AspCompatWebServiceStaHandlerWithSessionState :
            WebServiceStaHandler, IRequiresSessionState
        {
        }
    }

    This class overrides the ASP.NET WebForms Page class, which has little known AspCompatBeginProcessRequest() and AspCompatEndProcessRequest() methods that are responsible for providing the WebForms ASPCOMPAT functionality. These methods handle routing requests to STA threads. Note there are two classes - one that includes session state and one that does not. If you plan on using ASP.NET Session state, use the AspCompatWebServiceStaHandlerWithSessionState class; otherwise stick with WebServiceStaHandler. This maps to the EnableSessionState page setting in WebForms. This class simply hooks into this functionality by overriding the BeginProcessRequest and EndProcessRequest methods and always forcing it into the AspCompat methods.
    The way this works is that BeginProcessRequest() fires first to set up the threads and start initializing the handler. As part of that process the OnInit() method is fired, which is now already running on an STA thread. The code then creates an instance of the actual WebService handler factory and calls its ProcessRequest method to start executing, which generates the Web Service result. Immediately after ProcessRequest the request is stopped with ApplicationInstance.CompleteRequest(), which ensures that the rest of the Page handler logic doesn't fire. This means that even though the fairly heavy Page class is overridden here, it doesn't end up executing any of its internal processing, which makes this code fairly efficient. In a nutshell, we're hijacking the Page HttpHandler and forcing it to process the WebService process handler in the context of the AspCompat handler behavior.

    Hooking up the Handler

    Because the above is an HttpHandler implementation, you need to hook up the custom handler and replace the standard ASMX handler. To do this you need to modify the web.config file (here for IIS 7 and IIS Express):

    <configuration>
      <system.webServer>
        <handlers>
          <remove name="WebServiceHandlerFactory-Integrated-4.0" />
          <add name="Asmx STA Web Service Handler" path="*.asmx" verb="*"
               type="FoxProAspNet.WebServiceStaHandler" precondition="integrated" />
        </handlers>
      </system.webServer>
    </configuration>

    (Note: the name for WebServiceHandlerFactory-Integrated-4.0 might be slightly different depending on your server version. Check the IIS Handler configuration in the IIS Management Console for the exact name, or simply remove the handler from the list there, which will propagate to your web.config.)

    For IIS 5 & 6 (Windows XP/2003) or the Visual Studio Web Server use:

    <configuration>
      <system.web>
        <httpHandlers>
          <remove path="*.asmx" verb="*" />
          <add path="*.asmx" verb="*" type="FoxProAspNet.WebServiceStaHandler" />
        </httpHandlers>
      </system.web>
    </configuration>

    To test, create a new ASMX Web Service and create a method like this:

    [WebService(Namespace = "http://foxaspnet.org/")]
    [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
    public class FoxWebService : System.Web.Services.WebService
    {
        [WebMethod]
        public string HelloWorld()
        {
            return "Hello World. Threading mode is: " +
                   System.Threading.Thread.CurrentThread.GetApartmentState();
        }
    }

    Run this before you put in the web.config configuration changes and you should get: Hello World. Threading mode is: MTA. Then put the handler mapping into web.config and you should see: Hello World. Threading mode is: STA. And you're on your way to using STA COM components. It's a hack but it works well! I've used this with several high volume Web Service installations with various customers and it's been fast and reliable.

    ASP.NET MVC

    ASP.NET MVC has quickly become the most popular ASP.NET technology, replacing WebForms for creating HTML output. MVC is more complex to get started with, but once you understand the basic structure of how requests flow through the MVC pipeline it's easy to use and amazingly flexible in manipulating HTML requests. In addition, MVC has great support for non-HTML output sources like JSON and XML, making it an excellent choice for AJAX requests without any additional tools. Unlike WebForms, ASP.NET MVC doesn't support STA threads natively, and so some trickery is needed to make it work with STA threads as well. MVC gets its handler implementation through custom route handlers using ASP.NET's built-in routing semantics.
    To work in an STA handler requires working in the Page handler as part of the route handler implementation. As with the Web Service handler, the first step is to create a custom HttpHandler that can instantiate an MVC request pipeline properly:

    public class MvcStaThreadHttpAsyncHandler : Page, IHttpAsyncHandler, IRequiresSessionState
    {
        private RequestContext _requestContext;

        public MvcStaThreadHttpAsyncHandler(RequestContext requestContext)
        {
            if (requestContext == null)
                throw new ArgumentNullException("requestContext");
            _requestContext = requestContext;
        }

        public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData)
        {
            return this.AspCompatBeginProcessRequest(context, cb, extraData);
        }

        protected override void OnInit(EventArgs e)
        {
            var controllerName = _requestContext.RouteData.GetRequiredString("controller");
            var controllerFactory = ControllerBuilder.Current.GetControllerFactory();
            var controller = controllerFactory.CreateController(_requestContext, controllerName);
            if (controller == null)
                throw new InvalidOperationException("Could not find controller: " + controllerName);
            try
            {
                controller.Execute(_requestContext);
            }
            finally
            {
                controllerFactory.ReleaseController(controller);
            }
            this.Context.ApplicationInstance.CompleteRequest();
        }

        public void EndProcessRequest(IAsyncResult result)
        {
            this.AspCompatEndProcessRequest(result);
        }

        public override void ProcessRequest(HttpContext httpContext)
        {
            throw new NotSupportedException(
                "STAThreadRouteHandler does not support ProcessRequest called (only BeginProcessRequest)");
        }
    }

    This handler code figures out which controller to load and then executes the controller. MVC internally provides the information needed to route to the appropriate method and pass the right parameters. Like the Web Service handler, the logic occurs in OnInit() and performs all the processing in that part of the request.

    Next, we need a RouteHandler that can actually pick up this handler. Unlike the Web Service handler where we simply registered the handler, MVC requires a RouteHandler to pick up the handler. RouteHandlers look at the URL's path and based on that decide on what handler to invoke. The route handler is pretty simple - all it does is load our custom handler:

    public class MvcStaThreadRouteHandler : IRouteHandler
    {
        public IHttpHandler GetHttpHandler(RequestContext requestContext)
        {
            if (requestContext == null)
                throw new ArgumentNullException("requestContext");
            return new MvcStaThreadHttpAsyncHandler(requestContext);
        }
    }

    At this point you can instantiate this route handler and force STA requests to MVC by specifying a route.
    The following sets up the ASP.NET default route:

    Route mvcRoute = new Route("{controller}/{action}/{id}",
        new RouteValueDictionary(
            new { controller = "Home", action = "Index", id = UrlParameter.Optional }),
        new MvcStaThreadRouteHandler());
    RouteTable.Routes.Add(mvcRoute);

    To make this code a little easier to work with and mimic the behavior of the routes.MapRoute() extension method that MVC provides, here is an extension method for MapMvcStaRoute():

    public static class RouteCollectionExtensions
    {
        public static void MapMvcStaRoute(this RouteCollection routeTable,
            string name, string url, object defaults = null)
        {
            Route mvcRoute = new Route(url,
                new RouteValueDictionary(defaults),
                new MvcStaThreadRouteHandler());
            RouteTable.Routes.Add(mvcRoute);
        }
    }

    With this, the syntax to add a route becomes a little easier and matches the MapRoute() method:

    RouteTable.Routes.MapMvcStaRoute(
        name: "Default",
        url: "{controller}/{action}/{id}",
        defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
    );

    The nice thing about this route handler, STA handler and extension method is that it's fully self-contained. You can put all three into a single class file and stick it into your Web app, and then simply call MapMvcStaRoute() and it just works. Easy! To see whether this works, create an MVC controller like this:

    public class ThreadTestController : Controller
    {
        public string ThreadingMode()
        {
            return Thread.CurrentThread.GetApartmentState().ToString();
        }
    }

    Try this test first with only the MapRoute() hookup in the route configuration, in which case you should get MTA as the value. Then change the MapRoute() call to MapMvcStaRoute(), leaving all the parameters the same, and re-run the request. You now should see STA as the result. You're on your way to using STA COM components reliably in ASP.NET MVC.

    WCF Web Services running through IIS

    WCF Web Services provide a more robust and wider range of services for Web Services. You can use WCF over HTTP, TCP, and Pipes, and WCF services support WS* secure services. There are many features in WCF that go way beyond what ASMX can do. But it's also a bit more complex than ASMX. As a basic rule, if you need to serve straight SOAP services over HTTP I'd recommend sticking with the simpler ASMX services, especially if COM is involved. If you need WS* support or want to serve data over non-HTTP protocols, then WCF makes more sense. WCF is not my forte, but I found a solution from Scott Seely on his blog that describes the process and that seems to work well. I'm copying his code below so this STA information is all in one place, and I'll quickly explain it. Scott's code basically works by creating a custom OperationBehavior which can be specified via an [STAOperationBehavior] attribute on every method. Using his attribute you end up with a class (or interface, if you separate the contract and class) that looks like this:

    [ServiceContract]
    public class WcfService
    {
        [OperationContract]
        public string HelloWorldMta()
        {
            return Thread.CurrentThread.GetApartmentState().ToString();
        }

        // Make sure you use this custom STAOperationBehavior
        // attribute to force STA operation of service methods
        [STAOperationBehavior]
        [OperationContract]
        public string HelloWorldSta()
        {
            return Thread.CurrentThread.GetApartmentState().ToString();
        }
    }

    Pretty straightforward. The latter method returns STA while the former returns MTA. To make STA work, every method needs to be marked up. The implementation consists of the attribute and an OperationInvoker implementation.
    Here are the two classes required to make this work, from Scott's post:

    public class STAOperationBehaviorAttribute : Attribute, IOperationBehavior
    {
        public void AddBindingParameters(OperationDescription operationDescription,
            System.ServiceModel.Channels.BindingParameterCollection bindingParameters)
        {
        }

        public void ApplyClientBehavior(OperationDescription operationDescription,
            System.ServiceModel.Dispatcher.ClientOperation clientOperation)
        {
            // If this is applied on the client, well, it just doesn’t make sense.
            // Don’t throw in case this attribute was applied on the contract
            // instead of the implementation.
        }

        public void ApplyDispatchBehavior(OperationDescription operationDescription,
            System.ServiceModel.Dispatcher.DispatchOperation dispatchOperation)
        {
            // Change the IOperationInvoker for this operation.
            dispatchOperation.Invoker = new STAOperationInvoker(dispatchOperation.Invoker);
        }

        public void Validate(OperationDescription operationDescription)
        {
            if (operationDescription.SyncMethod == null)
            {
                throw new InvalidOperationException("The STAOperationBehaviorAttribute " +
                    "only works for synchronous method invocations.");
            }
        }
    }

    public class STAOperationInvoker : IOperationInvoker
    {
        IOperationInvoker _innerInvoker;

        public STAOperationInvoker(IOperationInvoker invoker)
        {
            _innerInvoker = invoker;
        }

        public object[] AllocateInputs()
        {
            return _innerInvoker.AllocateInputs();
        }

        public object Invoke(object instance, object[] inputs, out object[] outputs)
        {
            // Create a new, STA thread
            object[] staOutputs = null;
            object retval = null;
            Thread thread = new Thread(
                delegate()
                {
                    retval = _innerInvoker.Invoke(instance, inputs, out staOutputs);
                });
            thread.SetApartmentState(ApartmentState.STA);
            thread.Start();
            thread.Join();
            outputs = staOutputs;
            return retval;
        }

        public IAsyncResult InvokeBegin(object instance, object[] inputs, AsyncCallback callback, object state)
        {
            // We don’t handle async…
            throw new NotImplementedException();
        }

        public object InvokeEnd(object instance, out object[] outputs, IAsyncResult result)
        {
            // We don’t handle async…
            throw new NotImplementedException();
        }

        public bool IsSynchronous { get { return true; } }
    }

    The key in this setup is the Invoker and the Invoke method, which creates a new thread and then fires the request on this new thread. Because this approach creates a new thread for every request it's not super efficient. There's a bunch of overhead involved in creating the thread and throwing it away after each request, but it'll work for low volume requests and ensure each thread runs in STA mode. If better performance is required it would be useful to create a custom thread manager that can pool a number of STA threads and hand off threads as needed rather than creating new threads on every request.

    If your Web Service needs are simple and you need only to serve standard SOAP 1.x requests, I would recommend sticking with ASMX services. It's easier to set up and work with, and for STA component use it'll be significantly better performing since ASP.NET manages the STA thread pool for you rather than firing new threads for each request. One nice thing about Scott's code, though, is that it works in any WCF environment including self hosting. It has no dependency on ASP.NET or WebForms for that matter.

    STA - If you must

    STA components are a pain in the ass, and thankfully there isn't too much stuff out there anymore that requires them. But when you need it and you need to access STA functionality from .NET, at least there are a few options available to make it happen.
    Each of these solutions is a bit hacky, but they work - I've used all of them in production with good results with FoxPro components. I hope compiling all of these in one place here makes STA consumption a little bit easier. I feel your pain :-)

    Resources

    Download STA Handler Code Examples
    Scott Seely's original STA WCF OperationBehavior Article

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in FoxPro  ASP.NET  .NET  COM

    Read the article

  • Hotmail — avoid sign up confirmation / lost password being marked as spam

    - by Xerxes Cameron
    When sending legitimate large-volume emails from our IP (e.g. for sign-up confirmation or lost password), they get marked as Junk in Hotmail. In the past, there was the Sender ID SPF Record Submission Form, where you could put yourself on Microsoft's radar. See this old discussion. However, as of April 2012 this has been abandoned. Any hints on what to do now? What is a good way to contact the Hotmail team?

    Read the article

  • NTFS Corruption: Files created in Linux corrupted when Windows Boots

    - by Logan Mayfield
    I'm getting some file loss and corruption on my Win7/Ubuntu 12.04 dual-boot setup. I have a large shared NTFS partition. I have my Windows Docs/Music/etc. directories on that partition and have the comparable directories in Linux set up as symlinks. I'm using ntfs-3g on the Linux side of things to manage the NTFS partition. The shared partition is on a logical partition along with my Linux /home, / and swap partitions. The NTFS partition is mounted at boot time via fstab with the following options: ntfs-3g users,nls=utf8,locale=en_US.UTF-8,exec,rw

    The problem seems to be confined to newly created and recently edited files. I have not seen data loss or corruption when creating/editing files in Windows and then moving over to Ubuntu. I've been using the sync command aggressively in Ubuntu to try to ensure everything is getting written to the HDD. I do not use hibernate in Windows, so I know it's not the usual missing files due to hibernation problem. I'm not seeing any mount-related issues in dmesg.

    Most recently I had a set of files related to a LaTeX document go bad. Some of them show up in Ubuntu but I am unable to delete them. In the GUI file browser they are given thumbnails associated with files I created on my last boot of Windows. To be more specific: I created a few png files in Windows. The files corrupted by that Windows boot are associated with running PdfLatex on a file and are not image files. However, two of the corrupted files show up with the thumbnail image of one of the previously mentioned png files. The png files are not in the same directory as the LaTeX files, but they are both within the Documents folder tree.

    I've had success with using NTFS for shared data in the past and am hoping there's some quirk here I'm missing and it's not just bad luck. On one hand this appears to be some kind of Windows problem, as data loss occurs when I boot to Windows after having worked in Ubuntu for a while. However, I'm assuming it's more on the Ubuntu end as it requires the special NTFS drivers.

    Edit for more info: This is a Lenovo Thinkpad L430, purchased new in the last month, so it's a fairly fresh install. Many of the files on the shared partition were copied over from a previous NTFS-formatted shared partition on another HDD. As requested, here's a sample chkdsk log. Some of the files it's mentioning were files that got deleted off the partition while in Ubuntu. Others were created/edited but not deleted.

    Checking file system on D:
    Volume dismounted. All opened handles to this volume are now invalid.
    Volume label is Files.
    CHKDSK is verifying files (stage 1 of 3)...
    Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x789f47 for possibly 0x21 clusters.
    Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x42 is already in use.
    Deleting corrupt attribute record (128, "") from file record segment 66.
    86496 file records processed. File verification completed.
    385 large file records processed. 0 bad file records processed. 0 EA records processed. 0 reparse records processed.
    CHKDSK is verifying indexes (stage 2 of 3)...
    Deleted invalid filename Screenshot from 2012-09-09 09:51:27.png (72) in directory 46.
    The NTFS file name attribute in file 0x48 is incorrect.
    53 00 63 00 72 00 65 00 65 00 6e 00 73 00 68 00 S.c.r.e.e.n.s.h.
    6f 00 74 00 20 00 66 00 72 00 6f 00 6d 00 20 00 o.t. .f.r.o.m. .
    32 00 30 00 31 00 32 00 2d 00 30 00 39 00 2d 00 2.0.1.2.-.0.9.-.
    30 00 39 00 20 00 30 00 39 00 3a 00 35 00 31 00 0.9. .0.9.:.5.1.
3a 00 32 00 37 00 2e 00 70 00 6e 00 67 00 0d 00 :.2.7...p.n.g... 00 00 00 00 00 00 90 94 49 1f 5e 00 00 80 d4 00 ......I.^.... File 72 has been orphaned since all its filenames were invalid Windows will recover the file in the orphan recovery phase. Correcting minor file name errors in file 72. Index entry found.000 of index $I30 in file 0x5 points to unused file 0x11. Deleting index entry found.000 in index $I30 of file 5. Index entry found.001 of index $I30 in file 0x5 points to unused file 0x16. Deleting index entry found.001 in index $I30 of file 5. Index entry found.002 of index $I30 in file 0x5 points to unused file 0x15. Deleting index entry found.002 in index $I30 of file 5. Index entry DOWNLO~1 of index $I30 in file 0x28 points to unused file 0x2b6. Deleting index entry DOWNLO~1 in index $I30 of file 40. Unable to locate the file name attribute of index entry Screenshot from 2012-09-09 09:51:27.png of index $I30 with parent 0x2e in file 0x48. Deleting index entry Screenshot from 2012-09-09 09:51:27.png in index $I30 of file 46. An index entry of index $I30 in file 0x32 points to file 0x151e8 which is beyond the MFT. Deleting index entry latexsheet.tex in index $I30 of file 50. An index entry of index $I30 in file 0x58bc points to file 0x151eb which is beyond the MFT. Deleting index entry D8CZ82PK in index $I30 of file 22716. An index entry of index $I30 in file 0x58bc points to file 0x151f7 which is beyond the MFT. Deleting index entry EGA4QEAX in index $I30 of file 22716. An index entry of index $I30 in file 0x58bc points to file 0x151e9 which is beyond the MFT. Deleting index entry NGTB469M in index $I30 of file 22716. An index entry of index $I30 in file 0x58bc points to file 0x151fb which is beyond the MFT. Deleting index entry WU5RKXAB in index $I30 of file 22716. Index entry comp220-lab3.synctex.gz of index $I30 in file 0xda69 points to unused file 0xd098. Deleting index entry comp220-lab3.synctex.gz in index $I30 of file 55913. Unable to locate the file name attribute of index entry comp220-numberGrammars.aux of index $I30 with parent 0xda69 in file 0xa276. Deleting index entry comp220-numberGrammars.aux in index $I30 of file 55913. The file reference 0x500000000cd43 of index entry comp220-numberGrammars.out of index $I30 with parent 0xda69 is not the same as 0x600000000cd43. Deleting index entry comp220-numberGrammars.out in index $I30 of file 55913. The file reference 0x500000000cd45 of index entry comp220-numberGrammars.pdf of index $I30 with parent 0xda69 is not the same as 0xc00000000cd45. Deleting index entry comp220-numberGrammars.pdf in index $I30 of file 55913. An index entry of index $I30 in file 0xda69 points to file 0x15290 which is beyond the MFT. Deleting index entry gram.aux in index $I30 of file 55913. An index entry of index $I30 in file 0xda69 points to file 0x15291 which is beyond the MFT. Deleting index entry gram.out in index $I30 of file 55913. An index entry of index $I30 in file 0xda69 points to file 0x15292 which is beyond the MFT. Deleting index entry gram.pdf in index $I30 of file 55913. Unable to locate the file name attribute of index entry comp230-quiz1.synctex.gz of index $I30 with parent 0xda6f in file 0xd183. Deleting index entry comp230-quiz1.synctex.gz in index $I30 of file 55919. An index entry of index $I30 in file 0xf3cc points to file 0x15283 which is beyond the MFT. Deleting index entry require-transform.rkt in index $I30 of file 62412. An index entry of index $I30 in file 0xf3cc points to file 0x15284 which is beyond the MFT. 
Deleting index entry set.rkt in index $I30 of file 62412. An index entry of index $I30 in file 0xf497 points to file 0x15280 which is beyond the MFT. Deleting index entry logger.rkt in index $I30 of file 62615. An index entry of index $I30 in file 0xf497 points to file 0x15281 which is beyond the MFT. Deleting index entry misc.rkt in index $I30 of file 62615. An index entry of index $I30 in file 0xf497 points to file 0x15282 which is beyond the MFT. Deleting index entry more-scheme.rkt in index $I30 of file 62615. An index entry of index $I30 in file 0xf5bf points to file 0x15285 which is beyond the MFT. Deleting index entry core-layout.rkt in index $I30 of file 62911. An index entry of index $I30 in file 0xf5e0 points to file 0x15286 which is beyond the MFT. Deleting index entry ref.scrbl in index $I30 of file 62944. An index entry of index $I30 in file 0xf6f0 points to file 0x15287 which is beyond the MFT. Deleting index entry base-render.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x15288 which is beyond the MFT. Deleting index entry html-properties.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x15289 which is beyond the MFT. Deleting index entry html-render.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x1528b which is beyond the MFT. Deleting index entry latex-prefix.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x1528c which is beyond the MFT. Deleting index entry latex-render.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x1528e which is beyond the MFT. Deleting index entry scribble.tex in index $I30 of file 63216. An index entry of index $I30 in file 0xf717 points to file 0x1528a which is beyond the MFT. Deleting index entry lang.rkt in index $I30 of file 63255. An index entry of index $I30 in file 0xf721 points to file 0x1528d which is beyond the MFT. Deleting index entry lang.rkt in index $I30 of file 63265. An index entry of index $I30 in file 0xf764 points to file 0x1528f which is beyond the MFT. Deleting index entry lang.rkt in index $I30 of file 63332. An index entry of index $I30 in file 0x14261 points to file 0x15270 which is beyond the MFT. Deleting index entry fddff3ae9ae2221207f144821d475c08ec3d05 in index $I30 of file 82529. An index entry of index $I30 in file 0x14621 points to file 0x15268 which is beyond the MFT. Deleting index entry FETCH_HEAD in index $I30 of file 83489. An index entry of index $I30 in file 0x14650 points to file 0x15272 which is beyond the MFT. Deleting index entry 86 in index $I30 of file 83536. An index entry of index $I30 in file 0x14651 points to file 0x15266 which is beyond the MFT. Deleting index entry pack-7f54ce9f8218d2cd8d6815b8c07461b50584027f.idx in index $I30 of file 83537. An index entry of index $I30 in file 0x14651 points to file 0x15265 which is beyond the MFT. Deleting index entry pack-7f54ce9f8218d2cd8d6815b8c07461b50584027f.pack in index $I30 of file 83537. An index entry of index $I30 in file 0x146f1 points to file 0x15275 which is beyond the MFT. Deleting index entry master in index $I30 of file 83697. An index entry of index $I30 in file 0x146f6 points to file 0x15276 which is beyond the MFT. Deleting index entry remotes in index $I30 of file 83702. An index entry of index $I30 in file 0x1477d points to file 0x15278 which is beyond the MFT. Deleting index entry pad.rkt in index $I30 of file 83837. 
An index entry of index $I30 in file 0x14797 points to file 0x1527c which is beyond the MFT. Deleting index entry pad1.rkt in index $I30 of file 83863. An index entry of index $I30 in file 0x14810 points to file 0x1527d which is beyond the MFT. Deleting index entry cm.rkt in index $I30 of file 83984. An index entry of index $I30 in file 0x14926 points to file 0x1527e which is beyond the MFT. Deleting index entry multi-file-search.rkt in index $I30 of file 84262. An index entry of index $I30 in file 0x149ef points to file 0x1527f which is beyond the MFT. Deleting index entry com.rkt in index $I30 of file 84463. An index entry of index $I30 in file 0x14b47 points to file 0x15202 which is beyond the MFT. Deleting index entry COMMIT_EDITMSG in index $I30 of file 84807. An index entry of index $I30 in file 0x14b47 points to file 0x15279 which is beyond the MFT. Deleting index entry index in index $I30 of file 84807. An index entry of index $I30 in file 0x14b4c points to file 0x15274 which is beyond the MFT. Deleting index entry master in index $I30 of file 84812. An index entry of index $I30 in file 0x14b61 points to file 0x1520b which is beyond the MFT. Deleting index entry 02 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1525a which is beyond the MFT. Deleting index entry 28 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15208 which is beyond the MFT. Deleting index entry 29 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1521f which is beyond the MFT. Deleting index entry 2c in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15261 which is beyond the MFT. Deleting index entry 2e in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x151f0 which is beyond the MFT. Deleting index entry 45 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1523e which is beyond the MFT. Deleting index entry 47 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x151e5 which is beyond the MFT. Deleting index entry 49 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15214 which is beyond the MFT. Deleting index entry 58 in index $I30 of file 84833. Index entry 6e of index $I30 in file 0x14b61 points to unused file 0xd182. Deleting index entry 6e in index $I30 of file 84833. Unable to locate the file name attribute of index entry a0 of index $I30 with parent 0x14b61 in file 0xd29c. Deleting index entry a0 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1521b which is beyond the MFT. Deleting index entry cd in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15249 which is beyond the MFT. Deleting index entry d6 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15242 which is beyond the MFT. Deleting index entry df in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15227 which is beyond the MFT. Deleting index entry ea in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1522e which is beyond the MFT. Deleting index entry f3 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x151f2 which is beyond the MFT. Deleting index entry ff in index $I30 of file 84833. 
An index entry of index $I30 in file 0x14b62 points to file 0x15254 which is beyond the MFT. Deleting index entry 1ed39b36ad4bd48c91d22cbafd7390f1ea38da in index $I30 of file 84834. An index entry of index $I30 in file 0x14b75 points to file 0x15224 which is beyond the MFT. Deleting index entry 96260247010fe9811fea773c08c5f3a314df3f in index $I30 of file 84853. An index entry of index $I30 in file 0x14b79 points to file 0x15219 which is beyond the MFT. Deleting index entry 8f689724ca23528dd4f4ab8b475ace6edcb8f5 in index $I30 of file 84857. An index entry of index $I30 in file 0x14b7c points to file 0x15223 which is beyond the MFT. Deleting index entry 1df17cf850656be42c947cba6295d29c248d94 in index $I30 of file 84860. An index entry of index $I30 in file 0x14b7c points to file 0x15217 which is beyond the MFT. Deleting index entry 31db8a3c72a3e44769bbd8db58d36f8298242c in index $I30 of file 84860. An index entry of index $I30 in file 0x14b7c points to file 0x15267 which is beyond the MFT. Deleting index entry 8e1254d755ff1882d61c07011272bac3612f57 in index $I30 of file 84860. An index entry of index $I30 in file 0x14b82 points to file 0x15246 which is beyond the MFT. Deleting index entry f959bfaf9643c1b9e78d5ecf8f669133efdbf3 in index $I30 of file 84866. An index entry of index $I30 in file 0x14b88 points to file 0x151fe which is beyond the MFT. Deleting index entry 7e9aa15b1196b2c60116afa4ffa613397f2185 in index $I30 of file 84872. An index entry of index $I30 in file 0x14b8a points to file 0x151ea which is beyond the MFT. Deleting index entry 73cb0cd248e494bb508f41b55d862e84cdd6e0 in index $I30 of file 84874. An index entry of index $I30 in file 0x14b8e points to file 0x15264 which is beyond the MFT. Deleting index entry bd555d9f0383cc14c317120149e9376a8094c4 in index $I30 of file 84878. An index entry of index $I30 in file 0x14b96 points to file 0x15212 which is beyond the MFT. Deleting index entry 630dba40562d991bc6cbb6fed4ba638542e9c5 in index $I30 of file 84886. An index entry of index $I30 in file 0x14b99 points to file 0x151ec which is beyond the MFT. Deleting index entry 478be31ca8e538769246e22bba3330d81dc3c8 in index $I30 of file 84889. An index entry of index $I30 in file 0x14b99 points to file 0x15258 which is beyond the MFT. Deleting index entry 66c60c0a0f3253bc9a5112697e4cbb0dfc0c78 in index $I30 of file 84889. An index entry of index $I30 in file 0x14b9c points to file 0x15238 which is beyond the MFT. Deleting index entry 1c7ceeddc2953496f9ffbfc0b6fb28846e3fe3 in index $I30 of file 84892. An index entry of index $I30 in file 0x14b9c points to file 0x15247 which is beyond the MFT. Deleting index entry ae6e32ffc49d897d8f8aeced970a90d3653533 in index $I30 of file 84892. An index entry of index $I30 in file 0x14ba0 points to file 0x15233 which is beyond the MFT. Deleting index entry f71c7d874e45179a32e138b49bf007e5bbf514 in index $I30 of file 84896. Index entry 2e04fefbd794f050d45e7a717d009e39204431 of index $I30 in file 0x14ba7 points to unused file 0xd097. Deleting index entry 2e04fefbd794f050d45e7a717d009e39204431 in index $I30 of file 84903. An index entry of index $I30 in file 0x14baa points to file 0x15241 which is beyond the MFT. Deleting index entry 0dda7dec1c635cd646dfef308e403c2843d5dc in index $I30 of file 84906. An index entry of index $I30 in file 0x14baa points to file 0x151fc which is beyond the MFT. Deleting index entry 98151e654dd546edcfdec630bc82d90619ac8e in index $I30 of file 84906. 
An index entry of index $I30 in file 0x14bb1 points to file 0x151e9 which is beyond the MFT. Deleting index entry 1997c5be62ffeebc99253cced7608415e38e4e in index $I30 of file 84913. An index entry of index $I30 in file 0x14bb1 points to file 0x1521d which is beyond the MFT. Deleting index entry 6bf3aedefd3ac62d9c49cad72d05e8c0ad242c in index $I30 of file 84913. An index entry of index $I30 in file 0x14bb1 points to file 0x151f4 which is beyond the MFT. Deleting index entry 907b755afdca14c00be0010962d0861af29264 in index $I30 of file 84913. An index entry of index $I30 in file 0x14bb3 points to file 0x15218 which is beyond the MFT. Deleting index entry

    Read the article

  • How to change the firmware of my NETGEAR WGT624 v2 to DD-WRT

    - by Lirik
    In reference to my previous question: I can't find the appropriate firmware for my NETGEAR WGT624 v2 router. I went to the dd-wrt web site and read the wiki, but I didn't see any files or instructions on how to change the firmware. The router database lists my router, but as I said: no files. In addition, the "Supported" column lists "wip" - what is wip?

    Router Database - 4 routers found
    Manufacturer | Model | Revision | Supported | Activation required
    Netgear | WGT624 | v1 | wip | no
    Netgear | WGT624 | v2 | wip | no
    ...

    The only thing listed for my router on the dd-wrt web site is:

    Router details
    Chipset: AR2312A
    RAM: 16 MB
    FLASH: 4 MB
    Additional information
    * OpenWRT Wiki: Netgear WGT624
    * Unbrick procedure

    Is the router supported? Can I change the firmware? If yes, then how can I change it?

    Read the article

  • Communication Between Your PC and Azure VM via Windows Azure Connect

    - by Shaun
    With the new release of the Windows Azure platform there are a lot of new features available. In my previous post I introduced one of them, remote desktop access to an Azure virtual machine. Now I would like to talk about another cool feature: Windows Azure Connect.
    What's Windows Azure Connect? I would like to quote the definition of Windows Azure Connect from MSDN: "With Windows Azure Connect, you can use a simple user interface to configure IPsec-protected connections between computers or virtual machines (VMs) in your organization's network, and roles running in Windows Azure. IPsec protects communications over Internet Protocol (IP) networks through the use of cryptographic security services." There is a diagram on MSDN as well that illustrates this: Worker Role 1 and Web Role 1 are connected with the development machines and database servers, some of which are inside the organization and some of which are not. With Windows Azure Connect, roles deployed in the cloud can consume resources located inside our intranet or anywhere in the world. That means the roles can connect to the local database and access local shared resources such as shared files, folders and printers, etc.
    Difference between Windows Azure Connect and AppFabric: It may seem that Windows Azure Connect duplicates Windows Azure AppFabric. Both of them aim to solve the problem of how to communicate between resources in the cloud and resources inside the local network. The table below lists the differences as I understand them.
    Category | Windows Azure Connect | Windows Azure AppFabric
    Purpose | An IPsec connection between the local machines and Azure roles. | An application service running in the cloud.
    Connectivity | IPsec, domain-joined | Net.TCP, HTTP, HTTPS
    Components | Windows Azure Connect Driver | Service Bus, Access Control, Caching
    Usage | Azure roles connect to a local database server; Azure roles use local shared files, folders, printers, etc.; Azure roles join the local AD. | Expose a local service to the Internet; move the authorization process to the cloud; integrate existing identities such as Live ID, Google ID, etc. with existing local services; utilize the distributed cache.
    And here are some scenarios showing which of them should be used.
    Scenario | Connect | AppFabric
    I have a service deployed in the intranet and I want people to be able to use it from the Internet. | | Y
    I have a website deployed on Azure that needs to use a database deployed inside the company, and I don't want to expose the database to the Internet. | Y |
    I have a service deployed in the intranet that uses AD authorization, and a website deployed on Azure which needs to use this service. | Y |
    I have a service deployed in the intranet, and some people on the Internet can use it but they need to be authorized and authenticated. | | Y
    I have a service in the intranet and a website deployed on Azure. The service can be used from the Internet, and the website should be able to use it as well, with AD authorization, for more functionality. | Y | Y
    How to Enable Windows Azure Connect: OK, we have talked a lot about Windows Azure Connect and its differences from Windows Azure AppFabric. Now let's see how to enable and use it. First of all, since this feature is in the CTP stage we have to apply for it before using it. On the Windows Azure Portal we can see our CTP feature status under the Home, Beta Program page.
    You can submit the application to join the Beta Program to Microsoft on this page. A few days later Microsoft will send an email to you (to the email address of your Live ID) when it's available. In my case we can see that Windows Azure Connect has been activated by Microsoft, and then we can click the Connect button on top, or click the Virtual Network item in the left navigation bar. The first thing we need to do, if it's our first time entering the Connect page, is to enable Windows Azure Connect. After that we can see our Windows Azure Connect information on this page.
    Add a Local Machine to Azure Connect: As explained above, Windows Azure Connect can make an IPsec connection between local machines and Azure role instances, so first we add a local machine to our Azure Connect. To do this we click the Install Local Endpoint button on top and the portal gives us a URL. Copy this URL to the machine we want to add and it will download the software for us. This software is installed on the local machines that we want to join to the Connect. After installation a tray icon appears to indicate that this machine has joined our Connect. The local agent refreshes its status to the Windows Azure platform every 5 minutes, but we can click the Refresh button to retrieve the latest status at once. Now my local machine is ready to connect, and we can see it in the Windows Azure Portal if we switch back to the portal and select the Activated Endpoints node.
    Add a Windows Azure Role to Azure Connect: Let's create a very simple Azure project with a basic ASP.NET web role inside. To make it available on Windows Azure Connect we open the Azure project properties of this role from the Solution Explorer in Visual Studio, select the Virtual Network tab and check Activate Windows Azure Connect. The next step is to get the activation token from the Windows Azure Portal. On the same page there is a button named Get Activation Token. Click this button and the portal displays the token. We copy this token, paste it into the box in the Visual Studio tab, and then deploy the application to Azure. After the deployment completes we can see the role instance listed in the Windows Azure Portal - Virtual Connect section.
    Establish the Connect Group: The final task is to create a connect group which contains the machines and role instances that need to be connected to each other. This can be done in the portal very easily. The machines and instances will NOT be connected until we create a group for them; a machine or instance can be used in one or more groups. In the Virtual Connect section click the Groups and Roles node in the left navigation bar and click the Create Group button on top. This brings up a dialog. What we need to do is specify a group name and description, and then select the local computers and Azure role instances into this group. After the Azure Fabric updates the group settings we can see the groups and the endpoints on the page. And if we switch back to the local machine we can see that the tray icon has changed and the status has turned to Connected. Windows Azure Connect updates the group information every 5 minutes; if you find the status is still Disconnected, right-click the tray icon and select the Refresh menu to retrieve the latest group policy and make it connect.
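    Once the group shows Connected, a quick scripted reachability check can confirm that a role instance really sees an on-premises endpoint before you wire a connection string to it. The article itself works with .NET roles, so treat the following as a minimal, hypothetical Python sketch only: the machine name MY-LOCAL-SQL01 and port 1433 are placeholder assumptions for an on-premises SQL Server that has been added as a Connect endpoint and whose firewall port is open.
    import socket

    # Placeholder assumptions: an on-premises machine registered as an Azure Connect
    # endpoint, and the SQL Server port opened in its firewall.
    TARGET_HOST = "MY-LOCAL-SQL01"
    TARGET_PORT = 1433

    def can_reach(host, port, timeout=5.0):
        """Return True if a TCP connection to host:port succeeds over the Connect link."""
        try:
            # getaddrinfo lets name resolution hand back the IPv6 address that
            # Windows Azure Connect assigns to the endpoint.
            for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
                    host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
                sock = socket.socket(family, socktype, proto)
                sock.settimeout(timeout)
                try:
                    if sock.connect_ex(sockaddr) == 0:
                        return True
                finally:
                    sock.close()
        except socket.gaierror:
            pass  # name not resolvable: the endpoint is probably not in the same group
        return False

    if __name__ == "__main__":
        print("%s:%d reachable: %s" % (TARGET_HOST, TARGET_PORT,
                                       can_reach(TARGET_HOST, TARGET_PORT)))
    If the check fails, the PING and folder-sharing tests in the next section are simpler ways to narrow down whether the group membership or the firewall rule is the problem.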
    Test the Azure Connect between the Local Machine and the Azure Role Instance: Now our local machine and Azure role instance are connected. This means each of them can communicate with the other at the IP level. For example, we can open the SQL Server port so that our Azure role can connect to it by using the machine name or the IP address. Windows Azure Connect uses IPv6 to connect the local machines and role instances; you can get the IP address from the Windows Azure Portal Virtual Network section when you select an endpoint. I don't want to give a full example of how to use the Connect, but I would like to show two very simple tests. The first one is PING. When a local machine and a role instance are connected through Windows Azure Connect we can PING either of them if we open the ICMP protocol in the firewall settings. To do this we need to run a command before the test. Open a command window on the local machine and on the role instance, and execute the following command:
    netsh advfirewall firewall add rule name="ICMPv6" dir=in action=allow enable=yes protocol=icmpv6
    Thanks to Jason Chen, Patriek van Dorp, Anton Staykov and Steve Marx, who helped me enable the ICMPv6 setting. For the full discussion we had, please visit here. You can use the Remote Desktop Access feature to log on to the Azure role instance; please refer to my previous blog post to learn how to use Remote Desktop Access in Windows Azure. Then we can PING the machine or the role instance by specifying its name. Below is the screen where I PING my local machine from my Azure instance. We can use the IPv6 address to PING each other as well; in the following image I PING my role instance from my local machine through the IPv6 address. Another example I would like to demonstrate here is folder sharing. I shared a folder on my local machine, and then, logged on to the role instance, we can see the folder content from the file explorer window.
    Summary: In this blog post I introduced another new feature, Windows Azure Connect. With this feature our local resources and role instances (virtual machines) can be connected to each other. In this way our Azure application can use local resources such as database servers, printers, etc. without exposing them to the Internet.   Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • EBS: OPP Out of memory issue...

    - by ashish.shrivastava
    The FO Processor is a little hungrier for memory than other Java processes. If the XSLT scalable option is not set and, at the same time, your RTF template is not well optimized, you are definitely going to hit an Out of memory exception while working with a large volume of data. If the memory requirement is not too bad, you can set the OPP heap size using the following SQL statements.
    Check the current OPP JVM heap size using the following SQL query:
    SQL> select DEVELOPER_PARAMETERS from FND_CP_SERVICES where SERVICE_ID = (select MANAGER_TYPE from FND_CONCURRENT_QUEUES where CONCURRENT_QUEUE_NAME = 'FNDCPOPP');
    DEVELOPER_PARAMETERS ----------------------------------------------------- J:oracle.apps.fnd.cp.gsf.GSMServiceController:-mx512m
    Set the JVM heap size using the following SQL statement:
    SQL> update FND_CP_SERVICES set DEVELOPER_PARAMETERS = 'J:oracle.apps.fnd.cp.gsf.GSMServiceController:-mx2048m' where SERVICE_ID = (select MANAGER_TYPE from FND_CONCURRENT_QUEUES where CONCURRENT_QUEUE_NAME = 'FNDCPOPP');
    SQL> commit;
    You need to restart the Concurrent Manager to make the change effective. If this does not resolve the issue, you need to optimize the RTF template and set the XSLT scalable option to true.
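    If you have to check or change this setting on several instances, the same two statements can be scripted. The following is only a hedged sketch using the cx_Oracle Python driver; the connection details, the APPS credentials and the 2048m target value are assumptions to replace with your own, and the Concurrent Manager restart still has to be done separately.
    import cx_Oracle

    # Placeholder connection details for the E-Business Suite database.
    conn = cx_Oracle.connect("apps", "apps_password", "dbhost:1521/EBSDB")
    cur = conn.cursor()

    opp_filter = """where SERVICE_ID = (select MANAGER_TYPE
                                          from FND_CONCURRENT_QUEUES
                                         where CONCURRENT_QUEUE_NAME = 'FNDCPOPP')"""

    # Show the current OPP JVM parameters (e.g. ...GSMServiceController:-mx512m).
    cur.execute("select DEVELOPER_PARAMETERS from FND_CP_SERVICES " + opp_filter)
    print("Current:", cur.fetchone()[0])

    # Raise the heap to 2048m (assumed target value).
    cur.execute("update FND_CP_SERVICES set DEVELOPER_PARAMETERS = "
                "'J:oracle.apps.fnd.cp.gsf.GSMServiceController:-mx2048m' " + opp_filter)
    conn.commit()
    conn.close()
    # The Concurrent Manager must still be restarted before the new heap size is used.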

    Read the article

  • Print Any Document Type with AutoVue Document Print Services

    - by [email protected]
    The newly released AutoVue Document Print Services allow development organizations to automate and process high volume printing operations, of both business and technical document types, within their broader enterprise applications. For many organizations, their printing processes are challenged by the fact that they can only print a small subset of the documents required by their enterprise users. By integrating AutoVue Document Print Services, and deploying them in conjunction with their existing print server solutions, organizations can address that challenge and automate the printing of virtually any document type required in any business process, greatly extending the value of their print server solutions, and improving business processes and workforce productivity. For further details, check out the AutoVue Document Print Services datasheet.

    Read the article

  • Mouse pointer strange problem...

    - by Paska
    Hi all, I have the latest Ubuntu installed (10.10), but after a year of updates, video driver updates, and hundreds of tricks, the mouse pointer is displayed as an UGLY square... These are the screenshots: First Second I have no idea what to do to solve this problem. Does anyone have an idea how to solve it? Edit: this problem has been present since 8.10! Edit 2, video card specifications: paska@ubuntu:~$ hwinfo --gfxcard 35: PCI 100.0: 0300 VGA compatible controller (VGA) [Created at pci.318] UDI: /org/freedesktop/Hal/devices/pci_1106_3230 Unique ID: VCu0.QX54AGQKWeE Parent ID: vSkL.CP+qXDDqow8 SysFS ID: /devices/pci0000:00/0000:00:01.0/0000:01:00.0 SysFS BusID: 0000:01:00.0 Hardware Class: graphics card Model: "VIA K8M890CE/K8N890CE [Chrome 9]" Vendor: pci 0x1106 "VIA Technologies, Inc." Device: pci 0x3230 "K8M890CE/K8N890CE [Chrome 9]" SubVendor: pci 0x1043 "ASUSTeK Computer Inc." SubDevice: pci 0x81b5 Revision: 0x11 Memory Range: 0xd0000000-0xdfffffff (rw,prefetchable) Memory Range: 0xfa000000-0xfaffffff (rw,non-prefetchable) Memory Range: 0xfbcf0000-0xfbcfffff (ro,prefetchable,disabled) IRQ: 16 (10026 events) I/O Ports: 0x3c0-0x3df (rw) Module Alias: "pci:v00001106d00003230sv00001043sd000081B5bc03sc00i00" Driver Info #0: Driver Status: viafb is not active Driver Activation Cmd: "modprobe viafb" Config Status: cfg=new, avail=yes, need=no, active=unknown Attached to: #17 (PCI bridge) Primary display adapter: #35 paska@ubuntu:~$ thanks, A

    Read the article

  • Streaming Netflix Media with My Wii

    - by Ben Griswold
    Late last year, I wrote about Streaming Media with my Sony Blu-ray Disc Player. I am still digging the Blu-ray player setup but guess what showed up in the mail yesterday?   That's right!  A free Netflix disc which now lets me instantly watch TV episodes and movies via my Wii console.  I popped the disc into the console and in less than 2 minutes the brain-numbingly simple activation was complete.  (Full disclosure: I already had my Wi-Fi connection configured, but I'm confident that the Netflix installation disc would have helpfully walked me through this additional step if need be.) As it turns out, the Wii Netflix UI offers far more options than what one gets with the Blu-ray setup.  Not only can I view my Instant Queue, but there's a list of recently watched movies, a list of recommended titles by category, the star rating system, movie information and nearly everything you find on the web.  I reread Steve Krug's Don't Make Me Think: A Common Sense Approach to Web Usability on a flight back from Orlando on Wednesday, so my current view of the world may be a little skewed, but the brilliance of Netflix Wii's user interface is undeniable. It's not like the Blu-ray navigation is complicated, but the Wii navigation feels familiar and intuitive. How intuitive?  Well, you won't find a single bit of help text on any of the Wii screens, just a simple and obvious point-and-click navigation system.  And the UI is really pretty (which is still very important if you ask me) and so easy it became fun. Did I mention the media streaming works?  Yep, we watched 2 half-hour kid videos yesterday without any streaming issues at all.  If you have a Netflix account and a Wii, order your disc and give it a go. It’s good stuff.

    Read the article

  • Oracle Service Bus JMS Deployments Utility by Mike Muller

    - by JuergenKress
    For proxy services utilizing the JMS transport, OSB receives messages from destinations by using an MDB. These MDBs get generated and deployed during activation of the service configuration. OSB creates a random, unique name for the J2EE application that gets deployed to WLS. The name starts with “_ALSB_” and ends in a unique series of digits. The EAR files are written to the sbgen subdirectory of the domain home directory. You will see these applications on the WLS console page for “Deployments”. For various operational reasons, there are times when the application name for a given proxy service needs to be determined. Since the generated name of the application doesn’t reflect the name of the service, it becomes difficult to determine the relationship between the service and its EAR file. In fact, it can not be discerned from either the OSB or WLS consoles.Read the full article here. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: Service bus,OSB,JMS,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress
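    If you just need to enumerate the generated applications, a short WLST (Jython) script can walk the domain configuration and list every deployment whose name starts with "_ALSB_" along with its source path under sbgen. This is only a rough sketch with placeholder connection details, not the utility from the article; correlating each EAR back to its proxy service still requires the approach Mike describes.
    # Run with: java weblogic.WLST list_alsb_apps.py (connection details are placeholders)
    connect('weblogic', 'welcome1', 't3://adminhost:7001')
    domainConfig()

    for app in cmo.getAppDeployments():
        name = app.getName()
        if name.startswith('_ALSB_'):
            # The source path points at the generated EAR under the domain's sbgen directory
            print name, '->', app.getSourcePath()

    disconnect()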

    Read the article

  • Performance considerations for common SQL queries

    - by Jim Giercyk
    Originally posted on: http://geekswithblogs.net/NibblesAndBits/archive/2013/10/16/performance-considerations-for-common-sql-queries.aspxSQL offers many different methods to produce the same results.  There is a never-ending debate between SQL developers as to the “best way” or the “most efficient way” to render a result set.  Sometimes these disputes even come to blows….well, I am a lover, not a fighter, so I decided to collect some data that will prove which way is the best and most efficient.  For the queries below, I downloaded the test database from SQLSkills:  http://www.sqlskills.com/sql-server-resources/sql-server-demos/.  There isn’t a lot of data, but enough to prove my point: dbo.member has 10,000 records, and dbo.payment has 15,554.  Our result set contains 6,706 records. The following queries produce an identical result set; the result set contains aggregate payment information for each member who has made more than 1 payment from the dbo.payment table and the first and last name of the member from the dbo.member table.   /*************/ /* Sub Query  */ /*************/ SELECT  a.[Member Number] ,         m.lastname ,         m.firstname ,         a.[Number Of Payments] ,         a.[Average Payment] ,         a.[Total Paid] FROM    ( SELECT    member_no 'Member Number' ,                     AVG(payment_amt) 'Average Payment' ,                     SUM(payment_amt) 'Total Paid' ,                     COUNT(Payment_No) 'Number Of Payments'           FROM      dbo.payment           GROUP BY  member_no           HAVING    COUNT(Payment_No) > 1         ) a         JOIN dbo.member m ON a.[Member Number] = m.member_no         /***************/ /* Cross Apply  */ /***************/ SELECT  ca.[Member Number] ,         m.lastname ,         m.firstname ,         ca.[Number Of Payments] ,         ca.[Average Payment] ,         ca.[Total Paid] FROM    dbo.member m         CROSS APPLY ( SELECT    member_no 'Member Number' ,                                 AVG(payment_amt) 'Average Payment' ,                                 SUM(payment_amt) 'Total Paid' ,                                 COUNT(Payment_No) 'Number Of Payments'                       FROM      dbo.payment                       WHERE     member_no = m.member_no                       GROUP BY  member_no                       HAVING    COUNT(Payment_No) > 1                     ) ca /********/                    /* CTEs  */ /********/ ; WITH    Payments           AS ( SELECT   member_no 'Member Number' ,                         AVG(payment_amt) 'Average Payment' ,                         SUM(payment_amt) 'Total Paid' ,                         COUNT(Payment_No) 'Number Of Payments'                FROM     dbo.payment                GROUP BY member_no                HAVING   COUNT(Payment_No) > 1              ),         MemberInfo           AS ( SELECT   p.[Member Number] ,                         m.lastname ,                         m.firstname ,                         p.[Number Of Payments] ,                         p.[Average Payment] ,                         p.[Total Paid]                FROM     dbo.member m                         JOIN Payments p ON m.member_no = p.[Member Number]              )     SELECT  *     FROM    MemberInfo /************************/ /* SELECT with Grouping   */ /************************/ SELECT  p.member_no 'Member Number' ,         m.lastname ,         m.firstname ,         COUNT(Payment_No) 'Number Of Payments' ,         AVG(payment_amt) 'Average Payment' ,         SUM(payment_amt) 'Total Paid' 
FROM    dbo.payment p         JOIN dbo.member m ON m.member_no = p.member_no GROUP BY p.member_no ,         m.lastname ,         m.firstname HAVING  COUNT(Payment_No) > 1   We can see what is going on in SQL’s brain by looking at the execution plan.  The Execution Plan will demonstrate which steps and in what order SQL executes those steps, and what percentage of batch time each query takes.  SO….if I execute all 4 of these queries in a single batch, I will get an idea of the relative time SQL takes to execute them, and how it renders the Execution Plan.  We can settle this once and for all.  Here is what SQL did with these queries:   Not only did the queries take the same amount of time to execute, SQL generated the same Execution Plan for each of them.  Everybody is right…..I guess we can all finally go to lunch together!  But wait a second, I may not be a fighter, but I AM an instigator.     Let’s see how a table variable stacks up.  Here is the code I executed: /********************/ /*  Table Variable  */ /********************/ DECLARE @AggregateTable TABLE     (       member_no INT ,       AveragePayment MONEY ,       TotalPaid MONEY ,       NumberOfPayments MONEY     ) INSERT  @AggregateTable         SELECT  member_no 'Member Number' ,                 AVG(payment_amt) 'Average Payment' ,                 SUM(payment_amt) 'Total Paid' ,                 COUNT(Payment_No) 'Number Of Payments'         FROM    dbo.payment         GROUP BY member_no         HAVING  COUNT(Payment_No) > 1   SELECT  at.member_no 'Member Number' ,         m.lastname ,         m.firstname ,         at.NumberOfPayments 'Number Of Payments' ,         at.AveragePayment 'Average Payment' ,         at.TotalPaid 'Total Paid' FROM    @AggregateTable at         JOIN dbo.member m ON m.member_no = at.member_no In the interest of keeping things in groupings of 4, I removed the last query from the previous batch and added the table variable query.  Here’s what I got:     Since we first insert into the table variable, then we read from it, the Execution Plan renders 2 steps.  BUT, the combination of the 2 steps is only 22% of the batch.  It is actually faster than the other methods even though it is treated as 2 separate queries in the Execution Plan.  The argument I often hear against Table Variables is that SQL only estimates 1 row for the table size in the Execution Plan.  While this is true, the estimate does not come in to play until you read from the table variable.  In this case, the table variable had 6,706 rows, but it still outperformed the other queries.  People argue that table variables should only be used for hash or lookup tables.  The fact is, you have control of what you put IN to the variable, so as long as you keep it within reason, these results suggest that a table variable is a viable alternative to sub-queries. If anyone does volume testing on this theory, I would be interested in the results.  My suspicion is that there is a breaking point where efficiency goes down the tubes immediately, and it would be interesting to see where the threshold is. Coding SQL is a matter of style.  If you’ve been around since they introduced DB2, you were probably taught a little differently than a recent computer science graduate.  If you have a company standard, I strongly recommend you follow it.    If you do not have a standard, generally speaking, there is no right or wrong answer when talking about the efficiency of these types of queries, and certainly no hard-and-fast rule.  
Volume and infrastructure will dictate a lot when it comes to performance, so your results may vary in your environment.  Download the database and try it!
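    For anyone who does want to run that volume test, a small timing harness is enough to compare the variants as the row counts grow. The sketch below is a hypothetical Python example using pyodbc; the connection string, the Credit database name and the query file names are placeholder assumptions, and wall-clock time is only a rough stand-in for what the execution plan reports.
    import time
    import pyodbc

    # Placeholder connection string; point it at the sample database from SQLSkills.
    CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
                "SERVER=localhost;DATABASE=Credit;Trusted_Connection=yes;")

    # Each file is assumed to hold one of the equivalent queries shown above.
    # The table-variable batch should start with SET NOCOUNT ON so its SELECT
    # is the first result set returned.
    QUERY_FILES = {
        "sub query":   "subquery.sql",
        "cross apply": "cross_apply.sql",
        "cte":         "cte.sql",
        "group by":    "group_by.sql",
        "table var":   "table_variable.sql",
    }

    def time_query(cursor, sql, runs=5):
        """Run the query several times and return the average elapsed seconds."""
        total = 0.0
        for _ in range(runs):
            start = time.perf_counter()
            cursor.execute(sql)
            cursor.fetchall()  # force the full result set back to the client
            total += time.perf_counter() - start
        return total / runs

    conn = pyodbc.connect(CONN_STR)
    cur = conn.cursor()
    for label, path in QUERY_FILES.items():
        with open(path) as f:
            print("%-12s %.4f s" % (label, time_query(cur, f.read())))
    conn.close()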

    Read the article

  • defunct dbus-daemon zombie freezes login for 30 seconds

    - by oldenburgb
    I'm running Ubuntu 12.04.2 LTS (Precise Pangolin) and noticed a 30 second delay whenever I log into my server via ssh or perform any kind of login via sudo on that machine. I can provoke immediate execution by killing the defunct dbus-daemon showing up during the delay: output of ps fax |grep dbus 19222 ? Ss 0:00 dbus-daemon --system --fork --activation=upstart 19752 ? Z 0:00 \_ [dbus-daemon] <defunct> Tapping into the bus using dbus-monitor --system I'm getting: signal sender=org.freedesktop.DBus -> dest=(null destination) serial=7 path=/org/freedesktop/DBus; interface=org.freedesktop.DBus; member=NameOwnerChanged string ":1.4" string "" string ":1.4" on each login. Stopping the dbus service eliminates this problem but probably causes many others... I'm not running xorg on the machine, but the packages are present for X11 forwarding capabilities. I've ruled out the common motd script delay and the ssh "UseDNS no" fixes one finds when looking up login delay issues. Many thanks in advance for any help with this, it's been driving me crazy ;-)

    Read the article

  • High CPU usage with Team Speak 3.0.0-rc2

    - by AlexTheBird
    The CPU usage is always around 40 percent. I use push-to-talk and I had uninstalled pulseaudio. Now I use Alsa. I don't even have to connect to a Server. By simply starting TS the cpu usage goes up 40 percent and stays there. The CPU usage of 3.0.0-rc1 [Build: 14468] is constantly 14 percent. This is the output of top, mpstat and ps aux while I am running TS3 ... of course: alexandros@alexandros-laptop:~$ top top - 18:20:07 up 2:22, 3 users, load average: 1.02, 0.85, 0.77 Tasks: 163 total, 1 running, 162 sleeping, 0 stopped, 0 zombie Cpu(s): 5.3%us, 1.9%sy, 0.1%ni, 91.8%id, 0.7%wa, 0.1%hi, 0.1%si, 0.0%st Mem: 2061344k total, 964028k used, 1097316k free, 69116k buffers Swap: 3997688k total, 0k used, 3997688k free, 449032k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 2714 alexandr 20 0 206m 31m 24m S 37 1.6 0:12.78 ts3client_linux 868 root 20 0 47564 27m 10m S 8 1.4 3:21.73 Xorg 1 root 20 0 2804 1660 1204 S 0 0.1 0:00.53 init 2 root 20 0 0 0 0 S 0 0.0 0:00.00 kthreadd 3 root RT 0 0 0 0 S 0 0.0 0:00.01 migration/0 4 root 20 0 0 0 0 S 0 0.0 0:00.45 ksoftirqd/0 5 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/0 6 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/1 7 root 20 0 0 0 0 S 0 0.0 0:00.08 ksoftirqd/1 8 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/1 9 root 20 0 0 0 0 S 0 0.0 0:01.17 events/0 10 root 20 0 0 0 0 S 0 0.0 0:00.81 events/1 11 root 20 0 0 0 0 S 0 0.0 0:00.00 cpuset 12 root 20 0 0 0 0 S 0 0.0 0:00.00 khelper 13 root 20 0 0 0 0 S 0 0.0 0:00.00 async/mgr 14 root 20 0 0 0 0 S 0 0.0 0:00.00 pm 16 root 20 0 0 0 0 S 0 0.0 0:00.00 sync_supers 17 root 20 0 0 0 0 S 0 0.0 0:00.00 bdi-default 18 root 20 0 0 0 0 S 0 0.0 0:00.00 kintegrityd/0 19 root 20 0 0 0 0 S 0 0.0 0:00.00 kintegrityd/1 20 root 20 0 0 0 0 S 0 0.0 0:00.05 kblockd/0 21 root 20 0 0 0 0 S 0 0.0 0:00.02 kblockd/1 22 root 20 0 0 0 0 S 0 0.0 0:00.00 kacpid 23 root 20 0 0 0 0 S 0 0.0 0:00.00 kacpi_notify 24 root 20 0 0 0 0 S 0 0.0 0:00.00 kacpi_hotplug 25 root 20 0 0 0 0 S 0 0.0 0:00.99 ata/0 26 root 20 0 0 0 0 S 0 0.0 0:00.92 ata/1 27 root 20 0 0 0 0 S 0 0.0 0:00.00 ata_aux 28 root 20 0 0 0 0 S 0 0.0 0:00.00 ksuspend_usbd 29 root 20 0 0 0 0 S 0 0.0 0:00.00 khubd alexandros@alexandros-laptop:~$ mpstat Linux 2.6.32-32-generic (alexandros-laptop) 16.06.2011 _i686_ (2 CPU) 18:20:15 CPU %usr %nice %sys %iowait %irq %soft %steal %guest %idle 18:20:15 all 5,36 0,09 1,91 0,68 0,07 0,06 0,00 0,00 91,83 alexandros@alexandros-laptop:~$ ps aux USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.0 2804 1660 ? Ss 15:58 0:00 /sbin/init root 2 0.0 0.0 0 0 ? S 15:58 0:00 [kthreadd] root 3 0.0 0.0 0 0 ? S 15:58 0:00 [migration/0] root 4 0.0 0.0 0 0 ? S 15:58 0:00 [ksoftirqd/0] root 5 0.0 0.0 0 0 ? S 15:58 0:00 [watchdog/0] root 6 0.0 0.0 0 0 ? S 15:58 0:00 [migration/1] root 7 0.0 0.0 0 0 ? S 15:58 0:00 [ksoftirqd/1] root 8 0.0 0.0 0 0 ? S 15:58 0:00 [watchdog/1] root 9 0.0 0.0 0 0 ? S 15:58 0:01 [events/0] root 10 0.0 0.0 0 0 ? S 15:58 0:00 [events/1] root 11 0.0 0.0 0 0 ? S 15:58 0:00 [cpuset] root 12 0.0 0.0 0 0 ? S 15:58 0:00 [khelper] root 13 0.0 0.0 0 0 ? S 15:58 0:00 [async/mgr] root 14 0.0 0.0 0 0 ? S 15:58 0:00 [pm] root 16 0.0 0.0 0 0 ? S 15:58 0:00 [sync_supers] root 17 0.0 0.0 0 0 ? S 15:58 0:00 [bdi-default] root 18 0.0 0.0 0 0 ? S 15:58 0:00 [kintegrityd/0] root 19 0.0 0.0 0 0 ? S 15:58 0:00 [kintegrityd/1] root 20 0.0 0.0 0 0 ? S 15:58 0:00 [kblockd/0] root 21 0.0 0.0 0 0 ? S 15:58 0:00 [kblockd/1] root 22 0.0 0.0 0 0 ? S 15:58 0:00 [kacpid] root 23 0.0 0.0 0 0 ? 
S 15:58 0:00 [kacpi_notify] root 24 0.0 0.0 0 0 ? S 15:58 0:00 [kacpi_hotplug] root 25 0.0 0.0 0 0 ? S 15:58 0:00 [ata/0] root 26 0.0 0.0 0 0 ? S 15:58 0:00 [ata/1] root 27 0.0 0.0 0 0 ? S 15:58 0:00 [ata_aux] root 28 0.0 0.0 0 0 ? S 15:58 0:00 [ksuspend_usbd] root 29 0.0 0.0 0 0 ? S 15:58 0:00 [khubd] root 30 0.0 0.0 0 0 ? S 15:58 0:00 [kseriod] root 31 0.0 0.0 0 0 ? S 15:58 0:00 [kmmcd] root 34 0.0 0.0 0 0 ? S 15:58 0:00 [khungtaskd] root 35 0.0 0.0 0 0 ? S 15:58 0:00 [kswapd0] root 36 0.0 0.0 0 0 ? SN 15:58 0:00 [ksmd] root 37 0.0 0.0 0 0 ? S 15:58 0:00 [aio/0] root 38 0.0 0.0 0 0 ? S 15:58 0:00 [aio/1] root 39 0.0 0.0 0 0 ? S 15:58 0:00 [ecryptfs-kthrea] root 40 0.0 0.0 0 0 ? S 15:58 0:00 [crypto/0] root 41 0.0 0.0 0 0 ? S 15:58 0:00 [crypto/1] root 48 0.0 0.0 0 0 ? S 15:58 0:03 [scsi_eh_0] root 50 0.0 0.0 0 0 ? S 15:58 0:00 [scsi_eh_1] root 53 0.0 0.0 0 0 ? S 15:58 0:00 [kstriped] root 54 0.0 0.0 0 0 ? S 15:58 0:00 [kmpathd/0] root 55 0.0 0.0 0 0 ? S 15:58 0:00 [kmpathd/1] root 56 0.0 0.0 0 0 ? S 15:58 0:00 [kmpath_handlerd] root 57 0.0 0.0 0 0 ? S 15:58 0:00 [ksnapd] root 58 0.0 0.0 0 0 ? S 15:58 0:03 [kondemand/0] root 59 0.0 0.0 0 0 ? S 15:58 0:02 [kondemand/1] root 60 0.0 0.0 0 0 ? S 15:58 0:00 [kconservative/0] root 61 0.0 0.0 0 0 ? S 15:58 0:00 [kconservative/1] root 213 0.0 0.0 0 0 ? S 15:58 0:00 [scsi_eh_2] root 222 0.0 0.0 0 0 ? S 15:58 0:00 [scsi_eh_3] root 234 0.0 0.0 0 0 ? S 15:58 0:00 [scsi_eh_4] root 235 0.0 0.0 0 0 ? S 15:58 0:01 [usb-storage] root 255 0.0 0.0 0 0 ? S 15:58 0:00 [jbd2/sda5-8] root 256 0.0 0.0 0 0 ? S 15:58 0:00 [ext4-dio-unwrit] root 257 0.0 0.0 0 0 ? S 15:58 0:00 [ext4-dio-unwrit] root 290 0.0 0.0 0 0 ? S 15:58 0:00 [flush-8:0] root 318 0.0 0.0 2316 888 ? S 15:58 0:00 upstart-udev-bridge --daemon root 321 0.0 0.0 2616 1024 ? S<s 15:58 0:00 udevd --daemon root 526 0.0 0.0 0 0 ? S 15:58 0:00 [kpsmoused] root 528 0.0 0.0 0 0 ? S 15:58 0:00 [led_workqueue] root 650 0.0 0.0 0 0 ? S 15:58 0:00 [radeon/0] root 651 0.0 0.0 0 0 ? S 15:58 0:00 [radeon/1] root 652 0.0 0.0 0 0 ? S 15:58 0:00 [ttm_swap] root 654 0.0 0.0 2612 984 ? S< 15:58 0:00 udevd --daemon root 656 0.0 0.0 0 0 ? S 15:58 0:00 [hd-audio0] root 657 0.0 0.0 2612 916 ? S< 15:58 0:00 udevd --daemon root 674 0.6 0.0 0 0 ? S 15:58 0:57 [phy0] syslog 715 0.0 0.0 34812 1776 ? Sl 15:58 0:00 rsyslogd -c4 102 731 0.0 0.0 3236 1512 ? Ss 15:58 0:02 dbus-daemon --system --fork root 740 0.0 0.1 19088 3380 ? Ssl 15:58 0:00 gdm-binary root 744 0.0 0.1 18900 4032 ? Ssl 15:58 0:01 NetworkManager avahi 749 0.0 0.0 2928 1520 ? S 15:58 0:00 avahi-daemon: running [alexandros-laptop.local] avahi 752 0.0 0.0 2928 544 ? Ss 15:58 0:00 avahi-daemon: chroot helper root 753 0.0 0.1 4172 2300 ? S 15:58 0:00 /usr/sbin/modem-manager root 762 0.0 0.1 20584 3152 ? Sl 15:58 0:00 /usr/sbin/console-kit-daemon --no-daemon root 836 0.0 0.1 20856 3864 ? Sl 15:58 0:00 /usr/lib/gdm/gdm-simple-slave --display-id /org/gnome/DisplayManager/Display1 root 856 0.0 0.1 4836 2388 ? S 15:58 0:00 /sbin/wpa_supplicant -u -s root 868 2.3 1.3 36932 27924 tty7 Rs+ 15:58 3:22 /usr/bin/X :0 -nr -verbose -auth /var/run/gdm/auth-for-gdm-a46T4j/database -nolisten root 891 0.0 0.0 1792 564 tty4 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty4 root 901 0.0 0.0 1792 564 tty5 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty5 root 908 0.0 0.0 1792 564 tty2 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty2 root 910 0.0 0.0 1792 568 tty3 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty3 root 913 0.0 0.0 1792 564 tty6 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty6 root 917 0.0 0.0 2180 1072 ? 
Ss 15:58 0:00 acpid -c /etc/acpi/events -s /var/run/acpid.socket daemon 924 0.0 0.0 2248 432 ? Ss 15:58 0:00 atd root 927 0.0 0.0 2376 900 ? Ss 15:58 0:00 cron root 950 0.0 0.0 11736 1372 ? Ss 15:58 0:00 /usr/sbin/winbindd root 958 0.0 0.0 11736 1184 ? S 15:58 0:00 /usr/sbin/winbindd root 974 0.0 0.1 6832 2580 ? Ss 15:58 0:00 /usr/sbin/cupsd -C /etc/cups/cupsd.conf root 1078 0.0 0.0 1792 564 tty1 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty1 gdm 1097 0.0 0.0 3392 772 ? S 15:58 0:00 /usr/bin/dbus-launch --exit-with-session root 1112 0.0 0.1 19216 3292 ? Sl 15:58 0:00 /usr/lib/gdm/gdm-session-worker root 1116 0.0 0.1 5540 2932 ? S 15:58 0:01 /usr/lib/upower/upowerd root 1131 0.0 0.1 6308 3824 ? S 15:58 0:00 /usr/lib/policykit-1/polkitd 108 1163 0.0 0.2 16788 4360 ? Ssl 15:58 0:01 /usr/sbin/hald root 1164 0.0 0.0 3536 1300 ? S 15:58 0:00 hald-runner root 1188 0.0 0.0 3612 1256 ? S 15:58 0:00 hald-addon-input: Listening on /dev/input/event6 /dev/input/event5 /dev/input/event2 root 1194 0.0 0.0 3612 1224 ? S 15:58 0:00 /usr/lib/hal/hald-addon-rfkill-killswitch root 1200 0.0 0.0 3608 1240 ? S 15:58 0:00 /usr/lib/hal/hald-addon-generic-backlight root 1202 0.0 0.0 3616 1236 ? S 15:58 0:02 hald-addon-storage: polling /dev/sr0 (every 2 sec) root 1204 0.0 0.0 3616 1236 ? S 15:58 0:00 hald-addon-storage: polling /dev/sdb (every 2 sec) root 1211 0.0 0.0 3624 1220 ? S 15:58 0:00 /usr/lib/hal/hald-addon-cpufreq 108 1212 0.0 0.0 3420 1200 ? S 15:58 0:00 hald-addon-acpi: listening on acpid socket /var/run/acpid.socket 1000 1222 0.0 0.1 24196 2816 ? Sl 15:58 0:00 /usr/bin/gnome-keyring-daemon --daemonize --login 1000 1240 0.0 0.3 28228 7312 ? Ssl 15:58 0:00 gnome-session 1000 1274 0.0 0.0 3284 356 ? Ss 15:58 0:00 /usr/bin/ssh-agent /usr/bin/dbus-launch --exit-with-session gnome-session 1000 1277 0.0 0.0 3392 772 ? S 15:58 0:00 /usr/bin/dbus-launch --exit-with-session gnome-session 1000 1278 0.0 0.0 3160 1652 ? Ss 15:58 0:00 /bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session 1000 1281 0.0 0.2 8172 4636 ? S 15:58 0:00 /usr/lib/libgconf2-4/gconfd-2 1000 1287 0.0 0.5 24228 10896 ? Ss 15:58 0:03 /usr/lib/gnome-settings-daemon/gnome-settings-daemon 1000 1290 0.0 0.1 6468 2364 ? S 15:58 0:00 /usr/lib/gvfs/gvfsd 1000 1293 0.0 0.6 38104 13004 ? S 15:58 0:03 metacity 1000 1296 0.0 0.1 30280 2628 ? Ssl 15:58 0:00 /usr/lib/gvfs//gvfs-fuse-daemon /home/alexandros/.gvfs 1000 1301 0.0 0.0 3344 988 ? S 15:58 0:03 syndaemon -i 0.5 -k 1000 1303 0.0 0.1 8060 3488 ? S 15:58 0:00 /usr/lib/gvfs/gvfs-gdu-volume-monitor root 1306 0.0 0.1 15692 3104 ? Sl 15:58 0:00 /usr/lib/udisks/udisks-daemon 1000 1307 0.4 1.0 50748 21684 ? S 15:58 0:34 python -u /usr/share/screenlets/DigiClock/DigiClockScreenlet.py 1000 1308 0.0 0.9 35608 18564 ? S 15:58 0:00 python /usr/share/screenlets-manager/screenlets-daemon.py 1000 1309 0.0 0.3 19524 6468 ? S 15:58 0:00 /usr/lib/policykit-1-gnome/polkit-gnome-authentication-agent-1 1000 1311 0.0 0.5 37412 11788 ? S 15:58 0:01 gnome-power-manager 1000 1312 0.0 1.0 50772 22628 ? S 15:58 0:03 gnome-panel 1000 1313 0.1 1.5 102648 31184 ? Sl 15:58 0:10 nautilus root 1314 0.0 0.0 5188 996 ? S 15:58 0:02 udisks-daemon: polling /dev/sdb /dev/sr0 1000 1315 0.0 0.6 51948 12464 ? SL 15:58 0:01 nm-applet --sm-disable 1000 1317 0.0 0.1 16956 2364 ? Sl 15:58 0:00 /usr/lib/gvfs/gvfs-afc-volume-monitor 1000 1318 0.0 0.3 20164 7792 ? S 15:58 0:00 bluetooth-applet 1000 1321 0.0 0.1 7260 2384 ? S 15:58 0:00 /usr/lib/gvfs/gvfs-gphoto2-volume-monitor 1000 1323 0.0 0.5 37436 12124 ? 
S 15:58 0:00 /usr/lib/notify-osd/notify-osd 1000 1324 0.0 1.9 197928 40456 ? Ssl 15:58 0:06 /home/alexandros/.dropbox-dist/dropbox 1000 1329 0.0 0.3 20136 7968 ? S 15:58 0:00 /usr/bin/gnome-screensaver --no-daemon 1000 1331 0.0 0.1 7056 3112 ? S 15:58 0:00 /usr/lib/gvfs/gvfsd-trash --spawner :1.6 /org/gtk/gvfs/exec_spaw/0 root 1340 0.0 0.0 2236 1008 ? S 15:58 0:00 /sbin/dhclient -d -sf /usr/lib/NetworkManager/nm-dhcp-client.action -pf /var/run/dhcl 1000 1348 0.0 0.1 42252 3680 ? Ssl 15:58 0:00 /usr/lib/bonobo-activation/bonobo-activation-server --ac-activate --ior-output-fd=19 1000 1384 0.0 1.7 80244 35480 ? Sl 15:58 0:02 /usr/bin/python /usr/lib/deskbar-applet/deskbar-applet/deskbar-applet --oaf-activate- 1000 1388 0.0 0.5 26196 11804 ? S 15:58 0:01 /usr/lib/gnome-panel/wnck-applet --oaf-activate-iid=OAFIID:GNOME_Wncklet_Factory --oa 1000 1393 0.1 0.5 25876 11548 ? S 15:58 0:08 /usr/lib/gnome-applets/multiload-applet-2 --oaf-activate-iid=OAFIID:GNOME_MultiLoadAp 1000 1394 0.0 0.5 25600 11140 ? S 15:58 0:03 /usr/lib/gnome-applets/cpufreq-applet --oaf-activate-iid=OAFIID:GNOME_CPUFreqApplet_F 1000 1415 0.0 0.5 39192 11156 ? S 15:58 0:01 /usr/lib/gnome-power-manager/gnome-inhibit-applet --oaf-activate-iid=OAFIID:GNOME_Inh 1000 1417 0.0 0.7 53544 15488 ? Sl 15:58 0:00 /usr/lib/gnome-applets/mixer_applet2 --oaf-activate-iid=OAFIID:GNOME_MixerApplet_Fact 1000 1419 0.0 0.4 23816 9068 ? S 15:58 0:00 /usr/lib/gnome-panel/notification-area-applet --oaf-activate-iid=OAFIID:GNOME_Notific 1000 1488 0.0 0.3 20964 7548 ? S 15:58 0:00 /usr/lib/gnome-disk-utility/gdu-notification-daemon 1000 1490 0.0 0.1 6608 2484 ? S 15:58 0:00 /usr/lib/gvfs/gvfsd-burn --spawner :1.6 /org/gtk/gvfs/exec_spaw/1 1000 1510 0.0 0.1 6348 2084 ? S 15:58 0:00 /usr/lib/gvfs/gvfsd-metadata 1000 1531 0.0 0.3 19472 6616 ? S 15:58 0:00 /usr/lib/gnome-user-share/gnome-user-share 1000 1535 0.0 0.4 77128 8392 ? Sl 15:58 0:00 /usr/lib/evolution/evolution-data-server-2.28 --oaf-activate-iid=OAFIID:GNOME_Evoluti 1000 1601 0.0 0.5 69576 11800 ? Sl 15:59 0:00 /usr/lib/evolution/2.28/evolution-alarm-notify 1000 1604 0.0 0.7 33924 15888 ? S 15:59 0:00 python /usr/share/system-config-printer/applet.py 1000 1701 0.0 0.5 37116 11968 ? S 15:59 0:00 update-notifier 1000 1892 4.5 7.0 406720 145312 ? Sl 17:11 3:09 /opt/google/chrome/chrome 1000 1896 0.0 0.1 69812 3680 ? S 17:11 0:02 /opt/google/chrome/chrome 1000 1898 0.0 0.6 91420 14080 ? S 17:11 0:00 /opt/google/chrome/chrome --type=zygote 1000 1916 0.2 1.3 140780 27220 ? Sl 17:11 0:12 /opt/google/chrome/chrome --type=extension --disable-client-side-phishing-detection - 1000 1918 0.7 1.8 155720 37912 ? Sl 17:11 0:31 /opt/google/chrome/chrome --type=extension --disable-client-side-phishing-detection - 1000 1921 0.0 1.0 135904 21052 ? Sl 17:11 0:02 /opt/google/chrome/chrome --type=extension --disable-client-side-phishing-detection - 1000 1927 6.5 3.6 194604 74960 ? Sl 17:11 4:32 /opt/google/chrome/chrome --type=renderer --disable-client-side-phishing-detection -- 1000 2156 0.4 0.7 48344 14896 ? Rl 18:03 0:04 gnome-terminal 1000 2157 0.0 0.0 1988 712 ? S 18:03 0:00 gnome-pty-helper 1000 2158 0.0 0.1 6504 3860 pts/0 Ss 18:03 0:00 bash 1000 2564 0.2 0.1 6624 3984 pts/1 Ss+ 18:17 0:00 bash 1000 2711 0.0 0.0 4208 1352 ? S 18:19 0:00 /bin/bash /home/alexandros/Programme/TeamSpeak3-Client-linux_x86_back/ts3client_runsc 1000 2714 36.5 1.5 210872 31960 ? 
SLl 18:19 0:18 ./ts3client_linux_x86 1000 2743 0.0 0.0 2716 1068 pts/0 R+ 18:20 0:00 ps aux Output of vmstat: alexandros@alexandros-laptop:~$ vmstat procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 0 1093324 69840 449496 0 0 27 10 476 667 6 2 91 1 Output of lsusb alexandros@alexandros-laptop:~$ lspci 00:00.0 Host bridge: Silicon Integrated Systems [SiS] 671MX 00:01.0 PCI bridge: Silicon Integrated Systems [SiS] PCI-to-PCI bridge 00:02.0 ISA bridge: Silicon Integrated Systems [SiS] SiS968 [MuTIOL Media IO] (rev 01) 00:02.5 IDE interface: Silicon Integrated Systems [SiS] 5513 [IDE] (rev 01) 00:03.0 USB Controller: Silicon Integrated Systems [SiS] USB 1.1 Controller (rev 0f) 00:03.1 USB Controller: Silicon Integrated Systems [SiS] USB 1.1 Controller (rev 0f) 00:03.3 USB Controller: Silicon Integrated Systems [SiS] USB 2.0 Controller 00:05.0 IDE interface: Silicon Integrated Systems [SiS] SATA Controller / IDE mode (rev 03) 00:06.0 PCI bridge: Silicon Integrated Systems [SiS] PCI-to-PCI bridge 00:07.0 PCI bridge: Silicon Integrated Systems [SiS] PCI-to-PCI bridge 00:0d.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10) 00:0f.0 Audio device: Silicon Integrated Systems [SiS] Azalia Audio Controller 01:00.0 VGA compatible controller: ATI Technologies Inc Mobility Radeon X2300 02:00.0 Ethernet controller: Atheros Communications Inc. AR5001 Wireless Network Adapter (rev 01) The Team Speak log file : 2011-06-19 19:04:04.223522|INFO | | | Logging started, clientlib version: 3.0.0-rc2 [Build: 14642] 2011-06-19 19:04:04.761149|ERROR |SoundBckndIntf| | /home/alexandros/Programme/TeamSpeak3-Client-linux_x86_back/soundbackends/libpulseaudio_linux_x86.so error: NOT_CONNECTED 2011-06-19 19:04:05.871770|INFO |ClientUI | | Failed to init text to speech engine 2011-06-19 19:04:05.894623|INFO |ClientUI | | TeamSpeak 3 client version: 3.0.0-rc2 [Build: 14642] 2011-06-19 19:04:05.895421|INFO |ClientUI | | Qt version: 4.7.2 2011-06-19 19:04:05.895571|INFO |ClientUI | | Using configuration location: /home/alexandros/.ts3client/ts3clientui_qt.conf 2011-06-19 19:04:06.559596|INFO |ClientUI | | Last update check was: Sa. Jun 18 00:08:43 2011 2011-06-19 19:04:06.560506|INFO | | | Checking for updates... 2011-06-19 19:04:07.357869|INFO | | | Update check, my version: 14642, latest version: 14642 2011-06-19 19:05:52.978481|INFO |PreProSpeex | 1| Speex version: 1.2rc1 2011-06-19 19:05:54.055347|INFO |UIHelpers | | setClientVolumeModifier: 10 -8 2011-06-19 19:05:54.057196|INFO |UIHelpers | | setClientVolumeModifier: 11 2 Thanks for taking the time to read my message. UPDATE: Thanks to nickguletskii's link I googled for "alsa cpu usage" (without quotes) and it brought me to a forum. A user wrote that by directly selecting the hardware with "plughw:x.x" won't impact the performance of the system. I have selected it in the TS 3 configuration and it worked. But this solution is not optimal because now no other program can access the sound output. If you need any further information or my question is unclear than please tell me.

    Read the article

  • Community Megaphone Podcast #5 with Steve Michelotti

    - by Dane Morgridge
    Show 5 is finally up with special guest Steve Michelotti.  We talked about ASP.Net MVC, how to get started in the community and more! Steve Michelotti is a Microsoft ASP.NET MVP and an Architect/Developer for Applied Information Sciences (AIS). He has consulted at Advertising.com/AOL where he was the Tech Lead for one of the highest volume .NET applications in the world. He previously was the Chief Technologist at e.magination. Steve is a frequent presenter at developer user groups and Code Camps along the East Coast and holds the MCSD, MCPD, and MCT certifications. Steve has been on Microsoft Channel9 and his published articles include Visual Studio Magazine and his blog: www.geekswithblogs.net/michelotti. Audio: http://www.communitymegaphonepodcast.com/Content/Audio/Show-5-Steve-Michelotti.mp3 Show Url: http://www.communitymegaphonepodcast.com/Show/5/Steve-Michelotti Rss: http://feed.communitymegaphonepodcast.com/cm-podcast

    Read the article

  • How Do I Implement parameterMaps for ADF Regions and Dynamic Regions?

    - by david.giammona
    parameterMap objects defined by managed beans can help reduce the number of child <parameter> elements listed under an ADF region or dynamic region page definition task flow binding. But more importantly, the parameterMap approach also allows greater flexibility in determining what input parameters are passed to an ADF region or dynamic region. This can be especially helpful when using dynamic regions, where each task flow utilized can provide an entirely different set of input parameters. The parameterMap is specified within an ADF region or dynamic region page definition task flow binding as shown below:
    <taskFlow id="checkoutflow1" taskFlowId="/WEB-INF/checkout-flow.xml#checkout-flow" activation="deferred" xmlns="http://xmlns.oracle.com/adf/controller/binding" parametersMap="#{pageFlowScope.userInfoBean.parameterMap}"/>
    The parameter map object must implement the java.util.Map interface. The keys it specifies match the names of input parameters defined by the task flows utilized within the task flow binding. An example parameterMap object class is shown below:
    import java.util.HashMap;
    import java.util.Map;

    public class UserInfoBean {
        private Map<String, Object> parameterMap = new HashMap<String, Object>();

        public Map getParameterMap() {
            // getSecurity() is a helper on this bean that is not shown in this excerpt
            parameterMap.put("isLoggedIn", getSecurity().isAuthenticated());
            parameterMap.put("principalName", getSecurity().getPrincipalName());
            return parameterMap;
        }
    }

    Read the article

  • Davicom Semiconductor, Inc. 21x4x DEC-Tulip not detected by Wireshark but IP operational

    - by deepsix86
    Recently flipped to Ubuntu 11.10 on a Dell 4300 (Intel). Getting IP address and no issues (ping/surf) but Wireshark unable to detect eth0 interface. I see references in forums to blacklist tulip but looks like I am running dmfe. Not sure if the blacklist is required and where to go from here. Maybe Driver update? Got a little lost looking in that area. Some h/w details below (IP/MAC/HOSTNAME removed) Linux xxxxxx 3.0.0-17-generic #30-Ubuntu SMP Thu Mar 8 17:34:21 UTC 2012 i686 i686 i386 GNU/Linux network-admin (HOSTS TAB) does not list eth0, only loopback and bunch of IPv6 interfaces ifconfig eth0 Link encap:Ethernet HWaddr xxxxxxxx inet addr:192.168.x.xx Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: xxxxxxxxxxx 64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:36662 errors:0 dropped:1 overruns:0 frame:0 TX packets:24975 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:42115779 (42.1 MB) TX bytes:3056435 (3.0 MB) Interrupt:18 Base address:0xe800 lspci 02:09.0 Ethernet controller: Davicom Semiconductor, Inc. 21x4x DEC-Tulip compatible 10/100 Ethernet (rev 31) Subsystem: Device 4554:434e Flags: bus master, medium devsel, latency 64, IRQ 18 I/O ports at e800 [size=256] Memory at fe1ffc00 (32-bit, non-prefetchable) [size=256] Expansion ROM at fe200000 [disabled] [size=256K] Capabilities: [50] Power Management version 2 Kernel driver in use: dmfe Kernel modules: dmfe hwinfo --netcard 20: PCI 209.0: 0200 Ethernet controller [Created at pci.318] Unique ID: rBUF.0NgK5ZS9c0D Parent ID: 6NW+.siohrLUzzI4 SysFS ID: /devices/pci0000:00/0000:00:1e.0/0000:02:09.0 SysFS BusID: 0000:02:09.0 Hardware Class: network Model: "Davicom 21x4x DEC-Tulip compatible 10/100 Ethernet" Vendor: pci 0x1282 "Davicom Semiconductor, Inc." Device: pci 0x9102 "21x4x DEC-Tulip compatible 10/100 Ethernet" SubVendor: pci 0x4554 SubDevice: pci 0x434e Revision: 0x31 Driver: "dmfe" Driver Modules: "dmfe" Device File: eth0 I/O Ports: 0xe800-0xe8ff (rw) Memory Range: 0xfe1ffc00-0xfe1ffcff (rw,non-prefetchable) Memory Range: 0xfe200000-0xfe23ffff (ro,non-prefetchable,disabled) IRQ: 18 (61379 events) HW Address: 00:08:a1:01:35:70 Link detected: yes Module Alias: "pci:v00001282d00009102sv00004554sd0000434Ebc02sc00i00" Driver Info #0: Driver Status: dmfe is active Driver Activation Cmd: "modprobe dmfe" Config Status: cfg=new, avail=yes, need=no, active=unknown Attached to: #11 (PCI bridge)

    Read the article

< Previous Page | 63 64 65 66 67 68 69 70 71 72 73 74  | Next Page >