Search Results

Search found 18534 results on 742 pages for 'dave long'.


  • Migrate Thunderbird 3 Saved Searches Between Accounts

    - by UltraNurd
    Long story short, the sysadmins have moved me to a new mailserver. In the process, they needed to create a separate account in Thunderbird and disable my old account. They took care of all of the mail migration. However, my saved search folders didn't go along for the ride. I have over 20 complex searches that I'd rather not re-enter by hand. You can't drag saved searches between accounts like other folders. I tried closing Thunderbird, doing a find/replace in virtualFolders.dat in my Thunderbird profile folder, saving that file, and reopening Thunderbird, but that didn't appear to do anything. I'm assuming the search folders are also saved in one of the SQLite databases... does anyone know where to look?
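
    For anyone attempting the same find/replace, a saved-search entry in virtualFolders.dat looks roughly like the following (recalled from memory, so treat the exact field names as assumptions; the scope= line is the one that still carries the old account's folder URIs, which is what a find/replace would need to rewrite):

        version=1
        uri=mailbox://nobody@Local%20Folders/My%20Search
        scope=imap://user@oldserver/INBOX
        terms=AND (subject,contains,foo)
        searchOnline=N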

    Read the article

  • Team Foundation Service Preview now open for all!

    - by Tarun Arora
    The concept of TFS in the cloud was first presented back in early 2010. The product team worked hard to preview a constantly evolving solution at the BUILD conference last year, and after completing 31 sprints, the preview service has today been opened to all. No more invitation codes required: TfsPreview has been made public!

    “Since we announced the Team Foundation Service Preview at the BUILD conference last year, we’ve limited the on boarding of new customers by requiring invitation codes to create accounts. The main reason for this has been to control the growth of the service to make sure it didn’t run away from us and end up with a bad user experience. In this time period, we’ve continued to work on our infrastructure, performance, scale, monitoring, management and, of course, some cool new features like cloud build.” - Brian Harry

    Since the service is still in preview, it is free for all… if you haven’t tried it yet, now is the best time. There is no fixed timeline for when the service becomes chargeable, but the terms of service support production use, the service is reliable, and the product team is committed to carrying all of your data forward into production.

    “The service will remain in ‘preview’ for a while longer while we work through additional features like data portability, commercial terms, etc, but the terms of service support production use, the service is reliable and we expect to carry all of your data forward into production.” - Brian Harry

    As of today it is possible to use TFS Preview with VS 2012 RC, VS 2010 SP1 and VS 2008 SP1; the service currently does not work with VS 2005, which is something the product team is actively working on. You can refer to Brian’s announcement blog post here: http://blogs.msdn.com/b/bharry/archive/2012/06/11/team-foundation-service-preview-is-public.aspx

    Read the article

  • SharePoint 2010 in the cloud a.k.a. SharePoint Online

    - by Sahil Malik
    There are three ways to run SharePoint:

    - On premises: you buy the servers, and you run the servers.
    - Hosted servers: you don't run the servers, but you let a hosting company run dedicated servers for you.
    - Multi-tenant, a.k.a. SaaS (Software as a Service): SharePoint Online, which is what I am talking about in this blog post.

    The advantages of a cloud solution are undeniable:

    - Availability (SharePoint Online offers a 99.9% uptime SLA) and reliability.
    - Cost, due to economies of scale and no need to hire specialized dedicated staff.
    - Scalability and security.
    - Flexibility: grow or shrink as you need to.

    If you are seriously considering SharePoint 2010 in the cloud, there are some things you need to know about SharePoint Online.

    What will work: out-of-the-box customization and collaboration features; SharePoint Designer 2010 is supported, so no-code workflows will work; Visual Studio sandbox solutions and the client object model will work.

    What won't work: the SharePoint 2010 Online cloud environment supports only sandbox solutions. BCS (Business Connectivity Services) is not supported in SharePoint Online; what you can do, however, is host your services in Azure and call them using Silverlight. Custom timer jobs will not work.

    Long story short, get used to sandbox solutions - and the new way of programming. Sandbox solutions are pretty damn good. Most of the complaints I have heard about sandbox solutions being too restrictive are uninformed mechanisms of doing things mired in the ways of 2002... or you could just live in 2002 too.

    Read the article

  • Do 10m external SAS cables work?

    - by Joachim Sauer
    According to the Wikipedia page, external SAS cables are specified for lengths up to 10 m. However, I found it pretty hard to find places that actually sell cables of that length. This made me wonder: are there any known problems with using cables this long? Will they be more fragile? Slower? And if 10 m is not suggested, would 6 m be any more stable? A little background: for several reasons we'd like to put a tape library physically separate from our main server, and 10 m would be enough to put it on a separate floor.

    Read the article

  • Clicking hyperlinks in Email messages becomes painfully slow

    - by Joel Spolsky
    Running Windows 7 (RC, 64-bit). Suddenly, today, after months without a problem, clicking on links has become extremely slow. I've noticed this in two places: (1) clicking hyperlinks in Outlook email messages, which launches Firefox, takes around a minute, while launching Firefox by itself is instantaneous - I have an SSD drive and a very fast CPU; (2) opening Word documents attached to Outlook email messages also takes a surprisingly long time. The only thing these two might have in common is that they use the DDE mechanism, if I'm not mistaken, to send a DDE open command to the application. Under Windows XP this problem could sometimes be fixed by unchecking the "Use DDE" checkbox in the file type mapping; however, I can't find any equivalent under Windows 7. See here for someone else having what I believe is the same problem. See here for more evidence that it's DDE being super-super-slow.
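
    If you want to poke at it directly, the DDE hookup lives under the file type's shell verb in the registry; a hedged sketch (the ProgID FirefoxURL is an assumption - it varies by browser and file type, so check HKCR for the actual name first):

        rem Inspect the DDE verb attached to Firefox's URL handler
        reg query "HKCR\FirefoxURL\shell\open\ddeexec"

        rem Back it up, then remove it so the shell falls back to a plain command launch
        reg export "HKCR\FirefoxURL\shell\open\ddeexec" ddeexec-backup.reg
        reg delete "HKCR\FirefoxURL\shell\open\ddeexec" /f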

    Read the article

  • What should happen at the start of a software project?

    - by Willem
    A quick introduction: my college semesters include an 8-week project working for an actual company with a software need, in order to get some much-needed practical experience. I have just started such a project with 5 other students. We're required to spend roughly 40 hours a week per student on this project. We're working with Scrum as the software development method; this was assigned by our teachers. The question: day one of the project just ended, which has created some questions for me as to how to start a project in the 'real world'. Our first day included working on a project planning document (not sure what the English term is), creating an appointment with the company for an introduction and the opportunity to start specifying the requirements, and setting up some standards for behavior within the group. However, these items didn't take that long to finish. We've made some concrete plans for tomorrow, and the day after we'll meet the company. This still leaves several hours of 'work time' unspent. Is it usual not to be able to fill every hour of a day with work at the start of a project, or are we simply too inexperienced to see what work needs to be done at this stage of a project, or are we, perhaps, going through the above list too fast? How does this work in the 'real world'? Do you spend your time wondering 'what should I do now', or do you have a clear view of what you're supposed to do at that moment?

    Read the article

  • Windows replacement for HAProxy

    - by GrayWizardx
    It does not appear that there is a similar question already posted, so I will go ahead and ask. I am working on a project that could benefit from having two to four servers handling incoming requests to a backend webservice. The service does not require SSL but does need to support occasional long-running processes (up to 120 seconds). This project does not at present have the funding to purchase a hardware load-balancing solution. I have previously used HAProxy as a solution for this, and found it very simple and straightforward. Is there a similar product for Windows (Server 2003 or 2008) which provides similar configuration options and runs as a lightweight service? For reasons outside my control I cannot set up a Linux machine (physical or virtual), so I am looking for something that can be deployed on a Windows machine. I can only find Perlbal, which appears to fall into this category. So as not to keep this open indefinitely, I will give credit to the only answer.

    Read the article

  • Editing the Microsoft Security Essentials context-menu

    - by GPX
    As all MSE users will know, the context-menu item that it adds to Explorer is really long: one whole sentence, "Scan with Microsoft Security Essentials...". Is there a way to edit this and shorten it? I figured out that the file shellext.dll is responsible for registering the context menu. I used ResEdit to edit the DLL and changed the string table entry from "Scan with ($BrandName)" to "Scan with MSE". But it still won't change. I've also tried de-registering the DLL and then registering it again. No luck! Any ideas? Or am I doing something wrong?

    Read the article

  • http to https upgrade -- SEO troubles

    - by SLIM
    I upgraded my site so that all pages have gone from http to https. I didn't consider that Google treats https pages differently than http. I re-created my sitemap so that all links now reflect the new https URLs and let it be for a few days. (Whoops!) Google is now re-indexing all the https pages. I have about 19k pages on the site, and Google has already indexed about 8k of the new https ones. The problem is that Google sees all of these as brand-new pages, when many of them have a long http history. Of course most of you will recognize the problem: I didn't set up a 301 from the old http to the new https. Is it too late to do this? Should I switch my sitemap back to http and then 301 to the new https? Or should I leave the sitemap as is and set up 301 redirects anyway? I'm not even sure if Google is trying to reach the http site anymore. Currently the site is doing 303 redirects (from http to https), although I haven't figured out why yet. Thanks for any suggestions you can offer.
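
    For anyone in the same spot, the missing redirect is a one-liner at the web server; a minimal sketch assuming nginx fronts the site (the Apache mod_rewrite equivalent is analogous):

        # Permanently redirect every http URL to its https twin,
        # preserving the path and query string
        server {
            listen 80;
            server_name example.com;
            return 301 https://example.com$request_uri;
        }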

    Read the article

  • Wireless router not connecting -> AP NOT-ASSOCIATED

    - by candido
    I can no longer connect to the internet through my wireless router, after some months of it working under Ubuntu 10.04. I can connect with the same laptop under Windows. My OS is Ubuntu 10.04, Linux 2.6.32-41, SMP i686. The wireless network controller is an Atheros AR9285 chipset (PCI Express), kernel module ath9k. I have tried a command-line connection:

        $ sudo /etc/init.d/network-manager stop   # stop the GUI network manager
        $ iwconfig wlan0 essid WLAN_3C key s:C001D20550B3C
        $ ifconfig
        Access Point: NOT-ASSOCIATED
        $ dmesg
        ... AP 00:1a:2b:08:60:49 associated

    If the OS associated with the router during boot (the "associated" message in dmesg), why is connecting to the router after boot and login not possible, either through network-manager or from the command line (NOT-ASSOCIATED message)? Thanks in advance.

    Read the article

  • De-index URL parameters

    - by Doug Firr
    This question is lengthy, so allow me to provide a one-sentence summary: I need to get Google to de-index URLs that have certain parameters appended. I have a website, example.com, with language translations. There used to be many translations, but I deleted them all so that only the English (default) and French options remain. When one selects a language option, a parameter is added to the URL. For example, the home page:

        https://example.com (default)
        https://example.com/main?l=fr_FR (French)

    I added a robots.txt to stop Google from crawling any of the language translations:

        # robots.txt generated at http://www.mcanerin.com
        User-agent: *
        Disallow:
        Disallow: /cgi-bin/
        Disallow: /*?l=

    So any pages containing "?l=" should not be crawled. I checked in GWT using the robots testing tool, and it works. But under HTML improvements, the previously crawled language-translation URLs remain indexed. The internet says to return a 404 for the removed URLs so Google knows to de-index them. I checked to see what my CMS would throw up if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs):

        https://example.com/reports/view/884?l=vi_VN&l=hy_AM

    This URL should not exist - I removed the language translations. The page loads when it should not! I played around: I typed example.com?whatever123, and it seems that parameters always load as long as everything before the question mark is a real URL. So if Google has indexed all these URLs with parameters, how do I remove them? I cannot check whether a 404 is being generated, because the page always loads - it's a parameter that needs to be de-indexed.
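
    Since the CMS happily serves any ?l= value, one way to make the removed translations really return 404 is a small filter in front of it; a sketch in TypeScript on Node's http module (the parameter name l and the locale fr_FR come from the question; the default locale code, port, and everything else are assumptions):

        import { createServer } from "http";

        const ALLOWED = new Set(["en_US", "fr_FR"]); // assumption: the two surviving locales

        createServer((req, res) => {
          const url = new URL(req.url ?? "/", "https://example.com");
          const langs = url.searchParams.getAll("l");
          if (langs.some((l) => !ALLOWED.has(l))) {
            // A removed translation: answer 404 so crawlers de-index the URL
            res.writeHead(404, { "Content-Type": "text/plain" });
            res.end("Not Found");
            return;
          }
          // ...otherwise hand the request to the CMS (omitted here)
          res.writeHead(200).end("OK");
        }).listen(8080); // port is an assumption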

    Read the article

  • Entity Framework and distributed systems

    - by Dirk Beckmann
    I need some help, or maybe only a hint in the right direction. I've got a system that is separated into two applications: an existing VB.NET desktop client using Entity Framework 5 with the code-first approach, and an ASP.NET Web API client in C# that is being refactored right now. It should be possible to deliver OData. The system and the data model are still evolving, so migrations will happen at undefined intervals. I'm now struggling with how to manage my database access on the Web API system. My favoured approach would be to use Entity Framework on both systems, but I'm running into trouble when creating new migrations. Two solutions I've thought about:

    Shared data access DLL: my first idea was to separate the data access layer into its own project and reference it from each of the systems. The context would be the same as long as the DLL is up to date in each system. This way both solutions would be able to make a migration. The main problem is that it is much more complicated to update the Web API system than the client (a ClickOnce update solution), and not every migration is important for the Web API. This would cause more update trouble and out-of-sync libraries.

    Database first on the Web API: the second idea was just to use the database-first approach on the Web API side. But it seems that all annotations would be lost with each model update.

    Other solutions based on stored procedures have been discarded because of missing OData support and maintainability. Has anyone run into the same conflict, or does anyone have advice on how such a problem can be solved?

    Read the article

  • Oracle VM 3.1.1 build 365 released

    - by wcoekaer
    A few days ago we released a patch update for Oracle VM 3.1.1 (build 365). Oracle VM Manager 3.1.1 build 365 is now available from My Oracle Support, patch ID 14227416. Oracle VM Server 3.1.1 errata updates are, as usual, released on ULN in the ovm3_3.1.1_x86_64_patch channel. Just a reminder: when we publish errata for Oracle VM, the notifications are sent through the oraclevm-errata mailing list. You can sign up here. Some of the bugfixes in 3.1.1:

    14054162 - Removes unnecessary locks when creating VNICs in a multi-threaded operation.
    14111234 - Fixes the issue when discovering a virtual machine that has disks in an un-discovered repository or has un-discovered physical disks.
    14054133 - Fixes a bug of object not found where vdisks are left stale in certain multi-threaded operations.
    14176607 - Fixes the issue where Oracle VM Manager would hang after a restart due to various tasks running jobs in the global context.
    14136410 - Fixes the stale lock issue on multithreaded servers where an object-not-found error happens in some rare situations.
    14186058 - Fixes the issue where Oracle VM Manager fails to discover the server or start the server after the server hardware configuration (i.e. BIOS) was modified.
    14198734 - Fixes the issue where HTTP cannot be disabled.
    14065401 - Fixes the Oracle VM Manager UI time-out issue where the default value was not long enough for storage repository creation.
    14163755 - Fixes the issue where, when migrating a virtual machine, the list of target servers (and "other servers") was not ordered by name.
    14163762 - Fixes the size of the "Edit VLAN Group" window to display all information correctly.
    14197783 - Fixes the issue that the navigation tree (servers) was not ordered by name.

    I strongly suggest everyone use this latest build and also update the server to the latest version. Have at it.

    Read the article

  • For a JavaScript library, what is the best or standard way to support extensibility

    - by Michael Best
    Specifically, I want to support "plugins" that modify the behavior of parts of the library. I couldn't find much information on the web about this subject, but here are my ideas for how a library could be extensible.

    1. The library exports an object with both public and "protected" functions. A plugin can replace any of those functions, thus modifying the library's behavior. Advantages of this method are that it's simple and that the plugin's functions have full access to the library's "protected" functions. Disadvantages are that the library may be harder to maintain with a larger set of exposed functions, and it could be hard to debug if multiple plugins are involved (how do you know which plugin modified which function?).

    2. The library provides an "add plugin" function that accepts an object with a specific interface. Internally, the library uses the plugin instead of its own code where appropriate. With this method, the internals of the library can be rearranged more freely as long as it still supports the same plugin interface. This could also support having different plugin interfaces to modify different parts of the library. A disadvantage of this method is that the plugins may have to re-implement code that is already part of the library, since the library's internal functions are not exported. (A sketch of this method follows below.)

    3. The library provides a "set implementation" function that accepts an object inherited from a specific base object. The library's public API calls functions on the implementation object for any functionality that can be modified, and the base implementation object includes the core functionality, with both external (to the API) and internal functions. A plugin creates a new implementation object, which inherits from the base object and replaces any functions it wants to modify. This combines advantages and disadvantages of both the other methods.
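
    As a concrete illustration of the second method, here is a minimal sketch in TypeScript (all names are invented): the library owns a small plugin interface and consults registered plugins before falling back to its own code.

        // The contract a plugin must satisfy; the library can evolve
        // internally as long as this interface stays stable.
        interface FormatterPlugin {
          canHandle(value: unknown): boolean;
          format(value: unknown): string;
        }

        const plugins: FormatterPlugin[] = [];

        export function addPlugin(p: FormatterPlugin): void {
          plugins.push(p);
        }

        export function format(value: unknown): string {
          // First plugin that claims the value wins; otherwise default behavior.
          const p = plugins.find((pl) => pl.canHandle(value));
          return p ? p.format(value) : String(value);
        }

        // Usage: a plugin that pretty-prints dates
        addPlugin({
          canHandle: (v) => v instanceof Date,
          format: (v) => (v as Date).toISOString(),
        });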

    Read the article

  • Generic Repositories with DI & Data Intensive Controllers

    - by James
    Usually, I consider a large number of parameters an alarm bell that there may be a design problem somewhere. I am using a generic repository for an ASP.NET application and have a controller with a growing number of parameters.

        public class GenericRepository<T> : IRepository<T> where T : class
        {
            protected DbContext Context { get; set; }
            protected DbSet<T> DbSet { get; set; }

            public GenericRepository(DbContext context)
            {
                Context = context;
                DbSet = context.Set<T>();
            }

            // ...methods excluded to keep the question readable
        }

    I am using a DI container to pass the DbContext into the generic repository. So far this has met my needs, and there are no other concrete implementations of IRepository<T>. However, I had to create a dashboard which uses data from many entities, and there was also a form containing a couple of dropdown lists. With the generic repository, this makes the parameter requirements grow quickly. The controller ends up being something like:

        public HomeController(IRepository<EntityOne> entityOneRepository,
                              IRepository<EntityTwo> entityTwoRepository,
                              IRepository<EntityThree> entityThreeRepository,
                              IRepository<EntityFour> entityFourRepository,
                              ILogError logError,
                              ICurrentUser currentUser)
        {
        }

    It has about six IRepositories, plus a few others, to include the required data and the dropdown-list options. In my mind this is too many parameters. From a performance point of view, there is only one DbContext per request, and the DI container will serve the same DbContext to all of the repositories. From a code standards/readability point of view it's ugly. Is there a better way to handle this situation? It's a real-world project with real-world time constraints, so I will not dwell on it too long, but from a learning perspective it would be good to see how such situations are handled by others.
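
    One common way out is to inject a single unit-of-work/factory instead of N repositories; a sketch of the shape in TypeScript (names invented - the same idea ports straight back to the C# above, with the DI container providing the context and factory):

        interface IRepository<T> {
          getAll(): T[];
        }

        // Hands out repositories on demand; every repository shares the
        // one factory/context, so per-request behavior is unchanged.
        class UnitOfWork {
          private cache = new Map<Function, unknown>();
          constructor(private makeRepo: <T>(entity: new () => T) => IRepository<T>) {}

          repo<T>(entity: new () => T): IRepository<T> {
            if (!this.cache.has(entity)) this.cache.set(entity, this.makeRepo(entity));
            return this.cache.get(entity) as IRepository<T>;
          }
        }

        // The controller now takes one dependency instead of six:
        // constructor(uow: UnitOfWork) { ... uow.repo(EntityOne).getAll() ... }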

    Read the article

  • Host wildcard subdomains using postfix.

    - by Jack M.
    I'm trying to work out how I can get postfix to accept email for any sub-domain of my main site. I don't have virtual domains, just a long list of sub-domains for local delivery. Specifically, I'm feeding python@*.mydomain.com into a Python script using the alias file:

        python: |/www/proc_email.py

    The Python script can handle delivery from there. I envision this looking something along the lines of:

        mydestination = encendio, localhost.localdomain, localhost, *.mydomain.com

    I'm running the latest version of postfix on Ubuntu (not rightly sure how to check the version). Thanks in advance.
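
    For what it's worth, mydestination does not expand wildcards; the usual workaround is a regexp virtual alias map. A hedged sketch (file names are assumptions, and the details are from memory - verify against the postfix docs for your version):

        # /etc/postfix/main.cf
        virtual_alias_domains = regexp:/etc/postfix/virtual_domains
        virtual_alias_maps = regexp:/etc/postfix/virtual_aliases

        # /etc/postfix/virtual_domains -- claim every subdomain (value is ignored)
        /^.+\.mydomain\.com$/   OK

        # /etc/postfix/virtual_aliases -- fold python@<anything>.mydomain.com into the local alias
        /^python@.+\.mydomain\.com$/   python

    Regexp tables are read directly (no postmap needed), so a postfix reload should pick this up; the local "python" alias then pipes into the script as before.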

    Read the article

  • XenServer migrate machines between hosts

    - by Hubert Kario
    I have a XenServer 5.6 Free setup with 5 VMs (Windows and Linux) using about 1.5 TB of directly attached storage. Because our virtualisation needs have grown a bit, we are currently preparing a faster XenServer 6.0 Free machine with more RAM and more storage; again, directly attached disks. How can I migrate the VMs between XenServer machines? I don't need to keep the machines up and running during migration, but using VM export and import would definitely take too long. Would making a VM with the same configuration on the new host and dd'ing the LVM volume over the network be the only quick and least painful solution? Are there any "gotchas" I should look out for when doing something like this? The old machine has an AMD Phenom II, the new one Intel Xeon E5 CPUs.
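
    The dd-over-the-network route is straightforward; a sketch assuming the target logical volume already exists and the VM is shut down (volume group and LV names are invented):

        # On the old host: stream the raw volume to the new host over ssh
        dd if=/dev/VG_Old/vm-disk bs=4M | ssh root@newhost "dd of=/dev/VG_New/vm-disk bs=4M"

    The usual gotchas: make sure the destination volume is not even slightly smaller than the source, and re-check the VM's virtual hardware (network MACs, boot order) after attaching the copied disk on the new host.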

    Read the article

  • Best practices for App Idea ownership and shares

    - by JOG
    I am developing apps in my spare time. I am the sole developer, and two non-programmer friends of mine provide vision, content, algorithms and ideas. We always agree happily on all the features, todos and prioritizations, but naturally, coding is the biggest part. When selling, we agree on splitting the profit equally, that is 33% each. But version 1.0 naturally does not sell much, and I go on trying to make the app more viral. This includes tons of work where the others are of little help. Examples: adding support for sharing, Facebook Connect, gamifying, letting users add content, the home page, support, maintenance, server services to make it easier to update content. The list is long. Suddenly I am doing 100% of a lot of work but only "own" a third of the income. My friends may either "fade out" of the project after 1.0, or continue to contribute, but with less value, and I would rather exchange them for more programmers or graphic designers. The effort they put into version 1.0 is worth a lot to the app, and I realize I would never have done it without them. But I am doing all the work in the end. It is hard to negotiate splitting 90/5/5 instead of 33% each, because the idea is still theirs. How do I solve this? What are the best practices regarding ownership of the app? What kind of agreements could I make that keep it beneficial and motivational for me to continue developing the app?

    Read the article

  • Is there such a thing as a super programmer? [closed]

    - by Muhammad Alkarouri
    Have you come across a super programmer? What identifies him or her as such, compared to "normal" experienced/great programmers? Also, how do you deal with a person on your team who believes he is a super programmer - both when he actually is one and when he isn't? Edit: Interesting inputs all round, thanks. A few things can be gleaned. A few definitions emerged; disregarding overly localised definitions (that identified the authors or their acquaintances as super programmers), I liked a couple of them:

    Thorbjørn's definition: a person who does the equivalent of a good team's work, consistently, for a long time.

    Free Electron, linked from Henry's answer: a very productive person of exceptional abilities. The explanation is a good read. "A Free Electron can do anything when it comes to code. They can write a complete application from scratch, learn a language in a weekend, and, most importantly, they can dive into a tremendous pile of spaghetti code, make sense of it, and actually get it working. You can build an entire business around a Free Electron. They're that good."

    Contrasting with the last definition is the point linked to by James about the myth of the genius programmer (video). The same idea is expressed as egoless programming in rwong's comment. These present opposite opinions as to whether to optimise for such a unique programmer or for a team. The definitions are clearly different, so I would appreciate your input as to which is better - or add your own if you want, though it would help to say how it differs from those above.

    Read the article

  • Starting with a text-based MUD/MUCK game

    - by Scott Ivie
    I've had this idea for a video game in my head for a long time, but I've never had the knowledge or time to get it done. I still don't really, but I am willing to dedicate a chunk of my time to this before it's too late. Recently I started studying Lua scripting for a program called "MUSH Client", which works with MU* telnet-style text games. I want to use the GUI capabilities of MUSH Client with a MU* server to create a basic game, but here is my dilemma: I figured this could be a suitable starting place for me, BUT... because I'm not very programmer-savvy yet, I don't know how to download/install/use the MU* server software. I was originally considering ProtoMUCK, because a few of the MU*s that impressed me most began there: http://www.protomuck.org/ I downloaded it, but I guess I'm too used to GUI-style programs, so I'm having great difficulty figuring out what to do next. Does anyone have any suggestions? Does anyone even know what I'm talking about? heh..

    Read the article

  • Changing website URL - Am I making an SEO mistake

    - by Denis
    I have a website on a .com domain that is a year old. The business is a shop based in Ireland, and I have purchased a .ie domain. I plan to move the website over to the new domain. Is this good or bad for SEO? Old URL: SmythsOfTerenure.com | New URL: SmythsComputerRepair.ie (I am using fake names and a fictional business in the example URLs.) The new domain has my main keyword in it; the old domain has my family name and business location (city district). It currently ranks high for lots of relevant keywords in Google, with low traffic and low competition. Current website traffic is about 80 sessions per week; 80% of that traffic is organic from Google. I am changing domains in an attempt to help SEO long-term, by having a ccTLD (.ie rather than .com) and my main keyword in the domain. I plan to do 301 redirects from old to new and update GW Tools and G Analytics, but am I making a mistake changing it at all, as I know rankings may fall in the short term? Homepage PR=0 and very few inbound links. Should I just leave it on the old domain? Or after a few months should I be back up, ranking as well as I am now?

    Read the article

  • Is there ever a reason to do all an object's work in a constructor?

    - by Kane
    Let me preface this by saying this is not my code nor my coworkers' code. Years ago, when our company was smaller, we had some projects we needed done that we did not have the capacity for, so they were outsourced. Now, I have nothing against outsourcing or contractors in general, but the codebase they produced is a mass of WTFs. That being said, it does (mostly) work, so I suppose it's in the top 10% of outsourced projects I've seen. As our company has grown, we've tried to take more of our development in house. This particular project landed in my lap, so I've been going over it, cleaning it up, adding tests, etc. There's one pattern I see repeated a lot, and it seems so mind-blowingly awful that I wondered if maybe there is a reason and I just don't see it. The pattern is an object with no public methods or members, just a public constructor that does all the work of the object. For example (the code is in Java, if that matters, but I hope this to be a more general question):

        public class Foo {
            private int bar;
            private String baz;

            public Foo(File f) {
                execute(f);
            }

            private void execute(File f) {
                // FTP the file to some hardcoded location,
                // or parse the file and commit to the database, or whatever
            }
        }

    If you're wondering, this type of code is often called in the following manner:

        for (File f : someListOfFiles) {
            new Foo(f);
        }

    Now, I was taught long ago that instantiating objects in a loop is generally a bad idea, and that constructors should do a minimum of work. Looking at this code, it seems it would be better to drop the constructor and make execute a public static method. I did ask the contractor why it was done this way, and the response I got was "We can change it if you want", which was not really helpful. Anyway, is there ever a reason to do something like this, in any programming language, or is this just another submission to the Daily WTF?
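
    For contrast, the refactor suggested above is tiny; a sketch in TypeScript, since the question says any language (the original is Java, but the shape is identical):

        const someListOfFiles = ["a.csv", "b.csv"]; // stand-in for the real list

        class Foo {
          // The work becomes an explicit, named operation instead of a
          // constructor side effect, and no throwaway instance is created.
          static execute(f: string): void {
            // FTP the file / parse and commit / whatever the original did
          }
        }

        for (const f of someListOfFiles) {
          Foo.execute(f);
        }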

    Read the article

  • Understanding how memory contents map into a struct

    - by user95592
    I am not able to understand how bytes in memory are being mapped into a struct. My machine is a little-endian x86_64. The code was compiled with gcc 4.7.0 from the Win64 mingw32-64 distribution for Win64. These are the contents of the relevant memory fragment:

        ...450002cf9fe5000040115a9fc0a8fe...

    And this is the struct definition:

        typedef struct ip4 {
            unsigned int ihl     :4;
            unsigned int version :4;
            uint8_t  tos;
            uint16_t tot_len;
            uint16_t id;
            uint16_t frag_off;  // flags=3 bits, offset=13 bits
            uint8_t  ttl;
            uint8_t  protocol;
            uint16_t check;
            uint32_t saddr;
            uint32_t daddr;
            /* The options start here. */
        } ip4_t;

    When a pointer to such a structure (call it *ip4) is initialized to the starting address of the memory region pasted above, this is what the debugger shows for the struct's fields:

        ip4:          address=0x8da36ce
        ip4->ihl:     address=0x8da36ce, value=0x5
        ip4->version: address=0x8da36ce, value=0x4
        ip4->tos:     address=0x8da36d2, value=0x9f
        ip4->tot_len: address=0x8da36d4, value=0x0
        ...

    I see how ihl and version are mapped: 4 bytes for a long integer, little-endian. But I don't understand how tos and tot_len are mapped - which bytes in memory correspond to each one of them? Thank you in advance.
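
    One plausible reading of those debugger addresses (a hypothesis to verify with sizeof(ip4_t), not a definitive answer): mingw-w64 gcc uses the Microsoft bit-field layout, where the two unsigned int :4 fields claim a full 4-byte unit, so tos lands at offset 4 - which is exactly the 0x9f byte in the dump - instead of offset 1; declaring the bit-fields as uint8_t should restore the intended 20-byte overlay. The intended byte mapping itself, sketched in TypeScript on the question's hex dump:

        const hex = "450002cf9fe5000040115a9fc0a8fe";
        const b = hex.match(/../g)!.map((h) => parseInt(h, 16));

        const ihl = b[0] & 0x0f;           // first-declared field takes the low nibble
                                           // on little-endian gcc: 0x5
        const version = b[0] >> 4;         // high nibble: 0x4 (IPv4)
        const tos = b[1];                  // byte offset 1: 0x00
        const totLen = (b[2] << 8) | b[3]; // network byte order: 0x02cf = 719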

    Read the article

  • Remote desktop sessions - Unwanted automatic log-off after a period of time

    - by alex
    I'm having an issue whenever I connect to any of our servers via RDP: after a certain period of time, it seems to close these sessions, closing all the applications I had open, etc. This is particularly annoying if I am running a long process - for example, copying a file - as it cuts it off. I then re-connect via RDP, and it effectively loads a new session. Is this set somewhere in Group Policy? Or somewhere else? This is happening on Windows 2008 (it may also be on our 2003 servers, although I haven't noticed...).
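
    In case it helps anyone searching later, the knobs for this on Server 2008 usually live in Group Policy (node names are from memory, so verify in gpedit.msc; on 2008 R2 the nodes are "Remote Desktop Services" / "Remote Desktop Session Host" instead of the Terminal Services names shown):

        Computer Configuration > Administrative Templates > Windows Components
          > Terminal Services > Terminal Server > Session Time Limits
            - "Set time limit for disconnected sessions"
            - "End session when time limits are reached"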

    Read the article

  • Easily tell if focused on VirtualBox window?

    - by Benjamin Oakes
    Is there a way to make it very obvious that my VirtualBox window isn't in focus? The problem I'm having is switching between workspaces on Ubuntu or OS X and then trying to type in my Windows virtual machine, only to find that it's not in focus. The Windows window looks like it's in focus (based on the title bar), but I'm actually typing in Firefox on the host machine, for example. It's even worse because the Windows text insertion cursor is blinking to show focus. Ideally, I'd like the VM's display to get unsaturated (e.g. a "partial" grayscale) when not in focus, just to prevent this keyboard-focus problem. Other options would be fine too, as long as I don't have to second guess where my focus is. I'm not using the seamless mode -- the display is all within a window.

    Read the article
