Search Results

Search found 14111 results on 565 pages for 'virtual machines'.

Page 90/565 | < Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >

  • Daemon program that uses select() inside an infinite loop uses significantly more CPU when ported from Solaris 10 to Linux

    - by Jake
    I have a daemon app written in C that is currently running with no known issues on a Solaris 10 machine. I am in the process of porting it over to Linux and have had to make only minimal changes. During testing it passes all test cases; there are no issues with its functionality.

    However, when I view its CPU usage while 'idle' on my Solaris machine, it is using around 0.03% CPU. On the virtual machine running Red Hat Enterprise Linux 4.8, that same process uses all available CPU (usually somewhere in the 90%+ range).

    My first thought was that something must be wrong with the event loop. The event loop is an infinite loop ( while(1) ) with a call to select(). The timeval is set up so that timeval.tv_sec = 0 and timeval.tv_usec = 1000. This seems reasonable enough for what the process is doing. As a test I bumped timeval.tv_sec to 1. Even after doing that I saw the same issue.

    Is there something I am missing about how select() works on Linux vs. Unix? Or does it work differently with an OS running on a virtual machine? Or maybe there is something else I am missing entirely? One more thing: I am not sure which version of VMware Server is being used, but it was updated about a month ago. Thanks.
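
    One likely culprit, sketched below: on Linux, select() modifies its timeval argument to reflect the time not slept, while Solaris leaves it untouched. If the struct is initialized once outside the loop, it decays to zero and select() returns immediately, busy-looping the daemon. A minimal sketch of the fix, assuming the timeout is currently set up before the loop:

        #include <stddef.h>
        #include <sys/time.h>
        #include <sys/select.h>

        void event_loop(int fd)
        {
            for (;;) {
                fd_set readfds;
                FD_ZERO(&readfds);
                FD_SET(fd, &readfds);

                /* Reinitialize the timeout on every iteration: Linux
                   overwrites it with the remaining time, so a struct set
                   up once outside the loop quickly becomes zero. */
                struct timeval tv;
                tv.tv_sec = 0;
                tv.tv_usec = 1000;

                if (select(fd + 1, &readfds, NULL, NULL, &tv) > 0) {
                    /* handle ready descriptors */
                }
            }
        }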

    Read the article

  • C++ Multiple inheritance with interfaces?

    - by umanga
    Greetings all, I come from a Java background and I am having difficulty with multiple inheritance. I have an interface called IViewer which has an init() method. I want to derive a new class called PlaneViewer implementing the above interface and extending another class (QWidget). My implementation is as follows.

    IViewer.h (only a header file, no CPP file):

        #ifndef IVIEWER_H_
        #define IVIEWER_H_

        class IViewer {
        public:
            //IViewer();
            //virtual ~IViewer();
            virtual void init() = 0;
        };

        #endif /* IVIEWER_H_ */

    My derived class, PlaneViewer.h:

        #ifndef PLANEVIEWER_H
        #define PLANEVIEWER_H

        #include <QtGui/QWidget>
        #include "ui_planeviewer.h"
        #include "IViewer.h"

        class PlaneViewer : public QWidget, public IViewer {
            Q_OBJECT
        public:
            PlaneViewer(QWidget *parent = 0);
            ~PlaneViewer();
            void init(); // do I have to declare it here also?
        private:
            Ui::PlaneViewerClass ui;
        };

        #endif // PLANEVIEWER_H

    PlaneViewer.cpp:

        #include "planeviewer.h"

        PlaneViewer::PlaneViewer(QWidget *parent)
            : QWidget(parent)
        {
            ui.setupUi(this);
        }

        PlaneViewer::~PlaneViewer() {
        }

        void PlaneViewer::init() {
        }

    My questions are: 1. Is it necessary to declare the method init() in PlaneViewer also, given it is already declared in IViewer? 2. I cannot compile the above code; it gives this error:

        PlaneViewer]+0x28): undefined reference to `typeinfo for IViewer'
        collect2: ld returned 1 exit status

    Do I have to have an implementation for IViewer in a CPP file?
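
    With GCC, "undefined reference to typeinfo" for a polymorphic class usually means a virtual function was declared but never defined, so no translation unit ever emits the class's vtable and typeinfo. A header-only interface avoids this by keeping every non-pure virtual inline; a sketch, restoring the virtual destructor the original commented out:

        #ifndef IVIEWER_H_
        #define IVIEWER_H_

        class IViewer {
        public:
            // Defined inline, so no separate .cpp is needed and the
            // compiler can emit the vtable/typeinfo where it is used.
            virtual ~IViewer() {}
            virtual void init() = 0;
        };

        #endif /* IVIEWER_H_ */

    The pure virtual init() still has to be declared again in PlaneViewer and defined somewhere; otherwise PlaneViewer remains abstract and cannot be instantiated.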

    Read the article

  • Abstract base class puzzle

    - by 0x80
    In my class design I ran into the following problem:

        class MyData {
            int foo;
        };

        class AbstraktA {
        public:
            virtual void A() = 0;
        };

        class AbstraktB : public AbstraktA {
        public:
            virtual void B() = 0;
        };

        template<class T>
        class ImplA : public AbstraktA {
        public:
            void A() { cout << "ImplA A()"; }
        };

        class ImplB : public ImplA<MyData>, public AbstraktB {
        public:
            void B() { cout << "ImplB B()"; }
        };

        void TestAbstrakt() {
            AbstraktB *b = (AbstraktB *) new ImplB;
            b->A();
            b->B();
        }

    The problem with the code above is that the compiler will complain that AbstraktA::A() is not defined. Interface A is shared by multiple objects, but the implementation of A depends on the template argument. Interface B is seen by the outside world and needs to be abstract.

    The reason I would like this is that it would allow me to define an object C like this: define the interface C inheriting from abstract A, then define the implementation of C using a different data type for template A. I hope I'm clear. Is there any way to do this, or do I need to rethink my design?
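
    The usual fix for this shape of hierarchy is virtual inheritance, so that ImplB contains a single shared AbstraktA subobject and ImplA<MyData>::A() becomes the final overrider for the A() that AbstraktB inherits too. A sketch, assuming shared-base semantics are what is wanted:

        #include <iostream>

        struct MyData { int foo; };

        class AbstraktA {
        public:
            virtual ~AbstraktA() {}
            virtual void A() = 0;
        };

        class AbstraktB : public virtual AbstraktA {
        public:
            virtual void B() = 0;
        };

        template<class T>
        class ImplA : public virtual AbstraktA {
        public:
            // Overrides A() in the single shared AbstraktA base.
            void A() { std::cout << "ImplA A()\n"; }
        };

        class ImplB : public ImplA<MyData>, public AbstraktB {
        public:
            void B() { std::cout << "ImplB B()\n"; }
        };

        void TestAbstrakt() {
            AbstraktB *b = new ImplB;   // no cast needed with virtual bases
            b->A();                     // dispatches to ImplA<MyData>::A()
            b->B();
            delete b;                   // safe: the base destructor is virtual
        }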

    Read the article

  • Where in the standard is forwarding to a base class required in these situations?

    - by pgast
    Maybe even better is: why does the standard require forwarding to a base class in these situations? (yeah yeah yeah - Why? - Because.)

        class B1 {
        public:
            virtual void f() = 0;
        };

        class B2 {
        public:
            virtual void f() {}
        };

        class D : public B1, public B2 {
        };

        class D2 : public B1, public B2 {
        public:
            using B2::f;
        };

        class D3 : public B1, public B2 {
        public:
            void f() { B2::f(); }
        };

        D d;
        D2 d2;
        D3 d3;

    EDG gives:

        sourceFile.cpp
        sourceFile.cpp(24) : error C2259: 'D' : cannot instantiate abstract class
                due to following members:
                'void B1::f(void)' : is abstract
                sourceFile.cpp(6) : see declaration of 'B1::f'
        sourceFile.cpp(25) : error C2259: 'D2' : cannot instantiate abstract class
                due to following members:
                'void B1::f(void)' : is abstract
                sourceFile.cpp(6) : see declaration of 'B

    and similarly for the MS compiler. I might buy the first case, D. But in D2, f is unambiguously defined by the using declaration; why is that not enough for the compiler to be required to fill out the vtable? Where in the standard is this situation defined?

    Read the article

  • What is the simplest way to map a folder on the file system to a url in Tomcat?

    - by Simon
    Here's my problem... I have a small prototype app (it happens to be in Grails, hosted on AWS) and I want to add the ability for the user to upload a few (max 10) images. I want to persist these images on disk on the server machine, in a folder location outside my WAR. I realise there is probably a super-scalable solution involving more web servers and optimised static asset serving, but for the approximately 100 users I am likely to get, it's really not worth the effort and cost.

    So, what is the simplest way I can have a virtual folder in my URL map to a physical folder on disk? I sort of want http://myapp.com/static to map to a folder which I can configure, e.g. /var/www/static, so I can then have this in my code:

        <img src="/static/user1/picture.jpg"/>

    I don't particularly mind whether the resulting physical folders are directly browsable. Security will eventually be an issue, but it isn't at the start. So, what are my options? I have looked at virtual hosts on the Apache site, but it feels more complicated than I need. I don't want to use the Grails static rendering plugins.
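
    If the app is served by a standalone Tomcat, one minimal approach is a separate context whose docBase points at the folder. A sketch, assuming Tomcat's per-context deployment files (the paths are placeholders):

        <!-- conf/Catalina/localhost/static.xml: the file name supplies the
             /static context path; docBase is the physical folder on disk. -->
        <Context docBase="/var/www/static" />

    Images dropped under /var/www/static/user1/ are then reachable at /static/user1/..., served by Tomcat's default servlet.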

    Read the article

  • Postfix: How to configure Postfix with virtual Dovecot mailboxes?

    - by user75247
    I have configured a Postfix mail server for two domains: domain1.com and domain2.com. In my configuration domain1 has both virtual users with Maildirs and aliases to forward mail to local users (e.g. root, webmaster) and some small mailing lists. It also has some virtual mappings to non-local domains. Domain2, on the other hand, has only virtual alias mappings, mainly to corresponding 'users' at domain1 (e.g. mail to user1@domain2.com should be forwarded to user1@domain1.com).

    My problem is that currently Postfix accepts mail even for those users that don't exist in the system. Mail to existing users and /etc/aliases works fine. The Postfix documentation states that the same domain should never be specified in both mydestination and virtual_mailbox_maps, but if I leave mydestination blank then Postfix validates recipients against virtual_mailbox_maps but rejects mail for local aliases of domain1.com.

    /etc/postfix/main.cf:

        myhostname = domain1.com
        mydomain = domain1.com
        mydestination = $myhostname, localhost.$mydomain, localhost
        virtual_mailbox_domains = domain1.com
        virtual_mailbox_maps = hash:/etc/postfix/vmailbox
        virtual_mailbox_base = /home/vmail/domains
        virtual_alias_domains = domain2.com
        virtual_alias_maps = hash:/etc/postfix/virtual
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        virtual_transport = dovecot

    /etc/postfix/virtual:

        domain1.com           right-hand-content-does-not-matter
        firstname.lastname    user1
        [more aliases..]
        domain2.com           right-hand-content-does-not-matter
        @domain2.com          @domain1.com

    /etc/postfix/vmailbox:

        user1@domain1.com     user1/Maildir
        user2@domain1.com     user2/Maildir

    /etc/aliases:

        root: :include:/etc/postfix/aliases/root
        webmaster: :include:/etc/postfix/aliases/webmaster
        [etc..]

    Is this approach correct, or is there some other way to configure Postfix with Dovecot (virtual) Maildirs and Postfix aliases?
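
    One pattern that keeps both checks working, offered as a sketch since details vary by Postfix version: keep only localhost in mydestination and route domain1.com's aliased names through virtual_alias_maps to local users, so recipient validation can still reject unknown addresses:

        # /etc/postfix/main.cf (sketch)
        mydestination = localhost

        # /etc/postfix/virtual (sketch; the addresses are examples)
        root@domain1.com        root@localhost
        webmaster@domain1.com   webmaster@localhost

    Mail for root@domain1.com is then rewritten to the local root@localhost, where /etc/aliases takes over, while anything listed in neither virtual_alias_maps nor virtual_mailbox_maps is rejected as an unknown recipient.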

    Read the article

  • Frame rate on one of two machines running same code seems to be capped at 60 for no reason

    - by dennmat
    ISSUE: I recently moved a project from my laptop to my desktop (machine info below). On my laptop the exact same code displays the fps (and ms/f) correctly. On my desktop it does not. What I mean is that the laptop will display 300 fps (for example) where the desktop shows only up to 60. If I add 100 objects to the game on the laptop, I'll see my frame rate drop accordingly; the same test on the desktop results in no change and the frames stay at 60. It takes a lot (~300) of entities before I'll see a frame drop on the desktop, and then it descends. It seems as though its "theoretical" frame rate would be 400 or 500, but it never actually gets there, doing only 60 until there's too much to handle even at 60. This 60-frame cap is coming from nowhere: I'm not doing any frame limiting myself. It seems like something external is limiting my loop iterations on the desktop, and for the last couple of days I've been scratching my head trying to figure out how to debug this.

    SETUPS

        Desktop: Visual Studio Express 2012, Windows 7 Ultimate 64-bit
        Laptop:  Visual Studio Express 2010, Windows 7 Ultimate 64-bit

    The libraries (Allegro, Box2D) are the same versions on both setups.

    CODE - main loop:

        while (!abort) {
            frameTime = al_get_time();
            if (frameTime - lastTime >= 1.0) {
                lastFps = fps / (frameTime - lastTime);
                lastTime = frameTime;
                avgMspf = cumMspf / fps;
                cumMspf = 0.0;
                fps = 0;
            }
            /** DRAWING/UPDATE CODE **/
            fps++;
            cumMspf += al_get_time() - frameTime;
        }

    Note: there is no blocking code in the loop at any point.

    Where I'm at: my understanding of al_get_time() is that it can return different resolutions depending on the system. However, the resolution is never worse than seconds, the double is represented as [seconds].[finer-resolution], and seeing as I'm only checking for a whole second, al_get_time() shouldn't be responsible. My project settings and compiler options are the same, and I promise it's the same code on both machines. My googling really didn't help much, and although technically it's not that big a deal, I'd really like to figure this out or have it explained, whichever comes first. Even just an idea of how to go about finding possible causes would help, because I'm out of ideas. Any help at all is greatly appreciated.
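
    A 60 fps ceiling that tracks the monitor's refresh rate is the classic signature of vertical sync being forced on in the desktop GPU's driver settings. If the code uses Allegro 5 (an assumption based on al_get_time()), vsync can at least be requested off before the display is created; a sketch:

        #include <allegro5/allegro.h>

        int main(void) {
            al_init();
            // ALLEGRO_VSYNC = 2 requests vsync off; ALLEGRO_SUGGEST makes it
            // a hint only, so the driver control panel can still override it.
            al_set_new_display_option(ALLEGRO_VSYNC, 2, ALLEGRO_SUGGEST);
            ALLEGRO_DISPLAY *display = al_create_display(800, 600);

            /* ... run the main loop shown above ... */

            al_destroy_display(display);
            return 0;
        }

    If the cap survives this, checking the NVIDIA/AMD control panel on the desktop for a "force vertical sync on" setting is the next step.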

    Read the article

  • Disk2vhd Hyper-V server question

    - by user297378
    Hello all. I have backed up about 30 servers using disk2vhd, and now I have built the first of many Hyper-V servers. I did not realize it is all command line; I did download CoreConfigurator, and that has some of the functionality I have been looking for. My question is: how do I get the VHD files to run as virtual machines? Since it's all command line, I tried to mount the VHDs via VBScript, but I have not been able to. Any help on this would be great! Thanks!

    Read the article

  • RoleManager not working when ASPDotNetStorefront is served in a virtual folder with FormsAuthentication

    - by digiguru
    Does anyone know if there is a valid roleManager configuration I can apply to ASPDotNetStorefront to get the site working? We have a website that has an ASP.NET storefront served in a virtual folder off the root:

        ourwebsite.com
        ourwebsite.com/shop

    Everything had been working fine until we recently put the groundwork in place for forms authentication on the website. This caused an error on the production server when you tried to get into the shop:

        Unable to cast object of type 'System.Web.Security.RolePrincipal' to
        type 'AspDotNetStorefrontCore.AspDotNetStorefrontPrincipal'.

    Looking at the shop's web.config I noticed there was no roleManager node, so I tried to fix the problem by disabling it in the shop's web.config:

        <roleManager enabled="false">
        </roleManager>

    This prevented the error from occurring, but also prevented the shopping basket from working. Instead I removed the roleManager tag from the root website:

        <!-- Conflicting with AspDotNetStorefront
        <roleManager enabled="true" defaultProvider="CustomizedRoleProvider">
          <providers>
            <add name="CustomizedRoleProvider"
                 type="System.Web.Security.SqlRoleProvider"
                 connectionStringName="constr" />
          </providers>
        </roleManager>
        -->

    This worked, but obviously prevents role authentication from working on the root website, which is okay for the next 2 or 3 releases. Does anyone know the correct code for the roleManager in DotNetStorefront?
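
    A middle ground worth trying, offered as a sketch since it depends on the IIS configuration: keep the role provider in the root web.config but stop it from being inherited by the /shop child application:

        <!-- Root web.config sketch: wrapping system.web in a location element
             with inheritInChildApplications="false" keeps the role provider
             out of the /shop virtual application. -->
        <location path="." inheritInChildApplications="false">
          <system.web>
            <roleManager enabled="true" defaultProvider="CustomizedRoleProvider">
              <providers>
                <add name="CustomizedRoleProvider"
                     type="System.Web.Security.SqlRoleProvider"
                     connectionStringName="constr" />
              </providers>
            </roleManager>
          </system.web>
        </location>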

    Read the article

  • Virtual Earth (Bing) Pin "moves" when zoom level changes

    - by Ali
    Hi guys. I created a Virtual Earth (Bing) map to show a simple pin at a particular point. Everything works right now: the pin shows up, and the title and description pop up on hover. The map is initially fully zoomed in on the pin, but the STRANGE problem is that when I zoom out, the pin moves slightly lower on the map. So if I started with the pin pointing somewhere in Toronto, if I zoom out enough the pin ends up in the middle of Lake Ontario! If I pan the map, the pin correctly stays in its proper location. When I zoom back in, it moves slightly up until it's back in its original, correct position. I've looked around for a solution for a while, but I can't understand it at all. Please help!! Thanks a lot!

    The map control is imported with JavaScript from http://ecn.dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=6.2

        $(window).ready(function () {
            GetMap();
        });

        map = new VEMap('birdEye');
        map.SetCredentials("hash key from Bing website");
        map.LoadMap(new VELatLong(43.640144, -79.392593), 1,
                    VEMapStyle.BirdseyeHybrid, false, VEMapMode.Mode2D, true, null);

        var pin = new VEShape(VEShapeType.Pushpin, new VELatLong(43.640144, -79.392593));
        pin.SetTitle("Goes to Title of the Pushpin");
        pin.SetDescription("Goes as Description.");
        map.AddShape(pin);

    Read the article

  • Cocoa Virtual Keystrokes Pain

    - by bhargav
    I'm writing an application that responds to a hotkey by copying the highlighted text into NSPasteboard's generalPasteboard. After looking around here for a solution for sending virtual keystrokes, I found this: http://stackoverflow.com/questions/1505933/how-to-send-a-cmd-c-keystroke-to-the-active-application-in-objective-c-or-tell

    I tried the AppleScript suggested there with NSAppleScript:

        NSLog(@"Hotkey Pressed");
        NSPasteboard *pasteboard = [NSPasteboard generalPasteboard];
        NSAppleScript *playScript;
        playScript = [[NSAppleScript alloc] initWithSource:
            @"tell application \"System Events\" to keystroke \"c\" using command down"];
        if ([playScript isCompiled] == NO) {
            [playScript compileAndReturnError:nil];
        }
        id exerror = [playScript executeAndReturnError:nil];
        if (exerror == nil) {
            NSLog(@"Script Failed");
        }

    It works, but only the first time I hit the hotkey. Each subsequent hit fails to grab the highlighted text; the generalPasteboard still contains the same contents as before the script ran again. Clearing the generalPasteboard before I run the code is no use, because then the code fails when attempting to read the pasteboard contents.

    So I tried the next suggested solution:

        CFRelease(CGEventCreate(NULL));
        CGEventRef event1, event2, event3, event4;
        event1 = CGEventCreateKeyboardEvent(NULL, (CGKeyCode)50, true);
        event2 = CGEventCreateKeyboardEvent(NULL, (CGKeyCode)8, true);
        event3 = CGEventCreateKeyboardEvent(NULL, (CGKeyCode)8, false);
        event4 = CGEventCreateKeyboardEvent(NULL, (CGKeyCode)50, false);
        CGEventPost(kCGHIDEventTap, event1);
        CGEventPost(kCGHIDEventTap, event2);
        CGEventPost(kCGHIDEventTap, event3);
        CGEventPost(kCGHIDEventTap, event4);

    The above should send the keystrokes Command+C, but all I get is a beep, and the pasteboard contents are unchanged. I'm at wits' end. Can anyone enlighten me as to what I'm missing, or point out what I'm overlooking in something so simple?
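
    Two details stand out in the second attempt. Keycode 50 is the backquote/grave key on ANSI layouts; the Command key is keycode 55, which alone would explain the beep. Independent of that, a more robust pattern is to skip synthesizing the modifier key entirely and attach the Command flag to the 'c' events. A sketch, with keycode 8 ('c' on ANSI layouts) as an assumption about the keyboard in use:

        // Post Command+C by flagging the 'c' key events with the Command
        // modifier instead of faking a separate Command key press.
        CGEventRef cDown = CGEventCreateKeyboardEvent(NULL, (CGKeyCode)8, true);
        CGEventRef cUp   = CGEventCreateKeyboardEvent(NULL, (CGKeyCode)8, false);
        CGEventSetFlags(cDown, kCGEventFlagMaskCommand);
        CGEventSetFlags(cUp, kCGEventFlagMaskCommand);
        CGEventPost(kCGHIDEventTap, cDown);
        CGEventPost(kCGHIDEventTap, cUp);
        CFRelease(cDown);
        CFRelease(cUp);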

    Read the article

  • Vector of pointers to base class, odd behaviour calling virtual functions

    - by Ink-Jet
    I have the following code:

        #include <iostream>
        #include <vector>

        class Entity {
        public:
            virtual void func() = 0;
        };

        class Monster : public Entity {
        public:
            void func();
        };

        void Monster::func() {
            std::cout << "I AM A MONSTER" << std::endl;
        }

        class Buddha : public Entity {
        public:
            void func();
        };

        void Buddha::func() {
            std::cout << "OHMM" << std::endl;
        }

        int main() {
            const int num = 5; // How many of each to make
            std::vector<Entity*> t;
            for (int i = 0; i < num; i++) {
                Monster m;
                Entity *e;
                e = &m;
                t.push_back(e);
            }
            for (int i = 0; i < num; i++) {
                Buddha b;
                Entity *e;
                e = &b;
                t.push_back(e);
            }
            for (int i = 0; i < t.size(); i++) {
                t[i]->func();
            }
            return 0;
        }

    However, when I run it, instead of each class printing out its own message, they all print the "Buddha" message. I want each object to print its own message: Monsters print the monster message, Buddhas print the Buddha message. What have I done wrong?
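
    The vector here ends up full of dangling pointers: each Monster and Buddha is a stack local destroyed at the end of its loop iteration, so calling through the stored pointers is undefined behaviour (every slot also happens to reuse the same stack address, which is why one message appears to win). A sketch of the usual fix, giving the objects dynamic lifetime:

        // Allocate on the heap so the objects outlive the loops.
        for (int i = 0; i < num; i++)
            t.push_back(new Monster);
        for (int i = 0; i < num; i++)
            t.push_back(new Buddha);

        for (size_t i = 0; i < t.size(); i++)
            t[i]->func();

        // Entity also needs a virtual destructor for this cleanup to be safe.
        for (size_t i = 0; i < t.size(); i++)
            delete t[i];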

    Read the article

  • Android 3.1+ USB as virtual COM port

    - by ZachMc
    I have a third-party USB device that, when plugged into a Windows machine, is recognized as a serial device and assigned to the COM4 port. I can communicate with the device just as I would with a device connected via a serial port; for instance, I can write "abc" serially to the device over the USB connection.

    I have been searching for a way to do a similar thing in Android. If I try the USB host method and use a UsbManager to open the UsbDevice, I get one interface with two endpoints. I have tried sending control messages using the method in UsbDeviceConnection, but the method returns -1 for everything (though I don't know what I should use for the parameters of that method). Is there a way to get an OutputStream that I can write to that will send bytes to the USB device? Right now I am looking at recompiling the kernel to include a virtual COM port driver and writing some native code to be able to do this. Thanks!

    Edit: I am using the FTDI serial-to-USB converter circuit. Is this compatible with Android?
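
    There is no stream wrapper in the framework, but bulk transfers on the OUT endpoint come close. A sketch of the host-side plumbing, assuming android.hardware.usb.* imports, a UsbManager from getSystemService(Context.USB_SERVICE), and USB permission already granted; note that FTDI chips additionally expect vendor-specific control requests to set baud rate and line settings, which this sketch leaves out:

        void writeSerial(UsbManager manager, UsbDevice device, byte[] data) {
            UsbInterface intf = device.getInterface(0);
            UsbEndpoint out = null;
            for (int i = 0; i < intf.getEndpointCount(); i++) {
                UsbEndpoint ep = intf.getEndpoint(i);
                if (ep.getType() == UsbConstants.USB_ENDPOINT_XFER_BULK
                        && ep.getDirection() == UsbConstants.USB_DIR_OUT) {
                    out = ep; // the bulk OUT endpoint carries the serial data
                }
            }
            UsbDeviceConnection conn = manager.openDevice(device);
            conn.claimInterface(intf, true);
            // Returns the number of bytes written, or -1 on failure.
            int sent = conn.bulkTransfer(out, data, data.length, 1000);
            conn.releaseInterface(intf);
            conn.close();
        }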

    Read the article

  • Permission issue when webservice is deployed as a virtual directory. Works in VS IDE

    - by Shyju
    I have an ASP.NET web service which creates a text file in a path that is passed as a parameter to the method:

        private void CreateFile(string path)
        {
            string strFileName = path;
            StreamWriter sw = new StreamWriter(strFileName, true);
            sw.WriteLine("");
            sw.Write("Created at " + DateTime.Now.ToString());
            sw.Close();
        }

    Now I am passing a folder on the network as the parameter and calling the method:

        CreateFile(@"\\192.168.0.40\labels\test.txt");

    When running the code from the Visual Studio IDE, the file is created in the path. But when I published this and deployed it as a virtual directory, it throws an error like this:

        System.UnauthorizedAccessException: Access to the path
        '\\192.168.0.40\labels\test.txt' is denied.
           at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
           at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access,
               Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize,
               FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath,
               Boolean bFromProxy)
           at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access,
               FileShare share, Int32 bufferSize, FileOptions options)
           at System.IO.StreamWriter.CreateFile(String path, Boolean append)
           at System.IO.StreamWriter..ctor(String path, Boolean append,
               Encoding encoding, Int32 bufferSize)
           at System.IO.StreamWriter..ctor(String path, Boolean append)

    I have the relevant setting in my web.config. My machine is running XP and the other is Windows Server 2003. Any idea how to solve this? Thanks in advance.
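
    The usual cause is that outside the IDE the code runs as the worker-process identity (ASPNET on XP, Network Service on Server 2003), which has no rights on the remote share. One common remedy, sketched here with placeholder credentials, is to impersonate an account that does have write access to \\192.168.0.40\labels:

        <!-- web.config sketch: DOMAIN\svc_labels is a hypothetical account
             granted write permission on the share and the NTFS folder. -->
        <system.web>
          <identity impersonate="true"
                    userName="DOMAIN\svc_labels"
                    password="secret" />
        </system.web>

    Granting the server's machine account access to the share, or wrapping the write in explicit WindowsIdentity impersonation, are the usual alternatives.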

    Read the article

  • How to show virtual keypad in an android activity

    - by Maxood
    Why am I not able to show the virtual keyboard in my activity? Here is my code:

        package som.android.keypad;

        import android.app.Activity;
        import android.os.Bundle;
        import android.view.inputmethod.InputMethodManager;
        import android.widget.EditText;

        public class ShowKeypad extends Activity {
            InputMethodManager imm;

            @Override
            public void onCreate(Bundle icicle) {
                super.onCreate(icicle);
                EditText editText = (EditText) findViewById(R.id.EditText);
                ((InputMethodManager) getSystemService(this.INPUT_METHOD_SERVICE))
                        .showSoftInput(editText, 0);
            }
        }

    And the manifest:

        <?xml version="1.0" encoding="utf-8"?>
        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
            package="som.android.keypad"
            android:versionCode="1"
            android:versionName="1.0">
            <application android:icon="@drawable/icon" android:label="@string/app_name">
                <activity android:name=".ShowKeypad"
                    android:windowSoftInputMode="stateAlwaysVisible"
                    android:label="@string/app_name">
                    <intent-filter>
                        <action android:name="android.intent.action.MAIN" />
                        <category android:name="android.intent.category.LAUNCHER" />
                    </intent-filter>
                </activity>
            </application>
            <uses-sdk android:minSdkVersion="4" />
        </manifest>
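
    One likely issue: onCreate() never calls setContentView(), so findViewById() returns null and the InputMethodManager is asked to show a keyboard for a view that was never attached. A sketch of the corrected activity, assuming a layout resource containing the EditText (R.layout.main here is a placeholder):

        @Override
        public void onCreate(Bundle icicle) {
            super.onCreate(icicle);
            setContentView(R.layout.main);  // without this, findViewById() returns null

            EditText editText = (EditText) findViewById(R.id.EditText);
            editText.requestFocus();        // the IMM only serves the focused view

            InputMethodManager imm =
                    (InputMethodManager) getSystemService(INPUT_METHOD_SERVICE);
            imm.showSoftInput(editText, InputMethodManager.SHOW_IMPLICIT);
        }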

    Read the article

  • Virtual audio driver (microphone)

    - by Dalamber
    Hello guys. I want to develop a virtual microphone driver. Please don't suggest DirectShow; that's not "the way" here. I need a solution that will work with any software, including Skype and MSN, and DirectShow doesn't fit those requirements.

    I found the AVStream Filter-Centric Simulated Capture Driver (avssamp.sys) in the Windows 7 WDK. What I need is the audio part of it. By default it reads avssamp.wav and plays it, but this driver is registered as a WDM streaming capture device, and I want it in the Audio Capture Device category. There are some posts on the web, but they all say the same thing:

        http://www.tech-archive.net/Archive/Development/microsoft.public.development.device.drivers/2005-05/msg00124.html
        http://www.winvistatips.com/problem-installing-avssamp-audio-capture-sources-category-t184898.html

    I think registering this filter driver as an audio capture device will make Skype recognize it as a microphone, and therefore I will be able to push any PCM file as if it came from a mic. If someone has already faced this problem, please help. Thanks in advance.

    Read the article

  • Convert Virtual Key Code to unicode string

    - by Joshua Weinberg
    I have some code I've been using to get the current keyboard layout and convert a virtual key code into a string. This works great in most situations, but I'm having trouble with some specific cases. The one that brought this to light is the accent key next to the backspace key on German QWERTZ keyboards: http://en.wikipedia.org/wiki/File:KB_Germany.svg

    That key generates the VK code I'd expect, kVK_ANSI_Equal, but when using a QWERTZ keyboard layout I get no description back. It ends up as a dead key because it's supposed to be composed with another key. Is there any way to catch these cases and do the proper conversion? My current code is below.

        TISInputSourceRef currentKeyboard = TISCopyCurrentKeyboardInputSource();
        CFDataRef uchr = (CFDataRef)TISGetInputSourceProperty(currentKeyboard,
                                        kTISPropertyUnicodeKeyLayoutData);
        const UCKeyboardLayout *keyboardLayout =
            (const UCKeyboardLayout *)CFDataGetBytePtr(uchr);

        if (keyboardLayout) {
            UInt32 deadKeyState = 0;
            UniCharCount maxStringLength = 255;
            UniCharCount actualStringLength = 0;
            UniChar unicodeString[maxStringLength];

            OSStatus status = UCKeyTranslate(keyboardLayout, keyCode,
                                             kUCKeyActionDown, 0, LMGetKbdType(),
                                             kUCKeyTranslateNoDeadKeysBit,
                                             &deadKeyState, maxStringLength,
                                             &actualStringLength, unicodeString);

            if (actualStringLength > 0 && status == noErr)
                return [[NSString stringWithCharacters:unicodeString
                                                length:(NSInteger)actualStringLength]
                        uppercaseString];
        }
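
    A pattern that may help, offered as a sketch and not verified against every layout: drop kUCKeyTranslateNoDeadKeysBit so the dead-key state accumulates, and when the first call produces no characters, translate a following space (keycode 49) to obtain the standalone accent:

        UInt32 deadKeyState = 0;
        UniChar chars[255];
        UniCharCount length = 0;

        // First pass: 0 instead of kUCKeyTranslateNoDeadKeysBit, so a dead
        // key records its state instead of being discarded.
        UCKeyTranslate(keyboardLayout, keyCode, kUCKeyActionDown, 0,
                       LMGetKbdType(), 0, &deadKeyState, 255, &length, chars);

        if (length == 0 && deadKeyState != 0) {
            // Dead key detected: resolve it by "typing" a space (keycode 49),
            // which yields the accent character on its own.
            UCKeyTranslate(keyboardLayout, 49, kUCKeyActionDown, 0,
                           LMGetKbdType(), 0, &deadKeyState, 255, &length, chars);
        }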

    Read the article

  • Daemon Tools Lite Error- Unable to add adapter

    - by Lirik
    I just got a ThinkPad T510 with Windows 7 Professional and installed Daemon Tools Lite, but I keep getting an error when I run it: "Unable to add adapter." I tried starting it as an administrator but got the same problem. Does anybody know how to fix this?

    Read the article

  • Ubuntu 11.04 VM shows a black screen in VMware Player

    - by Roel Veldhuizen
    I have an Ubuntu Server 11.04 64-bit VM running on VMware Player 3.1.4 that only shows a black screen. No matter what I try, the screen remains black. The VM worked the first time; when I reset the machine, it shows the VMware loader and a flickering _ for about a second, then the screen turns black again.

    VM settings:

        Memory: 512MB
        Processors: 1
        HD: 20GB
        CD: auto detect
        Floppy: auto detect
        Network adapter: NAT
        USB controller: present
        Sound card: auto detect
        Printer: present
        Display: auto detect

    I just created a fresh VM and the same thing happens, so the problem seems consistent.

    Read the article

  • swapping or thrashing with vast amounts of unmapped pagecache

    - by Marco
    I'm using Kubuntu Jaunty (i386, 32-bit), kernel 2.6.28-13-generic. I have 4GB of RAM, of which only 3317MB are seen by the system (I guess because of the 32-bit system). I'm seeing the pagecache utilization continually grow, up to the point that the system is unusable (after a few days). This happens even when I don't do anything (all user applications closed and the bare minimum of services enabled). If enabled, the system starts to use swap space (using it all in the end). Even if swap is disabled, disk activity becomes continuous and the system unresponsive.

    For example, right now the system is working (albeit a tad slow), with only Firefox and Wing IDE running, and I have 2GB cached with only 45MB mapped:

        $ free
                     total       used       free     shared    buffers     cached
        Mem:       3346388    3247328      99060          0       8416    2117980
        -/+ buffers/cache:    1120932    2225456
        Swap:      2144668     519448    1625220

        $ cat /proc/meminfo
        MemTotal:        3346388 kB
        MemFree:           97128 kB
        Buffers:            7872 kB
        Cached:          2120224 kB
        SwapCached:       413860 kB
        Active:          2304596 kB
        Inactive:         865984 kB
        Active(anon):    2279168 kB
        Inactive(anon):   830236 kB
        Active(file):      25428 kB
        Inactive(file):    35748 kB
        Unevictable:          32 kB
        Mlocked:              32 kB
        HighTotal:       2492940 kB
        HighFree:           5456 kB
        LowTotal:         853448 kB
        LowFree:           91672 kB
        SwapTotal:       2144668 kB
        SwapFree:        1625244 kB
        Dirty:                84 kB
        Writeback:             0 kB
        AnonPages:        629304 kB
        Mapped:            45768 kB
        Slab:              45600 kB
        SReclaimable:      21756 kB
        SUnreclaim:        23844 kB
        PageTables:         4468 kB
        NFS_Unstable:          0 kB
        Bounce:                0 kB
        WritebackTmp:          0 kB
        CommitLimit:     3817860 kB
        Committed_AS:    3735020 kB
        VmallocTotal:     122880 kB
        VmallocUsed:        9352 kB
        VmallocChunk:      66600 kB
        HugePages_Total:       0
        HugePages_Free:        0
        HugePages_Rsvd:        0
        HugePages_Surp:        0
        Hugepagesize:       4096 kB
        DirectMap4k:       16376 kB
        DirectMap4M:      888832 kB

    If I try to drop the caches, little happens:

        # sync ; echo 3 > /proc/sys/vm/drop_caches ; free
                     total       used       free     shared    buffers     cached
        Mem:       3346388    3220580     125808          0       3020    2100600
        -/+ buffers/cache:    1116960    2229428
        Swap:      2144668     519356    1625312

    Right now I have vm.swappiness = 5, but I've also tried 0 and 1 (without noticeable differences). I've also tried vm.vfs_cache_pressure = 50 and 150 (again, no differences). As I said, the pagecache eats all memory even with swapping turned off. What is happening? How can I avoid this?

    Read the article

  • Windows XP Mode for Windows 7 - save text input language settings

    - by Gero
    When I change the 'default input language' in 'Text Services and Input Languages' in Windows XP Mode from EN-US to DE-DE, the setting is reverted at the next logoff/reboot: EN-US is the default language again. Is there a way around this behaviour? I'm using the default 'XPMUser' in Windows XP Mode. I also checked 'turn off advanced text services' and disabled the language bar, and Windows XP remembers those settings, just not the default language.

    Read the article

  • Magento Community hosting :: need advice on which shared hosting will run Magento fast

    - by user43353
    Hi. I need advice on which shared hosting will run Magento Community fast, or some other inexpensive solution. This website will not have a lot of users; I only need it to run fast for 100-20 users at the same time. The problem with Magento is a database design that makes the system very slow, and other parts aren't the best either. I had hostmonster.com and justhost.com for a previous website, but they weren't fast enough for even a single user not located in the USA (my customers' areas: Asia, Africa). Each action that involves the database takes a lot of time. Thanks

    Read the article

  • Open vSwitch and Xen Private Networks

    - by Joe
    I've read about the possibility of using Open vSwitch with Xen to route traffic between domUs on multiple physical hosts. I'd like to group the many domUs I have spread across multiple physical hosts into a number of private networks. However, I've found no documentation on how to integrate Open vSwitch with Xen (rather than XenServer), and I am unsure how to go about doing so and then creating the private networks described. As you might have gathered, from my research I think Open vSwitch can do what I need; I just can't find anything giving me a push in the right direction on how to actually use it! This may well be because Open vSwitch is quite new (version 1.0 was released on May 17). Any pointers in the right direction would be much appreciated!
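
    For orientation, the usual shape of an Open vSwitch/Xen setup is to replace the Xen bridge on each dom0 with an OVS bridge and give each domU's vif a VLAN tag, so that domUs sharing a tag form a private network across hosts. A sketch, with all interface names hypothetical:

        # On each dom0: create an OVS bridge and attach the physical NIC.
        ovs-vsctl add-br xenbr0
        ovs-vsctl add-port xenbr0 eth0

        # Attach a domU's virtual interface with a VLAN tag; domUs tagged
        # 10 on any host see each other but not tag-20 traffic.
        ovs-vsctl add-port xenbr0 vif3.0 tag=10

    The physical links between hosts have to carry the VLAN tags (trunk ports on the switch) for the private networks to span machines.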

    Read the article

  • How to Increase the VMWare Boot Screen Delay

    - by Trevor Bekolay
    If you've wanted to try out a bootable CD or USB flash drive in a virtual machine environment, you've probably noticed that VMware's offerings make it difficult to change the boot device. We'll show you how to change these options, either for one boot or permanently for a particular virtual machine.

    Even experienced users of VMware Player or Workstation may not recognize the screen above; it's the virtual machine's BIOS, which in most cases flashes by in the blink of an eye. If you want to boot the virtual machine from a CD or USB key instead of the hard drive, you'll need more than an eye-blink to press Escape and bring up the Boot Menu. Fortunately, there is a way to introduce a boot delay that isn't exposed in VMware's graphical interface: you have to edit the virtual machine's settings file (a .vmx file) manually.

    Editing the Virtual Machine's .vmx

    Find the .vmx file that contains the settings for your virtual machine. You chose a location for this when you created the virtual machine; in Windows, the default location is a folder called My Virtual Machines in your My Documents folder. In VMware Workstation, the location of the .vmx file is listed on the virtual machine's tab. If in doubt, search your hard drive for .vmx files. If you don't want to use Windows' default search, an awesome utility that locates files instantly is Everything.

    Open the .vmx file with any text editor. Somewhere in this file, enter the following line, save the file, then close the text editor:

        bios.bootdelay = 20000

    This will introduce a 20-second delay when the virtual machine loads, giving you plenty of time to press the Escape button and access the boot menu. The number in this line is just a value in milliseconds, so for a five-second boot delay, enter 5000, and so on.

    Change Boot Options Temporarily

    Now, when you boot up your virtual machine, you'll have plenty of time to enter one of the keystrokes listed at the bottom of the BIOS screen on boot-up. Press Escape to bring up the Boot Menu. This allows you to select a different device to boot from, like a CD drive. Your selection will be forgotten the next time you boot this virtual machine.

    Change Boot Options Permanently

    When the BIOS screen comes up, press F2 to enter the BIOS Setup menu. Switch to the Boot tab and change the ordering of the items by pressing the "+" key to move items up the list and the "-" key to move items down. We've switched the order so that the CD-ROM drive boots first. Once you make this change permanent, you may want to re-edit the .vmx file to remove the boot delay.

    Boot from a USB Flash Drive

    One thing noticeably missing from the list of boot options is a USB device. VMware's BIOS simply does not allow this, but we can get around the limitation using the PLoP Boot Manager that we've previously written about. And as a bonus, since everything is virtual anyway, there's no need to actually burn PLoP to a CD.

    Open the settings for the virtual machine you want to boot with a USB drive. Click Add... at the bottom of the settings screen, select CD/DVD Drive, and click Next. Click the Use ISO Image radio button and click Next. Browse to find plpbt.iso or plpbtnoemul.iso from the PLoP zip file. Ensure that Connect at power on is checked, and then click Finish. Click OK on the main Virtual Machine Settings page. Now, if you use the steps above to boot using that CD/DVD drive, PLoP will load, allowing you to boot from a USB drive!
    Conclusion

    We're big fans of VMware Player and Workstation, as they let us try out a ton of geeky things without worrying about harming our systems. By introducing a boot delay, we can add bootable CDs and USB drives to the list of geeky things we can try out. Download PLoP Boot Manager.

    Read the article

  • Mac OS X - Force screen resolution

    - by wjlafrance
    Hello! I'm trying to use the latest version of a certain development tool, and it's somewhat difficult to use on a 1280x800 display. Using VMware Fusion, I can set the screen resolution inside a VM to 1920x1200 on my 13" MBP and it's still usable. Does anyone know of a way to force Mac OS X to scale its resolution? I tried ScreenResX and it said the scaled resolution was "invalid" or something like that. I know there are only a certain number of pixels on the screen; I'm only asking how to scale, not how to set a new native resolution. My current hack of a solution is to run Snow Leopard Server in a VM with resolution scaling.

    Read the article

< Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >