Search Results

Search found 4268 results on 171 pages for 'srikanth vm'.

  • Windows 8.1 Professional Hyper-V - can I create my own Linux VM through it?

    - by KevinM1
    I have a Windows 8.1 Professional desktop and would like to have a Linux Mint VM on it so I can do some open-source web development work. I've tried VirtualBox, but it gives me an I/O error during the Mint .iso installation. VMware Player flat out tells me it can't install because I have Hyper-V on my machine by default. My biggest concern is nuking the desktop, which is where I do most of my non-development work/browsing/etc. The desktop itself seems to be listed as a VM in the Hyper-V Manager, and I don't want to accidentally break it. So, two questions: Can I create the Mint VM I want/need for my work? Can I do it without messing up the desktop?

  • How to set up a VM in KVM? Qcow2 or LVM, etc.

    - by JohnAdams
    Finally, after quite a bit of this-vs-that, I have chosen to virtualize a couple of my servers with KVM. I did a test setup as well, but I have a few questions about setting up VMs in KVM. Would appreciate pointers.

    1. What is the best storage to use - qcow2 or LVM? I like the fact that I can copy the VM file easily with qcow2, but what about LVM - how do I take a backup, or make a copy to play with on a development server? I know I can clone an LVM volume, but how do I bring it over to my development server? (One way of doing this is sketched below.)

    2. How do I set up the guest partitioning? For example, when setting up Ubuntu inside Ubuntu, do I choose LVM for that VM or regular fdisk partitioning? Can I increase the partition size later, if I need a bigger disk?
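
    A minimal sketch of one way to get an LVM-backed guest onto a development box, assuming a volume group named vg0 and a logical volume named vm1-disk (both placeholders): snapshot the volume so the copy stays consistent while the VM runs, convert the snapshot to a qcow2 file, then ship it over:

        # snapshot the LV so reads are consistent
        lvcreate --snapshot --name vm1-snap --size 5G /dev/vg0/vm1-disk
        # convert the snapshot to a qcow2 file
        qemu-img convert -O qcow2 /dev/vg0/vm1-snap vm1.qcow2
        # drop the snapshot and copy the file to the dev server
        lvremove -f /dev/vg0/vm1-snap
        scp vm1.qcow2 devserver:/var/lib/libvirt/images/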

  • VirtualBox VM (spawned by Vagrant) running but inaccessible. What now?

    - by Matt V.
    I have a VirtualBox VM running Ubuntu that was started by Vagrant. At some point my ssh session connected to the guest stopped responding. I tried "vagrant halt" from a terminal window on the host (OS X). The shutdown process seemed to hang as well. Shutting down the Oracle VM VirtualBox Manager doesn't shut down the VMs themselves. Is there a way in either Vagrant or VirtualBox to force the running VM to shut down? When running desktop guest OSes, closing the GUI window presents several options for shutting down the guest, but I don't know how to do the equivalent when the guest is running headless.
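
    When the GUI is out of reach, VirtualBox's command-line front end can stop a headless guest directly. A sketch, assuming the VM's registered name (the name used here is a placeholder; list the real names first):

        VBoxManage list runningvms
        # polite ACPI power-button signal first:
        VBoxManage controlvm "vagrant_default" acpipowerbutton
        # if the guest is truly hung, hard-stop it (like pulling the plug):
        VBoxManage controlvm "vagrant_default" poweroff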

  • How to manage SOAP requests to a pool of VMs, each listening on an HTTP port, with a priority value in the requests?

    - by sputnick
    I have a front-end SOAP web server under Linux. It has to communicate with Windows Server VMs, each listening on an HTTP port for an HTTP POST request. The chosen VM should return a report of the task to the SOAP client. In the SOAP requests there's a special variable: the priority of the request (a kind of SLA), and here is my question: I'm thinking of using HA/load-balancing software (nginx, HAProxy, Heartbeat...) that can take this priority into account. Is that a realistic approach, or do you think I need to implement a queue myself with some specific development? For example: if I have low-priority SOAP requests in the pipeline, the weight given to the VMs serving them should be decreased whenever high-priority SOAP requests arrive at the same time. Any clue will be really appreciated.
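
    Purely as a hedged illustration (it assumes the client can mirror the SOAP priority into an HTTP header such as X-Priority, and every name, address, and port here is invented), HAProxy can at least route requests to different backend pools based on such a header; true dynamic re-weighting of servers at runtime would still need extra tooling around it:

        frontend soap_in
            bind *:8080
            # route on a priority header the SOAP client would have to set
            acl high_prio hdr(X-Priority) -i high
            use_backend fast_pool if high_prio
            default_backend normal_pool

        backend fast_pool
            server win1 10.0.0.11:8080 check

        backend normal_pool
            server win2 10.0.0.12:8080 check
            server win3 10.0.0.13:8080 check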

  • Is it possible to run my other partition inside of a VM?

    - by Parris
    So I have a dual-boot machine with Windows 7 and Ubuntu 11. I would like to, at times, boot the OS on one partition as a VM from inside the other. For example, sometimes I would like to do certain things in Windows while still using Ubuntu; at other times the opposite would be more effective. Why? Running a VM is fairly resource intensive, and depending on what I am doing primarily, I would rather be in one operating system than the other, or perhaps it is just convenient. An important consideration is that I should NOT need to "copy" or "duplicate" the partition in any way. The VM should just read the partition as if it were a standard image. Any ideas? I am using VirtualBox currently. I have also used Virtual PC. I could probably get a copy of VMware Workstation if needed.
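
    For the VirtualBox side, one hedged sketch is its raw-disk access mode, which wraps a physical partition in a small VMDK descriptor instead of copying it. Here /dev/sda and partition number 2 are placeholders for wherever Windows actually lives, and note that an OS installed on bare metal may still trip over drivers or activation when booted inside a VM:

        # create a descriptor exposing only partition 2 of /dev/sda, no copying
        sudo VBoxManage internalcommands createrawvmdk \
            -filename ~/win7-raw.vmdk -rawdisk /dev/sda -partitions 2
        # then attach win7-raw.vmdk to a new VM as its disk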

  • Why would Linux VM in vSphere ESXi 5.5 show dramatically increased disk i/o latency?

    - by mhucka
    I'm stumped and I hope someone else will recognize the symptoms of this problem.

    Hardware: new Dell T110 II, dual-core Pentium G860 2.9 GHz, onboard SATA controller, one new 500 GB 7200 RPM cabled hard drive inside the box, other drives inside but not mounted yet. No RAID.

    Software: fresh CentOS 6.5 virtual machine under VMware ESXi 5.5.0 (build 174 + vSphere Client), 2.5 GB RAM allocated. The disk is laid out the way CentOS offered to set it up, namely as a volume inside an LVM volume group, except that I skipped having a separate /home and simply have / and /boot. CentOS is patched up, ESXi is patched up, and the latest VMware Tools are installed in the VM. No users on the system, no services running, no files on the disk but the OS installation. I'm interacting with the VM via the virtual console in vSphere Client.

    Before going further, I wanted to check that I configured things more or less reasonably. I ran the following command as root in a shell on the VM:

        for i in 1 2 3 4 5 6 7 8 9 10; do
            dd if=/dev/zero of=/test.img bs=8k count=256k conv=fdatasync
        done

    I.e., just repeat the dd command 10 times, which results in printing the transfer rate each time. The results are disturbing. It starts off well:

        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 20.451 s, 105 MB/s
        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 20.4202 s, 105 MB/s

    ...but after 7-8 of these, it then prints:

        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 82.9779 s, 25.9 MB/s
        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 84.0396 s, 25.6 MB/s
        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 103.42 s, 20.8 MB/s

    If I wait a significant amount of time, say 30-45 minutes, and run it again, it goes back to 105 MB/s, and after several rounds (sometimes a few, sometimes 10+) it drops to ~20-25 MB/s again.

    Plotting the disk latency in vSphere's interface shows periods of high disk latency hitting 1.2-1.5 seconds during the times that dd reports the low throughput. (And yes, things get pretty unresponsive while that's happening.)

    What could be causing this? I'm confident it is not the disk failing, because I had also configured two other disks as an additional volume in the same system. At first I thought I did something wrong with that volume, but after commenting the volume out of /etc/fstab, rebooting, and trying the tests on / as shown above, it became clear that the problem is elsewhere. It is probably an ESXi configuration problem, but I'm not very experienced with ESXi. It's probably something stupid, but after trying to figure this out for many hours over multiple days, I can't find the problem, so I hope someone can point me in the right direction.

    (P.S.: yes, I know this hardware combo won't win any speed awards as a server, and I have reasons for using this low-end hardware and running a single VM, but I think that's beside the point for this question [unless it's actually a hardware problem].)

    ADDENDUM #1: Reading other answers such as this one made me try adding oflag=direct to dd. However, it makes no difference in the pattern of results: initially the numbers are higher for many rounds, then they drop to 20-25 MB/s. (The initial absolute numbers are in the 50 MB/s range.)

    ADDENDUM #2: Adding sync ; echo 3 > /proc/sys/vm/drop_caches into the loop does not make a difference at all.

    ADDENDUM #3: To take out further variables, I now run dd such that the file it creates is larger than the amount of RAM on the system. The new command is dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct. Initial throughput numbers with this version of the command are ~50 MB/s. They drop to 20-25 MB/s when things go south.

    ADDENDUM #4: Here is the output of iostat -d -m -x 1 running in another terminal window while performance is "good" and then again when it's "bad". (While this is going on, I'm running dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct.) First, when things are "good", it shows this: [iostat output missing]. When things go "bad", iostat -d -m -x 1 shows this: [iostat output missing].

  • Why does a VirtualBox VDI take double the space of the VM's hard disk?

    - by logoff
    I have a Xubuntu 12.10 64-bit VirtualBox VM on a Windows 7 64-bit host. It has one dynamically allocated hard disk in VDI format with a maximum capacity of 20 GB. If I run df -h in the VM, it reports 5.3 GB in use on the main partition. I have only 2 partitions: one for the ext4 filesystem and another with 512 MB of swap. I have no snapshots. The VDI file of this VM is 10.7 GB. Is this difference in space normal? Is it caused by the VDI format?
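
    A dynamically allocated VDI only ever grows: blocks freed inside the guest (deleted packages, logs, temp files) stay allocated in the file, so the image can easily be twice the guest's current usage. A hedged sketch of the usual remedy (device and file names are placeholders): zero the free space from inside the guest, then compact the image on the host:

        # inside the guest, ideally from a live CD with the filesystem read-only:
        zerofree /dev/sda1
        # on the host, with the VM powered off:
        VBoxManage modifyhd "xubuntu.vdi" --compact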

  • [ebp + 6] instead of +8 in a JIT compiler

    - by David Titarenco
    I'm implementing a simplistic JIT compiler in a VM I'm writing for fun (mostly to learn more about language design) and I'm getting some weird behavior; maybe someone can tell me why. First I define a JIT "prototype" both for C and C++:

        #ifdef __cplusplus
        typedef void* (*_JIT_METHOD) (...);
        #else
        typedef (*_JIT_METHOD) ();
        #endif

    I have a compile() function that will compile stuff into ASM and stick it somewhere in memory:

        void* compile (void* something) {
            // grab some memory
            unsigned char* buffer = (unsigned char*) malloc (1024);

            // xor eax, eax
            // inc eax
            // inc eax
            // inc eax
            // ret -> eax should be 3
            /* WORKS!
            buffer[0] = 0x67; buffer[1] = 0x31; buffer[2] = 0xC0;
            buffer[3] = 0x67; buffer[4] = 0x40;
            buffer[5] = 0x67; buffer[6] = 0x40;
            buffer[7] = 0x67; buffer[8] = 0x40;
            buffer[9] = 0xC3;
            */

            // xor eax, eax
            // mov eax, 9
            // ret 4 -> eax should be 9
            /* WORKS!
            buffer[0] = 0x67; buffer[1] = 0x31; buffer[2] = 0xC0;
            buffer[3] = 0x67; buffer[4] = 0xB8;
            buffer[5] = 0x09; buffer[6] = 0x00; buffer[7] = 0x00; buffer[8] = 0x00;
            buffer[9] = 0xC3;
            */

            // push ebp
            // mov ebp, esp
            // mov eax, [ebp + 6] ; wtf? shouldn't this be [ebp + 8]!?
            // mov esp, ebp
            // pop ebp
            // ret -> eax should be the first value sent to the function
            /* WORKS! */
            buffer[0] = 0x66; buffer[1] = 0x55;
            buffer[2] = 0x66; buffer[3] = 0x89; buffer[4] = 0xE5;
            buffer[5] = 0x66; buffer[6] = 0x66; buffer[7] = 0x8B; buffer[8] = 0x45; buffer[9] = 0x06;
            buffer[10] = 0x66; buffer[11] = 0x89; buffer[12] = 0xEC;
            buffer[13] = 0x66; buffer[14] = 0x5D;
            buffer[15] = 0xC3;

            // mov eax, 5
            // add eax, ecx
            // ret -> eax should be 50
            /* WORKS!
            buffer[0] = 0x67; buffer[1] = 0xB8;
            buffer[2] = 0x05; buffer[3] = 0x00; buffer[4] = 0x00; buffer[5] = 0x00;
            buffer[6] = 0x66; buffer[7] = 0x01; buffer[8] = 0xC8;
            buffer[9] = 0xC3;
            */

            return buffer;
        }

    And finally I have the main chunk of the program:

        void main (int argc, char **args) {
            DWORD oldProtect = (DWORD) NULL;
            int i = 667, j = 1, k = 5, l = 0;

            // generate some arbitrary function
            _JIT_METHOD someFunc = (_JIT_METHOD) compile(NULL);

            // windows only
            #if defined _WIN64 || defined _WIN32
            // set memory permissions and flush CPU code cache
            VirtualProtect(someFunc, 1024, PAGE_EXECUTE_READWRITE, &oldProtect);
            FlushInstructionCache(GetCurrentProcess(), someFunc, 1024);
            #endif

            // this asm just for some debugging/testing purposes
            __asm mov ecx, i

            // run compiled function (from wherever *someFunc is pointing to)
            l = (int) someFunc(i, k);

            // did it work?
            printf("result: %d", l);

            free (someFunc);
            _getch();
        }

    As you can see, the compile() function has a couple of tests I ran to make sure I get expected results, and pretty much everything works, but I have a question... On most tutorials or documentation resources, to get the first value passed to a function (in the case of ints) you do [ebp+8], the second [ebp+12], and so forth. For some reason, I have to do [ebp+6], then [ebp+10], and so forth. Could anyone tell me why?

  • How to properly shut down a Linux VMware Server host

    - by Mikee
    Hi everybody: In our lab, we have a server running Ubuntu Linux 8.04.4 x64, with VMware Server 2.1 hosting 4 VMs. I have a major concern with regard to shutting down the host server; mostly, how do I ensure that the guest VMs are being shut down safely? In the VMware web interface console, I have:

    - enabled "Allow virtual machines to start and stop automatically with the system";
    - enabled the Default Startup Delay of 15 seconds, with the "Start next VM immediately if the VMware Tools start" option checked;
    - enabled the Default Shutdown Delay with a 60-second shutdown delay and a Shutdown Action of "Shut Down Guest".

    All VMs have the VMware Tools installed and properly working. All VMs are moved up into the "Specified Order" section of "Startup Order", so when powering the server back on, all those VMs should start up again in that specified order.

    When I went to shut down the server, I used the shutdown -h now command. Based on the settings I entered above, I was expecting a roughly 4-minute shutdown, as there is an option to delay the shutdown of each VM by 60 seconds. However, that is not what happened. Instead, the server shut down in under a minute. When I powered the server back on, only 2 VMs loaded properly. The other 2 showed the following error:

        "Power on Virtual Machine" failed to complete
        If these problems persist, please contact your system administrator.
        Details: Cannot open the disk '[location to .vmdk]' or one of the
        snapshot disks it depends on.
        Reason: Failed to lock the file.

    Obviously, if this error occurred, then it is clear to me that the VMs were not properly shut down, or the server powered off before the VMs were completely shut down. I have fixed the above error by deleting the .lck files in the respective VM directories. How would I know whether the VMs were properly shut down? I checked the VMware server logs, but they only seem to display the logs of the vmware-mgmt service for the current session. I'm mostly running Linux VMs, so is there an easy way to know whether or not a server was properly shut down in Linux? Thank you all for the help!
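
    One way to take the guesswork out of it is to stop the guests explicitly before the host goes down. A hedged sketch using the vmrun utility that ships with VMware Server 2 (the host URL, credentials, and datastore path below are placeholders, and the exact flags can vary by install):

        # list registered/running VMs on this host
        vmrun -T server -h https://localhost:8333/sdk -u root -p secret list
        # ask one guest to shut down cleanly via VMware Tools ("soft" stop)
        vmrun -T server -h https://localhost:8333/sdk -u root -p secret \
            stop "[standard] myvm/myvm.vmx" soft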

  • Get the JVM to grow its memory demand as needed, up to the VM size limit?

    - by Ira Baxter
    We ship a Java application whose memory demand can vary quite a lot depending on the size of the data it is processing. If you don't set the max VM (virtual memory) size, the JVM quite often quits with a GC failure on big data. What we'd like to see is the JVM requesting more memory, as GC fails to provide enough, until the total available VM is exhausted; e.g., start with 128 MB and increase geometrically (or by some other step) whenever GC fails. The JVM ("java") command line allows explicit setting of max VM sizes (the various -Xm* options), and you'd think that would be designed to be adequate. We try to do this in a .cmd file that we ship with the application. But if you pick any specific number, you get one of two bad behaviors: (1) if your number is small enough to work on most target systems (e.g., 1 GB), it isn't big enough for big data, or (2) if you make it very large, the JVM refuses to run on those systems whose actual VM is smaller than specified. How does one set up Java to use the available VM when needed, without knowing that number in advance, and without grabbing it all on startup?
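
    For what it's worth, HotSpot already grows the heap on demand between -Xms and -Xmx; it reserves -Xmx of address space up front but commits memory only as needed, so one common pattern is a small initial heap with a deliberately large ceiling. A crude sketch (sizes and the jar name are placeholders), including a fallback loop for hosts where a big -Xmx refuses to start:

        # small initial heap, large ceiling; memory is committed only as needed
        java -Xms128m -Xmx6g -jar ourapp.jar

        # crude probe: try successively smaller ceilings until the JVM launches
        for max in 6g 4g 2g 1g; do
            java -Xms128m -Xmx$max -jar ourapp.jar && break
        done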

  • Why are so many new languages written for the Java VM?

    - by sdudo
    There are more and more programming languages (Scala, Clojure, ...) coming out that are made for the Java VM and are therefore compatible with Java bytecode. I'm beginning to ask myself: why the Java VM? What makes it so powerful or popular that new programming languages, which seem to be gaining popularity too, are created for it? Why don't they write a new VM for a new language?

  • Ubuntu server in a VM: outgoing network (ping google.com) working, incoming (127.0.0.1:8080) is not. Was working previously

    - by IvarsB
    I have recently installed Ubuntu Server with LAMP, OpenSSH and mail in an Oracle VirtualBox VM. Its incoming networking was working until recently: Apache's default page could be seen when opening 127.0.0.1:8080. But now it's not! :( Could you give me any tips? I couldn't google anything that helped me. :( I'm running Windows 7 with these settings: http://www.bildites.lv/images/3d91ikwtraw0ld7lhv.png I recently ran apt-get --purge remove phpmyadmin. Could that be the problem? How should I fix it? Thank you in advance! Ivars. EDIT: Sorry for the lame formatting.
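
    Since 127.0.0.1:8080 on the host only reaches a NAT'd guest through VirtualBox's port forwarding, it's worth confirming the forwarding rule still exists. A hedged sketch (the VM name is a placeholder; run on the host with the VM powered off):

        # forward host 127.0.0.1:8080 to the guest's port 80
        VBoxManage modifyvm "ubuntu-server" --natpf1 "http,tcp,127.0.0.1,8080,,80"
        # inspect the VM's current settings, including NAT forwarding rules
        VBoxManage showvminfo "ubuntu-server"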

  • A dropdownlist in ASP.NET MVC, although not [required], shows up as required and Model State is not valid

    - by VJ
    I have the following code.

    View:

        <% Html.BeginForm(); %>
        <div>
        <%= Html.DropDownList("DropDownSelectList",
                new SelectList(Model.DropDownSelectList, "Value", "Text")) %>

    Controller:

        public ActionResult Admin(string apiKey, string userId)
        {
            ChallengesAdminViewModel vm = new ChallengesAdminViewModel();
            vm.ApiKey = apiKey;
            vm.UserId = userId;
            vm.DropDownSelectList = new List<SelectListItem>();
            vm.DropDownSelectList.Add(listItem1);
            vm.DropDownSelectList.Add(listItem2);
            vm.DropDownSelectList.Add(listItem3);
            vm.DropDownSelectList.Add(listItem4);
            vm.DropDownSelectList.Add(listItem5);
            vm.DropDownSelectList.Add(listItem6);
            vm.DropDownSelectList.Add(listItem7);
        }

        [HttpPost]
        public ActionResult Admin(ChallengesAdminViewModel vm)
        {
            if (ModelState.IsValid) // the null dropdown list makes the model state invalid
            {
            }
        }

    ViewModel:

        public class ChallengesAdminViewModel
        {
            [Required]
            public string ApiKey { get; set; }

            [Required]
            public string UserId { get; set; }

            public List<SelectListItem> DropDownSelectList { get; set; }
        }

    I don't know why it still requires the list although it is not marked [Required]. I want only the two attributes to be required. So I wanted to know how to declare or change that list so it is not required and my Model State is valid.

  • Why does VirtualBox use 15-20% CPU when the VM is paused?

    - by laramichaels
    Hello, I run VirtualBox 3.1 on Ubuntu with a Win XP guest. I have noticed to my surprise that when I pause the VM (its screen grays out) VirtualBox continues using 15-20% of the host's CPU. Is this normal behavior? Is there a way to avoid it? (Without saving the state of the VM and exiting VirtualBox.) Thanks for any insights! ~lara

  • How is a VM isolated from the physical host?

    - by dotnetdev
    Hi, I was thinking about virtualisation and how to explain it to a non-technical person, and one of the things I was wondering was how to explain the way that a VM is isolated and separate from the physical machine (so I can have a virus on a VM but it would never affect my physical host, right?). How does this technology work, exactly? As I am a programmer, when I think of isolating processes, I think of using appdomains (I work with C# primarily). Thanks

  • Can I run 64-bit VM guests on a 32-bit host?

    - by Maestro1024
    Can I run 64-bit VM guests on a 32-bit host? If I have a physical PC running a 32-bit OS, can I launch a VM that is 64-bit? What virtual machine software (Virtual PC, VirtualBox, or other) would allow this? I read that VMware may support this, but I am looking for something open source or free. The host would preferably be Windows but could be Linux. The guest needs to be Windows. Thanks

  • List of Lua derived VMs and Languages

    - by Shane Holloway
    Is there a compendium of virtual machines and languages derived from or inspired by Lua? By derived, I mean usage beyond embedding and extending with modules. I want to research the Lua technology tree, and am looking for our combined knowledge of what already exists. Current list:

    - Bright - a C-like Lua derivative: http://bluedino.net/luapix/Bright.pdf
    - Agena - an Algol68/SQL-like Lua derivative: http://agena.sourceforge.net/
    - LuaJIT - a (very impressive) JIT for Lua: http://luajit.org
    - MetaLua - an ML-style language extension: http://metalua.luaforge.net/

  • What are the main benefits of implementing a virtual machine as part of an application?

    - by Marplesoft
    Several databases I've been looking at recently implement a virtual machine internally to perform their data reads and writes. For an example, check out this article on SQLite's virtual machine, called the 'VDBE'. I'm curious what the benefits of such an architecture are. I would assume performance is one, but why would a virtual machine like this run faster? In fact, it seems that this extra layer could make it run slower. So perhaps it's for security? Or portability? Anyway, just curious about this.

  • Implementing eval and load functions inside a scripting engine with Flex and Bison.

    - by Simone Margaritelli
    Hi guys, I'm developing a scripting engine with flex and bison and now I'm implementing the eval and load functions for this language. Just to give you an example, the syntax is like:

        import std.*;

        load( "some_script.hy" );
        eval( "foo = 123;" );
        println( foo );

    So, in my lexer I've implemented the function:

        void hyb_parse_string( const char *str ){
            extern int yyparse(void);
            YY_BUFFER_STATE prev, next;

            /* Save current buffer. */
            prev = YY_CURRENT_BUFFER;

            /* yy_scan_string will call yy_switch_to_buffer. */
            next = yy_scan_string( str );

            /* Do actual parsing (yyparse calls yylex). */
            yyparse();

            /* Restore previous buffer. */
            yy_switch_to_buffer(prev);
        }

    But it does not seem to work. Well, it does, but when the string (loaded from a file or directly evaluated) is finished, I get a SIGSEGV:

        Program received signal SIGSEGV, Segmentation fault.
        0xb7f2b801 in yylex () at src/lexer.cpp:1658
        1658    if ( YY_CURRENT_BUFFER_LVALUE->yy_buffer_status == YY_BUFFER_NEW )

    As you may notice, the SIGSEGV is generated by the flex/bison code, not mine... any hints, or at least any example of how to implement these kinds of functions?

    PS: I've successfully implemented the include directive, but I need eval and load to work not at parsing time but at execution time (kind of like PHP's include/require directives).

  • Algorithms for modern hardware?

    - by Jurily
    Once again, I find myself with a set of broken assumptions. The article itself is about a 10x performance gain obtained by modifying a proven-optimal algorithm to account for virtual memory:

        What good is an O(log2(n)) algorithm if those operations cause page
        faults and slow disk operations? For most relevant datasets an O(n)
        or even an O(n^2) algorithm, which avoids page faults, will run
        circles around it.

    Are there more such algorithms around? Should we re-examine all those fundamental building blocks of our education? What else do I need to watch out for when writing my own?

  • How do polymorphic inline caches work with mutable types?

    - by kingkilr
    A polymorphic inline cache works by caching the actual method by the type of the object, in order to avoid expensive lookup procedures (usually a hashtable lookup). How does one handle the type comparison if the type objects are mutable (i.e., a method might be monkey-patched into something different at run time)? One idea I've come up with is a "class counter" that gets incremented each time a method is adjusted; however, this seems like it would be exceptionally expensive in a heavily monkey-patched environment, since it would kill all the PICs for that class even if the methods they cache weren't altered. I'm sure there must be a good solution to this, as the issue applies directly to JavaScript, and AFAIK all 3 of the big JS VMs have PICs (wow, acronym ahoy).
