Search Results

Search found 17847 results on 714 pages for 'virtual disk'.

Page 157/714 | < Previous Page | 153 154 155 156 157 158 159 160 161 162 163 164  | Next Page >

  • How do you work out the IIS Virtual Path for an application?

    - by joshcomley
    When I try to change the ASP.NET version to v4 on IIS 6, I receive the following warning: Changing the Framework version requires a restart of the W3SVC service. Alternatively, you can change the Framework version without restarting the W3SVC service by running: aspnet_regiis.exe -norestart -s IIS-Virtual-Path Do you want to continue (this will change the Framework version and restart the W3SVC service)? How do I work out IIS-Virtual-Path? I have tried the obvious paths, e.g.: aspnet_regiis.exe -norestart -s "/WebSites/Extranet/AppName" where WebSites is the name of the folder in IIS, Extranet the name of the root app and AppName the name of the Virtual Directory application I am trying to change. Thanks! Edit: How do I work out the virtual path for the Auth virtual directory in the following IIS6 setup: I have tried: aspnet_regiis.exe -norestart -s "/Web Sites/Extranet/Auth" aspnet_regiis.exe -norestart -s "Auth" I get: Installation stopped because the specified path (WhateverIPutIn) is invalid.
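    A hedged note, not part of the original question: the -s switch of aspnet_regiis.exe expects an IIS metabase path rather than the friendly names shown in IIS Manager, so the Auth virtual directory would typically be addressed as something like aspnet_regiis.exe -norestart -s "W3SVC/1/ROOT/Auth" (or "W3SVC/1/ROOT/Extranet/Auth" if Auth is nested under the Extranet application), where 1 is a placeholder for the numeric site identifier that IIS Manager lists for the web site; the real identifier is not given in the question.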

    Read the article

  • Why can't I declare C# methods virtual and static?

    - by Luke
    I have a helper class that is just a bunch of static methods and would like to subclass the helper class. Some behavior is unique depending on the subclass so I would like to call a virtual method from the base class, but since all the methods are static I can't create a plain virtual method (need object reference in order to access virtual method). Is there any way around this? I guess I could use a singleton.. HelperClass.Instance.HelperMethod() isn't so much worse than HelperClass.HelperMethod(). Brownie points for anyone that can point out some languages that support virtual static methods. Edit: OK yeah I'm crazy. Google search results had me thinking I wasn't for a bit there.

    Read the article

  • Why does my 64-bit IIS app pool show 3 gigabytes more virtual memory than private memory?

    - by Brett
    I have an ASP.Net application that I am running on 64-bit IIS 6 on Windows XP x64. When I open performance counters after one page hit of a trivial page, I see a Private Bytes of about 88 megs, but a Virtual Bytes of about 3 Gigs. When I try the same thing with a VERY trivial ASP.Net app, I get the same result. We see something similar on Windows Server 2003 in production -- there it is an issue because we recycle when the virtual memory consumed outgrows a limit. Before we make any changes to our recycling settings, we'd like to answer the following questions: Why does the app pool grab such a large hunk of virtual memory? Is the amount of virtual memory headroom the app requests configurable? Thanks! Brett

    Read the article

  • Is it possible to define a virtual directory in IIS and make the files relative to the physical directory?

    - by Mikey John
    Is it possible to define a virtual directory in IIS and somehow make the files in that directory relative to the physical directory and not to the virtual directory? For instance, on my server I have the following folders: D:\WebSite\Css\myTheme.css, D:\WebSite\Images\image1.jpg. I created a virtual directory on IIS, resources.mysite. Inside my website I reference the style sheet like this: resources.mysite/myTheme.css. But inside myTheme.css I reference pictures from ../Images/images1.jpg. So the problem is that image1.jpg is not found, because it is relative to the physical folder and not the virtual folder on IIS. Can I solve this problem without modifying the style sheet?

    Read the article

  • How to implement an interface class using the non-virtual interface idiom in C++?

    - by andreas buykx
    Hi all, In C++ an interface can be implemented by a class with all its methods pure virtual. Such a class could be part of a library to describe what methods an object should implement to be able to work with other classes in the library: class Lib::IFoo { public: virtual void method() = 0; }; class Lib::Bar { public: void stuff( Lib::IFoo & ); }; Now I want to use class Lib::Bar, so I have to implement the IFoo interface. For my purposes I need a whole set of related classes so I would like to work with a base class that guarantees common behavior using the NVI idiom: class FooBase : public IFoo // implement interface IFoo { public: void method(); // calls methodImpl; private: virtual void methodImpl(); }; The non-virtual interface (NVI) idiom ought to deny derived classes the possibility of overriding the common behavior implemented in FooBase::method(), but since IFoo made it virtual it seems that all derived classes have the opportunity to override FooBase::method(). If I want to use the NVI idiom, what are my options other than the pImpl idiom already suggested (thanks space-c0wb0y).
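    A minimal sketch of one way out, assuming the common behaviour is allowed to live in the interface itself (which then stops being a pure abstract class); the names IFoo and methodImpl come from the question, while commonPrologue and its body are illustrative assumptions:
    #include <iostream>

    class IFoo {
    public:
        virtual ~IFoo() = default;
        void method() {                  // non-virtual entry point: cannot be overridden
            commonPrologue();            // behaviour shared by every implementation
            methodImpl();                // per-class behaviour
        }
    private:
        void commonPrologue() { std::cout << "common checks\n"; }
        virtual void methodImpl() = 0;   // the only customization point
    };

    class Foo : public IFoo {
    private:
        void methodImpl() override { std::cout << "Foo-specific work\n"; }
    };

    int main() {
        Foo f;
        f.method();                      // runs the common prologue, then Foo's work
    }
    With this layout Lib::Bar still accepts any IFoo, clients always go through the non-virtual method(), and derived classes can only customize the private methodImpl() hook; what NVI cannot do, in either layout, is stop a derived class from overriding the hook itself.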

    Read the article

  • Is there a way to split a widescreen monitor in to two or more virtual monitors?

    - by Mike Thompson
    Like most developers I have grown to love dual monitors. I won't go into all the reasons for their goodness; just take it as a given. However, they are not perfect. You can never seem to line them up "just right". You always end up with the monitors at slightly funny angles. And of course the bezel always gets in the way. And this is with identical monitors. The problem is much worse with different monitors -- VMWare's multi monitor feature won't even work with monitors of different resolutions. When you use multiple monitors, one of them becomes your primary monitor of focus. Your focus may flip from one monitor to the other, but at any point in time you are usually focusing on only one monitor. There are exceptions to this (WinDiff, Excel), but this is generally the case. I suggest that having a single large monitor with all the benefits of multiple smaller monitors would be a better solution. Wide screen monitors are fantastic, but it is hard to use all the space efficiently. If you are writing code you are generally working on the left-hand side of the window. If you maximize an editor on a wide-screen monitor the right-hand side of the window will be a sea of white. Programs like WinSplit Revolution will help to organise your windows, but this is really just addressing the symptom, not the problem. Even with WinSplit Revolution, when you maximise a window it will take up the whole screen. You can't lock a window into a specific section of the screen. This is where virtual monitors come in. What would be really nice is a video driver that sits on top of the existing driver, but allows a single monitor to be virtualised into multiple monitors. Control Panel would see your single physical monitor as two or more virtual monitors. The software could even support a virtual bezel to emphasise what is happening, or you could opt for seamless mode. Programs like WinSplit Revolution and UltraMon would still work. This virtual video driver would allow you to slice & dice your physical monitor into as many virtual monitors as you want. Does anybody know if such software exists? If not, are there any budding Windows display driver gurus out there willing to take up the challenge? I am not after the myriad of virtual desktop/window manager programs that are available. I get frustrated with these programs. They seem good at first but they usually have some strange behaviour and don't work well with other programs (such as WinSplit Revolution). I want the real thing!

    Read the article

  • virtual directory not being configured as an application in IIS.

    - by david
    Hi, I am completely new to IIS and ASP.NET. I am trying to set up BugNET on a GoDaddy server. I created a virtual directory, and once I tried to launch the site I got this error: Parser Error Message: It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level. This error can be caused by a virtual directory not being configured as an application in IIS. Here is the complete detail of what I did. Hosting: GoDaddy. I created a virtual directory (a child folder of the root) named devbunk, with these settings (anonymous access, directory browsing). That is all I can do with IIS on GoDaddy. The error tells me that I need to turn the virtual directory into an application. GoDaddy doesn't let me do that... how do I do it? BTW, I have IIS 7 set up.

    Read the article

  • Making Visual C++ DLL from C++ class

    - by prosseek
    I have the following C++ code to make a DLL (Visual Studio 2010). class Shape { public: Shape() { nshapes++; } virtual ~Shape() { nshapes--; }; double x, y; void move(double dx, double dy); virtual double area(void) = 0; virtual double perimeter(void) = 0; static int nshapes; }; class __declspec(dllexport) Circle : public Shape { private: double radius; public: Circle(double r) : radius(r) { }; virtual double area(void); virtual double perimeter(void); }; class __declspec(dllexport) Square : public Shape { private: double width; public: Square(double w) : width(w) { }; virtual double area(void); virtual double perimeter(void); }; I have the __declspec on the derived classes, e.g. class __declspec(dllexport) Circle. I could build a DLL with the following command: CL.exe /c example.cxx link.exe /OUT:"example.dll" /DLL example.obj When I tried to use the library (Square* square; square->area()) I got the following error messages. What's wrong or missing? example_unittest.obj : error LNK2001: unresolved external symbol "public: virtual double __thiscall ... Square::area(void)" (?area@Square@@UAENXZ) ADDED: Following wengseng's answer, I modified the header file, and for the DLL C++ code I added #define XYZLIBRARY_EXPORT. However, I still got errors. example_unittest.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: __thiscall Circle::Circle(double)" (__imp_??0Circle@@QAE@N@Z) referenced in function "protected: virtual void __thiscall TestOne::SetUp(void)" (?SetUp@TestOne@@MAEXXZ) example_unittest.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: __thiscall Square::Square(double)" (__imp_??0Square@@QAE@N@Z) referenced in function "protected: virtual void __thiscall TestOne::SetUp(void)" (?SetUp@TestOne@@MAEXXZ) example_unittest.obj : error LNK2001: unresolved external symbol "public: virtual double __thiscall Square::area(void)" (?area@Square@@UAENXZ) example_unittest.obj : error LNK2001: unresolved external symbol "public: virtual double __thiscall Square::perimeter(void)" (?perimeter@Square@@UAENXZ) example_unittest.obj : error LNK2001: unresolved external symbol "public: virtual double __thiscall Circle::area(void)" (?area@Circle@@UAENXZ) example_unittest.obj : error LNK2001: unresolved external symbol "public: virtual double __thiscall Circle::perimeter(void)" (?perimeter@Circle@@UAENXZ)
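    A hedged sketch of the export/import header pattern the poster alludes to (XYZLIBRARY_EXPORT is the name used above; the XYZ_API macro name is an assumption, not code from the question). The project that builds example.dll defines XYZLIBRARY_EXPORT so the classes are dllexport; the unit-test project leaves it undefined, sees dllimport, and must also link against the import library (example.lib) that link.exe /DLL generates, which is a common cause of LNK2019/LNK2001 errors like the ones shown:
    // example.h -- shared by the DLL build and its consumers
    #ifdef XYZLIBRARY_EXPORT
    #  define XYZ_API __declspec(dllexport)   // building example.dll
    #else
    #  define XYZ_API __declspec(dllimport)   // consuming example.dll
    #endif

    class XYZ_API Shape {                     // export the base class as well
    public:
        Shape() { nshapes++; }
        virtual ~Shape() { nshapes--; }
        double x, y;
        void move(double dx, double dy);
        virtual double area(void) = 0;
        virtual double perimeter(void) = 0;
        static int nshapes;
    };

    class XYZ_API Circle : public Shape {
        double radius;
    public:
        Circle(double r) : radius(r) { }
        virtual double area(void);
        virtual double perimeter(void);
    };

    class XYZ_API Square : public Shape {
        double width;
    public:
        Square(double w) : width(w) { }
        virtual double area(void);
        virtual double perimeter(void);
    };
    Build roughly as CL.exe /c /DXYZLIBRARY_EXPORT example.cxx followed by link.exe /OUT:example.dll /DLL example.obj, then add example.lib to the unit-test project's linker inputs. Separately, Square* square; square->area() calls through an uninitialized pointer; the test needs a real object first (for example Square square(2.0); square.area()), but that is a runtime problem, not the cause of the linker errors.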

    Read the article

  • Is there any way to prevent a Delphi application from using Virtual Storage on Vista/Win 7 without e

    - by croceldon
    The question pretty much says it all. I have an app with an older component that doesn't work right if runtime themes are enabled. But if I don't enable them, the app always ends up messing with the virtual store. Thanks! Update: Using Mark's solution below, the application no longer writes to the Virtual Store. But, now it won't access a tdb file (Tiny Database file) that it needs. This tdb file is the same file that was being written to the Virtual store. Any ideas on how I can give it access to the tdb file and still prevent writing the Virtual Store?

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed the database must be restored, requiring additional and costly storage. For example, if you want to give each developer a full copy of your production server, you'll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle. If you've read my previous blog posts, you'll be aware that I've been focusing on the database continuous integration theme. In my CI setup I create a "production"-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it's not the exact equivalent to running the upgrade script on a copy of the actual production database. So why shouldn't I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical. 1. My CI environment isn't an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley's "Continuous Delivery" teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you've been allotted. 2. It's not just about the storage requirements, it's also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control I'm just not going to get the feedback quickly enough to react. So what's the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I'm sure the technical implementation details under the hood are quite complex!) Instead of restoring the backup in the conventional sense, SQL Virtual Restore will effectively mount the backup using its HyperBac technology. It creates a data file and a log file, .vmdf and .vldf, that become the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database, as from SQL Server's point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there are no 'duplicate' storage requirements, other than the trivially small virtual log and data files (see illustration below). The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team a full development instance of a large production database. It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly "release test" process triggered by my CI tool.
RESTORE DATABASE WidgetProduction_Virtual
FROM DISK=N'C:\WidgetWF\ProdBackup\WidgetProduction.bak'
WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
NORECOVERY, STATS=1, REPLACE
GO
RESTORE DATABASE WidgetProduction_Virtual WITH RECOVERY

    Read the article

  • 3Ware 9650SE RAID-6, two degraded drives, one ECC, rebuild stuck

    - by cswingle
    This morning I came in the office to discover that two of the drives on a RAID-6, 3ware 9650SE controller were marked as degraded and it was rebuilding the array. After getting to about 4%, it got ECC errors on a third drive (this may have happened when I attempted to access the filesystem on this RAID and got I/O errors from the controller). Now I'm in this state:
    > /c2/u1 show
    Unit   UnitType  Status      %RCmpl  %V/I/M  Port  Stripe  Size(GB)
    ------------------------------------------------------------------------
    u1     RAID-6    REBUILDING  4%(A)   -       -     64K     7450.5
    u1-0   DISK      OK          -       -       p5    -       931.312
    u1-1   DISK      OK          -       -       p2    -       931.312
    u1-2   DISK      OK          -       -       p1    -       931.312
    u1-3   DISK      OK          -       -       p4    -       931.312
    u1-4   DISK      OK          -       -       p11   -       931.312
    u1-5   DISK      DEGRADED    -       -       p6    -       931.312
    u1-6   DISK      OK          -       -       p7    -       931.312
    u1-7   DISK      DEGRADED    -       -       p3    -       931.312
    u1-8   DISK      WARNING     -       -       p9    -       931.312
    u1-9   DISK      OK          -       -       p10   -       931.312
    u1/v0  Volume    -           -       -       -     -       7450.5
    Examining the SMART data on the three drives in question, the two that are DEGRADED are in good shape (PASSED without any Current_Pending_Sector or Offline_Uncorrectable errors), but the drive listed as WARNING has 24 uncorrectable sectors. And, the "rebuild" has been stuck at 4% for ten hours now. So: How do I get it to start actually rebuilding? This particular controller doesn't appear to support /c2/u1 resume rebuild, and the only rebuild command that appears to be an option is one that wants to know what disk to add (/c2/u1 start rebuild disk=<p:-p...> [ignoreECC] according to the help). I have two hot spares in the server, and I'm happy to engage them, but I don't understand what it would do with that information in the current state it's in. Can I pull out the drive that is demonstrably failing (the WARNING drive), when I have two DEGRADED drives in a RAID-6? It seems to me that the best scenario would be for me to pull the WARNING drive and tell it to use one of my hot spares in the rebuild. But won't I kill the thing by pulling a "good" drive in a RAID-6 with two DEGRADED drives? Finally, I've seen reference in other posts to a bad bug in this controller that causes good drives to be marked as bad and that upgrading the firmware may help. Is flashing the firmware a risky operation given the situation? Is it likely to help or hurt wrt the rebuilding-but-stuck-at-4% RAID? Am I experiencing this bug in action? Advice outside the spiritual would be much appreciated. Thanks.

    Read the article

  • VMRC equivalent for Hyper-V?

    - by Ian Boyd
    VMRC was the client tool used to connect to virtual machines running on Virtual Server. Upgrading to Windows Server 2008 R2 with the Hyper-V role, I need a way for people to be able to use the virtual machines. Note: not all virtual machines will have network connectivity; not all virtual machines will be running Windows; some people needing to connect to a virtual machine will be running Windows XP. Hyper-V Manager, allowing management of the Hyper-V server, is less desirable (since it allows management of the Hyper-V server, and doesn't work on all operating systems). What is the Windows Server 2008 R2 equivalent of VMRC, to "vnc" to a virtual server? Update: I think Tatas was suggesting Microsoft System Center Virtual Machine Manager Self-Service Portal 2.0 (?), which requires SQL Server and IIS. Installing those would unfortunately violate our Windows Server 2008 R2 license. I might be looking at the wrong product link, since a commenter said there is a version that doesn't require "System Center". Update 2: The Windows Server 2008 R2 running Hyper-V is being licensed with the understanding that it only be used to host Hyper-V. From the Windows Server 2008 R2 Licensing FAQ: Q. If I have one license for Windows Server 2008 R2 Standard and want to run it in a virtual operating system environment, can I continue running it in the physical operating system environment? A. Yes, with Windows Server 2008 R2 Standard, you may run one instance in the physical operating system environment and one instance in the virtual operating system environment; however, the instance running in the physical operating system environment may be used only to run hardware virtualization software, provide hardware virtualization services, or to run software to manage and service operating system environments on the licensed server. This is why I'm wary about installing IIS or SQL Server.

    Read the article

  • How to run a Turnkey Linux virtual machine on XenServer?

    - by Jader Dias
    Turnkey Linux distributes Linux virtual machines in a Xen-compatible format. I have a XenServer instance running and I would like to run a recently downloaded Turnkey Linux virtual machine on it. But I have never used XenServer before. Can you point me to a tutorial specific to this case, since the manual doesn't seem to cover it very well?

    Read the article

  • how to do automatic backup of running vmware virtual machine?

    - by Radek
    I want to do regular automatic backups of my VMware virtual machine (16 GB, Windows XP) while it is running. I do not have access to the ESX admin. I can ask our admin to set up something in the admin area, but I do not have access myself. I have installed a few programs that are important to me, so I want to have a working backup at any point in time. Note: I know I can copy all the files when the virtual machine is not up and running.

    Read the article

  • How to backup boot information of a PC's Hard Disk?

    - by Bear Bear
    I just installed Windows 7 on a partition of my hard disk and I'm planning to install Vista and Windows 8 on other partitions on the same disk, and maybe a few Linux distributions too. Before I install other Operating Systems, I would like to back up the boot information, just to make sure I can get back to Windows 7 in case anything goes wrong with the booting data. I already made a backup of the MBR using MBRFix, but I'm wondering if I should save some Boot Sector from the Windows 7 partition? I noticed that the Windows Vista install DVD has a "bootrec.exe" utility in the recovery console (Shift+F10) and it has some options to rewrite the boot sector of the partitions. Therefore I wonder if there is any utility (preferably on Hiren's BootCD) to back up/write the boot sector. I would also like to know how to back up the boot data of a Linux installation.

    Read the article

  • Repair GRUB bootloader for multiboot system

    - by user1715324
    I have two hard disks – one 1 TB and the other 250 GB. I installed the OSes in the following order: Windows 7 on the first hard disk (1 TB); after that, Kubuntu 12.04 on a partition (/dev/sdb7) on the second hard disk (250 GB). The second drive also contains an NTFS partition. Now, Kubuntu's bootloader was installed on the second hard drive's MBR (and successfully detected Windows 7). So, whenever I wanted to load Windows I used to select the first hard disk from the BIOS boot menu, and the second hard disk whenever I wanted to load Kubuntu. I know I could have set the second hard disk as the default drive, still I preferred this method. The problem started when I installed Linux Mint 13 on the second hard disk (/dev/sdb3) and overwrote Kubuntu's original MBR. Now GRUB just detects Mint and Windows. The MBR on the 1 TB hard disk is untouched. Is there a way I can modify the MBR on the second hard disk now so that it will show both Kubuntu and Mint?
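    A hedged suggestion, assuming the Mint 13 install now owns the second disk's MBR: booting into Mint and running sudo update-grub (which calls os-prober, normally installed by default on Mint) rescans the attached partitions and regenerates /boot/grub/grub.cfg with entries for the Kubuntu and Windows installs it finds; if Kubuntu still does not appear, checking that the os-prober package is installed is the usual next step.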

    Read the article

  • Is dual-booting an OS more or less secure than running a virtual machine?

    - by Mark
    I run two operating systems on two separate disk partitions on the same physical machine (a modern MacBook Pro). In order to isolate them from each other, I've taken the following steps: Configured /etc/fstab with ro,noauto (read-only, no auto-mount) Fully encrypted each partition with a separate encryption key (committed to memory) Let's assume that a virus infects my first partition unbeknownst to me. I log out of the first partition (which encrypts the volume), and then turn off the machine to clear the RAM. I then un-encrypt and boot into the second partition. Can I be reasonably confident that the virus has not / cannot infect both partitions, or am I playing with fire here? I realize that MBPs don't ship with a TPM, so a boot-loader infection going unnoticed is still a theoretical possibility. However, this risk seems about equal to the risk of the VMWare/VirtualBox Hypervisor being exploited when running a guest OS, especially since the MBP line uses UEFI instead of BIOS. This leads to my question: is the dual-partitioning approach outlined above more or less secure than using a Virtual Machine for isolation of services? Would that change if my computer had a TPM installed? Background: Note that I am of course taking all the usual additional precautions, such as checking for OS software updates daily, not logging in as an Admin user unless absolutely necessary, running real-time antivirus programs on both partitions, running a host-based firewall, monitoring outgoing network connections, etc. My question is really a public check to see if I'm overlooking anything here and try to figure out if my dual-boot scheme actually is more secure than the Virtual Machine route. Most importantly, I'm just looking to learn more about security issues. EDIT #1: As pointed out in the comments, the scenario is a bit on the paranoid side for my particular use-case. But think about people who may be in corporate or government settings and are considering using a Virtual Machine to run services or applications that are considered "high risk". Are they better off using a VM or a dual-boot scenario as I outlined? An answer that effectively weighs any pros/cons to that trade-off is what I'm really looking for in an answer to this post. EDIT #2: This question was partially fueled by debate about whether a Virtual Machine actually protects a host OS at all. Personally, I think it does, but consider this quote from Theo de Raadt on the OpenBSD mailing list: x86 virtualization is about basically placing another nearly full kernel, full of new bugs, on top of a nasty x86 architecture which barely has correct page protection. Then running your operating system on the other side of this brand new pile of shit. You are absolutely deluded, if not stupid, if you think that a worldwide collection of software engineers who can't write operating systems or applications without security holes, can then turn around and suddenly write virtualization layers without security holes. -http://kerneltrap.org/OpenBSD/Virtualization_Security By quoting Theo's argument, I'm not endorsing it. I'm simply pointing out that there are multiple perspectives here, so I'm trying to find out more about the issue.

    Read the article

  • Windows 2012 Server Hyper-V: Cannot see LAN

    - by Samuel
    I have one NIC on the machine. I loaded XP on Hyper-V and chose the network adapter as a virtual switch. No LAN and no internet shows up on the client. Am I missing something? It used to work in 2008 R2. Details: one network card on the machine (Qualcomm Atheros AR8131 PCI-E Gigabit Ethernet controller). The virtual machine hard disk is pointing to an existing XP-SP2 hard disk created using VPC 2007. The virtual machine's Network Adapter is set up as a Virtual Switch to the real Ethernet controller, with "Enable virtual LAN identification" set to 2 (no other virtual machine is created in the system). After the virtual machine boots, the LAN shows empty in Control Panel > Network Connections (this is the XP client) and I also cannot access the internet. XP is showing an activation prompt, but as far as I know it should not disable the network! The virtual network switch is set to External.

    Read the article

  • Windows 7 x64 how to verify integrity of ALL files on an NTFS disk?

    - by kilves76
    I'm looking for a tool that can reliably verify the integrity of ALL files on a Windows 7 x64 NTFS disk. This is for testing experimental defrag software, so it really needs to be secure and foolproof. I know it will take a long time, there are millions of files on the disk, but safety just cannot be compromised in a situation like this. A freeware solution is much preferred. It can be either Windows software (which introduces pitfalls around files changing when Windows is booted) or a stand-alone boot (for example a Linux boot CD plus a USB key for storing checksums/metadata).
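    No specific tool is named in the question, but as a minimal sketch of the underlying technique (an assumption, not a recommendation of any product): walk the volume once before the defrag run and once after, write one hash per file to a manifest, and diff the two manifests. The sketch below uses C++17 std::filesystem and a 64-bit FNV-1a hash; FNV-1a is not cryptographic, it only detects accidental corruption, and the root path is a placeholder.
    #include <cstdint>
    #include <filesystem>
    #include <fstream>
    #include <iostream>
    #include <vector>

    namespace fs = std::filesystem;

    // 64-bit FNV-1a over the whole stream, read in 1 MiB chunks.
    static std::uint64_t fnv1a(std::istream& in) {
        std::uint64_t h = 1469598103934665603ULL;            // FNV offset basis
        std::vector<char> buf(1 << 20);
        while (in) {
            in.read(buf.data(), static_cast<std::streamsize>(buf.size()));
            for (std::streamsize i = 0; i < in.gcount(); ++i) {
                h ^= static_cast<unsigned char>(buf[i]);
                h *= 1099511628211ULL;                       // FNV prime
            }
        }
        return h;
    }

    int main(int argc, char* argv[]) {
        const fs::path root = argc > 1 ? argv[1] : "C:\\";   // volume or folder to scan (placeholder)
        try {
            for (fs::recursive_directory_iterator
                     it(root, fs::directory_options::skip_permission_denied), end;
                 it != end; ++it) {
                std::error_code ec;
                if (!it->is_regular_file(ec) || ec) continue;   // skip directories, links, errors
                std::ifstream f(it->path(), std::ios::binary);
                if (!f) { std::cerr << "unreadable: " << it->path().string() << '\n'; continue; }
                std::cout << std::hex << fnv1a(f) << '\t' << it->path().string() << '\n';
            }
        } catch (const std::exception& e) {
            std::cerr << "walk aborted: " << e.what() << '\n';
            return 1;
        }
        return 0;
    }
    Redirect the output to a file on another disk for each run (for example checker.exe D:\ > before.txt), then compare the two manifests; any line that differs points at a file whose contents changed.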

    Read the article

< Previous Page | 153 154 155 156 157 158 159 160 161 162 163 164  | Next Page >