Search Results

Search found 16587 results on 664 pages for 'virtual hardware'.

Page 166/664 | < Previous Page | 162 163 164 165 166 167 168 169 170 171 172 173  | Next Page >

  • How can the shared hosting server provide unlimited physical subdomains as opposed to unlimited virtual subdomains?

    - by xport
    Some hosting companies offer unlimited subdomains. There are two kinds of subdomains: physical subdomains and virtual subdomains. A physical subdomain has its own site directory rather than being nested inside the site directory of its parent domain. A virtual subdomain's site directory, on the other hand, is nested inside the site directory of its parent domain. I wonder how a shared hosting company can provide (theoretically) unlimited physical subdomains. In my understanding, each physical subdomain represents a new site (rather than a new application or virtual directory) in IIS. Please correct me if my mental model is wrong.

    Read the article

  • How do I configure an Ubuntu 12.04 virtual server on VMware ESXi 5 so that VMware can shut it down properly?

    - by Ivan
    I run an Ubuntu 12.04 server as a virtual machine on a VMware ESXi 5 server. I've configured VMware to shut the guest machines down cleanly (with an ACPI shutdown signal, if I understand it right, so that the guest OS performs the shutdown itself). This works with other VMs (running Windows 7 Professional and openSUSE) but doesn't work with the Ubuntu server - VMware only offers to power it off when I ask it to stop the guest. Any ideas how to fix this?
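
    A common cause (an assumption here, not something confirmed in the question) is that the guest has nothing listening for the ACPI power-button event, or lacks the VMware guest tools that enable the clean "Shut Down Guest" action. A minimal sketch of what that fix might look like inside the Ubuntu 12.04 guest:

      # Sketch only - package names assume Ubuntu 12.04 with the usual repositories enabled
      sudo apt-get update
      sudo apt-get install acpid open-vm-tools   # acpid reacts to the ACPI power-button event;
                                                 # open-vm-tools provides the VMware guest integration
      sudo service acpid start

    Once the guest tools are present, ESXi should offer "Shut Down Guest" for this VM instead of only "Power Off".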

    Read the article

  • Access a Windows dynamic hard drive through a virtual machine on Ubuntu?

    - by Enigma
    I have a Windows 7 OS and am thinking about transitioning to a dual-boot setup with Ubuntu 12.04. From what I recall, it is not possible to natively access Windows dynamic partitions from a Linux OS. My thought is that a virtual machine (running Windows) installed within Ubuntu might be able to access the physical dynamic drive. The problem comes down to whether VMware can access the physical disk directly enough to mount it within the Windows virtual machine as a native device, or whether it gets passed through from the host Linux OS. This is really the only thing holding me back from switching to a dual-boot setup, as the dynamic disk is made up of 4 or 5 hard drives and I would very much like access to the data from both OSes. Alternatively, is there another solution for combining multiple physical hard drives into one virtual hard drive that would be readable in both OSes?

    Read the article

  • What does "kTriangles/s" mean in hardware graphics benchmark reports?

    - by swquinn
    I've looked around and found several sites offering benchmarking statistics for mobile platforms and I've been seeing the unit of measure as "kTriangles/s". Originally I misread this, missing the 'k'; does this translate to "thousand(s) of triangles/s", e.g.: 8902 kTriangles/s = 8,902,000 triangles/s (I'm pretty sure that my interpretation is correct, but I hope someone can confirm this for me) Thanks!

    Read the article

  • Hardware question: booting Ubuntu off a USB flash drive?

    - by Kon
    I'm currently running Ubuntu off a USB 3.0 flash drive on my laptop, and I have another USB 3.0 flash drive I use for storing larger files that I need to access. The issue is that the two USB 3.0 flash drives don't fit next to each other at all, so I'm forced to use a USB 2.0 port located in a different spot on my laptop. My question: if I use a USB 3.0 hub, will that slow my connections down at all, or cause some sort of interference? Another question I couldn't find an answer to anywhere on the internet: is there such a thing as a USB 3.0 hub with more than one upstream connection? Thanks for your time!

    Read the article

  • Is a virtual machine slower than the underlying physical machine?

    - by Michal Illich
    This question is quite general, but most specifically I'm interested in knowing whether a virtual machine running Ubuntu Enterprise Cloud will be any slower than the same physical machine without any virtualization. How much (1%, 5%, 10%)? Has anyone measured the performance difference of a web server or DB server (virtual vs. physical)? If it depends on configuration, let's imagine two quad-core processors, 12 GB of memory and a bunch of SSD disks, running 64-bit Ubuntu Enterprise Server. On top of that, just one virtual machine allowed to use all available resources.

    Read the article

  • Why is there so much variation in hardware support between different Linux OSes?

    - by TrazerWalker
    I am using a Lenovo B460 with 3 GB RAM and a 500 GB hard disk (no discrete graphics card). When I live-boot Linux Mint I am able to connect to WiFi, but when I booted Ubuntu 11.04 I was not able to connect. Later I installed Ubuntu 12.04 and WiFi worked, but after installing icons and themes through Unity I am no longer able to view the content in the terminal, and the WiFi no longer connects either, even after I changed everything back.

    Read the article

  • Choosing the right TV tuner - USB or PCI TV tuners, hardware/software, DVB? Hybrid/combo/analog?

    - by Nucleon
    Greetings, I'll start with some background information so you know what I'm trying to accomplish, and then get to my question. I work at a television station in the US and we are working on setting up an online DVR/podcast system for all of our newscasts. Basically we would be recording every newscast in HD, encoding it to FLV/H.264 for viewing in a browser on Flash-compatible and iPhone/iPad devices, and eventually migrating to WebM when browser support arrives. This task is theoretically pretty simple, as all it involves is a TV tuner device and a program like VLC, MythTV or whatever to schedule the recording and dump it to a file, encode it with VLC/FFmpeg and push it to the streaming server. Now to the hardware: in order to accomplish that task, should I use an internal PCI tuner or a USB 2.0 tuner? Is there a difference? The bus speeds of both are not too far apart, and is bus speed really relevant in this case? Does it matter if the device has a hardware encoder or a software encoder? On many sites USB was recommended for ease of setup and use, but would it overly tax the processor, or is that not a concern as long as it's a decent PC (at least dual-core, 6 GB RAM)? What's the difference between the USB stick tuners and the USB box tuners? To my understanding analog is basically gone in the US, so we would want a hybrid or combo tuner, correct? How do those differ from DVB? Are there any other features or concepts I am missing that might influence the recommended product? It would be ideal if the device could work in both Linux and Windows environments; to my knowledge most Hauppauge devices can. Example 1: PCI Hauppauge http://www.newegg.com/Product/Product.aspx?Item=N82E16815116033 Example 2: USB 2.0 box http://www.newegg.com/Product/Product.aspx?Item=N82E16815116029 Example 3: USB 2.0 stick http://www.newegg.com/Product/Product.aspx?Item=N82E16815116031 Any guidance from the Superusers would be much appreciated!

    Read the article

  • Can a hardware firewall block a server accessing its OWN UNC shares?

    - by Simon
    I need to set up a UNC share for my hosted dedicated server to access a share on itself. Unfortunately TFS requires a UNC share. I am on a Windows Server 2008 Standard SP2 64-bit dedicated server behind a PIX 501 firewall hosted with GoDaddy. I just cannot get the server to access itself and get this error: Windows cannot access \\SERVER\SHARE. Check the spelling of the name... etc. I've found numerous questions about this but no answer to my problem.
    - Server 2008 Standard x64 SP2
    - Workgroup - not domain
    - Windows Firewall is off
    - Computer Browser service is on
    - I am trying to access \\MYMACHINE\TFS-BUILDS by typing it in - or double clicking. Neither works.
    - Machine has a single network card
    - File sharing wizard says the share was ok
    - Share was showing under 'Computer Management'
    - Permissions are set to 'Everyone' full control
    - No obvious errors in the event log
    - Reboot didn't fix it
    Unfortunately I cannot try to access other shares into or out of this machine, because it is a hosted dedicated server and the only machine behind a hardware firewall. The only thing left I can think of is that the hardware firewall needs to be configured. Is this possible? Does 'UNC traffic' go out of the machine and then back in again?

    Read the article

  • What's the piece of hardware listening on Facebook's or Wikipedia's IP address?

    - by Igor Ostrovsky
    I am trying to understand how massive sites like Facebook or Wikipedia work, out of intellectual curiosity. I have read about various techniques for building scalable sites, but I am still puzzled by one particular detail. The part that confuses me is that ultimately, DNS will map the entire domain to a single IP address, or a handful of IP addresses in the case of round-robin DNS. For example, wikipedia.org has only one type-A DNS record. So, people from all over the world visiting Wikipedia have to send a request to the one IP address specified in DNS. What is the piece of hardware that listens on the IP address for a massive site, and how can it possibly handle all the load coming from requests by users all over the world? Edit 1: Thanks for all the responses! Anycast seems like a feasible answer... Does anyone know of a way to check whether a particular IP address is anycast-routed, so that I could verify that this really is the trick used in practice by large sites? Edit 2: After more reading on the topic, it appears that anycast is not typically used for dynamic web content. Anycast is usually used for UDP (e.g., DNS lookups), or sometimes for static content. One interesting thing to note is that Facebook uses profile.ak.fbcdn.net to host static content like style sheets and JavaScript libraries. Each time I ping this name, I get a response from a different IP address. However, I can't tell whether this is anycast in action or a completely different technique. Back to my original question: as far as I can tell, even a large site will have a single expensive piece of load-balancing hardware listening on its handful of public IP addresses.

    Read the article

  • How do I lower the hardware volume? (volume too high)

    - by Zom-B
    I have a 4-year-old Dell laptop with Windows XP Pro (modern ones unfortunately don't have a physical volume knob), and lately I'm using my Apple earphones because they have much better low-frequency response than my $10 earphones. They also have the side effect of being much louder. To give an indication of my agony: for most tasks (movies, music, games) I have my main volume at 3 ticks - drag to 0 with the mouse and press the up key 3 times (the handle does not even rise 1 pixel) - and my wave volume at 50%. I notice that when I do this, I get a lot of digital noise, because I'm using just a tiny fraction of the 16-bit space. If I drag the Wave slider down until I barely hear the audio, it becomes really distorted and noisy, indicating that this is digital volume (in the DirectSound driver or something) and not hardware volume. I experimented in Audition. When I generate a 1000 Hz tone at -50 dB (all Windows volumes at max), the volume is just below my pain threshold. When I zoom in to see how high the sample values reach, I see that just 8 of the 16 bits are used (about -100 to 100). When I generate such a tone at -80 dB (the minimum I can specify), I can still clearly hear it, although it is really noisy. When I zoom in, I see that just 3 out of 16 bits are used. I created a square-wave tone that is just 1 bit high, and I can still hear it! For most uses this is not a big problem (audiophiles will disagree!), as I just have more noise than usual (about the same as old 8-bit hardware), but I'm also in the process of programming a hearing-test program, in which case this problem is a death blow, as the test subjects will hear even tones at the very bottom of the theoretical range (lowering the Windows volume is futile, see above). (I cannot update drivers, as Dell has discontinued XP support for my model.)

    Read the article

  • Zero downtime deployment (Tomcat), Nginx or HAProxy, behind hardware LB - how to "starve" old server?

    - by alexeypro
    Currently we have the following setup:
    - Hardware Load Balancer (LB)
    - Box A running Tomcat on 8080 (TA)
    - Box B running Tomcat on 8080 (TB)
    TA and TB run behind the LB. For now it's a pretty complicated and manual job to take Box A or Box B out of the LB to do a zero-downtime deployment. I am thinking of doing something like this:
    - Hardware Load Balancer (LB)
    - Box A running Nginx on 8080 (NA)
    - Box A running Tomcat on 8081 (TA1)
    - Box A running Tomcat on 8082 (TA2)
    - Box B running Nginx on 8080 (NB)
    - Box B running Tomcat on 8081 (TB1)
    - Box B running Tomcat on 8082 (TB2)
    Basically the LB will now direct traffic between NA and NB. On each Nginx we'll have TA1/TA2 (respectively TB1/TB2) configured as upstream servers. Once one upstream's healthcheck page becomes unresponsive (shut down), the traffic goes to the other one (HttpHealthcheckModule on Nginx). So the deploy process is simple. Say TA1 is active with version 0.1 of the app and its healthcheck is OK. We start TA2 with its healthcheck reporting ERROR, so Nginx is not talking to it. We deploy app version 0.2 to TA2 and make sure it works. Now we switch the healthcheck on TA2 to OK and the healthcheck on TA1 to ERROR. Nginx will start serving TA2 and take TA1 out of rotation. Done! And then the same on the other box. While this all sounds nice, how do we "starve" Nginx? Say we have pending connections and some users on TA1. If we just turn it off, sessions will break (we have cookie-based sessions). Not good. Is there any way to drain ("starve") traffic away from one of the upstream servers with Nginx? Thanks!
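
    For the drain question specifically, a minimal sketch using only core Nginx directives (the TA1/TA2 names and ports come from the question; this leaves out the third-party healthcheck module and does not by itself solve cookie-session affinity):

      # nginx.conf fragment on Box A - sketch only
      upstream app_backend {
          server 127.0.0.1:8081;        # TA1, currently live
          server 127.0.0.1:8082 down;   # TA2, out of rotation while 0.2 is deployed
      }
      server {
          listen 8080;
          location / {
              proxy_pass http://app_backend;
          }
      }

    To "starve" TA1 once TA2 is live, swap the down marker and run nginx -s reload: the reload starts workers with the new configuration while the old workers finish their in-flight requests before exiting, so open connections are not cut. New requests from existing cookie-based sessions will still land on the new upstream, though, so session state has to be shared between the Tomcats (or pinned, for example with ip_hash) for a truly seamless switch.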

    Read the article

  • How do you work out the IIS Virtual Path for an application?

    - by joshcomley
    When I try to change the ASP.NET version to v4 on IIS 6, I receive the following warning: "Changing the Framework version requires a restart of the W3SVC service. Alternatively, you can change the Framework version without restarting the W3SVC service by running: aspnet_regiis.exe -norestart -s IIS-Virtual-Path. Do you want to continue (this will change the Framework version and restart the W3SVC service)?" How do I work out IIS-Virtual-Path? I have tried the obvious paths, i.e.: aspnet_regiis.exe -norestart -s "/WebSites/Extranet/AppName" where WebSites is the name of the folder in IIS, Extranet the name of the root app, and AppName the name of the virtual directory application I am trying to change. Thanks! Edit: How do I work out the virtual path for the Auth virtual directory in the following IIS6 setup? I have tried: aspnet_regiis.exe -norestart -s "/Web Sites/Extranet/Auth" and aspnet_regiis.exe -norestart -s "Auth". I get: Installation stopped because the specified path (WhateverIPutIn) is invalid.
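
    For context, the -s switch expects an IIS metabase path rather than a folder or friendly name. A sketch of the likely form, assuming the site's metabase ID is 1 (the actual ID can be checked in IIS Manager or with adsutil.vbs, and will differ if the Extranet site is not the default site):

      aspnet_regiis.exe -norestart -s "W3SVC/1/ROOT/Auth"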

    Read the article

  • Why can't I declare C# methods virtual and static?

    - by Luke
    I have a helper class that is just a bunch of static methods and would like to subclass it. Some behavior is unique depending on the subclass, so I would like to call a virtual method from the base class, but since all the methods are static I can't create a plain virtual method (you need an object reference in order to access a virtual method). Is there any way around this? I guess I could use a singleton... HelperClass.Instance.HelperMethod() isn't so much worse than HelperClass.HelperMethod(). Brownie points for anyone who can point out some languages that support virtual static methods. Edit: OK, yeah, I'm crazy. Google search results had me thinking I wasn't for a bit there.
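
    The singleton workaround the asker mentions can be sketched as follows (shown in C++ rather than C#, since C# shares the restriction; all class and method names here are made up for illustration):

      #include <iostream>

      // Base "helper": the public entry point is an ordinary member function,
      // and the customizable part is a virtual hook.
      class HelperBase {
      public:
          virtual ~HelperBase() {}
          void HelperMethod() { std::cout << Prefix() << ": doing the work\n"; }
      protected:
          virtual const char* Prefix() const { return "base"; }
      };

      class SpecialHelper : public HelperBase {
      protected:
          const char* Prefix() const { return "special"; }  // subclass-specific behavior
      };

      // The "static-looking" access point: a free function returning a shared instance.
      HelperBase& Helper() {
          static SpecialHelper instance;  // choose the concrete helper once
          return instance;
      }

      int main() {
          Helper().HelperMethod();  // prints "special: doing the work"
      }

    Languages such as Delphi do allow virtual class (static) methods; in C++ and C# the instance-based workaround above is the usual route.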

    Read the article

  • Why does my 64-bit IIS app pool show 3 gigabytes more virtual memory than private memory?

    - by Brett
    I have an ASP.Net application that I am running on 64-bit IIS 6 on Windows XP x64. When I open performance counters after one page hit of a trivial page, I see a Private Bytes of about 88 megs, but a Virtual Bytes of about 3 Gigs. When I try the same thing with a VERY trivial ASP.Net app, I get the same result. We see something similar on Windows Server 2003 in production -- there it is an issue because we recycle when the virtual memory consumed outgrows a limit. Before we make any changes to our recycling settings, we'd like to answer the following questions: Why does the app pool grab such a large hunk of virtual memory? Is the amount of virtual memory headroom the app requests configurable? Thanks! Brett

    Read the article

  • Is it possible to define a virtual directory in IIS and make the files relative to the physical dire

    - by Mikey John
    Is it possible to define a virtual directory in IIS and somehow make the files in that directory relative to the physical directory and not to the virtual directory? For instance, on my server I have the following files: D:\WebSite\Css\myTheme.css and D:\WebSite\Images\image1.jpg. I created a virtual directory in IIS, resources.mysite. Inside my website I reference the style sheet like this: resources.mysite/myTheme.css. But inside myTheme.css I reference pictures as ../Images/image1.jpg. The problem is that image1.jpg is not found, because the reference is relative to the physical folder layout and not to the virtual folder in IIS. Can I solve this problem without modifying the style sheet?

    Read the article

  • How to implement an interface class using the non-virtual interface idiom in C++?

    - by andreas buykx
    Hi all, in C++ an interface can be implemented by a class with all its methods pure virtual. Such a class could be part of a library, describing what methods an object should implement to be able to work with other classes in the library:

    class Lib::IFoo {
    public:
        virtual void method() = 0;
    };

    class Lib::Bar {
    public:
        void stuff( Lib::IFoo & );
    };

    Now I want to use class Lib::Bar, so I have to implement the IFoo interface. For my purposes I need a whole family of related classes, so I would like to work with a base class that guarantees common behavior using the NVI idiom:

    class FooBase : public IFoo // implements interface IFoo
    {
    public:
        void method(); // calls methodImpl
    private:
        virtual void methodImpl();
    };

    The non-virtual interface (NVI) idiom ought to deny derived classes the possibility of overriding the common behavior implemented in FooBase::method(), but since IFoo declared it virtual, it seems that all derived classes still have the opportunity to override FooBase::method(). If I want to use the NVI idiom, what are my options other than the pImpl idiom already suggested (thanks space-c0wb0y)?
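
    One possible direction, sketched below on the assumption that the interface itself can be shaped this way (the class names follow the question, the restructuring is hypothetical): push the NVI split into IFoo, so the public method() is non-virtual everywhere and only a private hook is overridable.

      #include <iostream>

      class IFoo {
      public:
          virtual ~IFoo() {}
          // Non-virtual public entry point: fixed for the whole hierarchy.
          void method() { commonBehavior(); methodImpl(); }
      private:
          void commonBehavior() { std::cout << "common part\n"; }
          virtual void methodImpl() = 0;  // the only customization point
      };

      class Foo : public IFoo {
      private:
          void methodImpl() { std::cout << "Foo-specific part\n"; }
      };

      int main() {
          Foo f;
          f.method();  // prints the common part, then the Foo-specific part
      }

    If IFoo cannot be changed because it belongs to the library, the common behavior has to be protected by convention (or via the pImpl route mentioned in the question), since FooBase cannot "un-virtualize" a method that the interface already declared virtual.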

    Read the article

  • Is there a way to split a widescreen monitor in to two or more virtual monitors?

    - by Mike Thompson
    Like most developers I have grown to love dual monitors. I won't go into all the reasons for their goodness; just take it as a given. However, they are not perfect. You can never seem to line them up "just right". You always end up with the monitors at slightly funny angles. And of course the bezel always gets in the way. And this is with identical monitors. The problem is much worse with different monitors - VMware's multi-monitor feature won't even work with monitors of different resolutions. When you use multiple monitors, one of them becomes your primary monitor of focus. Your focus may flip from one monitor to the other, but at any point in time you are usually focusing on only one monitor. There are exceptions to this (WinDiff, Excel), but this is generally the case. I suggest that having a single large monitor with all the benefits of multiple smaller monitors would be a better solution. Wide-screen monitors are fantastic, but it is hard to use all the space efficiently. If you are writing code you are generally working on the left-hand side of the window. If you maximize an editor on a wide-screen monitor the right-hand side of the window will be a sea of white. Programs like WinSplit Revolution will help to organise your windows, but this is really just addressing the symptom, not the problem. Even with WinSplit Revolution, when you maximise a window it will take up the whole screen. You can't lock a window into a specific section of the screen. This is where virtual monitors come in. What would be really nice is a video driver that sits on top of the existing driver, but allows a single monitor to be virtualised into multiple monitors. Control Panel would see your single physical monitor as two or more virtual monitors. The software could even support a virtual bezel to emphasise what is happening, or you could opt for a seamless mode. Programs like WinSplit Revolution and UltraMon would still work. This virtual video driver would allow you to slice and dice your physical monitor into as many virtual monitors as you want. Does anybody know if such software exists? If not, are there any budding Windows display driver gurus out there willing to take up the challenge? I am not after the myriad virtual desktop/window manager programs that are available. I get frustrated with these programs. They seem good at first but they usually have some strange behaviour and don't work well with other programs (such as WinSplit Revolution). I want the real thing!

    Read the article

  • virtual directory not being configured as an application in IIS.

    - by david
    Hi, I am completely new to IIS and ASP.NET. I am trying to set up bugNET on a GoDaddy server. I created a virtual directory, and once I tried to launch the site I got this error: Parser Error Message: It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level. This error can be caused by a virtual directory not being configured as an application in IIS. Here is the complete detail of what I did. Hosting: GoDaddy. I created a virtual directory (a child folder of the root) named devbunk, with these settings: anonymous access, directory browsing. That is all I can do with IIS on GoDaddy. The error tells me that I need to turn the virtual directory into an application, but GoDaddy doesn't let me do that... how do I do it? BTW, I have an IIS7 setup.

    Read the article

  • Making Visual C++ DLL from C++ class

    - by prosseek
    I have the following C++ code to make a DLL (Visual Studio 2010).

    class Shape {
    public:
      Shape() { nshapes++; }
      virtual ~Shape() { nshapes--; };
      double x, y;
      void move(double dx, double dy);
      virtual double area(void) = 0;
      virtual double perimeter(void) = 0;
      static int nshapes;
    };

    class __declspec(dllexport) Circle : public Shape {
    private:
      double radius;
    public:
      Circle(double r) : radius(r) { };
      virtual double area(void);
      virtual double perimeter(void);
    };

    class __declspec(dllexport) Square : public Shape {
    private:
      double width;
    public:
      Square(double w) : width(w) { };
      virtual double area(void);
      virtual double perimeter(void);
    };

    I have the __declspec (class __declspec(dllexport) Circle), and I could build a DLL with the following commands:

    CL.exe /c example.cxx
    link.exe /OUT:"example.dll" /DLL example.obj

    When I tried to use the library,

    Square* square;
    square->area();

    I got error messages. What's wrong or missing?

    example_unittest.obj : error LNK2001: unresolved external symbol "public: virtual double __thiscall ... Square::area(void)" (?area@Square@@UAENXZ)

    ADDED: Following wengseng's answer, I modified the header file, and for the DLL C++ code I added #define XYZLIBRARY_EXPORT. However, I still got errors.

    example_unittest.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: __thiscall Circle::Circle(double)" (__imp_??0Circle@@QAE@N@Z) referenced in function "protected: virtual void __thiscall TestOne::SetUp(void)" (?SetUp@TestOne@@MAEXXZ)
    example_unittest.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: __thiscall Square::Square(double)" (__imp_??0Square@@QAE@N@Z) referenced in function "protected: virtual void __thiscall TestOne::SetUp(void)" (?SetUp@TestOne@@MAEXXZ)
    example_unittest.obj : error LNK2001: unresolved external symbol "public: virtual double __thiscall Square::area(void)" (?area@Square@@UAENXZ)
    example_unittest.obj : error LNK2001: unresolved external symbol "public: virtual double __thiscall Square::perimeter(void)" (?perimeter@Square@@UAENXZ)
    example_unittest.obj : error LNK2001: unresolved external symbol "public: virtual double __thiscall Circle::area(void)" (?area@Circle@@UAENXZ)
    example_unittest.obj : error LNK2001: unresolved external symbol "public: virtual double __thiscall Circle::perimeter(void)" (?perimeter@Circle@@UAENXZ)
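
    For reference, the usual arrangement behind that #define (a sketch only; XYZLIBRARY_EXPORT comes from the question, while the XYZ_API name and header layout are made up) is a macro that expands to dllexport while building the DLL and to dllimport in consuming projects, which must also link against the import library (example.lib) that link.exe produces alongside the DLL:

      // shapes.h - hypothetical shared header used by both the DLL and its consumers
      // (Shape is the abstract base class shown in the question)
      #ifdef XYZLIBRARY_EXPORT
      #  define XYZ_API __declspec(dllexport)   // defined only when compiling the DLL itself
      #else
      #  define XYZ_API __declspec(dllimport)   // consumers import the symbols instead
      #endif

      class XYZ_API Circle : public Shape {
          // ... members exactly as in the question ...
      };

      class XYZ_API Square : public Shape {
          // ...
      };

    The remaining LNK2001/LNK2019 errors in the test project are consistent with the import library not being on its linker input line, or with Circle::area, Circle::perimeter, Square::area and Square::perimeter never being defined in a source file compiled into the DLL.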

    Read the article

  • Is there any way to prevent a Delphi application from using Virtual Storage on Vista/Win 7 without e

    - by croceldon
    The question pretty much says it all. I have an app with an older component that doesn't work right if runtime themes are enabled. But if I don't enable them, the app always ends up messing with the Virtual Store. Thanks! Update: Using Mark's solution below, the application no longer writes to the Virtual Store. But now it won't access a .tdb file (Tiny Database file) that it needs. This .tdb file is the same file that was being written to the Virtual Store. Any ideas on how I can give it access to the .tdb file and still prevent writes to the Virtual Store?

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed the database must be restored, requiring additional and costly storage. For example, if you want to give each developer a full copy of your production server, you'll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle.

    If you've read my previous blog posts, you'll be aware that I've been focusing on the database continuous integration theme. In my CI setup I create a "production"-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it's not the exact equivalent of running the upgrade script on a copy of the actual production database. So why shouldn't I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical.

    1. My CI environment isn't an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley's "Continuous Delivery" teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you've been allotted.

    2. It's not just about the storage requirements, it's also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control, I'm just not going to get the feedback quickly enough to react.

    So what's the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I'm sure the technical implementation details under the hood are quite complex!). Instead of restoring the backup in the conventional sense, SQL Virtual Restore effectively mounts the backup using its HyperBac technology. It creates a data file and a log file, .vmdf and .vldf, that become the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database, as from SQL Server's point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there are no 'duplicate' storage requirements other than the trivially small virtual log and data files (see illustration below). The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team with a full development instance of a large production database.

    It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly "release test" process triggered by my CI tool.

    RESTORE DATABASE WidgetProduction_virtual
    FROM DISK=N'C:\WidgetWF\ProdBackup\WidgetProduction.bak'
    WITH
        MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
        MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
        NORECOVERY, STATS=1, REPLACE
    GO
    RESTORE DATABASE mydatabase WITH RECOVERY

    Note the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extension and applies its magic, ensuring the 'virtual' restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly.

    For illustration: my 8 GB production database, its corresponding backup file, and the .vldf and .vmdf files, which represent the only additional storage used for the new database following the virtual restore.

    The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial.

    Read the article
