Search Results

Search found 6317 results on 253 pages for 'persistent storage'.

Page 15/253

  • Persistent static route stops working after VPN drops and reconnects

    - by user76157
    I've got a VPN between two networks, one home and one office (A and B). Their subnets are (A) 192.168.1.0 and (B) 192.168.0.0. The two networks have identical ADSL routers. Unfortunately these can only do dial-out VPN, so I've got a Windows 2008 server on Network B acting as a VPN server (ServerB). Network A's router (RouterA) passes through Network B's router and connects via PPTP to ServerB. RouterA is assigned the static IP 192.168.0.40 on Network B.

    There's a persistent static route on ServerB telling it to use 0.40 for all requests to Network A's subnet, 192.168.1.0 (route -p add 192.168.1.0 mask 255.255.255.0 192.168.0.40). This enables ServerB to ping all machines on A (and those machines to ping ServerB).

    The VPN connection occasionally drops (I'm not sure why - it's set to remain always on and seems to drop randomly). This wouldn't be too much of a problem, as it reconnects automatically and quickly, except that when it does reconnect, the static route on ServerB no longer works. Route print (on ServerB) shows that the persistent static route still exists. However, a tracert to a machine on Network A doesn't use the static route; it tries instead to use ServerB's default gateway (which is RouterB) and fails to find the machine.

    Deleting and re-adding the static route fixes the problem - a tracert then uses the static route. At the moment, a batch file that deletes and re-adds the static route is scheduled to run every day, but this is clearly far from an ideal solution! I hope that's not too confusing. Any help would be very much appreciated.
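
    A minimal sketch of the workaround the poster describes, not a fix for the underlying problem: the addresses come from the question, while the idea of triggering the task on the VPN reconnect event (rather than once a day) is an assumption about what Task Scheduler on Server 2008 allows.

      @echo off
      rem readd-route.cmd - delete and re-create the persistent static route to Network A.
      rem Addresses are taken from the question above; adjust for other setups.
      route delete 192.168.1.0
      route -p add 192.168.1.0 mask 255.255.255.0 192.168.0.40

    Instead of a daily schedule, the same batch file could be attached to an event-triggered task that fires when the RAS client logs a reconnect, so the route is repaired within seconds of the drop.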

    Read the article

  • What is a good layout for a somewhat advanced home network and storage solution?

    - by Shaun
    My home network/storage needs are changing and I am searching for some opinions and starting points on what a good network/storage layout would be that can serve my needs for a few years into the future. I think I have a decent starting point for equipment, but I am also willing to invest fairly heavily in a solution that can last me for a while. I am a bit of a tech nerd and I have a moderate tolerance for setting up the solution. I would prefer that maintenance of the system is somewhat low once it is set up, but I am willing to accept some tradeoffs.

    Existing equipment:
    - Router - Netgear WNDR3700 (gigabit)
    - Router - DLink Gamerlounge DGL-4300 (gigabit)
    - Switch - 16-port Trendnet green switch (gigabit)
    - Switch - 5-port Trendnet green (gigabit)
    - Computer - i7-950 office computer (gigabit ethernet)
    - Computer - Q6600 quad-core media center, hooked up to the TV, records shows (gigabit ethernet)
    - Computer - Acer 1810T ultraportable laptop (gigabit ethernet and wireless N)
    - NAS - Intel SS4200-E (gigabit)
    - External hard drive - 2TB WD Green drive (eSATA)
    - All kinds of miscellaneous network-connected devices: TV, Blu-ray, Verizon network extender, HDHomeRun TV tuners, etc.

    Requirements:
    - Robust backup solution for a growing collection of huge family picture files and personal files, around 1.5TB (including offsite backup)
    - Central location for all users' files, while also keeping them secure from each other
    - Storage for terabytes of movie backups and recorded TV, and access to them from all computers (maybe around 4TB eventually)
    - Possibility to host files to friends and family easily

    Nice to have:
    - Backup of the terabytes of movie backups

    Intriguing possibilities:
    - Capability to have users' Windows desktops and files look the same from all network computers

    I am not sure if the new Windows Home Server 2011 would fit into this well, whether I need a domain server, how best to organize my backups, or how to most effectively use RAID. Currently I am simply backing up all computers to a RAID 1 on the NAS box, which I was thinking could prevent a situation where I reach for a backup and find that the disk is corrupt. One possibility I am thinking about now is simply using my media center PC with a huge RAID of hard drives on which all files are stored. A pseudo-backup of all files would be present because of the RAID, and important files would also be backed up off site by carrying hard drives to work. But what if corruption seeps into the files and the corrupted data is then backed up? Does RAID protect against this? I really want to take next to zero risks with the irreplaceable files; I can handle some degree of risk with the movies and other files. I'm looking for critiques on this idea as well as other possibilities. To summarize, my goal is high functionality, media capability, and robust backup of the irreplaceable files.
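
    One point worth separating out: RAID faithfully mirrors whatever is written to it, so it protects against a drive failing, not against silent corruption being copied into every backup. A hedged sketch of one way to catch corruption before it reaches the offsite drives, assuming Microsoft's free fciv.exe checksum tool is installed; the paths are placeholders:

      rem Build a checksum manifest of the irreplaceable files once...
      fciv.exe -add D:\FamilyPhotos -r -sha1 -xml D:\hashes\photos.xml
      rem ...and verify against it before refreshing the offsite copies.
      fciv.exe -v -sha1 -xml D:\hashes\photos.xml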

    Read the article

  • Speaking in Raleigh NC June 15th

    - by Andrew Kelly
    Just a heads-up to those in the area that I will be speaking at the (TriPASS) Raleigh SQL Server user group on the 15th of June 2010. The topic is Storage & I/O Best Practices. The abstract is listed below: SQL Server relies heavily on a well-configured storage subsystem to perform at its peak, but unfortunately this is one of the most neglected or misconfigured areas of a SQL Server instance. Here we will focus on the best practices related to how SQL Server works with the underlying storage...(read more)

    Read the article

  • Is there an online storage service that works with Windows 7 Backup and Restore? [closed]

    - by user57813
    I love Windows 7 Backup and Restore. However, I find it somewhat inconvenient to use an external hard drive - plugging it in, taking it out, putting it away safely, etc. Is there an online storage service to which Windows Backup can save the backup files? Can I make Windows Backup store the backed-up data on an online storage service? I do not want to manually upload the backup files; I want it to save directly to the cloud. It doesn't matter if the service costs me some $$$. I am using Windows 7 Ultimate 32-bit edition.

    Read the article

  • Solaris 11.1: Encrypted Immutable Zones on (ZFS) Shared Storage

    - by darrenm
    Solaris 11 brought both ZFS encryption and the Immutable Zones feature, and I've talked about the combination in the past. Solaris 11.1 adds a fully supported method of storing zones in their own ZFS pool on shared storage, so let's update things a little and put all three parts together. When using an iSCSI (or other supported shared storage) target for a zone we can either let the Zones framework set up the ZFS pool or we can do it manually beforehand and tell the Zones framework to use the one we made earlier. To enable encryption we have to take the second path, so that we can set up the pool with encryption before we start to install the zones on it. We start by configuring the zone and specifying a rootzpool resource:

      # zonecfg -z eizoss
      Use 'create' to begin configuring a new zone.
      zonecfg:eizoss> create
      create: Using system default template 'SYSdefault'
      zonecfg:eizoss> set zonepath=/zones/eizoss
      zonecfg:eizoss> set file-mac-profile=fixed-configuration
      zonecfg:eizoss> add rootzpool
      zonecfg:eizoss:rootzpool> add storage \
        iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001
      zonecfg:eizoss:rootzpool> end
      zonecfg:eizoss> verify
      zonecfg:eizoss> commit
      zonecfg:eizoss>

    Now let's create the pool and specify encryption:

      # suriadm map \
        iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001
      PROPERTY    VALUE
      mapped-dev  /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0
      # echo "zfscrypto" > /zones/p
      # zpool create -O encryption=on -O keysource=passphrase,file:///zones/p eizoss \
        /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0
      # zpool export eizoss

    The keysource shown here is just for this example; realistically you should probably use Oracle Key Manager or some other better key storage, but that isn't the purpose of this example. Note, however, that it does need to be one of file://, https:// or pkcs11:, and not prompt, for the key location. Also note that we exported the newly created pool. The name we used here doesn't actually matter, because it will get set properly on import anyway. So let's go ahead and do our install:

      # zoneadm -z eizoss install -x force-zpool-import
      Configured zone storage resource(s) from:
          iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001
      Imported zone zpool: eizoss_rpool
      Progress being logged to /var/log/zones/zoneadm.20121029T115231Z.eizoss.install
          Image: Preparing at /zones/eizoss/root.
          AI Manifest: /tmp/manifest.xml.ujaq54
          SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
          Zonename: eizoss
      Installation: Starting ...
          Creating IPS image
          Startup linked: 1/1 done
          Installing packages from:
              solaris
                  origin: http://pkg.us.oracle.com/solaris/release/
      Please review the licenses for the following packages post-install:
          consolidation/osnet/osnet-incorporation (automatically accepted, not displayed)
      Package licenses may be viewed using the command:
          pkg info --license <pkg_fmri>
      DOWNLOAD                          PKGS        FILES   XFER (MB)   SPEED
      Completed                      187/187  33575/33575 227.0/227.0  384k/s
      PHASE                                    ITEMS
      Installing new actions             47449/47449
      Updating package state database           Done
      Updating image state                      Done
      Creating fast lookup database             Done
      Installation: Succeeded
      Note: Man pages can be obtained by installing pkg:/system/manual
      done.
      Done: Installation completed in 929.606 seconds.
      Next Steps: Boot the zone, then log into the zone console (zlogin -C) to complete the configuration process.
      Log saved in non-global zone as /zones/eizoss/root/var/log/zones/zoneadm.20121029T115231Z.eizoss.install

    That was really all we had to do; when the install is done, boot it up as normal. The zone administrator has no direct access to the ZFS wrapping keys used for the encrypted pool the zone is stored on. Due to how inheritance works in ZFS, he can still create new encrypted datasets that use those wrapping keys (without them ever being inside a process in the zone), or he can create encrypted datasets inside the zone that use keys of his own choosing. The output below shows the two cases: rpool is inheriting the key material from the global zone (note that we can see the value of the keysource property, but we don't use it inside the zone, nor does that path need to be - or is it - accessible inside the zone), whereas rpool/export/home/bob has set keysource locally.

      # zfs get encryption,keysource rpool rpool/export/home/bob
      NAME                   PROPERTY    VALUE                       SOURCE
      rpool                  encryption  on                          inherited from $globalzone
      rpool                  keysource   passphrase,file:///zones/p  inherited from $globalzone
      rpool/export/home/bob  encryption  on                          local
      rpool/export/home/bob  keysource   passphrase,prompt           local

    Read the article

  • How do I recover lost/inaccessible data from my storage device?

    - by TomWij
    What steps can I take to try to recover lost or inaccessible data from any storage device? Answers: This applies to any main storage device, e.g. an internal or external hard drive, a USB stick, or flash memory. The most important thing is to STOP using it; any type of I/O can ruin your chances of a recovery. See the linked answer if you suspect corruption or bad sectors, and the other if you suspect mechanical issues.
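
    A common first step, whatever the cause, is to work on a copy rather than the original device. A hedged sketch, assuming a Linux machine with GNU ddrescue available; /dev/sdX and the output paths are placeholders:

      # Image the failing device read-only before attempting any repair,
      # so all further recovery work happens on the copy.
      sudo ddrescue -d -r3 /dev/sdX /mnt/safe/disk.img /mnt/safe/disk.mapfile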

    Read the article

  • How do I prevent a 3G USB modem's internal storage from being loaded by the Linux kernel?

    - by Krystian
    Hi, I've got a problem with my 3G modem [Huawei E122]. It has internal storage, and the kernel assigns a device [/dev/sdX] to it. Because of that, every second time my machine will not boot - kernel panic - as my USB HDD gets assigned /dev/sdb instead of /dev/sda. I cannot use LABEL or UUID in the root= kernel parameter, as those only work with an initrd, which I can't use - I am running Debian on my router, a MIPS-architecture machine. I have to prevent this from happening, as the router has to start every day and I have to be sure it works OK; I don't have physical access to restart it when something goes wrong. I don't use the modem's internal storage and there's no SD card inserted, yet the kernel detects the reader and loads it anyway. I cannot prevent loading of the USB storage drivers, since my HDD is on USB as well. I will appreciate any ideas.
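
    A hedged sketch of one possible approach (not from the question): the usb-storage driver accepts a per-device quirks option, and the 'i' flag tells it to ignore a device entirely, so the modem's card reader never claims a /dev/sdX name while the USB HDD keeps working. This assumes a kernel new enough to support usb-storage quirks; VVVV:PPPP is a placeholder for the modem's vendor:product ID as reported by lsusb.

      # Ignore the modem's mass-storage interface; the USB HDD is unaffected.
      echo 'options usb-storage quirks=VVVV:PPPP:i' > /etc/modprobe.d/ignore-modem-storage.conf
      # If usb-storage is built into the kernel, the same option can go on the
      # kernel command line instead: usb-storage.quirks=VVVV:PPPP:i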

    Read the article

  • Web/Cloud Based OS with Torrent Features and Free Storage?

    - by Kristina E
    Hi, I want a web-based OS with a torrent client, and I want to link it to one of the many free cloud storage solutions. I think it would be really cool to be able to check and download torrents anywhere and not use my own hardware or connection until I want to transfer the files down to my actual desktop (like burning a Linux ISO, or converting a file to IFO format). Anyway, I created accounts at 4Shared, EyeOS, GlideOS, ADrive and iCloud and am having no luck. There is an eyeTorrent app, but I can't seem to get it configured, and I can't log into my cloud storage from the cloud OS. Has anyone been able to pull this off, and if so would you please explain how? Thanks, Kristina

    Read the article

  • Reducing storage cost by moving old files to external USB HDDs. Your thoughts?

    - by cparker4486
    I've got about 300GB of pictures and marketing data that is rarely accessed, and I'd like to get it off my main storage. I was thinking of simply adding two external USB HDDs to the server and moving all the files to one of the drives; the second drive would be the backup destination for the first. I'm working with Server 2003 R2 SP2. This would free a good amount of space on my main storage as well as reduce the complexity, backup window, and tape usage of my backups.
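
    A hedged sketch of how the second drive could be kept as a copy of the first, assuming the Robocopy tool from the Windows Server 2003 Resource Kit is available; drive letters and paths are placeholders:

      rem Mirror the archive drive to its backup twin after the old files have been moved.
      rem /MIR mirrors the tree (including deletions); /R and /W keep retries short.
      robocopy E:\Archive F:\ArchiveBackup /MIR /R:1 /W:1 /LOG:C:\Logs\archive-mirror.log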

    Read the article

  • How does Storage Spaces decide where to put my files?

    - by George Duckett
    With Windows 8 Storage Spaces, you can lump many hard disks of varying types, speeds and sizes together and use them as a single storage space / logical drive. How does Windows decide what to place where? For example, will it move files around depending on how frequently they are accessed, perhaps splitting files that are frequently accessed together across hard disks? What does it optimize for - speed, reliability, something else? If the above is asking too much, can I at least easily see where the files physically are (on which physical disk)?
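
    Per-file physical placement is not something Storage Spaces exposes, but the disk-level view is queryable. A hedged sketch, assuming the Storage PowerShell module that ships with Windows 8; "Home Pool" and "Media Space" are placeholder names:

      # List the physical disks backing a pool, and those backing one space (virtual disk).
      Get-PhysicalDisk -StoragePool (Get-StoragePool -FriendlyName "Home Pool")
      Get-PhysicalDisk -VirtualDisk (Get-VirtualDisk -FriendlyName "Media Space")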

    Read the article

  • Distributed, persistent cache using EHCache

    - by Richard
    I currently have a distributed cache using EHCache via RMI that works just fine. I was wondering if you can combine persistence with the caches to create a distributed, persistent cache. Alongside this, if the cache were persistent, would it load from the file store and then bootstrap from the cache cluster? Basically, what I want is:
    1. Cache starts
    2. Cache loads persistent objects from the file store
    3. Cache joins the distributed cluster and bootstraps as normal
    The use case behind this is having 2 identical components running on independent machines, distributing the cache to avoid losing data in the event that one of the components fails. The persistence would guard against losing all data on the rare occasion that both components fail. Would moving to another distribution method (such as Terracotta) support this?
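
    A hedged configuration sketch in the EHCache 2.x style (not from the question), combining a disk-persistent cache with RMI replication and a bootstrap loader; the cache name, disk path and multicast settings are placeholders:

      <ehcache>
        <diskStore path="/var/cache/ehcache"/>
        <cacheManagerPeerProviderFactory
            class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
            properties="peerDiscovery=automatic,multicastGroupAddress=230.0.0.1,multicastGroupPort=4446"/>
        <cacheManagerPeerListenerFactory
            class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"/>
        <cache name="sharedCache" maxElementsInMemory="10000"
               eternal="true" overflowToDisk="true" diskPersistent="true">
          <cacheEventListenerFactory
              class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
              properties="replicateAsynchronously=true"/>
          <bootstrapCacheLoaderFactory
              class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory"
              properties="bootstrapAsynchronously=false"/>
        </cache>
      </ehcache>

    Whether the disk store is read before or after the peer bootstrap runs (and which wins when they disagree) is worth verifying against the EHCache documentation for the version in use.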

    Read the article

  • Microsoft Cuts Windows Azure Compute and Storage Pricing

    The savings begin with Microsoft's Windows Azure Storage Pay-As-You-Go service, which now costs $0.125 per GB as opposed to $0.14 per GB, a savings of 12 percent. Microsoft also slashed the pricing for Windows Azure Storage's 6 Month Plans as much as 14 percent across all tiers. Lastly, compute customers can now enjoy Windows Azure Extra Small Compute pricing of $0.02 per hour instead of $0.04 per hour, a savings of 50 percent. To exhibit the cost advantages offered by Windows Azure, Microsoft noted in a blog post that a 24x7 Extra Small Compute instance with a 100MB SQL Azure database can b...

    Read the article

  • Storage and BackUp Strategies

    - by Chandra Vennapoosa
    Many of us are familiar with backing up our data. While it sounds pretty simple, the fact is that most computer users do not back up their data. The excuses they often make involve how long it takes, how slow it is, or how many DVDs or disks they need. However, once disaster strikes, the loss you will suffer by not having your data backed up can be very severe. Topics: Introduction; What is an Online Back Up?; Online Back Up Strategies; Implementing Disaster Avoidance; Implementation and Storage; Security Issues to Consider. Read the complete article: Storage and BackUp Strategies

    Read the article

  • Help me with a CDMA USB modem (how to change storage mode to modem mode)

    - by user98698
    My modem isn't in usb_modeswitch's device list, so it isn't switched automatically on Ubuntu. I once found a way to change the modem from USB storage mode to modem mode and could use it, but I have forgotten the instructions. I don't know the modem's name - it's a Chinese modem - but here is the vendor/product ID: ....05 Qualcomm, Inc. Mass Storage Device, and I think the target vendor/product ID is ..... 05. So please help me switch it; I don't want to replace the modem. Sorry for my English.
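
    A hedged sketch of a manual switch (not from the question), assuming a usb_modeswitch version that supports the standard-eject option; 0xVVVV/0xPPPP are placeholders for the IDs shown by lsusb in storage mode, since the real ones are elided above:

      lsusb                                        # note the idVendor:idProduct pair in storage mode
      sudo usb_modeswitch -v 0xVVVV -p 0xPPPP -K   # -K sends the standard CD-ROM eject sequence
                                                   # that flips many Qualcomm-based sticks to modem mode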

    Read the article

  • FREE Windows Azure Platform Compute and Storage through the Cloud Essentials Pack for Partners

    - by Eric Nelson
    It can be difficult to find something to look forward to in January – but this year it was a little easier as a) I got lots of great Xbox 360 games and b) the Windows Azure Platform element of the Cloud Essentials Pack for Microsoft Partner Network partners went live. I have previously explained what the Cloud Essentials Pack is and how you can access it – but at the time I couldn’t share the details of the Windows Azure Platform element. The Windows Azure Platform element is now available. It gives you each month, for FREE:
    Windows Azure:
    - 750 hours of extra small compute instance
    - 25 hours of small compute instance
    - 3GB of storage and 250,000 storage transactions
    SQL Azure:
    - 1 SQL Azure Web Edition database (5GB)
    Windows Azure AppFabric:
    - App Fabric with 100,000 Access Control transactions and 2 Service Bus connections
    Plus:
    - Data Transfer: 3GB in and 6GB out
    (More details of the offer)
    To activate this offer you need to:
    - Sign your company up to Microsoft Platform Ready (NB: there are other routes to get this benefit – but I know about MPR)
    - Read about Microsoft Platform Ready
    - Visit http://www.microsoftcloudpartner.com/ and sign up.

    Read the article
