Search Results

Search found 9017 results on 361 pages for 'efficient storage'.

Page 106/361 | < Previous Page | 102 103 104 105 106 107 108 109 110 111 112 113  | Next Page >

  • Problem installing Cardbus/PCMCIA drivers (USB 2.0 2-port)

    - by Carl
    I obtained the drivers from the manufacturer for my HT-Link NEC USB 2.0 2-port Cardbus card. When I plugged in the card before I got the drivers, 3 new entries showed up in the Device Manager - two "NEC PCI to USB Open Host Controller" and one "Standard Enhanced PCI to USB Host Controller." With the card plugged in, I uninstalled those two drivers. I then removed the card. I copied the new drivers to c:\windows\system32\drivers and the .inf file to c:\windows\inf. I also copied the drivers & inf to a new directory called c:\windows\drivers\ousb2. I reinserted the card. Windows automatically installed the same drivers as before. I selected 'update driver' on the "NEC PCI to USB..." entry and didn't see any other options. I then selected 'have disk' and pointed to c:\windows\drivers\ousb2 and got a message "The specified location does not contain information about your hardware." I then selected 'update driver' on the "Standard Enhanced PCI to USB...," and manually selected "USB 2.0 Enhanced Host Controller" (OWC 4/15/2003 2.1.3.1). Windows then automatically found a USB root hub, and I manually selected "USB 2.0 Root Hub Device" (OWC 4/15/2003 2.1.3.1). Now there are two sections in the Device Manager titled "Universal Serial Bus controllers." I plugged in my external USB hard disk adapter, and "USB Mass Storage Device" was added to the first set. Here's how it looks (with drivers from the properties):

        [Universal Serial Bus controllers]
        Intel(R) 82801DB/DBM USB 2.0 Enhanced Host Controller - 24CD (6/1/2002 5.1.2600.0)
        Intel(R) 82801DB/DBM USB Universal Host Controller - 24C2 (7/1/2001 5.1.2600.5512)
        Intel(R) 82801DB/DBM USB Universal Host Controller - 24C4 (7/1/2001 5.1.2600.5512)
        Intel(R) 82801DB/DBM USB Universal Host Controller - 24C7 (7/1/2001 5.1.2600.5512)
        NEC PCI to USB Open Host Controller (7/1/2001 5.1.2600.5512)
        NEC PCI to USB Open Host Controller (7/1/2001 5.1.2600.5512)
        USB Mass Storage Device
        USB Root Hub (7/1/2001 5.1.2600.5512)
        (5 more USB Root Hubs - same driver)

        [Universal Serial Bus controllers]
        USB 2.0 Enhanced Host Controller (OWC 4/15/2003 2.1.3.1)
        USB 2.0 Root Hub Device (OWC 4/15/2003 2.1.3.1)

    When I unplug the card, the two "NEC PCI to USB..." entries in the first set disappear, and the whole second set disappears. (I unplugged the hard disk adapter first...) The hard disk adapter still doesn't work in that Cardbus card with the new drivers. I don't think the above looks right - a second set of USB controllers listed in the Device Manager, the NEC entries still in the first set, and the USB mass storage device still in the first set. Any help appreciated. (Windows XP Pro SP3 with all current updates.)

    Read the article

  • Windows 8 Disk Mirroring vs Intel Fake RAID

    - by Johnny W
    So Windows 8 is out and I have a new motherboard. I wish to create a RAID 1 coupling between two HDDs -- for storage purposes only (my OS is on an SSD) -- but I don't know which is the best route to take. My motherboard (Z77 chipset) comes with the age-old Intel Fake RAID, but since I only wish to use my RAID for storage, I wondered if I might be better off using Windows 8 Disk Mirroring. Can anyone advise which is better? Or perhaps give the pros and cons of each, if that's too contentious? I just can't see the benefit of FakeRAID. You can see my current setup here, if that might change things: Thanks!

    Read the article

  • Logging Bounced messages to a Database (Postfix with virtual domains/users)

    - by Gurunandan
    We have a Postfix installation with a couple of virtual domains, each with virtual users. These domains and users are mapped using a MySQL database. Until now I have been tracking bounces by parsing the Postfix log file. I suspect there must be better and more efficient ways of doing this. I thought of three, but I am not sure which is best:

    1. Write a Postfix content filter that logs the bounce and throws away the mail
    2. Use procmail - but I am not sure how procmail would work with virtual users who have no $HOME defined
    3. Write a script that POPs mail from the mailboxes, parses and logs it, and deletes the bounced email

    I would appreciate advice on which would be best from a maintenance point of view and which is most efficient in terms of conserving server resources. Thanks
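
    For comparison, here is a minimal sketch of the log-parsing approach tightened up to run continuously instead of re-reading the whole file. The log path, database credentials, and table schema are assumptions - adjust them to your setup, and use a real client library if addresses may contain quote characters:

        #!/bin/bash
        # Follow the Postfix log and record each bounce in MySQL.
        # Assumes: CREATE TABLE bounces (recipient VARCHAR(255), logline TEXT);
        tail -Fn0 /var/log/mail.log | grep --line-buffered 'status=bounced' | \
        while read -r line; do
            # Pull the recipient out of the to=<...> field
            rcpt=$(printf '%s' "$line" | sed -n 's/.*to=<\([^>]*\)>.*/\1/p')
            # NOTE: naive quoting - single quotes are stripped from the log line
            mysql -u maillog -pSECRET maildb \
                -e "INSERT INTO bounces (recipient, logline) VALUES ('$rcpt', '${line//\'/}')"
        done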

    Read the article

  • ZFS SAS/SATA controller recommendations

    - by ewwhite
    I've been working with OpenSolaris and ZFS for 6 months, primarily on a Sun Fire X4540 and standard Dell and HP hardware. One downside to standard Dell PERC and HP Smart Array controllers is that they do not have a true "passthrough" JBOD mode to present individual disks to ZFS. One can configure multiple RAID 0 arrays and get them working in ZFS, but it impacts hot-swap capabilities (thus requiring a reboot upon disk failure/replacement). I'm curious as to what SAS/SATA controllers are recommended for home-brewed ZFS storage solutions. In addition, what effect does a battery-backed write cache (BBWC) have in ZFS storage?

    Read the article

  • Could we build a mega-processor out of superconductors?

    - by Carson Myers
    A superconductor, once cooled below a critical temperature, loses all of its electrical resistance and therefore becomes 100% efficient. This means that when a current flows through a superconductor, none of the energy is lost to heat or light. Theoretically, could we build a processor out of superconductive materials that could effectively run at, oh I don't know, say, 300 GHz? Or 5,000 GHz? Since a superconductive circuit is 100% efficient, once supplied with electricity, the source of power could be completely removed from the circuit and the current would continue to flow forever. So if we made all the components inside a computer out of superconductive materials, could we get away with only supplying power to the peripherals and save a whole lot on energy, while dramatically increasing computing speed? Might this be one of the next big breakthroughs in computing? What do you think?

    Read the article

  • local cache for NAS or network folder

    - by HugoRune
    I am planning to build a network attached storage (NAS) server. Is there a way to cache frequently accessed files from the remote storage automatically on the local PC? (I am not looking for a way to sync whole folders like rsync, but rather something that automatically and transparently caches the last accessed 50 GB of files.) Ideally I am searching for something that caches writes as well as reads, since only one PC will be accessing the server (and one day of lost changes, if the local cache is damaged, would be acceptable). I looked into Windows Offline Files, but as far as I could tell this requires manual interaction to disconnect the server or go into offline mode in order to use the cache. The server will probably be running Linux or FreeNAS; the PC runs Windows XP, but could be upgraded to 7 if required.

    Read the article

  • Issue with VMware vSphere and NFS: recurring APD state

    - by Bastian N.
    I am experiencing issues with VMware vSphere 5.1 and NFS storage on 2 different setups, which result in an "All Paths Down" state for the NFS shares. At first this happened once or twice a day, but lately it occurs much more frequently, especially when Acronis Backup jobs are running.

    Setup 1 (Production): 2 ESXi 5.1 hosts (Essentials Plus) + OpenFiler with NFS as storage
    Setup 2 (Lab): 1 ESXi 5.1 host + Ubuntu 12.04 LTS with NFS as storage

    Here is an example from the vmkernel.log:

        2013-05-28T08:07:33.479Z cpu0:2054)StorageApdHandler: 248: APD Timer started for ident [987c2dd0-02658e1e]
        2013-05-28T08:07:33.479Z cpu0:2054)StorageApdHandler: 395: Device or filesystem with identifier [987c2dd0-02658e1e] has entered the All Paths Down state.
        2013-05-28T08:07:33.479Z cpu0:2054)StorageApdHandler: 846: APD Start for ident [987c2dd0-02658e1e]!
        2013-05-28T08:07:37.485Z cpu0:2052)NFSLock: 610: Stop accessing fd 0x410007e4cf28 3
        2013-05-28T08:07:37.485Z cpu0:2052)NFSLock: 610: Stop accessing fd 0x410007e4d0e8 3
        2013-05-28T08:07:41.280Z cpu1:2049)StorageApdHandler: 277: APD Timer killed for ident [987c2dd0-02658e1e]
        2013-05-28T08:07:41.280Z cpu1:2049)StorageApdHandler: 402: Device or filesystem with identifier [987c2dd0-02658e1e] has exited the All Paths Down state.
        2013-05-28T08:07:41.281Z cpu1:2049)StorageApdHandler: 902: APD Exit for ident [987c2dd0-02658e1e]!
        2013-05-28T08:07:52.300Z cpu1:3679)NFSLock: 570: Start accessing fd 0x410007e4d0e8 again
        2013-05-28T08:07:52.300Z cpu1:3679)NFSLock: 570: Start accessing fd 0x410007e4cf28 again

    As long as the issue occurred once or twice a day it really wasn't a problem, but now it has an impact on the VMs. The VMs get slow or even hang, resulting in a reset through vCenter in the production environment. I searched the web extensively and asked in forums, but so far nobody has been able to help me. Based on blog posts and VMware KB articles I tried the following NFS settings:

        Net.TcpipHeapSize = 32
        Net.TcpipHeapMax = 128
        NFS.HeartbeatFrequency = 12
        NFS.HeartbeatMaxFailures = 10
        NFS.HeartbeatTimeout = 5
        NFS.MaxQueueDepth = 64

    Instead of NFS.MaxQueueDepth = 64 I also tried other values such as NFS.MaxQueueDepth = 32 or even NFS.MaxQueueDepth = 1, unfortunately without any luck. It would be great if someone could help me with this issue. It is really annoying. Thanks in advance for all the help.

    [UPDATE] As I explained in the comment below, here is the network setup: On the production setup the NFS traffic is bound to a separate VLAN with ID 20. I am using an HP 1810 24-port switch. The OpenFiler system is connected to the VLAN with 4 Intel GbE NICs using dynamic LACP. The ESXis both have 4 Intel GbE NICs using 2 static LACP trunks containing 2 NICs each. One pair is connected to the regular LAN and the other one to VLAN 20. Screenshots of the vSwitch, the switch configuration, and the port configuration accompanied the original post. On the lab setup it is a single Intel NIC on each side without VLAN, but with a different IP subnet.
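
    For reference, on ESXi 5.x these advanced options can also be set from the ESXi shell; a sketch using the values tried above (verify the option paths against your build before running):

        # Set the NFS/TCP advanced options on an ESXi 5.x host (ESXi shell)
        esxcli system settings advanced set -o /NFS/HeartbeatFrequency -i 12
        esxcli system settings advanced set -o /NFS/HeartbeatMaxFailures -i 10
        esxcli system settings advanced set -o /NFS/HeartbeatTimeout -i 5
        esxcli system settings advanced set -o /NFS/MaxQueueDepth -i 64
        esxcli system settings advanced set -o /Net/TcpipHeapSize -i 32
        esxcli system settings advanced set -o /Net/TcpipHeapMax -i 128
        # The TCP/IP heap settings only take effect after a host reboot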

    Read the article

  • InnoDB MySQL plugin disabled

    - by alexcunn
    When I start up MySQL on my Ubuntu server I get the following messages:

        121122 17:39:37 [Note] Plugin 'FEDERATED' is disabled.
        121122 17:39:37 InnoDB: The InnoDB memory heap is disabled
        121122 17:39:37 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        121122 17:39:37 InnoDB: Compressed tables use zlib 1.2.3.4
        121122 17:39:37 InnoDB: Initializing buffer pool, size = 128.0M
        InnoDB: mmap(137363456 bytes) failed; errno 12
        121122 17:39:37 InnoDB: Completed initialization of buffer pool
        121122 17:39:37 InnoDB: Fatal error: cannot allocate memory for the buffer pool
        121122 17:39:37 [ERROR] Plugin 'InnoDB' init function returned error.
        121122 17:39:37 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        121122 17:39:37 [ERROR] Unknown/unsupported storage engine: InnoDB
        121122 17:39:37 [ERROR] Aborting
        121122 17:39:37 [Note] mysqld: Shutdown complete

    A few times I have got a message saying that the plugin is disabled. I use Webmin to configure it. Could that be a problem?
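
    errno 12 is ENOMEM: the mmap() of the 128 MB buffer pool fails because the machine is out of memory, which is common on small VPSes. A sketch of the usual remedy - shrinking the pool in my.cnf (the exact value is an assumption; check "free -m" and size it to the RAM you actually have free):

        # /etc/mysql/my.cnf
        [mysqld]
        # Default was 128M; pick something that fits in available memory
        innodb_buffer_pool_size = 64M

    Then restart MySQL and check the log again for the mmap error.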

    Read the article

  • How to Remove a VM From Hyper-V Without Deleting the Configuration File?

    - by Steven Murawski
    I'm in the process of moving a number of virtual machines that are homed on shared storage (a file share, though shared cluster disk would work as well) to a new VM host with access to the same shared storage. The new host is a different build version (moving from Windows Server 2012 Beta to Windows Server 2012 RC - though this same process could be used with migrations of Windows Server 2008/2008 R2 to Windows Server 2012 as well), so I cannot migrate the machine with inbox tooling. I need to remove the VM from management of the source Hyper-V host in order to import the VM to the new Hyper-V host. I want to retain the configuration file, so I can import the VM as it stands and not need to reconfigure it. The VHD files are rather large and they are staying on the same file share, so I'd rather not duplicate them during the move process.

    Read the article

  • Development server?

    - by ajsie
    For a project there will be me and one more programmer developing a web service. I wonder what the development environment should be like, because we need central storage (documents, pictures, business materials, etc.), file version handling, and LAMP (for testing the web service). I have never set up an environment for this before and would like suggestions from experienced people on which tools to use for effective collaboration. What crossed my mind:

    Separate applications:
    - Google Wave (for communication back and forth, setting up guidelines, other information)
    - TeamViewer (desktop sharing)
    - Skype (calling)

    VPS (Ubuntu server):
    - SVN (version tracking)
    - FTP (central storage)
    - LAMP (testing the web service)
    - SSH (managing the VPS)

    Is this an appropriate programming environment? And regarding the VPS, is it best practice to use ONE VPS for all the tasks listed above? All suggestions and feedback are welcome!
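
    If you go the VPS route, Subversion over the SSH you already plan to run needs no extra daemon; a quick sketch (repository path and hostname are placeholders):

        # On the VPS: create the repository
        svnadmin create /srv/svn/webservice
        # On each developer machine: check out over SSH
        svn checkout svn+ssh://user@your-vps.example.com/srv/svn/webservice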

    Read the article

  • UnicodeEncodeError when uploading files in Django admin

    - by Samuel Linde
    Note: I asked this question on Stack Overflow, but I realize this might be a more proper place to ask this kind of question. I'm trying to upload a file called 'Testaråäö.txt' via the Django admin app. I'm running Django 1.3.1 with Gunicorn 0.13.4 and nginx 0.7.67 on a Debian 6 server. The database is PostgreSQL 8.4.9. Other Unicode data is saved to the database with no problem, so I guess the problem must be with the filesystem somehow. I've set http { charset utf-8; } in my nginx.conf. LC_ALL and LANG are set to 'sv_SE.UTF-8'. Running 'locale' verifies this. I even tried setting LC_ALL and LANG in my nginx init script just to make sure the locale is set properly. Here's the traceback:

        Traceback (most recent call last):
          File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/core/handlers/base.py", line 111, in get_response
            response = callback(request, *callback_args, **callback_kwargs)
          File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/contrib/admin/options.py", line 307, in wrapper
            return self.admin_site.admin_view(view)(*args, **kwargs)
          File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/utils/decorators.py", line 93, in _wrapped_view
            response = view_func(request, *args, **kwargs)
          File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/views/decorators/cache.py", line 79, in _wrapped_view_func
            response = view_func(request, *args, **kwargs)
          File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 197, in inner
            return view(request, *args, **kwargs)
          File "/srv/django/letebo/app/cms/admin.py", line 81, in change_view
            return super(PageAdmin, self).change_view(request, obj_id)
          File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/utils/decorators.py", line 28, in _wrapper
            return bound_func(*args, **kwargs)
          File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/utils/decorators.py", line 93, in _wrapped_view
            response = view_func(request, *args, **kwargs)
          File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/utils/decorators.py", line 24, in bound_func
            return func(self, *args2, **kwargs2)
          File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/transaction.py", line 217, in inner
            res = func(*args, **kwargs)
          File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/contrib/admin/options.py", line 985, in change_view
            self.save_formset(request, form, formset, change=True)
          File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/contrib/admin/options.py", line 677, in save_formset
            formset.save()
          File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/forms/models.py", line 482, in save
            return self.save_existing_objects(commit) + self.save_new_objects(commit)
          File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/forms/models.py", line 613, in save_new_objects
            self.new_objects.append(self.save_new(form, commit=commit))
          File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/forms/models.py", line 717, in save_new
            obj.save()
          File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/models/base.py", line 460, in save
            self.save_base(using=using, force_insert=force_insert, force_update=force_update)
          File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/models/base.py", line 504, in save_base
            self.save_base(cls=parent, origin=org, using=using)
          File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/models/base.py", line 543, in save_base
            for f in meta.local_fields if not isinstance(f, AutoField)]
          File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/models/fields/files.py", line 255, in pre_save
            file.save(file.name, file, save=False)
          File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/models/fields/files.py", line 92, in save
            self.name = self.storage.save(name, content)
          File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/core/files/storage.py", line 48, in save
            name = self.get_available_name(name)
          File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/core/files/storage.py", line 74, in get_available_name
            while self.exists(name):
          File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/core/files/storage.py", line 218, in exists
            return os.path.exists(self.path(name))
          File "/srv/.virtualenvs/letebo/lib/python2.6/genericpath.py", line 18, in exists
            st = os.stat(path)
        UnicodeEncodeError: 'ascii' codec can't encode characters in position 52-54: ordinal not in range(128)

    I tried running Gunicorn with debugging turned on, and the file uploads without any problem at all. I suppose this must mean that the issue is with nginx. Still beats me where to look, though. Here are the raw response headers from Gunicorn and nginx, if it makes any sense:

    Gunicorn:

        HTTP/1.1 302 FOUND
        Server: gunicorn/0.13.4
        Date: Thu, 09 Feb 2012 14:50:27 GMT
        Connection: close
        Transfer-Encoding: chunked
        Expires: Thu, 09 Feb 2012 14:50:27 GMT
        Vary: Cookie
        Last-Modified: Thu, 09 Feb 2012 14:50:27 GMT
        Location: http://my-server.se:8000/admin/cms/page/15/
        Cache-Control: max-age=0
        Content-Type: text/html; charset=utf-8
        Set-Cookie: messages="yada yada yada"; Path=/

    nginx:

        HTTP/1.1 500 INTERNAL SERVER ERROR
        Server: nginx/0.7.67
        Date: Thu, 09 Feb 2012 14:50:57 GMT
        Content-Type: text/html; charset=utf-8
        Transfer-Encoding: chunked
        Connection: close
        Vary: Cookie
        500

    UPDATE: Both locale.getpreferredencoding() and sys.getfilesystemencoding() output 'UTF-8'. locale.getdefaultlocale() outputs ('sv_SE', 'UTF8'). This seems correct to me, so I'm still not sure why I keep getting these errors.
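
    One thing worth ruling out: when Gunicorn runs in the foreground with debugging it inherits your shell's locale, but a daemonized process often starts with an empty environment, in which case Python 2 falls back to ASCII for filesystem paths and the os.stat() call blows up on 'åäö'. A sketch of the usual fix (where exactly you put it depends on how you launch Gunicorn - init script, supervisor config, etc.):

        # In the script that launches Gunicorn, export the locale
        # before the daemon starts, so the workers inherit it:
        export LANG=sv_SE.UTF-8
        export LC_ALL=sv_SE.UTF-8

    Setting the locale in the nginx init script does not help here, because nginx never executes the Python code; Gunicorn does.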

    Read the article

  • sudo/su command for Red Hat Server 5.4

    - by rednaxela
    Without going into too much detail, I need to execute one Linux command on Red Hat with root user access. Red Hat Server 5.4 does not recognise the sudo command. The command su can be used to switch to the root user on Red Hat, but su cannot be done in one line. For example, the command: su ; cd opt/storage/RootAccessFolder will not work, because this only switches you to root, then executes the cd command once you have logged out from the root user. I guess what I'm looking for is something like sudo cd opt/storage/RootAccessFolder but, I say again, sudo doesn't work. Any ideas?
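
    For what it's worth, su can run a command in one line via its -c flag; a sketch, assuming the directory is /opt/storage/RootAccessFolder and the trailing command is a placeholder for whatever you need to run there:

        # Run a single command as root, from inside the target directory
        su - root -c 'cd /opt/storage/RootAccessFolder && ./your_command'

    Also, if "not recognised" means the binary is missing rather than misconfigured, sudo is available in the RHEL 5 repositories (yum install sudo).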

    Read the article

  • Email Servers that Abstract Mailbox Concepts [on hold]

    - by David
    Lately I've been really interested in doing some unique things with email, most of which rely on an SMTP and POP or IMAP server that gives the administrator an API to create arbitrary methods for email storage, notifications, or delivery. What I'm looking for would be analogous to mod_php and Apache, where Apache handles the delivery protocol and PHP handles the content creation and storage. I've considered making my own, as those three protocols are quite simple, but I'm always nervous about putting my code public-facing, especially when it's at that low of a level. So are there any email servers that allow for this much arbitrary control over email delivery, fetching, and receiving?

    Read the article

  • selective backup script in bash

    - by Sake
    Hi, I've been using this simple command (that's all I can do :) to back up the whole tree of my user data on the NAS server for a year:

        cp -r /STORAGE /BACKUP-STORAGE/YYYY-MM-DD

    Unfortunately, after a year of service, my users have started filling the space with lots of photos and clip art (jpg, gif, bmp), and that has made my backup process much slower. Space is also a big issue: I no longer have enough for a week-long set of daily backups. I think I want to change from backing up everything to backing up only non-image data. How can I exclude jpg, gif, and bmp files from the backup? It's quite easy with the DOS XCOPY command, but I really have no idea how to do that in bash. Thanks
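
    A sketch of one common approach - rsync with exclude patterns. The destination layout mirrors your cp command; add uppercase variants ('*.JPG' etc.) if your users' cameras produce them, since the patterns are case-sensitive:

        #!/bin/bash
        # Back up everything except images, into a dated directory
        rsync -a \
            --exclude='*.jpg' --exclude='*.gif' --exclude='*.bmp' \
            /STORAGE/ "/BACKUP-STORAGE/$(date +%Y-%m-%d)/"

    rsync also offers --link-dest for hard-linked incremental sets, which may help with the space problem as well.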

    Read the article

  • How can I tell if ZFS (zfs-fuse) dedup/compression is applied to a particular file?

    - by asari
    I have a ZFS-formatted partition using zfs-fuse for Linux (Ubuntu). I had used it for a while, and then enabled dedup and compression on it (zfs set compression=on/dedup=on). Now I think I have some files that are dedup'ed and compressed, and files that are not yet. It was OK, but sometimes I was confused. Let's see: the following command would consume almost 4 GB of my ZFS storage:

        cp oldfile.4GB newfile.4GB

    .. and this would consume almost zero:

        cp newfile.4GB newfile.4GB.2

    This is because the old file is not yet compressed, so dedup does not happen, I think. My idea is: if I can find old files that are not yet dedup'ed/compressed, I can batch copy/rename/remove them to eliminate duplicity and redundancy. But how can I check that? I know that re-copying the whole contents of my storage would work (even better with checking the timestamp of each file), but I'd be happier if I had a zfsstat-like tool that shows some file properties.
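
    One low-tech per-file check: compare a file's apparent length with the blocks it actually occupies, since compression shows up as du reporting less than ls. A sketch (filenames and dataset are placeholders):

        # Apparent size vs. allocated size: compressed files show a gap
        ls -l oldfile.4GB      # logical length in bytes
        du -h oldfile.4GB      # blocks actually allocated on the pool
        # Dataset-wide view of how well compression is doing:
        zfs get compressratio,used,referenced tank/data

    Note that dedup savings are accounted pool-wide (the DEDUP column of zpool list), so they will not show up in du for an individual file.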

    Read the article

  • NetApp Fragmentation

    - by mdpc
    We all know that once a disk (or storage system, for that matter) is put into use, performance degrades due to fragmentation of files. This seems to be why disk defragmenters are in fairly wide use on Windows boxes, and they do increase performance substantially. As an aside, I haven't heard of many defragmenters in the Unix/Linux area. Despite the claimed WAFL protections on a NetApp, file fragmentation will still occur, especially with all the sparsely created VMs. My question is: does anybody do any sort of defragmentation on such a storage system? Do you notice any measurable degradation/improvement from doing or not doing anything about it? Does anybody do anything about it? If so, what? Thanks
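
    On the NetApp side, Data ONTAP does ship a tool for exactly this - the reallocate command - though whether it is worth running is the very debate the question raises. A rough sketch from memory (7-mode syntax; the volume name is a placeholder, and you should check your ONTAP version's documentation before running anything):

        # Measure how fragmented a volume's layout is (advisory only)
        reallocate measure /vol/vmware_vol
        # Start an actual reallocation pass on the volume
        reallocate start -f /vol/vmware_vol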

    Read the article

  • (Simple) Linux HA with VMware vSphere?

    - by derhelge
    I hope my upcoming question is specific enough, and you are able and willing to support :-) We have several openSUSE VMs in an ESX cluster (three ESX servers) with an attached iSCSI SAN. All of those Linux VMs are configured as a "single point of failure", which means, in the case of a web server: LAMP, storage, etc. all on one machine. This was very simple, and in case of a failure (in the last years: kernel panics or Apache crashes) a simple reboot triggered by a script did it. But the problem is: how do I upgrade/maintain the web application or the underlying OS without downtime? This wasn't really manageable, and I did it in the early morning ;) How can I achieve a "simple" high-availability cluster now? I thought of: DRBD with Heartbeat, with 2 VMs, and for the storage an RDM (raw device mapped) LUN with the read-write permissions changed for both VMs. Is this a good idea? Does anyone have a better solution?
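
    A minimal sketch of what the DRBD half could look like - one resource mirrored between the two VMs, each using its own virtual disk (hostnames, devices, and addresses are placeholders):

        # /etc/drbd.d/r0.res (DRBD 8.x syntax)
        resource r0 {
            protocol C;                # synchronous replication
            device    /dev/drbd0;
            disk      /dev/sdb1;       # dedicated backing disk in each VM
            meta-disk internal;
            on web1 { address 10.0.0.1:7788; }
            on web2 { address 10.0.0.2:7788; }
        }

    Note that with DRBD each node keeps its own copy of the data, so a shared read-write RDM LUN would not be needed - and mounting one LUN read-write on two nodes without a cluster filesystem corrupts data.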

    Read the article

  • Connecting to BSNL WLL with a Huawei fixed wireless modem connected to a USB port on Linux Mint

    - by Rakesh
    On Windows, when I plugged in the USB connector, the device was recognized as a mass storage device and installed all its drivers on my computer; then the device automatically restarted and started behaving like a modem. I'm writing this post using the same modem, through Windows... On Linux Mint (Debian kernel, similar to Ubuntu 8.10), the device is sometimes recognized as a mass storage device, but there are no programs on it useful for Linux... When I use the modem on Windows, restart the computer, and log in to Linux, the device shows up as "not yet configured" in the output of the terminal command "lsusb"... I googled a lot for solutions and tried many things; at last I configured it and ran the command "wvdial", but I get the error that the modem is not responding! :'( Please help me out... Many more people are facing this problem, as I discovered when I googled for it. This is the website of Huawei, the maker of my modem: www.huawei.com Specification: model: ETS1220, frequency: 800M. Thank you.
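
    The Windows behaviour you describe (storage device first, then a modem) is the usual "ZeroCD" mode switching that Huawei devices do; on Linux the switch has to be triggered manually, typically with usb_modeswitch. A rough sketch (12d1 is Huawei's vendor ID, but the product ID below is a placeholder - read the real one from lsusb, and treat the whole thing as an assumption to verify for the ETS1220):

        # Find the device's vendor:product IDs
        lsusb
        # For many Huawei devices, "ejecting" the fake CD flips it to modem mode
        eject /dev/sr1        # or whichever device node the fake CD got
        # Otherwise, usb_modeswitch with the IDs from lsusb and a matching config:
        usb_modeswitch -v 12d1 -p 1001 -c /etc/usb_modeswitch.conf

    Once a /dev/ttyUSB* node appears, point wvdial's "Modem =" line at it.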

    Read the article

  • Feedback on Using ZFS and FreeBSD

    - by ToiletOverflow
    I need to create a server that will be used solely for backing up files. The server will have 2TB of storage to begin with but I may want to add additional storage later on. As such, I am currently considering using FreeBSD + ZFS as the OS and file system. Is ZFS a reliable, trusted file system? Should I use it in this scenario? I have read that ZFS should be used with OpenSolaris over FreeBSD as OpenSolaris is usually ahead of the curve with ZFS as far as version updates and stability. However, I am not interested in using OpenSolaris for this project. An alternative option that I am considering is to stick with ext3 and create multiple volumes if need be, because I know that I will not need a single, continuous volume larger than 2TB. Thanks in advance for your feedback.
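
    On the growth point, ZFS makes adding space straightforward: you extend a pool by adding another vdev. A sketch with FreeBSD device names (ada*; adjust to your hardware):

        # Initial 2 TB pool as a mirrored pair
        zpool create backup mirror ada1 ada2
        # Later: grow the pool by adding a second mirrored pair
        zpool add backup mirror ada3 ada4

    The caveat is that vdevs cannot be removed once added, so it pays to plan the layout before committing data.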

    Read the article

  • Best practice for assigning private IP ranges?

    - by Tauren
    Is it common practice to use certain private IP address ranges for certain purposes? I'm starting to look into setting up virtualization systems and storage servers. Each system has two NICs, one for public network access, and one for internal management and storage access. Is it common for businesses to use certain ranges for certain purposes? If so, what are these ranges and purposes? Or does everyone do it differently? I just don't want to do it completely differently from what is standard practice in order to simplify things for new hires, etc.

    Read the article

  • Are FC and SAS DAS devices standard enough?

    - by user222182
    Before I ask my questions, here is some background info that may or may not be useful: for the first time I find myself needing a DAS solution. My priority is data throughput in a single direction. I can write large blocks, and I don't need to read at the same time. The server (the data-producing device) is not really a typical server; it's a very powerful single-board computer. As such, I have limited options when it comes to the add-in cards I can install, since they must use the fairly uncommon XMC interface. Currently I believe I am limited to PCIe x8 Gen 1, which means that the likely bottleneck for me will be this 16 Gbps connection. XMC boards I have found so far offer the following connections:

    a) Dual 10 GbE ethernet controllers, total throughput 20 Gbps
    b) Dual quad SAS 2.0 connectors (SFF-8XXX), HBA (no RAID), total throughput 48 Gbps
    c) Dual FC 8 Gb HBAs (no RAID), total throughput 16 Gbps

    My questions for you guys are:

    1) Are SAS and/or FC, and by extension their HBAs, standard enough that I could purchase a Dell or Aberdeen storage server with a RAID controller that has external SAS or FC ports and expect that I can connect it to my SAS or FC HBA and be presented with a single volume (if I so configured the storage server), all without having to check for HBA compatibility?

    2) On a device like a Dell PowerVault (either DAS or NAS), is there an OS on it to concern myself with, or is it meant to be remotely managed? Is there a local interface in case I can't remotely manage it (i.e. if my single-board computer uses an OS not supported by Dell OpenManage)? Would this be true of nearly any device which calls itself a DAS?

    3) If I purchased some sort of Supermicro storage chassis and installed a RAID controller with external connections, is there a nice lightweight OS I can run just for management of the controller? Would I even need an OS, since the RAID card would be configured pre-boot anyway?

    4) It is much easier to buy XMC-based 10 gigabit ethernet cards (generally dual port). In what ways would I be getting into trouble by using iSCSI as a DAS via direct cabling with SFP+ cables?

    Thanks in advance

    Read the article

  • New standalone ESXi 5 deployments - USB versus SD card?

    - by ewwhite
    Now that the old full VMware ESX with service console is no longer available, I'm redeploying some standalone ESXi servers. I'm using HP ProLiant ML and DL G6 and G7 servers. Does it make more sense to utilize the internal USB port for ESXi or the internal SD card slot? I'm using the HP ESXi 5 build, but am not sure what the recommended practice is. Any recommendations on cards/USB drives for this purpose? BTW - these will be all-in-one storage servers with the onboard disk storage presented via PCIe passthrough.

    Read the article

  • Windows Server 2003 R2 sp2 and Exchange 2003 - missing pop-up menu "Exchange Tasks"

    - by Denis
    I need to recover a database. Here is what I am doing, step by step:

    - In Exchange System Manager, check the server
    - Create a new Recovery Storage Group
    - Add the database to recover
    - Mount the store for that database (Mailbox Store) - all finished successfully

    Next step: I need to check the user and click on "Exchange Tasks..." in the pop-up menu, but in the menu I see only "Help". Main question: why do I not have "Exchange Tasks", and how can I get it? I can see "Exchange Tasks" under "First Storage Group" - Mailbox - User, though. Sorry for my bad English. Thanks, Denis

    Read the article

  • How to pipe data to an sftp connection?

    - by JMW
    ftp supports the put "|..." "remote-file.name" command to pipe data to an FTP connection. Is there something similar available for sftp? In sftp I get the following error:

        sftp 'jmw@backupsrv:/uploads'
        sftp> put "| tar -cx /storage" "backup-2012-06-19--17-51.tgz"
        stat | tar -cv /storage: No such file or directory

    As shown above, the sftp client obviously doesn't execute the command. I want to use a pipe to redirect the file stream directly to sftp (because there is not enough space left to create a backup file on the same disk before uploading it to the sftp server).
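
    sftp itself has no equivalent of ftp's put "|...", but if you have shell access on the server you can get the same streaming effect by piping through ssh; a sketch using the host and paths from your example:

        # Stream the tarball straight to the remote disk - nothing is written locally
        tar -czf - /storage | ssh jmw@backupsrv "cat > /uploads/backup-$(date +%F--%H-%M).tgz"

    The $(date ...) sits inside double quotes, so it expands on the local machine before ssh runs, which is what you want for the filename.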

    Read the article
