Search Results

Search found 716 results on 29 pages for 'gokhan nas'.


  • Poor NFS Performance: OpenFiler

    - by Safin09
    Good day everyone. I have an issue with OpenFiler, a Linux-based operating system that converts a computer into a SAN/NAS appliance. Here is the problem. In my environment we have two NetApp StoreVault 500 appliances to which I normally perform backups over NFS. There are two backup cron jobs that use ghettoVCB to back up two groups of VMs. One group is a pool of 3 VMs; this takes 13 mins to complete. A second job backs up a pool of 5 VMs to the 2nd StoreVault appliance and takes 2 hours. We then installed OpenFiler on an old server with two Xeon processors, with software RAID 5 in place. When performing the same backups to an OpenFiler NFS share, the first backup job, which normally takes 13 mins, takes around 4 hours, and the second, which normally takes 2 hours, takes almost 10 hours to complete. This is unacceptable, especially considering the strain placed on the host ESX server. I assumed that the CPU overhead of software RAID 5 explained the long backup times, so I installed OpenFiler on a 2nd server, an IBM x306 machine with a P4 Intel processor and no RAID at all: a single 750GB hard drive containing the OS, with the rest of the disk used to back up VMs to an NFS share. I performed the first backup job, the pool of 3 VMs. This time the backup took 1.5 hours to complete instead of 13 mins! Is OpenFiler simply poor at being an NFS server? Has anyone else had these issues with OpenFiler?
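
    One way to isolate where the slowdown lives (a sketch, not from the original post; the export path and mount point below are assumptions) is to benchmark raw NFS write throughput from a Linux client against both boxes:

        # Hypothetical OpenFiler export; repeat the same test against the StoreVault.
        mkdir -p /mnt/openfiler
        mount -t nfs openfiler:/mnt/vg0/share /mnt/openfiler
        # Write 1 GiB and force it to disk, so the NFS/disk path is measured
        # rather than the client page cache:
        dd if=/dev/zero of=/mnt/openfiler/testfile bs=1M count=1024 conv=fsync

    If raw writes to the OpenFiler share are also roughly 10x slower, the problem is the export/disk path (a 'sync' export sitting on software RAID 5 is a common culprit), not ghettoVCB itself.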

    Read the article

  • pfsense 2.0.1 Firewall SMB Share not showing up under network

    - by atrueresistance
    I have a FreeNAS NAS with an SMB share running at 192.168.2.2 on a 192.168.2.0/28 network. The gateway is 192.168.2.1. Originally this ran on a switch with my LAN, but having upgraded to new hardware, the FreeNAS now has its own port on the firewall. Before the switch, the FreeNAS would show up under Network on a Windows 7 box and an OS X Lion box as freenas{wins} or CIFS shares on freenas{osx}, so I know it doesn't have anything to do with the FreeNAS itself. Here are my pfSense rules:

    ID Proto Source Port Destination Port Gateway Queue Schedule Description
    PASS TCP FREENAS net * LAN net 139 (NetBIOS-SSN) * none cifs lan passthrough
    PASS TCP FREENAS net * LAN net 389 (LDAP) * none cifs lan passthrough
    PASS TCP FREENAS net * LAN net 445 (MS DS) * none cifs lan passthrough
    PASS UDP FREENAS net * LAN net 137 (NetBIOS-NS) * none cifs lan passthrough
    PASS UDP FREENAS net * LAN net 138 (NetBIOS-DGM) * none cifs lan passthrough
    BLOCK * FREENAS net * LAN net * * none
    BLOCK * FREENAS net * OPTZONE net * * none
    BLOCK * FREENAS net * 192.168.2.1 * * none
    PASS * FREENAS net * * * * none
    BLOCK * * * * * * none

    I can connect if I use \\192.168.2.2 and enter the correct login details; I would just like the share to show up under Network. Nothing in the log seems to be blocked when I filter by 192.168.2.2. What port am I missing for SMB to show up under the network, so I don't have to connect by IP? P.S. Do I really need the LDAP rule?
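
    For what it's worth, the missing piece is usually not a firewall port at all: "Network" browsing relies on NetBIOS broadcasts (UDP 137/138), and broadcasts do not cross a routed subnet no matter which rules pass. A hedged sketch of the usual Samba-side fixes (real smb.conf directives; the addresses are assumptions):

        # On the FreeNAS box, announce into the LAN subnet's broadcast address:
        remote announce = 192.168.1.255    # assumption: LAN is 192.168.1.0/24
        # ...or run a WINS server on the FreeNAS box:
        wins support = yes
        # and hand out "wins server = 192.168.2.2" to clients via DHCP.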

    Read the article

  • Solaris: detect hotswap SATA disk insert

    - by growse
    What's the method used on Solaris to get the system to rescan for new disks that have been hot-plugged on a SATA controller? I've got an HP X1600 NAS which had 9 drives configured in a ZFS pool. I've added 3 disks, but the format command still only shows the original 9. When I plugged them in, I saw this:

    cpqary3: [ID 823470 kern.notice] NOTICE: Smart Array P212 Controller
    cpqary3: [ID 823470 kern.notice] Hot-plug drive inserted, Port=1I Box=1 Bay=12
    cpqary3: [ID 479030 kern.notice] Configured Drive ? ....... NO
    cpqary3: [ID 100000 kern.notice]
    cpqary3: [ID 823470 kern.notice] NOTICE: Smart Array P212 Controller
    cpqary3: [ID 823470 kern.notice] Hot-plug drive inserted, Port=1I Box=1 Bay=11
    cpqary3: [ID 479030 kern.notice] Configured Drive ? ....... NO
    cpqary3: [ID 100000 kern.notice]
    cpqary3: [ID 823470 kern.notice] NOTICE: Smart Array P212 Controller
    cpqary3: [ID 823470 kern.notice] Hot-plug drive inserted, Port=1I Box=1 Bay=10
    cpqary3: [ID 479030 kern.notice] Configured Drive ? ....... NO

    So the system has clearly detected them, but I can't figure out how to get the format command to see them.
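
    A hedged sketch of the usual rescan sequence (standard Solaris commands; whether HP's array utility exists for this build is an assumption):

        devfsadm -Cv     # rebuild /dev and /devices, pruning stale entries
        cfgadm -al       # list attachment points; look for new disk targets
        format           # re-check the visible disk list

    Note the controller's own hint, "Configured Drive ? ... NO": a Smart Array controller only exposes drives that belong to a configured logical volume, so the new disks may first need to be added to an array or presented as logical drives with HP's array utility (e.g. hpacucli, if available for Solaris) before format can ever see them.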

    Read the article

  • Bacula & Multiple Tape Devices, and so on

    - by Tom O'Connor
    Bacula won't make use of 2 tape devices simultaneously. (Search for #-#-# for the TL;DR.) A little background, perhaps. In the process of trying to get a decent working backup solution (backing up 20TB ain't cheap, or easy) at $dayjob, we bought a bunch of things to make it work. Firstly, there's a Spectra Logic T50e autochanger, 40 slots of LTO5 goodness, and that robot's got a pair of IBM HH5 Ultrium LTO5 drives, connected via FibreChannel Arbitrated Loop to our backup server. Then there's the backup server: a Dell R715 with 2x 16-core AMD 62xx CPUs and 32GB of RAM. Yummy. That server's got 2 Emulex FCe-12000E cards and an Intel X520-SR dual-port 10GbE NIC. We were also sold CommVault Backup (non-NDMP). Here's where it gets really complicated. Spectra Logic and CommVault both sent respective engineers, who set up the library and the software. CommVault was running fine, in so far as the controller was working fine. The Dell server runs Ubuntu 12.04 Server and the MediaAgent for CommVault, and mounts our BlueArc NAS over NFS to a few mountpoints, like /home, and some stuff in /mnt. When backing up from the NFS mountpoints, we were seeing ~290GB/hr throughput. That's CRAP, considering we've got 20-odd TB to get through in a <48 hour backup window. The rated maximum on the BlueArc is 700MB/s (2460GB/hr), and the rated maximum write speed on the tape devices is 140MB/s per drive, so that's 492GB/hr (or double it, for the total throughput). So, the next step was to benchmark NFS performance with IOzone, and it turns out that we get epic write performance (across 20 threads), something like 1.5-2.5TB/hr, but read performance is fecking hopeless. I couldn't ever get higher than 343GB/hr maximum. So let's assume that 343GB/hr is the theoretical maximum for read performance on the NAS; in theory we should be able to get that performance out of a) CommVault and b) any other backup agent. Not the case. CommVault only ever gives me 200-250GB/hr throughput, so out of experimentation I installed Bacula to see what the state of play there is. If, for example, Bacula gave consistently better performance and speeds than CommVault, then we'd be able to say "**$.$ Refunds Plz $.$**" #-#-# Alas, I found a different problem with Bacula. CommVault seems pretty happy to read from one part of the mountpoint with one thread and stream that to a tape device, whilst reading from some other directory with the other thread and writing to the 2nd drive in the autochanger. I can't for the life of me get Bacula to mount and write to two tape drives simultaneously. Things I've tried:

    - Setting Maximum Concurrent Jobs = 20 in the Director, File and Storage Daemons
    - Setting Prefer Mounted Volumes = no in the Job Definition
    - Setting multiple devices in the Autochanger resource

    The documentation seems to be very single-drive-centric, and we feel a little like we've strapped a rocket to a hamster with this one. The majority of example Bacula configurations are for DDS4 drives, manual tape swapping, and FreeBSD or IRIX systems. I should probably add that I'm not too bothered if this isn't possible, but I'd be surprised. I basically want to use Bacula as proof to stick it to the software vendors that they're overpriced ;) I read somewhere that @KyleBrandt has done something similar with a modern tape solution.
    Configuration Files:

    *bacula-dir.conf*

    #
    # Default Bacula Director Configuration file
    Director {                            # define myself
      Name = backuphost-1-dir
      DIRport = 9101                      # where we listen for UA connections
      QueryFile = "/etc/bacula/scripts/query.sql"
      WorkingDirectory = "/var/lib/bacula"
      PidDirectory = "/var/run/bacula"
      Maximum Concurrent Jobs = 20
      Password = "yourekiddingright"      # Console password
      Messages = Daemon
      DirAddress = 0.0.0.0
      #DirAddress = 127.0.0.1
    }

    JobDefs {
      Name = "DefaultFileJob"
      Type = Backup
      Level = Incremental
      Client = backuphost-1-fd
      FileSet = "Full Set"
      Schedule = "WeeklyCycle"
      Storage = File
      Messages = Standard
      Pool = File
      Priority = 10
      Write Bootstrap = "/var/lib/bacula/%c.bsr"
    }

    JobDefs {
      Name = "DefaultTapeJob"
      Type = Backup
      Level = Incremental
      Client = backuphost-1-fd
      FileSet = "Full Set"
      Schedule = "WeeklyCycle"
      Storage = "SpectraLogic"
      Messages = Standard
      Pool = AllTapes
      Priority = 10
      Write Bootstrap = "/var/lib/bacula/%c.bsr"
      Prefer Mounted Volumes = no
    }

    #
    # Define the main nightly save backup job
    # By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir
    Job {
      Name = "BackupClient1"
      JobDefs = "DefaultFileJob"
    }

    Job {
      Name = "BackupThisVolume"
      JobDefs = "DefaultTapeJob"
      FileSet = "SpecialVolume"
    }

    #Job {
    #  Name = "BackupClient2"
    #  Client = backuphost-12-fd
    #  JobDefs = "DefaultJob"
    #}

    # Backup the catalog database (after the nightly save)
    Job {
      Name = "BackupCatalog"
      JobDefs = "DefaultFileJob"
      Level = Full
      FileSet = "Catalog"
      Schedule = "WeeklyCycleAfterBackup"
      # This creates an ASCII copy of the catalog
      # Arguments to make_catalog_backup.pl are:
      #   make_catalog_backup.pl <catalog-name>
      RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog"
      # This deletes the copy of the catalog
      RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup"
      Write Bootstrap = "/var/lib/bacula/%n.bsr"
      Priority = 11                       # run after main backup
    }

    #
    # Standard Restore template, to be changed by Console program
    # Only one such job is needed for all Jobs/Clients/Storage ...
    #
    Job {
      Name = "RestoreFiles"
      Type = Restore
      Client = backuphost-1-fd
      FileSet = "Full Set"
      Storage = File
      Pool = Default
      Messages = Standard
      Where = /srv/bacula/restore
    }

    FileSet {
      Name = "SpecialVolume"
      Include {
        Options {
          signature = MD5
        }
        File = /mnt/SpecialVolume
      }
      Exclude {
        File = /var/lib/bacula
        File = /nonexistant/path/to/file/archive/dir
        File = /proc
        File = /tmp
        File = /.journal
        File = /.fsck
      }
    }

    # List of files to be backed up
    FileSet {
      Name = "Full Set"
      Include {
        Options {
          signature = MD5
        }
        File = /usr/sbin
      }
      Exclude {
        File = /var/lib/bacula
        File = /nonexistant/path/to/file/archive/dir
        File = /proc
        File = /tmp
        File = /.journal
        File = /.fsck
      }
    }

    Schedule {
      Name = "WeeklyCycle"
      Run = Full 1st sun at 23:05
      Run = Differential 2nd-5th sun at 23:05
      Run = Incremental mon-sat at 23:05
    }

    # This schedule does the catalog. It starts after the WeeklyCycle.
    Schedule {
      Name = "WeeklyCycleAfterBackup"
      Run = Full sun-sat at 23:10
    }

    # This is the backup of the catalog
    FileSet {
      Name = "Catalog"
      Include {
        Options {
          signature = MD5
        }
        File = "/var/lib/bacula/bacula.sql"
      }
    }

    # Client (File Services) to backup
    Client {
      Name = backuphost-1-fd
      Address = localhost
      FDPort = 9102
      Catalog = MyCatalog
      Password = "surelyyourejoking"      # password for FileDaemon
      File Retention = 30 days            # 30 days
      Job Retention = 6 months            # six months
      AutoPrune = yes                     # Prune expired Jobs/Files
    }

    #
    # Second Client (File Services) to backup
    # You should change Name, Address, and Password before using
    #
    #Client {
    #  Name = backuphost-12-fd
    #  Address = localhost2
    #  FDPort = 9102
    #  Catalog = MyCatalog
    #  Password = "i'mnotjokinganddontcallmeshirley"  # password for FileDaemon 2
    #  File Retention = 30 days           # 30 days
    #  Job Retention = 6 months           # six months
    #  AutoPrune = yes                    # Prune expired Jobs/Files
    #}

    # Definition of file storage device
    Storage {
      Name = File
      # Do not use "localhost" here
      Address = localhost                 # N.B. Use a fully qualified name here
      SDPort = 9103
      Password = "lalalalala"
      Device = FileStorage
      Media Type = File
    }

    Storage {
      Name = "SpectraLogic"
      Address = localhost
      SDPort = 9103
      Password = "linkedinmakethebestpasswords"
      Device = Drive-1
      Device = Drive-2
      Media Type = LTO5
      Autochanger = yes
    }

    # Generic catalog service
    Catalog {
      Name = MyCatalog
      # Uncomment the following line if you want the dbi driver
      # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport =
      dbname = "bacula"; DB Address = ""; dbuser = "bacula"; dbpassword = "bbmaster63"
    }

    # Reasonable message delivery -- send most everything to email address
    # and to the console
    Messages {
      Name = Standard
      mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r"
      operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r"
      mail = root@localhost = all, !skipped
      operator = root@localhost = mount
      console = all, !skipped, !saved
      #
      # WARNING! the following will create a file that you must cycle from
      #  time to time as it will grow indefinitely. However, it will
      #  also keep all your messages if they scroll off the console.
      #
      append = "/var/lib/bacula/log" = all, !skipped
      catalog = all
    }

    #
    # Message delivery for daemon messages (no job).
    Messages {
      Name = Daemon
      mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r"
      mail = root@localhost = all, !skipped
      console = all, !skipped, !saved
      append = "/var/lib/bacula/log" = all, !skipped
    }

    # Default pool definition
    Pool {
      Name = Default
      Pool Type = Backup
      Recycle = yes                       # Bacula can automatically recycle Volumes
      AutoPrune = yes                     # Prune expired volumes
      Volume Retention = 365 days         # one year
    }

    # File Pool definition
    Pool {
      Name = File
      Pool Type = Backup
      Recycle = yes                       # Bacula can automatically recycle Volumes
      AutoPrune = yes                     # Prune expired volumes
      Volume Retention = 365 days         # one year
      Maximum Volume Bytes = 50G          # Limit Volume size to something reasonable
      Maximum Volumes = 100               # Limit number of Volumes in Pool
    }

    Pool {
      Name = AllTapes
      Pool Type = Backup
      Recycle = yes
      AutoPrune = yes                     # Prune expired volumes
      Volume Retention = 31 days          # one month
    }

    # Scratch pool definition
    Pool {
      Name = Scratch
      Pool Type = Backup
    }

    #
    # Restricted console used by tray-monitor to get the status of the director
    #
    Console {
      Name = backuphost-1-mon
      Password = "LastFMalsostorePasswordsLikeThis"
      CommandACL = status, .status
    }

    *bacula-sd.conf*

    #
    # Default Bacula Storage Daemon Configuration file
    #
    Storage {                             # definition of myself
      Name = backuphost-1-sd
      SDPort = 9103                       # Director's port
      WorkingDirectory = "/var/lib/bacula"
      Pid Directory = "/var/run/bacula"
      Maximum Concurrent Jobs = 20
      SDAddress = 0.0.0.0
      # SDAddress = 127.0.0.1
    }

    #
    # List Directors who are permitted to contact Storage daemon
    #
    Director {
      Name = backuphost-1-dir
      Password = "passwordslinplaintext"
    }

    #
    # Restricted Director, used by tray-monitor to get the
    # status of the storage daemon
    #
    Director {
      Name = backuphost-1-mon
      Password = "totalinsecurityabound"
      Monitor = yes
    }

    Device {
      Name = FileStorage
      Media Type = File
      Archive Device = /srv/bacula/archive
      LabelMedia = yes;                   # lets Bacula label unlabeled media
      Random Access = Yes;
      AutomaticMount = yes;               # when device opened, read it
      RemovableMedia = no;
      AlwaysOpen = no;
    }

    Autochanger {
      Name = SpectraLogic
      Device = Drive-1
      Device = Drive-2
      Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
      Changer Device = /dev/sg4
    }

    Device {
      Name = Drive-1
      Drive Index = 0
      Archive Device = /dev/nst0
      Changer Device = /dev/sg4
      Media Type = LTO5
      AutoChanger = yes
      RemovableMedia = yes;
      AutomaticMount = yes;
      AlwaysOpen = yes;
      RandomAccess = no;
      LabelMedia = yes
    }

    Device {
      Name = Drive-2
      Drive Index = 1
      Archive Device = /dev/nst1
      Changer Device = /dev/sg4
      Media Type = LTO5
      AutoChanger = yes
      RemovableMedia = yes;
      AutomaticMount = yes;
      AlwaysOpen = yes;
      RandomAccess = no;
      LabelMedia = yes
    }

    #
    # Send all messages to the Director,
    # mount messages also are sent to the email address
    #
    Messages {
      Name = Standard
      director = backuphost-1-dir = all
    }

    *bacula-fd.conf*

    #
    # Default Bacula File Daemon Configuration file
    #

    #
    # List Directors who are permitted to contact this File daemon
    #
    Director {
      Name = backuphost-1-dir
      Password = "hahahahahaha"
    }

    #
    # Restricted Director, used by tray-monitor to get the
    # status of the file daemon
    #
    Director {
      Name = backuphost-1-mon
      Password = "hohohohohho"
      Monitor = yes
    }

    #
    # "Global" File daemon configuration specifications
    #
    FileDaemon {                          # this is me
      Name = backuphost-1-fd
      FDport = 9102                       # where we listen for the director
      WorkingDirectory = /var/lib/bacula
      Pid Directory = /var/run/bacula
      Maximum Concurrent Jobs = 20
      #FDAddress = 127.0.0.1
      FDAddress = 0.0.0.0
    }

    # Send all messages except skipped files back to Director
    Messages {
      Name = Standard
      director = backuphost-1-dir = all, !skipped, !restored
    }
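
    One commonly suggested direction, offered here only as a sketch (it is not from the original post): give each concurrent tape job its own pool, so the storage daemon has no reason to queue the second job behind the volume already mounted in Drive-1. A hypothetical bacula-dir.conf fragment:

        # Split AllTapes so two concurrent jobs cannot contend for one volume.
        Pool {
          Name = TapesA
          Pool Type = Backup
          Recycle = yes
          AutoPrune = yes
          Volume Retention = 31 days
        }
        Pool {
          Name = TapesB
          Pool Type = Backup
          Recycle = yes
          AutoPrune = yes
          Volume Retention = 31 days
        }
        # ...then point one Job at Pool = TapesA and the other at Pool = TapesB,
        # keeping Prefer Mounted Volumes = no and Maximum Concurrent Jobs > 1
        # in the Director, the Storage resource and both Device resources.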

    Read the article

  • Can't access server on LAN with new router

    - by RMDan
    Earlier this week my roommates decided to change the router we are using for our home network. On the old router I had no problems accessing a laptop running Mint Linux over the network (SSH, FTP, and shared folders). However, I am now not able to connect. I have verified that the IP address of my Linux machine has not changed, and I have no problems connecting to our NAS (WDMYCLOUD). The new router is a D-Link DIR-868L. PuTTY is giving me a Timed Out error (it was giving me EHOSTUNREACH before). The connecting laptop is running Windows 8.1, but connecting from my phone via SSH did not work either. More information: I can ping the Linux machine from the router, but not from my computer:

    Pinging 192.168.0.111 with 32 bytes of data:
    Reply from 192.168.0.102: Destination host unreachable.
    Reply from 192.168.0.102: Destination host unreachable.
    Reply from 192.168.0.102: Destination host unreachable.
    Reply from 192.168.0.102: Destination host unreachable.

    Ping statistics for 192.168.0.111:
        Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
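
    A hedged reading of that output: "Destination host unreachable" reported by the laptop's own address (192.168.0.102) means ARP for the target never resolved, which is the classic signature of wireless client isolation (sometimes called a "guest zone") being enabled on the new router. Quick checks from the Windows 8.1 box:

        :: Does ARP ever learn the Linux box's MAC? A missing or invalid entry
        :: after a failed ping points at client isolation on the router.
        ping 192.168.0.111
        arp -a | find "192.168.0.111"
        :: The gateway should answer regardless:
        ping 192.168.0.1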

    Read the article

  • Using dropbox / symbolic link combo successfully

    - by wim
    In the past I have kept some files on Dropbox by copying them into my ~/Dropbox folder on Ubuntu. I don't want to move the original files into the Dropbox sync folder or muck around with my directory structure. But I found I was using Dropbox more and more, and wasting a lot of space through this duplication of data. I use a small SSD locally for the OS; any other data is kept on mounted shares from my NAS. I found I could successfully get files up to the cloud by using symbolic links like:

    ln -s /some/mounted/share/dir ~/Dropbox/dir

    and Dropbox would carry on and sync those files remotely whilst only using up the space of the symbolic link locally. This worked well for me for a few weeks, until I turned on my laptop one day and saw a '421 files have been removed from your dropbox' notification. The files were still there in the original mounted share, but the symbolic links I'd made were completely gone for some reason. What did I do wrong? It is possible the share had become unmounted, but I didn't expect that to cause all my files to be deleted from the cloud; could it? How can I 'share' files on my Dropbox in this way without the danger of the originals being modified or deleted remotely?
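
    One commonly suggested alternative (an assumption-laden sketch, not a guarantee): bind-mount the share into the Dropbox folder instead of symlinking, and guard the sync client against the share disappearing, since a vanished symlink target looks to Dropbox exactly like a mass deletion:

        # Bind mount in place of the symlink (same paths as above):
        sudo mount --bind /some/mounted/share/dir ~/Dropbox/dir
        # Safer still: pause syncing whenever the share is not mounted
        # ('dropbox' here is the dropbox.py CLI script, if installed):
        mountpoint -q /some/mounted/share || dropbox stop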

    Read the article

  • Start daemon after specific samba share is mounted

    - by getack
    I asked this question on AskUbuntu, but it's not getting any traction there, so I'll try here as well. I have a homebrew headless NAS running 12.04. In it I have a bunch of disks that are presented as a Samba share thanks to Greyhole. If I want to do anything to the files within this share, I must do it through Greyhole so that everything is updated properly. Thus, the share must be mounted locally and then accessed from there if I want to work on the files from the local machine. I do this mounting automatically thanks to these instructions. I also have Deluge installed, which takes care of all my torrenting needs. Deluge's default download location is on this share, so that all downloads are immediately available to the rest of the network. Obviously, for everything to work the share must be mounted, otherwise Deluge is going to have a problem downloading to it. The problem is that Deluge seems to start before the shares are mounted when the system boots, so downloading/seeding does not continue automatically after boot. I have to log in and force a manual rescan and start on each torrent, otherwise all the torrents just hang on an error. Is there a way I can make Deluge start after the shares have been properly mounted? I looked into Upstart's 'emits' functionality, but I cannot seem to get it to work properly. Any advice?
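
    A minimal sketch of the Upstart approach (the mount point is an assumption, and it assumes deluged is itself wrapped as a system Upstart job; otherwise exec the daemon directly on the last line):

        # /etc/init/deluged-after-share.conf (hypothetical)
        description "start deluged once the Greyhole share is mounted"
        # mountall emits a 'mounted' event per fstab entry on 12.04:
        start on mounted MOUNTPOINT=/mnt/greyhole
        task
        exec start deluged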

    Read the article

  • Synchronization between folders on Mac OS X Lion

    - by Andre Carvalho
    I have an iMac at home and I use a MacBook Pro for work. I also have a Time Capsule at home containing my main folder with my main files; I use it as a NAS besides using it with the Time Machine backup tool. I have several personal files I need to access both at home and at work. My wife, who works at home, sometimes uses the same .XLS and .DOC files I might have used during my day at work, away from home. My question: is there software or a tool that I can use to sync my iMac and MacBook Pro folders? Bearing in mind that:

    - There is a chance that my wife and I have changed the same file during the day, so the files would have to be merged so that no information added by either of us is lost.
    - The software/tool installed on the MacBook Pro would need to mount the Time Capsule volume so it could locate the main folder on it.
    - It has to run automatically when my MacBook is at home (with a schedule option).

    I have tested software like SyncTwoFolders and ChronoSync, but neither fulfilled all my needs. The first couldn't mount the Time Capsule volume and didn't have many schedule options. I really liked ChronoSync, but it doesn't merge files: when it detects a conflict (for instance, my wife changed a .DOC file on the iMac and I changed the same file on the MacBook), it asks you to choose which version to keep instead of simply allowing you to merge them. I don't have much experience with Automator or scripts, but maybe you can give me a hand with that.
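
    For what it's worth, true content merging of binary .DOC/.XLS files is beyond any folder-sync tool; the realistic goal is two-way sync with conflict handling. A hedged sketch with Unison (assumes it is installed, e.g. via Homebrew, and that the Time Capsule volume automounts at the path shown):

        unison /Users/andre/Documents \
               /Volumes/TimeCapsule/Main/Documents \
               -auto -times -prefer newer
        # '-prefer newer' keeps the newer copy on conflict; drop it to be
        # prompted instead, then merge the two versions by hand in Word/Excel.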

    Read the article

  • Should I enable 802.3x hardware flow control?

    - by Stu Thompson
    What is the conventional wisdom regarding 802.3x flow control? I'm setting up a network at a new colo and am wondering if I should be enabling it or not. My oh-cool-a-bright-and-shiny-new-toy self wants to enable it, but this seems like one of those decisions that could blow up in my face later on. My network:

    - An HP ProCurve 2510G-24 switch
    - A pair of Debian 5 HP DL380 G5s with built-in NC373i 2-port NICs, LACP'd as one link, 9000-byte jumbo frames enabled. (Application)
    - A pair of hand-built Ubuntu servers with 4-port Intel Pro/1000 NICs, LACP'd as one link, 9000-byte jumbo frames enabled. (NAS)
    - A few other servers with single 1Gbps ports, but one with 100Mbps.

    Most of this kit supports 802.3x. I've been enabling it as I go along, and am about to test the network. But as my 'go live' day nears, I am worried about the 802.3x decision, as I've never explicitly used it before. Also, I've read some 10-year-old articles out there on the Intertubes that warn against using flow control. Should I be enabling 802.3x hardware flow control?
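
    Checking and toggling pause-frame support on the Linux side is cheap to experiment with (standard ethtool flags; the interface name is an assumption):

        ethtool -a eth0                    # show current rx/tx pause settings
        ethtool -A eth0 rx on tx on        # enable; use 'off' to back out
        ethtool -S eth0 | grep -i pause    # counter names vary by driver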

    Read the article

  • Symantec Protection Suite Enterprise Edition

    - by rihatum
    We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000-4000 workstations (Windows 7, 32- and 64-bit), with a few hundred on Windows XP 32/64-bit. I have read the implementation guide for SEP and the tech notes for Desktop Recovery 2011. Our team has planned to deploy this as follows:

    - 1 x dedicated SQL 2008 R2 for Symantec Endpoint Protection (instead of using the embedded database)
    - 1 x dedicated SQL 2008 R2 for Symantec Desktop Recovery 2011 (instead of using the embedded database)
    - 1 x dedicated W2K8 R2 box for the SEPM (Symantec Endpoint Protection Manager) management application
    - 1 x dedicated W2K8 R2 box for the Symantec Desktop Recovery 2011 management application

    Agent deployment: as per the Symantec documentation for both of the above, an agent can be pushed via the management application (provided no firewalls are blocking the required ports etc.; we have the Windows firewall disabled already). The above is the initial plan we have for 3000-4000 Windows client workstations. Now my questions :-)

    a) If we had these users distributed between two sites, with an AD DC/GC in each site, how would I restrict the SEPM and Desktop management solutions to only check for users in their respective site?
    b) At present all users are in one building, but we are going to move some departments to a new location (with dedicated connectivity). How would we control which SEPM/management server is responsible for which site?
    c) What hardware would you recommend as a server spec for the SQL servers: 16GB RAM, dual Xeon?
    d) What hardware would you recommend as a server spec for the management servers: 16GB RAM each, with dual Xeon and SAS disks?
    e) Also, how would you recommend protecting these 4 servers (2 x SQL and 2 x management)?
    f) How would you recommend storing backups for these desktops? We have a SAN and a NAS in our environment, and one spare DAS (Dell MD3000).

    If you have anything to add or correct, that will be really helpful before diving into the actual implementation phase. I will be most grateful for your suggestions, recommendations and corrections. Many thanks! Rihatum

    Read the article

  • Cooling Server Closet - No A/C Is Possible

    - by JamesCo
    We're moving into a new office in an old building in London (that's England :) and are walling off a 2m x 1.3m area, where the router and telephone equipment currently terminate, to use as a server closet. The closet will contain:

    - 2 x 24-port switches
    - 1 x router
    - 1 x VDSL modem
    - 1 x Dell desktop
    - 1 x 4-bay NAS
    - 1 x HP MicroServer
    - 1 x UPS
    - miscellaneous minor telephony boxes

    There is no central A/C in the office and there never will be. We can install ducting to the outside quite easily; it's only a couple of metres to the windows, which face a courtyard. My question is whether installing an extractor fan with ducting to the window should be sufficient for cooling, or would an intake fan and intake duct (from the window, too) be required? We don't want to leave a gap in the closet door, as that would let noise out into the office. If we don't have to put a portable A/C unit into the closet, that'd be perfect. The office has about 12 people; London is temperate, with an average maximum of 31°C in August, and 25°C more typical. The same equipment runs fine in our current office (same building, also no A/C), but it isn't in an enclosed space. I can see us putting, say, one Dell 2950 tower server into the closet, but no more than that. Sustained power consumption in the closet would currently be about 800W (I'm guessing), possibly 2kW in the future. The closet will have a ceiling and no windows, and will be well insulated. We don't care if the equipment runs hot, so long as it runs and we don't hear it.
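
    A rough sizing sanity check (rule-of-thumb formula, CFM ≈ 3.16 x watts / ΔT in °F; not a substitute for a proper survey):

        # 800 W load, allowing a 10 °C (18 °F) rise over ambient:
        echo "3.16 * 800 / 18" | bc -l     # ~140 CFM
        # 2 kW future load, same rise:
        echo "3.16 * 2000 / 18" | bc -l    # ~351 CFM

    Either figure is within reach of a single decent inline duct fan, but only if the closet also has an intake path; a baffled intake vent lets air in while keeping most of the noise inside.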

    Read the article

  • No signal on monitor after plugging it into a Linux box

    - by yaroot
    I use my old computer as a NAS, so I removed the monitor after I installed Linux on it (disconnected the VGA cable). I use SSH to control the machine and it works fine. Until one day, after a kernel/software upgrade or messing up some configs, I could not connect to it through SSH, so I had to plug the monitor back in; but the monitor says "No input signal". I had to restart the computer WITH the monitor connected, and then the monitor was back. I think the computer/Linux kernel doesn't detect the monitor plug-in event. So how can I start my Linux box without a monitor, and when it goes wrong, still plug my monitor (VGA) back in and use the console? Edit: just one PCI-E video card, with DVI, VGA and TV-out (S-Video). Edit 2: Xorg is not running; I just need the console (CTRL+ALT+F1). The problem is, if the machine booted without a monitor connected, it won't give me a console after I attach the VGA cable while it's running. Clearly the monitor is not auto-detected the way a USB device would be. I'm wondering how to get the monitor auto-detected.
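
    With a kernel-modesetting driver (e.g. nouveau), a connector can be forced on at boot even with nothing attached, which keeps the console alive for a later hot-plug. A hedged sketch (the connector name is an assumption; check /sys/class/drm/ for the real one, e.g. card0-VGA-1):

        # Append to GRUB_CMDLINE_LINUX in /etc/default/grub, then run update-grub:
        video=VGA-1:1024x768e    # trailing 'e' = force-enable the output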

    Read the article

  • Home media storage solution

    - by Dan
    I record lots of personal HD film footage and am looking for a cheap way to store all of it. I take ~120GB of footage each month, so something expandable would be nice... something that might be able to hold 6+ SATA drives. There is a low load requirement, as there is never more than a user or two, but it should be able to keep up with streaming 2 simultaneous HD videos. I don't really want to spend more than $200-$300 on top of the $900 I am thinking of spending for 6 x 2TB SATA drives at $150 apiece, but I am willing to pay extra for a quality solution. Should I get a cheap NAS server? A cheap multi-drive external enclosure? Should I just get some used systems off Craigslist? If it is an independent system, I'll probably just throw Ubuntu on it, since I can maintain that well. It's easy to do software RAID from Ubuntu too, if I choose to go that way. Thanks
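
    Since the question mentions Ubuntu software RAID, a minimal mdadm sketch (device names are assumptions):

        sudo apt-get install mdadm
        sudo mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[b-g]
        sudo mkfs.ext4 /dev/md0
        # RAID 5 over six 2TB disks gives ~10TB usable with one-disk fault
        # tolerance; RAID 6 trades another disk's capacity for two-disk tolerance.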

    Read the article

  • How do I make ESXi 5.0 shut down virtual machines when the physical power button is pushed?

    - by Pawel Sawicki
    I have a home NAS/DLNA server built out of an HP MicroServer with the HP-branded VMware ESXi 5.0.0 build 623860 (free license) installed. Being a home media center, I'd like it to be "manageable" by all my household members. This requires that it can be powered on and off (including all the VMs inside) by anybody with physical access to the server, by simply pressing the power button on the chassis. The "startup" part is easy to obtain: all I had to do was configure the startup/shutdown policy. Once the server powers up, all VMs start as well, and that's exactly what I need. Well... it did work, up until 5.0.0 U1, but that's a different story: http://blogs.vmware.com/vsphere/2012/03/free-esxi-hypervisor-auto-start-breaks-with-50-update-1.html. Unfortunately, pressing the power button doesn't gracefully shut down the guest machines; they are terminated instead. If I run the "shut down" command from the vSphere Client interface, the guests are powered off properly. I'd like to get the same end result when the physical power button is pressed. I've poked around a bit on the ESXi server. There's a /sbin/shutdown.sh script that seemed to do exactly what I need... but trying it does exactly what the power-off button does. /etc/inittab contains an entry for the "shutdown" level, but I suppose it's not hooked to the power button. I can't find any ACPI-related configuration, nor do I know what exactly is executed when the power button is pressed. Does anybody have a clue how I can make the VMs shut down automatically when the physical power button is pressed to turn off the computer?
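
    One place worth poking at (hedged; exact arguments vary by build): the autostart manager also defines the stop action used when the host goes down, and a "power off" stop action there means exactly the hard termination described above.

        vim-cmd hostsvc/autostartmanager/get_defaults
        # shows startDelay/stopDelay and the default stop action; then run
        # update_defaults with no arguments to see its usage before switching
        # the stop action to a guest-OS shutdown:
        vim-cmd hostsvc/autostartmanager/update_defaults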

    Read the article

  • Filesystem compatible with FreeNAS and Windows

    - by Daniel
    Hi all, I'm planning on using FreeNAS (I was considering Openfiler, but FreeNAS seems simpler) for my home NAS box running off ESXi. I have managed to get local SATA drives to mount in ESXi (http://serverfault.com/questions/216902/esxi-add-datastore-without-partitioning). I've had one of the drives fail on me before, and I was able to retrieve most of the data off it using Windows tools (I'm not much of a Linux guy; I know enough to be dangerous!). If I go the FreeNAS route, in the event that something goes bad, what would be the best file system to use, so that I could pop the drive out of the FreeNAS box (VM), put it in another PC running Windows, and try various recovery tools to get the data back? All in all it's not a major problem if I lose the data, just a bit annoying, so I'm not looking for suggestions around backing up etc. I was considering using NTFS, which the drives are already formatted as, but it appears that while FreeNAS does support NTFS, it's a bit buggy and not 100% reliable. Does anyone know if this is still true? I read that on a forum somewhere.

    Read the article

  • Connecting a network printer via a Thecus N2100 - works in Vista, not in Windows 7

    - by Jon Skeet
    I have a Lexmark E250d printer attached to a Thecus N2100 NAS. On Windows Vista I've managed to configure this using an "Internet" printer port with the URL http://thecus:631/printers/usb-printer. I can add a printer in a similar way in Windows 7, but it never manages to print the test page. If I go to "Configure Port" in Vista, it just has "Security Options"; on Windows 7 it asks about Raw mode vs LPR mode etc. On Vista I'm using an E250d-specific driver from Lexmark; on Windows 7 there's a Microsoft E250d driver, or a Universal PCL XL driver from Lexmark. I wouldn't expect this difference to be related to the problem, but I thought I'd mention it anyway. (Lexmark doesn't have a Windows 7 E250d-specific driver as far as I can see.) Any suggestions? I was thinking of upgrading my main laptop from Vista to Windows 7, but I'd really like to get this sorted first.

    EDIT: If I connect to http://thecus:631/printers/usb-printer via Chrome while capturing with Wireshark, I get this response:

    HTTP/1.1 200 OK
    Date: Wed, 06 Jan 2010 16:47:23 GMT
    Connection: Keep-Alive
    Keep-Alive: timeout=60
    Content-Language: C
    Transfer-Encoding: chunked
    Content-Type: text/html;charset=iso-8859-1

    0

    No idea what that's meant to be doing... EDIT: On further consultation, this would appear to be the Internet Printing Protocol, which is layered on HTTP. Printing a test page successfully from Vista POSTs to that URL. Will attempt the same on Windows 7...
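
    A hedged troubleshooting sketch: the Thecus is serving the queue via CUPS (port 631), and the Raw-vs-LPR dialog suggests Windows 7 created a Standard TCP/IP port rather than an Internet (IPP) port. Two things worth trying:

        # 1. Confirm the queue actually answers IPP (from any machine with curl):
        curl -v http://thecus:631/printers/usb-printer
        # 2. In Windows 7, add the printer via "Select a shared printer by name"
        #    and paste the same URL used on Vista:
        #    http://thecus:631/printers/usb-printer
        #    (If the NAS also runs an LPD service, an LPR port with queue name
        #    "usb-printer" is the fallback.)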

    Read the article

  • The Server Fault Wiki of recommended practices [migrated]

    - by Avery Payne
    So I've noticed that there are several recommendations on basic practices on Server Fault, but there doesn't seem to be a cohesive view as to how those recommendations would all fit together. So I thought I would lump these together as a kind of mental exercise to see what the "Server Fault Community IT Department" would look like if it were implemented. This would give a few things: it would make a reasonable wiki (in the true wiki spirit of many contributions), it would provide several links to well-vetted practices, and it would be kind of fun to see what the amalgamation would look like. And who knows, it may even point out some interesting issues between different forms of "best practices", although I would be stunned if there was a conflict hidden in there someplace... Add your favorites from Server Fault as answers, and I'll re-edit this section with the results. Here are a few categories to collect different ideas together:

    - Hardware Configuration(s): server room configuration; server room temperature; firmware updates and scheduling
    - Storage Configuration(s): selecting a NAS box; Linux: dealing with /tmp; Linux: install apps in /var or /opt?
    - Network Configuration(s): checking DNS health and compliance
    - Security Practice(s): password (general) best practices; password sharing methods; Windows Update; updating Windows servers that are hosts for VMs
    - Network Service(s)
    - User Service(s): user naming & deletion
    - Upgrade Process(es)
    - Disaster Recovery: checking backups; documenting an outage for a post-mortem review

    Last edit: 2010-02-17

    Read the article

  • Hardware needed to route between two networks over wireless

    - by AptDweller
    I recently rented an apartment about 100 yards from my brother's house. I have line of sight to his house and can pick up his home AP signal with one of my two laptops if I go out on my balcony (facing his house) or put the laptop by the window. The other laptop will sometimes see the SSID broadcast, but fails to connect, drops, etc. We would like to set up a persistent wireless connection between our homes. We would prefer each network to be logically segmented as an independent network, but he will share his internet connection. I've got a bunch of TV shows saved to a NAS by my TiVo that I'd like to make available to him across the wireless link. My brother strongly prefers not to mess with his WAP at all: his network is running fine, and he is afraid of messing it up. I guess you could say he is "technologically declined". If we can get a reliable 11Mbps connection, we will be satisfied. What hardware do I need to make this work? I was thinking of a router with two wireless interfaces (external antennas), a wired interface, and a directional antenna mounted on my balcony facing his house. Can anyone recommend hardware to make this happen? Cheaper is better; I'll only be living in the area a year or two. I do have an old satellite TV antenna, if that can be used to direct the signal.

    Read the article

  • VPN - Accessing computer outside of network. Only works one way

    - by Dan
    I could use some help here. My goal is to create a VPN between 2 Macs that are in different locations, so that they can share each other's screens and share files. I basically want to do what LogMeIn's Hamachi does, but without the 5-user limitation. I have set up the VPN on the Synology NAS at my house using the PPTP protocol; I could also use OpenVPN. The good news is that I can use a laptop outside of my home network to access any computer on my network at home. The bad news is that I cannot do the reverse: I want to use a computer inside my home network (the same network as the VPN server) to access a computer outside of my network (which is connected via VPN successfully). My internal IP range is 192.168.1.x. The PPTP VPN assigns my laptop outside the network an address of 192.168.5.x, but when I try to access it remotely, with either afp://192.168.5.xxx or vnc://192.168.5.xxx, I can't connect. Is this something I should be able to do, or does the VPN only work one way? I've also tried OpenVPN with the same results. Thanks for any help! -Dan
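
    A hedged sketch of the likely fix: the home LAN has no route back to the VPN client subnet, so replies to 192.168.5.x die at the default gateway. Adding a static route pointing that subnet at the Synology (its LAN address below is an assumption) is the usual cure:

        # On the Mac you are connecting from (macOS route syntax):
        sudo route -n add 192.168.5.0/24 192.168.1.50
        # Better: add the same static route on the router itself, so every
        # LAN host gets it. Also check that the Synology's VPN settings allow
        # LAN-to-client traffic, not just client-to-LAN.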

    Read the article

  • OpenSolaris livecd, NForce NIC driver, and NTFS USB mounting. Oh My!

    - by Jake Wharton
    I'm attempting to install OpenSolaris 2009.06 on my server. Before I do, I would like to test that everything works, and I am running into problems. The server has an Abit AN-M2 motherboard with an NForce chipset. The driver config utility says that I need a third-party driver and links me to http://homepage2.nifty.com/mrym3/taiyodo/eng/. Scrolling to the bottom, I have downloaded both tgzs just in case. Now the fun part: the only way to get these onto the computer is via a USB drive, since I can't access the network. Also, the install CD is in the drive; otherwise I'd just burn them to DVD. And since my USB key is NTFS-formatted, I cannot mount it: the install CD seems to be lacking NTFS drivers, which would require more downloaded packages. What should I do? The server will simply be a dumb NAS, and I know there exist other OpenSolaris-based flavors such as Nexenta, but from what I've read the stock install is likely the best. If this is not the case, and pursuing a different flavor is required or better, I will also accept that as an answer (but please don't jump straight to it).
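
    A hedged workaround: the live environment reads FAT out of the box, so reformatting the key as FAT32 sidesteps the NTFS driver question entirely (device names below are assumptions; double-check them before running mkfs):

        # On any Linux box:
        sudo mkfs.vfat -F 32 /dev/sdb1
        # Then, booted from the OpenSolaris live CD:
        rmformat -l                               # identify the USB device path
        mount -F pcfs /dev/dsk/c5t0d0p0:c /mnt    # ':c' = first FAT partition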

    Read the article

  • Can I trick Carbonite into backing up an external hard drive?

    - by Brian
    I use Carbonite to back up my PC (Windows XP). We were running low on disk space on our home PC (down to 15GB), so I went out and purchased an external hard drive. However, Carbonite will not back it up. Is it possible to set up Carbonite to back up an external hard drive? I just want the external drive to be extra disk space. From their FAQ:

    The current version of Carbonite backs up only the files that reside on permanent hard drives on your computer. It will not back up network drives, external drives, and NAS (network accessed storage) drives. If there are files on a remote drive that you wish to include in your Carbonite backup, you should copy the files to a folder on your local hard drive. If the files are on a shared network drive, you could install Carbonite on the computer on which the network shared drive physically exists, and back the files up directly from that computer. Check back soon for a Carbonite service plan that will allow you to back up your external drives.

    Read the article

  • "Safe" personal router use on apartment-wide network

    - by noisetank
    I recently moved into an apartment with internet included in my rent. This was a boon at first, but now I'm feeling limited. To get devices connected (wired or wireless), I have to whitelist their MAC addresses on mycampusnet.com. This is annoying (considering I'm well over the 10-device limit including my roommate's stuff), but what's really driving me mad is that I don't seem to have any semblance of a "local" network. I've relied heavily on static IPs and port forwarding in the past (accessing a NAS and remote desktop), and as far as I can understand, that functionality is nonexistent without my router set up. Also, as my wired and wireless devices don't always seem to land on the same subnet, I'm unable to use any of my iDevices with my Apple TV (I can, however, mirror to no fewer than four strangers' Apple TVs at any moment, which is a whole other level of discomforting). I've talked to the head of the apartment complex, and she told me that they personally don't have any issue with my using a router, but the provider (CampusConnect) does not currently allow it. Apparently, enough people have put in complaints/requests about the restriction (the apartments are for graduate students and university staff, many of whom need to set up things like VPNs for work) to open some sort of ticket to get the functionality in place, but all the calls I've made to get status updates have been a waste of time. My question is: if I plugged my router into the apartment network, what would happen? I've been told that personal routers would "interfere with the wireless" and that they would shut my port down if I used one, but is that legitimate, or just something made up that sounds real enough to keep the average Joe from pushing it further? I'm guessing there's some way of configuring my router to keep it from disrupting the rest of the network, but it's not something they want to tell me, for obvious reasons. Am I right? And if so, what are the chances that they'd notice the difference in traffic (or whatever) and shut off my port?

    Read the article

  • Differential backup missing moved folders (flawed archive attribute logic)

    - by Max
    Recently I've discovered that my backup system is flawed: there are situations where various files/folders are missed. I back up from a local disk to a network NAS. I use Cobian Backup, and I have set up the software to create one full backup every week and one differential backup every day. Now, the backup software (to my knowledge, any backup software works this way) decides which files go into the differential backup by looking at the file archive attribute: if the attribute is set, the file goes into the backup. When you move a file to a new location on a Windows system, the archive attribute gets set and the file is included in the next backup, and that's fine... but when you move an entire folder, no archive attribute is set, neither on the folder nor on any of the files inside it, so the moved folder isn't included in the differential backup! So, if you have a full backup plus a differential backup and you have moved folders around, it's impossible to reconstruct the original file/folder structure from the full+differential backup, because the backup software didn't include the moved folders in the differential backup. So my differential backups are useless... Why does Windows set the archive attribute when moving a file, but not when moving a folder? How can I deal with this issue? Is there a way to create a differential backup that works as it's supposed to? Doing a full backup every day is not practical, because the changed data is about 0.1% per day (by using differential backups I can keep 4 weeks of file history without using too much disk space).
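
    A hedged workaround sketch until the tool itself handles this: after moving a folder, re-flag everything inside it so the next differential picks it up (Windows cmd; the path is an assumption):

        :: Set the archive attribute on every file under the moved folder:
        attrib +A "D:\Data\MovedFolder\*" /S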

    Read the article

  • New to building computers, worried about temps

    - by dave
    I'm new to building my own computers and I was wondering about maximum temperatures. I understand that room temperature can affect a computer's temperature, but how relevant is it? I understand that if my room is 20°C, none of my computer's parts could be cooler than that. But if my room is 27°C instead of 20°C, would this cause my computer's parts to heat up more, or faster? My new computer, which I built myself for gaming, is:

    - i7 2600K
    - 16GB DDR3-1600 RAM
    - HD 6970 2GB
    - 240GB SSD (I also bought a NAS with 3 x 2TB drives in RAID 5 for my home network)
    - 850W modular PSU

    I also have my old HP computer:

    - i3 2120
    - 8GB RAM
    - HD 6770
    - 1TB HDD

    I also have 3 laptops in my household, but I am not worried about their temps; they heat up my legs, but they are never under stress. Due to size and money reasons I used an old case, and it only has one of its sides left on it. Is this bad for the computer, and will the extra dust cause problems? Should I leave it this way, or take the missus' wrath and buy a case? If so, is there any particular case I should get? I don't care about looks; I just want a card reader and USB slots, and for it to run as cool as or cooler than now (my case has 1 fan). Also, what are the max temps for my new and old computer parts? Is 40°C under load OK for my CPU? What about 70°C for my GPU, or should I worry? What are normal and safe temps for my components? I have looked around, but there seem to be lots of different answers. I know that 100°C is bad, but I want my parts to last as long as possible, and this site always seems to give good replies without arguing or flaming.

    Read the article

  • Need advice for choosing software/hardware for virtualization.

    - by Anatoly
    Currently we have these servers:

    - Windows SBS 2003 Premium on an IBM X266, double Xeon F43, 2GB RAM: DC, Exchange (70 users), MSSQL.
    - Windows 2003 R2 32-bit on an IBM x3400 with double Xeon E5310 and 4GB RAM: Terminal Server (40+ users), an ERP application based on the uniPaaS platform from Magic Software, and Pervasive SQL.
    - Ubuntu 8.04 (simple PC box) with a Squid proxy, a GLPI system, and a phpBB3 forum for internal use.

    Recently the number of concurrent users on the Terminal Server passed 40 in rush hours, and it gets stuck frequently, so we need an upgrade. I am thinking about moving all the physical servers to virtual servers on a cluster of 2 physical servers, to reduce downtime. I think we will grow to 50-60 concurrent terminal users in rush hours. I also plan to virtualize 10-15 Windows XP/7 workstations (office, ERP etc.), and there is a small chance we'll add Asterisk/HylaFAX for 100 users (if that's possible on the same VMs). We also need NAS storage of 2-3TB. What hardware upgrade/purchase do we need to complete this task? Which VM solution is preferable, VMware or Hyper-V? What backup software should we choose: Acronis or something else? Thank you in advance.

    Read the article
