Search Results

Search found 11618 results on 465 pages for 'shared storage'.


  • Help building Maya render node spec

    - by Ak
    Hi there, I'm looking to build 4x Maya render slaves/nodes for a friend of mine when his project gets green-lit. The project involves MentalRay and lots of glass. I'm unsure whether the new i7 9xx or 8xx CPUs with hyper-threading will do any better than a Core 2 Quad of the same (or close enough) speed. Does hyper-threading make a difference to Maya, or is it more about per-core performance? I'm sure he'd prefer I build another render node than pay for a bleeding-edge CPU that only adds fractionally more GHz. The rest of the spec so far:
        - 4GB - 8GB RAM
        - 64-bit OS: probably Windows 7 (I know Linux is free, but I want to build something my friend can support himself as easily as he supports his own workstation)
        - 1TB HDD to hold textures, Maya files and renders, which will be copied to central storage later
        - Motherboard with on-board video, gigabit NIC
        - 500 - 650 watt PSU
        - Desktop case, something like a Cooler Master ATCS 840
    The machines will be sold afterwards if necessary. If anyone has experience with Maya and has done any tests with the new CPUs vs. the older ones, I'd really appreciate your input.


  • TomTom GO & Ubuntu Linux: impersonating a GPRS phone with dund

    - by Broam
    Background: I've called TomTom support, and they don't support Linux. I can get my GO 730 to mount as Mass Storage, and I found a shell script that will allow me to install maps (haven't tried it; will update when I do). Of note: USB 2.0 only; 1.1 ports will not work. However, I still can't update the TomTom or take advantage of any traffic services. The GO will connect to a mobile phone, but I don't have one that supports tethering. I've found a site that claims to know a way to get a Linux machine to impersonate a phone advertising GPRS services, and it apparently works in Fedora as old as FC4. I'm having serious trouble getting this to work on Ubuntu 9.10 Karmic, mainly because I think some of the built-in Bluetooth stuff is getting in the way. Changing the class bits in main.conf (hcid.conf does not exist) doesn't crash anything, and dund starts and listens, but the TomTom never seems to want to connect to my machine. I haven't played around much with sdptool (I think that's the name; not in front of a Linux machine right now), but maybe I have to advertise the DUN profile; I'm not very sure. My question: I have no way to diagnose the problem. What are some diagnostic tools I can use to dig down and figure out what's going on? Update: apparently dund is a legacy tool that's going away. What replaces it?
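    A few BlueZ utilities can show whether a DUN record is actually being advertised and whether the GO ever attempts a connection. A minimal diagnostic sketch, assuming the BlueZ 4.x tools are installed (package bluez on Karmic):

        hciconfig hci0 class          # verify the class bits the adapter currently advertises
        sdptool browse local          # list local SDP records; look for a Dial-Up Networking entry
        sdptool add --channel=1 DUN   # register a DUN record if none is present
        sudo hcidump -X               # watch raw HCI traffic while the TomTom scans for a phone

    If hcidump shows no inbound traffic at all, the device never got past inquiry, which points at the class bits rather than at dund.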


  • How do I install OpenStack on a single Ubuntu 12.04 node?

    - by Sam Edwards
    I'm having trouble installing OpenStack on Ubuntu 12.04, for several reasons:
        - The official Ubuntu website recommends Juju and MAAS. However, this is a single node I am trying to get OpenStack installed on, and MAAS requires "two or more nodes" according to the docs. Additionally, I don't have any experience with MAAS and Juju, and I would rather stick to technologies I am more familiar with so that I can debug problems as they arise.
        - I have tried StackGeek, but this fails because the node only has a single Ethernet port. The node does, however, have the second hard drive required for nova storage.
        - I have tried DevStack but cannot log into the dashboard. The login form appears fine, but as soon as I submit it, my browser loads indefinitely.
        - I have tried installing straight from packages, but I get an Internal Server Error in the dashboard when trying to log in, with no helpful logs anywhere in sight to aid in debugging.
    Each of these attempts was on a fresh Ubuntu 12.04 LTS setup; I find it really strange that no matter what I try, I cannot get OpenStack installed. Is this even a stable/mature project? Why am I encountering so many bugs?
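    For the DevStack route, the install is driven by a localrc file, and a minimal single-node one removes most of the variables. A sketch of what that might look like (every value below is a placeholder, not taken from the question):

        # localrc - single-node DevStack sketch; all passwords are placeholders
        ADMIN_PASSWORD=secret
        MYSQL_PASSWORD=secret
        RABBIT_PASSWORD=secret
        SERVICE_PASSWORD=secret
        SERVICE_TOKEN=token
        HOST_IP=192.168.1.10          # assumption: the address of the node's single NIC

    With a single NIC, pinning HOST_IP explicitly avoids DevStack guessing the wrong interface. For the hanging dashboard, the Apache error log (/var/log/apache2/error.log) is usually where Horizon reports what it is stuck on.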


  • Web Deploy 3.0 Installation Fails

    - by jkarpilo
    I am having difficulty installing Microsoft Web Deploy 3.0 on a Windows Server 2008 R2 box. I have tried installing with both the Web Platform Installer and the MSI package, but installation fails while trying to execute the MSI custom action ExecuteRegisterUIModuleCA. This server is a VM and a member of a farm, but shared config is disabled while I'm installing. Here's the point at which it fails in the MSI log (starting at line 1875):

        MSI (s) (80:FC) [15:29:01:358]: Executing op: ActionStart(Name=IISBeginTransactionCA,,)
        MSI (s) (80:FC) [15:29:01:374]: Executing op: CustomActionSchedule(Action=IISBeginTransactionCA,ActionType=3073,Source=BinaryData,Target=IISBeginTransactionCA,)
        MSI (s) (80:A8) [15:29:01:374]: Invoking remote custom action. DLL: C:\Windows\Installer\MSI6C6A.tmp, Entrypoint: IISBeginTransactionCA
        MSI (s) (80:FC) [15:29:01:436]: Executing op: ActionStart(Name=IISRollbackTransactionCA,,)
        MSI (s) (80:FC) [15:29:01:436]: Executing op: CustomActionSchedule(Action=IISRollbackTransactionCA,ActionType=3329,Source=BinaryData,Target=IISRollbackTransactionCA,)
        MSI (s) (80:FC) [15:29:01:436]: Executing op: ActionStart(Name=IISCommitTransactionCA,,)
        MSI (s) (80:FC) [15:29:01:436]: Executing op: CustomActionSchedule(Action=IISCommitTransactionCA,ActionType=3585,Source=BinaryData,Target=IISCommitTransactionCA,)
        MSI (s) (80:FC) [15:29:01:436]: Executing op: ActionStart(Name=IISExecuteCA,,)
        MSI (s) (80:FC) [15:29:01:452]: Executing op: CustomActionSchedule(Action=IISExecuteCA,ActionType=3073,Source=BinaryData,Target=IISExecuteCA,CustomActionData=1^3^21^WebDeployment_Current^154^Microsoft.Web.Deployment.UI.PackagingModuleProvider, Microsoft.Web.Deployment.UI.Server, Version=9.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35^1^1^0^^1^3^28^DelegationManagement_Current^171^Microsoft.Web.Management.Delegation.DelegationModuleProvider, Microsoft.Web.Management.Delegation.Server, Version=9.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35^1^1^0^^1^7^38^system.webServer/management/delegation^4^Deny^16^MachineToWebRoot^0^^3^yes^1^7^31^system.webServer/wdeploy/backup^4^Deny^20^MachineToApplication^0^^2^no^)
        MSI (s) (80:84) [15:29:01:452]: Invoking remote custom action. DLL: C:\Windows\Installer\MSI6CB9.tmp, Entrypoint: IISExecuteCA
        1: IISCA IISExecuteCA : Begin CA Setup
        1: IISCA IISExecuteCA : CA 'ExecuteRegisterUIModuleCA' completed with return code hr=0x8007000d
        1: IISCA IISExecuteCA : CA 'IISExecuteCA' completed with return code hr=0x8007000d
        1: IISCA IISExecuteCA : End CA Setup
        CustomAction IISExecuteCA returned actual error code 1603 (note this may not be 100% accurate if translation happened inside sandbox)
        Action ended 15:29:05: InstallFinalize. Return value 3.

    I can't seem to find any information on this particular issue; can someone help point me in the right direction?
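    Worth noting: hr=0x8007000d is Win32 error 13 (ERROR_INVALID_DATA), which in IIS custom actions often points at a malformed applicationHost.config or administration.config rather than at the package itself. A full verbose MSI log can also add context the WebPI log omits; a sketch (the package filename is an assumption):

        msiexec /i WebDeploy_amd64_en-US.msi /L*vx C:\webdeploy-install.log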


  • Experience with AMCC 3ware 9650se raid cards? Ours seems dead

    - by antiduh
    We have an 8-port 3ware 9650SE RAID card for our main disk array. We had to bring the server down for a pending power outage, and when we turned the machine back on, the RAID card never started. This card has been in service for a couple of years without problems and was working up until the shutdown. Now, when we turn the machine on, the BIOS option ROM that normally kicks in before the bootloader doesn't show up, none of the drives start, and when the OS tries to access the device, it just times out. The firmware on it has been upgraded in the past, so it's possible we've hit some sort of firmware bug. We're using it in a Silicon Mechanics R272 machine with Gentoo for the OS. The OS eventually boots, but alas, without the card. We've ordered a new one, but I'm worried that if we replace the card it won't recognize the existing array. Has anybody performed a card swap before? Any help would be greatly appreciated. Edit: these are the kernel errors we see:

        3ware 9000 Storage Controller device driver for Linux v2.26.02.012.
        3w-9xxx 0000:09:00.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18
        3w-9xxx 0000:09:00.0: setting latency timer to 64
        3w-9xxx: scsi0: ERROR: (0x06:0x000D): PCI Abort: clearing.
        3w-9xxx: scsi0: ERROR: (0x06:0x001F): Microcontroller not ready during reset sequence.
        3w-9xxx: scsi0: ERROR: (0x06:0x0036): Response queue (large) empty failed during reset sequence.
        3w-9xxx 0000:09:00.0: PCI INT A disabled
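    For what it's worth, 3ware controllers store the array configuration in a disk control block on the drives themselves, so a replacement card of the same family will normally import the existing unit rather than see blank disks. Once the new card is in, 3ware's tw_cli utility can confirm what it detected before the OS touches anything; a sketch (controller ID c0 is an assumption):

        tw_cli show            # list controllers the driver detects
        tw_cli /c0 show        # units, ports and their status on controller 0
        tw_cli /c0/u0 show     # detail for the first unit (the array)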


  • Cachefilesd (cachefiles) everything seems to be set up, still not working

    - by Evgenius
    I'm trying to set up cachefilesd to work with a network folder shared over NFS. Seemingly everything is set up, and cachefilesd starts normally, but caching isn't functioning. Here is the output of the commands, in the order I ran them:

        1. sudo mount
           ...
           cache-1:/mnt/datashared on /mnt/nfsshare type nfs (rw,sync,ac,acregmin=3,acregmax=60,acdirmin=30,acdirmax=300,lookupcache=pos,vers=3,fsc)
           ...
        2. lsmod | grep cachefiles
           cachefiles  40555  1
           fscache     57430  4 nfs,cifs,cachefiles,nfsv4
        3. [edited - deleted]
        4. uname -r
           3.8.0-34-generic
        5. grep CONFIG_NFS_FSCACHE /boot/config-3.8.0-34-generic
           CONFIG_NFS_FSCACHE=y
        6. lsb_release -a
           No LSB modules are available.
           Distributor ID: Ubuntu
           Description:    Ubuntu 13.04
           Release:        13.04
           Codename:       raring
        7. sudo service cachefilesd restart
           * Restarting FilesCache daemon cachefilesd    [ OK ]
        8. dmesg
           [6211206.141781] FS-Cache: Withdrawing cache "mycache"
           [6211210.135236] FS-Cache: Cache "mycache" added (type cachefiles)
           [6211210.135242] CacheFiles: File cache on sdb1 registered
           [6214644.348929] CacheFiles: File cache on sdb1 unregistering
           [6214644.348935] FS-Cache: Withdrawing cache "mycache"
           [6214654.575909] FS-Cache: Cache "mycache" added (type cachefiles)
           [6214654.575915] CacheFiles: File cache on sdb1 registered
        9. ps aux | grep cachefilesd
           root   65399  0.0  0.0  4460  540 ?     Ss  23:14  0:00 /sbin/cachefilesd
           1000   65464  0.0  0.0  8160  916 pts/0 S+  23:16  0:00 grep --color=auto cachefilesd
        10. Finally, the biggest problem:
           cat /proc/fs/nfsfs/volumes
           NV SERVER   PORT DEV  FSID             FSC
           v3 64476645 801  0:24 233e020f0da07a93 no

    tl;dr: I think I configured everything properly, but the fsc mount option plus cachefilesd don't seem to work.
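    That last item is the telling one: the FSC column reads "no", so the NFS superblock was mounted without the cache attached, even though cachefilesd registered fine. Two checks worth running (a sketch; server, export and mount point taken from the question):

        cat /proc/fs/fscache/stats       # kernel-side fscache counters; all zeros means the cache is never used
        sudo umount /mnt/nfsshare        # 'fsc' only binds on a fresh mount of the superblock,
        sudo mount -o fsc,vers=3 cache-1:/mnt/datashared /mnt/nfsshare
        cat /proc/fs/nfsfs/volumes       # FSC should now read 'yes'

    One common gotcha: if any other mount of the same server/export is still active without fsc, the kernel reuses that superblock and the option is silently ignored.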


  • Making the iPhone work with stripped down Windows XP

    - by Gabriel
    Hi, this is my first time posting here, and I have a really specific question. I have an ASUS Eee 901 running Windows XP Home. I had everything working well, but then I decided to improve performance by moving Windows to the smaller but faster internal SSD. I used nLite to strip down Windows, following the instructions here: http://wiki.eeeuser.com/howto:nlitexp I now have a very lightweight installation of XP Home with SP3 and all the current updates. Almost everything is working really well. I have installed iTunes and I CAN sync with no problems. However, each time I plug in my iPhone 3GS (latest firmware), Windows tries and fails to install drivers. The Found New Hardware wizard launches, but nothing I do will make it complete successfully, with the result that the iPhone does not show up in Windows as removable storage or as a camera. When I launch the Camera and Scanner Wizard, it shows only my webcam, not the iPhone. I have verified that I have the following files in place:

        Windows\System32\ptpusb.dll (regsvr32 successful)
        Windows\System32\ptpusd.dll (entry point not found, cannot be registered)
        Windows\System32\usbaaplrc.dll (entry point not found, cannot be registered)
        Windows\System32\drivers\usbaapl.sys
        Windows\System32\drivers\usbscan.sys
        Windows\System32\drivers\usbstor.sys

    Does anyone know if some other file is required, or if there's some other element preventing this from working? Edit (from a posted answer): I did select Cameras & Camcorders in nLite, and my webcam is working fine for video and still capture.


  • Will this increase my Virtual Private Server failure rate?

    - by Spencer Lim
    Will this increase my virtual private server's failure rate if I:
        - install Microsoft Windows Server 2008 Enterprise
        - install SQL Server 2008 Enterprise
        - install IIS 7.5
        - install ASP.NET MVC 2
        - install Microsoft Exchange (should this live inside Windows Server 2008, or standalone?)
        - install Team Foundation Server (should this live inside Windows Server 2008, or standalone?)
    all on one mini VPS with this specification: a shared-plan Dell PowerEdge R710 with 16GB of DDR3 ECC RAM (1GB for this VPS), a Dell PERC 6i RAID controller (that thing alone is about 1.5k-2k) and 15K RPM SAS HDDs of 146GB each (33GB for this VPS; each disk is freaking fast, over 300MB/s read/write possible with proper tuning). The motherboard is a Dell with twin redundant PSUs (870 watt, 85% efficiency), and it runs two Intel Xeon 5502 quad-core CPUs, so about 8 physical cores (fairly shared). Is there any rule of thumb for how many of these services a single VPS can host? My resources are limited. Installing everything this way seems to defeat the purpose of a VPS... what will happen? Any guidelines on this issue (fully configuring Windows Server 2008 R2)? Thanks for any reply.


  • Why are symbolic links not working in MySQL?

    - by Eno
    I'm having an issue; I searched a lot, but I'm not sure if it's related to a previous security patch. On the latest version of MySQL on Debian Lenny (5.0.51a-24), I need to share one table between two databases that live in the same path (/var/lib/mysql/db1 and db2). I created symbolic links in db2 pointing to the table files in db1. When I query the table from db2 I get this:

        ERROR 1030 (HY000): Got error 140 from storage engine

    This is how it looks:

        test-lan:/var/lib/mysql/test3# ls -alh
        drwx------ 2 mysql mysql 4.0K 2010-08-30 13:28 .
        drwxr-xr-x 6 mysql mysql 4.0K 2010-08-30 13:29 ..
        lrwxrwxrwx 1 mysql mysql   28 2010-08-30 13:28 blbl.frm -> /var/lib/mysql/test/blbl.frm
        lrwxrwxrwx 1 mysql mysql   28 2010-08-30 13:28 blbl.MYD -> /var/lib/mysql/test/blbl.MYD
        lrwxrwxrwx 1 mysql mysql   28 2010-08-30 13:28 blbl.MYI -> /var/lib/mysql/test/blbl.MYI
        -rw-rw---- 1 mysql mysql   65 2010-08-30 13:24 db.opt

    I really need those symlinks; is there a way to make them work like before? (The old MySQL server is fine.) Thanks,
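    Two quick checks can narrow this down (a sketch; run on the Lenny box):

        mysql -e "SHOW VARIABLES LIKE 'have_symlink';"   # DISABLED means mysqld runs with --skip-symbolic-links
        perror 140                                        # decode storage-engine error 140

    If have_symlink is still YES, the likely culprit is the MyISAM symlink hardening shipped in Debian's 5.0.51a security updates (for CVE-2008-4097/4098), which makes the server refuse .MYD/.MYI files that resolve outside the table's own database directory; in that case there is no clean way back to the old behaviour short of merging the databases or exposing the table through a view in the second database.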


  • BitLocker with Windows DPAPI Encryption Key Management

    - by bigmac
    We have a need to enforce at-rest encryption on an iSCSI LUN that is accessible from within a Hyper-V virtual machine. We have implemented a working solution using BitLocker on Windows Server 2012, on a Hyper-V virtual server that has iSCSI access to a LUN on our SAN. We were able to do this by using the "floppy disk key storage" hack as defined in THIS POST. However, this method seems hokey to me. In my continued research, I found that the Amazon corporate IT team published a WHITEPAPER that outlined exactly what I was looking for in a more elegant solution, without the floppy disk hack. On page 7 of this whitepaper, they state that they implemented Windows DPAPI encryption key management to securely manage their BitLocker keys. This is exactly what I am looking to do, but they state that they had to write a script to do it, yet they don't provide the script or even any pointers on how to create one. Does anyone have details on how to create a "script in conjunction with a service and a key-store file protected by the server's machine account DPAPI key" (as they state in the whitepaper) to manage and auto-unlock BitLocker volumes? Any advice is appreciated.
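    The building blocks they describe are all scriptable: a key file on disk whose contents are protected with the machine-scope DPAPI key, plus an unlock step at startup. A minimal PowerShell sketch of the unlock side, assuming Server 2012's BitLocker module; the volume letter, key path and the idea of storing a passphrase DPAPI-protected are all assumptions on my part, not details from the whitepaper:

        # read the DPAPI-protected key file and unlock D: (all paths and names hypothetical)
        Add-Type -AssemblyName System.Security
        $blob  = [IO.File]::ReadAllBytes('C:\ProgramData\BLKeys\d.key')
        $bytes = [Security.Cryptography.ProtectedData]::Unprotect(
                     $blob, $null,
                     [Security.Cryptography.DataProtectionScope]::LocalMachine)
        $pass  = ConvertTo-SecureString ([Text.Encoding]::UTF8.GetString($bytes)) -AsPlainText -Force
        Unlock-BitLocker -MountPoint 'D:' -Password $pass

    The matching protect step would call [Security.Cryptography.ProtectedData]::Protect() once at key-creation time, and the unlock script would run from a service or scheduled task at boot, so the LUN comes online without interaction.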


  • How to upgrade Apache 2 from 2.2 to 2.4

    - by Nina
    I was in the process of doing a test upgrade from Apache 2.2 to 2.4.3. I'm using Ubuntu 10.04. I would have upgraded to 12.04 first to see if the upgrade would go more smoothly, but unfortunately I was told that wasn't an option... so I'm stuck on 10.04. Before attempting this, I also upgraded APR from 1.3 to 1.4, since Apache told me it was a requirement beforehand: http://apr.apache.org/download.cgi The process I followed was: first remove all traces of the current Apache:

        sudo apt-get --purge remove apache2
        sudo apt-get remove apache2-common apache2-utils apache2.2-bin apache2-common
        sudo apt-get autoremove
        whereis apache2
        sudo rm -Rf /etc/apache2 /usr/lib/apache2 /usr/include/apache2

    Afterwards, I did the following:

        sudo apt-get install build-essential
        sudo apt-get build-dep apache2

    Then install Apache 2.4 with the following:

        wget http://apache.mirrors.tds.net//httpd/httpd-2.4.3.tar.gz
        tar -xzvf httpd-2.4.3.tar.gz && cd httpd-2.4.3
        sudo ./configure --prefix=/usr/local/apache2 --with-apr=/usr/local/apr --enable-mods-shared=all --enable-deflate --enable-proxy --enable-proxy-balancer --enable-proxy-http --with-mpm=prefork
        sudo make
        sudo make install

    During make install, I got a series of errors that prevented it from completing:

        exports.c:2513: error: redefinition of 'ap_hack_apr_uid_current'
        exports.c:1838: note: previous definition of 'ap_hack_apr_uid_current' was here
        exports.c:2514: error: redefinition of 'ap_hack_apr_uid_name_get'
        exports.c:1839: note: previous definition of 'ap_hack_apr_uid_name_get' was here
        exports.c:2515: error: redefinition of 'ap_hack_apr_uid_get'
        exports.c:1840: note: previous definition of 'ap_hack_apr_uid_get' was here
        exports.c:2516: error: redefinition of 'ap_hack_apr_uid_homepath_get'

    Searching for exports.c only leads me back to the httpd-2.4.3 folder, so I'm not sure what these errors mean... Thanks in advance for any help you have to offer!
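    Duplicate ap_hack_apr_* definitions in the generated exports.c are a classic sign that configure picked up two APR versions at once (here, probably the distro's 1.3 headers plus the hand-built 1.4 in /usr/local). One workaround sketch is to build httpd against bundled APR sources so nothing from the system is used; the exact APR tarball versions below are assumptions:

        cd httpd-2.4.3/srclib
        wget http://archive.apache.org/dist/apr/apr-1.4.6.tar.gz
        wget http://archive.apache.org/dist/apr/apr-util-1.4.1.tar.gz
        tar -xzf apr-1.4.6.tar.gz && mv apr-1.4.6 apr
        tar -xzf apr-util-1.4.1.tar.gz && mv apr-util-1.4.1 apr-util
        cd ..
        make distclean        # clear artifacts from the earlier failed build
        ./configure --prefix=/usr/local/apache2 --with-included-apr \
            --enable-mods-shared=all --enable-deflate --enable-proxy \
            --enable-proxy-balancer --enable-proxy-http --with-mpm=prefork
        make && sudo make install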


  • process and memory issue on linux server

    - by zapping
    Need some assistance analyzing the Apache and PHP processes running on a Linux server. It's an 8-core Intel processor with 4GB of RAM. When the website is being accessed, top displays this:

        PID   USER      PR  NI  VIRT  RES  SHR   S %CPU %MEM   TIME+  COMMAND
        23459 username1  16   0  151m  27m  8388 S 11.3  0.7  0:11.71 php5
        23730 username1  16   0  151m  28m  8388 S 11.3  0.7  0:03.87 php5
        23458 username1  16   0  151m  28m  8388 S  3.0  0.7  0:19.20 php5
        16202 mysql      15   0  459m  38m  4624 S  0.7  1.0 62:33.81 mysqld
        24141 nobody     15   0  311m 5832  2304 S  0.3  0.1  0:00.03 httpd

    Why does the command column say php5 when the website is accessed? Both Apache and PHP were preconfigured, so I'm not sure what was done there. I tried setting up the same site and database on a different server, and on it the process always shows httpd, never php5. The site uses a MySQL database. The problem is that the server load goes up to about 5.x when the website is accessed by about 16 users. free -m gives this output:

                     total       used       free     shared    buffers     cached
        Mem:          3941       3727        213          0        236       2734
        -/+ buffers/cache:        756       3184
        Swap:         4095          0       4095

    A lot of memory seems to be in cache, and free memory is low. Even when the website is not accessed, leaving the machine idle for about two days, free memory showed just 190. When the site is accessed, free memory drops to about 90MB, then climbs to about 150MB; it always hovers around 200MB. Is this somehow related to the server load of 5.x? Will adding more RAM resolve the load issue?
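    Two separate things are visible here. First, the php5 process names suggest PHP is running via CGI/FastCGI (for example under suPHP or mod_fcgid) rather than as mod_php, which is why the other server shows only httpd; CGI-style execution is also heavier per request, which fits the load difference. Second, the low "free" figure is normal: Linux deliberately fills idle RAM with page cache, and the "-/+ buffers/cache" row shows only 756MB actually in use with 3184MB available. A quick way to confirm the cached memory is reclaimable (sketch, run as root):

        free -m                             # note the '-/+ buffers/cache' row
        sync                                # flush dirty pages first
        echo 3 > /proc/sys/vm/drop_caches   # drop page cache, dentries and inodes
        free -m                             # 'free' jumps back up; that memory was never "lost"

    So more RAM is unlikely to fix a load average of 5 by itself; the PHP execution mode is the more promising lead.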


  • Advice on networking career

    - by fmysky
    Hello! I need some insight into a networking career. I have a valid CCNA and a few months of experience as a junior network analyst. My focus is the type of job where I can administer networking and storage hardware, and possibly manage servers and workstations too. I am new to this job area, and specifically new to a city like LA. I am currently unemployed, so my question is: should I continue with my Cisco certifications (routing/wireless), other CompTIA certs, or both, to get a reliable job? I am really interested in CCNA Wireless, but watching Craigslist (LA) and other job sites, it seems people want more Cisco voice people, while some others ask for Network+ certs. I have scarce financial resources, so it's better to make good decisions now. I haven't been applying yet due to some personal problems, but I will soon. Thank you! P.S. I don't know if I can ask questions like this here. Sorry about that.


  • Apache2 refuses to process php files - "Snow Leopard" OSX 10.6.4

    - by w-01
    I have a MacBook Pro (i5). My understanding is that by default it should be able to serve PHP 5. I have uncommented the relevant line in /etc/apache2/httpd.conf:

        LoadModule php5_module libexec/apache2/libphp5.so

    I have restarted Apache with sudo apachectl -k restart, and when I try to access a file with a .php extension, Apache prompts me to download the file. That is, instead of processing the PHP and sending me HTML, it thinks I want to download the file... When I look in the Apache error log I see this:

        [Fri Nov 12 10:16:14 2010] [notice] Apache/2.2.14 (Unix) PHP/5.3.2 mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_wsgi/3.2 Python/2.6.1 configured -- resuming normal operations

    So it looks like PHP 5 is loading properly. I'd like to know either: how do I fix this, or how do I reinstall Apache 2 so that it's like I just installed the OS? Thanks in advance.

    Update: @Zayne - the end of my httpd.conf has Include /private/etc/apache2/other/*.conf, and I have a file /etc/apache2/other/php.conf with the contents:

        <IfModule php5_module>
            AddType application/x-httpd-php .php
            AddType application/x-httpd-php-source .phps
            <IfModule dir_module>
                DirectoryIndex index.html index.php
            </IfModule>
        </IfModule>

    @Zayne: I've already copied php.ini.default to php.ini in the same folder. When I run sudo apachectl configtest I get:

        /usr/sbin/apachectl: line 82: ulimit: open files: cannot modify limit: Invalid argument
        httpd: Could not reliably determine the server's fully qualified domain name, using ::1 for ServerName
        Syntax OK

    Furthermore, I tried apachectl -M, which shows all loaded modules, and most importantly the list includes:

        php5_module (shared)

    Since the module is being loaded, it seems the issue has more to do with making Apache use the PHP engine to process the PHP files... so is something wrong with the IfModule directive?
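    With the module loaded and the AddType lines present, one useful next step is to rule out browser-side caching and see exactly what Content-Type Apache returns; a sketch (the test file location assumes the stock OS X document root):

        echo '<?php phpinfo();' | sudo tee /Library/WebServer/Documents/info.php
        curl -I http://localhost/info.php    # Content-Type: text/html means PHP ran; anything else means the handler never fired

    If curl shows PHP running, the "download" behaviour was just the browser replaying a cached copy of the pre-PHP response; testing in a private window would confirm that.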


  • ZFS - destroying deduplicated zvol or data set stalls the server. How to recover?

    - by ewwhite
    I'm using NexentaStor on a secondary storage server running on an HP ProLiant DL180 G6 with 12 midline (7,200 RPM) SAS drives. The system has an E5620 CPU and 8GB RAM. There is no ZIL or L2ARC device. Last week, I created a 750GB sparse zvol with dedup and compression enabled, to share via iSCSI to a VMware ESX host. I then created a Windows 2008 file server image and copied ~300GB of user data to the VM. Once happy with the system, I moved the virtual machine to an NFS store on the same pool. Once up and running with my VMs on the NFS datastore, I decided to remove the original 750GB zvol. Doing so stalled the system. Access to the Nexenta web interface and NMC halted. I was eventually able to get to a raw shell. Most OS operations were fine, but the system was hanging on the zfs destroy -r vol1/filesystem command. Ugly. I found the following two OpenSolaris Bugzilla entries and now understand that the machine will be bricked for an unknown period of time. It's been 14 hours, so I need a plan to be able to regain access to the server.

        http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6924390
        http://bugs.opensolaris.org/bugdatabase/view_bug.do;jsessionid=593704962bcbe0743d82aa339988?bug_id=6924824

    In the future, I'll probably take the advice given in one of the Bugzilla workarounds: "Workaround: do not use dedupe, and do not attempt to destroy zvols that had dedupe enabled." Update: I had to force the system to power off. Upon reboot, the system stalls at "Importing zfs filesystems". It's been that way for 2 hours now.
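    Background on why this stalls: destroying a deduplicated dataset has to walk the dedup table (DDT), and with 8GB RAM and no L2ARC the DDT almost certainly does not fit in ARC, so the walk becomes random I/O against 7,200 RPM disks. Before enabling dedup on a pool again, the table size can be checked up front; a sketch (pool name from the question; the ~320 bytes/entry figure is the commonly quoted in-core cost):

        zdb -DD vol1    # prints the DDT histogram; total entries x ~320 bytes approximates the RAM the DDT needs

    The import stall after the forced power-off is the same DDT walk resuming. It generally does finish, but it can take many hours, and interrupting it again only defers the remaining work to the next import.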


  • Running mod_php and suPHP at the same time

    - by BHare
    I recently moved from Debian Lenny with PHP 5.2.x, where I was able to use mod_php for any PHP files that were not located in /home/ and suPHP for all the PHP files that were. I did this because I needed a default php.ini (giving my websites in /var/www/ all features of PHP) without having to change the owner of all those .php files from root, while also having a default php.ini for the /home/ PHP files with the dangerous features disabled. This was the setup I had:

        <IfModule mod_suphp.c>
            <Directory /home/>
                AddType application/x-httpd-php .php .php3 .php4 .php5
                suPHP_AddHandler application/x-httpd-php
                suPHP_Engine on
                suPHP_ConfigPath /home/shared/
            </Directory>
        </IfModule>

    This was working perfectly, but recently I upgraded PHP to 5.3.5 from dotdeb (Lenny has no official PHP 5.3). That had weird issues on Lenny, such as not displaying errors correctly and other little tidbits, so I decided to upgrade from Lenny to Squeeze. I uninstalled PHP (and with it went suPHP) and reinstalled from the new source. I now have 5.3.3-7 on Debian Squeeze, but I cannot get mod_php and suPHP to run at the same time anymore. mod_php always works, and there are no errors in the apache2 or suphp logs. If I disable mod_php, then suPHP works. Is there something I am doing wrong?
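    One thing that changed between the releases: the suphp package in Squeeze (0.7.x) uses its own handler type, and the packaged /etc/apache2/mods-available/suphp.conf applies it globally rather than per-directory, so the two engines fight over every .php file. A sketch of scoping it back to /home/ with the newer handler name and explicitly switching mod_php off there (directive names from suphp 0.7; treat the exact packaged file layout as an assumption):

        <IfModule mod_suphp.c>
            <Directory /home/>
                AddType application/x-httpd-suphp .php .php3 .php4 .php5
                suPHP_AddHandler application/x-httpd-suphp
                suPHP_Engine on
                suPHP_ConfigPath /home/shared/
                php_admin_flag engine off     # keep mod_php out of /home
            </Directory>
        </IfModule>

    The global section of the packaged suphp.conf would need to be commented out so the handler only applies inside this Directory block.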


  • ZFS - Impact of L2ARC cache device failure (Nexenta)

    - by ewwhite
    I have an HP ProLiant DL380 G7 server running as a NexentaStor storage unit. The server has 36GB RAM, 2 LSI 9211-8i SAS controllers (no SAS expanders), 2 SAS system drives, 12 SAS data drives, a hot-spare disk, an Intel X25-M L2ARC cache and a DDRdrive PCI ZIL accelerator. This system serves NFS to multiple VMware hosts. I also have about 90-100GB of deduplicated data on the array. I've had two incidents where performance tanked suddenly, leaving the VM guests and the Nexenta SSH/web consoles inaccessible and requiring a full reboot of the array to restore functionality. In both cases, it was the Intel X25-M L2ARC SSD that failed or was "offlined". NexentaStor failed to alert me on the cache failure, though the general ZFS FMA alert was visible on the (unresponsive) console screen. The zpool status output showed:

          pool: vol1
         state: ONLINE
          scan: scrub repaired 0 in 0h57m with 0 errors on Sat May 21 05:57:27 2011
        config:

                NAME                       STATE     READ WRITE CKSUM
                vol1                       ONLINE       0     0     0
                  mirror-0                 ONLINE       0     0     0
                    c8t5000C50031B94409d0  ONLINE       0     0     0
                    c9t5000C50031BBFE25d0  ONLINE       0     0     0
                  mirror-1                 ONLINE       0     0     0
                    c10t5000C50031D158FDd0 ONLINE       0     0     0
                    c11t5000C5002C823045d0 ONLINE       0     0     0
                  mirror-2                 ONLINE       0     0     0
                    c12t5000C50031D91AD1d0 ONLINE       0     0     0
                    c2t5000C50031D911B9d0  ONLINE       0     0     0
                  mirror-3                 ONLINE       0     0     0
                    c13t5000C50031BC293Dd0 ONLINE       0     0     0
                    c14t5000C50031BD208Dd0 ONLINE       0     0     0
                  mirror-4                 ONLINE       0     0     0
                    c15t5000C50031BBF6F5d0 ONLINE       0     0     0
                    c16t5000C50031D8CFADd0 ONLINE       0     0     0
                  mirror-5                 ONLINE       0     0     0
                    c17t5000C50031BC0E01d0 ONLINE       0     0     0
                    c18t5000C5002C7CCE41d0 ONLINE       0     0     0
                logs
                  c19t0d0                  ONLINE       0     0     0
                cache
                  c6t5001517959467B45d0    FAULTED      2   542     0  too many errors
                spares
                  c7t5000C50031CB43D9d0    AVAIL

        errors: No known data errors

    This did not trigger any alerts from within Nexenta. I was under the impression that an L2ARC failure would not impact the system, but in this case it surely was the culprit. I've never seen any recommendation to RAID the L2ARC. Removing the bad SSD entirely from the server got me back up and running, but I'm concerned about the impact of the device failure (and maybe the lack of notification from NexentaStor as well). Edit: what's the current best-choice SSD for L2ARC cache applications these days?
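    For what it's worth, cache devices are not pool-critical and can be detached from a live pool, so a faulted L2ARC SSD does not by itself require a reboot to remove or replace; a sketch using the device name from the status output (the replacement name is hypothetical):

        zpool remove vol1 c6t5001517959467B45d0   # cache vdevs can be removed online
        zpool add vol1 cache c6tNEWSSDd0          # attach the replacement

    That the whole array hung anyway suggests the SSD failed in a way that stalled the I/O path rather than producing a clean ZFS-level fault, which is also why mirroring L2ARC is rarely recommended: ZFS itself tolerates losing the cache, but a misbehaving device can still drag the system down.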


  • Blocking requests from specific IPs using IIS Rewrite module

    - by Thomas Levesque
    I'm trying to block a range of IPs that is sending tons of spam to my blog. I can't use the solution described here because it's shared hosting and I can't change anything in the server configuration; I only have access to a few options in remote IIS. I see that the URL Rewrite module has an option to block requests, so I tried to use it. My rule is as follows in web.config:

        <rule name="BlockSpam" enabled="true" stopProcessing="true">
            <match url=".*" />
            <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                <add input="{REMOTE_ADDR}" pattern="10\.0\.146\.23[0-9]" ignoreCase="false" />
            </conditions>
            <action type="CustomResponse" statusCode="403" />
        </rule>

    Unfortunately, if I put it at the end of the rewrite rules, it doesn't seem to block anything... and if I put it at the start of the list, it blocks everything! It looks like the condition isn't taken into account. In the UI, the stopProcessing option is not visible and is true by default; changing it to false in web.config doesn't seem to have any effect. I'm not sure what to do now... any ideas?
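    "Blocks everything or nothing" is the pattern you would see if REMOTE_ADDR never contains the real client address, which is common on shared hosting where requests arrive through a reverse proxy or load balancer. If the host's proxy forwards the original address in a header, a condition on that header is worth trying (a sketch; only meaningful if the platform actually sets X-Forwarded-For):

        <add input="{HTTP_X_FORWARDED_FOR}" pattern="10\.0\.146\.23[0-9]" ignoreCase="false" />

    Dumping the server variables once from a test page would confirm which variable carries the real client IP on this particular host.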


  • DIY NAS - links for Instructions

    - by Kaushik Gopal
    Good folk of SU, I'm planning to build a NAS (network-attached storage) box, and I'm planning to do it cheap (read: old PC config plus open-source software). I'm looking for good DIY links. Before you shoot this down as a repost: I'm only looking for links with detailed instructions for setting up a NAS. I did a fair bit of searching and found the links below, so please suggest others (while these links are great, they dwell more on the hardware side; I'm looking for more instruction on the software side). For the sake of the interwebs:

    Ubuntu:
        http://snarfquest.com/wiki/index.php/Setting_up_a_Home_NAS
        http://www.smallnetbuilder.com/content/view/27962/77/
        http://jonpeck.blogspot.com/2006/11/how-to-configure-80-fileserver-in-45.html
    FreeNAS:
        http://www.smallbusinesscomputing.com/webmaster/article.php/3719706
        http://www.codeproject.com/KB/system/homemade_nas.aspx

    There was one at rubbervir.us that everyone points to, but apparently the site has gone down. A couple of other queries:
        - Is printer/scanner sharing a possibility with NAS devices?
        - Many talk of torrent support with NAS devices; a little more light on this? Does this mean automatic downloading of torrents from a feed onto the NAS, or just support for storing torrent-downloaded files on the NAS (I don't see the difference between the latter and a normal file transfer)?


  • Ubuntu 12.04 Server - eth0 1Gbps NIC eth1 10Gbps NIC - all traffic using eth0?

    - by James
    Ubuntu Server 12.04.1 x64. The primary role is an NFS file server for Mac OS X clients. Hardware:

        eth0: 00:19.0 Ethernet controller: Intel Corporation 82579V Gigabit Network Connection (rev 04)
        eth1: 07:00.0 Ethernet controller: MYRICOM Inc. Myri-10G Dual-Protocol NIC

    Config:

        ifconfig
        eth0      Link encap:Ethernet  HWaddr <MACADDRESS>
                  inet addr:192.168.0.150  Bcast:192.168.0.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:460042020 errors:0 dropped:148 overruns:0 frame:0
                  TX packets:231906707 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:581431978417 (581.4 GB)  TX bytes:259057368617 (259.0 GB)
                  Interrupt:20 Memory:f7d00000-f7d20000

        eth1      Link encap:Ethernet  HWaddr <MACADDRESS>
                  inet addr:192.168.0.100  Bcast:192.168.0.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:6832208 errors:0 dropped:2 overruns:0 frame:0
                  TX packets:376 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:513826442 (513.8 MB)  TX bytes:33688 (33.6 KB)
                  Interrupt:59

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:507 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:507 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:45057 (45.0 KB)  TX bytes:45057 (45.0 KB)

        nano /etc/network/interfaces
        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        iface eth0 inet static
            address 192.168.0.150
            netmask 255.255.255.0
            network 192.168.0.0
            broadcast 192.168.0.255
            gateway 192.168.0.1
            dns-nameservers 192.168.0.1 8.8.8.8

        # Second network interface
        auto eth1
        iface eth1 inet static
            address 192.168.0.100
            netmask 255.255.255.0
            network 192.168.0.0
            broadcast 192.168.0.255
            gateway 192.168.0.1
            dns-nameservers 192.168.0.1 8.8.8.8

    Currently I am using nfs://192.168.0.100/Volumes/Storage on the OS X clients to mount the NFS share. My problem: why would all the data be going over the slower connection? (I have checked using various monitoring tools: bmon, iftop, glances, etc.) Also, after configuring /etc/network/interfaces as above, I always get an error message at boot, something about waiting for network configuration. Are these problems connected?
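    Both symptoms follow from putting two interfaces on the same subnet: the kernel keeps one route for 192.168.0.0/24 and sends everything out of whichever interface that route names (eth0 here), and the duplicate "gateway" lines are a common cause of the "waiting for network configuration" delay at boot. The clean fix is separate subnets (and a gateway on only one stanza); if the flat layout must stay, source-based policy routing can pin the storage address to eth1. A sketch:

        ip route show     # expect two 192.168.0.0/24 routes; the first listed wins

        # pin traffic sourced from 192.168.0.100 to eth1 (iproute2 policy routing)
        sysctl -w net.ipv4.conf.all.arp_filter=1
        ip rule add from 192.168.0.100 table 10
        ip route add 192.168.0.0/24 dev eth1 src 192.168.0.100 table 10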


  • SSH garbling characters in vim/nano on remote server

    - by geerlingguy
    ...and it's driving me insane. This has been happening over the past couple of months: I log into a few different CentOS servers (one Linode, another VPS, and a shared host to which I have shell access), running 5.5, 5.7, and 6, from my Mac running OS X Lion, using Terminal. Basically:

        $ ssh [email protected]
        [remote-host]$ nano somefile.txt

    Once I start editing the file, if I use the arrow keys to move the cursor around, or start deleting and then typing again, the cursor jumps around a bit, and if I save the file and reopen it, it's obvious that the cursor was in fact jumping all over the place on a line for no apparent reason. I end up getting things like "This is a neof text." when I had typed (into the cursor-crazy editor) "This is a line of text." It's a big problem when editing configuration files, because I often have to edit one line, save and close, then reopen just to make sure that line is right... then edit another line... and it's getting quite annoying. I found "Linode Lish Shell Vim and Nano rendering troubles: lines not appearing / cursor positions wrong", but I don't know if that's related, since it refers specifically to lish.
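    One well-known cause that fits this exact setup: Lion's Terminal started setting TERM=xterm-256color, and the terminfo database on CentOS 5 predates that entry, so ncurses applications like nano and vim mis-position the cursor. A sketch of how to confirm and work around it:

        echo $TERM                           # run on the remote host; Lion sends xterm-256color
        infocmp xterm-256color >/dev/null    # an error here means the remote terminfo lacks the entry
        export TERM=xterm                    # fallback for the session; persist it in ~/.bashrc if it helps

    Alternatively, Terminal's preferences on the Mac side can declare the terminal as plain xterm, which avoids touching every server.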


  • Linux can't find file that exists

    - by Joe
    I'm trying to get Google's Dart language up and running, but it errors when running dart2js. I'm running Arch Linux, and I installed dart-sdk from the AUR. Some relevant terminal output lies below.

        % dart2js main.dart
        /usr/local/bin/dart2js: line 7: /usr/local/bin/dart: No such file or directory

        % cat /usr/local/bin/dart2js
        #!/bin/sh
        # Copyright (c) 2012, the Dart project authors. Please see the AUTHORS file
        # for details. All rights reserved. Use of this source code is governed by a
        # BSD-style license that can be found in the LICENSE file.

        BIN_DIR=`dirname $0`
        exec $BIN_DIR/dart --allow_string_plus=false $BIN_DIR/../lib/dart2js/lib/compiler/implementation/dart2js.dart "$@"

        % file /usr/local/bin/dart
        /usr/local/bin/dart: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.15, BuildID[sha1]=0x27fe166ca015c1adfeaf3a6f9c018e7d7af46d9f, stripped

        % ls -alh /usr/local/bin
        total 4.9M
        drwxr-xr-x  2 root root 4.0K Jun 10 22:51 .
        drwxr-xr-x 12 root root 4.0K Jun 10 22:51 ..
        -rwxr-xr-x  1 root root 422K May 10 22:41 cargo
        -rwxr-xr-x  1 root root 2.7M Jun 10 22:50 dart
        -rwxr-xr-x  1 root root  360 Jun  6 16:20 dart2js
        -rwxr-xr-x  1 root root  176 Jun  6 16:20 pub
        -rwxr-xr-x  1 root root 166K May 10 22:41 rustc
        -rwxr-xr-x  1 root root 1.6M May 10 22:41 rustdoc

        % uname -rm
        3.3.7-1-ARCH x86_64

    Could it be because I'm running a 64-bit OS and the dart binary is 32-bit?
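    Yes, that's exactly it: "No such file or directory" for a file that plainly exists is the classic symptom of a 32-bit ELF binary whose interpreter (/lib/ld-linux.so.2) is missing on a 64-bit system; the kernel is reporting the missing loader, not the missing binary. A sketch of how to confirm and fix on Arch (assumes the [multilib] repository is enabled in /etc/pacman.conf):

        readelf -l /usr/local/bin/dart | grep interpreter   # shows the loader the binary requests
        ls /lib/ld-linux.so.2                               # missing => no 32-bit runtime installed
        sudo pacman -S lib32-glibc                          # provides the 32-bit loader and libc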


  • How do I install the pdo_mysql driver on Red Hat Enterprise Linux 6.1?

    - by Will Martin
    I have a RHEL box running PHP 5.3.3, which was installed using the binary packages provided by yum. I have installed the php-pdo package:

        # yum info php-pdo
        Loaded plugins: product-id, rhnplugin, subscription-manager
        Updating Red Hat repositories.
        Installed Packages
        Name        : php-pdo
        Arch        : x86_64
        Version     : 5.3.3
        Release     : 3.el6_1.3
        Size        : 168 k
        Repo        : installed
        From repo   : rhel-x86_64-server-6
        Summary     : A database access abstraction module for PHP applications
        URL         : http://www.php.net/
        License     : PHP
        Description : The php-pdo package contains a dynamic shared object that will add
                    : a database access abstraction layer to PHP. This module provides
                    : a common interface for accessing MySQL, PostgreSQL or other
                    : databases.

    It appears to be working correctly for SQLite databases, but not MySQL. There's no file including pdo_mysql.so in /etc/php.d, and there is no copy of pdo_mysql.so in /usr/lib64/php/modules. I'm pretty sure I just need the driver file and a line in the PHP configuration. A yum search for pdo mysql didn't turn up any useful packages, and Google has failed me. If I were on Ubuntu or Debian, I'd apt-get install php5-mysql and be done with it. So... where in Red Hat land do I get a copy of pdo_mysql.so, and how do I install it properly?
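    On RHEL 6 the PDO MySQL driver isn't packaged separately; it ships in php-mysql alongside the classic mysql and mysqli extensions, which is why searching for "pdo" finds nothing. A sketch:

        sudo yum install php-mysql                   # installs pdo_mysql.so and its /etc/php.d config
        ls /usr/lib64/php/modules | grep pdo_mysql   # confirm the driver file landed
        sudo service httpd restart                   # reload PHP under Apache
        php -m | grep -i pdo_mysql                   # confirm from the CLI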


  • Setup of HP ProCurve 2810-24G for iSCSI?

    - by 3molo
    Hi, I have a pair of ProCurve 2810-24G switches that I will use with a Dell EqualLogic SAN and VMware ESXi. Since ESXi does MPIO, I am a little uncertain about the configuration of the links between the switches. Is a trunk the right way to go between them? I know that the ports for the SAN and the ESXi hosts should be untagged, so does that mean I want the VLAN tagged on the trunk ports? This is more or less the configuration:

        trunk 1-4 Trk1 Trunk
        snmp-server community "public" Unrestricted
        vlan 1
           name "DEFAULT_VLAN"
           untagged 24,Trk1
           ip address 10.180.3.1 255.255.255.0
           no untagged 5-23
           exit
        vlan 801
           name "Storage"
           untagged 5-23
           tagged Trk1
           jumbo
           exit
        no fault-finder broadcast-storm
        stack commander "sanstack"
        spanning-tree
        spanning-tree Trk1 priority 4
        spanning-tree force-version RSTP-operation

    The EqualLogic PS4000 SAN has two controllers, with two network interfaces each. Dell recommends connecting each controller to each of the switches. From the VMware documentation, it seems that creating one vmkernel port per pNIC is recommended. With MPIO, this could allow more than 1 Gbps of throughput.


  • Digital Asset Management, iPhoto / Aperture server... alternative

    - by Sisyphus
    Afternoon. Clients: 10, all Apple machines running either Leopard or Snow Leopard. Server: Snow Leopard Server (and I have an old Dell PowerEdge 650 at home running Gentoo 2.6, if anybody has a Linux solution). The situation: I work in a small design company with 8 people, and at present we are looking to consolidate all our image files in one location. Right now we each use our preferred single-user DAM solution, be it Adobe Bridge or iPhoto/Aperture (some don't bother at all). The file types commonly used are .psd, .pdf, .eps, .tiff, .jpg and RAW image files. Ideally, what is needed:
        - Centralised on one server, but searchable via Spotlight (not essential, but would be nice)
        - Searchable metadata, such as date, location and title
        - Open-source, or as low-cost as possible
        - Allows simultaneous users to import files
    So far I have looked at a few open-source DAM systems, such as Razuna, Gallery (not strictly DAM), ResourceSpace and Notre-DAM; while these are brilliant and open-source, they don't integrate as smoothly with the desktop as iPhoto and Aperture do. For iPhoto and Aperture, I have tried creating a shared library on the server (a tad laggy), and also putting a library on a drive with no permissions and letting each client read from it; however, when they want to put images into the library, it only supports one user at a time writing to it... Any ideas what could fulfil our needs? Or is it time to bite the bullet for Final Cut Server? Thanks in advance.

