Search Results

Search found 3722 results on 149 pages for 'choice'.


  • GlusterFS on VMWare ESXi 5

    - by Dharmavir
    I want to build a network file system on top of my VMware ESXi-based virtual nodes, which run Ubuntu 12.04 LTS. I am evaluating options and found that GlusterFS (http://www.gluster.org/) could turn out to be a good choice. Purpose: I have about two dozen VM nodes, with different configurations, on 2 physical nodes, each with the following configuration: 16-core Intel Xeon, 1 TB HDD, 48 GB RAM. So each physical server has about 1 TB of disk (and I can add more if I want), meaning I have 2 TB of disk space in total, distributed among the roughly two dozen VM nodes that live on them. Some of those nodes, being application and management servers, have plenty of free disk space, which I want to pool into one large store -- something I cannot get by designing storage on any single VM node. With the storage distributed across dozens of VM nodes and 2 or more physical nodes, I also get some sort of backup.

    I do not mind if data gets stored redundantly, but as far as I know an individual VM node may not be able to hold all of the data anyway: a 100 GB data set, for example, would exceed a 70 GB VM disk that also has to hold system and program files. I need some suggestions: will GlusterFS be the solution I am looking for, or should I go with something like Hadoop? I am not too sure. But yes, I would like to utilize the free space on each VM node, and if the data gets stored redundantly along the way, I am okay with that, because it gives me data security.
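
    For reference, a minimal sketch of what a replicated Gluster volume across two of those nodes might look like (the hostnames vm1/vm2 and the brick path are placeholders, not taken from the setup above):

      sudo apt-get install glusterfs-server
      # From vm1: trust the peer, then create a replica-2 volume, so every
      # file lands on both bricks (redundancy at the cost of half the space).
      sudo gluster peer probe vm2
      sudo gluster volume create shared replica 2 vm1:/export/brick1 vm2:/export/brick1
      sudo gluster volume start shared
      # Any node can then mount the aggregate namespace:
      sudo mount -t glusterfs vm1:/shared /mnt/shared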


  • Why have trackballs almost disappeared? [closed]

    - by Gary M. Mugford
    One of the movement sensors in my Microsoft Trackball Explorer has failed, and right now I am using a mouse. Ugggh! I'll go steal one of the various Logitech trackballs spread around the house, but they all have issues. The Trackman has a horrible placement for the scroll wheel. Others have marbles for the thumb rather than a big ball for the fingers, and at least one trackball around here is getting by without a scroll wheel at all! (The one at the dinner table, for when I dine alone.) My question is: why have trackballs fallen into disfavour? It seems to me that trackballs are great for crowded desktops (you know, the kind with keyboards, notes, pens and coffee cups) and for laptops with those hated, overly-sensitive touchpads. But right now it seems to be a choice between that Logitech Trackman and some Kensington models that lack scroll wheels. All I want is a nice big ball to manipulate with the fingers, and two buttons on the thumb side with the scroll wheel between them. Placement of other buttons is completely optional. Is that asking too much?


  • NATing IPv4 while routing IPv6

    - by Hugo
    I have the following setup: client(s) <---> (eth0) router (eth1) <---> wan. I have a static IPv4 address and a /48 IPv6 block. I need to connect all the clients to (wan), with each client getting its own public IPv6 address, while NATing those same clients over IPv4. Everything IPv4-related, including the NAT, is working fine. IPv6 communication between the clients and (eth0) works fine, as does IPv6 between (eth1) and (wan). To provide IPv6 to all my clients, I thought of two choices:

    1) Have the router act as a gateway, with a different IP on each interface. This sounds like I would need to tell my ISP to route the entire block through that single IP, so it's not really an option.

    2) Transparently pass IPv6 packets between eth0 and eth1, so all clients can communicate with the upstream gateway directly (I would actually use a plain switch here if it weren't for the need to keep the IPv4 NAT).

    Since I've opted for the second choice, I'm in doubt: how can I pass all IPv6 traffic from eth0 to eth1 transparently? What I need is a layer-3 bridge, but Linux's bridge-utils creates a layer-2 bridge (which would bridge IPv4 as well, and I can't have that). This is a DD-WRT device, but it's pretty much embedded Linux, so most suggestions that would work on Linux are welcome. Thanks.
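
    One hedged way to get the layer-3-bridge effect for IPv6 only is routed forwarding plus NDP proxying, so the upstream gateway sees the router answer neighbor solicitations for each client address (the 2001:db8:1:: prefix below is a documentation placeholder):

      sysctl -w net.ipv6.conf.all.forwarding=1
      sysctl -w net.ipv6.conf.eth1.proxy_ndp=1
      # Proxy NDP is per-address: one proxy entry and one /128 route per client.
      ip -6 neigh add proxy 2001:db8:1::10 dev eth1
      ip -6 route add 2001:db8:1::10/128 dev eth0

    (The ndppd daemon can automate the per-address proxy entries for a whole prefix.)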


  • What determines what resolutions a laptop is willing to output over VGA?

    - by Joshua McKinnon
    I'm responsible for several conference rooms, have set up 1080p projectors, and provide both HDMI and VGA connectivity -- HDMI (with adapters) for DisplayPort and Mini-DisplayPort laptops, and VGA as a universal fallback. Contrary to what I expected, people have much more trouble with HDMI than with VGA, so VGA gets used a lot more than you'd think (even though most workstation laptops made in the last 3-4 years have DisplayPort or Mini-DisplayPort...). Also to my surprise, VGA carries 1080p over a 50 ft cable run with very minimal degradation on certain laptops; other laptops just don't offer 1080p as a resolution choice over VGA and top out at 1600x1200 or something else. Specific example: a ThinkPad W530 will do 1080p over VGA; a W520 won't (both do 1080p over DisplayPort/Mini-DP). What determines which resolutions a laptop is willing to output over VGA? I'm thinking this comes down to either a video driver that only claims to support certain output resolutions, or the limitations of the RAMDAC (which wouldn't be in play, at least DAC-wise, on a digital output, but WOULD be on VGA, an analog output). The basic reason for the question: I noticed that a ThinkPad W520 with a built-in 1080p display will output 1080p fine over DisplayPort to a 1080p projector, but caps out at 1600x1200 (practically the same pixel count, just a little shy) on VGA. That wouldn't be surprising at all, except that SOME laptops have no issue outputting 1080p over VGA, even ones with lower native resolutions. Why do I care? Well, if there's some way I could enable it: for the situations where my users end up on VGA anyway, display mirroring works best when they can output their laptop's native resolution, which, you guessed it, is very often 1080p on 15" models. DISCLAIMER: this is primarily a curiosity. I'm not claiming 1080p over VGA is ideal by any means, but hey, if it works. I've seen HDMI start artifacting over the same length and gauge of cabling (up to 50' runs in certain rooms). If you think this is better suited to SuperUser, please move it, but this is framed from an IT standpoint of something that affects a real pool of users in a multiple-conference-room, 50+ deployed-laptop scenario.
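
    On a Linux laptop, one way to test whether the VGA limit is the driver's mode list rather than the analog side is to force-add a 1080p mode by hand (a sketch; the output name VGA1 is an assumption -- check the real name with xrandr -q):

      cvt 1920 1080 60    # prints a CVT modeline for 1920x1080 @ 60 Hz
      xrandr --newmode "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
      xrandr --addmode VGA1 "1920x1080_60.00"
      xrandr --output VGA1 --mode "1920x1080_60.00"

    If the forced mode displays cleanly, the cap came from the driver/EDID negotiation; if not, the DAC or cabling really is the limit.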


  • eee box "drobpox" server 24/7

    - by microspino
    I'd like to create a mini dropbox and print server for a small SOHO network of 5 users (all of them on Windows XP desktops). The device needs to run 24/7, or at least 12/7 (I could accept workday hours only, but the other two options would be better.)

    Dropbox mini server: each computer on my LAN would have a 90 GB dropbox syncing with the mini server, and the server's copy syncing to the web.

    Print server: I have a Samsung A4 small laser printer, an HP 500 DesignJet plotter, a Samsung multifunction machine (fax/print/scan/copy), a modern HP color A3 DeskJet printer and an HP A4 color LaserJet printer. All of them need to be connected to this mini server.

    Fax/scan server: since I have the above-mentioned fax/print/scan/copy machine, I would like people to be able to use it from/to their computers through the mini server.

    I'm considering a recent EeeBox machine, because I've heard good things about Atom CPUs and because it seems a recent BIOS version can switch the box off and on autonomously. I'd like some advice from you. Best of all would be:
    - if you have something similar running for a long time;
    - if you disagree with this hardware choice and would suggest some other device;
    - if you see any issues with my printing setup;
    - anything else ;)

    My budget runs from zero (using the right software to build something on top of an old PC) to 500€ max.


  • LSI 9260-8i w/ 6 256gb SSDs - RAID 5, 6, 10, or bad idea overall?

    - by Michael Pearson
    We're provisioning a new production server for our reasonably busy website. Our host of choice has a 6-drive configuration available with an LSI 9260-8i card. My initial thought was to fill all six bays with SSDs (Intel 520, 256 GB) and set them up in RAID. Good, bad, or terrible idea? Can the card handle it? Should we be using RAID 5, 6 or 10? This would be the first time the provider has filled all six slots of this rackmount with SSDs, so they're a bit hesitant, and I'm wondering whether somebody else has run this card with a similar configuration in production. We do about 43 GB of writes per day and currently use about 300 GB of storage. The server acts as web server, database, and image store for approximately 1 million files. The plan is to under-provision the SSDs by approximately 10% to 20% to increase their overall lifespan and performance. The fallback option is 2x 480 GB SSDs in RAID 1 plus 2x 1 TB HDDs in RAID 1. The motivation for all-SSD is that the rental cost difference between 2 SSDs and 6 SSDs is minimal compared to the overall cost of the server. We do not have any special high-IOPS requirements; however, if the configuration is known to work, I don't see a good reason not to use it and stop worrying about separate 'fast and small' and 'slow and large' disks.
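
    For the under-provisioning step, a hedged sketch -- done per drive before it goes behind the RAID controller, since hdparm won't see drives once they're behind the 9260 (/dev/sda and the sector count are illustrative for a ~256 GB drive):

      hdparm -N /dev/sda             # show current/native max sectors
      hdparm -N p440000000 /dev/sda  # expose ~440M of ~500M LBAs (~12% held back); 'p' persists
      # Simply leaving 10-20% unpartitioned on a freshly secure-erased drive
      # achieves much the same effect and is easier to undo.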


  • Fetch videos from Sony Handycam to Linux

    - by bstpierre
    I've got a Sony Handycam DCR-DVD101. When I connect the USB cable to my laptop (Ubuntu 10), it doesn't mount any storage device. If I run usb-devices, I see:

      T: Bus=02 Lev=02 Prnt=02 Port=00 Cnt=01 Dev#= 6 Spd=480 MxCh= 0
      D: Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
      P: Vendor=054c ProdID=00c1 Rev=01.00
      S: Manufacturer=SONY
      S: Product=Storage Device
      C: #Ifs= 1 Cfg#= 1 Atr=c0 MxPwr=2mA
      I: If#= 0 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=05 Prot=50 Driver=usb-storage

    The driver says usb-storage, but I'm not sure how to get the device mounted. Is there a way to make this work? Update: checking dmesg, I see:

      [259072.576559] usb 2-1.1: new high speed USB device using ehci_hcd and address 6
      [259072.687200] usb 2-1.1: configuration #1 chosen from 1 choice
      [259072.836188] Initializing USB Mass Storage driver...
      [259072.836476] scsi5 : SCSI emulation for USB Mass Storage devices
      [259072.836632] usb-storage: device found at 6
      [259072.836636] usb-storage: waiting for device to settle before scanning
      [259072.836660] usbcore: registered new interface driver usb-storage
      [259072.836666] USB Mass Storage support registered.
      [259077.830410] usb-storage: device scan complete
      [259077.832343] scsi 5:0:0:0: CD-ROM SONY DDX-A1010 R1.0 PQ: 0 ANSI: 0
      [259077.888167] sr1: scsi3-mmc drive: 0x/0x pop-up
      [259077.888446] sr 5:0:0:0: Attached scsi CD-ROM sr1
      [259077.888593] sr 5:0:0:0: Attached scsi generic sg2 type 5
      [259080.002079] sr 5:0:0:0: [sr1] Unhandled sense code
      [259080.002085] sr 5:0:0:0: [sr1] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
      [259080.002091] sr 5:0:0:0: [sr1] Sense Key : Blank Check [current]
      [259080.002097] sr 5:0:0:0: [sr1] Add. Sense: No additional sense information
      [259080.002104] sr 5:0:0:0: [sr1] CDB: Read(10): 28 00 00 00 00 00 00 00 02 00 00 00
      [259080.002117] end_request: I/O error, dev sr1, sector 0
      [259080.002123] Buffer I/O error on device sr1, logical block 0
      [259080.002128] Buffer I/O error on device sr1, logical block 1

    Those I/O errors don't look good; is there any hope?
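
    "Blank Check" from a camcorder that enumerates as a CD-ROM usually means the mini-DVD inside is blank or not yet finalized. Two quick checks (a sketch, assuming the device really is /dev/sr1 as in the dmesg output above):

      sudo mount -t udf -o ro /dev/sr1 /mnt                    # camcorder DVDs are normally UDF
      sudo dd if=/dev/sr1 of=/tmp/disc.iso bs=2048 count=1000  # raw readability test
      # If both fail, finalize the disc in the camera's own menu and retry;
      # a finalized disc should also mount in any ordinary DVD drive.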


  • Updating Dell PowerEdge firmware on Ubuntu?

    - by Shtééf
    The company I work for recently got hold of a batch of second-hand PowerEdge SC1425 machines, and we'd like to put them to good use. Our operating system of choice is Ubuntu Server 10.04 64-bit, which installs just peachy on this type of machine. Now I'd like to install the firmware updates from Dell, which are apparently marked as recommended; this includes updates for the BIOS, the BMC, and possibly some other hardware. I find it incredibly difficult to locate the files on the Dell website and to install any of them on an Ubuntu system. I downloaded the file OM_6.2.0_SUU_A01.iso. I believe I've read that the SUU DVD should be able to update any recent PowerEdge. Is this correct? Is this the latest version? Besides the version number, does A01 have any meaning? Is this image bootable? (At the moment, I've just nosed around in it with a loop-device mount.) Running /bin/bash ./suu from the DVD, I get:

      # /bin/bash ./suu
      ./suu: line 262: ./java/linux/i386/bin/java: No such file or directory

    The file exists and is executable, though. But I cannot execute it directly from the shell either.
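
    A hedged guess worth checking: on 64-bit Ubuntu, "No such file or directory" for a binary that plainly exists is the classic symptom of a 32-bit executable missing its 32-bit loader and libraries:

      file ./java/linux/i386/bin/java   # expect "ELF 32-bit LSB executable"
      sudo apt-get install ia32-libs    # 32-bit compatibility libraries on Ubuntu 10.04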


  • Chipset fan on the fritz - compressed air hasn't fixed anything - is there anything I can do?

    - by Anthony
    Yesterday, my computer started to make an annoying whining noise. Knowing that this is likely a fan issue, I opened the case and proceeded to determine which fan was causing it. I got some compressed air and tried cleaning out the dust around the fan (and the rest of the computer while I was at it), but this hasn't fixed the issue. Now, if it were just any fan, I would probably just replace it -- they're relatively cheap, after all. However, this is a special fan. (Aside: for what it's worth, I feel bad that the graphics card blocks part of the fan, but that is the only slot the card fits in, so I had no choice.) After pulling out my motherboard user guide, it looks like this fan sits directly on top of the chipset. To be perfectly honest, I have no clue what the purpose of the chipset is, but it sounds important; after some quick research, I see that it provides the bridge between my CPU, RAM and graphics, among other things. A quick search at Newegg tells me that chipset fans can be bought at pretty reasonable prices (under 20 dollars). Is it practical to replace this fan? This is an old computer as computers go, and I wouldn't be terribly upset to upgrade the motherboard and processor, so perhaps this is a sign.

    Hardware specs:
    Motherboard: Asus A8N-E
    Chipset: NVIDIA nForce4 Ultra


  • EEE PC dropbox server running 24/7

    - by microspino
    I'd like to create a mini dropbox and print server for a small SOHO network of 5 users (all of them on Windows XP desktops). The device needs to run 24/7, or at least 12/7 (I could accept workday hours only, but the other two options would be better.)

    Dropbox mini server: each computer on my LAN would have a 90 GB dropbox syncing with the mini server, and the server's copy syncing to the web.

    Print server: I have a Samsung SCX-4521F (fax/print/scan/copy), a Samsung ML-2010, an HP LaserJet P1006, an HP Color LaserJet CP1215, an HP Officejet Pro K8600 and an HP DesignJet 500. All of them are currently connected through little print servers, which I want to get rid of by hooking everything up to this mini server. The fax/print/scan/copy machine needs to stay connected to a PC so I can use the software that comes with it; the mini server would save me on this too.

    Fax/scan server: since I have the above-mentioned fax/print/scan/copy machine, I would like people to be able to use it from/to their computers through the mini server.

    I'm considering a recent EeeBox machine, because I've heard good things about Atom CPUs and because it seems a recent BIOS version can switch the box off and on autonomously. I'd like some advice from you. Best of all would be:
    - if you have something similar running for a long time;
    - if you disagree with this hardware choice and would suggest some other device;
    - if you see any issues with my printing setup;
    - anything else ;)

    My budget runs from zero (using the right software to build something on top of an old PC) to 500€ max.
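
    Whatever box ends up doing the job, CUPS can replace the scattered print servers; a minimal sketch, with a made-up queue name and printer address:

      sudo apt-get install cups
      sudo lpadmin -p ml2010 -E -v socket://192.168.1.50:9100  # add a raw/JetDirect queue
      sudo cupsctl --share-printers                            # publish queues on the LAN
      lpstat -p                                                # verify the queues exist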


  • Should one have a separate user account for work use? [closed]

    - by Tyler Wayne
    This question examines the practice of using a separate OS-level user account to divide work use from personal use (specifically, in a creative profession and on a personal computer). I recently left my in-the-flesh job to go to school, but I'm carrying on with the work remotely. I do all of my work on my laptop, and I currently have a separate user account called "Work" where I do exactly that. However, I'm now starting to question that practice. Because my hobby is the same as my job, I want to save notes of the things I learn while working. Because ideas come at any moment, I often want to throw something into my personal task manager's inbox and look at it again later. That task manager is well-suited to handle both the work and personal aspects of my life. Only my personal account has admin rights, but work sometimes requires me to install programs. My employer has no preference regarding my choice, so that is a non-issue. My work is essentially freelance web development, so advice given with that in mind will be much appreciated. Back up all opinion with some personal experience, please. Ideally, give a list of pros and cons and then name reasons for your position.


  • Using ZFS or XFS on a Xen guest running Linux

    - by zoot
    Background: I'm investigating the viability of using a filesystem other than ext3/4, with the ability to take snapshots for backup and rollback purposes. The servers under consideration are mailbox server nodes running on Linode's Xen-based VPS platform. I'm particularly drawn to the various published benefits ZFS offers in terms of data integrity, and to this year's stable release of native ZFS support on Linux (http://zfsonlinux.org). ZFS appears to be the more thorough option in terms of benefits and simplicity, compared to LVM+XFS. Please note that I have little experience with ZFS (which I use on a local FreeNAS installation) and none with XFS, hence the post. To date, my servers use ext3 filesystems, not managed under LVM. Question in detail: I have two questions. (1) Which of the two filesystems would be the better choice, running on a Xen Linux guest, across all three of the following aspects: snapshots, data integrity, and performance? (2) If ZFS is a viable option, is it practical to use RAID-Z across Xen disk images to further enhance the solution's data integrity? Note: I'm reluctant to consider btrfs, given the many warnings I've read about using it on production systems.
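
    For question (1), the snapshot workflow is the part ext3 cannot do at all, so it is worth sketching (tank/mail is a placeholder dataset name):

      zfs snapshot tank/mail@2012-06-15   # instant, initially zero-space snapshot
      zfs list -t snapshot                # enumerate existing snapshots
      zfs rollback tank/mail@2012-06-15   # roll the live dataset back
      zfs send tank/mail@2012-06-15 | ssh backuphost zfs receive backup/mail  # off-box copy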


  • SQL 2008 disk layout on a budget (this is for database mirroring)

    - by user22215
    Guys, I'm rolling out a SQL database server that will back SharePoint 2007, and right now I need some advice on my disk layout. I have two Dell servers that are configured a little differently in terms of storage. The principal server will use a combination of local storage and SAN storage -- I have to work with what I have; the organization is currently all allocated on SAN storage, and it was like pulling teeth to even get what I have now. My disk setup on the principal is as follows:
    - RAID 1 for the OS
    - RAID 10 for logs
    - RAID 10 fibre on the SAN for high-I/O databases
    - RAID 10 SATA on the SAN for content databases

    My question for the principal server: where should I place tempdb? I thought about placing it on the fibre RAID 10, which will host my high-I/O SharePoint SSP databases; my only other choice is to move it to the RAID 1 OS partition, which I'm sure you guys will be against. Now let's talk about the mirror server: it is not connected to the SAN and is all local storage -- six 15k SAS drives. My question is the same: do I put tempdb on the OS partition, or do I leave the OS partition alone and use a single RAID 10 for everything else? Any help you can provide is much appreciated.


  • Default Gateway solution on NAT'd network (best options)

    - by kwiksand
    I've recently changed a network from a bunch of machines exposed directly to the net to a more security-conscious, firewall-fronted network with a DMZ for public services. Everything is mostly working perfectly now, but I've got the old problem of NAT loopback, where a machine on the LAN wants to access a public service via the public/external IP. I've solved this previously in small/SOHO environments, simply using the NAT loopback feature of the router in use or a simple iptables rule to the same effect, but this time I want to make the most resilient choice with the least ongoing concern. It seems I can:

    1) Use iptables to DNAT and MASQUERADE, rewriting source/destination so the connection works correctly, i.e.:
      iptables -A PREROUTING -t nat -d ip.of.eth0.here -p tcp --dport 8080 -j DNAT --to 192.168.0.201:8080
      iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -p tcp --dport 8080 -d 192.168.0.201 -j MASQUERADE

    2) Use split DNS, with internal mappings for the public IPs.

    3) Potentially do some routing nastiness by setting the default gateway to use a different externally exposed IP and then come back in via the public route (messy).

    Someone also mentioned putting the default gateway within the DMZ (on Server Fault), but I can't find the post again. I'm sure this is a common issue for many NATed networks, but I've never really seen the cure-all for this problem. What is your opinion?
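
    For reference, a fuller sketch of option 1 using SNAT instead of MASQUERADE (addresses are placeholders), which pins the reply path back through the firewall so LAN clients hitting the public IP get working hairpin NAT:

      PUB=203.0.113.10; SRV=192.168.0.201; LAN=192.168.0.0/24
      iptables -t nat -A PREROUTING  -d $PUB -p tcp --dport 8080 -j DNAT --to-destination $SRV:8080
      iptables -t nat -A POSTROUTING -s $LAN -d $SRV -p tcp --dport 8080 -j SNAT --to-source $PUB

    Split DNS (option 2) avoids the extra NAT hop entirely, at the cost of maintaining internal zones.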


  • RabbitMQ message consumers stop consuming messages

    - by Bruno Thomas
    Hi Server Fault, our team is in a spike sprint to choose between ActiveMQ and RabbitMQ. We made 2 little producer/consumer spikes sending an object message containing an array of 16 strings, a timestamp, and 2 integers. The spikes are fine on our dev machines (messages are consumed properly). Then came the benchmarks. We first noticed that sometimes, on our own machines, when we sent a lot of messages, the consumer would hang: it was still there, but the messages were accumulating in the queue. When we moved to the bench platform:
    - a cluster of 2 RabbitMQ machines, 4 cores/3.2 GHz, 4 GB RAM, load-balanced by a VIP
    - one to 6 consumers running on the RabbitMQ machines, saving the messages into a MySQL DB (same type of machine for the DB)
    - 12 producers running on 12 AS machines (Tomcat), driven by JMeter running on another machine

    The load is about 600 to 700 HTTP requests per second on the servlets, which produce the same load of RabbitMQ messages. We noticed that sometimes consumers hang (well, they are not blocked, they just stop consuming messages). We can see this because each consumer saves around 100 msg/sec into the database, so when one stops consuming, the overall messages-saved-per-second in the DB falls by the same ratio (if, say, 3 consumers stop, we fall from around 600 msg/sec to 300 msg/sec). During that time the producers are fine and still produce at the JMeter rate (around 600 msg/sec), and the messages sit in the queues and are taken by the consumers still "alive". We load all the servlets with the producers first, then launch all the consumers one by one, checking that the connections are OK, then run JMeter. We are sending messages to one direct exchange, and all consumers listen on one persistent queue bound to that exchange. That point is major for our choice. Have you seen this with RabbitMQ? Do you have an idea of what is going on? Thank you for your answers.
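
    A hedged first diagnostic: a consumer holding messages unacknowledged (for instance a missed basic.ack combined with a large or unlimited prefetch) looks exactly like a hung consumer from the outside. The broker can confirm or rule this out:

      rabbitmqctl list_queues name messages_ready messages_unacknowledged consumers
      rabbitmqctl list_channels connection prefetch_count messages_unacknowledged

    If messages_unacknowledged grows on the stalled consumers' channels, the problem is acknowledgement/prefetch handling in the client code rather than the broker itself.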


  • Office 365 - Outlook shows Global Address List when clicking "Rooms" during a meeting request

    - by TheCleaner
    This appears to be a "known" issue, but apparently no fix for it. However, I've been impressed before at the tenacity of the experts here to figure out an answer/fix. ISSUE When booking a New Meeting in Outlook (2013 or 2010) and choosing the Rooms button: The default list that opens is the Offline Global Address List: Which means a user has to change from the Offline Global Address List to the All Rooms list as shown here in order to easily pick from the list of actual rooms/resources: This isn't the default however for On-Premise Exchange servers. They default "correctly" to the All Rooms list when you click the Rooms button in the meeting request. While the option of using the Room Finder is there and does work, users have to know to click the Room Finder choice and it doesn't fix the actual root issue here. MY RESEARCH A few links I've found: http://community.office365.com/en-us/forums/158/t/41013.aspx http://community.office365.com/en-us/forums/148/p/24139/113954.aspx http://community.office365.com/en-us/forums/172/t/58824.aspx It was suggested that it might be that the "msExchResourceAddressLists attribute has incorrect value set". I checked my config by running: Get-OrganizationConfig | Select-Object ResourceAddressLists and the output was what it should be: ResourceAddressLists -------------------- {\All Rooms} QUESTION Does anyone have a fix that will make the All Rooms list be the default list when clicking the Rooms button in Outlook when using Office 365 / Exchange Online?


  • Web filtering (Proxy or DNS) with option for users to ignore the block

    - by Jon Rhoades
    We are struggling with our users visiting infected or "attack" sites, and with phishing in general. Most of our machines are protected by an enterprise antivirus and monitoring solution (McAfee ePO), and we try to get people to use Firefox... but no AV is perfect, we have to endure personal machines as well (albeit on their own 'plague' VLANs), and our users seem intent on disclosing their passwords to the world. To complicate matters, we don't want to implement a hard block, for many, many reasons. Instead we would like to implement something akin to Firefox's "Reported Scam/Phish/Attack Site" page -- "Get me out of here" or, crucially, "Let me in anyway" -- giving users the choice to still infect themselves if they feel like it (or to view a site that was blacklisted incorrectly). The reason we can't just rely on Firefox is that we have a core enterprise app certified only on IE 6 and 7 -- thank you, Oracle. Is it possible to implement this type of advisory filtering with either a proxy (in our case Squid) or DNS?
    http://serverfault.com/questions/15801/what-free-options-are-available-for-web-content-filtering
    http://serverfault.com/questions/47520/open-source-filtering-of-https-traffic
    These were a good start, but they don't address the advisory aspect of the filtering.
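
    With Squid, one plausible shape for this is deny_info, which redirects blocked requests to a local warning page; that page can then offer both "get me out of here" and "let me in anyway" links. A sketch of the squid.conf side (the warn.cgi page and its whitelist mechanics are hypothetical and would have to be built):

      # /etc/squid/squid.conf (fragment)
      acl blacklist dstdomain "/etc/squid/blacklist.txt"
      acl bypassed  src       "/etc/squid/bypass-ips.txt"
      deny_info http://filter.internal/warn.cgi?url=%u blacklist
      http_access allow bypassed
      http_access deny  blacklist

    followed by squid -k reconfigure. The "let me in anyway" link would have warn.cgi append the client IP (or the requested domain) to the bypass list and reload Squid.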


  • Creating basic, redundant gigE or IB storage network for Xen?

    - by StaringSkyward
    With only a modest budget, I want to move my 4 Xen servers over to network storage -- either NFS or iSCSI, to be determined by how well each performs when we test it (we need good throughput, and it must keep working through link- and switch-failure tests). We may add another couple of Xen servers at some point when this is done. I don't know much about the design and operation of storage networks, so I would really appreciate some hints from those with experience. The budget is around $3,800, excluding the storage appliance. I currently see two options that stay on budget:

    1) Go for used InfiniBand hardware and aim for 10 Gb performance.
    2) Stick with gigabit Ethernet and buy some new switches (Cisco or ProCurve) to create a storage-only Ethernet LAN, upgrading to 10 GbE later but using 10 GbE-capable hardware where possible to reduce upgrade costs.

    I have seen used, warrantied InfiniBand switches at reasonable prices (presumably because big companies are converging on 10 Gb Ethernet?), and the promise of cheap 10 Gb is attractive. I know nothing about IB, so here come the questions:
    - Can I buy 2 switches and put multiple HBAs in my Xen and storage nodes to get redundancy and increased performance, without complexity or expensive management-software costs? If so, can you point me to some examples? (See the gigabit sketch below for the Ethernet equivalent.)
    - Do NFS and iSCSI work just the same regardless of the interconnect?
    - Is IB a sensible choice, or could/should I use Ethernet or FC on the same budget? I'm keen not to get boxed into a corner for future upgrades, however.

    For the storage, I am likely to build a storage server using NexentaStor, with the intention of later adding more disks and SSDs, and adding another server to provide a failover option at the storage level. An HP LeftHand starter SAN is also under consideration. Thanks in advance.
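
    On the gigabit side, redundancy across two ordinary switches needs nothing more exotic than Linux NIC bonding on each Xen and storage node -- in active-backup mode there is no switch-side configuration or management software at all. A sketch, with made-up addresses:

      # /etc/network/interfaces (fragment, Debian/Ubuntu ifenslave syntax)
      auto bond0
      iface bond0 inet static
          address 10.10.10.11
          netmask 255.255.255.0
          bond-slaves eth1 eth2       # one NIC cabled to each switch
          bond-mode active-backup     # survives link or whole-switch failure
          bond-miimon 100             # link-check interval, ms

    NFS and iSCSI both run over such a bond unchanged; iSCSI can alternatively use multipath I/O across two plain interfaces instead of bonding.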


  • Software mirroring (RAID1) versus "Fake Raid" for new Windows 7 install

    - by kquinn
    I've just ordered two new hard drives for my main desktop and a copy of Windows 7 Professional 64-bit. I'd like to do a clean install of Win7 onto the new drives (leaving my old XP Pro boot partition around for a while in case something goes disastrously wrong, etc.), and I want them set up in mirrored (RAID 1) mode. My understanding is that Win7 Pro can do software mirroring, but can I set this up directly at install time? If so, how? Note that I'd like the disk split into three partitions (OS, apps & data, bulk data), all of which should be mirrored. Would it be better (more reliable or faster) to use my motherboard's hardware RAID support? My motherboard is an older nVidia nForce 680i SLI, which is not the most stable of motherboards, and I'm not sure how trustworthy its RAID 1 implementation might be (or whether Win7 could even detect and install onto a hardware-mirrored volume). Also, the performance characteristics of RAID 1 are rather different from RAID 0 or RAID 5, and I'm wondering whether Win7's software mirroring might actually be faster than hardware RAID 1. For example, I'm more of a Unix admin when I have to wear the sysadmin hat, and I've had great success deploying ZFS: most hardware RAID 1 implementations have to read both disks and compare results to look for data errors, but ZFS can read from only one disk in the mirror and rely on the built-in checksum, meaning it can have up to 2x the number of reads in flight, as long as there's no data corruption. Edit: Okay, my question about whether Windows 7 can do software mirroring has been answered: it can. I'm still unsure whether Windows software RAID or my motherboard's hardware "fake RAID" function is the better choice, though. Remember, I'm only interested in mirroring -- not the more complicated striping or parity operations that generally expose the poor performance of crappy motherboard RAID solutions.


  • Alternatives to using email (in particular, Outlook) as a knowledge store?

    - by Umber Ferrule
    I suspect that, like many people, I use my work email account (accessed via Outlook 2007) to store information. I generally try to group similar things in folders and sub-folders, but with a multitude of folders this gets very unwieldy. In particular, it can be a bind to locate things using Outlook's tree structure. (As an aside: I've yet to come across a good free search add-on for Outlook.) I realise Outlook is not the best place to store all my information and I'd prefer not to. In an ideal world I'd like to be able to organise all of the information stored in Outlook in a MindMap (my software of choice being Freemind) or Wiki. To maintain an email audit-trail, I've considered saving individual emails as files using a MindMap or Wiki to link them. What do people think of this? (I can't say I relish the thought of the exporting process!) Whatever I do is going to involve some pain (i.e. setting up a Wiki/MindMap) or sticking with what Outlook provides currently. Has anyone been in the same position? Has anyone mass-migrated information from Outlook? If so, what was the best way? Any ideas or alternative proposals?


  • Can I get ethernet out of my Verizon FIOS set-top box?

    - by Tom Hughes
    Setup: my home network is long and skinny, and the FIOS-connected router is all the way at one end of the apartment. At the other end, far away (and a floor higher), is my HD TV, which gets a cable-TV signal from a Verizon set-top box that is coax-connected back to the FIOS on-premises equipment. Wi-Fi won't work; the apartment is too stretched out, with old, thick walls and floors. Goal: I think there are three ways to get Ethernet back to where the HD TV is:

    1) Run a cable! This isn't crazy, but it isn't cheap either (my building won't let me do it myself; it involves hiring an electrician because the cable would run partly through the public hallway ceiling).
    2) Split the coax near the TV and put in... a MoCA device?
    3) Somehow tease the set-top box, which has an RJ-45 (Ethernet) port on the back, into giving me network access.

    Question: any other choices? And is one choice better than the others? #3 is by far the most desirable because it would involve the least wiring -- but I can't find any resources to help make it happen. #2 is a bit scary; I don't want to degrade service to the TV, or anywhere else for that matter.


  • Ubuntu server or Debian server (to run C++ apps developed on Ubuntu)

    - by skyeagle
    I have written a number of C++ server-side daemons for my website, using my Ubuntu 9.10 dev machine. They are "GUI-less" daemons (plus the libraries the daemons use). I am now about to host the website and need to decide between Debian Server and Ubuntu Server. In a nutshell, here is the situation:
    - I developed on Ubuntu Desktop because I preferred the friendlier GUI.
    - I would like to deploy on Debian Server because of the (perceived?) greater robustness of Debian over Ubuntu Server (I may be totally wrong here -- in fact, this is really what the question is about).

    If Debian Server is indeed more robust, then I have no choice but to go with it -- BUT will my Ubuntu-built C++ apps run on it, or do I need to recompile them on the server? (I'd HATE to have to do this, because I want to keep the server machine clean and light: no GUI, no dev tools, etc.) This last question is really about binary compatibility between Ubuntu and Debian. I want the server to be robust, secure and stable, and to simply act as a server (i.e. LAMP and very little else -- no GUI, etc.). Given that requirement, and the fact that I need to run my C++ apps (developed on Ubuntu 9.10), I need advice on which OS to choose for the server. Ideally, any advice will be backed by a reason. I am particularly interested in hearing from people who have been in an identical situation, or done something similar.
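
    The compatibility question can be answered empirically: a daemon runs if the target ships the shared libraries and glibc symbol versions it was linked against. A quick check (mydaemon is a placeholder name):

      ldd ./mydaemon                     # every shared library the binary needs
      # Newest glibc symbol version the binary requires:
      objdump -T ./mydaemon | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -1
      # On the target server, glibc reports its own version when executed:
      /lib/libc.so.6

    If the server's glibc is at least as new as the binary demands, the daemons should run unmodified; Ubuntu 9.10 shipped a newer glibc than the Debian stable of the time, so that is the likely sticking point.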


  • Optimal file system type and mount options for an rsnapshot dedicated drive

    - by Nimmy Lebby
    We have an external USB 2 drive that we use as a backup drive for our configuration. We use rsnapshot for the backups, which relies on a few standard commands for managing snapshots:
    - rm -rf: deletes expired snapshots
    - mv: moves older snapshots down a slot
    - cp -al: duplicates the last snapshot into a new slot
    - rsync -a --delete --numeric-ids --relative: synchronizes the new snapshot

    As you can see in the log below, the majority of the time is spent in the rm -rf and cp -al steps:

      [25/Dec/2010:14:00:02] rsnapshot hourly: started
      [25/Dec/2010:14:00:02] echo 21012 > /var/run/rsnapshot.pid
      [25/Dec/2010:14:00:02] rm -rf /mnt/extdrive/snapshots/hourly.5/
      [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.4/ /mnt/extdrive/snapshots/hourly.5/
      [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.3/ /mnt/extdrive/snapshots/hourly.4/
      [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.2/ /mnt/extdrive/snapshots/hourly.3/
      [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.1/ /mnt/extdrive/snapshots/hourly.2/
      [25/Dec/2010:14:15:48] cp -al /mnt/extdrive/snapshots/hourly.0 /mnt/extdrive/snapshots/hourly.1
      [25/Dec/2010:14:23:32] rsync -a --delete --numeric-ids --relative /etc /mnt/extdrive/snapshots/hourly.0/sm4/
      [25/Dec/2010:14:23:52] touch /mnt/extdrive/snapshots/hourly.0/
      [25/Dec/2010:14:23:52] rm -f /var/run/rsnapshot.pid
      [25/Dec/2010:14:23:52] rsnapshot hourly: completed successfully

    My questions: I'm currently using ext4 for the filesystem; maybe this is not the best choice among those available in Red Hat -- does anyone have recommendations that would speed up the process? The partition's mount options are sync,dirsync 1 2 -- is there a way to optimize this, given that the drive is used solely for rsnapshot? Of course, reasoning would be greatly appreciated.
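
    A hedged experiment before changing filesystems: sync and dirsync force every metadata update to disk synchronously, and rm -rf / cp -al on a snapshot tree are almost pure metadata, so those mount options are the prime suspect for the 15- and 8-minute phases in the log above:

      mount -o remount,async,noatime /mnt/extdrive
      time cp -al /mnt/extdrive/snapshots/hourly.0 /mnt/extdrive/hourly.test
      rm -rf /mnt/extdrive/hourly.test
      # If the timing improves, make it permanent in /etc/fstab, e.g.:
      #   /dev/sdb1  /mnt/extdrive  ext4  noatime  1 2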


  • Setting my NIC to full duplex

    - by David
    I am trying to optimize the network speed of my Solaris x86 server, and have discovered that the Cisco 3548 it is connected to has issues with the server's NIC. The NIC appears not to have been configured fully and comes up at 100 half-duplex, while the 3548 ports are all set to 100 full. Ideally I'd like the server set to 100 full as well, and I have been attempting to configure this with ndd commands, but with no results. The following command:

      -bash-3.00# dladm show-dev rtls0
      link: unknown    speed: 100 Mbps    duplex: unknown

    The NIC shows up as:

      pci bus 0x0001 cardnum 0x06 function 0x00: vendor 0x10ec device 0x8139
      Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+

    which should be configurable. I have modified the configuration file from auto-config (5) to 100 fdx (4) to no avail. If there is no other choice, I could set the Cisco 3548 port to 100 half-duplex, but that solution causes a huge performance loss: throughput is currently about 500 Kbps, when it should be around 40 Mbps.
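
    For reference, the conventional Solaris ndd sequence for forcing 100/full is below -- hedged, because the rtls driver may simply not export these parameters (which would also explain the "unknown" fields from dladm):

      ndd /dev/rtls0 '?'                      # list the parameters the driver actually supports
      ndd -set /dev/rtls0 adv_autoneg_cap 0
      ndd -set /dev/rtls0 adv_100fdx_cap 1
      ndd -set /dev/rtls0 adv_100hdx_cap 0

    If those parameters are missing, note that a switch port hard-set to 100 full against an auto-negotiating NIC is exactly what produces a half-duplex mismatch; setting the Cisco port back to auto-negotiation may be the more realistic fix.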


  • Unix Permissions: Enable access to files no matter the user?

    - by TK Kocheran
    I've been using Linux for a long time and I am still completely in the dark about how file permissions really work. With that in mind, does anyone have any books or thorough guides I could read to really understand things completely? I've done my fair share of sysadminning, so I know the easy stuff like making directories readable and writable, making files executable, and changing the owner of a file, but on sharing files across users, I'm lost.

    Here's my main problem. I have a number of machines across which I intend to synchronize my music library. I've been using Unison for a while now, and it's a great choice, as I can easily run it over SSH on the local network I just set up. Win-win. Up until this point, I've been synchronizing computers using a 2 TB external hard drive (computer 1 unisons to the HD, computer 2 unisons to the HD, etc.). This is tedious at best, especially since I encrypted the drive, making it a huge hassle to hook it up to all of my machines and sync it.

    Anyway, the drive runs ext4 (in TrueCrypt), so it maintains all Unix filesystem info like owners and groups. I just set up a new machine and Unison'd it to get the music onto it, and I realized that now all of my permissions are fubar. I had to run Unison as root, since that was the only way I could get the files to come off of the external drive. Apparently, because I'm using a different user name on this machine than my usual "rfkrocktk" on all the others, this throws a huge wrench in the gears.

    Here's my use case. This laptop has two effective users, "leandra" and "rfkrocktk". I want to share music between these two users, so I symlinked /home/rfkrocktk/Music to point to /home/leandra/Music. How do I (a) allow both users to read/write/delete files in this folder, and (b) keep everything nicely in sync without messing up file ownership?
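
    A sketch of the classic shared-group recipe for the Music tree ("music" is a group name invented for this example):

      sudo groupadd music
      sudo usermod -aG music leandra
      sudo usermod -aG music rfkrocktk
      sudo chgrp -R music /home/leandra/Music
      sudo chmod -R g+rwX /home/leandra/Music
      # setgid on the directories makes new files inherit the group:
      sudo find /home/leandra/Music -type d -exec chmod g+s {} +

    For newly created files to stay group-writable, both users also need a umask of 002 (or default ACLs via setfacl -d -m g:music:rwX); with that in place, Unison run as either user preserves a workable layout on both ends.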

