Search Results

Search found 3094 results on 124 pages for 'fit'.

  • Which server software and configuration will retrieve mail from multiple POP servers and route it to the correct user by address?

    - by rolinger
    I am setting up a small email server on a Debian machine. It needs to pick up mail from a variety of POP servers and figure out who to deliver it to from the address, but I'm not clear which software will do what I need, although it seems like a very simple question! For example, I have 2 users, Alice and Bob:

    - Any email to [email protected] ([email protected] etc.) should go to Alice; all other mail to domain.example.com should go to Bob.
    - Any email to [email protected] should go to Bob, and [email protected] should go to Alice.
    - Anything to *@bobs.place.com should go to Bob.
    - And so on...

    The idea is to pull together a load of mail addresses that have built up over the years and present them all as a single mailbox for Bob and another one for Alice. I'm expecting something like Postfix + Dovecot + Amavis + SpamAssassin + SquirrelMail to fit the bill, but I'm not sure where the above comes in. Can Postfix deal with it as a set of defined regular expressions, or is it a job for Amavis, or something else entirely? Do I need fetchmail in this mix, or is its role now covered by one of the other components above? I think of it as content filtering, but everything I read about content filtering is focused on detecting spam rather than routing email.
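
    Address-based routing like this is normally Postfix's job, not Amavis's. Below is a minimal sketch, assuming fetchmail pulls the POP accounts and hands everything to the local Postfix, which then rewrites recipients through a virtual alias map; the alice@/bob@ addresses are placeholders (the originals are redacted above), and the file paths are just the conventional ones:

        # /etc/postfix/virtual -- indexed map: exact addresses and whole-domain catch-alls
        alice@domain.example.com     alice
        @domain.example.com          bob
        @bobs.place.com              bob

        # /etc/postfix/virtual_regexp -- regexp map for pattern matches
        /^alice[0-9]*@domain\.example\.com$/    alice

        # wire both maps into main.cf, build the indexed map, reload
        postconf -e 'virtual_alias_maps = hash:/etc/postfix/virtual, regexp:/etc/postfix/virtual_regexp'
        postmap /etc/postfix/virtual
        postfix reload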

  • SSD cache to minimize HDD spin-up time?

    - by sirprize
    Short version first: I'm looking for Linux-compatible software which is able to transparently cache HDD writes using an SSD. However, I only want to spin up the HDD once or twice a day (to write the cached data to the HDD). The rest of the time, the HDD should not be spinning, due to noise concerns.

    Now the longer version: I have built a completely silent computer running Xubuntu. It has an A10-6700T APU, a huge fanless cooler, a fanless PSU, and an SSD. The problem is: it also has (and needs) a noisy HDD, and I want to forbid spinning it up during the night. All writes should be cached on the SSD; reads are not needed in the night.

    Throughout every day, this computer will automatically download about 5 GB of data, which will be retained for about a year, giving a total needed disk capacity of slightly less than 2 TB. This data is currently stored on a 3 TB noisy hard disk drive which is spinning day and night. Sometimes I'll need to access some data from several months ago. However, most times I'll only need data from the last 14 days, which would fit on the SSD.

    Ideally, I'd like a transparent solution (all data on one filesystem) which caches all writes to the SSD, writing to the HDD only once a day. Reads would be served by the cache if they were still on the SSD; otherwise the HDD would have to spin up. I have tried bcache without much success (using cache_mode=writeback, writeback_running=0, writeback_delay=86400, sequential_cutoff=0, congested_write_threshold_us=0 - anything missing?) and I read about ZFS ZIL/L2ARC, but I'm not sure I can achieve my goal with ZFS. Any pointers? If all else fails, I will simply use some scripts to automatically copy files over to the big drive while deleting the oldest files from the SSD.
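
    For reference, the bcache knobs listed above live in sysfs; a sketch of how they could be set, plus a hand-rolled once-a-day flush, assuming the device is bcache0 (and note the dirty_data value format may vary by kernel):

        # keep dirty data on the SSD: writeback mode, background flushing off
        echo writeback > /sys/block/bcache0/bcache/cache_mode
        echo 0 > /sys/block/bcache0/bcache/writeback_running
        echo 0 > /sys/block/bcache0/bcache/sequential_cutoff    # cache sequential writes too

        # once a day (e.g. from cron): let the HDD spin up, flush, stop again
        echo 1 > /sys/block/bcache0/bcache/writeback_running
        while [ "$(cat /sys/block/bcache0/bcache/dirty_data)" != "0.0k" ]; do sleep 60; done
        echo 0 > /sys/block/bcache0/bcache/writeback_running
        hdparm -y /dev/sdX    # sdX = the backing HDD, put it back to standby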

  • Winamp question: Generating 'dynamic playlists' from file playlists -OR- mass-tagging by file playlist

    - by Daddy Warbox
    I'm trying to think of a way to do this. I sort my songs into a variety of playlists corresponding to different 'moods' I might have as I listen to them, and some songs fit more than one kind of mood (e.g. a jazz song might be 'stylish' and 'emotional', or something to that effect). I also give them star ratings for a general sort of opinion about them. I want to be able to filter and sort my media library by the moods I want or don't want, as well as by star rating.

    Anyone have a good way to do something like this? I can't seem to use Winamp's dynamic playlists to generate lists from existing filesystem playlists (e.g. songs in given .m3u files). Hand-tagging files with Winamp's tag editor is a royal pain; it's trouble enough just giving a star rating and sorting into playlists as is. If there is a way to mass-tag the songs within each playlist with mood words, to allow me to create dynamic playlists, I'd be fine (for now). It'd be nice if I could do this via some kind of hotkey for each song, too - I'm looking to see if I can use a macro program or something to do that.

    Thanks in advance.

    P.S.: Alternatively, would something like Foobar have functions like this?

  • Is it possible to use rsync over sftp (without an ssh shell)?

    - by Tom Feiner
    Rsync over ssh works great every time. However, trying to rsync to a host which allows only sftp logins, but not ssh logins, produces the following error:

        $ rsync -av -e ssh /source user@remotehost:/target/
        protocol version mismatch -- is your shell clean?
        (see the rsync man page for an explanation)
        rsync error: protocol incompatibility (code 2) at compat.c(171) [sender=3.0.6]

    Here's the relevant section from the rsync man page:

        This message is usually caused by your startup scripts or remote shell
        facility producing unwanted garbage on the stream that rsync is using
        for its transport. The way to diagnose this problem is to run your
        remote shell like this:

            ssh remotehost /bin/true > out.dat

        then look at out.dat. If everything is working correctly then out.dat
        should be a zero length file. If you are getting the above error from
        rsync then you will probably find that out.dat contains some text or
        data. Look at the contents and try to work out what is producing it.
        The most common cause is incorrectly configured shell startup scripts
        (such as .cshrc or .profile) that contain output statements for
        non-interactive logins.

    Trying this on my system produced the following in out.dat:

        ssh-dummy-shell: Command not allowed.

    As I thought, the host is not allowing ssh logins. The following link shows that it is possible to accomplish this task using FUSE with sshfs - however it is extremely slow, and not fit for production use. Is there any chance of getting rsync over sftp to work?
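
    For reference, the sshfs workaround from that link looks roughly like this (slow, as noted, because every file operation round-trips over sftp; the mount point is illustrative):

        # mount the sftp-only host as a local filesystem via FUSE
        mkdir -p /mnt/remotehost
        sshfs user@remotehost:/target /mnt/remotehost

        # then rsync against the mount as if it were local
        rsync -av /source /mnt/remotehost/

        # unmount when done
        fusermount -u /mnt/remotehost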

  • Could I have damaged a CPU?

    - by Pascal
    Hey guys, this is a question I asked on the AnandTech forums, but I just discovered this community and think this question would fit right in. Here goes:

    I built a PC around a Q9550 a year and a half ago (the MoBo is an Asus P5Q Deluxe). Specs give 2.83 GHz; I OC'ed it to 3.40 GHz without any problem (or so I thought) until 2 months ago. Cooling is provided by the stock Intel fan.

    2 months ago, I began to see random crashes, with the BIOS reporting a CPU overheat error. The PC would reboot at OC'ed speed without any problem. Since last Saturday and a few more crashes, the PC won't reboot at 3.40 GHz, and even at stock speed (2.83 GHz) I get core temperatures of 60 C idle, 95 C under load on the first two cores. These are the 4 core temperatures I am talking about, not the T-CPU, which obviously is lower. The fan is running at a steady 2000 RPM.

    Questions for you:

    1. Is 2000 RPM the normal speed of the Intel fan, or is my fan somehow broken (which could explain the overheating)? In that case, any recommendation for a good fan for OCing?
    2. The hypothesis I fear is the right one: can the CPU have been slowly damaged over time by this OCing, meaning there is nothing much to do except wait for it to die?

    (As a side note, I am surprised that the Q9550 is still around 300 $CDN here... I thought it would have been cheaper with all those i3/i5/i7 around.) Any help or advice would be more than welcome...

  • Warning for "Reallocated Event Count" S.M.A.R.T. attribute with a new/unused drive. How serious is this?

    - by Developer Art
    I've just looked at the health status of my old 2.5-inch 500 GB Fujitsu drive with the popular "HD Tune" utility. It shows a warning for the "Reallocated Event Count" attribute. How serious is that?

    The thing is that the drive is practically new. I pulled it out of a new laptop over a year ago and never used it since. Right now it only has 53 "Power On" hours, which sounds about right since I only had it running a few evenings overnight before switching it for something more performant.

    Does this warning indicate that the drive is likely to fail some time in the future? I'm somewhat perplexed since the drive is effectively unused. What is more, I have arranged with somebody to buy this drive off me since I don't really need it. It is 12.5 mm thick (with 3 platters), meaning it doesn't fit into an external enclosure, which makes it quite useless to me. Can I give away the drive without having it on my conscience, or should I rather cancel the deal? In other words, can the drive be used safely for years to come, or should I throw it away?

    I'm running a sector test now to see if there are any real problems. I will post the results as soon as they're available.
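
    A quick way to see the raw counters behind HD Tune's warning is smartmontools (a sketch; smartctl runs on both Linux and Windows, and /dev/sda is a placeholder for the drive):

        # full SMART report; check attributes 5 (Reallocated Sector Count)
        # and 196 (Reallocated Event Count) in the RAW_VALUE column
        smartctl -a /dev/sda

        # the same kind of surface scan the poster is running
        smartctl -t long /dev/sda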

  • How to copy mailboxes from Exchange 2003 to Exchange 2007 across forests?

    - by Tor Ivar Larsen
    Hi. We're going through a quite difficult conversion from an old ASP solution to an entirely new one. This includes moving mailboxes from Ex2003 to Ex2007. We want to do this without deleting the old mailboxes on the Ex2003 server, to have a fall-back in case something goes wrong.

    I have investigated the "Move-Mailbox" cmdlet in the Ex2007 shell, and it seems to fit our needs quite well. The only problem is that we want to keep the old mailboxes. This could easily be accomplished with -SourceMailboxCleanupOptions, but we can't use that when we have used the -AllowMerge switch. The reason we need -AllowMerge is that all the user accounts with connected mailboxes are already created on Ex2007 (some automatic user creation tool; no real relevance to the case in question).

    The twist is that the Exchange servers are in two different forests:

    - Windows 2003 SP1 on DC1, Windows 2003 SP2 on DC2 in forest 1.
    - Windows 2003 R2 SP2 on DC1 in forest 2.

    Can we use Move-Mailbox safely for this purpose? And if yes, how?

  • PDU management interface has low availability - product flaw or isolated issue?

    - by DeanB
    Our colocation provider has supplied us with APC AP7932 switched 0U PDUs as part of several cabinets they provide us. We have had a lot of trouble with the network management aspect of these PDUs, which I'll describe below. We are moving to cage space in the same datacenter, and plan to provide our own PDUs, so I'd like to determine which enterprise-grade PDUs have been reliable performers from a remote management perspective.

    Our colo-provided PDUs are configured to support management via an SSL web UI and via telnet. We updated the firmware on all of them to the current version as of November 2011. They respond to pings reliably, and we have no reason to suspect a network-layer issue. However, we experience frequent hangs, timeouts, disconnects, and general unavailability from the embedded management host in all of the PDUs. We occasionally have to restart the microcontroller on the PDU to recover from what appears to be an occasional hard fault. The outlets stay powered (thankfully), but the management aspect is so unreliable that it has become an ops liability - we can't be confident that we could get into the PDU to power-cycle a host if we needed to. We have 3 PDUs that all exhibit identical behavior.

    There are many manufacturers of enterprise-grade 0U switched PDUs, all with comparable features. If I looked at the datasheet for our current PDUs, they would appear to be a good fit - only with the benefit of suffering through using them do we know to avoid them. I'd like to avoid picking a PDU that looks fine on paper but has similar reliability issues. What has been others' experience with switched PDUs? Is this level of flakiness normal?

  • VMware Workstation Linux host performance tuning

    - by Hoghweed
    I need to improve my Linux-hosted VMware Workstation setup for running multiple virtual machines at the same time. I feel very stupid: I lost a great blog post link which I found last month (and I'm not able to find it again...), so I'm asking here in case anyone can help me. This is my host (a laptop):

    - 16 GB DDR3 RAM
    - 750 GB 7200 RPM hybrid HDD (8 GB SSD cache)
    - Mint 15 x64, kernel 3.9.7
    - swappiness set to 10

    The above are the important things about the host. My need is the ability to run 2 or 3 VMs at the same time. The lack of performance is about the disk. Last time, following that blog post I lost, I set up /tmp to be mounted as a memory partition, and in my previous installation that worked well; now I'm not able to find a good way to tweak things. I think with 16 GB of RAM there will be no problem running multiple VMs, but when they start to swap or use /tmp, things go bad (the guest cursor moving too fast after a freeze, guest freezes, and so on). Can anyone help me find a good host tweak and configuration to get better performance? Thanks in advance.
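
    The /tmp-in-RAM trick the lost post most likely described is a single fstab line; a sketch, where the size cap is my own suggested value, not something from the post:

        # /etc/fstab -- mount /tmp as a RAM-backed tmpfs (size cap is illustrative)
        tmpfs  /tmp  tmpfs  defaults,noatime,mode=1777,size=4G  0  0

        # apply without rebooting
        sudo mount /tmp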

  • Will these instructions work when turning off journaling on an ext4 SSD?

    - by snowlord
    I have an Acer Aspire One with an SSD for storage. I recently installed Ubuntu on it and chose ext4 for my filesystem. Then I read that journaling on an SSD isn't the best idea, so I will try to disable journaling. I have found these instructions (from http://fenidik.blogspot.com/2010/03/ext4-disable-journal.html):

        # Create ext4 fs on /dev/sda10 disk
        mkfs.ext4 /dev/sda10

        # Enable writeback mode. This mode will typically provide the best ext4 performance.
        tune2fs -o journal_data_writeback /dev/sda10

        # Delete has_journal option
        tune2fs -O ^has_journal /dev/sda10

        # Required fsck
        e2fsck -f /dev/sda10

        # Check fs options
        dumpe2fs /dev/sda10 | more

    For more performance, add the fstab options data=writeback,noatime,nodiratime, i.e.:

        /dev/sda10 /opt ext4 defaults,data=writeback,noatime,nodiratime 0 0

    I will use them on my boot partition. Are there any particularly bad parts here, or are there any missing steps? Will my boot partition be fit for being on an SSD after this? Or should I consider switching to ext2, or even reinstalling it all and choosing ext2 at partitioning time (I'd rather not, though, since I've configured quite some stuff already)?

  • Suggestions on the best home server rack cabinet

    - by allentown
    I have a lot of gear in a colocation facility right now. Some of it is going to come home with me now. I do not know anything about the "rack mount" side of the industry; I lease a rack, and I put my stuff in it. I have a few 1U boxes, a few 2U boxes, a few 4U boxes, and a 1U switch. One box is a new Xserve, which means it is deep.

    I think I can get by with around 12U to 18U. I want to keep it as small as possible, since I do not have a lot of spare space at my home. I will not be able to bolt it to the wall, floor, etc., so it should not be tall. This is something I would love to more or less just be a box that sits on the floor but gives me the ability to mount nicely, do nice cable management, etc.

    Are the "post" style racks junk? I like the open space and the lack of limitations on depth of something like this: http://www.rackmountsolutions.net/images/products/Martin-relay-rack.jpg However, that thing is way too tall, and probably way too expensive. I am looking to spend around $300.00 or less - more if I have to, though I would prefer not to. These look near perfect: (see comment for this link; the system will not let me post a second URL) but I am worried the Xserve will not fit in it.

    If anyone has any good links, or website recommendations from good past experience, I would appreciate it. I am almost considering that I may be able to build something with random scraps of stuff from Home Depot as well.

  • Poor SSL performance with vsftpd

    - by petrus
    I'm trying to tweak vsftpd to achieve maximum performance for my usage: I have only one or two clients that connect to the server. File size is between ~15 MB and 1 GB. A typical transfer batch represents between 1 and 2 GB of data.

    For testing purposes, I'm using a tmpfs on both sides (thus eliminating any disk bottleneck) with a single 1 GB file. When SSL is disabled, performance is good, with a transfer rate at ~120 MB/s (reaching the limits of gigabit networking). With SSL enabled only for control traffic (and not data traffic), performance drops to about 112 MB/s, which is still within acceptable limits. However, when SSL is enabled for data flows, the transfer speed drops dramatically:

    - 6.7 MB/s using 3DES & SHA (ssl_ciphers=DES-CBC3-SHA in vsftpd.conf)
    - 16 MB/s using DES & SHA (ssl_ciphers=DES-CBC-SHA)

    I didn't test other ciphers, but from what I can see from the CPU usage during the transfer, it seems that vsftpd is only using a single CPU/core per client. While this may be fine for large FTP sites with hundreds of clients, I'd like to avoid this behavior and use more resources on the server. On a side note, if you have any ideas regarding other OpenSSL ciphers...
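
    Since the numbers point at single-core 3DES/DES throughput as the bottleneck, an AES-based cipher is the obvious next thing to try; a sketch (the cipher choice is my suggestion, not something from the question):

        # /etc/vsftpd.conf -- use a much faster symmetric cipher for the data channel
        ssl_ciphers=AES128-SHA

        # compare raw cipher throughput on this box
        openssl speed aes-128-cbc des-ede3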

  • The Server Fault Wiki of recommended practices [migrated]

    - by Avery Payne
    So I've noticed that there are several recommendations on basic practices on Server Fault, but there doesn't seem to be a cohesive view as to how those recommendations would all fit together. So I thought I would lump these together as a kind of mental exercise to see what the "ServerFault Community IT Department" would look like if it were implemented. This would give a few things: it would make a reasonable wiki (in the true wiki spirit of many contributions), it would provide several links to well-vetted practices, and it would be kind of fun to see what the amalgamation would look like. And who knows, it may even point out some interesting issues between different forms of "best practices", although I would be stunned if there was a conflict hidden in there someplace...

    Add your favorites from Server Fault as answers, and I'll re-edit this section with the results. Here are a few categories to collect different ideas together:

    - Hardware Configuration(s)
      - Server room configuration
      - Server room temperature
      - Firmware Updates and Scheduling
    - Storage Configuration(s)
      - Selecting a NAS box
      - Linux: Dealing with /tmp
      - Linux: Install apps in /var or /opt?
    - Network Configuration(s)
      - Checking DNS health and compliance
    - Security Practice(s)
      - Password (General) Best Practices
      - Password sharing methods
      - Windows Update
      - Updating Windows Servers that are hosts for VMs
    - Network Service(s)
    - User Service(s)
      - User Naming & Deletion
    - Upgrade Process(es)
    - Disaster Recovery
      - Checking Backups
      - Documenting an outage for a post-mortem review

    Last Edit: 2010-02-17

  • Windows 7 taskbar groups to use vertical menus only, not thumbnails?

    - by Justin Grant
    I frequently have 10 or more windows of the same application (e.g. Outlook, Word, IE, etc.) open at one time. Windows 7's new taskbar grouping feature shows a preview of open windows (aka thumbnails) when you single-click on the taskbar group for that application. But when I have over 10 windows open (more thumbnails than will fit horizontally on my screen), Windows reverts to a vertical menu. This is disconcerting, since I need to train myself to deal with two separate behaviors and can never anticipate what I'm going to see when I click on a taskbar group. Furthermore, I find the thumbnails more difficult to visually scan through (vs. the vertical menu) because only 1-2 words of each window's title are shown; typically that's not enough text to disambiguate and help me find the right window.

    I'd like to force Windows 7's taskbar grouping to always show a vertical menu (like in XP) instead of sometimes showing thumbnail previews and sometimes showing a vertical menu. Anyone know how (or whether) this can be done?

    UPDATE: BTW, I'm running the RTM version (build 7600) of Windows 7. There are apparently other solutions out there which work on earlier builds, but which don't work on the RTM build.

  • What to look for in a reliable backup hard disk?

    - by Senthil
    I want to buy an internal hard disk and use a docking station along with it for backing up important data. The size will be around 500 GB to 1 TB. I have a budget, and several models fit into it. So far, they only seem to vary in size, speed and brand; these are the only things I can compare from the specs. I guess asking which brand is best is completely subjective, so I won't do that. I want my disk to have a long life and be reliable. It doesn't matter if it is somewhat slow.

    - Size: Should I go for the one with the highest size within my budget? Will higher density cause problems? Or should I go for a moderately sized one? Does the number of platters have an impact?
    - Speed: I do not want high performance; I want it to be reliable and last long. I am definitely not going to choose the expensive 10,000 RPM ones. Should I go for 5400 or 7200? Do these numbers affect longevity and reliability?

    Are there any other technical and objective factors that I should look for?

  • Windows 7 x64 RTM USB Port Has Power But Won't Recognize Mouse/Keyboard/Anything

    - by ben
    I have an odd error that doesn't seem to fit in with any of the other odd Windows 7 x64 USB errors that have been kicked up on Google. Here we go:

    - I uninstalled TortoiseSVN and clicked "restart computer". My machine had been up for around 28 days.
    - On reboot, my mouse and keyboard failed to work anymore; I couldn't log in.
    - I tried every USB port I have on my Dell 390 and the ports on my Dell 19-inch monitors; nothing worked. They had power, but Windows would not respond when I manipulated the keyboard/mouse.
    - I rebooted my computer and pressed F2 to get into the BIOS; my keyboard works fine in the BIOS.
    - The keyboard and mouse work fine on other computers when using USB.
    - I found adapters to convert the keyboard and mouse from USB to PS/2, and that works fine. I'm actually typing this question on the same keyboard, same computer, just using PS/2 ports for my mouse and keyboard.

    It appears to be a Windows 7 x64 issue. Other things I have tried:

    - Multiple other mice and keyboards, and an iPhone, all with no luck. Each one gets power, but Windows never tries to install drivers or sees that they are connected.
    - Uninstalled and reinstalled all USB drivers. The drivers uninstall and reinstall fine and report no errors in Control Panel.
    - In Power Management, I disallowed Windows from turning off USB ports to save power.
    - Installed the latest nVidia drivers for my graphics card; no change.

    Anyplace else I can look/try? Thanks!

  • ffmpeg: converting an avi to a reduced, shareable flv/mp4...

    - by meder
    I recently followed a guide and recompiled my ffmpeg so x264 is enabled. I used some generic settings to convert my 700 MB avi file to an mp4 file; the result was a 407 MB mp4 file. The original avi file's settings:

        Codec: DX50
        Resolution: 704x304
        Frame rate: 23.976023
        Stream 1 - Codec: mpga, Type: Audio, Channels: 2, Sample rate: 48000 Hz, Bitrate: 179 kb/s

    Command I used:

        ffmpeg -i input.avi -acodec libfaac -ab 128k -ac 2 -vcodec libx264 -vpre hq -crf 22 -threads 0 output.mp4

    The settings of the output file (output.mp4):

        Codec: avc1
        Resolution: 704x304
        Display resolution: 704x304
        Frame rate: 11.988011
        Stream 1 - Codec: mp4a, Type: Audio, Channels: 2, Sample rate: 48000 Hz, Bits per sample: 16, Bitrate: 1536 kb/s

    The quality of the output mp4 is pretty nice; it seems pretty much the same as the original source. However, I'm trying to reduce the file size, and I'm not really sure whether I should go with an flv format or keep it mp4. The advantage the flv would have, obviously, is that it would be playable with a Flash player (I have come across some swf players which take a flash parameter to play an flv file)... but maybe I could use the video element, as I'm only going to be displaying this video privately, so I don't have to worry about supporting legacy browsers such as IE.

    Can someone recommend some settings to specify in order for the file size to be around ~100-150 MB or so? I don't mind a reduction in quality, nor do I mind resizing it - I was going to do it initially, but I wasn't sure what the guidelines were (if any) for dealing with resolution... since this video is 704x304, would it still be OK if I forced it into a resolution that isn't a perfect fit for the aspect ratio? I have no clue about that part. I realize that I could have probably specified 28 instead of 22 for the CRF; I'm not sure if I should do that as opposed to specifying a smaller resolution, which might make it smaller as well?
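
    For what it's worth, combining both levers - a higher CRF and a half-size frame - looks roughly like this; the numbers are starting points to tune, and 352x152 is exactly half of 704x304, so the aspect ratio is preserved and both dimensions stay even (which x264 requires):

        # same pipeline as above, but with crf 28 and a half-size frame
        ffmpeg -i input.avi -acodec libfaac -ab 96k -ac 2 \
               -vcodec libx264 -vpre hq -crf 28 -s 352x152 -threads 0 output.mp4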

  • Looking for a suitable backup solution from Mac OS X to an offsite CentOS 6 server for 1 TB of working data

    - by Brady
    I'll start by saying what we have in place currently:

    - An on-site file server (Mac OS X Server) that is used by GFX designers; they have a working set of 1 TB of data.
    - An offsite server with 2 TB of available storage (CentOS 6).
    - The Mac OS X server rsyncs data to the offsite server every 6 hours (rsync -avz --delete --progress -e ssh ...).
    - The Mac OS X server does a full data backup to LTO-4 tape on a 10-day recycle (Mon-Fri for 2 weeks).
    - rsync pushes about 60 GB of file changes a day.

    The problem:

    - The onsite tape backup is failing, as 1 TB of graphics files doesn't compress well enough to fit onto an 800 GB LTO-4 tape.
    - Backup is incredibly slow when doing a full backup.
    - It's a pain in the backside getting people to remember to change the tape; it often gets forgotten, etc.

    The quick solution: buy an LTO-5 drive and tapes. However, this has been turned down because of the cost...

    What I would like:

    - Something that works the same way rsync works: only changed data is sent over the wire, and it can be scheduled to run multiple times during the day.
    - Data that is sent is compressed and sent over SSH.
    - Something that keeps a 14-day retention but doesn't keep duplicate data. So, as an example, if I have 1 TB of working data and 60 GB of changes are made each day, then I expect around 1.84 TB of data to be stored on the offsite server.
    - To work with the Mac OS X server and the CentOS 6 server.
    - Not cost an arm and a leg; it must be cheaper than buying an LTO-5 drive with tapes (around £1500).
    - Be able to be set up to run autonomously.
    - Have some sort of control panel that will allow an admin to easily restore a file/folder.

    Any recommendations?
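
    Most of those boxes are ticked by rsync's --link-dest hard-link snapshots (rsnapshot packages the same idea, minus the control panel): every day looks like a full tree, but unchanged files are hard links, so 1 TB plus 14 days of ~60 GB deltas comes out near the 1.84 TB estimate. A sketch, run from the Mac, with the paths and hostname invented for illustration:

        #!/bin/sh
        # hypothetical paths/host - adjust to the real setup
        SRC=/Volumes/Work/
        DEST=user@offsite:/backups
        TODAY=$(date +%Y-%m-%d)

        # daily snapshot; unchanged files hard-link against the previous one
        rsync -az --delete -e ssh --link-dest=../latest "$SRC" "$DEST/$TODAY/"

        # repoint 'latest' and prune snapshots older than 14 days
        ssh user@offsite "cd /backups && ln -nsf $TODAY latest \
            && ls -d 20* | head -n -14 | xargs -r rm -rf"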

  • Eee PC Seashell series netbook screen is cut off at bottom no matter the resolution

    - by Yzmir Ramirez
    I have an Eee PC 1015PE Seashell netbook running Windows 7 Home Premium, with an Intel Graphics Media Accelerator 3150 (driver 8.14.10.2230) and a "Generic Non-PnP Monitor" detected. I tried:

    - Changing the resolution (Control Panel > Appearance and Personalization > Display > Screen Resolution) to 1024x768
    - Updating the video driver (to 8.14.10.2230)
    - Uninstalling the driver and rebooting
    - Pressing the Windows key + "-" (magnifier)
    - Pressing Ctrl + mouse scroll (this only resizes the desktop items)
    - Pressing Fn + F4: it shows 1024x600 (which I think is what I should be using), but nothing happens
    - EDIT: Changing from Landscape to Portrait - and that works
    - Attaching an external monitor: when I extend or set it as the desktop, it works, but only on the external monitor (it shows up as "Generic PnP Monitor" in Device Manager)

    Basically, the bottom inch of my desktop is off-screen, hiding my start bar, but my widgets are in their proper position (the start bar is not hidden). Pressing Ctrl + Esc shows the Start menu, but it's cut off. I'm pretty sure I should be using the 1024x600 resolution. Any advice? What's odd is that this only started happening recently.

    EDIT 2: Here are some screenshots showing the problem (images omitted): a window resized to fit; the Start menu open - notice it is cut off; a window maximized and then scrolled down - notice there is no Start menu.

    I downgraded to the graphics driver I downloaded from the Intel Download Center for the Graphics Media Accelerator 3150 (now: 8.14.10.1972), and now my "Generic Non-PnP Monitor" is detected as "Digital Flat Panel (1024x768 60Hz)".

  • Backing up server and multiple clients

    - by inquam
    I'm running an Amahi server. It's basically a Fedora 14 x64 installation. I'm looking for a good solution to back up my 200 GB system drive on the server to an external USB/eSATA drive every night. I looked into using dd, but since other things might be running on the server at the same time, it didn't feel quite safe. I would like the backups to be incremental, so the backups following the initial one would be quite fast. The backup should also be bootable, or perhaps be able to produce a bootable disk after booting from a CD or something.

    I would also like the server to be able to do similar backups of my clients running Ubuntu, Windows 7 x64, Windows 7 Starter, OS X Lion, Windows XP and so on - so no applications backing up only shared folders or something like that. My guess is a client daemon would have to exist that would lock the system to allow backup of a Windows system drive, which can otherwise be quite cranky. Booting up a CD in a crashed client, connecting to the server, restoring the latest backup and being up and running is my ideal goal. Is there anything out there that would fit these needs?
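
    On the dd-while-running worry: if the Fedora install uses LVM (the Fedora default), a snapshot gives a crash-consistent source to image while the server keeps running. A sketch, with hypothetical VG/LV names:

        # freeze a point-in-time view of the root LV (names are hypothetical)
        lvcreate --size 5G --snapshot --name rootsnap /dev/vg_amahi/lv_root

        # image the snapshot to the external drive, compressed
        dd if=/dev/vg_amahi/rootsnap bs=4M | gzip > /mnt/usb/root-$(date +%F).img.gz

        # drop the snapshot when done
        lvremove -f /dev/vg_amahi/rootsnap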

  • VSFTPD - FTP over TLS - Upload stops after exactly 82k?

    - by Redsandro
    I installed a vsftpd daemon on a CentOS server, using an RSA certificate for logging in over explicit TLS. Now, I cannot upload more than 82 KB. With files under that limit, there is no problem; the FTP works like a charm. But as soon as a file reaches 82 KB with FileZilla (81,952 bytes to be exact), the transfer will stop, and the FTP client hangs until the timeout is reached.

    FTP client console:

        15:10:21 Command: STOR jquery-1.7.2.min.js
        15:10:21 Response: 150 Ok to send data.
        15:11:21 Error: Connection timed out
        15:11:21 Error: File transfer failed after transferring 82 KB in 60 seconds

    /var/log/vsftpd.log:

        FTP command: Client "x.x.x.x", "STOR jquery-1.7.2.min.js"
        FTP response: Client "x.x.x.x", "150 Ok to send data."
        OK UPLOAD: Client "x.x.x.x", "jquery-1.7.2.min.js", 81952 bytes, 1.32Kbyte/sec
        FTP response: Client "x.x.x.x", "226 File receive OK."   <-- NOT okay: the file is bigger, and no error is mentioned here

    I cannot find relevant info about this problem, apart from a possible problem with trans_chunk_size (not mentioned in the default config), but I tried different sizes and it has no impact on the problem:

        trans_chunk_size=4096
        trans_chunk_size=8192
        trans_chunk_size=9999

    Of course, after every configuration change I restarted the server:

        /etc/init.d/vsftpd restart

    What else can cause this? It's not the latest version, but it's the latest update within the repositories that has been deemed fit for enterprise usage. Package info:

        $ yum info vsftpd
        Loaded plugins: fastestmirror
        Installed Packages
        Name       : vsftpd
        Arch       : x86_64
        Version    : 2.0.5
        Release    : 24.el5_8.1
        Size       : 286 k
        Repo       : installed
        Summary    : vsftpd - Very Secure Ftp Daemon
        URL        : http://vsftpd.beasts.org/
        License    : GPL
        Description: vsftpd is a Very Secure FTP daemon. It was written completely from scratch.

  • Tips and suggestions for IP address re-addressing?

    - by RSXAdmin
    Hello serverfault universe,

    My ever-evolving and expanding local area network is currently using class C addressing. My network consists of multiple subnets depending on site/location:

    - 192.168.1.x is the HQ site
    - 192.168.5.x is a secondary site
    - 192.168.10.x is... and so on and so forth

    Long story short: I inherited this network design from the previous admin, who has left the company, which started off with a dozen people and now has just over 300 full-time/part-time employees. We do not yet have client VPN access, but we do have site-to-site VPN set up.

    My question is: in preparation for outside client access to my network via a Cisco ASA, I would like to re-address the HQ site, because I understand 192.168.1.x or 192.168.0.x are not very good choices for a company subnet - they may conflict with a home user's LAN when connecting to my LAN, I believe? Through your experience, does anyone out there have any suggestions and tips on how I can proceed with re-addressing my subnets? If I had designed this network, I would have gone with 10.0.0.0 (mask 255.255.255.0), so I am leaning towards changing it to fit. Thank you.

  • Recover NTFS data from a ZFS pool that was exposed as an iSCSI target

    - by David
    This was me being stupid, and the data is by no means critical; this is a learning experience first, a time saver second.

    I set up a 100 GB iSCSI target via the bare-bones instructions in napp-it. It's a volume LU. I then had my Windows 7 machine connect to the iSCSI target, formatted it to NTFS, and tested its performance with some large iso file transfers. I then unmapped the drive, reconnected to the target, and was forced to format to NTFS again. It was then I realized the files I had transferred only existed on the iSCSI target. I threw a little fit and then went about my business. When I was cleaning up my experiment I noticed in this screen: http://imgur.com/1xlcu.jpg that my experimental target tank/iSCSI still has a lot of data in it.

    Assuming my isos are still in this pool, how would I go about recovering them? While writing this I tried GetDataBack for NTFS from www.runtime.org, and while it found two previous NTFS partitions, there was no data.
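
    Since the LU is a zvol, it is still a raw block device on the ZFS host, so NTFS recovery tools can be pointed at it (or at an image of it) directly. A sketch, assuming an OpenSolaris-family napp-it box where zvols appear under /dev/zvol/rdsk; the pool/volume name is from the screenshot, the output path is illustrative:

        # any snapshot taken before the second format would be the easy way back
        zfs list -t snapshot -r tank

        # otherwise, image the raw zvol and run recovery against the copy
        dd if=/dev/zvol/rdsk/tank/iSCSI of=/backup/iscsi-lu.img bs=1M

        # e.g. on another box: scan the image with TestDisk/PhotoRec
        testdisk /backup/iscsi-lu.img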

  • Dell R320 RAID 10 with CacheCade

    - by Geekman
    I'm looking for a higher-performance build for our 1RU Dell R320 servers, in terms of IOPS. Right now I'm fairly settled on a 4 x 600 GB 3.5" 15K RPM SAS RAID 1+0 array. This should give good performance, but if possible I want to also add an SSD cache into the mix, and I'm not sure there's enough room. According to the tech specs, there are only up to 4 total 3.5" drive bays available.

    Is there any way to fit at least a single SSD drive alongside the 4 x 3.5" drives? I was hoping there's a special spot to put the cache SSD drive (though from memory, I doubt there'd be room). Or am I right in thinking that the cache drives are simply drives plugged in "normally" just as any other drive, but nominated as CacheCade drives in the PERC controller?

    Are there any options for having the 4 x 600 GB RAID 10 array and the SSD cache drive too? Based on the tech specs (with up to 8 x 2.5" drives), maybe I need to use 2.5" SAS drives, leaving another 4 bays spare - plenty of room for the SSD cache drive. Has anyone achieved this using 3.5" drives somehow?

  • IIS serving pages extremely slowly

    - by mos
    TL;DR: IIS 7 on WS2008R2 serves pages really slowly; everyone assumes it's because it's IIS and we should have gone with an Apache solution on Linux. I have no idea where to start debugging the problem.

    I work in a nearly all-MS shop with a bunch of fellow programmers who think Linux is the One True Way. Management recently added a Windows machine with IIS to serve Target Process (a third-party agile system), but the site runs extremely slowly. Everyone, to a man, assumes it's because it's on IIS, and if only management would grow a brain and get some Linux servers in here, we could really start cleaning things up! ...Right. Everyone "knows" IIS isn't fit to serve .txt files.

    ...Well, as the only non-Microsoft-hater in the bunch, I am apparently the only one who thinks maybe the Linux guy who hated being told to set up the IIS server may have screwed things up. I'd like to go fix it, but I don't have any clue as to where to start, as I am not a sysadmin. Help?
