Search Results

Search found 3618 results on 145 pages for 'huge'.

Page 98/145

  • SSD cache to minimize HDD spin-up time?

    - by sirprize
    Short version first: I'm looking for Linux-compatible software that can transparently cache HDD writes on an SSD. However, I only want to spin up the HDD once or twice a day (to write the cached data to the HDD). The rest of the time the HDD should not be spinning, due to noise concerns.

    Now the longer version: I have built a completely silent computer running Xubuntu. It has an A10-6700T APU, a huge fanless cooler, a fanless PSU and an SSD. The problem is: it also has (and needs) a noisy HDD, and I want to forbid spinning it up during the night. All writes should be cached on the SSD; reads are not needed at night. Every day this computer automatically downloads about 5 GB of data, which is retained for about a year, giving a total needed disk capacity of slightly less than 2 TB. This data is currently stored on a 3 TB noisy hard disk drive which spins day and night. Sometimes I'll need to access data from several months ago, but most of the time I'll only need data from the last 14 days, which would fit on the SSD.

    Ideally, I'd like a transparent solution (all data on one filesystem) which caches all writes to the SSD, writing to the HDD only once a day. Reads would be served from the cache if they were still on the SSD; otherwise the HDD would have to spin up. I have tried bcache without much success (using cache_mode=writeback, writeback_running=0, writeback_delay=86400, sequential_cutoff=0, congested_write_threshold_us=0 - anything missing?) and I have read about ZFS ZIL/L2ARC, but I'm not sure I can achieve my goal with ZFS. Any pointers? If all else fails, I will simply use some scripts to automatically copy files over to the big drive while deleting the oldest files from the SSD.
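
    In case the script-based fallback ends up being the answer, a minimal sketch of what a daily cron job could look like (the paths, the 14-day window and the disk device are assumptions, not taken from the question):

        #!/bin/bash
        # Daily flush: spin the HDD up once, copy everything staged on the SSD to the
        # archive disk, then keep only the last 14 days on the SSD.
        set -eu

        rsync -a /ssd/incoming/ /hdd/archive/          # flush staged data to the HDD
        find /ssd/incoming -type f -mtime +14 -delete  # prune old files from the SSD
        hdparm -y /dev/sdX                             # ask the HDD to spin down again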

    Read the article

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    OK, so I've just been on a SQL Server course where we discussed the usage scenarios of multiple filegroups and files over local RAID and local disks, but we didn't touch SAN scenarios, so my question is as follows. I currently have a 250 GB database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file, and the log file is on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention, and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same disk-contention issue that you do with a small RAID setup (or at least we don't at the moment), and Standard Edition doesn't support partitioning. So in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act on each file separately. Which leads me to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you

    Read the article

  • PowerPoint save group as picture creates asymmetric edge, how to fix?

    - by Se Norm
    I created tons of figures for my thesis in PowerPoint, and now I realize that when I try to save a grouped item (= one figure) as a picture (EMF), it somehow asymmetrically adds a border on the left and the bottom. (Two attached screenshots compared the original group with the same group pasted as a picture.) Does anyone have an idea how to fix that for a huge number of figures? I think it only started happening when I used a page size of 1 m x 1 m in PowerPoint to be able to zoom in more for some figures. However, I cannot simply change the page size now, as it messes up font and object sizes. Also, copying a figure into a smaller page and then saving as EMF doesn't do the trick, so maybe it is not related to the page size after all. Cropping every figure individually would be a lot of work, so I hope there is a different solution. I found the origin of the problem: the text label in the bottom-left corner of each image (0s, 8s, 16s). I still do not understand why it is happening, though, since the text label does not extend over the edge of the image (it was aligned using the align-left function). It would still be great if there was an easy way to fix this, especially as I want to keep the text where it is.

    Read the article

  • dd oflag=direct 5x faster

    - by César
    I have CentOS 6.2 on a server with these specs:

        2x CPU, 16-core AMD Opteron 6282 SE
        64 GB RAM
        RAID controller H700, 1 GB NV cache
          - 2 HD 74 GB SAS 15Krpm, RAID1, stripe 16k (OS CentOS 6.2) -> sda
          - 4 HD 146 GB SAS 15Krpm, RAID10, stripe 16k (ext4 bs 4096, no barriers) -> sdb -> /vol01
        RAID controller H800, 1 GB NV cache
          - MD1200, 12 HD 300 GB SAS 15Krpm, RAID10, stripe 256k (for Postgres 8.3.18) (ext4 bs 4096, stride 64, stripe-width 384, no barriers) -> sdc -> /vol02

    I'm benchmarking I/O speed with dd, and I see that if I run the following on the 12-disk RAID10:

        dd if=/dev/zero of=DD bs=8M count=10000 oflag=direct
        10000+0 records in
        10000+0 records out
        83886080000 bytes (84 GB) copied, 126,03 s, 666 MB/s

    but if I remove the "oflag=direct" option I get about 80 MB/s. The read benchmark results are similar:

        dd of=/dev/null if=DD bs=8M count=10000 iflag=direct
        10000+0 records in
        10000+0 records out
        83886080000 bytes (84 GB) copied, 79,5918 s, 1,1 GB/s

    If I remove iflag=direct I get about 150 MB/s. I don't understand these huge differences; on other machines I don't see this behavior. Could I have some kernel parameter misconfigured? Thanks!
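
    One thing worth checking before blaming the kernel (a suggestion, not a diagnosis): without oflag=direct the run goes through the page cache, so the writeback settings and the elevator on sdc shape the result. Making the buffered run flush at the end gives a more comparable number:

        # buffered run that still waits for the data to reach the disks before timing ends
        dd if=/dev/zero of=DD bs=8M count=10000 conv=fdatasync

        # settings that influence buffered writeback and queueing on this volume
        sysctl vm.dirty_ratio vm.dirty_background_ratio vm.dirty_expire_centisecs
        cat /sys/block/sdc/queue/scheduler
        cat /sys/block/sdc/queue/nr_requests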

    Read the article

  • Firefox 29 - how do I delete history entries visited fewer than x times

    - by lousyuser
    Context: I've been using my Firefox profile for a couple of years now and my history file has naturally become huge. I have Firefox Sync set up between my main desktop PC and my laptop. HW configs: PC: i5-3450, 8 GB DDR3 RAM, Crucial M4 128 GB SSD; laptop: Pentium SU4100, 4 GB DDR3 RAM, WD 5400 rpm HDD. Accessing history entries when typing into the Awesome Bar on my desktop takes quite a long time despite the decent config, and the laptop is even slower; the experience is quite unresponsive. I figured that if I cleaned the history up a little I might avoid creating a new profile to speed things up. The question itself, to illustrate: is there a way to delete all history entries that have been visited fewer than x (let's say 5) times and whose most recent visit is fewer than y (let's say 120) days old? AFAIK the history file is some kind of SQL database, but I'm not really sure how the data is saved, whether there's a "safe way" to edit it, and what the query to do what I need would look like. Thanks in advance for any help. I kept browsing through previous SuperUser questions to see if I could find relevant information: "In my Firefox profile directory, there is a file named places.sqlite. Opening it with sqlite reveals (amongst others) the tables moz_places and moz_historyvisits. It seems that moz_historyvisits uses the primary key of moz_places to refer to the URLs." As I'm unfamiliar with databases, I don't really understand how the two tables mentioned in the quote are related. (A screenshot of part of the tables was attached.) I've noticed the visit_count is in a standard format, making it easy to work with. The last_visit_date looks encrypted to my naked eye, but I can't see in which way. Hope that helps; I'm at my wits' end.
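
    A sketch of what the query could look like with the sqlite3 command-line tool, assuming the usual places.sqlite schema (visit rows live in moz_historyvisits and reference moz_places.id; last_visit_date is microseconds since the Unix epoch, which is why it looks opaque). Close Firefox and work on a copy first; the "older than 120 days" reading of the age condition is an assumption - flip the comparison if the opposite was meant:

        cp places.sqlite places.sqlite.bak

        sqlite3 places.sqlite <<'SQL'
        -- delete per-visit rows for rarely used, stale entries, then the entries themselves;
        -- entries referenced by bookmarks (moz_bookmarks.fk) are left alone
        DELETE FROM moz_historyvisits WHERE place_id IN (
            SELECT id FROM moz_places
             WHERE visit_count < 5
               AND last_visit_date < strftime('%s','now','-120 days') * 1000000
               AND id NOT IN (SELECT fk FROM moz_bookmarks WHERE fk IS NOT NULL));
        DELETE FROM moz_places
         WHERE visit_count < 5
           AND last_visit_date < strftime('%s','now','-120 days') * 1000000
           AND id NOT IN (SELECT fk FROM moz_bookmarks WHERE fk IS NOT NULL);
        VACUUM;
        SQL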

    Read the article

  • Load Sharing Regarding Large Websites

    - by JHarley1
    Hello, I have a question regarding load sharing for large websites. My understanding: if you have a website that gets millions of hits a day, you will need an architecture that can support this sort of pressure. You can do one of two things: invest in a single large server that has huge amounts of processing power, memory and storage (such as Microsoft's TerraServer), or spread the load of your website across a number of machines. Let me tackle the second approach: you have a collection of machines, all running web-server software and all having access to identical copies of the website's pages. You can either spread the load across these machines using a cyclic (round-robin) pattern in DNS or use a load-balancing switch. The advantages of this approach are redundancy (servers can fail and the others "pick up the slack") and incremental growth (the ability to easily add new machines to the set-up). My questions:

    1. Is there a virtual approach to this issue of load balancing now?
    2. If the website runs from a database, is there still only a single copy of the database?
    3. If a user had a session running on one server (e.g. they had gone to www.example.org and been assigned to Server 2, where they had created a session) and they refreshed the website (and were allocated Server 3), would they still have their session?
    4. What are the other disadvantages associated with load balancing?

    Many Thanks, J
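
    For the DNS variant, a quick hedged check from any client shows what round-robin looks like in practice (the hostname is a placeholder); repeated queries return the same set of A records in rotating order:

        dig +short www.example.org A   # run it a few times and watch the record order rotate
        dig +short www.example.org A

    On the session question, the usual pattern is either a balancer that pins a client to one server ("sticky sessions") or servers that keep session state in a shared store, so a re-allocation to Server 3 does not lose it; which one applies depends entirely on the setup.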

    Read the article

  • Setting my NIC to full duplex

    - by David
    I am trying to optimize the network speed of my Solaris x86 server and have discovered that the Cisco 3548 it is connected to has issues with the NIC in my server. The NIC appears not to have been configured fully and is coming up at 100 half-duplex, while the 3548 ports are all set to 100 full. Ideally I'd like to have the server set to 100 full, and I have been attempting to configure it using ndd commands, but with no results. The following command shows:

        -bash-3.00# dladm show-dev rtls0
        link: unknown  speed: 100 Mbps  duplex: unknown

    The NIC shows up as:

        pci bus 0x0001 cardnum 0x06 function 0x00: vendor 0x10ec device 0x8139
        Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+

    which should be configurable. I have modified the configuration file from auto-config (5) to 100 fdx (4) to no avail. If there is no other choice, I could alter the Cisco 3548 to be 100 half-duplex, but that solution causes a huge performance loss: currently throughput is about 500 Kbps, when it should be around 40 Mbps.
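
    For reference, the ndd pattern that most Solaris NIC drivers use for forcing duplex looks like the sketch below - whether the rtls driver exposes these adv_* parameters at all is an assumption, so list them first:

        ndd /dev/rtls0 '?'                      # list the parameters this driver actually supports

        # only if the usual link parameters show up:
        ndd -set /dev/rtls0 adv_autoneg_cap 0   # stop advertising autonegotiation
        ndd -set /dev/rtls0 adv_100fdx_cap 1    # advertise 100 Mbit full duplex
        ndd -set /dev/rtls0 adv_100hdx_cap 0    # stop advertising half duplex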

    Read the article

  • Using our own certificate authority for business email encryption

    - by LumenAlbum
    I've read the available similar questions on Server Fault but I haven't quite found a definite answer to the security aspect of it - hence my question: I'm the administrator of an office working with tax data and we want to start using certificate-based email encryption with our clients. Considering the prices for certificates issued by VeriSign & Co, I was wondering if we couldn't issue the necessary certificates with a certificate authority of our own. I realize that this does not offer the trust hierarchy that commercial certificates do, but I don't see why we would need that. Most of our clients have small businesses and only 20% of them even exchange data with us via email. So if we were to issue certificates for those 20% and our employees, that would enable us to use encrypted email. Of course they would have to trust our certificate authority and thus receive our public root certificate once. But if we were to hand it out to them (or install it) personally, they'd know that it really is our certificate. Is there a huge security risk that I am missing here? As long as nobody has access to our certificate authority server, nobody should be able to interfere with security, right? And the client certificates would be generated and handed out by us as well... Please advise me if I am making an error in judgement here, and thank you in advance.
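
    A rough openssl sketch of what the in-house CA workflow could look like (names, lifetimes and subjects are placeholders, and key handling/passphrases are left out for brevity):

        # one-off: the office root CA, kept offline; ca.crt is what clients install and trust
        openssl req -x509 -newkey rsa:4096 -days 3650 -keyout ca.key -out ca.crt \
            -subj "/O=Example Tax Office/CN=Example Office Root CA"

        # per mailbox: key + certificate signing request
        openssl req -newkey rsa:2048 -keyout client.key -out client.csr \
            -subj "/CN=Jane Doe/emailAddress=jane@example.com"

        # sign the request with the office CA
        openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
            -days 730 -out client.crt

        # bundle key, certificate and CA cert for import into the mail client (S/MIME)
        openssl pkcs12 -export -inkey client.key -in client.crt -certfile ca.crt -out client.p12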

    Read the article

  • SQL Server 2008 Web VS SQL Server 2008 Enterprise

    - by Jeremy
    I wrote an application a few months ago and was hosting it out of our offices on a workstation with an Intel Core 2 Quad Q8200 @ 2.33 GHz, 8 GB RAM, Windows Server 2008 Enterprise and SQL Server 2008 Enterprise. Both the web server and database server ran on the same machine. We had a huge influx of traffic, moved to ClubUptime.com and got 2 of their top-tier Windows VMs. The database server now runs Windows 2008 R2 Standard and SQL Server 2008 R2 Web with 8 GB RAM and an Intel Xeon E5620 @ 2.40 GHz. Ever since switching, the database, which used to take around 400 MB of RAM, now takes around 4-7 GB, and there haven't been any changes to it (other than a couple of columns here and there). Our traffic has quadrupled, and our DB is 6 GB on disk. Why would SQL Server take up 7 GB if the DB is only 6, and why would it be storing the ENTIRE database in memory? Another thing: with traffic growing 4 times, why did the database's memory footprint grow 12 times? Last question: why does the CPU peg at 100% now where it didn't before? The design is simple, VERY few joins, NO subqueries. I am just at a loss, unless it is the SQL Server edition, or the fact that I moved from real hardware to a VM.

    Read the article

  • Sluggish Windows SBS 2003

    - by TomWilsonFL
    One of my customers has a Windows 2003 Small Business Server which at this point is basically the DC, DNS, Fileserver and Symantec Protection Manager. I have disabled Exchange because I moved their mail to Google Apps. The server is extremely sluggish when doing anything. It is most noticeable when a dialog box is open (say the System properties), and you try to change tabs. This is usually instant, but on this machine can take 3-5 seconds. What additional services / packages can I uninstall from this machine knowing that it is only performing the above roles? Will removing the "Small Business Server" package in Add / Remove Programs get rid of a few unnecessary things? Any other thoughts? P.S. I know Symantec Endpoint and the Protection Manager are hogs, but I have nothing to replace the solution with at the moment. Thanks, Tom UPDATE: I looked over the different performance metrics, but nothing stood out as a problem. One of my friends mentioned Symantec's log and temp files can get quite huge and slow things down, so I ran CCleaner on the machine and found close to 3 GB of Symantec "stuff." Removed that and now the machine is MUCH better. I am still unsure why the data just sitting there would cause such a slowdown. The drive is not even near full. The only thing I can imagine is that Symantec must have to run through this stuff now and then.

    Read the article

  • nginx server over https using up all available file handles

    - by mmr
    Hi all, I have an nginx server that's working over HTTPS with Sinatra. When I try to download a jnlp file in a configuration that works fine over Mongrel and http (no s), the nginx server fails to serve the file with a 504 error. Checking the logs shows that this error is due to overflowing the available number of file handles, i.e. "24: too many open files". Running sudo lsof -p <nginx worker pid> gets me a huge list of files, all looking like:

        nginx 1771 nobody 11u IPv4 10867997 0t0 TCP localhost:44704->localhost:https (ESTABLISHED)
        nginx 1771 nobody 12u IPv4 10868113 0t0 TCP localhost:https->localhost:44704 (ESTABLISHED)
        nginx 1771 nobody 13u IPv4 10868114 0t0 TCP localhost:44705->localhost:https (ESTABLISHED)
        nginx 1771 nobody 14u IPv4 10868191 0t0 TCP localhost:https->localhost:44705 (ESTABLISHED)
        nginx 1771 nobody 15u IPv4 10868192 0t0 TCP localhost:44706->localhost:https (ESTABLISHED)
        nginx 1771 nobody 16u IPv4 10868255 0t0 TCP localhost:https->localhost:44706 (ESTABLISHED)
        nginx 1771 nobody 17u IPv4 10868256 0t0 TCP localhost:44707->localhost:https (ESTABLISHED)
        nginx 1771 nobody 18u IPv4 10868330 0t0 TCP localhost:https->localhost:44707 (ESTABLISHED)
        nginx 1771 nobody 19u IPv4 10868331 0t0 TCP localhost:44708->localhost:https (ESTABLISHED)
        nginx 1771 nobody 20u IPv4 10868434 0t0 TCP localhost:https->localhost:44708 (ESTABLISHED)

    Increasing the number of files that can be opened is no help, because nginx just blows right past that limit too. And no wonder - it looks like it's in some kind of loop pulling in all available file handles. Any idea what's going on, and how to fix it?

    Read the article

  • Restore a single user's Exchange 2003 mailbox from backup

    - by Campo
    I take weekly full backups of Exchange, as well as complete weekly backups of the entire server. It is a Server 2003 R2 box with AD and Exchange 2003 all on one machine. One user's inbox has disappeared; she now has 19,000+ junk items, and it is possible the inbox got mixed into the junk. Regardless, it is such a huge mess that she is not going to go through all of that, so I want to restore her mailbox from the backup. I followed this MS KB: http://support.microsoft.com/kb/823176 - I had to use Method 3. I have a VM of Server 2003 R2 with Exchange, but the restore from NTBackup keeps failing. The backup log just says to check the application log, and the application log points back to the backup log; the only information is that the restore failed. The only thing that is different is the computer name. The only error I can find is in the Application log: "Information Store Database not found". All the others just say that the backup failed. Any assistance is greatly appreciated.

    Read the article

  • Mac Backup Plan

    - by Chuy77
    I'm reviewing my backup plan and would appreciate any thoughts about what more I should do (if anything) to make sure I'm properly covered in case of all hell breaking loose. :-) I have one machine. 1) I run a nightly clone with SuperDuper. I alternate the clone drive weekly so I have two clones, one never more than a week old. 2) I use BackBlaze as a sort of Time Machine in the cloud. It runs all the time and keeps everything on my machine backed up online. 3) I sync all my 1Password logins, etc. to my iPhone once a week. ...And that's it. I feel pretty covered. But I'm always reading stuff like this: http://www.43folders.com/2010/03/15/yes-another-backup-lecture And that doesn't even mention online backup, and seems like a huge pain in the behind. But maybe I'm being naive? Should I have more backups? Thanks for any feedback. I really appreciate it.

    Read the article

  • Simultaneous process mysteriously ending

    - by Matt
    I'm trying to run a large air-quality model, written in FORTRAN, set up with bash scripts, and run in a work queue (Slurm). The first part of the modeling is to run an "entry" model; this runs with MPI in the work queue, but only on one process. At one point in the logs there's a mysterious FORTRAN STOP, and later the model fails because something wasn't set up properly. This FORTRAN STOP isn't from the main process, which continues running. This is a huge model, but as far as I know there should not be any other processes running at the same time. It consistently fails at the exact same spot (I can move it by adding debug output, but the debug is in the main process). How can I determine what this process is? I've tried adding a call to strace -feprocess $SHELL in the run script, but I'm new to this, so if it has offered any info I haven't been able to use it yet. There is no trace output around the FORTRAN STOP. The whole thing happens so fast that I can't seem to observe it using ps. Is there a way I can monitor all the processes being initiated from the time the work queue starts? Or some other way I can figure out what is failing? This is running on CentOS 6.4, with Slurm, compiled with PGI 13.
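
    One hedged way to catch the mystery process: wrap the entry-model launch (the script name below is a placeholder) in strace with fork-following and per-PID output files, then see which trace file ends around the FORTRAN STOP and what it had exec'd:

        # -ff writes one file per process (prefix.PID), -tt adds timestamps for correlation
        strace -ff -tt -o /tmp/entry_trace ./run_entry_model.sh

        # after the failure, see which traced processes started new programs
        grep -H execve /tmp/entry_trace.* | less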

    Read the article

  • Offline copies in Windows file sharing

    - by netvope
    I frequently access media files (music or video) on a remote Windows file share. My Internet connection is not very fast, and I find it a waste of bandwidth when I repeatedly access the same files; for example, I may listen to the same song 30 times in a month. So I would like to cache the files I've used. I know Windows has an "Always available offline" feature, but I think it doesn't suit my needs: I don't want to make the whole share "available offline", as the remote Windows file share is huge (in terabytes), and making individual files "available offline" is tedious, as the files are scattered across many different directories. It would be much more convenient if I could simply cache those I've used. Also: the files on the share seldom change, many of the files are rarely used, some of the files are frequently used, and I don't have a list of the most frequently used files. So I think the best way is to have a caching proxy for the Windows share. What do you think? I have a Linux box sitting around - perhaps I should try to set up Samba 4?

    Read the article

  • Offline cache copies in Windows file sharing

    - by netvope
    I frequently access media files (music or video) on a remote Windows file share. My Internet connection is not very fast, and I find it a waste of bandwidth when I repeatedly access the same files; for example, I may listen to the same song 30 times in a month. So I would like to cache the files I've used. I know Windows has an "Always available offline" feature, but I don't think it suits my needs: I don't want to make the whole share "available offline", as the remote Windows file share is huge (in terabytes), and making individual files "available offline" is tedious, as the files are scattered across many different directories. It would be much more convenient if the system could simply cache those I've used. I could also manually make a local copy each time I use a file, but this is even more troublesome than making each file "available offline". Also: the files on the share seldom change, many of the files are rarely used, some of the files are frequently used, and I don't have a list of the most frequently used files. It would be best if I could tell Windows to cache the last accessed 10 GB, but apparently it doesn't have this feature. So I think the best way is to have an SMB/CIFS caching proxy. What do you think? I have a Linux box sitting around - perhaps I should try to set up Samba 4?
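
    Until a real caching proxy is in place, a rough sketch of the "last 14 days" idea on the Linux box - this is a periodic mirror, not a transparent cache, and it assumes the share is mounted at /mnt/winshare with usable access times (both are assumptions):

        mkdir -p /srv/mediacache
        cd /mnt/winshare
        # copy only files read in the last 14 days into the local cache directory
        find . -type f -atime -14 -print0 |
            rsync -a --files-from=- --from0 . /srv/mediacache/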

    Read the article

  • How to manage process-to-CPU-core affinities?

    - by Philippe
    I use a distributed user-space filesystem (GlusterFS) and I would like to be sure the GlusterFS processes always have the computing power they need. Each execution node of my grid has 2 CPUs, with 4 cores per CPU and 2 threads per core (16 "processors" are seen by Linux). My goal is to guarantee that the GlusterFS processes have enough processing power to be reliable, responsive and fast. (There is no marketing here, just the dreams of a sysadmin ;-) I am considering two main points: the GlusterFS processes themselves, and the I/O for data access (on local disks, or remote disks). I have thought about binding the Linux kernel and the GlusterFS instances to specific "processors". I would like to be sure that no grid job will impact the kernel and the GlusterFS instances, and that researchers' jobs won't be affected by system processes (I'd like to reserve a pool of cores for job execution and be sure that no system process will use these CPUs). But what about I/O? As we handle a huge amount of data (several terabytes), we'll have a lot of interrupts. How can I distribute these operations across my processors? What are the "best practices"? Thanks for your comments!
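
    A sketch of the pinning side with taskset and IRQ affinity - the CPU numbers are assumptions, so check with lscpu which logical CPUs share a core before choosing them:

        lscpu --extended                         # see which logical CPUs map to which core/socket

        # pin the running Gluster daemons to, e.g., logical CPUs 0 and 8 (one core, two threads)
        for pid in $(pgrep -d ' ' gluster); do
            taskset -cp 0,8 "$pid"
        done

        # launch research jobs with an explicit mask that avoids the reserved CPUs
        taskset -c 1-7,9-15 ./research_job

        # steer disk/NIC interrupt handling onto the reserved CPUs too (IRQ number is a placeholder)
        echo 0,8 > /proc/irq/IRQ_NUMBER/smp_affinity_list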

    Read the article

  • Parsing text files

    - by d03boy
    I encountered a situation tonight where I wanted to parse a text file. I had a very, very long word list that contained English words delimited by lines, and I wanted to get rid of every word (or line) that was longer than 7 characters. This would be simple in Linux, but I can't seem to find a simple solution in Windows XP. I tried using Notepad++ regular expression search, but that was a huge failure: I tried the expression .{6,} without finding any matches. I'm really at a loss, because I thought this sort of thing would be extremely easy and there would be tons of tools to accomplish a task like this. It seems like Notepad++ supports every other feature in the world except the very basic ones that seem the most obvious. Another one of my goals was to put some code before and after the word on each line, so that

        aardvark
        apple
        azolio

    would turn into

        INSERT INTO Words (word) VALUES ('aardvark');
        INSERT INTO Words (word) VALUES ('apple');
        INSERT INTO Words (word) VALUES ('azolio');

    What suggestions/tools/tips do you have to accomplish tasks like this in Windows XP?
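
    For comparison, the "simple in Linux" version (usable on Windows too via Cygwin or GnuWin32) would be a pair of one-liners like this - the file names are placeholders, and words containing apostrophes would still need escaping for the SQL:

        # keep only words of 7 characters or fewer, then wrap each one in an INSERT statement
        awk 'length($0) <= 7' words.txt > short_words.txt
        sed "s/.*/INSERT INTO Words (word) VALUES ('&');/" short_words.txt > words.sql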

    Read the article

  • Windows 7 shows a drive as full in summary, but files (including backup folder) shown on the drive are very small

    - by Rob
    I have a drive partitioned so it is seen by Windows as two drives: C:\ and D:\. Windows 7 shows D:\ as full in the graphical summary in 'My Computer': the bar graph indicates that nearly all of the drive's capacity, 108 GB, is used. So I go into the D:\ drive to look at the files and see several folders. I select them all and use the right-click Properties menu to count their size, expecting the value to be about the same as what Windows reports in the summary, i.e. nearly 108 GB. But the Properties dialog shows the files are very small - KBs and MBs, nowhere near 108 GB. One of the folders is a backup, but its size is very small. I've checked the folder options to show all system files and hidden files too, and counted these in the Properties. Something invisible is holding the space. What is happening here? I'm afraid to delete anything in case it removes valuable backups. Have I got huge backups here? Why can't I see them? How do I see them?

    Read the article

  • What does a connection timeout indicate when performing an NFS mount?

    - by DeeDee
    We have a shiny new QNAP NAS (TS-879U-RP), and I'm trying to mount it on our big ol' RHEL server in the same manner as our other two QNAP NAS devices. The IT department won't give me root privileges on the NAS, so I can't SSH in (I know, I know). The first thing I did was to create, via the QNAP web admin interface, a network share named "Runs", and I then added the IP of the RHEL server to the share's permissions list. On the RHEL server, I then added the following line to /etc/fstab:

        [IP of NAS]:/Runs /mnt/gsrnas3 nfs defaults 0 0

    Aside from the IP and the specific mount directory name, this is how I mounted the other two NAS devices. I then created the gsrnas3 directory under /mnt/ and ran 'mount /mnt/gsrnas3'. I got the following error:

        mount.nfs: Connection timed out

    My first thought is that it's a ports issue, but I don't have enough specific experience with this issue to know for sure. I have two other NAS devices by the same manufacturer already mounted to this RHEL server, so that leads me to believe the configuration issue is on the NAS side of things. I can ping the NAS device successfully from the RHEL server. Not being able to SSH into said NAS is a huge hassle, though. Any ideas?
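
    A few hedged checks that usually narrow down an NFS mount timeout from the client side (run on the RHEL server; the placeholders match the fstab line above):

        showmount -e [IP of NAS]      # is /Runs actually exported, and to this client?
        rpcinfo -p [IP of NAS]        # are portmapper/mountd/nfs reachable, or is a port blocked?

        # try the mount by hand, verbosely, forcing NFSv3 over TCP to rule out version trouble
        mount -v -t nfs -o vers=3,tcp "[IP of NAS]:/Runs" /mnt/gsrnas3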

    Read the article

  • How many sites can IIS 6 handle?

    - by Sarah Nasir
    Is there a limit to the number of sites you can create in IIS? I have searched, and some forums discussing this say there is no limit. Someone mentioned that he has created up to 100,000 sites in IIS 6, but I don't know his server specs. Personally, I feel that whatever the limit of IIS is, the resources will run out well before the limit is reached. How do big sites like Blogger and WordPress handle a huge number of sites on their servers? Questions:

    1) Is there an upper limit for IIS 6.0? If yes, what is it?
    2) What would be a good number of requests for IIS to serve on a decent server? (I am not talking about dynamic requests on the server or logs.)
    3) Is there a way I can do a test run on my cloud to test the capability of my server? What factors should I keep in view - DB requests, page size, disk reads/writes, etc.?

    Responses will be highly appreciated.

    Read the article

  • Free tiered storage automation in Linux?

    - by NginUS
    I have a couple of virtualized file servers running in QEMU/KVM on Proxmox VE. The physical host has 4 storage tiers with significant performance variances, attached both locally and via NFS. These will be provided to the file server(s) as local disks, abstracted into pools, and will handle multiple streams of data for the network. My aim is for this abstraction layer to intelligently pool the tiers. There's a similar post on the site here: "Home-brew automatic tiered storage solutions with Linux? (Memory - SSD - HDD - remote storage)", in which the accepted answer was a suggestion to abandon a Linux solution for NexentaStor. I like the idea of running NexentaStor; it almost fits the bill. NexentaStor provides hybrid storage pools, and I love the idea of checksumming. 16 TB without incurring licensing fees is a huge plus as well - after the expense of the hardware, free is about all my budget can handle. I don't know whether ZFS pools are adaptive or dynamically allocated based on load, but it becomes irrelevant since NexentaStor doesn't support virtio network or block drivers, which is a must in my environment. Then I saw a commercial solution called SmartMove: http://www.enigmadata.com/smartmove.html, and it looks like a step in the right direction, but I'm so broke I'd be wasting their time even asking for a quote, so I'm looking for another option. I'm after a Linux implementation that supports virtio drivers, and I'm at a loss as to which software is up to it.
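
    For what it's worth, the ZFS-on-Linux equivalent of a NexentaStor hybrid pool is only a few commands - the device names below are placeholders, and note that this gives caching (L2ARC/SLOG) rather than true load-adaptive tiering:

        # data lives on mirrored spinning disks
        zpool create tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde

        # SSD partitions: one as read cache (L2ARC), a mirrored pair as intent log (SLOG)
        zpool add tank cache /dev/nvme0n1p1
        zpool add tank log mirror /dev/nvme0n1p2 /dev/nvme1n1p2

        zpool status tank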

    Read the article

  • WEIRD netstat behavior on Windows XP re: www.partypoker.com

    - by tbone
    I really don't know if this is the right place to ask this, but I would really appreciate it if someone more savvy with Windows XP (Professional) could help me out. For background, I have been a programmer for 10+ years, so I'm not a total idiot, but I am far from an expert on TCP/IP etc., and this has me totally confused. When I do a netstat (on Windows XP), I seem to always get a huge number of www.partypoker.com connections and I can't figure out where they are coming from. A netstat -o shows me that some are coming from PID xxx, which is Firefox, but if I kill it, the connections still remain. Some are coming from PID 0, which makes no sense to me. SECOND PROBLEM: one would think you could edit the C:\WINDOWS\system32\drivers\etc\hosts file to block this, but it seems like my machine is ignoring the hosts file! (I have tried with the DNS Client service both enabled and disabled, same result.) I just rebooted, killed all my normal programs, and now I can't seem to reproduce the problem. If I were a paranoid person, I would think there was some sort of intelligent trojan running. I am running Windows XP Pro, Kaspersky Antivirus, CCleaner, and am fully up to date on Windows Update. What gives? So, I guess my questions are:

    1. Is anyone else seeing these weird connections to partypoker.com?
    2. Why isn't my hosts filter working?
    3. Is there some utility I can run to find out what's happening? I've tried autoruns.exe from Sysinternals but don't see anything interesting.

    Am I the only one with this problem? If there are any additional things you need me to run, let me know.

    Read the article

  • Outlook won't re-connect to Exchange after the network is re-connected

    - by stan503
    I have a setup at my desk where I connect my computer to an RJ45 switch that switches between two networks. One network is the corporate network, which is maintained by my company's IT, and the other is my own private network where I do testing (the two networks have to be separated). The corporate network hosts the Exchange server where I get e-mail. When I switch from the private network to the corporate network, I expect Outlook to re-connect to the Exchange server. However, I have found that sometimes when I come back, Outlook takes an extremely long time to re-connect. Send/Receive gives me the error 'The server is not available' (0x8004011D). It will sit there for 10 minutes to a few hours before it finally re-connects. The only other option is to reboot my computer, which is a huge pain for me since I run multiple VMs on it. This usually happens when I've been connected to the private network for a significant amount of time, so I'm thinking it's because Outlook has cached the network status. Is there a way to force Outlook to do a 'hard' re-connect to the Exchange server? I'm using Windows XP SP3 with Outlook 2007.

    Read the article
