Search Results

Search found 9017 results on 361 pages for 'efficient storage'.

Page 206/361 | < Previous Page | 202 203 204 205 206 207 208 209 210 211 212 213  | Next Page >

  • Lightning fast forum based around metadata / tags? [closed]

    - by Dan W
    I wonder if anything like this exists. I'd like to add a forum to my site, but instead of the usual forum/subforum/sub-subforum structure, I'd like to use a metadata/tag approach where everything exists in a single directory, with a search field at the top which instantly (<0.5 sec) filters the threads to a particular keyword or keywords. Also, as the admin, I would be able to add highly visible buttons at the top for the main categories I choose for the forum (users can still add tags to their own threads beyond these default main tags if they wish). This approach, if done properly, is more powerful, efficient, maintenance-free, scalable and friendly than a standard forum, so I was hoping someone had the same idea and made something out of it. It couldn't be that hard. I'd want the speed to be up to (or near) the standard of this: http://forum.dlang.org/ Other forums (e.g. phpBB, shudder) are orders of magnitude worse than that in terms of latency (posting or browsing), and I think that is wrong, even in principle ;)

    Read the article

  • Calculating IOPS for a single HDD - what am I doing wrong?

    - by red888
    So I know there is no standardized way of calculating IOPS for an HDD, but from everything I have read it appears one of the most accurate formulas is the following: {seek time} + {rotational latency} + ({block size} / {data transfer rate}). This gives the time per I/O in milliseconds, or what the book I've been reading calls "disk service time". Rotational latency is calculated as half of one rotation in milliseconds. This was taken from the EMC book "Information Storage and Management", arguably a pretty reliable source, right/wrong? Putting this formula into practice, consider this Seagate data sheet. I am going to calculate IOPS for the ST3000DM001 model for a block size of 4KB: Seek Average (Write) = 9.5ms (I'll be measuring IOPS for writes), Spindle speed = 7200rpm, Average Data Rate = 156MB/s. So my variables are: Seek Time = 9.5ms; Rotational latency = (.5 / (7200rpm / 60)) = 0.004s = 4ms; Data Rate = 156MB/s = (0.156MB/ms / 0.004MB) = 39. That gives 9.5ms + 4ms + 39 = 52.5, and 1 / (52.5 * 0.001) = 19 IOPS. 19 IOPS for this drive clearly is not right, so what am I doing wrong?
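
    A sanity check on the arithmetic (my own reading, not from the book): the term 0.156MB/ms / 0.004MB = 39 is blocks per millisecond, i.e. the reciprocal of the transfer time, so adding it to the two latency terms mixes units. Computed the other way around:

        transfer time  =  0.004 MB / 0.156 MB/ms  ≈  0.026 ms
        service time   ≈  9.5 ms + 4.17 ms + 0.026 ms  ≈  13.7 ms
        IOPS           ≈  1000 / 13.7  ≈  73

    Roughly 73 IOPS is in the usual ballpark for a 7200rpm drive.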

    Read the article

  • Archive Outlook mail items into SQL Server

    - by marc_s
    I am looking for (and so far not finding) a solution to archive e-mail items from my Outlook into SQL Server. My PST is getting really, really big, and I'd love to extract my older e-mail into SQL Server in a way that still lets me easily find mails if needed. I would prefer SQL Server as the storage medium since I'm familiar with it, and it's rock solid; I don't want to end up with a collection of PST files or CHM files or anything like that. Does anyone know of such a solution? I'm a power/home user: I can't afford $5'000 enterprise licenses; I need a sub-$100 solution for private use.

    Read the article

  • Why is a single thread spread across CPU's?

    - by Marcus Lindblom
    I'm just curious why the scheduler constantly moves an app between CPUs rather than keeping it on one. It looks a bit silly to have 4 cores at 25% rather than one at 100%. Does it have to do with heat, or is it somehow more efficient? Do other OSes do it differently? Insights or links to in-depth material would be nice. (I couldn't find much myself.) Update: By "spread out" I don't mean that it executes on several CPUs at once, but that it is moved from one to the other several times per second, which makes it look spread out.
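
    As an aside, one way to see the effect of keeping a process on a single core is to pin it yourself and compare; a minimal sketch on Linux, assuming the taskset utility from util-linux:

        # launch a program restricted to core 0
        taskset -c 0 ./myapp

        # or pin an already-running process (PID 1234) to core 0
        taskset -cp 0 1234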

    Read the article

  • What are performance limits of a database?

    - by Tommy
    What are some rough performance limits (reads/s, writes/s) for a single database server (no master-slave architecture), assuming storage on disk? How many reads/s and writes/s, depending on the kind of disk (SSD vs non-SSD), assuming simple operations (select one row by primary key; update one row; correctly indexed)? I assume this limit depends on disk seek/write performance. EDIT: My question is more about getting rough metrics for the number of operations a database supports: to be able to know, for example, whether a new feature triggering 300 inserts/s can be supported without scaling out with additional servers.
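
    Some rough back-of-the-envelope numbers (my own ballpark assumptions, not benchmarks): a 7200rpm disk sustains on the order of 75-100 random IOPS, a 15K disk around 175-200, and even an entry-level SSD manages thousands. Under those assumptions:

        300 random inserts/s  >  ~100 IOPS of a single 7200rpm spindle

    so one spinning disk would need write batching or group commit to keep up, while an SSD would absorb that rate comfortably.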

    Read the article

  • Recommending simple appliance for DansGuardian, iptables, snort inline

    - by SRobertJames
    I'm currently using a Linksys E2000 with dd-wrt. I'd like to add DansGuardian for content filtering and snort-inline for IPS, but those require a more powerful box (mainly, more storage). Can you recommend a good device to use? I'm open to both overwrite-the-firmware devices (like dd-wrt) and designed-to-be-customized boxes. Requirements:
    1. 5+ Ethernet ports, pref. GigE
    2. Small form factor
    3. No noise (office environment)
    4. Low power
    5. Not sure about 802.11 wireless
    Budget < $400, pref. less.

    Read the article

  • System has reached the maximum size allowed for the system part of the registry

    - by Bob Denny
    To be precise: "System has reached the maximum size allowed for the system part of the registry. Additional storage requests will be ignored." WinXP/64, running fine for 2 years (no /3GB switch); this just started happening. I used ntregopt and the problem went away, at least temporarily. However, looking before and after in Windows\System32\Config, I see that my System file was reduced by only 10% and is still 170+ MB. According to my rather extensive research with Google, this is "huge"; it should be more like 10-20 MB. The system runs fine. There is a System.bak that is only 11 MB, dated when I ran ntregopt. That's what I know. Now my question: is there anything I can do to reduce or rebuild the System registry hive, given the above info?

    Read the article

  • Optimal Configuration for five 300 GB 15K SAS Drives

    - by Bob
    I recently acquired an HP Z800 workstation that has five 300 GB 15K SAS Drives. This system will be dedicated to running multiple virtual machines under VMware Workstation (Note: I'm not using ESXi because I do plan to use the system for other purposes.). For the host OS, I plan to install RHEL 5. My number one concern is guest performance. For example, should I create a RAID 10 array for the OS and virtual machine storage with four of the drives and reserve the 5th? Or, is there a solution that will provide better performance?
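
    Some rough arithmetic for comparison (my assumptions: ~175 random IOPS per 15K spindle, standard RAID write penalties):

        RAID 10 over 4 drives:  600 GB usable; each write costs 2 disk I/Os (mirror); 5th drive kept as a hot spare
        RAID 5 over 5 drives:   1.2 TB usable; each small random write costs ~4 disk I/Os (read data, read parity, write both)

    For write-heavy virtual machines, the RAID 10 layout usually comes out ahead despite the lost capacity.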

    Read the article

  • smallest footprint for Web Application server?

    - by edgardodelamanta
    There are times when you need to spare hardware resources (either to keep using legacy hardware, to play the embedded card, or just to be efficient, because a large footprint trashes CPU caches, leading to unacceptable levels of idle states). In this spirit, some efforts have been made to produce 'light' ports of Java or Mono (C# for Linux), which range around 50-80 MB (instead of 100-200 MB). Add a web server (Apache, IIS, etc.) to the scripting engine and you can easily reach a gigabyte (IIS + .NET) just to load the tooling into memory. Does anybody know of tools with more modest specs in this area?

    Read the article

  • Can someone explain the physical architecture of RAID 10 in complete layman's terms?

    - by Hank
    I am a newbie in the world of storage and I am having a hard time digesting the physical architecture of some of the RAID levels. I am particularly interested in RAID 10, and 50. I asked the question specifically about RAID 10, because I feel if I understand that, I'll understand the other. So, I get the definition of RAID 10 - "minimum 4 disks, a striped array whose segments are mirrored". If I've got 4 disks and Disks 1 and 2 are a mirrored pair, and Disks 3 and 4 are a mirrored pair - where does the data get striped? Thanks.
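
    A plain-text sketch of that 4-disk case (my own illustration): the two mirrored pairs act as two logical drives, and the striping happens across those pairs:

        stripe block 1  -->  pair A (Disk 1, copied to Disk 2)
        stripe block 2  -->  pair B (Disk 3, copied to Disk 4)
        stripe block 3  -->  pair A (Disk 1, copied to Disk 2)
        stripe block 4  -->  pair B (Disk 3, copied to Disk 4)

    So odd-numbered blocks land on the first pair, even-numbered blocks on the second, and each pair holds an identical copy on both of its disks.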

    Read the article

  • Win 7 Explorer backup and long paths

    - by user53299
    I use Explorer to do backups because Win 7's backup program asks me to take previously made backups and put them back in the drive. I am opposed to that idea, since I believe backups should remain in storage. With Explorer backups (burn to disc), I have encountered the "destination path too long" error message, and it shows the name of a folder "Debug" three times. I have hundreds of folders named "Debug", thanks to Visual Studio. At this moment I'm too angry at Microsoft to write a program to determine my 3 longest paths. (Aside: this is all after coincidentally reading two articles about path junctions earlier this evening, which already made me kind of unhappy.) Please, is there an easy way to continue to make backups with Explorer? Edit: I should add that renaming paths wrecks Visual Studio projects, so I really need to isolate the small number of problem paths or find a cleaner solution.
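
    For what it's worth, finding the longest paths doesn't require writing a program; a PowerShell sketch (assuming C:\source is the root to scan; paths already over the limit are skipped rather than aborting the scan):

        Get-ChildItem C:\source -Recurse -ErrorAction SilentlyContinue |
            Sort-Object { $_.FullName.Length } -Descending |
            Select-Object -First 3 -ExpandProperty FullName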

    Read the article

  • Parity Initialization after putting in two new disks

    - by lbanz
    All my firmware is up to date on the server and the controllers. Storage crashed over the weekend. I rebooted it and it detected that I put in two new disks last week (I did check that both disks completed the rebuilding process last week). After it booted into the OS, I see that it gave me an informational message. After 18 hours it is at 54%, so it is looking healthy. But I need to replace 5 more disks in the MSA. Should I wait for this message to finish before replacing more disks? "785 Background parity initialization is currently queued or in progress on Logical Drive 1 (15.0 TB, RAID 5). If background parity initialization is queued, it will start when I/O is performed on the drive. When background parity initialization completes, the performance of the logical drive will improve."

    Read the article

  • Recursive Batch File

    - by MCZ
    I have a file that looks like this:

        head1,head2,head3,head4,head5,head6
        a11,a12,keyA,a14,a15,a16
        a21,a22,keyB,a24,a25
        a31,a32,keyC,a34
        a41,a42,keyB,a44,a44
        a51,a52,keyA,a54,a55,a56
        a61,a62,keyA,a64,a65,a66
        a71,a72,keyC,a74
        some message

    Objective: write the list of unique keys to a text file. For example, the result for the file described above should be: keyA, keyB, keyC. Here's the pseudocode I would like to implement in a batch file, recur.bat:
    1. Read the second line of the input file
    2. If no key exists on the second line, return; else continue
    3. Append keyX to the list
    4. FINDSTR /v keyX inputfile
    5. Pipe the results back into recur.bat
    I don't know if this is the most efficient way to do this without using an actual programming language. Any suggestions for actual batch file code?
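
    A non-recursive sketch in plain batch (my own attempt, assuming the key is always the third comma-separated field and the input file is named input.csv):

        @echo off
        setlocal
        set "out=keys.txt"
        type nul > "%out%"
        rem skip the header line and take the third comma-separated token
        for /f "skip=1 tokens=3 delims=," %%k in (input.csv) do (
            rem ignore lines with no third field, e.g. the trailing message
            if not "%%k"=="" (
                rem append the key only if it is not already in the output
                findstr /x /c:"%%k" "%out%" >nul || >> "%out%" echo %%k
            )
        )
        type "%out%"

    This sidesteps the recursion entirely: FINDSTR /v is no longer needed because duplicates are filtered out as the keys are found.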

    Read the article

  • How can I improve performance over SMB/CIFS for an application that has poor write speeds?

    - by Jeremy
    I have a third party application that reads several large files and generates a third large file. Its performance is quite good when the generated file is stored on "local storage", i.e. either a direct attached or iSCSI-based disk. The source files that are read can be stored remotely on our NAS and accessed via SMB with little effect on performance. However, if we attempt to write the target file to any kind of SMB/CIFS share (Samba or Windows Server) the performance drops almost ten-fold. This is unacceptably slow in our case. Writing files to network shares is not otherwise slow. I can copy large files to SMB shares and get great performance - near what I would expect is possible given the disks and network in question. I have a theory that this application's problem with SMB shares has something to do with a lack of write caching over the share and perhaps lots of network roundtrips. Is this possible and is there anything that can be done about it?
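
    One cheap way to test the write-caching theory from a Linux client is to remount the share with relaxed cache semantics and a large write size, then re-run the application against it; a sketch (my assumption: a reasonably recent cifs.ko, since the available cache options vary by kernel version, and the server/share names are placeholders):

        mount -t cifs //nas/share /mnt/test -o username=me,cache=loose,wsize=130048

    If throughput recovers, the bottleneck is the application's small synchronous writes rather than the network or the disks.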

    Read the article

  • What's the fastest way to store/access large files?

    - by philfreo
    I do a lot of video editing on my Mac and need a way to store very large (30 GB) files, and don't have room on my HD. A USB/Firewire external hard drive would work, but it seems way too slow for consistently working with such large files. I've also considered buying another computer, with a large hard drive, and putting it on the same network with a shared folder. What's the fastest / most efficient way to do this? Please consider USB 2.0 speeds, hard drive read times, ethernet speeds, etc. Are there other options I should consider?
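
    Rough sequential-throughput ballparks for the options mentioned (my own figures, not benchmarks):

        USB 2.0             ~30-35 MB/s in practice
        FireWire 800        ~70-80 MB/s
        Gigabit Ethernet    ~100-115 MB/s wire speed, minus protocol overhead
        SATA disk itself    ~80-120 MB/s for a 7200rpm drive

    At those rates a 30 GB file takes roughly 15 minutes over USB 2.0 versus about 5 minutes over gigabit to a fast disk, so the networked-computer option is viable if the network is gigabit end to end.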

    Read the article

  • Ways to increase my Ubuntu partition space

    - by Andreas Grech
    I am currently running Ubuntu and Windows 7 as dual-boot on a single HD. The problem is that when I installed Ubuntu, I didn't allocate as much space as I thought I would need, and now I need to 'reinstall' Ubuntu so that I can increase the amount of storage space. Now there are two ways to go about this: either I use gparted to increase my partition size (but I read that it's not really that safe as regards data loss), or I create a new, larger partition and reinstall Ubuntu there. But if I want to reinstall Ubuntu, is there a way I can somehow 'save' my current Ubuntu and install that one? What I mean is that I don't want to lose my currently installed packages and the files I have on this partition. Is there a way to somehow 'streamline' my current Ubuntu so that I can install it on the new partition? If not, what are your opinions as regards gparted?
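
    If you do go the reinstall route, the installed-package list at least is easy to carry over with standard dpkg tooling (a sketch; your /home data still needs its own backup):

        # on the current install: record package selections
        dpkg --get-selections > my-packages.txt

        # on the fresh install: replay them
        sudo dpkg --set-selections < my-packages.txt
        sudo apt-get dselect-upgrade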

    Read the article

  • How can Nagios handle non-threshold based plugins?

    - by FliesLikeABrick
    I am writing a Nagios plugin to monitor trends in the utilization of a certain storage resource (e.g. gradual increases are fine, but an instantaneous/sudden increase or decrease in resource usage may indicate a problem). For what it's worth, it reviews the last N entries in an RRD file generated by a custom cacti data source/templates. What is the "right" way to handle the Nagios notification config/implementation for this? The problem is that the plugin would exit as warning/critical for one polling period, but in the next it would be fine (or 3 polling periods later, if I look at 3 polling periods' worth of data). I guess the question is: should I just write it in such a way that it will alert for X polling periods, or should I find a way to write it such that manual intervention is required for it to clear (such as logging into the monitoring server, or hitting a URL to run a script that submits a passive result)? Your input is appreciated, and if you have any tips for how to implement the latter, I'm open to them (I can think of a few ways to possibly implement it).
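
    One common pattern for the manual-intervention variant is to latch the alarm in a small state file between runs; a minimal sketch (my own, where check_storage_trend stands in for the real trend check):

        #!/bin/sh
        # Latch: once the trend check trips, keep returning CRITICAL
        # until an operator removes the state file by hand.
        STATE=/var/tmp/storage_trend.tripped

        if [ -f "$STATE" ]; then
            echo "CRITICAL: trend alarm latched since $(cat "$STATE"); rm $STATE to clear"
            exit 2
        fi

        if ! /usr/local/lib/nagios/check_storage_trend; then
            date > "$STATE"
            echo "CRITICAL: sudden change in resource utilization"
            exit 2
        fi

        echo "OK: utilization trend within bounds"
        exit 0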

    Read the article

  • RHEL raw device (over VMware RDM) performance issues

    - by jifa
    I'm running RHEL 5.3 over vSphere 4.0U1. I configured multiple LUNs on my NetApp (Fibre) storage, and added the RDMs on two (Linux) VMs, using the paravirtual SCSI adapter. One LUN is 100GB in size, successfully mapped to /dev/sdb on both VMs; 5 more are 500MB in size (mapped to /dev/sd{c-g}). I also created one partition per device. I have encountered two issues. First, writing directly to /dev/sdb1 gives me ~50MB/s, while any of /dev/sd{c-g}1 gives me ~9MB/s. There is no difference in the configuration of the LUNs apart from their size. I am wondering what causes this, but it is not my main problem, as I would settle for 9MB/s. I created raw devices using udev pretty straightforwardly: ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N" (one rule per device). Writing to any of the new raw devices dramatically slows performance down to just over 900KB/s. Can anyone point me in a helpful direction? Thanks in advance, -- jifa
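
    One thing worth ruling out (a guess, not a diagnosis): the legacy raw interface bypasses the page cache, so every write goes to disk synchronously, and throughput collapses if the individual writes are small. A quick comparison with dd (destructive: this overwrites the device, so use one of the scratch LUNs):

        # small writes: each 512-byte request is a synchronous round trip
        dd if=/dev/zero of=/dev/raw/raw1 bs=512 count=20000

        # large writes: the same amount of data in far fewer requests
        dd if=/dev/zero of=/dev/raw/raw1 bs=1M count=10

    If the bs=1M run is dramatically faster, the raw devices are fine and the writer's block size is the problem.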

    Read the article

  • Why is hibernation still used?

    - by Moses
    I've never quite understood the original purpose of the Hibernation power state in Windows. I understand how it works, what processes take place, and what happens when you boot back up from Hibernate, but I've never truly understood why it's used. With today's technology, most notably with SSDs, RAM and CPUs becoming faster and faster, a cold boot on a clean/efficient Windows installation can be pretty fast (for some people, mere seconds from pushing the power button). Standby is even faster, sometimes instantaneous. Even SATA drives from 5-6 years ago can accomplish these fast boot times. Hibernation seems pointless to me when modern technology is considered, but perhaps there are applications that I'm not considering. What was the original purpose behind hibernation, and why do people still use it? Edit: I rescind my comment about hibernation being obsolete, as it obviously has very practical applications to laptops and mobile PCs, considering the power restrictions. I was mostly referring to hibernation being used on a desktop.

    Read the article

  • What are the cheap CDN for Origin Pull?

    - by DucDigital
    I've read several threads around Server Fault about this, but I am still not satisfied with the answers, so I'm posting a question here. I need an origin-pull CDN that supports big files (more than 200MB). I don't need a storage service, since those offerings are too small anyway; I just need the CDN to relay from my origin server. Also, the price should be affordable: no more than $150 a month for their smallest plan. I also need to pay by credit card, since I do not work or stay in the US, so it's hard for me to do a bank wire. Thank you very much.

    Read the article

  • Transfer many Gigabytes between two servers

    - by Bernhard
    Hello, I have a big problem. I have to move data from an old webspace which is only accessible by FTP. The new root server is accessible by SSH, of course :-) I need to move all the data from the old space, but the amount is just huge. Is there a way to move all the files directly from the old FTP host to the new storage, without going through a third station (my local machine)? I've tried it with ftp, but it didn't work; I think I used the wrong commands. Is there a way to do this? Thank you in advance, Bernhard
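
    Assuming lftp can be installed on the new root server, its mirror mode can pull the whole tree straight from the old host with no intermediate hop (a sketch; the host, credentials and paths are placeholders):

        # run on the new server: mirror the old webspace via FTP
        lftp -u olduser,oldpass ftp://old-host.example.com \
             -e "mirror --verbose /htdocs /var/www/htdocs; quit"

    wget -m ftp://olduser:oldpass@old-host.example.com/ is a workable fallback if lftp is not available.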

    Read the article

  • Server location moved: how can I move the files?

    - by Bernhard
    Hello everyone, I have a big problem. I have to move data from an old webspace which is only accessible by FTP. Now we have a new root server, which is accessible by SSH, of course :-) Now I need to move all the data from the old space, but there are a lot of GB of files. Is there a way to fetch all the files directly from the old FTP host to the new storage, without going through a third station (my local machine)? I've tried it with ftp, but without success; I think I used the wrong commands. Is there a way to set something like this up, including all files and directories? Thank you in advance, Bernhard

    Read the article

  • Using Folder Redirection GPO and Offline Files and Folders

    - by user132844
    I want to use Folder Redirection to redirect users' My Documents to a network share. First question: what are best practices for mapping the drive? Should I use the Profile tab in AD with the %username% variable, a net use logon script, or something else? Second question: how do I deal with laptops and syncing the network share with local storage? I want 2-way syncing, so that if a user manually maps their networked home drive and edits it from a different computer, the newer version syncs to their My Documents folder the next time they log on to their normal work computer. I also want to be sure that if they edit a file offline on their laptop while away from the office, the network version syncs the changes the next time they connect that laptop. Please advise best practices for this scenario in a 2008 R2/Win7 environment. I am also interested in Mac clients for this environment; while I am very Mac savvy, I would like to hear what others consider best practices for Mac network home directories in a Windows environment.
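
    For the drive mapping itself, a one-line logon script is the more transparent of the two options (a sketch, with \\fileserver\home$ as a placeholder share):

        rem map H: to the user's own folder under the home share
        net use H: \\fileserver\home$\%username% /persistent:no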

    Read the article

  • How to Disable secondary drive from booting upon restart - Windows

    - by DevCompany
    I had a Windows 2003 hard drive in my server, and it went bad, so I installed a new, clean hard drive and installed Windows 2008 R2 on it. I moved the old 2003 drive to be used only for general storage on the same computer. The machine usually boots into Windows 2008 upon a restart, but sometimes it starts trying to boot the old 2003 drive and causes boot issues (NTLDR boot loader errors, among others), even though the boot order preference is set to boot 2008, not 2003. I need to know how to remove any old boot code that keeps this drive bootable. I still want to use it as a secondary drive; I just don't want any boot code on it. Hopefully my situation is clear enough for everyone to give a good response. Thank you...
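
    If the old drive's partition is still flagged active, the BIOS can fall back to it; clearing that flag is one way to take the drive out of the boot path while leaving its data alone. A sketch of a diskpart session (double-check the disk and partition numbers shown by the list commands first, since this changes boot behaviour):

        diskpart
        list disk
        select disk 1
        list partition
        select partition 1
        inactive
        exit

    Here "disk 1" stands in for whichever number list disk reports for the old 2003 drive.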

    Read the article
