Search Results

Search found 21702 results on 869 pages for 'large objects'.

Page 61/869 | < Previous Page | 57 58 59 60 61 62 63 64 65 66 67 68  | Next Page >

  • Sort objects and polymorphism

    - by ritmbo
    Suppose I have a class A, and B and C are children of A. Class A has a generic algorithm for sorting arrays of type A, so that I can reuse it for B and C without rewriting the algorithm for each. In the algorithm I sometimes have to swap elements. The problem is that I can only see the objects as type A, and if I do:

        A aux = array[i];
        array[i] = array[j];
        array[j] = aux;

    I think I have a problem, because array[i] may be of type B while aux is of type A, so I think I'm losing information. I'm sure you understand the situation... how can I sort a generic array of objects using an algorithm defined on the parent class?
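    A short sketch for context (the question names no language, so C# is assumed; the Key field and the selection sort are invented purely for illustration). Swapping through a base-typed temporary copies only references, so no derived-type information is lost:

        // Class names follow the question; Key and the sort are illustrative only.
        using System;

        class A : IComparable<A>
        {
            public int Key { get; set; }
            public int CompareTo(A other) { return Key.CompareTo(other.Key); }

            // Generic sort written once on the parent class.
            public static void Sort(A[] array)
            {
                for (int i = 0; i < array.Length - 1; i++)
                    for (int j = i + 1; j < array.Length; j++)
                        if (array[j].CompareTo(array[i]) < 0)
                        {
                            A aux = array[i];     // A-typed reference to a B or C object
                            array[i] = array[j];
                            array[j] = aux;       // the object itself is untouched
                        }
            }
        }

        class B : A { }
        class C : A { }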

    Read the article

  • Best place to create windows form objects

    - by user333484
    I'm creating a Windows app in C# 2008 that will have around 8-10 dialog boxes. I want these forms to exist throughout the life of the program. Where's the best place to create and store the objects? I'm coming from Delphi, where Form objects were usually stored in global variables. I'm tempted to do it in the static Program class. Should I put them in the main form instead? Thanks for helping a C# newb out.
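    For context, one common arrangement is sketched below. It is only one option, not the prescribed answer: OptionsDialog is a made-up Form subclass, and lazy creation is an assumption about what "exist throughout the life of the program" needs.

        // A static holder keeps the dialogs reachable from anywhere, much like Delphi's
        // global form variables, while creating each one on first use.
        static class Dialogs
        {
            private static OptionsDialog _options;   // hypothetical Form subclass

            public static OptionsDialog Options
            {
                get
                {
                    if (_options == null || _options.IsDisposed)
                        _options = new OptionsDialog();
                    return _options;
                }
            }
        }

        // usage from any form:
        // Dialogs.Options.ShowDialog(this);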

    Read the article

  • Is it possible to download a large database using a MySQL query

    - by Rose
    I am downloading files from the server using WinSCP. Is it possible to write a query to download a large database using a MySQL query, or by using any other method? I have tried the code below, but I am not able to get the whole database structure:

        <?php
        if (file_exists('backup_sql/my_backup.zip')) {
            unlink('backup_sql/my_backup.zip');
        }

        $tables = '*';
        $host = 'MY HOST NAME';
        $user = 'MY_USERNAME';
        $pass = 'MYPASSWORD';
        $name = 'MY_DB_NAME';

        $link = mysql_connect($host, $user, $pass);
        mysql_select_db($name, $link);

        // get all of the tables
        if ($tables == '*') {
            $tables = array();
            $result = mysql_query('SHOW TABLES');
            while ($row = mysql_fetch_row($result)) {
                $tables[] = $row[0];
            }
        } else {
            $tables = is_array($tables) ? $tables : explode(',', $tables);
        }

        $return = '';

        // cycle through
        foreach ($tables as $table) {
            $result = mysql_query('SELECT * FROM ' . $table);
            $num_fields = mysql_num_fields($result);
            //$return .= 'DROP TABLE ' . $table . ';';
            $row2 = mysql_fetch_row(mysql_query('SHOW CREATE TABLE ' . $table));
            $return .= "\n\n" . $row2[1] . ";\n\n";
            for ($i = 0; $i < $num_fields; $i++) {
                while ($row = mysql_fetch_row($result)) {
                    $return .= 'INSERT INTO ' . $table . ' VALUES(';
                    for ($j = 0; $j < $num_fields; $j++) {
                        $row[$j] = addslashes($row[$j]);
                        //$row[$j] = ereg_replace("\n", "\\n", $row[$j]);
                        if (isset($row[$j])) {
                            $return .= '"' . $row[$j] . '"';
                        } else {
                            $return .= '""';
                        }
                        if ($j < ($num_fields - 1)) {
                            $return .= ',';
                        }
                    }
                    $return .= ");\n";
                }
            }
            $return .= "\n\n\n";
        }

        $rand_var = time();
        $files_to_zip = array(
            "'backup_sql/db-backup-'.$rand_var.'.sql'",
        );
        $name = 'db-backup-' . $rand_var . '.sql';
        $data = $return;
        ?>

    Can anyone please help me? Thank you.

    Read the article

  • JavaScript: filter() for Objects

    - by AgileMeansDoAsLittleAsPossible
    ECMAScript 5 has the filter() prototype for Array types, but not Object types, if I understand correctly. How would I implement a filter() for Objects in JavaScript? Let's say I have this object:

        var foo = { bar: "Yes" };

    And I want to write a filter() that works on Objects:

        Object.prototype.filter = function(predicate) {
            var result = {};
            for (key in this) {
                if (this.hasOwnProperty(key) && !predicate(this[key])) {
                    result[key] = this[key];
                }
            }
            return result;
        };

    This works when I use it in jsfiddle (http://jsfiddle.net/MPUnL/4/), but when I add it to my site that uses jQuery 1.5 and jQuery UI 1.8.9, I get JavaScript errors in Firebug.

    Read the article

  • Python iterate object with a list of objects

    - by nerd
    First time poster, long time reader. Is it possible to iterate through an object that contains a list of objects? For example, I have the following class:

        class Page(object):
            def __init__(self, name):
                self.name = name
                self.pages = []

    I then create a new Page object and add other page objects to it:

        page = Page('FirstPage')
        apagepage = Page('FirstChild')
        anotherpagepage = Page('SecondChild')
        apagepage.pages.append(Page('FirstChildChild'))
        apagepage.pages.append(Page('SecondChildChild'))
        page.pages.append(apagepage)
        page.pages.append(anotherpagepage)

    What I would like to do is:

        for thispage in page:
            print thispage.name

    And get the following output:

        FirstPage
        FirstChild
        SecondChild
        FirstChildChild
        SecondChildChild

    So I get all of the 1st level, then the 2nd, then the 3rd. However, the following output would be fine as well:

        FirstPage
        FirstChild
        FirstChildChild
        SecondChildChild
        SecondChild

    Read the article

  • Tracking Down a Stack Overflow in My Linq Query

    - by Lazarus
    I've written the following LINQ query:

        IQueryable<ISOCountry> entries = (from e in competitorRepository.Competitors
                                          join c in countries on e.countryID equals c.isoCountryCode
                                          where !e.Deleted
                                          orderby c.isoCountryCode
                                          select new ISOCountry()
                                          {
                                              isoCountryCode = e.countryID,
                                              Name = c.Name
                                          }).Distinct();

    The objective is to retrieve a list of the countries represented by the competitors found in the system. 'countries' is an array of ISOCountry objects explicitly created and returned as an IQueryable (ISOCountry is an object of just two strings, isoCountryCode and Name). Competitors is an IQueryable which is bound to a database table through Linq2SQL, though I created the objects from scratch and used the LINQ data mapping decorators. For some reason this query causes a stack overflow when the system tries to execute it. I've no idea why. I've tried trimming the Distinct, returning an anonymous type of the two strings, and using 'select c'; all result in the overflow. The e.countryID value is populated from a dropdown that was itself populated from the IQueryable, so I know the values are appropriate, but even if they weren't I wouldn't expect a stack overflow. Can anyone explain why the overflow is occurring, or offer good speculation as to why it might be happening?

    EDIT: As requested, the code for ISOCountry:

        public class ISOCountry
        {
            public string isoCountryCode { get; set; }
            public string Name { get; set; }
        }
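    A sketch of one way to narrow the problem down (not a confirmed diagnosis or fix): since 'countries' is a local sequence, the database part can be run on its own first and the join done in memory with LINQ to Objects, which at least separates the provider's query translation from the overflow. All names below come from the question; splitting the query is the only change:

        var competitorCountryIds = competitorRepository.Competitors
            .Where(e => !e.Deleted)
            .Select(e => e.countryID)
            .Distinct()
            .ToList();                                   // executes against the database

        var entries = countries
            .Where(c => competitorCountryIds.Contains(c.isoCountryCode))
            .OrderBy(c => c.isoCountryCode)
            .Select(c => new ISOCountry { isoCountryCode = c.isoCountryCode, Name = c.Name })
            .ToList();                                   // runs entirely in memory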

    Read the article

  • Left/Right/Inner joins using C# and LINQ

    - by Keith Barrows
    I am trying to figure out how to do a series of queries to get the updates, deletes and inserts segregated into their own calls. I have 2 tables, one in each of 2 databases. One is a read-only feeds database and the other is the T-SQL read/write production source. There are a few key columns in common between the two. What I am doing to set up is this:

        List<model.AutoWithImage> feedProductList = _dbFeed.AutoWithImage
            .Where(a => a.ClientID == ClientID).ToList();

        List<model.vwCompanyDetails> companyDetailList = _dbRiv.vwCompanyDetails
            .Where(a => a.ClientID == ClientID).ToList();

        foreach (model.vwCompanyDetails companyDetail in companyDetailList)
        {
            List<model.Product> productList = _dbRiv.Product.Include("Company")
                .Where(a => a.Company.CompanyId == companyDetail.CompanyId).ToList();
        }

    Now that I have a (source) list of products from the feed, and an existing (target) list of products from my prod DB, I'd like to do 3 things:

    1. Find all SKUs in the feed that are not in the target.
    2. Find all SKUs that are in both and are active feed products, and update the target.
    3. Find all SKUs that are in both and are inactive, and soft delete them from the target.

    What are the best practices for doing this without running a double loop? I would prefer a LINQ to Objects solution, as I already have my objects. EDIT: BTW, I will need to transfer info from feed rows to target rows in the first 2 instances, and just set a flag in the last instance. TIA
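    A sketch of one way to segregate the three sets for a given productList without a nested loop. SKU and Active are guessed property names, not taken from the real schema; the idea is simply to key the existing rows once and then classify each feed row in O(1):

        var targetBySku = productList.ToDictionary(p => p.SKU);

        var toInsert = feedProductList
            .Where(f => !targetBySku.ContainsKey(f.SKU))
            .ToList();                                           // 1. new SKUs

        var toUpdate = feedProductList
            .Where(f => f.Active && targetBySku.ContainsKey(f.SKU))
            .Select(f => new { Feed = f, Target = targetBySku[f.SKU] })
            .ToList();                                           // 2. copy feed info over

        var toDelete = feedProductList
            .Where(f => !f.Active && targetBySku.ContainsKey(f.SKU))
            .Select(f => targetBySku[f.SKU])
            .ToList();                                           // 3. flag as soft-deleted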

    Read the article

  • What is the preferred method of building extremely lightweight business object / DAL now that I have

    - by Seth Spearman
    Hello, I have completed a simple database for a project. Only 6 tables. Of the 6, one is a "lookup" table. There is one "master" table that is the driver for the system; it is referenced as a foreign key by the other four tables. Given that this step is completed, what is the FASTEST, EASIEST way to create POCOs/BizObjects that can load the data and the child data? Here are my CAVEATS:

    * I don't want to spend more than 30-60 minutes learning how.
    * There is very little biz logic needed in the POCOs. They will pretty much just load data. I don't even really need to write data back.
    * I already know CSLA (up to version 3), but I feel that is overkill for this little project.
    * Nevertheless, I would love it if the ROOT objects could have collection classes that contain the CHILD objects, as in CSLA... but again, without using CSLA.
    * Please give the answer for .NET 3.5, but also say if I were restricted to only using .NET 2.0.
    * Ideally I could just point a tool at the database and the POCOs would be genn'ed.
    * FREE

    Just curious what you guys use for this kind of scenario. I understand that this question is subjective, but I want to hear a variety of answers. Seth
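    For scale, a hand-rolled sketch of the kind of POCO being asked about. Table, column, and class names here are invented for illustration, and plain ADO.NET is used so the same shape works on .NET 2.0 and 3.5; it is not a full DAL, just the shape of one root with a CSLA-style child collection:

        using System.Collections.Generic;
        using System.Data.SqlClient;

        public class Order                        // stands in for the "master" table
        {
            public int OrderId;
            public string Customer;
            public List<OrderLine> Lines = new List<OrderLine>();

            // Loads the root row and its child rows in one round trip.
            public static Order Load(int id, string connectionString)
            {
                Order order = new Order();
                using (SqlConnection cn = new SqlConnection(connectionString))
                using (SqlCommand cmd = new SqlCommand(
                    "SELECT OrderId, Customer FROM Orders WHERE OrderId = @id; " +
                    "SELECT LineId, Product FROM OrderLines WHERE OrderId = @id", cn))
                {
                    cmd.Parameters.AddWithValue("@id", id);
                    cn.Open();
                    using (SqlDataReader r = cmd.ExecuteReader())
                    {
                        if (r.Read())
                        {
                            order.OrderId = r.GetInt32(0);
                            order.Customer = r.GetString(1);
                        }
                        r.NextResult();                 // move on to the child rows
                        while (r.Read())
                        {
                            OrderLine line = new OrderLine();
                            line.LineId = r.GetInt32(0);
                            line.Product = r.GetString(1);
                            order.Lines.Add(line);
                        }
                    }
                }
                return order;
            }
        }

        public class OrderLine
        {
            public int LineId;
            public string Product;
        }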

    Read the article

  • How can I get type information at runtime from a DMP file in a WinDbg extension?

    - by pj4533
    This is related to my previous question, regarding pulling objects from a dmp file. As I mentioned in the previous question, I can successfully pull objects out of the dmp file by creating wrapper 'remote' objects. I have implemented several of these so far, and it seems to be working well. However, I have run into a snag. In one case, a pointer is stored in a class, say of type 'SomeBaseClass', but that object is actually of the type 'SomeDerivedClass', which derives from 'SomeBaseClass'. For example, it would be something like this:

        MyApplication!SomeObject
           +0x000 field1 : Ptr32 SomeBaseClass
           +0x004 field2 : Ptr32 SomeOtherClass
           +0x008 field3 : Ptr32 SomeOtherClass

    I need some way to find out what the ACTUAL type of 'field1' is. To be more specific, using example addresses:

        MyApplication!SomeObject
           +0x000 field1 : 0cae2e24 SomeBaseClass
           +0x004 field2 : 0x262c8d3c SomeOtherClass
           +0x008 field3 : 0x262c8d3c SomeOtherClass

        0:000> dt SomeBaseClass 0cae2e24
        MyApplication!SomeBaseClass
           +0x000 __VFN_table : 0x02de89e4
           +0x038 basefield1 : (null)
           +0x03c basefield2 : 3

        0:000> dt SomeDerivedClass 0cae2e24
        MyApplication!SomeDerivedClass
           +0x000 __VFN_table : 0x02de89e4
           +0x038 basefield1 : (null)
           +0x03c basefield2 : 3
           +0x040 derivedfield1 : 357
           +0x044 derivedfield2 : timecode_t

    When I am in WinDbg, I can do this:

        dt 0x02de89e4

    And it will show the type:

        0:000> dt 0x02de89e4
        SomeDerivedClass::`vftable'
        Symbol not found.

    But how do I get that inside an extension? Can I use SearchMemory() to look for 'SomeDerivedClass::`vftable''? If you follow my other question, I need this type information so I know what type of wrapper remote classes to create. I figure it might end up being some sort of case statement, where I have to match a string to a type? I am OK with that, but I still don't know where I can get that string that represents the type of the object in question (i.e. SomeObject->field1 in the above example).

    Read the article

  • Fast, Unicode-capable, cross-platform programmer's text editor that shows invisibles like ZWSP?

    - by Roger_S
    Our publishing workflow includes Windows and Linux machines (there are some Macs too, but not in the critical-path workflow). Many texts include both English and Khmer and are marked up in XML. XML Copy Editor is the best cross-platform open-source XML editor I've discovered. It utilizes the Scintilla editing component, which is generally good with Unicode but which does not enable non-printing or invisible characters like U+200B (zero-width space) and U+200C (zero-width non-joiner) to be displayed. Khmer does not separate words with a space character as Western languages do, so ZWSP is used in electronic texts to enable applications to break lines easily. Ideally I'd edit the markup and the content in a single editor, but XML awareness is less important at times than being able to display invisibles. (OpenOffice.org Writer and Microsoft Word are the only two apps I know that will display ZWSP. They are not suitable for the markup and text manipulations that need to be done to prepare manuscripts for publication, unfortunately, although I guess they're fine for authoring.) I tried out a promising editor last week, but a search-and-replace regex operation that took under a second in TextPad 4.7.3 lasted over twenty seconds. So I want to mention that speed and the ability to handle large (up to 150 MB) files is also a concern. Is there a good, fast, free or not-too-expensive text editor, with versions on Windows and Linux (and maybe Mac too), that is Unicode-aware and capable of displaying invisibles like ZWSP? One that has syntax highlighting, can handle large files, and is customizable enough that I won't tear my hair out in frustration? Thanks, Roger_S

    Read the article

  • Use variables to decide which object in an array gets an attribute?

    - by DavidR
    I have a web app which has two text areas. When one text area receives a mousedown event, a variable "side" is set to either "left" or "right". When a user selects some text in a text area, three strings are made: one for the text before the beginning of the selection, the selection itself, and the text after the selection to the end. A function is set to return these like this:

        return { head: head_text, tail: tail_text, sel: sel_text, side: text_side }

    Now, I have created an array, and I want it to appear in such a way that we get:

        text.left({"head":"four score", "selection":"and seven", "tail":"years ago."})

    I am assuming I would do this by text.side = getSelection(), but how do I get it to evaluate the variable "side" instead of treating it as an object within "text"? EDIT: OK, just to clarify, I might be completely wrong in my ideas of how this works, but here goes. I want to make it so that a function can look at "text", see within it two objects, "left" and "right", and then evaluate the head, sel, and tail of each object. Would it be easier for me to use two objects?

    Read the article

  • Reading and writing in parallel

    - by Malfist
    I want to be able to read and write a large file in parallel, or if not in parallel, at least in blocks so that I don't use up so much memory. This is my current code:

        // Define memory stream which will be used to hold encrypted data.
        MemoryStream memoryStream = new MemoryStream();

        // Define cryptographic stream (always use Write mode for encryption).
        CryptoStream cryptoStream = new CryptoStream(memoryStream, encryptor, CryptoStreamMode.Write);

        // start encrypting
        using (BinaryReader reader = new BinaryReader(File.Open(fileIn, FileMode.Open)))
        {
            byte[] buffer = new byte[1024 * 1024];
            int read = 0;
            do
            {
                read = reader.Read(buffer, 0, buffer.Length);
                cryptoStream.Write(buffer, 0, read);
            } while (read == buffer.Length);
        }

        // Finish encrypting.
        cryptoStream.FlushFinalBlock();

        // Convert our encrypted data from a memory stream into a byte array.
        //byte[] cipherTextBytes = memoryStream.ToArray();

        // write our memory stream to a file
        memoryStream.Position = 0;
        using (BinaryWriter writer = new BinaryWriter(File.Open(fileOut, FileMode.Create)))
        {
            byte[] buffer = new byte[1024 * 1024];
            int read = 0;
            do
            {
                read = memoryStream.Read(buffer, 0, buffer.Length);
                writer.Write(buffer, 0, read);
            } while (read == buffer.Length);
        }

        // Close both streams.
        memoryStream.Close();
        cryptoStream.Close();

    As you can see, it reads the entire file into memory, encrypts it, then writes it out. If I happen to be encrypting files that are very large (2GB+) it tends not to work, or at the very least, consumes ~97% of my memory. How could I do it in a more effective manner?
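    A sketch of one alternative, assuming 'encryptor' is the ICryptoTransform already created elsewhere in the question's code: wrap the output file stream in the CryptoStream and copy the input through it in blocks, so no MemoryStream ever holds the whole ciphertext:

        using (FileStream input = File.OpenRead(fileIn))
        using (FileStream output = File.Create(fileOut))
        using (CryptoStream crypto = new CryptoStream(output, encryptor, CryptoStreamMode.Write))
        {
            byte[] buffer = new byte[1024 * 1024];
            int read;
            while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
            {
                crypto.Write(buffer, 0, read);       // memory use stays at one buffer
            }
            // Disposing the CryptoStream flushes the final block automatically.
        }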

    Read the article

  • Use LINQ, to Sort and Filter items in a List<ReturnItem> collection, based on the values within a Li

    - by Daniel McPherson
    This is tricky to explain. We have a DataTable that contains a user-configurable selection of columns, which are not known at compile time. Every column in the DataTable is of type String. We need to convert this DataTable into a strongly typed collection of "ReturnItem" objects so that we can then sort and filter using LINQ for use in our application. We have made some progress as follows: We started with the basic DataTable. We then process the DataTable, creating a new "ReturnItem" object for each row. This "ReturnItem" object has just two properties: ID (string) and Columns (List<object>). The Columns collection contains one entry for each column, representing a single DataRow. Each entry is made strongly typed (int, string, DateTime, etc.). For example, it would add a new "DateTime" object to the "ReturnItem" Columns list containing the value of the "Created" DataTable column. The result is a List<ReturnItem> that we would then like to be able to sort and filter using LINQ based on the value in one of the entries, for example, sort on the "Created" date. We have been using the LINQ Dynamic Query Library, which gets us so far, but it doesn't look like the way forward because we are using it over a List collection of objects. Basically, my question boils down to: how can I use LINQ to sort and filter items in a List<ReturnItem> collection, based on the values within a List<object> property which is part of the ReturnItem class?
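    A sketch of what that can look like once a column's position and runtime type are known. The ReturnItem shape follows the description above; the index lookup and the 'items' variable are assumptions about how the configurable columns are tracked:

        public class ReturnItem
        {
            public string ID { get; set; }
            public List<object> Columns { get; set; }
        }

        // Suppose the "Created" column was placed at index 2 when the rows were built.
        int createdIndex = 2;

        List<ReturnItem> recent = items                  // items is the List<ReturnItem>
            .Where(r => (DateTime)r.Columns[createdIndex] >= DateTime.Today.AddDays(-7))
            .OrderBy(r => (DateTime)r.Columns[createdIndex])
            .ToList();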

    Read the article

  • Data transformation question

    - by tkm
    I have data composed of a list of employers and a list of workers. Each has a many-to-many relationship with the other (so an employer can have many workers, and a worker can have many employers). The way the data is retrieved (and given to me) is as follows: each employer has an array of workers. In other words: employer n has worker x, worker y, etc. So I have a bunch of employer objects, each containing an array of workers. I need to transform this data (and basically invert the relationship). I need to have a bunch of worker objects, each containing an array of employers. In other words: worker x has employer n1, employer n2, etc. The context is hypothetical, so please don't comment on why I need this or why I am doing it this way. I would really just like help on the algorithm to perform this transformation (there isn't that much data, so I would prefer readability over complex optimizations which reduce complexity). (Oh, and I am using Java, but pseudocode would be fine). Thanks!
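    A sketch of the inversion. The question is Java, but since pseudocode is welcome this is written in C# like the other examples on this page; the Employer and Worker shapes are assumed to expose a Workers list:

        // One pass over the employers; a dictionary keyed by worker accumulates the
        // inverted relationship in time proportional to the total number of links.
        var employersByWorker = new Dictionary<Worker, List<Employer>>();

        foreach (Employer employer in employers)
        {
            foreach (Worker worker in employer.Workers)
            {
                List<Employer> list;
                if (!employersByWorker.TryGetValue(worker, out list))
                {
                    list = new List<Employer>();
                    employersByWorker[worker] = list;
                }
                list.Add(employer);   // the same worker can appear under many employers
            }
        }
        // employersByWorker now maps each worker to every employer that listed it.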

    Read the article

  • 24TB RAID 6 configuration

    - by Phil
    I am in charge of a new website in a niche industry that stores lots of data (10+ TB per client, growing to 2 or 3 clients soon). We are considering ordering about $5000 worth of 3TB drives (10 in a RAID 6 configuration and 10 for backup), which will give us approximately 24 TB of production storage. The data will be written once and remain unmodified for the lifetime of the website, so we only need to do a backup one time. I understand basic RAID theory, however I am not experienced with it. My question is, does this sound like a good configuration? What potential problems could this setup cause? Also, what is the best way to do a one-time backup? Have two RAID 6 arrays, one for offsite backup and one for production? Or should I backup the RAID 6 production array to a JBOD? EDIT: The data server is running Windows 2008 Server x64. EDIT 2: To reduce rebuild time, what would you think about using two RAID 5's instead of one RAID 6?

    Read the article

  • rm on a directory with millions of files

    - by BMDan
    Background: physical server, about two years old, 7200-RPM SATA drives connected to a 3Ware RAID card, ext3 FS mounted noatime and data=ordered, not under crazy load, kernel 2.6.18-92.1.22.el5, uptime 545 days. The directory doesn't contain any subdirectories, just millions of small (~100 byte) files, with some larger (a few KB) ones. We have a server that has gone a bit cuckoo over the course of the last few months, but we only noticed it the other day when it started being unable to write to a directory due to it containing too many files. Specifically, it started throwing this error in /var/log/messages:

        ext3_dx_add_entry: Directory index full!

    The disk in question has plenty of inodes remaining:

        Filesystem            Inodes   IUsed      IFree IUse% Mounted on
        /dev/sda3           60719104 3465660   57253444    6% /

    So I'm guessing that means we hit the limit of how many entries can be in the directory file itself. No idea how many files that would be, but it can't be more, as you can see, than three million or so. Not that that's good, mind you! But that's part one of my question: exactly what is that upper limit? Is it tunable? Before I get yelled at: I want to tune it down; this enormous directory caused all sorts of issues. Anyway, we tracked down the issue in the code that was generating all of those files, and we've corrected it. Now I'm stuck with deleting the directory. A few options here:

    1. rm -rf (dir): I tried this first. I gave up and killed it after it had run for a day and a half without any discernible impact.

    2. unlink(2) on the directory: Definitely worth consideration, but the question is whether it'd be faster to delete the files inside the directory via fsck than to delete via unlink(2). That is, one way or another, I've got to mark those inodes as unused. This assumes, of course, that I can tell fsck not to drop entries to the files in /lost+found; otherwise, I've just moved my problem. In addition to all the other concerns, after reading about this a bit more, it turns out I'd probably have to call some internal FS functions, as none of the unlink(2) variants I can find would allow me to just blithely delete a directory with entries in it. Pooh.

    3.  while [ true ]; do ls -Uf | head -n 10000 | xargs rm -f 2>/dev/null; done

        This is actually the shortened version; the real one I'm running, which just adds some progress-reporting and a clean stop when we run out of files to delete, is:

        export i=0;
        time ( while [ true ]; do
            ls -Uf | head -n 3 | grep -qF '.png' || break;
            ls -Uf | head -n 10000 | xargs rm -f 2>/dev/null;
            export i=$(($i+10000));
            echo "$i...";
        done )

        This seems to be working rather well. As I write this, it's deleted 260,000 files in the past thirty minutes or so.

    Now, for the questions:

    1. As mentioned above, is the per-directory entry limit tunable?
    2. Why did it take "real 7m9.561s / user 0m0.001s / sys 0m0.001s" to delete a single file which was the first one in the list returned by "ls -U", and it took perhaps ten minutes to delete the first 10,000 entries with the command in #3, but now it's hauling along quite happily? For that matter, it deleted 260,000 in about thirty minutes, but it's now taken another fifteen minutes to delete 60,000 more. Why the huge swings in speed?
    3. Is there a better way to do this sort of thing? Not "store millions of files in a directory"; I know that's silly, and it wouldn't have happened on my watch.

    Googling the problem and looking through SF and SO offers a lot of variations on "find" that obviously have the wrong idea; it's not going to be faster than my approach for several self-evident reasons. But does the delete-via-fsck idea have any legs? Or something else entirely? I'm eager to hear out-of-the-box (or inside-the-not-well-known-box) thinking. Thanks for reading the small novel; feel free to ask questions and I'll be sure to respond. I'll also update the question with the final number of files and how long the delete script ran once I have that.

    Final script output!:

        2970000...
        2980000...
        2990000...
        3000000...
        3010000...

        real    253m59.331s
        user    0m6.061s
        sys     5m4.019s

    So, three million files deleted in a bit over four hours.

    Read the article

  • undelete big files - mission impossible?

    - by johnrembo
    Hi, I've accidentally deleted an outlook.pst (6.7GB) file while there was only 400MB of free space left on the primary NTFS partition (WinXP). I've tried several recovery tools to get this file back. "Ontrack Easy Recovery Pro" found 0 pst files (complete scan mode), while "Recover My Files" in sector scan mode found 5 psts, but 4 of them were of sizes from 3 to 28 KB, while the 5th one was 1GB. I managed to successfully recover the 1GB pst file, which was a 1-year-old copy (the one used after the latest Windows reinstall). Now I'm frustrated and confused. Why was the 1-year-old file successfully recovered if there was only 400MB left on the primary partition? Where has the 6.7GB file gone? I did some reading (i.e. here), and it seems that there's almost no chance of retrieving the file I'm looking for, but wait - none of the recovery tools I've used found a zero-sized pst file; moreover, if due to fragmentation the file is corrupted, we could use scanpst.exe to fix some errors and survive with 10 or 100 emails missing - whatever. Could you please recommend some more sophisticated recovery tools for this particular task? Appreciate your help - thanks in advance

    Read the article

  • Cannot copy files from external hard drive to desktop hard drive in Window 7

    - by Mohammad Reza Selim
    I'm trying to copy some old files from one of my external hard drives to the hard drive of my desktop PC. Some files cannot be copied; instead I get an error like 'Cannot read from source file or disk'. Those files are video files (.DAT, .VOB, .MPG) and I watched them all the way through with no issues, so the files aren't corrupted. I'm running Windows 7 with admin permissions. Could anyone let me know the reason and a solution?

    Read the article

  • How to uncompress a 9GB file in Windows FAT32

    - by Kashif
    I have a 2GB RAR file that contains a 9GB video file. I'm using a FAT32 file system. Now I want to unzip that file, but after 4GB I get an error due to the FAT32 file size limit. I want to know how I can extract that video. I know that one way is to convert my partition to NTFS, but I don't want to go that way. I've also tried 7-Zip, but that again gives an error after 4GB. One other way is to split the file, but I don't know how I can split a video file that is zipped. Any ideas, please? How can I get rid of this problem?

    Read the article

  • Advice on a 240,000 sqft outdoor wireless network

    - by whlspacedude
    I would be very appreciative of some advice on the purchase of equipment to provide a wireless network that covers the entire area of an outdoor arena. The area is roughly rectangular, 400 ft wide and 600 ft long. It has 6 light towers: 1 on each of the 400-foot ends and 2 on each of the 600-foot ends. I can mount on anything and spend as much money as needed. The needs of the network are to provide access for up to 15 wireless HD cameras with audio, plus a public Wi-Fi network. Can someone point me in the right direction as far as equipment and antennas? I can provide any additional information that you may need.

    Read the article

  • 4TB HGST SATA drive only shows 1.62 TB in Windows Server 2012

    - by user136085
    I'm using a Supermicro X9SRE-3F motherboard with the latest BIOS and 2x 4TB drives connected to the on-board SATA controller. If I set the BIOS to RAID and create a RAID 1 array, the array shows up in the BIOS as 3.6TB. However when I boot Windows (on a separate RAID 1 array), the 4TB drives show up individually in disk manager as 2x 1.62TB drives. I could use Windows 2012 to set up software RAID 1, but when I set the BIOS back to 2x individual drives, they still show up in Windows as 2x 1.62TB drives. How do I access the full capacity of these drives? Thanks, Brian Bulaw

    Read the article

  • Delete files from directory: memory exhausted

    - by codeholic
    This question is a logical continuation of http://serverfault.com/questions/45245/how-can-i-delete-all-files-from-a-directory-when-it-reports-argument-list-too-lo

    I have:

        drwxr-xr-x 2 doreshkin doreshkin 198291456 Apr 6 21:35 session_data

    I tried:

        find session_data -type f -delete
        find session_data -type f | xargs rm -f
        find session_data -maxdepth 1 -type f -print0 | xargs -r0 rm -f

    The result is the same:

        find: memory exhausted

    What can I do to remove this directory?

    Read the article

  • Does NTFS performance degrade significantly in volumes larger than five or six TB?

    - by Josh Yeager
    One of my customers is planning to set up a new document store, which will probably grow by 1-2TB per year. One of my co-workers says that Windows performance is extremely bad if it has a single NTFS volume that is bigger than five or six TB. He thinks that we need to set up their system with multiple volumes so that no single volume will exceed that limit. Is this a real problem? Does Windows or NTFS slow down when the volume size reaches several terabytes? Or is it possible to create a single volume of 10 or more TB?

    Read the article

< Previous Page | 57 58 59 60 61 62 63 64 65 66 67 68  | Next Page >