Search Results

Search found 10417 results on 417 pages for 'large'.

Page 56 of 417

  • Windows Explorer - How can a large file have a zero "Size on disk" value, and what does it mean?

    - by Jaans
    I would expect some discrepancy between "Size" and "Size on disk" in Windows Explorer due to file system allocation, but this goes further: an example file on a Windows 2012 R2 file server shows an 81.4 MB "Size" but a "Size on disk" of 0 bytes. What gives? I have other files doing the same, yet another set of files and folders behaves as expected, showing a size on disk reasonably close to the actual file size. The volume is a basic disk, formatted with NTFS and the default 4K allocation units. No compression is set for any file or folder on the volume. (For the more paranoid: I ran a malware scan and also confirmed there are no alternate data streams (ADS) associated with the file in question.) The user account running Windows Explorer is the domain administrator, and the file owner is also the domain administrator. Thanks for reading!

    Read the article

  • Email Discovery from a Fairly Large Mailbox (15 GB) on Exchange 2003

    - by nysingh
    I have a request from our legal team to search a user's mailbox. The mailbox is 15 GB and it is on Exchange 2003. I have tried Windows Desktop Search and Google Desktop; I have gotten them to index the mailbox, but getting the results into a folder that can be backed up to CD is proving a bit difficult, since neither tool allows you to copy search results to another folder. Can anyone point me in the right direction? What is the best way to index and copy the results of a PST, mailbox, or EDB file? What are the best discovery methods? Thanks

    Read the article

  • Is there a way to replicate very large file shares in real time?

    - by fsckin
    I have an hourly cron job that copies about 40GB of data from a source folder into a new folder with the hour appended to the name. When it's done, the job prunes anything older than 24 hours. This data changes very often during work hours and sits on a Samba file share. Here's how the folder structure looks:

        \\server\Version.1
        \\server\Version.2
        \\server\Version.3
        ...
        \\server\Version.24

    The contents of each new folder usually don't change very much compared to the previous one, since this is an hourly job. Now you might be thinking that I'm an idiot for dreaming this up. Truth is, I just found out about it. It's actually been used for years and is so incredibly simple that anyone could delete the ENTIRE 40GB share (imagine that dialog spooling up... deleting thousands and thousands of files) and it would actually be faster to restore by moving the latest copy back to the source than it took to delete. Brilliant! Now, to top this off, I need to efficiently replicate this 960GB of "mostly similar" data to a remote server over a WAN link, with the replication happening as close to real time as possible -- think hot spare, disaster recovery, etc. My first thought was rsync. Total failure: rsync sees a deletion of the folder that is 24 hours old and the addition of a new folder with 30GB of data to sync! I also looked at rdiff-backup and unison; they both appear to use similar algorithms and do not keep enough metadata to do this intelligently. The best thing I can find "out of the box" is Windows Server "Distributed File System Replication", which uses "Remote Differential Compression" -- after reading the background information on how it works, it looks like exactly what I need. Problem: both servers are running Linux. D'oh! One approach I'm looking at (sketched below) -- say it's 5AM and the cron job has just finished:

        1. The new Version.5 folder arrives on the local server.
        2. SSH to the remote server and copy Version.4 to Version.5.
        3. Run rsync on the local server, pushing changes to the remote server. Rsync now only has to do a differential copy between Version.4 and Version.5.

    Is there a smarter way to replicate Samba shares as close to real time as possible? Is there anything out there that does "Remote Differential Compression" on Linux?
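
    A rough sketch of the approach just described, with hypothetical host and path names. Pre-seeding the remote folder from the previous hour (hard links via "cp -al" keep that step cheap, and rsync's default write-new-then-rename behaviour keeps the previous snapshot intact) means rsync only has to transfer the hourly differences:

        # Sketch only: seed the remote copy of the new hour's folder from the
        # previous hour, then push just the differences with rsync.
        import subprocess

        REMOTE = "dr-server.example.com"   # assumption: remote replica host
        BASE = "/srv/share"                # assumption: share root on both ends

        def replicate(prev_hour, new_hour):
            # 1. Pre-seed the new folder on the remote side from the previous hour.
            subprocess.check_call([
                "ssh", REMOTE,
                "cp -al %s/Version.%d %s/Version.%d" % (BASE, prev_hour, BASE, new_hour),
            ])
            # 2. Push only the differences between the local folder and the seeded copy.
            subprocess.check_call([
                "rsync", "-a", "--delete",
                "%s/Version.%d/" % (BASE, new_hour),
                "%s:%s/Version.%d/" % (REMOTE, BASE, new_hour),
            ])

        if __name__ == "__main__":
            replicate(4, 5)   # e.g. the 5AM run described above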

    Read the article

  • How are large companies handling the storing and cataloging of software installation disks?

    - by CT
    I just started working in the IT department of a small-to-medium-sized construction company with about 200 users. One of my responsibilities is to set up and configure all new machines that come in. I would like suggestions on how to best manage the installation disks and licenses of the software that comes with them, plus any additional licensed software such as AutoCAD, Photoshop, etc., as well as peripheral driver disks for printers and scanners. Right now every machine is associated and labeled with an asset id. All asset ids are kept in a spreadsheet with applicable serial numbers, current user, warranty info, and software licenses. The physical disks are then kept in a folder in a cabinet, with each folder marked with the asset id as well as the current user. My problem is that the system was not maintained very well before I came to the company. There are plenty of software folders with no asset id labeled on them, plenty of missing software folders (most likely many of the unlabeled ones), folders with names but no asset ids, and machines that get switched to different users without the folders and spreadsheet being updated. I am not saying this method would necessarily be bad if it were better implemented and managed, but I am going to have to spend a lot of time fixing the system currently in place. I thought I would ask the community first how others manage this process, in case there are easier, more efficient ways of doing so. Thank you.

    Read the article

  • How to contact an Email Administrator at a large company?

    - by Brett G
    I've had to deal with other, larger companies (clients of ours) before when we have had e-mail issues with them. Most of the time it's because our messages have been tagged as spam, although right now it's because one of their systems has gone haywire and is repeatedly sending us e-mails. Anyway, my question is: how do you get in touch with the e-mail administrator? I've found that at some larger companies, unless you have the name of the person, the receptionist will refuse to connect you to them (I'd imagine they're acting as the gatekeeper against salespeople asking for "the person responsible for your printers"). I know we could always try to deal with our contact at the company, but sometimes that can be slow and difficult, and these issues are usually time sensitive.

    Read the article

  • What's a good way to move a large number of files from one place to another, using the file paths?

    - by user165253
    I'm going to move my Winamp library from its current location (in various folders inside My Documents) to My Music, but I can't just drag and drop them, as there are thousands of files within My Documents that I don't want moved. I can get the path of every single music file from Winamp, but I don't know of any way to move them all. I'd like some way to maintain their current folder arrangement, rather than just dumping all the files into a single, unorganised folder.
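
    A rough sketch of one way to script this, assuming the Winamp file paths can be exported to a plain text file (one path per line); the source/destination roots and the list file name are made-up placeholders:

        # Move each listed file while preserving its folder structure relative
        # to the source root (hypothetical paths; adjust to the real export).
        import os
        import shutil

        SRC_ROOT = r"C:\Users\me\Documents"   # assumption: where the files live now
        DST_ROOT = r"C:\Users\me\Music"       # assumption: where they should end up

        with open("winamp_paths.txt") as f:
            for line in f:
                src = line.strip()
                if not src:
                    continue
                # Rebuild the same sub-folder layout under the destination root.
                rel = os.path.relpath(src, SRC_ROOT)
                dst = os.path.join(DST_ROOT, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)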

    Read the article

  • Large OSX 10.6 to Windows 7 smb transfers fail?

    - by user41724
    I'm connecting to a Windows 7 box from an OS X 10.6 box via smb: smb://ftp1. The connection works fine and I can transfer individual files one at a time, but as soon as I try to move an entire folder I get the following error: "The Finder can’t complete the operation because some data in “test” can’t be read or written. (Error code -36)". This error happens on all our OS X boxes when trying to push the entire folder to the Win7 box. The folder "test" in the above error message has -R 777 permissions. I can move every image file to the Windows box one by one with no errors, but if I try to move the entire folder -- bam, error. The error also seems to kill the SMB client on the Windows box. There's an FTP server on the Windows box, and I can FTP in from the OS X box and move everything just fine. Not sure what is going on here?

    Read the article

  • Home server running 2008 R2 intermittently bringing our internet down by creating a large number of connections [closed]

    - by Philip Strong
    Possible Duplicate: My server's been hacked EMERGENCY. My Server 2008 R2 home server is intermittently (every 30 minutes or so, for about 3-4 minutes) opening a huge number of connections, which hits the 4096-connection limit and effectively DoSes our internet connection. I've run a couple of network traffic monitors, and it appears to be a system process causing the problem. I thought I'd fixed it by reinstalling Comodo Antivirus, but it appears that wasn't the problem. Any thoughts? Thanks in advance.

    Read the article

  • Internet Explorer 9 Preview 2 link + webcasts for developers

    - by Eric Nelson
    At Web Directions last week in London (10th and 11th June 2010) I promised several folks I would put up a blog post with more information on IE 9.0. True to my word (albeit a little later than I had hoped), here is what I was thinking of:

    Install

    First up, install Preview 2 and try out the demos I was showing at the conference. Remember that the IE9 Preview installs side by side with IE8/7 etc. It is not a beta, nor is it intended to be a full browser. It is a … preview :-) Including good old SVG-oids :-)

    Learn

    Then check out the following webcasts, which were recorded in March this year at MIX (slides and MP4/WMV videos are available for download on each session page):

      - In-Depth Look At Internet Explorer 9 (Ted Johnson & John Hrvatin): http://live.visitmix.com/MIX10/Sessions/CL28
      - High Performance Best Practices For Web Sites (Jason Weber): http://live.visitmix.com/MIX10/Sessions/CL29
      - HTML5: Cross Browser Best Practices (Tony Ross): http://live.visitmix.com/MIX10/Sessions/CL27
      - Internet Explorer Developer Tools (Jon Seitel): http://live.visitmix.com/MIX10/Sessions/FT51
      - SVG: The Past, Present And Future of Vector Graphics For The Web (Patrick Dengler, Doug Schepers): http://live.visitmix.com/MIX10/Sessions/EX30
      - Day 2 Keynote containing IE9 (Dean Hachamovitch): http://live.visitmix.com/MIX10/Sessions/KEY02

    Read the article

  • Are eCommerce platforms worth it for large scale systems?

    - by codeflunky
    My company and I are building a new system for a relatively large client. We are going to be replacing their entire system, which includes some eCommerce aspects among many other things. It is not a typical public shopping site, and there are many things about the system (both back end and front end) that are quite different. Some of the people I work for are convinced that we should be using a third-party product to implement the eCommerce pieces (shopping cart, catalog management). Their opinion is that it is a solved problem, and we shouldn't have to reinvent it. Given that direction, I have reviewed around ten different .NET-based eCommerce platforms, and I struggle to imagine how we will be able to smoothly integrate any of them without a lot of friction. They are so all-encompassing that I feel they are probably better suited to implementing simple shopping sites than to larger systems that happen to have some eCommerce aspects to them. We have a really nice architecture planned for everything else (Entity Framework, ASP.NET MVC, etc.), and my gut tells me that trying to introduce a third-party platform will cause unnecessary fragmentation and difficulty. I would love to hear some opinions from people who have been there. Have you used a third-party platform for eCommerce? Was it a typical shopping site or something different? Did you feel it was a help or a hindrance? Thanks.

    Read the article

  • How to show a large JPG image on the right in jqGrid's edit form?

    - by cLee
    Is it possible to show a large (i.e. bigger than a thumbnail) JPEG image on the right-hand side of jqGrid's edit form? Users want to look at a photo while entering data into the fields... they are describing things in the photo. I'm sure all things are possible with jQuery, but I don't know where to begin. Thanks. Here is the relevant code:

        function afterSubmit(r, data, action) {
            // if session timeout returned:
            if (r.responseText == "logout") {
                window.location = '../scripts/logout.php';
            }
            // if an error message is returned:
            if (r.responseText != "") {
                $('#submit_errors').html('Alert:' + r.responseText + '');
                // show div with error message
                $('#submit_errors').slideDown();
                // hide error div after 10 seconds
                window.setTimeout(function() { $('#submit_errors').slideUp(); }, 10000);
                return false; // don't remove this!
            }
            return true; // don't remove this!
        }

        var lastsel;

        jQuery(document).ready(function() {
            var mygrid = jQuery("#mobile_incidents").jqGrid({
                url: 'list.php?q=e',
                editurl: 'edit.php',
                datatype: "json",
                // note: all column names are required even though some columns are hidden
                colNames: ['Rec#', 'Date', 'Line', 'Photo'],
                colModel: [
                    { name: 'id', index: 'id', editable: true,
                      editoptions: { readonly: 'readonly' } },
                    { name: 'mobile_discoveryDate', index: 'mobile_discoveryDate',
                      sortable: false, editable: true, edittype: 'text', formatter: 'date',
                      formatoptions: { srcformat: 'Y/m/d', newformat: 'm/d/Y' },
                      editoptions: { size: 12, maxlength: 10,
                          dataInit: function(element) {
                              $(element).blur();
                              $(element).datepicker({ dateFormat: 'mm/dd/yyyy' })
                          } } },
                    { name: 'mobile_lineName', index: 'mobile_lineName', editable: true, sortable: false },
                    { name: 'mobile_photo_name', index: 'mobile_photo_name', editable: false, sortable: false }
                ],
                pager: '#mobile_incidents_pager',
                altRows: false,
                rowNum: 10,
                rowList: [10, 20],
                imgpath: '../include/images/jqgrid',
                viewrecords: true,
                emptyrecords: 'No submissions found!',
                height: 260,
                sortname: 'id',
                sortorder: 'desc',
                gridview: true,
                scrollrows: true,
                autowidth: true,
                rownumbers: false,
                multiselect: false,
                subGrid: false,
                caption: ''
            })
            .navGrid('#mobile_incidents_pager',
                // params:
                { add: false, edit: true, del: false, search: false, view: false, refresh: true,
                  alertcap: ' to edit:', alerttext: ' . . . click on a row to highlight' },
                // edit params:
                { top: 50, left: 5, editCaption: 'Edit Submission', bSubmit: 'Approve/Save',
                  closeAfterEdit: true,
                  afterSubmit: function(r, data) { return afterSubmit(r, data, 'edit'); } },
                {}, // add params
                {}, // delete params
                // search params:
                { multipleSearch: false },
                // view params:
                { top: 150, left: 5, caption: 'View Mobile Rail Submission' }
            );
        });

    Read the article

  • How to read a LARGE SQLite file to be copied from the assets folder into an Android emulator or device?

    - by Peter SHINe
    I guess many people have already read the article "Using your own SQLite database in Android applications": http://www.reigndesign.com/blog/using-your-own-sqlite-database-in-android-applications/comment-page-2/#comment-12368 However, it keeps throwing an IOException at

        while ((length = myInput.read(buffer)) > 0) {
            myOutput.write(buffer, 0, length);
        }

    I'm trying to use a large DB file. It's as big as 8MB. I built it using sqlite3 in Mac OS X, inserted UTF-8 encoded strings (I am using Korean), and added the android_metadata table with ko_KR as the locale, as instructed above. However, when I debug, it keeps showing an IOException at length = myInput.read(buffer). I suspect it's caused by trying to read a big file; if not, I have no clue why. I tested the same code with a much smaller text file and it worked fine. Can anyone help me out on this? I've searched many places, but none gave me a clear answer or a good solution ("good" meaning efficient or easy). I will try BufferedInput(Output)Stream, but if the simpler version cannot work, I don't think that will work either. Can anyone explain the fundamental limits of file input/output in Android, and possibly the right way around them? I will really appreciate anyone's considerate answer. Thank you. WITH MORE DETAIL:

        private void copyDataBase() throws IOException {
            // Open your local db (shipped in assets) as the input stream
            InputStream myInput = myContext.getAssets().open(DB_NAME);

            // Path to the just-created empty db
            String outFileName = DB_PATH + DB_NAME;

            // Open the empty db as the output stream
            OutputStream myOutput = new FileOutputStream(outFileName);

            // Transfer bytes from the input file to the output file
            byte[] buffer = new byte[1024];
            int length;
            while ((length = myInput.read(buffer)) > 0) {
                myOutput.write(buffer, 0, length);
            }

            // Close the streams
            myOutput.flush();
            myOutput.close();
            myInput.close();
        }

    Read the article

  • What is the best way to iterate over a large result set in JDBC/iBatis 3?

    - by paul_sns
    We're trying to iterate over a large number of rows from the database and convert them into objects. The behavior will be as follows: the result will be sorted by sequence id, and a new object will be created when the sequence id changes. The created object will be sent to an external service, which will sometimes have to wait before the next one is sent (which means the next set of data will not be used immediately). We already have invested code in iBatis 3, so an iBatis solution would be the best approach for us (we've tried using RowBounds but haven't seen how it does the iteration under the hood). We'd like to balance minimizing memory usage against reducing the number of DB trips. We're also open to a pure JDBC approach, but we'd like the solution to work on different databases. UPDATE: We need to make as few calls to the DB as possible (one call would be the ideal scenario) while also preventing the application from using too much memory. Are there any other solutions out there for this type of problem, whether pure JDBC or any other technology? Thanks, and I hope to hear your insights on this.

    Read the article

  • Does Lucene's search function work on large documents?

    - by shaon-fan
    Hi there, I have a problem when searching with Lucene. First, the indexing function works well even on huge documents, such as a .pst file (the Outlook mail store): it can build an index that includes all the information in the .pst. The only problem is that the index is sometimes very large and includes very many words. When I search with Lucene, it only processes the front part of the indexed document; if a word only occurs in the back part, it is not found and there are no hits in the result. But when, while debugging, I crudely split the document into several parts and search each part, it works fine. So I want to know how to split a document for indexing, and what the size limit for searching is. Cheers, and waiting for a reply.

    Update: following what Coady said, I set the length to the maximum of 2^31-1, but the search results still don't include what I want. Briefly, I convert the document's words to a string array to analyze; one document has 79,680 words including spaces and symbols. When I search for a certain word it returns only 300 hits, although there are actually more than 300 results. For the same reason, when I search for a word in the back part of the document it still isn't found.

        // set the length
        indexWriter.SetMaxFieldLength(2147483647);

        // search
        IndexSearcher searcher = new IndexSearcher(Program.Parameters["INDEX_LOCATION"].ToString());
        Hits hits = searcher.Search(query);

    This is my code, the same as everyone else's. I ran into the problem when I needed to count every word's hits in a document, and found it also couldn't find words in the back part of the document. Please help me figure this out: is there a searcher length setting somewhere? How have you dealt with this problem?

    Read the article

  • Suggested (simple) approach for drawing large numbers of visual elements in WPF?

    - by Ender
    I'm writing an interface that features a large (~50,000 px wide) "canvas"-type area used to display a lot of data in a fairly novel way. This involves lots of lines, rectangles, and text. The user can scroll around to explore the entire canvas. At the moment I'm just using a standard Canvas panel with various Shapes placed on it. This is nice and easy to do: construct a shape, assign some coordinates, and attach it to the Canvas. Unfortunately, it's pretty slow (constructing the children, not the actual rendering). I've looked into some alternatives, and it's a bit intimidating. I don't need anything fancy - just the ability to efficiently construct and place objects in a coordinate plane. If all I get are lines, colored rectangles, and text, I'll be happy. Do I need Geometry instances inside GeometryGroups inside GeometryDrawings inside some Panel container? Note: I'd like to include text and graphics (i.e. colored rectangles) in the same space, if possible.

    Read the article

  • How to parse large XML files on Google App Engine?

    - by Alon Carmel
    Hey, I have a fairly large XML file, 1MB in size, that I host on S3. I need to parse that XML file into my App Engine datastore in its entirety. I have written a simple DOM parser that works fine locally, but online it hits the 30-second request limit and stops. I tried to reduce the parsing work by first downloading the XML file into a blob and then parsing it from the blob, but blobs are limited to 1MB, so that fails. I also have multiple inserts to the datastore, which cause it to exceed the 30 seconds. I saw somewhere a recommendation to use the Mapper class and save state wherever the process stopped, but as I'm a Python n00b I can't figure out how to implement it with a DOM parser or a SAX one (please provide an example?). Right now I'm doing a pretty bad thing: I parse the XML using PHP outside App Engine and push the data via HTTP POST to App Engine using a proprietary API. That works fine but is stupid and makes me maintain two codebases. Can you please help me out?
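
    Since a SAX example was asked for, here is a minimal streaming sketch; the "item" element name, its child elements, and the record callback are hypothetical placeholders. A streaming handler avoids building the whole DOM in memory and lets the caller batch datastore writes per record (or checkpoint progress between batches).

        # Minimal SAX sketch; "item" and its child elements are made-up names
        # standing in for whatever the real feed looks like.
        import xml.sax

        class ItemHandler(xml.sax.ContentHandler):
            def __init__(self, on_record):
                xml.sax.ContentHandler.__init__(self)
                self.on_record = on_record   # called once per completed record
                self.current = None
                self.field = None
                self.buf = []

            def startElement(self, name, attrs):
                if name == "item":
                    self.current = {}
                elif self.current is not None:
                    self.field = name
                    self.buf = []

            def characters(self, content):
                if self.field is not None:
                    self.buf.append(content)

            def endElement(self, name):
                if name == "item":
                    self.on_record(self.current)   # e.g. queue a datastore put()
                    self.current = None
                elif name == self.field:
                    self.current[name] = "".join(self.buf).strip()
                    self.field = None

        def parse_feed(source, on_record):
            # source can be a filename or a file-like object (e.g. a fetched stream)
            xml.sax.parse(source, ItemHandler(on_record))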

    Read the article

  • WCF code generation for large/complex schema (HR-XML/OAGIS) - is there an alternative?

    - by Sasha Borodin
    Hello, and thank you for reading. I am implementing a WCF service based on a predefined specification (HR-XML 3.0). As such, I am starting with the schema and working my way back to code. There are a number of large schema documents (which import yet more schema documents) related to my implementation, provided by this specification. I am able to generate code using xsd.exe by supplying the "main" and "supporting" xsd files as arguments, but there are several issues, and I am wondering if this is the right approach:

      - there are literally hundreds of classes; the code file is half a meg in size
      - there are duplicate classes (e.g. Type and Type1, which both represent the same type)
      - there are classes declared as inheriting from a base class, but that base class is not generated/defined

    I understand that there are limitations to the types of schema supported by svcutil.exe/xsd.exe when targeting the DataContractSerializer and even the XmlSerializer. My question is two-fold:

      1. Are code generation "issues" fairly common when dealing with larger, modular xsd files? Has anyone had success generating data contracts from OAGIS or HR-XML schema?
      2. Given the above issues, are there better approaches to this task that avoid generating code and working with concrete objects? Does it make better sense to read and compose a SOAP message directly, while still taking advantage of the rest of the WCF framework? I understand that I am losing the convenience of working with .NET objects and the framework-provided (de)serialization; given these losses, would it still be advantageous to base my service on WCF? Is there some "middle ground" between working with .NET types and pure XML?

    Thank you very much! -Sasha Borodin DFWHC.org

    Read the article

  • jQuery AJAX: How to pass large chunks of HTML as parameters?

    - by marknt15
    Hello, how can I pass a large chunk of HTML markup to my PHP script using jQuery AJAX? When I receive the result it is wrong. Thanks in advance. Cheers, Mark

    jQuery AJAX code:

        $('#saveButton').click(function() {
            // do AJAX and store tree structure to a PHP array (to be saved later in database)
            var treeInnerHTML = $("#demo_1").html();
            alert(treeInnerHTML);
            var ajax_url = 'ajax_process.php';
            var params = 'tree_contents=' + treeInnerHTML;
            $.ajax({
                type: 'POST',
                url: ajax_url,
                data: params,
                success: function(data) {
                    $("#show_tree").html(data);
                },
                error: function(req, status, error) {
                }
            });
        });

    treeInnerHTML actual value:

        <ul class="ltr">
          <li id="phtml_1" class="open"><a href="#"><ins>&nbsp;</ins>Root node 1</a>
            <ul>
              <li class="leaf" id="phtml_2"><a href="#"><ins>&nbsp;</ins>Child node 1</a></li>
              <li class="last leaf" id="phtml_3"><a href="#"><ins>&nbsp;</ins>Child node 2</a></li>
            </ul>
          </li>
          <li id="phtml_5" class="file last leaf"><a href="#"><ins>&nbsp;</ins>Root node 2</a></li>
        </ul>

    Returned result in my show_tree div:

        <ul class="\&quot;ltr\&quot;">
          <li id="\&quot;phtml_1\&quot;" class="\&quot;open\&quot;"><a href="%5C%22#%5C%22"><ins></ins></a></li>
        </ul>

    Read the article

  • Best practice for inserting large chunks of HTML into elements with JavaScript?

    - by hamstar
    Hey guys. I'm building a web application (using Prototype) at the moment that requires the addition of large chunks of HTML into the DOM. Most of these are rows that contain elements with all manner of attributes. Currently I keep a blank row of HTML in a variable and substitute into it:

        var blankRow = '<tr><td>'
            +'<a href="{LINK}" onclick="someFunc(\'{STRING}\');">{WORD}</a>'
            +'</td></tr>';

        function insertRow(o) {
            newRow = blankRow
                .sub('{LINK}', o.link)
                .sub('{STRING}', o.string)
                .sub('{WORD}', o.word);
            $('tbodyElem').insert( newRow );
        }

    Now that works all well and dandy, but is it best practice? I have to update the code in blankRow whenever I update code on the page, so that the newly inserted elements stay the same. It gets painful when I have something like 40 lines of HTML to go into blankRow, and then I have to escape it too. Is there an easier way? I was thinking of URL-encoding it and then decoding it before insertion, but that would still mean a blankRow and lots of escaping. What would be nice would be a heredoc-style construct a la PHP et al.:

        $blankRow = <<<EOF
        text text
        EOF;

    That would mean no escaping, but it would still need a blankRow. What do you do in this situation?

    Read the article

  • Is there a faster way to parse through a large file with regex?

    - by Ray Eatmon
    Problem: I have a very, very large file that I need to parse line by line to get 3 values from each line. Everything works, but it takes a long time to parse through the whole file. Is it possible to do this within seconds? The typical time it takes is between 1 and 2 minutes. An example file size is 148,208 KB. I am using regex to parse through every line. Here is my C# code:

        private static void ReadTheLines(int max, Responder rp, string inputFile)
        {
            List<int> rate = new List<int>();
            double counter = 1;
            try
            {
                using (var sr = new StreamReader(inputFile, Encoding.UTF8, true, 1024))
                {
                    string line;
                    Console.WriteLine("Reading....");
                    while ((line = sr.ReadLine()) != null)
                    {
                        if (counter <= max)
                        {
                            counter++;
                            rate = rp.GetRateLine(line);
                        }
                        else if (max == 0)
                        {
                            counter++;
                            rate = rp.GetRateLine(line);
                        }
                    }
                    rp.GetRate(rate);
                    Console.ReadLine();
                }
            }
            catch (Exception e)
            {
                Console.WriteLine("The file could not be read:");
                Console.WriteLine(e.Message);
            }
        }

    Here is my regex:

        public List<int> GetRateLine(string justALine)
        {
            const string reg = @"^\d{1,}.+\[(.*)\s[\-]\d{1,}].+GET.*HTTP.*\d{3}[\s](\d{1,})[\s](\d{1,})$";
            Match match = Regex.Match(justALine, reg, RegexOptions.IgnoreCase);

            // Here we check the Match instance.
            if (match.Success)
            {
                // Finally, we get the Group value and display it.
                string theRate = match.Groups[3].Value;
                Ratestorage.Add(Convert.ToInt32(theRate));
            }
            else
            {
                Ratestorage.Add(0);
            }
            return Ratestorage;
        }

    Here is an example line to parse (there are usually around 200,000 lines):

        10.10.10.10 - - [27/Nov/2002:16:46:20 -0500] "GET /solr/ HTTP/1.1" 200 4926 789

    Read the article

  • Which is faster for large "for" loop: function call or inline coding?

    - by zaplec
    Hi, I have written an embedded application (using C, of course) and now I'm considering ways to improve the system's running time. The most important single module in my system is one very large nested for loop module. It consists of two nested for loops that loop up to 122,500 times. That's not very much by itself, but the problem is that inside that nested for loop there is a call to a function located in another source file. That specific function consists mostly of two more nested for loops, which always loop 22,500 times. So I have to make that function call 122,500 times. I have made the called function a lot lighter and shorter (it still works as it should), and now I've started to wonder: would it be faster to remove the function call and write that code directly inside the first two for loops? The processor in the system is an ARM7TDMI and its frequency is 55 MHz. The system itself isn't very time critical, so it doesn't have to be real-time capable; however, the faster it can process its duties the better. Also, would it be faster to use while loops instead of fors? Any advice on how to improve the running time is appreciated.

    Read the article

  • What hash algorithms are parallelizable? Optimizing the hashing of large files on multi-core CPUs

    - by DanO
    I'm interested in optimizing the hashing of some large files (optimizing wall clock time). The I/O has been optimized well enough already and the I/O device (local SSD) is only tapped at about 25% of capacity, while one of the CPU cores is completely maxed-out. I have more cores available, and in the future will likely have even more cores. So far I've only been able to tap into more cores if I happen to need multiple hashes of the same file, say an MD5 AND a SHA256 at the same time. I can use the same I/O stream to feed two or more hash algorithms, and I get the faster algorithms done for free (as far as wall clock time). As I understand most hash algorithms, each new bit changes the entire result, and it is inherently challenging/impossible to do in parallel. Are any of the mainstream hash algorithms parallelizable? Are there any non-mainstream hashes that are parallelizable (and that have at least a sample implementation available)? As future CPUs will trend toward more cores and a leveling off in clock speed, is there any way to improve the performance of file hashing? (other than liquid nitrogen cooled overclocking?) or is it inherently non-parallelizable?
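
    For context, the mainstream Merkle-Damgård hashes (MD5, SHA-1, SHA-2) are sequential by design: each block's compression depends on the previous block's output, so a single digest cannot easily be spread across cores. One common workaround is a chunked "hash of hashes" scheme, similar in spirit to tree hashing. Below is a rough Python sketch with made-up chunk size and worker count; note that the result is NOT interchangeable with the plain single-pass digest of the same file.

        # Hash fixed-size chunks on a pool of worker processes, then hash the
        # concatenated chunk digests. Parallel across cores, but it defines a
        # different digest than hashing the file in one pass.
        import hashlib
        import os
        from concurrent.futures import ProcessPoolExecutor

        CHUNK = 8 * 1024 * 1024  # 8 MiB per chunk (tunable)

        def _hash_chunk(args):
            path, offset = args
            h = hashlib.sha256()
            with open(path, "rb") as f:
                f.seek(offset)
                h.update(f.read(CHUNK))
            return h.digest()

        def parallel_file_hash(path, workers=4):
            size = os.path.getsize(path)
            offsets = [(path, o) for o in range(0, size, CHUNK)]
            top = hashlib.sha256()
            with ProcessPoolExecutor(max_workers=workers) as pool:
                # map() yields results in input order, so the digest is deterministic
                for digest in pool.map(_hash_chunk, offsets):
                    top.update(digest)
            return top.hexdigest()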

    Read the article

  • Web page for iPhone - Large fonts instead of scrolling?

    - by chris_l
    I'm planning the layout of my web page, which should also be usable on the iPhone. I don't really have much experience with the iPhone yet - I just installed the iPhone Simulator on my Mac. The page's contents are flexible, so I think it would be better to use that flexibility to avoid making the user scroll around the entire page. In particular I have:

      - a header and a sidebar that will be used all the time to perform several actions, and
      - a main content area with a number of elements (e.g. images).

    The UI would stay quite usable if the number of elements shown at one time were reduced for a small screen (e.g. by JavaScript). It would also be okay to make the main content area scrollable (as opposed to the entire page). The problem: if I simply display the page on the iPhone, it uses an extremely small font size, so users must zoom in first and then scroll around - which means they can't see the header and sidebar all the time. What's the best way to deal with this situation?

      1. Just leave it this way (very small fonts), because users expect that behaviour on the iPhone?
      2. Increase the font size (by specifying it in em or px, or with xx-large - what would be the best way?) if I detect - somehow - that the page is being displayed on the iPhone.
      3. Or is there some way to restrict the viewport size to the screen size, and make it zoom in automatically? I think that would be the easiest solution in my case.
      4. Or ...?

    Read the article
