Search Results

Search found 38088 results on 1524 pages for 'large scale project'.


  • What's a good way to move large numbers of files from one place to another, using their file paths?

    - by user165253
    I'm going to move my Winamp library from its current location (various folders inside My Documents) to My Music, but I can't just drag and drop them: there are thousands of files within My Documents that I don't want moved. I can get the path of every single music file from Winamp, but I don't know a way to move them all. I'd also like to maintain their current folder arrangement rather than dump all the files, unorganised, into a single folder.
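    A small script is one way to do this. Below is a minimal C# sketch, assuming you export Winamp's file list to a plain text file with one full path per line; the three paths at the top are hypothetical placeholders:

    using System;
    using System.IO;

    class MoveLibrary
    {
        static void Main()
        {
            string listFile = @"C:\temp\winamp-paths.txt";   // exported list, one path per line
            string sourceRoot = @"C:\Users\me\Documents";    // where the files live now
            string destRoot = @"C:\Users\me\Music";          // where they should go

            foreach (string path in File.ReadAllLines(listFile))
            {
                if (!File.Exists(path)) continue;
                if (!path.StartsWith(sourceRoot, StringComparison.OrdinalIgnoreCase)) continue;

                // Rebuild the same folder layout under the destination.
                string relative = path.Substring(sourceRoot.Length).TrimStart('\\');
                string target = Path.Combine(destRoot, relative);
                Directory.CreateDirectory(Path.GetDirectoryName(target));
                File.Move(path, target); // throws if the target already exists
            }
        }
    }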

    Read the article

  • Home server running 2008 R2 intermittently bringing our internet down by creating a large number of connections [closed]

    - by Philip Strong
    Possible Duplicate: My server's been hacked EMERGENCY. My Server 2008 R2 home server is intermittently (every 30 minutes or so, for about 3-4 minutes) opening a huge number of connections, which reaches the 4096-connection limit and effectively DoSes our internet connection. I've run a couple of network traffic monitors, and it appears to be a system process causing the problem. I thought I'd fixed it by reinstalling Comodo Antivirus, but it appears that wasn't the problem. Any thoughts? Thanks in advance.

    Read the article

  • Internet Explorer 9 Preview 2 link + webcasts for developers

    - by Eric Nelson
    At Web Directions last week in London (10th and 11th June 2010) I promised several folks I would put up a blog post pointing to more information on IE 9.0. True to my word (albeit a little later than I had hoped), here is what I was thinking of.

    Install: First up, install Preview 2 and try out the demos I was showing at the conference. Remember that the IE9 Preview installs side by side with IE8/7 etc. It is not a beta, nor is it intended to be a full browser. It is a … preview :-) Including good old SVG-oids :-)

    Learn: Then check out the following webcasts, recorded in March this year at MIX. Each session page offers the slides for download and videos as MP4, small WMV, and large WMV.

    - In-Depth Look At Internet Explorer 9, Ted Johnson & John Hrvatin: http://live.visitmix.com/MIX10/Sessions/CL28
    - High Performance Best Practices For Web Sites, Jason Weber: http://live.visitmix.com/MIX10/Sessions/CL29
    - HTML5: Cross Browser Best Practices, Tony Ross: http://live.visitmix.com/MIX10/Sessions/CL27
    - Internet Explorer Developer Tools, Jon Seitel: http://live.visitmix.com/MIX10/Sessions/FT51
    - SVG: The Past, Present And Future of Vector Graphics For The Web, Patrick Dengler & Doug Schepers: http://live.visitmix.com/MIX10/Sessions/EX30
    - Day 2 Keynote containing IE9, Dean Hachamovitch: http://live.visitmix.com/MIX10/Sessions/KEY02

    Read the article

  • ASP.NET MVC RememberMe (it's long, please don't quit reading; the problem is explained in detail)

    - by nccsbim071
    After searching a lot I did not find any answers, so I am bringing this to you. Below I explain my problem in detail. It's long, so please don't quit reading; I have explained it in plain language.

    I am developing an ASP.NET MVC project using the standard ASP.NET roles and membership. Everything works fine except the remember-me functionality, which doesn't work at all. Here are all the details; I hope you can help me solve this.

    I simply need this: users log in to the web application either with remember me or without it. If a user logs in with remember me checked, I want the browser to remember them for a long time, say at least one year, the way www.dotnetspider.com, www.codeproject.com, www.daniweb.com and many other sites do. If a user logs in without remember me, the site should remain accessible for some 20-30 minutes and after that their session should expire. Their session should also expire when the user shuts down the browser without logging out.

    Note: I have successfully implemented the above functionality in other projects without the standard ASP.NET roles and membership, by creating my own user tables, authenticating against them, and setting the cookie and session myself. But for this project we used the standard roles and membership from the beginning. We thought it would work, and only when everything was built and we began testing did it fail; we cannot now replace the standard roles and membership with my own custom user tables and all that stuff, you understand what I am talking about. Either there is some kind of bug in the standard roles and membership functionality, or I have the whole concept wrong. What I want, as stated above, seems very simple and reasonable.

    What I did: a login form with username, password and remember-me fields. My setting in web.config:

    <authentication mode="Forms">
        <forms loginUrl="~/Account/LogOn" timeout="2880"/>
    </authentication>

    In my controller action I have this:

    FormsAuth.SignIn(userName, rememberMe);

    public void SignIn(string userName, bool createPersistentCookie)
    {
        FormsAuthentication.SetAuthCookie(userName, createPersistentCookie);
    }

    Now the problems are the following. Users can log in successfully, and their session lasts as many minutes as the timeout value in web.config specifies. If I set the timeout to 5 minutes, the session expires after 5 minutes; that's OK, and the sliding expiration on the timeout also works fine. But if the user closes and reopens the browser, they can still enter the website without logging in, until the time specified in "timeout" has passed. And if the user logs in with remember me checked, the session still expires after 5 minutes. That is not good behaviour, is it? With remember me checked, the user should be remembered for a long time, until they log out of the system or manually delete the cookies from the browser. Without remember me, the session should expire after the timeout period specified in web.config, and also when the user closes the browser.

    I searched the internet a lot on this topic but could not find a solution. Scott Gu's blog post on exactly this subject (http://weblogs.asp.net/scottgu/archive/2005/11/08/430011.aspx) has users complaining about the same thing in the comments, but no easy solution is given there. I also read http://geekswithblogs.net/vivek/archive/2006/09/14/91191.aspx. I guess this is a problem for lots of users, as seen from Mr. Scott Gu's post. Your help will be really appreciated. Thanks in advance.
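    For reference, the behaviour described matches how forms authentication works from .NET 2.0 on: the timeout attribute caps the ticket lifetime for persistent and session cookies alike. A commonly suggested workaround, sketched below under that assumption (the one-year lifetime is an arbitrary choice, and this has not been verified against this exact project), is to issue the persistent ticket manually:

    using System;
    using System.Web;
    using System.Web.Security;

    public static class FormsAuth
    {
        public static void SignIn(string userName, bool createPersistentCookie)
        {
            if (createPersistentCookie)
            {
                // Build the ticket ourselves so "remember me" can outlive
                // the timeout value configured in web.config.
                var ticket = new FormsAuthenticationTicket(
                    1, userName, DateTime.Now, DateTime.Now.AddYears(1),
                    true, String.Empty, FormsAuthentication.FormsCookiePath);

                var cookie = new HttpCookie(FormsAuthentication.FormsCookieName,
                                            FormsAuthentication.Encrypt(ticket));
                cookie.Expires = ticket.Expiration;
                cookie.Path = FormsAuthentication.FormsCookiePath;
                HttpContext.Current.Response.Cookies.Add(cookie);
            }
            else
            {
                // No Expires set: the cookie dies with the browser, and the
                // server still honours the configured timeout.
                FormsAuthentication.SetAuthCookie(userName, false);
            }
        }
    }

    If a session cookie still appears to survive a browser restart, that is usually the browser's session-restore feature rather than anything ASP.NET does.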

    Read the article

  • Loading a datagrid with large amounts of data in Silverlight?

    - by JD
    Hi, I am breaking my project into small sections, and one of them involves loading a grid with possibly lots of records (up to thousands of records in the database). Ideally I would like some mechanism whereby, as the user scrolls the grid, more data is retrieved. I have read that certain controls (DataPager with RIA) do this, but I would like to know how I could implement it myself, or do something similar. I was thinking of first loading 50 records at a time, and when the user scrolls near the 50th record, fetching another 50, and so on. I am not sure how to do this, and it may not be the right approach. Alternatively I could load only the IDs of the records into the grid and have each row load itself via an async call, but then I am hitting my database once per record. Thanks, JD.
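    One way to get that scroll-driven paging is sketched below: keep an ObservableCollection bound to the grid and append a page whenever the user nears the end of what is loaded. The Record type and the loadPageAsync delegate are hypothetical stand-ins, since the poster's data layer isn't shown; in Silverlight the delegate would wrap an async WCF or RIA Services call.

    using System;
    using System.Collections.ObjectModel;

    // Hypothetical record type; substitute your own entity.
    public class Record { public int Id { get; set; } }

    public class IncrementalLoader
    {
        private const int PageSize = 50;
        private int _loadedCount;
        private bool _loading;
        private readonly Action<int, int, Action<Record[]>> _loadPageAsync;

        public ObservableCollection<Record> Items { get; private set; }

        // loadPageAsync(skip, take, callback) is a stand-in for your data service.
        public IncrementalLoader(Action<int, int, Action<Record[]>> loadPageAsync)
        {
            Items = new ObservableCollection<Record>();
            _loadPageAsync = loadPageAsync;
        }

        // Call from the grid's scroll handler when the user is within,
        // say, 10 rows of the last loaded record.
        public void EnsurePageFor(int visibleRowIndex)
        {
            if (_loading || visibleRowIndex < _loadedCount - 10) return;
            _loading = true;
            _loadPageAsync(_loadedCount, PageSize, records =>
            {
                foreach (var r in records) Items.Add(r);
                _loadedCount += records.Length;
                _loading = false;
            });
        }
    }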

    Read the article

  • What is the best way to partition large tables in SQL Server?

    - by RyanFetz
    In a recent project the "lead" developer designed a database schema where "larger" tables would be split across two seperate databases with a view on the main database which unioned the two seperate database-tables together. The main database is what the application was driven off of so these tables looked and felt like ordinary tables (except some quirkly things around updating). This seemed like a HUGE performance problem. We do see problems with performance around these tables but nothing to make him change his mind about his design. Just wondering what is the best way to do this, or if it is even worth doing?

    Read the article

  • How to show a large JPEG image to the right in jqGrid's edit form?

    - by cLee
    Is it possible to show a large (i.e. bigger than a thumbnail) JPEG image on the right-hand side of jqGrid's edit form? Users want to look at a photo while entering data into fields; they are describing things in the photo. I'm sure all things are possible with jQuery, but I don't know where to begin. Thanks. The code:

    function afterSubmit(r, data, action) {
        // if session timeout returned:
        if (r.responseText == "logout") {
            window.location = '../scripts/logout.php';
        }
        // if an error message is returned:
        if (r.responseText != "") {
            $('#submit_errors').html('Alert:' + r.responseText + '');
            $('#submit_errors').slideDown(); // show div with error message
            // hide error div after 10 seconds
            window.setTimeout(function() { $('#submit_errors').slideUp(); }, 10000);
            return false; // don't remove this!
        }
        return true; // don't remove this!
    }

    var lastsel;

    jQuery(document).ready(function() {
        var mygrid = jQuery("#mobile_incidents").jqGrid({
            url: 'list.php?q=e',
            editurl: 'edit.php',
            datatype: "json",
            // note: all column names are required even though some columns are hidden
            colNames: ['Rec#', 'Date', 'Line', 'Photo'],
            colModel: [
                { name: 'id', index: 'id', editable: true,
                  editoptions: { readonly: 'readonly' } },
                { name: 'mobile_discoveryDate', index: 'mobile_discoveryDate',
                  sortable: false, editable: true, edittype: 'text',
                  formatter: 'date',
                  formatoptions: { srcformat: 'Y/m/d', newformat: 'm/d/Y' },
                  editoptions: { size: 12, maxlength: 10,
                      dataInit: function(element) {
                          $(element).blur();
                          $(element).datepicker({ dateFormat: 'mm/dd/yyyy' });
                      } } },
                { name: 'mobile_lineName', index: 'mobile_lineName',
                  editable: true, sortable: false },
                { name: 'mobile_photo_name', index: 'mobile_photo_name',
                  editable: false, sortable: false }
            ],
            pager: '#mobile_incidents_pager',
            altRows: false,
            rowNum: 10,
            rowList: [10, 20],
            imgpath: '../include/images/jqgrid',
            viewrecords: true,
            emptyrecords: 'No submissions found!',
            height: 260,
            sortname: 'id',
            sortorder: 'desc',
            gridview: true,
            scrollrows: true,
            autowidth: true,
            rownumbers: false,
            multiselect: false,
            subGrid: false,
            caption: ''
        })
        .navGrid('#mobile_incidents_pager',
            // params:
            { add: false, edit: true, del: false, search: false, view: false,
              refresh: true, alertcap: ' to edit:',
              alerttext: ' . . . click on a row to highlight' },
            // edit params:
            { top: 50, left: 5, editCaption: 'Edit Submission',
              bSubmit: 'Approve/Save', closeAfterEdit: true,
              afterSubmit: function(r, data) { return afterSubmit(r, data, 'edit'); } },
            {}, // add params
            {}, // delete params
            { multipleSearch: false }, // search params
            { top: 150, left: 5, caption: 'View Mobile Rail Submission' } // view params
        );
    });

    Read the article

  • How to read a LARGE SQLite file to be copied into the Android emulator or device from the assets folder?

    - by Peter SHINe
    I guess many people have already read the article Using your own SQLite database in Android applications: http://www.reigndesign.com/blog/using-your-own-sqlite-database-in-android-applications/comment-page-2/#comment-12368. However, it keeps throwing an IOException at:

    while ((length = myInput.read(buffer)) > 0) {
        myOutput.write(buffer, 0, length);
    }

    I'm trying to use a large DB file, as big as 8 MB. I built it using sqlite3 on Mac OS X, inserted UTF-8 encoded strings (I am using Korean), and added the android_meta table with ko_KR as locale, as instructed in the article. However, when I debug, it keeps showing an IOException at length = myInput.read(buffer). I suspect it's caused by trying to read a big file; if not, I have no clue why. I tested the same code with a much smaller text file and it worked fine. Can anyone help me out? I've searched many places, but none gave me a clear answer or a good solution, meaning one that is efficient or easy. I will try BufferedInputStream/BufferedOutputStream, but if the simpler version cannot work, I don't think that will either. Can anyone explain the fundamental limits of file input/output in Android, and possibly the right way around them? I will really appreciate any considerate answer. Thank you. In more detail:

    private void copyDataBase() throws IOException {
        // Open your local db as the input stream
        InputStream myInput = myContext.getAssets().open(DB_NAME);

        // Path to the just created empty db
        String outFileName = DB_PATH + DB_NAME;

        // Open the empty db as the output stream
        OutputStream myOutput = new FileOutputStream(outFileName);

        // transfer bytes from the input file to the output file
        byte[] buffer = new byte[1024];
        int length;
        while ((length = myInput.read(buffer)) > 0) {
            myOutput.write(buffer, 0, length);
        }

        // Close the streams
        myOutput.flush();
        myOutput.close();
        myInput.close();
    }

    Read the article

  • What is the best way to iterate over a large result set in JDBC/iBatis 3?

    - by paul_sns
    We're trying to iterate over a large number of rows from the database and convert them into objects. The behavior will be as follows: the result will be sorted by sequence id, and a new object will be created whenever the sequence id changes. The created object will be sent to an external service, and we will sometimes have to wait before sending another one (which means the next set of data will not be used immediately). We have already invested code in iBatis 3, so an iBatis solution would be the best approach for us (we've tried RowBounds but haven't seen how it does the iteration under the hood). We'd like to balance minimizing memory usage against reducing the number of DB trips. We're also open to a pure JDBC approach, but we'd like the solution to work across different databases. UPDATE: we need to make as few calls to the DB as possible (one call would be the ideal scenario) while also preventing the application from using too much memory. Are there any other solutions out there for this type of problem, be it pure JDBC or any other technology? Thanks, and we hope to hear your insights on this.

    Read the article

  • Does Lucene's search function work on large documents?

    - by shaon-fan
    Hi there, I have a problem when searching with Lucene. The indexing function works well even on huge documents, such as a .pst file (the Outlook mail storage): it can build an index that includes all the information from the .pst. The only problem is that the document is sometimes too large and contains very many words, and when I search with Lucene it only seems to process the front part of the indexed document; if a word occurs in the back part, it is not found and there are no hits in the result. But when I split the document into several parts (in a crude way, while debugging) and search each part, it works well. So I want to know how to split the indexed content, and what size limit applies to searching? Cheers, and waiting for a reply.

    Following what Coady said, I set the length to the maximum, 2^31-1, but the search results still don't include what I want. Simply put, I convert the document's words into a string array to analyze; one document has 79,680 words including spaces and symbols. When I search for a certain word it returns a count of just 300, when actually there are more than 300 occurrences. For the same reason, a word in the back part of the document can't be found at all.

    // set the length
    indexWriter.SetMaxFieldLength(2147483647);

    // search
    IndexSearcher searcher = new IndexSearcher(Program.Parameters["INDEX_LOCATION"].ToString());
    Hits hits = searcher.Search(query);

    This is my code, the same as others use. I hit this problem when I needed to count every word's hits in a document, and found it couldn't find words in the back part. Is there a searcher length setting somewhere? How have you handled this problem?
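    For context, Lucene truncates each field at maxFieldLength terms when the document is indexed (the historical default is 10,000), so raising the limit only helps if it is set on the IndexWriter before documents are added, and existing documents must be re-indexed. A minimal Lucene.Net sketch, assuming 2.x-era APIs and a hypothetical docTexts collection:

    using Lucene.Net.Analysis.Standard;
    using Lucene.Net.Documents;
    using Lucene.Net.Index;
    using Lucene.Net.Store;

    var docTexts = new[] { "..." }; // hypothetical document contents

    // Raise the per-field term limit BEFORE adding documents, then re-index.
    var dir = FSDirectory.Open(new System.IO.DirectoryInfo(@"C:\index"));
    var writer = new IndexWriter(dir, new StandardAnalyzer(), true /* create */);
    writer.SetMaxFieldLength(int.MaxValue); // the default would truncate at 10,000 terms

    foreach (var text in docTexts)
    {
        var doc = new Document();
        doc.Add(new Field("content", text, Field.Store.NO, Field.Index.ANALYZED));
        writer.AddDocument(doc);
    }
    writer.Optimize();
    writer.Close();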

    Read the article

  • How to parse large XML files on Google App Engine?

    - by Alon Carmel
    Hey, I have a fairly large XML file, 1 MB in size, that I host on S3. I need to parse that XML file entirely into my App Engine datastore. I have written a simple DOM parser that works fine locally, but online it hits the 30-second limit and stops. I tried easing the parsing step by first downloading the XML file into a blob and then parsing it from the blob, but blobs are limited to 1 MB, so that fails. I have multiple inserts to the datastore, which causes the request to exceed the 30 seconds. I saw somewhere a recommendation to use the Mapper class and save some exception state recording where the process stopped, but as I am a Python noob I can't figure out how to implement it with a DOM parser or a SAX one (please provide an example of how to use it?). Right now I'm doing something pretty bad: I parse the XML using PHP outside App Engine and push the data via HTTP POST to App Engine through a proprietary API, which works fine but is stupid and makes me maintain two codebases. Can you please help me out?

    Read the article

  • Suggested (simple) approach for drawing large numbers of visual elements in WPF?

    - by Ender
    I'm writing an interface that features a large (~50,000 px wide) "canvas"-type area used to display a lot of data in a fairly novel way. This involves lots of lines, rectangles, and text. The user can scroll around to explore the entire canvas. At the moment I'm just using a standard Canvas panel with various Shapes placed on it. This is nice and easy to do: construct a shape, assign some coordinates, and attach it to the Canvas. Unfortunately, it's pretty slow (constructing the children, that is, not the actual rendering). I've looked into some alternatives, and it's a bit intimidating. I don't need anything fancy, just the ability to efficiently construct and place objects in a coordinate plane. If all I get are lines, colored rectangles, and text, I'll be happy. Do I need Geometry instances inside GeometryGroups inside GeometryDrawings inside some Panel container? Note: I'd like to include text and graphics (i.e. colored rectangles) in the same space, if possible.
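    One common low-overhead answer to this kind of question (a sketch, not necessarily what the poster ended up doing; the counts and sizes are arbitrary) is to skip Shape elements entirely and draw into a single DrawingVisual, which avoids per-element layout and event plumbing:

    using System.Globalization;
    using System.Windows;
    using System.Windows.Media;

    // Hosts one DrawingVisual that renders many rectangles and a text label.
    public class FastCanvas : FrameworkElement
    {
        private readonly DrawingVisual visual = new DrawingVisual();

        public FastCanvas()
        {
            AddVisualChild(visual);
            using (DrawingContext dc = visual.RenderOpen())
            {
                var pen = new Pen(Brushes.Black, 1.0);
                for (int i = 0; i < 10000; i++)
                {
                    double x = (i % 500) * 100.0;
                    double y = (i / 500) * 30.0;
                    dc.DrawRectangle(Brushes.LightBlue, pen, new Rect(x, y, 80, 20));
                }
                dc.DrawText(new FormattedText("label", CultureInfo.InvariantCulture,
                        FlowDirection.LeftToRight, new Typeface("Segoe UI"), 12.0,
                        Brushes.Black),
                    new Point(4, 4));
            }
        }

        protected override int VisualChildrenCount
        {
            get { return 1; }
        }

        protected override Visual GetVisualChild(int index)
        {
            return visual;
        }
    }

    Scrolling then comes cheaply by placing the element inside a ScrollViewer; set explicit Width/Height on the element so the ScrollViewer knows the drawn extent.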

    Read the article

  • jQuery AJAX: How to pass large HTML tags as parameters?

    - by marknt15
    Hello, how can I pass large HTML tag data to my PHP script using jQuery AJAX? When I receive the result it is wrong. Thanks in advance. Cheers, Mark. The jQuery AJAX code:

    $('#saveButton').click(function() {
        // do AJAX and store tree structure to a PHP array (to be saved later in database)
        var treeInnerHTML = $("#demo_1").html();
        alert(treeInnerHTML);
        var ajax_url = 'ajax_process.php';
        var params = 'tree_contents=' + treeInnerHTML;
        $.ajax({
            type: 'POST',
            url: ajax_url,
            data: params,
            success: function(data) {
                $("#show_tree").html(data);
            },
            error: function(req, status, error) {
            }
        });
    });

    The actual value of treeInnerHTML:

    <ul class="ltr">
        <li id="phtml_1" class="open"><a href="#"><ins>&nbsp;</ins>Root node 1</a>
            <ul>
                <li class="leaf" id="phtml_2"><a href="#"><ins>&nbsp;</ins>Child node 1</a></li>
                <li class="last leaf" id="phtml_3"><a href="#"><ins>&nbsp;</ins>Child node 2</a></li>
            </ul>
        </li>
        <li id="phtml_5" class="file last leaf"><a href="#"><ins>&nbsp;</ins>Root node 2</a></li>
    </ul>

    The result returned into my show_tree div:

    <ul class="\&quot;ltr\&quot;">
        <li id="\&quot;phtml_1\&quot;" class="\&quot;open\&quot;"><a href="%5C%22#%5C%22"><ins></ins></a></li>
    </ul>

    Read the article

  • WCF code generation for large/complex schema (HR-XML/OAGIS) - is there an alternative?

    - by Sasha Borodin
    Hello, and thank you for reading. I am implementing a WCF service based on a predefined specification (HR-XML 3.0). As such, I am starting with the schema and working my way back to code. The specification provides a number of large schema documents (which import yet more schema documents). I am able to generate code using xsd.exe by supplying the "main" and "supporting" xsd files as arguments, but there are several issues, and I am wondering if this is the right approach: there are literally hundreds of classes (the code file is half a megabyte in size); there are duplicate classes (e.g. Type and Type1, which both represent the same type); and some classes are declared as inheriting from a base class that is never generated/defined. I understand that there are limitations to the kinds of schema supported by svcutil.exe/xsd.exe when targeting the DataContractSerializer and even the XmlSerializer. My question is two-fold: Are code generation issues fairly common when dealing with larger, modular xsd files? Has anyone had success generating data contracts from OAGIS or HR-XML schema? Given the above issues, are there better approaches that avoid generating code and working with concrete objects? Does it make better sense to read and compose a SOAP message directly, while still taking advantage of the rest of the WCF framework? I understand that I would be losing the convenience of working with .NET objects and the framework-provided (de)serialization; given these losses, would it still be advantageous to base my service on WCF? Is there some middle ground between working with .NET types and pure XML? Thank you very much! -Sasha Borodin, DFWHC.org

    Read the article

  • Best practice for inserting large chunks of HTML into elements with JavaScript?

    - by hamstar
    Hey guys. I'm building a web application (using Prototype) at the moment that requires adding large chunks of HTML into the DOM. Most of these are rows containing elements with all manner of attributes. Currently I keep a blank row of HTML in a variable:

    var blankRow = '<tr><td>'
        + '<a href="{LINK}" onclick="someFunc(\'{STRING}\');">{WORD}</a>'
        + '</td></tr>';

    function insertRow(o) {
        var newRow = blankRow
            .sub('{LINK}', o.link)
            .sub('{STRING}', o.string)
            .sub('{WORD}', o.word);
        $('tbodyElem').insert(newRow);
    }

    Now that works all well and dandy, but is it best practice? I have to update the code in blankRow whenever I update code on the page, so that the newly inserted elements stay the same. It gets painful when I have some 40 lines of HTML to go in a blankRow, and then I have to escape it too. Is there an easier way? I was thinking of URL-encoding it and decoding it before insertion, but that would still mean a blankRow and lots of escaping. What would be neat is a heredoc (EOF) syntax à la PHP et al.:

    $blankRow = <<<EOF
    text text
    EOF;

    That would mean no escaping, but it would still need a blankRow. What do you do in this situation?

    Read the article

  • Which is faster for large "for" loop: function call or inline coding?

    - by zaplec
    Hi, I have written an embedded application (in C, of course) and am now considering ways to improve the system's running time. The most important single module in my system is one very large nested-loop module: two nested for loops that iterate up to 122,500 times in total. That is not very much by itself, but inside that nested loop there is a call to a function in another source file, and that function consists mostly of another pair of nested for loops which always iterate 22,500 times. So the function gets called 122,500 times. I have made the called function a lot lighter and shorter (it still works as it should), and now I am wondering: would it be faster to remove the function call and write its body directly inside the outer two for loops? The processor in the system is an ARM7TDMI running at 55 MHz. The system itself isn't very time-critical, so it doesn't have to be real-time capable, but the faster it can process its duties the better. Also, would while loops be faster than for loops? Any advice on improving the running time is appreciated. -zaplec

    Read the article

  • Is there a way to parse through a large file with regex quickly?

    - by Ray Eatmon
    Problem: a very, very large file that I need to parse line by line to get 3 values from each line. Everything works, but it takes a long time to parse through the whole file. Is it possible to do this within seconds? Typical time is between 1 and 2 minutes; an example file size is 148,208 KB. I am using regex to parse every line. Here is my C# code:

    private static void ReadTheLines(int max, Responder rp, string inputFile)
    {
        List<int> rate = new List<int>();
        double counter = 1;
        try
        {
            using (var sr = new StreamReader(inputFile, Encoding.UTF8, true, 1024))
            {
                string line;
                Console.WriteLine("Reading....");
                while ((line = sr.ReadLine()) != null)
                {
                    if (counter <= max)
                    {
                        counter++;
                        rate = rp.GetRateLine(line);
                    }
                    else if (max == 0)
                    {
                        counter++;
                        rate = rp.GetRateLine(line);
                    }
                }
                rp.GetRate(rate);
                Console.ReadLine();
            }
        }
        catch (Exception e)
        {
            Console.WriteLine("The file could not be read:");
            Console.WriteLine(e.Message);
        }
    }

    Here is my regex:

    public List<int> GetRateLine(string justALine)
    {
        const string reg = @"^\d{1,}.+\[(.*)\s[\-]\d{1,}].+GET.*HTTP.*\d{3}[\s](\d{1,})[\s](\d{1,})$";
        Match match = Regex.Match(justALine, reg, RegexOptions.IgnoreCase);

        // Here we check the Match instance.
        if (match.Success)
        {
            // Finally, we get the Group value and display it.
            string theRate = match.Groups[3].Value;
            Ratestorage.Add(Convert.ToInt32(theRate));
        }
        else
        {
            Ratestorage.Add(0);
        }
        return Ratestorage;
    }

    Here is an example line to parse, out of usually around 200,000 lines:

    10.10.10.10 - - [27/Nov/2002:16:46:20 -0500] "GET /solr/ HTTP/1.1" 200 4926 789
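    One commonly cited speed-up, sketched here (an illustration, not a verified fix for this exact workload): compile the pattern once with RegexOptions.Compiled and reuse a single Regex instance instead of calling the static Regex.Match per line, and give the StreamReader a larger buffer than 1 KB. The simplified pattern below is an assumption and should be checked against the real log format:

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Text;
    using System.Text.RegularExpressions;

    class RateParser
    {
        // Compiled once and reused for every line, instead of re-parsing
        // the pattern on each call to the static Regex.Match.
        private static readonly Regex LogLine = new Regex(
            @"^\d+.+\[(.*)\s-\d+].+GET.*HTTP.*\d{3}\s(\d+)\s(\d+)$",
            RegexOptions.Compiled | RegexOptions.IgnoreCase);

        static void Main(string[] args)
        {
            var rates = new List<int>();
            // A 64 KB buffer instead of 1 KB reduces I/O calls on a ~150 MB file.
            using (var sr = new StreamReader(args[0], Encoding.UTF8, true, 1 << 16))
            {
                string line;
                while ((line = sr.ReadLine()) != null)
                {
                    Match m = LogLine.Match(line);
                    rates.Add(m.Success ? int.Parse(m.Groups[3].Value) : 0);
                }
            }
            Console.WriteLine("Parsed {0} lines", rates.Count);
        }
    }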

    Read the article

  • What hash algorithms are parallelizable? Optimizing the hashing of large files utilizing multi-core CPUs

    - by DanO
    I'm interested in optimizing the hashing of some large files (optimizing wall clock time). The I/O has been optimized well enough already and the I/O device (local SSD) is only tapped at about 25% of capacity, while one of the CPU cores is completely maxed-out. I have more cores available, and in the future will likely have even more cores. So far I've only been able to tap into more cores if I happen to need multiple hashes of the same file, say an MD5 AND a SHA256 at the same time. I can use the same I/O stream to feed two or more hash algorithms, and I get the faster algorithms done for free (as far as wall clock time). As I understand most hash algorithms, each new bit changes the entire result, and it is inherently challenging/impossible to do in parallel. Are any of the mainstream hash algorithms parallelizable? Are there any non-mainstream hashes that are parallelizable (and that have at least a sample implementation available)? As future CPUs will trend toward more cores and a leveling off in clock speed, is there any way to improve the performance of file hashing? (other than liquid nitrogen cooled overclocking?) or is it inherently non-parallelizable?
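    For what it's worth, tree (Merkle-style) hashing is the standard parallel construction: split the stream into fixed-size blocks, hash the blocks on separate cores, then hash the concatenated digests. Note the result differs from a plain single-stream hash of the same file. A minimal C# sketch, with an arbitrary 4 MB block size and the simplifying assumption that the file fits in RAM:

    using System;
    using System.IO;
    using System.Linq;
    using System.Security.Cryptography;
    using System.Threading.Tasks;

    class TreeHash
    {
        // Hash each 4 MB block in parallel, then hash the concatenated digests.
        // NOTE: this yields a Merkle-style digest, NOT the same value as a
        // plain SHA-256 over the whole file.
        static byte[] HashFile(string path)
        {
            const int blockSize = 4 * 1024 * 1024;
            byte[] data = File.ReadAllBytes(path); // stream per-block for files larger than RAM
            int blocks = (data.Length + blockSize - 1) / blockSize;

            byte[][] digests = new byte[blocks][];
            Parallel.For(0, blocks, i =>
            {
                using (var sha = SHA256.Create())
                {
                    int offset = i * blockSize;
                    int count = Math.Min(blockSize, data.Length - offset);
                    digests[i] = sha.ComputeHash(data, offset, count);
                }
            });

            using (var sha = SHA256.Create())
                return sha.ComputeHash(digests.SelectMany(d => d).ToArray());
        }

        static void Main(string[] args)
        {
            Console.WriteLine(BitConverter.ToString(HashFile(args[0])));
        }
    }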

    Read the article

  • Web page for iPhone - large fonts instead of scrolling?

    - by chris_l
    I'm planning the layout of my web page, which should also be usable on the iPhone. I don't really have much experience with the iPhone yet; I just installed the iPhone Simulator on my Mac. The page's contents are flexible, so I think it would be better to use this flexibility to save the user from scrolling around the entire page. Specifically, I have a header and a sidebar that will be used all the time to perform several actions, and a main content area with a number of elements (e.g. images). The UI would stay quite usable if the number of elements shown at one time were reduced for a small screen (e.g. by JavaScript). It would also be OK to make the main content area scrollable (as opposed to the entire page). The problem: if I simply display the page on the iPhone, it uses an extremely small font size, so users must zoom in first and then scroll around, which means they can't see the header and sidebar all the time. What's the best way to deal with this situation? Just leave it this way (very small fonts), because users expect that behaviour on the iPhone? Increase the font size (by specifying it in em or px or with xx-large; what would be the best way?) if I somehow detect that the page is being displayed on an iPhone? Or is there some way to restrict the viewport size to the screen size and make it zoom in automatically? I think that would be the easiest solution in my case. Or ...?

    Read the article

  • JavaScript (using jQuery) in a large project... organization, passing data, private methods, etc.

    - by gaoshan88
    I'm working on a large project that is organized like so: multiple JavaScript files are included as needed, and most code is wrapped in anonymous functions:

    // wutang.js
    // Included files go here

    // Public stuff
    var MethodMan;

    // Private stuff
    (function() {
        var someVar1;
        MethodMan = function() {...};
        var APrivateMethod = function() {...};

        $(function() {
            // jQuery page load stuff here
            $('#meh').click(APrivateMethod);
        });
    })();

    I'm wondering about a few things here. Assuming a number of these files are included on a page, what has access to what, and what is the best way to pass data between the files? For example, I assume MethodMan() can be accessed by anything on any included page; it is public, yes? var someVar1 is only accessible to the functions inside that particular anonymous function and nowhere else, yes? The same is true of var APrivateMethod(), yes? What if I want APrivateMethod() to make something available to some other method inside a different anonymous wrapper in a different included file? What's the best way to pass data between these private, anonymous functions across included files? Do I simply have to make public whatever I want to share between them? And what about minimizing global variables? Consider the following:

    var PootyTang = function() {
        someVar1 = $('#someid').text();
        // some stuff
    };

    and in another included file used by the same page:

    var TangyPoot = function() {
        someVar1 = $('#someid').text();
        // some completely different stuff
    };

    What's the best way to share the value of someVar1 across these anonymous functions (wrapped as in the first example)?

    Read the article

  • SQL database schema design for a large, 3-billion-relationship database

    - by K-Bell
    Get your geek on. Can you solve this? I am designing a products database for SQL Server 2008 R2 (not Enterprise Edition) that will be used to store custom product configurations for over 30,000 distinct products. The database will have up to 500 users at a time. Here is the design problem: each product has a collection of parts (up to 50 parts per product). So if I have 30,000 products, each with up to 50 parts, that is 1.5 million distinct product-to-part relationships, or as an equation: 30,000 products x 50 parts = 1.5 million product-to-part records. Furthermore, each part can have up to 2,000 finish options (a finish is a paint color), though only one finish will be selected by a user at run-time. The 2,000 finish options I need to store are the allowed options for a specific part on a specific product. So if I have 1.5 million distinct product-to-part relationships and each of those parts can have up to 2,000 finishes, that is 3 billion allowable product-to-part-to-finish relationships, or as an equation: 1.5 million product-to-part records x 2,000 finishes = 3 billion product-to-part-to-finish records. How can I design this database so that I can execute fast and efficient queries for a specific product, returning its list of parts and all the allowable finishes for each part, without storing 3 billion product-to-part-to-finish records? Read time is more important than write time. Please post your thoughts/suggestions if you have experience with large databases. Thanks!

    Read the article

  • What can cause a large discrepancy between minor GC time and total pause time?

    - by cxcg
    We have a latency-sensitive application and are experiencing some GC-related pauses we don't fully understand: occasionally a minor GC results in application pause times much longer than the reported GC time itself. Here is an example log snippet:

    485377.257: [GC 485378.857: [ParNew: 105845K->621K(118016K), 0.0028070 secs] 136492K->31374K(1035520K), 0.0028720 secs] [Times: user=0.01 sys=0.00, real=1.61 secs]
    Total time for which application threads were stopped: 1.6032830 seconds

    The total pause time here is orders of magnitude longer than the reported GC time. These are isolated and occasional events: the immediately preceding and succeeding minor GC events show no such discrepancy. The process is running on a dedicated machine with lots of free memory and 8 cores, on Red Hat Enterprise Linux ES Release 4 Update 8 with kernel 2.6.9-89.0.1EL-smp. We have observed this with (32-bit) JVM versions 1.6.0_13 and 1.6.0_18. We are running with these flags:

    -server -ea -Xms512m -Xmx512m -XX:+UseConcMarkSweepGC -XX:NewSize=128m -XX:MaxNewSize=128m -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCApplicationStoppedTime -XX:-TraceClassUnloading

    Can anybody offer an explanation of what might be going on here, and/or some avenues for further investigation?

    Read the article
