Search Results

Search found 11409 results on 457 pages for 'large teams'.

  • Fast inter-process (inter-thread) communication (IPC) on a large multi-CPU system.

    - by IPC
    What would be the fastest portable bi-directional mechanism for inter-process communication, where threads in one application need to communicate with multiple threads in another application on the same computer, and the communicating threads may be running on different physical CPUs? I assume it would involve shared memory, a circular buffer, and shared synchronization mechanisms. But shared mutexes are very expensive to synchronize on when the threads are running on different physical CPUs (and there is a limited number of them, too).
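
    A common design for this is a pair of lock-free single-producer/single-consumer ring buffers in shared memory, one per direction, so the fast path never takes a mutex. Below is a minimal C++11 sketch of the queue logic only; the Slot payload and slot count are illustrative placeholders, and in practice the structure would be placement-new'd into a region obtained from shm_open/mmap (POSIX) or CreateFileMapping (Windows):

      #include <atomic>
      #include <cstddef>

      struct Slot { char data[256]; };         // placeholder message payload

      constexpr std::size_t kSlots = 1024;     // capacity; power of two

      // Single-producer/single-consumer ring: each index is written by only
      // one side, so atomic loads/stores suffice and no lock is ever taken.
      // alignas(64) keeps the two indices on separate cache lines so CPUs
      // on different sockets don't false-share.
      struct SpscRing {
          alignas(64) std::atomic<std::size_t> head{0}; // advanced by consumer
          alignas(64) std::atomic<std::size_t> tail{0}; // advanced by producer
          Slot slots[kSlots];

          bool push(const Slot& s) {           // producer side
              std::size_t t = tail.load(std::memory_order_relaxed);
              if (t - head.load(std::memory_order_acquire) == kSlots)
                  return false;                // full
              slots[t % kSlots] = s;
              tail.store(t + 1, std::memory_order_release);
              return true;
          }

          bool pop(Slot& out) {                // consumer side
              std::size_t h = head.load(std::memory_order_relaxed);
              if (h == tail.load(std::memory_order_acquire))
                  return false;                // empty
              out = slots[h % kSlots];
              head.store(h + 1, std::memory_order_release);
              return true;
          }
      };

    For bi-directional traffic the mapped region holds two of these, one per direction; the acquire/release pairs on head and tail are what make the data in each slot visible across CPUs without a mutex.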

  • database design: table with a large number of columns (50+) or many sub-tables with a small number of columns?

    - by Guillaume
    In our project we already have a lot of tables (100+). Some of them contain a lot of columns (50-100), and we are facing the need to add more columns from time to time. What do you think is best, from a maintenance and performance point of view: to split these huge tables into smaller entities, or to keep the tables the way they are? We are using an ORM tool, so we don't need to write custom queries.

  • Int - number too large. How to get program to fail?

    - by Dave
    Hi. Problem: how do you get a program to fail if a number goes beyond the bounds of its type? The code below gives the wrong answer for the sum of primes under 2 million, because I'd used an int instead of a long.

      [TestMethod]
      public void CalculateTheSumOfPrimesBelow2million()
      {
          int result = PrimeTester.calculateTheSumOfPrimesBelow(2000000); // 2 million
          Assert.AreEqual(1, result);
          // 1,179,908,154 .. this was what I got with an int...
          // the correct answer is 142,913,828,922 with a long
      }

      public static class PrimeTester
      {
          public static int calculateTheSumOfPrimesBelow(int maxPrimeBelow)
          {
              // we know 2 is a prime number
              int sumOfPrimes = 2;
              int currentNumberBeingTested = 3;
              while (currentNumberBeingTested < maxPrimeBelow)
              {
                  double squareRootOfNumberBeingTested = (double)Math.Sqrt(currentNumberBeingTested);
                  bool isPrime = true;
                  for (int i = 2; i <= squareRootOfNumberBeingTested; i++)
                  {
                      if (currentNumberBeingTested % i == 0)
                      {
                          isPrime = false;
                          break;
                      }
                  }
                  if (isPrime)
                      sumOfPrimes += currentNumberBeingTested;
                  currentNumberBeingTested += 2; // as we don't want to test even numbers
              }
              return sumOfPrimes;
          }
      }
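
    In C# the direct fix is checked arithmetic: code inside a checked { ... } block (or a build with the /checked compiler option) throws OverflowException on integer overflow instead of silently wrapping, and switching the accumulator to long then yields the correct sum. The same fail-fast idea sketched in C++, where overflow never throws, using a GCC/Clang compiler extension:

      #include <cstdio>
      #include <cstdlib>

      // Fail-fast addition: __builtin_add_overflow (a GCC/Clang extension,
      // not standard C++) returns true when the true sum does not fit in
      // the result type, instead of silently wrapping around.
      int checkedAdd(int a, int b) {
          int sum;
          if (__builtin_add_overflow(a, b, &sum)) {
              std::fprintf(stderr, "integer overflow: %d + %d\n", a, b);
              std::abort(); // make the program fail loudly
          }
          return sum;
      }

      int main() {
          int total = 2000000000;
          total = checkedAdd(total, 2000000000); // aborts here on 32-bit int
          std::printf("%d\n", total);
      }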

  • Parsing large txt files in Ruby taking a lot of time?

    - by hershey92
    Below is the code to download a txt file (approx. 9,000 lines) from the internet and populate the database. I have tried a lot, but it takes more than 7 minutes. I am using Windows 7 64-bit and Ruby 1.9.3. Is there a way to do it faster?

      require 'open-uri'
      require 'dbi'

      dbh = DBI.connect("DBI:Mysql:mfmodel:localhost", "root", "")
      #file = open('http://www.amfiindia.com/spages/NAV0.txt')
      file = File.open('test.txt', 'r')
      lines = file.lines
      2.times { lines.next }
      curSubType = ''
      curType = ''
      curCompName = ''
      lines.each do |line|
        line.strip!
        if line[-1] == ')'
          curType, curSubType = line.split('(')
          curSubType.chop!
        elsif line[-4..-1] == 'Fund'
          curCompName = line.split(" Mutual Fund")[0]
        elsif line == ''
          next
        else
          sCode, isin_div, isin_re, sName, nav, rePrice, salePrice, date = line.split(';')
          sCode = Integer(sCode)
          # a new prepared statement is created for every row
          sth = dbh.prepare "call mfmodel.populate(?,?,?,?,?,?,?)"
          sth.execute curCompName, curSubType, curType, sCode, isin_div, isin_re, sName
        end
      end
      dbh.do "commit"
      dbh.disconnect
      file.close

    This is the format of the data to be inserted into the table:

      106799;-;-;HDFC ARBITRAGE FUND RETAIL PLAN DIVIDEND OPTION;10.352;10.3;10.352;29-Jun-2012

    There are about 8,000 such lines. How can I combine them all and call the procedure just once? Also, does MySQL support arrays and iteration to do such a thing inside the routine? Please give your suggestions. Thanks.
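
    Three standard fixes: prepare the statement once outside the loop, wrap the whole load in a single transaction, and batch many rows into one multi-row INSERT ... VALUES (...),(...),... statement, which MySQL supports directly (MySQL stored routines have no array type, so batching is usually done on the client side). A minimal C++ sketch of the batching step; the nav_rows table and NavRow fields are illustrative placeholders, and real code must escape or parameterize the values:

      #include <cstddef>
      #include <sstream>
      #include <string>
      #include <vector>

      struct NavRow { long scheme_code; std::string name; double nav; };

      // Build one multi-row INSERT for a whole batch instead of paying
      // one server round-trip per row. Values must be escaped or
      // parameterized in real code to avoid SQL injection.
      std::string buildBatchInsert(const std::vector<NavRow>& rows) {
          std::ostringstream sql;
          sql << "INSERT INTO nav_rows (scheme_code, name, nav) VALUES ";
          for (std::size_t i = 0; i < rows.size(); ++i) {
              if (i) sql << ",";
              sql << "(" << rows[i].scheme_code << ",'" << rows[i].name << "',"
                  << rows[i].nav << ")";
          }
          return sql.str();
      }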

  • What's a good way to add a large number of small floats together?

    - by splicer
    Say you have 100,000,000 32-bit floating point values in an array, and each of these floats has a value between 0.0 and 1.0. If you tried to sum them all up like this:

      result = 0.0;
      for (i = 0; i < 100000000; i++) {
          result += array[i];
      }

    you'd run into precision problems as result gets much larger than 1.0. So what are some of the ways to perform the summation more accurately?
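
    Accumulating into a double already helps, and Kahan (compensated) summation goes further by carrying each addition's rounding error forward in a separate term. A minimal C++ sketch:

      #include <cstddef>

      // Kahan compensated summation: c accumulates the low-order bits
      // that would otherwise be rounded away once sum grows large.
      double kahanSum(const float* array, std::size_t n) {
          double sum = 0.0;
          double c = 0.0;                      // running compensation
          for (std::size_t i = 0; i < n; ++i) {
              double y = static_cast<double>(array[i]) - c;
              double t = sum + y;
              c = (t - sum) - y;               // error lost in (sum + y)
              sum = t;
          }
          return sum;
      }

    Pairwise (tree) summation is another common choice and parallelizes better, since independent partial sums can be computed concurrently and combined at the end.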

  • Is it a good idea to apply some basic macros to simplify code in a large project?

    - by DoctorT
    I've been working on a foundational C++ library for some time now, and there are a variety of ideas I've had that could really simplify the code writing and managing process. One of these is the concept of introducing some macros to help simplify statements that appear very often but are more complicated than they should be. For example, I've come up with this basic macro to simplify the most common type of for loop:

      #define loop(v,n) for(unsigned long v=0; v<n; ++v)

    This would enable you to replace those clunky for loops you see so much of:

      for (int i = 0; i < max_things; i++)

    with something much easier to write, and even slightly more efficient:

      loop (i, max_things)

    Is it a good idea to use conventions like this? Are there any problems you might run into with different types of compilers? Would it just be too confusing for someone unfamiliar with the macro(s)?
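
    Two pitfalls of this particular macro are worth seeing concretely: the argument n is pasted into the loop condition unparenthesized, and it is re-evaluated on every iteration. A small sketch (limit() is an illustrative stand-in for any non-trivial bound expression):

      #include <cstdio>

      #define loop(v,n) for(unsigned long v=0; v<n; ++v)

      unsigned long limit() {
          std::puts("limit() evaluated"); // printed on every iteration, not once
          return 3;
      }

      int main() {
          // Pitfall 1: the bound expression is re-evaluated each time the
          // loop condition is checked, because n is pasted straight in.
          loop (i, limit())
              std::printf("i=%lu\n", i);

          // Pitfall 2: n is unparenthesized, so an argument such as
          // `cond ? 2 : 3` expands to `v < cond ? 2 : 3`, which parses as
          // `(v < cond) ? 2 : 3` -- an always-true condition. Writing the
          // macro as for(unsigned long v = 0, v##_end = (n); v < v##_end; ++v)
          // parenthesizes the argument and evaluates it exactly once.
          return 0;
      }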

  • Parallelize or vectorize all-against-all operation on a large number of matrices?

    - by reve_etrange
    I have approximately 5,000 matrices with the same number of rows and varying numbers of columns (20 x ~200). Each of these matrices must be compared against every other in a dynamic programming algorithm. In this question, I asked how to perform the comparison quickly and was given an excellent answer involving a 2D convolution. Serially, iteratively applying that method, like so:

      list = who('data_matrix_prefix*')
      H = cell(numel(list), numel(list));
      for i = 1:numel(list)
          for j = 1:numel(list)
              if i ~= j
                  eval(['H{i,j} = compare(' char(list(i)) ',' char(list(j)) ');']);
              end
          end
      end

    is fast for small subsets of the data (e.g. for 9 matrices, 9*9 - 9 = 72 calls are made in ~1 s). However, operating on all the data requires almost 25 million calls. I have also tried using deal() to make a cell array composed entirely of the next element in data, so I could use cellfun() in a single loop:

      % who(), load() and struct2cell() calls place k data matrices
      % in a 1D cell array called data.
      nextData = cell(k, 1);
      for i = 1:k
          [nextData{:}] = deal(data{i});
          H(:,i) = cellfun(@compare, data, nextData, 'UniformOutput', false);
      end

    Unfortunately, this is not really any faster, because all the time is spent in compare(). Both of these code examples seem ill-suited for parallelization, and I'm having trouble figuring out how to make my variables sliced. compare() is totally vectorized; it uses matrix multiplication and conv2() exclusively (I am under the impression that all of these operations, including the cellfun(), should be multithreaded in MATLAB?). Does anyone see an (explicitly) parallelized solution or a better vectorization of the problem?

  • How do I make use of multiple cores in large SQL Server queries?

    - by Jonathan Beerhalter
    I have two SQL Servers, one for production and one as an archive. Every night, a SQL job runs and copies the day's production data over to the archive. As we've grown, this process takes longer and longer. When I watch the utilization on the archive server while the archival process runs, I see that it only ever makes use of a single core, and since this box has eight cores, that is a huge waste of resources. The job runs at 3 AM, so it's free to take any and all resources it can find. What I need to figure out is how to structure SQL Server jobs so they can take advantage of multiple cores, but I can't find any literature on tackling this problem. We're running SQL Server 2005, but I could certainly push for an upgrade if 2008 takes care of this problem.

  • How can I efficiently retrieve a large number of database settings as PHP variables?

    - by Steven
    Currently all of my script's settings are located in a PHP file which I include. I'm in the process of moving these settings (about 100) to a database table called settings. However, I'm struggling to find an efficient way of retrieving all of them in the script. The settings table has 3 columns:

      ID (autoincrements)
      name
      value

    Two example rows might be:

      1    admin_user             john
      2    admin_email_address    [email protected]

    The only way I can think of to retrieve each setting is like this:

      $result = mysql_query("SELECT value FROM settings WHERE name = 'admin_user'");
      $row = mysql_fetch_array($result);
      $admin_user = $row['value'];

      $result = mysql_query("SELECT value FROM settings WHERE name = 'admin_email_address'");
      $row = mysql_fetch_array($result);
      $admin_email_address = $row['value'];

      // etc.

    Doing it this way will take up a lot of code and will likely be slow. Is there a better way? Thanks.

  • How to access a subset of XML data in Java when the XML data is too large to fit in memory?

    - by Michael Jones
    What I would really like is a streaming API that works sort of like StAX, and sort of like DOM/JDOM. It would be streaming in the sense that it would be very lazy and not read things in until needed. It would also be streaming in the sense that it would read everything forwards (but not backwards). Here's what code that used such an API would look like:

      URL url = ...
      XMLStream xml = XXXFactory(url.inputStream());
      // Process each <book> element in this document.
      // The <book> element may have subnodes.
      // You get a DOM/JDOM-like tree rooted at the next <book>.
      while (xml.hasContent()) {
          XMLElement book = xml.getNextElement("book");
          processBook(book);
      }

    Does anything like this exist?

  • jQuery UI calendar displays too large - how do I get the demo size?

    - by Phill Pafford
    So I downloaded a custom-themed UI for jQuery and added the calendar control to my site (Example: link text). The example shows the size I would like, but on my webpage it's about twice that size. Why? I do have a ton of other CSS, but I don't have control over the look and feel of the page (can't touch the current CSS, meh!). Is there a way to get the demo look on my site? I think this is the part of the custom UI CSS that might be complicating things:

      /* Component containers
      ----------------------------------*/
      .ui-widget { font-family: Arial, Helvetica, Verdana, sans-serif; font-size: 1.1em; }
      .ui-widget input, .ui-widget select, .ui-widget textarea, .ui-widget button {
          font-family: Arial, Helvetica, Verdana, sans-serif;
          font-size: 1em;
      }
      .ui-widget-content {
          border: 1px solid #B9C4CE;
          background: #ffffff url(../images/ui-bg_flat_75_ffffff_40x100.png) 50% 50% repeat-x;
          color: #616161;
      }
      .ui-widget-content a { color: #616161; }
      .ui-widget-header {
          border: 1px solid #467AA7;
          background: #467AA7 url(../images/ui-bg_highlight-soft_75_467AA7_1x100.png) 50% 50% repeat-x;
          color: #fff;
          font-weight: bold;
      }
      .ui-widget-header a { color: #fff; }

  • CMS for a fairly large mobile website - please help me select one.

    - by Vinod
    I am looking for:

      - A mature, scalable and proven CMS solution
      - Support for mobilization (Android and iPhone)
      - A good amount of customization using Java / .NET
      - Lots of out-of-the-box components to choose from

    Please help with recommendations. P.S. Are there any mobile CMS providers that work in a SaaS model?

  • How can I parse free text (Twitter tweets) against a large database of values?

    - by user136416
    Hi there. Suppose I have a database containing 500,000 records, each representing, say, an animal. What would be the best approach for parsing 140-character tweets to identify matching records by animal name? For instance, in this string...

      "I went down to the woods today and couldn't believe my eyes: I saw a bear having a picnic with a squirrel."

    ...I would like to flag up the words "bear" and "squirrel", as they appear in my database. This strikes me as a problem that has probably been solved many times, but from where I'm sitting it looks prohibitively intensive: iterating over every db record checking for a match in the string is surely a crazy way to do it. Can anyone with a comp sci degree put me out of my misery? I'm working in C# if that makes any difference. Cheers!
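
    The standard trick is to invert the loop: load the 500,000 names into an in-memory hash set once, then look up each token of the tweet, so the cost scales with the tweet length rather than the table size. A minimal C++ sketch of the idea (single-word names only; multi-word names would need n-gram tokens or an Aho-Corasick automaton):

      #include <cctype>
      #include <iostream>
      #include <sstream>
      #include <string>
      #include <unordered_set>
      #include <vector>

      // Find which known names occur in a tweet: O(tokens) hash lookups
      // against a prebuilt set, independent of the 500k record count.
      std::vector<std::string> matches(const std::string& tweet,
                                       const std::unordered_set<std::string>& names) {
          std::vector<std::string> found;
          std::istringstream tokens(tweet);
          std::string word;
          while (tokens >> word) {
              // Strip trailing punctuation; real code would also lowercase.
              while (!word.empty() && std::ispunct(static_cast<unsigned char>(word.back())))
                  word.pop_back();
              if (names.count(word)) found.push_back(word);
          }
          return found;
      }

      int main() {
          std::unordered_set<std::string> names = {"bear", "squirrel", "wolf"};
          for (const auto& m : matches("I saw a bear having a picnic with a squirrel.", names))
              std::cout << m << '\n';
      }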

  • (PHP) How do I get XMLReader to behave like SimpleXML with XPath? (Large, directory-like XML file)

    - by AESM
    So, I've got this huge XML file (10MB and up) that I want to parse, and I figured that instead of using SimpleXML, I'd better use XMLReader, since the performance should be way better, right? But XMLReader doesn't work with XPath... The XML is like this:

      <root name="bookmarks">
        <dir name="A directory">
          <link name="blablabla">
          <dir name="Sub directory">
            ...
          </dir>
        </dir>
        <link name="another link">
      </root>

    With SimpleXML combined with XPath, I would simply do:

      $xml = simplexml_load_file('/xmlFile.xml');
      $xml->xpath('/root[@name]/dir[@name="A directory"]/dir[@name="Sub directory"]');

    which is so simple. But how do I do this with just XMLReader? P.S. I'm converting the resulting node to DOM/SimpleXML to get its inner contents, like this:

      $node = $xr->expand();
      $dom = new DomDocument();
      $n = $dom->importNode($node, true);
      $dom->appendChild($n);
      $selectedRoot = simplexml_import_dom($dom);

    Is this OK? ... Thanks!

  • exec sp_executesql error 'Incorrect syntax near 1' when using datetime parameter

    - by anne78
    I have an SSRS report which sends the following text to the database:

      EXEC ( 'DECLARE @TeamIds as TeamIdTableType '
           + @Teams
           + ' EXEC rpt.DWTypeOfSicknessByCategoryReport @TeamIds, '
           + @DateFrom + ', ' + @DateTo + ', ' + @InputRankGroups + ', ' + @SubCategories )

    When I view this in Profiler, it is interpreted as:

      exec sp_executesql N'EXEC ( ''DECLARE @TeamIds as TeamIdTableType '' + @Teams
          + '' EXEC rpt.DWTypeOfSicknessByCategoryAndEmployeeDetailsReport @TeamIds, ''
          + @DateFrom + '', '' + @DateTo + '', '' + @InputRankGroups + '', '' + @SubCategories )',
          N'@Teams nvarchar(34),@DateFrom datetime,@DateTo datetime,@InputRankGroups varchar(1),@SubCategories bit',
          @Teams=N'INSERT INTO @TeamIds VALUES (5); ',
          @DateFrom='2010-02-01 00:00:00',
          @DateTo='2010-04-30 00:00:00',
          @InputRankGroups=N'1',
          @SubCategories=1

    When this SQL runs, it errors on the dates. I have tried changing the format of the date, but it does not help. If I remove the dates, it works fine. Any help would be appreciated.

  • What is the best way to partition large tables in SQL Server?

    - by RyanFetz
    In a recent project, the "lead" developer designed a database schema where "larger" tables would be split across two separate databases, with a view on the main database that unioned the two separate database tables together. The main database is what the application was driven off of, so these tables looked and felt like ordinary tables (except for some quirky things around updating). This seemed like a HUGE performance problem. We do see performance problems around these tables, but nothing to make him change his mind about his design. I'm just wondering what the best way to do this is, or whether it is even worth doing.

  • Use an API or store a large dataset in a Rails app?

    - by Dave
    Hi all. I am working on a site that has the potential to need a LOT of space. Basically, we hope to have every video game ever created stored in a database, along with an image of the cover. There are some APIs out there that might be able to help, like GiantBomb's (www.giantbomb.com). We are trying to decide whether to store the data locally (and if so, where to find that comprehensive a list) or to make calls to the API on demand. The problems with the latter are likely latency and also downtime. Assuming we want to store it locally, here are the questions:

      1) Where can we find this kind of data? (Yes, I looked on Google, and no, I couldn't find anything.)
      2) What is the most efficient way to encode and store the images?

    Thanks!

  • Is it a good idea to include a large text variable in compiled code?

    - by gladman
    I am writing a program that produces a formatted file for the user, but it doesn't only produce the formatted file; it does more. I want to distribute a single binary to the end user, and when the user runs the program, it will generate the XML file with the appropriate data. To achieve this, I want to put the file contents into a char array variable that is compiled into the code. When the user runs the program, I will write out the char array to generate an XML file for the user:

      char* buffers = "a xml format file contents, \
      this represent many block text \
      from a file,...";

    I have two questions. Q1: Do you have any other ideas for how to compile my file contents into the binary, i.e., distribute it as one binary file? Q2: Is this even a good idea, as described above?
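
    Two common options here, sketched under the assumption of a C++11 compiler: embed the text as a raw string literal, which needs no escaping or backslash line continuations, or generate the array at build time from the real file with a tool such as xxd -i (which emits an unsigned char array plus a length variable) or objcopy. A minimal raw-literal sketch with placeholder contents:

      #include <fstream>

      // C++11 raw string literal: the XML can be pasted in verbatim.
      // (Build-time alternative: `xxd -i template.xml > template.h`.)
      static const char kXmlTemplate[] = R"xml(<?xml version="1.0"?>
      <report>
        <generated-by>my-tool</generated-by>
      </report>
      )xml";

      int main() {
          std::ofstream out("report.xml", std::ios::binary);
          out << kXmlTemplate; // write the embedded template for the user
      }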

  • Any efficient way to read data from a large binary file?

    - by limi
    Hi, I need to handle tens of gigabytes of data in one binary file. Each record in the data file is variable length, so the file is like:

      <len1><data1><len2><data2>..........<lenN><dataN>

    The data contains integers, pointers, double values and so on. I found Python struggles with this: there is no problem if I read the whole file into memory (that part is fast), but the struct package performs poorly and almost gets stuck unpacking the bytes. Any help is appreciated. Thanks.
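
    For comparison, here is what the same loop looks like in C++, where each length prefix and payload is read directly into native types; it assumes (since the post doesn't say) that each length is a 4-byte byte count in host byte order:

      #include <cstdint>
      #include <cstdio>
      #include <vector>

      // Read <len><data> records sequentially; the payload buffer is
      // reused across records to avoid per-record allocations.
      int main() {
          std::FILE* f = std::fopen("records.bin", "rb");
          if (!f) return 1;
          std::vector<unsigned char> payload;
          std::uint32_t len = 0;
          while (std::fread(&len, sizeof len, 1, f) == 1) {
              payload.resize(len);
              if (std::fread(payload.data(), 1, len, f) != len)
                  break; // truncated final record
              // decode fixed-width fields out of payload here,
              // e.g. memcpy an int32 or a double from a known offset
          }
          std::fclose(f);
          return 0;
      }

    On the Python side, the usual advice is the same shape: one bulk read, then struct.unpack_from with precompiled struct.Struct objects over the buffer, rather than many small reads and fresh format strings.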

  • Optimal ASP.Net cache duration for a large site?

    - by HeroicLife
    I've read lots of material on how to do ASP.Net caching but little on the optimal duration that pages should be cached for. Let's say that I have a popular site with 50,000 pages. The content does not change frequently, so I could cache pages for up to an hour if I wanted. The server has 16 GB of RAM, but database connections are limited. How long should pages be cached for? My thinking is that if I set the cache duration too high (let's say 60 minutes), I will fill up memory with a fraction of the total content, which will continually be shuffled in and out of memory. Furthermore, let's say that 10% of the pages are responsible for 90% of the traffic. If the popular pages are hit every second and the unpopular ones every hour, then a 60-second cache would keep only the load-intensive content cached, without sacrificing freshness. Should numerous but rarely-accessed pages be cached at all?

  • Why is there a large gap between Begin PreRenderComplete and End PreRenderComplete on my page?

    - by Middletone
    I'd like to know what can cause this kind of disparity between the Begin and End PreRenderComplete events, or how I might go about locating the bottleneck:

      aspx.page  End PreRender            0.193179639923915  0.001543
      aspx.page  Begin PreRenderComplete  0.193206263076064  0.000027
      aspx.page  End PreRenderComplete    1.96926008935549   1.776054
      aspx.page  Begin SaveState          2.13108461902679   0.161825
