Search Results

Search found 14074 results on 563 pages for 'programmers'.

Page 180/563 | < Previous Page | 176 177 178 179 180 181 182 183 184 185 186 187  | Next Page >

  • Decoupling software components via naming convention

    - by csteinmueller
    I'm currently evaluating alternatives for refactoring our driver management. In my multitier architecture I have the base class DAL.Device (my entity) and the interfaces BL.IDriver (handles the data processing between application and device), BL.IDriverCreator (creates an IDriver from a Device), and BL.IDriverFactory (handles the driver creation requests). Every specialization of Device has a corresponding IDriver implementation and a corresponding IDriverCreator implementation. At the moment the mapping is fixed via a type check within the business layer / DriverFactory. That means every new driver requires a) changing code within the DriverFactory and b) referencing the new IDriver implementation / assembly. From a customer's point of view, every new driver, used or not, requires a complex revalidation of their hardware environment, because it's a critical process. My first idea was to use a Caliburn.Micro-like naming convention (see Caliburn.Micro: Xaml Made Easy): BL.RestDriver, BL.RestDriverCreator, DAL.RestDevice. After receiving the RestDevice within the IDriverFactory, I can load all driver DLLs via reflection and do name splitting/comparing (extracting the xx from xxDriverCreator and xxDevice). Another idea would be a custom attribute (which also comes down to comparing strings). My question: is that a good approach across layer borders? If not, what would be a good approach?
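    The question is about C#, but as a rough illustration of the convention-based lookup, here is a minimal sketch in Java; the package name bl.drivers, the create method, and the class names are hypothetical stand-ins, not an API from the question.

        // Sketch: derive "Rest" from RestDevice, then load RestDriverCreator
        // by name via reflection (hypothetical names and package).
        public class ConventionDriverFactory {

            private static final String CREATOR_PACKAGE = "bl.drivers"; // assumption

            public Object createDriver(Object device) throws Exception {
                String deviceName = device.getClass().getSimpleName(); // e.g. "RestDevice"
                if (!deviceName.endsWith("Device")) {
                    throw new IllegalArgumentException("Not a device: " + deviceName);
                }
                // Shared prefix of xxDevice and xxDriverCreator, e.g. "Rest".
                String prefix = deviceName.substring(0, deviceName.length() - "Device".length());
                Class<?> creatorClass = Class.forName(CREATOR_PACKAGE + "." + prefix + "DriverCreator");
                Object creator = creatorClass.getDeclaredConstructor().newInstance();
                // Assumes creators expose a create(Object) method.
                return creatorClass.getMethod("create", Object.class).invoke(creator, device);
            }
        }

    The string comparison the asker worries about is confined to one place here; the same shape works with a custom attribute (annotation) instead of a name suffix.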

    Read the article

  • What are the benefits of PHP?

    - by acme
    Everybody knows people who have prejudices against certain programming languages. PHP especially seems to suffer from the problems of its past and a few other things (like loose typing), and is often called a non-serious programming language that should not be used for professional applications. In this particular case: how do you argue for PHP as your chosen programming language for web applications? What are the benefits? Where is PHP better than ColdFusion, Java, etc.?

    Read the article

  • Is switch-case over enumeration bad practice?

    - by Puckl
    I have an enumeration with the commands Play, Stop and Pause for a media player. In two classes I do a switch-case over the received commands. The player runs in a different thread and I deliver the commands to the thread in a command queue. If I generate class diagrams, the enumeration has dependencies all over the place. Is there a nicer way to deal with the commands? If I changed or extended the enumeration, I would have to change several classes. (It's not super important to keep the player extensible, but I try to write nice software.)
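    One common refactoring is to move the per-command behavior onto the enum itself, so callers dispatch polymorphically instead of switching. A minimal Java sketch; the Player interface and its methods are hypothetical stand-ins for the player thread's operations:

        // Each constant carries its own behavior, so adding a command means
        // touching the enum only, not every class that used to switch on it.
        interface Player {
            void startPlayback();
            void stopPlayback();
            void pausePlayback();
        }

        enum Command {
            PLAY  { @Override void applyTo(Player p) { p.startPlayback(); } },
            STOP  { @Override void applyTo(Player p) { p.stopPlayback(); } },
            PAUSE { @Override void applyTo(Player p) { p.pausePlayback(); } };

            abstract void applyTo(Player p);
        }

        // A consumer of the command queue no longer needs a switch:
        //   Command cmd = queue.take();
        //   cmd.applyTo(player);

    The trade-off is the classic expression problem: this makes new commands cheap and new operations on all commands more invasive, which is usually the right direction for a command queue.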

    Read the article

  • Junior software developer - How to understand web applications in depth?

    - by nat_gr
    I am currently a junior developer in web applications, specifically the ASP.NET MVC technology. My problem is that the senior C# developer in the company has no experience with this technology, so I try to learn without any guidance. I went through all the tutorials (e.g., Music Store), CodePlex projects, and also read Pro ASP.NET MVC 4. However, most of the examples are about CRUD and e-commerce applications. What I don't understand is how dependency injection fits into web applications (I have realized that it is not only used for facilitating unit testing), or when I should use a custom model binder, or how to model the business logic when there is already a database schema in place. I read the forum quite often and it would be very helpful if some experienced developers could give me an insight into how to proceed. Do I need to read some books to understand the overall idea behind web applications? And what kind of application should I start building myself? I don't think it would be useful to create examples similar to the tutorials.
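    On the dependency injection point: the question is about ASP.NET MVC, but the idea is language-neutral, so here is a minimal constructor-injection sketch in Java; ProductRepository, SqlProductRepository, and ProductController are hypothetical names for illustration only.

        // The controller depends on an abstraction; the composition root
        // decides which implementation to wire in (real repository in
        // production, a stub in tests, a caching decorator later) without
        // touching the controller itself.
        interface ProductRepository {
            String findName(int id);
        }

        class SqlProductRepository implements ProductRepository {
            public String findName(int id) { return "product-" + id; } // stand-in for a DB call
        }

        class ProductController {
            private final ProductRepository repository;

            ProductController(ProductRepository repository) { // the injected dependency
                this.repository = repository;
            }

            String show(int id) {
                return "Viewing " + repository.findName(id);
            }
        }

        class CompositionRoot {
            public static void main(String[] args) {
                ProductController controller = new ProductController(new SqlProductRepository());
                System.out.println(controller.show(42)); // Viewing product-42
            }
        }

    Beyond testing, this is what lets a web framework manage lifetimes (per-request contexts, shared caches) and swap infrastructure without editing controllers.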

    Read the article

  • Agile bug fixing - what's the preferred process for testing?

    - by Andrew Stephens
    When a bug is fixed, the dev sets its status to "resolved" and the bug is reassigned back to the person who created it. In our case this is usually the product owner; we don't have dedicated testers. But what's a good process for controlling how and when the PO tests the software? Should he be given the latest build after each bug is resolved and checked in? Or every morning? Or should he only receive a build at (or close to) the end of the iteration, including all of that iteration's new functionality and bug fixes? We are using TFS, by the way.

    Read the article

  • Would learning any (linguistic) language in particular further your programming career?

    - by Anonymous
    It seems apparent that English is the dominant international language for programming (in the West, at least!) based on previous P.SE questions. Or maybe not, given that a highly upvoted comment correctly points out that asking such a question on a predominantly English site will skew the results. This question is about whether there is a benefit in learning a foreign language for software development. For example, do the Chinese have completely different software tools, languages, technologies, etc.? How about Japanese, Russian, and other non-Latin-based languages? Am I / are we missing an entire world of software development languages, tools and so on that only exist in these other languages? Or do people who know these languages still learn and program using the tools and languages we all know and love?

    Read the article

  • What's special about currying or partial application?

    - by Vigneshwaran
    I've been reading articles on functional programming every day and have been trying to apply the practices as much as possible. But I don't understand what is unique about currying or partial application. Take this Groovy code as an example:

        def mul = { a, b -> a * b }
        def tripler1 = mul.curry(3)
        def tripler2 = { mul(3, it) }

    I do not understand what the difference is between tripler1 and tripler2. Aren't they both the same? Currying is supported in pure or partially functional languages like Groovy, Scala, Haskell, etc. But I can do the same thing (left-curry, right-curry, n-curry or partial application) by simply creating another named or anonymous function or closure that forwards the parameters to the original function (like tripler2) in most languages (even C). Am I missing something here? There are places where I could use currying and partial application in my Grails application, but I am hesitating to do so because I keep asking myself "How is that different?" Please enlighten me.
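    For a concrete comparison outside Groovy: the practical difference is that curry is mechanical and generic, while the closure is hand-written per call site. A minimal Java sketch of a generic curry helper (illustrative only; the question's semantics come from Groovy's Closure.curry):

        import java.util.function.BiFunction;
        import java.util.function.Function;

        public class CurryDemo {
            // Turns any (A, B) -> R function into A -> (B -> R), so partial
            // application needs no bespoke wrapper for each function.
            static <A, B, R> Function<A, Function<B, R>> curry(BiFunction<A, B, R> f) {
                return a -> b -> f.apply(a, b);
            }

            public static void main(String[] args) {
                BiFunction<Integer, Integer, Integer> mul = (a, b) -> a * b;

                Function<Integer, Integer> tripler1 = curry(mul).apply(3);  // like mul.curry(3)
                Function<Integer, Integer> tripler2 = b -> mul.apply(3, b); // the hand-written closure

                System.out.println(tripler1.apply(5)); // 15
                System.out.println(tripler2.apply(5)); // 15
            }
        }

    Both triplers behave identically; currying buys the uniform, composable way of producing them, which matters once functions are passed around and specialized in stages.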

    Read the article

  • As a programmer, how do I plan to learn new things in my spare time?

    - by priyank patel
    I am an ASP.NET/C# developer, and I develop a few projects in my spare time. I try to utilize my time as much as I can. I have been working for the past 7 months. Suddenly these days I am a bit worried about learning the new stuff that is out there for me as a programmer. I develop in my spare time, so I don't get enough time to read books or blogs. So my question to some of you is: how should I plan to learn new things? Should I dedicate at least two or three evenings to new stuff? Maybe ebooks while travelling are a good option too. How do you plan your learning? Should I also start to develop with whatever I have learnt? And as far as learning is concerned, should I just pick up the basics and then implement them, or should I seek a deeper understanding of the subject?

    Read the article

  • Is Java free/open source or not?

    - by user1598390
    On November 13, 2006, Sun released much of Java as free and open source software (FOSS) under the terms of the GNU General Public License (GPL). On May 8, 2007, Sun finished the process, making all of Java's core code available under free software / open source distribution terms, aside from a small portion of code to which Sun did not hold the copyright. OpenJDK (Open Java Development Kit) is a free and open source implementation of the Java programming language. It is the result of an effort Sun Microsystems began in 2006. The implementation is licensed under the GNU General Public License (GNU GPL) with a linking exception. Why are there still people who say Java is not open source, or not free as in free speech? Am I missing something? Is Java still proprietary?

    Read the article

  • Project Codenames - Yea or Nay?

    - by rmx
    Where I work, most of our projects have (or at least attempt) descriptive, useful names. However, we have a few with names that make no sense: I found an assembly named WiFi which actually has nothing whatsoever to do with wi-fi; it's a codename. When I asked why, I was told that it's to protect company secrets in case some intern has a few too many at the pub on Friday and starts chatting about the brand new 'WiFi' project he's been working on. It's clear that some people find enjoyment in coming up with silly or amusing codenames for their projects (like in this question). My question is: is it really a good idea to use codenames for your projects, or are you better off spending the time to decide upon a descriptive name? My opinion is that in the long run it's better to give your projects relevant names. My reasoning is that if you can't think of a decent name, perhaps you don't really know the requirements well enough. I think there are better ways to 'protect company secrets' and I find it quite confusing when the name does not correlate at all with the content. It's just common sense, surely?! So do you use codenames, and what are your reasons for or against this seemingly common, yet (to me at least) annoying practice?

    Read the article

  • Does BizSpark preclude you from accepting funding elsewhere?

    - by Clay Shannon
    I am going to embark very soon on a software venture (have been a consultant and employee up until now). I see advantages in signing up for Microsoft's BizSpark. However, I wonder if doing so would preclude me from accepting funding from some equity-esque arrangements potentially available via crowdfunding. I know BizSpark's legal agreement probably spells this out, but it's about a gazillion pages long, so I'm hoping somebody here has existing knowledge of this so I don't have to spend a lot of time reading legalese.

    Read the article

  • Card deck and sparse matrix interview questions

    - by MrDatabase
    I just had a technical phone screen with a start-up. Here are the technical questions I was asked... and my answers. What do you think of these answers? Feel free to post better ones :-) Question 1: how would you represent a standard 52-card deck in (basically any language)? How would you shuffle the deck? Answer: use an array containing a "Card" struct or class. Each instance of Card has some unique identifier: either its position in the array or a unique integer member variable in the range [0, 51]. Shuffle the cards by traversing the array once from index zero to index 51, randomly swapping the ith card with "another card" (I didn't remember exactly how this shuffle algorithm works). Watch out for using the same probability for each card; that's a gotcha in this algorithm. I mentioned the algorithm is from Programming Pearls. Question 2: how would you represent a large sparse matrix? The matrix can be very large, like 1000x1000, but only a relatively small number (~20) of the entries are non-zero. Answer: condense the array into a list of the non-zero entries. For a given entry (i,j) in the array, "map" (i,j) to a single integer k, then use k as a key into a dictionary or hashtable. For the 1000x1000 sparse array, map (i,j) to k using something like f(i, j) = i + j * 1001, where 1001 is just one plus the maximum of all i and j. I didn't recall exactly how this mapping worked, but the interviewer got the idea (I think). Are these good answers? I'm wondering because after I finished the second question the interviewer said the dreaded "well, that's all the questions I have for now." Cheers!
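    For reference, the shuffle being half-remembered is the Fisher-Yates shuffle: swap position i with a uniformly random position in [i, n-1]; swapping with a random position over the whole array instead is exactly the equal-probability gotcha mentioned. A compact Java sketch of both answers (cards reduced to int ids for brevity):

        import java.util.HashMap;
        import java.util.Map;
        import java.util.Random;

        public class InterviewSketch {
            // Fisher-Yates: every one of the 52! orderings is equally likely
            // because card i is swapped with a uniform pick from [i, n-1].
            static void shuffle(int[] deck, Random rng) {
                for (int i = 0; i < deck.length - 1; i++) {
                    int j = i + rng.nextInt(deck.length - i);
                    int tmp = deck[i]; deck[i] = deck[j]; deck[j] = tmp;
                }
            }

            // Sparse matrix: store only the non-zero entries, keyed by the
            // single integer k = i + j * cols (collision-free while i < cols).
            static class SparseMatrix {
                private final int cols;
                private final Map<Long, Double> entries = new HashMap<>();

                SparseMatrix(int cols) { this.cols = cols; }

                void set(int i, int j, double v) { entries.put(i + (long) j * cols, v); }

                double get(int i, int j) { return entries.getOrDefault(i + (long) j * cols, 0.0); }
            }

            public static void main(String[] args) {
                int[] deck = new int[52];
                for (int i = 0; i < 52; i++) deck[i] = i;
                shuffle(deck, new Random());

                SparseMatrix m = new SparseMatrix(1001); // 1001 as in the answer's f(i, j)
                m.set(3, 999, 42.0);
                System.out.println(deck[0] + ", " + m.get(3, 999)); // e.g. 17, 42.0
            }
        }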

    Read the article

  • Data binding in web UI frameworks, what's the deal?

    - by c-smile
    I believe that most modern web frameworks that present themselves as MVC also have a notion of data binding in one form or another. Examples: AngularJS, EmberJS, KnockoutJS, etc. I am assuming that "data binding" is a declarative definition (oxymoron, no?) of a live link between data (a.k.a. model) and its representation (a.k.a. view), with some transformers in between (a.k.a. controllers). I understand why declarativeness is appealing, but I also understand that, as usual, it comes with a price. In particular: 1. Live binding is quite heavy, either with dirty watching (high CPU consumption) or with Object.observe() (high memory consumption, with high CPU load in some scenarios). 2. There is a "frame" part in the word "framework", meaning there are boundaries/limits that can be hard to overcome if you need slightly more than it was designed for. The usual time split applies: 90% of the features are built in 10% of the project time, but the remaining 10% take 90% of the project time. I suspect (a.k.a. educated guess) that those MVC things are not helping to implement more functionality in less time. If so, the motivation for using them is not quite clear. As an example: last week I wanted to find a virtual list idea/solution. I found one in vanilla JavaScript that is 120 LOC. An implementation of the same thing in AngularJS is about 420 LOC, and most of the code there seems like a fight with the framework itself. So my question is: what benefits do that MVC stuff and data binding give us? Is it just a buzzword popular among project managers, or does it give us something useful? If the latter, then what exactly?
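    To make the "dirty watch" cost concrete: a digest-style binder re-reads every watched value after each event and re-renders on change, so the work scales with the number of watchers even when nothing changed. A minimal sketch of that mechanism in Java (the question's frameworks are JavaScript; this is only an illustration of the idea):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Objects;
        import java.util.function.Consumer;
        import java.util.function.Supplier;

        // Dirty checking in miniature: bind() registers a model reader and a
        // view updater; digest() polls every reader and fires the updaters
        // whose values changed since the last cycle.
        class Binder {
            private static class Watch {
                final Supplier<Object> read;
                final Consumer<Object> render;
                Object last;
                Watch(Supplier<Object> read, Consumer<Object> render) {
                    this.read = read; this.render = render;
                }
            }

            private final List<Watch> watches = new ArrayList<>();

            void bind(Supplier<Object> read, Consumer<Object> render) {
                watches.add(new Watch(read, render));
            }

            void digest() { // runs after every event that may have mutated the model
                for (Watch w : watches) {
                    Object now = w.read.get();
                    if (!Objects.equals(now, w.last)) {
                        w.last = now;
                        w.render.accept(now);
                    }
                }
            }
        }

    Every digest touches every watcher, which is precisely the CPU cost the question points at; Object.observe()-style change notification trades that polling for bookkeeping memory.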

    Read the article

  • Is it better to spend my free time mastering a language I work with or learning a new one?

    - by edthethird
    I work full time on an Android project and am very comfortable with both Java and the Android framework. On a good day I would rate my abilities at an 8, and maybe a 7 on a bad day. I've recently found myself with more free time than I'm used to, so I have been working on a lot of personal projects. I am beginning to wonder what others think about this: is it worth my time to continue experimenting and pushing Android, or would I be better off learning another language? What do you all think? What would you do with more free time and energy than you know what to do with?

    Read the article

  • How many developers before continuous integration becomes effective for us?

    - by Carnotaurus
    There is an overhead associated with continuous integration, e.g., set-up, re-training, awareness activities, stoppage to fix "bugs" that turn out to be data issues, enforced separation-of-concerns programming styles, etc. At what point does continuous integration pay for itself? EDIT: These were my findings. The set-up was CruiseControl.Net with Nant, reading from VSS or TFS. Here are a few reasons for failure, which have nothing to do with the setup:
    - Cost of investigation: the time spent investigating whether a red light is due to a genuine logical inconsistency in the code, data quality, or another source such as an infrastructure problem (e.g., a network issue, a timeout reading from source control, a third-party server being down, etc.).
    - Political costs over infrastructure: I considered performing an "infrastructure" check for each method in the test run. I had no solution to the timeout except to replace the build server. Red tape got in the way and there was no server replacement.
    - Cost of fixing unit tests: a red light due to a data quality issue could be an indicator of a badly written unit test, so data-dependent unit tests were re-written to reduce the likelihood of a red light due to bad data. In many cases, the necessary data was inserted into the test environment so that the unit tests could run accurately. It makes sense to say that by making the data more robust, a test that depends on this data becomes more robust too. Of course, this worked well!
    - Cost of coverage, i.e., writing unit tests for already existing code: there was the problem of unit test coverage. There were thousands of methods that had no unit tests, so a sizeable number of man-days would have been needed to create them. As this would be too difficult to justify in a business case, it was decided that unit tests would be required for any new public method going forward. Those that did not have a unit test were termed 'potentially infra-red'. An interesting point here is that static methods were a moot point, as it was unclear how one could uniquely determine how a specific static method had failed.
    - Cost of bespoke releases: Nant scripts only go so far. They are not that useful for, say, CMS-dependent builds for EPiServer or any UI-oriented database deployment.
    These are the types of issues that occurred on the build server for hourly test runs and overnight QA builds. I maintain that these are unnecessary, as a build master can perform these tasks manually at release time, especially with a one-man band and a small build. So single-step builds have not justified the use of CI in my experience. What about the more complex, multi-step builds? These can be a pain to build, especially without a Nant script, so even having created one, they were no more successful. The cost of fixing the red-light issues outweighed the benefits. Eventually, developers lost interest and questioned the validity of the red light. Having given it a fair try, I believe that CI is expensive and there is a lot of working around the edges instead of just getting the job done. It's more cost effective to employ experienced developers who do not make a mess of large projects than to introduce and maintain an alarm system. This is the case even if those developers leave. It doesn't matter if a good developer leaves, because the processes he follows would ensure that he writes requirement specs and design specs, sticks to the coding guidelines, and comments his code so that it is readable; all of this is reviewed. If this is not happening, then his team leader is not doing his job, which should be picked up by his manager, and so on. For CI to work, it is not enough to just write unit tests, attempt to maintain full coverage, and ensure a working infrastructure for sizeable systems. The bottom line: one might question whether fixing as many bugs as possible before release is even desirable from a business perspective. CI involves a lot of work to capture a handful of bugs that the customer could identify in UAT, or that the company could get paid for fixing as part of a client service agreement once the warranty period expires anyway.

    Read the article

  • Code maintenance: keep a bad pattern for consistency when extending code, or not?

    - by Guillaume
    I have to extend an existing module of a project. I don't like the way it has been done (lots of anti-patterns involved, like copy-pasted code). I don't want to perform a complete refactor. Should I: create new methods using the existing convention, even if it feels wrong, to avoid confusion for the next maintainer and stay consistent with the code base? Or try to do what I feel is better, even if it introduces another pattern into the code? Clarification added after the first answers: the existing code is not a mess. It is easy to follow and understand. BUT it introduces lots of boilerplate code that can be avoided with good design (though the resulting code might become harder to follow). In my current case it's a good old JDBC (Spring template on board) DAO module, but I have already encountered this dilemma before and I'm seeking other devs' feedback. I don't want to refactor because I don't have the time. And even with time, it would be hard to justify that a whole, perfectly working module needs refactoring; the refactoring cost would outweigh its benefits. Remember: the code is not messy or over-complex. I cannot just extract a few methods here and introduce an abstract class there. It is more a flaw in the design (the result of extreme 'Keep It Stupid Simple', I think). So the question can also be asked like this: as a developer, do you prefer to maintain easy, stupid, boring code, OR to have some helpers that do the stupid boring work in your place? (The downside of the latter being that you'll have to learn some stuff, and maybe you will have to maintain the easy stupid boring code too until a full refactoring is done.)

    Read the article

  • Algorithm to use for shop floor layout?

    - by jkohlhepp
    I ran into a classroom problem yesterday (a business-oriented class, not computer science) and I found it interesting from an algorithmic perspective. The problem goes something like this: assume there is a shop floor with N different rooms, and you have N different departments that need to go in those rooms. The departments and the rooms are all the same size, so any department could go in any room. There is a known travel distance from each room to each other room. There is also a known number of trips necessary from one department to another (trips are counted the same regardless of which room they originate from, so a trip from A to B is equivalent to a trip from B to A). Given those inputs, determine a layout of departments into rooms which minimizes travel time. What is the best way to approach this problem algorithmically? Is there already a particular algorithm or class of algorithms designed to solve this type of problem? Does this type of problem have a name in computer science? I am not looking for you to design an algorithm to solve this, although feel free to do so if you would like. I'm wondering if this is a problem space that has already been well defined and studied algorithmically, and if so, to get some links to research further. I can see a lot of different data structures and algorithms that might apply to this, and I'm curious which approach would be "best". And don't worry, you are not doing my homework for me. This is not a homework problem per se, as this is a business course and we were simply discussing the concepts, not trying to solve the problem algorithmically.
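    As described, this has the shape of what operations research calls the quadratic assignment problem: for an assignment room[d] of departments to rooms, minimize the sum over department pairs of trips(a, b) x distance(room[a], room[b]). A brute-force Java sketch that just evaluates every permutation (feasible only for small N, but it pins down the objective being minimized):

        import java.util.Arrays;

        // trips[a][b]: trips between departments a and b (symmetric).
        // dist[r][s]: travel distance between rooms r and s.
        // Exhaustively permutes room assignments and keeps the cheapest.
        public class LayoutSearch {
            static int n;
            static int[][] trips, dist;
            static int[] best;
            static long bestCost = Long.MAX_VALUE;

            static long cost(int[] room) { // room[d] = room assigned to department d
                long c = 0;
                for (int a = 0; a < n; a++)
                    for (int b = a + 1; b < n; b++)
                        c += (long) trips[a][b] * dist[room[a]][room[b]];
                return c;
            }

            static void permute(int[] room, int k) {
                if (k == n) {
                    long c = cost(room);
                    if (c < bestCost) { bestCost = c; best = room.clone(); }
                    return;
                }
                for (int i = k; i < n; i++) {
                    int t = room[k]; room[k] = room[i]; room[i] = t; // pick a room for dept k
                    permute(room, k + 1);
                    t = room[k]; room[k] = room[i]; room[i] = t;     // undo
                }
            }

            public static void main(String[] args) {
                n = 3;
                trips = new int[][] { {0, 5, 1}, {5, 0, 2}, {1, 2, 0} };
                dist  = new int[][] { {0, 1, 4}, {1, 0, 2}, {4, 2, 0} };
                permute(new int[] {0, 1, 2}, 0);
                System.out.println("best cost = " + bestCost + " for rooms " + Arrays.toString(best));
            }
        }

    Exact solutions are NP-hard in general, which is why practical layouts of any size use heuristics (local search, simulated annealing) over this same cost function.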

    Read the article

  • Is it a bug or a task when something doesn't work yet during development?

    - by Patkos Csaba
    We usually face this dilemma in our team. Sometimes, in order to implement a task or a story, we find out that the system must be in a specific state. For example, a specific system configuration has to be made beforehand. The task/story can be completed, and it works as specified with the proper configuration in place. Note that the configuration is not directly related to the task. Next, we have to create a new... something... for the process of generating that configuration. This is where the problems appear. Some say it is a bug, others say it is a task or an extra feature. So, where is the line between bugs and tasks during the development phase? Should we even consider something a bug if all the tasks work as stated in their definitions? Can a thing be considered a bug just because one compares it to the current (unstable) state of the system? Short example: a feature requires configuring a communication service for a specific operation. During implementation, the team discovers that the service requires the hostnames of the peers to be resolvable to IP addresses. The team adds the hostnames to the DNS server (or hosts files) and continues implementing the required feature. After the initial feature is working, a question arises: should the sysadmin configure the DNS or hosts file, or should our application do it automatically? An automatic solution is possible, so a decision is made to implement it. ...and here the discussions start... is this a bug or an extra feature/task? PS: I know that I mixed feature/task/story in the question. That is intentional. I am interested in separating bugs from the rest; it doesn't matter what "the rest" means in a particular case.

    Read the article

  • Any suggested approaches to track bugs/defects?

    - by deostroll
    What is the best way to track defect sources in TFS? We have various teams for a project, like the vulnerability team, the customer, pre-sales, etc. We give them a build and these teams test it independently. They do not have access to our TFS system, so they usually send in their defects via email, usually in an Excel format. Our testing team takes these up and logs them into TFS. Sometimes they modify the original defect description (in the Excel file) and add the expected/actual results. Sometimes they forget to cite the source. I am talking about managing the various sources as such. Is there a way we can add these sources into TFS and actually link each source with its defects, with individual comments associated with them (saying where in the source we can find the actual material for the defect)? Edit: I don't know if there is a way to manage various sources. Consider this: the vulnerability assessment team has come up with defects/suggestions. They captured them in an Excel file and passed it on to the testing team (in my case). The testing team takes responsibility for elaborating the defects and logging them in TFS. Now say that the Excel file came with 20 defect items. This is my source. (It answers the question of where this defect came from.) So ultimately, when I am looking at a bug, I know where it came from: I'll be looking at the email sent by the VA team which has the Excel file, or the Excel file itself. It may be one of the 20 items in that file. How should the tester link to this source just once? It does not make sense for the tester to attach the same Excel file 20 times (i.e., attach it for each of the 20 defects while logging them into TFS), right? I hope you get my point.

    Read the article

  • Editing service for blogger with terrible English grammar

    - by Josh Moore
    I would like to write a technical blog. However, the biggest thing holding me back is my poor spelling, punctuation, and grammar (I have all these problems even though I am a native English speaker). I am thinking about using a professional editing/proofreading service to fix my blog posts before I publish them. However, given that the content will be technical in nature (some articles will get into the details of programming) and that I would like to write in Markdown, I am not sure the general online services will be a good fit. Can you recommend an editor (or company) you like that provides this service?

    Read the article

  • How was programming done 20 years ago?

    - by Click Upvote
    Nowadays we have a lot of programming aids that make work easier, including: IDEs; debuggers (line by line, breakpoints, etc.); Ant scripts, etc., for compiling; and sites like Stack Overflow to help if you're stuck on a bug. 20 years ago none of these things were around. Which tools did people use to program, and how did they manage without them? I'm interested in learning more about how programming was done back then.

    Read the article

  • How to translate formulas into natural language?

    - by Ricky
    I am currently working on a project aimed at evaluating whether an Android app crashes or not. The evaluation process is: 1. Collect the logs (which record the execution process of an app). 2. Generate formulas to predict the result (the formulas are generated by GP). 3. Evaluate the logs with the formulas. Now I can produce formulas, but for the users' convenience I want to translate the formulas into natural language and tell users why a crash happened. (I think it looks like "inverse natural language processing".) To explain the idea more clearly, imagine you got a formula like this: 155 - count(onKeyDown) >= 148. It's obvious that if count(onKeyDown) > 7, the result of "155 - count(onKeyDown) >= 148" is false, so a log containing more than 7 onKeyDown events would be predicted "Failed". I want to show users that if the onKeyDown event appears more than 7 times (155 - 148 = 7), this app will crash. However, the real formulas are much more complicated, such as: (< !( ( SUM( {Att[17]}, Event[5]) <= MAX( {Att[7]}, Att[0] >= Att[11]) OR SUM( {Att[17]}, Event[5]) > MIN( {Att[12]}, 734 > Att[19]) ) OR count(Event[5]) != 1 ) > (< count(Att[4] = Att[3]) >= count(702 != Att[8]) + 348 / SUM( {Att[13]}, 641 < Att[12]) mod 587 - SUM( {Att[13]}, Att[10] < Att[15]) mod MAX( {Att[13]}, Event[2]) + 384 > count(Event[10]) != 1)) I tried to implement this in C++, but it's quite difficult; here's the snippet of code I am working on right now. Does anyone know how to implement this function quickly? (Maybe with some tools or research findings?) Any idea is welcome :) Thanks in advance.
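    One standard route is to parse each generated formula into an expression tree and give every node type an English template, so the explanation falls out of a recursive walk. A minimal Java sketch under that assumption (node types and phrasings are hypothetical; a real grammar would need to cover SUM, MAX, mod, etc.):

        // Render a formula AST to English: each node owns its phrasing and
        // nested nodes compose naturally.
        interface Expr {
            String toEnglish();
        }

        class Count implements Expr {
            final String event;
            Count(String event) { this.event = event; }
            public String toEnglish() { return "the number of " + event + " events"; }
        }

        class Const implements Expr {
            final int v;
            Const(int v) { this.v = v; }
            public String toEnglish() { return String.valueOf(v); }
        }

        class Sub implements Expr {
            final Expr l, r;
            Sub(Expr l, Expr r) { this.l = l; this.r = r; }
            public String toEnglish() { return l.toEnglish() + " minus " + r.toEnglish(); }
        }

        class Ge implements Expr {
            final Expr l, r;
            Ge(Expr l, Expr r) { this.l = l; this.r = r; }
            public String toEnglish() { return l.toEnglish() + " is at least " + r.toEnglish(); }
        }

        public class FormulaToText {
            public static void main(String[] args) {
                // 155 - count(onKeyDown) >= 148
                Expr rule = new Ge(new Sub(new Const(155), new Count("onKeyDown")),
                                   new Const(148));
                System.out.println("The app is predicted to pass when " + rule.toEnglish() + ".");
                // -> The app is predicted to pass when 155 minus the number of
                //    onKeyDown events is at least 148.
            }
        }

    To get the friendlier "more than 7 times" wording, an algebraic simplification pass over the tree (moving count to one side) would run before verbalizing; the rendering machinery itself stays the same.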

    Read the article

  • What should be included in the risk management section of a software architecture document?

    - by Limbo Exile
    I am going to develop a Java application (a Spring web application that will be used to extract data from various data sources), and I want to include risk management of the software in the architecture documentation. By risk management (I am not sure if this is the right name), I mean documenting what can go wrong with the software and what to do in those cases. At first I tried to draft some lists, including things like database performance degradation, changes to external components that the software interacts with, security breaches, etc. But as I am not an experienced developer, I cannot rely on those drafts; I don't think they are exhaustive. I searched the web hoping to find something similar to the Joel Test, or any other resource that cites the most common causes of problems that should be included and analyzed in risk management documentation, but I haven't found much. Finally, my question is: what should be included in the risk management section of a software architecture document?

    Read the article

  • Perl: comparing 2 data files as 2D arrays to find one-to-one matches [migrated]

    - by roman serpa
    I'm writing a program that uses combinations of variables (combiData.txt, 63 rows x a varying number of columns) to analyse a data table (j1j2_1.csv, 1000 rows x 19 columns), in order to count how many times each combination is repeated in the data table and which rows it comes from (for instance, tableData[row][4]). When I try to run it, I get the following messages: Use of uninitialized value $val in numeric eq (==) at rowInData.pl line 34. Use of reference "ARRAY(0x1a2eae4)" as array index at rowInData.pl line 56. Use of reference "ARRAY(0x1a1334c)" as array index at rowInData.pl line 56. Use of uninitialized value in subtraction (-) at rowInData.pl line 56. Modification of non-creatable array value attempted, subscript -1 at rowInData.pl line 56. nothing This is the code, reformatted, with the bugs behind those warnings fixed and marked in comments (the IntToStr helper is dropped since eq already compares its operands as strings, and the full-match check is moved outside the inner loop):

        #!/usr/bin/perl
        use strict;
        use warnings;

        open(FILE1, "<combiData.txt") or die "can't open file combiData.txt\n";
        my @tableCombi;
        while (<FILE1>) {
            my @row = split(' ', $_);
            push(@tableCombi, \@row);
        }
        close(FILE1) or die $!;    # was: close FILE1 || die $!; (|| binds too tightly)

        open(FILE2, "<j1j2_1.csv") or die "can't open file j1j2_1.csv\n";
        my @tableData;
        while (<FILE2>) {
            my @row2 = split(/\s*,\s*/, $_);
            push(@tableData, \@row2);
        }
        close(FILE2) or die $!;

        # Map a combiData.txt variable (a column position) to the value
        # that has to be found in the data table.
        sub trueVal {
            my ($val) = @_;
            return 'nonsynonymous_SNV' if $val == 7;
            return '1' if $val >= 14 && $val <= 19;
            return '';    # was: print 'nothing'; (returned undef -> warnings)
        }

        for my $combi (@tableCombi) {
            my $line_match = 0;
            for my $sheetData (@tableData) {
                my $countTrue = 0;
                # $combi and $sheetData are array references, so their elements
                # are reached with ->[...]; indexing @tableCombi with a reference
                # (as in $tableCombi[$combi][$cell]) caused the warnings above.
                for my $cell (@$combi) {
                    $countTrue++ if trueVal($cell) eq ($sheetData->[$cell - 1] // '');
                }
                if ($countTrue == @$combi) {
                    $line_match++;
                    print $sheetData->[4] . " ";
                }
            }
            print "$line_match \n";
        }

    Read the article

  • Applications affected by memory performance

    - by robotron
    I'm writing a paper on applications that are affected more by memory performance than by processor performance. I've got a lot written about the gap between the two; however, I can't seem to find anything about the applications that might be affected more by memory performance than by processor speed. I suppose these are applications that make a large number of memory references, but I have no idea what kind of applications would make enough references for it to stand out. Perhaps databases? Can you please give me some pointers on how to proceed, or some links to papers? I'm really stuck.
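    A classic way to see a memory-bound workload in miniature is traversal order over a large array: the arithmetic is identical either way, and the timing gap comes almost entirely from cache behavior rather than CPU speed. A small Java sketch (the size of the effect depends on the machine; timings are indicative only):

        // Row-major traversal walks memory sequentially (cache-friendly);
        // column-major traversal strides by a full row each step
        // (cache-hostile). Same number of additions in both loops.
        public class StrideDemo {
            public static void main(String[] args) {
                int n = 4096;
                int[][] a = new int[n][n];
                long sum = 0;

                long t0 = System.nanoTime();
                for (int i = 0; i < n; i++)
                    for (int j = 0; j < n; j++)
                        sum += a[i][j];               // sequential access
                long rowMs = (System.nanoTime() - t0) / 1_000_000;

                t0 = System.nanoTime();
                for (int j = 0; j < n; j++)
                    for (int i = 0; i < n; i++)
                        sum += a[i][j];               // strided access
                long colMs = (System.nanoTime() - t0) / 1_000_000;

                System.out.println("row-major: " + rowMs + " ms, column-major: "
                        + colMs + " ms (sum=" + sum + ")");
            }
        }

    Workloads dominated by pointer chasing and large, irregular data sets (databases, graph traversal, sparse linear algebra) behave like the second loop; search terms such as "memory wall" and "memory-bound vs compute-bound" should lead to the relevant papers.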

    Read the article
