Search Results

Search found 12719 results on 509 pages for 'language translator'.

Page 92/509

  • When is performance gain significant enough to implement that optimization?

    - by Zwei steinen
    Hi, following the textbook, I measure performance whenever I try optimizing my code. Sometimes, however, the performance gain is rather small and I can't decide whether I should implement that optimization. For example, when a fix shortens an average response time from 100ms to 90ms under some conditions, should I implement that fix? What if it shortens 200ms to 190ms? How many conditions should I try before I can conclude that it will be beneficial overall? I guess it's not possible to give a straightforward answer to this, as it depends on too many things, but is there a good rule of thumb that I should follow? Are there any guidelines/best practices?
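
    There is no universal threshold; one rough sketch (in Java, with made-up sample-data assumptions) is to treat the decision as two questions: is the measured difference clearly larger than the measurement noise, and does the saved time matter given how often the path runs (10ms saved on a call made a million times a day matters far more than on a once-a-day job).

        import java.util.Arrays;

        class GainCheck {
            // Rough check: is the mean improvement clearly larger than the noise
            // in the two timing samples? (Welch-style t statistic > ~2.)
            // Assumes at least two timings per side.
            static boolean looksSignificant(double[] baselineMs, double[] optimizedMs) {
                double gain = mean(baselineMs) - mean(optimizedMs);
                double noise = Math.sqrt(variance(baselineMs) / baselineMs.length
                        + variance(optimizedMs) / optimizedMs.length);
                return gain / noise > 2.0;
            }

            static double mean(double[] xs) {
                return Arrays.stream(xs).average().orElse(0.0);
            }

            static double variance(double[] xs) {
                double m = mean(xs);
                return Arrays.stream(xs).map(x -> (x - m) * (x - m)).sum() / (xs.length - 1);
            }
        }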

    Read the article

  • Are there any well known algorithms to detect the presence of names?

    - by Rhubarb
    For example, given a string: "Bob went fishing with his friend Jim Smith." Bob and Jim Smith are both names, but "bob" and "smith" are also ordinary words. Were it not for the capitalization, there would be little outside our own knowledge of the sentence to indicate this. Without doing grammar analysis, are there any well known algorithms for detecting the presence of names, at least Western names?
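
    There is no single standard algorithm; this task is usually treated as named-entity recognition (statistical models or dictionary/gazetteer lookups). As a minimal Java sketch of the capitalization heuristic hinted at above (the stop-word list is a made-up placeholder, and sentence-initial words will produce false positives):

        import java.util.*;
        import java.util.regex.*;

        class NameGuesser {
            // Crude heuristic: runs of capitalized words that are not common
            // capitalized non-name words. A real system would use NER instead.
            private static final Set<String> STOPWORDS = Set.of("The", "His", "Her", "Their");

            private static final Pattern CANDIDATE =
                    Pattern.compile("\\b[A-Z][a-z]+(?:\\s+[A-Z][a-z]+)*");

            static List<String> candidateNames(String text) {
                List<String> names = new ArrayList<>();
                Matcher m = CANDIDATE.matcher(text);
                while (m.find()) {
                    if (!STOPWORDS.contains(m.group())) {
                        names.add(m.group());
                    }
                }
                return names;
            }

            public static void main(String[] args) {
                System.out.println(candidateNames("Bob went fishing with his friend Jim Smith."));
                // -> [Bob, Jim Smith]
            }
        }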

    Read the article

  • As our favorite imperative languages gain functional constructs, should loops be considered a code smell?

    - by Michael Buen
    In allusion to Dare Obasanjo's impressions of Map, Reduce, Filter (Functional Programming in C# 3.0: How Map/Reduce/Filter can Rock your World): "With these three building blocks, you could replace the majority of the procedural for loops in your application with a single line of code. C# 3.0 doesn't just stop there." Should we increasingly use them instead of loops? And should having loops (instead of those three building blocks of data manipulation) be one of the metrics for coding horrors in code reviews? And why? [NOTE] I'm not advocating fully functional programming for code that could simply be translated to loops (e.g. tail recursions). Asking for a politer term: considering that the phrase "code smell" is not so diplomatic, I posted another question http://stackoverflow.com/questions/432492/whats-the-politer-word-for-code-smell about the right word for "code smell", er.. utterly bad code. Should that phrase have a place in our programming parlance?
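
    The question is about C# 3.0, but the same trade-off exists with Java streams; for illustration, the loop and the map/filter/reduce pipeline below compute the same result, which is exactly the kind of substitution under debate.

        import java.util.List;

        class LoopVsPipeline {
            // Sum of the squares of the even numbers, written both ways.
            static int withLoop(List<Integer> xs) {
                int sum = 0;
                for (int x : xs) {
                    if (x % 2 == 0) {
                        sum += x * x;
                    }
                }
                return sum;
            }

            static int withPipeline(List<Integer> xs) {
                return xs.stream()
                         .filter(x -> x % 2 == 0)   // Filter
                         .map(x -> x * x)           // Map
                         .reduce(0, Integer::sum);  // Reduce
            }
        }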

    Read the article

  • Are licenses relevant for small code snippets?

    - by Martin
    When I'm about to write a short algorithm, I first check in the base class library I'm using whether the algorithm is implemented in it. If not, I often do a quick Google search to see if someone has done it before (which is the case, 19 times out of 20). Most of the time, I find the exact code I need. Sometimes it's clear what license applies to the source code, sometimes not. It may be GPL, LGPL, BSD or whatever. Sometimes people have posted a code snippet on some random forum which solves my problem. It's clear to me that I can't reuse the code (copy/paste it into my code) without caring about the license if the code is in some way substantial. What is not clear to me is whether I can copy a code snippet containing 5 lines or so without violating the license. Can I copy/paste a 5-line code snippet without caring about the license? What about a one-liner? What about 10 lines? Where do I draw the line (no pun intended)? My second problem is that if I have found a 10-line code snippet which does exactly what I need, but feel that I cannot copy it because it's GPL-licensed and my software isn't, I have already memorized how to implement it, so when I go around implementing the same functionality, my code is almost identical to the GPL-licensed code I saw a few minutes ago. (In other words, the code was copied into my brain, and my brain then copied it into my source code.)

    Read the article

  • Finding what makes strings unique in a list, can you improve on brute force?

    - by Ed Guiness
    Suppose I have a list of strings where each string is exactly 4 characters long and unique within the list. For each of these strings I want to identify the position of the characters within the string that make the string unique. So for a list of three strings abcd abcc bbcb For the first string I want to identify the character in 4th position d since d does not appear in the 4th position in any other string. For the second string I want to identify the character in 4th position c. For the third string I want to identify the character in 1st position b AND the character in 4th position, also b. This could be concisely represented as abcd -> ...d abcc -> ...c bbcb -> b..b If you consider the same problem but with a list of binary numbers 0101 0011 1111 Then the result I want would be 0101 -> ..0. 0011 -> .0.. 1111 -> 1... Staying with the binary theme I can use XOR to identify which bits are unique within two binary numbers since 0101 ^ 0011 = 0110 which I can interpret as meaning that in this case the 2nd and 3rd bits (reading left to right) are unique between these two binary numbers. This technique might be a red herring unless somehow it can be extended to the larger list. A brute-force approach would be to look at each string in turn, and for each string to iterate through vertical slices of the remainder of the strings in the list. So for the list abcd abcc bbcb I would start with abcd and iterate through vertical slices of abcc bbcb where these vertical slices would be a | b | c | c b | b | c | b or in list form, "ab", "bb", "cc", "cb". This would result in four comparisons a : ab -> . (a is not unique) b : bb -> . (b is not unique) c : cc -> . (c is not unique) d : cb -> d (d is unique) or concisely abcd -> ...d Maybe it's wishful thinking, but I have a feeling that there should be an elegant and general solution that would apply to an arbitrarily large list of strings (or binary numbers). But if there is, I haven't yet been able to see it. I hope to use this algorithm to derive minimal signatures from a collection of unique images (bitmaps) in order to efficiently identify those images at a future time. If future efficiency wasn't a concern I would use a simple hash of each image. Can you improve on brute force?
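
    One way to beat the pairwise brute force, sketched in Java under the assumption of equal-length ASCII strings: count, for every position, how often each character occurs there across the whole list; a character then makes a string unique at a position exactly when its count there is 1. This is O(total characters) rather than quadratic, and extends to the bitmap-signature case by treating bytes as characters.

        import java.util.*;

        class UniquePositions {
            // For each string, mark the positions whose character occurs at that
            // position in no other string. Assumes a non-empty list of
            // equal-length ASCII strings.
            static List<String> signatures(List<String> strings) {
                int len = strings.get(0).length();
                int[][] counts = new int[len][128];
                for (String s : strings) {
                    for (int i = 0; i < len; i++) {
                        counts[i][s.charAt(i)]++;
                    }
                }
                List<String> result = new ArrayList<>();
                for (String s : strings) {
                    StringBuilder sb = new StringBuilder();
                    for (int i = 0; i < len; i++) {
                        sb.append(counts[i][s.charAt(i)] == 1 ? s.charAt(i) : '.');
                    }
                    result.add(sb.toString());
                }
                return result;
            }

            public static void main(String[] args) {
                System.out.println(signatures(List.of("abcd", "abcc", "bbcb")));
                // -> [...d, ...c, b..b]
            }
        }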

    Read the article

  • Using different languages in one project

    - by Tarbal
    I recently heard about the use of several different languages in a (big) project. I also read about famous services such as Twitter using Rails as the frontend, mixed with some other languages, and Scala, I think it was, as the backend. Is this common practice? Who does that? I'm sure there are disadvantages to this. I think that you will have problems with the different interpreters/compilers and with seamlessly connecting the different languages. Is this true? Why is this actually done? For performance?

    Read the article

  • Have you been in cases where TDD increased development time?

    - by BillyONeal
    Hello everyone :) I was reading http://stackoverflow.com/questions/2512504/tdd-how-to-start-really-thinking-tdd and I noticed many of the answers indicate that tests + application should take less time than just writing the application. In my experience, this is not true. My problem, though, is that some 90% of the code I write makes a TON of operating system calls. The time spent actually mocking these out takes much longer than just writing the code in the first place; sometimes 4 or 5 times as long to write the test as to write the actual code. I'm curious whether there are other developers in this kind of scenario.

    Read the article

  • Interface for classes that have nothing in common

    - by Tomek Tarczynski
    Let's say I want to make a few classes to determine the behaviour of agents. Good practice would be to make some common interface for them; such an interface (simplified) could look like this: interface IModel { void UpdateBehaviour(); } All, or at least most, of such models would have some parameters, but the parameters of one model might have nothing in common with the parameters of another model. I would like to have some common way of loading parameters. Question: What is the best way to do that? Is it maybe just adding a method void LoadParameters(object parameters) to IModel? Or creating an empty interface IParameters and adding a method void LoadParameters(IParameters parameters)? Those are the two ideas I came up with, but I don't like either of them.
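
    A third option, sketched here in Java rather than C# (the names are illustrative only), is to make the parameter type a type parameter of the model interface, so each model states its own parameter class and no empty marker interface is needed:

        // Sketch only: each model declares its own parameter type.
        interface Model<P> {
            void loadParameters(P parameters);
            void updateBehaviour();
        }

        class FlockingParameters {
            double cohesion;
            double separation;
        }

        class FlockingModel implements Model<FlockingParameters> {
            private FlockingParameters params = new FlockingParameters();

            @Override
            public void loadParameters(FlockingParameters parameters) {
                this.params = parameters;
            }

            @Override
            public void updateBehaviour() {
                // use params.cohesion, params.separation, ...
            }
        }

    The trade-off is that a heterogeneous collection of models then has to be typed as Model<?>, so this is mainly attractive when callers know the concrete model they are configuring.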

    Read the article

  • Find all complete sub-graphs within a graph

    - by mvid
    Is there a known algorithm or method to find all complete sub-graphs within a graph? I have an undirected, unweighted graph and I need to find all subgraphs within it where each node in the subgraph is connected to each other node in the subgraph. Is there an existing algorithm for this?
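
    What you describe (every node in the subgraph connected to every other node) is a clique, and the classic algorithm for enumerating all maximal cliques is Bron-Kerbosch; every complete subgraph is then a subset of some maximal clique. A minimal Java sketch without the usual pivoting optimization:

        import java.util.*;

        class Cliques {
            // Bron-Kerbosch: enumerates all maximal cliques of an undirected graph.
            // adj maps each vertex to its set of neighbours.
            static void bronKerbosch(Set<Integer> r, Set<Integer> p, Set<Integer> x,
                                     Map<Integer, Set<Integer>> adj,
                                     List<Set<Integer>> out) {
                if (p.isEmpty() && x.isEmpty()) {
                    out.add(new HashSet<>(r));    // r is a maximal clique
                    return;
                }
                for (Integer v : new ArrayList<>(p)) {
                    Set<Integer> nv = adj.getOrDefault(v, Set.of());
                    r.add(v);
                    bronKerbosch(r, intersect(p, nv), intersect(x, nv), adj, out);
                    r.remove(v);
                    p.remove(v);
                    x.add(v);
                }
            }

            static Set<Integer> intersect(Set<Integer> a, Set<Integer> b) {
                Set<Integer> s = new HashSet<>(a);
                s.retainAll(b);
                return s;
            }

            static List<Set<Integer>> maximalCliques(Map<Integer, Set<Integer>> adj) {
                List<Set<Integer>> out = new ArrayList<>();
                bronKerbosch(new HashSet<>(), new HashSet<>(adj.keySet()), new HashSet<>(), adj, out);
                return out;
            }
        }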

    Read the article

  • Function overloading by return type?

    - by dsimcha
    Why don't more mainstream statically typed languages support function/method overloading by return type? I can't think of any that do. It seems no less useful or reasonable than supporting overload by parameter type. How come it's so much less popular?
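
    One concrete way to see the usual objection, sketched in Java (the second overload only exists in the comment, because javac rejects overloads differing only in return type): when the result is discarded, or flows into a context that accepts several types, the call site gives the compiler nothing to choose by.

        class ReturnTypeOverloading {
            static int parse(String s) { return Integer.parseInt(s); }
            // A hypothetical second overload "static double parse(String s)",
            // differing only in return type, is rejected by javac: in calls
            // like the one below the result is discarded, so the compiler
            // would have no basis for picking one of the two.
            public static void main(String[] args) {
                parse("42"); // which overload would this mean if both existed?
            }
        }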

    Read the article

  • What statistics can be maintained for a set of numerical data without iterating?

    - by Dan Tao
    Update Just for future reference, I'm going to list all of the statistics that I'm aware of that can be maintained in a rolling collection, recalculated as an O(1) operation on every addition/removal (this is really how I should've worded the question from the beginning): Obvious Count Sum Mean Max* Min* Median** Less Obvious Variance Standard Deviation Skewness Kurtosis Mode*** Weighted Average Weighted Moving Average**** OK, so to put it more accurately: these are not "all" of the statistics I'm aware of. They're just the ones that I can remember off the top of my head right now. *Can be recalculated in O(1) for additions only, or for additions and removals if the collection is sorted (but in this case, insertion is not O(1)). Removals potentially incur an O(n) recalculation for non-sorted collections. **Recalculated in O(1) for a sorted, indexed collection only. ***Requires a fairly complex data structure to recalculate in O(1). ****This can certainly be achieved in O(1) for additions and removals when the weights are assigned in a linearly descending fashion. In other scenarios, I'm not sure. Original Question Say I maintain a collection of numerical data -- let's say, just a bunch of numbers. For this data, there are loads of calculated values that might be of interest; one example would be the sum. To get the sum of all this data, I could... Option 1: Iterate through the collection, adding all the values: double sum = 0.0; for (int i = 0; i < values.Count; i++) sum += values[i]; Option 2: Maintain the sum, eliminating the need to ever iterate over the collection just to find the sum: void Add(double value) { values.Add(value); sum += value; } void Remove(double value) { values.Remove(value); sum -= value; } EDIT: To put this question in more relatable terms, let's compare the two options above to a (sort of) real-world situation: Suppose I start listing numbers out loud and ask you to keep them in your head. I start by saying, "11, 16, 13, 12." If you've just been remembering the numbers themselves and nothing more, and then I say, "What's the sum?", you'd have to think to yourself, "OK, what's 11 + 16 + 13 + 12?" before responding, "52." If, on the other hand, you had been keeping track of the sum yourself while I was listing the numbers (i.e., when I said, "11" you thought "11", when I said "16", you thought, "27," and so on), you could answer "52" right away. Then if I say, "OK, now forget the number 16," if you've been keeping track of the sum inside your head you can simply take 16 away from 52 and know that the new sum is 36, rather than taking 16 off the list and them summing up 11 + 13 + 12. So my question is, what other calculations, other than the obvious ones like sum and average, are like this? SECOND EDIT: As an arbitrary example of a statistic that (I'm almost certain) does require iteration -- and therefore cannot be maintained as simply as a sum or average -- consider if I asked you, "how many numbers in this collection are divisible by the min?" Let's say the numbers are 5, 15, 19, 20, 21, 25, and 30. The min of this set is 5, which divides into 5, 15, 20, 25, and 30 (but not 19 or 21), so the answer is 5. Now if I remove 5 from the collection and ask the same question, the answer is now 2, since only 15 and 30 are divisible by the new min of 15; but, as far as I can tell, you cannot know this without going through the collection again. 
So I think this gets to the heart of my question: if we can divide kinds of statistics into these categories, those that are maintainable (my own term, maybe there's a more official one somewhere) versus those that require iteration to compute any time a collection is changed, what are all the maintainable ones? What I am asking about is not strictly the same as an online algorithm (though I sincerely thank those of you who introduced me to that concept). An online algorithm can begin its work without having even seen all of the input data; the maintainable statistics I am seeking will certainly have seen all the data, they just don't need to reiterate through it over and over again whenever it changes.
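
    As a concrete example from the "less obvious" group above: variance (and therefore standard deviation) can be maintained with Welford's algorithm in O(1) per update. A Java sketch including removal; the removal step is algebraically exact but can be less numerically stable than a full recompute.

        class RollingStats {
            // Welford's online algorithm: O(1) add and remove for count/mean/variance.
            private int count = 0;
            private double mean = 0.0;
            private double m2 = 0.0;   // sum of squared deviations from the mean

            void add(double x) {
                count++;
                double delta = x - mean;
                mean += delta / count;
                m2 += delta * (x - mean);
            }

            void remove(double x) {
                // assumes x was previously added and count > 1
                double oldMean = mean;
                mean = (count * mean - x) / (count - 1);
                m2 -= (x - oldMean) * (x - mean);
                count--;
            }

            double getMean()  { return mean; }
            double variance() { return count > 1 ? m2 / (count - 1) : 0.0; } // sample variance
            double stdDev()   { return Math.sqrt(variance()); }
        }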

    Read the article

  • Easiest way to find the correct kademlia bucket

    - by Martin
    In the Kademlia protocol, node IDs are 160-bit numbers. Nodes are stored in buckets: bucket 0 stores all the nodes which have the same ID as this node except for the very last bit, bucket 1 stores all the nodes which have the same ID as this node except for the last 2 bits, and so on for all 160 buckets. What's the fastest way to find which bucket I should put a new node into? I have my buckets simply stored in an array, and need a method like so: Bucket[] buckets; //array with 160 items public Bucket GetBucket(Int160 myId, Int160 otherId) { //some stuff goes here } The obvious approach is to work down from the most significant bit, comparing bit by bit until I find a difference; I'm hoping there is a better approach based around clever bit twiddling. Practical note: my Int160 is stored in a byte array with 20 items, so solutions which work well with that kind of structure will be preferred.
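
    A sketch in Java rather than the C#-style stub above, treating Int160 as a 20-byte big-endian array: XOR byte by byte from the most significant end and use numberOfLeadingZeros on the first byte that differs, so the work is at most 20 byte comparisons plus one intrinsic instead of 160 bit comparisons.

        class KademliaBuckets {
            // Bucket index for otherId relative to myId, using the numbering in the
            // question: bucket 0 = ids differing only in the very last bit, ...,
            // bucket 159 = ids differing in the most significant bit.
            static int bucketIndex(byte[] myId, byte[] otherId) {
                for (int i = 0; i < myId.length; i++) {              // most significant byte first
                    int xor = (myId[i] ^ otherId[i]) & 0xFF;
                    if (xor != 0) {
                        int highestBit = 31 - Integer.numberOfLeadingZeros(xor); // 0..7
                        return (myId.length - 1 - i) * 8 + highestBit;
                    }
                }
                return -1; // identical ids: no bucket
            }
        }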

    Read the article

  • Non-Relational DBMS Design Resources

    - by Matt Luongo
    Hey guys, As a personal project, I'm looking to build a rudimentary DBMS. I've read the relevant sections in Elmasri & Navathe (5ed), but could use a more focused text. The rub is that I want to play with novel non-relational data models. While a lot of E&N was great- indexing implementation details in particular- the more advanced DBMS implementation was only targeted to a relational model. I could also use something a bit more practical and detail-oriented, with real-world recommendations. I'd like to defer staring at DBMS source for a while if I can. Any ideas?

    Read the article

  • Equivalence of boolean expressions

    - by Iulian Serbanoiu
    Hello, I have a problem that consists of comparing boolean expressions (OR is +, AND is *). To be more precise, here is an example: I have the expression "A+B+C" and I want to compare it with "B+A+C". Comparing them as strings is not a solution - it will tell me that the expressions don't match, which is of course false. Any ideas on how to compare those expressions? Any ideas about how I can tackle this problem? I accept any kind of suggestion, but (as a note) the final code in my application will be written in C++ (C accepted of course). A normal expression could also contain parentheses: (A * B * C) + D or A+B*(C+D)+X*Y Thanks in advance, Iulian
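
    Since these are pure boolean expressions, two of them are equivalent exactly when they agree on every assignment of their variables, so a brute-force but reliable check is a truth-table comparison (exponential in the number of variables, fine for small formulas). A Java sketch of the idea, leaving the string-to-expression parsing aside; the final C++ version would evaluate its own parse tree instead of these lambdas.

        import java.util.*;
        import java.util.function.Predicate;

        class BoolEquiv {
            // Checks whether two boolean expressions agree on every assignment
            // of the given variables (truth-table comparison, O(2^n)).
            static boolean equivalent(List<String> vars,
                                      Predicate<Map<String, Boolean>> e1,
                                      Predicate<Map<String, Boolean>> e2) {
                int n = vars.size();
                for (long mask = 0; mask < (1L << n); mask++) {
                    Map<String, Boolean> env = new HashMap<>();
                    for (int i = 0; i < n; i++) {
                        env.put(vars.get(i), ((mask >> i) & 1) == 1);
                    }
                    if (e1.test(env) != e2.test(env)) return false;
                }
                return true;
            }

            public static void main(String[] args) {
                // A + B + C  vs  B + A + C  ('+' is OR)
                Predicate<Map<String, Boolean>> e1 = v -> v.get("A") || v.get("B") || v.get("C");
                Predicate<Map<String, Boolean>> e2 = v -> v.get("B") || v.get("A") || v.get("C");
                System.out.println(equivalent(List.of("A", "B", "C"), e1, e2)); // true
            }
        }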

    Read the article

  • Is Java assert broken?

    - by BlairHippo
    While poking around the questions, I recently discovered the assert keyword in Java. At first, I was excited. Something useful I didn't already know! A more efficient way for me to check the validity of input parameters! Yay learning! But then I took a closer look, and my enthusiasm was not so much "tempered" as "snuffed-out completely" by one simple fact: you can turn assertions off.* This sounds like a nightmare. If I'm asserting that I don't want the code to keep going if the input listOfStuff is null, why on earth would I want that assertion ignored? It sounds like if I'm debugging a piece of production code and suspect that listOfStuff may have been erroneously passed a null but don't see any logfile evidence of that assertion being triggered, I can't trust that listOfStuff actually got sent a valid value; I also have to account for the possibility that assertions may have been turned off entirely. And this assumes that I'm the one debugging the code. Somebody unfamiliar with assertions might see that and assume (quite reasonably) that if the assertion message doesn't appear in the log, listOfStuff couldn't be the problem. If your first encounter with assert was in the wild, would it even occur to you that it could be turned-off entirely? It's not like there's a command-line option that lets you disable try/catch blocks, after all. All of which brings me to my question (and this is a question, not an excuse for a rant! I promise!): What am I missing? Is there some nuance that renders Java's implementation of assert far more useful than I'm giving it credit for? Is the ability to enable/disable it from the command line actually incredibly valuable in some contexts? Am I misconceptualizing it somehow when I envision using it in production code in lieu of statements like if (listOfStuff == null) barf();? I just feel like there's something important here that I'm not getting. *Okay, technically speaking, they're actually off by default; you have to go out of your way to turn them on. But still, you can knock them out entirely.
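
    For what it's worth, the usual convention is that assert is for internal invariants you believe cannot fail (and are willing to have skipped when assertions are disabled, which is the default unless the JVM is started with -ea), while validation of inputs you don't control stays as ordinary code that always runs. A small sketch of that split:

        import java.util.List;

        class StuffProcessor {
            void process(List<String> listOfStuff) {
                // Argument validation: always runs, regardless of -ea/-da.
                if (listOfStuff == null) {
                    throw new IllegalArgumentException("listOfStuff must not be null");
                }
                int processed = doWork(listOfStuff);
                // Internal sanity check: skipped at runtime when assertions
                // are disabled (the default).
                assert processed == listOfStuff.size() : "lost items while processing";
            }

            private int doWork(List<String> listOfStuff) {
                return listOfStuff.size(); // placeholder
            }
        }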

    Read the article

  • Examples of monoids/semigroups in programming

    - by jkff
    It is well-known that monoids are stunningly ubiquitous in programing. They are so ubiquitous and so useful that I, as a 'hobby project', am working on a system that is completely based on their properties (distributed data aggregation). To make the system useful I need useful monoids :) I already know of these: Numeric or matrix sum Numeric or matrix product Minimum or maximum under a total order with a top or bottom element (more generally, join or meet in a bounded lattice, or even more generally, product or coproduct in a category) Set union Map union where conflicting values are joined using a monoid Intersection of subsets of a finite set (or just set intersection if we speak about semigroups) Intersection of maps with a bounded key domain (same here) Merge of sorted sequences, perhaps with joining key-equal values in a different monoid/semigroup Bounded merge of sorted lists (same as above, but we take the top N of the result) Cartesian product of two monoids or semigroups List concatenation Endomorphism composition. Now, let us define a quasi-property of an operation as a property that holds up to an equivalence relation. For example, list concatenation is quasi-commutative if we consider lists of equal length or with identical contents up to permutation to be equivalent. Here are some quasi-monoids and quasi-commutative monoids and semigroups: Any (a+b = a or b, if we consider all elements of the carrier set to be equivalent) Any satisfying predicate (a+b = the one of a and b that is non-null and satisfies some predicate P, if none does then null; if we consider all elements satisfying P equivalent) Bounded mixture of random samples (xs+ys = a random sample of size N from the concatenation of xs and ys; if we consider any two samples with the same distribution as the whole dataset to be equivalent) Bounded mixture of weighted random samples Which others do exist?
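
    Most of these fit one tiny shape; a Java sketch (using a Java 16+ record for brevity) with two monoids from the list, whose associativity is exactly what lets partial results from different machines be folded together in any grouping:

        import java.util.List;
        import java.util.function.BinaryOperator;

        // A monoid: an associative operation with an identity element.
        record Monoid<T>(T identity, BinaryOperator<T> op) {
            T fold(List<T> xs) {
                T acc = identity;
                for (T x : xs) {
                    acc = op.apply(acc, x);
                }
                return acc;
            }
        }

        class Demo {
            public static void main(String[] args) {
                Monoid<Integer> sum = new Monoid<>(0, Integer::sum);
                Monoid<Integer> max = new Monoid<>(Integer.MIN_VALUE, Integer::max);
                System.out.println(sum.fold(List.of(1, 2, 3))); // 6
                System.out.println(max.fold(List.of(1, 7, 3))); // 7
            }
        }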

    Read the article

  • Why is it useful to count the number of bits?

    - by Scorchin
    I've seen the numerous questions about counting the number of set bits in a given type of input, but why is it useful? For those looking for algorithms for bit counting, look here: http://stackoverflow.com/questions/1517848/counting-common-bits-in-a-sequence-of-unsigned-longs http://stackoverflow.com/questions/472325/fastest-way-to-count-number-of-bit-transitions-in-an-unsigned-int http://stackoverflow.com/questions/109023/best-algorithm-to-count-the-number-of-set-bits-in-a-32-bit-integer
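
    One concrete use, as a small Java illustration: when sets or feature vectors are packed into machine words, a population count gives the set's cardinality, and combined with XOR it gives the Hamming distance between two values, which shows up in similarity search and error detection.

        class BitCountUses {
            // Number of elements in a set represented as a bitmask.
            static int cardinality(long setBits) {
                return Long.bitCount(setBits);
            }

            // Hamming distance: how many bit positions differ between a and b.
            static int hammingDistance(long a, long b) {
                return Long.bitCount(a ^ b);
            }
        }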

    Read the article

  • Alright, I'm still stuck on this homework problem. C++

    - by Josh
    Okay, the past few days I have been trying to get some input on my programs. Well I decided to scrap them for the most part and try again. So once again, I'm in need of help. For the first program I'm trying to fix, it needs to show the sum of SEVEN numbers. Well, I'm trying to change is so that I don't need the mem[##] = ####. I just want the user to be able to input the numbers and the program run from there and go through my switch loop. And have some kind of display..saying like the sum is?.. Here's my code so far. #include <iostream> #include <iomanip> #include <ios> using namespace std; int main() { const int READ = 10; const int WRITE = 11; const int LOAD = 20; const int STORE = 21; const int ADD = 30; const int SUBTRACT = 31; const int DIVIDE = 32; const int MULTIPLY = 33; const int BRANCH = 40; const int BRANCHNEG = 41; const int BRANCHZERO = 42; const int HALT = 43; int mem[100] = {0}; //Making it 100, since simpletron contains a 100 word mem. int operation; //taking the rest of these variables straight out of the book seeing as how they were italisized. int operand; int accum = 0; // the special register is starting at 0 int counter; for ( counter=0; counter < 100; counter++) mem[counter] = 0; // This is for part a, it will take in positive variables in //a sent-controlled loop and compute + print their sum. Variables from example in text. mem[0] = 1009; mem[1] = 1109; mem[2] = 2010; mem[3] = 2111; mem[4] = 2011; mem[5] = 3100; mem[6] = 2113; mem[7] = 1113; mem[8] = 4300; counter = 0; //Makes the variable counter start at 0. while(true) { operand = mem[ counter ]%100; // Finds the op codes from the limit on the mem (100) operation = mem[ counter ]/100; //using a switch loop to set up the loops for the cases switch ( operation ){ case READ: //reads a variable into a word from loc. Enter in -1 to exit cout <<"\n Input a positive variable: "; cin >> mem[ operand ]; counter++; break; case WRITE: // takes a word from location cout << "\n\nThe content at location " << operand << " is " << mem[operand]; counter++; break; case LOAD:// loads accum = mem[ operand ];counter++; break; case STORE: //stores mem[ operand ] = accum;counter++; break; case ADD: //adds accum += mem[operand];counter++; break; case SUBTRACT: // subtracts accum-= mem[ operand ];counter++; break; case DIVIDE: //divides accum /=(mem[ operand ]);counter++; break; case MULTIPLY: // multiplies accum*= mem [ operand ];counter++; break; case BRANCH: // Branches to location counter = operand; break; case BRANCHNEG: //branches if acc. 
is < 0 if (accum < 0) counter = operand; else counter++; break; case BRANCHZERO: //branches if acc = 0 if (accum == 0) counter = operand; else counter++; break; case HALT: // Program ends break; } } return 0; } part B int main() { const int READ = 10; const int WRITE = 11; const int LOAD = 20; const int STORE = 21; const int ADD = 30; const int SUBTRACT = 31; const int DIVIDE = 32; const int MULTIPLY = 33; const int BRANCH = 40; const int BRANCHNEG = 41; const int BRANCHZERO = 41; const int HALT = 43; int mem[100] = {0}; int operation; int operand; int accum = 0; int pos = 0; int j; mem[22] = 7; // loop 7 times mem[25] = 1; // increment by 1 mem[00] = 4306; mem[01] = 2303; mem[02] = 3402; mem[03] = 6410; mem[04] = 3412; mem[05] = 2111; mem[06] = 2002; mem[07] = 2312; mem[08] = 4210; mem[09] = 2109; mem[10] = 4001; mem[11] = 2015; mem[12] = 3212; mem[13] = 2116; mem[14] = 1101; mem[15] = 1116; mem[16] = 4300; j = 0; while ( true ) { operand = memory[ j ]%100; // Finds the op codes from the limit on the memory (100) operation = memory[ j ]/100; //using a switch loop to set up the loops for the cases switch ( operation ){ case 1: //reads a variable into a word from loc. Enter in -1 to exit cout <<"\n enter #: "; cin >> memory[ operand ]; break; case 2: // takes a word from location cout << "\n\nThe content at location " << operand << "is " << memory[operand]; break; case 3:// loads accum = memory[ operand ]; break; case 4: //stores memory[ operand ] = accum; break; case 5: //adds accum += mem[operand];; break; case 6: // subtracts accum-= memory[ operand ]; break; case 7: //divides accum /=(memory[ operand ]); break; case 8: // multiplies accum*= memory [ operand ]; break; case 9: // Branches to location j = operand; break; case 10: //branches if acc. is < 0 break; case 11: //branches if acc = 0 if (accum == 0) j = operand; break; case 12: // Program ends exit(0); break; } j++; } return 0; }

    Read the article

  • Algorithm: possible amounts (over)paid for a specific price, based on denominations

    - by Wrikken
    In a current project, people can order goods delivered to their door and choose 'pay on delivery' as a payment option. To make sure the delivery guy has enough change, customers are asked to input the amount they will pay (e.g. delivery is 48,13, they will pay with 60,- (3*20,-)). Now, if it were up to me I'd make it a free field, but apparently higher-ups have decided it should be a selection based on available denominations, without giving amounts that would result in a set of denominations which could be smaller. Example: denominations = [1,2,5,10,20,50] price = 78.12 possibilities: 79 (multitude of options), 80 (e.g. 4*20) 90 (e.g. 50+2*20) 100 (2*50) It's international, so the denominations could change, and the algorithm should be based on that list. The closest I have come which seems to work is this: for all denominations in reversed order (large=>small) add ceil(price/denomination) * denomination to possibles baseprice = floor(price/denomination) * denomination; for all smaller denominations as subdenomination in reversed order add baseprice + (ceil((price - baseprice) / subdenomination) * subdenomination) to possibles end for end for remove doubles sort It seems to work, but this has emerged after wildly trying all kinds of compact algorithms, and I cannot defend why it works, which could lead to some edge-case / new countries getting wrong options, and it does generate some serious amounts of doubles. As this is probably not a new problem, and Google et al. could not provide me with an answer save for loads of pages calculating how to make exact change, I thought I'd ask SO: have you solved this problem before? Which algorithm? Any proof it will always work?
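
    Not an answer on correctness, but a direct Java translation of the pseudocode above (amounts in integer cents to avoid floating point; a TreeSet does the "remove doubles" and "sort" steps) may make it easier to stress-test against edge cases and other denomination sets; it reproduces 79, 80, 90 and 100 for the 78.12 example.

        import java.util.*;

        class OverpayOptions {
            // Direct translation of the pseudocode: denominations in cents,
            // sorted ascending; price in cents.
            static SortedSet<Long> possibleAmounts(long[] denominations, long price) {
                SortedSet<Long> possibles = new TreeSet<>();
                for (int i = denominations.length - 1; i >= 0; i--) {
                    long d = denominations[i];
                    possibles.add(ceilDiv(price, d) * d);
                    long base = (price / d) * d;
                    for (int j = i - 1; j >= 0; j--) {
                        long s = denominations[j];
                        possibles.add(base + ceilDiv(price - base, s) * s);
                    }
                }
                return possibles;
            }

            static long ceilDiv(long a, long b) {
                return (a + b - 1) / b;
            }

            public static void main(String[] args) {
                long[] denoms = {100, 200, 500, 1000, 2000, 5000}; // 1,2,5,10,20,50 in cents
                System.out.println(possibleAmounts(denoms, 7812)); // [7900, 8000, 9000, 10000]
            }
        }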

    Read the article

  • How do you get yourself focused with so many distractions around? (which you can't or don't want to

    - by Teja Kantamneni
    This question is definitely for programmers and is centred on programmers. But if somebody feels it does not belong here, I would not mind deleting it. I don't think this needs to go as a WIKI, but if you feel it is a WIKI, I can do that too. The question is: as a programmer you have to keep yourself up to date with the latest technologies, and for that every programmer will generally follow some technology blogs and some social networking sites (Twitter, FB, SO, DZone etc.). How do you keep yourself focused on your work and still follow the technology trends? No subjective or argumentative answers; I just want to know what practices other fellow programmers follow for this...

    Read the article

  • Question about "Link Map" output and "Assume" directive of MASM assembler.

    - by smwikipedia
    I am new to MASM, so the questions may be quite basic. When I am using the MASM assembler, there's an output file called a "Link Map". Its content is composed of the starting offset and length of various segments, such as the Data segment, Code segment and Stack segment. I am wondering what this information is describing. Is it talking about how the various segments are located within the EXE file, or how the segments are located in memory after the EXE file has been loaded by the program loader? BTW: What does the "Assume" directive do? My understanding is that it tells the assembler to emit some information into the EXE file header so the program loader can use it to set the DS, CS, SS and ES registers accordingly. Am I right on this? Thanks in advance.

    Read the article

  • java partial classes

    - by Dewfy
    Hello colleagues, a small preamble: I was a good Java developer on JDK 1.4. After that I switched to other platforms, but now I have run into a problem, so the question is strictly about JDK 1.6 (or higher :) ). I have 3 coupled classes, the nature of the coupling being native methods. Below is an example of these 3 classes: public interface A { public void method(); } final class AOperations { static native method(. . .); } public class AImpl implements A { @Override public void method(){ AOperations.method( . . . ); } } So there is interface A, which is implemented in a native way by AOperations, and AImpl just delegates the method call to the native methods. These relations are auto-generated. Everything is OK, but I am now facing a problem. Sometimes an interface like A needs to expose iterator capability. I can change the interface, but cannot change the implementation (AImpl). In C# I would be able to resolve the problem with a simple partial class: (C# sample) partial class AImpl{ ... //here comes auto generated code } partial class AImpl{ ... //here comes MY implementation of ... //Iterator } So, does Java have an analogue of partial classes or something similar?
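
    Java has no partial classes. One common workaround when half of a class is auto-generated, sketched below with the names from the question (the Iterable element type and bodies are placeholders only), is to have the generator emit a base class and keep the hand-written half in a subclass:

        interface A { void method(); }

        // Auto-generated half: the tool can regenerate this file freely.
        abstract class AImplBase implements A {
            @Override
            public void method() {
                // AOperations.method( . . . );  // delegation to the native layer, as in the question
            }
        }

        // Hand-written half: survives regeneration and adds the iterator capability.
        class AImpl extends AImplBase implements Iterable<String> {
            @Override
            public java.util.Iterator<String> iterator() {
                return java.util.Collections.<String>emptyList().iterator(); // placeholder
            }
        }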

    Read the article

  • How to use a DHT for a social trading environment

    - by Lirik
    I'm trying to understand if a DHT can be used to solve a problem I'm working on: I have a trading environment where professional option traders can get an increase in their risk limit by requesting that fellow traders lend them some of their risk limit. The lending trader can either search for traders with certain risk parameters which are part of every trader's profile, i.e. Greeks, or the lending trader can subscribe to requests from certain traders. I want this environment to be scalable and decentralized, but I don't know how traders can search for specific profile parameters when the data is contained in a DHT. Could anybody explain how this can be done?

    Read the article
