Search Results

Search found 7220 results on 289 pages for 'graph algorithm'.


  • How can I compare market data feed sources for quality and latency improvement?

    - by yves Baumes
    I am in the very first stages of implementing a tool to compare two market data feed sources, in order to prove to my boss the quality of newly developed sources (meaning no regressions, no missed updates, no wrong values) and to demonstrate latency improvements. The tool must be able to detect differences between updates as well as tell which source is best in terms of latency. Concretely, the reference source could be Reuters while the other one is a feed handler we develop internally. People warned me that updates might not arrive in the same order, since Reuters' implementation could differ totally from ours. Therefore a simple algorithm that assumes updates arrive in the same order is unlikely to work. My very first idea was to use fingerprinting to compare the feed sources, the way the Shazam application identifies the title of the song you submit; Google tells me that approach is based on the FFT. I was wondering whether signal processing theory carries over well to market data applications. I would like to hear your experience in this field: is it possible to develop a reasonably accurate algorithm to meet these needs? What would your approach be? What do you think of fingerprint-based comparison?
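
    One order-insensitive way to frame the correctness check is to compare the two feeds as per-instrument multisets of normalized updates rather than as sequences; latency would be measured separately by matching timestamps of corresponding updates. The sketch below is only an illustration of that idea in C# (the Update record and its field names are assumptions, not part of any real feed API):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical normalized update; field names are assumptions for illustration.
        record Update(string Instrument, string Field, string Value, DateTime Timestamp);

        static class FeedComparer
        {
            // Count each (instrument, field, value) triple, ignoring arrival order.
            static Dictionary<(string, string, string), int> Fingerprint(IEnumerable<Update> feed) =>
                feed.GroupBy(u => (u.Instrument, u.Field, u.Value))
                    .ToDictionary(g => g.Key, g => g.Count());

            // Report triples whose occurrence counts differ between the two captures.
            public static IEnumerable<string> Diff(IEnumerable<Update> reference, IEnumerable<Update> candidate)
            {
                var a = Fingerprint(reference);
                var b = Fingerprint(candidate);
                foreach (var key in a.Keys.Union(b.Keys))
                {
                    a.TryGetValue(key, out var countA);
                    b.TryGetValue(key, out var countB);
                    if (countA != countB)
                        yield return $"{key}: reference={countA}, candidate={countB}";
                }
            }
        }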

    Read the article

  • Asp.net membership salt?

    - by chobo2
    Hi. Does anyone know how ASP.NET membership generates its salt and how it combines it with the password before hashing (i.e. is it salt + password or password + salt)? I am using SHA1 with my membership provider, but I would like to generate salts the same way so that the built-in membership code hashes passwords exactly as my code does. Thanks.

    Edit 2: Never mind, I misread it: it said bits, not bytes, so I was passing in 128 bytes instead of 128 bits.

    Edit: This is what I have been trying:

        public string EncodePassword(string password, string salt)
        {
            byte[] bytes = Encoding.Unicode.GetBytes(password);
            byte[] src = Encoding.Unicode.GetBytes(salt);
            byte[] dst = new byte[src.Length + bytes.Length];
            Buffer.BlockCopy(src, 0, dst, 0, src.Length);
            Buffer.BlockCopy(bytes, 0, dst, src.Length, bytes.Length);
            HashAlgorithm algorithm = HashAlgorithm.Create("SHA1");
            byte[] inArray = algorithm.ComputeHash(dst);
            return Convert.ToBase64String(inArray);
        }

        private byte[] createSalt(byte[] saltSize)
        {
            byte[] saltBytes = saltSize;
            RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
            rng.GetNonZeroBytes(saltBytes);
            return saltBytes;
        }

    I have not yet tested whether ASP.NET membership will recognize this, but the hashed password looks close. I just don't know how to convert the salt to Base64. I did this:

        byte[] storeSalt = createSalt(new byte[128]);
        string salt = Encoding.Unicode.GetString(storeSalt);
        string base64Salt = Convert.ToBase64String(storeSalt);
        int test = base64Salt.Length;

    The test length is 172, which is well over 128 bits, so what am I doing wrong? This is what their salt looks like: vkNj4EvbEPbk1HHW+K8y/A== This is what my salt looks like: E9oEtqo0livLke9+csUkf2AOLzFsOvhkB/NocSQm33aySyNOphplx9yH2bgsHoEeR/aw/pMe4SkeDvNVfnemoB4PDNRUB9drFhzXOW5jypF9NQmBZaJDvJ+uK3mPXsWkEcxANn9mdRzYCEYCaVhgAZ5oQRnnT721mbFKpfc4kpI=
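
    For reference, a minimal sketch of the scheme the default SqlMembershipProvider is generally described as using: a 16-byte random salt stored as Base64, and a hash computed over the salt bytes followed by the UTF-16 password bytes. This is an illustration of that commonly reported behaviour, not the provider's actual source:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class MembershipStyleHashing
        {
            // Sketch of the commonly described SqlMembershipProvider scheme (assumption, not verified source).
            public static string GenerateSalt()
            {
                byte[] buf = new byte[16];                       // 128 bits of salt
                using (var rng = new RNGCryptoServiceProvider())
                    rng.GetBytes(buf);
                return Convert.ToBase64String(buf);              // stored and passed around as Base64
            }

            public static string EncodePassword(string password, string base64Salt)
            {
                byte[] passwordBytes = Encoding.Unicode.GetBytes(password);  // UTF-16 password bytes
                byte[] saltBytes = Convert.FromBase64String(base64Salt);     // decode, do not treat as text
                byte[] all = new byte[saltBytes.Length + passwordBytes.Length];
                Buffer.BlockCopy(saltBytes, 0, all, 0, saltBytes.Length);    // salt first, then password
                Buffer.BlockCopy(passwordBytes, 0, all, saltBytes.Length, passwordBytes.Length);
                using (var sha1 = SHA1.Create())
                    return Convert.ToBase64String(sha1.ComputeHash(all));
            }
        }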

    Read the article

  • C# Tuple group limitation

    - by user609511
    How can I control the number of times a tuple is repeated? Someone gave me a hint about my algorithm, and I modified it a little:

        int LimCol = Convert.ToInt32(LimitColis);
        result = oListTUP
            .GroupBy(x => x.Item1)
            .Select(g => new
            {
                Key = g.Key,
                Sum = g.Sum(x => x.Item2),
                Poids = g.Sum(x => x.Item3),
            })
            .Select(p => new
            {
                Key = p.Key,
                Items = Enumerable.Repeat(LimCol, p.Sum / LimCol)
                                  .Concat(Enumerable.Repeat(p.Sum % LimCol, 1)),
                CalculPoids = p.Poids / (Enumerable.Repeat(LimCol, p.Sum / LimCol)
                                  .Concat(Enumerable.Repeat(p.Sum % LimCol, 1))).Count()
            })
            .SelectMany(p => p.Items.Select(i => Tuple.Create(p.Key, i, p.CalculPoids)))
            .ToList();

        foreach (var oItem in result)
        {
            Label1.Text += oItem.Item1 + "--" + oItem.Item2 + "--" + oItem.Item3 + "<br>";
        }

    With LimCol = 3, the rows I had marked in red in the original post are the problem. I expected:

        0452632--3--3,75
        0452632--3--3,75
        0452632--3--3,75
        0452632--3--3,75
        essai 49--3--79,00
        essai 49--2--79,00

    Thank you in advance.
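
    Assuming the unwanted rows come from the p.Sum % LimCol term, which appends a chunk of size 0 whenever the total divides evenly, one way to build the chunk list is a small helper that only emits the remainder when it is non-zero. This is a sketch of that idea with a hypothetical SplitIntoChunks helper, not a drop-in fix for the query above:

        using System.Collections.Generic;
        using System.Linq;

        static class Chunking
        {
            // Split a total quantity into chunks of at most 'limit', skipping an empty trailing chunk.
            public static IEnumerable<int> SplitIntoChunks(int total, int limit)
            {
                var chunks = Enumerable.Repeat(limit, total / limit);
                return total % limit == 0 ? chunks : chunks.Concat(new[] { total % limit });
            }
        }

        // Example: SplitIntoChunks(12, 3) yields 3, 3, 3, 3 (no trailing 0);
        //          SplitIntoChunks(5, 3)  yields 3, 2.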

    Read the article

  • Family Tree :- myheritage.com

    - by Nitesh Panchal
    Hello. The other day I happened to visit the site myheritage.com, and I was wondering how they must have built it. Can anybody tell me what their database design might be and, if possible, an algorithm that could be used to generate such a tree? Generating a simple binary tree is very easy using recursion. But if you look at the site (if you have time, please create an account and add a few nodes to get a feel for it), when you add a son to a father, the mother is added automatically (if you don't add her explicitly). The mother's family tree is also generated alongside, and many other fancy things happen. In a simple binary tree we have a root node and then many nodes below it, so we cannot show a wife and husband in the tree and then draw a line from the couple to a child. When you have some spare time, can anybody discuss what the database design might be and the recursive algorithm we could follow to generate it? I hope I am not asking too much of you :).
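
    A common way to model this is not a binary tree but a graph of persons, where each person stores nullable references to a mother and a father, and marriages/unions are a separate relation; the tree you see on screen is then just a traversal from a chosen root. A minimal C# sketch under that assumption (the class and field names are invented for illustration):

        using System;
        using System.Collections.Generic;

        // One row per person; parents are nullable foreign keys back into the same table.
        class Person
        {
            public int Id;
            public string Name = "";
            public int? MotherId;   // null when unknown
            public int? FatherId;
        }

        static class FamilyTree
        {
            // Recursively walk up through the ancestors of a person.
            public static IEnumerable<Person> Ancestors(Person p, IReadOnlyDictionary<int, Person> byId)
            {
                foreach (var parentId in new[] { p.MotherId, p.FatherId })
                {
                    if (parentId is int id && byId.TryGetValue(id, out var parent))
                    {
                        yield return parent;
                        foreach (var ancestor in Ancestors(parent, byId))
                            yield return ancestor;
                    }
                }
            }
        }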

    Read the article

  • Reduce function calls

    - by Curious2learn
    Hello, I profiled my Python program and found that the following function was taking too long to run. Perhaps I could use a different algorithm and make it run faster; however, I have read that I can also possibly increase the speed by reducing function calls, especially when a function gets called repeatedly within a loop. I am a Python newbie and would like to learn how to do this and see how much faster it can get. Currently, the function is:

        def potentialActualBuyers(setOfPeople, theCar, price):
            count = 0
            for person in setOfPeople:
                if person.getUtility(theCar) >= price and person.periodCarPurchased == None:
                    count += 1
            return count

    where setOfPeople is a list of person objects. I tried the following:

        def potentialActualBuyers(setOfPeople, theCar, price):
            count = 0
            Utility = person.getUtility
            for person in setOfPeople:
                if Utility(theCar) >= price and person.periodCarPurchased == None:
                    count += 1
            return count

    This, however, gives me an error saying local variable 'person' referenced before assignment. Any suggestions on how I can reduce function calls, or any other changes that could make the code faster? Again, I am a Python newbie, and even though I may be able to use a better algorithm, it is still worthwhile learning the answer to the question above. Thanks very much.

    Read the article

  • How best to calculate derived currency rate conversions using C#/LINQ?

    - by chillitom
        class FxRate
        {
            string Base { get; set; }
            string Target { get; set; }
            double Rate { get; set; }
        }

        private IList<FxRate> rates = new List<FxRate>
        {
            new FxRate {Base = "EUR", Target = "USD", Rate = 1.3668},
            new FxRate {Base = "GBP", Target = "USD", Rate = 1.5039},
            new FxRate {Base = "USD", Target = "CHF", Rate = 1.0694},
            new FxRate {Base = "CHF", Target = "SEK", Rate = 8.12}
            // ...
        };

    Given a large yet incomplete list of exchange rates, where every currency appears at least once (either as a target or a base currency): what algorithm would I use to derive rates for exchanges that aren't directly listed? I'm looking for a general-purpose algorithm of the form:

        public double Rate(string baseCode, string targetCode, double currency)
        {
            return ...
        }

    In the example above a derived rate would be GBP-CHF or EUR-SEK (which would require chaining the conversions EUR-USD, USD-CHF, CHF-SEK). While I know how to do the conversions by hand, I'm looking for a tidy way (perhaps using LINQ) to perform these derived conversions, possibly involving multiple currency hops. What's the nicest way to go about this?
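
    Treating each currency as a node and each listed rate as an edge (plus the implied inverse edge at 1/rate), a derived rate is just the product of the rates along any path between the two currencies, which a breadth-first search can find. A sketch of that idea, assuming the FxRate class above exposes public properties:

        using System;
        using System.Collections.Generic;

        static class FxDeriver
        {
            public static double Rate(IEnumerable<FxRate> rates, string baseCode, string targetCode)
            {
                // Adjacency list: currency -> (neighbour, rate), including inverse edges.
                var edges = new Dictionary<string, List<(string To, double Rate)>>();
                void Add(string from, string to, double r)
                {
                    if (!edges.TryGetValue(from, out var list)) edges[from] = list = new List<(string, double)>();
                    list.Add((to, r));
                }
                foreach (var r in rates) { Add(r.Base, r.Target, r.Rate); Add(r.Target, r.Base, 1.0 / r.Rate); }

                // Breadth-first search, accumulating the product of rates along the path.
                var queue = new Queue<(string Code, double Acc)>();
                var visited = new HashSet<string> { baseCode };
                queue.Enqueue((baseCode, 1.0));
                while (queue.Count > 0)
                {
                    var (code, acc) = queue.Dequeue();
                    if (code == targetCode) return acc;
                    if (!edges.TryGetValue(code, out var neighbours)) continue;
                    foreach (var (to, rate) in neighbours)
                        if (visited.Add(to)) queue.Enqueue((to, acc * rate));
                }
                throw new InvalidOperationException($"No conversion path from {baseCode} to {targetCode}.");
            }
        }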

    Read the article

  • Number of simple mutations to change one string to another?

    - by mstksg
    Hi. I'm sure you've all heard of the "word game", where you try to change one word into another by changing one letter at a time, passing only through valid English words. I'm trying to implement an A* algorithm to solve it (just to flesh out my understanding of A*), and one of the things it needs is a minimum-distance heuristic: the minimum number of these three mutations that can turn an arbitrary string a into another string b:

    1) Change one letter for another
    2) Add one letter at a spot before or after any letter
    3) Remove any letter

    Examples:

        aabca => abaca:   aabca  abca  abaca                    = 2
        abcdebf => bgabf: abcdebf  bcdebf  bcdbf  bgdbf  bgabf  = 4

    I've tried out many algorithms; I can't seem to find one that gives the actual answer every time. In fact, sometimes I'm not sure whether even my human reasoning is finding the best answer. Does anyone know an algorithm for this purpose, or can maybe help me find one? Thanks.
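
    The quantity described, the minimum number of single-character substitutions, insertions and deletions, is the Levenshtein edit distance, which has a standard dynamic-programming solution. A compact sketch in C# (matching the other examples on this page, since the asker's language isn't stated):

        // Levenshtein distance with a single rolling row.
        static int EditDistance(string a, string b)
        {
            var dp = new int[b.Length + 1];                 // dp[j] = distance between a[..i] and b[..j]
            for (int j = 0; j <= b.Length; j++) dp[j] = j;
            for (int i = 1; i <= a.Length; i++)
            {
                int diag = dp[0];                           // dp[i-1][j-1]
                dp[0] = i;
                for (int j = 1; j <= b.Length; j++)
                {
                    int temp = dp[j];                       // dp[i-1][j]
                    int substitute = diag + (a[i - 1] == b[j - 1] ? 0 : 1);
                    dp[j] = Math.Min(substitute, Math.Min(dp[j] + 1, dp[j - 1] + 1)); // delete / insert
                    diag = temp;
                }
            }
            return dp[b.Length];
        }

        // EditDistance("aabca", "abaca") == 2; EditDistance("abcdebf", "bgabf") == 4.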

    Read the article

  • LINQ Joins - Performance

    - by Meiscooldude
    I am curious how exactly LINQ (not LINQ to SQL) performs its joins behind the scenes, in relation to how SQL Server performs joins. Before executing a query, SQL Server generates an execution plan: essentially an expression tree describing what it believes is the best way to execute the query. Each node provides information on whether to do a Sort, Scan, Select, Join, etc. On a Join node in the execution plan we can see three possible algorithms: Hash Join, Merge Join, and Nested Loops Join. SQL Server chooses an algorithm for each join operation based on the expected number of rows in the inner and outer tables, the type of join (some algorithms don't support all join types), whether the output needs to be ordered, and probably many other factors.

    Join algorithms:

    Nested Loops Join: best for small inputs; can be optimized with an ordered inner table.
    Merge Join: best for medium to large sorted inputs, or an output that needs to be ordered.
    Hash Join: best for medium to large inputs; can be parallelized to scale linearly.

    LINQ query:

        DataTable firstTable, secondTable;
        ...
        var rows = from firstRow in firstTable.AsEnumerable()
                   join secondRow in secondTable.AsEnumerable()
                   on firstRow.Field<object>(randomObject.Property)
                   equals secondRow.Field<object>(randomObject.Property)
                   select new { firstRow, secondRow };

    SQL query:

        SELECT *
        FROM firstTable fT
        INNER JOIN secondTable sT ON fT.Property = sT.Property

    SQL Server might use a Nested Loops Join if it knows there are only a few rows in each table, a Merge Join if it knows one of the tables has an index, and a Hash Join if it knows there are many rows in either table and neither has an index. Does LINQ choose its join algorithm, or does it always use the same one?
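
    For LINQ to Objects, Enumerable.Join always uses the same strategy: it is effectively a hash join that buffers the inner sequence into a hash-based Lookup keyed on the join key and then streams the outer sequence, rather than picking an algorithm per query. A hand-rolled sketch of roughly what the query syntax expands to (assuming the usual System, System.Collections.Generic and System.Linq usings):

        // Build a hash-based lookup over the inner sequence, then stream the outer one.
        static IEnumerable<TResult> HashJoin<TOuter, TInner, TKey, TResult>(
            IEnumerable<TOuter> outer,
            IEnumerable<TInner> inner,
            Func<TOuter, TKey> outerKey,
            Func<TInner, TKey> innerKey,
            Func<TOuter, TInner, TResult> resultSelector)
        {
            ILookup<TKey, TInner> lookup = inner.ToLookup(innerKey);   // O(n) build phase
            foreach (TOuter o in outer)                                // O(m) probe phase
                foreach (TInner i in lookup[outerKey(o)])
                    yield return resultSelector(o, i);
        }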

    Read the article

  • Is it possible to determine whether a password input string is plaintext or hashed?

    - by Godders
    I have a RESTful API containing the URI /UserService/Register, which takes an XML request such as:

        <UserRegistrationRequest>
          <Password>password</Password>
          <Profile>
            <User>
              <UserName>username</UserName>
            </User>
          </Profile>
        </UserRegistrationRequest>

    Given the above scenario, I have the following questions: Is there a way (using C# and .NET 3.5+) of enforcing/validating that clients calling Register are passing a hashed password rather than plaintext? Is leaving the choice of hashing algorithm to the client a good idea? We could provide a second URI, /UserService/ComputePasswordHash, which the client would call before calling /UserService/Register; this has the benefit of ensuring that every password is hashed using the same algorithm. Is there a mechanism within REST to ensure that a client has called one URI before calling another? Hope I've explained myself ok. Many thanks in advance for any help.
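
    Strictly speaking you cannot prove that an arbitrary string is a hash rather than a plaintext password, but you can reject anything that doesn't match the format your chosen algorithm produces. A sketch of that weaker check, assuming the API mandates hex-encoded SHA-256 (the regex and length are tied to that assumption):

        using System.Text.RegularExpressions;

        static class PasswordFormat
        {
            // 64 hex characters = 256 bits; this only rejects input that is obviously not a SHA-256 digest.
            private static readonly Regex HexSha256 = new Regex("^[0-9a-fA-F]{64}$");

            public static bool LooksLikeSha256Hex(string candidate) =>
                candidate != null && HexSha256.IsMatch(candidate);
        }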

    Read the article

  • Good way to identify similar images?

    - by Nick
    I've developed a simple and fast algorithm in PHP to compare images for similarity. It's fast to hash (~40 per second for 800x600 images), and an unoptimised search algorithm can go through 3,000 images in 22 minutes, comparing each one against the others (3/sec). The basic overview: take an image, rescale it to 8x8, and convert the pixels to HSV. The hue, saturation and value are then truncated to 4 bits each and the whole thing becomes one big hex string. Comparing two images basically walks along the two strings and sums the differences it finds. If the total is below 64 it's the same image; different images are usually around 600-800, and below 20 means extremely similar. Are there any improvements to this model I could use? I haven't looked at how relevant the different components (hue, saturation and value) are to the comparison. Hue is probably quite important, but what about the others? To speed up searches I could probably split the 4 bits from each component in half and put the most significant bits first, so that if they fail the check the least significant bits don't need to be checked at all. I don't know an efficient way to store bits like that while still allowing them to be searched and compared easily. I've been using a dataset of 3,000 photos (mostly unique) and there haven't been any false positives. It's completely immune to resizing and fairly resistant to brightness and contrast changes.
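
    The comparison step described, summing per-digit differences of two equal-length hex strings, is straightforward to sketch in any language; shown here in C# for consistency with the other examples on this page (the 64 threshold is the asker's, and the early-exit cutoff is an added assumption):

        using System;

        static class Fingerprints
        {
            // Sum of per-digit absolute differences between two equal-length hex fingerprints.
            // Bails out early once the running total exceeds 'cutoff' (e.g. 64 = "same image").
            public static int Distance(string a, string b, int cutoff = int.MaxValue)
            {
                int total = 0;
                for (int i = 0; i < a.Length && total <= cutoff; i++)
                {
                    int da = Convert.ToInt32(a[i].ToString(), 16);
                    int db = Convert.ToInt32(b[i].ToString(), 16);
                    total += Math.Abs(da - db);
                }
                return total;
            }
        }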

    Read the article

  • Time with and without OpenMP

    - by was
    I have a question. I tried to improve a well-known program in C, the Fox algorithm for matrix multiplication; here is a link to the version without OpenMP: http://web.mst.edu/~ercal/387/MPI/ppmpi_c/chap07/fox.c. The initial program used only MPI, and I tried to add OpenMP to the matrix multiplication method in order to improve the computation time. (The program runs on a cluster whose machines have 2 cores, so I created 2 threads.) The problem is that there is no difference in time with and without OpenMP; I observed that with OpenMP the time is sometimes equal to or even greater than without it. I tried multiplying two 600x600 matrices.

        void Local_matrix_multiply(
                LOCAL_MATRIX_T* local_A  /* in  */,
                LOCAL_MATRIX_T* local_B  /* in  */,
                LOCAL_MATRIX_T* local_C  /* out */) {
            int i, j, k;
            chunk = CHUNKSIZE; // 100

            #pragma omp parallel shared(local_A, local_B, local_C, chunk, nthreads) private(i,j,k,tid) num_threads(2)
            {
                /*
                tid = omp_get_thread_num();
                if (tid == 0) {
                    nthreads = omp_get_num_threads();
                    printf("Matrix multiplication starts with %d threads\n", nthreads);
                }
                printf("Thread %d uses the matrix: \n", tid);
                */
                #pragma omp for schedule(static, chunk)
                for (i = 0; i < Order(local_A); i++)
                    for (j = 0; j < Order(local_A); j++)
                        for (k = 0; k < Order(local_B); k++)
                            Entry(local_C,i,j) = Entry(local_C,i,j)
                                + Entry(local_A,i,k) * Entry(local_B,k,j);
            } // end pragma omp parallel
        } /* Local_matrix_multiply */

    Read the article

  • YouTube - Encrypted cookie string

    - by Robertof
    Hello! I'm new to Stack Overflow. I'm building a YouTube downloader in PHP, but YouTube has some IP checks: because the PHP file is on a remote server, the IP of the server != the IP of the user and the video download fails. Maybe I've found a solution. YouTube sends a cookie with an encrypted string, which I believe is the user's IP. I need to know the algorithm behind this encrypted string and how to encrypt a string the same way. Here is the string: nQ0CrJmASJk. It could be Base64, but when I try to decode it with base64_decode it gives me strange characters. You can check the cookie by requesting the main page of YouTube and looking at the "Set-Cookie" headers; you will find a cookie named "VISITOR_INFO1_LIVE" containing the encrypted string. Does anyone know what the algorithm is? Thanks. PS: sorry for my bad English. Cheers, Roberto.

    Read the article

  • How can I cluster short messages [Tweets] based on topic ? [Topic Based Clustering]

    - by Jagira
    Hello, I am planning an application which will cluster short messages/tweets based on topic. The number of topics will be limited, e.g. Sports [NBA, NFL, Cricket, Soccer], Entertainment [movies, music] and so on. I can think of two approaches:

    1. Ask users to tag messages, as Stack Overflow does. Users select tags from a predefined list, and on the server side I cluster messages based on those tags.
       Pros: simple design; less complexity in code.
       Cons: choices for users are restricted; clusters are not dynamic - if a new event occurs, the predefined tags will miss it.

    2. Take the message, remove the stopwords [predefined in a dictionary], and apply some clustering algorithm; display a cluster depending on its popularity and maintain it according to its sustained popularity. New messages are skimmed and assigned to the corresponding clusters.
       Pros: dynamic clustering based on the popularity of the event/incident.
       Cons: increased complexity; more server resources required.

    I would like to know whether there are other approaches to this problem, or ways of improving the methods above. Please also suggest some good clustering algorithms; I think a "K-Nearest" style clustering algorithm is apt for this situation.

    Read the article

  • How to determine which file in the previous version a text block in the current version came from?

    - by Muhammad Asaduzzaman
    The problem is described below. Suppose I have a list of files in one version (say A, B, C, D). In the next version I have the files (A, E, F, G). There are some similarities in their contents: the files in the later version come from the previous version through file renaming, content addition, deletion, partial modification, or no change at all (for example, A is unchanged). I take a block of text from a file in the second version (E) and check which files in the first version contain this text block; I find that B, C and D contain the fragment. I want to determine which file (B, C or D) this text block actually came from (I assume that E is a file whose name changed in the second version). Since content may be changed, added or deleted in the later version, I use the LCS algorithm to determine similarity, but I cannot map each file to its previous version. One possible approach might be to use the location information of the matching text blocks, but this heuristic does not always work. Does any research or algorithm exist for this? Any direction will be helpful. Thanks in advance.
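
    Since the asker's similarity measure is the longest common subsequence, here is the standard dynamic-programming LCS length, sketched in C# over sequences of lines; choosing the origin file as the one with the highest LCS score against the new file is then a policy layered on top of this, not shown here:

        using System;
        using System.Collections.Generic;

        static class Similarity
        {
            // Length of the longest common subsequence of two sequences of lines.
            public static int LcsLength(IReadOnlyList<string> a, IReadOnlyList<string> b)
            {
                var dp = new int[a.Count + 1, b.Count + 1];
                for (int i = 1; i <= a.Count; i++)
                    for (int j = 1; j <= b.Count; j++)
                        dp[i, j] = a[i - 1] == b[j - 1]
                            ? dp[i - 1, j - 1] + 1
                            : Math.Max(dp[i - 1, j], dp[i, j - 1]);
                return dp[a.Count, b.Count];
            }
        }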

    Read the article

  • Java: how to handle the design when an overriding method needs to throw an exception the template method does not declare

    - by jiafu
    While coding I ran into this puzzle: how should I design the classes/methods when InputStreamDigestComputor needs to throw IOException? It seems we can't use this design structure, because the overriding method needs to throw an exception that the template method does not declare; but if I change the abstract method to declare it, every other subclass is forced to throw it too. Any good suggestions for this case?

        abstract class DigestComputor {
            String compute(DigestAlgorithm algorithm) {
                MessageDigest instance;
                try {
                    instance = MessageDigest.getInstance(algorithm.toString());
                    updateMessageDigest(instance);
                    return hex(instance.digest());
                } catch (NoSuchAlgorithmException e) {
                    LOG.error(e.getMessage(), e);
                    throw new UnsupportedOperationException(e.getMessage(), e);
                }
            }

            abstract void updateMessageDigest(MessageDigest instance);
        }

        class ByteBufferDigestComputor extends DigestComputor {
            private final ByteBuffer byteBuffer;

            public ByteBufferDigestComputor(ByteBuffer byteBuffer) {
                super();
                this.byteBuffer = byteBuffer;
            }

            @Override
            void updateMessageDigest(MessageDigest instance) {
                instance.update(byteBuffer);
            }
        }

        class InputStreamDigestComputor extends DigestComputor {
            // This is where the error is: the body needs to throw IOException, but the
            // overridden method does not declare it. If I change the abstract method to
            // declare it, every caller will have to handle the exception.
            @Override
            void updateMessageDigest(MessageDigest instance) {
                throw new IOException();
            }
        }

    Read the article

  • Best implementation of Java Queue?

    - by Georges Oates Larsen
    I am working (in Java) on a recursive image processing algorithm that traverses the pixels of the image outward from a center point. Unfortunately, that causes stack overflows, so I have decided to switch to a queue-based algorithm. This is all fine and dandy, but considering that the queue will be analyzing thousands of pixels in a very short amount of time, constantly popping and pushing, without maintaining a predictable size (it could be anywhere between 100 and 20,000 elements), the queue implementation needs very fast push and pop operations. A linked list seems attractive because it can push elements onto itself without rearranging anything else in the list, but for it to be fast enough it would need easy access to both its head and its tail (or the second-to-last node if it were not doubly linked). Sadly, I cannot find any information about the underlying implementation of linked lists in Java, so it's hard to say whether a linked list is really the way to go. This brings me to my question: what would be the best implementation of the Queue interface in Java for what I intend to do? (I do not wish to edit or even access anything other than the head and tail of the queue, and I do not need any sort of rearranging. On the flip side, I do intend to do a lot of pushing and popping, and the queue will change size quite a bit, so preallocating would be inefficient.)

    Read the article

  • Reduce durability in MySQL for performance

    - by Paul Prescod
    My site occasionally has fairly predictable bursts of traffic that increase the throughput to 100 times the normal level. For example, we are going to be featured on a television show, and I expect that in the hour after the show I'll get more than 100 times my normal traffic. My understanding is that MySQL (InnoDB) generally keeps my data in a number of different places:

    RAM buffers
    the commit log
    the binary log
    the actual tables
    all of the above, again, on my DB slave

    This is too much "durability" given that I'm on an EC2 node and most of this goes across the same network pipe (the file systems are network-attached); plus, the drives are just slow. The data is not high value, and I'd rather take a small chance of losing a few minutes of data than a high probability of an outage when the crowd arrives. During these traffic bursts I would like to do all of that I/O only if I can afford it. I'd like to keep as much in RAM as possible (I have a fair chunk of RAM compared to the data size that would be touched over an hour). If buffers get scarce, or the I/O channel is not too overloaded, then sure, I'd like things to go to the commit log or the binary log to be sent to the slave. If, and only if, the I/O channel is not overloaded, I'd like to write back to the actual tables. In other words, I'd like MySQL/InnoDB to use a "write back" cache policy rather than a "write through" one. Can I convince it to do that? If this is not possible, I am interested in general MySQL write-performance optimization tips. Most of the docs are about optimizing read performance, but when I get a crowd of users I am creating accounts for all of them, so it's a write-heavy workload.
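
    The closest standard knobs to "less durability, more throughput" are the InnoDB log-flush and binlog-sync settings. The snippet below is a sketch of a relaxed configuration, not a recommendation for any particular workload; the safe values depend on how many seconds of lost commits are acceptable:

        # my.cnf sketch - relaxed durability (risk: roughly the last second of commits lost on a crash)
        [mysqld]
        innodb_flush_log_at_trx_commit = 2   # flush the redo log to disk ~once per second instead of per commit
        sync_binlog = 0                      # let the OS decide when to sync the binary log
        innodb_buffer_pool_size = 4G         # keep the working set in RAM; size this to the instance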

    Read the article

  • "java.lang.OutOfMemoryError: Java heap space" in image and array storage

    - by totalconscience
    I am currently working on an image processing demonstration in Java (an applet). I am running into the problem that my arrays are too large and I get the "java.lang.OutOfMemoryError: Java heap space" error. The algorithm creates an NxD float array where N is the number of pixels in the image and D is the coordinates of each pixel plus its colorspace components (usually 1 for grayscale or 3 for RGB). Each iteration of the algorithm creates one of these NxD float arrays and stores it in a vector for later use, so that the user of the applet can look at the individual steps. My client wants the program to handle loading a 500x500 RGB image as the upper bound. There are about 12 to 20 iterations per run, so that means I need to be able to store a 12x500x500x5 float array in some fashion. Is there a way to process all of this data and, if possible, how? Example of the issue: I load a 512 by 512 grayscale image and run out of heap space even before the first iteration completes. The line it points me to is Y.add(new float[N][D]), where Y is a Vector and N and D are as described above; this is the second occurrence of that line in the code.

    Read the article

  • How can I use TDD to solve a puzzle with an unknown answer?

    - by matthewsteele
    Recently I wrote a Ruby program to determine solutions to a "Scramble Squares" tile puzzle. I used TDD to implement most of it, leading to tests that looked like this:

        it "has top, bottom, left, right" do
          c = Cards.new
          card = c.cards[0]
          card.top.should == :CT
          card.bottom.should == :WB
          card.left.should == :MT
          card.right.should == :BT
        end

    This worked well for the lower-level "helper" methods: identifying the "sides" of a tile, determining if a tile can be validly placed in the grid, etc. But I ran into a problem when coding the actual algorithm to solve the puzzle. Since I didn't know valid possible solutions to the problem, I didn't know how to write a test first. I ended up writing a pretty ugly, untested, algorithm to solve it:

        def play_game
          working_states = []
          after_1 = step_1
          i = 0
          after_1.each do |state_1|
            step_2(state_1).each do |state_2|
              step_3(state_2).each do |state_3|
                step_4(state_3).each do |state_4|
                  step_5(state_4).each do |state_5|
                    step_6(state_5).each do |state_6|
                      step_7(state_6).each do |state_7|
                        step_8(state_7).each do |state_8|
                          step_9(state_8).each do |state_9|
                            working_states << state_9[0]
                          end
                        end
                      end
                    end
                  end
                end
              end
            end
          end
        end

    So my question is: how do you use TDD to write a method when you don't already know the valid outputs? If you're interested, the code's on GitHub:

    Tests: https://github.com/mattdsteele/scramblesquares-solver/blob/master/golf-creator-spec.rb
    Production code: https://github.com/mattdsteele/scramblesquares-solver/blob/master/game.rb

    Read the article

  • Handling large (object) datasets with PHP

    - by Aron Rotteveel
    I am currently working on a project that relies extensively on the EAV model. Both entities and their attributes are individually represented by a model, sometimes extending other models (or at least base models). This has worked quite well so far, since most areas of the application only rely on filtered sets of entities, not the entire dataset. Now, however, I need to parse the entire dataset (i.e. all entities and all their attributes) in order to provide a sorting/filtering algorithm based on the attributes. The application currently consists of approximately 2,200 entities, each with approximately 100 attributes. Every entity is represented by a single model (for example Client_Model_Entity) and has a protected property called $_attributes, which is an array of Attribute objects. Each entity object is about 500KB, which results in an incredible load on the server: with 2,000 entities, a single task would take 1GB of RAM (and a lot of CPU time) to run, which is unacceptable. Are there any patterns or common approaches to iterating over such large datasets? Paging is not really an option, since everything has to be taken into account in order to provide the sorting.

    Read the article

  • Computing, storing, and retrieving values to and from an N-Dimensional matrix

    - by Adam S
    This question is probably quite different from what you are used to reading here - I hope it can provide a fun challenge. Essentially I have an algorithm that uses 5 (or more) variables to compute a single value, called outcome. Now I have to implement this algorithm on an embedded device which has no memory limitations, but has very harsh processing constraints. Because of this, I would like to run a calculation engine which computes outcome for, say, 20 different values of each variable and stores this information in a file. You may think of this as a 5 (or more)-dimensional matrix or array, each dimension being 20 entries long. In any modern language, filling this array is as simple as 5 (or more) nested for loops. The tricky part is that I need to dump these values into a file that can then be placed onto the embedded device, so that the device can use it as a lookup table. The questions are:

    1. What format(s) might be acceptable for storing the data?
    2. What programs (MATLAB, C#, etc.) might be best suited to compute the data?
    3. C# must be used to import the data on the device - is this possible given your answer to #1?
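
    As an illustration of the precompute-and-dump idea, the sketch below fills a flat array indexed as a 20x20x20x20x20 grid and writes it with BinaryWriter; the computeOutcome delegate, grid-value mapping and file path are placeholders, and the embedded side would read the same flat layout back with BinaryReader:

        using System;
        using System.IO;

        static class LookupTableBuilder
        {
            const int N = 20;   // grid points per variable (assumption)

            // Flatten (i0..i4) into a single index, row-major order.
            static int Index(int i0, int i1, int i2, int i3, int i4) =>
                ((((i0 * N + i1) * N + i2) * N + i3) * N + i4);

            public static void Build(Func<double, double, double, double, double, float> computeOutcome,
                                     Func<int, double>[] gridValue, string path)
            {
                var table = new float[N * N * N * N * N];
                for (int i0 = 0; i0 < N; i0++)
                for (int i1 = 0; i1 < N; i1++)
                for (int i2 = 0; i2 < N; i2++)
                for (int i3 = 0; i3 < N; i3++)
                for (int i4 = 0; i4 < N; i4++)
                    table[Index(i0, i1, i2, i3, i4)] = computeOutcome(
                        gridValue[0](i0), gridValue[1](i1), gridValue[2](i2),
                        gridValue[3](i3), gridValue[4](i4));

                using (var w = new BinaryWriter(File.Create(path)))
                    foreach (float v in table)
                        w.Write(v);      // 20^5 floats = 12.8 MB on disk
            }
        }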

    Read the article

  • servlet connection to DB

    - by underW
    Initially, after reading books on the subject, I firmly believed that the algorithm for working with a database from a servlet was: create a connection - connect to the database - form a query - send the query to the database - get the query results - process them - close the connection. Now, with a better understanding of the practical side, I realize that nobody does it that way, and everything happens through a connection pool according to the following algorithm: initialize the servlet - create a connection pool - a request comes in from a user - take a free connection from the pool - form a query - send the query to the database - get the query results - process them - return the connection to the pool. Now I have this problem: we have 100 users, divided into 10 groups, and each group has its own username and password for connecting to the database. Moreover, each group may have different rights on the database. How am I supposed to use a connection pool in this situation? If I understand correctly, a pool is nothing more than a group of similar connections sharing a single login and password, and here I have 10 username/password pairs. It looks like I cannot use a pool in this situation. What should I do?

    Read the article

  • C++ STL question related to insert iterators and overloaded operators

    - by rshepherd
        #include <list>
        #include <set>
        #include <iterator>
        #include <algorithm>

        using namespace std;

        class MyContainer {
        public:
            string value;

            MyContainer& operator=(const string& s) {
                this->value = s;
                return *this;
            }
        };

        int main() {
            list<string> strings;
            strings.push_back("0");
            strings.push_back("1");
            strings.push_back("2");

            set<MyContainer> containers;
            copy(strings.begin(), strings.end(), inserter(containers, containers.end()));
        }

    The preceding code does not compile. In standard C++ fashion the error output is verbose and difficult to understand. The key part seems to be this:

        /usr/include/c++/4.4/bits/stl_algobase.h:313: error: no match for ‘operator=’ in
        ‘__result.std::insert_iterator::operator* [with _Container = std::set, std::allocator ]() =
        __first.std::_List_iterator::operator* [with _Tp = std::basic_string, std::allocator ]()’

    which I interpret to mean that the required assignment operator is not defined. I took a look at the source code for insert_iterator and noted that it overloads the assignment operator; the copy algorithm must use the insert iterator's overloaded assignment operator to do its work(?). I guess that because my input iterator is over a container of strings and my output iterator is over a container of MyContainers, the overloaded insert_iterator assignment operator can no longer work. This is my best guess, but I am probably wrong. So why exactly does this not work, and how can I accomplish what I am trying to do?

    Read the article

  • How to merge ArrayList elements?

    - by tiendv
    I have a string, example = "Can somebody provide an algorithm sample code in your reply". After tokenizing it and doing some processing on it, I have an ArrayList like:

        ArrayList<Token> arl = [ "Can somebody provide ", "code in your ", "somebody provide an algorith", " in your reply" ]

    For "Can somebody provide " I know the start and end positions in the string: start = 1, end = 3. For "code in your " I know start = 7, end = 10; for "somebody provide an algorith" start = 7, end = 10; and for "in your reply" start = 11, end = 14. As you can see, some elements in arl overlap: "Can somebody provide ", "code in your ", "somebody provide an algorith". The problem is how to merge the overlapping elements so that the resulting ArrayList looks like:

        ArrayList result = [ "Can somebody provide an algorithm sample code", "in your reply" ]

    Here is my code, but it only merges the first element when the overlap check succeeds:

        public ArrayList<TextChunks> finalTextChunks(ArrayList<TextChunks> textchunkswithkeyword) {
            ArrayList<TextChunks> result = (ArrayList<TextChunks>) textchunkswithkeyword.clone();
            //System.out.print(result.size());
            int j;
            for (int i = 0; i < result.size(); i++) {
                int index = i;
                if (i + 1 >= result.size()) {
                    break;
                }
                j = i + 1;
                if (result.get(i).checkOverlapingTwoTextchunks(result.get(j)) == true) {
                    TextChunks temp = new TextChunks();
                    temp = handleOverlaping(textchunkswithkeyword.get(i), textchunkswithkeyword.get(j), resultSearchEngine);
                    result.set(i, temp);
                    result.remove(j);
                    i = index;
                    continue;
                }
            }
            return result;
        }

    Thanks in advance.
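
    Merging overlapping chunks is usually easiest as an interval merge: sort the chunks by start position, then sweep once, extending the current interval while the next one overlaps it. A language-agnostic sketch of that approach, written in C# like the other examples here (the Chunk type and the rule for rebuilding the merged text are assumptions):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        record Chunk(int Start, int End, string Text);

        static class ChunkMerger
        {
            // joinText decides how the merged text is rebuilt from two overlapping chunks.
            public static List<Chunk> Merge(IEnumerable<Chunk> chunks, Func<Chunk, Chunk, string> joinText)
            {
                var sorted = chunks.OrderBy(c => c.Start).ToList();
                var merged = new List<Chunk>();
                foreach (var c in sorted)
                {
                    if (merged.Count > 0 && c.Start <= merged[^1].End)   // overlaps the previous interval
                    {
                        var prev = merged[^1];
                        merged[^1] = new Chunk(prev.Start, Math.Max(prev.End, c.End), joinText(prev, c));
                    }
                    else
                    {
                        merged.Add(c);
                    }
                }
                return merged;
            }
        }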

    Read the article

  • The best way to predict performance without actually porting the code?

    - by ardiyu07
    I believe there are people who share my experience of having to give an (estimated) performance report for porting a program from sequential to parallel code on some designated multicore hardware, with very little time given. For instance, if a 10K LoC sequential program executes on an Intel i7-3770K (not vectorized) in 100 ms, how long would it take to run if the code were parallelized onto a Tesla C2075 with NVIDIA CUDA, assuming all kinds of parallelization and optimization techniques were applied? (You're only given 2-4 days to report the performance, and assume you don't know the algorithm at all; or perhaps it's safer to assume it's impossible to finish the job in that time.) So I'm wondering: what is most likely the fastest way to produce such a performance report? Is it safe to calculate solely from the hardware's capabilities, such as peak GFLOPS and memory bandwidth? Is there a mathematical way to calculate it? If there is, please justify your method with the corresponding problem description and algorithm, as well as the target hardware's specifications. Or does a tool already exist to (roughly) estimate the cost of porting code? (Please don't answer: 'kill yourself is the fastest way.')

    Read the article
