Search Results

Search found 7490 results on 300 pages for 'algorithm analysis'.


  • SSLCipherSuite - disable weak encryption, cbc cipher and md5 based algorithm

    - by John
    A developer recently ran a PCI scan with TripWire against our LAMP server. They identified several issues and gave the following instructions to correct them:

    Problem: SSL server supports weak encryption for SSLv3, TLSv1. Solution: add the following rule to httpd.conf: SSLCipherSuite ALL:!aNULL:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM
    Problem: SSL server supports CBC ciphers for SSLv3, TLSv1. Solution: disable any cipher suites using CBC ciphers.
    Problem: SSL server supports a weak MAC algorithm for SSLv3, TLSv1. Solution: disable any cipher suites using MD5-based MAC algorithms.

    I searched Google for a comprehensive tutorial on how to construct an SSLCipherSuite directive to meet these requirements, but I didn't find anything I could understand. I see examples of SSLCipherSuite directives, but I need an explanation of what each component of the directive does. Even in the directive SSLCipherSuite ALL:!aNULL:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM, I don't understand, for example, what !LOW means. Can someone either a) tell me the SSLCipherSuite directive that will meet my needs, or b) point me to a resource that clearly explains each segment of an SSLCipherSuite and how to construct one?
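    For orientation (this annotation is an editor's sketch based on the OpenSSL cipher-list syntax that mod_ssl uses, not a verified PCI-compliant directive; the authoritative references are the mod_ssl SSLCipherSuite documentation and OpenSSL's ciphers(1) man page):

        # httpd.conf sketch - what each token in the suggested directive means
        SSLCipherSuite ALL:!aNULL:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM
        #   ALL      start from all supported suites (except those with no encryption)
        #   !aNULL   remove suites with no authentication (anonymous Diffie-Hellman)
        #   !eNULL   remove suites with no encryption at all
        #   !LOW     remove low-strength (56/64-bit) ciphers      <- the "!LOW" in question
        #   !EXP     remove export-grade ciphers
        #   RC4+RSA  add suites that use both RC4 and RSA key exchange
        #   +HIGH    "+" does not add anything; it moves already-listed HIGH-strength
        #   +MEDIUM  (and MEDIUM-strength) suites to the end of the preference order
        # Appending !MD5 would also drop MD5-based MACs. There is no single keyword
        # for "all CBC suites", so check what a candidate string expands to with:
        #   openssl ciphers -v 'ALL:!aNULL:!eNULL:!LOW:!EXP:!MD5'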

    Read the article

  • What is the difference between these two find algorithms? [migrated]

    - by Joe
    I have these two Find algorithms, which look the same to me. Can anyone explain why they are actually different?

        Find(x):
            if x.parent = x then return x
            else return Find(x.parent)

    vs.

        Find(x):
            if x.parent = x then return x
            else
                x.parent <- Find(x.parent)
                return x.parent

    I interpret the first one as int i = 0; return i++; and the second one as int i = 0; int tmp = i++; return tmp, which are exactly the same to me.
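    For illustration (not part of the original post): the second variant is the path-compression form of Find from a disjoint-set (union-find) structure. Both return the same root, but the second also rewrites x.parent on the way back up, so later lookups become almost constant time. A minimal Python sketch with a hypothetical parent list:

        # parent[i] is the parent of node i; a node whose parent is itself is a root.
        parent = [0, 0, 1, 2]   # chain: 3 -> 2 -> 1 -> 0

        def find_plain(x):
            # Variant 1: walks up to the root and leaves the tree unchanged.
            if parent[x] == x:
                return x
            return find_plain(parent[x])

        def find_compressing(x):
            # Variant 2: path compression - every node on the path is re-pointed
            # directly at the root, flattening the tree for future lookups.
            if parent[x] == x:
                return x
            parent[x] = find_compressing(parent[x])
            return parent[x]

    Both calls return 0 for node 3, but after find_compressing(3) the list becomes [0, 0, 0, 0], whereas find_plain never modifies it.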

    Read the article

  • What algorithms do "the big ones" use to cluster news?

    - by marco92w
    I want to cluster texts for a news website. At the moment I use this algorithm to find related articles, but I found that PHP's similar_text() gives very good results, too. What sort of algorithms do "the big ones" (Google News, Topix, Techmeme, Wikio, Megite, etc.) use? Of course, nobody knows exactly how their algorithms work; that's secret. But maybe someone knows approximately how they work? The algorithm I use at the moment is very slow: it only compares two articles at a time, so finding the relations between 5,000 articles takes about 12,500,000 comparisons, which is quite a lot. Are there alternatives that reduce the number of necessary comparisons? [I'm not looking for improvements to my algorithm.] What do "the big ones" do? I'm sure they don't compare each article to every other one, 12,500,000 times for 5,000 news items. It would be great if somebody could say something about this topic.

    Read the article

  • What Algorithm will Find New Longtail Keywords for *keyword* in PPC

    - by Becci
    I am looking for the algorithm (or combination of algorithms) that would allow someone to find new longtail PPC search phrases based on, say, one core keyword. E.g. #1: word word corekeyword; e.g. #2: word corekeyword word. Google's keyword tool (https://adwords.google.com.au/select/KeywordToolExternal) allows only a limited number of these, mostly of form #1. I also know of other PPC apps that return more volume than the Google AdWords keyword tool, but I want to find other combinations that mention the core keyword and then sort them by highest search volume. Working example (exact match): core keyword: copywriter (40,500 searches a month). Google will serve up: become a copywriter (480 searches globally per month in English). But if I specifically look up: how to become a copywriter (720 searches a month), this exact longtail phrase has 240 more searches than the three-word version Google spat out. I want the algorithm to find any other highly searched exact longtails like "how to become a copywriter", simply because it would save significant money compared with discovering longtail keywords only after your campaign has been running and made Google lots of money. I don't want a concatenation algorithm (I already have one of those), because hypothetically I don't know what the keywords I want to find will be. Any gurus out there? Becci

    Read the article

  • looking for a license key algorithm.

    - by giulio
    There are a lot of questions about license keys on Stack Overflow, but they don't answer this question. Can anyone provide a simple license key algorithm that is technology-independent and doesn't require a diploma in mathematics to understand? The license key algorithm would be similar to public-key encryption. I just need something simple that can be implemented on any platform (.NET/Java) and uses simple data like characters, preferably with no byte translations required. So if a person presents a string, a complementary string can be generated that is the authorisation code. Below is a common scenario it would be used for: the customer downloads the software, which generates a unique key upon initial startup/installation; the software runs during a trial period; at the end of the trial period an authorisation key is required; the customer goes to a designated website, enters their code, and gets the authorisation code that enables the software, after paying :) Don't be afraid to describe your answer as though you're talking to a five-year-old, as I am not a mathematician. I just need a decent basic algorithm; we're not launching nukes... NB: please no philosophy about encryption or who Diffie-Hellman is. I just need a basic solution.
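    Not from the post, but one common scheme that fits the trial/activation flow described is a keyed hash: the vendor's website computes an HMAC of the installation key using a secret, and the application recomputes it with the same embedded secret and compares. A hedged Python sketch (the secret and the key format are hypothetical):

        import base64
        import hashlib
        import hmac

        SECRET = b"hypothetical-shared-secret"   # embedded in the app and kept on the vendor's server

        def authorisation_code(install_key: str) -> str:
            # Keyed hash of the installation key, truncated and Base32-encoded so the
            # result is a short, typeable, case-insensitive string.
            digest = hmac.new(SECRET, install_key.upper().encode(), hashlib.sha256).digest()
            return base64.b32encode(digest[:10]).decode()

        # Website: customer submits their installation key and receives the code.
        # Application: recomputes authorisation_code(install_key) and compares.
        print(authorisation_code("EXAMPLE-INSTALL-KEY"))

    The trade-off: because the secret ships inside the client, this only deters casual sharing; a scheme where the server signs the key with a private key and the client verifies with a public key is stronger, at the cost of longer codes.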

    Read the article

  • How to keep only duplicates efficiently?

    - by Marc Eaddy
    Given an STL vector, I'd like an algorithm that outputs only the duplicates, in sorted order, e.g.,

        INPUT : { 4, 4, 1, 2, 3, 2, 3 }
        OUTPUT: { 2, 3, 4 }

    The algorithm is trivial, but the goal is to make it as efficient as std::unique(). My naive implementation modifies the container in-place:

        void keep_duplicates(vector<int>* pv)
        {
            // Sort (in-place) so we can find duplicates in linear time
            sort(pv->begin(), pv->end());

            vector<int>::iterator it_start = pv->begin();
            while (it_start != pv->end())
            {
                size_t nKeep = 0;

                // Find the next different element
                vector<int>::iterator it_stop = it_start + 1;
                while (it_stop != pv->end() && *it_start == *it_stop)
                {
                    nKeep = 1; // This gets set redundantly
                    ++it_stop;
                }

                // If the element is a duplicate, keep only the first one (nKeep=1).
                // Otherwise, the element is not duplicated, so erase it (nKeep=0).
                it_start = pv->erase(it_start + nKeep, it_stop);
            }
        }

    If you can make this more efficient, elegant, or general, please let me know. For example, use a custom sorting algorithm, or copy elements in the second loop to eliminate the erase() call.

    Read the article

  • Bubble sort algorithm implementations (Haskell vs. C)

    - by kingping
    Hello. I have written two implementations of the bubble sort algorithm, in C and in Haskell. Haskell implementation:

        module Main where

        main = do
            contents <- readFile "./data"
            print "Data loaded. Sorting.."
            let newcontents = bubblesort contents
            writeFile "./data_new_ghc" newcontents
            print "Sorting done"

        bubblesort list = sort list [] False

        rev  = reverse -- separated. To see
        rev2 = reverse -- who calls the routine

        sort (x1:x2:xs) acc _ | x1 > x2 = sort (x1:xs) (x2:acc) True
        sort (x1:xs) acc flag = sort xs (x1:acc) flag
        sort [] acc True = sort (rev acc) [] False
        sort _ acc _ = rev2 acc

    I compared the two implementations by running both on a 20 KiB file. The C implementation took about a second; the Haskell one took about 1 min 10 sec. I also profiled the Haskell application (compile for profiling: C:\Temp> ghc -prof -auto-all -O --make Main; profile: C:\Temp> Main.exe +RTS -p) and got these results. This is pseudocode of the algorithm:

        procedure bubbleSort( A : list of sortable items ) defined as:
            do
                swapped := false
                for each i in 0 to length(A) - 2 inclusive do:
                    if A[i] > A[i+1] then
                        swap( A[i], A[i+1] )
                        swapped := true
                    end if
                end for
            while swapped
        end procedure

    I wonder if it's possible to make the Haskell implementation faster without changing the algorithm (there are actually a few tricks to make it faster, but neither implementation has these optimizations).

    Read the article

  • AStar in a specific case in C#

    - by KiTe
    Hello. For an internship, I have used the A* algorithm in the following case: the unit shape is a square of height and width 1; we can travel from one zone, represented by a rectangle, to another, but we can't travel outside these predefined areas; we can go from one rectangle to another through a door, represented by a segment on the corresponding square edge. Here are the two things I have already done, which didn't satisfy my boss:

    1. I created the following classes: a Door class which contains the locations of the two separated squares and the door's orientation (top, left, bottom, right); a Map class which contains a door list, a rectangle list representing the walkable areas, and a 2D array representing the ground's squares (for additional information through an enumeration); and classes for the A* algorithm (Node, AStar).

    2. A MapCase class, which holds information about the cell's effect and doors through an enumeration (with the [Flags] attribute set, so several pieces of information can be combined on each cell); a Map class which only contains a 2D array of MapCase instances; and the classes for the A* algorithm (still Node and AStar).

    Although the second version is better than the first (fewer useless calculations, better map class architecture), my boss is still not satisfied with my mapping class architecture. The A* and Node classes are good and easily maintainable, so I don't think I need to explain them further for now. So here is my question: does somebody have a good idea for implementing A* with this problem specification (walkable rectangles with a square unit area, travelling through doors)? He said that a grid view of the problem (i.e. a 2D array) shouldn't be the correct way to solve it. I hope I've been clear in describing my problem. Thanks, KiTe

    Read the article

  • Fast, very lightweight algorithm for camera motion detection?

    - by Ertebolle
    I'm working on an augmented reality app for iPhone that involves a very processor-intensive object recognition algorithm (pushing the CPU to 100%, it can get through maybe 5 frames per second), and in an effort both to save battery power and to make the whole thing less "jittery" I'm trying to come up with a way to only run that object recognizer when the user is actually moving the camera around. My first thought was to simply use the iPhone's accelerometers / gyroscope, but in testing I found that very often people would move the iPhone at a consistent enough attitude and velocity that there wouldn't be any way to tell that it was still in motion. So that left the option of analyzing the actual video feed and detecting movement in it. I got OpenCV working and tried running their pyramidal Lucas-Kanade optical flow algorithm, which works well but seems to be almost as processor-intensive as my object recognizer. I can get it to an acceptable framerate if I lower the depth levels / downsample the image / track fewer points, but then accuracy suffers and it starts to miss some large movements and trigger on small hand-shaking-y ones. So my question is: is there another optical flow algorithm that's faster than Lucas-Kanade if I just want to detect the overall magnitude of camera movement? I don't need to track individual objects, I don't even need to know which direction the camera is moving; all I really need is a way to feed something two frames of video and have it tell me how far apart they are.
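    As an aside (not something the post tries): if only the overall magnitude of frame-to-frame change is needed, a much cheaper baseline than optical flow is plain frame differencing on heavily downsampled grayscale frames. A hedged NumPy sketch of the idea (the function name and the subsampling step are hypothetical):

        import numpy as np

        def motion_magnitude(prev_gray, curr_gray, step=8):
            # Mean absolute difference between subsampled grayscale frames.
            # This is not optical flow: it yields a single "how different are
            # these two frames" number, which is all that is needed to gate
            # the expensive object recognizer.
            a = prev_gray[::step, ::step].astype(np.float32)
            b = curr_gray[::step, ::step].astype(np.float32)
            return float(np.mean(np.abs(a - b)))

        # e.g. run the recognizer only while motion_magnitude(...) exceeds a tuned threshold.

    Global illumination changes also raise this measure, so the threshold would need tuning against real footage.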

    Read the article

  • Need help implementing this algorithm with map Hadoop MapReduce

    - by Julia
    Hi all! I have an algorithm that goes through a large data set, reads some text files, and searches for specific terms in those lines. I have it implemented in Java, but I didn't want to post code so that it doesn't look like I'm searching for someone to implement it for me; it is true, though, that I really need a lot of help! This was not planned for my project, but the data set turned out to be huge, so my teacher told me I have to do it this way. EDIT (I did not clarify this in the previous version): the data set I have is on a Hadoop cluster, and I should make a MapReduce implementation of it. I was reading about MapReduce and thought that if I first did the standard implementation, it would be more or less easy to do it with MapReduce afterwards. But that didn't happen, since the algorithm is quite stupid and nothing special, and MapReduce... I can't wrap my mind around it. So here, shortly, is pseudocode of my algorithm:

        LIST termList   (there is a method that creates this list from a Lucene index)
        FOLDER topFolder

        INPUT topFolder
        IF it is a folder and not empty
            list files (there are 30 sub folders inside)
            FOR EACH sub folder
                GET file "CheckedFile.txt"
                analyze(CheckedFile)
            ENDFOR
        END IF

        Method ANALYZE(CheckedFile)
            read CheckedFile
            WHILE CheckedFile has next line
                GET line
                FOR (loops through termList)
                    GET third word from line
                    IF third word = term from list
                        append whole line to string buffer
                    ENDIF
                ENDFOR
            END WHILE
            OUTPUT string buffer to file

    Also, as you can see, each time "analyze" is called a new file has to be created, and I understood that MapReduce has difficulty writing to many outputs??? I understand the MapReduce intuition, and my example seems perfectly suited for MapReduce, but when it comes to doing it, obviously I do not know enough and I am STUCK! Please, please help.
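    For illustration only (this is not the poster's code, and the file names are hypothetical): the per-line work above maps naturally onto a streaming mapper that emits term-keyed lines; grouping by term in the reduce step then gives one output stream per term. A sketch using Hadoop Streaming with Python, assuming the term list is small enough to ship to every task (e.g. via the -files option):

        #!/usr/bin/env python
        # mapper.py - stdin is the text of the files being scanned;
        # emit "term <TAB> whole line" whenever the third word is a known term.
        import sys

        with open("terms.txt") as f:            # hypothetical file distributed with -files
            terms = set(line.strip() for line in f)

        for line in sys.stdin:
            words = line.split()
            if len(words) >= 3 and words[2] in terms:
                print("%s\t%s" % (words[2], line.rstrip("\n")))

    The reducer then receives all lines for one term together and can write them out; in the Java API, MultipleOutputs serves the same "many output files" purpose.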

    Read the article

  • How to get a list of "fastest miles" from a set of GPS Points

    - by santiagobasulto
    I'm trying to solve a weird problem; maybe you know of some algorithm that takes care of this. I have data for a cargo freight truck and want to extract some information. Suppose I've got a list of time-sorted points that I get from the GPS. That's the route for the truck:

        [
          { "lng": "-111.5373066", "lat": "40.7231711", "time": "1970-01-01T00:00:04Z", "elev": "1942.1789265256325" },
          { "lng": "-111.5372056", "lat": "40.7228762", "time": "1970-01-01T00:00:07Z", "elev": "1942.109892409177" }
        ]

    Now, what I want to get is a list of the "fastest miles". An example: given the points A, B, C, D, E, F, the distance from point A to point B is 1 mile, and the truck took 10:32 minutes; from point B to point D there is another mile, and the truck took 10 minutes; etc. So I need a list sorted by time, similar to:

        B -> D: 10
        A -> B: 10:32
        D -> F: 11:02

    Do you know an efficient algorithm that lets me calculate that? Thank you all. PS: I'm using Python. EDIT: I've got the distance; I know how to calculate it, and there are plenty of posts on how to do that. What I need is an algorithm to tokenize the route by mile and get speed from that. Having a distance function is not helpful enough:

        results = {}
        for point in points:
            aux_points = points.takeWhile(point > n)  # This doesn't exist, just trying to be simple
            for aux_point in aux_points:
                d = distance(point, aux_point)
                if d == 1_MILE:
                    time_elapsed = time(point, aux_point)
                    results[time_elapsed] = (point, aux_point)

    I'm still doing some pretty inefficient calculations.
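    As an illustration of the "tokenize by mile" idea (not the poster's code; it assumes points sorted by time and a distance(p, q) helper that returns miles): a two-pointer sweep keeps a window of consecutive points covering at least one mile, records how long that mile took, then slides the start forward, so each point is visited a constant number of times.

        from datetime import datetime

        def fastest_miles(points, distance):
            # For each start point, extend the window until the accumulated path
            # distance reaches one mile, record the elapsed time, then slide the
            # start forward, reusing the distance already accumulated.
            def ts(p):
                return datetime.strptime(p["time"], "%Y-%m-%dT%H:%M:%SZ")

            results = []                 # (elapsed time, start point, end point)
            j, travelled = 0, 0.0
            for i in range(len(points)):
                while j + 1 < len(points) and travelled < 1.0:
                    travelled += distance(points[j], points[j + 1])
                    j += 1
                if travelled >= 1.0:
                    results.append((ts(points[j]) - ts(points[i]), points[i], points[j]))
                if i < j:
                    travelled -= distance(points[i], points[i + 1])
            return sorted(results, key=lambda r: r[0])

    Each window is the shortest run of points whose path length reaches a mile, which approximates the "fastest mile" starting at that point to the resolution of the GPS samples.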

    Read the article

  • Django: Applying Calculations To A Query Set

    - by TheLizardKing
    I have a QuerySet that I wish to pass to a generic view for pagination:

        links = Link.objects.annotate(votes=Count('vote')).order_by('-created')[:300]

    This is my "hot" page, which lists my 300 latest submissions (10 pages of 30 links each). I now want to sort this QuerySet by the algorithm that Hacker News uses:

        (p - 1) / (t + 2)^1.5

        p = votes minus submitter's initial vote
        t = age of submission in hours

    Because applying this algorithm over the entire database would be pretty costly, I am content with just the last 300 submissions. My site is unlikely to be the next Digg/Reddit, so while scalability is a plus, it is not required. My question is: how do I iterate over my QuerySet and sort it by the above algorithm? For more information, here are my applicable models:

        class Link(models.Model):
            category = models.ForeignKey(Category, blank=False, default=1)
            user = models.ForeignKey(User)
            created = models.DateTimeField(auto_now_add=True)
            modified = models.DateTimeField(auto_now=True)
            url = models.URLField(max_length=1024, unique=True, verify_exists=True)
            name = models.CharField(max_length=512)

            def __unicode__(self):
                return u'%s (%s)' % (self.name, self.url)

        class Vote(models.Model):
            link = models.ForeignKey(Link)
            user = models.ForeignKey(User)
            created = models.DateTimeField(auto_now_add=True)

            def __unicode__(self):
                return u'%s vote for %s' % (self.user, self.link)

    Notes: I don't have "downvotes", so just the presence of a Vote row is an indicator of a vote for a particular link by a particular user.
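    One way to read that into code (a sketch, not from the post: it simply evaluates the 300-link QuerySet and sorts the resulting list in Python, using the field names from the models above and assuming the usual from django.db.models import Count):

        from datetime import datetime

        def hot_score(link, now=None):
            # (p - 1) / (t + 2) ** 1.5, where the "- 1" discounts the submitter's
            # own vote and t is the age of the submission in hours.
            now = now or datetime.now()
            p = link.votes                                    # from annotate(votes=Count('vote'))
            t = (now - link.created).total_seconds() / 3600.0
            return (p - 1) / (t + 2) ** 1.5

        # Evaluate the 300 newest links once, then sort the list in Python:
        links = list(Link.objects.annotate(votes=Count('vote')).order_by('-created')[:300])
        links.sort(key=hot_score, reverse=True)

    Whether a naive or timezone-aware "now" is appropriate depends on the project's USE_TZ setting; with only 300 rows the in-Python sort is cheap either way.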

    Read the article

  • Geohashing - recursively find neighbors of neighbors

    - by itsme
    I am looking for an elegant algorithm to recursively find neighbors of neighbors with the geohashing algorithm (http://www.geohash.org). Basically, take a central geohash, then get the first 'ring' of same-size hashes around it (8 elements), then, in the next step, get the next ring around the first, etc. Have you heard of an elegant way to do so? Brute force would be to take each neighbor and get their neighbors, simply ignoring the massive overlap. Finding the neighbors around one central geohash has been solved many times (here, e.g., in Ruby: http://github.com/masuidrive/pr_geohash/blob/master/lib/pr_geohash.rb). Edit for clarification: the current solution passes in a center key and a direction, like this (with corresponding lookup tables):

        def adjacent(geohash, dir)
          base, lastChr = geohash[0..-2], geohash[-1,1]
          type = (geohash.length % 2)==1 ? :odd : :even
          if BORDERS[dir][type].include?(lastChr)
            base = adjacent(base, dir)
          end
          base + BASE32[NEIGHBORS[dir][type].index(lastChr),1]
        end

    (extract from Yuichiro MASUI's lib). I say this approach will get ugly soon, because the directions get ugly once we are in ring two or three. Ideally the algorithm would simply take two parameters: the center area, and the distance, with 0 being the center geohash only (["u0m"]), 1 being the first ring of 8 same-size geohashes around it (= [["u0t", "u0w"], ["u0q", "u0n"], ["u0j", "u0h"], ["u0k", "u0s"]]), and 2 being the second ring with 16 areas around the first ring, etc. Do you see any way to deduce the 'rings' from the bits in an elegant way?
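    For what it's worth, a hedged sketch of the "brute force minus the overlap" approach: grow the rings breadth-first and use a visited set so each hash is emitted exactly once, assuming a neighbors(h) helper that returns the 8 adjacent same-length hashes (any geohash library's adjacency function would do):

        def rings(center, k, neighbors):
            # Ring 0 is just the centre hash; ring i+1 is every hash adjacent to
            # ring i that has not already appeared in an earlier ring.
            seen = {center}
            current = [center]
            result = [current]
            for _ in range(k):
                nxt = []
                for h in current:
                    for n in neighbors(h):
                        if n not in seen:
                            seen.add(n)
                            nxt.append(n)
                current = nxt
                result.append(current)
            return result

    This does eight adjacency calls per cell rather than deriving ring membership from the bit interleaving directly, but it stays simple at any distance and never revisits a cell.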

    Read the article

  • Determining the chances of an event occurring when it hasn't occurred yet

    - by sanity
    A user visits my website at time t, and they may or may not click on a particular link I care about. If they do, I record the fact that they clicked the link, and also the duration since t at which they clicked it; call this d. I need an algorithm that allows me to create a class like this:

        class ClickProbabilityEstimate {
            public void reportImpression(long id);
            public void reportClick(long id);
            public double estimateClickProbability(long id);
        }

    Every impression gets a unique id, and this is used when reporting a click to indicate which impression the click belongs to. I need an algorithm that will return a probability, based on how much time has passed since an impression was reported, that the impression will still receive a click, based on how long previous clicks took. Clearly one would expect this probability to decrease over time if there is still no click. If necessary, we can set an upper bound beyond which we consider the click probability to be 0 (e.g. if it's been an hour since the impression occurred, we can be pretty sure there won't be a click). The algorithm should be both space- and time-efficient, hopefully make as few assumptions as possible, and be elegant. Ease of implementation would also be nice. Any ideas?
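    One possible empirical approach (an illustration, not a canonical algorithm, and sketched in Python rather than the Java-style interface above): keep the delays of past clicks in sorted order, count impressions that timed out without a click, and estimate P(click eventually | no click for d seconds) as the fraction of comparable past impressions that did go on to click after waiting longer than d.

        import bisect

        class ClickProbabilityEstimator:
            def __init__(self, cutoff=3600.0):
                self.cutoff = cutoff      # "give up" horizon, e.g. one hour
                self.pending = {}         # impression id -> impression time
                self.delays = []          # sorted click delays (seconds)
                self.expired = 0          # impressions that never received a click

            def report_impression(self, imp_id, t):
                self.pending[imp_id] = t

            def report_click(self, imp_id, t):
                start = self.pending.pop(imp_id)
                bisect.insort(self.delays, t - start)

            def expire_older_than(self, t):
                # Call periodically: pending impressions past the cutoff are
                # written off as "never clicked".
                for imp_id, start in list(self.pending.items()):
                    if t - start > self.cutoff:
                        del self.pending[imp_id]
                        self.expired += 1

            def estimate_click_probability(self, imp_id, t):
                d = t - self.pending[imp_id]
                if d > self.cutoff:
                    return 0.0
                later = len(self.delays) - bisect.bisect_right(self.delays, d)
                denom = later + self.expired
                return later / denom if denom else 0.0

    Space is one float per historical click plus the pending table, and each operation is O(log n) or better; a fixed-size histogram of delays would trade a little accuracy for strictly bounded memory.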

    Read the article

  • Need help implementing this algorithm with map reduce(hadoop)

    - by Julia
    Hi all! I have an algorithm that goes through a large data set, reads some text files, and searches for specific terms in those lines. I have it implemented in Java, but I didn't want to post code so that it doesn't look like I'm searching for someone to implement it for me; it is true, though, that I really need a lot of help! This was not planned for my project, but the data set turned out to be huge, so my teacher told me I have to do it this way. I was reading about MapReduce and thought that if I first did the standard implementation, it would be more or less easy to do it with MapReduce afterwards. But that didn't happen, since the algorithm is quite stupid and nothing special, and MapReduce... I can't wrap my mind around it. So here, shortly, is pseudocode of my algorithm:

        LIST termList   (there is a method that creates this list from a Lucene index)
        FOLDER topFolder

        INPUT topFolder
        IF it is a folder and not empty
            list files (there are 30 sub folders inside)
            FOR EACH sub folder
                GET file "CheckedFile.txt"
                analyze(CheckedFile)
            ENDFOR
        END IF

        Method ANALYZE(CheckedFile)
            read CheckedFile
            WHILE CheckedFile has next line
                GET line
                FOR (loops through termList)
                    GET third word from line
                    IF third word = term from list
                        append whole line to string buffer
                    ENDIF
                ENDFOR
            END WHILE
            OUTPUT string buffer to file

    Also, as you can see, each time "analyze" is called a new file has to be created, and I understood that MapReduce has difficulty writing to many outputs??? I understand the MapReduce intuition, and my example seems perfectly suited for MapReduce, but when it comes to doing it, obviously I do not know enough and I am STUCK! Please, please help.

    Read the article

  • Calculating distance between two X,Y coordinates

    - by Umopepisdn
    I am writing a tool for a game that involves calculating the distance between two coordinates on a spherical plane 500 units across. That is, [0,0] through [499,499] are valid coordinates, and [0,0] and [499,499] are also right next to each other. Currently, in my application, I am comparing the distance between a city with an [X,Y] location respective to the user's own [X,Y] location, which they have configured in advance. To do this, I found this algorithm, which kind of works: Math.sqrt ( dx * dx + dy * dy ); Because sorting a paged list by distance is a useful thing to be able to do, I implemented this algorithm in a MySQL query and have made it available to my application using the following part of my SELECT statement: SQRT( POW( ( ".strval($sourceX)." - cityX ) , 2 ) + POW( ( ".strval($sourceY)." - cityY ) , 2 ) ) AS distance This works fine for many calculations, but does not take into account the fact that [0,0] and [499,499] are kitty-corner to one another. Is there any way I can tweak this algorithm to generate an accurate distance, given that 0 and 499 are adjacent? Thanks, -Umo
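    For reference, a hedged sketch of the standard fix for wrap-around worlds (not from the post): measure each axis separation as the shorter of the direct gap and the gap going around the edge, then apply Pythagoras to the wrapped deltas.

        import math

        WORLD = 500  # the map is 500 units across and wraps in both axes

        def wrapped_distance(x1, y1, x2, y2):
            # On a wrapping map, the separation along each axis is the smaller of
            # "going straight there" and "going around the edge".
            dx = abs(x1 - x2)
            dy = abs(y1 - y2)
            dx = min(dx, WORLD - dx)
            dy = min(dy, WORLD - dy)
            return math.sqrt(dx * dx + dy * dy)

        # wrapped_distance(0, 0, 499, 499) is sqrt(2), since those cells are adjacent.

    The same trick can be written inline in the MySQL SELECT with LEAST(ABS(a - b), 500 - ABS(a - b)) for each axis, so the server-side sort by distance keeps working.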

    Read the article

  • On counting pairs of words that differ by one letter

    - by Quintofron
    Let us consider n words, each of length k. The words consist of letters over an alphabet (whose cardinality is n) with a defined order. The task is to derive an O(nk) algorithm to count the number of pairs of words that differ in exactly one position (no matter which one, as long as it is a single position). For instance, in the following set of words (n = 5, k = 4): abcd, abdd, adcb, adcd, aecd, there are 5 such pairs: (abcd, abdd), (abcd, adcd), (abcd, aecd), (adcb, adcd), (adcd, aecd). So far I've managed to find an algorithm that solves a slightly easier problem: counting the number of pairs of words that differ at one GIVEN position (the i-th). To do this I swap the letter at the i-th position with the last letter of each word, perform a radix sort (ignoring the last position in each word, formerly the i-th position), linearly detect words whose letters at positions 1 to k-1 are the same, then count the number of occurrences of each letter at the last (originally i-th) position within each set of duplicates and calculate the desired pairs (the last part is simple). However, the algorithm above doesn't seem to be applicable to the main problem (under the O(nk) constraint), at least not without some modifications. Any idea how to solve this?
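    Not from the post, but a common alternative that clarifies the target complexity: bucket the words by "word with position i wildcarded" for every i; two distinct words land in the same bucket exactly when they differ only at position i. A Python sketch (as written it builds O(nk) keys of length k, i.e. O(nk²) work; replacing the string keys with rolling hashes of the prefix and suffix brings it down to the required O(nk) expected time):

        from collections import Counter

        def count_one_letter_pairs(words):
            k = len(words[0])
            total = 0
            for i in range(k):
                # Words that agree everywhere except (possibly) position i
                # share the same wildcarded key.
                buckets = Counter(w[:i] + "*" + w[i + 1:] for w in words)
                total += sum(c * (c - 1) // 2 for c in buckets.values())
            # Pairs of identical words were counted once per position; they differ
            # in zero positions, so remove them k times.
            identical = Counter(words)
            total -= k * sum(c * (c - 1) // 2 for c in identical.values())
            return total

        # The post's example: abcd, abdd, adcb, adcd, aecd  ->  5
        print(count_one_letter_pairs(["abcd", "abdd", "adcb", "adcd", "aecd"]))

    This trades the radix-sort machinery for hashing, at the cost of being expected-time rather than worst-case.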

    Read the article

  • License key pattern detection?

    - by Ricket
    This is not a real situation; please ignore legal issues that you might think apply, because they don't. Let's say I have a set of 200 known valid license keys for a hypothetical piece of software's licensing algorithm, and a license key consists of 5 sets of 5 alphanumeric case-insensitive (all uppercase) characters. Example: HXDY6-R3DD7-Y8FRT-UNPVT-JSKON Is it possible (or likely) to extrapolate other possible keys for the system? What if the set was known to be consecutive; how do the methods change for this situation, and what kind of advantage does this give? I have heard of "keygens" before, but I believe they are probably made by decompiling the licensing software rather than examining known valid keys. In this case, I am only given the set of keys and I must determine the algorithm. I'm also told it is an industry standard algorithm, so it's probably not something basic, though the chance is always there I suppose. If you think this doesn't belong in Stack Overflow, please at least suggest an alternate place for me to look or ask the question. I honestly don't know where to begin with a problem like this. I don't even know the terminology for this kind of problem.

    Read the article

  • Vacancy Tracking Algorithm implementation in C++

    - by Dave
    I'm trying to use the vacancy tracking algorithm to perform transposition of multidimensional arrays in C++. The arrays come in as void pointers, so I'm using address manipulation to perform the copies. Basically, the algorithm starts with an offset and works its way through the whole 1-D representation of the array like Swiss cheese, knocking out other offsets until it gets back to the original one. Then you start at the next untouched offset and do it again, and you repeat until all offsets have been touched. Right now, I'm using a std::set to fill up all possible offsets (0 up to the product of the array's dimensions). Then, as I go through the algorithm, I erase from the set. I figured this would be fastest because I need to randomly access offsets in the tree/set and delete them, and then quickly find the next untouched/undeleted offset. First, filling up the set is very slow, and it seems like there must be a better way: it individually calls new for every insert, so with 5 million offsets that's 5 million allocations, plus constantly re-balancing the tree, which as you know is not fast for a pre-sorted sequence. Second, deleting is slow as well. Third, assuming 4-byte data types like int and float, I'm actually using the same amount of memory to store the list of untouched offsets as the array itself takes. Fourth, determining whether there are any untouched offsets and getting one of them is fast, which is a good thing. Does anyone have suggestions for any of these issues?
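    As an aside (not the poster's code; sketched in Python for brevity, where the C++ analogue of the flag array would be a std::vector<bool> or plain byte buffer): replacing the set of untouched offsets with a flat visited-flag array makes marking O(1), costs one byte (or one bit) per element instead of a tree node, and "find the next untouched offset" becomes the forward scan the outer loop is already doing.

        def transpose_in_place(a, rows, cols):
            # Cycle-following (vacancy-tracking) transpose of a row-major list:
            # the element at offset i belongs at offset (i * rows) % (rows*cols - 1),
            # except for the first and last elements, which stay put.
            n = rows * cols
            visited = bytearray(n)          # flat flags instead of a std::set
            for start in range(1, n - 1):
                if visited[start]:
                    continue
                i, val = start, a[start]
                while True:
                    j = (i * rows) % (n - 1)   # destination of offset i
                    a[j], val = val, a[j]
                    visited[j] = 1
                    i = j
                    if i == start:
                        break
            return a

        # transpose_in_place([1, 2, 3, 4, 5, 6], 2, 3) -> [1, 4, 2, 5, 3, 6]

    The trade-off versus the set is that the outer loop now touches every offset once even when most cycles are long, but that scan is linear and cache-friendly.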

    Read the article

  • Find existence of number in a sorted list in constant time? (Interview question)

    - by Rich
    I'm studying for upcoming interviews and have encountered this question several times (written verbatim): Find, or determine the non-existence of, a number in a sorted list of N numbers, where the numbers range over M, M > N, and N is large enough to span multiple disks. Algorithm to beat O(log n); bonus points for a constant-time algorithm. First of all, I'm not sure this is a question with a real solution. My colleagues and I have mused over this problem for weeks and it seems ill-formed (of course, just because we can't think of a solution doesn't mean there isn't one). A few questions I would have asked the interviewer are: Are there repeats in the sorted list? What's the relationship between the number of disks and N? One approach I considered was to binary search the min/max of each disk to determine the disk that should hold the number, if it exists, then binary search on that disk itself. Of course, this is only an order-of-magnitude speedup if the number of disks is large and you also have a sorted list of disks; I think it would yield some sort of O(log log n) time. As for the M > N hint, perhaps if you know how many numbers are on a disk and what their range is, you could use the pigeonhole principle to rule out some cases some of the time, but I can't figure out an order-of-magnitude improvement. Also, "bonus points for constant time algorithm" makes me a bit suspicious. Any thoughts, solutions, or relevant history of this problem?
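    As a side note (not part of the question): the usual textbook answer to "beat O(log n) on a sorted list" is interpolation search, which guesses the probe position from the value instead of always probing the midpoint and runs in O(log log n) expected time when the values are roughly uniformly distributed over their range M. A minimal sketch:

        def interpolation_search(a, target):
            lo, hi = 0, len(a) - 1
            while lo <= hi and a[lo] <= target <= a[hi]:
                if a[hi] == a[lo]:
                    return a[lo] == target
                # Probe where the target "should" be if values are spread evenly.
                mid = lo + (target - a[lo]) * (hi - lo) // (a[hi] - a[lo])
                if a[mid] == target:
                    return True
                if a[mid] < target:
                    lo = mid + 1
                else:
                    hi = mid - 1
            return False

    True constant time, by contrast, would need extra structure such as a hash table or a bitmap over the range M, which is presumably what the "bonus points" hint is fishing for; neither follows from the sorted list alone.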

    Read the article

  • Sonar Analysis crashing with default configuration in Maven

    - by Robert Mandeville
    I'm starting to experiment with Sonar, and having trouble. I'm running everything on the same Red Hat Linux server, against Java 1.6.10. I launched the server with "bin/linux-x86-32" (the JVM is 32-bit). The sonar.log shows no SEVERE or ERROR and one WARNING, that I'm using the default Derby database (I'll fix that once I get things running at all). I am trying to build a Maven project that builds a JAR. I made no Sonar-specific changes (other than one described below). I can run "mvn clean install" with no problem. However, if I then run "mvn -e sonar:sonar", I get the stacktrace listed below. The server logs no events. I added the dependency "commons-pool:commons-pool:20030825.183949, but to no avail. Any idea as to what I'm doing wrong? [INFO] Error stacktraces are turned on. [INFO] Scanning for projects... [INFO] [INFO] ------------------------------------------------------------------------ [INFO] Building buildUtil 1.0 [INFO] ------------------------------------------------------------------------ [INFO] [INFO] --- sonar-maven-plugin:2.0:sonar (default-cli) @ buildUtil --- [INFO] Sonar version: 2.14 [WARN] [14:54:17.730] Derby database should be used for evaluation purpose only [INFO] [14:54:17.732] Create JDBC datasource [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Total time: 2.130s [INFO] Finished at: Mon Apr 09 14:54:17 EDT 2012 [INFO] Final Memory: 8M/198M [INFO] ------------------------------------------------------------------------ [ERROR] Failed to execute goal org.codehaus.mojo:sonar-maven-plugin:2.0:sonar (default-cli) on project buildUtil: Can not execute Sonar: PicoLifecycleException: method 'public final org.sonar.core.persistence.DefaultDatabase org.sonar.core.persistence.DefaultDatabase.start()', instance 'org.sonar.batch.bootstrap.BatchDatabase@41b635, java.lang.RuntimeException: wrapper: org/apache/commons/pool/impl/GenericObjectPool: org.apache.commons.pool.impl.GenericObjectPool -> [Help 1] org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.codehaus.mojo:sonar-maven-plugin:2.0:sonar (default-cli) on project buildUtil: Can not execute Sonar at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:217) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59) at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183) at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:319) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156) at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537) at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196) at org.apache.maven.cli.MavenCli.main(MavenCli.java:141) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at 
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290) at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230) at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409) at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352) Caused by: org.apache.maven.plugin.MojoExecutionException: Can not execute Sonar at org.codehaus.mojo.sonar.Bootstraper.executeMojo(Bootstraper.java:118) at org.codehaus.mojo.sonar.Bootstraper.start(Bootstraper.java:65) at org.codehaus.mojo.sonar.SonarMojo.execute(SonarMojo.java:90) at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209) ... 19 more Caused by: org.picocontainer.PicoLifecycleException: PicoLifecycleException: method 'public final org.sonar.core.persistence.DefaultDatabase org.sonar.core.persistence.DefaultDatabase.start()', instance 'org.sonar.batch.bootstrap.BatchDatabase@41b635, java.lang.RuntimeException: wrapper at org.picocontainer.monitors.NullComponentMonitor.lifecycleInvocationFailed(NullComponentMonitor.java:77) at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.monitorAndThrowReflectionLifecycleException(ReflectionLifecycleStrategy.java:132) at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.invokeMethod(ReflectionLifecycleStrategy.java:115) at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.start(ReflectionLifecycleStrategy.java:89) at org.picocontainer.injectors.AbstractInjectionFactory$LifecycleAdapter.start(AbstractInjectionFactory.java:84) at org.picocontainer.behaviors.AbstractBehavior.start(AbstractBehavior.java:169) at org.picocontainer.behaviors.Stored$RealComponentLifecycle.start(Stored.java:132) at org.picocontainer.behaviors.Stored.start(Stored.java:110) at org.picocontainer.DefaultPicoContainer.potentiallyStartAdapter(DefaultPicoContainer.java:1009) at org.picocontainer.DefaultPicoContainer.startAdapters(DefaultPicoContainer.java:1002) at org.picocontainer.DefaultPicoContainer.start(DefaultPicoContainer.java:760) at org.sonar.api.platform.ComponentContainer.startComponents(ComponentContainer.java:70) at org.sonar.batch.bootstrap.Module.start(Module.java:82) at org.sonar.batch.bootstrapper.Batch.startBatch(Batch.java:71) at org.sonar.batch.bootstrapper.Batch.execute(Batch.java:58) at org.sonar.maven3.SonarMojo.execute(SonarMojo.java:143) at org.codehaus.mojo.sonar.Bootstraper.executeMojo(Bootstraper.java:113) ... 23 more Caused by: java.lang.RuntimeException: wrapper at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.monitorAndThrowReflectionLifecycleException(ReflectionLifecycleStrategy.java:130) ... 38 more Caused by: java.lang.NoClassDefFoundError: org/apache/commons/pool/impl/GenericObjectPool at org.apache.commons.dbcp.BasicDataSourceFactory.createDataSource(BasicDataSourceFactory.java:152) at org.sonar.core.persistence.DefaultDatabase.initDatasource(DefaultDatabase.java:114) at org.sonar.core.persistence.DefaultDatabase.start(DefaultDatabase.java:60) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.invokeMethod(ReflectionLifecycleStrategy.java:110) ... 
37 more Caused by: java.lang.ClassNotFoundException: org.apache.commons.pool.impl.GenericObjectPool at org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy.loadClass(SelfFirstStrategy.java:50) at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:244) at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:230) at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320) ... 45 more [ERROR] [ERROR] Re-run Maven using the -X switch to enable full debug logging. [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException The POM I'm using is: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.phoenix.build</groupId> <artifactId>buildUtil</artifactId> <version>1.0</version> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> </properties> <build> <sourceDirectory>src/main/java</sourceDirectory> <testSourceDirectory>src/test/java</testSourceDirectory> <plugins> <plugin> <artifactId>maven-compiler-plugin</artifactId> <version>2.3.2</version> <configuration> <source>1.6</source> <target>1.6</target> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <version>2.7.2</version> <configuration> <excludes> <exclude>**/*integrationTest.java</exclude> </excludes> </configuration> <executions> <execution> <id>integration-tests</id> <phase>integration-test</phase> <goals> <goal>test</goal> </goals> <configuration> <skip>false</skip> <excludes> <exclude>none</exclude> </excludes> <includes> <include>**/*integrationTest.java</include> </includes> </configuration> </execution> </executions> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <version>2.2</version> <executions> <execution> <goals> <goal>test-jar</goal> </goals> </execution> </executions> </plugin> </plugins> </build> <dependencies> <dependency> <groupId>commons-cli</groupId> <artifactId>commons-cli</artifactId> <version>1.2</version> <type>jar</type> <optional>false</optional> </dependency> <dependency> <groupId>jaxen</groupId> <artifactId>jaxen</artifactId> <version>1.1.1</version> <type>jar</type> <optional>false</optional> </dependency> <dependency> <groupId>dom4j</groupId> <artifactId>dom4j</artifactId> <version>1.6.1</version> <type>jar</type> <optional>false</optional> </dependency> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.8.2</version> <type>jar</type> <scope>test</scope> <optional>false</optional> </dependency> <dependency> <groupId>org.apache.maven</groupId> <artifactId>maven-artifact</artifactId> <version>2.0</version> <type>jar</type> <optional>false</optional> </dependency> <dependency> <groupId>org.codehaus.plexus</groupId> <artifactId>plexus-classworlds</artifactId> <version>2.2.2</version> <type>jar</type> <optional>false</optional> </dependency> <dependency> <groupId>com.ibm.db2</groupId> <artifactId>db2jcc</artifactId> <version>9.7</version> <type>jar</type> <optional>false</optional> </dependency> <dependency> <groupId>com.ibm.db2</groupId> <artifactId>Common</artifactId> <version>9.7</version> 
<type>jar</type> <optional>false</optional> </dependency> <dependency> <groupId>com.ibm.db2</groupId> <artifactId>db2fs</artifactId> <version>9.7</version> <type>jar</type> <optional>false</optional> </dependency> <dependency> <groupId>com.ibm.db2</groupId> <artifactId>db2java</artifactId> <version>9.7</version> <type>zip</type> <optional>false</optional> </dependency> <dependency> <groupId>com.ibm.db2</groupId> <artifactId>db2jcc_javax</artifactId> <version>9.7</version> <type>jar</type> <optional>false</optional> </dependency> <dependency> <groupId>com.ibm.db2</groupId> <artifactId>db2jcc_license_cisuz</artifactId> <version>9.7</version> <type>jar</type> <optional>false</optional> </dependency> <dependency> <groupId>com.ibm.db2</groupId> <artifactId>db2jcc_license_cu</artifactId> <version>9.7</version> <type>jar</type> <optional>false</optional> </dependency> <dependency> <groupId>com.ibm.db2</groupId> <artifactId>db2policy</artifactId> <version>9.7</version> <type>jar</type> <optional>false</optional> </dependency> <dependency> <groupId>com.ibm.db2</groupId> <artifactId>sqlj</artifactId> <version>9.7</version> <type>zip</type> <optional>false</optional> </dependency> <dependency> <groupId>com.ibm.db2</groupId> <artifactId>db2qgjava</artifactId> <version>9.7</version> <type>jar</type> <optional>false</optional> </dependency> </dependencies>

    Read the article

  • image analysis question for matlab

    - by raekwon
    Hi, I asked a similar question before but wasn't quite clear. I am trying to put a blue, orange, or other colored circle (more like a donut) around a feature in an image so that I can track it. It should be more like a donut so that I can still view the image underneath... Anyway, I have a multidimensional array, extract 2D submatrices out of it, and plot each one using imagesc(subarray). In each of the subarrays there are multiple features that I've located and want to track. Is it possible in MATLAB to put a donut/circle around them to do that? How would you do it?

    Read the article

  • ORM market analysis

    - by bonefisher
    I would like to hear about your experience with the popular ORM tools out there, like NHibernate, LLBLGen, EF, S2Q, Genom-e, LightSpeed, DataObjects.NET, OpenAccess, ... From my experience:

    - Genom-e is quite capable in terms of LINQ, performance, and developer support.
    - EF lacks some key features like lazy loading, POCO support, and persistence ignorance, but version 4.0 may have overcome that.
    - DataObjects.Net has been good so far, although I found some bugs.
    - NHibernate has a steep learning curve and no 100% LINQ support (unlike Genom-e and DataObjects.Net), but it is very well supported, extensible, and mature.

    Read the article
