Search Results

Search found 60688 results on 2428 pages for 'spring data solr'.


  • How to make a legacy webapp spring aware at the container level for bean autowire into Servlets?

    - by Pete
    We have a legacy web application (not Spring based) and are looking for best practices to autowire some newer Spring-configured (thread-safe) service beans into instance variables in several of the legacy servlets. Rewriting every servlet to Spring MVC is out of scope. For testability, we do not want any Spring-specific code in the servlets that looks up beans by name or similar. Note that we are not concerned about web-specific bean scopes such as session or request; all services are singleton scope. The relevant snippet:

        public class MyServlet extends LegacyServletSuperclass {
            private MyThreadSafeServiceBean wantThisToBeAutowiredBySpring;
            ...
        }
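
    A minimal sketch of one common approach, using Spring's SpringBeanAutowiringSupport helper; it assumes a ContextLoaderListener in web.xml has created the root application context, and that LegacyServletSuperclass ultimately extends HttpServlet:

        import javax.servlet.ServletConfig;
        import javax.servlet.ServletException;
        import org.springframework.beans.factory.annotation.Autowired;
        import org.springframework.web.context.support.SpringBeanAutowiringSupport;

        public class MyServlet extends LegacyServletSuperclass {

            @Autowired
            private MyThreadSafeServiceBean wantThisToBeAutowiredBySpring;

            @Override
            public void init(ServletConfig config) throws ServletException {
                super.init(config);
                // Inject @Autowired fields from the root web application context.
                SpringBeanAutowiringSupport.processInjectionBasedOnServletContext(
                        this, config.getServletContext());
            }
        }

    This avoids looking up beans by name: the single Spring-specific call sits in init(), injection happens by type, and tests can still set the field directly.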

    Read the article

  • Prevent Method call without Exception using @PreAuthorize Annotation

    - by Chepech
    Hi all. We are using Spring Security 3. We have a custom implementation of PermissionEvaluator that runs a complex algorithm to grant or deny access at the method level. To do that we add a @PreAuthorize annotation to each method we want to protect (obviously). Everything is fine on that front. However, the behavior we are looking for is that when a hasPermission call is denied, the protected method call should simply be skipped; instead we are getting a 403 error each time that happens. Any ideas how to prevent that? You can find a different explanation of the problem here: AccessDeniedException handling during methodSecurityInterception
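
    One hedged sketch of a workaround: since the method-security interceptor raises AccessDeniedException when hasPermission denies the call, the caller can swallow that exception and treat the call as a no-op (ProtectedService here is a stand-in for whatever bean carries the @PreAuthorize annotation):

        import org.springframework.security.access.AccessDeniedException;

        public class SkippingCaller {

            private final ProtectedService service; // its methods carry @PreAuthorize

            public SkippingCaller(ProtectedService service) {
                this.service = service;
            }

            public void callIfPermitted() {
                try {
                    service.protectedMethod();
                } catch (AccessDeniedException denied) {
                    // hasPermission said no: skip the call instead of
                    // letting the 403 propagate to the client.
                }
            }
        }

    A custom AccessDeniedHandler only helps at the web tier, so catching the exception close to the call site is the usual way to turn a denial into a skip.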

    Read the article

  • What's your "best practice" for the first Java EE Spring project?

    - by cringe
    I'm currently trying to get into Java EE development with the Spring framework. As I'm new to Spring, it is hard to imagine how a good running project should start off. Do you have any best practices, tips or major DO NOTs for a starter? How did you start with Spring - a big project or small tutorial-like applications? Which technology did you use right away: AOP, complex Hibernate...

    Read the article

  • Spring/Eclipse 'referenced bean not found' warning when using <import>?

    - by HDave
    I have just broken up a Spring bean configuration file into smaller external files and have used the <import> directive to include them in my Spring test application context XML file. But whenever I reference one of the beans from the imported files, I get a warning within the Eclipse/STS Spring XML editor complaining that "referenced bean 'foo' not found". Is this a bug, or is it me? It's really annoying because I don't want to disable the warning, yet at my company we try to eliminate all warnings.

    Read the article

  • Input not firing in jsp page

    - by GigaPr
    Hi, I have been using the Spring MVC framework lately for a university project. Could you tell me why this works:

        <FORM METHOD=POST ACTION="SaveName.jsp">
            <input type="image" class="floatR marginTMinus10" src="images/delete.png" name="image" value="${rssItem.id}" alt="Delete"/>
        </FORM>

    while this does not:

        <input type="image" class="floatR marginTMinus10" src="images/delete.png" name="image" value="${rssItem.id}" alt="Delete"/>

    Does it mean a button has to be in a form to work? Can I use a button? If yes, how do I handle the event in the controller? Thanks
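
    An input normally submits only as part of an enclosing form. Note too that for type="image" browsers reliably send the click coordinates (image.x / image.y), while support for submitting the value attribute varies, so a hidden field such as <input type="hidden" name="itemId" value="${rssItem.id}"/> is a safer carrier for the id. A minimal sketch of the receiving Spring MVC controller (the URL, parameter name and view names are assumptions):

        import org.springframework.stereotype.Controller;
        import org.springframework.web.bind.annotation.RequestMapping;
        import org.springframework.web.bind.annotation.RequestMethod;
        import org.springframework.web.bind.annotation.RequestParam;

        @Controller
        public class RssItemController {

            // Mapped to the form's action URL.
            @RequestMapping(value = "/deleteItem", method = RequestMethod.POST)
            public String delete(@RequestParam("itemId") long itemId) {
                // delete the feed item here, then return to the list view
                return "redirect:/items";
            }
        }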

    Read the article

  • What should the Java main method be for a standalone application (for Spring JMS)?

    - by Brandon
    I am interested in creating a standalone Spring application that will run and wait to receive messages from an ActiveMQ queue using Spring JMS. I have searched a lot of places and cannot find a consistent way of implementing the main method for such a standalone application. There appear to be few examples of standalone Spring applications. I have looked at Tomcat, JBoss, ActiveMQ and other examples from around the web, but I have not come to a conclusion, so ... what is the best practice for implementing a main method for a Java application (specifically Spring with JMS)?
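
    A minimal sketch of the usual pattern (the configuration file name is an assumption): bootstrap the context, register a shutdown hook, and let the listener container's own threads keep the JVM alive:

        import org.springframework.context.support.AbstractApplicationContext;
        import org.springframework.context.support.ClassPathXmlApplicationContext;

        public final class JmsListenerMain {

            public static void main(String[] args) {
                AbstractApplicationContext context =
                        new ClassPathXmlApplicationContext("applicationContext.xml");
                // Close the context cleanly when the JVM shuts down.
                context.registerShutdownHook();
                // No wait loop is needed: with its default executor, Spring's
                // DefaultMessageListenerContainer runs on non-daemon threads,
                // so the JVM stays alive waiting for messages.
            }
        }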

    Read the article

  • How should I secure my webapp written using Wicket, Spring, and JPA?

    - by Martin
    So, I have a web-based application that uses the Wicket 1.4 framework, Spring beans, the Java Persistence API (JPA), and the OpenSessionInView pattern. I'm hoping to find a security model that is declarative but doesn't require gobs of XML configuration - I'd prefer annotations. Here are the options so far:
    - Spring Security (guide) - looks complete, but every guide I find that combines it with Wicket still calls it Acegi Security, which makes me think it must be old.
    - Wicket-Auth-Roles (guide 1 and guide 2) - most guides recommend mixing this with Spring Security, and I love the declarative style of @Authorize("ROLE1","ROLE2",etc). I'm concerned about having to extend AuthenticatedWebApplication, since I'm already extending org.apache.wicket.protocol.http.WebApplication, and Spring is already proxying that behind org.apache.wicket.spring.SpringWebApplicationFactory.
    - SWARM / WASP (guide) - this looks the newest (though the main contributor passed away years ago), but I hate all of the JAAS-styled text files that declare permissions for principals. I also don't like the idea of making an Action class for every single thing a user might want to do. Secure models also aren't immediately obvious to me. Plus, there isn't an Authn example.
    Additionally, it looks like lots of folks recommend mixing the first and second options. I can't tell what the best practice is at all, though.

    Read the article

  • Why does Spring Security Core RC4 require Grails 2.3?

    - by bksaville
    On the Spring Security Core plugin GitHub repo, I see that Graeme on May 21st upped the required version of Grails from 2.0 to 2.3 before the RC4 version was released a couple of months later, but I don't see any explanation for why. Was it mismatched dependencies, bug reports, etc.? I run a 2.2.4 app, and I would prefer not to upgrade at this point just to get the latest RC of Spring Security Core. I would understand if the upgrade to 3.2.0.RELEASE of Spring Security caused mismatched dependencies with older versions of Grails, since I've run into the same issue before. This originally came up due to a pull request on the Spring Security OAuth2 provider plugin that I maintain: the pull request upped the required version to 2.3, and the requester pointed me to the RC4 release of core as the reason. Thanks for the good work as always!

    Read the article

  • Bookmarkable URL in JSF application - Trying to use Spring Webflow and JSF. Any suggestions?

    - by vsingh
    Our application uses JSF, Hibernate & Spring. Currently the URL is in the following format: http://www.skill-guru.com/skill/login/testDetails.faces?testId=62&testName=PMP-Certification-practice-test We want a clean URL like http://www.skill-guru.com/urltitle?some parameter One of the ways we could do this is by integrating Spring Web Flow with JSF. Any other suggestions? We are trying Spring Web Flow 1.0 with JSF 2.0, but that does not seem to work.

    Read the article

  • Sybase PowerDesigner Change Many (Find/Replace/Convert) Data Item's Data Types

    - by Andy
    Hello, I have a relatively large Conceptual Data Model in PowerDesigner. After generating a Physical Data Model and seeing the DBMS data types, I need to update the data types (NUMBER/TEXT) of every data item. I'd like to either do a find/replace within the Conceptual Data Model or somehow map to different data types when creating the Physical Data Model. E.g. change the automatic conversion of Text - CLOB to Text - NVARCHAR(20). Thanks!

    Read the article

  • What to do with lookup entities selected from a drop-down? How to send them to the service layer?

    - by arrages
    I am developing a Spring MVC based application. I have a simple POJO form object; the problem is that many properties are selected from drop-down lists populated from lookup entities, so I store the entity ID in the form object:

        public class NewCarRequestForm {
            private Long makeId;  // these are selected from a drop-down
            private Long modelId;
        }

    Should I just send this lookup entity ID to the service layer? Or should I validate that the ID is correct (somebody can send any random ID through the request) first - and how? There is also a problem if I want to validate something based on some property of the lookup entity. Do I perform a database lookup of the entity just to perform the validation? Thanks.
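
    A hedged sketch of the usual answer: never trust the submitted ID; re-check it against the lookup table in a Validator (or in the service layer) before using it. MakeRepository and the getter are hypothetical names:

        import org.springframework.validation.Errors;
        import org.springframework.validation.Validator;

        public class NewCarRequestFormValidator implements Validator {

            private final MakeRepository makeRepository; // hypothetical lookup DAO

            public NewCarRequestFormValidator(MakeRepository makeRepository) {
                this.makeRepository = makeRepository;
            }

            public boolean supports(Class<?> clazz) {
                return NewCarRequestForm.class.isAssignableFrom(clazz);
            }

            public void validate(Object target, Errors errors) {
                NewCarRequestForm form = (NewCarRequestForm) target;
                // A request can carry any id, so confirm it really exists
                // before handing it to the service layer.
                if (form.getMakeId() == null || !makeRepository.exists(form.getMakeId())) {
                    errors.rejectValue("makeId", "invalid.makeId");
                }
            }
        }

    Loading the entity once here and keeping it around also answers the second concern: you pay for a single database lookup, and any property-based validation can run against the loaded entity.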

    Read the article

  • Is there a standard place to store Spring library jar files?

    - by richj
    I've downloaded Spring 3.0.2 with dependencies and found that it contains 405 jar files. I usually keep third party libraries in a "lib" subdirectory, but there are so many Spring jars that it seems sensible to keep them separately so that they don't swamp the other libraries and to simplify version upgrades. I suspect that I want to keep the full set of libraries in Subversion, but only deploy the subset that is actually used. Do Spring users have a standard way to deal with this problem?

    Read the article

  • are there any useful datasets available on the web for data mining?

    - by niko
    Hi, Does anyone know any good resource where example (real) data can be downloaded for experimenting with statistics and machine learning techniques such as decision trees? Currently I am studying machine learning techniques, and it would be very helpful to have real data for evaluating the accuracy of various tools. If anyone knows any good resource (perhaps csv, xls files or any other format), I would be very thankful for a suggestion.

    Read the article

  • @ExceptionHandler doesn't handle the thrown exceptions

    - by Javi
    Hello, I have a method in my controller which should handle the exceptions thrown by the application:

        @Controller
        public class ExceptionController {

            @RequestMapping(value = "/error")
            @ExceptionHandler(value = {Exception.class, NullPointerException.class})
            public String showError(Exception e, Model model) {
                return "tiles:error";
            }
        }

    To try whether it works, I throw a NullPointerException in a method of another controller:

        boolean a = true;
        if (a) {
            throw new NullPointerException();
        }

    After the exception is thrown it is printed in the JSP, but it doesn't go through my showError() method (I've set a breakpoint there and it is never hit). showError() should catch the exception and show different error pages depending on the exception type (though for now it always shows the same error page). If I go to the URL /error it shows the error page, so the showError() method itself is OK. I'm using Spring 3. What can be the problem? Thanks.
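
    That behavior is expected: in Spring 3.0/3.1 an @ExceptionHandler method is only consulted for exceptions thrown by handler methods of the same controller class, so a handler in ExceptionController never sees exceptions from other controllers (the global @ControllerAdvice mechanism only arrived in Spring 3.2). One application-wide alternative, as a sketch (the view name is an assumption, and the resolver must be registered as a bean in the dispatcher context):

        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import org.springframework.web.servlet.HandlerExceptionResolver;
        import org.springframework.web.servlet.ModelAndView;

        public class GlobalExceptionResolver implements HandlerExceptionResolver {

            public ModelAndView resolveException(HttpServletRequest request,
                    HttpServletResponse response, Object handler, Exception ex) {
                // Route every unhandled controller exception to the shared error
                // view; branch on ex's type here to pick different views.
                return new ModelAndView("error");
            }
        }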

    Read the article

  • Looking for a tutorial and/or example for the following: annotation-based Spring 3 with JPA and/or Hibernate

    - by Conor
    I want to learn Spring, and I'd like to start with Spring 3. I want a simple tutorial and/or example - so no full-blown web example, please. Also, not a trivial example - something incorporating persistence (e.g. JPA or Hibernate) would be nice. And I don't want to get bogged down writing XML. So: annotation-based Spring 3 with JPA and/or Hibernate. Yes, there is a good reference for Spring 3.0, but it's XML-based. I can't find anything else useful. Thanks in advance.
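
    For a taste of what such an example boils down to, here is a hedged sketch of an annotation-driven DAO (Book is an assumed @Entity, and a small bootstrap config defining the EntityManagerFactory and transaction manager is still needed, via <context:component-scan> plus a few beans, or an @Configuration class):

        import java.util.List;
        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;
        import org.springframework.stereotype.Repository;
        import org.springframework.transaction.annotation.Transactional;

        @Repository
        public class BookDao {

            @PersistenceContext
            private EntityManager entityManager; // injected by Spring

            @Transactional
            public void save(Book book) {
                entityManager.persist(book);
            }

            @Transactional(readOnly = true)
            @SuppressWarnings("unchecked")
            public List<Book> findAll() {
                return entityManager.createQuery("select b from Book b").getResultList();
            }
        }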

    Read the article

  • return only one document for each filter defined in the query

    - by Garytxo
    Hi all, In one of my latest projects I use Solr 1.4 for searching products. However, I have run into a slight problem which I am not sure can be solved with Solr. All products are indexed by "country" and "category", and the "id", "class" and "description" are stored values. I have now been asked to extract a sample list of products for a given "category", returning ONLY ONE product for each country where the product is available. In my current implementation, I run a dismax query to get the list of countries corresponding to the category, then I call Solr again to extract all products for each country, limiting the number of rows to the number of countries found in the previous query. The problem with this implementation is that I cannot be certain I get one product for each country in the list. Does anyone know whether it is possible to tell Solr to return only one product per country in a single query? Any guidance would be useful.
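
    This is exactly what Solr's result grouping (field collapsing) does, with a version caveat: grouping is not in a stock Solr 1.4 (it only had the experimental SOLR-236 field-collapse patch; grouping shipped in Solr 3.3). As a hedged sketch, this is the shape of the query once grouping is available; the field names are taken from the question:

        import org.apache.solr.client.solrj.SolrQuery;

        public class OnePerCountryQuery {

            public static SolrQuery build(String category) {
                SolrQuery query = new SolrQuery("category:" + category);
                query.set("group", true);             // collapse the result set
                query.set("group.field", "country");  // one group per country
                query.set("group.limit", 1);          // a single product per group
                return query;
            }
        }

    On an unpatched 1.4, the fallback is faceting on country and issuing one rows=1 query per country value, which at least guarantees one product per country at the cost of extra round trips.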

    Read the article

  • Big Data: Size isn’t everything

    - by Simon Elliston Ball
    Big Data has a big problem; it's the word "Big". These days, a quick Google search will uncover terabytes of negative opinion about the futility of relying on huge volumes of data to produce magical, meaningful insight. There are also many clichéd but correct assertions about the difficulties of correlation versus causation in massive data sets. In reading some of these pieces, I begin to understand how climatologists must feel when people complain ironically about "global warming" during snowfall. Big Data has a name problem. There is a lot more to it than size. Shape, Speed, and…err…Veracity are also key elements (now I understand why Gartner and the gang went with V's instead of S's).

    The need to handle data of different shapes (Variety) is not new. Data developers have always had to mold strange-shaped data into our reporting systems, integrating with semi-structured sources, and even straying into full-text searching. However, what we lacked was an easy way to add semi-structured and unstructured data to our arsenal. New "Big Data" tools such as MongoDB, and other NoSQL (Not Only SQL) databases, or a graph database like Neo4J, fill this gap. Still, to many, they simply introduce noise to the clean signal that is their sensibly normalized data structures.

    What about speed (Velocity)? It's not just high-frequency trading that generates data faster than a single system can handle. Many other applications need to make trade-offs that traditional databases won't, in order to cope with high data insert speeds, or to extract quickly the required information from data streams. Unfortunately, many people equate Big Data with the Hadoop platform, whose batch-driven queries and job-processing queues have little to do with "velocity". StreamInsight, Esper and Tibco BusinessEvents are examples of Big Data tools designed to handle high-velocity data streams. Again, the name doesn't do the discipline of Big Data any favors. Ultimately, though, does analyzing fast-moving data produce insights as useful as the ones we get through a more considered approach, enabled by traditional BI?

    Finally, we have Veracity and Value. In many ways, these additions to the classic Volume, Velocity and Variety trio acknowledge the criticism that without high-quality data and genuinely valuable outputs, data, big or otherwise, is worthless. As a discipline, Big Data has recognized this, and data quality and cleaning tools are starting to appear to support it. Rather than simply decrying the irrelevance of Volume, we need as a profession to focus on how to improve Veracity and Value. Perhaps we should just declare the 'Big' silent, embrace these new data tools and help develop better practices for their use, just as we did the good old RDBMS?

    What does Big Data mean to you? Which V gives your business the most pain, or the most value? Do you see these new tools as a useful addition to the BI toolbox, or are they just enabling a dangerous trend to find ghosts in the noise?

    Read the article

  • Know your Data Lineage

    - by Simon Elliston Ball
    An academic paper without the footnotes isn't an academic paper. Journalists wouldn't base a news article on facts that they can't verify. So why would anyone publish reports without being able to say where the data has come from and be confident of its quality - in other words, without knowing its lineage (sometimes referred to as 'provenance' or 'pedigree')?

    The number and variety of data sources, both traditional and new, increases inexorably. Data comes clean or dirty, processed or raw, unimpeachable or entirely fabricated. On its journey to our report, from its source, the data can travel through a network of interconnected pipes, passing through numerous distinct systems, each managed by different people. At each point along the pipeline, it can be changed, filtered, aggregated and combined. When the data finally emerges, how can we be sure that it is right? How can we be certain that no part of the data collection was based on incorrect assumptions, that key data points haven't been left out, or that the sources are good? Even when we're using data science to give us an approximate or probable answer, we cannot have any confidence in the results without confidence in the data from which they came.

    You need to know what has been done to your data, where it came from, and who is responsible for each stage of the analysis. This information represents your data lineage; it is your stack-trace. If you're an analyst, suspicious of a number, it tells you why the number is there and how it got there. If you're a developer, working on a pipeline, it provides the context you need to track down the bug. If you're a manager, or an auditor, it lets you know the right things are being done. Lineage tracking is part of good data governance.

    Most audit and lineage systems require you to buy into their whole structure. If you are using Hadoop for your data storage and processing, then tools like Falcon allow you to track lineage, as long as you are using Falcon to write and run the pipeline. It can mean learning a new way of running your jobs (or using some sort of proxy), and even a distinct way of writing your queries. Other Hadoop tools provide a lot of operational and audit information, spread throughout the many logs produced by Hive, Sqoop, MapReduce and all the various moving parts that make up the eco-system. To get a full picture of what's going on in your Hadoop system, you need to capture both Falcon lineage and the data-exhaust of the other tools that Falcon can't orchestrate.

    However, the problem is bigger even than that. Often, Hadoop is just one piece in a larger processing workflow. The next step of the challenge is how you bind together the lineage metadata describing what happened before and after Hadoop, where 'after' could be a data analysis environment like R, an application, or even directly an end-user tool such as Tableau or Excel. One possibility is to push as much as you can of your key analytics into Hadoop, but would you give up the power and familiarity of your existing tools in return for a reliable way of tracking lineage? Lineage and auditing should work consistently, automatically and quietly, allowing users to access their data with any tool they need to use.

    The real solution, therefore, is to create a consistent method by which to bring lineage data from these various disparate sources into the data analysis platform that you use, rather than being forced to use the tool that manages the pipeline for the lineage and a different tool for the data analysis. The key is to keep your logs, keep your audit data, from every source, bring them together, and use the data analysis tools to trace the paths from raw data to the answer that data analysis provides.

    Read the article

  • NSURLConnection receives data even if no data was thrown back

    - by Anna Fortuna
    Let me explain my situation. Currently I am experimenting with long-polling using NSURLConnection. I found this and decided to try it. What I do is send a request to the server with a timeout interval of 300 secs (or 5 mins). Here is a code snippet:

        NSURL *url = [NSURL URLWithString:urlString];
        NSURLRequest *request = [NSURLRequest requestWithURL:url
                                                 cachePolicy:NSURLCacheStorageAllowedInMemoryOnly
                                             timeoutInterval:300];
        NSData *data = [NSURLConnection sendSynchronousRequest:request
                                             returningResponse:&resp
                                                         error:&err];

    Now I want to test whether the connection will "hold" the request if no data is returned from the server, so I did this:

        if (data != nil)
            [self performSelectorOnMainThread:@selector(dataReceived:)
                                   withObject:data
                                waitUntilDone:YES];

    And the function dataReceived: looks like this:

        - (void)dataReceived:(NSData *)data {
            NSLog(@"DATA RECEIVED!");
            NSString *string = [NSString stringWithUTF8String:[data bytes]];
            NSLog(@"THE DATA: %@", string);
        }

    Server-side, I created a function that returns data once it fits the arguments and returns nothing if it doesn't. Here is a snippet of the PHP function:

        function retrieveMessages($vardata) {
            if (!empty($vardata)) {
                // check_data() returns 1 if $vardata fits the arguments, 0 otherwise
                $result = check_data($vardata);
                if ($result == 1) {
                    $jsonArray = array('Data' => $vardata);
                    echo json_encode($jsonArray);
                }
            }
        }

    As you can see, the function only returns data if $result is equal to 1. However, even when the function returns nothing, NSURLConnection still performs dataReceived:, meaning it still receives data, albeit empty. So can anyone help me here? How do I perform long-polling using NSURLConnection? Basically, I want to maintain the connection as long as no data is returned. How do I do that? NOTE: I am new to PHP, so if my code is wrong, please point it out so I can correct it.

    Read the article

  • How to maintain an ordered table with Core Data (or SQL) with insertions/deletions?

    - by Jean-Denis Muys
    This question is in the context of Core Data, but if I am not mistaken it applies equally well to the more general SQL case. I want to maintain an ordered table using Core Data, with the possibility for the user to:
    - reorder rows
    - insert new lines anywhere
    - delete any existing line
    What's the best data model to do that? I can see two ways:
    1) Model it as an array: add an int position property to my entity.
    2) Model it as a linked list: add two one-to-one relations, next and previous, from my entity to itself.
    1) makes it easy to sort, but painful to insert or delete, since you then have to update the position of all objects that come after. 2) makes it easy to insert or delete, but very difficult to sort. In fact, I don't think I know how to express a sort descriptor (SQL ORDER BY clause) for that case. Now I can imagine a variation on 1):
    3) Add an int ordering property to the entity, but instead of counting one by one, count 100 by 100 (for example). Inserting is then as simple as picking any number between the orderings of the previous and next objects; the expensive renumbering only has to occur when the 100 holes have been filled. Making the property a float rather than an int is even better: it's almost always possible to find a new float midway between two floats. Am I on the right track with solution 3), or is there something smarter?
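
    A hedged sketch of the arithmetic behind variation 3), in Java for illustration (the renumbering policy is an assumption):

        public final class GapOrdering {

            private static final double MIN_GAP = 1e-9;

            // Ordering value for a row inserted between two neighbours, or NaN
            // when the gap is exhausted and the caller must renumber
            // (e.g. back to 100.0, 200.0, 300.0, ...).
            public static double between(double previous, double next) {
                if (next - previous < MIN_GAP) {
                    return Double.NaN;
                }
                return (previous + next) / 2.0;
            }
        }

    Sorting stays a plain ascending sort on the ordering property, so this keeps option 1)'s easy sort descriptor while making inserts cheap most of the time.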

    Read the article

  • How can I scrape specific data from a website

    - by Stoney
    I'm trying to scrape data from a website for research. The URLs are nicely organized in an example.com/x format, with x an ascending number, and all of the pages are structured in the same way. I just need to grab certain headings and a few numbers which are always in the same locations, and then get the data into structured form for analysis in Excel. I have used wget before to download pages, but I can't figure out how to grab specific lines of text. Excel has a feature to grab data from the web (Data - From Web), but from what I can see it only allows me to download tables. Unfortunately, the data I need is not in tables.
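
    A hedged sketch of one way to do this in Java: fetch each page, pull the target text out with a regular expression, and print CSV that Excel opens directly. The page range and the heading pattern are assumptions; adjust the regex to the real markup (an HTML parser is more robust than regex if the pages are messy):

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.URL;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class Scraper {

            public static void main(String[] args) throws Exception {
                Pattern heading = Pattern.compile("<h1>(.*?)</h1>");
                for (int x = 1; x <= 10; x++) {
                    URL url = new URL("http://example.com/" + x);
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(url.openStream(), "UTF-8"));
                    StringBuilder page = new StringBuilder();
                    String line;
                    while ((line = in.readLine()) != null) {
                        page.append(line).append('\n');
                    }
                    in.close();
                    Matcher m = heading.matcher(page);
                    if (m.find()) {
                        // One CSV row per page: page number, heading text.
                        System.out.println(x + "," + m.group(1));
                    }
                }
            }
        }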

    Read the article

  • How should I architect my Model and Data Access layer objects in my website?

    - by Robin Winslow
    I've been tasked with designing the data layer for a website at work, and I am very interested in code architecture for the best flexibility, maintainability and readability. I am generally acutely aware of the value in completely separating my actual Models from the Data Access layer, so that the Models are completely naive when it comes to data access. In this case it's particularly useful, as the Models may be built from the database or from a SOAP web service. So it seems to make sense to have Factories in my data access layer which create Model objects. Here's what I have so far (in my made-up pseudocode):

        class DataAccess.ProductsFromXml extends DataAccess.ProductFactory {}
        class DataAccess.ProductsFromDatabase extends DataAccess.ProductFactory {}

    These then get used in the controller in a fashion similar to the following:

        var xmlProductCreator = DataAccess.ProductsFromXml(xmlDataProvider);
        var databaseProductCreator = DataAccess.ProductsFromDatabase(databaseProvider);

        // Returns array of Product model objects
        var xmlProducts = xmlProductCreator.Products();

        // Returns array of Product model objects
        var dbProducts = databaseProductCreator.Products();

    So my question is: is this a good structure for my data access layer? Is it a good idea to use a Factory for building my Model objects from the data? Do you think I've misunderstood something? And are there any general patterns I should read up on for how to write my data access objects to create my Model objects?
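
    The factory idea is sound; a slightly leaner variant is to expose the sources behind a single interface so the controller never knows which factory it got. A hedged sketch (Product comes from the question; XmlDataProvider and ProductDao are hypothetical collaborators):

        import java.util.List;

        // The controller depends only on this interface; each data source has
        // its own implementation, so Product stays naive about where it came from.
        public interface ProductSource {

            List<Product> products();
        }

        class XmlProductSource implements ProductSource {

            private final XmlDataProvider provider;

            XmlProductSource(XmlDataProvider provider) {
                this.provider = provider;
            }

            public List<Product> products() {
                // Parse the SOAP/XML payload into naive Product models.
                return provider.parseProducts();
            }
        }

        class DatabaseProductSource implements ProductSource {

            private final ProductDao dao;

            DatabaseProductSource(ProductDao dao) {
                this.dao = dao;
            }

            public List<Product> products() {
                // Map database rows onto the same naive Product models.
                return dao.findAllProducts();
            }
        }

    Patterns worth reading up on for exactly this split: Repository, Data Mapper, and Abstract Factory (Fowler's Patterns of Enterprise Application Architecture covers the first two).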

    Read the article

  • Automate Restart Of Solr

    - by Brain Buddies
    I have 3 instances of Solr running under Tomcat:
    - tomcat (in the shell you will find something like -Dcatalina.base=/usr/local/apache-tomcat-6.0.35)
    - tomcat_1 (in the shell you will find something like -Dcatalina.base=/usr/local/apache-tomcat-6.0.35_1)
    - tomcat_2 (in the shell you will find something like -Dcatalina.base=/usr/local/apache-tomcat-6.0.35_2)
    Can I write a shell script which kills only one particular instance?
    - for 1: kill tomcat but not tomcat_1 & tomcat_2
    - for 2: kill tomcat_1 but not tomcat & tomcat_2
    - for 3: kill tomcat_2 but not tomcat & tomcat_1

    Read the article
