Search Results

Search found 60688 results on 2428 pages for 'spring data solr'.


  • Exception handling in Spring MVC with 3 layer architecture

    - by Chorochrondochor
    I am building a simple web application with three layers: DAO, Service, and MVC. When I try to delete a menu group from my controller and the group still contains menus, I get a ConstraintViolationException. Where should I handle this exception: in the DAO, the Service, or the Controller? Currently I am handling the exception in the Controller. My code is below.

    DAO method for deleting menu groups:

        @Override
        public void delete(E e) {
            if (e == null) {
                throw new DaoException("Entity can't be null.");
            }
            getCurrentSession().delete(e);
        }

    Service method for deleting menu groups:

        @Override
        @Transactional(readOnly = false)
        public void delete(MenuGroupEntity menuGroupEntity) {
            menuGroupDao.delete(menuGroupEntity);
        }

    Controller method for deleting menu groups:

        @RequestMapping(value = "/{menuGroupId}/delete", method = RequestMethod.GET)
        public ModelAndView delete(@PathVariable Long menuGroupId, RedirectAttributes redirectAttributes) {
            MenuGroupEntity menuGroupEntity = menuGroupService.find(menuGroupId);
            if (menuGroupEntity != null) {
                try {
                    menuGroupService.delete(menuGroupEntity);
                    redirectAttributes.addFlashAttribute("flashMessage", "admin.menu-group-deleted");
                    redirectAttributes.addFlashAttribute("flashMessageType", "success");
                } catch (Exception e) {
                    redirectAttributes.addFlashAttribute("flashMessage", "admin.menu-group-could-not-be-deleted");
                    redirectAttributes.addFlashAttribute("flashMessageType", "danger");
                }
            }
            return new ModelAndView("redirect:/admin/menu-group");
        }
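
    One common alternative, sketched below: let the exception propagate out of the service layer and translate it in one place with an @ExceptionHandler (shown here in a @ControllerAdvice class, which requires Spring 3.2; on older versions the same method can live inside the controller itself). The error view name, and the assumption that this is Hibernate's ConstraintViolationException, are mine:

        @ControllerAdvice
        public class GlobalExceptionHandler {

            // Translate the persistence-level failure into a user-facing error view.
            @ExceptionHandler(ConstraintViolationException.class)
            public ModelAndView handleConstraintViolation(ConstraintViolationException ex) {
                ModelAndView mav = new ModelAndView("error"); // assumed error view name
                mav.addObject("flashMessage", "admin.menu-group-could-not-be-deleted");
                return mav;
            }
        }

    This keeps the DAO and service layers free of presentation concerns, which is the usual argument for handling such exceptions at the web layer.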

    Read the article

  • Spring 3 MVC: problem with form:errors and BindingResult

    - by Maxime ARNSTAMM
    Hi, I want to validate my inputs but I can't make it work: nothing appears on the page. My project is on Java 5, so no JSR-303 (@Valid). If I'm not mistaken, my only option is to use BindingResult.

    My controller:

        @Controller
        public class MyController {
            @RequestMapping(method = RequestMethod.POST, value = "myPage.html")
            public void myHandler(MyForm myForm, BindingResult result, Model model) {
                result.reject("field1", "error message 1");
            }
        }

    My JSP:

        <form:form commandName="myForm" method="post">
            <label>Field 1 : </label>
            <form:input path="field1" />
            <form:errors path="field1" />
            <input type="submit" value="Post" />
        </form:form>

    What am I missing? Thanks!
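
    For what it's worth, one likely culprit in code like this: reject() registers a global error on the whole object, while <form:errors path="field1"/> renders only field errors. A sketch of the handler using rejectValue and returning the form view (the view name is an assumption):

        @RequestMapping(method = RequestMethod.POST, value = "myPage.html")
        public String myHandler(MyForm myForm, BindingResult result, Model model) {
            // Attach the error to the field so <form:errors path="field1"/> can render it.
            result.rejectValue("field1", "error.field1", "error message 1");
            if (result.hasErrors()) {
                return "myPage"; // assumed: re-render the form view with the errors
            }
            return "success";
        }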

    Read the article

  • spring mvc nested model validation

    - by hguser
    I have two models, User and Project:

        public class Project {
            private int id;
            @NotEmpty(message = "Project Name can not be empty")
            private String name;
            private User manager;
            private User operator;
            // getters/setters omitted
        }

        public class User {
            private int id;
            private String name;
            // other properties and getters/setters omitted
        }

    When I create a new Project, I submit the following parameters to the ProjectController:

        projects?name=jhon&manager.id=1&operator.id=2...

    Then I create a new Project object and insert it into the database. However, I have to validate that the ids of the manager and operator are valid, that is to say, that a matching id exists in the user table. How do I implement this kind of validation?
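
    One way to do this is a Spring Validator that checks the nested ids against the database; a sketch, where UserService.existsById is an assumed lookup method:

        public class ProjectValidator implements Validator {

            private final UserService userService; // assumed service with a user-lookup method

            public ProjectValidator(UserService userService) {
                this.userService = userService;
            }

            public boolean supports(Class<?> clazz) {
                return Project.class.isAssignableFrom(clazz);
            }

            public void validate(Object target, Errors errors) {
                Project project = (Project) target;
                // Reject the nested ids when no matching row exists in the user table.
                if (project.getManager() == null || !userService.existsById(project.getManager().getId())) {
                    errors.rejectValue("manager.id", "manager.invalid", "No such user for manager.id");
                }
                if (project.getOperator() == null || !userService.existsById(project.getOperator().getId())) {
                    errors.rejectValue("operator.id", "operator.invalid", "No such user for operator.id");
                }
            }
        }

    The validator can then be called explicitly in the controller before saving, or registered via @InitBinder so it runs alongside the JSR-303 annotations.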

    Read the article

  • ExceptionHandling with Spring 3

    - by mjf
    I have this controller:

        @RequestMapping(value = "*.xls", method = RequestMethod.GET)
        public String excel(Model model) {
            return "excel";
        }

    The "excel" view actually opens an ExcelViewer, which is built in the method:

        protected void buildExcelDocument(Map<String, Object> map, WritableWorkbook ww,
                HttpServletRequest hsr, HttpServletResponse hsr1) throws Exception {
            Class.writecontent
            Class.writeMoreContent
        }

    The called methods write content to the Excel sheet, and they can throw e.g. a BiffException. How can I show a specific error page when an exception occurs? I tried this:

        @Controller
        public class ExcelController {

            @ExceptionHandler(BiffException.class)
            public String handleException(BiffException ex) {
                return "fail";
            }

            @RequestMapping(value = "*.xls", method = RequestMethod.GET)
            public String excel(Model model) {
                return "excel";
            }
        }

    But I'm still getting the server's error page about the exception. Is a bean definition missing, maybe?
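
    A possible explanation, offered as a guess: @ExceptionHandler methods cover exceptions thrown while the handler method itself executes, but here the BiffException is thrown later, during view rendering, so it bypasses Spring's exception resolvers and propagates to the servlet container. Under that assumption, a container-level error page in web.xml would catch it (the exception class assumes JExcelAPI; the JSP path is a placeholder):

        <error-page>
            <exception-type>jxl.read.biff.BiffException</exception-type>
            <location>/WEB-INF/views/fail.jsp</location>
        </error-page>

    Alternatively, catching the exception inside buildExcelDocument and writing an error sheet keeps the handling within the view itself.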

    Read the article

  • Spring 3.0 MVC mvc:view-controller tag

    - by gouki
    Here's a snippet of my mvc-config.xml:

        <bean class="org.springframework.web.servlet.view.InternalResourceViewResolver">
            <property name="prefix" value="/WEB-INF/views/"/>
            <property name="suffix" value=".jsp"/>
        </bean>

        <mvc:view-controller path="/index" view-name="welcome"/>
        <mvc:view-controller path="/static/login" view-name="/static/login"/>
        <mvc:view-controller path="/login" view-name="/static/login"/>

    I have welcome.jsp in the /WEB-INF/views/ directory and login.jsp in /WEB-INF/views/static/. This works for the '/index' and '/login' paths, but I'm getting a 404 response for '/static/login' when it is invoked from the browser. I'm expecting '/static/login' and '/login' to behave the same. What could be wrong here? Would appreciate any help. Thanks!
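
    A guess at the cause: if anything else owns the /static/** URL space (an <mvc:resources> mapping or the container's default servlet are common candidates), it will shadow the view controller for '/static/login' while '/login' keeps working. Independently of that, the leading slash in the view name makes the resolver build '/WEB-INF/views//static/login.jsp'; dropping it is a harmless cleanup, sketched here:

        <!-- view names are relative to the resolver prefix; 'static/login'
             resolves to /WEB-INF/views/static/login.jsp -->
        <mvc:view-controller path="/static/login" view-name="static/login"/>
        <mvc:view-controller path="/login" view-name="static/login"/>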

    Read the article

  • Spring Framework HttpRequestHandler failure

    - by sharadva
    We have an application that communicates via REST requests made by clients. The REST requests contain a "region name" and an "ID" as parameters, so a request would look something like this (for a DELETE):

        http://host:port/regionnameID

    These REST requests between regions in a federation are properly URL-encoded. I find that these requests fail if the region name has a slash ("/") in it. The request then looks like:

        http://host:port/region/nameID

    This is due to the HttpRequestHandler incorrectly interpreting the REST URL when there is a '/' in the region name. We have no control over clients sending REST requests with "/" in the region name. Is there any method, configuration, or workaround to prevent the HttpRequestHandler from returning a 404?
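
    Two workarounds that are sometimes used here, both sketches rather than drop-in fixes: if clients can percent-encode the slash as %2F, Tomcat must be told to accept encoded slashes (it rejects them by default); and on the Spring side, a catch-all mapping can recover the whole remaining path, slashes included, assuming annotated controllers:

        // Tomcat system property (e.g. in catalina.properties) to accept %2F:
        // org.apache.tomcat.util.buf.UDecoder.ALLOW_ENCODED_SLASH=true

        @RequestMapping(value = "/**", method = RequestMethod.DELETE)
        public void deleteRegion(HttpServletRequest request, HttpServletResponse response) {
            // Everything after the servlet mapping, including any '/' in the region name.
            String path = (String) request.getAttribute(
                    HandlerMapping.PATH_WITHIN_HANDLER_MAPPING_ATTRIBUTE);
            // ... split 'path' into region name and ID by application-specific rules
        }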

    Read the article

  • Rendering Views as String with Spring MVC and Apache Tiles

    - by lynxforest
    I am trying to reuse some of my tiles in a controller that returns a JSON response to the client. I would like the JSON response to have a format similar to the following:

        {
            'success': <true or false>,
            'response': <the contents of an Apache Tiles view>
        }

    In my controller I would like to perform logic similar to this pseudocode:

        boolean valid = validator.validate(modelObj);
        String response = "";
        if (valid) {
            response = successView.render();
            // I'm looking for a way to actually accomplish this, where
            // successView is the Apache Tiles view. I would also need to
            // pass a model map to the view somehow.
        } else {
            response = errorView.render();
        }
        writeJsonResponse(httpResponse, /* a Map whose JSON representation
                looks like the one described above */);
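
    One way to capture a rendered tile as a string, sketched on the assumption that the Tiles ViewResolver can be injected into the controller: resolve the view by name, then render it into a response object that buffers its output (spring-test's MockHttpServletResponse is used below purely as a convenient buffer; exception handling is omitted):

        // Resolve the Tiles definition to a View, then render it into a buffer.
        View successView = viewResolver.resolveViewName("successView", request.getLocale());
        MockHttpServletResponse buffer = new MockHttpServletResponse();
        successView.render(modelMap, request, buffer); // modelMap carries the tile's model
        String response = buffer.getContentAsString();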

    Read the article

  • Return to the same view using ModelAndview in Spring MVC framework

    - by ria
    I have a page with a link. Clicking the link opens a page in a separate window. This new page has a form. On submitting the form, some operation takes place at the server, and its result needs to be shown on that same page. However, after the operation, when using the following:

        return new ModelAndView("newUser");
        // the view "newUser" actually maps to the popped-up window

    a similar new window pops up again and the message gets displayed on this new page. How to go about this?

    Read the article

  • Spring MVC referencing params variable from RequestMapping

    - by NomNomNom
    Hi guys, I have the method below:

        @RequestMapping(value = "/path/to/{iconId}", params = "size={iconSize}", method = RequestMethod.GET)
        public void webletIconData(@PathVariable String iconId, @PathVariable String iconSize,
                HttpServletResponse response) throws IOException {
            // implementation here
        }

    I know how to pass the variable "iconId" from the RequestMapping using @PathVariable, but how do I reference the variable "iconSize" from params? The same way? Thanks a lot.
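
    For reference, a sketch of how this is usually written: params in @RequestMapping only narrows which requests the method matches, and the query-string value itself is bound with @RequestParam rather than @PathVariable:

        @RequestMapping(value = "/path/to/{iconId}", params = "size", method = RequestMethod.GET)
        public void webletIconData(@PathVariable String iconId,
                                   @RequestParam("size") String iconSize,
                                   HttpServletResponse response) throws IOException {
            // iconId comes from the path; iconSize from ?size=... on the query string
        }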

    Read the article

  • Big Data Appliance X4-2 Release Announcement

    - by Jean-Pierre Dijcks
    Today we are announcing the release of the 3rd generation Big Data Appliance. Read the Press Release here.

    Software Focus

    The focus for this 3rd generation of Big Data Appliance is:

      • Comprehensive and Open - Big Data Appliance now includes all Cloudera software, including Backup and Disaster Recovery (BDR), Search, Impala, and Navigator, as well as the previously included components (like CDH, HBase, and Cloudera Manager) and Oracle NoSQL Database (CE or EE)
      • Lower TCO than DIY Hadoop systems
      • Simplified operations while providing an open platform for the organization
      • Comprehensive security, including the new Audit Vault and Database Firewall software, Apache Sentry, and Kerberos configured out-of-the-box

    Hardware Update

    A good place to start is to quickly review the hardware differences (no price changes!). On a per-node basis, the following is a comparison between the old (X3-2) and new hardware:

                     Big Data Appliance X3-2                      Big Data Appliance X4-2
        CPU          2 x 8-Core Intel® Xeon® E5-2660 (2.2 GHz)    2 x 8-Core Intel® Xeon® E5-2650 V2 (2.6 GHz)
        Memory       64GB                                         64GB
        Disk         12 x 3TB High Capacity SAS                   12 x 4TB High Capacity SAS
        InfiniBand   40Gb/sec                                     40Gb/sec
        Ethernet     10Gb/sec                                     10Gb/sec

    For all the details on the environmentals and other useful information, review the data sheet for Big Data Appliance X4-2. The larger disks give BDA X4-2 33% more capacity over the previous generation while adding faster CPUs. Memory for BDA is expandable to 512 GB per node and can be upgraded on a per-node basis, for example for NameNodes, HBase region servers, or NoSQL Database nodes.

    Software Details

    More details in terms of software and the current versions (note BDA follows a three-monthly update cycle for Cloudera and other software):

                           Big Data Appliance 2.2 Software Stack    Big Data Appliance 2.3 Software Stack
        Linux              Oracle Linux 5.8 with UEK 1              Oracle Linux 6.4 with UEK 2
        JDK                JDK 6                                    JDK 7
        Cloudera CDH       CDH 4.3                                  CDH 4.4
        Cloudera Manager   CM 4.6                                   CM 4.7

    And as we said at the beginning, it is important to understand that all other Cloudera components are now included in the price of Oracle Big Data Appliance. They are fully supported by Oracle and available to all BDA customers.

    For more information:

      • Big Data Appliance Data Sheet
      • Big Data Connectors Data Sheet
      • Oracle NoSQL Database Data Sheet (CE | EE)
      • Oracle Advanced Analytics Data Sheet

    Read the article

  • Review: Data Modeling 101

    I recently read "Data Modeling 101" by Scott W. Ambler, in which he gives an overview of fundamental data modeling skills. I think this article is excellent for anyone who is just starting to learn, or refreshing, their skills in modeling data. Scott defines data modeling as the act of exploring data-oriented structures. He goes on to explain how data models are actually used by defining three different types of models:

      • Conceptual Data Model
      • Logical Data Model (LDM)
      • Physical Data Model (PDM)

    He further expands on modeling by exploring common data modeling notations, because there are no industry standards for the practice of data modeling. Scott then explains how to actually model data by expanding on entities, attributes, identities, and relationships, which are the basic building blocks of data models. In addition, he discusses the value of normalization for reducing redundancy and of denormalization for performance. Finally, he discusses ways in which developers and DBAs can become better data modelers through practice and by seeking guidance from more experienced data modelers.

    Read the article

  • HTML5 data-* (custom data attribute)

    - by Renso
    Goal: store custom data with the data attribute on any DOM element, and retrieve it.

    Previously, under HTML4, we used classes to store custom data, something to the effect of:

        <input class="account void limit-5000 over-4999" />

    and then had to parse the data out of the class attribute. In a book published in 2007, ppk on JavaScript, Peter-Paul Koch explains why and how to use custom attributes to make data more accessible to JavaScript, using name-value pairs. Accessing a custom attribute like account-limit=5000 is much easier and more intuitive than trying to parse it out of a class. Plus, what if the class name, for example "color-5", has a matching class definition in a CSS stylesheet that hides the element away, or worse, some JavaScript plugin that automatically adds 5000 to it, or something crazy like that, just because it is a valid class name? As you can see, there are quite a few reasons why using classes is bad design, and why it was important to define custom data attributes in HTML5.

    Syntax: you define a data attribute by simply prefixing any data item you want to store on any HTML element with "data-". For example, to store our customer's account data on a hidden input element:

        <input type="hidden" data-account="void" data-limit=5000 data-over=4999 />

    How to access the data:

        account  -  element.dataset.account
        limit    -  element.dataset.limit

    You can also access it using the more traditional getAttribute/setAttribute methods, or, if using jQuery:

        $('#element').attr('data-account', 'void')

    Browser support: all except IE. There is an IE hack around this at http://gist.github.com/362081.

    Special note: be AWARE, do not use upper-case when defining your data elements, as it is all converted to lower-case when reading it. So:

        data-myAccount="A1234"

    will not be found when you read it with:

        element.dataset.myAccount

    Use only lowercase when reading, so this will work:

        element.dataset.myaccount

    Read the article

  • Master Data Management – A Foundation for Big Data Analysis

    - by Manouj Tahiliani
    While Master Data Management has crossed the proverbial chasm and is on its way to becoming mainstream, businesses are being hammered by a new megatrend called Big Data. Big Data is characterized by massive volumes, high frequency, a variety of less structured data sources such as email, sensors, smart meters, social networks, and weblogs, and the need to analyze vast amounts of data to determine value and improve management decisions. Businesses that have embraced MDM to get a single, enriched, and unified view of master data (by resolving semantic discrepancies and augmenting the explicit master data information from within the enterprise with implicit data from outside the enterprise, like social profiles) will have a leg up in embracing Big Data solutions. This is especially true for large and medium-sized businesses in industries like retail, communications, and financial services, which would find it very challenging to get comprehensive analytical coverage and derive long-term success without resolving the limitations of a heterogeneous topology that leads to disparate, fragmented, and incomplete master data. For analytical success from Big Data, in other words ROI from Big Data investments, businesses need to acquire, organize, and analyze the deluge of data to make better decisions. Structured and unstructured data will need to coexist, with a tight link maintained between the two to extract maximum insight. MDM is the catalyst that helps maintain that tight linkage by providing an understanding of the identity and characteristics of the persons, companies, products, suppliers, etc. associated with the Big Data, and thereby helps accelerate ROI. In my next post I will discuss patterns for coexisting Big Data solutions and MDM. Feel free to share comments and thoughts on the above, as well as integration or architectural patterns.

    Read the article

  • How to Assure an Effective Data Model

    As a general rule, in my opinion, the effectiveness of a data model is directly related to the accuracy and complexity of a project's requirements. For example, there is no need to work on very detailed data models when the details surrounding a specific data model have not been defined or even clarified. Developing data models while the clarity of project requirements is limited tends to introduce design issues, because the details needed to create an effective data model are not yet known. One way to avoid this issue is to create data models that correspond to the complexity of the existing project requirements, so that when requirements are updated, new data models can be created based on any new fine-grained discoveries about the requirements. This allows data models composed of general entities to be created initially, while a project's requirements are still vague, with the entities refined as new and more substantial requirements are defined or redefined. It also promotes communication among all stakeholders within a project as they go through the process of defining and finalizing project requirements.

    In addition, here are some general tips that can be applied to projects in regards to data modeling:

      • Initially model all data generally, and slowly refactor the data model as new requirements and business constraints are applied to the project.
      • Ensure that data modelers have the proper tools and training they need to design a data model accurately.
      • Create a common location for all project documents so that everyone will be able to review a project's data models along with any other project documentation.
      • All data models should follow a clear naming schema that tells readers the intended purpose of the data and how it will be applied within the project.

    Read the article

  • Apache Solr: Setting HTTP Response Headers From solrconfig.xml For CORS

    - by Noah Freitas
    Is it possible to set up the sending of a custom HTTP response header from within the solrconfig.xml file? I am thinking it might be possible to add some configuration to the <requestDispatcher>, since it controls caching headers. I am sure this is possible in the servlet container configuration (Jetty, Tomcat, etc.), but I would like to do it from within Solr's configuration files if at all possible. In case it makes any difference: I am attempting to set an Access-Control-Allow-Origin header for CORS AJAX requests from a different host.
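
    If solrconfig.xml turns out not to support this, a common fallback is a small servlet filter registered in the Solr webapp's web.xml. A minimal sketch against the javax.servlet API (the class name and allowed origin are placeholders):

        import java.io.IOException;
        import javax.servlet.*;
        import javax.servlet.http.HttpServletResponse;

        public class CorsFilter implements Filter {

            public void init(FilterConfig config) {}

            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                // Set the CORS header before Solr writes the response body.
                ((HttpServletResponse) res).setHeader(
                        "Access-Control-Allow-Origin", "http://other-host.example.com");
                chain.doFilter(req, res);
            }

            public void destroy() {}
        }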

    Read the article

  • Highlighting in Solr 1.4 - requireFieldMatch

    - by Mark Redding
    I have an object:

        Title   : foo
        Summary : foo bar
        Body    : this is a published story about a foo and a bar

    All three are set up as fields with stored=true. When a user searches across my system for the word "foo", I would like to highlight foo in all three places. When the user searches for the word foo in the title ("title:foo"), I only want to highlight foo within the title. When I added hl.requireFieldMatch=true and hl.usePhraseHighlighter=true to my query to Solr, I was unable to get highlighting in all three places when doing a generic, non-fielded search. Is there a way to get both scenarios to work? I had these two options turned off, but I am adding some fielded portions to the query that the user does not see, which for instance display only published items. The problem is that (foo AND status:published) causes the word "published" in the body to be highlighted when the user only searched for the word "foo".
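
    For the last problem, one technique worth trying: move the hidden restriction out of q and into a filter query (fq). Filter queries restrict the result set but contribute neither to scoring nor to highlighting, so "published" should no longer light up in the body; a sketch of the request parameters:

        q=foo
        fq=status:published
        hl=true
        hl.fl=title,summary,body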

    Read the article

  • Random noise in Solr score

    - by Andrea Campi
    I am looking for a way to introduce random noise into my scoring function, and I'm at a loss on how best to proceed. Some background: we use Solr for a web application that manages large-ish sets of photos for agencies. One customer has an interesting requirement for scoring:

      • a 'quality' field, maintained by editors, from 1 (highest) to 3 (lowest);
      • a 'date' field, boosting more recent photos; I would probably use a logarithmic function.

    However, due to how the stock photo market works, this will likely result in many similar photos appearing together. Their request is to give 'quality' a large boost, but introduce some randomness so that photos will not appear in strict date order. Any ideas?

    EDITED: a key requirement is to have "stable" query results: if I search twice for "tropical island" I can get a slightly different result set, but if I ask for the first page, then the second, then the first again, I'd better get the same results :)
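
    One Solr feature that fits this 'stable but random' requirement is the RandomSortField type: its ordering is a deterministic function of the seed encoded in the field name plus the index version, so repeating a query with the same seed returns the same order until the index changes. A sketch, where the field and seed names are assumptions:

        <!-- schema.xml: a dynamic random field; the name suffix acts as the seed -->
        <fieldType name="random" class="solr.RandomSortField" indexed="true"/>
        <dynamicField name="random_*" type="random"/>

        <!-- query: quality dominates, the seeded random field breaks ties -->
        sort=quality asc, random_42 asc

    Picking a new seed (e.g. random_43) reshuffles; the application could derive the seed from the query string so each search keeps stable paging.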

    Read the article

  • SOLR date faceting and BC / BCE dates / negative date ranges

    - by Nigel_V_Thomas
    Are date ranges including BC dates possible? I would like to return facets for all years between 11000 BCE (BC) and 9000 BCE (BC) using Solr. A sample query, with date ranges converted to ISO 8601, might be:

        q=*:*&facet.date=myfield_earliestDate&facet.date.end=-92009-01-01T00:00:00&facet.date.gap=%2B1000YEAR&facet.date.other=all&facet=on&f.myfield_earliestDate.facet.date.start=-112009-01-01T00:00:00

    However, the returned results seem to suggest that the dates are in the positive range, i.e. CE, not BCE; see the sample results:

        <response>
            <lst name="responseHeader">
                <int name="status">0</int>
                <int name="QTime">6</int>
                <lst name="params">
                    <str name="f.vra.work.creation.earliestDate.facet.date.start">-112009-01-01T00:00:00Z</str>
                    <str name="facet">on</str>
                    <str name="q">*:*</str>
                    <str name="facet.date">vra.work.creation.earliestDate</str>
                    <str name="facet.date.gap">+1000YEAR</str>
                    <str name="facet.date.other">all</str>
                    <str name="facet.date.end">-92009-01-01T00:00:00Z</str>
                </lst>
            </lst>
            <result name="response" numFound="9556" start="0">omitted</result>
            <lst name="facet_counts">
                <lst name="facet_queries"/>
                <lst name="facet_fields"/>
                <lst name="facet_dates">
                    <lst name="vra.work.creation.earliestDate">
                        <int name="112010-01-01T00:00:00Z">0</int>
                        <int name="111010-01-01T00:00:00Z">0</int>
                        <int name="110010-01-01T00:00:00Z">0</int>
                        <int name="109010-01-01T00:00:00Z">0</int>
                        <int name="108010-01-01T00:00:00Z">0</int>
                        <int name="107010-01-01T00:00:00Z">0</int>
                        <int name="106010-01-01T00:00:00Z">0</int>
                        <int name="105010-01-01T00:00:00Z">0</int>
                        <int name="104010-01-01T00:00:00Z">0</int>
                        <int name="103010-01-01T00:00:00Z">0</int>
                        <int name="102010-01-01T00:00:00Z">0</int>
                        <int name="101010-01-01T00:00:00Z">0</int>
                        <int name="100010-01-01T00:00:00Z">5781</int>
                        <int name="99010-01-01T00:00:00Z">0</int>
                        <int name="98010-01-01T00:00:00Z">0</int>
                        <int name="97010-01-01T00:00:00Z">0</int>
                        <int name="96010-01-01T00:00:00Z">0</int>
                        <int name="95010-01-01T00:00:00Z">0</int>
                        <int name="94010-01-01T00:00:00Z">0</int>
                        <int name="93010-01-01T00:00:00Z">0</int>
                        <str name="gap">+1000YEAR</str>
                        <date name="end">92010-01-01T00:00:00Z</date>
                        <int name="before">224</int>
                        <int name="after">0</int>
                        <int name="between">5690</int>
                    </lst>
                </lst>
            </lst>
        </response>

    Any ideas why this is the case? Can Solr handle negative dates such as -112009-01-01T00:00:00Z?

    Read the article

  • Per query relevance elevation for solr?

    - by plusplus
    I want to tune the relevance of Solr search results on a per-user basis, based on the number of times the user has clicked through a result before. Frequently hit items FOR THAT USER should rise to the top of their search results. Is there a way to provide a custom boost/elevation for particular document ids on the query? I'm thinking in the order of ~100s of particular documents to elevate. The elevation should have no effect if the rest of the query doesn't find those documents. Alternatively, if this isn't possible, what is a sane way of setting up an alternative indexing approach that would make it possible? Could I add a field per user in the index to store their scores? I'm thinking in the order of 1000 users. The major drawback of that approach is the number of times a document would need to be reindexed (i.e. each time it was used by the user).
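
    One approach that satisfies the 'no effect unless matched' constraint, sketched with hypothetical document ids: with the dismax query parser, boost queries (bq) add score to documents that also match the main query but never pull in documents by themselves, so the application can append the user's frequently clicked ids at query time:

        q=tropical island
        defType=dismax
        bq=id:(doc123^8 OR doc456^5 OR doc789^2)

    With hundreds of boosted ids per user this makes for a long request, so sending the query via POST may be necessary; it does, however, avoid any per-user reindexing.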

    Read the article

  • Best Practice of Field Collapsing in SOLR 1.4

    - by Dominik
    I need a way to collapse duplicate results (defined in terms of a string field with an id) in Solr. I know that such a feature is coming in the next version (1.5), but I can't wait for that. What would be the best way to remove duplicates using the current stable version, 1.4? Given that finding duplicates in my case is really easy (comparison of a string field), should it be a Filter, should I overwrite an existing SearchComponent or write a new Component, or should I use some external library like Carrot2? The overall result count should reflect the shortened result.

    Read the article

  • What does this `^` mean in Solr

    - by Rahul Mehta
    I am confused here, but I want to clear my doubt. I think it is a stupid question, but I still want to know. The Solr Relevancy Cookbook says:

        Use a TokenFilter that outputs two tokens (one original and one lowercased) for
        each input token. For queries, the client would need to expand any search terms
        containing upper case characters to two terms, one lowercased and one original.
        The original search term may be given a boost, although it may not be necessary
        given that a match on both terms will produce a higher score.

        text:NeXT ==> (text:NeXT^10 OR text:next)

    What does the ^ mean here?

    http://wiki.apache.org/solr/SolrRelevancyCookbook#Relevancy_and_Case_Matching

    Read the article

  • Use different Solr Similarity algo for every search

    - by snickernet
    Hi guys, is it possible in Solr 1.4 to specify which Similarity class to use for every search within a single index? Say I have two types of search (keyword and brand). For keyword search, I want to use the DefaultSimilarity class, but for brand search, I want to use my CustomSimilarity class. I've been modifying schema.xml to specify a single Similarity class to use, but now I have the requirement to use two different Similarity classes. I'd be glad to hear your thoughts on this. Thanks in advance.

    Read the article

  • Why does the Solr admin query page interpret UTF-8 as ISO-8859-1

    - by Scott Chu
    I deployed a war to my Tomcat 6.0.35 on 64-bit Windows 7, and when I use the full-interface query page (I mean form.jsp) in the Solr admin to query two Chinese characters (say, C1C2), the debug info shows:

        <lst name="debug">
            <str name="rawquerystring">æ°è</str>
            <str name="querystring">æ°è</str>
            <str name="parsedquery">NEWSID:æ°è</str>
            <str name="parsedquery_toString">NEWSID:æ°è</str>
            ...

    You can see that C1C2 becomes æ°è. When I deploy the same war file to Tomcat on Linux, or on another colleague's 64-bit Windows 7 machine, the encoding behaves correctly. Does anyone know why, and how I can avoid this problem? Thanks in advance!
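
    For what it's worth, the usual suspect for exactly this symptom: Tomcat decodes GET query strings as ISO-8859-1 unless the connector is told otherwise, and since the setting lives in each installation's server.xml, the same war can behave differently on different machines. A sketch of the connector attribute (the port and other attributes are illustrative):

        <Connector port="8080" protocol="HTTP/1.1"
                   connectionTimeout="20000"
                   URIEncoding="UTF-8"/>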

    Read the article
