Search Results

Search found 284 results on 12 pages for 'solr'.

Page 9 of 12

  • Solr authentication possible? (or apache port authentication would also work)

    - by Camran
    Currently anybody can access the Solr admin page by going to my_ip:8983/solr. I can't have it like that, so how can I make it prompt for a password or something? I have set up my server's apache2.conf file to prompt for a password whenever my site is accessed via www.mydomain.com, but when another port is used, the "require password" prompt won't show up. Any ideas how to secure this? Don't point me to the SolrSecurity wiki because it's simply too outdated; I have tried it without luck. Thanks
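
    One possible approach (a sketch, not from the question; the module names and paths are assumptions and may differ per distribution) is to keep port 8983 closed to the outside and publish Solr only through Apache, where the existing password prompt already works:
      # enable the reverse-proxy modules and create a user for the prompt
      sudo a2enmod proxy proxy_http
      sudo htpasswd -c /etc/apache2/solr.htpasswd admin
      # in the vhost for www.mydomain.com, add a password-protected proxy location, e.g.:
      #   <Location /solr>
      #     ProxyPass http://127.0.0.1:8983/solr
      #     AuthType Basic
      #     AuthName "Solr admin"
      #     AuthUserFile /etc/apache2/solr.htpasswd
      #     Require valid-user
      #   </Location>
      # then block outside access to port 8983 itself (iptables, or bind Jetty to 127.0.0.1 only)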

    Read the article

  • Moving a file using PuTTY

    - by Paul Trotter
    I am a newbie struggling to move a file on a Linux VPS using PuTTY. I can log in as a user in PuTTY, and at this point I can navigate to see the file I wish to move (~/servers/apache-solr-3.6.2/example/webapps/solr.war). By using cd .. a couple of times from the directory I start in when I first log in, I can then navigate to the location I wish to move the file to: usr/local/jakarta/apache-tomcat-5.5.36/webapps/ I know that I need to use cp to copy the file and have tried variations on: cp ~/servers/apache-solr-3.6.2/example/webapps/solr.war usr/local/jakarta/apache-tomcat-5.5.36/webapps However each time I get 'No such file or directory'. I have tried excluding the ~/ at the start, and I have tried specifying solr.war at the end of the command. Please excuse the newbie question, but I would really appreciate some advice on what I am doing wrong here.
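
    A likely fix, judging only from the paths quoted above: the destination is written as a relative path, so cp looks for usr/local/... inside the current directory. Giving the destination as an absolute path (with a leading slash) should work:
      cp ~/servers/apache-solr-3.6.2/example/webapps/solr.war /usr/local/jakarta/apache-tomcat-5.5.36/webapps/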

    Read the article

  • How do I move a linked file on Unix?

    - by r3mbol
    I have a bunch of files in one directory and links to each one of those files in another directory, so ls -l looks something like this:
      lrwxrwxrwx 1 rembol rembol   89 Jan 25 10:00 copyright.txt -> /home/rembol/solr/target/deploy/data/core/copyright.txt
      lrwxrwxrwx 1 rembol rembol   92 Jan 25 10:00 jar-versions.xml -> /home/rembol/solr/target/deploy/data/core/jar-versions.xml
      lrwxrwxrwx 1 rembol rembol   85 Jan 25 10:00 lgpl.html -> /home/rembol/solr/target/deploy/data/core/lgpl.html
      lrwxrwxrwx 1 rembol rembol   79 Jan 25 10:00 lib -> /home/rembol/solr/target/deploy/data/core/lib
      lrwxrwxrwx 1 rembol rembol   87 Jan 25 10:00 readme.html -> /home/rembol/solr/target/deploy/data/core/readme.html
      drwxr-xr-x 3 rembol rembol 4096 Jan 25 10:00 server
      drwxr-xr-x 2 rembol rembol 4096 Jan 25 10:00 startup
    Now I want to move those linked files from /home/rembol/solr/target/deploy to /home/rembol/output/. If I do that by simply calling mv, the links will break. I don't want to re-link each file separately, because there are hundreds of them (they are generated automatically). Is there some clever way to move linked files, rather than writing a script that unlinks, moves and relinks recursively for each file in each subdirectory?
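
    One possible sketch (not from the question; it assumes the links are absolute, as in the ls -l output above): move the target directory once, then repoint the existing links in a loop instead of by hand:
      mv /home/rembol/solr/target/deploy /home/rembol/output
      # rewrite the old prefix to the new one in every symlink under the current tree
      find . -type l | while read -r link; do
          ln -sfn "$(readlink "$link" | sed 's#^/home/rembol/solr/target/deploy#/home/rembol/output#')" "$link"
      done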

    Read the article

  • How to copy symlinks to target as normal folders

    - by Marek
    Hi, I have a folder with symlinks:
      marek@marek$ ls -al /usr/share/solr/
      total 36
      drwxr-xr-x   5 root root  4096 2010-11-30 08:25 .
      drwxr-xr-x 358 root root 12288 2010-11-26 12:25 ..
      drwxr-xr-x   3 root root  4096 2010-11-24 14:29 admin
      lrwxrwxrwx   1 root root    14 2010-11-24 14:29 conf -> /etc/solr/conf
    I want to copy it to ~/solrTest, but I want the files behind the symlinks copied as well. When I try cp -r /usr/share/solr/ ~/solrTest I still get the symlink there:
      marek@marek$ ls -al ~/solrTest
      total 36
      drwxr-xr-x   5 root root  4096 2010-11-30 08:25 .
      drwxr-xr-x 358 root root 12288 2010-11-26 12:25 ..
      drwxr-xr-x   3 root root  4096 2010-11-24 14:29 admin
      lrwxrwxrwx   1 root root    14 2010-11-24 14:29 conf -> /etc/solr/conf
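
    A minimal sketch of one way to do this: GNU cp can dereference symlinks with -L, so the copy contains real files and directories instead of links:
      cp -rL /usr/share/solr/ ~/solrTest
      # equivalently: cp -r --dereference /usr/share/solr/ ~/solrTest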

    Read the article

  • RSolr::Error::Http (RSolr::Error::Http - 404 Not Found) on Heroku

    - by Sapna
    I'm working with WebSolr in my Rails application; my application uses WebSolr for searching. Locally everything works fine, but when I deploy my code to Heroku my application stops working and gives me the error RSolr::Error::Http (RSolr::Error::Http - 404 Not Found). Below is the actual error that I find in the Heroku log; any help is appreciated.
      HTTP Status 404 - /solr/b36591faf4e_m0/select; type: Status report; message: /solr/b36591faf4e_m0/select; description: The requested resource (/solr/b36591faf4e_m0/select) is not available. Apache Tomcat/6.0.28

    Read the article

  • In 'apt-cache depends' output, what is the meaning of Suggests, Recommends, |, <>?

    - by fred.bear
    I've checked the man/info page, but there is no reference to some aspects of the output format of apt-cache depends. The man/info page tried to be helpful (in an obtuse manner); quote: "For the specific meaning of the remainder of the output it is best to consult the apt source code". Now, in fairness to the info page, that quote was in regards to the 'showpkg' option, which it had reasonably explained, but my option had no such explanation... I understand that Linux info comes from many sources (not just man/info pages), and I don't particularly want to rummage through the source (although sometimes I do), so here is an example of what I'd like to know the meaning of.
      # I can assume what these mean, but...
      # What does | mean? (probably means 'or'???)
      # What does <pkg> and the following indentations mean?
      # At the end, the interaction(?) of Suggests and Recommends puzzles me.
      $ apt-cache depends solr-common
      solr-common
        Depends: debconf
       |Depends: openjdk-6-jre-headless
       |Depends: <java5-runtime-headless>
          default-jre-headless
          gcj-4.4-jre-headless
          gcj-jre-headless
          gij-4.3
          openjdk-6-jre-headless
        Depends: <java6-runtime-headless>
          default-jre-headless
          openjdk-6-jre-headless
        Depends: libcommons-codec-java
        Depends: libcommons-csv-java
        Depends: libcommons-fileupload-java
        Depends: libcommons-httpclient-java
        Depends: libcommons-io-java
        Depends: libjaxp1.3-java
        Depends: libjetty-java
        Depends: liblucene2-java
        Depends: libservlet2.5-java
        Depends: libslf4j-java
        Depends: libxml-commons-external-java
        Suggests: libmysql-java
       |Recommends: solr-tomcat
        Recommends: solr-jetty

    Read the article

  • Using an alternate search platform in Commerce Server 2009

    - by Lewis Benge
    Although Microsoft Commerce Server 2009's architecture is built upon Microsoft SQL Server, and has the full power of the SQL Full Text Indexing search platform, there are times, however, when you may require a richer or alternate search platform. One of these scenarios is when you want to implement a faceted (refinement) search on your site, which provides dynamic refinements based on the search results dataset. Faceted search is becoming popular in most online retail environments as a way of providing an enhanced user experience when browsing a larger catalogue. This is powerful for two reasons. Firstly, with a traditional search it is down to the user to think of a search term suitable for the product they are trying to find; this typically will not return similar products or help in any way to refine a larger dataset. Faceted searches, on the other hand, provide a comprehensive list of product properties, grouped together by similarity, to help the user narrow down the results returned; as the user progressively restricts the search criteria by selecting additional criteria to search again, these facets need to continually refresh. The whole experience allows users to explore alternate brands, price ranges, or find products they hadn't initially thought of or were looking for, in a bid to enhance cross-sell in the retail environment.

    The second advantage of this type of search, from a business perspective, is the ability to harvest the search results to start to profile your user. Even though anonymous users may routinely visit your site, and will not necessarily register or complete a transaction to build up marketing profiling data, you can still achieve the same result by recording the search facets used within the search sequence. Below is a faceted search scenario generated from eBay using the search term "server". By creating a search profile of clicking through Computer & Networking -> Servers -> Dell -> New and recording this information against my user profile, you can start to predict with a lot more certainty what types of products I am interested in. This will allow you to apply shopping-cart analysis against your search data and provide great cross-sell or advertising opportunities, or personalise the user experience based on your prediction of what the user may be interested in.

    This type of search is extremely beneficial in e-commerce environments, but achieving it out of the box with Commerce Server and SQL Full Text Indexing can be challenging. In many deployments it is often easier to use an alternate search platform such as Microsoft's FAST, Apache Solr, or Endeca; however, you still want these products to integrate natively into Commerce Server to ensure that up-to-date inventory information is presented, profile information is generated, and you provide a consistent API. To do so we make the most of the Commerce Server extensibility points called operation sequence components. In this example I will be talking about Apache Solr hosted on Apache Tomcat; in this specific example I have used the SolrNet C# library to interface to the Java platform. Also, I am not going to talk about Solr indexing configuration, but in a production environment this would typically happen by using PowerShell to call the Commerce Server management web service to export your catalog as XML, apply an XSLT transform to the file to make it conform to Solr, and use a simple HTTP POST to send it to the search engine for indexing.
    Essentially a sequence component is a step in a serial workflow used to call a data repository (which in most cases is the Commerce Server pipelines or databases) and map to and from a Commerce Entity object whilst enforcing any business rules.

    So the first step in the process is to add a new class library to your existing Commerce Server site. You will need to use a new library, as sequence components need to be strongly named to be deployed. Once you are inside your new project, add a new class file and add references to the Microsoft.Commerce.Providers, Microsoft.Commerce.Contracts and Microsoft.Commerce.Broker assemblies. Now make your new class derive from the base class Microsoft.Commerce.Providers.Components.OperationSequenceComponent and override the ExecuteQuery method. Your screen will then look something similar to this:

    As all we are doing in this component is conducting a search, we are only interested in the ExecuteQuery method. This method accepts three arguments: queryOperation, operationCache, and response. The queryOperation is the object in which we receive our search parameters, the cache allows access to the Commerce Server cache so we can store regularly accessed information, and the response object is the object upon which we will return the result of our search. Inside this method is simply where we are going to inject our logic for our third party search platform. As I am not going to explain the inner workings of actually making a Solr call, I'll simply provide the sample code here; I would highly recommend looking at the SolrNet wiki, as they have some great explanations of how the API works.

    What you will find, however, is that there are some further extensions required when attempting to integrate a custom search provider. Firstly, out of the box the CommerceQueryOperation you receive into the method when conducting a search against a catalog is specifically geared towards a SQL Full Text search, with properties such as a Where clause. To make the operation you receive more relevant you will need to create another class, this time derived from Microsoft.Commerce.Contracts.Messages.CommerceSearchCriteria, and within this you need to detail the properties you will require to submit as parameters to the Solr search API. My example looks like this:

    [DataContract(Namespace = "http://schemas.microsoft.com/microsoft-multi-channel-commerce-foundation/types/2008/03")]
    public class CommerceCatalogSolrSearch : CommerceSearchCriteria
    {
        private Dictionary<string, string> _facetQueries;

        public CommerceCatalogSolrSearch()
        {
            _facetQueries = new Dictionary<String, String>();
        }

        public Dictionary<String, String> FacetQueries
        {
            get { return _facetQueries; }
            set { _facetQueries = value; }
        }

        public String SearchPhrase { get; set; }
        public int PageIndex { get; set; }
        public int PageSize { get; set; }
        public IEnumerable<String> Facets { get; set; }

        public string Sort { get; set; }

        public new int FirstItemIndex
        {
            get { return (PageIndex - 1) * PageSize; }
        }

        public int LastItemIndex
        {
            get { return FirstItemIndex + PageSize; }
        }
    }

    To allow you to construct a CommerceQueryOperation call within the API you will also need another class, this time derived from Microsoft.Commerce.Common.MessageBuilders.CommerceSearchCriteriaBuilder; it is simply used to construct an instance of the search criteria class you have just created and expose the properties you want set.
    My message builder looks like this:

    public class CommerceCatalogSolrSearchBuilder : CommerceSearchCriteriaBuilder
    {
        private CommerceCatalogSolrSearch _solrSearch;

        public CommerceCatalogSolrSearchBuilder()
        {
            _solrSearch = new CommerceCatalogSolrSearch();
        }

        public String SearchPhrase
        {
            get { return _solrSearch.SearchPhrase; }
            set { _solrSearch.SearchPhrase = value; }
        }

        public int PageIndex
        {
            get { return _solrSearch.PageIndex; }
            set { _solrSearch.PageIndex = value; }
        }

        public int PageSize
        {
            get { return _solrSearch.PageSize; }
            set { _solrSearch.PageSize = value; }
        }

        public Dictionary<String, String> FacetQueries
        {
            get { return _solrSearch.FacetQueries; }
            set { _solrSearch.FacetQueries = value; }
        }

        public String[] Facets
        {
            get { return _solrSearch.Facets.ToArray(); }
            set { _solrSearch.Facets = value; }
        }

        public override CommerceSearchCriteria ToSearchCriteria()
        {
            return _solrSearch;
        }
    }

    Once you have these two classes in place you can safely cast the CommerceOperation you receive as an argument of the overridden ExecuteQuery method in the sequence component to the CommerceCatalogSolrSearch criteria you have just created, e.g.

    public CommerceCatalogSolrSearch TryGetSearchCriteria(CommerceOperation operation)
    {
        var searchCriteria = operation as CommerceQueryOperation;
        if (searchCriteria == null)
            throw new Exception("No search criteria present");

        var local = (CommerceCatalogSolrSearch) searchCriteria.SearchCriteria;
        if (local == null)
            throw new Exception("Unexpected Search Criteria in Operation");

        return local;
    }

    Now that you have all of your search parameters present, you can go off and call the external search platform API. You will of course get proprietary objects returned, so the next step in the process is to convert the results back into CommerceEntities. You do this via another extensibility point within the Commerce Server API called translators. A translator is another separate class, this time implementing the interface Microsoft.Commerce.Providers.Translators.IToCommerceEntityTranslator. As you can imagine, this interface is specific to converting the object TO a CommerceEntity; you will need to implement a separate interface if you also need to go in the opposite direction. If you implement the required method of the interface you get a single Translate method which takes a source object, a destination CommerceEntity, and a collection of properties as arguments. For simplicity's sake in this example I have hard-coded the mappings; however, best practice would dictate you map the objects using your MetadataDefinitions.xml file.
    Once complete, your translator would look something like the following:

    public class SolrEntityTranslator : IToCommerceEntityTranslator
    {
        #region IToCommerceEntityTranslator Members

        public void Translate(object source, CommerceEntity destinationCommerceEntity, CommercePropertyCollection propertiesToReturn)
        {
            if (source.GetType().Equals(typeof (SearchProduct)))
            {
                var searchResult = (SearchProduct) source;

                destinationCommerceEntity.Id = searchResult.ProductId;
                destinationCommerceEntity.SetPropertyValue("DisplayName", searchResult.Title);
                destinationCommerceEntity.ModelName = "Product";
            }
        }

        #endregion
    }

    Once you have a translator in place you can safely map the results of your search platform into Commerce Entities and attach them to the CommerceResponse object in a fashion similar to this:

    foreach (SearchProduct result in matchingProducts)
    {
        var destinationEntity = new CommerceEntity(_returnModelName);

        Translator.ToCommerceEntity(result, destinationEntity, _queryOperation.Model.Properties);
        response.CommerceEntities.Add(destinationEntity);
    }

    In Solr I actually have two objects being returned (a product, and a collection of facets), so I have an additional translator for facets (which maps to a custom facet CommerceEntity), and my facet response from Solr is passed into the Translator helper class separately. When all of this is pieced together you have successfully completed the extensibility point coding: you will have created a new OperationSequenceComponent, a custom SearchCriteria object and message builder class, and translators to convert the objects into Commerce Entities. Now you simply need to configure them, and you can start calling them in your code. Make sure you sign your assembly, compile it and identify its signature. Next you need to put a reference to your new assembly into the Channel.Config configuration file, replacing that of the existing SQL Full Text component. You will also need to add your translators to the Translators node of your Channel.Config. Lastly, add any custom CommerceEntities you have developed to your MetadataDefinitions.xml file. Your configuration is now complete, and you should be able to happily make a call to the Commerce Foundation API, which will act as a proxy to your third party search platform and return CommerceEntities of your search results. If you require data to be enriched, or logged, or any other logic applied, then simply add further sequence components into the OperationSequence node of your Channel.Config file (obviously keeping the search response first).

    Now to call your code you simply request it as per any other CommerceQuery operation, but taking into account that you may receive multiple types of CommerceEntity back:

    public KeyValuePair<FacetCollection, List<Product>> DoFacetedProductQuerySearch(string searchPhrase, string orderKey, string sortOrder, int recordIndex, int recordsPerPage, Dictionary<string, string> facetQueries, out int totalItemCount)
    {
        var products = new List<Product>();
        var query = new CommerceQuery<CatalogEntity, CommerceCatalogSolrSearchBuilder>();

        query.SearchCriteria.PageIndex = recordIndex;
        query.SearchCriteria.PageSize = recordsPerPage;
        query.SearchCriteria.SearchPhrase = searchPhrase;
        query.SearchCriteria.FacetQueries = facetQueries;

        totalItemCount = 0;
        CommerceResponse response = SiteContext.ProcessRequest(query.ToRequest());
        var queryResponse = response.OperationResponses[0] as CommerceQueryOperationResponse;

        // No results. Return the empty list
        if (queryResponse != null && queryResponse.CommerceEntities.Count == 0)
            return new KeyValuePair<FacetCollection, List<Product>>();

        totalItemCount = (int)queryResponse.TotalItemCount;

        // Prepare a multi-operation to retrieve the product variants
        var multiOperation = new CommerceMultiOperation();

        // Add products to results
        foreach (Product product in queryResponse.CommerceEntities.Where(x => x.ModelName == "Product"))
        {
            var productQuery = new CommerceQuery<Product>(Product.ModelNameDefinition);
            productQuery.SearchCriteria.Model.Id = product.Id;
            productQuery.SearchCriteria.Model.CatalogId = product.CatalogId;

            var variantQuery = new CommerceQueryRelatedItem<Variant>(Product.RelationshipName.Variants);

            productQuery.RelatedOperations.Add(variantQuery);

            multiOperation.Add(productQuery);
        }

        CommerceResponse variantsResponse = SiteContext.ProcessRequest(multiOperation.ToRequest());
        foreach (CommerceQueryOperationResponse queryOpResponse in variantsResponse.OperationResponses)
        {
            if (queryOpResponse.CommerceEntities.Count() > 0)
                products.Add(queryOpResponse.CommerceEntities[0]);
        }

        // Get facet collection
        FacetCollection facetCollection = queryResponse.CommerceEntities.Where(x => x.ModelName == "FacetCollection").FirstOrDefault();

        return new KeyValuePair<FacetCollection, List<Product>>(facetCollection, products);
    }

    ...And that is it – simply a few classes and some configuration will allow you to extend the Commerce Server query operations to call a third party search platform, whilst still maintaining a unified API in the remainder of your code. The same approach stands for any extensibility within Commerce Server that requires execution in a serial fashion, such as calls to LOB systems or web services to validate or enrich data. Feel free to use this example in other applications, and if you have any questions please feel free to e-mail and I'll help out where I can!

    Read the article

  • How to do query auto-completion/suggestions in Lucene?

    - by Mat Mannion
    I'm looking for a way to do query auto-completion/suggestions in Lucene. I've Googled around a bit and played around a bit, but all of the examples I've seen seem to be setting up filters in Solr. We don't use Solr and aren't planning to move to using Solr in the near future, and Solr is obviously just wrapping around Lucene anyway, so I imagine there must be a way to do it! I've looked into using EdgeNGramFilter, and I realise that I'd have to run the filter on the index fields and get the tokens out and then compare them against the inputted Query... I'm just struggling to make the connection between the two into a bit of code, so help is much appreciated! To be clear on what I'm looking for (I realised I wasn't being overly clear, sorry) - I'm looking for a solution where when searching for a term, it'd return a list of suggested queries. When typing 'inter' into the search field, it'll come back with a list of suggested queries, such as 'internet', 'international', etc.

    Read the article

  • How can I request a url using AJAX

    - by Nikhil Dinesh
    I am quite new in this area. I need to find out how to make a request to my Solr server using Ajax. How can I give a URL (my Solr server's URL) in the request? Does anybody know how to deal with this? How can I make a request to the URL mentioned below? http://mysite:8080/solr/select/?q=%2A%3A%2A&version=2.2&start=0&rows=100&indent=on See here, the corrected code snippet as below:
      function getProductIds() {
          var xmlhttp;
          if (window.XMLHttpRequest)
              xmlhttp = new XMLHttpRequest();
          else
              xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
          xmlhttp.onreadystatechange = function () {
              if (xmlhttp.readyState == 4 && xmlhttp.status == 200)
                  console.dir(xmlhttp);
              else
                  alert('no response');
              var ajaxURL = "http://localhost:8080/solr/select/?q=*:*&version=2.2&start=0&rows=100&indent=on";
              xmlhttp.open("GET", ajaxURL, true);
              xmlhttp.send();
          }
    This is my code; it always shows "no response". Thanks.

    Read the article

  • Php security question

    - by Camran
    I have a Linux server, and I am about to upload a classifieds website to it. The website is PHP based: PHP code adds/removes classifieds, with the help of the users of course. The PHP code then adds/removes a classified in a search index called Solr (used much like MySQL). The problem is that anybody can currently access the index, but I only want the website to access it. Solr is on port 8983 as standard, btw. My Q is: if I add a rule in my firewall (iptables) to only allow connections coming from the server's own IP to the Solr port, would this solve my issue? Thanks
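
    A rough sketch of such rules (an illustration only; it assumes the PHP site and Solr run on the same machine, so only loopback access needs to stay open):
      iptables -A INPUT -p tcp --dport 8983 -s 127.0.0.1 -j ACCEPT   # allow the local website
      iptables -A INPUT -p tcp --dport 8983 -j DROP                  # refuse everyone else
    Binding Jetty/Solr to 127.0.0.1 instead of all interfaces would achieve a similar effect without firewall rules.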

    Read the article

  • Do I need JDK or only JRE?

    - by Camran
    I use Solr in my website, and now I am about to configure my VPS account. I am at the stage where I need to install java in order to make Solr work. Now, I only plan on running solr, and using it as it is (I have no java programming skills at all), so my Q is, do I need the entire JDK which includes JRE, or is JRE enough? Thanks BTW: My server OS is Linux (ubuntu 9.10). Thanks
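
    For what it's worth, a JRE alone is enough to run Solr; the JDK is only needed to compile Java code. A sketch of the install on Ubuntu (the package name is an assumption based on the standard repositories):
      sudo apt-get install openjdk-6-jre-headless
      java -version   # confirm the runtime is on the path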

    Read the article

  • Servlet container; What is it and do I need it in my case?

    - by Camran
    I have just ordered a VPS from my provider. I have some Q however... My website uses Solr, which requires the following according to their website: "Solr requires Java 1.5 and an Application server (such as Tomcat) which supports the Servlet 2.4 standard" I also need php 5, MySql, and the usual javascript etc... The OS is Ubuntu 9.10 1- So what do I need to install then? 2- What is a servlet container? 3- The solr I have downloaded came with Jetty. Is Jetty a Servlet container? Thanks
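
    As a side note, a minimal sketch of using the bundled Jetty (which is indeed a servlet container) rather than installing Tomcat; the path below is a hypothetical unpack location for the Solr download:
      cd apache-solr-3.6.2/example   # the example directory shipped with Solr
      java -jar start.jar            # Solr then answers on http://localhost:8983/solr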

    Read the article

  • Java process eating CPU; Why?

    - by Camran
    I have a Linux server on which I have installed Java. Sometimes, and only sometimes when a large number of visitors visit my website, the site hangs. When I open the terminal and enter the "top" command to see what's going on, I can see that the "java" process is eating CPU, like 400%. I have also tried the ps aux command, and can see that the command is from /usr/bin/java. I have little experience in troubleshooting this kind of thing, so I turn to you guys for help. I have a Java container installed (Jetty) which I must have in order to use Solr (search engine), which is integrated into my website. I can start and stop Solr with: /etc/init.d/solr stop But this didn't remove the java process from the top output; java was still eating 400% CPU. Are there other methods to restart only Java? This has happened twice to me, and each time I have restarted my entire server and everything was fine afterwards. If you need more input let me know! Thanks
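
    One way to deal with the specific process rather than rebooting the whole server (a sketch; the PID is whatever top or ps reports for the runaway java process):
      ps aux | grep "[j]ava"    # identify the PID of the Jetty/Solr java process
      kill <PID>                # ask it to shut down; use kill -9 <PID> only as a last resort
      /etc/init.d/solr start    # then bring Solr back up via its init script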

    Read the article

  • Thin Web Services Tier

    - by Lici
    Hi folks, currently I'm working on exposing some Solr queries via web services. The behavior should be the following: the client makes a request to http://api.mysite.com/hottestNews?apiKey=XXX; the WS tier validates the apiKey (plus some additional stuff); the WS tier redirects the request to someFinder:8080/solr/select/qt=hottestNews; the response is returned to the client. Can the Restlet framework help me? Thx

    Read the article

  • Transferring websites from x64 to x86 server

    - by Ke
    Hi, I run an x64 staging server here along with the following: Solr, Java, etc. However, I am about to get a Linode VPS for production and am quickly realising that x86 is the way to go for their lowest RAM package (thinking to upgrade later). My staging server is x64 with 12 GB RAM, so going down to 300 MB RAM is going to feel devilishly slow ;/ Here are my questions: 1) Will I have problems transferring my scripts, dbs etc. from an x64 to an x86 server? e.g. Solr indexes 2) Is it worth going for the x86 package? I am probably going to upgrade later down the line and x64 might be better for the servers with more RAM. Should I stick with x64 instead, as there isn't much difference when running with low RAM? Cheers Ke

    Read the article

  • Highly robust and scalable search server needed for managing and analyze files

    - by ChrisBenyamin
    Hi everybody, I am looking for a professional search server with functionality like that of Solr (http://lucene.apache.org/solr/). It should run in a centralized location which many hosts query for data. Furthermore, the system should be extensible for implementing statistical procedures (e.g. a kind of heatmap, or common diagrams, of one or more files (each identified by a GUID) that are spread across different hosts). This software doesn't have to be open source. Thanks. chris

    Read the article

  • PHP-based indexing and search implementation

    - by Chris
    Is there such a thing? A while back I designed a rudimentary form-based app for my users. We receive hardware manufacturing data from our suppliers in XML files: the file name is made of eleven fields separated by tildes, each field having its own meaning. The R&D guys wanted to be able to search each field of the file names, so I used regex() with decent results. The problem is that we now have upwards of 2.5 million files, and my app can't hack it anymore. I looked at Apache Lucene & Solr. Though it seemed like the best solution to my problem, the fields in the filenames are not peers to the file content. Big no-no with Solr. What is the best way to implement a PHP app with indexing and search capability over such a large number of files? Do I have to buy Zend and use Zend_Search? Is it the only way? Thanks for your input.

    Read the article

  • Specifying --host1 as localhost with port 8983 in autobench

    - by mamatha
    I am using autobench for benchmarking in Ubuntu 8.10. This is the command I gave:
      autobench --single_host --host1 localhost --uri1 /solr/admin --low_rate 20 --high_rate 200 --rate_step 20 --num_call 10 --num_conn 5000 --timeout 5 --file bench1.tsv
    It takes the default port of 80, and the numbers of replies and requests are as shown below:
      Errors: total 5000 client-timo 0 socket-timo 0 connrefused 5000 connreset 0
      Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0
      Zero replies received, test invalid: rate 20
      httperf --timeout=5 --client=0/1 --server=localhost --port=80 --uri=/solr/admin --rate=40 --send-buffer=4096 --recv-buffer=16384 --num-conns=5000 --num-calls=10
      Maximum connect burst length: 4
      Total: connections 5000 requests 0 replies 0 test-duration 124.976 s
    But I want the port to be 8983. In all the examples that I have seen in the autobench tutorial, --host1 is a website (such as www.test.com). Can anyone suggest how to use localhost with port 8983? Thanks in advance.
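
    As a sanity check (a sketch, not from the question): the underlying httperf call shown in the output already accepts a --port flag, so the target can be exercised directly on 8983 to confirm Solr answers before tuning the autobench options:
      httperf --server=localhost --port=8983 --uri=/solr/admin --num-conns=100 --num-calls=10 --rate=20 --timeout=5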

    Read the article

  • Can't connect to Sunspot server in Ubuntu server

    - by Chris Benseler
    I followed the steps in https://github.com/outoftime/sunspot/wiki/Adding-Sunspot-search-to-Rails-in-5-minutes-or-less to install and set up Sunspot search in Rails on Mac OS and it works fine there. On an Ubuntu server I get a connection refused error. When I run rake sunspot:solr:start the process starts, and the file sunspot-solr-development.pid is created in /tmp/pids. But when I try to reindex with rake sunspot:reindex ... rake aborted! Connection refused - connect(2). I tried running the commands with sudo and gave permission 777 to the project files, but the error is still there. Rails 3.0.8. Don't know where else to search for a solution...
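
    A first diagnostic step (a sketch; the port below is an assumption, check the one configured for the development environment in config/sunspot.yml):
      ps aux | grep "[s]olr"               # is the Sunspot-managed Solr process actually running?
      netstat -tlnp | grep java            # which port, if any, is it listening on?
      curl "http://localhost:8982/solr/"   # hypothetical port; adjust to match sunspot.yml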

    Read the article

  • How do I configure integration tests using rspec 2?

    - by Jamie Monserrate
    I need to have different settings for my unit tests and different settings for my integration tests. For example, for unit tests I would like to do WebMock.disable_net_connect!(:allow_localhost => true) and for integration tests I would like to do WebMock.allow_net_connect! Also, before the start of an integration test, I would like to make sure that Solr is started. Hence I want to be able to call config.before(:suite) do SunspotStarter.start end, BUT only for integration tests. I do not want to start my Solr if it's a unit test. How do I keep their configurations separate? Right now I have solved this by keeping my integration tests in a folder outside the spec folder, which has its own spec_helper. Is there a better way?

    Read the article

  • Building a code search engine for java code in git repositories

    - by zero1
    I'm trying to build a Java code search engine. Apart from just searching for keywords, I would also like cross-referencing between classes to work. It should work the way Eclipse's referencing works: click on anything to open the definition. A bonus would be if something like search-all-usages-of-foo works. I'm thinking of using Apache Solr to index the files and build the basic search, but I'm not sure how I'd do the cross-referencing part since Solr doesn't understand Java code. Any suggestions on what I could use here? EDIT: I mainly want to index a lot of Java git repositories.

    Read the article

  • Convincing my coworkers to use Hudson CI

    - by in0de
    I'm well aware of some of the benefits of using Hudson as a CI server, but I'm facing the problem of convincing my coworkers to install and use it. To give some context, we are developing two different products (one is an enterprise search engine based on Apache Solr) and several enterprise search projects. We are facing a lot of versioning issues and I think Hudson will solve these problems. They argued about its productivity and learning curve. Which of Hudson's benefits would you spotlight?

    Read the article

  • Where do all these Java technologies lead to? [on hold]

    - by user1502178
    For a new job, HR gave me a list of Java technologies to study: JSP/Servlet, Ant, Maven, Hibernate, Spring Core, Spring MVC, REST, JMS, Mongo, Cassandra, Solr, Elastic Search. I have never been a Java guy, but I am ready to learn these. However, I need to know where all these technologies lead: are they worth doing, and how long will they approximately take if I have university-level experience in programming and CS?

    Read the article
