Search Results

Search found 204 results on 9 pages for 'mongo'.

Page 7 of 9

  • Best ORM, Simple data Structures, Strong Query analysis.

    - by sayth
    What is the best ORM/db combination for simple data structures? That is, data that contains names as identifiers and locations, but whose main interaction will be numerical data for times (sports durations) and currency-related data. I initially want to create a sports database that will take names and statistics. Secondarily, I plan to start on an investment and stock analysis db. Which ORM suits storing many numerical types and has strong query functions? I really am not biased toward a db engine (most likely sqlite or mongo), so any suggestions for the best networkless (embedded) db to suit said ORM are appreciated.

    Read the article

  • MongoDB C# Driver Unable to Find by Object ID?

    - by Hery
    Using the MongoDB C# driver (http://github.com/samus/mongodb-csharp), it seems that I'm unable to get data by ObjectId. Below is the command I'm using:

        var spec = new Document { { "_id", id } };
        var doc = mc.FindOne(spec);

    I also tried this:

        var spec = new Document { { "_id", "ObjectId(\"" + id + "\")" } };
        var doc = mc.FindOne(spec);

    Both return nothing. Meanwhile, if I query from the mongo console, it returns the expected result. My question is: does this driver actually support lookup by ObjectId? Thanks.
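
    A likely culprit, sketched in pymongo for illustration: an _id stored as an ObjectId will never match a plain string, nor a string that merely looks like ObjectId("..."); the query value must itself be an ObjectId. The samus C# driver has an Oid type that plays the same role (an assumption about that driver's API; the connection names below are hypothetical).

        from bson.objectid import ObjectId
        from pymongo import MongoClient

        coll = MongoClient()["mydb"]["mycollection"]  # hypothetical db/collection

        # Returns None: a string never equals a stored ObjectId.
        doc = coll.find_one({"_id": "4b5783300334000000000aa9"})

        # Matches: the value is wrapped in the ObjectId type.
        doc = coll.find_one({"_id": ObjectId("4b5783300334000000000aa9")})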

    Read the article

  • Spring Data MongoDB Date between two Dates

    - by sics
    I'm using Spring Data for MongoDB and have the following classes:

        class A { List<B> b; }
        class B { Date startDate; Date endDate; }

    When I save an object of A, it gets persisted like:

        { "_id" : "DQDVDE000VFP8E39",
          "b" : [ { "startDate" : ISODate("2009-10-05T22:00:00Z"), "endDate" : ISODate("2009-10-29T23:00:00Z") },
                  { "startDate" : ISODate("2009-11-01T23:00:00Z"), "endDate" : ISODate("2009-12-30T23:00:00Z") } ] }

    Now I want to query the db for documents matching entries in b where a given date is between startDate and endDate:

        Query query = new Query(Criteria.where("a").elemMatch(
            Criteria.where("startDate").gte(date)
                    .and("endDate").lte(date)));

    which results in the following mongo query:

        { "margins": { "$elemMatch": {
            "startDate" : { "$gte" : { "$date" : "2009-11-03T23:00:00.000Z"}},
            "endDate" : { "$lte" : { "$date" : "2009-11-03T23:00:00.000Z"}} } } }

    but returns no documents. Does anybody know what I'm doing wrong? I don't get it... Thank you very much in advance!
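
    One thing worth checking (an observation, shown in pymongo rather than Spring Data): the comparison operators look inverted. For date to fall between startDate and endDate, the condition must be startDate <= date and endDate >= date; the query above asks for the opposite, which matches nothing.

        from datetime import datetime, timezone
        from pymongo import MongoClient

        coll = MongoClient()["mydb"]["a"]  # hypothetical connection/collection names
        date = datetime(2009, 11, 3, 23, 0, tzinfo=timezone.utc)

        # startDate <= date AND endDate >= date, evaluated per array element.
        docs = coll.find({"b": {"$elemMatch": {
            "startDate": {"$lte": date},
            "endDate": {"$gte": date},
        }}})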

    Read the article

  • Is Using Python to MapReduce for Cassandra Dumb?

    - by UltimateBrent
    Since Cassandra doesn't have MapReduce built in yet (I think it's coming in 0.7), is it dumb to try to MapReduce with my Python client, or should I just use CouchDB or Mongo or something? The application is stats collection, so I need to be able to sum values with grouping to increment counters. I'm not actually making Google Analytics, but pretend I am: I want to keep track of which browsers appear, which pages they went to, and visits vs. pageviews. I would just atomically update my counters on write, but Cassandra isn't very good at counters either. Maybe Cassandra just isn't the right choice for this? Thanks!
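
    For comparison, the atomic counter-on-write pattern is a natural fit for Mongo. A minimal pymongo sketch (names and schema are hypothetical):

        from pymongo import MongoClient

        stats = MongoClient()["analytics"]["counters"]  # hypothetical db/collection

        # One atomic increment per hit; upsert creates the document on first sight.
        stats.update_one(
            {"page": "/home", "browser": "Firefox"},
            {"$inc": {"visits": 1, "pageviews": 1}},
            upsert=True,
        )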

    Read the article

  • Setting up a non-emacs Common Lisp Dev Env for web application development?

    - by Ravi S
    I am trying to set up a Common Lisp dev env for web application development on my Ubuntu 10.04 LTS 64-bit box, and I can't find a single decent guide targeted at noobs. The closest I came is Peter Seibel's Lisp in a Box, but I detest Emacs with a passion, and it seems to ship older versions of SBCL and CLISP (which are my preferred CL implementations). I do not want to use any of the commercial implementations. I am looking for a simple setup to write some very basic CRUD apps involving possibly hunchentoot, some framework like weblocks, CL-WHO, CL-SQL, sqlite, or some datastores from the nosql family like mongo and couch. Assuming I go with either SBCL or CLISP on Linux, what is the best tool to manage packages and libraries? ASDF? I am looking for simplicity and consistency, and I don't expect to use a ton of libs...

    Read the article

  • High performance querying - Suggestions please

    - by Alex Takitani
    Suppose I have millions of user profiles with hundreds of fields (name, gender, preferred pet, and so on...). I want to run searches on the profiles, e.g.: all profiles that have age between x and y, love butterflies, and hate chocolate... Which database would you choose? Suppose you have a Facebook-like load. Speed is a must. Open source preferred. I've read a lot about Cassandra, HBase, Mongo, MySQL... I just can't decide...
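
    For a sense of how such a search translates to a document store, here is a pymongo sketch; the schema and field names are entirely hypothetical, and a compound index over the most selective fields is what keeps it fast under load:

        from pymongo import ASCENDING, MongoClient

        profiles = MongoClient()["social"]["profiles"]  # hypothetical

        profiles.create_index([("loves", ASCENDING), ("age", ASCENDING)])

        results = profiles.find({
            "age": {"$gte": 25, "$lte": 35},
            "loves": "butterflies",
            "hates": "chocolate",
        })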

    Read the article

  • MongoDB - how to join parent and child products by reference

    - by Jaro
    My mongo collection stores products. There are two product types: child and parent. A parent product holds an array of its children as references. Use case:

        use mydb;
        child1 = { _id: 1, name: "Child 1", is_child: true, is_parent: false, children: [] }
        child2 = { _id: 2, name: "Child 2", is_child: true, is_parent: false, children: [] }
        parent = { _id: 3, name: "Parent product", is_child: false, is_parent: true, children: [1, 2] }
        db.product.insert( [child1, child2, parent] );

    And I'm looking for any query returning:

        { _id: 3, name: "Parent product", is_child: false, is_parent: true,
          children: [
            { _id: 1, name: "Child 1", is_child: true, is_parent: false, children: [] },
            { _id: 2, name: "Child 2", is_child: true, is_parent: false, children: [] }
          ] }

    I'm a newbie to mongodb, but I guess a usage of map-reduce could solve the problem. Can anyone advise? Thx
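
    On servers with the aggregation framework's $lookup stage (MongoDB 3.2+, which postdates this question), a self-join on the same collection produces exactly the requested shape; older servers need a second query from the client. A pymongo sketch under that assumption:

        from pymongo import MongoClient

        db = MongoClient()["mydb"]

        parents = db.product.aggregate([
            {"$match": {"is_parent": True}},
            # Replace the array of child ids with the child documents themselves.
            {"$lookup": {
                "from": "product",
                "localField": "children",
                "foreignField": "_id",
                "as": "children",
            }},
        ])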

    Read the article

  • Intersection of sets Mongodb

    - by afvasd
    Hi everyone, I am new to mongo. This is my db design:

        product := { name: str, group: ref, comments: [ ref, ref, ref, ref ] }
        comments := { ... a bunch of comments stuff }
        tag := { _id: int,  # need this for online requests
                 tag: str,
                 products: [ {product: ref, score: float}, ... ],
                 comments: [ {comment: ref, score: float}, ... ] }

    My usage pattern is: GIVEN a product, find comments that have a certain tag and sort them accordingly. My current approach involves:

    1. Look for the tag object that has tag=myTag.
    2. Pull all its comments out, sorted.
    3. Look for the product where product.name=myProduct.
    4. Pull all its comments out (which are dbrefs, by the way).
    5. Loop through the result of 2, checking whether each is in 4 (this I can do with a limit of 10), etc.

    It's pretty inefficient. Any better methods?
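
    One common fix, sketched here with an entirely hypothetical denormalized schema: store the tag names (and any per-tag score) directly on each comment document, so the whole lookup becomes one indexed query instead of a client-side intersection:

        from pymongo import ASCENDING, DESCENDING, MongoClient

        comments = MongoClient()["mydb"]["comments"]  # hypothetical

        # Hypothetical shape: { product: <ref>, tags: ["myTag", ...], score: <float> }
        comments.create_index([("product", ASCENDING), ("tags", ASCENDING)])

        # product_id: the product's _id, obtained elsewhere.
        top_ten = (comments
                   .find({"product": product_id, "tags": "myTag"})
                   .sort("score", DESCENDING)
                   .limit(10))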

    Read the article

  • Compressing a hex string in Ruby/Rails

    - by PreciousBodilyFluids
    I'm using MongoDB as a backend for a Rails app I'm building. Mongo, by default, generates 24-character hexadecimal ids for its records to make sharding easier, so my URLs wind up looking like:

        example.com/companies/4b3fc1400de0690bf2000001/employees/4b3ea6e30de0691552000001

    which is not very pretty. I'd like to stick to the Rails URL conventions, but also leave these ids as they are in the database. I think a happy compromise would be to compress these hex ids into shorter strings using a larger set of characters, so they'd look something like:

        example.com/companies/3ewqkvr5nj/employees/9srbsjlb2r

    Then in my controller I'd reverse the compression, get the original hex id, and use that to look up the record. My question is: what's the best way to convert these ids back and forth? I'd of course want them to be as short as possible, but also url-safe and simple to convert. Thanks!
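
    The conversion is just a change of base. A sketch in Python; base 36 keeps the result lowercase and url-safe (in Ruby, Integer#to_s(36) and String#to_i(36) should do the same job, stated from memory):

        import string

        ALPHABET = string.digits + string.ascii_lowercase  # base 36

        def hex_to_base36(hex_id: str) -> str:
            n = int(hex_id, 16)
            digits = []
            while n:
                n, r = divmod(n, 36)
                digits.append(ALPHABET[r])
            return "".join(reversed(digits)) or "0"

        def base36_to_hex(s: str) -> str:
            return format(int(s, 36), "024x")  # pad back to 24 hex digits

        original = "4b3fc1400de0690bf2000001"
        assert base36_to_hex(hex_to_base36(original)) == original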

    Read the article

  • how to test if a string is a valid UTF16 string?

    - by superb
    I am using mongodb and javascript to do some string processing. Now I get an error like:

        Sun May 23 07:42:20 Assertion failure JS_EncodeCharacters( _context , s , srclen , dst , &len) scripting/engine_spidermonkey.cpp 152
        0x80f4f7e 0x80f8794 0x811525b 0x811a953 0x8119fc4 0x8111bc5 0x81b408e 0x81c4ee7 0x81b4a10 0x817a881 0x817a7d8 0x817a6e2 0x811e1bb 0x80a777b 0x80a8f8a 0xb7cb2455 0x80a37a1
        mongodb-linux-i686-1.4.2/bin/mongo(_ZN5mongo12sayDbContextEPKc+0xfe) [0x80f4f7e]

    After doing some googling, I find that JS_EncodeCharacters returns false if the input is not a valid UTF-16 string (if spidermonkey is built with UTF-8 enabled). So I was wondering: how do I test whether an input string is a proper UTF-16 string, so that I can skip such strings and avoid the problem... Thanks
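
    "Valid UTF-16" here means no unpaired surrogates: every high surrogate (0xD800-0xDBFF) must be immediately followed by a low surrogate (0xDC00-0xDFFF), and no low surrogate may appear on its own. A sketch of that check in Python:

        def is_valid_utf16(s: str) -> bool:
            i = 0
            while i < len(s):
                c = ord(s[i])
                if 0xD800 <= c <= 0xDBFF:  # high surrogate: must pair with a low one
                    if i + 1 >= len(s) or not (0xDC00 <= ord(s[i + 1]) <= 0xDFFF):
                        return False
                    i += 2
                elif 0xDC00 <= c <= 0xDFFF:  # lone low surrogate
                    return False
                else:
                    i += 1
            return True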

    Read the article

  • MongoDB efficient dealing with embedded documents

    - by Sebastian Nowak
    I have serious trouble finding anything useful in the Mongo documentation about dealing with embedded documents. Let's say I have the following schema:

        { _id: ObjectId,
          ...
          data: [
            { _childId: ObjectId,  // let's use a custom name so we can distinguish them
              ... }
          ] }

    1. What's the most efficient way to remove everything inside data for a particular _id?
    2. What's the most efficient way to remove the embedded document with a particular _childId inside a given _id?
    3. What's the performance here? Can _childId be indexed in order to achieve logarithmic (or similar) complexity instead of a linear lookup? If so, how?
    4. What's the most efficient way to insert a lot of (let's say 1000) documents into data for a given _id? And, as above, can we get O(n log n) or similar complexity with proper indexing?
    5. What's the most efficient way to get the count of documents inside data for a given _id?
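
    A sketch of the corresponding update operators in pymongo; numbers match the questions above, and connection names are hypothetical. On question 3: indexes locate the parent document, not positions within its array, so the in-document lookup stays linear over the array.

        from pymongo import MongoClient

        coll = MongoClient()["mydb"]["mycollection"]  # hypothetical
        # oid, child_oid, new_docs: placeholders for the ids/documents in question.

        # 1. Clear everything inside data for one _id:
        coll.update_one({"_id": oid}, {"$set": {"data": []}})

        # 2. Remove the embedded document with a particular _childId:
        coll.update_one({"_id": oid}, {"$pull": {"data": {"_childId": child_oid}}})

        # 4. Append many embedded documents in one update:
        coll.update_one({"_id": oid}, {"$push": {"data": {"$each": new_docs}}})

        # 5. Count embedded documents server-side (needs a reasonably modern server):
        coll.aggregate([{"$match": {"_id": oid}},
                        {"$project": {"n": {"$size": "$data"}}}])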

    Read the article

  • Adding a MongoDB collection to Netbeans

    - by Saif Bechan
    In NetBeans I have the option to add my MySQL databases to the IDE. This way I can easily browse them and run small queries. Now I am working on a MongoDB project, and I want to know if it is possible to get the same functionality. I see that the mongo website has a list of drivers, and I see that you can add drivers in NetBeans, but I do not know if these are the same kind of thing, or whether they can be used this way. I have tried google, but no luck. Anyone have an idea?

    Read the article

  • mongoDB many to many with one query?

    - by PowderKeg
    In MySQL I use JOIN, and one query is no problem. What about mongo? Imagine categories and products: products may have more than one category, and categories may have more than one product (a many-to-many structure). An administrator may edit categories in the administration area (so categories must be kept separate). Is it possible to fetch a product with its category names in one query? I used this structure:

        categories {
          name: "categoryName",
          product_id: ["4b5783300334000000000aa9", "5783300334000000000aa943", "6c6793300334001000000006"]
        }
        products {
          name: "productName",
          category_id: ["4b5783300334000000000bb9", "5783300334000000000bb943", "6c6793300334001000000116"]
        }

    Now I can simply get all of a product's categories, the products in some category, and the categories alone for editing. But if I want a product together with its category names, I need two queries: one to get the product's category ids, and a second to get the category names from categories by those ids. Is this the right way, or is this structure unsuitable? I would like to have only one query, but I don't know if it's possible.
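
    Without server-side joins, the two-query pattern described above is the standard answer, and $in keeps the second query to a single round trip; on servers new enough to have the aggregation $lookup stage (3.2+), it can be collapsed into one aggregation. A pymongo sketch of the $in variant, assuming category_id holds the categories' _id values:

        from pymongo import MongoClient

        db = MongoClient()["shop"]  # hypothetical db name

        product = db.products.find_one({"name": "productName"})
        names = [c["name"] for c in
                 db.categories.find({"_id": {"$in": product["category_id"]}})]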

    Read the article

  • Syncing a table records with a Service response frequently

    - by Karthik Dheeraj
    I am requesting data from a service whose response is stored in a database. First, I have an empty table; when I make my very first request, the records from the service go into the table. From then on, whenever I make another request, the service will return some records that may be the same as in the first response, plus new records, updated records, etc. My question is: how do I update my table with the responses coming from the service from the second request onwards, so that unchanged records remain the same, new records are added, and updated records are updated? Do I need to write a stored procedure on my DB, or is there a workaround? And what would the scenario look like if I used a NoSQL DB like MongoDB? Thanks in advance.
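
    In Mongo terms this is an upsert keyed on whatever identifies a record from the service. A pymongo sketch (the key field name and collection are hypothetical):

        from pymongo import MongoClient, ReplaceOne

        coll = MongoClient()["mydb"]["records"]  # hypothetical

        # One bulk round trip: unchanged rows are overwritten with identical data,
        # changed rows are updated, and unseen rows are inserted.
        coll.bulk_write([
            ReplaceOne({"external_id": rec["external_id"]}, rec, upsert=True)
            for rec in service_response  # service_response: parsed records
        ])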

    Read the article

  • Does it make sense to use BOTH mongodb and mysql in the same rails application?

    - by Brian Armstrong
    I have a good reason to use mongodb for part of my app, but people generally describe it as a poor fit for "transactional" applications like a bank, where transactions have to be exact/consistent, etc. Does it make sense to split the models up in Rails and have some of them use MySQL and others mongo? Or will this generally cause more problems than it's worth? I'm not building a banking app or anything, but I was thinking it might make sense to keep my users table or transactions table (recording revenue) in MySQL.

    Read the article

  • How to write following MongoDB query in C# Driver?

    - by user3043457
    I wrote the exact query I need in the Mongo console, but I'm having trouble rewriting it for the C# driver. Here's a sample of the document; it's a simple dictionary:

        { "_id" : ObjectId("539716bc101c588f941e2c27"),
          "_t" : "DictionaryDocument",
          "CsvSeparator" : ",",
          "SelectedAccounts" : "0",
          ... }

    Here's the query:

        db.settings.find({"SelectedAccounts": {$exists: true}}, {"SelectedAccounts": 1, "_id": 0})

    Now, I got the first part, Find with Exists, working, but how do I write the second parameter in the C# driver? I'd just like a single string as a result, not the entire document. Here's the C# code I have so far:

        _collection.FindOneAs(typeof(DictionaryDocument), Query.Exists(key));

    key in this case is "SelectedAccounts". I'd like the query to filter and return only the data I need; I don't want to return all the results and search on the C# side. EDIT: I wouldn't mind if _id was passed back, but I don't need it. So only this part would need to be converted to C#:

        db.settings.find({"SelectedAccounts": {$exists: true}}, {"SelectedAccounts": 1})
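
    The second parameter is a projection. For reference, this is what the whole query looks like in pymongo; the 1.x C# driver usually expresses the same thing by setting the cursor's fields (e.g. via a SetFields/Fields.Include-style call — an assumption about the exact method names):

        from pymongo import MongoClient

        settings = MongoClient()["mydb"]["settings"]  # hypothetical db name

        doc = settings.find_one({"SelectedAccounts": {"$exists": True}},
                                {"SelectedAccounts": 1, "_id": 0})
        value = doc["SelectedAccounts"] if doc else None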

    Read the article

  • How to batch retrieve documents with mongoDB?

    - by edude05
    Hello everyone, I have an application that queries data from MongoDB using the MongoDB C# driver, something like this:

        public void main() {
            foreach (int i in listOfKey) {
                list.add(getObjFromDb(i));
            }
        }

        public myObject getObjFromDb(int primaryKey) {
            document query = new document();
            query["primKey"] = primaryKey;
            document result = mongo["myDatabase"]["myCollection"].findOne(query);
            return parseObject(result);
        }

    On my local (development) machine, getting 100 objects this way takes less than a second. However, I recently moved the database to a server on the internet, and the same query takes about 30 seconds for the same number of objects. Furthermore, looking at the MongoDB log, it seems to open about 8-10 connections to the DB to perform the query. So what I'd like to do is query the database for an array of primary keys and get them all back at once, then do the parsing in a loop afterwards, using one connection if possible. How could I optimize my query to do so? Thanks, --Michael
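
    The batching operator is $in: send the whole key list in one query and reassemble the order on the client. A pymongo sketch of the idea; the same {"primKey": {"$in": [...]}} query document should work when built with the C# driver:

        from pymongo import MongoClient

        coll = MongoClient()["myDatabase"]["myCollection"]

        # list_of_keys / parse_object: placeholders for the app's keys and parser.
        by_key = {d["primKey"]: d for d in
                  coll.find({"primKey": {"$in": list_of_keys}})}

        # Preserve the original key order, skipping keys with no match.
        objects = [parse_object(by_key[k]) for k in list_of_keys if k in by_key]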

    Read the article

  • Can you use MongoDB map/reduce to migrate data?

    - by Brian Armstrong
    I have a large collection where I want to modify all the documents by populating a field. A simple example might be caching the comment count on each post:

        class Post
          field :comment_count, type: Integer
          has_many :comments
        end

        class Comment
          belongs_to :post
        end

    I can run it serially with something like:

        Post.all.each do |p|
          p.update_attribute :comment_count, p.comments.count
        end

    But it's taking 24 hours to run (large collection). I was wondering if mongo's map/reduce could be used for this? I haven't seen a great example yet. I imagine you would map over the comments collection and then store the reduced results in the posts collection. Am I on the right track?
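
    That is the right track: group the comments by post in one server-side pass, then write the counts back in bulk. Historically that was map/reduce with an {out: ...} collection; the aggregation-framework equivalent, sketched in pymongo with hypothetical db and field names:

        from pymongo import MongoClient, UpdateOne

        db = MongoClient()["myapp"]  # hypothetical

        counts = db.comments.aggregate([
            {"$group": {"_id": "$post_id", "n": {"$sum": 1}}}
        ])

        db.posts.bulk_write([
            UpdateOne({"_id": c["_id"]}, {"$set": {"comment_count": c["n"]}})
            for c in counts
        ])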

    Read the article

  • Which NoSQL db to use with C?

    - by systemsfault
    Hello all, I'm working on an application that I'm going to write in C, and I am considering using a NoSQL db for storing time-series data with at most 8 or 9 fields. Every 5 minutes there will be huge write operations, such as 2-10 million rows, and then there will be reads (though read performance is not as crucial as write performance). I'm considering a NoSQL db to store the data but can't decide which one to use. CouchDB seems to have a stable C driver called pillowtalk, but Mongo's driver doesn't look as promising as pillowtalk. I'm also open to other suggestions. What is your recommendation?

    Read the article

  • Nginx and PHP Fundamentals

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/08/01/nginx-and-php-fundamentals.aspx

    Hot on the heels of my .NET caching course, I've had my first "fundamentals" course released on Pluralsight: Nginx and PHP Fundamentals. It's a practical look at two of the biggest technologies on the web: Nginx, which is the fastest-growing HTTP server around (currently hosting 100+ million sites), and PHP, which powers more websites than any other server-side framework (currently 240+ million sites). The two technologies work well together; both are open-source and cross-platform, and both are lightweight and easy to get started with - you just need to download and unzip the runtimes, and with a text editor you can create and host dynamic websites. I've used PHP as a second (sometimes third) language since 2005, when I was brought cold into an established codebase to help improve performance, and Nginx to host tier-2 apps for the last couple of years. As with any training course, you learn new things as you produce it, and it was good to focus on a different stack from my commercial .NET world. In the course I start with a website in two parts - one which is just static content, and one which processes a user registration form using ASP.NET MVC, both running in IIS. Over four modules I migrate the app to Nginx and PHP:

    Hosting Static Content in Nginx – how to deploy and configure Nginx for a basic website;
    PHP Part 1: Basic Web Forms – installing PHP and an IDE, and building a simple form with server-side validation;
    PHP Part 2: Packages and Integration – using PECL and Composer for packages to connect to Azure, AWS, Mongo and reCAPTCHA;
    Hosting PHP in Nginx – configuring Nginx to host our PHP site.

    Along the way I run some performance stats with JMeter, and the headlines are that Nginx running on Linux outperforms IIS on Windows for static content, by 800 requests per second over 1000 concurrent requests; and Linux+Nginx+PHP outperforms Windows+IIS+ASP.NET MVC by 700 requests per second with the same load. Of course, the headline stats don't tell the whole story, and when you add OpCode caching for PHP and the ASP.NET Output Cache, the results are very different. As Web architecture moves away from heavy server-side processing, to Single Page Apps with client-side frameworks like AngularJS and Knockout, I think there's an increasing need for high-performance, low-cost server technologies, and the combination of Nginx and PHP makes a compelling case.

    Read the article

  • mongoDB Management Studio

    - by Liam McLennan
    This weekend I have been in Sydney at the MS Web Camp, learning about web application development. At the end of the first day we came up with application ideas and pitched them. My idea was to build a web management application for mongoDB.

    mongoDB

    I pitched my idea, put down the microphone, and then someone asked, "what's mongo?". Good question. MongoDB is a document database that stores JSON-style documents. This is a JSON document for a tweet from twitter:

        db.tweets.find()[0]
        { "_id" : ObjectId("4bfe4946cfbfb01420000001"),
          "created_at" : "Thu, 27 May 2010 10:25:46 +0000",
          "profile_image_url" : "http://a3.twimg.com/profile_images/600304197/Snapshot_2009-07-26_13-12-43_normal.jpg",
          "from_user" : "drearyclocks",
          "text" : "Does anyone know who has better coverage, Optus or Vodafone? Telstra is still too expensive.",
          "to_user_id" : null,
          "metadata" : { "result_type" : "recent" },
          "id" : { "floatApprox" : 14825648892 },
          "geo" : null,
          "from_user_id" : 6825770,
          "search_term" : "telstra",
          "iso_language_code" : "en",
          "source" : "<a href=\"http://www.tweetdeck.com\" rel=\"nofollow\">TweetDeck</a>" }

    A mongodb server can have many databases, each database has many collections (instead of tables), and a collection has many documents (instead of rows).

    Development

    Day 2 of the Sydney MS Web Camp was allocated to building our applications. First thing in the morning I identified the stories that I wanted to implement:

    Scenario: View databases
    Scenario: View Collections in a database
    Scenario: View Documents in a Collection
    Scenario: Delete a Collection
    Scenario: Delete a Database
    Scenario: Delete Documents

    Over the course of the day the team (3.5 developers) implemented all of the planned stories (except 'delete a database') and also implemented the following:

    Scenario: Create Database
    Scenario: Create Collection

    Lessons Learned

    I'm new to MongoDB, and in the past I have only accessed it from Ruby (for my hare-brained scheme). When it came to implementing our MongoDB management studio, we discovered that there is no official MongoDB driver for .NET. We chose to use NoRM, honestly just because it was the only one I had heard of. NoRM was a challenge. I think it is a fine library, but it is focused on mapping strongly typed objects to MongoDB. For our application we had no prior knowledge of the types that would be in the MongoDB database, so NoRM was probably a poor choice. Here are some screens.
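
    The server/database/collection/document hierarchy such a tool has to walk, sketched in recent pymongo (a management UI in any driver does essentially this enumeration; the local server is a hypothetical stand-in):

        from pymongo import MongoClient

        client = MongoClient()  # hypothetical local server

        for db_name in client.list_database_names():
            db = client[db_name]
            for coll_name in db.list_collection_names():
                print(db_name, coll_name, db[coll_name].estimated_document_count())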

    Read the article

  • best way to "introduce" OOP/OOD to team of experienced C++ engineers

    - by DXM
    I am looking for an efficient way, one that also doesn't come off as an insult, to introduce OOP concepts to existing team members. My teammates are not new to OO languages: we've been doing C++/C# for a long time, so the technology itself is familiar. However, I look around, and without a major infusion of effort (mostly in the form of code reviews), it seems what we are producing is C code that happens to be inside classes. There's almost no use of the single responsibility principle, abstractions, or attempts to minimize coupling, just to name a few. I've seen classes that don't have a constructor but get memset to 0 every time they are instantiated. Yet every time I bring up OOP, everyone nods and makes it seem like they know exactly what I'm talking about. Knowing the concepts is good, but we (some more than others) seem to have a very hard time applying them when it comes to delivering actual work.

    Code reviews have been very helpful, but the problem is that they only occur after the fact, so to some it seems we end up rewriting (it's mostly refactoring, but it still takes lots of time) code that was just written. Also, code reviews only give feedback to an individual engineer, not the entire team. I am toying with the idea of doing a presentation (or a series) to bring up OOP again, along with some examples of existing code that could have been written better and could be refactored. I could use some really old projects that no one owns anymore, so at least that part shouldn't be a sensitive issue. However, will this work? As I said, most people have done C++ for a long time, so my guess is that a) they'll sit there wondering why I'm telling them stuff they already know, or b) they might actually take it as an insult because I'm telling them they don't know how to do the job they've been doing for years, if not decades. Is there another approach that would reach a broader audience than a code review would, but at the same time wouldn't feel like a punishment lecture?

    I'm not a fresh kid out of college with utopian ideals of perfectly designed code, and I don't expect that from anyone. The reason I'm writing this is that I just did a review for a person who actually had a decent high-level design on paper. However, if you picture classes A - B - C - D: in the code, B, C and D all implement almost the same public interface, B/C have one-liner functions, and the top-most class A does absolutely all the work (down to memory management, string parsing, setup negotiations...) primarily in 4 mongo methods and, for all intents and purposes, calls almost directly into D.

    Update: I'm a tech lead (6 months in this role) and have the full support of the group manager. We are working on a very mature product, and maintenance costs are definitely making themselves known.

    Read the article

  • Can't setup 3 nodes MongoDB recplica set

    - by Victor Lin
    I just followed the instructions in the MongoDB document Replica Sets - Basics to set up a 3-node replica set. Everything goes fine when I do the initiate and add the first node on the primary:

        [foo@host-a mongodb]$ bin/mongo localhost
        MongoDB shell version: 1.8.2
        connecting to: localhost
        > rs.initiate()
        {
            "info2" : "no configuration explicitly specified -- making one",
            "info" : "Config now saved locally. Should come online in about a minute.",
            "ok" : 1
        }
        > rs.add("host-b")
        { "ok" : 1 }

    So far so good, but when I try to add the third node:

        myset:PRIMARY> rs.addArb("host-c")
        Sun Aug 7 22:57:09 MessagingPort recv() errno:104 Connection reset by peer 127.0.0.1:27017
        Sun Aug 7 22:57:09 SocketException: remote: error: 9001 socket exception [1]
        Sun Aug 7 22:57:09 DBClientCursor::init call() failed
        Sun Aug 7 22:57:09 query failed : local.$cmd { count: "system.replset", query: {}, fields: {} } to: 127.0.0.1
        Sun Aug 7 22:57:09 Error: error doing query: failed shell/collection.js:150
        Sun Aug 7 22:57:09 trying reconnect to 127.0.0.1
        Sun Aug 7 22:57:09 reconnect 127.0.0.1 ok

    As a result, the current primary became secondary, and host-b was marked as dead, but it is actually still alive:

        myset:SECONDARY> rs.status()
        {
            "set" : "myset",
            "date" : ISODate("2011-08-08T04:03:23Z"),
            "myState" : 2,
            "members" : [
                { "_id" : 0, "name" : "host-a:27017", "health" : 1, "state" : 2,
                  "stateStr" : "SECONDARY",
                  "optime" : { "t" : 1312775799000, "i" : 1 },
                  "optimeDate" : ISODate("2011-08-08T03:56:39Z"),
                  "self" : true },
                { "_id" : 1, "name" : "host-b", "health" : 0, "state" : 6,
                  "stateStr" : "(not reachable/healthy)", "uptime" : 0,
                  "optime" : { "t" : 0, "i" : 0 },
                  "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
                  "lastHeartbeat" : ISODate("2011-08-08T04:03:22Z"),
                  "errmsg" : "still initializing" }
            ],
            "ok" : 1
        }

    How could this happen? I just followed the guide in the document; did I do something wrong? Moreover, I can't do anything on the current secondary server. It doesn't allow me to reconfigure on the secondary node, but the problem is there is no primary node:

        myset:SECONDARY> rs.reconfig({})
        {
            "errmsg" : "replSetReconfig command must be sent to the current replica set primary.",
            "ok" : 0
        }

    Any ideas?
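
    One way to sidestep incremental reconfiguration problems like this (a general sketch, not a diagnosis of the failure above) is to pass the complete member list, arbiter included, to the initiate command in a single step. Via recent pymongo, against one of the hosts:

        from pymongo import MongoClient

        # directConnection requires a recent pymongo (3.12+ / 4.x).
        client = MongoClient("host-a", 27017, directConnection=True)

        client.admin.command("replSetInitiate", {
            "_id": "myset",
            "members": [
                {"_id": 0, "host": "host-a:27017"},
                {"_id": 1, "host": "host-b:27017"},
                {"_id": 2, "host": "host-c:27017", "arbiterOnly": True},
            ],
        })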

    Read the article

  • Replicated MongoDB server slower than simple shards

    - by displayName
    I tried to compare the performance of a sharded configuration against a sharded and replicated configuration. The sharded configuration consists of 8 shards each running on three different machines, thereby constituting a total of 24 shards. All 8 of these shards run in the same partition on each machine. The sharded and replicated version is 8 shards again, just like plain sharding, and all 8 mongods run on the same partition on each machine. But apart from this, each of the three machines now runs an additional 16 processes on another partition, which serve as the secondaries for the 8 mongods running on the other machines. This is the way I prepared a sharded and replicated configuration with data chunks having a replication factor of 3. The important point to note is that once the data has been loaded, it is not modified, so after the primary and secondaries have synchronized, it doesn't matter which one I read from. To run the queries, I use an entirely different machine (let's call it config) which runs mongos; this machine's only purpose is to receive queries and run them on the cluster. Contrary to my expectations, plain sharding with 8 shards on each machine (total = 3 * 8 = 24) performs better for queries than the sharded + replicated configuration. I have a script written to perform the queries, so to time them I run time ./testScript and look at the result. I tried changing the read preference for the replicated cluster by logging into mongo on config, running db.getMongo().setReadPref('secondary'), then exiting the shell and running the queries with time ./testScript. The questions are:

    1. Where am I going wrong with the replication? Why is it slower than the plain sharded version?
    2. Does the db.getMongo().setReadPref('secondary') persist when I leave the shell and then perform the query?

    All four machines are running Linux, and I have already increased ulimit -n from its initial value of 1024 to 2048 to allow more connections. The collections are properly distributed, and all the mongods have an equal number of chunks. It goes without saying that the indices in both configurations are the same.
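
    On question 2: as general driver behavior (stated with the caveat that this is not a diagnosis of the benchmark), the shell's setReadPref is per-session, so it does not carry over to a separately launched script. In a driver, the preference is attached to the client or collection; a pymongo sketch with hypothetical host and collection names:

        from pymongo import MongoClient
        from pymongo.read_preferences import ReadPreference

        # Connect through mongos; the preference applies to this client's queries.
        client = MongoClient("config-host", 27017, readPreference="secondary")

        # Or override per collection:
        coll = client["mydb"].get_collection(
            "mycollection", read_preference=ReadPreference.SECONDARY)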

    Read the article
