Search Results

Search found 666 results on 27 pages for 'mongodb'.

Page 4/27 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • How to install MongoDB on Windows?

    - by Industrial
    Hi! I am trying out MongoDB to see if it is a good fit for me. I downloaded the 32-bit Windows version, but have no idea how to continue from here. I normally use the WAMP stack for development on my local computer. Can I run MongoDB alongside WAMP? Either way, what's the best (easiest!) way to make it work on Windows? Thanks!
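
    For reference, a minimal sketch of a first run; the install path and data directory are assumptions based on a typical Windows setup, not taken from the question. MongoDB runs as its own server process, side by side with WAMP rather than inside it:

        # Assumed first-time setup (run in cmd.exe before this script):
        #   C:\> md \data\db
        #   C:\> C:\mongodb\bin\mongod.exe --dbpath C:\data\db
        # Then any driver can talk to it on the default port, e.g. PyMongo:
        from pymongo import MongoClient

        client = MongoClient("localhost", 27017)    # default host and port
        client.test.smoke.insert_one({"ok": True})  # db/collection created lazily
        print(client.test.smoke.find_one())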

    Read the article

  • Getting geospatial indexes to work in MongoDB 1.4.3

    - by Marcel J.
    I wanted to try geospatial indexes with MongoDB, but all I get is:

        > db.map_nodes.find( { coodinate: { $near: [54, 10] } } )
        error: { "$err" : "invalid operator: $near" }

    and:

        > db.map_nodes.runCommand({geoNear:"coordinates", near:[50,50]})
        {
            "errmsg" : "no such cmd",
            "bad cmd" : { "geoNear" : "coordinates", "near" : [ 50, 50 ] },
            "ok" : 0
        }

    I am using MongoDB 1.4.3. What am I doing wrong?
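
    Both $near and geoNear require a "2d" index on the queried field before they work (and the "no such cmd" reply suggests the geoNear command itself only arrived in a release later than 1.4.3). A minimal sketch with PyMongo; the database name is an assumption, and note the query above spells the field coodinate, while the index and query must use the actual field name:

        import pymongo
        from pymongo import MongoClient

        db = MongoClient()["mydb"]  # assumed database name
        # A geospatial query needs a "2d" index on the field first.
        db.map_nodes.create_index([("coordinate", pymongo.GEO2D)])
        # Now $near works against that field.
        for node in db.map_nodes.find({"coordinate": {"$near": [54, 10]}}).limit(10):
            print(node)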

    Read the article

  • Data aggregation: MongoDB vs. MySQL

    - by Dimitris Stefanidis
    I am currently researching backends for a project with demanding data-aggregation requirements. The main requirements are:

    - Store millions of records per user. Users might have more than 1 million entries per year, so even with 100 users we are talking about 100 million entries per year.
    - Aggregation on those entries must be performed on the fly. Users need to be able to filter the entries by a ton of available filters and then see summaries (totals, averages, etc.) and graphs of the results. Obviously I cannot precalculate any of the aggregation results, because the number of filter combinations (and thus result sets) is huge.
    - Users will have access to their own data only, but it would be nice if anonymous stats could be calculated across all the data.
    - The data will arrive in batches most of the time, e.g. the user uploads once a day, perhaps 3,000 records at a time. A later version may add automated programs that upload every few minutes in smaller batches of, say, 100 items.

    I ran a simple test: a table of 1 million rows and a simple sum over one column, in both MongoDB and MySQL. The performance difference was huge. I do not remember the exact numbers, but it was something like MySQL = 200 ms, MongoDB = 20 s. I also tested CouchDB and got much worse results.

    Speed-wise, Cassandra seems promising, and I was very enthusiastic when I first discovered it. However, the documentation is scarce, and I haven't found any solid examples of how to perform sums and other aggregate functions on the data. Is that even possible?

    As my test suggests (maybe I did something wrong), with the current performance it is impossible to use MongoDB for such a project, although its automatic sharding seems like a perfect fit. Does anybody have experience with data aggregation in MongoDB, or any insights that might help with implementing this project?

    Thanks, Dimitris
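
    For reference, a sketch of what an on-the-fly filtered sum looks like in MongoDB's aggregation framework; note this framework shipped in MongoDB 2.2, after this question was asked, and the collection and field names here are made up for illustration:

        from pymongo import MongoClient

        entries = MongoClient()["mydb"]["entries"]  # hypothetical collection
        # Filter first, then compute totals and averages in one server-side pass.
        pipeline = [
            {"$match": {"user_id": 42, "category": "expenses"}},  # any filter combo
            {"$group": {
                "_id": None,
                "total":   {"$sum": "$amount"},
                "average": {"$avg": "$amount"},
                "count":   {"$sum": 1},
            }},
        ]
        for row in entries.aggregate(pipeline):
            print(row)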

    Read the article

  • Connect to MongoDB in Python

    - by SpawnCxy
    I'm a little confused by the documentation for connecting to MongoDB, and I find it quite different from MySQL. I want to create a new database named "mydb" and insert some posts into it. The following is what I'm trying:

        from pymongo.connection import Connection
        import datetime

        host = 'localhost'
        port = 27017
        user = 'ucenter'
        passwd = '123'

        connection = Connection(host, port)
        db = connection['mydb']
        post = {'author': 'mike',
                'text': 'my first blog post!',
                'tags': ['mongodb', 'python', 'pymongo'],
                'date': datetime.datetime.utcnow()}
        posts = db.posts
        posts.insert(post)
        #print str(db.collection_names())

    And I got this error:

        pymongo.errors.OperationFailure: database error: unauthorized

    How can I do the authorization part? Thanks.
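
    With the legacy Connection API used in the question (PyMongo 2.x), authentication is a separate call on the database object after connecting. A minimal sketch; whether 'ucenter'/'123' are actually valid credentials for mydb is carried over from the question as an assumption:

        from pymongo.connection import Connection

        connection = Connection('localhost', 27017)
        db = connection['mydb']
        # The server has auth enabled, so log in before the first operation.
        if not db.authenticate('ucenter', '123'):
            raise RuntimeError('authentication to mydb failed')
        db.posts.insert({'author': 'mike'})  # now permitted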

    Read the article

  • PHP can't connect to MongoDB

    - by mdm414
    Hi, I followed the Windows installation instructions on MongoDB's website, but I still can't connect to MongoDB through PHP because of this error:

        Class 'Mongo' not found

    Why isn't the file containing the Mongo class being loaded? I've also found this error:

        PHP Warning: PHP Startup: mongo: Unable to initialize module
        Module compiled with module API=20090626, debug=0, thread-safety=1
        PHP compiled with module API=20060613, debug=0, thread-safety=1
        These options need to match in Unknown on line 0

    I'm using PHP 5.2.5, and the mongo-php-driver build is "Windows PHP 5.2 VC6 thread safe". Thanks

    Read the article

  • MongoDB and Visual C++ 2008 linker errors

    - by pedlar
    I'm trying to get the C++ client for MongoDB working in Visual Studio 2008. I can reference the includes, but whenever I point the linker at the MongoDB .lib file I get the following error: "fatal error LNK1257: code generation failed". If Visual Studio can't find the .lib, I get a bunch of unresolved-externals errors instead. I'm really pretty lost at this point.

    Read the article

  • Two Phase Commit with MongoDB

    - by mattcodes
    Here's what I'm thinking. Do you see any issues with this workaround to emulate two-phase commit on something like MongoDB, where each operation is atomic but there is no support for transactions beyond that?

        transaction_scope:
            read message from service bus - UpdateCustomerAddress
            get customer aggregate from docdb, replaying events where committed = 1
            call customer.updateAddress
                validates, creates CustomerAddressUpdated event
            apply event to event store as uncommitted
            do optimistic-concurrency update against docdb, pushing the
                uncommitted events (single op, to ensure consistency)
            publish event to service bus
            update docdb, setting the events just published to committed = 1
                (again one op - at least in MongoDB)
        transaction_complete
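
    For reference, a sketch of the optimistic-concurrency step in PyMongo; the document shape (a version counter plus an embedded events array) is an assumption for illustration, not something the question specifies:

        from pymongo import MongoClient

        customers = MongoClient()["docdb"]["customers"]  # hypothetical collection

        def append_events(customer_id, expected_version, new_events):
            # One atomic op: succeeds only if nobody else bumped the version.
            result = customers.update_one(
                {"_id": customer_id, "version": expected_version},
                {"$push": {"events": {"$each": new_events}},
                 "$inc": {"version": 1}},
            )
            if result.modified_count != 1:
                raise RuntimeError("concurrent modification - reload and retry")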

    Read the article

  • Haskell, MongoDB, date

    - by r.sendecky
    How would I insert, or automatically insert, a date into MongoDB from Haskell? And what is the best way to convert between the Mongo date type and a Haskell data type? Say, in a situation where I insert blog-post records (any Haskell web framework) and I want to date-stamp every record automatically, how would I go about it? The question is mostly about type conversion and creating MongoDB date values from within the Haskell driver.

    Read the article

  • MongoDB insert and return id with REST API

    - by abhi
    New to MongoDB; trying to get the _id back after an insert without an extra round trip.

        $.ajax({
            url: "https://api.mongolab.com/api/1/databases/xxx/collections/xx?apiKey=xxx",
            data: JSON.stringify([{ "x": 2, "c1": 34, "c2": getUrlVars()["c2"] }]),
            type: "POST",
            contentType: "application/json"
        });

    Thanks

    Edit: Solved by removing the square brackets, i.e. posting a single document instead of an array:

        JSON.stringify({ "x": 2, "c1": 34, "c2": getUrlVars()["c2"] })

    Read the article

  • How to solve the "Digg" problem in MongoDB

    - by user193116
    A while back, a Digg developer posted a blog entry, http://about.digg.com/blog/looking-future-cassandra, describing one of the problems that was not optimally solved in MySQL; it was cited as one of the reasons for their move to Cassandra. I have been playing with MongoDB, and I would like to understand how to implement MongoDB collections for this problem.

    From the article, the schema for this information in MySQL was:

        CREATE TABLE Diggs (
            id      INT(11),
            itemid  INT(11),
            userid  INT(11),
            digdate DATETIME,
            PRIMARY KEY (id),
            KEY user (userid),
            KEY item (itemid)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

        CREATE TABLE Friends (
            id           INT(10) AUTO_INCREMENT,
            userid       INT(10),
            username     VARCHAR(15),
            friendid     INT(10),
            friendname   VARCHAR(15),
            mutual       TINYINT(1),
            date_created DATETIME,
            PRIMARY KEY (id),
            UNIQUE KEY Friend_unique (userid,friendid),
            KEY Friend_friend (friendid)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    This problem is ubiquitous in social-networking implementations: people befriend a lot of people, who in turn digg a lot of things, and quickly showing users what their friends are up to is critical. I understand that several blogs have since offered pure-RDBMS solutions with indexes for this issue; I am curious how it could be solved in MongoDB.
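
    One common translation, sketched in PyMongo: a friends document per user holding an array of friend ids, plus a diggs collection indexed for the "friends' recent diggs" query. The layout and names mirror the MySQL schema but are otherwise my own assumption:

        from pymongo import MongoClient, ASCENDING, DESCENDING

        db = MongoClient()["digg_clone"]  # hypothetical database
        db.diggs.create_index([("userid", ASCENDING), ("digdate", DESCENDING)])

        def recent_friend_diggs(userid, limit=20):
            friends = db.friends.find_one({"_id": userid}) or {"friend_ids": []}
            return list(db.diggs
                        .find({"userid": {"$in": friends["friend_ids"]}})
                        .sort("digdate", DESCENDING)
                        .limit(limit))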

    Read the article

  • What is the standard system architecture for MongoDB

    - by learner
    I know this question is vague, so here are some key numbers to give insight into the scenario:

    - Each document: ~360 KB
    - Total documents: 1.5 million
    - Documents created per day: 2k
    - Read-intensive: yes
    - Availability requirement: high

    With these requirements in mind, here is what I believe the architecture should be, but I'm not too sure; please share your experiences and point me in the right direction:

    - 2 Linux boxes (Ubuntu 11 would do), on different racks for availability
    - 64-bit MongoDB
    - 1 master (for read/write) and 1 slave (read-only, with replication on)
    - Sharding not needed at this point in time

    Thank you in advance.
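
    For reference, a sketch of initiating the equivalent two-node replica set from PyMongo, the replica-set analogue of the master/slave plan above. The host names and the set name rs0 are assumptions; note that a two-member set cannot elect a primary if either member fails, so production setups usually add an arbiter or a third data-bearing node:

        from pymongo import MongoClient

        # Run once against one of the two mongods started with --replSet rs0.
        client = MongoClient("box1.example.com", 27017)
        client.admin.command("replSetInitiate", {
            "_id": "rs0",
            "members": [
                {"_id": 0, "host": "box1.example.com:27017"},  # primary candidate
                {"_id": 1, "host": "box2.example.com:27017"},  # read-only secondary
            ],
        })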

    Read the article

  • Replicated MongoDB server slower than simple shards

    - by displayName
    I tried to compare the performance of a sharded configuration against a sharded-and-replicated configuration.

    The sharded configuration consists of 8 shards, each running on three different machines, constituting a total of 24 shards. All 8 of these shards run in the same partition on each machine.

    The sharded-and-replicated version is again 8 shards, just like plain sharding, and all 8 mongods run on the same partition on each machine. But in addition, each of the three machines now runs 16 more mongod instances on another partition, which serve as secondaries for the 8 mongods running on the other machines. This is how I prepared a sharded-and-replicated configuration whose data chunks have a replication factor of 3.

    An important point to note: once the data has been loaded, it is not modified, so after the primaries and secondaries have synchronized, it doesn't matter which one I read from.

    To run the queries, I use an entirely different machine (let's call it config), which runs mongos; this machine's only purpose is to receive queries and run them on the cluster.

    Contrary to my expectations, plain sharding with 8 mongods per machine (total = 3 * 8 = 24) performs better on queries than the sharded + replicated configuration. I have a script written to perform the queries, so I time them with time ./testScript and check the result. I tried changing the read preference for the replicated cluster by logging into the mongo shell on config, running db.getMongo().setReadPref('secondary'), then exiting the shell and running the queries with time ./testScript.

    The questions are:

    - Where am I going wrong in the replication? Why is it slower than the plain sharding version?
    - Does db.getMongo().setReadPref('secondary') persist when I leave the shell and then perform the query?

    All four machines run Linux, and I have already increased ulimit -n from the initial 1024 to 2048 to allow more connections. The collections are properly distributed, all mongods have an equal number of chunks, and it goes without saying that the indexes in both configurations are the same.
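
    On the last question: setReadPref only affects the shell session it was typed in, so a separately launched test script gets a fresh connection with the default (primary) read preference unless the script requests otherwise. A sketch of pinning it in the driver, here with PyMongo; the mongos host name is an assumption:

        from pymongo import MongoClient

        # Read preference is a property of the client connection, not of the
        # cluster, so the test script has to ask for secondary reads itself.
        client = MongoClient("config.example.com", 27017,
                             readPreference="secondary")
        coll = client["mydb"]["mycoll"]
        print(coll.count_documents({}))  # routed to secondaries by mongos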

    Read the article

  • MongoDB slave replication lag

    - by Leonid Bugaev
    We're using a standard mongo setup: 2 replicas + 1 arbiter. Both replica servers are the same AWS m1.medium with RAID10 EBS. We are experiencing constantly growing replication lag on the secondary replica. I tried a full resync (you can see it on the graph), but it helped only for some hours. Our mongo usage is really low now, and frankly I can't understand why this happens.

    iostat 1 for the secondary:

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  80.39    0.00    2.94    0.00   16.67    0.00

        Device:   tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
        xvdap1    0.00      0.00         0.00         0          0
        xvdb      0.00      0.00         0.00         0          0
        xvdfp4   12.75      0.00       189.22         0        193
        xvdfp3   12.75      0.00       189.22         0        193
        xvdfp2    7.84      0.00        40.20         0         41
        xvdfp1    7.84      0.00        40.20         0         41
        md127    19.61      0.00       219.61         0        224

    mongostat for the secondary (why 100% locked? I guess that's the problem):

        insert query update delete getmore command flushes mapped vsize res   faults locked % idx miss % qr|qw ar|aw netIn netOut conn set        repl time
        *10    *0    *16    *0     0       2|4     0       30.9g  62.4g 1.65g 0      107      0          0|0   0|0   198b  1k     16   replset-01 SEC  06:55:37
        *4     *0    *8     *0     0       12|0    0       30.9g  62.4g 1.65g 0      91.7     0          0|0   0|0   837b  5k     16   replset-01 SEC  06:55:38
        *4     *0    *7     *0     0       3|0     0       30.9g  62.4g 1.64g 0      110      0          0|0   0|0   342b  1k     16   replset-01 SEC  06:55:39
        *4     *0    *8     *0     0       1|0     0       30.9g  62.4g 1.64g 0      82.9     0          0|0   0|0   62b   1k     16   replset-01 SEC  06:55:40
        *3     *0    *7     *0     0       5|0     0       30.9g  62.4g 1.6g  0      75.2     0          0|0   0|0   466b  2k     16   replset-01 SEC  06:55:41
        *4     *0    *7     *0     0       1|0     0       30.9g  62.4g 1.64g 0      138      0          0|0   0|1   62b   1k     16   replset-01 SEC  06:55:42
        *7     *0    *15    *0     0       3|0     0       30.9g  62.4g 1.64g 0      95.4     0          0|0   0|0   342b  1k     16   replset-01 SEC  06:55:43
        *7     *0    *14    *0     0       1|0     0       30.9g  62.4g 1.64g 0      98       0          0|0   0|0   62b   1k     16   replset-01 SEC  06:55:44
        *8     *0    *17    *0     0       3|0     0       30.9g  62.4g 1.64g 0      96.3     0          0|0   0|0   342b  1k     16   replset-01 SEC  06:55:45
        *7     *0    *14    *0     0       3|0     0       30.9g  62.4g 1.64g 0      96.1     0          0|0   0|0   186b  2k     16   replset-01 SEC  06:55:46

    mongostat for the primary:

        insert query update delete getmore command flushes mapped vsize res  faults locked % idx miss % qr|qw ar|aw netIn netOut conn set        repl time
        12     30    20     0      0       3       0       30.9g  62.6g 641m 0      0.9      0          0|0   0|0   212k  619k   48   replset-01 M    06:56:41
        5      17    10     0      0       2       0       30.9g  62.6g 641m 0      0.5      0          0|0   0|0   159k  429k   48   replset-01 M    06:56:42
        9      22    16     0      0       3       0       30.9g  62.6g 642m 0      0.7      0          0|0   0|0   158k  276k   48   replset-01 M    06:56:43
        6      18    12     0      0       2       0       30.9g  62.6g 640m 0      0.7      0          0|0   0|0   93k   231k   48   replset-01 M    06:56:44
        6      12    8      0      0       3       0       30.9g  62.6g 640m 0      0.3      0          0|0   0|0   80k   125k   48   replset-01 M    06:56:45
        8      21    14     0      0       9       0       30.9g  62.6g 641m 0      0.6      0          0|0   0|0   118k  419k   48   replset-01 M    06:56:46
        10     34    20     0      0       6       0       30.9g  62.6g 640m 0      1.3      0          0|0   0|0   164k  527k   48   replset-01 M    06:56:47
        6      21    13     0      0       2       0       30.9g  62.6g 641m 0      0.7      0          0|0   0|0   111k  477k   48   replset-01 M    06:56:48
        8      21    15     0      0       2       0       30.9g  62.6g 641m 0      0.7      0          0|0   0|0   204k  336k   48   replset-01 M    06:56:49
        4      12    8      0      0       8       0       30.9g  62.6g 641m 0      0.5      0          0|0   0|0   156k  530k   48   replset-01 M    06:56:50

    Mongo version: 2.0.6
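
    For reference, a sketch of measuring the lag directly from replSetGetStatus by comparing member optimes; the host name is a placeholder, and the field names follow the 2.x status document:

        from pymongo import MongoClient

        client = MongoClient("secondary-host.example.com", 27017)
        status = client.admin.command("replSetGetStatus")

        primary   = next(m for m in status["members"] if m["stateStr"] == "PRIMARY")
        secondary = next(m for m in status["members"] if m["stateStr"] == "SECONDARY")
        print("replication lag:", primary["optimeDate"] - secondary["optimeDate"])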

    Read the article

  • How to get MongoDB's current working set size

    - by Howard
    The docs say, "For best performance, the majority of your active set should fit in RAM." So, for example, my db.stats() gives me:

        {
            "db" : "mydb",
            "collections" : 16,
            "objects" : 21452,
            "avgObjSize" : 768.0516501957859,
            "dataSize" : 16476244,
            "storageSize" : 25385984,
            "numExtents" : 43,
            "indexes" : 70,
            "indexSize" : 15450112,
            "fileSize" : 469762048,
            "ok" : 1
        }

    Which value is the working-set size?
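
    None of these fields is the working set itself - the working set is whatever subset of data and indexes is actually touched, which servers of this era don't report directly - but dataSize + indexSize gives the usual upper bound ("everything is hot"). A sketch of computing that with PyMongo:

        from pymongo import MongoClient

        db = MongoClient()["mydb"]
        stats = db.command("dbstats")
        # Worst case: every document and every index page stays in RAM.
        upper_bound = stats["dataSize"] + stats["indexSize"]
        print("working set upper bound: %.1f MB" % (upper_bound / 1024.0 / 1024.0))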

    Read the article

  • MongoDB replication: no primary elected

    - by Max
    I have three servers, each running mongod as part of a replica set. Suddenly the two secondaries became unavailable (their mongod processes died); I think it is because they were too stale. The problem is that the original PRIMARY is now a SECONDARY, and my application doesn't work because it can't connect to a PRIMARY. I mean, in what way does that help me, if the replica set can't fail over?! Am I missing something? Furthermore, I am asking myself why the secondaries died and why they are too stale. What can I do about it? FYI: my database is quite big (40 GB on disk).

    Read the article

  • MongoDB Schema Design - Real-time Chat

    - by Nick
    I'm starting a project which I think will be particularly suited to MongoDB due to the speed and scalability it affords. The module I'm currently interested in is real-time chat. If I were to do this in a traditional RDBMS, I'd split it out into:

    - Channel (a channel has many users)
    - User (a user has one channel but many messages)
    - Message (a message has a user)

    For the purpose of this use case, assume there will typically be 5 channels active at one time, each handling at most 5 messages per second. Specific queries that need to be fast:

    - Fetch new messages (based on a bookmark - a timestamp, maybe, or an incrementing counter?)
    - Post a message to a channel
    - Verify that a user can post in a channel

    Bearing in mind that the document limit in MongoDB is 4 MB, how would you go about designing the schema? What would yours look like? Are there any gotchas I should watch out for?
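
    One possible shape, sketched in PyMongo: messages go in their own collection so a busy channel can never outgrow the document limit, and "fetch new" keys off the roughly time-ordered _id as the bookmark. Collection and field names are illustrative assumptions:

        from pymongo import MongoClient, ASCENDING

        db = MongoClient()["chat"]
        db.messages.create_index([("channel_id", ASCENDING), ("_id", ASCENDING)])

        def post_message(channel_id, user_id, text):
            # Cheap membership check before writing.
            if not db.channels.find_one({"_id": channel_id, "members": user_id}):
                raise PermissionError("user not in channel")
            return db.messages.insert_one(
                {"channel_id": channel_id, "user_id": user_id, "text": text}
            ).inserted_id  # ObjectIds are roughly time-ordered: a free bookmark

        def fetch_new(channel_id, last_seen_id):
            return list(db.messages.find(
                {"channel_id": channel_id, "_id": {"$gt": last_seen_id}}
            ).sort("_id", ASCENDING))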

    Read the article

  • Rails and MongoDB with MongoMapper

    - by FCastellanos
    I'm new to Rails development, and I'm starting with MongoDB as well. I have been following the Railscast tutorial about complex forms with Rails, but using MongoDB as my database. I have no problems inserting documents with their children and retrieving the data into the edit form, but when I try to update I get this error:

        undefined method `assert_valid_keys' for false:FalseClass

    This is my entity class:

        class Project
          include MongoMapper::Document

          key :name, String, :required => true
          key :priority, Integer
          many :tasks

          after_update :save_tasks

          def task_attributes=(task_attributes)
            task_attributes.each do |attributes|
              if attributes[:id].blank?
                tasks.build(attributes)
              else
                task = tasks.detect { |t| t.id.to_s == attributes[:id].to_s }
                task.attributes = attributes
              end
            end
          end

          def save_tasks
            tasks.each do |t|
              t.save(false)
            end
          end
        end

    Does anyone know what's happening here? Thanks

    Read the article

  • MongoDB: embedding performance question

    - by Alex
    I just started learning MongoDB, and I really like the idea of embedding collections instead of referencing them. MongoDB's documentation recommends embedding when performance is needed. I thought about a simple forum model: every board category has several boards, every board has several topics, and every topic has several messages, with all of these collections embedded. After some time, the board-category document will be huge - way more than the 2 MB limit. Does this mean there's a flaw in this design?
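
    Yes - an unbounded embedded hierarchy will eventually hit the per-document size limit, so the usual fix is to embed only the small, bounded levels and reference the rest. A sketch of the referenced layout in PyMongo; all names are illustrative:

        from pymongo import MongoClient, ASCENDING

        db = MongoClient()["forum"]
        # Categories embed their few, bounded boards; topics and messages grow
        # without bound, so they live in their own collections and reference up.
        db.topics.create_index([("board_id", ASCENDING)])
        db.messages.create_index([("topic_id", ASCENDING)])

        db.categories.insert_one(
            {"_id": "general", "boards": [{"_id": 1, "name": "Announcements"}]})
        topic_id = db.topics.insert_one(
            {"board_id": 1, "title": "Welcome"}).inserted_id
        db.messages.insert_one({"topic_id": topic_id, "text": "First post!"})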

    Read the article

  • MongoDB db.serverStatus() gives an error when run through a tunnel targeted at api.cloudfoundry.com

    - by Ajay
    The following is the console session:

        C:\Users\xxx>vmc tunnel myMongoDB
        Getting tunnel connection info: OK

        Service connection info:
          username : uuuu
          password : pppp
          name     : db
          url      : mongodb://uuuu:[email protected]:25200/db

        Starting tunnel to myMongoDB on port 10000.
        1: none
        2: mongo
        3: mongodump
        4: mongorestore
        Which client would you like to start?: 2
        Launching 'mongo --host localhost --port 10000 -u uuuu -p pppp db'
        MongoDB shell version: 2.0.6
        connecting to: localhost:10000/db
        > db.serverStatus()
        { "errmsg" : "need to login", "ok" : 0 }
        >

    Which credentials should I use to log in (I assume via db.auth) to get rid of the error { "errmsg" : "need to login", "ok" : 0 }? When I run the same thing against Micro Cloud Foundry on my machine, it works and gives the expected output.

    P.S. I'm trying this to see the current connections to my application, which is written in node.js, and to debug some issues with connections to the DB. If there is any other alternative I can use, please suggest that as well.
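
    For what it's worth, a sketch of the same check from PyMongo through the tunnel, authenticating with the service credentials the tunnel printed; whether those credentials carry the privileges serverStatus needs on the hosted service is exactly the open question here:

        from pymongo import MongoClient

        client = MongoClient("localhost", 10000)   # the vmc tunnel endpoint
        db = client["db"]                          # service db from the tunnel info
        db.authenticate("uuuu", "pppp")            # credentials printed by vmc
        status = db.command("serverStatus")        # fails without admin rights
        print(status["connections"])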

    Read the article

  • Facebook user_id as MongoDB BSON ObjectId?

    - by MattDiPasquale
    I'm rebuilding Lovers on Facebook with Sinatra & Redis. I like Redis because it doesn't have the long (12-byte) BSON ObjectIds, and I am storing sets of Facebook user_ids for each user. The sets are requests_sent, requests_received, & relationships, and they all contain Facebook user ids. I'm thinking of switching to MongoDB because I want to use its geospatial indexing. If I do, I'd want to use the FB user ids as the _id field, because I want the sets to be small and I want the JSON responses to be small. But is the BSON ObjectId better (more efficient for MongoDB) than just an integer (the FB user_id)?
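
    MongoDB only requires _id to be unique within the collection; it does not have to be an ObjectId, and a plain integer is smaller both on disk and in JSON responses. A sketch, with made-up ids:

        from pymongo import MongoClient

        users = MongoClient()["lovers"]["users"]
        fb_uid = 503263936  # hypothetical Facebook user id

        # Use the Facebook id directly as _id: no 12-byte ObjectId is generated.
        users.insert_one({
            "_id": fb_uid,
            "requests_sent": [100001, 100002],   # other users' FB ids
            "requests_received": [100003],
            "relationships": [],
        })
        print(users.find_one({"_id": fb_uid}))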

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >