Search Results

Search found 204 results on 9 pages for 'mongo'.

Page 2/9 | < Previous Page | 1 2 3 4 5 6 7 8 9  | Next Page >

  • What techniques would you use for a next generation java web application?

    - by jakob
    I'm working on a site similar to Foursquare and Yelp, with approximately 100,000 unique requests each week that generate content, growing steadily. We are currently using: Seam as the Java web framework, MySQL as the DB, Hibernate as the ORM, Hibernate Search as the index, and EhCache for caching. Since our site is slowly growing out of the current setup and has a lot of legacy code, it is time for us to start thinking about a major refactoring/change of setup. Web framework: we are not ready to change the language, but we are leaning towards the Spring Web Framework, since Seam is no more and almost all of us have worked with Spring and liked it. DB and ORM: we have done a little research and we are thinking about MongoDB. Index: do we need a separate index if we use MongoDB? Cache: ? So my question is basically: if you take the Spring Web Framework and MongoDB into consideration, what would a good setup be for a web application that is growing and handles a lot of logged-in users generating input and performing searches?

    Read the article

  • File system implementation in MongoDB with GridFS

    - by Ralph
    I am working on two projects that will both implement a WebDAV server backed by MongoDB GridFS. In each case, there is the potential for the system to store tens of millions of files spread across thousands of hierarchical directories. I can come up with two different ways of storing the directory structure: As a "true" hierarchical file system, with directories containing the IDs (_id) of subdirectories and regular files. The paths will be separated by slashes (/) as in a POSIX-compliant file system; the path /a/b/c will be represented as a directory a containing a directory b containing a file c. As a flat file system, where file names include the slashes; the path /a/b/c will be stored as a single file with the name /a/b/c. What are the advantages and disadvantages of each, with respect to a "real" folder-based file system?
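
    To make the trade-off concrete, here is a minimal sketch of the two layouts, assuming pymongo and hypothetical collection names (nodes for the hierarchy, files for the flat scheme); in the real projects the file contents would live in GridFS alongside either one:

        from pymongo import MongoClient

        db = MongoClient().webdav  # hypothetical database

        # 1. "true" hierarchy: each directory lists the _ids of its children,
        #    so resolving /a/b/c walks one document per path segment
        a = db.nodes.insert_one({"name": "a", "dir": True, "children": []}).inserted_id
        b = db.nodes.insert_one({"name": "b", "dir": True, "children": []}).inserted_id
        c = db.nodes.insert_one({"name": "c", "dir": False}).inserted_id
        db.nodes.update_one({"_id": a}, {"$push": {"children": b}})
        db.nodes.update_one({"_id": b}, {"$push": {"children": c}})

        # 2. flat namespace: one document per full path; a single indexed
        #    lookup resolves any path, but renaming a directory means
        #    rewriting the path of every descendant
        db.files.create_index("path")
        db.files.insert_one({"path": "/a/b/c"})
        listing = db.files.find({"path": {"$regex": r"^/a/b/[^/]+$"}})  # ls /a/b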

    Read the article

  • Which Python API should be used with Mongo DB and Django

    - by Thomas
    I have been going back and forth over which Python API to use when interacting with Mongo. I did a quick survey of the landscape and identified three leading candidates: PyMongo, MongoEngine, and Ming. If you were designing a new content-heavy website using the Django framework, which API would you choose and why? MongoEngine looks like it was built specifically with Django in mind. PyMongo appears to be a thin wrapper around Mongo. It has a lot of power, though it loses a lot of the abstractions gained through using Django as a framework. Ming represents an interesting middle ground between PyMongo and MongoEngine, though I haven't had the opportunity to take it for a test drive.
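
    To illustrate the difference in abstraction level being weighed here, a small sketch with illustrative names: PyMongo works with plain dicts, while MongoEngine gives Django-style declarative models:

        # PyMongo: a thin, dict-based wrapper around the server's API
        from pymongo import MongoClient

        articles = MongoClient().site.articles
        articles.insert_one({"title": "Hello", "body": "...", "tags": ["news"]})

        # MongoEngine: declarative documents, much closer to Django's ORM idiom
        import mongoengine

        mongoengine.connect("site")

        class Article(mongoengine.Document):
            title = mongoengine.StringField(required=True)
            body = mongoengine.StringField()
            tags = mongoengine.ListField(mongoengine.StringField())

        Article(title="Hello", body="...", tags=["news"]).save()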

    Read the article

  • MongoDB: why is my mongo server using two PIDs?

    - by Lucas
    I started my mongo with the following command:

        [lucas@ecoinstance]~/node/nodetest2$ sudo mongod --dbpath /home/lucas/node/nodetest2/data
        2014-06-07T08:46:30.507+0000 [initandlisten] MongoDB starting : pid=6409 port=27017 dbpath=/home/lucas/node/nodetest2/data 64-bit host=ecoinstance
        2014-06-07T08:46:30.508+0000 [initandlisten] db version v2.6.1
        2014-06-07T08:46:30.508+0000 [initandlisten] git version: 4b95b086d2374bdcfcdf2249272fb552c9c726e8
        2014-06-07T08:46:30.508+0000 [initandlisten] build info: Linux build14.nj1.10gen.cc 2.6.32-431.3.1.el6.x86_64 #1 SMP Fri Jan 3 21:39:27 UTC 2014 x86_64 BOOST_LIB_VERSION=1_49
        2014-06-07T08:46:30.509+0000 [initandlisten] allocator: tcmalloc
        2014-06-07T08:46:30.509+0000 [initandlisten] options: { storage: { dbPath: "/home/lucas/node/nodetest2/data" } }
        2014-06-07T08:46:30.520+0000 [initandlisten] journal dir=/home/lucas/node/nodetest2/data/journal
        2014-06-07T08:46:30.520+0000 [initandlisten] recover : no journal files present, no recovery needed
        2014-06-07T08:46:30.527+0000 [initandlisten] waiting for connections on port 27017

    It appears to be working, as I can execute mongo and access the server. However, here are the processes running mongo:

        [lucas@ecoinstance]~/node/testSite$ ps aux | grep mongo
        root   6540  0.0  0.2  33424  1664 pts/3  S+  08:52  0:00 sudo mongod --dbpath /home/lucas/node/nodetest2/data
        root   6541  0.6  8.6 522140 52512 pts/3  Sl+ 08:52  0:00 mongod --dbpath /home/lucas/node/nodetest2/data
        lucas  6554  0.0  0.1   7836   876 pts/4  S+  08:52  0:00 grep mongo

    As you can see, there are two PIDs for mongod. Before I ran sudo mongod --dbpath /home/lucas/node/nodetest2/data, there were none (besides the grep, of course). How did my command spawn two PIDs, and should I be concerned? Any suggestions or tips would be great.

    Additional info: I may have other issues that might suggest a cause. I tried running mongod with --fork --logpath /home/lucas..., but it did not work. More information below:

        [lucas@ecoinstance]~/node/nodetest2$ sudo mongod --dbpath /home/lucas/node/nodetest2/data --fork --logpath /home/lucas/node/nodetest2/data/
        about to fork child process, waiting until server is ready for connections.
        forked process: 6578
        ERROR: child process failed, exited with error number 1

        [lucas@ecoinstance]~/node/nodetest2$ ls -l data/
        total 163852
        drwxr-xr-x 2 mongodb nogroup     4096 Jun  7 08:54 journal
        -rw------- 1 mongodb nogroup 67108864 Jun  7 08:52 local.0
        -rw------- 1 mongodb nogroup 16777216 Jun  7 08:52 local.ns
        -rwxr-xr-x 1 mongodb nogroup        0 Jun  7 08:54 mongod.lock
        -rw------- 1 mongodb nogroup 67108864 Jun  7 02:08 nodetest1.0
        -rw------- 1 mongodb nogroup 16777216 Jun  7 02:08 nodetest1.ns

    Also, my db path folder is not in the original location. It was originally created under the default /var/lib/mongodb/ and moved to my local data folder. This was done after shutting down the server via /etc/init.d/mongod stop. I have a Debian Wheezy server, if it matters.

    Read the article

  • please explain these mongo statistics

    - by sivann
    My setup: I have 2 hosts with 2 shards each. Host1 has 2 shards and is the master of the replica sets; host2 has the secondaries of the 2 shards.

        host1: shard1 (repset1), shard2 (repset2)
        host2: shard1 (repset1), shard2 (repset2)

    There's also a 3rd host that acts as an arbiter. I have 50 threads writing randomly to both shards (using a hash) via mongos, with the REPLICA_SAFE WriteConcern set on each insert. The questions: mongostat displays about 90% locked for both shards on host1 and about 1% locked on host2. Since I use REPLICA_SAFE, which supposedly writes to both servers, shouldn't the locks be the same? mongostat reports qr=30 for both shards of host1, and qw=0 always. Since I perform only writes, how is this possible? Moreover, on host2 all queues are reported 0. Faults are about the same in all shards/hosts (around 80). netIn/netOut on the secondaries (host2) are always about 200 bytes/sec. Too low. mongos has 53 connections, host1's shards have 71 and 71, and host2's shards have 9 and 8. How is this? Please answer whatever you can. Thanks!
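
    For reference, a sketch of what each insert looks like with a replica-acknowledged write concern, written with pymongo under illustrative names; the old drivers' REPLICA_SAFE corresponds roughly to w=2, i.e. the primary plus one secondary must acknowledge the write, which does not mean the secondary applies it synchronously under the same lock:

        from pymongo import MongoClient
        from pymongo.write_concern import WriteConcern

        client = MongoClient("mongos-host:27017")  # illustrative address
        events = client.mydb.get_collection(
            "events", write_concern=WriteConcern(w=2))
        # returns once the primary and one secondary have acknowledged
        events.insert_one({"shard_key": 42, "payload": "..."})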

    Read the article

  • mongoid with rails - "Database should be a Mongo::DB, not NilClass"

    - by Adam T
    Greetings. I am trying to get Mongoid to work with my Rails app and I am getting an error: "Mongoid::Errors::InvalidDatabase in 'Shipment bol should be unique' Database should be a Mongo::DB, not NilClass". I have created the mongoid.yml file in my config directory and have mongodb running as a daemon. The config file is like so:

        defaults: &defaults
          host: localhost

        development:
          <<: *defaults
          database: ship-it-development

        test:
          <<: *defaults
          database: ship-it-test

        production:
          <<: *defaults
          host: <%= ENV['MONGOID_HOST'] %>
          port: <%= ENV['MONGOID_PORT'] %>
          database: <%= ENV['MONGOID_DATABASE'] %>

    All of my specs fail with the above error. I am using Rails 2.3.8. Anyone have ideas?

    Read the article

  • Query Mongo Db and filter by associative array key

    - by Failpunk
    How can I search for results in MongoDB documents using an associative array key? Something like:

        SELECT * FROM table WHERE keyword LIKE '%searchterm%';

    Here is the basic document structure:

        [id] => 31605
        [keywords] => Array
            (
                [keyword1] => Array ( [name] => KeyWord1 )
                [keyword2] => Array ( [name] => KeyWord2 )
                ...
            )

    I would like to do a search within the keywords array on the associative array keys (keyword1, keyword2). The issue is that the name key holds the case-sensitive version of the keyword and the array key is the lower-case keyword name. I could store the lowercase keyword twice, but that seems silly.
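
    A sketch of how such a lookup typically works when keys are dynamic, shown with pymongo under assumed collection and field names: the lowercase key can be matched with $exists via dot notation, so the case-sensitive display form never needs to be duplicated:

        from pymongo import MongoClient

        docs = MongoClient().mydb.docs  # illustrative names
        term = "keyword1"               # already lower-cased by the caller

        # matches documents whose keywords map contains the key "keyword1";
        # the embedded "name" field keeps the case-sensitive display form
        for doc in docs.find({"keywords.%s" % term: {"$exists": True}}):
            print(doc["keywords"][term]["name"])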

    Read the article

  • Mongo: Finding from multiple queries

    - by waxical
    New to Mongo here. I'm using the PHP lib and trying to work out how I can find documents in a collection matching multiple values. I could repeat the query once per value, but I wondered if it can be done in one. I.e.

        $idsToLookFor = array(2124,4241,5553);
        $query = $db->thisCollection->find(array('id' => $idsToLookFor));

    That's what I'd like to do. However, it doesn't work. What I'm trying to do is find a set of results for all the ids at one time. Possible, or should I just do a findOne on each with a foreach/for?
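
    For the record, a minimal sketch of the operator that does this in one query, $in, written with pymongo under illustrative names (the PHP driver takes the analogous array('id' => array('$in' => $idsToLookFor))):

        from pymongo import MongoClient

        collection = MongoClient().mydb.thisCollection  # illustrative names
        ids_to_look_for = [2124, 4241, 5553]

        # one round trip: matches every document whose id is in the list
        for doc in collection.find({"id": {"$in": ids_to_look_for}}):
            print(doc)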

    Read the article

  • Issues with MongoDB install on Ubuntu 8.04 LTS

    - by Tom
    I am installing MongoDB (1.4.1) on Ubuntu (8.04 LTS) and I continually hit a problem where I can be in /usr/local/mongodb/bin, run ./mongo or ./mongod, and get back "No such file or directory." Let me be very clear here... the files ARE there! The obvious go-to explanation is a permissions issue, but the permissions are fine. I've even tried others out, still without any luck. I'm really at the end of my rope here and any help would be MUCH appreciated. Thank you!

    Read the article

  • Rails HTML Table update fields in mongo with AJAX

    - by qwexar
    I'm building a Rails app backed by MongoDB using Mongoid. It's a one-page app with an HTML table; every field of every row needs to be editable without refreshing the page. This is your usual Rails view (like many in RailsCasts) showing a table with rows and columns containing data. For example, I'm showing cars, with their make, model and notes. The way I'm doing this is by appending the _id of a mongo document to every column, and marking its field name in the HTML id too. Then I pick up the value for $("#id") and send it to a Rails controller via AJAX, which runs @car.update_attributes accordingly. Currently, one of my rows looks like this:

        <tr>
          <td id="<%= car.id %>_make"> <%= car.make %> </td>
          <td id="<%= car.id %>_model"> <%= car.model %> </td>
          <td id="<%= car.id %>_notes"> <%= car.notes %> </td>
        </tr>

        // my function, which is called onChange for every column
        function update_attributes(elementId){
          var id = elementId.split("_")[0];
          var attribute = elementId.split("_")[1];
          $.ajax("sending id and attribute to rails controller");
        }

    Is there any built-in Rails magic which would let me update only a field in a model without refreshing the page? Or is there a Rails plugin for this?
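
    On the database side, each of those AJAX calls boils down to an atomic single-field update with $set, which Mongoid's dirty tracking roughly issues for you; a sketch in Python/pymongo with illustrative names, showing why updating one cell never rewrites the rest of the row:

        from bson import ObjectId
        from pymongo import MongoClient

        cars = MongoClient().myapp.cars  # illustrative database/collection

        def update_attribute(car_id, attribute, value):
            # $set touches only the named field of the matched document
            cars.update_one({"_id": ObjectId(car_id)}, {"$set": {attribute: value}})

        update_attribute("506e9e54a4e8f51423679428", "model", "i3")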

    Read the article

  • Optimizing python code performance when importing zipped csv to a mongo collection

    - by mark
    I need to import a zipped csv into a mongo collection, but there is a catch - every record contains a timestamp in Pacific Time, which must be converted to the local time corresponding to the (longitude, latitude) pair found in the same record. The code looks like so:

        def read_csv_zip(path, timezones):
            with ZipFile(path) as z, z.open(z.namelist()[0]) as input:
                csv_rows = csv.reader(input)
                header = csv_rows.next()
                check, converters = get_aux_stuff(header)
                for csv_row in csv_rows:
                    if check(csv_row):
                        row = {
                            converter[0]: converter[1](value)
                            for converter, value in zip(converters, csv_row)
                            if allow_field(converter)
                        }
                        ts = row['ts']
                        lng, lat = row['loc']
                        found_tz_entry = timezones.find_one(SON({'loc': {'$within': {'$box': [[lng - tz_lookup_radius, lat - tz_lookup_radius], [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
                        if found_tz_entry:
                            tz_name = found_tz_entry['tz']
                            local_ts = ts.astimezone(timezone(tz_name)).replace(tzinfo=None)
                            row['tz'] = tz_name
                        else:
                            local_ts = (ts.astimezone(utc) + timedelta(hours=int(lng / 15))).replace(tzinfo=None)
                        row['local_ts'] = local_ts
                        yield row

        def insert_documents(collection, source, batch_size):
            while True:
                items = list(itertools.islice(source, batch_size))
                if len(items) == 0:
                    break
                try:
                    collection.insert(items)
                except:
                    for item in items:
                        try:
                            collection.insert(item)
                        except Exception as exc:
                            print("Failed to insert record {0} - {1}".format(item['_id'], exc))

        def main(zip_path):
            with Connection() as connection:
                data = connection.mydb.data
                timezones = connection.timezones.data
                insert_documents(data, read_csv_zip(zip_path, timezones), 1000)

    The code proceeds as follows: Every record read from the csv is checked and converted to a dictionary, where some fields may be skipped, some titles renamed (from those appearing in the csv header), and some values converted (to datetime, to integers, to floats, etc.). For each record read from the csv, a lookup is made into the timezones collection to map the record location to the respective time zone. If the mapping is successful, that timezone is used to convert the record timestamp (Pacific time) to the respective local timestamp. If no mapping is found, a rough approximation is calculated. The timezones collection is appropriately indexed, of course - calling explain() confirms it. The process is slow. Naturally, having to query the timezones collection for every record kills the performance. I am looking for advice on how to improve it. Thanks.

    EDIT The timezones collection contains 8176040 records, each containing four values:

        > db.data.findOne()
        { "_id" : 3038814, "loc" : [ 1.48333, 42.5 ], "tz" : "Europe/Andorra" }

    EDIT2 OK, I have compiled a release build of http://toblerity.github.com/rtree/ and configured the rtree package. Then I created an rtree dat/idx pair of files corresponding to my timezones collection. So, instead of calling collection.find_one I call index.intersection. Surprisingly, not only is there no improvement, it actually works more slowly now! Maybe rtree could be fine-tuned to load the entire dat/idx pair into RAM (704M), but I do not know how to do it. Until then, it is not an alternative. In general, I think the solution should involve parallelization of the task.
    EDIT3 Profile output when using collection.find_one:

        >>> p.sort_stats('cumulative').print_stats(10)
        Tue Apr 10 14:28:39 2012    ImportDataIntoMongo.profile

        64549590 function calls (64549180 primitive calls) in 1231.257 seconds

        Ordered by: cumulative time
        List reduced from 730 to 10 due to restriction <10>

        ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
             1    0.012    0.012 1231.257 1231.257  ImportDataIntoMongo.py:1(<module>)
             1    0.001    0.001 1230.959 1230.959  ImportDataIntoMongo.py:187(main)
             1  853.558  853.558  853.558  853.558  {raw_input}
             1    0.598    0.598  370.510  370.510  ImportDataIntoMongo.py:165(insert_documents)
        343407    9.965    0.000  359.034    0.001  ImportDataIntoMongo.py:137(read_csv_zip)
        343408    2.927    0.000  287.035    0.001  c:\python27\lib\site-packages\pymongo\collection.py:489(find_one)
        343408    1.842    0.000  274.803    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:699(next)
        343408    2.542    0.000  271.212    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:644(_refresh)
        343408    4.512    0.000  253.673    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:605(__send_message)
        343408    0.971    0.000  242.078    0.001  c:\python27\lib\site-packages\pymongo\connection.py:871(_send_message_with_response)

    Profile output when using index.intersection:

        >>> p.sort_stats('cumulative').print_stats(10)
        Wed Apr 11 16:21:31 2012    ImportDataIntoMongo.profile

        41542960 function calls (41542536 primitive calls) in 2889.164 seconds

        Ordered by: cumulative time
        List reduced from 778 to 10 due to restriction <10>

        ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
             1    0.028    0.028 2889.164 2889.164  ImportDataIntoMongo.py:1(<module>)
             1    0.017    0.017 2888.679 2888.679  ImportDataIntoMongo.py:202(main)
             1 2365.526 2365.526 2365.526 2365.526  {raw_input}
             1    0.766    0.766  502.817  502.817  ImportDataIntoMongo.py:180(insert_documents)
        343407    9.147    0.000  491.433    0.001  ImportDataIntoMongo.py:152(read_csv_zip)
        343406    0.571    0.000  391.394    0.001  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:384(intersection)
        343406  379.957    0.001  390.824    0.001  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:435(_intersection_obj)
        686513   22.616    0.000   38.705    0.000  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:451(_get_objects)
        343406    6.134    0.000   33.326    0.000  ImportDataIntoMongo.py:162(<dictcomp>)
           346    0.396    0.001   30.665    0.089  c:\python27\lib\site-packages\pymongo\collection.py:240(insert)

    EDIT4 I have parallelized the code, but the results are still not very encouraging. I am convinced it could be done better. See my own answer to this question for details.
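
    Not part of the original post, but one common mitigation can be sketched under an assumption the data often satisfies (records arriving spatially clustered): memoize the box lookup on a coarse lat/lng grid so nearby records reuse a single query instead of hitting the timezones collection every time. The 0.5-degree step is hypothetical and would be tuned against tz_lookup_radius:

        # a sketch, not the poster's code: cache lookups on a coarse grid
        _tz_cache = {}

        def lookup_tz_entry(timezones, lng, lat, radius, step=0.5):
            key = (round(lng / step), round(lat / step))
            if key not in _tz_cache:
                _tz_cache[key] = timezones.find_one(
                    {'loc': {'$within': {'$box': [[lng - radius, lat - radius],
                                                  [lng + radius, lat + radius]]}}})
            return _tz_cache[key]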

    Read the article

  • mongodb read/write performance and mongo hosting in the cloud

    - by z3cko
    We are currently developing a high-traffic Rails application with Facebooker (a Facebook game). Since Amazon SimpleDB (AWS SDB) is really slow, we are thinking of using a dedicated MongoDB server, as offered by MongoHQ for example. Questions: What is the read/write peak value for a MongoDB server running on an Amazon EC2 instance? What would be a recommended setup for an EC2-hosted app with MongoDB - a master on Amazon EBS and replicas on the EC2 instances? Any examples or experiences? Is there a company that offers MongoDB hosting in the cloud? Thanks, mz

    Read the article

  • directoryperdb issue

    - by Rich Blumer
    I installed MongoDB to run as a Windows Service on Win 7 and everything runs well. However, when I attempt to use --directoryperdb, it is not recognized. Does anyone know how to resolve this issue?
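
    For context, --directoryperdb is a startup option of the mongod server process, not a shell command, so it has to appear on the server's command line (or in the config file the Windows service was registered with); a minimal sketch with an illustrative dbpath:

        mongod --dbpath C:\data\db --directoryperdb

    When mongod already runs as a service, the service must be re-registered with the extra option, and the option cannot be applied to an existing data directory without dumping and restoring the data.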

    Read the article

  • How to make a secure MongoDB server?

    - by Earlz
    Hello, I want my website to use MongoDB as its datastore. I've used MongoDB in my development environment with no worries, but I'm worried about security with a public server. My server is a VPS running Arch Linux. The web application will also be running on it, so it only needs to accept connections from localhost. And no other users (by ssh or otherwise) will have direct access to my server. What should I do to secure my instance of MongoDB?
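
    A minimal hardening sketch for this situation, assuming only the local web app needs access; both flags are standard mongod options:

        mongod --dbpath /var/lib/mongodb --bind_ip 127.0.0.1 --auth

    --bind_ip 127.0.0.1 keeps the server off the public interface entirely, and --auth requires credentials even for local connections; a firewall rule dropping outside traffic to port 27017 is a common belt-and-braces addition.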

    Read the article

  • MapReduce for counting parameter values

    - by cnkt
    I have documents like this:

        { "_id": ObjectId("4d17c7963ffcf60c1100002f"), "title": "Text", "params": { "brand": "BMW", "model": "i3" } }
        { "_id": ObjectId("4d17c7963ffcf60c1100002f"), "title": "Text", "params": { "brand": "BMW", "model": "i5" } }

    What I need is the count of every params value, like:

        brand
        ---------
        BMW (2)

        model
        ---------
        i3 (1)
        i5 (1)

    I think I have to write map/reduce functions. How can I do this? Thanks.
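
    A sketch of one way to write them, using pymongo to submit the JavaScript map/reduce pair (database and collection names are illustrative; inline output keeps the example self-contained):

        from bson.code import Code
        from bson.son import SON
        from pymongo import MongoClient

        db = MongoClient().mydb  # illustrative database name

        mapper = Code("""
            function () {
                for (var key in this.params) {
                    emit(key + ": " + this.params[key], 1);
                }
            }
        """)
        reducer = Code("""
            function (key, values) {
                return Array.sum(values);
            }
        """)

        result = db.command(SON([
            ("mapReduce", "vehicles"),  # illustrative collection name
            ("map", mapper),
            ("reduce", reducer),
            ("out", {"inline": 1}),
        ]))
        for row in result["results"]:
            print(row["_id"], row["value"])  # e.g. "brand: BMW" 2.0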

    Read the article

  • Is it possible to execute a function in Mongo that accepts any parameters?

    - by joshua.clayton
    I'm looking to write a function to do a custom query on a collection in Mongo. The problem is, I want to reuse that function. My thought was this (obviously contrived):

        var awesome = function(count) {
          return function() {
            return this.size == parseInt(count);
          };
        }

    So then I could do something along the lines of:

        db.collection.find(awesome(5));

    However, I get this error:

        error: { "$err" : "error on invocation of $where function: JS Error: ReferenceError: count is not defined nofile_b:1" }

    So it looks like Mongo isn't honoring scope, but I'm really not sure why. Any insight would be appreciated. To go into more depth of what I'd like to do: a collection of documents has lat/lng values, and I want to find all documents within a concave or convex polygon. I have the function written, but would ideally be able to reuse it, so I want to pass an array of points composing my polygon to the function I execute on Mongo's end. I've looked at Mongo's geospatial querying and it currently only supports circle and box queries - I need something more complex.
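
    What the error reveals: the function is serialized to a string and evaluated on the server, where the enclosing JavaScript scope (and hence count) no longer exists. A sketch of the usual workaround, assuming pymongo and illustrative names: interpolate the value into the $where source before sending it; an array of polygon points would be baked into the function body the same way (e.g. via JSON serialization):

        from pymongo import MongoClient

        collection = MongoClient().mydb.collection  # illustrative names

        def awesome(count):
            # bake the value into the JavaScript source; closures do not
            # survive the trip to the server
            return {"$where": "this.size == %d" % int(count)}

        docs = collection.find(awesome(5))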

    Read the article

  • So how does one use rockmongo to connect to a mongo sharded setup with replicasets?

    - by Tom
    I'm trying to use RockMongo to connect to our cluster. Our setup consists of two shards, each a replica set. I connect to the mongos instance, and while RockMongo connects, I get an error when trying to list the DBs:

        Execute failed: not master
        function (){ return db.getCollectionNames(); }

    This has something to do with the replica sets, and everybody points to:

        $MONGO["servers"][$i] = array("replicaSet" => "xxxxx");

    This is all fine, but I have two replica sets, and as far as I understand I should connect to the mongos instance and not directly to the members of a set. So how does one use RockMongo to connect to a mongo sharded setup with replica sets?

    Read the article

  • Ruby. Mongoid. Relations

    - by Scepion1d
    I've encountered some problems with Mongoid. I have three models:

        require 'mongoid'

        class Configuration
          include Mongoid::Document
          belongs_to :user
          field :links, :type => Array
          field :root, :type => String
          field :objects, :type => Array
          field :categories, :type => Array
          has_many :entries
        end

        class TimeDim
          include Mongoid::Document
          field :day, :type => Integer
          field :month, :type => Integer
          field :year, :type => Integer
          field :day_of_week, :type => Integer
          field :minute, :type => Integer
          field :hour, :type => Integer
          has_many :entries
        end

        class Entry
          include Mongoid::Document
          belongs_to :configuration
          belongs_to :time_dim
          field :category, :type => String
          # any other dynamic fields
        end

    Creating documents for Configurations and TimeDims succeeds. But when I try to execute the following code:

        params = Hash.new
        params[:configuration] = config # an instance of Configuration from DB
        entry.each do |key, value|
          params[key.to_sym] = value # String
        end
        unless Entry.exists?(conditions: params)
          params[:time_dim] = self.generate_time_dim # an instance of TimeDim from DB
          params[:category] = self.detect_category(descr) # String
          Entry.new(params).save
        end

    ... I see the following output:

        /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/bson-1.6.1/lib/bson/bson_c.rb:24:in `serialize': Cannot serialize an object of class Configuration into BSON. (BSON::InvalidDocument)
          from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/bson-1.6.1/lib/bson/bson_c.rb:24:in `serialize'
          from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongo-1.6.1/lib/mongo/cursor.rb:604:in `construct_query_message'
          from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongo-1.6.1/lib/mongo/cursor.rb:465:in `send_initial_query'
          from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongo-1.6.1/lib/mongo/cursor.rb:458:in `refresh'
          from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongo-1.6.1/lib/mongo/cursor.rb:128:in `next'
          from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongo-1.6.1/lib/mongo/db.rb:509:in `command'
          from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongo-1.6.1/lib/mongo/cursor.rb:191:in `count'
          from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongoid-2.4.6/lib/mongoid/cursor.rb:42:in `block in count'
          from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongoid-2.4.6/lib/mongoid/collections/retry.rb:29:in `retry_on_connection_failure'
          from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongoid-2.4.6/lib/mongoid/cursor.rb:41:in `count'
          from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongoid-2.4.6/lib/mongoid/contexts/mongo.rb:93:in `count'
          from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongoid-2.4.6/lib/mongoid/criteria.rb:45:in `count'
          from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongoid-2.4.6/lib/mongoid/finders.rb:60:in `exists?'
          from /home/scepion1d/Workspace/RubyMine/dana-x/crawler/crawler.rb:110:in `block (2 levels) in push_entries_to_db'
          from /home/scepion1d/Workspace/RubyMine/dana-x/crawler/crawler.rb:103:in `each'
          from /home/scepion1d/Workspace/RubyMine/dana-x/crawler/crawler.rb:103:in `block in push_entries_to_db'
          from /home/scepion1d/Workspace/RubyMine/dana-x/crawler/crawler.rb:102:in `each'
          from /home/scepion1d/Workspace/RubyMine/dana-x/crawler/crawler.rb:102:in `push_entries_to_db'
          from main_starter.rb:15:in `<main>'

    Can anyone tell me what I am doing wrong?

    Read the article

  • Problem with MongoDB Ruby Driver

    - by Paul
    I'm on Ubuntu, and I've run gem install mongo, which reported:

        Successfully installed bson-1.0
        Successfully installed mongo-1.0
        2 gems installed

    I've started mongod. Now I cd to the mongo gem directory and try

        > ruby examples/simple.rb

    and I get the error

        ./examples/../lib/mongo.rb:31:in `require': no such file to load -- bson (LoadError)
        from ./examples/../lib/mongo.rb:31
        from examples/simple.rb:3:in `require'
        from examples/simple.rb:3

    which I can't make sense of, since the bson gem is installed:

        > gem list

        *** LOCAL GEMS ***

        bson (1.0)
        bson_ext (1.0)
        mongo (1.0)
        rack (1.1.0)
        sinatra (1.0)

    Any suggestions what's up here?

    Read the article

  • Is there a better alternative to the Mongo shell?

    - by afvasd
    Dear Everyone, Is there a better shell than the native mongo shell? When I press UP I see ^[[A. Does the shell not support recalling the last query? Tabbing in the shell does not autocomplete either. And of course, a shell with syntax highlighting would be great. Is there an alternative that has these features, or at least some of them?

    Read the article

  • How to construct query to update nested array document in mongo?

    - by GowtGM
    I have the following document in mongo:

        {
          "_id" : ObjectId("506e9e54a4e8f51423679428"),
          "description" : "ffffffffffffffff",
          "menus" : [
            {
              "_id" : ObjectId("506e9e5aa4e8f51423679429"),
              "description" : "ffffffffffffffffffff",
              "items" : [
                { "name" : "xcvxc", "description" : "vxvxcvxc", "text" : "vxcvxcvx", "menuKey" : "0", "onSelect" : "1", "_id" : ObjectId("506e9f07a4e8f5142367942f") },
                { "name" : "abcd", "description" : "qqq", "text" : "qqq", "menuKey" : "0", "onSelect" : "3", "_id" : ObjectId("507e9f07a4e8f5142367942f") }
              ]
            },
            {
              "_id" : ObjectId("506e9e5aa4e8f51423679429"),
              "description" : "rrrrr",
              "items" : [
                { "name" : "xcc", "description" : "vx", "text" : "vxc", "menuKey" : "0", "onSelect" : "2", "_id" : ObjectId("506e9f07a4e8f5142367942f") }
              ]
            }
          ]
        }

    Now I want to update the following item:

        { "name" : "abcd", "description" : "qqq", "text" : "qqq", "menuKey" : "0", "onSelect" : "3", "_id" : ObjectId("507e9f07a4e8f5142367942f") }

    I have the main document id, "_id" : ObjectId("506e9e54a4e8f51423679428"), the menus id, "_id" : ObjectId("506e9e5aa4e8f51423679429"), and the id of the item to be updated, "_id" : ObjectId("507e9f07a4e8f5142367942f"). I have tried the following query:

        db.collection.update({ "_id" : { "$oid" : "506e9e54a4e8f51423679428"} , "menus._id" : { "$oid" : "506e9e5aa4e8f51423679429"}},{ "$set" : { "menus.$.items" : { "_id" : { "$oid" : "506e9f07a4e8f5142367942f"}} , "menus.$.items.$.name" : "xcvxc66666", ...}},false,false);

    but it's not working.
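
    For what it's worth, a sketch of the usual approach in this era of MongoDB, written with pymongo and illustrative names: the positional $ operator resolves only one array level, so the inner item must be addressed by its numeric index (MongoDB 3.6's arrayFilters later lifted this restriction). Note also that the {"$oid": ...} form is extended-JSON syntax; in a driver or the shell you pass real ObjectId values:

        from bson import ObjectId
        from pymongo import MongoClient

        coll = MongoClient().mydb.coll  # illustrative names

        coll.update_one(
            {"_id": ObjectId("506e9e54a4e8f51423679428"),
             "menus._id": ObjectId("506e9e5aa4e8f51423679429")},
            # "$" is the index of the matched menu; the item index (here 1)
            # must be known, since a second nested "$" is not supported
            {"$set": {"menus.$.items.1.name": "xcvxc66666"}},
        )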

    Read the article

  • Most proper way to use inherited classes with shared scopes in Mongo?

    - by Trip
    I have a TestVisual class that inherits from the Game class:

        class TestVisual < Game
          include MongoMapper::Document
        end

        class Game
          include MongoMapper::Document
          belongs_to :maestra
          key :incorrect, Integer
          key :correct, Integer
          key :time_to_complete, Integer
          key :maestra_id, ObjectId
          timestamps!
        end

    As you can see, it belongs to Maestra, so I can do Maestra.first.games, but I cannot do Maestra.first.test_visuals. Since I'm working specifically with TestVisuals, that is ideally what I would like to pull. Is this possible with Mongo? If it isn't, or if it isn't necessary, is there any other better way to reach the TestVisual object from Maestra and still have it inherit Game?

    Read the article

  • How to install 64bit version of Mongodb

    - by slownage
    How can I install the 64-bit (x86_64) version of MongoDB? I've specified the 64-bit tree in 10gen.repo:

        baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64

    But when I run:

        yum install mongo-10gen mongo-10gen-server

    it's the 32-bit version (see the i686) that is set to be installed:

        Failed to set locale, defaulting to C
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * base: mirror.fdcservers.net
         * epel: mirror.steadfast.net
         * extras: mirror.fdcservers.net
         * rpmforge: mirror.rit.edu
         * updates: mirror.fdcservers.net
        10gen | 951 B 00:00
        Not using downloaded repomd.xml because it is older than what we have:
          Current   : Tue Oct 30 15:55:02 2012
          Downloaded: Tue Oct 30 15:54:51 2012
        Setting up Install Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package mongo-10gen.i686 0:2.2.1-mongodb_1 will be installed
        ---> Package mongo-10gen-server.i686 0:2.2.1-mongodb_1 will be installed
        --> Finished Dependency Resolution

        Dependencies Resolved

        Package               Arch    Version            Repository    Size
        =====================================================================
        Installing:
         mongo-10gen          i686    2.2.1-mongodb_1    10gen         42 M
         mongo-10gen-server   i686    2.2.1-mongodb_1    10gen        6.5 M

        Transaction Summary
        =====================================================================
        Install 2 Package(s)

        Total download size: 48 M
        Installed size: 118 M

    I think I know why it wants to install the 32-bit version: the first time I created the 10gen.repo file I had the 32-bit link in it and installed the 32-bit packages, which I later deleted. Maybe something has been cached. Could someone help me out with this?
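
    If stale repository metadata is the culprit (a guess, since the repo file itself points at the x86_64 tree while i686 packages were resolved), the usual remedy is to flush yum's caches and retry; this assumes the OS itself is 64-bit (uname -m reports x86_64), because on a 32-bit kernel yum will always select i686:

        yum clean all
        yum install mongo-10gen mongo-10gen-server

    If i686 packages from the earlier attempt are still installed, removing them first with yum remove mongo-10gen mongo-10gen-server avoids architecture conflicts.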

    Read the article
