Search Results

Search found 666 results on 27 pages for 'mongodb'.

Page 17/27 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • What are some "mental steps" a developer must take to begin moving from SQL to NO-SQL (CouchDB, Fath

    - by Byron Sommardahl
    I have my mind firmly wrapped around relational databases and how to code efficiently against them. Most of my experience is with MySQL and SQL. I like many of the things I'm hearing about document-based databases, especially when someone in a recent podcast mentioned huge performance benefits. So, if I'm going to go down that road, what are some of the mental steps I must take to shift from SQL to NO-SQL? If it makes any difference in your answer, I'm a C# developer primarily (today, anyhow). I'm used to ORMs like EF and LINQ to SQL. Before ORMs, I rolled my own objects with generics and data readers. Maybe that matters, maybe it doesn't. Here are some more specific questions:

    - How do I need to think about joins?
    - How will I query without a SELECT statement?
    - What happens to my existing stored objects when I add a property in my code?

    (feel free to add questions of your own here)
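
    For a sense of the SELECT question, a minimal mongo shell sketch (collection and field names are made up for illustration): a query document replaces the WHERE clause, and there is usually no join to write because related data is embedded.

        // SQL: SELECT name, age FROM users WHERE age > 21 ORDER BY age DESC
        db.users.find({ age: { $gt: 21 } }, { name: 1, age: 1 }).sort({ age: -1 })

        // "Joins" are commonly avoided by embedding related documents instead:
        db.posts.insert({ title: "Hello", comments: [{ author: "Ann", text: "Hi!" }] })

    As for the third question: documents saved before a property existed simply lack that field until they are next written, so old records keep working, they just deserialize with a default value.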

    Read the article

  • Optimizing python code performance when importing zipped csv to a mongo collection

    - by mark
    I need to import a zipped csv into a mongo collection, but there is a catch - every record contains a timestamp in Pacific Time, which must be converted to the local time corresponding to the (longitude, latitude) pair found in the same record. The code looks like so:

        def read_csv_zip(path, timezones):
            with ZipFile(path) as z, z.open(z.namelist()[0]) as input:
                csv_rows = csv.reader(input)
                header = csv_rows.next()
                check, converters = get_aux_stuff(header)
                for csv_row in csv_rows:
                    if check(csv_row):
                        row = {
                            converter[0]: converter[1](value)
                            for converter, value in zip(converters, csv_row)
                            if allow_field(converter)
                        }
                        ts = row['ts']
                        lng, lat = row['loc']
                        found_tz_entry = timezones.find_one(SON({'loc': {'$within': {'$box': [
                            [lng - tz_lookup_radius, lat - tz_lookup_radius],
                            [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
                        if found_tz_entry:
                            tz_name = found_tz_entry['tz']
                            local_ts = ts.astimezone(timezone(tz_name)).replace(tzinfo=None)
                            row['tz'] = tz_name
                        else:
                            local_ts = (ts.astimezone(utc) + timedelta(hours=int(lng / 15))).replace(tzinfo=None)
                        row['local_ts'] = local_ts
                        yield row

        def insert_documents(collection, source, batch_size):
            while True:
                items = list(itertools.islice(source, batch_size))
                if len(items) == 0:
                    break
                try:
                    collection.insert(items)
                except:
                    for item in items:
                        try:
                            collection.insert(item)
                        except Exception as exc:
                            print("Failed to insert record {0} - {1}".format(item['_id'], exc))

        def main(zip_path):
            with Connection() as connection:
                data = connection.mydb.data
                timezones = connection.timezones.data
                insert_documents(data, read_csv_zip(zip_path, timezones), 1000)

    The code proceeds as follows:

    - Every record read from the csv is checked and converted to a dictionary, where some fields may be skipped, some titles renamed (from those appearing in the csv header), and some values converted (to datetime, to integers, to floats, etc.).
    - For each record read from the csv, a lookup is made into the timezones collection to map the record location to the respective time zone.
    - If the mapping is successful, that timezone is used to convert the record timestamp (Pacific Time) to the respective local timestamp.
    - If no mapping is found, a rough approximation is calculated.

    The timezones collection is appropriately indexed, of course - calling explain() confirms it. Still, the process is slow. Naturally, having to query the timezones collection for every record kills the performance. I am looking for advice on how to improve it. Thanks.

    EDIT

    The timezones collection contains 8,176,040 records, each shaped like this:

        > db.data.findOne()
        { "_id" : 3038814, "loc" : [ 1.48333, 42.5 ], "tz" : "Europe/Andorra" }

    EDIT2

    OK, I have compiled a release build of http://toblerity.github.com/rtree/ and configured the rtree package. Then I created an rtree dat/idx pair of files corresponding to my timezones collection, so instead of calling collection.find_one I call index.intersection. Surprisingly, not only is there no improvement, it now works even more slowly! Maybe rtree could be fine-tuned to load the entire dat/idx pair into RAM (704M), but I do not know how to do it. Until then, it is not an alternative. In general, I think the solution should involve parallelization of the task.

    EDIT3

    Profile output when using collection.find_one:

        >>> p.sort_stats('cumulative').print_stats(10)
        Tue Apr 10 14:28:39 2012    ImportDataIntoMongo.profile

        64549590 function calls (64549180 primitive calls) in 1231.257 seconds

        Ordered by: cumulative time
        List reduced from 730 to 10 due to restriction <10>

        ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
             1    0.012    0.012 1231.257 1231.257  ImportDataIntoMongo.py:1(<module>)
             1    0.001    0.001 1230.959 1230.959  ImportDataIntoMongo.py:187(main)
             1  853.558  853.558  853.558  853.558  {raw_input}
             1    0.598    0.598  370.510  370.510  ImportDataIntoMongo.py:165(insert_documents)
        343407    9.965    0.000  359.034    0.001  ImportDataIntoMongo.py:137(read_csv_zip)
        343408    2.927    0.000  287.035    0.001  c:\python27\lib\site-packages\pymongo\collection.py:489(find_one)
        343408    1.842    0.000  274.803    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:699(next)
        343408    2.542    0.000  271.212    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:644(_refresh)
        343408    4.512    0.000  253.673    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:605(__send_message)
        343408    0.971    0.000  242.078    0.001  c:\python27\lib\site-packages\pymongo\connection.py:871(_send_message_with_response)

    Profile output when using index.intersection:

        >>> p.sort_stats('cumulative').print_stats(10)
        Wed Apr 11 16:21:31 2012    ImportDataIntoMongo.profile

        41542960 function calls (41542536 primitive calls) in 2889.164 seconds

        Ordered by: cumulative time
        List reduced from 778 to 10 due to restriction <10>

        ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
             1    0.028    0.028 2889.164 2889.164  ImportDataIntoMongo.py:1(<module>)
             1    0.017    0.017 2888.679 2888.679  ImportDataIntoMongo.py:202(main)
             1 2365.526 2365.526 2365.526 2365.526  {raw_input}
             1    0.766    0.766  502.817  502.817  ImportDataIntoMongo.py:180(insert_documents)
        343407    9.147    0.000  491.433    0.001  ImportDataIntoMongo.py:152(read_csv_zip)
        343406    0.571    0.000  391.394    0.001  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:384(intersection)
        343406  379.957    0.001  390.824    0.001  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:435(_intersection_obj)
        686513   22.616    0.000   38.705    0.000  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:451(_get_objects)
        343406    6.134    0.000   33.326    0.000  ImportDataIntoMongo.py:162(<dictcomp>)
           346    0.396    0.001   30.665    0.089  c:\python27\lib\site-packages\pymongo\collection.py:240(insert)

    EDIT4

    I have parallelized the code, but the results are still not very encouraging. I am convinced it could be done better. See my own answer to this question for details.
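
    One direction that might help (a sketch of my own, not from the post - tz_cache, lookup_tz, and the reuse of tz_lookup_radius are assumptions): memoize lookups by rounding coordinates to the lookup radius, so records from nearby locations share one query instead of each hitting MongoDB.

        # Hypothetical memoization sketch; assumes input locations repeat often.
        tz_cache = {}

        def lookup_tz(timezones, lng, lat):
            # Quantize the coordinates so nearby points map to the same key.
            key = (round(lng / tz_lookup_radius), round(lat / tz_lookup_radius))
            if key not in tz_cache:
                tz_cache[key] = timezones.find_one(SON({'loc': {'$within': {'$box': [
                    [lng - tz_lookup_radius, lat - tz_lookup_radius],
                    [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
            return tz_cache[key]

    Whether this wins depends on how clustered the locations are; for uniformly scattered points it degenerates to the original one-query-per-record behaviour.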

    Read the article

  • Concept: Is mongo right for applying schemas?

    - by Jan
    I am currently in charge of checking whether it is worthwhile for one of our upcoming products to be developed on mongo. Without going too much into detail, I'll try to explain what the app does. The app simply has "entities". These entities are technical things, like cell phones, TVs, laptops, tablet PCs, and so forth. Of course, a cell phone has different attributes than a tablet PC, and a laptop has yet other attributes, like RAM, CPU, display size and so on. Now I want to have something that we want to call a schema: we define that for tablet PCs we need to save the display size, amount of RAM, size of flash storage, processor type, processor speed and so on. For cell phones we might save display size, GSM, EDGE, 3G, 4G, processor, RAM, touch screen technology, bla bla bla. I think you got it :) What I want to realize is that each "category" has a schema, and when one of the system's users enters a new product (let's say the new iPhone 4), the app constructs the form to be filled out with the appropriate attributes. So far it sounds nice and should not be a problem with mongo. But now comes the tough part, for which I could not find a clean solution... An attribute modeled in mongo looks like:

        { _id: 1234456, name: "Attribute name", type: 0, description: "..." }

    But what to do if I need this attribute in several languages, like:

        {
          en: { name: "Attribute name", type: 0, description: "..." },
          de: { name: "Name des Attributs", type: 0, description: "Beschreibung" }
        }

    I also need to ensure that the German attribute gets updated as soon as the English one gets updated, for instance when type changes from 0 to 1. Any ideas on that?
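
    One layout that sidesteps the synchronization problem (a sketch of my own, not from the post; the i18n field name is an assumption): keep the language-independent fields once per attribute and nest only the translated strings, so a change to type is a single-document update that every locale sees.

        {
          _id: 1234456,
          type: 0,
          i18n: {
            en: { name: "Attribute name", description: "..." },
            de: { name: "Name des Attributs", description: "Beschreibung" }
          }
        }

    Updating the shared field then touches one document atomically, e.g. db.attributes.update({ _id: 1234456 }, { $set: { type: 1 } }).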

    Read the article

  • How to construct query to update nested array document in mongo?

    - by GowtGM
    I have the following document in mongo:

        {
          "_id" : ObjectId("506e9e54a4e8f51423679428"),
          "description" : "ffffffffffffffff",
          "menus" : [
            {
              "_id" : ObjectId("506e9e5aa4e8f51423679429"),
              "description" : "ffffffffffffffffffff",
              "items" : [
                { "name" : "xcvxc", "description" : "vxvxcvxc", "text" : "vxcvxcvx",
                  "menuKey" : "0", "onSelect" : "1", "_id" : ObjectId("506e9f07a4e8f5142367942f") },
                { "name" : "abcd", "description" : "qqq", "text" : "qqq",
                  "menuKey" : "0", "onSelect" : "3", "_id" : ObjectId("507e9f07a4e8f5142367942f") }
              ]
            },
            {
              "_id" : ObjectId("506e9e5aa4e8f51423679429"),
              "description" : "rrrrr",
              "items" : [
                { "name" : "xcc", "description" : "vx", "text" : "vxc",
                  "menuKey" : "0", "onSelect" : "2", "_id" : ObjectId("506e9f07a4e8f5142367942f") }
              ]
            }
          ]
        }

    Now I want to update the following item:

        { "name" : "abcd", "description" : "qqq", "text" : "qqq",
          "menuKey" : "0", "onSelect" : "3", "_id" : ObjectId("507e9f07a4e8f5142367942f") }

    I have the main document id, ObjectId("506e9e54a4e8f51423679428"), the menu id, ObjectId("506e9e5aa4e8f51423679429"), as well as the id of the item to be updated, ObjectId("507e9f07a4e8f5142367942f"). I have tried the following query:

        db.collection.update(
          { "_id" : { "$oid" : "506e9e54a4e8f51423679428" },
            "menus._id" : { "$oid" : "506e9e5aa4e8f51423679429" } },
          { "$set" : { "menus.$.items" : { "_id" : { "$oid" : "506e9f07a4e8f5142367942f" } },
                       "menus.$.items.$.name" : "xcvxc66666", ... } },
          false, false);

    but it's not working...
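
    For what it's worth, a sketch of why this fails (my explanation, not from the post): MongoDB's positional operator "$" can only stand in for one array level per update, so the nested items element is usually addressed by its numeric index instead:

        // Mongo shell sketch; the index 1 is the position of the "abcd" item
        // inside menus.0.items and would have to be found by the application.
        db.collection.update(
          { "_id" : ObjectId("506e9e54a4e8f51423679428"),
            "menus._id" : ObjectId("506e9e5aa4e8f51423679429") },
          { "$set" : { "menus.$.items.1.name" : "xcvxc66666" } }
        );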

    Read the article

  • Defining a different primary key in MongoMapper

    - by ming yeow
    I am defining a primary key in MongoMapper:

        class B
          include MongoMapper::Document
          key :_id, String
          key :external_id, String
        end

    The problem is that every time I add a new record to B, it appears that I need to explicitly specify the _id, even though it is already held in the external id:

        B.new(:_id => "123", :external_id => "123")

    That does not quite make sense. There should be a way to specify external_id as the primary key, no?
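
    One possible workaround (an untested sketch of my own; the callback approach is an assumption, not confirmed MongoMapper doctrine): copy the external id into _id before the document is first created, so only external_id needs to be supplied.

        class B
          include MongoMapper::Document
          key :external_id, String

          before_create :use_external_id_as_pk

          private

          def use_external_id_as_pk
            self._id = external_id
          end
        end

        B.new(:external_id => "123")  # _id becomes "123" on save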

    Read the article

  • Calling functions outside paths

    - by user1775718
    In mongojs, when you do:

        var birds = db.birds.find(searchTerm, callback);

    ...how do you pass arguments to the callback? I've tried bind, as in:

        birds = db.birds.find(searchTerm, app.get('getBirds').bind(res));

    ...but to no avail. Just FYI, I'm trying to pass the response object of the GET route so that the callback can render using res.send(results). The other option is to call app.set('res', res); and then app.get('res') from the callback - I'm not sure this is a good idea. It works, but it doesn't obey the event loop model too well - I think the request back to the app may be costly? Any help would be gratefully accepted. :)
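
    A closure is the usual way out (a sketch of my own; the route path and error handling are made up): define the callback inline inside the route handler so it captures res directly, instead of binding it or stashing it on the app.

        // Hypothetical Express route; the inline callback closes over res.
        app.get('/birds', function (req, res) {
          db.birds.find(searchTerm, function (err, results) {
            if (err) { return res.send(500); }
            res.send(results);
          });
        });

    Storing res via app.set is also unsafe under concurrency: two overlapping requests would overwrite each other's response object.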

    Read the article

  • making a password-only auth with bcrypt and mongoose

    - by user3081123
    I want to create a service that lets you log in with only a password. You type a password; if this password exists, you are logged in, and if it doesn't, a username is generated and the password is encrypted. I'm having some misunderstandings and hope someone would help me see where I'm mistaken. I guess it would look somewhat like this in AngularJS. First we receive a password in the login controller:

        $scope.signup = function() {
          var user = {
            password: $scope.password,
          };
          $http.post('/auth/signup', user);
        };

    We send it via $http.post and receive it in our node server file. We are provided with a compare-password bcrypt function:

        userSchema.methods.comparePassword = function(candidatePassword, cb) {
          bcrypt.compare(candidatePassword, this.password, function(err, isMatch) {
            if (err) return cb(err);
            cb(null, isMatch);
          });
        };

    So now we create a function to catch our http request:

        app.post('/auth/signup', function(req, res, next) {

    Inside, we use the compare-password function to find out whether such a password exists yet. So we have to encrypt the password with bcrypt to make a comparison. First we hash it the same way as in .pre:

        var encPass;
        bcrypt.genSalt(10, function(err, salt) {
          if (err) return next(err);
          bcrypt.hash(req.body.password, salt, function(err, hash) {
            if (err) return next(err);
            encPass = hash;
          });
        });

    We have the encrypted password stored in encPass, so now we look for a user in the database with this password:

        User.findOne({ password: encPass }, function(err, user) {
          if (user) {
            // user exists, so we should pass this user's ID back to the
            // controller to display it in a view. I don't know how.
            res.send({user.name}) // like this? How should the controller
                                  // receive this? With $http.post?
          } else {

    and now, if the user doesn't exist, we create it with a user ID generated by my function:

            var nUser = new User({
              name: generId(),
              password: req.body.password
            });
            nUser.save(function(err) {
              if (err) return next(err);
            });
          }
        });

    Am I doing anything right? I'm pretty new to js and angular. If so - how do I throw the username back to the controller? If someone is interested - this service exists for 100+ symbol passphrases, so the possibility of entering the same passphrase as someone else is negligible. And yeah, if someone logged in under the password 123, another guy entering 123 will log in as the same user, but hey, you are warned to make a big passphrase. So I'm confident about the idea and I only need help with understanding and realization.
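
    One structural catch worth illustrating (my observation, not from the post): bcrypt generates a fresh random salt on every genSalt call, so hashing the same password twice yields different strings, and User.findOne({ password: encPass }) can never match. A deterministic digest is needed as the lookup key; a hypothetical sketch:

        // Sketch only; the passwordKey field name is an assumption.
        var crypto = require('crypto');

        var lookupKey = crypto.createHash('sha256')
                              .update(req.body.password)
                              .digest('hex');

        User.findOne({ passwordKey: lookupKey }, function (err, user) {
          if (err) return next(err);
          if (user) return res.send({ name: user.name });
          // otherwise create the user, storing lookupKey plus a bcrypt hash
        });

    The Angular controller then receives that JSON in the .success handler of its original $http.post call.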

    Read the article

  • syntax error, unexpected ',', expecting ')' RoR

    - by McDoku
    I am trying to build a collection_select from another model and I keep getting the above error. Looked everywhere, watched the RailsCasts, but nothing makes sense. _form.html.erb:

        <%= f.label :city %><br />
        <%= f.collection_select (:share ,:city_id, City.all , :id, :name ) %>

    It highlights 'form' in the error report:

        <h1>New share</h1>
        <%= render 'form' %>
        <%= link_to 'Back', shares_path %>

    Here are my models:

        class Share
          include Mongoid::Document
          field :name, type: String
          field :type, type: String
          field :summary, type: String
          field :description, type: String
          field :city, type: String
          embedded_in :city
          has_many :category
        end

        class City
          include Mongoid::Document
          embedded_in :share
          field :name, type: String
          field :country, type: String
          attr_accessible :name, :city_id, :id
        end

    I've searched everywhere and I cannot figure it out. It must be something silly.
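
    For reference, a likely fix (my reading of the error, not a confirmed answer): Ruby parses a space before the parenthesis as a method call followed by a standalone parenthesized expression, and (:share ,:city_id, ...) is not valid on its own, hence the unexpected ','. Removing the space gives:

        <%= f.collection_select(:city_id, City.all, :id, :name) %>

    The :share object name is dropped because the form builder f already carries it.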

    Read the article

  • Most proper way to use inherited classes with shared scopes in Mongo?

    - by Trip
    I have a TestVisual class that inherits from the Game class:

        class TestVisual < Game
          include MongoMapper::Document
        end

        class Game
          include MongoMapper::Document
          belongs_to :maestra
          key :incorrect, Integer
          key :correct, Integer
          key :time_to_complete, Integer
          key :maestra_id, ObjectId
          timestamps!
        end

    As you can see, it belongs to Maestra, so I can do Maestra.first.games. But I cannot do Maestra.first.test_visuals. Since I'm working specifically with TestVisuals, that is ideally what I would like to pull. Is this possible with Mongo? If it isn't, or if it isn't necessary, is there a better way to reach the TestVisual object from Maestra and still have it inherit Game?
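
    If it helps, a sketch of single collection inheritance as I understand it (my assumption, worth verifying against the MongoMapper docs): the subclass should not re-include MongoMapper::Document; plain inheritance then stores a type discriminator, and the subclass scopes queries to its own documents.

        class Game
          include MongoMapper::Document
          belongs_to :maestra
          # ... keys as above ...
        end

        class TestVisual < Game
        end

        Maestra.first.games                            # all games
        TestVisual.where(:maestra_id => m.id).all      # TestVisuals only (sketch)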

    Read the article

  • Is there a better alternative to the Mongo shell?

    - by afvasd
    Dear Everyone,

    Is there a better shell than the native mongo shell? When I press UP I see ^[[A, so the shell does not seem to support recalling the last query. Tabbing in the shell does not autocomplete either. And of course, a shell with syntax highlighting would be great. Is there an alternative that has these features, or at least some of them?

    Read the article

  • When to save a mongoose model

    - by kentcdodds
    This is an architectural question. I have models like this:

        var foo = new mongoose.Schema({
          name: String,
          bars: [{ type: ObjectId, ref: 'Bar' }]
        });
        var FooModel = mongoose.model('Foo', foo);

        var bar = new mongoose.Schema({
          foobar: String
        });
        var BarModel = mongoose.model('Bar', bar);

    Then I want to implement a convenience method like this:

        bar.methods.addFoo = function(foo) {
          foo.bars = foo.bars || []; // Side note, is this something I should check here?
          foo.bars.push(this.id);
          // Here's the line I'm wondering about... Should I include the line below?
          foo.save();
        }

    The biggest con I see is that if I did include foo.save(), then I should pass a callback into addFoo to avoid issues with the async operation. I'm thinking this is not preferable. But I also think it would be nice to include it, because addFoo hasn't really "addedFoo" until it's been saved... Am I breaking any design best practices doing it either way?
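
    A callback-accepting variant (a sketch in my wording; the error handling is illustrative) keeps the async contract explicit, so the caller knows when the foo has actually persisted:

        bar.methods.addFoo = function (foo, cb) {
          foo.bars = foo.bars || [];
          foo.bars.push(this._id);
          foo.save(cb); // the caller's callback fires once the write completes
        };

        // usage
        someBar.addFoo(someFoo, function (err) {
          if (err) return console.error(err);
          // safe to rely on foo.bars here
        });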

    Read the article

  • Query Mongo Db and filter by associative array key

    - by Failpunk
    How can I search MongoDB documents using an associative array key? Something like:

        SELECT * FROM table WHERE keyword LIKE '%searchterm%';

    Here is the basic document structure:

        [id] => 31605
        [keywords] => Array
            (
                [keyword1] => Array
                    (
                        [name] => KeyWord1
                    )
                [keyword2] => Array
                    (
                        [name] => KeyWord2
                    )
                ...
            )

    I would like to search within the keywords array on the associative array keys [keyword1, keyword2]. The issue is that the name key holds the case-sensitive version of the keyword, while the array key is the lower-case keyword name. I could store the lowercase keyword twice, but that seems silly.
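
    An exact-key lookup can lean on $exists with a dotted path (a sketch of my own using the legacy PHP driver; variable names are assumed):

        <?php
        // Keys are stored lower-case, so lower-case the search term first.
        $term = strtolower($searchterm);
        $cursor = $collection->find(array("keywords.$term" => array('$exists' => true)));

    True substring matching ('%searchterm%') has no key-based equivalent, though; that usually means restructuring keywords into an array of subdocuments and querying name with a case-insensitive regex.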

    Read the article

  • Loading a view routed by a URL parameter (e.g., /users/:id) in MEAN stack

    - by Matt Rowles
    I am having difficulties trying to load a user by their id; for some reason my http.get call isn't hitting my controller. I get the following error in the browser console:

        TypeError: undefined is not a function
            at new <anonymous> (http://localhost:9000/scripts/controllers/users.js:10:8)

    Update: I've fixed my code up as per the comments below, but now it just enters an infinite loop in the Angular users controller (see code below). I am using the Angular Express Generator for reference.

    Backend - nodejs, express, mongo

    routes.js:

        // not sure if this is required, but I have used it before?
        app.param('username', users.show);

        app.route('/api/users/:username')
          .get(users.show);

    controller.js:

        // This never gets hit
        exports.show = function (req, res, next, username) {
          User.findOne({ username: username })
            .exec(function (err, user) {
              req.user = user;
              res.json(req.user || null);
            });
        };

    Frontend - angular

    app.js:

        $routeProvider
          .when('/users/:username', {
            templateUrl: function( params ){
              return 'users/view/' + params.username;
            },
            controller: 'UsersCtrl'
          })

    services/user.js:

        angular.module('app')
          .factory('User', function ($resource) {
            return $resource('/api/users/:username', {
              username: '@username'
            }, {
              update: { method: 'PUT', params: {} },
              get: { method: 'GET', params: { username: 'username' } }
            });
          });

    controllers/users.js:

        angular.module('app')
          .controller('UsersCtrl', ['$scope', '$http', '$routeParams', '$route', 'User',
            function ($scope, $http, $routeParams, $route, User) {
              // this returns the error above
              $http.get( '/api/users/' + $routeParams.username )
                .success(function( user ) {
                  $scope.user = user;
                })
                .error(function( err ) {
                  console.log( err );
                });
          }]);

    If it helps, I'm using this setup
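
    One plausible culprit for the loop (my guess, not a verified diagnosis): the templateUrl function builds a per-user URL that the Angular router itself matches again, so each template fetch re-enters the route. Pointing templateUrl at a static partial and letting the controller fetch the data avoids that re-entry:

        // Sketch; the partial path is an assumption.
        $routeProvider.when('/users/:username', {
          templateUrl: 'views/user.html',
          controller: 'UsersCtrl'
        });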

    Read the article

  • Which NoSQL db to use with C?

    - by systemsfault
    Hello all, I'm working on an application that I'm going to write in C, and I am considering a NoSQL db for storing time-series data with at most 8 or 9 fields. Every 5 minutes there will be huge write operations, on the order of 2-10 million rows, followed by reads (though performance is not as crucial for the reads as for the writes). I'm considering a NoSQL db here to store the data, but couldn't decide which one to use. CouchDB seems to have a stable C driver called pillowtalk, but Mongo's driver doesn't look as promising as pillowtalk. I'm also open to other suggestions. What is your recommendation?

    Read the article

  • Mongo: Finding from multiple queries

    - by waxical
    New to Mongo here. I'm using the PHP lib and trying to work out how I can run a find against multiple values in one query. I could repeat the query with a different value each time, but I wondered if it can be done in one. I.e.:

        $idsToLookFor = array(2124, 4241, 5553);
        $query = $db->thisCollection->find(array('id' => $idsToLookFor));

    That's what I'd like to do, but it doesn't work. What I'm trying to do is find a set of results for all the ids at one time. Is this possible, or should I just do a findOne on each with a foreach/for?
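
    The $in operator is the usual answer here (a sketch based on the post's own variable names; legacy PHP driver syntax assumed):

        $idsToLookFor = array(2124, 4241, 5553);
        $query = $db->thisCollection->find(
            array('id' => array('$in' => $idsToLookFor))
        );

    That returns one cursor covering every matching id, so no foreach of findOne calls is needed.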

    Read the article

  • The White House goes open source and publishes its first application on GitHub; We The People runs on Drupal and MongoDB

    The White House goes open source and publishes its first application on GitHub; We The People runs on Drupal and MongoDB. It's a first: the White House has just released the first open-source application created by a government, available in its official GitHub repository. It is an application that lets any citizen create, vote on, and gather votes for a petition. This is the very code that powers the "We The People" application found on the White House website. It is, in fact, the fulfillment of a commitment made by President Barack Obama in September 2011: "Among our commitments, we are in the process of launching a...

    Read the article

  • Proper way to use hiera with puppetlabs-spec-helper?

    - by Lee Lowder
    I am trying to write some rspec tests for my modules. Most of them now use hiera. I have a .fixtures.yml:

        fixtures:
          repositories:
            stdlib: https://github.com/puppetlabs/puppetlabs-stdlib.git
            hiera-puppet: https://github.com/puppetlabs/hiera-puppet.git
          symlinks:
            mongodb: "#{source_dir}"

    and a spec/classes/mongodb_spec.rb:

        require 'spec_helper'

        describe 'mongodb', :type => 'class' do
          context "On an Ubuntu install, admin and single user" do
            let :facts do
              {
                :osfamily               => 'Debian',
                :operatingsystem        => 'Ubuntu',
                :operatingsystemrelease => '12.04'
              }
            end

            it {
              should contain_user('XXXX').with( { 'uid' => '***' } )
              should contain_group('XXXX').with( { 'gid' => '***' } )
              should contain_package('mongodb').with( { 'name' => 'mongodb' } )
              should contain_service('mongodb').with( { 'name' => 'mongodb' } )
            }
          end
        end

    but when I run the spec test, I get:

        # rake spec
        /usr/bin/ruby1.8 -S rspec spec/classes/mongodb_spec.rb --color
        F

        Failures:

          1) mongodb On an Ubuntu install, admin and single user
             Failure/Error: should contain_user('XXXX').with( { 'uid' => '***' } )
             LoadError:
               no such file to load -- hiera_puppet
             # ./spec/fixtures/modules/hiera-puppet/lib/puppet/parser/functions/hiera.rb:3:in `function_hiera'
             # ./spec/classes/mongodb_spec.rb:15

        Finished in 0.05415 seconds
        1 example, 1 failure

        Failed examples:

        rspec ./spec/classes/mongodb_spec.rb:14 # mongodb On an Ubuntu install, admin and single user

        rake aborted!
        /usr/bin/ruby1.8 -S rspec spec/classes/mongodb_spec.rb --color failed

        Tasks: TOP => spec_standalone
        (See full trace by running task with --trace)

    Module spec testing is relatively new, as is hiera, and so far I have been unable to find any suitable solutions (the back and forth on puppet-dev was interesting, but not helpful). What changes do I need to make to get this to work? Installing puppet from a gem and hacking on RUBYLIB isn't a viable solution due to corporate policy. I am using Ubuntu 12.04 LTS + Puppet 2.7.17 + hiera 0.3.0.
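
    One workaround that is sometimes suggested (a sketch, my assumption - not a confirmed fix for this exact setup): stub the hiera parser function in spec_helper.rb so the catalog compiles without loading the hiera_puppet library at all.

        require 'puppet'
        require 'rspec-puppet'

        RSpec.configure do |c|
          c.before(:each) do
            Puppet::Parser::Functions.newfunction(:hiera, :type => :rvalue) do |args|
              'stub-value' # every hiera() lookup returns this in specs
            end
          end
        end

    Tests that care about specific hiera values would need a smarter stub keyed on args[0].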

    Read the article

  • Running a Mongo Replica Set on Azure VM Roles

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/15/running-a-mongo-replica-set-on-azure-vm-roles.aspx

    Setting up a MongoDB Replica Set with a bunch of Azure VMs is straightforward stuff. Here's a step-by-step which gets you from 0 to a fully-redundant 3-node document database in about 30 minutes (most of which will be spent waiting for VMs to fire up).

    First, create yourself 3 VM roles, which is the minimum number of nodes you need for high availability. You can use any OS that Mongo supports. This guide uses Windows, but the only difference will be the mechanism for starting the Mongo service when the VM starts (Windows Service, daemon etc.).

    While the VMs are provisioning, download and install Mongo locally, so you can set up the replica set with the Mongo shell. We'll create our replica set from scratch, doing one machine at a time (if you have a single node you want to upgrade to a replica set, it's the same from step 3 onwards):

    1. Set up Mongo

    Log into the first node, download mongo and unzip it to C:. Rename the folder to remove the version - so you have c:\MongoDB\bin etc. - and create a new folder for the logs, c:\MongoDB\logs.

    2. Set up your data disk

    When you initialize a node in a replica set, Mongo pre-allocates a whole chunk of storage to use for data replication. It will use up to 5% of your data disk, so if you use a Windows VM image with a default 120Gb disk and host your data on C:, then Mongo will allocate 6Gb for replication. And that takes a while. Instead, you can create yourself a new partition by shrinking down the C: drive in Computer Management, by say 10Gb, and then creating a new logical disk for your data from that spare 10Gb, which will be allocated as E:. Create a new folder, e:\data.

    3. Start Mongo

    When that's done, start a command line, point to the mongo binaries folder, install Mongo as a Windows Service running in replica set mode, and start the service:

        cd c:\mongodb\bin
        mongod -logpath c:\mongodb\logs\mongod.log -dbpath e:\data -replSet TheReplicaSet --install
        net start mongodb

    4. Open the ports

    Mongo uses port 27017 by default, so you need to allow access in the machine and in Azure. In the VM, open Windows Firewall and create a new inbound rule to allow access via port 27017. Then, in the Azure Management Console for the VM role, under the Configure tab, add a new rule, again to allow port 27017.

    5. Initialise the replica set

    Start up your local mongo shell, connecting to your Azure VM, and initiate the replica set:

        c:\mongodb\bin\mongo sc-xyz-db1.cloudapp.net
        rs.initiate()

    This is the bit where the new node (at this point the only node) allocates its replication files, so if your data disk is large, this can take a long time. (If you're using the default C: drive with 120Gb, it may take so long that rs.initiate() never responds; if you're sat waiting more than 20 minutes, start another instance of the mongo shell pointing to the same machine to check on it.) Run rs.conf() and you should see one node configured.

    6. Fix the host name for the primary - *don't miss this one*

    For the first node in the replica set, Mongo on Windows doesn't populate the full machine name. Run rs.conf() and the name of the primary is sc-xyz-db1, which isn't accessible to the outside world. The replica set configuration needs the full DNS name of every node, so you need to manually rename it in your shell, which you can do like this:

        cfg = rs.conf()
        cfg.members[0].host = 'sc-xyz-db1.cloudapp.net:27017'
        rs.reconfig(cfg)

    When that returns, rs.conf() will have your full DNS name for the primary, and the other nodes will be able to connect. At this point you have a working database, so you can start adding documents, but there's no replication yet.

    7. Add more nodes

    For the next two VMs, follow steps 1 through 4, which will give you a working Mongo database on each node, which you can add to the replica set from the shell with rs.add(), using the full DNS name of the new node and the port you're using:

        rs.add('sc-xyz-db2.cloudapp.net:27017')

    Run rs.status() and you'll see your new node in STARTUP2 state, which means it's initializing and replicating from the PRIMARY. Repeat for your third node:

        rs.add('sc-xyz-db3.cloudapp.net:27017')

    When all nodes are finished initializing, you will have a PRIMARY and two SECONDARY nodes showing in rs.status(). Now you have high availability, so you can happily stop db1, and one of the other nodes will become the PRIMARY with no loss of data or service.

    Note - the process for AWS EC2 is exactly the same, but with one important difference. On the Azure Windows Server 2012 base image, the MongoDB release for 64-bit 2008R2+ works fine, but on the base 2012 AMI that release keeps failing with a UAC permission error. The standard 64-bit release is fine, but it lacks some optimizations that are in the 2008R2+ version.

    Read the article

  • Best database setup for one click games

    - by ewizard
    I am building a one-click game website/mobile app, and I am debating between using MySQL and MongoDB for the backend. I have been exploring it with a NodeJS/Express/Angular/Passport/MongoDB stack, and I have also implemented Socket.io. I have gotten to the point where I am sending data from the Flash game to the server (NodeJS). The only data that needs to be sent is basic user information, each player's score at the end of each game, and some x,y positions from each player's game (for anti-cheating). It seems like MySQL would work fine, but as I am already using MongoDB - are there any major drawbacks to continuing with MongoDB on this project?

    Read the article

  • Problem adding public key for apt

    - by highBandWidth
    I was trying to get the official mongodb package for Ubuntu, following the instructions at http://www.mongodb.org/display/DOCS/Ubuntu+and+Debian+packages

    After adding the line

        deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen

    to my sources, I need to add the PGP key, since synaptic says:

        W: GPG error: http://downloads-distro.mongodb.org dist Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 9ECBEC467F0CEB10

    Again following the instructions, I did:

        sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10

    This says:

        Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /etc/apt/secring.gpg --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver keyserver.ubuntu.com --recv 7F0CEB10
        gpg: requesting key 7F0CEB10 from hkp server keyserver.ubuntu.com
        ?: keyserver.ubuntu.com: Connection refused
        gpgkeys: HTTP fetch error 7: couldn't connect: Connection refused
        gpg: no valid OpenPGP data found.
        gpg: Total number processed: 0

    Interestingly, I also get:

        $ apt-key list
        gpg: fatal: /home/myname/.gnupg: directory does not exist!
        secmem usage: 0/0 bytes in 0/0 blocks of pool 0/32768

    How can I get apt to use this source?
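
    Since the failure is a refused connection to the keyserver, one common workaround (my suggestion, not from the post) is to fetch the key over port 80, which firewalls rarely block, instead of the default HKP port 11371:

        sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10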

    Read the article

  • Proper SSH keys location for a system user ?

    - by Thibaut Barrère
    I have a system account with which I run a database (namely mongodb). By default it has no home. Now I'd like to trigger scp commands from that account, with SSH key authentication to a remote server, to export backups. Should I just create /home/mongodb and /home/mongodb/.ssh folders manually to store the SSH keys, like the defaults for regular users? Is it still considered a system account after that? Thanks!
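
    For what it's worth, a sketch of how that might look (paths and user name assumed; sshd is picky about these permissions):

        sudo mkdir -p /home/mongodb/.ssh
        sudo chown -R mongodb:mongodb /home/mongodb
        sudo chmod 700 /home/mongodb/.ssh
        sudo usermod -d /home/mongodb mongodb      # point the account's home at the new folder
        sudo -u mongodb ssh-keygen -t rsa -f /home/mongodb/.ssh/id_rsa -N ''

    Whether an account is a "system account" is determined by its UID range, not by whether it has a home directory, so this change doesn't alter that.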

    Read the article

  • Can you use MongoDB map/reduce to migrate data?

    - by Brian Armstrong
    I have a large collection where I want to modify all the documents by populating a field. A simple example might be caching the comment count on each post:

        class Post
          field :comment_count, type: Integer
          has_many :comments
        end

        class Comment
          belongs_to :post
        end

    I can run it serially with something like:

        Post.all.each do |p|
          p.update_attribute :comment_count, p.comments.count
        end

    But it's taking 24 hours to run (it's a large collection). I was wondering if mongo's map/reduce could be used for this, but I haven't seen a great example yet. I imagine you would map over the comments collection and then store the reduced results in the posts collection. Am I on the right track?
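
    A sketch of the map/reduce shape (my own, in mongo shell syntax; collection and field names are guesses): emit one count per comment, sum per post, and write the totals to a side collection that a follow-up pass copies into comment_count.

        var map = function () { emit(this.post_id, 1); };
        var reduce = function (key, values) { return Array.sum(values); };
        db.comments.mapReduce(map, reduce, { out: "post_comment_counts" });
        // then iterate post_comment_counts and $set comment_count on each post

    The win over the serial loop is that the counting happens inside the database rather than one round-trip per post.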

    Read the article

  • MongoDB, LINQ driver: how to construct Contains with variable OR statements

    - by Syska
    I'm using the LINQ driver for C#, which works great; I'm sorting on a lot of properties, but here's a problem I can't solve, though it's probably simple:

        var identifierList = new[] { "10", "20", "30" };
        var newList = list.Where(x => identifierList.Contains(x.Identifier));

    This is NOT supported... So I could do something like:

        var newList = list.Where(x => x.Identifier == "10" || x.Identifier == "20" || x.Identifier == "30");

    But since the list is variable... how do I construct the above? Or are there even better alternatives? The list is of type IQueryable<MyCustomClass>.

    For information, this is used as a filter over a lot of properties. In SQL I could have a parent-child relationship, but since I can't ask the parent for the main ID, I need to take all the IDs out and then construct the query like this. Hope this makes sense. If needed I will explain more.
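
    One alternative worth sketching (my assumption, based on the 1.x driver's query builders rather than LINQ): Query.In translates directly to MongoDB's $in operator, which expresses exactly this variable-length OR.

        using System.Linq;
        using MongoDB.Bson;
        using MongoDB.Driver.Builders;

        var identifierList = new[] { "10", "20", "30" };
        // Each string converts to a BsonValue for the $in list.
        var query = Query.In("Identifier", identifierList.Select(id => (BsonValue)id));
        var newList = collection.Find(query);

    Whether the LINQ provider accepts Contains may depend on the driver version; later releases added support for it.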

    Read the article
