Search Results

Search found 1015 results on 41 pages for 'mongodb csharp'.

Page 3 of 41

  • Write-only collections in MongoDB

    - by rcoder
    I'm currently using MongoDB to record application logs, and while I'm quite happy with both the performance and with being able to dump arbitrary structured data into log records, I'm troubled by the mutability of log records once stored. In a traditional database, I would structure the grants for my log tables such that the application user had INSERT and SELECT privileges, but not UPDATE or DELETE. Similarly, in CouchDB, I could write an update validator function that rejected all attempts to modify an existing document. However, I've been unable to find a way to restrict operations on a MongoDB database or collection beyond the three access levels (no access, read-only, "god mode") documented in the security topic on the MongoDB wiki.

    Has anyone else deployed MongoDB as a document store in a setting where immutability (or at least change tracking) for documents was a requirement? What tricks or techniques did you use to ensure that poorly written or malicious application code could not modify or destroy existing log records? Do I need to wrap my MongoDB logging in a service layer that enforces the write-only policy, or can I use some combination of configuration, query hacking, and replication to ensure a consistent, auditable record is maintained?
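
    The service-layer wrapper the question ends with can be quite thin. Below is a minimal sketch of such a facade, written against the current official MongoDB .NET driver (which postdates this question, so treat the API choice as an assumption); the class, database, and collection names are placeholders:

        using System.Collections.Generic;
        using MongoDB.Bson;
        using MongoDB.Driver;

        // Hypothetical append-only facade: callers get an insert and a read
        // method, but no handle that could update or delete existing records.
        public class AppendOnlyLog
        {
            private readonly IMongoCollection<BsonDocument> _logs;

            public AppendOnlyLog(string connectionString)
            {
                _logs = new MongoClient(connectionString)
                    .GetDatabase("logging")
                    .GetCollection<BsonDocument>("records");
            }

            // INSERT: the only write operation exposed.
            public void Append(BsonDocument record) => _logs.InsertOne(record);

            // SELECT: read-only access for consumers.
            public List<BsonDocument> Query(FilterDefinition<BsonDocument> filter) =>
                _logs.Find(filter).ToList();
        }

    Code above the facade can no longer update or delete log records, although anything holding the raw connection string can still bypass it; that residual risk is exactly what server-side grants would otherwise cover.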

  • mongodb read/write performance and mongo hosting in the cloud

    - by z3cko
    We are currently developing a high-traffic Rails application with Facebooker (a Facebook game). Since Amazon SimpleDB (AWS SDB) is really slow, we are thinking of using a dedicated MongoDB server, as offered by MongoHQ for example. Questions:

    What is the read/write peak value for a MongoDB server running on an Amazon EC2 instance?

    What would be a recommended setup for an EC2-hosted app with MongoDB: a master on Amazon EBS and replicas on the EC2 instances? Any examples or experiences?

    Is there a company that offers MongoDB hosting in the cloud?

    Thanks, mz

  • How do I know when my MongoDB database is near its size limit?

    - by shingara
    I installed a MongoDB database on my server. My server is 32-bit and I can't change that soon. When you use MongoDB on a 32-bit architecture you have a limit of about 2.5 GB of data, as mentioned in this MongoDB blog post. The thing is that I have several databases. So how can I know whether or not I am close to this limit?
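
    One way to watch that limit, sketched here with the official MongoDB .NET driver as an assumption (the mongo shell's db.stats() gives the same numbers per database): listDatabases reports the on-disk size of every database, so the sizes can be summed and compared against the 32-bit ceiling.

        using System;
        using MongoDB.Bson;
        using MongoDB.Driver;

        class SizeCheck
        {
            static void Main()
            {
                var client = new MongoClient("mongodb://localhost:27017");

                // listDatabases reports on-disk size per database; on a 32-bit
                // mongod the total must stay under roughly 2.5 GB.
                var result = client.GetDatabase("admin")
                    .RunCommand<BsonDocument>(new BsonDocument("listDatabases", 1));

                double totalMb = 0;
                foreach (var db in result["databases"].AsBsonArray)
                {
                    string name = db["name"].AsString;
                    double sizeMb = db["sizeOnDisk"].ToDouble() / (1024 * 1024);
                    totalMb += sizeMb;
                    Console.WriteLine("{0}: {1:F1} MB", name, sizeMb);
                }
                Console.WriteLine("Total: {0:F1} MB of ~2560 MB", totalMb);
            }
        }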

  • MongoDB vs CouchDB (Speed optimization)

    - by Edward83
    Hi! I made some speed tests to compare MongoDB and CouchDB. Only inserts were measured. MongoDB came out about 15x faster than CouchDB. I know that this is because of sockets vs. HTTP, but it is very interesting to me: how can I optimize inserts in CouchDB? Test platform: Windows XP SP3, 32-bit. I used the latest versions of MongoDB, the MongoDB C# driver, and the latest Windows installation package of CouchDB. Thanks!
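
    The usual first lever for CouchDB insert speed is batching: instead of one HTTP request per document, POST many documents in a single call to the database's _bulk_docs endpoint. A rough C# sketch, assuming a local CouchDB on the default port and a database named testdb (both placeholders):

        using System.Net.Http;
        using System.Text;
        using System.Threading.Tasks;

        class CouchBulkInsert
        {
            static async Task Main()
            {
                using (var http = new HttpClient())
                {
                    // Three hypothetical documents sent in one round trip.
                    var body = @"{""docs"": [{""x"": 1}, {""x"": 2}, {""x"": 3}]}";

                    var response = await http.PostAsync(
                        "http://localhost:5984/testdb/_bulk_docs",
                        new StringContent(body, Encoding.UTF8, "application/json"));

                    response.EnsureSuccessStatusCode();
                }
            }
        }

    Since per-request HTTP overhead is what the question identifies, amortizing it over many documents per call is likely where much of the 15x gap comes from.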

  • Database over 2GB in MongoDB

    - by configurator
    We've got a file-based program we want to convert to use a document database, specifically MongoDB. Problem is, MongoDB is limited to 2GB on 32-bit machines (according to http://www.mongodb.org/display/DOCS/FAQ#FAQ-Whatarethe32bitlimitations%3F), and a lot of our users will have over 2GB of data. Is there a way to have MongoDB use more than one file somehow? I thought perhaps I could implement sharding on a single machine, meaning I'd run more than one mongod on the same machine and they'd somehow communicate. Could that work?

  • MongoDB or CouchDB - fit for production?

    - by Alan
    I was wondering if anyone can tell me whether MongoDB or CouchDB is ready for a production environment. I'm now looking at these storage solutions (I'm favouring MongoDB at the moment), but these projects are quite young, so I foresee that I'm going to have to work quite hard to convince my manager that we should adopt this new technology. What I'd like to know is:

    1) Who is using MongoDB or CouchDB today in a production environment?
    2) How are you using MongoDB/CouchDB?
    3) What problems (if any) did you come across when you adopted this new storage mechanism (and how did you overcome them)?
    4) How did you deal with any migration issues?
    5) Do you have any good/bad experiences with either of these solutions that you'd like to share?

    Thanks.

  • Storing data in XML or MongoDB

    - by user766473
    Here is my use case:

    1. I have some data which I am currently storing in XML files. The data is not persistent, i.e. I delete the user's data once the user logs out.
    2. My server communicates with the client using XML requests and responses. So initially we decided that since we send XML as the response, we should store the data as XML, saving the conversion time from database format to XML.
    3. The client will request XML based on some filter conditions, so we would have to use XQuery.
    4. There will be at most 100 entries in an XML file, at least for now.

    Now I would like to hear some advice on whether I should use XML or MongoDB. My concerns:

    1. How good is it to store temporary data in MongoDB and delete it (or take a backup) once the session is done?
    2. Conversion from MongoDB's JSON format to XML.
    3. Handling changes in the schema design.

    I can't use any database other than MongoDB, as some persistent operations are already done on it. Thanks in advance.
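
    On concern 2, one possible approach (an assumption, not something from the question) is to let a JSON library do the MongoDB-to-XML conversion. A rough sketch using the official .NET driver plus Newtonsoft.Json; the database, collection, and root element names are placeholders:

        using System;
        using MongoDB.Bson;
        using MongoDB.Driver;
        using Newtonsoft.Json;

        class JsonToXml
        {
            static void Main()
            {
                var collection = new MongoClient("mongodb://localhost:27017")
                    .GetDatabase("sessions")
                    .GetCollection<BsonDocument>("userData");

                // _id is excluded because extended-JSON fields such as $oid
                // do not map to legal XML element names.
                var doc = collection
                    .Find(FilterDefinition<BsonDocument>.Empty)
                    .Project<BsonDocument>(Builders<BsonDocument>.Projection.Exclude("_id"))
                    .FirstOrDefault();

                if (doc != null)
                {
                    string json = doc.ToJson();
                    var xml = JsonConvert.DeserializeXmlNode(json, "response");
                    Console.WriteLine(xml.OuterXml);
                }
            }
        }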

  • MongoDB: ReplicaSet slower than a corresponding Master/Slave config

    - by SecondThought
    Is it true that MongoDB configured as a replica set (let's say two nodes plus an arbiter) will always be slower than the same DB and server specs configured as a single master? I've run some tests and found that for a fresh DB the replica set is a little quicker than the master/slave config, but once the DB grows beyond ~100k records the latter gets much snappier. Am I missing something here? PS: I was testing it with the Mongoid driver for Ruby.

  • Run a MongoDB configuration server without 3GB of journal files

    - by Thilo
    For a production sharded MongoDB installation we need 3 configuration servers. According to the documentation, "the config server mongod process is fairly lightweight and can be run on machines performing other work". However, in the default configuration they all have journalling enabled, and with preallocation this takes up 3 GB of disk space. I assume that the actual data and transaction volume of a config server is quite small, so this seems a bit too much. Is there a way to (safely!) run these config servers with much less disk use for the journal? Do I need journalling at all on config servers? Can I set the journal size to be smaller?
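
    For what it's worth, 2.x-era mongod exposes a couple of relevant knobs (whether they fit this deployment is an assumption): smallfiles caps journal files at 128 MB instead of 1 GB, and nojournal disables journalling entirely. A sketch of an INI-style config for a config server; the path and port are placeholders:

        # Hypothetical config-server settings.
        configsvr = true
        dbpath = /var/lib/mongo-configdb
        port = 27019

        # smallfiles caps preallocated data files at 512 MB and journal
        # files at 128 MB each, shrinking the ~3 GB journal preallocation.
        smallfiles = true

        # Or drop journalling altogether, if the crash-recovery trade-off
        # is acceptable for a config server:
        # nojournal = true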

  • MongoDB ReplicaSet Elections when some nodes are down

    - by SecondThought
    I'm trying to get into the replica set concept and found something weird in the MongoDB documentation:

    "For a node to be elected primary, it must receive a majority of votes. This is a majority of all votes in the set: if you have a 5-member set and 4 members are down, a majority of the set is still 3 members (floor(5/2)+1). Each member of the set receives a single vote and knows the total number of available votes. If no node can reach a majority, then no primary can be elected and no data can be written to that replica set (although reads to secondaries are still possible)." (taken from here)

    So, if I got that right, in the 5-member case mentioned there, the one node that's still standing WILL NOT be chosen as primary and the whole set will not accept any writes? And that's even if this single node was the last primary before the election? If that's true, there can be many less radical cases which end up with a degenerate set. How can we avoid this?

  • Login authentication vanished from MongoDB install

    - by Robert Oschler
    A few months ago I enabled password protection on my MongoDB install. Today I ran the Mongo client and forgot to use my login details. Instead of rejecting nearly everything I tried to do from the shell, like it should, it gave me complete access to all the databases and collections. Fortunately this instance is only running a few test apps, so I quickly shut down the mongod instance until I figure this out. Has anybody ever seen this kind of behavior before, and do you know what is going on? The mongod instance is running on a Linux VM hosted by Azure. The only thing I can think of is that perhaps Azure restored an old copy of the VM, but I received no e-mails to that effect and everything else on the server seems to be in order, including new daemon processes that I added after I enabled password protection on mongod.

  • Is this a valid backup strategy for MongoDB?

    - by James Simpson
    I've got a single dedicated server with a MongoDB database of around 10GB. I need to do daily backups, but I can't have downtime with the database. Is it possible to use a replica set on a single disk (with 2 instances of mongod running on different ports), and simply take the secondary one offline and back up the data files to offsite storage such as S3 (journaling is turned on)? Or would using master/slave be better than a replica set? Is this viable, and if so, what potential problems could I have? If not, how do I conceptualize this to work?
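
    Whichever topology is chosen, the copy step usually happens around an fsync+lock on the member being backed up, so the data files are quiescent while they are read. A sketch with the official MongoDB .NET driver, assuming a server recent enough to support the fsyncUnlock command (older servers used a special query to unlock); the port is a placeholder for wherever the second instance listens:

        using System;
        using MongoDB.Bson;
        using MongoDB.Driver;

        class LockedBackup
        {
            static void Main()
            {
                // Connect directly to the member you intend to back up.
                var admin = new MongoClient("mongodb://localhost:27018")
                    .GetDatabase("admin");

                // Flush writes to disk and block further writes on this member.
                admin.RunCommand<BsonDocument>(
                    new BsonDocument { { "fsync", 1 }, { "lock", true } });
                try
                {
                    // Copy the dbpath files to offsite storage (e.g. S3) here.
                    Console.WriteLine("Data files are quiescent; run the copy now.");
                }
                finally
                {
                    // Release the write lock.
                    admin.RunCommand<BsonDocument>(new BsonDocument("fsyncUnlock", 1));
                }
            }
        }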

  • MongoDB PHP EC2 Setup Configuration

    - by nathansizemore
    I am new to web development and server setup. I am looking for some advice, or a link to a tutorial, on setting up a production system. Right now I have one server (Ubuntu, Apache, MongoDB, and PHP). It receives a request, PHP queries Mongo, and PHP sends out the requested data. How do I make that work with more servers? I've read that you can make a cluster of a primary and two slave nodes which run Mongo as separate servers, but do those also run PHP? Or is the primary the only one running PHP? I have read some docs on the Mongo site and watched a video of someone from 10gen going through it, but they are geared towards people who already understand this stuff; I have no idea and need to start from the beginning. If anyone can help me understand where PHP (acting as my API) lives in these clusters, that would be greatly appreciated! Thanks in advance for any help!

  • mongodb eating 48G in 1min

    - by ledy
    In MongoDB I work with this collection:

        Size          55.93g
        Data Size     39.82g
        Storage Size  41.08g
        Extents       53
        Indexes       4
        Index Size    9.64g

    It takes only a few seconds of mongod being up with this single collection before all 48 GB of RAM on the dedicated server are gone. That's bad because there is also a mysqld + nginx/fcgi on this machine which together should be allowed to use at least 24 GB, i.e. the remaining 24 GB would be enough for mongod. However, it does not share in a fair way. Everybody says that the memory for mongod is managed by the OS and that it releases unneeded space when other processes demand RAM. On my machine it is not releasing RAM. What's wrong?

        free
                     total       used       free     shared    buffers     cached
        Mem:      49559136   49403908     155228          0      57284   47247564
        -/+ buffers/cache:   2099060   47460076
        Swap:      8008392        164    8008228

  • mongodb segmentation fault(11) macosx

    - by Wish
    I have a problem I can't figure out how to fix. I am on a Mac OS X machine, running PHP 5.3.15 with version 1.3.1 of the mongo extension. When I try to execute a PHP script which connects to a remote MongoDB server, I get segmentation fault (11). I installed the PHP driver with sudo pecl install mongo. I have seen that this problem is quite common, but haven't found a real solution yet. I don't know if I am asking this question on the correct Stack site; if you need anything else, just ask.

  • MongoDB: why is my mongo server using two PIDs?

    - by Lucas
    I started my mongo with the following command:

        [lucas@ecoinstance]~/node/nodetest2$ sudo mongod --dbpath /home/lucas/node/nodetest2/data
        2014-06-07T08:46:30.507+0000 [initandlisten] MongoDB starting : pid=6409 port=27017 dbpath=/home/lucas/node/nodetest2/data 64-bit host=ecoinstance
        2014-06-07T08:46:30.508+0000 [initandlisten] db version v2.6.1
        2014-06-07T08:46:30.508+0000 [initandlisten] git version: 4b95b086d2374bdcfcdf2249272fb552c9c726e8
        2014-06-07T08:46:30.508+0000 [initandlisten] build info: Linux build14.nj1.10gen.cc 2.6.32-431.3.1.el6.x86_64 #1 SMP Fri Jan 3 21:39:27 UTC 2014 x86_64 BOOST_LIB_VERSION=1_49
        2014-06-07T08:46:30.509+0000 [initandlisten] allocator: tcmalloc
        2014-06-07T08:46:30.509+0000 [initandlisten] options: { storage: { dbPath: "/home/lucas/node/nodetest2/data" } }
        2014-06-07T08:46:30.520+0000 [initandlisten] journal dir=/home/lucas/node/nodetest2/data/journal
        2014-06-07T08:46:30.520+0000 [initandlisten] recover : no journal files present, no recovery needed
        2014-06-07T08:46:30.527+0000 [initandlisten] waiting for connections on port 27017

    It appears to be working, as I can execute mongo and access the server. However, here are the processes running mongo:

        [lucas@ecoinstance]~/node/testSite$ ps aux | grep mongo
        root   6540  0.0  0.2  33424  1664 pts/3  S+  08:52  0:00 sudo mongod --dbpath /home/lucas/node/nodetest2/data
        root   6541  0.6  8.6 522140 52512 pts/3  Sl+ 08:52  0:00 mongod --dbpath /home/lucas/node/nodetest2/data
        lucas  6554  0.0  0.1   7836   876 pts/4  S+  08:52  0:00 grep mongo

    As you can see, there are two PIDs for mongo. Before I ran sudo mongod --dbpath /home/lucas/node/nodetest2/data, there were none (besides the grep, of course). How did my command spawn two PIDs, and should I be concerned? Any suggestions or tips would be great.

    Additional info: I may have other issues that might suggest a cause. I tried running mongo with --fork --logpath /home/lucas..., but it did not work. More information below:

        [lucas@ecoinstance]~/node/nodetest2$ sudo mongod --dbpath /home/lucas/node/nodetest2/data --fork --logpath /home/lucas/node/nodetest2/data/
        about to fork child process, waiting until server is ready for connections.
        forked process: 6578
        ERROR: child process failed, exited with error number 1

        [lucas@ecoinstance]~/node/nodetest2$ ls -l data/
        total 163852
        drwxr-xr-x 2 mongodb nogroup     4096 Jun  7 08:54 journal
        -rw------- 1 mongodb nogroup 67108864 Jun  7 08:52 local.0
        -rw------- 1 mongodb nogroup 16777216 Jun  7 08:52 local.ns
        -rwxr-xr-x 1 mongodb nogroup        0 Jun  7 08:54 mongod.lock
        -rw------- 1 mongodb nogroup 67108864 Jun  7 02:08 nodetest1.0
        -rw------- 1 mongodb nogroup 16777216 Jun  7 02:08 nodetest1.ns

    Also, my dbpath folder is not in its original location. It was originally created under the default /var/lib/mongodb/ and moved to my local data folder, after shutting down the server via /etc/init.d/mongod stop. I have a Debian Wheezy server, if it matters.

  • Using MongoDB with Ruby On Rails and the Mongomapper plugin

    - by Micke
    Hello, I am currently trying to learn Ruby on Rails; I am a long-time PHP developer, so I am building my own community-like site. I have come pretty far and have made the user models and such using MySQL. But then I heard of MongoDB, looked into it a little more, and I find it rather nice. So I have set it up, and I am using MongoMapper for the connection between Rails and MongoDB; I am now using it for the news page on the site. I also have a profile page for every user, which includes their own guestbook so other users can come to their profile and write a little message to them. My thought now is to change the user models from using MySQL to MongoDB. I can start by showing how the models for each user are set up.

    The User model:

        class User < ActiveRecord::Base
          has_one :guestbook, :class_name => "User::Guestbook"
        end

    The Guestbook model:

        class User::Guestbook < ActiveRecord::Base
          belongs_to :user
          has_many :posts, :class_name => "User::Guestbook::Posts", :foreign_key => "user_id"
        end

    And then the guestbook Posts model:

        class User::Guestbook::Posts < ActiveRecord::Base
          belongs_to :guestbook, :class_name => "User::Guestbook"
        end

    I have divided it like this for my own convenience, but now that I am going to try to migrate to MongoDB I don't know how to make the tables. I would like to have one table for each user, and in that table a "column" holding all the guestbook entries, since MongoDB can have an EmbeddedDocument. I would like to do this so I have just one table for each user instead of, like now, three tables just to be able to have a guestbook. So my thought is to have it like this:

    The User model:

        class User
          include MongoMapper::Document
          one :guestbook, :class_name => "User::Guestbook"
        end

    The Guestbook model:

        class User::Guestbook
          include MongoMapper::EmbeddedDocument
          belongs_to :user
          many :posts, :class_name => "User::Guestbook::Posts", :foreign_key => "user_id"
        end

    And then the guestbook Posts model:

        class User::Guestbook::Posts
          include MongoMapper::EmbeddedDocument
          belongs_to :guestbook, :class_name => "User::Guestbook"
        end

    But then I can think of one problem: when I just want to fetch user information like a nickname and a birthdate, it will have to fetch all the user's guestbook posts as well. And if each user has, say, a thousand posts in the guestbook, that is really a lot for the system to fetch. Or am I wrong? Do you think I should do it any other way? Thanks in advance, and sorry if I am hard to understand; I am not so educated in the English language. :)

  • php/mongodb: how does references work in php?

    - by harald
    Hello, I asked this in the MongoDB user group, but was not satisfied with the answer, so maybe someone at Stack Overflow can enlighten me. The question was:

        $b = array('x' => 1);
        $ref = &$b;
        $collection->insert($ref);
        var_dump($ref);

    $ref does not contain '_id', because it's a reference to $b, the handbook states. (The code snippet is taken from the PHP Mongo documentation.) I should add that:

        $b = array('x' => 1);
        $ref = $b;
        $collection->insert($ref);
        var_dump($ref);

    In this case $ref contains the _id. For those who do not know what the insert method of the MongoDB PHP driver does: it adds an _id key, because the array is passed by reference (note the $b with and without the referencing '&'). On the other hand...

        function test(&$data) {
            $data['_id'] = time();
        }
        $b = array('x' => 1);
        $ref =& $b;
        test($ref);
        var_dump($ref);

    Here $ref does contain _id when I call a userland function. My question is: how do the references in these cases differ? The question is probably not MongoDB-specific; I thought I knew how references in PHP work, but apparently I do not. The answer in the MongoDB user group was that this is simply the way references in PHP work. So... how do they work, explained with these two code snippets? Thanks in advance!

  • MongoDB and datasets that don't fit in RAM no matter how hard you shove

    - by sysadmin1138
    This is very system dependent, but chances are near certain we'll scale past some arbitrary cliff and get into Real Trouble. I'm curious what kind of rules of thumb exist for a good RAM to disk-space ratio. We're planning our next round of systems, and need to make some choices regarding RAM, SSDs, and how much of each the new nodes will get.

    But now for some performance details!

    During the normal workflow of a single project run, MongoDB is hit with a very high percentage of writes (70-80%). Once the second stage of the processing pipeline hits, it's extremely read-heavy, as it needs to deduplicate records identified in the first half of processing. This is the workflow that "keep your working set in RAM" is made for, and we're designing around that assumption.

    The entire dataset is continually hit with random queries from end-user-derived sources; though the frequency is irregular, the size is usually pretty small (groups of 10 documents). Since this is user-facing, the replies need to be under the "bored-now" threshold of 3 seconds. This access pattern is much less likely to be in cache, so it will be very likely to incur disk hits.

    A secondary processing workflow is a high-read pass over previous processing runs that may be days, weeks, or even months old; it is run infrequently but still needs to be zippy. Up to 100% of the documents in the previous processing run will be accessed. No amount of cache warming can help with this, I suspect.

    Finished document sizes vary widely, but the median size is about 8K.

    The high-read portion of the normal project processing strongly suggests the use of replicas to help distribute the read traffic. I have read elsewhere that a 1:10 RAM-GB to HD-GB ratio is a good rule of thumb for slow disks. As we are seriously considering using much faster SSDs, I'd like to know if there is a similar rule of thumb for fast disks.

    I know we're using Mongo in a way where cache-everything really isn't going to fly, which is why I'm looking at ways to engineer a system that can survive such usage. The entire dataset will likely be most of a TB within half a year and keep growing.

  • How to scale MongoDB

    - by terence410
    I know that MongoDB can scale vertically, but what about when I run out of disk? I am currently using EC2 with EBS. As you know, I have to allocate an EBS volume at a fixed size. What if the MongoDB data grows bigger than the EBS size? Do I have to create a larger EBS volume and copy the files over? Or should we start more MongoDB instances, each connected to a different EBS disk? In that case, I could connect to a different instance for different databases.

  • Looking for a generic handler/service for mongodb and asp.net / c#

    - by JohnAgan
    I am new to MongoDB and have a perfect place in mind to use it. However, it's only worth it if I can make the queries from JavaScript and get JSON back. I read another post on here of someone asking a similar question, but it wasn't specific to C#. What's the easiest way to implement a generic service/handler in ASP.NET/C# that would let me interact with MongoDB via JavaScript? I understand JavaScript can't call MongoDB directly, so the next best thing is what I'm looking for.
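
    A rough sketch of such a bridge, assuming classic ASP.NET (an IHttpHandler) and the official MongoDB .NET driver; the endpoint shape and query-string contract are invented for illustration, and a real version would whitelist collections and validate the filter instead of trusting client input:

        using System.Web;
        using MongoDB.Bson;
        using MongoDB.Driver;

        // Hypothetical endpoint: GET /mongo.ashx?collection=things&filter={"x":1}
        // Browser JavaScript calls it with XMLHttpRequest/fetch and gets JSON back.
        public class MongoHandler : IHttpHandler
        {
            private static readonly IMongoDatabase Db =
                new MongoClient("mongodb://localhost:27017").GetDatabase("mydb");

            public void ProcessRequest(HttpContext context)
            {
                string collection = context.Request["collection"];
                string filterJson = context.Request["filter"] ?? "{}";

                // BsonDocument.Parse accepts the JSON filter sent by the client.
                var docs = Db.GetCollection<BsonDocument>(collection)
                             .Find(BsonDocument.Parse(filterJson))
                             .ToList();

                context.Response.ContentType = "application/json";
                context.Response.Write(docs.ToJson());
            }

            public bool IsReusable => true;
        }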

  • Embedded MongoDB when running integration tests

    - by seanhodges
    My question is a variation of this one. Since my Java web-app project requires a lot of read filters/queries and interfaces with tools like GridFS, I'm struggling to think of a sensible way to simulate MongoDB in the way the above solution suggests. Therefore, I'm considering running an embedded instance of MongoDB alongside my integration tests. I'd like it to start up automatically (either for each test or for the whole suite), flush the database for every test, and shut down at the end. These tests might be run on development machines as well as the CI server, so my solution will also need to be portable. Can anyone with more knowledge of MongoDB help me get an idea of the feasibility of this approach, and/or suggest any reading material that might help me get started? I'm also open to other suggestions on how I could approach this problem...

  • How to batch retrieve documents with mongoDB?

    - by edude05
    Hello everyone, I have an application that queries data from MongoDB using the MongoDB C# driver, something like this:

        public void main() {
            foreach (int i in listOfKey) {
                list.Add(getObjFromDb(i));
            }
        }

        public myObject getObjFromDb(int primaryKey) {
            Document query = new Document();
            query["primKey"] = primaryKey;
            Document result = mongo["myDatabase"]["myCollection"].FindOne(query);
            return parseObject(result);
        }

    On my local (development) machine, getting 100 objects this way takes less than a second. However, I recently moved the database to a server on the internet, and this query takes about 30 seconds to execute for the same number of objects. Furthermore, looking at the MongoDB log, it seems to open about 8-10 connections to the DB to perform this query. So what I'd like to do is query the database for an array of primary keys and get them all back at once, then do the parsing in a loop afterwards, using one connection if possible. How could I optimize my query to do so? Thanks, --Michael
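
    The batched form of this is a single $in query over all the keys: one round trip and one connection instead of one findOne per key. Sketched here against the current official MongoDB .NET driver rather than the older community driver shown above, so the API names differ; the database and collection names are taken from the question, the host is a placeholder:

        using System.Collections.Generic;
        using MongoDB.Bson;
        using MongoDB.Driver;

        class BatchFetch
        {
            static List<BsonDocument> GetObjectsFromDb(
                IMongoCollection<BsonDocument> collection, List<int> keys)
            {
                // One query: { primKey: { $in: [keys...] } }
                var filter = Builders<BsonDocument>.Filter.In("primKey", keys);
                return collection.Find(filter).ToList();
            }

            static void Main()
            {
                var collection = new MongoClient("mongodb://remote-host:27017")
                    .GetDatabase("myDatabase")
                    .GetCollection<BsonDocument>("myCollection");

                var docs = GetObjectsFromDb(collection, new List<int> { 1, 2, 3 });
                // The question's parseObject(...) would then run over the
                // already-fetched documents in a local loop.
            }
        }

    With ~100 keys this collapses 100 network round trips into one, which is plausibly where most of the 30 seconds were going.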
