Search Results

Search found 1015 results on 41 pages for 'mongodb csharp'.


  • Are there any e-commerce websites that use NoSQL databases

    - by Saif Bechan
    I have read a lot lately about 'NoSQL' databases such as CouchDB, MongoDB, etc. Most of the sites I have seen using them are mainly text-based, such as The New York Times and SourceForge. I was wondering whether you could apply this to websites where payment is a huge issue. I am thinking of the following questions: How well can you secure the data? Do these systems provide an easy backup/restore mechanism? How are transactions handled (commit/rollback)? I have read the following articles that cover some aspects: "Can I do transactions and locks in CouchDB?" and "Pros/Cons of document based database vs relational database". In those posts the aspect of transactions is covered, but the questions of security and backups are not. Can someone shed some light on this subject? And if possible, does anyone know of e-commerce websites that have successfully implemented a document-based database?


  • Compressing a hex string in Ruby/Rails

    - by PreciousBodilyFluids
    I'm using MongoDB as a backend for a Rails app I'm building. Mongo, by default, generates 24-character hexadecimal ids for its records to make sharding easier, so my URLs wind up looking like example.com/companies/4b3fc1400de0690bf2000001/employees/4b3ea6e30de0691552000001, which is not very pretty. I'd like to stick to the Rails URL conventions, but also leave these ids as they are in the database. I think a happy compromise would be to compress these hex ids into shorter strings over a larger character set, so they'd look something like example.com/companies/3ewqkvr5nj/employees/9srbsjlb2r. Then in my controller I'd reverse the compression, get the original hex id and use that to look up the record. My question is, what's the best way to convert these ids back and forth? I'd of course want them to be as short as possible, but also URL-safe and simple to convert. Thanks!
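
    The conversion idea is language-agnostic (the question is about Ruby, but here is a quick sketch of the round trip in Python, with the alphabet and helper names invented for illustration): treat the 24-character hex id as one big integer and re-encode it in a larger, URL-safe alphabet.

        ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyz"   # base 36, URL-safe

        def shorten(hex_id):
            # interpret the hex id as an integer, then re-encode it in base 36
            n = int(hex_id, 16)
            out = ""
            while n:
                n, r = divmod(n, 36)
                out = ALPHABET[r] + out
            return out or "0"

        def expand(short_id):
            # reverse: parse the base-36 string and pad back out to 24 hex characters
            return format(int(short_id, 36), "024x")

    shorten("4b3fc1400de0690bf2000001") comes out around 19 characters, and expand() restores the original id exactly; a base-62 alphabet would shave off a couple more characters at the cost of a hand-rolled decoder.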


  • JSON documents and SQL database tables

    - by Sharmi
    Do JSON documents in RavenDB cost more than SQL Server tables in terms of storage and query costs? Also, for centralized access, which one is better? What are the disadvantages of NoSQL databases like RavenDB, CouchDB, MongoDB, etc.? I get that some of these are open source and support more data types like enums, objects, etc., but otherwise I don't see any big advantage. Currently there is a problem of storing a huge amount of logs from various locations. I am planning to suggest these to my manager, so I just need a clear idea.
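
    Not an answer to the RavenDB-versus-SQL cost question itself, but if MongoDB ends up on the shortlist for the centralized log store, a capped collection is the usual starting point: fixed size, insert-ordered, and old entries age out automatically. A minimal pymongo sketch (the host, database, collection name and size are all made up for illustration):

        from pymongo import Connection

        db = Connection("central-log-host").logging
        # fixed-size, insert-ordered collection; oldest entries are overwritten automatically
        db.create_collection("app_logs", capped=True, size=1024 * 1024 * 1024)   # ~1 GB cap
        db.app_logs.insert({"source": "web01", "level": "ERROR", "msg": "timeout talking to payment gateway"})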


  • Drive-space-hungry NoSQL databases

    - by forum_inquisitor
    I've tested NoSQL databases like CouchDB, MongoDB and Cassandra and observed a tendency to consume a very large amount of drive space relative to the inserted key-value pairs. When comparing CouchDB against a schemaless-style MySQL setup, CouchDB consumes much more drive space than MySQL. I know that these databases version documents by default, store long UUIDs and benefit from key optimization - the comparison was between about 15 million rows in MySQL and 1-5 million documents in the listed NoSQL DBs. My question is: is there any NoSQL database with good compaction/compression of data, so that I can have a NoSQL database with a size closer to 5GB than 50GB?
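
    For MongoDB specifically, in the versions current when this was asked, the usual levers are the per-collection compact command and repairDatabase; both rewrite the data files and reclaim padding and fragmentation overhead, though neither actually compresses documents. A hedged pymongo sketch, with placeholder database and collection names:

        from pymongo import Connection

        db = Connection().mydb
        db.command("compact", "events")   # defragments one collection; blocks that database while it runs
        db.command("repairDatabase")      # rewrites all data files; needs roughly the database's size in free disk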


  • Has anyone used an object database with a large amount of data?

    - by Jon Kruger
    Object databases like MongoDB and db4o are getting lots of pub lately. Everyone that plays with them seems to love it. I'm guessing that they are dealing with about 640K of data in their sample apps. Has anyone tried to use an object database with a large amount of data (say, 50GB or more)? Are you able to still execute complex queries against it (like from a search screen)? How does it compare to your usual relational database of choice? I'm just curious. I want to take the object database plunge, but I need to know if it'll work on something more than a sample app.


  • MapReduce results seem limited to 100?

    - by user1813867
    I'm playing around with Map Reduce in MongoDB and Python and I've run into a strange limitation. I'm just trying to count the number of "book" records. It works when there are fewer than 100 records, but when it goes over 100 records the count resets for some reason. Here is my MR code and some sample outputs:

        var M = function () {
            book = this.book;
            emit(book, {count : 1});
        }
        var R = function (key, values) {
            var sum = 0;
            values.forEach(function(x) { sum += 1; });
            var result = { count : sum };
            return result;
        }

    MR output when the record count is 99: {u'_id': u'superiors', u'value': {u'count': 99}}
    MR output when the record count is 101: {u'_id': u'superiors', u'value': {u'count': 2.0}}

    Any ideas?
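
    For what it's worth, the 100 boundary is a strong hint at the classic re-reduce pitfall: MongoDB may call the reduce function several times per key (in batches of roughly 100 values), feeding previously reduced results back in as values, so the reducer has to add up x.count rather than count the array entries. A sketch of the corrected functions as they might be run from pymongo (the collection and output names are guesses):

        from pymongo import Connection
        from bson.code import Code

        books = Connection().library.books    # placeholder database/collection

        mapper = Code("function () { emit(this.book, { count: 1 }); }")
        reducer = Code("function (key, values) {"
                       "  var sum = 0;"
                       "  values.forEach(function (x) { sum += x.count; });"   # not sum += 1
                       "  return { count: sum };"
                       "}")

        for doc in books.map_reduce(mapper, reducer, "book_counts").find():
            print(doc)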


  • Is there something like stored procedures in NoSQL databases?

    - by Amr ElGarhy
    I am new to the NoSQL world and still comparing NoSQL and SQL databases; I just tried making a few samples using MongoDB. I am asking about stored procedures: we send a few parameters to one stored procedure, and this procedure executes a number of other stored procedures in the database, getting data from some and passing it to others. In other words, the logic happens on the database side through a sequence of functions and stored procedures. Does that behavior, or something like it, already exist in NoSQL databases, or is it completely different and I am thinking about it the wrong way?
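
    The closest MongoDB equivalent, at least in the versions current when this was asked, is server-side JavaScript: functions saved in the system.js collection can be used from db.eval(), $where and map/reduce. It runs single-threaded on the server and is generally discouraged, so most designs keep this kind of orchestration in application code instead. A hedged pymongo sketch, assuming an older driver/server where save() and db.eval() are still available (the function and field names are invented):

        from pymongo import Connection
        from bson.code import Code

        db = Connection().shop

        # store a named server-side function -- the rough analogue of a stored procedure
        db.system.js.save({"_id": "orderTotal", "value": Code(
            "function (orderId) {"
            "  var o = db.orders.findOne({_id: orderId});"
            "  return o ? o.qty * o.price : null;"
            "}")})

        total = db.eval("orderTotal(42)")   # runs on the server and can call other stored functions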


  • Creating a User Registration Page using MongoEngine

    - by Drew Watkins
    I am currently working on a webapp, using MongoEngine and Django, which will require users to create an account from a registration page. I know MongoEngine has an authentication backend, but does it also include a registration form, etc., like Django itself does? If not, are there any example projects which show how to implement this? The only open-source MongoEngine project I've found is django-mumblr, but I can't find the examples I want in it. I'm not interested in alternative options, such as MongoKit or mango, for handling authentication. I am just getting started with Django and MongoDB, so please excuse my lack of knowledge. Thanks in advance for the help!
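
    For the record, MongoEngine (as of that era) ships the Django auth backend and a User document but no registration views or forms, so the usual approach is a small hand-written Django form whose view creates the user. A minimal, hypothetical sketch (the template and URL names and the exact create_user signature are assumptions):

        from django import forms
        from django.shortcuts import redirect, render
        from mongoengine.django.auth import User   # MongoEngine's Django auth document

        class RegistrationForm(forms.Form):
            username = forms.CharField(max_length=30)
            email = forms.EmailField()
            password = forms.CharField(widget=forms.PasswordInput)

        def register(request):
            form = RegistrationForm(request.POST or None)
            if request.method == "POST" and form.is_valid():
                # create the user document in MongoDB and send them to the login page
                User.create_user(form.cleaned_data["username"],
                                 form.cleaned_data["password"],
                                 form.cleaned_data["email"])
                return redirect("login")
            return render(request, "register.html", {"form": form})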


  • Mongoose Not Creating Indexes

    - by wintzer
    I have been trying all afternoon to get my node.js application to create MongoDB indexes properly. I am using the Mongoose ODM, and in my schema definition below I have the username field set to a unique index. The collection and documents all get created properly; it's just the indexes that aren't working. All the documentation says that the ensureIndex command should be run at startup to create any indexes, but none are being made. I'm using MongoLab for hosting, if that matters. I have also repeatedly dropped the collection. Please tell me what I'm doing wrong.

        var schemaUser = new mongoose.Schema({
            username: {type: String, index: { unique: true }, required: true},
            hash: String,
            created: {type: Date, default: Date.now}
        }, { collection:'Users' });
        var User = mongoose.model('Users', schemaUser);
        var newUser = new Users({username:'wintzer'})
        newUser.save(function(err) {
            if (err) console.log(err);
        });


  • manipulating 15+ million records in mysql with php?

    - by Nithish
    Hey, I have a user table containing 15+ million records, and in the registration function I want to check whether the username already exists. I added an index on the username column, but when I run the query "select count(uid) from users where username='webdev'" it keeps loading a blank screen and finally hangs. I'm doing this on my localhost with PHP 5 & MySQL 5. Please suggest some technique to handle this situation. Would MongoDB be a good alternative for handling this on our local machine? Thanks, Nithish.
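
    On the MongoDB half of the question: the check is cheap in either system once the column is indexed, and an existence check is cheaper than a full count because it can stop at the first match. A small pymongo sketch (names are placeholders); the equivalent MySQL trick is an index on username plus SELECT 1 ... LIMIT 1 instead of COUNT(*):

        from pymongo import Connection

        users = Connection().site.users
        users.ensure_index("username", unique=True)   # one-time setup; also guards against duplicate registrations

        # fetch at most one matching _id; None means the name is free
        taken = users.find_one({"username": "webdev"}, {"_id": 1}) is not None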


  • New replicaset resident memory is larger than the existing sets

    - by eded
    Following the MongoDB tutorial on how to resync a replica set member, I wiped all the files in /data/db and restarted the mongod process to resync the data. Everything looks OK: I get the same number of documents as the existing two members (the primary and one secondary). However, when I check the memory on MMS, it shows that my newly resynced member/mongod process has different memory status values than the other two. For the existing two, db.serverStatus().mem shows the following:

        "mem" : { "bits" : 64, "resident" : 239, "virtual" : 66348, "supported" : true, "mapped" : 32865, "mappedWithJournal" : 65730 }

    however, the newly resynced member shows:

        "mem" : { "bits" : 64, "resident" : 1239, "virtual" : 52447, "supported" : true, "mapped" : 25700, "mappedWithJournal" : 51400 }

    The resynced resident memory is 6-10 times more than the existing ones'. I wonder if this is normal because all the data comes in suddenly during the resync? Even the virtual and mapped values are different. Can anyone explain? Thanks.


  • How can I optimize this code?

    - by loop0
    Hi, I'm developing a logger daemon for squid to store the logs in a MongoDB database, but I'm experiencing too much CPU utilization. How can I optimize this code?

        from sys import stdin
        from pymongo import Connection

        connection = Connection()
        db = connection.squid
        logs = db.logs

        buffer = []
        a = 'timestamp'
        b = 'resp_time'
        c = 'src_ip'
        d = 'cache_status'
        e = 'reply_size'
        f = 'req_method'
        g = 'req_url'
        h = 'username'
        i = 'dst_ip'
        j = 'mime_type'
        L = 'L'

        while True:
            l = stdin.readline()
            if l[0] == L:
                l = l[1:].split()
                buffer.append({ a: float(l[0]), b: int(l[1]), c: l[2], d: l[3], e: int(l[4]),
                                f: l[5], g: l[6], h: l[7], i: l[8], j: l[9] })
                if len(buffer) == 1000:
                    logs.insert(buffer)
                    buffer = []
            if not l:
                break

        connection.disconnect()
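
    It's hard to say where the CPU actually goes without profiling (the cProfile module would answer that), but a slightly leaner version of the same loop is sketched below: iterate stdin directly, drop the single-letter key aliases in favour of a dict literal, and flush whatever is left in the buffer at EOF. This is a sketch of the same logic, not a measured fix.

        import sys
        from pymongo import Connection

        connection = Connection()
        logs = connection.squid.logs
        buffer = []

        for line in sys.stdin:
            if not line.startswith('L'):
                continue
            f = line[1:].split()
            buffer.append({'timestamp': float(f[0]), 'resp_time': int(f[1]), 'src_ip': f[2],
                           'cache_status': f[3], 'reply_size': int(f[4]), 'req_method': f[5],
                           'req_url': f[6], 'username': f[7], 'dst_ip': f[8], 'mime_type': f[9]})
            if len(buffer) >= 1000:
                logs.insert(buffer)
                buffer = []

        if buffer:                     # the original appears to drop a partial batch at EOF
            logs.insert(buffer)
        connection.disconnect()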


  • Large scale storage for incrementally-appended documents?

    - by Ben Dilts
    I need to store hundreds of thousands (and potentially many millions) of documents that start out empty and are appended to frequently, but never otherwise updated or deleted. These documents are not interrelated in any way and just need to be accessed by some unique ID. Read accesses are some subset of the document, which almost always starts midway through at some indexed location (e.g. "document #4324319, save #53 to the end"). These documents start very small, at several KB. They typically reach a final size around 500KB, but many reach 10MB or more. I'm currently using MySQL (InnoDB) to store these documents. Each of the incremental saves is just dumped into one big table with the document ID it belongs to, so reading part of a document looks like "select * from saves where document_id=14 and save_id >= 53 order by save_id", then manually concatenating it all together in code. Ideally, I'd like the storage solution to be easily horizontally scalable, with redundancy across servers (e.g. each document stored on at least 3 nodes) and easy recovery of crashed servers. I've looked at CouchDB and MongoDB as possible replacements for MySQL, but I'm not sure that either of them makes a whole lot of sense for this particular application, though I'm open to being convinced. Any input on a good storage solution?
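
    If MongoDB were used here, the straightforward mapping is one document per incremental save plus a compound index, so "save #53 to the end" stays an index range scan. A hedged pymongo sketch with invented field names; it also sidesteps the per-document size limit that would bite if each logical document were stored as one ever-growing record:

        from pymongo import Connection, ASCENDING

        saves = Connection().docstore.saves
        saves.ensure_index([("document_id", ASCENDING), ("save_id", ASCENDING)])

        def read_from(document_id, first_save):
            # range scan over the compound index, then concatenate the pieces in order
            parts = saves.find({"document_id": document_id, "save_id": {"$gte": first_save}}).sort("save_id", ASCENDING)
            return "".join(p["data"] for p in parts)

        # read_from(4324319, 53) -> save #53 through the last save, concatenated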


  • What scalability problems have you solved using a NoSQL data store?

    - by knorv
    NoSQL refers to non-relational data stores that break with the history of relational databases and ACID guarantees. Popular open source NoSQL data stores include: Cassandra (tabular, written in Java, used by Facebook, Twitter, Digg, Rackspace, Mahalo and Reddit) CouchDB (document, written in Erlang, used by Engine Yard and BBC) Dynomite (key-value, written in C++, used by Powerset) HBase (key-value, written in Java, used by Bing) Hypertable (tabular, written in C++, used by Baidu) Kai (key-value, written in Erlang) MemcacheDB (key-value, written in C, used by Reddit) MongoDB (document, written in C++, used by Sourceforge, Github, Electronic Arts and NY Times) Neo4j (graph, written in Java, used by Swedish Universities) Project Voldemort (key-value, written in Java, used by LinkedIn) Redis (key-value, written in C, used by Engine Yard, Github and Craigslist) Riak (key-value, written in Erlang, used by Comcast and Mochi Media) Ringo (key-value, written in Erlang, used by Nokia) Scalaris (key-value, written in Erlang, used by OnScale) ThruDB (document, written in C++, used by JunkDepot.com) Tokyo Cabinet/Tokyo Tyrant (key-value, written in C, used by Mixi.jp (Japanese social networking site)) I'd like to know about specific problems you - the SO reader - have solved using data stores and what NoSQL data store you used. Questions: What scalability problems have you used NoSQL data stores to solve? What NoSQL data store did you use? What database did you use before switching to a NoSQL data store? I'm looking for first-hand experiences, so please do not answer unless you have that.


  • restrict documents for mapreduce with mongoid

    - by theBernd
    I implemented the Pearson product correlation via map / reduce / finalize. The missing part is restricting the documents (representing users) to be processed via a filter query. For a simple query like

        mapreduce(mapper, reducer, :finalize => finalizer, :query => { :name => 'Bernd' })

    I get this to work. But my filter criteria are a little more complicated: I have one set of preferences which needs to have at least one common element and another set of preferences which may not have a common element. In a later step I also want to restrict this to documents (users) within a certain geographical distance. Currently I have this logic working inside my map function, but I would prefer to separate it out into either query params as supported by Mongoid or a JavaScript function. All my attempts to solve this have failed since the code is either ignored or raises an error. I did a couple of tests. A regular find like

        User.where(:name.in => ['Arno', 'Bernd', 'Claudia'])

    works and returns

        #<Mongoid::Criteria:0x00000101f0ea40 @selector={:name=>{"$in"=>["Arno", "Bernd", "Claudia"]}}, @options={}, @klass=User, @documents=[]>

    Trying the same with mapreduce,

        User.collection.mapreduce(mapper, reducer, :finalize => finalizer, :query => { :name.in => ['Arno', 'Bernd', 'Claudia'] })

    fails with `serialize': keys must be strings or symbols (TypeError) in bson-1.1.5. The intermediate query parameter looks like this:

        :query=>{#<Mongoid::Criterion::Complex:0x00000101a209e8 @key=:name, @operator="in">=>["Arno", "Bernd", "Claudia"]}

    and at least @operator looks a bit weird to me. I'm also uncertain whether the class name can be omitted. BTW - I'm using mongodb 1.6.5-x86_64, and the mongoid 2.0.0.beta.20, mongo 1.1.5 and bson 1.1.5 gems on MacOS. What am I doing wrong? Thanks in advance.


  • python mongokit Connection() AssertionError

    - by zalew
    just installed mongokit and can't figure out why I get AssertionError

    python console:

        >>> from mongokit import Connection
        >>> c = Connection()
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "/usr/local/lib/python2.6/dist-packages/mongokit-0.5.3-py2.6.egg/mongokit/connection.py", line 35, in __init__
            super(Connection, self).__init__(*args, **kwargs)
          File "build/bdist.linux-i686/egg/pymongo/connection.py", line 169, in __init__
          File "build/bdist.linux-i686/egg/pymongo/connection.py", line 338, in __find_master
          File "build/bdist.linux-i686/egg/pymongo/connection.py", line 226, in __master
          File "build/bdist.linux-i686/egg/pymongo/database.py", line 220, in command
          File "build/bdist.linux-i686/egg/pymongo/collection.py", line 356, in find_one
          File "build/bdist.linux-i686/egg/pymongo/cursor.py", line 485, in next
          File "build/bdist.linux-i686/egg/pymongo/cursor.py", line 461, in _refresh
          File "build/bdist.linux-i686/egg/pymongo/cursor.py", line 429, in __send_message
          File "build/bdist.linux-i686/egg/pymongo/helpers.py", line 98, in _unpack_response
        AssertionError
        >>>

    mongodb console:

        Wed Mar 31 10:27:34 connection accepted from 127.0.0.1:60480 #30
        Wed Mar 31 10:27:34 end connection 127.0.0.1:60480

    db 1.5, pymongo 1.5 (tested also on 1.4), mongokit 0.5.3 (also 0.5.2)


  • Linq causes collection to disappear when trying to use OrderByDescending

    - by Jeremy B.
    For background, I am using MongoDB and Rob Conery's LINQ driver. The code I am attempting is this:

        using (var session = new Session<ContentItem>())
        {
            var contentCollection = session.QueryCollection.Where(x => x.CreatedOn < DateTime.Now).OrderByDescending(y => y.CreatedOn).ToList();
            ViewData.Model = contentCollection;
        }

    This works on one machine, but on another machine I get back no results. To get results I have to do:

        using (var session = new Session<ContentItem>())
        {
            var contentCollection = session.QueryCollection.Where(x => x.CreatedOn < DateTime.Now).ToList();
            ViewData.Model = contentCollection.OrderByDescending(y => y.CreatedOn).ToList();
        }

    I have to do ToList() on both lines, or I get no results. If I try to chain anything it breaks. This is the same project; all DLLs are loaded locally. Both machines have the same framework, Visual Studio version and add-ons; the only difference is that one has VisualSVN and the other AnkhSVN, and I can't see those causing the problem. Also, while debugging on the machine that does not work, you can see the items in the collection, and if you remove the ordering altogether it will work. This has got me completely stumped.


  • Web Shop Schema - Document Db

    - by Maxem
    I'd like to evaluate a document db, probably MongoDB, in an ASP.NET MVC web shop. A little reasoning at the beginning: there are about 2 million products, and the product model would be pretty bad for an RDBMS as there'd be many different kinds of products with unique attributes. For example, there'd be books, which have ISBN, authors, title, pages, etc., as well as DVDs with play time, directors, artists, etc., and quite a few more types. In the end, I'd have about 9 different product types with a combined column count (counting common columns like title only once) of about 70 to 100, whereas each individual product has 15 columns at most. The three commonly used approaches in an RDBMS would be:

    1) An EAV model, which has pretty bad performance characteristics and would be either impractical or perform even worse if I'd like to display the author of a book in a list of mixed products (think start page, recommended products, etc.).
    2) Ignore the column count and put it all in the product table. Although I deal with somewhat bigger databases (row-wise), I don't have any experience with tables of more than 20 columns as far as performance is concerned, but I guess 100 columns would have some implications.
    3) Create a table for each product type. I personally don't like this approach as it complicates everything else.

    C# driver / classes: I'd like to use the NoRM driver, and so far I think I'll try to create a product DTO that contains all properties (grouped within detail classes like book details, except for those properties that should be displayed in list views, etc.). In the app I'll use BookBehavior / DvdBehaviour wrappers around the product DTO that only expose the relevant properties. My questions now: Are my performance concerns with the many-columns approach valid? Did I overlook something, and is there a much better way to do this in an RDBMS? Is MongoDB on Windows stable enough? Does my approach with the different behaviour wrappers make sense?


  • Error when loading YAML config files in Rails

    - by ZelluX
    I am configuring Rails with MongoDB and ran into a strange problem when parsing the config/mongo.yml file. config/mongo.yml is generated by executing script/rails generate mongo_mapper:config, and it looks like the following:

        defaults: &defaults
          host: 127.0.0.1
          port: 27017

        development:
          <<: *defaults
          database: tc_web_development

        test:
          <<: *defaults
          database: tc_web_test

    From the config file we can see that the development and test objects should both have a database field. But when it is parsed and loaded in config/initializers/mongo.db,

        config = YAML::load(File.read(Rails.root.join('config/mongo.yml')))
        puts config.inspect
        MongoMapper.setup(config, Rails.env)

    the strange thing comes: the output of puts config.inspect is

        {"defaults"=>{"host"=>"127.0.0.1", "port"=>27017}, "development"=>{"host"=>"127.0.0.1", "port"=>27017}, "test"=>{"host"=>"127.0.0.1", "port"=>27017}}

    which does not contain the database attribute. But when I execute the same statements in a plain Ruby console, instead of the Rails console, mongo.yml is parsed the right way:

        {"defaults"=>{"host"=>"127.0.0.1", "port"=>27017}, "development"=>{"host"=>"127.0.0.1", "port"=>27017, "database"=>"tc_web_development"}, "test"=>{"host"=>"127.0.0.1", "port"=>27017, "database"=>"tc_web_test"}}

    I am wondering what may be the cause of this problem. Any ideas? Thanks.


  • BasicDBObject or QueryBuilder and some newbie questions of Java and mongo

    - by Kevin Xu
    Hi, I'm a fresh newbie to MongoDB.

    Q1: Is there any difference inside the system between

        query = new BasicDBObject();
        query.put("i", new BasicDBObject("$gt", 13));

    and

        query = new QueryBuilder().put("i").Greaterthan(13).get()

    Q2: I've created a class:

        class findkv extends BasicDBObject {
            // op is gt gte lt lte
            public findkv(String fieldname, String op, Object tvalue) {
                if (op == "") this.put(fieldname, tvalue);
                else this.put(fieldname, new BasicDBObject(op, tvalue));
            }
        }

    Shall I use it, or shall I just use the original functions?

    Q3: I've used the mongo shell for a few weeks, am accustomed to it, and find writing in the mongo shell faster and shorter. Which side has more advantages, writing in mongo or in Java? I need to dump the data from mongo to MySQL.

    Q4: if (statement==true) return; else dowhat; doesn't seem to compile. I know I can write if (statement!=true) dowhat; else return;, but can I still write it in the first style?

    Q5: My Eclipse is Eclipse Java EE IDE for Web Developers, Version: Juno Release, Build id: 20120614-1722. I'd like to install Perl support (which I haven't learned yet); I chose Install Update http://e-p-i-c.sf.net/updates/testing but it doesn't work. Is there any way to install Perl support into Eclipse manually?


  • Is it better to use a relational database or document-based database for an app like Wufoo?

    - by mboyle
    I'm working on an application that's similar to Wufoo in that it allows our users to create their own databases and collect/present records with auto generated forms and views. Since every user is creating a different schema (one user might have a database of their baseball card collection, another might have a database of their recipes) our current approach is using MySQL to create separate databases for every user with its own tables. So in other words, the databases our MySQL server contains look like: main-web-app-db (our web app containing tables for users account info, billing, etc) user_1_db (baseball_cards_table) user_2_db (recipes_table) .... And so on. If a user wants to set up a new database to keep track of their DVD collection, we'd do a "create database ..." with "create table ...". If they enter some data in and then decide they want to change a column we'd do an "alter table ....". Now, the further along I get with building this out the more it seems like MySQL is poorly suited to handling this. 1) My first concern is that switching databases every request, first to our main app's database for authentication etc, and then to the user's personal database, is going to be inefficient. 2) The second concern I have is that there's going to be a limit to the number of databases a single MySQL server can host. Pretending for a moment this application had 500,000 user databases, is MySQL designed to operate this way? What if it were a million, or more? 3) Lastly, is this method going to be a nightmare to support and scale? I've never heard of MySQL being used in this way so I do worry about how this affects things like replication and other methods of scaling. To me, it seems like MySQL wasn't built to be used in this way but what do I know. I've been looking at document-based databases like MongoDB, CouchDB, and Redis as alternatives because it seems like a schema-less approach to this particular problem makes a lot of sense. Can anyone offer some advice on this?


  • Rails HTML Table update fields in mongo with AJAX

    - by qwexar
    I'm building a Rails app backed by MongoDB using Mongoid. It's a one-page app with an HTML table, every field of every row of which needs to be editable without refreshing the page. This is your usual Rails view (like many in RailsCasts) showing a table with rows and columns containing data. For example, I'm showing cars along with their make, model and notes. The way I'm doing this is by appending the _id of the mongo document to every column and marking its field name in the HTML id too. Then I pick up the value for $("#id"), send it to the Rails controller via AJAX and run the @car.update_attributes method accordingly. Currently, one of my rows looks like this:

        <tr>
          <td id=<%= car.id %>_make> <%= car.make %> </td>
          <td id=<%= car.id %>_model> <%= car.model %> </td>
          <td id=<%= car.id %>_notes> <%= car.notes %> </td>
        </tr>

        // my function which is called onChange for every column
        function update_attributes(id){
            var id = id.split[0];
            var attribute = id.split[1];
            $.ajax("sending id and attribute to rails controller");
        }

    Is there any built-in Rails magic which would let me update only a single field in a model without refreshing the page? Or is there a Rails plugin for this?


  • heroku mongohq and mongoid Mongo::ConnectionFailure

    - by Ole Morten Amundsen
    I have added the MongoHQ addon for MongoDB at Heroku. It crashes with something like this:

        connect_to_master': failed to connect to any given host:port (Mongo::ConnectionFailure)

    The descriptions online (heroku mongohq) are more directed towards MongoMapper, as I see it. I'm running Ruby 1.9.1 and Rails 3 beta with Mongoid. My feeling says there's something with ENV['MONGOHQ_URL'], which the MongoHQ addon is said to set, but I haven't set MONGOHQ_URL anywhere in my app. I guess the problem is in my mongoid.yml?

        defaults: &defaults
          host: localhost

        development:
          <<: *defaults
          database: aliado_development

        test:
          <<: *defaults
          database: aliado_test

        # set these environment variables on your prod server
        production:
          <<: *defaults
          host: <%= ENV['MONGOID_HOST'] %>
          port: <%= ENV['MONGOID_PORT'] %>
          username: <%= ENV['MONGOID_USERNAME'] %>
          password: <%= ENV['MONGOID_PASSWORD'] %>
          database: <%= ENV['MONGOID_DATABASE'] %>

    It works fine locally, but fails at Heroku. More stack trace:

        ==> crashlog.log <==
        Cannot write to outdated .bundle/environment.rb to update it
        /disk1/home/slugs/176479_b14df52_b875/mnt/.bundle/gems/gems/rack-1.1.0/lib/rack.rb:14: warning: already initialized constant VERSION
        /disk1/home/slugs/176479_b14df52_b875/mnt/.bundle/gems/gems/mongo-0.20.1/lib/mongo/connection.rb:435:in `connect_to_master': failed to connect to any given host:port (Mongo::ConnectionFailure)
            from /disk1/home/slugs/176479_b14df52_b875/mnt/.bundle/gems/gems/mongo-0.20.1/lib/mongo/connection.rb:112:in `initialize'
            from /disk1/home/slugs/176479_b14df52_b875/mnt/.bundle/gems/gems/mongoid-2.0.0.beta4/lib/mongoid/railtie.rb:32:in `new'
            from /disk1/home/slugs/176479_b14df52_b875/mnt/.bundle/gems/gems/mongoid-2.0.0.beta4/lib/mongoid/railtie.rb:32:in `block (2 levels) in <class:Railtie>'
            from /disk1/home/slugs/176479_b14df52_b875/mnt/.bundle/gems/gems/mongoid-2.0.0.beta4/lib/mongoid.rb:110:in `configure'
            from /disk1/home/slugs/176479_b14df52_b875/mnt/.bundle/gems/gems/mongoid-2.0.0.beta4/lib/mongoid/railtie.rb:21:in `block in <class:Railtie>'
            from /disk1/home/slugs/176479_b14df52_b875/mnt/.bundle/gems/gems/railties-3.0.0.beta3/lib/rails/initializable.rb:25:in `instance_exec'
        .....

    It all works locally, both tests and app. I'm out of ideas... Any suggestions? PS: Would somebody with high enough rep mind creating the tag 'mongohq'?


  • How to filter Backbone.js Collection and Rerender App View?

    - by Jeremy H.
    Is is a total Backbone.js noob question. I am working off of the ToDo Backbone.js example trying to build out a fairly simple single app interface. While the todo project is more about user input, this app is more about filtering the data based on the user options (click events). I am completely new to Backbone.js and Mongoose and have been unable to find a good example of what I am trying to do. I have been able to get my api to pull the data from the MongoDB collection and drop it into the Backbone.js collection which renders it in the app. What I cannot for the life of me figure out how to do is filter that data and re-render the app view. I am trying to filter by the "type" field in the document. Here is my script: (I am totally aware of some major refactoring needed, I am just rapid prototyping a concept.) $(function() { window.Job = Backbone.Model.extend({ idAttribute: "_id", defaults: function() { return { attachments: false } } }); window.JobsList = Backbone.Collection.extend({ model: Job, url: '/api/jobs', leads: function() { return this.filter(function(job){ return job.get('type') == "Lead"; }); } }); window.Jobs = new JobsList; window.JobView = Backbone.View.extend({ tagName: "div", className: "item", template: _.template($('#item-template').html()), initialize: function() { this.model.bind('change', this.render, this); this.model.bind('destroy', this.remove, this); }, render: function() { $(this.el).html(this.template(this.model.toJSON())); this.setText(); return this; }, setText: function() { var month=new Array(); month[0]="Jan"; month[1]="Feb"; month[2]="Mar"; month[3]="Apr"; month[4]="May"; month[5]="Jun"; month[6]="Jul"; month[7]="Aug"; month[8]="Sep"; month[9]="Oct"; month[10]="Nov"; month[11]="Dec"; var title = this.model.get('title'); var description = this.model.get('description'); var datemonth = this.model.get('datem'); var dateday = this.model.get('dated'); var jobtype = this.model.get('type'); var jobstatus = this.model.get('status'); var amount = this.model.get('amount'); var paymentstatus = this.model.get('paymentstatus') var type = this.$('.status .jobtype'); var status = this.$('.status .jobstatus'); this.$('.title a').text(title); this.$('.description').text(description); this.$('.date .month').text(month[datemonth]); this.$('.date .day').text(dateday); type.text(jobtype); status.text(jobstatus); if(amount > 0) this.$('.paymentamount').text(amount) if(paymentstatus) this.$('.paymentstatus').text(paymentstatus) if(jobstatus === 'New') { status.addClass('new'); } else if (jobstatus === 'Past Due') { status.addClass('pastdue') }; if(jobtype === 'Lead') { type.addClass('lead'); } else if (jobtype === '') { type.addClass(''); }; }, remove: function() { $(this.el).remove(); }, clear: function() { this.model.destroy(); } }); window.AppView = Backbone.View.extend({ el: $("#main"), events: { "click #leads .highlight" : "filterLeads" }, initialize: function() { Jobs.bind('add', this.addOne, this); Jobs.bind('reset', this.addAll, this); Jobs.bind('all', this.render, this); Jobs.fetch(); }, addOne: function(job) { var view = new JobView({model: job}); this.$("#activitystream").append(view.render().el); }, addAll: function() { Jobs.each(this.addOne); }, filterLeads: function() { // left here, this event fires but i need to figure out how to filter the activity list. } }); window.App = new AppView; });


  • mongo mapper with STI with more than one type?

    - by holden
    I have a series of models which all inherit from a base model, Property - for example Bars, Restaurants, Cafes, etc.

        class Property
          include MongoMapper::Document
          key :name, String
          key :_type, String
        end

        class Bar < Property

    What I'm wondering is what to do when a record happens to be both a Bar and a Restaurant. Is there a way for a single object to inherit the attributes of both models? And how would it work with the key :_type?

