Search Results

Search found 60846 results on 2434 pages for 'spring data mongodb'.


  • Almost Realtime Data and Web application

    - by Chris G.
    I have a computer that is recording 100 different data points into an OPC server. I've written a simple OPC client that can read all of this data. I have a front-end website on a different network that I would like to have consume this data. I could easily set the OPC client to send the data to a SQL server and the website could read from it, but that would be a lot of writes. If I wanted the data to be updated every 10 seconds I'd be writing to the database every 10 seconds. (I could probably just serialize the 100 points to get one write per 10 seconds, but that would also limit my ability to search the data later.) This solution wouldn't scale very well. If I had 100 of these computers the situation would quickly grow out of hand. Obviously I am well out of my league here and I have no experience with working with a large amount of data like this. What are my options and what should I research?

    Read the article

  • Spring to Java EE, Part Three - new tech article on otn/java

    - by Janice J. Heiss
    In a new article up on otn/java, Java EE expert David Heffelfinger continues his series exploring the relative strengths and weaknesses of Java EE and Spring. Here, he demonstrates how easy it is to develop the data layer of an application using Java EE, JPA, and the NetBeans IDE instead of the Spring Framework. In the first two parts of the series, he generated a complete Java EE application by using JavaServer Faces (JSF) 2.0, Enterprise JavaBeans (EJB) 3.1, and Java Persistence API (JPA) 2.0 from Spring’s Pet Clinic MySQL schema, thus showing how easy it is to develop an application whose functionality equaled that of the Spring sample application. In his new article, Heffelfinger tweaks the application to make it more user-friendly. From the article: “The generated application displays primary keys on some of the pages, and these keys are surrogate primary keys—meaning that they have no business value and are used strictly as a unique identifier—so there is no reason why they should be visible to the user. In addition, we will modify some of the generated labels to make them more user-friendly.” He concludes the article with a summary: “The Java EE version of the application is not a straight port of the Spring version. For example, the Java EE version enables us to create, update, and delete veterinarians as well as veterinary specialties, whereas the Spring version of the application enables us only to view veterinarians and specialties. Additionally, the Spring version has a single page for managing/viewing owners, pets, and visits, whereas the Java EE version of the application has separate pages for each of these entities. The other thing we should keep in mind is that we didn’t actually write a lot of the code and markup for the Java EE version of the application, because the bulk of it was generated by the NetBeans wizard.” Have a look at the complete article here.

    Read the article

  • Confused as to how to validate spring mvc form, what are my options?

    - by Blankman
    Latest Spring MVC, using FreeMarker. Hoping someone could tell me what my options are for validating a form with Spring MVC, and what the recommended way would be to do this. I have a form that doesn't map directly to a model; it has input fields that, when posted, will be used to initialize 2 model objects which I will then need to validate, and if they pass I will save them. If they fail, I want to return back to the form, pre-fill the values with what the user entered, and display the error messages. I have read here and there about 2 methods, one of which I have done and understand how it works:

        @RequestMapping(..., method = RequestMethod.POST)
        public ModelAndView myMethod(@Valid MyModel myModel, BindingResult bindingResult) {
            ModelAndView mav = new ModelAndView("some/view");
            mav.addObject("mymodel", myModel);
            if (bindingResult.hasErrors()) {
                return mav;
            }
        }

    Now this worked when my form mapped directly to the model, but in my situation I have:

    - form fields that don't map to any specific model; they have a few properties from 2 models
    - before validation occurs, I need to create the 2 models manually, set the values from the form, and manually set some properties also
    - call validate on both models (model1, model2), and append these error messages to the errors collection, which I need to pass back to the same view page if things don't work
    - when the form posts, I have to do some database calls, and based on those results may need to add additional messages to the errors collection

    Can someone tell me how to do this sort of validation? Pseudo code below:

        Model1 model1 = new Model1();
        Model2 model2 = new Model2();

        // manually or somehow automatically set the posted form values to model1 and model2.

        // set some fields manually, not from posted form
        model1.setProperty10(GlobalSettings.getDefaultProperty10());
        model2.setProperty11(GlobalSettings.getDefaultProperty11());

        // db calls, if they fail, add to errors collection

        if (bindingResult.hasErrors()) {
            return mav;
        }

        // validation passed, save
        Model1Service.save(model1);
        Model2Service.save(model2);

        // redirect to another view

    Update: I am using the JSR 303 annotations on my models right now, and it would be great if I could still use those. Update II: Please read the bounty description below for a summary of what I am looking for.

    Read the article

  • SQL SERVER – Integrate Your Data with Skyvia – Cloud ETL Solution

    - by Pinal Dave
    These days data integration often becomes a key aspect of business success. For business analysts it’s very important to get integrated data from various sources, such as relational databases, cloud CRMs, etc., to make correct and successful decisions. There are various data integration solutions on the market, and today I will tell you about one of them – Skyvia. Skyvia is a cloud data integration service, which allows integrating data in cloud CRMs and different relational databases. It is a completely online solution and does not require anything except for a browser. Skyvia provides powerful ETL tools for data import, export, replication, and synchronization for SQL Server and other databases and cloud CRMs. You can use Skyvia data import tools to load data from various sources to SQL Server (and SQL Azure). Skyvia supports such cloud CRMs as Salesforce and Microsoft Dynamics CRM and such databases as MySQL and PostgreSQL. You can even migrate data from SQL Server to SQL Server, or from SQL Server to other databases and cloud CRMs. Additionally, Skyvia supports import of CSV files, either uploaded manually or stored on cloud file storage services, such as Dropbox, Box, Google Drive, or FTP servers.

    When data import is not enough, Skyvia offers bidirectional data synchronization. With this tool, you can synchronize SQL Server data with other databases and cloud CRMs. After performing the first synchronization, Skyvia tracks data changes in the synchronized data storages. In SQL Server databases (and other relational databases) it creates additional tracking tables and triggers. This allows synchronizing only the changed data. Skyvia also maps records to each other by their primary key values, so it does not require different sources to have the same primary key structure. It can still match the corresponding records without having to add any additional columns or change the data structure. The only requirement for synchronization is that primary keys must be autogenerated. With Skyvia it’s not necessary for data to have the same structure in the integrated data storages. Skyvia supports powerful mapping mechanisms that allow synchronizing data with completely different structures. It provides support for complex mathematical and string expressions when mapping data, using lookups, etc. You may use data splitting – loading data from a single CSV file or source table to multiple related target tables. Or you may load data from several source CSV files or tables to several related target tables. In each case Skyvia preserves data relations and builds the corresponding relations between the target data automatically.

    When you often work with cloud CRM data, native CRM data reporting and analysis tools may not be enough for you, and there is a vast set of professional data analysis and reporting tools available for SQL Server. With Skyvia you can quickly copy your cloud CRM data to a SQL Server database and apply the corresponding SQL Server tools to the data. In such cases you can use Skyvia data replication tools, which allow you to quickly copy cloud CRM data to SQL Server or other databases without customizing any mapping. You just need to specify the columns to copy data from; target database tables will be created automatically. Skyvia offers powerful filtering settings to replicate only the records you need. Skyvia also provides the capability to export data from SQL Server (including SQL Azure) and other databases and cloud CRMs to CSV files. These files can be either downloaded manually or loaded to cloud file storages or FTP servers. You can use export, for example, to back up SQL Azure data to Dropbox. Any data integration operation can be scheduled for automatic execution. Thus, you can automate your SQL Azure data backup or data synchronization – just configure it once, then schedule it, and benefit from automatic data integration with Skyvia. Currently, registering for and using Skyvia is completely free, so you can try it yourself and find out whether its data migration and integration tools suit you. Visit this link to register on Skyvia: https://app.skyvia.com/register Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Cloud Computing

    Read the article

  • SQL SERVER – Why Do We Need Master Data Management – Importance and Significance of Master Data Management (MDM)

    - by pinaldave
    Let me paint a picture of everyday life for you.  Let’s say you and your wife both have address books for your groups of friends.  There is definitely overlap between them, so that you both have the addresses for your mutual friends, and there are addresses that only you know, and some only she knows.  They also might be organized differently.  You might list your friend under “J” for “Joe” or even under “W” for “Work,” while she might list him under “S” for “Joe Smith” or under your name because he is your friend.  If you happened to trade, neither of you would be able to find anything! This is where data management would be very important.  If you were to consolidate into one address book, you would have to set rules about how to organize the book, and both of you would have to follow them.  You would also make sure that poor Joe doesn’t get entered twice under “J” and under “S.” This might be a familiar situation to you, whether you are thinking about address books, record collections, books, or even shopping lists.  Wherever there is a lot of data to consolidate, you are going to run into problems unless everyone is following the same rules. I’m sure that my readers can figure out where I am going with this.  What is SQL Server but a computerized way to organize data?  And Microsoft is making it easier and easier to get all your “addresses” into one place.  In the  2008 version of SQL they introduced a new tool called Master Data Services (MDS) for Master Data Management, and they have improved it for the new 2012 version. MDM was hailed as a major improvement for business intelligence.  You might not think that an organizational system is terribly exciting, but think about the kind of “address books” a company might have.  Many companies have lots of important information, like addresses, credit card numbers, purchase history, and so much more.  To organize all this efficiently so that customers are well cared for and properly billed (only once, not never or multiple times!) is a major part of business intelligence. MDM comes into play because it will comb through these mountains of data and make sure that all the information is consistent, accurate, and all placed in one database so that employees don’t have to search high and low and waste their time. MDM also has operational MDM functions.  This is not a redundancy.  Operational MDM means that when one employee updates one bit of information in the database, for example – updating a new address for a customer, operational MDM ensures that this address is updated throughout the system so that all departments will have the correct information. Another cool thing about MDM is that it features Master Data Services Configuration Manager, which is exactly what it sounds like.  It has a built-in “helper” that lets you set up your database quickly, easily, and with the correct configurations.  While talking about cool features, I can’t skip over the add-in for Excel.  This allows you to link certain data to Excel files for easier sharing and uploading. In summary, I want to emphasize that the scariest part of the database is slowly disappearing.  Everyone knows that a database – one consolidated area for all your data – is a good idea, but the idea of setting one up is daunting.  But SQL Server is making data management easier and easier with features like Master Data Services (MDS). 
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Master Data Services, MDM

    Read the article

  • Data breakpoints to find points where data gets broken

    - by raccoon_tim
    When working with a large code base, finding reasons for bizarre bugs can often be like finding a needle in a haystack. Finding out why an object gets corrupted with no apparent reason can be quite daunting, especially when it seems to happen randomly and totally out of context.

    Scenario
    Take the following scenario as an example. You have defined a class that contains an array of characters that is 256 characters long. You now implement a method for filling this buffer with a string passed as an argument. At this point you mistakenly expect the buffer to be 256 characters long. At some point you notice that you require another character buffer and you add that after the previous one in the class definition. You now figure that you don’t need the 256 characters that the first member can hold and you shorten that to 128 to conserve space. At this point you should start thinking that you also have to modify the method defined above to safeguard against buffer overflow. It so happens, however, that in this not so perfect world this does not cross your mind. Buffer overflow is one of the most frequent sources of errors in a piece of software and often one of the most difficult ones to detect, especially when data is read from an outside source. Many mass copy functions provided by the C run-time offer versions that have boundary checking (defined with the _s suffix), but they cannot guard against hard-coded buffer lengths that at some point get changed.

    Finding the bug
    Getting back to the scenario, you’re now wondering why the second string gets modified with data that makes no sense at all. Luckily, Visual Studio provides you with a tool to help you find just these kinds of errors. It’s called data breakpoints. To add a data breakpoint, you first run your application in debug mode or attach to it in the usual way, and then go to Debug, select New Breakpoint and New Data Breakpoint. In the popup that opens, you can type in the memory address and the number of bytes you wish to monitor. You can also use an expression here, but it’s often difficult to come up with an expression for data in an object allocated on the heap when not in the context of a certain stack frame. There are a couple of things to note about data breakpoints, however. First of all, Visual Studio supports a maximum of four data breakpoints at any given time. Another important thing to notice is that some C run-time functions modify memory in kernel space, which does not trigger the data breakpoint. For instance, calling ReadFile on a buffer that is monitored by a data breakpoint will not trigger the breakpoint. The application will now break when the memory you specified gets modified. Often you might immediately spot the issue, but at the very least this feature can point you in the right direction in your search for the real reason why the memory gets inadvertently modified.

    Conclusions
    Data breakpoints are a great feature, especially when doing a lot of low-level operations where multiple locations modify the same data. With the exception of some special cases, like kernel memory modification, you can use them whenever you need to check when memory at a certain location gets changed, on purpose or inadvertently.

    Read the article

  • MongoDB, Carrierwave, GridFS and prevention of files' duplication

    - by Arkan
    I am dealing with Mongoid, CarrierWave and GridFS to store my uploads. For example, I have a model Article containing a file upload (a picture).

        class Article
          include Mongoid::Document
          field :title,   :type => String
          field :content, :type => String
          mount_uploader :asset, AssetUploader
        end

    But I would like to store each file only once, for the case where I upload the same file many times for different articles. I saw GridFS has an MD5 checksum. What would be the best way to prevent duplication of identical files? Thanks
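    One direction worth sketching (not from the original post): GridFS records the MD5 of each stored file in the fs.files metadata collection, so a duplicate check can be run against that field before storing the same bytes again. A minimal mongo shell sketch, assuming the default fs bucket and an MD5 already computed for the incoming upload:

        // Hypothetical duplicate check against GridFS metadata (default "fs" bucket).
        // The MD5 value below is only an example; compute it for the incoming file.
        var incomingMd5 = "9e107d9d372bb6826bd81d3542a419d6";
        var existing = db.fs.files.findOne({ md5: incomingMd5 });
        if (existing) {
            // Reuse the existing GridFS file instead of storing the same bytes twice.
            print("duplicate found, reuse _id: " + existing._id);
        } else {
            print("no duplicate, store the upload as usual");
        }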

    Read the article

  • MongoDB-PHP: JOIN-like query

    - by mdm414
    Here are the objects:

        courses
        { "name" : "Biology", "_id" : ObjectId("4b0552b0f0da7d1eb6f126a1") }

        students
        { "name" : "Joe",
          "classes" : [ { "$ref" : "courses", "$id" : ObjectId("4b0552b0f0da7d1eb6f126a1") } ],
          "_id" : ObjectId("4b0552e4f0da7d1eb6f126a2") }

    Using the PHP Mongo class, how do I get all the students that have a biology course? Thanks
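    For orientation only (not part of the post), the equivalent two-step lookup in the mongo shell follows the DBRef by hand: resolve the course's _id by name, then match students whose classes array references it. A sketch, assuming the sample documents above:

        // Resolve the course first, then match against the DBRef's $id field on the students side.
        var course = db.courses.findOne({ name: "Biology" });
        db.students.find({ "classes.$id": course._id });

    The PHP driver query should be expressible with the same "classes.$id" dotted path in its criteria array.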

    Read the article

  • MongoDB architectural question

    - by pex
    I have to store 4 models. Let's say a Post that has many and belongs to many Categories. A Category, on the other hand, has many Qualities. At the moment I'm of the opinion that Post and Category are Documents, while Quality becomes an EmbeddedDocument of Category. We're coming to the root problem: there are a lot of Votes on Qualities that belong to a Post. I thought about embedding Votes in Post and giving each one a quality_id. I am really expecting a lot of Votes, and there has to be a possibility to filter them (e.g. by Username / Usergroup / Date voted). I worked with MongoMapper and I think the lack of find methods for EmbeddedDocuments could become a killer. On the other hand I'm wondering about performance issues. What if I want to provide a Post without all the Votes, but only a few? Or what if I define a separate Document for Votes and have tons of Vote documents? Wouldn't that become a performance killer?
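    A rough illustration of the separate-collection option discussed above (collection and field names are assumptions, not from the post): keeping votes in their own collection keyed by post and quality makes filtering and paging cheap, at the cost of an extra query per page. A mongo shell sketch:

        // Hypothetical standalone "votes" collection; ObjectId() stands in for real Post/Quality ids.
        var postId = ObjectId();
        var qualityId = ObjectId();
        db.votes.insert({ post_id: postId, quality_id: qualityId,
                          username: "someuser", usergroup: "member", voted_at: new Date() });
        // Compound index so filtering by post/quality and sorting by date stays indexed.
        db.votes.ensureIndex({ post_id: 1, quality_id: 1, voted_at: -1 });
        // Fetch only one page of votes instead of embedding (and loading) all of them in the Post.
        db.votes.find({ post_id: postId, quality_id: qualityId }).sort({ voted_at: -1 }).limit(20);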

    Read the article

  • Problem with MongoDB Ruby Driver

    - by Paul
    I'm on Ubuntu, and I've run gem install mongo, which reported

        Successfully installed bson-1.0
        Successfully installed mongo-1.0
        2 gems installed

    I've started mongod. Now I cd to the mongo gem directory and try

        > ruby examples/simple.rb

    and I get the error

        ./examples/../lib/mongo.rb:31:in `require': no such file to load -- bson (LoadError)
                from ./examples/../lib/mongo.rb:31
                from examples/simple.rb:3:in `require'
                from examples/simple.rb:3

    which I can't make sense of, since the bson gem is installed:

        > gem list
        *** LOCAL GEMS ***
        bson (1.0)
        bson_ext (1.0)
        mongo (1.0)
        rack (1.1.0)
        sinatra (1.0)

    Any suggestions what's up here?

    Read the article

  • MongoDB ruby dates

    - by MB
    I have a collection with an index on :created_at (which in this particular case should be a date). From Rails, what is the proper way to save an entry and then retrieve it by the date? I'm trying something like this. Model:

        field :created_at, :type => Time

    Script:

        Col.create(:created_at => Time.parse(another_model.created_at).to_s)

    and

        Col.find(:all, :conditions => { :created_at => Time.parse(same thing) })

    and it's not returning anything.

    Read the article

  • Intersection of sets Mongodb

    - by afvasd
    Hi everyone, I am new to Mongo. This is my db design:

        product := {
            name: str,
            group: ref,
            comments: [ ref, ref, ref, ref ]
        }

        comments := { ... a bunch of comments stuff }

        tag := {
            _id: int,        # Need this for online requests
            tag: str,
            products: [ {product: ref, score: float}, ... ],
            comments: [ {comment: ref, score: float}, ... ]
        }

    So my usage pattern is: given a product, find comments that have a certain tag and sort them accordingly. My current approach involves:

    1. Look for the tag object that has tag = myTag.
    2. Pull all of its comments out, sorted.
    3. Look for the product where product.name = myProduct.
    4. Pull all of its comments out (which are dbrefs, by the way).
    5. Loop through the results of step 2, checking whether they are in step 4 (this I can limit to 10), etc.

    It's pretty inefficient. Any better methods?
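    A shell sketch of the five steps above, purely to make the pattern concrete (the collection names, and the .$id / .str accessors on DBRefs and ObjectIds in the legacy mongo shell, are assumptions rather than details from the post):

        // 1-2: the tag document carries {comment: ref, score} pairs.
        var tagDoc = db.tags.findOne({ tag: "myTag" });
        // 3-4: the product document carries plain comment refs.
        var product = db.products.findOne({ name: "myProduct" });
        var productCommentIds = product.comments.map(function (ref) { return ref.$id.str; });
        // 5: keep only tag comments that also belong to the product, best score first, top 10.
        var top = tagDoc.comments
            .filter(function (c) { return productCommentIds.indexOf(c.comment.$id.str) !== -1; })
            .sort(function (a, b) { return b.score - a.score; })
            .slice(0, 10);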

    Read the article

  • Can't append to array using string field name [$] when performing update on array fields

    - by Haraldo
    I am attempting to perform a MongoDB update on each element of an array of records. An example schema is below:

        {
            "_id" : ObjectId("508710f16dc636ec07000022"),
            "summary" : "",
            "uid" : "ABCDEF",
            "username" : "bigcheese",
            "name" : "Name of this document",
            "status_id" : 0,
            "rows" : [
                { "score" : 12, "status_id" : 0, "uid" : 1 },
                { "score" : 51, "status_id" : 0, "uid" : 2 }
            ]
        }

    So far I have been able to perform single updates like this:

        db.mycollection.update({"uid":"ABCDEF","rows.uid":1}, {$set:{"rows.$.status_id":1}}, false, false)

    However, I am struggling with how to perform an update that will set all array records to a status_id of 1 (for instance). Below is how I imagined it should work:

        db.mycollection.update({"uid":"ABCDEF"}, {$set:{"rows.$.status_id":1}}, false, true)

    However I get the error:

        can't append to array using string field name [$]

    I have tried for quite a while with no luck. Any pointers?
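    A note and sketch not taken from the post: the positional $ operator only resolves when the array field appears in the query, and it targets just the first matching element, so it cannot fan out over every element of rows. One hedged workaround is to rewrite the array client-side:

        // Sketch: walk each matching document and rewrite its rows array in application code.
        db.mycollection.find({ uid: "ABCDEF" }).forEach(function (doc) {
            doc.rows.forEach(function (row) { row.status_id = 1; });
            db.mycollection.save(doc);   // writes the whole document back
        });

    On newer servers the all-positional $[] update operator can do this server-side, but the shell loop above works on older versions as well.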

    Read the article

  • MongoDb - $match filter not working in subdocument

    - by Ranjith
    This is the collection structure:

        [{
            "_id" : "....",
            "name" : "aaaa",
            "level_max_leaves" : [
                { level : "ObjectIdString 1", max_leaves : 4 }
            ]
        },
        {
            "_id" : "....",
            "name" : "bbbb",
            "level_max_leaves" : [
                { level : "ObjectIdString 2", max_leaves : 2 }
            ]
        }]

    I need to filter on the subdocument field level_max_leaves.level when it matches a given input value. This is how I tried it. For example:

        var empLevelId = 'ObjectIdString 1';
        MyModel.aggregate(
            {$unwind: "$level_max_leaves"},
            {$match: {"$level_max_leaves.level": empLevelId } },
            {$group: { "_id": "$level_max_leaves.level", "total": { "$sum": "$level_max_leaves.max_leaves" }}},
            function (err, res) {
                console.log(res);
            });

    But here the $match filter is not working; I can't get the expected results for "ObjectIdString 1". If I filter on the name field, it works fine, like this:

        {$match: {"$name": "aaaa" } },

    But at the subdocument level it returns 0:

        {$match: {"$level_max_leaves.level": "ObjectIdString 1"} },

    My expected result is:

        { "_id" : "ObjectIdString 1", "total" : 4 }
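    As a hedged side note (not from the post): field paths inside a $match stage are normally written without the leading "$" (the "$field" form belongs in expressions such as those in $group), so one variant worth trying is the same pipeline with a plain path in the match stage:

        // Same pipeline, with an un-prefixed field path in $match (assumes the Mongoose model above).
        MyModel.aggregate(
            { $unwind: "$level_max_leaves" },
            { $match: { "level_max_leaves.level": empLevelId } },
            { $group: {
                _id: "$level_max_leaves.level",
                total: { $sum: "$level_max_leaves.max_leaves" }
            }},
            function (err, res) {
                console.log(res);
            });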

    Read the article

  • Mongodb querying for multiple parameters

    - by gaggina
    I've got this collection:

        {
            "name" : "montalto",
            "users" : [
                { "username" : "ciccio", "email" : "aaaaaaaa", "password" : "aaaaaaaa", "money" : 0 }
            ],
            "numers" : "8",
            "_id" : ObjectId("5040d3fded299bf03a000002")
        }

    If I want to search for a document with the name montalto and a user named ciccio, I'm using the following query:

        db.coll.find({name:'montalto', users:{username:'ciccio'}}).count()

    But it does not work. Where did I go wrong?
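    For comparison only (not part of the post): matching users:{username:'ciccio'} asks for an embedded document that consists of exactly that one field, so a common alternative is dot notation, or $elemMatch when several conditions must hold on the same array element. A sketch:

        // Dot notation matches a single field anywhere inside the users array.
        db.coll.find({ name: "montalto", "users.username": "ciccio" }).count();
        // $elemMatch applies several conditions to the same array element.
        db.coll.find({ name: "montalto", users: { $elemMatch: { username: "ciccio", money: 0 } } }).count();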

    Read the article

  • mongoDB many to many with one query?

    - by PowderKeg
    In MySQL I use JOIN and one query is no problem. What about Mongo? Imagine categories and products: products may have multiple categories, and categories may have multiple products (a many-to-many structure). An administrator may edit categories in the administration area (so categories must be kept separate). Is it possible to fetch a product with its category names in one query? I used this structure:

        categories
        {
            name: "categoryName",
            product_id: ["4b5783300334000000000aa9", "5783300334000000000aa943", "6c6793300334001000000006"]
        }

        products
        {
            name: "productName",
            category_id: ["4b5783300334000000000bb9", "5783300334000000000bb943", "6c6793300334001000000116"]
        }

    Now I can simply get all of a product's categories, the products in some category, and the categories alone for editing. But if I want to show a product with its category names I need two queries: one to get the product's category ids and a second to get the category names from categories by those ids. Is this the right way, or is this structure unsuitable? I would like to have only one query but I don't know if it's possible.
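    Purely as an illustration of the two-query pattern described above (assuming the category ids are stored exactly as the strings shown), the second trip stays cheap because it resolves all names with a single $in:

        // Trip 1: the product and its category id array.
        var product = db.products.findOne({ name: "productName" });
        // Trip 2: all matching category names in one query.
        db.categories.find({ _id: { $in: product.category_id } }, { name: 1 });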

    Read the article

  • mongoose updating a field in a MongoDB not working

    - by Masiar
    I have this code:

        var UserSchema = new Schema({
            Username: {type: String, index: true},
            Password: String,
            Email: String,
            Points: {type: Number, default: 0}
        });

        [...]

        var User = db.model('User');

        /*
         * Function to save the points in the user's account
         */
        function savePoints(name, points){
            if(name != "unregistered user"){
                User.find({Username: name}, function(err, users){
                    var oldPoints = users[0].Points;
                    var newPoints = oldPoints + points;
                    User.update({name: name}, { $inc: {Points: newPoints}}, function(err){
                        if(err){
                            console.log("some error happened when update");
                        } else {
                            console.log("update successfull! with name = " + name);
                            User.find({Username: name}, function(err, users) {
                                console.log("updated : " + users[0].Points);
                            });
                        }
                    });
                });
            }
        }

        savePoints("Masiar", 666);

    I would like to update my user (found by name) by updating his/her points. I'm sure oldPoints and points contain a value, but my user's points stay at zero. The console prints "update successful". What am I doing wrong? Sorry for the stupid/noob question. Masiar
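    A hedged sketch of the same update, not taken from the post: the schema defines Username (the update above filters on name, which the schema does not have), and $inc already adds its argument to the stored value, so passing only the delta avoids pre-computing old plus new:

        // Assumes the same User model as above.
        function savePoints(name, points) {
            if (name === "unregistered user") return;
            User.update({ Username: name }, { $inc: { Points: points } }, function (err) {
                if (err) {
                    console.log("update failed: " + err);
                } else {
                    console.log("points incremented for " + name);
                }
            });
        }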

    Read the article

  • MongoDB lists with paginations?

    - by Timmy
    For documents containing lists with pagination, is it better to embed or use references? I'm reading about the custom type "SONManipulator" and it appears to transform everything on retrieval, even the subdocuments. I want to keep the list in the document sorted; should this impact anything?
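    If the list stays embedded, one paging approach (a sketch with made-up collection and field names, not from the post) is the $slice projection, which returns only the requested window of the embedded array:

        // Page 3 of an embedded "items" array, 10 entries per page.
        var page = 2, perPage = 10;
        db.docs.find({}, { items: { $slice: [page * perPage, perPage] } });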

    Read the article

  • MongoDB with OR and Range Indexes

    - by LMH
    I have a query: {"$query"=>{"user_id"=>"512f7960534dcda22b000491", "$or"=>[{"when_tz"=>{"$gte"=>2010-06-24 04:00:00 UTC, "$lt"=>2010-06-25 04:00:00 UTC}}, {"when_tz"=>{"$gte"=>2011-06-24 04:00:00 UTC, "$lt"=>2011-06-25 04:00:00 UTC}}, {"when_tz"=>{"$gte"=>2012-06-24 04:00:00 UTC, "$lt"=>2012-06-25 04:00:00 UTC}}], "_type"=>{"$in"=>["FacebookImageItem", "FoursquareImageItem", "InstagramItem", "TwitterImageItem", "Image"]}}, "$explain"=>true, "$orderby"=>{"when_tz"=>1}} And an index: { user_id: 1, _type: 1, when_tz: 1 } Explain: {"cursor"="BtreeCursor user_id_1__type_1_facebook_id_1 multi", "isMultiKey"=false, "n"=28, "nscannedObjects"=15094, "nscanned"=15098, "nscannedObjectsAllPlans"=181246, "nscannedAllPlans"=241553, "scanAndOrder"=true, "indexOnly"=false, "nYields"=12, "nChunkSkips"=0, "millis"=2869, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "_type"=[["FacebookImageItem", "FacebookImageItem"], ["FoursquareImageItem", "FoursquareImageItem"], ["Image", "Image"], ["InstagramItem", "InstagramItem"], ["TwitterImageItem", "TwitterImageItem"]], "facebook_id"=[[{"$minElement"=1}, {"$maxElement"=1}]]}, "allPlans"=[{"cursor"="BtreeCursor user_id_1__type_1_facebook_id_1 multi", "n"=28, "nscannedObjects"=15094, "nscanned"=15098, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "_type"=[["FacebookImageItem", "FacebookImageItem"], ["FoursquareImageItem", "FoursquareImageItem"], ["Image", "Image"], ["InstagramItem", "InstagramItem"], ["TwitterImageItem", "TwitterImageItem"]], "facebook_id"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1__type_1_twitter_id_1 multi", "n"=28, "nscannedObjects"=15094, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "_type"=[["FacebookImageItem", "FacebookImageItem"], ["FoursquareImageItem", "FoursquareImageItem"], ["Image", "Image"], ["InstagramItem", "InstagramItem"], ["TwitterImageItem", "TwitterImageItem"]], "twitter_id"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1__type_1_instagram_id_1 multi", "n"=28, "nscannedObjects"=15094, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "_type"=[["FacebookImageItem", "FacebookImageItem"], ["FoursquareImageItem", "FoursquareImageItem"], ["Image", "Image"], ["InstagramItem", "InstagramItem"], ["TwitterImageItem", "TwitterImageItem"]], "instagram_id"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1__type_1_foursquare_id_1 multi", "n"=28, "nscannedObjects"=15094, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "_type"=[["FacebookImageItem", "FacebookImageItem"], ["FoursquareImageItem", "FoursquareImageItem"], ["Image", "Image"], ["InstagramItem", "InstagramItem"], ["TwitterImageItem", "TwitterImageItem"]], "foursquare_id"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1_phash_1", "n"=21, "nscannedObjects"=15097, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "phash"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1_aperature_1_shutter_speed_1_when_tz_1", "n"=25, "nscannedObjects"=35, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "aperature"=[[{"$minElement"=1}, {"$maxElement"=1}]], "shutter_speed"=[[{"$minElement"=1}, {"$maxElement"=1}]], 
"when_tz"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1_image_hash_1", "n"=22, "nscannedObjects"=15097, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "image_hash"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1_time_zone_guessed_1_when_tz_-1", "n"=23, "nscannedObjects"=32, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "time_zone_guessed"=[[{"$minElement"=1}, {"$maxElement"=1}]], "when_tz"=[[{"$maxElement"=1}, {"$minElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1_time_zone_guessed_1_when_tz_1", "n"=24, "nscannedObjects"=33, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "time_zone_guessed"=[[{"$minElement"=1}, {"$maxElement"=1}]], "when_tz"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1_time_zone_guessed_1_when_utc_-1", "n"=23, "nscannedObjects"=15097, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "time_zone_guessed"=[[{"$minElement"=1}, {"$maxElement"=1}]], "when_utc"=[[{"$maxElement"=1}, {"$minElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1_time_zone_guessed_1_when_utc_1", "n"=24, "nscannedObjects"=15097, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "time_zone_guessed"=[[{"$minElement"=1}, {"$maxElement"=1}]], "when_utc"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1_original_shared_item_id_1", "n"=24, "nscannedObjects"=15097, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "original_shared_item_id"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1__type_1_s3_tmp_file_1 multi", "n"=28, "nscannedObjects"=15094, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "_type"=[["FacebookImageItem", "FacebookImageItem"], ["FoursquareImageItem", "FoursquareImageItem"], ["Image", "Image"], ["InstagramItem", "InstagramItem"], ["TwitterImageItem", "TwitterImageItem"]], "s3_tmp_file"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1__type_1_processed_-1_uploaded_-1_image_device_1 multi", "n"=28, "nscannedObjects"=15094, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "_type"=[["FacebookImageItem", "FacebookImageItem"], ["FoursquareImageItem", "FoursquareImageItem"], ["Image", "Image"], ["InstagramItem", "InstagramItem"], ["TwitterImageItem", "TwitterImageItem"]], "processed"=[[{"$maxElement"=1}, {"$minElement"=1}]], "uploaded"=[[{"$maxElement"=1}, {"$minElement"=1}]], "image_device"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1__type_1_when_tz_1 multi", "n"=28, "nscannedObjects"=28, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "_type"=[["FacebookImageItem", "FacebookImageItem"], ["FoursquareImageItem", "FoursquareImageItem"], ["Image", "Image"], ["InstagramItem", "InstagramItem"], ["TwitterImageItem", "TwitterImageItem"]], "when_tz"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BasicCursor", "n"=0, "nscannedObjects"=15097, "nscanned"=15097, "indexBounds"={}}], "server"=""} Any idea how to get it to hit the indexes?

    Read the article

  • when to index on multiple keys in mongodb

    - by Evan
    say I have an Item document with :price and :qty fields. I sometimes want to find all documents matching a given :price AND :qty, and at other times it will be either :price on its own or :qty on its own. I have already indexed the :price and :qty keys, but do I also need to create a compound index on both together or are the single key indexes enough?
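    A sketch of the trade-off (the collection name and values are placeholders, not from the post): a compound index serves queries on both keys and, via its prefix, queries on the leading key alone, while the trailing key on its own still relies on its single-key index:

        db.items.ensureIndex({ price: 1, qty: 1 });
        db.items.find({ price: 9.99, qty: 3 });   // can use the compound index
        db.items.find({ price: 9.99 });           // can use the compound index prefix {price: 1}
        db.items.find({ qty: 3 });                // still wants the separate { qty: 1 } index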

    Read the article
